Data science is inclusive

I’ve often heard data science described as a combination of three things: mathematics & statistics, computer science (sometimes simply called ‘hacking skills’) and domain knowledge. Drew Conway depicted this using a now-ubiquitous Venn diagram:

Drew Conway's data science Venn diagram

This accurately describes the set of skills that an employer is after when they seek to hire a single data scientist.

However, such people are rare. They have been compared to unicorns. To depict data science as an intersection of these skills presents a misleading picture of our ‘profession’. In reality, the term ‘data science’ covers work that is done by many existing professions.

To do data science on a decent scale, we need to engage a multidisciplinary team of data scientists who collectively have the required expertise. None of them will be unicorns, but together they can fill out the Venn diagram. That means data science is more accurately viewed as the union of these skills:

Data Science Venn Diagram v2.0

Evan Stubbs emphasised these points last week in his talk, Big Data, Big Mistake. According to him, the relentless search by employers for ‘unicorn’ data scientists has led to disappointment and disillusionment, and we need to communicate to them the idea that data science is groups of people.

With ‘data science’ now a mainstream term, we have a fantastic opportunity to unite our professions under a common banner and combine our skills to solve problems that none of us can solve alone. This is not only good for all of us as practitioners. It is also what society seeks from us.

Let us embrace data science as an inclusive discipline.

Drew Conway’s Venn diagram is licensed under a Creative Commons Attribution-NonCommercial Licence and is reproduced here in its original form. The Data Science Venn Diagram v2.0 is an adaptation of Drew Conway’s diagram by Steven Geringer and is reproduced here by permission. The images of both diagrams link back to their original sources.

Adam Bandt discusses evidence-based policy

Two weeks ago the Federal Member for Melbourne, Adam Bandt, gave a public lecture on the role of evidence in public policy in Australia. I helped to organise this talk as one of the monthly events for SSA Vic. Our goal was to hear how evidence is used (or not) by decision makers, in this case politicians.

Adam covered many topics and fielded a large number of questions from the audience. You can listen to the recording to hear it all (approx. 1 hour). Here, I summarise the points that stood out for me.

Lessons learnt from climate change policy

Climate change featured prominently in both Adam’s talk and the audience’s questions. As part of his role in the previous government, Adam was frank in describing both their successes and failures. Two of these stuck with me.

Early on, the government put together a committee to develop a set of policies to tackle climate change. It consisted of parliamentarians from multiple parties, and an equal number of experts from a variety of fields. Adam said the presence of the experts changed the dynamic of discussion considerably:

‘When you are sitting across the table from an expert…your ability to prosecute crap arguments diminishes drastically. You’ll be held to account very, very quickly by someone who’ll just tell you that’s simply not right.’

Seems like a great idea to me. Getting politicians and experts talking together is surely a no-brainer? Shouldn’t this happen more often?

On the other hand, one of their major mistakes started once they had developed their policy and passed the legislation. They presumed there was no longer any need to talk about the problem. The public information campaign that followed concentrated on details of the carbon price and the compensation package, with little mention of global warming or the fact that the legislation was tackling a big social problem.

‘The failing to talk about the problem, and just presuming because you have a good technocratic fix to it then that’s enough, is part of the problem,’ according to Adam. This allowed the Opposition to shift the debate to be about something other than the underlying problem, to a debate about the Government’s credibility, without any reference to climate change.

Adam’s 3-step plan

Often it’s easy to point out problems but much harder to come up with solutions. Adam offered us three.

1. Entrench facts into government decision making, by law

Adam suggested two ways of doing this. Firstly, by setting up a sustainability commissioner in various government departments, whose role is to provide independent scientific advice (for example, about the impacts on biodiversity or energy use). The key point is that the relevant minister would be required, by law, to take that advice into account. Of course, they could choose to ‘ignore’ any advice, but they would need to make a statement to this effect. Adam believes this would change the dynamic of many decisions and make evidence harder to ignore.

Secondly, an increased use of randomised controlled trials (RCTs) as part of policy development. However, Adam was a bit reserved on this point, wanting to see more evidence that these are indeed effective. He mentioned that a large review was underway in the UK to assess the ability of RCTs to measure the effects of social policy.

2. Increase the scientific literacy of the population through public education

Those who wish to attack evidence-based positions can resort to a variety of underhanded tactics. One is to manufacture doubt. Another is to falsely undermine the evidence by blurring the distinction between evidence and moral values.

Adam believes that increasing scientific literacy can help to blunt both of these attacks, and also lead to increased acceptance for a greater role for evidence in decisions. He would do this by investing more in science and mathematics education in primary and secondary schools.

A byproduct of such an education would be a greater ability by the public to distinguish between the use of evidence versus the use of values to guide decisions. Hopefully, this will lead us to a situation where politicians would be allowed (in fact, compelled) to change their policies in response to new evidence without being falsely accused of ‘flip-flopping’.

3. Get scientists & researchers to be more political

Adam’s final message was directed squarely at us, the scientists and researchers in the audience. Unless we fight for our slice of the political pie, according to Adam, it will instead be taken by those (of which there are many) who are motivated by self-interest and not necessarily by the evidence.

One way to get political is to (like Adam) leave our jobs and stand for election. It would be great to have a few more scientists in Parliament, but that won’t be enough nor is it a realistic prospect for most of us.

Instead, Adam urged us to get organised and pool our efforts. Some of us will need to go out in public and advocate on behalf of scientists. We will also need an effective campaigning organisation. (Adam mentioned the Australian Academy of Science but noted that it acts more as an advisory body than as a campaigning organisation.) Comparing our plight with that of the mining industry, which collectively ran a multi-million dollar advertising campaign against the mining tax, Adam asked, ‘Where is the alternative, equivalent organisation…[who will] run a TV advertising campaign for science & research?’

The question of money arose. Adam admitted that this is indeed a challenge, but a surmountable one. He said we need to find ‘allies’ who have an interest in Australia having a well-resourced science and research community. There are many of them around, and they are just waiting to be pulled together.

To explain or predict?

Inspired by a recent blog post from Rob Hyndman, last week I read Galit Shmueli’s paper, To explain or to predict?.

I cannot recommend this paper enough. It should be essential reading for anyone involved in data analysis.

Shmueli distinguishes two different aims when analysing data: prediction and explanation. She describes in detail how the modelling and analysis process should differ whether you are doing one or the other. She even shows a concrete example where the model that works best for prediction is different to the model that works best for explanation. This was a key insight for me. Previously I had assumed the intuitively appealing idea that the best model for one will also be the best for the other. I’m glad to have this corrected. I see this idea advanced all the time, and now I know for sure that it’s false.

Another key message from Shmueli is that even though our primary aim will be either prediction or explanation, we should, if possible, assess our models on both criteria. We would expect good models to perform reasonably well in either setting, and it will usually be insightful to assess both.

Bin Yu gave a talk earlier this week on ‘mind-reading’, showcasing her group’s work on reconstructing movies from brain signal measurements. In one step of their modelling process, they make a trade-off between ‘explainability’ and ‘predictability’. Specifically, they chose a model that was easier to interpret at the expense of a little predictive performance. This is the first time I’ve seen anyone do this explicitly. It reminds me of the bias-variance trade-off and speaks directly to the ideas in Shmueli’s paper.

Car share cost comparison

When I moved to the UK to study many years ago, one of the big changes for me was living much closer to my workplace. Having grown up in the Melbourne suburbs, this was a revelation. Suddenly, I didn’t need to spend hours every day commuting. I was an instant convert. It also allowed me to avoid buying a car, very handy on a student budget.

Upon returning to Melbourne, I was keen to continue a minimal-commute, car-free existence. I now live and work close to the CBD. Public transport is very easy when you are so central, there is plenty of choice and frequent service. I’m pleasantly surprised how little I actually need a car.

Nonetheless, sometimes only a car will do the job. What are the best options out there? The familiar ones are to take a taxi or rent a car. Over the last few years, a new option has entered the mix: car sharing. This is similar to renting, but you can book a car for shorter periods of time (for example, 2 hours for a big shopping trip) and with less hassle (simply reserve a car online, and then pick it up and drop it off without signing any forms).

Three car share businesses have established a presence in Melbourne: GoGet, Flexicar and GreenShareCar. I wanted to join one but it wasn’t clear which one was the best deal for me. Frustratingly, they each had a different pricing structure. So I whipped out the trusty spreadsheet and did some calculations, which made the choice much clearer.

Some people have asked me if I could share this around, so I’ve polished it up a bit and hopefully made it easy to use. You can grab a copy from here:

Australian car share comparison (Google Docs spreadsheet)

The instructions are on the first sheet. The easiest way to use it is to make a copy of it within Google Drive (File > Make a copy...).

The spreadsheet makes a number of assumptions, such as averaging out your trips equally across all months and not accounting for any uncertainty in the number of trips, but that’s probably fine for a rough estimate. Use it as a guide only, and try different scenarios to see how much difference they make.
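The spreadsheet’s core calculation is simple enough to sketch in a few lines of Python. Note that the rates and trip profile below are purely hypothetical placeholders, not the actual prices of GoGet, Flexicar or GreenShareCar:

```python
# Rough monthly cost of a car share plan (hypothetical rates, not real prices).
def monthly_cost(trips_per_month, hours_per_trip, km_per_trip,
                 monthly_fee, hourly_rate, per_km_rate, included_km=0):
    """Average trips equally across months, as the spreadsheet does."""
    cost_per_trip = (hours_per_trip * hourly_rate
                     + max(0, km_per_trip - included_km) * per_km_rate)
    return monthly_fee + trips_per_month * cost_per_trip

# Compare two made-up plans for 4 two-hour, 20 km trips per month.
plan_a = monthly_cost(4, 2, 20, monthly_fee=10, hourly_rate=9, per_km_rate=0.40)
plan_b = monthly_cost(4, 2, 20, monthly_fee=0, hourly_rate=12, per_km_rate=0.45)
print(plan_a, plan_b)  # → 114.0 132.0
```

Plugging in each provider’s real rates (including any kilometres bundled into the hourly price) lets you compare the plans side by side for your own expected usage.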

Terry Speed sounds the alarm

Two weeks ago, the Victorian Branch of the Statistical Society of Australia celebrated its 50th anniversary. Many people turned out for the event, including a few members who were there right from the beginning. You can read a short historical account of the Branch in this review by Ian Gordon and myself.

The clear highlight of the day was a lecture from Terry Speed, warning us that statisticians are at risk of being left out of the Big Data revolution. He certainly raised many eyebrows! Find out more in my review of his lecture.

Simple vivid advocacy

I heard many talks at the recent Science meets Parliament event (see my previous post for a summary). The most memorable for me was ‘How to talk like a policy maker’ by Professor Hugh White from the ANU.

The part that stood out most clearly was his three tips for communicating:

  • Simplify, without distortion.
  • Be vivid, without being needlessly provocative.
  • Advocate, don’t polemicise (that is, only use arguments backed by evidence).

Prof. White noted that being simple and vivid is more important than being concise. This was a revelation to me and immediately rang true. I had always conflated the two concepts, but I see now how they relate. The overall goal is to communicate an idea. Doing so in a simple and vivid manner is likely to be successful. Being concise is one strategy for this, and although it can often work it isn’t necessarily the only way (and can backfire, if it leads to oversimplification).

Another piece of advice he gave, related to his third point above, is how to deal with criticism. Rather than respond to the critics, you should respond to their arguments. In other words, focus on the evidence, reasoning and ideas. (This is well known advice, of course, but it’s good to be reminded of it.)

Prof. White had many other things to say, and was immediately followed by a talk from Will Steffen. See Nick Falkner’s detailed summary of their talks if you are interested.

Science meets Parliament 2014

‘No other nation does it quite like this.’

Catriona Jackson, CEO of Science & Technology Australia, opened the proceedings at Science meets Parliament 2014 by telling us that more than half of our elected representatives will be personally involved.

Last week, almost 200 scientists from around Australia gathered to meet ‘face to face with the decision makers in Canberra’. Now in its 14th year, this two-day event aims to teach scientists how to communicate more effectively with politicians, policymakers and the media, and also to give them the opportunity to actually meet parliamentarians themselves and put this into action.

We were addressed by many speakers over those two days. These included some of our country’s leaders, each with their own call to action. Ian Macfarlane, Minister for Industry, appealed for closer collaboration with industry and greater commercialisation. Bill Shorten, Leader of the Opposition and Shadow Minister for Science, urged us to make science a national political issue. Ian Chubb, Chief Scientist of Australia, advocated a long-term strategy for science, which should include areas such as education and community engagement, as well as research.

However, the majority of speakers were there to teach us about effective communication. I found this very informative and wish to share with you what I have learnt.

Below, I summarise the main ideas from most of the talks. Many speakers touched on similar topics and advice; I’ve tried to combine them into a single cohesive guide. If you would prefer more of a ‘blow-by-blow’ account of each speaker, check out Nick Falkner’s comprehensive notes on his blog.

A note about terminology: I use the terms politician and policymaker throughout. Just to be clear, I use the former to refer to our elected parliamentary representatives and the latter to refer more generally to anyone involved in formulating and influencing public policy (this includes, for example, public servants).

Communication: general tips

  • Tell your story. People can engage with and readily understand stories. Craft a coherent story from your findings. This should highlight the key pieces of evidence, and should include some relevant anecdotes (consistent with your evidence).

  • From complicated to meaningful. When explaining complicated concepts, talk only about the parts that are meaningful to the audience. Know when to stop; we don’t usually want the full complexity! Test out your message on some non-expert friends who can give you frank advice.

  • Alternative outcomes, rather than bare uncertainty. Uncertainty is a key part of scientific research and is not part of most people’s everyday language or experience. A good way to frame uncertainty is to present alternative outcomes and the risks associated with each.

  • Build relationships. Communication requires trust. There is a (warranted) widespread perception, especially amongst policymakers, that ‘evidence’ can be created to support any desired viewpoint. Hence, they will only believe facts and advice given to them by people and organisations they trust. This is why building a relationship is crucial and will usually need to be done over a period of time.

  • Get expert help. It’s okay not to be a communications expert. Not everyone will have the aptitude for, or interest in, this activity. It’s fine to ask others to do it for you. (But we can’t all pass the buck; some of us will need to be communications experts!)

Communicating with politicians

  • Understand the politician’s goals and drivers. Your advice needs to help the politician meet their commitments. For example, if you are talking with someone from the government, what did they promise in the last election? Of course, you also have your own goals. Aim to create win-win solutions.

  • Solutions, not entitlements. Don’t simply make requests. Politicians are bombarded by such claims all the time and will most likely ignore you. Instead, talk about solutions, and specifically for the problems that matter to them.

  • Craft your message. A successful one will have:

    • a narrative,
    • evidence (must be consistent with and supporting the narrative),
    • some ‘breakthrough’ examples (everyone loves scientific ‘breakthroughs’),
    • cost/benefit estimates.

    The last of these is important. The cost of any policy will be heavily scrutinised before it even gets close to the implementation stage. There are at least two benefits to discussing the costs yourself. Firstly, it shows that you can ‘speak the language’ of policymaking, by engaging in this key step in the decision making process. Secondly, it gives you the opportunity to make a compelling case for the benefits, otherwise it will be left to someone with less knowledge and enthusiasm.

  • Unite. For large groups it is very helpful to talk with a single voice. Bill Shorten gave the example of the NDIS. Providing assistance to people with disabilities was always a moral imperative, but it wasn’t until the very many support and lobby groups came together as part of the Every Australian Counts campaign and presented a single message that it gained significant political traction. According to Mr Shorten, a challenge for us when advocating for science is to find our unifying message.

  • Plan ahead. At the conclusion of the meeting you will want to have some next steps. Perhaps it might be the opportunity to present some more detailed findings, or a referral to a more senior politician. Think about your desired next steps as you plan your meeting.

Communicating with policymakers

  • Learn the ‘logic’ of policymaking. Science and policymaking have different goals. Science is about finding the truth, policy is about making decisions. This gives each a different dynamic. Science has a special status within policymaking due to its role in interrogating and elucidating true facts of the world. Nevertheless, the ultimate goal is to make decisions. Anything you say as a scientist should be to assist with that process.

  • Answer the question. It is vital to answer the exact question of interest to policymakers, with reasonable caveats. Don’t answer a tangential or related question simply because you know more about it (hence the importance of the ‘reasonable caveats’).

  • Understand the policy cycle. There are multiple stages to the development of policy: understanding and formulating the questions, exploring potential solutions, costing and comparing the various options, implementing the selected solution, and finally evaluating the outcomes. The stages aren’t necessarily linear; a policy can go back and forth many times as more is learnt about the problem and the policy is refined.

    If you wish to get involved, find out what stage of the development cycle the current policy is at when giving advice. For example, if a policy has already been implemented and is at the evaluation stage, it’s not helpful to give suggestions on how it should have been formulated differently.

    Think about at what stage your knowledge would be useful. Target, and time, your advice appropriately.

Communicating with journalists

  • Tell your story. This was already mentioned above, but is particularly important here. Journalists write stories. They need to turn your news into a story. You can help them by doing this for them. Otherwise, they will have to do it for you and may unwittingly distort the facts in the process.

  • What makes your story newsworthy? There are many factors that make a story ‘newsworthy’, including its timing and location, whether it is inherently interesting, involves people, is controversial, and so on. You don’t need conflict to generate media interest. Conflict generally only enters the picture once the issue makes the transition from being only about science to also being about political action.

  • Simplify, just enough. Journalists need to dumb things down to make their stories accessible. Help them out by dumbing it down for them, in a way that doesn’t distort the facts. Avoid jargon and unnecessary detail. Focus on key findings and messages.

  • Reduce ambiguity and uncertainty. Scientific research and journalism often have opposing aims. Journalists generally don’t like shades of grey and long timelines; they add complication to stories. Formulate a story that doesn’t require too much of either. Otherwise, they might do this for you in a way that you don’t like.

  • Keep searching for media time. There is plenty of space and time available in the media, just not necessarily in ‘prime time’. You can get exposure by going to local radio stations or newspapers, or to more specialist or niche shows and publications.

  • Practise. You can develop your media skills by writing regularly. Two ways to do this are to write a blog or to write for The Conversation. Some more resources are available from Inspiring Australia.

Press coverage

If you wish to read more about Science meets Parliament, I recommend Ara Sarafian’s great summary on The Conversation.


I am grateful to the Victorian Branch of the Statistical Society of Australia, the Victorian Centre for Biostatistics and the Murdoch Childrens Research Institute for supporting my attendance at Science meets Parliament 2014.

I also wish to thank Science & Technology Australia for organising the event, all of the guest speakers and the very many Senators and Members of Parliament who made the time for private meetings with us.

Advertising for statisticians (in Australia)

Want to hire a statistician?

For advertising statistical job openings in Australia, I highly recommend:

These are both free and will reach a large number (perhaps even the majority?) of statisticians in Australia and New Zealand.

A few other avenues worth considering:

You may also want to advertise internationally. There are probably many avenues available. A few that I am familiar with, which tend to be popular for academic jobs, include:

The real lesson from wedding stats

Our interview on the BBC this week told the story of us optimising our wedding invitation list using statistical modelling.

I’m delighted that the BBC has conveyed the nature of the problem and a sense of playfulness. I’m even starting to get people contacting me to find out more about ‘guestimation’, as we’ve been calling it. I’m also glad to have this opportunity to show how good statistical thinking can help address (but not necessarily ‘solve’) a vexing problem that many of us might face.

I wrote about this project for Significance magazine last year because I thought it provided an ideal theme for an accessible introduction to some basic statistical ideas and their use in aiding decision making. In particular, the idea that these situations are about trying to quantify uncertainty and manage risk, rather than seeking high-precision estimates (which are basically impossible in this scenario).

I’d like to take the opportunity, though, of explaining the main lesson from using our wedding statistics model. This key insight was missed by the BBC; it’s always difficult to guess which messages a journalist will choose to publish after speaking to you.

The BBC focused on whether the model was ‘right’, concluding that in fact there were ‘sizeable statistical errors’ and that the model was ‘wrong’. It ended with the moral, ‘if you can’t be right, then be lucky’, which, as you’ll see below, isn’t the point.

‘True’ models

I have never seen any model that I could describe as ‘true’, in the BBC’s sense of the word. Some might say it’s a central tenet of applied statistics, and of scientific research more generally, that our models will always be ‘wrong’.

George Box famously said that, ‘Essentially, all models are wrong, but some are useful.’ It is inevitable that our models will not be identical to reality, but that’s not what matters. Rather, it is whether they capture enough of reality to make them useful tools for understanding the world and helping us make good decisions.

In some parts of our lives we have embraced this. Weather forecasting is a great example. If the forecast for tomorrow is for rain, we can prepare by bringing a coat or umbrella as we head outside. We know that the forecast isn’t perfect and it won’t always rain, but we are grateful for the warning. The alternative is to make our own judgement by looking out of the window, which often works well, but is far less reliable.

In fact, weather forecasting is an area that has seen tremendous progress over the years (see Nate Silver’s book, The Signal and the Noise for a great account). I remember when I was a child how the 4-day forecasts were always taken with a huge pinch of salt, but I can now regularly rely on them when planning my week. Nevertheless, they are still ‘wrong’.

How ‘wrong’ were we?

Last year, Tim Harford wrote about us on his blog and for the Financial Times. He conveyed a similar message as in the More or Less episode, but put it more strongly. He described my modelling assumptions as ‘flat-out wrong’ and ‘felicitously flawed’. While I certainly don’t claim the model was right, the evidence doesn’t bear him out.

Harford notes that I overestimated the probability of attendance for each group of guests that we invited. He quotes the fact that the group we labelled as ‘likely’ to attend, for which we assigned a probability of attendance of 80%, had in fact zero attendance. But he fails to mention that this group included only 2 invitees. Getting 0 out of 2 is certainly not strong evidence against the true probability being 80%, as any first-year statistics student can appreciate. You can hardly tell anything from a sample size of 2.

If you read my article, you’ll see that for each group of guests except one, the probability we assumed was within the 95% confidence interval calculated from the actual attendance. In other words, you can’t claim with confidence that our assumptions were very different from reality.
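For the ‘likely’ group, the claim is easy to check. Assuming a standard exact (Clopper-Pearson) confidence interval, and using the closed-form upper bound that applies when there are zero successes, a couple of lines of Python show that 0 attendees out of 2 is entirely consistent with an 80% attendance probability:

```python
# Exact (Clopper-Pearson) 95% CI for a proportion, with 0 successes in n trials.
# With zero successes the upper bound has a closed form: 1 - (alpha/2)**(1/n).
alpha = 0.05
n = 2  # only two invitees in the 'likely' group
upper = 1 - (alpha / 2) ** (1 / n)
print(f"95% CI for the attendance probability: [0, {upper:.3f}]")  # [0, 0.842]
```

The interval stretches all the way up to about 0.84, comfortably containing the assumed 80%.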

A minor exception was for the ‘definitely’ group, where we assumed 100% attendance. This was bound to be an overestimate, but it was a deliberate one we made for pragmatic reasons and were upfront about. Thus, it is not deserving of the ‘flawed’ label. (For the record, the attendance for the ‘definitely’ group turned out to be 96 out of 100 guests.)

How useful was our model?

Estimating wedding attendance is difficult. We hadn’t done it before, there are no reliable guides to doing it, and we had no previous data to work with. But we had to invite some guests and we certainly didn’t intend to do it blindly.

We drew on our own intuitions and life experiences to help us get a handle on how many people would come. This is the same information that any other couple would draw upon for their wedding. The only difference is that we formalised our intuitions into a concrete mathematical model.

As I described in my article, we were making a calculated stab in the dark. Our model could be described as extreme Bayesianism: all assumptions and no data. Hardly up to scientific research standards. Don’t do this at home, kids!

A simple approach is to say something like, ‘95% of local invitees will come, and 20% of the overseas ones’, and then proceed to calculate a single number as your estimate. For example, if you invite 100 locals and 40 people from overseas, you expect on average \(100 \times 0.95 + 40 \times 0.20 = 103\) guests. This is somewhat useful, but it raises a few questions. How close to 103 should we expect to get? How likely are we to exceed a certain number (e.g. the capacity of the venue)?
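The point estimate in the example above is just each group’s size multiplied by its probability, summed:

```python
# Expected attendance: sum over groups of (invitees × probability of attending).
groups = [(100, 0.95),  # local invitees
          (40, 0.20)]   # overseas invitees
expected = sum(n * p for n, p in groups)
print(expected)  # → 103.0
```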

We took this idea further and, with a few quite reasonable assumptions, calculated a prediction interval for our wedding, rather than just a single point estimate. This gave us a much better assessment of the true uncertainty.

We didn’t expect our assumptions to be perfect. However, the formulation as a model allowed us to more easily work out how any set of assumptions translated into an actual range of attendance.

This was particularly crucial for us because we didn’t care so much about the expected attendance as about not exceeding the capacity of the venue. This required selecting an invitation list where the expected number of guests was lower than the venue’s limit. But how much lower? There is no way to gauge this from a point estimate alone.
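To illustrate the difference a prediction interval makes, here is a minimal Monte Carlo sketch (not our actual model) that assumes each guest decides independently; the group sizes, probabilities and venue capacity are the hypothetical ones from the example above:

```python
import random

random.seed(1)

# Hypothetical invitation list: (invitees, assumed probability of attending).
groups = [(100, 0.95), (40, 0.20)]
capacity = 115  # hypothetical venue limit

def simulate_attendance(groups):
    """One simulated wedding: each invitee attends independently."""
    return sum(sum(random.random() < p for _ in range(n)) for n, p in groups)

draws = sorted(simulate_attendance(groups) for _ in range(10_000))
lo, hi = draws[250], draws[-251]  # central ~95% prediction interval
over = sum(d > capacity for d in draws) / len(draws)
print(f"95% prediction interval: [{lo}, {hi}]")
print(f"Estimated chance of exceeding capacity: {over:.2%}")
```

The simulated interval is far more informative than the point estimate of 103: it tells you directly how much headroom you have below the venue’s capacity.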

What’s the alternative?

We wanted to send our invitations in a single round and focused on calculating the optimal number to send.

Our modelling approach was more sophisticated than most couples would attempt. They might do more crude calculations, or none at all, and send out their invitations blindly hoping for the best.

Is this any more ‘wrong’ than our approach? Does our increased sophistication lead people to (falsely) expect magic-bullet results?

That’s possible, and understandable. In our case, we understood the limitations of our assumptions and expected them to be somewhat fallible. Those not familiar with using models in this manner might find it more difficult. One of my goals was to demystify this process using a familiar scenario. Alas, this did not filter through to the BBC coverage.

Are there better solutions?

Harford admits that he doesn’t have a better suggestion. He personally prefers multiple rounds of invitations, and selective ‘disinvitations’. He believes that it generally leads to less embarrassment overall (although he himself wasn’t so lucky when he tried to organise a party in this way). It’s certainly a valid strategy, although not one we were comfortable with for our wedding.

A friend of ours reduced his uncertainty by calling up each of his guests before sending out the official invitations. That’s a lot of work, but the payoff is much less risk, which he thought was worth the effort.

Errors cancel out

There is a passing reference in the BBC coverage to the idea that ‘errors cancel out’.

This is actually a fundamental idea in probability theory (see the law of large numbers and the central limit theorem) which plays a key role in the success of applied statistics. It is what allows us to make reliable and accurate inferences from relatively small samples.
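A quick simulation illustrates the cancellation: as the number of guests grows, individual over- and under-estimates increasingly cancel out, so the relative error of the total shrinks roughly in proportion to \(1/\sqrt{n}\). The 70% attendance probability below is an arbitrary choice for illustration:

```python
import random

random.seed(0)

def relative_error(n_guests, p=0.7, trials=2000):
    """Average relative gap between simulated attendance and its expectation."""
    expected = n_guests * p
    total_gap = 0.0
    for _ in range(trials):
        attended = sum(random.random() < p for _ in range(n_guests))
        total_gap += abs(attended - expected) / expected
    return total_gap / trials

errors = {n: relative_error(n) for n in (10, 100, 1000)}
for n, err in errors.items():
    print(n, round(err, 3))
```

The average relative error drops from around 16% for 10 guests to under 2% for 1,000, which is why totals over many guests can be predicted far better than any individual’s attendance.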

Unfortunately, we didn’t have time in the interview to go into these ideas, and the fact that we had perfect wedding attendance was inaccurately put down to luck. However, I’m glad they included it anyway because it is such an important idea.

Engaging and educating

Overall, I thought the BBC stories about us were fun and engaging. I hope the coverage helps to popularise statistics. It’s a tough job, combining the teaching of basic statistical concepts with news and entertainment. The BBC’s More or Less radio program generally does a good job, and I hope there will be more opportunities to get involved in future.

On the BBC

My wife Joan and I were featured on the BBC today. Twice!

For our wedding, I did some statistical modelling to optimise our invitation list. I wrote about it last year for the Young Writers Competition run by Significance, a popular statistics magazine. It was selected as the winning entry and published in the Aug 2013 issue.

This caught the eye of Tim Harford, the host of More or Less on BBC Radio 4. He interviewed both of us for today’s episode.

Ruth Alexander wrote an accompanying article for BBC News Magazine, also published today.

(Note: if you are streaming the episode from the website, our interview begins at 23:44. If you are listening to the podcast version, the interview begins at 22:53.)

Update: see my follow-up post for some more discussion.