
The Lancet Voice
The Lancet Voice is a fortnightly podcast from the Lancet family of journals. Lancet editors and their guests unravel the stories behind the best global health, policy and clinical research of the day―and what it means for people around the world.
Proof, truth, and infectious disease
Professor Adam Kucharski joins Gavin & Jessamy to discuss the intersection of mathematics, epidemiology, and truth, off the back of his new book, Proof. How do we generate scientific knowledge? How can we communicate uncertainty? What is the impact of social media and artificial intelligence on public trust in science?
Adam also tells us about the importance of public engagement, the nuances of misinformation, the importance of transparency in policy, the future of scientific communications, the need for adaptable definitions of proof, and the value of pragmatic, transparent approaches to decision-making in health and science.
Read all of our content at https://www.thelancet.com/?dgcid=buzzsprout_tlv_podcast_generic_lancet
Check out all the podcasts from The Lancet Group:
https://www.thelancet.com/multimedia/podcasts?dgcid=buzzsprout_tlv_podcast_generic_lancet
Continue this conversation on social!
Follow us today at...
https://thelancet.bsky.social/
https://instagram.com/thelancetgroup
https://facebook.com/thelancetmedicaljournal
https://linkedIn.com/company/the-lancet
https://youtube.com/thelancettv
This transcript was automatically generated using speech recognition technology and may differ from the original audio. In citing or otherwise referring to the contents of this podcast, please ensure that you are quoting the recorded audio rather than this transcript.
Gavin: Hello, and welcome to The Lancet Voice. It's July 2025, and I'm your host, Gavin Cleaver, here as ever with my co-host, Jessamy Bagenal. Today we're excited to be joined by Professor Adam Kucharski, a leading expert in mathematical epidemiology and infectious disease modeling, whose new book, Proof, is out now. We're going to chat about the challenges of communicating uncertainty in science, and we'll also discuss the evolving landscape of public trust, the influence of social media and AI, and what all of these changes mean for scientific engagement and policy.
We hope you enjoy this conversation with Adam Kucharski.
Professor Adam Kucharski, thanks so much for joining us on The Lancet Voice today.
Adam: Thank you for having me.
Gavin: I wanted to kick off by talking a little bit about your background. You started out as a mathematician and then ended up working in infectious disease epidemiology. I'm interested in how those two things overlap, beyond the obvious way, that they both use numbers very broadly; I'm sure there are myriad other interactions between the two.
Adam: Yeah, so I started off, I think like most people who are interested in science and maths, by pursuing maths at university.
Because I was at Warwick, you could do quite a lot of your degree outside the core pure maths and theory, and I ended up doing more mathematical biology and a little bit of theoretical neuroscience and systems biology and so on. I think I became particularly interested in these situations where we don't necessarily know exactly what all the rules are.
So in physics, in theory, you can often derive a lot of knowledge of these systems, whereas if you think about things like behavior or health or disease, there are all these really important problems and we don't have laws in the same way. We don't have that certainty.
You have increasing amounts of data, and the underlying dynamics, which you can describe in mathematics, are often just a way of formalizing what we think might be happening and of comparing it with data. So then I went into a PhD, which was applied maths, but really applied to a lot of evolution and immunology type questions, and then more into outbreak analysis.
I think I realized it was a field where, if you think about patterns, about underlying processes and interactions, there are some really important problems that can have a lot of impact, and you really have a toolkit that's starting to open up to tackle them.
Gavin: That's really interesting. And then how did you make the leap into infectious diseases?
Adam: It came through my PhD, which I did at Cambridge with Julia Gog, and that was really much more around flu and how we can use mathematical approaches to understand it. With flu in particular, you have this issue that, because it evolves, the number of combinations of infection you could have had over a lifetime is just enormous.
If you have the potential for 30 viruses to circulate, it's two to the power of 30, so you're getting on the order of a billion combinations. As a mathematical and statistical problem, it became very difficult to analyze simply. So for a PhD in applied maths, that was a really good topic, but as I went on I became more interested in applications on the policy side, applications for behavior.
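As a quick illustration of the combinatorics Adam describes: if each of 30 circulating strains is either in or out of someone's infection history, the number of possible histories is two to the power of 30. A minimal sketch (the strain count is the only number taken from the conversation):

```python
# Each of 30 circulating flu strains is either part of someone's lifetime
# infection history or not, so the number of possible histories is 2**30.
n_strains = 30
histories = 2 ** n_strains
print(histories)  # 1073741824, i.e. about a billion
```

That scale is why the problem resists simple enumeration: each extra strain doubles the number of possible histories.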
As with anyone who works with data, if you have frustrations about what's out there, you go and set up your own studies to collect things. So we started going out and collecting antibody data sets and behavioral data sets, thinking about the interactions between behavior and infection.
We also ran public engagement studies, thinking about how we can engage with audiences like schools, for example, where there's a lot of value in understanding behavior and infection in those settings, but where you also want to embed the work in something the children and the teachers are going to get something out of as well.
Gavin: You've always been quite interested in that sort of public engagement work, right?
Adam: Because a lot of our funding is from public sources, I think we have a duty to go out there and explain what we're doing, especially in the modern era. But I've also got a lot out of it.
It's sometimes quite humbling to go and talk to a bunch of school kids: some of the things that you think are very important, they're not interested in, and some of the things they might be very interested in, in terms of disease and threats, might not be something you'd thought of.
So I've actually found over time it's probably made me a better researcher. It makes you more grounded, actually talking to people about the sort of questions they have and how those interact with your field, rather than, it's a cliche, sitting in an ivory tower and assuming you have some sense of what the public are thinking.
Gavin: What are the kids most interested in?
Adam: It's changed a lot over the years. One thing I found striking when I started doing a lot more of this, about 10 or 15 years ago, was first of all just the half-life of a pandemic threat. Even something like the 2009 pandemic was, within a few years, just meaningless as a threat.
Things like measles, very little awareness of that; probably more now, because of the situation we're in. It was also interesting going from pre-COVID to post-COVID. We had a lot of materials that introduced these things from quite basic principles: this is a virus, this is what transmission means.
And we thought the level of understanding, or even boredom, might be much further advanced post-COVID, because people had just seen so much of these things. But actually, particularly for children who were slightly younger at the start of the pandemic, that's not necessarily the case; a lot of that awareness drops off quite quickly.
I think that's important to bear in mind: for a lot of adults this is an enormous event in our memory, but we very quickly have a generation coming up for whom it's not the same event. That's very useful to remember when we're assuming certain concepts are obvious, for example.
Gavin: Your new book, Proof, has just come out. Congratulations. Reading through it, I was interested in how much historical content there was. And one question I had from reading it was: are the methods of coming to know things in modern times fundamentally different to how they were historically?
And I mean that in the sense of things like social media or AI, modern technological inventions. Have they changed the way that we come to understand things?
Adam: I think there are a few similarities and a few key differences. One of the reasons I like digging into the history of a lot of these ideas is that many of the methods we now have seem natural in hindsight.
If you take even just the idea that you'd run a clinical trial for something, it now seems like such a natural way of testing whether something works or not. And yet in medicine, the first real modern randomized clinical trial was in 1947. So why did it take that long to get to that point?
And why, across different cultures, were these ideas used in some places but not others? I think that's really useful to explore. Going into the modern era, though, one difference is that social media has changed our interaction with information. Particularly with harmful information, the timescales you're dealing with are much, much faster.
Something can travel from the fringes of the internet very quickly, and we've seen that. Even in the run-up to the election in the US, some of the claims about votes being rigged started on some random internet site; in the past it would have reached a handful of people and taken a very long time to get through.
Within hours, you've suddenly got that scale problem. Then there's the emergence of AI, which on the one hand is delivering some remarkable scientific insights. What we've seen, particularly around things like structural biology, has opened up enormous avenues, even in terms of decision-making and our understanding of strategy and games.
There's been a lot of work on things like poker, and talking to scientists in those fields, I think there's almost a little bit of sadness that a lot of this ability to understand the biology, which might have got you into the field in the first place, we're now relinquishing. But there's also an awareness that biology is complex and that you can't always have a very neat summary that explains everything.
If you've got tools that can make extremely valuable predictions, that's really helpful. But alongside that you've also got elements like trust: if you've got a self-driving car that's something of a black box, the ability to trust the decisions it makes is very different to what we had in science in the past, where amateurs could pick things up and play around with them.
None of us has the resources to train our own AI models to the standard of a lot of the ones currently being used. And so there is that element of trust, and of relationships with companies and technology, in almost outsourcing that understanding.
Gavin: I think the idea of trust is a very interesting one, isn't it? It feels to me that in the modern era, social media has eroded trust a little bit, in the sense that, just as you were talking about, if something's written on a little website somewhere, you can go and find it, and as long as it backs up a belief you already had,
you can feel that your belief is being confirmed by this bit of evidence, no matter where on the internet it came from. This is a broad question, but how do we come to trust things in the modern era, and how do you feel about trust in the scientific process at this point?
Adam: Yeah, that's a really good question, and a big one. I think even just the dynamics of social media expose you a lot more to those extremes. There's some idea that you never see an opposing viewpoint, but the evidence doesn't suggest that's the case; people do see opposing viewpoints, but they see the most extreme versions of them.
That almost drives this lack of engagement, because you're not engaging with the more moderate versions of the argument that you could bridge with and actually make some progress. In terms of trust in the scientific process, we still see in surveys relatively high trust in science and medicine as industries.
I think we need to bear that data in mind against that wider landscape, and remember that some of the loudest and most confident falsehoods come from a relatively small group of people. If you look at content on YouTube, for example, about 1% of people consume about 80% of the most extreme content on those kinds of platforms.
Gavin: One of my own personal maxims is just never read the comment section.
Adam: I recently gave a talk that covered conspiracy theories, and very quickly you get people launching into a lot of defensiveness. But I find that interesting: why do people believe that? During the COVID pandemic, and during a lot of other health issues, I think it's important to understand that.
When we talk about trust, it's not just that you've got a group of people who don't trust and a group who do; often you have a spectrum. When COVID vaccines emerged, for example, my wife was pregnant. Pregnant women weren't in the trials, and there was that question between us: we've got the risk of a COVID wave,
we've got uncertainties around the vaccine, and on balance we decided it was much better to get the vaccine, based on the emerging real-world evidence. Other people I talked to had some skepticism about pharma companies, about other things they'd heard. It wasn't that they were deep into conspiracy theories; on that gradient of trust, they were just at a slightly different place.
And there's been some emerging work in this area suggesting that things like overstated certainty can often undermine trust, because people can actually handle a weighing of evidence if it's communicated effectively and they understand where it's coming from.
Jessamy: I think it's a really interesting conversation. And relating it to now, and to where we currently find ourselves geopolitically: when you read the historical parts of your book, it's very clear that very few people were engaged in these conversations of scientific advancement. It was an elite, generally male group of Europeans who were discussing important things and changing the landscape of science and medicine.
Now it's a much, much broader conversation, but we are somehow failing. I was interested in what you were saying earlier about public engagement. It feels to me like we are not always talking about the things that are most relevant to people in their day-to-day lives, particularly in health and medicine, such as how they access and use healthcare.
So we have a broader conversation, but we're often talking about topics which are much more distant from people's day-to-day lives. I wondered what your reflections were on that, and on that historical process: we've gone from a very narrow group of people to a much broader one, but somehow it's caused many more problems.
And we haven't yet found a way of managing that diversity of opinion, or that information ecosystem. I say yet because I think we will.
Adam: I think that's a really good question, and a big one, even if we just take that wider access to information and to expertise.
Something I've thought about quite a lot with outbreaks, certainly in recent years, is that there are a lot of people who've added a lot of value who are not PhDs in infectious disease or medics in that specific area. And I've thought a lot about why. Data journalists, for example, spring to mind: in some cases they did fantastic work in documenting and communicating understanding, often much better than we did.
I think a lot of it was the intention behind where they were coming from: they were there to be useful and to try and push things forward, rather than having an agenda. That's something I've noticed particularly with people very actively pushing falsehoods: there's often a desire to have that linked into reputation,
as someone who is an independent thinker, who goes against the grain. And once they're doubling down on that, it becomes much harder to have a constructive conversation, versus people who are much more interested in being useful, putting hypotheses out there and being able to test them.
Similarly, with interacting with wider audiences, one of the things you very quickly realize if you get involved in public engagement, or even if you just try to talk to more diverse groups, is that you're wrong about a lot of things, or wrong in your perception of how people feel about things or what they think is important.
And there's always a decision about what you do next if you're wrong. There's always a temptation to put up the barriers and dig in, rather than think about how you transition to a better place. I think that balance hasn't always been right. Part of me would like to be optimistic:
the access to information, and potentially to expertise, is much bigger than it has been historically. But, counterintuitively, we're ending up in a place where the interpretation of that information has become even worse in a lot of ways.
Jessamy: Yeah, it's so interesting, isn't it?
And I wondered whether, through your thinking and writing of the book, you had any recommendations. What should we be doing at The Lancet, for example? I know that's probably too much to answer on the spot.
Adam: One thing that struck me, digging into these questions of falsehoods, misinterpretation and misinformation, is, first of all, that there are two elements to the problem.
It's much like when we design a trial: you want to guard against thinking something is true when it's not, but you also want to guard against thinking something is false when it's not. Similarly, we have a lot of focus on people believing falsehoods, but there's also the risk that people won't believe things that are true.
There's been some nice work, even in the past few months, looking at the effects of news consumption: there is an effect of falsehood belief, but there's also just this disengagement from things that are true. And the majority of content that people interact with is generally from credible sources.
That's something we saw even in vaccine-related content on Facebook. There's some nice work looking at this: of the content circulating during the COVID vaccine rollout, about 0.3% was flagged by fact checkers. That didn't mean the rest was fine, because one of the really widely shared headlines said a doctor had died two weeks after getting his COVID vaccine,
and the CDC was investigating. That was a factually accurate statement; those things had all happened. But it was being shared with the insinuation that we could infer something about the safety or effectiveness of the vaccine, which we couldn't from that particular report. And that particular headline actually reached about seven times more people than all of the fact-checked content combined.
So something which is accurate but has that potential for misinterpretation was just much more impactful. If we're talking about actions, we need to think about where the problem actually is, in terms of the impact these things are having on people, and about those balances.
That is, is it that people are deeply engaging with things that are false, or is it that we're just seeing an erosion of trust, of trust in the idea that anything could be true? It also plays into things like the role of institutions. Some of the people I talk to who work a lot on conspiracy theories made a very good point that these people often aren't lazy or poorly read.
They put a lot of effort in. So it's worth making the effort to understand why people get to these places. It's not just random; there's a whole set of factors that have led them there.
Gavin: We have quite a similar example, actually. One of the most read articles on The Lancet in 2020 was an editorial that Richard Horton, our editor-in-chief, wrote, and its title was 'COVID-19 is not a pandemic'.
Now, if you read the article, Richard is discussing how COVID-19 is a syndemic: it involves lots of different conditions and systems, all occurring at the same time and interacting with each other. But hundreds of thousands of people saw the headline, 'COVID-19 is not a pandemic', says Richard Horton, editor-in-chief of The Lancet, and just ran with it.
Amazed isn't the right word, but it was like they were taking our scientific credibility, reading just the headline, and going: fine, it's not a pandemic, The Lancet says so.
Adam: I've found that a lot. Even now, people will do things like that, or they'll send articles from early 2020, when it was just a very different epidemiological situation, a very different combination of evidence and how you should interpret it.
But I think it's also just the volume, and that's something it's easy for scientists walking into those debates not to appreciate. Whenever I've ended up interacting with people who very strongly believe things that just aren't evidenced about some aspect of health,
they will have 20 or 30 peer-reviewed papers in good journals to hand. The sum of those papers doesn't support the point they're making, but it's a lot of work to untangle those elements. If I'd gone into those conversations assuming it's all just wrong, it's nonsense, it's going to be really easy to debunk,
I would have lost that debate very quickly, with that perception of how they were approaching the evidence they were presenting in support of their argument. Ultimately, particularly in that very performative online space, we have to remember that social media is a stage; it often isn't a nice conversation in a cafe.
Gavin: It is difficult to unpick, isn't it? I was really interested in the part of your book where you talked about that nuanced understanding of misinformation: how, like you said, people had 20 or 30 peer-reviewed articles ready to go, but at some point there was, almost like a table holding up all of their beliefs,
some complete misunderstanding of a particular paradigm. They were taking all this true information, but there was some underpinning of it that was false or misunderstood, and it's so much work for someone trying to communicate science to trace back to where that false understanding might be.
Adam: One thing I find can be very helpful in communication is not just trying to push back on the avalanche, but also explaining the tactics that are being used. When I dug more into these rhetorical techniques, some of these flooding techniques, I realized I'd been falling for a lot of them.
You stay up all night because someone's flooded you with a million questions and you can't respond to all of them, and you think, oh, I look bad, I look like I don't know what I'm talking about. Actually, it can be far more successful to very succinctly call out what they're doing. If you say something and people come after you with a million claims, you can say: I'm going to go back to my original point, this is what I've said, and this is why I've said it. Because even if you're not going to convince that one person at the extreme of the distribution, you've got a lot of onlookers for whom you're presenting useful, balanced information, and you're not getting dragged into an online argument, which is often exactly what they're looking for.
Gavin: Did you get dragged into many online arguments during COVID?
Adam: I'd like to think I avoided many of them. Particularly as the pandemic went on, everyone was just so tired and frustrated; I'm sure some things I said were sharper than I would have liked. But I actually ended up just logging off social media a lot of the time, because I saw the damage it was doing,
just to your wellbeing, to interactions with family, with everybody. You can very easily live your life distracted and tired and frustrated. In a way, because when the pandemic came I'd just written another book about information spread online and all the unhealthy processes,
I think I was a little bit more guarded, but it's just very easy to get dragged in. I think it's very hard to have a healthy relationship with social media; it takes a lot of work. And I've found, particularly in the last couple of years, that I've almost become a bit of an advice person for friends who've got into difficult situations on social media, don't really know what to do about it, and are dealing with the stress of it.
Even just saying things like: if someone's commented on your post and it's not a big problem, almost nobody else is going to see it, and it will go away. Those kinds of things are very hard to see in the moment.
Jessamy: At the moment I'm in that frame of mind where I'm just trying to think of positives for the future. All of this is very new, it's really only 15 years, and we've had some huge things happen in those 15 years; the pandemic was a massive learning experience for us all.
We're going through what seems to be another transformative stage now on the global front. What are the lessons we can take from this, positively, for us in the health and science community, as to where we need to be moving? Because there are positive lessons here. The things we're talking about, the fact that lies spread faster than truth,
that's a much broader problem. The reason those lies spread faster than truth is often because they're speaking to something very real: a feeling, an inequality, an injustice that people are experiencing in a very deep way, and that we are somehow missing with our factual or truthful response.
I accept as well that there are algorithms and many other factors at play, but there is often some core there. And I'd love to hear your reflections, Adam. We're in a very interesting place in history, it seems to me. What are the positive things we can take forward and do, and where do you see this all going
in the next five years?
Adam: Maybe first on the positives: that ability to make connections across fields. I've had collaborations that came off social media, in the good old days when there were fewer angry people and a bit more science. Even for early-career researchers, the ability to just
put a preprint and a summary out there and get thousands of eyeballs on it is valuable. Especially in the era of climate change, you don't want to be going to conferences every month, and you can mimic a lot of that. I think there are a lot of potentially really good links that can be made.
And also that ability to translate across fields in different ways. Even in COVID, weirdly, my most shared post was an extremely mathematical post about the alpha variant, basically making the point that transmission will trump severity. If you have something that's 50% more transmissible, you've got a bigger problem than something 50% more severe,
because severity just scales the numbers up and down, whereas transmissibility gets you the exponential effect. That took off on its own, and then graphic artists reworked it for Instagram, getting many more shares, and people reworked it for other platforms and pulled the idea elsewhere.
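The transmissibility-versus-severity point can be illustrated with a toy branching-process calculation. All the numbers below (baseline reproduction number, severity, generations) are illustrative assumptions, not figures from the original post:

```python
# Toy branching-process comparison: a 50% rise in transmissibility compounds
# every generation, while a 50% rise in severity only scales totals linearly.
R = 1.1          # assumed baseline reproduction number
severity = 0.01  # assumed baseline risk of a severe outcome per case
generations = 30

def severe_cases(R, severity, generations):
    """Severe outcomes from one index case after a number of generations."""
    total_cases = sum(R ** g for g in range(generations + 1))
    return total_cases * severity

baseline = severe_cases(R, severity, generations)
more_severe = severe_cases(R, 1.5 * severity, generations)
more_transmissible = severe_cases(1.5 * R, severity, generations)

print(more_severe / baseline)          # about 1.5x the baseline
print(more_transmissible / baseline)   # vastly more than 1.5x
```

The extra severity multiplies the final count by 1.5 once, while the extra transmissibility multiplies every generation, which is why it dominates after enough generations.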
And that's something that just couldn't have happened in the past. I probably would have written a letter to a journal, and it might eventually, in a few weeks, have got somewhere. So I think there are all these glimmers of things that can be really powerful and promising, even just with bits of technology and AI coming through.
We've had students put together little interactive things with the help of some coding and some AI, and be able to explore ideas and iterate much faster. A lot of the barriers to entry, which previously would have been money and technical barriers, have come down. I've even found this working with collaborators around the world who are building out their data science teams.
I was traveling a bit before Christmas, and they were saying, we'd love to put up some dashboards, but we don't have the resources. I said, look, let's just sit down for an afternoon with some AI copilots; you can prototype, get over that hurdle, and then start working on it.
But as you said, there are those downsides, and I think about this a little in terms of how we might approach health threats. If you have harmful content and you want to keep the benefits and get rid of the harm, there are a few options. Thinking of it as an outbreak, one is that you just chase down the bad thing:
if we have an outbreak of a new disease, you somehow work out where the cases are, and you treat them or isolate them. Another option is that you change the nature of interactions. We saw that very dramatically in COVID, when we didn't know where the infections were, and a lot of countries' response was: we're just going to
fragment the network and make it harder for things to take off. We saw that even in the financial crisis in 2008: one of the responses, ring-fencing, was basically about stopping risk spreading from the investment side to the deposit side of banks. But again, that's quite disruptive.
There are a lot of aspects of online interaction which it would probably be healthy to slow down, or to add some more friction to. Companies are, understandably, very reluctant to do that, given that a lot of their money is in these things taking off. But the third option, and the one we often prefer in health, is something like vaccines, which avoid the need to interfere with people's interactions and to chase down testing in real time.
And I think it's partly about that explanation, that awareness of how you might fall for these things and this harmful information. But as you said, there are also deeper social reasons that people might be mistrusting. To some extent, just saying we're going to chase down the bad things when they emerge is a bit of a sticking plaster
if there are underlying reasons why there's a lot of susceptibility to these particular ideas and beliefs.
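The "fragment the network" option maps onto a simple back-of-envelope calculation: if each case infects R others on average, removing a fraction of contacts scales R down proportionally, and an outbreak dies out once R drops below 1. A minimal sketch with assumed numbers:

```python
# Back-of-envelope look at "fragmenting the network": removing a share of
# contacts scales the reproduction number R down. Numbers are illustrative.
def effective_r(r0, contacts_removed):
    """Reproduction number after a fraction of contacts is removed."""
    return r0 * (1 - contacts_removed)

r0 = 3.0  # assumed baseline reproduction number
for cut in (0.0, 0.5, 0.7):
    r = effective_r(r0, cut)
    status = "grows" if r > 1 else "dies out"
    print(f"remove {cut:.0%} of contacts -> R = {r:.1f} ({status})")
```

With this toy baseline, cutting half of contacts still leaves R above 1, which is why the interventions Adam describes had to be so disruptive to work.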
Gavin: You mentioned AI there, and one thing occurred to me while I was reading your book, and about your work during COVID: if the large language models we have available now had been available during COVID, five years ago, what do you think might have played out differently?
Adam: That's a really good question. It's something we're working a lot on at the moment, actually, especially given some of the signals we're seeing with some pathogens around the world. I spent quite a lot of time almost trying to write down every task I did during COVID and during past outbreaks, and why each one took so long. We held a meeting early this year, actually, bringing together a lot of analytics groups, and wrote down the common tasks you'd wanna do early in an epidemic. Each one, on average, people estimated would take them about a working day. That's a real margin we can target with something like AI: think about which bits AI could be good at, which bits it won't be good at, and which bits we don't want to let AI near. If you're making policy on something, you don't just want to fully generate a codebase and let it go. But there are elements. One of my colleagues, Billy, has been doing some nice work looking particularly at things like narrative reports, where you might have a big outbreak document with "so-and-so went here on this day and did this thing", and can we convert that into a nice table that we can easily summarize? These are things you just sink time into, even extracting bits of information from the literature in a very rapid way: I want an incubation period, or I want to know a certain delay in an outbreak. Can we just streamline a lot of that? And then I think also, from a learning point of view, one of the big problems we had during COVID, and for a lot of outbreaks, is that a lot of analysis is really bottlenecked; it becomes very centralized.
What has historically happened is your data will go to some international group, that international group will do the analysis, usually among a small bunch of people, and then it will get communicated back. That doesn't scale. You can't suddenly have a hundred countries wanting that, and it's also not really sustainable or equitable. You'd much rather have those tools being used within country, and I think AI has a lot of promise in just helping people with, sort of, training requests, or helping them understand how to use these tools, or "I've got this and I wanna change it to this". It can get people over a hurdle that previously created a bottleneck around a small number of experts you're relying on. So for me there's a lot of optimism that, aside from the fact you've got a workforce which is extremely depleted and tired, a lot more of that analysis could be done globally by growing teams, rather than having to be outsourced.
Gavin: We talk quite a lot about communicating certainty, though, as you discuss in your book, there isn't really such a thing as complete certainty. But what struck me a lot during COVID, with the comms situation, was how difficult it was to communicate uncertainty to the public: the whole "follow the science" thing, and saying this is what the science believes, when the science was constantly shifting. I thought it was interesting that you mentioned in your book travelling to work in early 2020, which at the time involved a lot of hand sanitizer and no masking, which obviously we now realize was the wrong way to approach avoiding COVID infection. So I wonder if you've got any reflections on communicating uncertainty to the general public.
Adam: Yeah, I think that was an extremely important question during the pandemic. There were a lot of times where we could point to communication that got it wrong. Even around the debate around airborne transmission, there was a lot of certainty: this is a fact, this is not airborne. And of course that wasn't a fact; there was, even at the time, uncertainty, and now we have more evidence, although again context-specific, around different modes of transmission. There were some emerging examples of countries that communicated that much better to the public, particularly in terms of changes of direction. Denmark was one that springs to mind. Singapore, I thought, over time, particularly around reopening, was explaining: we're doing this based on this evidence, and that evidence might change. Rather than: the science tells us this is the correct thing, this is definitely the correct thing, and then three days later the science has apparently completely changed. It hasn't; it's just the policy that's changed. I think also that separation between what we are seeing, our views on how to intervene, and then the policy decisions on intervention is quite important. We see this for disease.
We see it for climate change: you might have a lot of people who agree on the basics about a disease. You agree on the transmission route, you agree on the severity, you agree broadly on the impacts it has on society. You might even agree on the effect different interventions will have: how good certain non-pharmaceutical interventions are, how good certain border measures, testing, and so on are. But you might strongly disagree on what you do about it, because then you've got values, you've got all of these social and moral aspects when it gets into the policy space as well. And I think it's really important to understand which level we're on. Often during COVID, and in other situations, it got blurred: people talk about one level as if it's the other. So they might talk about something which is perhaps a difference in values around implementing something, and present it as a debate around the science or the evidence. Or someone might be disagreeing on whether an intervention works or not and present it as a disagreement on policy. And actually, in that case, it's more useful to get back into the science and at least agree at that level before you move on to the next.
We saw a lot of blurring between those levels, and one thing I found researching the book is that this is not a new thing. Even Austin Bradford Hill, in his work on smoking, was quite focused on having that separation: you can establish the evidence that smoking causes cancer, but the policy of what you do about that is a separate issue; it was not, in his view, for scientists to dictate the policy based on the evidence. I saw a lot of discussions that could have been much more constructive if we had focused on which bit we were actually talking about.
Gavin: I guess I'm interested then to know what your reflections are on how science interacts with policy post-pandemic, and what role you think it has to play in government and in the formation of policy.
Adam: I think that's a great question. It's a very big one. There are a lot of areas where science has played an enormous role in informing policy and can continue to going forward. I think probably one of the challenges during COVID came where the policy space was extremely narrow, and that almost puts pressure on science, because you are constrained within that.
The classic example is very early on, with a lot of the scenarios being looked at for the UK. A lot of those early scenarios only allowed for three months of interventions, and if that's your constraint, you cannot come up with a scenario that doesn't, sooner or later, have a massive epidemic. And so then you get that mix: is the science saying you're gonna have a big epidemic, or is it the policy constraint that's been put on the range of things being looked at? And so I think one of the really useful things that could be done coming out of it is keeping the question attached to the analysis.
We often saw this happen in COVID: here are some scenarios, or here's some claim about something, when actually that was often in response to a very specific what-if question. Having those two things linked, if this is my question, this is how the evidence feeds in, can be very helpful in understanding how science is informing policy, because then, if things are public, people can ask: is that the right question? I think what also happened that was very helpful as the pandemic went on was the faster and more transparent release of evidence. That was particularly helpful with these very dramatic measures, where as we got further into the pandemic you'd see the evidence released almost a day or two later, and people could very quickly critique it.
Early on, I found it, and I know a lot of colleagues did too, very hard. You had evidence you knew about, and it wasn't public, and then even with media interactions it's very hard to say things where there's no public record of them, because then it's just a "believe me, I'm a scientist" type of interaction. I much preferred being able to do media where you could walk people through it and say: look, this is the emerging evidence about a variant, here, you can read it; this is what you're seeing, and this is why you're seeing it. I think it just created a much healthier interaction with communicating uncertainty.
Jessamy: If we had to write a new definition for proof, what would it be?
Adam: I think one of the things I found as I was going through the book was a bit of a tension between the academic desire for certainty and the pragmatic, real-world approach to how we use things. Even if you look back historically, there was this movement of pragmatists in the 19th century, and they said proof is what works, which I think is perhaps the extreme version of that. I think it is important, in this world of uncertainty, and importantly a world where you have to make decisions, to think about where you set the bar.
And that bar may not be the same for every action. One of the dangers of requiring a very high level of confidence is that by default you're gonna prefer inaction, and inaction is a decision. So I think sometimes we get a situation where we don't actually weigh up what we're dealing with, what the uncertainties are, where we're happy to take that risk, and even where we'd take a risk to accumulate more knowledge that we can then build on. That's almost the whole idea of a clinical trial in an emergency: we are taking on some risk now to build knowledge that can prevent risk in future.
And I think that's a really important element to think about, rather than this kind of search for purity, which, particularly under pressure, won't necessarily give us the outcome we want.
Gavin: Alright, Professor Adam Kucharski, thanks so much for joining us on the podcast. Congratulations on your new book, which I believe is out already. It is, yes. Yeah, fantastic. People will be able to find it wherever they get their books. Thank you again for joining us; it's been really fascinating. Yeah, thank you. Great to chat.
Thanks so much for joining us today on The Lancet Voice. If you've enjoyed the podcast, please leave us a review on the podcast platform you generally use, and we'll see you again next time. Take care.