AI in Health Care - Evidence, Risks, and Real-World Use - Charlotte Blease, PhD, Uppsala University, Sweden and Research Affiliate at Digital Psychiatry, Harvard Medical School.

Generative AI tools such as ChatGPT are already being used by patients and clinicians, often without formal guidance or training. This episode examines what the current evidence shows about uptake, perceived benefits, and emerging risks, including gaps in patient research and clinician preparedness. Drawing on recent survey data, Charlotte Blease, PhD, discusses how patients use AI between appointments, why long-term and rare conditions may drive adoption, and what is known - and not yet known - about misinformation, empathy, mental health impacts, and the urgent need for training and policy to catch up with real-world use.

 

📺 Watch the Interview

🎧 Listen to the Podcast

👉 Listen on Spotify | Apple Podcasts | YouTube

 
 

This episode is sponsored by EASEE® by Precisis, which had no influence over the editorial content or discussion. Learn more about EASEE® here.

 

Episode Highlights

  • Clear evidence on how patients and clinicians are already using generative AI in health care

  • What current research shows - and where major evidence gaps still exist

  • Key risks around misinformation, dependency, privacy, and mental health

  • Why training, policy, and patient guidance are lagging behind real-world use


About Charlotte Blease, PhD

Charlotte is a health informaticist, Associate Professor at Uppsala University, Sweden, and Research Affiliate at Digital Psychiatry, Harvard Medical School. Her PhD was in philosophy of science and mind (2008). She has worked in health research for nearly 20 years, with 200 publications, holding academic posts in the US, the UK, and Europe. In 2026 she is Visiting Professor at the University of Melbourne. Charlotte is the author of the book “Dr Bot: Why Doctors Can Fail Us and How AI Could Save Lives”.

Full profile: charlotte-blease

Topics mentioned

  • chatbot

  • rare diseases

  • chronicity

  • diagnosis

  • empathy

  • culture

  • developmental

  • paternalism

  • google

  • legislation

Related articles/papers

Over one in three using AI Chatbots for mental health support, as charity calls for urgent safeguards, https://mentalhealth-uk.org/blog/over-one-in-three-using-ai-chatbots-for-mental-health-support-as-charity-calls-for-urgent-safeguards, Mental Health UK, Nov 2025

Rare Disease in the U.S. 2025 https://ssrs.com/insights/rare-disease-in-the-us-2025

General practitioners’ adoption of generative artificial intelligence in clinical practice in the UK: An updated online survey, https://journals.sagepub.com/doi/full/10.1177/20552076251394287, Charlotte Blease, et al, Nov 2025, DOI: 10.1177/20552076251394287

Generative artificial intelligence in primary care: an online survey of UK general practitioners, https://informatics.bmj.com/content/31/1/e101102, Charlotte Blease, et al, Sep 2024, BMJ Health & Care Informatics, DOI: 10.1136/bmjhci-2024-101102

Generative artificial intelligence writing open notes: A mixed methods assessment of the functionality of GPT 3.5 and GPT 4.0, https://journals.sagepub.com/doi/full/10.1177/20552076241291384, Anna Kharko, et al, Oct 2024, Sage Journals, DOI: 10.1177/20552076241291384

How generative AI affects patient agency, https://www.bmj.com/content/391/bmj-2025-085323, Charlotte Blease, et al, Nov 2025, BMJ, DOI: 10.1136/bmj-2025-085323

  • Trailer & intro

    00:00 Charlotte Blease

    “Patients were very savvy about these tools and about the dangers of descending into sort-of cyber rabbit holes about everything pointing to cancer, and all the rest of it, but still used Google and other search engines. And doctors use these tools as well, so we've got to avoid that level of hypocrisy.”

    00:19 Torie Robinson

    I’m Torie Robinson, and today I’m joined by health informaticist, writer, and philosopher, Dr. Charlotte Blease. We’re going to dig into the real-world uptake of generative AI in healthcare - including why over a third of British adults report using it for mental health support, and why most UK GPs report having had no training in it at all!

    This episode is presented in partnership with EASEE®, by Precisis GmbH.

    Who’s using generative AI for health information?

    00:43 Torie Robinson

    Are there any headline statistics on how many patients or families are already using generative AI tools like ChatGPT, for example, for health information?

    00:52 Charlotte Blease

    Yes, but we don't have enough of them. The findings that we do have are really interesting. So, there was a study in the US last year of patients and family members with rare diseases - about one in 10 people has a rare disease, it's worth saying. What they found was that these people were twice as likely to use generative AI tools such as ChatGPT compared to people or family members without rare illnesses - which is fascinating. Another study, published a few months ago, in November 2025, in Britain, found that just over a third of British adults had used generative AI tools for mental health support (this was a study by Mental Health UK). What's interesting is that when you delve into the study, two thirds of the people who used the tools said they derived benefit, which is also interesting. We need many more of these kinds of studies, but people are using these tools.

    Patients or clinicians? Or both?

    02:02 Torie Robinson

    So, from your recent papers on generative AI in healthcare, what is the most striking statistic you've seen so far?

    02:08 Charlotte Blease

    It's been the level of uptake of these tools. In a study that we have just got the data from, which hasn't yet been published, we found that over half of UK GPs were using generative AI tools. We're still undertaking the analysis of that. But last year (so, 2025 - this is the third year we've run the survey), we found that it was 25%. So that's doubled! And I will say that the leading tool in use is the consumer chatbot ChatGPT: 70% used this tool last year (again, we have to delve into it this year with the analysis). Another fascinating finding, when we did the survey in 2025, was that 95% of surveyed UK GPs said that they had no training in generative AI. So we've got this huge deficit and this huge divide: people are using these tools - not just patients, doctors as well - but there's a complete lack of guidance and training.

    03:23 Torie Robinson 

    It has to be said, that does sound a little bit frightening.

    03:26 Charlotte Blease

    It is. Particularly, I think, in relation to the risks with these tools, but also privacy, which is a massive concern.

    When and why are patients using generative AI?

    03:36 Torie Robinson

    Do we know which sorts of patients and families are actually using generative AI - you know, in between appointments, but also for explanation, reassurance, or second opinions - or is the data still emerging? And do patterns look different for long-term conditions, like, for instance, the epilepsies, compared to shorter-term conditions?

    04:00 Charlotte Blease

    We don't know enough about how different patient populations are experiencing these tools. I think it's fascinating that within the mental health space we're seeing this huge uptick, and I also think that, from surveys we've done, younger people are faster adopters. Beyond that it's really hard to tell what's going on with particular patient populations - whether people who are suffering more chronic illness are using these tools. But in the rare disease space, I will say, we're seeing some interesting things. I've just done another study in the US (again, we're writing that up), and it seems that there's a lot of interest among people who want, exactly as you've said, greater information - and they want it rapidly - on medications, on diagnostic possibilities (if you're unclear what's wrong), and also on the current status of research. One thing that keeps recurring is that people know their doctor doesn't have a lot of time to answer their questions, and if they have questions between visits - which, bear in mind, can be few and far between with your doctor or a specialist - that's where these tools are catching patients, and patients are deriving some benefit. Anecdotally, patients also use them for advocacy. As one patient said to me, “I want to lose some of my patient voice, I want them to take me more seriously, so I use these tools to write emails, and so on, to my doctor.” She told me she thought it really was beneficial in that regard; the doctors responded differently.

    What about the tone?

    05:44 Charlotte Blease

    We did do a study ourselves, my research team and I, asking generative AI to write clinical notes in an empathic and accessible way. We just used that prompt, and we compared the results to fictional GP notes in the UK that had been checked by other doctors to make sure that they looked like standard GP notes. What we found was that the notes were considered much more empathetic when written by generative AI (in this case it was ChatGPT); we could not discern any signatures of empathy in the original notes. Now, that's interesting, because we got into a little bit of whether the note was written in a medically faithful way, and there were some slight issues there, but on the whole doctors thought that the notes were well written. The biggest takeaway, though, was in terms of empathy: there was just this augmentation of empathy. And there have been other studies on that as well, where generative AI tools are really rich in, as I say, these signatures of what looks like compassion and support, and they can be incredibly impressive in how they get the tone right. But what we also found, I should add, is that some of the notes were a little bit overweening. They were a bit California-esque, sort of “We are on this journey together”, and we wondered how a patient would experience that; they might just think it was a bit much. On the whole, if it was toned down, it might strike the right chord - it depends on the patient as well.

    07:33 Torie Robinson

    It's interesting you mention “culture” there, because some people would really like that. So it could potentially be important to tell the AI “This is where I'm from. This is the type of tone I use.” and see what happens then.

    07:47 Charlotte Blease

    Exactly, and that's where these tools start to become very useful, because you can, in seconds, get the bespoke tone that you want. As a patient - and this is where we need more guidance and, if you like, more guardrails too - you can ask for whatever level of health literacy, tone, or conversational style you want, and that can be very helpful to patients. Could potentially be.

    Charlotte’s book: Dr. Bot: Why Doctors Can Fail Us - and How AI Could Save Lives

    08:16 Torie Robinson

    Check out Charlotte's book: as well as being really informative, it's actually funny. So keep your eye open for the - some may say slightly sarcastic(!) - bits, which is critical, I think, in this sphere, because if you don't laugh, you cry sometimes.

    08:31 Sponsor mention

    Before we move on - with thanks to EASEE®, by Precisis GmbH.

    Harm and risks 

    Torie Robinson

    So, there's a big concern about risk. Do we have any qualitative or quantitative evidence yet around harms like misinformation or increased anxiety? Or is this, for most people, mostly theoretical?

    08:48 Charlotte Blease

    We don't yet have the causal data to demonstrate that it's causing harm, nor do we have sufficient patient survey research - even self-reported research - to see how patients are experiencing harms and risks associated with these tools. I think it's really critical that we get it. I've actually written papers saying we need more patient research; we've got a lot on healthcare professionals, but not enough on patients. And we've got a lot of headlines on the risks of these tools associated with suicide. In October last year, OpenAI said 1.2 million users per month were “talking” (in inverted commas) to ChatGPT about suicidal thoughts. So people are using these tools at scale, it seems, but we don't yet know enough about those sorts of risks, and I suspect there are going to be many; again, we can talk about them more theoretically. I mean, these tools are designed to pull you in, to keep you engaged, to keep you using them. That's a big risk, because you can become dependent on them, and they may be telling you what you want to hear. They're so people-pleasing, and so polite, and increasingly they end with a question: “Would you like me to do this next?”, “Would you like me to do that?”, and that just keeps you thinking “Maybe I should stay engaged.” And for young people, I worry about developmental stages where they may come to depend on these tools. There is huge scope here. I think, when we get more research in the future, we're increasingly going to find identifiable “at-risk” populations with these tools, potentially including young people. But I also think there are going to be benefits too; some people are going to derive a lot of benefit. It's going to be a very messy, mixed picture, I suspect.

    Education and training 

    10:48 Torie Robinson

    From what we've been saying so far, there needs to be far more education and training regarding AI. If you don't use it in the right way, it can be dangerous. But if you use it in the right way, it can be incredibly useful.

    11:01 Charlotte Blease

    It can be; it can be very empowering. If you look at the body of survey research, and lots of qualitative research with patients too, patients were very savvy about these tools and about the dangers of descending into sort-of cyber rabbit holes about everything pointing to cancer and all the rest of it. Patients knew; they were cynical - but they still used Google and other search engines. And doctors use these tools as well, so we've got to avoid that level of hypocrisy. There's always been this slight tension there: “We can use them, but, you know, silly old patients won't be capable of using them, they'll get upset”, and all the rest of it.

    11:42 Torie Robinson

    Well, let's face it, that bias has existed for a very long time!

    11:45 Charlotte Blease

    A very long time

    11:45 Torie Robinson

    We didn't need AI for that.

    11:48 Charlotte Blease

    No, it's just a new incarnation of it, of medical paternalism.

    Exciting study: the Health Chatbot User's Guide

    11:52 Torie Robinson

    Focusing on something incredibly positive: you are leading, with Joe Alderman, a new project which I'm very happy to be part of - the Health Chatbot User's Guide. So, tell us: what evidence gap are we trying to fill with this, around patient uptake or trust?

    12:08 Charlotte Blease

    What we're trying to do is reach patients where they're at and identify what their needs are in relation to generative AI. The overall goal (and of course we're including a whole range of stakeholders in this, but it's predominantly a very patient-oriented piece of research that, ultimately, we hope will have implications for patients in their real lives) is to offer the guidance that they need on how to better use these tools: the risks associated with them, the hints and tips they need to stay vigilant when using these tools, and all the rest of it. I don't want to pre-empt it, but we're doing surveys with patients, we're going to have a sort of expert consensus with patients on what they think needs to happen, and we hope to be able to disseminate that very widely. And, of course, in language that is usable and understandable for patients and offers really tangible guidance. Because there just isn't enough of that.

    13:20 Torie Robinson

    It's gonna be something constructive that you will be able to use in your life.

    13:25 Charlotte Blease

    Definitely, and I think we've got to get away from this sort of scaremongering attitude. People are going to continue to use these tools, so we have to move with it. We can't just put our heads in the sand; we've got to adapt. But we've also got to ensure that the guidance is there while we wait for legislation that can better protect us in relation to these tools to kick in. So, that's what we're doing.

    Takeaways


    13:58 Torie Robinson

    If our listeners and viewers remember one evidence-based takeaway about patients, clinicians, and generative AI, what should it be? Especially for long-term conditions like epilepsy, for instance?

    14:13 Charlotte Blease

    I think we still don't have enough evidence for patients in that regard, but what I will say is that we've always got to ask “Compared to what?”. People are using these tools, and they're using them to self-triage. Many patients just don't have access to health care. Many patients, even if they see a specialist, get to see them once… I mean, my sister sees her specialist for an annual visit. You know, it's not enough. In the interim, if you need advice or support, inevitably people are going to use these tools. That's what's happening, and that's what's increasingly going to happen. So, I suppose if I can throw one thing out, it's from our three years of surveys (we're doing these annual surveys, as I've said, with UK GPs): look, the uptake is massive. We've seen a doubling in use in our survey. So it would be hypocrisy to say to patients, “Why are you using these tools?”.

    Clinicians need to be extremely careful about the use of these tools, and this is where they also need their own training and guidance. It's particularly worrying that ChatGPT is the top tool in use, along with Copilot. Heidi Health, by the way, is another one they're using - a form of ambient AI that's designed for healthcare, but it's not without its challenges either. So, doctors are crying out for training. And the one piece of evidence, again, that I would come back to is the 95% telling us that they have had no training. If there are any medical educators or medical leaders listening to this, that's the thing I would underscore in bold red ink: you really need to take that seriously and stop training doctors in 20th-century ways. It's time to join the 21st century, yesterday.

    Final thoughts

    16:06 Torie Robinson

    Big thanks to Charlotte for such a clear, evidence-led look at generative AI in healthcare - from patient uptake to the training gap for clinicians.

    Again, we thank EASEE®, by Precisis GmbH, for partnering with Epilepsy Sparks.

    Thank you for joining us! If you enjoyed this episode, please give it a like, subscribe, and hit the bell so you’re notified when new episodes drop. Do feel free to share constructive points or questions in the comments below. See you next week.
