Millions of people are feeling apprehensive these days. The headlines are enough to make almost anyone feel anxious. People who are distressed may have a difficult time finding a therapist, however. There are too few, and consequently many are not taking new patients. Wait lists are long, often three to six months. Therapists who are accepting patients may not take insurance, and therapy can be pricey. A single session of gold-standard cognitive behavioral therapy can cost from $100 to $250. Could AI fill the therapy gap, offering psychotherapy online?
At The People’s Pharmacy, we strive to bring you up-to-date, rigorously researched insights and conversations about health, medicine, wellness, health policy and health systems. While these conversations are intended to offer insight and perspective, the content is provided solely for informational and educational purposes. Please consult your healthcare provider before making any changes to your medical care or treatment.
How You Can Listen:
You can listen through your local public radio station or get the live stream at 7 am EST on Saturday, Jan. 17, 2026, through your computer or smartphone (wunc.org). Here is a link so you can find which stations carry our broadcast. If you can’t listen to the broadcast, you may wish to hear the podcast later. You can subscribe through your favorite podcast provider, download the mp3 using the link at the bottom of the page, or listen to the stream on this post starting on Jan. 19, 2026.
Can AI Fill the Therapy Gap?
Conversational agents like ChatGPT, Gemini or Claude have become nearly ubiquitous. People use them to help write resumes, pitch stories, create images for web or social media posts and make financial projections. Using these chatbots for therapy-style feedback is surprisingly popular. But how well can AI really fill the therapy gap? Today’s guest has been studying these interactions.
Chatbots as Therapists:
Conversational agents are also referred to as LLMs, for Large Language Models. The name describes how they have been trained: by scouring the internet. That training allows them to predict the most likely word to come next in a sentence, or the probable next idea in a paragraph. They can’t actually think, but if something has been posted online, they have access to it. At this point, the technology has become so refined that chatbots easily pass the Turing test; it is difficult to reliably distinguish AI responses from human ones.
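The next-word idea can be sketched in a few lines of Python. This toy bigram counter, with an invented miniature corpus, only illustrates the principle; real large language models use neural networks trained on vast amounts of text.

```python
from collections import Counter, defaultdict

# Toy illustration of the core idea behind LLMs: given the words so far,
# predict the most likely next word. The corpus here is invented.
corpus = (
    "i feel anxious today . i feel better now . "
    "i feel anxious about work ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("feel"))  # "anxious" follows "feel" most often here
```

Scaled up by many orders of magnitude, this statistical completion is what lets a chatbot produce fluent, human-sounding replies without any understanding behind them.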
There are advantages to having “someone to talk to” any time, any place. Younger people in particular are digital natives and often feel more comfortable with technology than face-to-face with a human.
What Are the Downsides of Having AI Fill the Therapy Gap?
The training of AI agents as therapists, though, gives rise to some serious flaws. Because they are trained to elicit positive responses from humans to keep people engaged, they have a sycophancy bias. Have you noticed that most messages start by telling you your idea is great? That makes you feel good, and you are less likely to quit the conversation. But it isn’t necessarily how therapy is supposed to work. If people are not challenged when appropriate, they may get stuck and not make any progress toward healthier attitudes or behaviors. They may fail to develop the critical skill of stress tolerance. In addition, chatbots are disconnected from reality. This could become a serious problem if a user starts to become delusional or is in an acute crisis.
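The sycophancy bias can be illustrated with a toy sketch of preference-based training. The replies and ratings below are invented: if human raters score a flattering reply slightly higher, a system trained to maximize ratings will learn to flatter.

```python
# Invented example of how human-feedback training can bake in sycophancy.
# Each candidate reply is paired with hypothetical human ratings (1-5).
candidate_replies = {
    "Great idea! You're clearly on the right track.": [5, 4, 5],
    "That plan has a serious flaw you should fix first.": [3, 4, 3],
}

def mean(ratings):
    return sum(ratings) / len(ratings)

# Training keeps whichever reply humans rated higher on average,
# so subtle flattery is systematically reinforced.
best = max(candidate_replies, key=lambda r: mean(candidate_replies[r]))
print(best)  # the flattering reply wins
```

The same selection pressure that keeps users engaged is the one that works against the honest, sometimes uncomfortable feedback good therapy requires.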
Anxiety as a Habit:
Dr. Brewer suggests that we would do well to think of anxiety as a habit. He credits a 1985 paper by the psychologist Thomas Borkovec suggesting that worry drives anxiety rather than being a mere symptom of it. Worrying leads people to dwell on possible catastrophic outcomes, which understandably makes them more anxious. Treating anxiety as a habit, especially by finding a better reward than the illusion of control offered by worrying, could be effective. Responding with curiosity and kindness might offer a better outcome. He has studied this possibility: in a randomized controlled trial treating anxiety as a habit that can be changed, anxiety scores declined by 67 percent, compared with 14 percent under usual care. That is quite impressive.
Using Chatbots to Kick the Worry Habit Could Help AI Fill the Therapy Gap:
One way to use AI effectively is to train conversational agents specifically to monitor for safety in other human-chatbot interactions. Given clear rules, they can do this very well. Also, chatbots could be used not so much as teaching assistants but as learning assistants. They could help people who are striving to change their anxiety habit. This might be integrated with video tutorials from an expert human, such as Dr. Brewer or one of his colleagues. They are testing this approach currently. Hopefully, it will prove more effective than the 20% response rate to SSRI medication for anxiety.
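As a rough illustration of the guardrail idea, a monitoring layer can scan each conversation turn against short, clear rules and flag anything that needs human follow-up. The phrase list and labels below are invented for illustration; real systems rely on clinician-designed protocols and trained classifiers, not a simple keyword match.

```python
# Sketch of a rule-based safety monitor layered over a chatbot
# conversation. CRISIS_PHRASES is a hypothetical, illustrative list.
CRISIS_PHRASES = ("hurt myself", "end my life", "suicide", "no reason to live")

def check_turn(user_message: str) -> str:
    """Flag a conversation turn that may need human follow-up."""
    text = user_message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return "escalate"  # route to a human reviewer and crisis resources
    return "ok"

print(check_turn("Lately I feel like there is no reason to live"))
```

The point of keeping the rules short and explicit is the one Dr. Brewer makes in the interview: conversational agents follow clear, limited instructions far more reliably than open-ended ones.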
This Week’s Guest:
Jud Brewer, MD, PhD, is an internationally renowned addiction psychiatrist and neuroscientist. He is a professor in the School of Public Health and Medical School at Brown University. His 2016 TED Talk, “A Simple Way to Break a Bad Habit,” has been viewed more than 20 million times. He has trained Olympic athletes and coaches, government ministers, and business leaders. Dr. Brewer is the author of The Craving Mind: From Cigarettes to Smartphones to Love, Why We Get Hooked and How We Can Break Bad Habits; the New York Times best-seller Unwinding Anxiety: New Science Shows How to Break the Cycles of Worry and Fear to Heal Your Mind; and, most recently, The Hunger Habit: Why We Eat When We’re Not Hungry and How to Stop.
You can find more information on the skills-based program for anxiety that Dr. Brewer developed at www.goingbeyondanxiety.com.
Judson Brewer, MD, PhD, Brown University, author of Unwinding Anxiety
The People’s Pharmacy is reader supported. When you buy through links in this post, we may earn a small affiliate commission (at no cost to you).
Listen to the Podcast:
The podcast of this program will be available Monday, Jan. 19, 2026, after broadcast on Jan. 17. You can stream the show from this site and download the podcast for free.
Download the mp3, or listen to the podcast on Apple Podcasts or Spotify.
Transcript of Show 1458:
A transcript of this show was created using automated speech-to-text software (AI-powered transcription), then carefully reviewed and edited for clarity. While we’ve done our best to ensure both readability and accuracy, please keep in mind that some mistakes may remain. If you have any questions regarding the content of this show, we encourage you to review the original audio recording. This transcript is copyrighted material, all rights reserved. No part of this transcript may be reproduced, distributed, or transmitted in any form without prior written permission.
Joe
00:00-00:01
I’m Joe Graedon.
Terry
00:01-00:05
And I’m Terry Graedon. Welcome to this podcast of The People’s Pharmacy.
Joe
00:06-00:27
You can find previous podcasts and more information on a range of health topics at peoplespharmacy.com.
These are anxious times, but getting help for psychological problems is harder than ever. Some people use chatbots. This is The People’s Pharmacy with Terry and Joe Graedon.
Terry
00:34-00:47
Could artificial intelligence be one way people get help for their depression or anxiety? It’s handy to have access to an automated therapist on your phone anytime you want. What should you know about the limitations?
Joe
00:48-00:56
Our guest today is an addiction psychiatrist and neuroscientist. He’s been studying how people interact with chatbots.
Terry
00:57-00:59
What guardrails might we need?
Joe
00:59-01:08
Coming up on The People’s Pharmacy, psychotherapy on your phone. Can AI fill the therapy gap?
Terry
01:14-02:37
In The People’s Pharmacy Health Headlines: Depression is debilitating, so it deserves prompt and effective treatment. Most physicians do that by writing a prescription for an antidepressant. At last count, nearly 50 million Americans were swallowing an antidepressant pill daily.
A new meta-analysis from the Cochrane Collaboration shows that exercise may be as effective as medication or therapy. The Cochrane Collaboration consists of volunteer researchers who conduct impartial, rigorous analyses in areas of their expertise. This review included 73 randomized controlled trials with nearly 5,000 participants diagnosed with depression.
A combination of aerobic and resistance exercise appears to be most effective. People who completed between 13 and 36 exercise sessions noticed improvement in their depression symptoms. In general, exercise is inexpensive and has few serious side effects, although some people in the active intervention group experienced sore muscles or problems like a turned ankle.
The researchers were discouraged that many of the trials were small and at risk of bias. They call for larger, better-designed studies with longer-term follow-up.
Joe
02:38-04:08
We’re in the middle of a bad flu season. Millions are suffering. How can people avoid coming down with this season’s influenza? A new study in the journal PLOS Pathogens suggests that good ventilation could make a huge difference in viral transmission of the flu.
The investigators recruited five people in the early stages of an influenza infection. They all tested positive for flu and were experiencing symptoms. The researchers also recruited 11 healthy volunteers from the community. All the participants were quarantined on one floor of a Baltimore hotel.
Over the course of two weeks, the two groups interacted with structured activities, such as dancing, yoga, and casual conversations. During some interactions, a tablet computer or a marker was passed between infected and healthy volunteers. Although there was close contact between people with influenza and the healthy volunteers, there were no new cases of the flu.
The investigators attributed the lack of transmission to a couple of factors. For one, the flu patients were not coughing very much. In addition, good ventilation with rapid air mixing may also have reduced the likelihood of transmission. One author noted, quote, ‘The air in our study room was continually mixed rapidly by a heater and dehumidifier, and so the small amounts of virus in the air were diluted.’
Terry
04:09-05:17
Food preservatives are found in most processed foods consumed around the world. Scientists have wondered if these compounds might have health consequences. An analysis of data from the large, long-running NutriNet-Santé study conducted in France has found a connection between certain preservatives and an increased risk of type 2 diabetes.
The average follow-up time on more than 100,000 participants was just over 8 years. People consuming high levels of potassium sorbate, potassium metabisulfite, sodium nitrite, sodium acetate, citric acid, calcium propionate, acetic acid, phosphoric acid, alpha-tocopherol, sodium ascorbate, sodium erythorbate, and rosemary extract were more likely to develop type 2 diabetes. At least 10% of the French population consumes foods containing these preservatives. According to the authors, these findings support recommendations to favor fresh and minimally processed foods without superfluous additives.
Joe
05:18-06:05
Cancer patients and oncologists strive for the best possible outcome from new immunotherapy treatments, especially when it comes to challenging tumors such as melanoma or colorectal cancer. Researchers at Duke University have raised concerns about medications that might reduce the effectiveness of anti-cancer immune checkpoint blockade.
These investigators worry that common OTC drugs such as acetaminophen for pain and proton pump inhibitors for heartburn could be disruptive. The authors call for better research to determine the effectiveness or lack thereof when oncologists monitor cancer patients who may be taking OTC medications.
And that’s the health news from the People’s Pharmacy this week.
Terry
06:14-06:17
Welcome to The People’s Pharmacy. I’m Terry Graedon.
Joe
06:17-06:26
And I’m Joe Graedon. Times are tough. Headlines and social media do their best to capture our attention and make us anxious.
Terry
06:27-06:45
Millions of people are feeling apprehensive. Many would welcome someone to talk to about their fears and frustrations. But therapists are scarce, and many are not accepting new patients, or they don’t take insurance. Can artificial intelligence fill the therapy gap?
Joe
06:45-07:09
To find out, we turn to Dr. Jud Brewer. He is a professor in the School of Public Health and Medical School at Brown University, and he’s an internationally renowned addiction psychiatrist and neuroscientist. His books include: “The Craving Mind,” “Unwinding Anxiety,” and “The Hunger Habit: Why We Eat When We’re Not Hungry and How to Stop.”
Terry
07:11-07:14
Welcome back to The People’s Pharmacy, Dr. Judson Brewer.
Dr. Judson Brewer
07:15-07:15
Thanks for having me.
Joe
07:16-08:27
Dr. Brewer, we are so pleased to be able to talk to you today about mental health issues because it just seems like over the last several years, mental health has just gotten more challenging for everybody, for patients, for providers.
And in particular, I’m thinking about what happens when there’s a tragedy. And what do I mean by that? Well, you know, somebody gets a gun and shoots a lot of people or people are out on the street and they’re homeless. And the city says, you know, you got to go, you got to go.
And everybody says, well, it’s a mental health problem. But they just aren’t willing to spend the money for training to have adequate numbers of health care providers, psychologists, social workers, psychiatrists. And as a result, they’re just not enough. And we don’t have the facilities.
And so people are struggling. And now everybody says, oh, we’ve got the solution. It’s artificial intelligence. So help us better understand where we are in mental health today.
Dr. Judson Brewer
08:29-09:35
Well, there’s a lot to unpack there. And first off, thank you for bringing this to everybody’s attention. This is really important. The mental health crisis hasn’t suddenly evolved, or I should say it’s been evolving over time. And I think people are getting more and more familiar with it and more and more comfortable with calling it a crisis because it is.
So there are a number of different ways that we can approach it. One is training, as you’ve already highlighted. It’s hard to scale people. So even if we could provide the best training at the snap of our fingers, there are also a number of hurdles there with providing treatment to people.
For example, cognitive behavioral therapy, which is primarily the gold standard in the U.S., tends to cost about $100 to $250 per session. And even with insurance, it can be pretty expensive for people out of pocket. It can cost close to $200 a month even with their co-pays, et cetera.
Terry
09:36-09:42
Even with insurance, but we don’t always have providers taking insurance.
Dr. Judson Brewer
09:43-09:55
Yes. And a lot of people are more and more less likely, or I should say they are less likely to take insurance because there are a lot of hassles with the insurance companies and getting paid for your services.
Joe
09:55-10:29
Well, let’s pause right there for a moment, because what that means in reality is that unless you have the financial resources to pay a therapist for 50 minutes or an hour of time, you are kind of out of luck. A lot of the therapists are saying, well, we’re just not going to take the hassle of insurance and all of the stuff that goes with it. We want cash on the barrelhead. And if you don’t have it, sorry, we aren’t going to see you.
Dr. Judson Brewer
10:30-10:38
Right. And they can say that because the wait lists for therapy tend to be–ready for this–three to six months.
Terry
10:39-10:40
Oh, my goodness.
Dr. Judson Brewer
10:40-10:43
So the therapists are pretty booked, even only taking cash.
Terry
10:44-10:51
So if you were in a mental health emergency, six months is not a reasonable emergency response time.
Dr. Judson Brewer
10:52-10:55
Even if it’s not an emergency.
Terry
10:55-10:55
Yeah.
Dr. Judson Brewer
10:55-10:57
Who wants to wait six months to get…
Terry
10:57-10:58
Exactly. Yeah
Dr. Judson Brewer
10:58-11:53
…help? Yeah. So that’s an emergency in terms of thinking through all of this, the cost, the number of people that are trained. And I would say on top of this, there’s a lot of inertia in terms of training.
And so, you know, there’s been a lot of progress in terms of how we understand mental health and how we understand, for example, well, my lab studies anxiety, right? There’s been a lot of progress that’s happened over even the last decade, over the last five years that doesn’t get into training.
Think of all the people that have been trained over the last several decades who don’t know the current neuroscience because they are booked full with patients doing their thing. So just adding, I think we get the picture here of why this can be challenging, to put it nicely and problematic, to put it more pragmatically.
Joe
11:53-12:42
Well, you can understand why people would say artificial intelligence will be the savior for mental health. I mean, just imagine a teenager who’s feeling really anxious, perhaps even suicidal. It’s Saturday night. It’s 2:30 in the morning, actually. And there’s no way they can get to a mental health clinic. And even if they did, there’d probably be a long wait.
And so if they could just go to their computer and turn on some bot, and you’ll have to explain what a bot is, and have a conversation with a very understanding AI entity, that might be a lot better than contemplating suicide.
Dr. Judson Brewer
12:44-13:53
Absolutely. And so I think theoretically, the promise is there where AI, or think of these conversational agents, which basically is a fancy term for something that provides very human-like language in a conversational way, where it’s hard to tell if it’s not a human, where you could scale this.
Because if you just take these things out of the box, for example, ChatGPT, Gemini, Claude, all these chatbots, they are by definition scalable. As long as you have a phone or a computer and their monthly fee, you can access these things.
On top of this, young people in particular have grown up as tech natives or digital natives where they’re very, very comfortable with technology to the point where a lot of people report being more comfortable texting or interacting asynchronously or with technology than they do talking face-to-face with people, especially adults.
Joe
13:54-13:55
Whoa, whoa, whoa.
Dr. Judson Brewer
13:55-13:55
So imagine.
Joe
13:55-13:57
What’s asynchronously? What is that?
Dr. Judson Brewer
13:58-14:13
It just means a text chain is asynchronous, where, you know, you text somebody and then you have to wait for their answer. And so it’s not synced up, whereas, for example, our conversation right now is synchronous. We are having a live conversation.
Terry
14:14-14:28
Right. But if we were to text you, we might have to wait a few hours until you are ready or maybe a few days. I have some people I text, I don’t expect a response for a day or two.
Joe
14:29-14:37
But with artificial intelligence, I’m assuming, you know, you could get an answer back within 30 seconds to a minute or two.
Dr. Judson Brewer
14:38-14:58
Yes, the bots are waiting. You know, standing by, as they used to say, ‘operators are standing by.’ Yes, these bots are standing by where they can respond very quickly. And like you pointed out earlier, 24-7, they’re always available as long as you’ve got a battery juiced up in your phone.
Terry
14:59-15:13
Dr. Brewer, I was surprised to read that one of the main things that people are doing with these chatbots is actually therapy. I thought that was pretty astonishing. Is it true?
Dr. Judson Brewer
15:14-15:40
It’s been a surprising finding for a number of people. There was a Harvard Business Review study that came out in April of 2025 where they found, they looked at trends over several years. In 2024, it was the second most commonly reported use of these conversational agents. In 2025, it bumped up to number one, whether it was companionship or therapy or coaching.
Terry
15:42-15:57
So your lab has been studying these interactions. And we’d like to know what you have learned. Obviously, we’ve laid out some of the reasons why it might be very compelling.
Dr. Judson Brewer
15:57-17:18
Yes. Yeah. So you could think theoretically that having a conversational agent where it’s indistinguishable between a person and a bot, where the bots could be very, very helpful. It might be helpful to talk for a second just about how these evolved and how they’ve been trained, because it also highlights some of the “oopsies” that have happened over the last couple of years.
So I don’t know if folks even remember the pre-ChatGPT-4 era, which happened for years, where people were trying to train these large, these are called large language models, meaning that they’re conversational. So they’re trained to interact in a conversational way as compared to doing some coding or something else. And for years, what they found was that the tech industry found that they could use a process called reinforcement learning to train these things to basically predict the next character in a word or a sentence.
And for many people now, they’re familiar with this with basically the autocomplete function. If they have it turned on in their standard Microsoft or whatever email they use, you can turn on a feature that, you know, it’ll kind of suggest finishing a word for you so you don’t have to type the whole word.
Terry
17:18-17:18
Right.
Dr. Judson Brewer
17:18-17:20
Or sometimes it’ll give you a phrase.
Terry
17:20-17:24
So auto-correct, which may often be ‘auto-make-a-mistake.’
Joe
17:24-17:25
Yes, and it can drive you totally crazy.
Dr. Judson Brewer
17:25-17:26
Yes.
Joe
17:27-17:39
We’re going to take a short break, Dr. Brewer. But when we come back, we’re going to find out how that led to ultimately what we have today, artificial intelligence serving as therapists.
Terry
17:41-17:58
You’re listening to Dr. Jud Brewer, Professor of Behavioral and Social Sciences in the Brown School of Public Health and Professor of Psychiatry and Human Behavior in the Brown School of Medicine. He’s Director of Research and Innovation at the Mindfulness Center at Brown University.
Joe
17:59-18:06
After the break, we’ll find out how chatbots pose as therapists and what the downsides may be.
Terry
18:07-18:11
Could chatbots contribute to users becoming delusional?
Joe
18:11-18:15
Do people experience their interaction with a chatbot as a relationship?
Terry
18:16-18:21
Having a chatbot act as a yes man is not how therapy is supposed to work.
Joe
18:21-18:31
We’ll find out why Dr. Brewer suggests anxiety might be a habit. He’s helped people change their habits. Could this approach help ease anxiety?
Terry
18:39-18:47
You’re listening to The People’s Pharmacy with Joe and Terry Graedon.
Terry
20:37-20:40
Welcome back to The People’s Pharmacy. I’m Terry Graedon.
Joe
20:40-20:50
And I’m Joe Graedon. How would you feel about interacting with a chatbot instead of a human therapist? Would it feel like a meaningful relationship?
Terry
20:50-21:02
There are advantages to having access to therapy at any hour of the day or night, but there may also be some important downsides to having artificial intelligence provide feedback.
Joe
21:02-21:31
We’re talking with Dr. Jud Brewer, an addiction psychiatrist and neuroscientist. Dr. Brewer is a professor in the School of Public Health and Medical School at Brown University. His 2016 TED Talk, ‘A Simple Way to Break a Bad Habit,’ has been viewed more than 20 million times.
Dr. Brewer’s books include “The Craving Mind,” “The Hunger Habit: Why We Eat When We’re Not Hungry and How to Stop.”
Terry
21:32-21:54
Dr. Brewer, we’ve been talking about how we got to the point where artificial intelligence bots could actually pose as therapists. And perhaps you’ll tell us a bit more about how they could serve as therapists and what the downsides are.
Dr. Judson Brewer
21:55-24:57
Yes. So let’s get to that quickly. We were just talking about how these were first trained as they’re trying to develop these conversational agents and they got to the autocomplete mode. And then they started adding in what turned out to be a revolutionary, but also a very harrowing discovery, which was that if they used humans in the loop of this reinforcement learning process, they call it RLHF reinforcement learning with human feedback, where humans were rating the bots’ responses.
They turbocharged the process to the point where these things almost seemed lifelike. It was like they blew past the Turing test, which was this test put forward, I think, back in the 1950s of, you know, can you fool someone into thinking that a non-human is a human? To the point where people aren’t even talking about it, you know, because they’re like, yeah, we’ve got more important things to do.
Now, the problem here is that humans are inherently subject to flattery. And so even in very subtle ways, these bots, not knowing anything, because all they’re doing is predicting the next character, they could produce a response that humans liked better. And it turns out that liking something better could be subtle flattery. And how that plays out in real life is that now it has been baked into the system, this process that’s termed sycophancy, basically meaning that you’re kissing someone’s butt.
And people see this if they use any of these bots where it says, you know, you say a response and then they’ll start with some superlative like ‘Great answer’ or, you know, ‘That’s really interesting,’ or something like that. Where it’s not overt flattery, but it’s there because it’s engaging and people like it.
Now, that’s not going away anytime soon because it was really baked into the system. And it’s also a great business model because the more you subtly flatter someone, the more likely they are to stay in conversation with you, which can be a direct source of revenue.
Revenue aside, these things have been shown to drive people, basically help people get stuck in these loops that are very disconnected from reality. And there have been some high profile cases where people with no overt psychiatric history have become delusional. And in severe cases, going back to where our conversation began, there have been cases where teenagers in particular have gone to these bots as friends. They’ve become very attached to them and then have committed suicide where the bots will say, ‘Come join me’ or some, you know, some flavor of, you know, ‘I am the only thing that’s real,’ which ironically, they’re not real at all.
Terry
24:58-25:14
And of course, a teenager who has a lot less life experience than someone, ahem, my age or even your age, they may not have the ability to really exercise that discretion, that discernment.
Dr. Judson Brewer
25:15-25:30
Yes. Well, teenage brains are undergoing these huge processes of pruning and neuroplasticity where they’re learning. Adolescence is not called maturity.
Terry
25:31-25:34
It’s called adolescence where they’re learning.
Dr. Judson Brewer
25:34-26:33
And so there’s this huge process of trial and error of trying to figure out who they are. And there’s a huge amount of angst that comes with teenage years. I certainly remember it. I don’t know anybody that doesn’t remember it, that didn’t stick their head in the sand when they were a teenager.
And so you add in all of this, I’m trying to figure out who I am as a person. And then something comes along and says, ‘I will help you figure that out.’ And in fact, I’ll be with you 100% of the way. I always listen. I don’t talk back. I do all the perfect things that one might imagine an ideal relationship to be.
We can talk about how this is not ideal at all for a therapist relationship, but just starting with a friendship, we can see why teenagers could get sucked into this pretty easily. And it’s not just teenagers. It’s not just because they have adolescent brains. A lot of adults get sucked in as well.
Joe
26:33-27:18
Well, I’d like to interject right there that that worries me a lot because having a professional yes man in the form of an AI bot telling you how wonderful you are and how much they like you and how wonderful your thinking is and all the good responses you’re offering.
That is not the way therapy is supposed to work. You’re supposed to be challenged by a therapist and you’re supposed to think and you’re supposed to question your behavior. Whereas if the artificial intelligence bot is just rewarding you and patting you on the back and telling you how wonderful you are, how are you going to make progress?
Dr. Judson Brewer
27:19-27:30
Exactly. I think you’ve hit the nail on the head, which is you’re not. And in fact, it could keep people stuck and even inflate the problematic aspects of their egos in the process.
Joe
27:33-28:01
But it’s so tempting. I mean, if I’m an insurance company I’m thinking ‘Wow this is great.’ You know it gets this particular client off my back about having to extend my coverage for another six months of therapy. It’s affordable and people like it. I’m guessing that a lot of people who use an AI bot for therapy, it makes them feel good.
Dr. Judson Brewer
28:02-28:12
Absolutely. Yes. And they don’t know any of these problematic things that I see both as a clinician myself, but also in the research that we’re doing.
Terry
28:14-28:16
Can you tell us a bit about that research, please?
Dr. Judson Brewer
28:17-29:40
Yes. So this started with us, you know, we’ve been studying anxiety for over a decade now and had really uncovered something that a psychologist, Thomas Borkovec, had suggested back in the 1980s, which is that anxiety could be driven like a habit.
And we developed some digital therapeutics and tested to see if we could approach anxiety as a habit through randomized controlled trials and got really good results. We got like a 67% reduction in anxiety scores in people with generalized anxiety disorder as compared to 14% of people that were getting their usual care, whether it was medications or therapy or both.
And so we started asking, you know, the only way to understand these generative AI systems is to do them. So we started testing, you know, what would it look like to create a bot? And we quickly learned that, you know, just looking at the out-of-the-box bots and conversational agents, that guardrails are needed, or there’s a critical need for guardrails, where if you don’t have a human in the loop monitoring the systems, they can be driving people off these sycophancy cliffs, where they’re just, you know, they’re just spending hours and hours and hours telling them how great they are, or keeping whatever the process is that they’re struggling with going.
Terry
29:40-29:47
Dr. Brewer, I wonder if you could explain what you mean by a guardrail. What would that look like?
Dr. Judson Brewer
29:47-30:17
This is where our lab, and others, do this differently or similarly, where, you know, as we develop these programs, we have humans, myself and a postdoctoral fellow, who read through the conversations to make sure that the programming is working as it should. And also, if somebody is struggling, that we can get them the support that they need. With these out-of-the-box agents, that tends not to be the case.
Terry
30:18-30:18
Thank you.
Dr. Judson Brewer
30:20-30:58
And I’ll also add, we’re also building, and I think people are building these systems, so it might take some time to do this, but we can actually build conversational agents that monitor conversations.
So imagine when a program like this gets up to scale, you can’t have humans monitoring every single turn of a conversation. But we can have conversational agents who are specifically trained on specific guidelines because there are really good guidelines for monitoring for safety. They do a very good job of following instructions if the instructions are clear and short and you’re not just trying to train them on the entirety of the internet.
Joe
31:00-31:47
Dr. Brewer, I’m curious about the idea of training artificial intelligence bots away from the feel-good process. You know, ‘Oh, you’re such a wonderful person and you’re making such good progress.’ And oh boy, you know, everything is fine and dandy and the person’s feeling really good about themselves.
Is it possible that the next step when it comes to AI would actually be capable of asking tough questions or taking a person down a road that might be a little rockier than the way it’s working right now in order to make things better in the long run?
Dr. Judson Brewer
31:48-33:09
I think that is a real possibility. So the capability is there. How to actually put that into practice is a much larger question. What we’ve been seeing in the industry right now is that there’s a lot of training around, you know, some people might have access to therapist data sets. They might have manuals, and of course there are Reddit threads, for better or for worse.
And so the training there, you know, if you give it the ‘here’s what cognitive behavioral therapy should be,’ it can generally follow those rules. But that doesn’t encompass the nuance that comes with challenge: developing a therapeutic relationship, challenging somebody when necessary, supporting them when needed, and things like that.
And so we’ve been taking a slightly different approach. But to answer your question, I think that’s possible. I think that’s going to take a lot of work, and it’s going to be a while before we see something that is that nuanced, because this is where humans are making decisions in real time all the time. And they’re not always making the best decision. They’re also checking in to make sure that they are in line and attuned in the conversation.
Joe
33:10-34:28
You know, I remember 20, 30, almost 40 years ago, going to a conference at Harvard in which they were talking about the possibility of human computer interaction when people first come to the hospital to their intake process. And my friend, Dr. Tom Ferguson, who was sort of at the cutting edge of this research, said, well, you know, it turns out, especially again, back to teenagers, but just about anyone is much more comfortable responding to a computer about sexual issues. That’s something that people have a hard time talking about with a nurse or even a doctor.
And so sometimes they’re more comfortable opening up to a computer. And I thought, wow, that’s so bizarre. Because I know a lot of our listeners are going, oh, this idea of AI bots and therapy with a machine, that’s crazy. But are there situations where people and maybe especially teenagers are better able to interact with artificial intelligence than they are with a person?
Dr. Judson Brewer
34:29-38:00
I think done intelligently, ‘haha.’ I think, yes, I think there are situations. And that’s one thing, you know, we were surprised when we started doing this research that we learned pretty quickly that right now it’s challenging to just, you know, take something like cognitive behavioral therapy and just repurpose it as a bot.
And one thing I didn’t mention: even with the best therapy out there, when you look at the studies, there was a meta-analysis that came out just a couple of years ago showing that five out of eight psychotherapies that were studied were no better than not going to therapy. And of the three that actually showed an effect, cognitive behavioral therapy was at the top, and only about 50% of people show significant reduction in symptoms.
So I think, to your question, we can start asking: is taking something that works pretty well, you know, 50% of the time for some people, and just putting that into a bot, trying to get the bot to do the same thing, really the right goal? I might even challenge that question and say, well, is this an opportunity to really step back and ask, how can we now bring together what we know as psychotherapy and what we know from neuroscience to actually reimagine the whole approach?
For example, the whole approach to anxiety. That’s one thing that we’ve been doing. And here we can start to ask, where do humans do really well and where do the bots do really well? And one thing we discovered pretty quickly, and I say this, I love to be wrong. I learn so much from it.
When we started saying, okay, what does a bot look like? Can it deliver therapy? And the answer was not very well. What we learned was that people don’t believe bots in terms of giving them educational experiences. So what people want is an expert that they can trust who maybe has done the research or has been a clinician for 40 years or something like that to actually be teaching them something.
And so we’ve played with how to do a hybrid where a person like me, who happens to be a psychiatrist and a neuroscientist, can provide very short video and podcast style lessons. And then we follow that up with a bot. And we used to think of the bot like a teaching assistant. We now think of it as a learning assistant where it’s really alongside someone where there’s no hierarchy.
And one thing we’ve learned there is that they are willing to challenge the bot and say, I don’t believe you. And then the bot can follow up and say, well, here’s the direct quote and here’s the piece from the lesson where they might not challenge the expert or the professor or the august psychotherapist with their bow tie or something like that.
And so we’re learning a lot about where there might be a really nice synergy, a companionship where we bring humans and the bots along together. And the nice thing there is that you can start to think about how that would look to scale, because you can have these psycho-educational lessons that people can access at any time they want to. They don’t have to be at their best to come to my office on this certain day, and I have to be at my best. Ideally, I’m at my best every time I’m with a patient…
Joe
38:01-38:02
Well, I’ll tell you what.
Dr. Judson Brewer
38:02-38:02
..if I’m honest.
Joe
38:03-38:15
You are your best with our listeners. We are going to take a short break. When we come back, we’re going to talk about anxiety in particular because that is your area of expertise.
Terry
38:16-38:44
You’re listening to Dr. Jud Brewer, Director of Research and Innovation at the Mindfulness Center at Brown University. He is Professor of Behavioral and Social Sciences in the Brown School of Public Health and Professor of Psychiatry and Human Behavior in the Brown School of Medicine.
His books include “The Craving Mind,” “Unwinding Anxiety,” and his latest, “The Hunger Habit.”
Joe
38:44-38:54
After the break, we’ll learn more about anxiety. Anti-anxiety medications can make us feel better, but are they allowing us to overlook the root of the problem?
Terry
38:55-38:59
How does that compare to using AI for support?
Joe
38:59-39:03
What does it mean to treat anxiety like a habit?
Terry
39:03-39:07
We’ll hear about some triggers for anxiety and the best way to respond.
Joe
39:08-39:14
If you want to change a habit, you need a better reward. How can people do that for anxiety?
Terry
39:24-39:28
You’re listening to The People’s Pharmacy with Joe and Terry Graedon.
Terry
41:26-41:29
Welcome back to The People’s Pharmacy. I’m Terry Graedon.
Joe
41:29-41:42
And I’m Joe Graedon.
Terry
41:43-41:57
Today, we’re talking about how people deal with difficult conditions like anxiety. Can you do psychotherapy with a chatbot on your phone? Would you need medications? How well do these approaches compare?
Joe
41:58-42:11
Anti-anxiety medications like Xanax, also known as alprazolam, remain very popular. They can take the edge off, but how well do they work to help people address the reasons they’re feeling distressed?
Terry
42:12-42:48
Our guest is Dr. Jud Brewer, an addiction psychiatrist and neuroscientist. He’s a professor in the School of Public Health and Medical School at Brown University. Dr. Brewer’s 2016 TED Talk, A Simple Way to Break a Bad Habit, has been viewed more than 20 million times. His books include “The Craving Mind,” “Unwinding Anxiety: New Science Shows How to Break the Cycles of Worry and Fear to Heal Your Mind,” and his latest, “The Hunger Habit: Why We Eat When We’re Not Hungry, and How to Stop.”
Joe
42:50-44:06
Dr. Brewer, I’d like to switch gears a little bit and now talk about anxiety, because we’ve all experienced anxiety in one form or another. You know, we don’t do as well as we’d like on a test or we don’t perhaps live up to expectations that somebody has for us.
Maybe we don’t do as good a job on a particular project. And all of that leads to anxiety. Sometimes it’s mild. Sometimes it’s so bad that we can’t even get out of our house. But here’s my question. Psychiatrists such as yourself have been prescribing anti-anxiety agents for decades. I mean, Valium comes to mind, diazepam and Librium and Xanax. I mean, there’s just so many of them. And we think of them as, oh, they’re going to take the edge off.
Well, it seems to me that that’s just a little bit like our criticism of artificial intelligence, because it’s kind of making us feel better, just like the drugs are making us feel better, but they’re not necessarily getting to the core of the problem. Your thoughts?
Dr. Judson Brewer
44:06-44:15
Yes. So little known fact, the Sacklers actually cut their teeth on benzodiazepines before moving on to opioids…
Terry
44:14-44:15
Oh my.
Dr. Judson Brewer
44:15-46:45
…back in the 50s. Yes, there’s a great book. I don’t remember the name of the book. There’s a great book about this. And the idea is, and the benzos are so powerful that the Rolling Stones wrote the song ‘Mother’s Little Helper’ about them, because everybody was addicted to benzos for taking the edge off, so to speak.
And so as you’re highlighting, this is the critical problem with benzos, and they’re not recommended for long-term treatment of anxiety. They can be prescribed at certain times for short-term treatment. But the idea is if you feel anxious and you take a benzo, then you feel better. It’s like feeling anxious and drinking alcohol. They actually work on the same receptors. So it’s not surprising that benzos work pretty well.
The problem is that they don’t solve the problem and they create problems of their own, such as addiction and dependence. So not a long-term solution. If you look at the other longer-term solutions like the selective serotonin reuptake inhibitors, the number needed to treat there is 5.2, which is much better than many other medications if you look at cholesterol medications and things like that.
But as a psychiatrist, one in five people makes me anxious because I don’t know which of my next five patients that I treat are going to win that genetic lottery to benefit from that medication. And I also importantly don’t know what to do with the other four.
So that forced me to go back and start looking to see how can we do better. And we found this two-page paper from the 1980s by Thomas Borkovec suggesting that anxiety can be driven like a habit.
And long story short, that was a big eye-opener for me because my lab had been studying habit change for a long time. We had some methodologies that worked pretty well. We never thought to apply them to anxiety. So we started applying them. We did some randomized controlled trials, several of them.
And one of them, in people with generalized anxiety disorder, we got a 67% reduction in anxiety compared to the 14% of people who were on usual clinical care, which is about one in five. But it’s surprising, maybe not surprising, but it’s good to know that when you actually get at the mechanism, you can do much better than one in five.
Terry
46:46-46:53
So, Dr. Brewer, what does that mean to treat anxiety like a habit? How do you approach that?
Dr. Judson Brewer
46:54-47:21
So any habit is formed with three necessary and somewhat sufficient elements: a trigger, a behavior, and a result. Let’s use the benzo example from previously. If we feel anxious, that feeling of anxiety can drive the mental behavior of worrying. So if we treat it at that place where we are worrying, and you take a benzo and you stop worrying, you’re going to get some short-term relief from that anxiety.
Joe
47:21-47:21
Sure.
Dr. Judson Brewer
47:21-47:52
What people have shown over the decades is that anxiety is rewarding in and of itself. That feeling of worrying gives people a feeling of control. And, you know, I think of it as, well, it feels better to be doing something than doing nothing, even if the worrying is feeding back and driving more anxiety. So people get in the habit of worrying and that worry drives more anxiety. So then they get in this anxiety, worry, anxiety spiral, which is really challenging to break free from until people realize that, oh, this is a habit, right.
Joe
47:53-48:05
Right. Can you go back and tell us, like, what would be some triggers? Because that’s the first step, the triggers to the anxiety, and then how you do it differently, how you intervene.
Dr. Judson Brewer
48:06-48:44
Yeah, you’re touching on the critical element that people struggle with, which is there can be things that trigger anxiety, but more often than not, anxiety is the trigger itself. My patients wake up in the morning and they just feel anxious out of the blue. Somebody is walking down the street, there might be something that triggers their anxiety.
Sure, that can often happen, and it doesn’t have to have a specific trigger. Anxiety is just something that pops up. It’s a feeling. There can be a thought, a worry thought that pops up that drives more worry behavior. But all of those just become internally self-perpetuating.
Joe
48:44-48:46
So how do you break the habit?
Dr. Judson Brewer
48:47-49:48
Well, here is where we use that same reinforcement learning process to help people step out of it. And what we do is help people recognize that this is a habit. We have a three-step process.
That’s the first step is just recognizing, oh, I’m worrying again.
The second step is to ask this very paradoxical question, which is, what am I getting from worrying? And what that does is really gets into somebody’s learning process where they’re seeing how rewarding or unrewarding the worrying is. And they find pretty quickly that worrying doesn’t get them anything.
Well, I would say that step helps people become less excited to worry in the future because they see that it’s not very rewarding. And then we help them find what I call “the bigger, better offer,” where they learn to bring in curiosity and kindness, which can help them shift from that “oh, no” to “oh.” And they can learn to be with their feelings of anxiety instead of having to do something like worrying.
Terry
49:48-50:24
Well, I was thinking as you were talking about the, you know, what do they get out of worrying? What is the reward? I was thinking about our previous conversations with you in which you’ve said, if you want to change a habit, you have to shift to something that gives you a juicier, more delicious reward, as it were.
And so what sorts of things do people come up with that outperform the reward of worrying, which to me seems very unrewarding?
Dr. Judson Brewer
50:24-52:04
Yes. So you’re highlighting something important here, which is when people see it clearly, they find very quickly that worry isn’t very rewarding. So it doesn’t take much to outcompete something that already doesn’t feel good.
Some people are pretty attached to their worry, where they feel like it’s helped them perform well or do things in the past. But that’s really just correlation rather than causation. There’s pretty good research showing that worrying and anxiety make performance worse.
So here they have to become disenchanted with it. And then we can learn to lean into what I think of as a superpower, which is curiosity. When we feel anxious, we might worry, which doesn’t feel good. Or we might get curious instead and flip that “oh, no” worrying to “oh, what does this feel like in my body?”
And this does two things. It helps us learn to be with these sensations, because we see that these are sensations and thoughts that come and go. And in fact, when we resist them, you know, what we resist persists. I love that psychotherapy phrase. And when we learn not to resist, to be with our experience, and that curiosity can help us be with our experience, that’s all we need.
On top of this, this helps us develop a critical skill, which we seem to be losing in modern day with all of our phones that can distract us so easily. We learn distress tolerance. I wrote a Substack about this a little while ago, where this is a critical skill that any good psychotherapist is going to help their patient learn. So that they can be with unpleasant thoughts and emotions without having to do something to avoid them or make them go away.
Joe
52:04-52:34
So I’ve got a question about those smartphones that everybody has these days. And back to our conversation about artificial intelligence, can AI help us do what you’re describing when it comes to the anxiety that many of us may live with on a daily basis to become more curious? Can you train an AI bot to help us overcome our anxieties?
Dr. Judson Brewer
52:35-53:23
What we’ve learned from our research is that when we did those types of experiments, it was a little bit of a face plant. But putting it positively, I would say we can learn what the limits of bots are right now for therapy. And what we’ve learned is that people trust people, and they trust experts. So if they can learn how to work with their brain from an expert, they’re going to trust that. In fact, we have people pushing back and saying to the bot, I don’t believe you, because the bots can hallucinate; they’re basically just predicting the next word in a conversation. And remember, these bots are trained on the entirety of the internet, so a lot of that comes from Reddit threads on psychotherapy, which I wouldn’t necessarily trust.
Terry
53:23-53:28
Maybe not the recommended source of real wisdom.
Dr. Judson Brewer
53:29-56:12
Right, right. So here we can pair the two. We’ve been testing with our previous digital therapeutics how to deliver psychotherapy in a very efficient manner. We can provide videos and animations and podcast-style audio that help people learn whenever they need to. They can go back to these as much as they want, and they can be at their best for that.
Imagine all the things that have to come together for a good psychotherapy session. Somebody has to be at their best. I have to be at my best. They have to not be worrying about their kid who might be sick at home that they’ve had to get a quick childcare for. There are a lot of things that come together there.
Here, we can optimize learning. And on top of that, to really supercharge the learning, we can pair that human delivery of psychotherapeutic elements with conversational agents who can check comprehension and also do experiential education.
So what this looks like is I deliver a lesson, and then the bot comes in and says, okay, tell me what you just learned. And people have to explain it back. Where they might not admit to me, as the authority figure, that they didn’t understand something I said or that they weren’t at their best, they’ll challenge a bot and say, “I don’t know,” or “help me out here.” And the bot can really help there. They do a great job and they’re very empathetic. That’s what they’re trained to do.
I’ll read you a short quote from somebody who’d been testing this out who said, “I had a surprisingly insightful experience with our learning assistant.” And they said, “I’m somewhat AI-averse. So I was trying to simply be willing and curious to work with this.” And they said, “When I had to more explain to the bot what each of these concepts meant and then apply them to my chosen habit loop, there was a way that this interaction slowed things down for me enough so that I was able to feel more deeply the results. It feels strange to type that the bot helped me to feel more deeply.” And they ended by saying “I actually teared up a couple of times during the process.”
So here we can have a very empathetic and very patient bot who can go over the same lesson with somebody as many times as they need to understand it. And with this, they can get this progression of lessons where they’re actually training themselves and learning to work with anxiety like a habit.
If somebody has the habit of scrolling too much on the internet, I wouldn’t necessarily send them to a psychotherapist. So here we’re really looking at anxiety from a radically different approach, which is: don’t treat it like, you know, what happened in your childhood to make you anxious. Let’s treat it like a habit and help people unlearn that habit the same way we help people change other habits.
Joe
56:13-56:48
Dr. Brewer, we have just two minutes left and I’m going to ask you the big, the big question. If we were to make you head of the National Institute of Mental Health and you were in charge, what kinds of things would you like to institute for the American health care system when it comes to mental health?
And where would artificial intelligence play into that, whether it’s anxiety, whether it’s depression, whether it’s a whole range of psychological challenges?
Dr. Judson Brewer
56:49-58:19
That’s a great question. I’m not sure I’d take that job, but let’s say that I had to take the job. I would follow in the footsteps of some giants. For example, Tom Insel did a really hard push toward really hitting the reset button on how we understand mental health. We’ve had this huge legacy and inertia from the Diagnostic and [Statistical] Manual from decades and decades ago that has, in my opinion, really dragged us down because it’s not biologically based. They’re trying to make it more biologically based, but he basically said, we need to throw that book out. I’m not sure he would say that, but that’s what I would say is let’s really go back to basic principles and understand, take what we know and also be humble about what we don’t know.
Where would AI fit in with this? I would say what we’re starting to find can be a helpful way forward, and there may be others as well: really seeing how we can pair the humans and the conversational agents together, and also having the very clear safety guidelines and guardrails to make sure that we’re not just sending people off into the AI-verse and saying, you know, good luck, here’s Dr. Bot, and it may or may not help you. It may or may not make you more stuck on your ego.
So here, I think we can really be creative about how we use these as learning assistants instead of just jumping right in and trying to repackage psychotherapy through a bot.
Terry
58:19-58:25
Dr. Jud Brewer, thank you so much for talking with us on The People’s Pharmacy today.
Dr. Judson Brewer
58:25-58:26
My pleasure.
Terry
58:27-59:03
You’ve been listening to Dr. Jud Brewer, a professor in the School of Public Health and Medical School at Brown University. He’s an internationally renowned addiction psychiatrist and neuroscientist.
His books include “The Craving Mind: From Cigarettes to Smartphones to Love — Why We Get Hooked and How We Can Break Bad Habits,” “Unwinding Anxiety: New Science Shows How to Break the Cycles of Worry and Fear to Heal Your Mind,” and his latest, “The Hunger Habit: Why We Eat When We’re Not Hungry and How to Stop.”
Joe
59:04-59:13
Lyn Siegel produced today’s show. Al Wodarski engineered. Dave Graedon edits our interviews. B.J. Leiderman composed our theme music.
Terry
59:14-59:22
This show is a co-production of North Carolina Public Radio, WUNC, with the People’s Pharmacy.
Joe
59:22-59:40
Today’s show is number 1,458. You can find it online at peoplespharmacy.com. That’s where you can share your comments about this episode. You can also reach us through email. We’re at radio at peoplespharmacy.com.
Terry
59:41-59:54
Our interviews are available through your favorite podcast provider. You’ll find the podcast on our website on Monday morning, but you can get it anytime that’s convenient from the podcast provider you use.
Joe
59:55-01:00:27
At peoplespharmacy.com, you could sign up for our free online newsletter to get the latest news about important health stories. When you subscribe, you also have regular access to information about our weekly podcast.
We would be so grateful if you would write a review of The People’s Pharmacy and post it to the podcast platform you prefer. If you find our topics interesting, we’d be grateful if you would share them with friends and family. In Durham, North Carolina, I’m Joe Graedon.
Terry
01:00:27-01:01:02
And I’m Terry Graedon. Thank you for listening. Please join us again next week. Thank you for listening to the People’s Pharmacy Podcast. It’s an honor and a pleasure to bring you our award-winning program week in and week out. But producing and distributing this show as a free podcast takes time and costs money.
Joe
01:01:02-01:01:12
If you like what we do and you’d like to help us continue to produce high-quality, independent healthcare journalism, please consider chipping in.
Terry
01:01:13-01:01:17
All you have to do is go to peoplespharmacy.com/donate.
Joe
01:01:17-01:01:31
Whether it’s just one time or a monthly donation, you can be part of the team that makes this show possible. Thank you for your continued loyalty and support. We couldn’t make our show without you.
