MedPod Today: New CDC Goals, Hospital Hires AI Prompt Engineer, and Med Student Mental Health

— Three MedPage Today reporters offer further insights on these recently covered topics

The following is a transcript of the podcast episode:

Rachael Robertson: Hey everybody. Welcome to MedPod Today, the podcast series where MedPage Today reporters share deeper insight into the week's biggest healthcare stories. I'm your host, Rachael Robertson. Today, we're talking with Michael DePeau-Wilson about the new CDC director and her plans to rebuild trust in the agency, as well as why one hospital decided to hire an AI [artificial intelligence] prompt engineer. Lastly, Sophie Putka shares new research on med students and mental health. Let's get into it.

Recently the new CDC director, Mandy Cohen, sat down at a press event in Atlanta to discuss her first few weeks on the job and her plans for the future of the agency. Cohen talked about how the CDC's reorganization played out under the former director, Rochelle Walensky. Cohen also spoke about her plans to see the agency through major changes. Michael DePeau-Wilson is here to tell us more about that conversation. Michael, you were at the press event in Atlanta with Director Cohen. What did she say about her specific goals for the future of the CDC?

Michael DePeau-Wilson: That's right, yeah, the director spent a lot of time talking about building trust. She referred to this as her primary goal as the new director, and she specifically said she wanted to build trust between the agency and the general public, but also to build trust among employees within the CDC.

So Director Cohen has actually spent much of her first couple of months on the job talking about building trust, so this was not necessarily unexpected to hear from her. But she did emphasize it in her very first address, an all-hands meeting with CDC employees -- she said there were thousands of employees present -- where she stressed that she views trust as being "absolutely foundational to the agency's ability to help Americans and others around the world protect themselves." You know, and along these lines, she also wanted to emphasize that she doesn't necessarily think of trust as a feeling, but actually sees it as an intentional plan.

Robertson: Trust in the CDC isn't particularly high right now. So what is Cohen's plan to build trust back up?

DePeau-Wilson: Right. So as part of that intentional plan, she actually did break down how she thinks they'll be able to rebuild trust, internally and externally. And she said that it comes down to three core principles. The first principle is that she wants to promote a culture within the CDC where everyone who works there -- whether they're in the offices in Atlanta or in outposts across the world -- feels like they're all on one team together.

The second principle she mentioned was that she wants to increase access to data within the agency. She specifically emphasized that she wants critical data to be shared often and very quickly between offices at the CDC. And then the third principle is that the director wants to improve the speed with which the agency is able to respond to crises. So she noted that she believes those first two principles -- being one team and sharing data quickly within the agency -- are going to be critical to achieving that final principle of responding to crises fast.

Robertson: So in a nutshell, Director Cohen wants everybody to work openly and quickly at the CDC. I can see how that might help build trust within the agency itself. But did she explain how that's also going to help build trust between the CDC and the general public?

DePeau-Wilson: Yeah, so Director Cohen actually talked a lot about her experience leading the North Carolina Department of Health and Human Services during the COVID-19 pandemic. And she said that her experience there showed that centering a public health message around building and maintaining trust can be an effective approach. She also mentioned that, at that time, they took steps to measure public trust in the state health department to see how people were viewing the work it was doing. And she said that she was proud to see public trust increasing over that time, and she hopes to continue similar work as director of the CDC. But she does acknowledge that this is going to be a very big job.

Robertson: Speaking of big jobs, you also recently covered a story about a unique hire at Boston Children's Hospital, right?

DePeau-Wilson: That's right.

Robertson: Your second story is all about artificial intelligence, and it's a follow-up on a story that grabbed headlines earlier this year. Boston Children's posted a new job listing back in April for an AI prompt engineer, which made headlines because it was the first time a hospital had a job specifically dedicated to AI. After about 4 months, the hospital finally found someone for the role, and you wrote a profile about the new hire. But first, Michael, can you tell us what exactly an AI prompt engineer is?

DePeau-Wilson: Yeah, so the term "AI prompt engineer" does feel pretty new, especially to people outside the world of AI and machine learning. The short answer is that prompt engineering is the practice of crafting commands for AI programs with the goal of generating a desired response or output. The simplest example would be a person who asks ChatGPT to answer a question about -- anything, really. A more complex version would be a person who uses a combination of text commands, coding, and even specific databases to build an AI tool that provides a very specific type of answer. So it spans a range of different tasks and different jobs.

Now, Dr. Dinesh Rai, who just recently took the job at Boston Children's, had a great analogy for what an AI prompt engineer is. He said that prompt engineering is kind of like driving a car. Anyone can drive a car, just like anyone can use ChatGPT to write some text and get an answer. On a different level, there are the people who know how to drive a manual or stick shift, and beyond that, there are professional F1 drivers. He said the stick shift would be like someone who knows how to use different prompting techniques for an AI program, and he named a couple, like chain-of-thought or zero-shot prompting, which are very much inside tech terms. But then he also said that the next level -- that "F1-driver" level of AI prompt engineering -- would be someone who knows how to build these programs from the ground up and is able to get the absolute best type of response out of them. He equated that to a race car driver understanding every aspect of the mechanics of their car, and how to adjust those to make sure they're winning races at the highest level.
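To make those two terms concrete, here is a minimal sketch of what zero-shot and chain-of-thought prompts can look like in practice. The `complete()` helper is a hypothetical stand-in for any chat-completion API, and the triage-note example is purely illustrative; none of this comes from Rai or Boston Children's.

```python
# A minimal sketch of the two prompting techniques named above, assuming
# a generic chat-style LLM client. The complete() helper is a hypothetical
# placeholder; swap in a real call to whatever provider you use.

def complete(prompt: str) -> str:
    """Stand-in for a chat-completion API call; returns a canned string."""
    return f"[model response to: {prompt[:50]}...]"

# Zero-shot prompting: ask for the answer directly, with no examples
# and no request for intermediate reasoning.
zero_shot = complete(
    "Classify this triage note as URGENT or ROUTINE: "
    "'5-year-old, fever of 104F for three days, lethargic.'"
)

# Chain-of-thought prompting: ask the model to reason step by step
# before answering, which tends to help on multi-step problems.
chain_of_thought = complete(
    "Classify this triage note as URGENT or ROUTINE. First list the "
    "clinically relevant findings one by one, then give your final "
    "answer on its own line.\n"
    "Note: '5-year-old, fever of 104F for three days, lethargic.'"
)

print(zero_shot)
print(chain_of_thought)
```

In the car analogy, both calls are the same "vehicle"; the only difference is how the instructions are phrased, which is exactly the layer a prompt engineer works at.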

Robertson: That's interesting. Can you tell us about Dinesh? Is he at that F1 level?

DePeau-Wilson: So he modestly said that, yes, he is. And I think the fact that he was hired by Boston Children's is further proof of that. He was actually hired not only for his ability to handle the technical aspects of the job, but also because he has a clinical background, and that was something that was very important to Boston Children's for whoever was going to take this role.

So he told me that he first got into coding when he was an undergraduate at Rutgers University in New Jersey, but he really started using AI tools once he got to medical school, where he helped build a few tools to help his classmates out with studying. He enjoyed that so much that he decided to take a fellowship in informatics after his emergency medicine residency in New York. And after his fellowship, he decided that he enjoyed the tech side of things so much that he left clinical practice to work in tech. There he was able to further his skills using natural language processing and generative AI tools, which leads really well into the prompt engineering aspect of the job he's doing now. All of this experience, both the medical training and the technical skills, made him a perfect fit for this role.

Robertson: What about Boston Children's? Why did the leadership there want to make this hire?

DePeau-Wilson: Yeah, so John Brownstein, who's the chief innovation officer at Boston Children's, led the effort to hire Dinesh for this role. For one thing, he said that having an AI prompt engineer is going to help the hospital improve communication with its AI partners and build meaningful relationships with the major tech companies putting out AI products the hospital might use. But he also said he was most interested in adding someone with this expertise to the staff because the hospital is increasingly beginning to use these large language model tools in its daily workflow. So he was thinking that having an AI prompt engineer in house would help translate some of the more technical aspects of that, and help people in clinical practice and other parts of the hospital learn how to use these tools, as he said, "appropriately, effectively, and responsibly."

Robertson: Thank you so much for sharing both about AI and the CDC, Michael. We'll be sure to speak again if another hospital hires a similar person.

DePeau-Wilson: Absolutely. Thanks for the opportunity.

Robertson: And lastly, we're talking about medical students and mental health. We know medical students work long hours and are under enormous stress, and research has suggested that they have very high levels of depression, suicidal ideation, and substance use. At the height of the pandemic, the problem got even worse. But according to a research letter out this week in JAMA Internal Medicine, their insurance plans' mental health coverage may not be cutting it. Sophie Putka is here to tell us more. Sophie, can you tell us what this new research found?

Sophie Putka: Yes, so researchers at Baylor College of Medicine looked at every medical school-issued health insurance plan they could find in the U.S. -- about 87% of schools -- and got the cost breakdowns for 138 of them. The good news is that most of these schools offered free therapy sessions at the school. But if students on their school's insurance plan opted for in-network providers for mental health care instead, they'd be looking at a median annual deductible of $300. And if they went out of network, that deductible would more than double, to $700 a year. Interestingly, schools in the western U.S. had some of the lowest annual deductibles and maximum out-of-pocket costs, and schools in the South had the highest.

Robertson: So students might have to pay more for out-of-network providers if they stay on the school-issued insurance. But can't they just take advantage of those free therapy sessions at school? What's the issue then?

Putka: Well, it's great that schools are offering those sessions. But unfortunately, I spoke to the senior author of the study, J. Wesley Boyd, who told me that old-fashioned ideas about mental health are still pretty prevalent in medical school. He told me that when it comes to mental health, quote, "either there are no problems, or if you think you have a problem, you're emoting too much. It's your fault. You can't suck it up well enough."

Boyd also told me that because of that stigma, his students at least are very concerned about privacy and that what they may disclose in counseling might get back to their school. He'd heard of instances of this, and it could have very real professional repercussions. So students who really need mental health or substance use services may be avoiding that option entirely. And they may also need more than counseling, like medication or inpatient services.

Robertson: Got it. So then where does the insurance cost come into play?

Putka: So in-network providers tend to be local to the medical schools themselves, but many students might only have the time or support to seek out this kind of help when they're back home or on a break from school. That means if they're on the school's plan, paying for out-of-network providers farther away could get pricey. And if it gets serious enough that a student needs inpatient treatment, at most schools they're on the hook for 20% coinsurance, not a flat copay, even with in-network providers.
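To put that coinsurance figure in perspective, here's a quick back-of-the-envelope sketch. The $10,000 bill and $250 copay are hypothetical, illustrative numbers; only the 20% coinsurance rate comes from the study discussed here.

```python
# Hypothetical comparison of 20% coinsurance vs. a flat copay for an
# inpatient stay. The bill amount and copay are illustrative assumptions;
# only the 20% coinsurance rate is from the study discussed above.

inpatient_bill = 10_000      # hypothetical billed amount for a hospital stay
coinsurance_rate = 0.20      # coinsurance reported at most schools
flat_copay = 250             # hypothetical fixed copay, for comparison

coinsurance_cost = inpatient_bill * coinsurance_rate
print(f"20% coinsurance on a ${inpatient_bill:,} stay: ${coinsurance_cost:,.0f}")
print(f"Flat copay for the same stay: ${flat_copay:,}")
# Under coinsurance the student's share scales with the bill, so a serious
# hospitalization can cost many times what a copay would.
```

The point of the arithmetic is simply that a percentage-based charge grows with the size of the bill, which is exactly when students can least afford it.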

That being said, it's hard to compare medical students' insurance coverage with other populations' coverage, like the rest of the U.S. or all U.S. employees, so we're not sure how this stacks up against everyone else. Also, many students who go to medical school come from privileged backgrounds and may opt to pay for other plans or stay on their parents' insurance. And then there are students who are paying their living costs with loans, for whom high out-of-pocket costs on school health insurance might discourage seeking mental health care at all.

Robertson: Thanks so much, Sophie.

Putka: Thanks, Rachael.

Robertson: And that's it for today. If you like what you heard, leave us a review wherever you listen to podcasts, and hit subscribe if you haven't already. See you again soon.

This episode was hosted by me, Rachael Robertson, and produced by Greg Laub. Our guests were MedPage Today reporters Michael DePeau-Wilson and Sophie Putka. Links to their stories are in the show notes. MedPod Today is a production of MedPage Today. For more information about the show, check out medpagetoday.com/podcasts.