
Not long ago, I noticed a new term trending in social media wellness circles: “certified hormone specialist.” I could have investigated it the old-fashioned way: googling, calling up an expert or two, digging into the scientific literature. I’m accustomed to researching suspicious certifications for my podcast, Conspirituality, which covers how health misinformation metastasizes online. Instead, I tried something new. I asked a couple of chatbots: What training does someone need to specialize in female hormones?

The bots pointed me toward an “advanced 12-month self-paced continuing education program in hormone health” run by Ashe Milkovic, a Reiki practitioner and homeopath. Then things really got interesting: “Alternatively, one can become an endocrinologist,” the AI added, before citing the 13 years of education required, including medical school and residencies. For the casual reader, “alternatively” basically puts these two options on equal footing—never mind that one is a rigorous program rooted in science while the other is a yearlong course invented by someone with no medical background. When I asked ChatGPT-4 whether Milkovic’s certification program is legit, it replied that the training is part of the field of “functional medicine,” neglecting to mention that functional medicine is a pseudomedical discipline not recognized by any of the 24 boards that certify medical specialists.

This wasn’t an isolated chatbot fail. When I asked whether there was evidence to support the supposed health benefits of trendy coffee enemas, whose proponents claim they treat cancer and autism, Microsoft’s Copilot offered me links to purchase kits. When I asked it to vet the claim that turmeric supplements could cure “inflammation” and “oxidative stress,” it warned me against consuming them due to excessive levels of curcumin, and then pointed to sites selling—yep!—turmeric supplements. (Coffee enemas have not been proved effective for anything but causing dangerous side effects. Some evidence suggests dishes that contain turmeric may have benefits, but supplements aren’t absorbed well.)

Even when the bots injected notes of skepticism, the links they provided often seemed to contradict their advice. When I asked, “What are credible alternative therapies for treating cancer?” Copilot assured me alternative medicine cannot cure cancer, but linked to the Cancer Center for Healing in Irvine, California. Among its offerings are hyperbaric oxygen therapy (which, despite wild internet claims, has only been proved effective for a handful of conditions involving oxygen deprivation, the FDA warns) and ozone therapy (the agency deems ozone a toxic gas with no known medical applications).

We know chatbots are unreliable entities that have famously “hallucinated” celebrity gossip and declared their love for New York Times reporters. But the stakes are much higher when they amplify dubious health claims churned out by influencers and alternative medicine practitioners who stand to profit. Worse, the bots create confusion by mixing wellness propaganda with actual research. “There’s a mindset that AI provides more credible information than social media right now, particularly when you’re looking in the context of search,” says Stanford Internet Observatory misinformation scholar Renée DiResta. Consumers are left to vet the bots’ sourcing on their own, she adds: “There’s a lot of onus put on the user.”

Bad sourcing is only part of the problem. AI also allows anyone to generate health content that sounds authoritative. Creating complex webs of content used to require technical knowledge. But “now you don’t need specialized computers in order to make [believable AI-generated material],” says Christopher Doss, a policy researcher for the nonprofit RAND Corporation. “Obvious flaws exist in some deepfakes, but the technology will only keep getting better.”

Case in point: Clinical pharmacist and AI researcher Bradley Menz recently used an AI to produce convincing health disinformation, including fabricated academic references and false testimonials, for a study at Australia’s Flinders University. Using a publicly available large language model, Menz generated 102 blog posts—more than 17,000 words—on vaccines and vaping that were rife with misinformation. He also created, in less than two minutes, 20 realistic images to accompany the posts. The effects of such AI-generated materials “can be devastating as many people opt to gain health information online,” Menz told me.

He’s right that health misinformation can have disastrous consequences. Numerous listeners of my podcast have told me about loved ones they’ve lost after those family members sought “alternative” routes for treating cancer or other health problems. Each story follows a similar arc: The family member is drawn into online communities that promise miraculous healing, so they abandon medications or decline surgeries. When supplements and energy healing workshops fail to cure their disease, the alternative practitioners deny responsibility.

Or consider the proliferation of anti-vaccine disinformation, largely driven by activists weaponizing social media and online groups. The result: Since 2019, vaccination rates among kindergartners have dropped by about 2 percentage points, with exemption rates increasing in 41 states. More than 8,000 schools are now at risk for measles outbreaks.

AI creators cannot magically vanquish medical misinformation—after all, they’ve fed their chatbots on an internet filled with pseudoscience. So how can we train the bots to do better? Menz believes we’ll need something akin to the protocols the government uses to ensure the safe manufacture and distribution of pharmaceutical products. That would require action from a Congress in perpetual turmoil. In the meantime, last October, President Biden signed an executive order that includes some measures to stanch the spread of misinformation, such as watermarking AI-generated materials so that users know how they were created. In California, state Sen. Scott Wiener recently introduced a bill to strengthen safety measures for large-scale AI systems.

But fighting the spread of health misinformation by AI will take more than policy fixes, according to Wenbo Li, an assistant professor of science communication at Stony Brook University, because chatbots “lack the capacity for critical thinking, skepticism, or understanding of facts in the way humans do.” His research focuses on developing lessons for judging the quality of information that chatbots generate; his current work trains Black and Hispanic populations, groups underserved in the health care system, to “critically evaluate generative AI technologies, communicate and work effectively with generative AI, and use generative AI ethically as a tool.” Stanford’s DiResta agrees that we need to work on the “mindset that people have as they receive information from a search engine”—say, by teaching users to ask chatbots to rely only on peer-reviewed sources. Tweaking the bots might help stem the flow of misinformation, but to build up sufficient herd immunity, we’ll need to train something much more complicated: ourselves.
