
When ChatGPT Saves Your Life

Bethany Crystal and others · New York and nationwide · February 25, 2026

AI chatbots are not doctors. They don't have medical degrees, they can't examine patients, and they can make mistakes. But across the country — including right here in New York — people are finding that AI can catch things that human doctors miss, slash predatory medical bills, and fight insurance denials. These are real, verified stories reported by NPR, NBC's Today, Tom's Hardware, Becker's Hospital Review, and NBC News.

Bethany Crystal: A New Yorker's Life-Saving ER Trip

Bethany Crystal, a New York consultant, woke from a nap to find red spots covering her legs. Unsure whether to worry, she described her symptoms to ChatGPT. The chatbot's response was urgent and direct: she needed immediate evaluation for possible bleeding risk.

As reported by NPR, Crystal went to the emergency room based on ChatGPT's insistence. She was diagnosed with immune thrombocytopenic purpura (ITP), a rare autoimmune disorder that causes dangerously low platelet counts and increased bleeding risk. She told NPR she may not have gone to the ER in time without the AI's warning.

Alex: 17 Doctors Couldn't Diagnose Him. ChatGPT Did.

A mother whose son, Alex, had seen 17 doctors over three years for chronic pain without receiving a unifying diagnosis turned to ChatGPT as a last resort. As reported by NBC's Today, she entered his medical information into the chatbot line by line, including notes from his MRI.

ChatGPT suggested tethered cord syndrome — a condition where the spinal cord is abnormally attached, restricting movement. The diagnosis proved correct, something 17 medical professionals had missed.

A $195,000 Hospital Bill Slashed to $33,000

After a family member died from a heart attack, one family faced a $195,000 hospital bill for four hours of intensive care. As reported by Tom's Hardware, they used Claude (Anthropic's AI chatbot) to analyze the billing codes.

Claude identified duplicative charges, improper coding, procedures billed as "inpatient only" despite no hospital admission, and supply costs inflated 500% to 2,300% above Medicare rates. The bill was reduced to $33,000 — an 83% reduction.

Even the FTC Chair Used ChatGPT to Fight a Medical Bill

Lina Khan, then chair of the FTC, shared on the New York Times' Hard Fork podcast that she used ChatGPT to successfully contest a medical bill after reading about others doing the same. As reported by Becker's Hospital Review, the nation's top consumer protection official turned to a chatbot because the medical billing system is too opaque for even sophisticated consumers to navigate alone.

Fighting Insurance Denials with AI

Stephanie Nixdorf's insurance company repeatedly declined to cover a drug needed to treat arthritis caused by her cancer immunotherapy. As NBC News reported, an AI system helped write a 23-page appeal letter that identified the specific clinical evidence and policy language needed to overturn the denial. Coverage was approved in September 2024.

She's not alone. A growing ecosystem of AI tools is helping patients challenge insurance denials — a process so complex and adversarial that most patients simply give up. AI can analyze policy language, identify applicable regulations, and draft appeals that speak the insurance company's own language back to them.

What S7263 Would Mean for Patients

More than 300,000 New Yorkers live in areas with severe shortages of healthcare professionals. For the uninsured, the underinsured, and anyone facing the deliberately opaque medical billing system, AI tools provide critical assistance that no one else is offering.

S7263 would classify medical information as a licensed professional domain — meaning AI chatbots could be prohibited from suggesting someone go to the ER for dangerous symptoms, from identifying billing errors, or from helping draft insurance appeals.

The bill doesn't distinguish between an AI prescribing medication (which no one is advocating for) and an AI explaining what a medical term means or flagging that a hospital bill contains duplicate charges. Under its broad language, all of it becomes a violation.

Stories like these are why we fight. Add your voice.