DarshanTalks Podcast

Is AI Replacing Your Doctor?

Darshan Kulkarni


In this episode of DarshanTalks, host and attorney-pharmacist Darshan Kulkarni explores the unsettling transition from "Googling your symptoms" to "chatting with your symptoms." It's 10:47 PM, you have chest tightness, and a chatbot is your only companion. But is the algorithm reassuring you, or is it gaslighting you about a medical emergency?

We go beyond the hype to examine the February 9, 2026, Nature Medicine study (as reported in the New York Times) which reveals a staggering gap between AI’s medical exam scores and its real-world ability to triage human beings. Darshan breaks down the "Black Box" of emergency room triage, the evolution of FDA Software as a Medical Device (SaMD) regulations, and why your zip code might determine if your data is protected under new 2026 state privacy laws in Washington, California, and Maryland.

Key topics include:

  • The "Midnight Triage" Trap: Why AI struggles with the nuance of human shame, fear, and "atypical presentations" in the ER.
  • FDA & the "Glass Box": Navigating the new 2026 guidance on Clinical Decision Support (CDS) software and the legal line between "wellness devices" and "medical devices."
  • Liability & Malpractice: Why an algorithm can’t carry insurance—and what that means for the doctors who follow (or ignore) AI alerts.
  • The Data Privacy Patchwork: How HIPAA fails you once you leave the hospital portal and enter the world of consumer AI chatbots.

Healthcare is a human endeavor, and accountability requires a human name. Learn how to use AI as a starting point without letting it be your final conclusion.

Support the show

www.kulkarnilawfirm.com

Darshan

It's ten forty-seven at night. Your chest feels tight. It's not crushing. It's not dramatic. It's just tight. You don't want to overreact. You don't want a hospital bill. It's almost the middle of the night. You don't want to be embarrassed. So you open your phone. Instead of typing into Google, instead of looking up WebMD, you type your symptoms into ChatGPT. It says it sounds like anxiety. You stare at the screen, breathe twice. You feel slightly better. You feel reassured. And then you go: what if it's not anxiety? Then it throws out a bunch more options. And now, suddenly, something just feels off.

Now let me make this more uncomfortable. What if, when you go to the hospital, they use an AI triage system? What if it labels you low risk? What if you never knew that an algorithm influenced that decision? Who do you trust?

Today, we're asking a series of questions that patients are already asking quietly. Can AI diagnose you better than your doctor? We're going to discuss symptom checkers, AI triage tools in emergency rooms, where artificial intelligence is genuinely impressive, and where it absolutely fails. And something most people don't think about, but I do, given my background: what the government is doing to protect you. We will see how the FDA regulates this, what privacy laws protect, and how state medical laws complicate everything. So if you like healthcare explained like you're allowed to understand it, subscribe now to DarshanTalks. I'm a pharmacist, I'm a lawyer, and I do this because I want to make the system understandable without making you feel small.

Let's start with the obvious. AI is not coming. It's already here. It's already inside healthcare. Radiology scans are being analyzed by AI. Stroke detection systems are flagging brain bleeds. Sepsis prediction tools are monitoring ICU patients. Electronic health records are generating automated risk scores. You may have already been diagnosed with the help of AI, and your refills may have already been filled by AI, especially if you're on auto-refill. You probably didn't even know. And they're fixing that as we speak.

Now, this shouldn't scare you, but it should make you curious. So let's start from the beginning and take it step by step. First of all, in artificial intelligence, there's no intelligence. AI like ChatGPT does not think the way you think it thinks. It predicts patterns. It processes massive data sets. It identifies statistical likelihoods. Think of your autocomplete, just made much, much, much more powerful (there's a toy version of this after this segment). Medicine is only partially pattern recognition. It's judgment under uncertainty. I've practiced pharmacy, and I've seen physicians make calls that weren't based on any neat algorithm. They were based on experience, that gut feeling, the one that goes: yeah, you know what, the evidence points one way, but I've seen four cases just like this that turned out to be something else, so purely on intuition, I'm going with the other option. AI is excellent at patterns, but medicine can be about the exception. And that will matter if you are the patient.

Now let's be clear. AI is obviously very, very good at certain things. As I mentioned before, it can help detect early breast cancer in mammograms, and it can do it with very impressive sensitivity. It can identify diabetic retinopathy from retinal images. It can process thousands of drug interactions instantly. And it doesn't get tired, it doesn't rush, and it doesn't forget rare diseases just because it only saw them once.
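To make that autocomplete analogy concrete, here is a deliberately tiny sketch in Python. The corpus, names, and counting scheme are my own illustration, nothing like ChatGPT's actual models, but the principle is the same: the program "answers" by emitting whichever word most often followed the previous one in its training text, with no judgment anywhere.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": learn which word most often follows which word,
# then always suggest the statistically most likely continuation.
corpus = (
    "chest tightness is probably anxiety . "
    "chest tightness is probably anxiety . "
    "chest tightness is probably serious ."
).split()

# Count next-word frequencies for every adjacent pair in the corpus.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent continuation: counting, not reasoning."""
    return following[word].most_common(1)[0][0]

print(predict("probably"))  # -> "anxiety": the common pattern wins,
                            # whether or not it applies to you
```

Scale that counting up by billions of parameters and you get something that sounds fluent and confident, which is exactly why its reassurance at midnight can feel more authoritative than it is.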
So in these narrow diagnostic tasks, the AI can match and maybe even exceed human specialists. To pretend that this is not real, that it's not happening, is idiotic, and it's not credible. But strength in narrow tasks does not equal complete clinical judgment. The fact is that you will get over-aggressive flags. If you say everyone has breast cancer, you have a 100% detection rate, but you're wrong in most cases (there are worked numbers on this after this segment). If you say there's a drug interaction in every single situation, you have to ask: does that interaction actually matter? The number of times I've seen these false flags, that's concerning.

That's why the FDA is looking at regulations. When an AI tool diagnoses, treats, or guides medical decisions, it may be seen as Software as a Medical Device (SaMD) under FDA law. Yes, an algorithm can legally be a medical device. I've spent decades advising companies on this, but not everyone does this right. Some AI tools are FDA-cleared. Some are carefully structured to avoid that classification. Some are software as a medical device; others sit in gray zones as clinical decision support. And that's the key part: support. They aren't making the decision, they're supporting the decision. And then some others are simply wellness devices, which aren't subject to the same types of claims and scrutiny. Each needs its own justification, its own logic, its own support, and it often turns on the claims being made by the company and the software.

And then there's the problem of adaptive AI. If a system keeps learning over time, how do you regulate something that changes? The FDA has frameworks in development. They're thoughtful, they're evolving. The FDA is working on it, trying to protect patients, but they're stuck: they're regulating technology that exists today but will likely be surpassed in a year or two. As Wayne Gretzky put it, you need to be looking at where the puck will be. The FDA is in no better position than any of us to evaluate where we will be in two years.

So let's talk about what you're actually using. ChatGPT, online symptom checkers, AI triage: they all feel powerful, they're all available at midnight, and they don't judge you. They include disclaimers saying they aren't providing medical advice. Generally speaking, that's not good enough, and we've seen lawsuits on this, especially involving OpenAI. Because once something provides a diagnosis, treatment, or cure, it often triggers regulatory obligations and liability exposure. So the language is crafted carefully, and the responsibility shifts back to you. And there's an article in the New York Times from February 9, 2026, covering a study showing that health advice from AI chatbots is frequently wrong. So you, as a patient, can still get hurt.

So let me tell you something else from my own experience. As a pharmacist, I've seen drug interaction alerts pop up constantly; I talked about this a little earlier. The first time you look at one, you go: that's unnecessary, someone was just covering their ass. The second time, same thing. Third, fourth, fifth, twentieth, fiftieth: the same unnecessary cover-your-ass alert. So you just keep clicking "accept the risk." And then the 51st one is important. Whoops, you almost missed that one. You call the doctor, you let them know. It turns out they were already aware, and life moves on. You keep going. The 200th one: they didn't know, and they thank you for helping the patient. In short, if you blindly follow every alert, you overwhelm clinicians. You overwhelm me. If you ignore all of them, you miss something important. And that's what happens: AI introduces the same dynamic at scale. It creates signals, but humans have to decide which signals matter. And that decision carries responsibility. An algorithm does not carry malpractice insurance. A physician does, and that changes behavior.
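A quick set of worked numbers for the "flag everyone" point above, with made-up values purely for illustration: suppose 10 out of every 1,000 people screened actually have the disease. A tool that flags everyone catches all 10 true cases, so its sensitivity looks perfect, yet 990 of its 1,000 flags are false:

$$
\text{Sensitivity} = \frac{TP}{TP + FN} = \frac{10}{10 + 0} = 100\%,
\qquad
\text{PPV} = \frac{TP}{TP + FP} = \frac{10}{10 + 990} = 1\%.
$$

A perfect score on one metric can coexist with being wrong 99% of the time on another, and that gap is exactly what produces the alert fatigue described above.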
So let's talk about something else patients must consider: their data. You might say: I'm on Google, I know I'm being tracked. ChatGPT, I know I'm being tracked. I don't care. But when I type symptoms into an AI tool, I don't actually know where that information goes. If you use a hospital system, HIPAA and a slew of state privacy laws might apply. But if you type that same information into Google or ChatGPT, and I've done it, those same laws may not apply. HIPAA protects health information handled by covered entities and their business associates, but many consumer-facing AI tools operate outside that framework. And that's when state laws kick in. Some states, like California, Washington, and Maryland, treat certain health-related data, including reproductive information, weight loss, and precise geolocation, as sensitive health data even outside traditional healthcare settings. So the protections around your AI interaction may depend on your zip code. That's scary to a lot of people, and it should be.

Now imagine this exact same thing happening inside an emergency room. An AI-driven triage system assigns risk scores based on vital signs and data inputs, and those scores influence how quickly you'll be seen. If something goes wrong, accountability becomes complicated. Is it the physician, the hospital, the software developer? The law is still catching up. Companies are designing these systems to reduce their own liability risk. That's natural. It's not malicious. It can even be helpful. But incentives matter, and you as the patient are subject to this incentive structure.

Which takes us to the risks. AI struggles with nuance. It does not understand shame. It does not understand fear. It does not understand that you minimize symptoms because you are embarrassed. And it does not tell you that the information you just gave ChatGPT may have to be disclosed as part of a lawsuit. Then there's the data these models are trained on: if a certain population is underrepresented, error rates for that population might increase. If you weight certain data extra to force representation, you can get absurd results, like the Black Nazi images we saw from Gemini in the past. We're seeing disparities in dermatology AI. We've seen concerns in cardiovascular prediction models. AI does not eliminate bias; it can replicate bias, and worst of all, you won't even know it, because it's all inside a black box, even when you interrogate the system.

Then there's model drift. Even if you ask the same AI the same question multiple times in the same chat, you can get vastly different results. It's happened to me, and it's much more concerning when you're dealing with health issues. And that's what worries me the most. It's not the AI, it's the overconfidence. AI systems sound so authoritative. But responsibility follows liability. Doctors carry liability insurance. Hospitals have oversight. In the end, I want someone to blame, someone to ask questions of when things go wrong. AI just can't be that. For me, AI is great at answering those midnight questions. But when I wake up, I want to talk to a real doctor. Now that you feel like it all makes sense, let's add one more layer.
We've talked about federal laws like HIPAA and the Food, Drug, and Cosmetic Act, which involves the FDA. But medicine is regulated at the state level; we talked about this a little earlier. If AI generates a recommendation, is that practicing medicine? If a pharmacist reviews an AI output and applies it across multiple states, does the AI, or the pharmacist, need licensure in each one? These questions and others are being actively debated. I work in this space, and these answers are evolving.

So what does this mean for you? The truth is, the legal system is building guardrails while the technology is already moving. Is the AI giving you results or interpreting them? Is it for the flu, or cancer, or weight loss? Does the software work on behalf of a doctor, or do you go directly to the website with no expectation that a doctor is involved? Each of these makes a difference. Each of these will shape how the states regulate it.

So what should you do? Use AI as a starting point, not a conclusion. If symptoms worsen, see a doctor. Seek care. Tell your provider that you used AI. And let me be clear, they will roll their eyes, I know I would, but it's better to get your questions answered. The fact is, we all know that's better. We all know you're going to ask those questions. You're allowed to ask AI. You're allowed to ask the doctor. Healthcare is not supposed to be mysterious.

Let's go back to the question we asked. Can AI diagnose you better than your doctor? Narrow tasks? Sometimes, yeah. Full human complexity? Not even close. The future will likely be AI plus clinicians, evolving under FDA oversight, within a patchwork of state laws, layered with privacy concerns. The technology will improve and will likely become quite good, but medicine is still a human endeavor. And accountability is still human. If this episode helped you think differently about healthcare, subscribe to DarshanTalks, for healthcare you're allowed to understand. Share this with someone who googles their symptoms at midnight.