DarshanTalks Podcast
Welcome to DarshanTalks!
We demystify fraud and cover the legal, regulatory, and compliance essentials of the life sciences and pharmacy industries. Through engaging 15- to 30-minute interviews with influential change makers, short educational regulatory debriefs, and 60-second audio takeaways, we unveil the strategies behind bringing drugs and devices to market, and keeping them there!
Powered By The Kulkarni Law Firm - Helping regulators see your business the way you do.
We focus on life science issues involving medical affairs, marketing and advertising, and clinical research so that you can learn about the industry, enhance your business and grow your career.
Is your "Clinical Decision Support" tool actually an unregulated medical device?
In January 2026, the FDA sharpened the line between helpful software and regulated medical devices. If your AI sits inside an EHR, producing "black box" recommendations that a clinician can't independently verify in seconds, you aren't just drifting into a regulatory gray area; you're likely standing outside the "safe zone."
In this episode, we break down the high-stakes intersection of FDA transparency, OIG inducement analysis, and the reality of clinical workflows.
In this episode, we cover:
- The 2026 FDA Update: Why "independence" is the new metric for non-device CDS.
- The Transparency Test: If a physician has to call your engineering team to explain a recommendation, you've already lost.
- OIG & The Anti-Kickback Statute: How "nudging" prescribing behavior creates massive financial liability, regardless of what you call your software.
- Automation Bias: How "fast and confident" AI leads to clinician reliance that regulators now view as a red flag.
- The FTC Factor: Why vague disclosures and hidden logic are no longer defensible under consumer protection standards.
Key Takeaway:
Regulators don't care if the tech works; they care if the compliance story holds up. If you cannot prove your recommendations are separated from commercial influence and fully explainable, you are exposed.
Are you ready to defend your AI? Don't wait for an investigator to walk through your door.
Subscribe to the KLF Deep Dive Podcast & Newsletter to navigate these risks before they turn into enforcement problems.
www.kulkarnilawfirm.com
AI inside EHRs feels safe when it's labeled clinical decision support. The FDA's newest guidance makes something uncomfortably clear: that label only protects you if your software behaves exactly the way regulators expect. Most AI systems don't. If you're building, buying, or deploying AI in healthcare, subscribe to the KLF Deep Dive Podcast and newsletter. I work with FDA-regulated clients every single day, and this is where I break down risks before they turn into enforcement problems.

In January 2026, the FDA updated its clinical decision support guidance and drew a sharper line around what qualifies as non-device CDS. To stay outside device regulation, clinicians must be able to independently understand the basis for every recommendation and not rely primarily on the software. That sounds reasonable. But now think about most AI inside EHRs: black-box models, adaptive learning, ranked or prioritized outputs, very little explainability. That's where companies start drifting out of the safe zone. Could a physician realistically explain how your AI reached a recommendation without calling engineering or the vendor?

Now layer in the Anti-Kickback Statute. Once AI recommendations sit inside clinical workflows, the OIG, the Office of Inspector General, suddenly stops caring whether the tool is called CDS or just plain analytics. They care about inputs. If the AI nudges prescribing behavior, and someone benefits financially downstream, inducement analysis starts. Tracking prescribing patterns, engagement, or outcomes only sharpens that focus. And that is the overlap companies get in trouble for: the FDA looks at reliance and transparency; the OIG looks at benefit and intent. AI systems can fail both tests at the same time.

The FDA's guidance explicitly calls out automation bias. When software is fast, confident, or embedded in time-pressured workflows, clinicians rely on it more than intended. And that matters if your AI ranks treatment options and clinicians follow them by default. I've been a clinician, I've been a pharmacist, and I see where that's coming from. Regulators question whether the system is actually supporting judgment or directing it. If your AI changed rankings tomorrow, could you prove that change had nothing to do with commercial intent? Look at where the actual recommendations came from. You have to be very careful with that. Even outside FDA device rules, the FTC already expects transparency when automated systems materially influence decisions. Hidden logic and vague disclosures do not hold up.

When I advise FDA-regulated clients, this is where things break: the tech works, the compliance story does not. AI inside EHRs can and will improve care, but if your system cannot clearly show how recommendations are generated, reviewed, and separated from commercial influence, you are exposed. Subscribe to the KLF Deep Dive Podcast and newsletter. I help FDA-regulated companies navigate exactly these issues before the regulators come knocking. Here's my question for you: would you rather explain your AI system to me now, or wait until an investigator comes through your door?
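For listeners who build these systems, here is one way to picture what "independently understandable" could mean in practice: a minimal sketch of a recommendation record that carries its own basis, so a clinician can check it without calling the vendor. This is purely a discussion aid; the field names (clinical_basis, sources, commercial_relationships) are our assumptions, not terminology from the FDA guidance or any real CDS product.

```python
# Purely illustrative sketch -- field names are assumptions for discussion,
# not terms from the FDA guidance or any actual CDS product.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """One CDS suggestion, carrying the basis a clinician would need
    to verify it independently rather than relying on the software."""
    suggestion: str                 # e.g. "consider switching to therapy A"
    clinical_basis: str             # plain-language reasoning behind it
    sources: list[str] = field(default_factory=list)   # guidelines, labels, literature
    commercial_relationships: list[str] = field(default_factory=list)  # disclosed sponsorships, if any

def independently_reviewable(rec: Recommendation) -> bool:
    # A recommendation a clinician can check on their own needs, at minimum,
    # a stated basis and citable sources -- not just a ranked output.
    return bool(rec.clinical_basis) and len(rec.sources) > 0

rec = Recommendation(
    suggestion="Consider switching to therapy A",
    clinical_basis="Meets guideline criteria; no contraindications on the active medication list.",
    sources=["2025 specialty-society guideline, section 4.2", "FDA-approved label for therapy A"],
    commercial_relationships=[],
)
print(independently_reviewable(rec))  # True: basis and sources are surfaced, not hidden
```

The point of the sketch is the shape, not the code: every recommendation travels with its reasoning, its citations, and any disclosed commercial ties, which is the kind of record that supports both the FDA's transparency test and an OIG inducement review.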