DarshanTalks Podcast
Welcome to DarshanTalks!
We demystify fraud for legal, regulatory, and compliance essentials in the life sciences and pharmacy industries. Through engaging 15-30-minute interviews with influential change makers, short educational regulatory debriefs, and 60-second audio takeaways, we unveil the strategies behind bringing drugs and devices to market—and keeping them there!
Powered By The Kulkarni Law Firm - Helping regulators see your business the way you do.
We focus on life science issues involving medical affairs, marketing and advertising, and clinical research so that you can learn about the industry, enhance your business and grow your career.
The $100M Mistake: Why AI-Generated Drugs May Not Be Patentable
"The model said so" is not a defense. In the rush to integrate AI into drug discovery, life science companies are sprinting toward a compliance cliff. In this episode, Darshan Kulkarni—pharmacist and FDA regulatory lawyer—strips away the marketing hype to reveal the "messy reality" of AI in regulated environments.
We dive deep into the three pillars of risk that can sink a biotech firm:
- Data Integrity: Why AI scales bad data faster than you can fix it.
- The Ownership Crisis: Why current collaboration agreements aren't ready for probabilistic inventorship.
- The Accountability Gap: Who does the DOJ point to when the algorithm makes a $500M mistake?
If you are an executive or general counsel in the life sciences, this is the briefing you need to hear before your next M&A due diligence or patent filing.
www.kulkarnilawfirm.com
AI in a drug lab sounds futuristic. Precise, almost sterile. That is the marketing version. The compliance reality is much messier. I'm a pharmacist, I'm an FDA regulatory lawyer, and I do compliance. So I look past the hype and straight to where things break down. AI-driven drug discovery is not creating new compliance problems. It's repackaging old ones and making them harder to see.

Let's start with data integrity. AI models are only as good as the data they're trained on. If source data is incomplete, biased, or poorly documented, the model just scales the problem faster. Regulators do not care that an algorithm did the work. What they care about is whether the underlying data can be traced, justified, and reproduced. Think about the privacy problems.

Then comes model drift. AI systems evolve over time. That is the future everyone loves. But in regulated research, silent change is a liability. If your discovery model updates without clear version control, validation checkpoints, and audit trails, you're creating a compliance gap. It will survive and surface at the worst possible moment.

Then there's documentation. Traditional labs document hypotheses, protocols, deviations, and decision making. AI labs often assume that the model knows why. And that assumption collapses the first time regulators ask why a candidate was selected or deprioritized. "The model said so" is not documentation, and it will be a red flag for you.

Now add IP ownership. This is where things get uncomfortable. Who owns AI-generated targets? The sponsor who supplied the data? The AI vendor who built the model? The joint lab that ran the experiments? Most collaboration agreements were not written with probabilistic inventorship in mind. If ownership is unclear, patent filings get delayed, licensing deals get messy, and M&A due diligence turns hostile.
SPEAKER_00: Enjoying our content? We'd love to hear more. Please like, comment, share, and find more.
SPEAKER_01: Finally, there's accountability. When AI drives discovery, responsibility begins to blur. Sponsors point to vendors, vendors point to models, and everyone points to contracts. Regulators at the FDA and enforcement bodies like the DOJ don't accept shared confusion as a defense. Someone is always accountable, even if no one wants to raise their hand.

And that's the core issue. AI accelerates discovery, but it also accelerates enforcement exposure. Faster insights mean faster decisions. Faster decisions mean more mistakes. And unclear IP ownership means those mistakes get expensive, very quickly. The companies that get this right will not be the ones with the flashiest models. They're going to be the ones who treated AI labs like regulated environments, with clear rules from day one. That's the part no one's talking about. And that's exactly where risk lives. Call or email.
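The episode's point about version control and audit trails can be made concrete. Below is a minimal sketch of logging each model update as a tamper-evident audit entry, so a reviewer can later trace which model version (and which training data) sat behind a decision. All names and fields here are illustrative assumptions, not any regulator's or vendor's schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_name, version, training_data_hash, validated_by, notes):
    """Build one audit-trail entry for a model update.

    Field names are illustrative only. The entry is hashed after it is
    assembled so that later edits to the log are detectable.
    """
    entry = {
        "model": model_name,
        "version": version,
        "training_data_sha256": training_data_hash,
        "validated_by": validated_by,
        "notes": notes,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the serialized entry itself; a changed field changes the hash.
    entry["entry_sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

# Hypothetical usage: record a retraining event before the model is deployed.
log = []
log.append(audit_record(
    model_name="target-ranker",
    version="2.1.0",
    training_data_hash=hashlib.sha256(b"training-set-v7").hexdigest(),
    validated_by="qa.reviewer",
    notes="Retrained after assay data refresh",
))
```

The design choice is the point the episode makes: the log exists before anyone asks for it, and each entry ties a model version to its data and its human reviewer, rather than relying on "the model said so."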