DarshanTalks Podcast
Welcome to DarshanTalks!
We demystify fraud and the legal, regulatory, and compliance essentials of the life sciences and pharmacy industries. Through engaging 15-30-minute interviews with influential change makers, short educational regulatory debriefs, and 60-second audio takeaways, we unveil the strategies behind bringing drugs and devices to market—and keeping them there!
Powered By The Kulkarni Law Firm - Helping regulators see your business the way you do.
We focus on life science issues involving medical affairs, marketing and advertising, and clinical research so that you can learn about the industry, enhance your business and grow your career.
10,000 Specialists: A Futurist’s Vision for AI Healthcare
In this episode, we sit down with world-renowned futurist Bruce McCabe to cut through the hype of large language models like ChatGPT and explore the "stunningly optimistic" reality of Specialist (Narrow) AI.
Bruce shares insights from his global travels to research labs, explaining why the future of medicine isn’t one "God-like" AI, but rather a "hive mind" of tens of thousands of highly trained, testable, and trustworthy specialist systems. From early tumor detection in radiology to the revolutionary protein-folding predictions of AlphaFold, we discuss how AI is incrementally building toward a more precise and efficient healthcare system.
Key Discussion Points:
- Moving Beyond the Hype: Why the current "AI correction" is only about large language models, and why narrow AI is actually underhyped.
- The Power of Specialist Systems: How AI trained on specific datasets (like 100,000 X-rays) achieves higher reliability and lower false-positive rates than general models.
- The "Hive Mind" Concept: A future where thousands of specialist AIs interact to provide comprehensive patient care while maintaining data anonymity.
- Trust and Testing: How we measure the trustworthiness of AI in dermatology and diagnostics through historical clinical data.
- Edge Computing & Privacy: Solving the patient privacy dilemma by using Small Language Models (SLMs) that live on local hospital servers rather than the cloud.
- The Next Frontier: The role of AI in material science, drug construction, and programmable medicines like CAR T-cell therapy.
About Our Guest:
Bruce McCabe is a futurist, speaker, and author who spends half the year visiting scientists and innovators around the world to understand how technology will shape our future. You can find his research and book him for speaking engagements at BruceMcCabe.com.
www.kulkarnilawfirm.com
SPEAKER_01: Hey everyone, welcome to another episode of, well, this is the Kulkarni Law Firm Deep Dive, and I'm here with Bruce McCabe, the one, the only. Bruce, for those people who don't know, is one of those true-blue futurists. And we get to talk to him, learn from him. Why don't we let him introduce himself, and we'll go from there?
SPEAKER_02: Hi, Darshan. Yeah, happy to do that. Well, the one-liner is, yeah, I'm a futurist, and it's a word that can mean different things to different people. For me, the way I do it is to go and visit as many scientists and deep thinkers as I can around the world. So I travel a lot. I work as a team with my wife Jane, and we're on the road four and a half to five months a year, so that gives you an idea. I make my money out of speaker fees at conferences and that sort of thing, delivering talks on what's coming. But the material for that comes from visits to labs and from seeing people who are much smarter than I am, who I just really deeply love. They're all involved in science and engineering and innovation to try and make the world a better place. So that's my methodology.
SPEAKER_01: Everyone in the past has talked about AI being that big inflection point. And now we're seeing a little bit of that AI buzz die down, not because AI is not going to change the world, but because it's not going to change the world alone. Can you tell me about the promise of AI, the promise of blockchain before that, the promise of patient centricity before that, and so many other promises, and how they are incrementally building to our next step, which is robots?
SPEAKER_02: AI in particular is definitely different from all of the others. It's in many ways underhyped. I know there's disappointment now, and in healthcare people might say, look, it's going through a bit of a period of correction. But what they're thinking about is one narrow part of AI, and that's these large language models. We're looking at the errors and the hallucinations and asking how far we can really go in medicine with that kind of thing beyond, for example, dictation, which is extremely powerful for freeing up nurses' hours with their documenting, and life-changing for them. But can we really use that for procedures and diagnostics? It really is just one compartment. Then we start to look at narrow AIs that are being used, far more refined and far better tested, and not based on large language models, in things like radiology and diagnostics and in robotics as well, which we can talk about. To me, AI in healthcare is one of the most stunningly optimistic things I get to talk about. But again, you've got to bring it back to the narrow: just get away from the Claudes and the Geminis and the ChatGPTs and get back to what specialist systems can do really, really well.
SPEAKER_01: There are narrow AIs, there are broader AIs, and everything else in between. Some of these will lead to a future; some of them have use cases right now. What does narrow AI mean?
SPEAKER_02: So a classic one in healthcare is a system that learns from 100,000 X-rays or other images to do early-stage tumor detection. I would call that a specialist narrow AI. It's not something you have a conversation with; it's something that comes back and says, yes, this tiny little discrepancy within the scan is most likely an early-stage tumor. When you start doing those sorts of narrow AIs, the future is going to be tens of thousands of those specialist systems doing narrow jobs extremely well. In robotics or in autonomous vehicles, narrow AIs include the vision systems: how do we do a better job of seeing what's ahead of us? Or it might be audio: how do we do a better job of interpreting what we're listening to? The narrower they are, the more testable and trustworthy they can be made.
SPEAKER_01: The future is AI that only looks at X-rays. Why can't we build a broader AI that also does the narrow stuff? I mean, I'm a human being and I can paint and read an X-ray. So why can't AI? Answer that for me first.
SPEAKER_02: The future I see, if we go back from the hardware to the pure software, the AI, is one where we have 10,000 specialist AIs all interacting and producing something that is much more like the ultimate general-purpose AI. There's nothing to stop us now. We can have AIs ask each other questions, and they do. We can have agents talk to one another. There's nothing to stop us starting to build out a health system where I can make a query of my practitioner AI, which can also look at, perhaps, all of the personal health AIs on people's phones and anonymously look at what people are suffering from. So collect that anonymized data, and suddenly you've got epidemiology on a whole new scale, because you've got 10,000 AIs talking back to a central health agency or a health practice. We could have an AI that talks to best-of-breed oncology AIs and best-of-breed ophthalmology AIs. There's nothing to stop what's behind the scenes from being a hive mind of many specialists.
SPEAKER_00: Enjoying our content? We'd love to hear more. Please like, comment, share, and find more.
SPEAKER_01: You talk about the X-ray AI, the radiology AI, and the fact that they're getting so good. No one talks about the false-positive and false-negative rates; they always talk about just the true-positive rate. And the reason that matters is, if I give X-rays to an AI and say, I want you to identify everyone who has a pulmonary embolism, for argument's sake, and for every single one that comes through you just go pulmonary embolism, pulmonary embolism, PE, PE, PE, well, you always have a hundred percent hit rate, because you said that for every single one that went through. But how many of those didn't actually have a PE?
SPEAKER_02: And that's kind of critical. If you look at, say, Professor Andrew Ng's work at Stanford, all of his work on scanning, they do look at what the false-positive rate is and what the false-negative rate is. And the beautiful thing about narrow AIs is you can test that out. You can do it with dermatology. You can give test datasets where you know what the answers are and see what the false positives and negatives are, because you know the clinical history of the patients whose historical scans you're submitting for the test. So you can actually test trustworthiness, and they perform really well.
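The pitfall Darshan describes, a model that flags every scan as positive and so looks perfect on a single metric, can be made concrete with a few lines of Python. The data here is invented for illustration; it is not from the episode or from any real study:

```python
def confusion_rates(y_true, y_pred):
    """Return (sensitivity, false-positive rate) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)       # real PEs caught
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)   # real PEs missed
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)   # healthy scans flagged
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    sensitivity = tp / (tp + fn)
    false_positive_rate = fp / (fp + tn)
    return sensitivity, false_positive_rate

# Hypothetical historical test set: 1 = pulmonary embolism present, 0 = absent
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]

# Calling "PE" on every scan gives perfect sensitivity...
naive = [1] * len(y_true)
print(confusion_rates(y_true, naive))  # (1.0, 1.0): every healthy scan misflagged

# A specialist model is judged on BOTH rates against known clinical history
specialist = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]
print(confusion_rates(y_true, specialist))  # catches all PEs, flags 1 of 7 healthy
```

This is the testing Bruce describes: because the clinical history of each historical scan is known, both error rates can be measured, and the always-positive model is exposed by its false-positive rate.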
SPEAKER_01: Let's say the AI says you have a high risk of pulmonary embolism, and we as humans go, no, you don't. I'm looking at it; no, you don't. And then you develop that PE, because the AI saw a risk factor that we as humans missed. What, according to you, is a fair window to look at these disease states, as opposed to at the point of testing? Has anyone had that conversation yet?
SPEAKER_02: To me, there's a spectrum of activities in healthcare. There are absolute no-brainers, where we can apply something to a very simple question, get better, faster answers, and therefore change the game in a niche in healthcare. And then there are the more generalized diagnostics, where we're looking at, say, risk factors with a lot of fuzziness around the edges, perhaps genetic and lifestyle inputs into the ultimate factor, which could be heart disease or whatever, where we need to assess carefully how much we use AI, where we would use it, and perhaps it's a much longer pathway to using it. In some cases it just doesn't make sense, because those questions will come up and we'll ask ourselves, well, do we trust the answer from the machine? Are we ready to do that? So there's a spectrum of activities, and there are a lot of things we would rule out of using AI for, for now.
SPEAKER_01: There's this interim space, between "I'm curious" and "I'm willing to pay for a physician," that Dr. Google used to answer for us. We would spend midnight on WebMD typing in symptoms, going, do I have ABC disease state? And then suddenly I have to diagnose myself, and I'm not sure I'm comfortable with it. I don't even know if I'm really reading this properly. AI is now doing that. The question then becomes: is the AI engaged in the practice of medicine? Or are we ready for AI to do that? So, what is your take on that in terms of healthcare uptake?
SPEAKER_02: One would be the systemic application of AI, where there are lots of responsible people involved in that loop. That would be like the dermatology example. We do things in a considered way, we involve the regulator, and we choose our acceptable risk and move the goalposts accordingly. In fact, there's a whole new dimension to that, if we go down that rat hole a little bit. You've got AIs that are now capable of supervising AIs. This is really interesting. The supervisory AI's only job is to assess the risk, or the danger, if the AI it's supervising makes a wrong decision.
SPEAKER_01: When you go give these talks, you're talking to CEOs in the room, there are lawyers in the room, all these smart people, and they're asking, what is the future? And you say, the future is AI, and to that extent we need data to train our systems so that we can meet that future where it needs to be. But now you are both advancing the AI and trying to protect patient care and patient privacy. How do you help them balance that sort of push and pull?
SPEAKER_02: Well, I don't advise them tactically on implementation, ever. But when it comes to patient privacy, you can see what's evolving is the idea of things happening at the edge, just as has happened with, say, cloud-based software systems. You could either certify that cloud-based provider's own security, involve, I don't know, the FDA, whatever it takes to say it's okay that they host our clinical data. Or you can say, we never actually take our data off our servers; it never touches theirs. We have some sort of edge-based implementation, like a client-server implementation. To me, the future of AI is definitely going to include that. So, again, let's get away from large language models, because they are very problematic for sharing that data, and let's look at small language models, which exist clinically and which can be hosted on your own computers and never leave your hospital.
SPEAKER_00: Follow our page on LinkedIn.
SPEAKER_01: What are the top three use cases that they should be aware of, at a million-foot level? Here's where we're going, ready or not; this is what you need to be prepared for. What would you advise them?
SPEAKER_02: So AlphaFold was a way of training a very narrow AI, but an exceptionally powerful one, to predict, based on the large samples of data it was given for training, the way a protein molecule would fold itself. That then turned into a reversible tool. Certainly in material science there's a lot coming, which is very interesting for construction, for sustainability, for stronger materials. And the same would apply to drugs; I'm sure there are lots of construction problems when it comes to drugs. For example, what catalysts will produce what results in the chemical chain to get the drug that I want? That's an interesting problem; I don't know how far we've got with AI to help that. And predicting epigenetic effects. If we get into programmable medicines that change our immune system, for example, the CAR T-cell therapy behind the Emily Whitehead story, which is how we met, that's basically reprogramming some of our immune cells to do a better job of attacking cancer cells.
SPEAKER_01: For those people who want to reach out to you, Bruce, how do they do that? Where can they reach you?
SPEAKER_02: Oh, the easiest way, my website is my name. So it's BruceMcCabe.com, B-R-U-C-E-M-C-C-A-B-E dot com. And that's just the easiest way to find me anywhere in the world.
SPEAKER_01: Thank you again, Bruce, for coming on.
SPEAKER_02: My pleasure. Thanks, Darshan.
SPEAKER_01: Call, click, or email.