Parminder “Parry” Bhatia was named chief AI officer of GE HealthCare earlier this year after six years at Amazon and AWS working on their machine learning and generative AI programs. He has also focused on natural language processing and now aims to help embed these technologies into patient care and provider workflows as the newly independent company builds out its digital portfolio.
We were able to chat during AdvaMed’s MedTech Conference this week in Anaheim, California.
Conor Hale: You started at GE HealthCare in April. How’s it going so far, and what made you want to make the jump?
Parry Bhatia: Prior to joining six months back, I spent three years leading foundation model and generative AI work at Amazon, on things like Amazon CodeWhisperer and Bedrock, where a lot of these foundation models act similarly to ChatGPT.
But I started on the journey of health AI before that at Amazon, building out one of the first healthcare-specific NLP services on the cloud, Comprehend Medical, and then eventually HealthLake as well.
So we were working on some of the best cutting-edge technologies, but I did feel like we were one step away from the customers. At AWS, we would rely on ISVs or partners like GE HealthCare to build the technologies that would eventually go to doctors and patients.
But at GE HealthCare we get into the devices. You’re a lot closer, so the impact you can have is a lot larger. And the idea was that I’ve worked on foundation models or generative AI in the past, and I’ve worked on healthcare in the past, but I think GE HealthCare provided the best opportunity to actually combine those together and have a real impact on the problems that we work on.
And when I look at what GE HealthCare has done, even in the last 10 years, the amount of AI and machine learning that’s been embedded has been amazing. So those are the factors that motivated me to come to this side of the job.
CH: Is there a sector in GE HealthCare’s portfolio that you found most interesting?
PB: Obviously there are the products that are already out there, like AIR Recon DL, which can reduce MRI scan times by 50%, and Sonic DL, which can cut MR assessments to one-twelfth the time. That's really useful for cardiac imaging, especially for patients with arrhythmia, because you can capture an image in a single heartbeat.
The idea of working on those things obviously excited me, but there's a wide array of projects ahead in MR, CT and even ultrasound, such as hand-held devices where you can now actually embed AI technology.
One of our recent acquisitions, earlier this year, was Caption Health, and just last week we announced its integration with our Venue ultrasound products.
That means we can provide AI-enabled, real-time scan guidance for cardiac assessments, which helps bring technologists to the same level, even if they have less experience. You can provide them with the right guardrails and guidance so they can take diagnostic-quality ultrasound measurements consistently and quickly.
So I think technologies like this are really exciting: You can picture healthcare professionals with less training being able to take an ultrasound scan, and they can be in remote settings or one day even at home. I think that’s going to have a massive impact in the long run.
It comes down to our road map for Caption Health, where we saw this synergy with our ultrasound business as well. We're starting with just hardware integration, because there's a lot of work required to build things in a compliant and reliable way. But now, as the software gets built out and we start to scale up, I think the stage is set to really start accelerating. There are many areas where we want to redefine ultrasound scan guidance.
CH: I know it all went down before you got there, but I’m curious whether you feel things have changed since GE HealthCare’s spinoff as an independent company earlier this year.
PB: At Amazon and AWS, when I was working on the healthcare side of things, I interacted with GE over the last five to 10 years, and the GE that I am seeing in just the last six to nine months is completely different.
There's a lot more focus on what we need to do and how we are going to build out the road map. And when I talk to my colleagues, they say they are seeing a huge change in the company's lean mindset: how we can really focus on a few things, then deliver, iterate and move fast.
So if we look at disease care pathways, we are focusing on three main areas: oncology, cardiology and neurology. In each one of them, we are focusing on one disease at a time. Rather than all tumors, we are focusing just on prostate cancer, along with atrial fibrillation and Alzheimer's, as the three use cases.
The idea is we need to solve the problems from end to end, with our partners and our ecosystem, to really make sure that all those different components elevate the quality of care we provide to patients.
That’s where a lot of AI and machine learning becomes useful. As we map it onto these three areas, we can start to look into problems beyond just diagnosis—and move into screening, into assessment and into therapies and monitoring.
CH: So you’re trying to prove out these use cases first.
PB: Right, so when we talk about neurology, right now we're not trying to solve 10 different diseases at once. Obviously, the products that we have address needs there as well, but the focus is: How do we make the overall workflow for Alzheimer's better?
We have the PET/SPECT machines, we have the amyloid imaging agents, we have a lot of competencies there already. But there are digital capabilities we'll need to build to integrate all these technologies together.
And once we do that, digital and cloud computing become an enabler, because when you have the plug-and-play components ready, it becomes easier to approach another disease. You'll have 70% to 80% of components that can be used as is, and then you can start focusing on the particulars.
CH: Regarding that lean mindset, there has been a lot of discussion at this conference about M&A driving innovation, with companies debating whether to build things themselves or acquire tech from others. Buying can certainly make sense when you're adding new types of devices to a portfolio, but do you feel it's the same in AI development? Or is it a different game?
PB: I would say we take a hybrid approach. We build a lot of the core platform components and capabilities, such as large language models, computer vision and foundation models, from the ground up, and then work in close synergy with our business units like ultrasound, X-ray, CT and MR.
A perfect example of that is the recent 510(k) clearance and CE mark for CT Auto Segmentation, a deep learning algorithm that automatically delineates the organs at risk during radiation therapy.
The other aspect, on buying, is where Caption Health comes in. This year we integrated it into our ultrasound portfolio, but now we are thinking of adding similar capabilities to MR and CT as well.
CH: And taking Caption Health outside of echocardiograms to lung exams as well, right? They had a Gates Foundation grant before they were acquired, and you’ve just received a new one.
PB: Yes, and I think these technologies are going to help reinvent the space with regard to healthcare equity. The recent Bill & Melinda Gates Foundation grant was for $44 million and focused specifically on maternal and fetal respiratory diseases.
So you can think of scaling it up from cardiac to respiratory exams, and from there to 10 other anatomies in the future.