By Ahmed Elsarta

Can AI replace your doctor? A look at the future of healthcare with AI researcher Alaa Melek

“Hello, I am Baymax, your personal healthcare companion. How are you today?” says Baymax, waving at you and wishing you the best. He’s also hoping you don’t notice all his warning signs.


In previous techQualia publications we explored some AI use cases in healthcare, particularly in predicting delirium and schizophrenia, and even in helping mitigate postpartum depression. But today we’ll look at a new strand of AI making its way into (and promising to transform) healthcare: generative AI, better known for its use in ChatGPT and similar applications. To fully understand the AI-healthcare landscape, we will also be talking to one of Egypt’s aspiring AI researchers, Eng. Alaa Melek, to discuss the state of healthcare research in MENA and the challenges AI researchers face.


Let’s get started!

Illustration by Nour Ahmed

BiomedGPT: when ChatGPT studies medicine


Imagine that you’re trying to create a real Baymax. You would most likely be using a Large Language Model (LLM) like ChatGPT. But how does that work exactly?


How ChatGPT works


The basic principle behind ChatGPT (and similar models) is next-word prediction: we first let the model analyze lots of text data. Then we give it a new sequence of words and ask it to predict the next word, and the next one, and so on.


So if I tell the model “Where are we…”, it will look at previous instances of this sequence and choose the word with the highest probability of coming next: “going”.
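To make that concrete, here’s a minimal sketch of the same idea in Python, using simple word counts instead of a neural network (the tiny corpus and the helper predict_next are invented for illustration):

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows each short
# context in a tiny corpus, then pick the most frequent follower.
corpus = [
    "where are we going today",
    "where are we going tomorrow",
    "where are we headed",
]

follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 1):
        context = tuple(words[max(0, i - 2):i + 1])  # up to 3 words
        follow_counts[context][words[i + 1]] += 1

def predict_next(text: str) -> str:
    """Return the most likely next word for the trailing context."""
    words = text.lower().split()
    candidates = follow_counts.get(tuple(words[-3:]))
    return candidates.most_common(1)[0][0] if candidates else "?"

print(predict_next("Where are we"))  # -> "going"
```

A real LLM replaces the counting with a neural network trained on billions of documents, but the training objective, predicting the next token, is the same.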

But even though it seems like language models are “talking”, they’re not, in much the same way that traffic radars don’t “see” cars. Both radars and language models only see numbers: they look for patterns in data and then make a prediction based on them. In the case of ChatGPT, it’s trained on incredibly large amounts of text data (think Wikipedia, Reddit, coding forums, etc.), which helps it pick up the patterns of human languages (predominantly English) and answer questions in them.
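You can even see this “numbers only” view for yourself. The short sketch below, assuming you have the open-source Hugging Face transformers library installed, shows how GPT-2’s tokenizer turns a sentence into integers before any “understanding” happens:

```python
from transformers import AutoTokenizer  # pip install transformers

# GPT-2's tokenizer turns text into integer IDs -- this is the only
# form in which a language model ever "sees" a sentence.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("Where are we going?")
print(ids)                    # a short list of integers
print(tokenizer.decode(ids))  # "Where are we going?"
```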


But what if you wanted the model to understand French? Well, you give it lots of French documents, then maybe some in Arabic, Spanish, German, and so on. ChatGPT can thus understand practically any language, as long as it has access to enough data in that language.


For example, while ChatGPT might be able to carry a conversation in many dialects derived from Arabic (Masri, Shami, etc.), it can’t do the same with languages that aren’t as common on the internet, like Latin or Egyptian hieroglyphs.


How ChatGPT studied medicine


Ultimately, if we provide the model with documents in a certain language, it will be able to “understand” that language and answer questions in it. However, AI researchers didn’t stop there. As it turns out, you can treat almost anything as a language.


By now, we can make a pretty convincing Baymax, but it wouldn’t be a very good healthcare companion. ChatGPT (based on GPT-3.5) can only understand one type of input: text (or transcribed speech). Most LLMs were just like that until OpenAI introduced GPT-4, which can handle different kinds of input (text and images). This type of model is referred to as multimodal.
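To illustrate what “multimodal” means mechanically, here is a toy PyTorch sketch. All layer sizes and dimensions are invented for illustration; this is not GPT-4’s or BiomedGPT’s actual architecture, just the general recipe of encoding an image into embedding vectors and processing them in the same sequence as the text tokens:

```python
import torch
import torch.nn as nn

# A toy multimodal model: encode the image into a few "visual
# tokens", embed the text tokens, and feed the concatenated
# sequence through one shared transformer.

EMBED_DIM = 64

image_encoder = nn.Sequential(            # image -> 4 visual tokens
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 4 * EMBED_DIM),
)
text_embedding = nn.Embedding(1000, EMBED_DIM)  # token IDs -> vectors
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=4, batch_first=True),
    num_layers=2,
)

image = torch.randn(1, 3, 32, 32)       # a dummy 32x32 RGB "scan"
text_ids = torch.tensor([[5, 42, 7]])   # dummy token IDs for a question

visual_tokens = image_encoder(image).view(1, 4, EMBED_DIM)
text_tokens = text_embedding(text_ids)  # shape: (1, 3, EMBED_DIM)

# One sequence, two modalities -- attention treats both alike.
sequence = torch.cat([visual_tokens, text_tokens], dim=1)
print(transformer(sequence).shape)      # torch.Size([1, 7, 64])
```

Because both modalities end up as vectors in a single sequence, the same attention machinery that relates words to words can relate words to image regions.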


In May of 2023, a group of AI and medical researchers released a paper outlining “BiomedGPT”: a model similar to GPT-4 with the ability to handle multimodal medical tasks. This includes things like recognizing MRI images, detecting tumors, and so on. But all that is old news. Here’s what’s new.


Visual question answering: no longer limited to language alone, BiomedGPT is being fed vast amounts of data containing captioned medical images by its researchers (mostly based in the US). The end result is that you can ask a medical question that includes an image, and the model will give you an answer.
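If you want to try this kind of interaction yourself, here is a minimal sketch using Hugging Face’s transformers pipeline with a general-purpose VQA model as a stand-in (it is not BiomedGPT, and “scan.png” is a placeholder for any local image):

```python
from transformers import pipeline  # pip install transformers pillow torch

# A general-purpose visual question answering model from the
# Hugging Face hub, used here as a stand-in for BiomedGPT.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

# "scan.png" is a placeholder -- point it at any local image.
answers = vqa(image="scan.png", question="What does the image show?")
print(answers[0]["answer"], answers[0]["score"])
```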


Moreover, it can perform lots of medical tasks really well, such as classifying tumors as benign or malignant. It also reportedly outperforms contemporary methods in about 90% of the tasks tested.


But as I read about this model’s performance, I wondered whether something like it could replace doctors in the future, despite the concern that it is not trustworthy enough.


As of right now, there isn’t an AI model that is 100% accurate at medical tasks, and in medicine accuracy is critical. When doctors make mistakes, they are held accountable. But an AI making a mistake is a whole different story, one that is still being debated.


We’ve dabbled with the idea of AI doctors a few times. In Luke Sweeney’s short story Auxilium, we were invited to envision what it would be like to have an AI therapist, with its infinite patience and capacity for listening, as well as its ability to “understand” in a way that humans can’t. This concept has recently started to make its way into reality with “Socially Assistive Robots” (SARs), which are designed to provide patients with psychological help.


It would make sense for therapists to be replaced by AI first, because when you go to a therapist (as you should, if you need it, and you probably do; we all do…), what mainly takes place is language-based communication, which we’ve established AI can do. But Baymax wasn’t just a SAR; he could do more, and so could BiomedGPT. To me, the future seemed bright.


But since all I had so far was speculation and some academic AI jargon, I decided to reach out to an AI researcher to see what they had to say. What follows are some snippets from my exclusive interview with Eng. Alaa Melek, who graciously provided me with many invaluable insights.

 

Interview with Eng. Alaa Melek, AI researcher at the University of Texas and Intixel

Pictured: Eng. Alaa Melek

Before we start, why don’t you tell us a bit about yourself?


“Sure, I’m an AI researcher specializing in biomedical engineering. I’m currently working towards a PhD at the University of Texas. My master’s was about using AI to predict breast cancer, in collaboration with “Baheya” hospital. I worked in deep learning research at Intixel for a while, and I also worked at Cairo University as a teaching and research assistant.”


Let’s start with a simple question: is AI going to replace doctors anytime soon?


“Well, in my opinion: not at all. I’ve worked closely with lots of AI researchers in this area, and the goal for everyone working within this discipline isn’t to replace doctors. The goal, at the end of the day, is to make doctors’ lives easier.


Recently, there was a study that measured radiologists’ performance while working on breast cancer images. One group of radiologists had the assistance of AI, while the other group didn’t. Even though the system didn’t make a significant difference in terms of accuracy, it reduced the radiologists’ workload by over 44%. That is our goal at the end of the day, because now those radiologists can help more people.”


Even with something like BiomedGPT, seeing that it can treat practically anything as a language?


“Again, it’s extremely unlikely. AI can indeed understand images and perform simple tasks like image recognition, sometimes even better than humans. It has already surpassed human-level performance on ImageNet (a popular image recognition benchmark). With huge computing power and data resources, it has been improving exponentially.


But based on my experience (and some consensus): the decisions are too crucial to leave to AI alone, and there are too many complex cases for it to handle. It’ll end up as a tool used by human doctors.”


Is it because we don’t have enough data? Or we just can’t trust it?


“Both, actually. We don’t currently have enough data to create something we can trust 100%, and even if we did, we’d still have lots of issues, for example ‘automation bias’, where a radiologist is sometimes reluctant to disagree with the decision of a machine.”


What are the big challenges that AI researchers face in the MENA region?


“There’s the issue of obtaining funds, of course. Compared to AI startups, AI research gets less funding. AI is a hot topic nowadays. Everyone wants to get in on it, but it wasn’t always like that.


Obtaining data is also a challenge. We struggle with access to the types of data that are necessary for our research. Generalizable models need vast amounts of high-quality data, which requires a lot of labeling that can only be done by doctors.


Bias is also a huge issue. We can’t expect a model trained on one population to perform the same on another. AI models have shown disparities related to race, age, and socio-economic class, which leads to ‘suboptimal’ results.”


In your opinion, what is the most overlooked issue affecting AI researchers?


“I think we need more research projects involving doctors (sometimes called ‘multidisciplinary projects’). Although they are harder to set up because of the logistics involved, they’re incredibly beneficial.


I think we should have a big get-together with these doctors, because there is a lot to talk about, starting with what AI can and can’t do. Then we can discuss the issues specific to our local populations, and what we can do to help.”


What about research from abroad? Wouldn’t that be useful?


“Well, it’s complicated. Although lots of biomedical research from abroad is useful, we have specific illnesses in specific demographics here that differ from other regions. Small factors such as race make a big difference in clinical settings, so research from abroad would not be completely accurate here.


Going back to doctors: they have the knowledge we need! There is a lot we can do together. Even something as simple as labeling medical images from local hospitals helps us a lot.”


 

What does the future hold?



Armed with a lot of insight about the subject, I could safely say that Baymax isn’t making it to market any time soon. Sad, I know, but don’t worry: the future has lots of good things to offer. Here’s what Eng. Alaa had to say about it.


What is the future of AI in the region?


“I believe we are already heading in the right direction. We need more investment in AI research, both financial investment and investment in human resources and talent. Furthermore, we need the stakeholders (healthcare professionals and patients) to educate themselves about the subject and manage their expectations of what AI can and can’t do. It’s not magic.


Another thing would be encouraging more women to get into the field of AI; initiatives like Digital Egypt Cubs are good for encouraging people to get into STEM. There are still larger-scale issues that affect us, and I hope these get sorted out soon.”


The main theme of “Big Hero 6” is that Baymax was like a second brother to our hero, built by his older brother to take care of him. Baymax wasn’t kind or caring by accident; he was replicating the kindness of the people who designed him. Unfortunately, though, the real world may not be as kind. If we’re not careful, AI is going to replicate and amplify the problems within today’s healthcare system, and we won’t have the option of opting out.


There’s a lot we can do to mitigate the risks by “getting involved”. If you’re a clinical researcher, you should look at what AI researchers are doing, and vice versa. If you aren’t either of those, you can start by educating yourself about the subject and raising awareness of it. Maybe send this article to your doctor and see what they think?


In fiction, the future of AI takes one of two forms: AI either destroys humanity or helps it reach technological heights it couldn’t otherwise. If we want a good future, we need to make sure these systems are designed in a manner that is humane, inclusive, and, most importantly, kind.

