Episode 45: Using Artificial Intelligence as a Tool: Part 1
Our latest episode of Field Notes dives into the exciting world of artificial intelligence (AI) and machine learning. Dr. Michael Kraus, Associate Chief Medical Officer of Fresenius Kidney Care, talks with experts from our Global Medical Office Dr. Len Usvyat, Head of Clinical Advanced Analytics, and Dr. Luca Neri, Senior Director and Data Science Lead, about what AI actually is, how it is utilized as a tool, and where we’re headed with AI in the field of medicine. What are the benefits and risks of AI? Do people really need to be worried about losing their jobs to AI? Tune into Field Notes to find out.
Dr. Michael Kraus: Welcome, everyone, to this episode of Field Notes. I'm Dr. Michael Kraus, the Associate Chief Medical Officer for Fresenius Kidney Care, and your host for this discussion today. Here we interview the experts, physicians, and caregivers who bring experience, compassion, and insight to the work we do every day.
I'm very excited for today's episode. We're going to be talking about artificial intelligence or A.I., and also machine learning.
We've been hearing a lot about A.I. lately because there's such a rapid development in the industry, but we also see a lot of misrepresentation and frankly, oversimplification of A.I. And because of that, there is a fundamental misunderstanding of what artificial intelligence actually is. And while all of these depictions in the media can be entertaining, A.I. is actually an incredibly nuanced topic that deserves a lot more critical attention and discussion.
Before we get started, a quick disclaimer. Everyone speaking in this episode of Field Notes is a real human being. And no, A.I. didn't write any of this introduction. All kidding aside, we have some great experts to discuss this topic today. And here to help us shed some light are Dr. Len Usvyat, Head of Clinical Advanced Analytics, and Dr. Luca Neri, Senior Director and Data Science Lead, both from the Global Medical Office of Fresenius Medical Care.
Len, Luca, thank you for being here today.
Dr. Len Usvyat: Thanks so much, Mike. It's a pleasure.
Dr. Luca Neri: Thank you very much, Mike.
Dr. Michael Kraus: Len, let's just get a general understanding of what artificial intelligence or A.I. and machine learning are. What do these terms actually mean?
Dr. Len Usvyat: Thanks, Mike. To describe it in probably simpler terms, I would say artificial intelligence is really the ability of a computer system to learn. In principle, it's actually very similar to how we function as human beings. As all of us know, we perceive information. We see things through our senses. We hear things, and we can understand or reflect on things based on the input we're receiving from all around us. Artificial intelligence is doing basically the same thing. It bases its decisions on lots and lots of data provided to whatever computer engine it is utilizing to make these determinations. So again, I think artificial intelligence, in principle, is the ability of a computer system to learn.
There are a number of subcomponents to artificial intelligence: terms like machine learning, for example, and terms like deep learning. All of these are really subcomponents of this umbrella term that exists out there, which is artificial intelligence. And one of the things I'll certainly say is that the reason you're hearing so much about artificial intelligence now is because computer systems have become so much more advanced and so much faster at processing information. In the past, feeding all this input into computer systems would have been much more difficult, but now that has become much easier.
Dr. Michael Kraus: That’s a lot to get to, so stay with me here, Len. A.I. is becoming pretty mainstream, there’s almost a mania around it. I believe there's a fundamental misunderstanding of what A.I. actually is, partly due to media, partly due to marketing and all the hype. We've developed a fear around A.I. too, and people actually think it's some form of superintelligence and like you said, learns like a human but doesn't have the emotions.
But the concern is that it will take over the world and everyone's going to lose their jobs. Even the A.I. pioneer, Geoffrey Hinton, is concerned about the misuse of A.I. and the potential it has to surpass human intelligence. Can you speak to me just a little bit more about what it actually is and what it isn't, maybe how it works? And do people really need to be worried that we're going to lose our jobs and lives to robot overlords?
Dr. Len Usvyat: There's a lot of questions in there. So let me just first start off with, I think this kind of general fear and I think many of us have seen the movie The Terminator, and so I'm sure there's a lot of fear out there that the world will be a little bit like that movie, The Terminator. And I think it's important to be concerned and it's important to be thinking about these topics, but I don't think we need to panic, and I think that's really important.
And I would also say that, especially as it relates to the work that we do, everything we do in the field of artificial intelligence, data science, and machine learning is not meant to substitute for anybody doing the actual work. It's really meant to complement the work that our clinicians, our nurses, our physicians, and many others in the clinics are doing.
To me, artificial intelligence is a very important tool. It is a tool like any other tool presented to us. And the other thing I would add on that topic is that technological advancements have always been feared, right? I mean, I think we all know that when there is any new development out there, initially there's some fear and concern about how it's going to go and what kind of problems it can potentially result in. I'm sure when the first airplane took off, there was a lot of fear about people getting on planes, in a tube at 35,000 feet in the air. But certainly, as time goes on, people realize it becomes an important tool to what we do.
One of the questions was: what are some of the steps of how you actually do artificial intelligence, just so that people understand a little bit more? One thing to keep in mind, again: artificial intelligence is based on historic data. And I think that's one of the absolutely key things to understanding artificial intelligence. It always learns on some precedent, be it purely numeric data, be it images, be it text. It can learn on images, it can learn on text, but it has to learn on something, just like we as humans learn by seeing, by reading, and by encountering certain situations so we know how to deal with them in the future. And the same goes for artificial intelligence.
So, any artificial intelligence starts off with collecting the data, gathering the data, processing the data that we may have collected. And for clinicians listening to this, if you think of the relationship between albumin and hospitalization, most of us would probably think that lower albumin tends to be associated with a higher rate of hospitalization. And so the computer, when the artificial intelligence does the calculation, will say, well, I see the same thing in the data. I see that lower albumin levels are associated with higher hospitalization rates. Artificial intelligence is basically taking that type of data, but with many more variables than just albumin, for example. It is able to decipher these very large data sets, process them, and come up with some sort of a prediction into the future.
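The albumin example above can be sketched in a few lines of code. This is purely a toy illustration with made-up numbers, not a real clinical model or real patient data; it just shows the kind of pattern a learning system would pick up from historic records: lower albumin, higher hospitalization rate.

```python
# Hypothetical records: (serum albumin in g/dL, hospitalized within a year: 1 = yes, 0 = no)
patients = [
    (3.1, 1), (3.3, 1), (3.4, 0), (3.6, 1),
    (3.8, 0), (4.0, 0), (4.1, 0), (4.2, 1),
    (4.3, 0), (4.4, 0),
]

def hospitalization_rate(records):
    """Fraction of patients in `records` who were hospitalized."""
    return sum(hospitalized for _, hospitalized in records) / len(records)

# Split on an (arbitrary, illustrative) albumin threshold of 3.8 g/dL.
low_albumin = [p for p in patients if p[0] < 3.8]
higher_albumin = [p for p in patients if p[0] >= 3.8]

print(f"low-albumin group:    {hospitalization_rate(low_albumin):.2f}")   # 0.75
print(f"higher-albumin group: {hospitalization_rate(higher_albumin):.2f}")  # 0.17
```

A real model would, as Dr. Usvyat says, weigh many more variables than albumin alone, but the principle is the same: it finds associations in the historic data it is given.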
And so again, I think the data processing and the input data are really critical. Then, as you develop these models, what becomes really important is being able to evaluate how the model itself is performing. And this is where a lot of human involvement is needed, to make sure that the predictions and estimates these models come up with actually make reasonable sense. And that takes a lot of work.
And then, of course, one of the key things to all of us, and especially, again, as it relates to kidney disease and the work that we all do, I think it's really important that we pilot the work that we do, these artificial intelligence models. And again, I think a very simple example is predicting hospitalization, for example.
So, how do we actually use this model in real clinical practice? And I think, again, I'll conclude this question by saying I do not believe that I think A.I. is taking over or replacing anybody. I mean, I think these are no different than any other technological advancements. I certainly think there's been a wave of A.I. advancements recently, but I don't think it's some sort of a breakthrough issue where suddenly the world is going to change dramatically just overnight. So, I think again, to me, it's a tool; a tool meant to supplement and help actually clinicians do their work a little bit more efficiently.
Dr. Michael Kraus: Exactly. A.I. is a tool. It doesn't have the emotion, the inspiration, and frankly, the perspiration that humans add to the workforce, but we can use them to help us work safely and smarter. ChatGPT is out there in a big discussion and many of us have played in the field of ChatGPT. Luca, what exactly is ChatGPT?
Dr. Luca Neri: Mike, ChatGPT is a large language model that is essentially based on a rather new technology called generative pre-trained transformers. It is designed to understand natural language and to generate humanlike responses in text-based conversation. The new version can also analyze images, reason about them, and describe them. It can assist with a wide range of tasks: for example, answering questions, providing information, and generating creative content. And if you think about how much language we process in our everyday life and work, you can imagine how deep, profound, and vast the impact of GPT can be on the way we do things. And it does quite well in many, many contexts, of course.
Dr. Michael Kraus: It's a text generating tool that creates writing and can actually help with research. How does it formulate those responses? What does it do? How does it make the paragraph that I ask a question to?
Dr. Luca Neri: The architecture behind generative pre-trained transformers is quite complex, but the concept that inspires it is easy to explain. GPT tries to predict the next word in a string of text that is provided as input. And by using this, let's say, basic ability, it actually provides very complex answers. It does so because it has been pre-trained on a huge amount of text from all kinds of sources available on the web, and from that it generates, by deep learning, a model that understands semantics.
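The "predict the next word" ability Dr. Neri describes can be sketched with a deliberately tiny toy: counting which word follows which in a miniature made-up corpus. Real GPT models do this with a neural network trained on enormous amounts of text, not word-pair counts, so this is only an illustration of the basic idea.

```python
from collections import Counter, defaultdict

# A tiny, hypothetical corpus (real models train on billions of words).
corpus = ("the patient was hospitalized . "
          "the patient was discharged . "
          "the patient was hospitalized").split()

# Count, for each word, which words were seen immediately after it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("patient"))  # "was"
print(predict_next("was"))      # "hospitalized" (seen twice vs. once for "discharged")
```

Even this crude counter shows the principle: the prediction simply reflects whatever patterns were in the training text, which is also why, as discussed later, biased or false training data produces biased or false output.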
What is semantics? The meaning of words. So, let's imagine that I say the words cat, dog, and pet. We all understand that they are similar concepts in some way. Words, in the mind of GPT, are vectors of parameters in a high-dimensional space. And in this space, words that have similar meanings sit very close to each other; words that have very different meanings are very far away. In this way, GPT parameterizes the concept of semantics. But it is also able to understand the weight of the connections between concepts, not only within the same sentence but across distant sentences: the connection of the speech and the syntax, how we generate our concepts and our language. And in this way, by predicting word by word, taking into consideration all that has been said before, it is able to generate coherent responses to our questions.
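The idea of words as vectors whose distance encodes meaning can be sketched with toy numbers. The three-dimensional vectors below are hand-made for illustration only; real embeddings have hundreds or thousands of dimensions and are learned from text, not chosen by hand. A common closeness measure is cosine similarity: closer to 1 means more similar.

```python
import math

# Hypothetical 3-D "word vectors" (real models learn these from data).
vectors = {
    "cat": (0.90, 0.80, 0.10),
    "dog": (0.80, 0.90, 0.20),
    "pet": (0.85, 0.85, 0.15),
    "car": (0.10, 0.20, 0.90),
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b; near 1 = very similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["cat"], vectors["dog"]))  # close to 1: similar meaning
print(cosine_similarity(vectors["cat"], vectors["car"]))  # much lower: different meaning
```

So "cat", "dog", and "pet" cluster together in the space while "car" sits far away, which is exactly the geometric picture of semantics Dr. Neri describes.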
Dr. Michael Kraus: It's almost like on my iPhone where I get the auto generated word on steroids, so it's generating a thought. That's interesting. So, what does it do when it doesn't know the next thought, when it doesn't have an answer, but it still needs to complete? And I've heard the term hallucinations. What does that mean exactly?
Dr. Luca Neri: Generally, when it doesn't have enough information, or, let's say, the answer might be controversial, there is another layer of training based on reinforcement learning that uses human feedback to make the responses more coherent with the question, or even to ask additional questions to gather more information. When this process fails, and it does fail in some instances, the model can generate hallucinations, which are, let's say, departures from what the model is designed to provide. It is essentially trying to make up an answer without really being successful in generating a coherent, reasonable response.
Dr. Michael Kraus: As Len said, part of the human's job will be to make sure the A.I.'s output is correct, not hallucinating or perpetuating misinformation going forward. You know, it is fascinating. ChatGPT was only launched on November 30th of 2022, less than a year ago, and it's come so far. What do you think it's going to look like going forward?
Dr. Luca Neri: There might be, let's say, many developments. It is very hard to predict how GPT will develop further, in the sense that progress in this field is so fast that, let's say, any prediction may be dramatically wrong. But one can say for sure that model size will grow as computational resources continue to improve. So, we can expect GPT models in general to become larger and more powerful, able to analyze larger chunks of text, because right now there is a limitation on the amount of text GPT can analyze in a single session.
It is already possible, for example, for GPT-4 to use images as input, and additional multimodal, let's say, learning capabilities can be added in the future. It is also possible that additional grounding and knowledge representation may be added to GPT, so that it can build a more accurate understanding of the world and generate more contextual, coherent, and informative responses.
There also might be work, and there should be work, to improve the interpretability of the model. We know the model achieves some extraordinary performance starting from simply predicting the next word, and that it can generate surprisingly accurate responses to many questions. This is an emergent ability.
Many of the potential capabilities of GPT are not really known. We need to explore them, understand how it works, and improve the interpretability of its output. It can also be personalized and adapted to specific tasks; we can add new training to make it more specific. So, for example, we can train it on scientific literature to provide more accurate answers tailor-made for the scientific community.
Dr. Michael Kraus: That's a lot there. We're packaging a lot in a short period of time. Are you concerned? What are the benefits and risks of things like ChatGPT and other A.I. programs?
Dr. Len Usvyat: Let's talk about the benefits first, of course. The number one thing I would say, and this is by far why we're using A.I. technologies, is its ability to process huge amounts of information very quickly. To me, one of the key benefits of all these A.I. programs, and ChatGPT particularly, is the fact that it was trained on so much data and can do things so quickly nowadays because of all these advancements. Many of them happened because computer processing power has become so much better than it ever used to be. Hardware and hard drive space are no longer an issue. And you can access it from pretty much anywhere: with ChatGPT, you can access it from a browser, and there are certainly many apps now that let folks use ChatGPT on their Android phones or their iPhones. Those, I would say, are the main benefits. And as Luca mentioned, there are also graphical components now. Beyond ChatGPT, there are other A.I. tools, things like Stable Diffusion, that you can now utilize to generate images, for example.
And then in terms of drawbacks, there are certainly a number of them as well. The number one that will always get mentioned, because I do think it's important, is the issue of biases. Artificial intelligence comes with biases. Why? Because it learned on historic data. So, if there are biases in the historic data, there will be biases in whatever ChatGPT tells us. If it was trained on false information, you will get a false response back. And that's one of the key things to keep in mind, particularly with ChatGPT, but with any of these A.I. technologies. It is a computer, so it does not have empathy or feelings; that's something for all of us to keep in mind, of course. There are also some privacy and security concerns, which will hopefully be worked out over time. But the number one thing to be concerned about is really the biases. And number two is the misinformation, because if it was trained on something wrong, it will just give you that wrong response back. So that's something to be very much aware of.
Dr. Michael Kraus: Biases in our medical field can be very significant and can negatively impact people. How do we look at A.I. going forward to make sure we exclude biases and improve health equity rather than going backwards?
Dr. Luca Neri: This is a very complex question. It is important that we are very disciplined in the way we generate our training data set, because the way we generate it has the most important, let's say, influence on the output of the model. And then, of course, there should be oversight on how the model works. There should be a pipeline for generating evidence that the model is safe and effective before, let's say, using it in clinical practice.
Medicine has a long history of finding the best ways to test the efficacy and safety of new technologies. We do it with drugs, and we probably have to do it in a similar way with artificial intelligence, with the same discipline and the same attention to potential biases and potential safety risks, and also with an eye on how much A.I. can improve care. To do that, of course, we probably need to devise new methodologies that are less costly than traditional clinical trials, but I think it is very important that we engage in this intellectual exercise of devising methods to prove the efficacy and safety of A.I. for clinical practice.
Dr. Michael Kraus: So, Len, I've heard it said that today A.I. can do mediocre jobs exceedingly well, meaning it writes what it knows in a style it's aware of and can reproduce things fairly quickly. Where that is useful in the world, which is a lot of places, that may be good. But even for things you mentioned, like artwork: yes, A.I. learned from Renoir and can produce a picture, but certainly you don't believe it can deliver the same impact, the same quality, as a Renoir or the like, do you?
Dr. Len Usvyat: It goes to the whole question, I think, of what computers can ultimately do, and what is computer-generated versus not. To me, whatever images or text these A.I. engines are generating should not be used without some human oversight and some human discussion of whatever that may be. If it is a painting or a picture, somebody should be reviewing it and thinking about what it is actually producing. And that's why, in our practice and certainly in clinical care, I do think many of our A.I. and data science developments should not just be taken for granted. I think it's important that clinicians and physicians alike are very much in the middle of that.
And I think, you know, one of the analogies I would often use, and all of us can probably relate to it: when we get on a plane, most of the flying actually happens by computer. The pilots are really there for emergencies and more complicated situations, and to assess what the computer may be doing when it's on autopilot. I very much think many of these A.I. developments are similar, in that none of us would probably want to get in a tube at 35,000 feet in the air if the pilot was not in the cockpit.
And I actually very much think the outlook for these A.I. developments will be very similar. It will help us. It will, I'm sure, make things better. It may sometimes take the emotional component out of a decision; in therapy, for example, we all know that physicians can sometimes overreact to hemoglobin values going down very quickly, and the computer may be able to provide a better answer. But at the end of the day, just like when we get on a plane, we want to make sure there's a pilot. The same goes for any of these A.I. developments: I think it's important that the physician, clinician, and others are in the workflow, actually reviewing the information these A.I. engines are providing us with.
Dr. Michael Kraus: I like the plane analogy, although I have to think: am I actually going to get back on a plane? But that's a separate issue. Luca, let's stay in the world of medicine. In your capacity, what would you say A.I. has done so far to advance it, and how are we beginning to use it?
Dr. Luca Neri: We use A.I. in medicine for different purposes. For example, to benchmark clinics and compare them fairly, to examine whether a clinical practice pattern may impact patient outcomes, or to segment our patient population into risk classes so that we can apply population health management programs to improve population health, or refer patients to preventive interventions or enhanced or, let's say, intensified treatment pathways.
We can use A.I. to generate suggestions for treatment selection or dosage, and we can use it for several other use cases: for example, for the analysis and processing of images and other signals; as chatbots, for example, to triage patients based on symptoms in a symptom checker for immediate referral for medical attention; or even for drug discovery. There is a new field of research that brings together genomic studies and machine learning, and these two together can help expedite finding drug targets and improve the development of new drugs.
Dr. Michael Kraus: And I think A.I. really can help us drive improved quality and care to our patients. And at the end of the day, in our world, that's what it's all about. Len, any other thoughts for closing?
Dr. Len Usvyat: Mike, what I would stress is that this is a new development, and like any other development, we should be careful and thoughtful, but we shouldn't discount it, we shouldn't stop using it, and we shouldn't forbid it. We just need to be thoughtful about how we're actually using it and how it can be helpful to what we're doing. So, I think that's all I would say.
Dr. Michael Kraus: And Luca, I'm going to let you land the airplane. Last thoughts and let's think about privacy and security. What may be bothering our audience here?
Dr. Luca Neri: Well, I think that is a very important topic. In all the work we do at Fresenius Medical Care, privacy is an essential part of our job, especially when it comes to medical devices and software medical devices. So, of course, we are very compliant and very conscious of how we process the data in our health information systems, and therefore of the analysis, development, and use of our A.I. systems when they are used in clinical practice.
Dr. Michael Kraus: That gives us a lot to talk about. This has been such a fun topic to discuss and it's just fascinating in terms of how far we've come in this technology and how far we have yet to go and see it grow. Len and Luca, thanks again for being here today.
And to our audience, thank you for joining us today. In the next few weeks, be on the lookout for part two of our Field Notes episodes on artificial intelligence, where we'll talk about how we're using artificial intelligence at Fresenius Medical Care today in the field of nephrology.
If you're new to the Field Notes podcast, you can download past episodes on the Apple Store, Google Play, or wherever you download your favorite podcasts. Please remember to subscribe to receive the very latest updates as they happen. Until next time. I'm Dr. Michael Kraus and you've been listening to Field Notes by Fresenius Medical Care. Take care, everyone, and let's begin a better tomorrow.