Navigating AI in healthcare

Artificial Intelligence (AI) is being integrated into many aspects of our lives and is at the heart of the future of medicine. This technology provides physicians with additional opportunities to enhance patient care and reduce human error.

The CMPA has designed this webinar to address the relevance and use of Artificial Intelligence (AI) in healthcare. This learning activity will provide physicians with tips on how to leverage AI in their practices while minimizing their medico-legal risk.

Learning objectives

  1. To analyze the current landscape of AI integration in healthcare and medicine.
  2. To provide a framework for members to understand and reduce the medico-legal risks of these new tools.
  3. To provide members with questions they can ask about AI to cut through the hype and assess its use in practice.

Credits

Synchronous learning

This 1-credit-per-hour Group Learning program has been certified by the College of Family Physicians of Canada for up to 1.0 Mainpro+ credit.

This event is an Accredited Group Learning Activity (Section 1) as defined by the Maintenance of Certification Program of the Royal College of Physicians and Surgeons of Canada, and approved by the CMPA. You may claim a maximum of 1 hour (credits are automatically calculated).

Asynchronous learning

You may claim 1 credit for watching the video of a CMPA webinar under Mainpro+® (Maintenance of Proficiency): Non-certified activities: Self-Learning: Online learning (videos, podcasts).

(Any non-certified activity is generally eligible for one non-certified credit per hour).

You may claim 0.5 credit for watching the video of a CMPA webinar under the Maintenance of Certification Program (MOC): Section 2: Self-Learning: Scanning (Podcasts, audio, video).

Recorded session


Transcript

Dr. Lisa Thurgur: Hi, everyone, and welcome to our webinar entitled Navigating AI in healthcare. Now, before we begin, I would like to acknowledge the land that we are presenting from today. As you likely know, the CMPA offices are located in Ottawa and they are on the unceded, unsurrendered territory of the Anishinaabe Algonquin Nation, whose presence here reaches back to time immemorial.

I would also like to recognize that today we have many participants learning with us from all other areas in Canada, and I would like to honor and pay respect to these lands that you are on, and to all First Nations, Inuit, and Métis peoples throughout Turtle Island. As an organization, the CMPA certainly recognizes all First Peoples who were here before us, those who live with us now, and the seven generations to come.

I'd like to start by introducing our panel for this webinar. Shoshanah Deaton is a family medicine trained physician who practiced comprehensive primary care for many years in a small community in Ontario. She also worked as an investigating coroner and a surgical assistant, and she is currently employed at the CMPA as a physician advisor. She speaks to members daily about AI, and we are very fortunate to have her with us today. Welcome, Shoshanah.

Dr. Shoshanah Deaton: Thank you.

Dr. Lisa Thurgur: And Chantz Strong, who's with us today, holds a master’s in systems engineering, artificial intelligence and business from MIT. And he is the chief privacy officer at the CMPA. His portfolios all focus on improving decision making, reducing risk, and improving patient safety. And he is the perfect person to be with us here today to speak about AI and how to navigate this during these times. Welcome, Chantz.

Mr. Chantz Strong: Yeah, really happy to be here, Lisa.

Dr. Lisa Thurgur: Marty Lapner is a partner in the Ottawa office of the law firm Gowling WLG. He practices in health law and privacy, including as counsel to the CMPA and to physicians. Marty practices with an emphasis on professional, regulatory, and civil liability matters, and he is here today to share a wealth of medico-legal knowledge with us on the topic of AI. Welcome, Marty.

Mr. Martin Lapner: Thanks for having me, Lisa.

Dr. Lisa Thurgur: And my name is Lisa Thurgur. I am an emergency trained physician and I love all things involved with medicine and education. And I'm thrilled to be here. I'm currently a physician advisor in Safe Medical Care – Learning at the CMPA, and really happy to be hosting our webinar on the medico-legal risks of artificial intelligence today.

Now, as a panel, we have no conflicts of interest to declare, except that Shoshanah, Chantz, and I are paid employees of the CMPA and Marty is retained counsel. You'll see that these are our three objectives for this webinar. Firstly, we'd like you to take a look at the current landscape of AI and how it seems to be integrating into healthcare and medicine.

We also want to provide you with a framework to understand and help reduce the medico-legal risks of these new AI tools that we'll be talking about. And finally, we would like to provide you with questions so that you can ask about AI in order to assess its use in your practice. Now, as everyone knows, and this is why you joined us, AI has emerged as a powerful tool that can potentially transform medicine.

It's expected to significantly impact how patients receive care and how physicians deliver it. And during these very challenging times, when physicians are facing burnout, exhausting administrative loads, and rising patient volumes, our fascination with AI is not only about the coolness of the concept, but also about being desperate to find any tool that helps with the burdens of practice.

Maybe it's something that streamlines administrative tasks, or helps us see more patients on a lengthy wait list or in an overflowing waiting room. So that's why we're all here today to learn about this. And I mean, AI is here to stay. Right, Chantz? It's really not going anywhere. So tell us what we actually mean by artificial intelligence when it comes to medicine.

Mr. Chantz Strong: Yeah. Happy to, Lisa. So I think what our members will find is when you look at trying to understand what AI is, a lot of the definitions are too technical. You read it and you don't know what you saw. Or they are too generic, things like, it mimics human thought. I think for me, a practical way to understand what AI is, is it's an algorithm.

It takes data, it processes it, and then it provides an output. That could be a summary of a conversation, a diagnosis, or even an instruction to move a robotic arm up or down. One thing to keep in mind is that some of these algorithms learn, so they use new data on an ongoing basis to get better and unfortunately, sometimes worse over time.

All the other things that you may hear about AI might be irrelevant. So knowing that the AI you're working with is, like, a generative deep learning adversarial neural network is kind of interesting, but really not that helpful. And the thing is, the algorithm in use will likely be replaced by something else in six months anyway.
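
To make that input-process-output idea concrete, here is a minimal sketch in Python. Everything in it is invented for illustration, it is not any real product, and real models are vastly more complex; it simply shows an algorithm that takes data, provides an output, and learns, for better or worse, from new data.

```python
# Minimal sketch of "AI is an algorithm: data in, processing, output out."
# All names and numbers are invented for illustration; real models are far
# more complex than this toy.

class ToyEarlyWarning:
    def __init__(self, baseline_heart_rate: float = 75.0):
        self.baseline = baseline_heart_rate  # the "learned" parameter

    def output(self, heart_rate: float) -> str:
        # Process the input data and provide an output.
        return "flag for review" if abs(heart_rate - self.baseline) > 30 else "normal"

    def learn(self, recent_heart_rates: list[float]) -> None:
        # Some algorithms keep learning: new data shifts behaviour over time,
        # sometimes for the better and, if the new data is skewed, for the worse.
        self.baseline = sum(recent_heart_rates) / len(recent_heart_rates)

model = ToyEarlyWarning()
print(model.output(120))       # "flag for review" (120 is far from baseline 75)
model.learn([110, 118, 122])   # skewed new data moves the baseline up...
print(model.output(120))       # ...and the same input is now "normal"
```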

Dr. Lisa Thurgur: Okay, so the first AI tool that comes to mind for me, naturally, is ChatGPT. So, what is ChatGPT and how does it fit in?

Mr. Chantz Strong: Yeah. So ChatGPT is interesting in that it was actually the fastest-growing software application ever, and it's a type of generative AI, which is a category of AI. What it does is it creates a wholly new output based on its inputs. So this differs from previous AI that members may have seen in their clinical work or academic literature.

So, these previous types of AI, which are still very much in use and useful, would take an input and then provide a well-defined answer. An example would be a predictive algorithm that, say, produces an early warning score of some sort based on monitoring patient vitals. Or maybe it sees a CT scan and identifies the area where there may be an abnormality.

Generative AI, on the other hand, takes inputs, like, say, the recording of a physician-patient interaction, and then, combined with the instructions you give it, generates a wholly new product. So, a new output, maybe a SOAP note. Or perhaps, if we're looking at supporting physician education, it would take a CT scan of an adult lung with a tumor and then generate a wholly new CT scan, say, of an infant lung with a tumor.

So, this is one of these developments that's happened in the last few years. And it's really accelerated the potential impact of AI in healthcare.
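
As a rough illustration of that difference in shape, consider the following toy sketch. The names are invented, and the "generative" function is just string formatting standing in for what would really be a large model; the point is only that predictive AI returns one constrained answer, while generative AI assembles a wholly new artifact from inputs plus instructions.

```python
# Toy contrast between the two shapes of AI. All names are invented, and the
# "generative" function is string formatting standing in for a large model.

def predictive_triage(vitals: dict[str, float]) -> float:
    """Predictive AI: well-defined inputs in, one constrained answer out."""
    score = 0.0
    if vitals["heart_rate"] > 110:
        score += 0.5
    if vitals["spo2"] < 92:
        score += 0.5
    return score  # e.g. an early-warning-style risk score between 0 and 1

def generative_scribe(transcript: str, instructions: str) -> str:
    """Generative AI: inputs plus instructions in, a wholly new artifact out."""
    return (f"[Drafted per instructions: {instructions}]\n"
            f"S: {transcript}\nO: ...\nA: ...\nP: ...")

print(predictive_triage({"heart_rate": 118, "spo2": 91}))              # 1.0
print(generative_scribe("Three days of cough, no fever.", "Draft a SOAP note"))
```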

Dr. Lisa Thurgur: Okay. That is definitely something I think our members would be very interested in. But tell me, how fast are these things changing? Is it just marketing or is this something significant?

Mr. Chantz Strong: Yeah, no. Great question. I mean, there was an explosion of interest in how fast this AI is developing with ChatGPT, to the point where at one point, many people asked for the development of AI to pause because it may actually pose an existential threat to humanity. I think now that we're kind of mid-2024, so about 18 months later, we're a little bit beyond the headlines. That said, generative AI and what it means is a big deal.

The pace of change is staggering. So ChatGPT, the example we were just talking about: in the 18 months since it was introduced, it's had a number of massive upgrades. It's gone from text only to being able to take any text, video, audio, or image as an input and then provide any text, video, audio, or image as an output. So, a lot of big changes are happening there.

Dr. Lisa Thurgur: Interesting, and I know that we're seeing it in a lot of hospital EMR systems, for example. If you haven't seen it in your EMR, it's coming soon to one near you. Would you say that's right, Chantz?

Mr. Chantz Strong: Yeah, absolutely. Generative AI is being integrated in so many different places. It allows you to interact with knowledge and information contained in the model in a very natural way, and it can also interact with other software. And because it's so flexible, it's being integrated into so many different application scenarios. So, for example, it's being integrated into EMRs, and we're seeing it integrated across Microsoft systems, so it's in their browser.

And every major technology company, from Google to Facebook, is releasing its own version. So, many hospitals and governments are seizing the potential efficiencies and opportunities by introducing pilot projects for AI scribes, which we'll talk about later in this webinar. Finally, as you mentioned, these technologies have real potential to help our members: some applications can reduce administrative burden and make care better. But our members really do need to know the risks and the issues for consideration for the specific AI tool in order to make informed decisions about use.

Dr. Lisa Thurgur: Okay, so we now understand what is meant by AI in medicine, and you've spoken about how fast the landscape is changing. Can you tell us now how AI could show up in a physician's practice, and how our members can begin understanding its risk?

Mr. Chantz Strong: Sure. So, AI is what I'd call a general-purpose technology, meaning it can be used in so many different contexts. Understanding how you're going to use it is the first step. Now, I love to cook, so maybe we can use food as an analogy. Food, like AI, is a general term. But you wouldn't approach, say, baking a cake the same way as you would making pasta. It's the same with AI. The uses and the risks of an automated scribe will be very different from those of an AI tool that identifies anomalies in CT scans.

Dr. Lisa Thurgur: So, how is the CMPA looking at this?

Mr. Chantz Strong: Yeah, at the CMPA, we've been looking at AI and the regulatory landscape, which is currently developing. And while it's still early, there are some broad categories that can help you think about how AI can show up in your practice. The first category is clinical medical purposes. So, these are ones that will likely have more impact on patients. And these are ones that may actually have more regulation by organizations such as Health Canada. Other uses such as administrative operational uses, patient and consumer uses, knowledge translation, research and development, and even those for public health are going to have different risks and different regulations.

So, knowing the general category will be helpful to understand the regulatory requirements, the impacts, and the risks. And from there you can understand whether it works for you and your practice.

Dr. Lisa Thurgur: OK, so it sounds like AI could essentially be used everywhere in healthcare. And we will get to a specific example of this later on in the webinar for sure.

But for now, Shoshanah, how can we best wrap our minds around what is going on? How should members approach the use of AI in their practice of medicine?

Dr. Shoshanah Deaton: You’re right, Lisa. It does sound like AI could be used everywhere in health care. But it's important to remember that while there is a lot of hype about how it will change our lives, and fear about how it might replace us as physicians, you know, we need to consider what do we actually do with it?

As Chantz noted, the pace of change is staggering, so the CMPA's suggested approach may change. But there are some questions that we can ask ourselves today about how to consider the use of AI in our practice. So, ask yourself: what are you using it for? That is going to determine the impact and the risk of the tool.

Does it need to be regulated? Health Canada approval helps mitigate risks associated with the use of AI by helping to establish the safety, effectiveness, and quality of an AI technology. And then we could consider issues that we are very familiar with. So how will it improve care for our patients? Will it improve our practice? Is it safe?

Where is the evidence of its efficacy? Where is the evidence that the information it provides is accurate and useful? Endorsement by a professional organization can be helpful, since an AI tool is much harder to assess than a typical tool because of its lack of explainability and the difficulty of assessing it for bias. Consider also: is the tool usable and practical?

How will it fit into our practice? And is there available training on the use of the tool? And how will it interoperate and integrate with other software and tools? So, you know, those considerations are really things that we deal with all the time when we evaluate a new decision-making tool like the Ottawa Ankle Rules, or maybe a portable ultrasound machine.

That said, AI brings some novel considerations that are unique, such as privacy and data protection, bias and consent. And within those last three points, we should also reflect on whether the AI is learning as we use it. And although there is a lack of clarity here, I think we are all asking ourselves, how much liability risk am I taking on by using an AI tool?

Dr. Lisa Thurgur: Yes, that makes sense. So, AI is not exactly like any other tool that we use, but are there some similar or key principles that we can apply?

Dr. Shoshanah Deaton: Yeah. That's right. Many of the considerations I mentioned are standard, but they'll only get us part of the way. It's useful to think about how you want to use the AI, and then find the closest analog version and walk through the same evaluation and assessment process.

So, for example, if you're interested in using AI to triage patients, ask yourself, what would I consider if I had a nurse doing this?

Dr. Lisa Thurgur: OK. Now it's been mentioned that regulatory issues are key when considering an AI tool to use. So, let's tackle the issue of regulation first. Who regulates these AI tools, Marty?

Mr. Martin Lapner: Thanks, Lisa. Regulation of AI is still a bit of an evolving field.

But significant steps are being taken. A couple I'd like to mention. First, federal legislation has been introduced in Parliament which would, if passed, enact the Artificial Intelligence and Data Act. That would be the first regulatory framework in Canada for AI tools. It focuses on high-impact systems and those responsible for those systems: developers and designers would have to assess and mitigate risks of harm and bias. And it's intended, basically, to increase trust and facilitate adoption of AI tools. The second development I wanted to mention is that Health Canada, for a few years now, has been using existing authorities to license software as a medical device.

That licensing assesses the safety, efficacy, and quality of the AI tool. The FDA, which is Health Canada's counterpart in the US, has approved about 700 AI medical devices. And to give you an idea of the state of the regulatory environment, all of those tools still require a human being in the loop.

So, they aren't completely autonomous AI tools. A few other points about medical device licensing. It focuses, again, on a risk-based approach: higher-risk tools or devices, those that create more risk for patients or that more immediately drive care, are subject to more stringent licensing requirements. The second point is that not all products are required to be licensed.

Those tools with an administrative purpose don't have to be licensed. There are also certain exclusion criteria, including whether the tool is intended to replace a clinician's judgment. So, for example, a chatbot that guides a patient to the most appropriate form of care generally wouldn't require a medical device license. A drug dosing calculator whose output can be independently verified by a clinician: same thing. An electronic medical record system with an administrative purpose generally wouldn't require a license either.

Dr. Lisa Thurgur: OK, so where are the Colleges on this?

Mr. Martin Lapner: Some colleges have issued guidance. At the moment, they focus on existing duties of physicians. So, consider the best interests of the patient.

Consider accountability, and who's accountable for the patient's care, which is the physician. Consider the privacy and confidentiality of the patient. It's a tough balance to strike. If Colleges are too prescriptive, the guidelines may become outdated or miss the mark, and it's resource intensive. If guidance is too high level, then physicians may be left without any practical direction.

The College in BC has issued some ethical guidelines, and it takes the high level approach we talked about. It focuses on confidentiality, transparency about use of the tool, interpretability of the tool, and obtaining patient consent. The College says this is preliminary and it'll be updated as the AI landscape evolves.

The College in Alberta took a bit of a different approach. Last summer, they released guidance specific to AI scribe tools, and it's one of the most direct publications from a College on AI. It says proceed with caution and provides some guidance about considerations, but it still doesn't address all questions, for example, about some of the nuanced recordkeeping issues or privacy compliance. But it's a helpful start.

Dr. Lisa Thurgur: OK, and what about other, allied health-related regulatory bodies in Canada?

Mr. Martin Lapner: Good question. Yeah. Some preliminary guidance there as well. The College of Psychologists of Ontario, for example, permits AI to supplement a clinician's judgment, but it says psychologists remain accountable. They also focus on the importance of consent. You'll hear that a lot from the panel today, I expect. The few colleges that have provided guidance seem to tacitly accept AI use, and there's a consistent approach: it has to supplement care, not replace clinical judgment. And they urge caution with a focus on making sure the work product is accurate, protecting patient privacy, and obtaining consent.

Dr. Lisa Thurgur: That's helpful, Marty. Thanks. What would you say, Shoshanah, are the liability considerations specific to AI that physicians need to be aware of?

Dr. Shoshanah Deaton: Yeah. So, with the caveat that the rules are evolving quickly, physicians need to specifically consider issues around privacy, bias, and consent.

Dr. Lisa Thurgur: OK. Privacy, bias, consent. Let's start with privacy and data protection, for example. Marty, can you tell us a bit about that?

Mr. Martin Lapner: Sure. Yeah. So, privacy and data safeguards are absolutely a key issue, given the amounts of data needed to develop and use AI. There are inevitably going to be privacy and cybersecurity risks. Privacy legislation is technology-neutral, so it applies to AI, but the unique characteristics of AI create some uncertainty about what those obligations are. Modern legislation is increasingly being tabled that clarifies some additional obligations specific to AI.

So, for example, there needs to be transparency about the use of AI, and even the right of an individual to request that a human being review any decision made by an AI tool. We're also seeing some instances of regulatory action that may add some clarity and articulate obligations a little better.

The public release of ChatGPT, for example, led to an investigation by the Privacy Commissioner of Canada, which is looking at how ChatGPT processes the personal information of Canadians. And we still don't have the decision, or a report from that file.

Dr. Lisa Thurgur: Interesting. OK. So, would you say then that patient data is being used to improve AI tools? How would we find that out, Chantz?

Mr. Chantz Strong: Yeah, well… So, I would say most modern AI does learn. But what we need to understand is: Are they using our data and our patients’ data to train their AI? So, if the physician is involved in the purchasing decision or the custodian of the data, I would encourage you to ask the vendor some very specific questions, like, does the AI use my patient's data to train your algorithm?

And then understand what the assurances on the use of the data are. And further, can these assurances be changed without my consent? Could the AI vendor change the privacy policy without you knowing? So, we would encourage you to look at the terms of use and the privacy policy, and also look to see if any professional organizations or health agencies have endorsed particular products.

And it's important to know that in some jurisdictions, you or the vendor may have to conduct a privacy impact assessment. And so that will help you to consider how patient information and personal information will be processed and safeguarded.

Dr. Shoshanah Deaton: Yeah. And I'd like to add, if you work in a hospital or facility in which you're not the custodian of the medical information, you should raise the proposed use of an AI tool with your administration, to review privacy compliance obligations and ensure you're authorized to use the tool.

Dr. Lisa Thurgur: That's interesting. Thank you. So, speaking of privacy, our members are calling in, Shoshanah, to ask if AI tools are HIPAA compliant. Is that relevant in Canada?

Dr. Shoshanah Deaton: No. HIPAA is US legislation. A HIPAA-compliant tool is compliant in the US, but not necessarily in Canada. When dealing with personal information or personal health information in Canada, the tool needs to be compliant with the privacy legislation in your province or your territory.

You'll see this often: AI models use the cloud to process data and create output. This processing may take place outside the country, and that may violate the laws where you work. And these laws do vary. For example, in Quebec you now need to do a privacy impact assessment before sharing personal information outside of Quebec.

And in Saskatchewan, the College there has also released guidelines for virtual office assistants, including AI scribes, which discourage using tools from businesses located outside of Canada. So, it's not straightforward. And if you're confused about the use of AI tools, call the CMPA. We can help you walk through the considerations I mentioned today, and your questions will help us shape future advice.

Dr. Lisa Thurgur: That's perfect, Shoshanah. Thank you. Now, another key issue that was mentioned was bias. So, Chantz, can you tell us a bit about that? How can we deal with bias when it comes to AI?

Mr. Chantz Strong: Yeah, so… bias is a huge topic and we're not going to be able to do it justice in our short time. So, essentially this is the issue of whether the algorithm treats certain subgroups differently.

This is important because we know that if subgroups are treated differently, they can have different health outcomes. So, to give you an example of bias with AI, take an AI that seeks to recognize melanomas in the patient population. These algorithms may be biased when it comes to assessing lesions in darker skin, because they're trained on data sets that are unrepresentative and predominantly feature lighter skin tones.

So, it may misdiagnose lesions in these patients. And the thing is, bias can come from so many different sources. The data that the AI is trained on can be biased. The assumptions or the design of the AI itself can be biased, for example, if the designers didn't consider issues of a specific group, like minors or younger patients. Or bias can be introduced in how the AI is deployed, such as whether the AI can collect input from a variety of different patients.

But remember, bias isn't new to medicine. We consider bias when we assess any scientific study. So, we can use a similar approach. We can look at who is involved in the development of the study, or in this case, AI, what are the potential biases of the underlying data, and more. And it's also important to recognize that likely all models will have some sort of bias, since all data that has been collected has inherent biases.

But remember, human decision making, it's biased as well. So, the key is: know that bias exists, and then take appropriate measures to mitigate the risk of harm. Don't be afraid to ask for information to determine the validity of the tool for your patients. So, you know, talk to your vendors. They should be able to provide information to you about the product’s intended use, the performance of the model for appropriate subgroups, the characteristics of the data used to train and test the model, acceptable inputs, known limitations, and more.

And I'd also note that if the vendor doesn't have this information readily available, they may not have spent enough time to consider bias and how they use their AI. And that can be a red flag in and of itself.
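
As one concrete way to act on that advice, a subgroup performance check, whether the numbers come from the vendor or from local data, can be as simple as the following sketch. The cases, group labels, and resulting numbers here are invented purely for illustration.

```python
# Sketch of a subgroup performance check. The cases, group labels, and the
# resulting numbers are invented purely for illustration.

from collections import defaultdict

def sensitivity_by_group(cases: list[dict]) -> dict[str, float]:
    """Share of true melanomas the tool flagged, broken out by subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for case in cases:
        if case["truth"] == "melanoma":
            totals[case["group"]] += 1
            if case["model_says"] == "melanoma":
                hits[case["group"]] += 1
    return {group: hits[group] / totals[group] for group in totals}

cases = [
    {"group": "lighter skin", "truth": "melanoma", "model_says": "melanoma"},
    {"group": "lighter skin", "truth": "melanoma", "model_says": "melanoma"},
    {"group": "darker skin",  "truth": "melanoma", "model_says": "benign"},
    {"group": "darker skin",  "truth": "melanoma", "model_says": "melanoma"},
]
print(sensitivity_by_group(cases))
# {'lighter skin': 1.0, 'darker skin': 0.5} -- a gap like this is a red flag.
```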

Dr. Lisa Thurgur: Okay, that makes sense. Now, another important area to consider is patient consent when it comes to using AI tools. So, what are some issues here Shoshanah?

Dr. Shoshanah Deaton: So, based on the College guidance that we've seen published so far, Colleges expect us to obtain informed consent from our patients, including communication of the risks and benefits of the use of the technology, the issue of potential bias, and privacy risks. So, you should also explain if the data may be de-identified and used to improve the algorithm by learning from one patient to the next. A discussion with a patient would involve, for example, information we might bring up with any new therapy, including the rate of false negatives, its limitations, and lack of regulation or approval.

We are seeing lawmakers focus increasingly on transparency and how AI is used and processed. So, being transparent with our patients, as we should always be, lines up well with that.

Dr. Lisa Thurgur: So, who should be evaluating or validating these AI tools?

Dr. Shoshanah Deaton: I like that question because physicians may simply not have the bandwidth or resources to evaluate AI tools on their own.

Many tools that exist in medicine are used variably by physicians, and there is no uniform way that tools are brought into use. Decisions are often made at a departmental, hospital, or clinic level. Essentially, someone else makes the decision to implement the tool. So, depending on the nature of the tool, physicians may have no choice in using it, like with an EMR, or we may have the option of using it, like a scribe assistant. What can make these decisions easier is regulatory approval of a tool's safety, effectiveness, and quality. Endorsement from a reputable professional or regulatory organization can be helpful, and I would encourage physicians to look for that.

Interestingly, Lisa, the Canadian Association of Radiologists recently announced plans to put together a clinician-led AI validation network to increase physicians' confidence in AI tools.

Dr. Lisa Thurgur: Okay, that is interesting. And that is good to know. All right. So, I would say that's a great overview of the issues related to AI. And we've certainly established a framework or an approach that you can use to assess an AI tool before using it in practice.

Now, let's apply this approach to a concrete example. Shoshanah, can you give us an example where we can apply this framework?

Dr. Shoshanah Deaton: Absolutely. Let's take as an example, an administrative and operational use. So, at the CMPA, we're getting a lot of calls about the use of AI scribes.

Dr. Lisa Thurgur: Ah-ha, AI scribes. Okay, perfect. So, the first thing to consider from our framework, in this case not with just any tool but with our AI scribe, is: how will I use it, and will it improve my practice? Is that right?

Dr. Shoshanah Deaton: Yes. The virtual scribe could take many forms, but in essence, it takes a video or audio recording of a patient interaction, instantly transcribes it, and then supports the physician in making a note of that interaction. So, let's assume for our exercise that we are looking at an application that takes an audio recording of a patient-physician interaction.

Dr. Lisa Thurgur: OK. So, now we need to understand the regulatory obligations. How does this apply to our AI scribe, Chantz?

Mr. Chantz Strong: Right. So, as we discussed before, since it serves an administrative purpose, it's not likely to be licensed by Health Canada. Your local College may have regulations, though. As Marty noted, the CPSA and the CPSS have interim guidance specific to AI scribes.

And we do understand that other jurisdictions are moving forward in this area. So, if your College doesn't have specific virtual scribe guidance, the AI scribe will likely fall under the standard requirements to prepare a good note.

Dr. Lisa Thurgur: OK, and so a good analogy for this would be using a human scribe, for example, a medical student. Is that right?

Mr. Chantz Strong: Yeah, exactly. So, as mentioned earlier, in some ways this can just be seen as an extension of existing obligations. Think through the same issues you would if you had a human scribe, like a medical student. With a medical student scribe, you would consider issues like: did you get consent from the patient to have someone take the notes? And you would ensure that the scribe, be they human or AI, knows your standards, the required information, and the format you need in a note.

With all of this, of course, you have to take into consideration patient privacy, protection of patient information, and records management. So, you need to understand who has access to the note, where the information is retained and for how long, whether it should be part of the medical record, and more. And the thing is, whether it's a human or an AI scribe, the physician is still accountable for the content and quality of the note.

So, you have to have processes in place to verify and sign off on the note. And then finally, you want to make sure that the entirety of the encounter is recorded and documented, including discharge and follow-up instructions.
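
One way to picture that sign-off obligation is as a gate in the workflow: however the draft note is produced, nothing is filed without the physician's review. Here is a minimal illustrative sketch, with invented names and a toy draft note, not any vendor's actual workflow.

```python
# Sketch of a sign-off gate (illustrative only): however the draft note is
# produced, nothing enters the medical record without physician review.

from typing import Callable, Optional

def file_note(ai_draft: str, review: Callable[[str], Optional[str]]) -> str:
    """`review` returns the verified (possibly edited) note, or None to reject."""
    verified = review(ai_draft)
    if verified is None:
        raise ValueError("Draft rejected; document the encounter another way.")
    return verified  # only the physician-verified note is filed

# Example: the physician corrects a vital sign the AI got wrong and adds
# the clinical reasoning that the scribe did not capture.
draft = "S: 3 days of cough.\nO: BP 120/80, HR 72.\nA: Viral URI.\nP: Supportive care."
note = file_note(draft, lambda d: d.replace("HR 72", "HR 96")
                 + "\nReasoning: no red flags; reassess in 48h.")
print(note)
```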

Dr. Lisa Thurgur: OK, perfect. Now, if we go back to our framework, we next need to look at privacy and bias. So, Marty, what do members need to think about for our AI scribe in this area?

Mr. Martin Lapner: OK. So, maybe I'll cover privacy first and then bias. On privacy, there are a number of issues. Do you have patient consent? Again, common refrain. So, some Colleges or privacy legislation require use of a written consent form for recording clinical encounters.

The consent form should also explain, in understandable language, the risks, uses, and purpose of the AI scribe. In some jurisdictions, a privacy impact assessment, which Shoshanah mentioned earlier, may be required or encouraged by the Privacy Commissioner to consider the privacy risks and ways to mitigate them. And then a contract should be in place with the vendor to impose reasonable safeguards.

On the issue of bias, consider how the tool will perform for your patient population. Was it trained on data representative of your patient population? What's its performance on a range of accents, voices, and regional slang? Have the vendors provided information about this, and about intended use, performance, and limitations? How does it process nonverbal communications like "ah-ha"s? And does this create the potential to introduce inaccuracies into the note?

Some studies have suggested previously that AI scribes don't deal with those nonverbal communications very well.

Dr. Lisa Thurgur: That's super interesting. But thinking about it now, the purpose of the AI scribe is essentially to improve your documentation. Should we also be thinking about potentially unexpected issues that could arise? So, Shoshanah, tell me, what are some of the downsides of using AI for documentation?

Dr. Shoshanah Deaton: That's an interesting point. Documenting can help you organize your thoughts, and offers a second opportunity for reflection. I know it does for me. Is this lost when AI is documenting? And if so, what can be done to mitigate this, given the potential for time-saving offered by scribing? It is important to be aware of certain additional risks, like the risk of missing critical findings, such as abnormal vital signs, which might become more obvious as we document.

And also, there may be a missed opportunity to describe clinical reasoning, as it may not be captured in a verbal exchange with the patients. Finally, the potential for errors in documentation from AI hallucination is real. In consideration of these risks, signing without reading a note may well be problematic. So, I would suggest, Lisa, carefully reviewing generated chart entries and adding your clinical reasoning to the note if it’s not captured.

Dr. Lisa Thurgur: That is important. Clinical reasoning is very important in any note, whether you're using AI or not. So that's a good tip. OK, so to summarize, if you are considering an AI scribe, you need to consider all the same elements as if you had a medical student doing the scribing. But you also need to consider issues around data, privacy, consent and bias. Is that right?

Dr. Shoshanah Deaton: That's right. But don't forget that at the end of the day, just like with a medical student, the physician is still accountable for the note.

Dr. Lisa Thurgur: OK. Absolutely. And we've covered a lot already, so this is great. But thinking about this, given all of these issues, will physicians ever feel like they trust AI, Chantz? What do you think about that? Will we trust AI?

Mr. Chantz Strong: Right, no, I mean… many AI proponents believe that a lot of the issues currently facing AI will be solved. So, issues around accuracy and reliability will get better over time. And AI will become more explainable. That's something we haven't really touched on, but that's the idea.

Can AI provide explanations on how it arrived at a particular conclusion? Which can then help us scrutinize and assess the outputs of an AI tool. So, there are more and more supports being developed for AI tools, such as scribes, that reduce administrative burdens. And organizations are increasingly assessing and validating these tools.

And I think we'll see more and more certifications by professional organizations that can facilitate adoption.

Dr. Shoshanah Deaton: If I could just add, Chantz, one of the key pieces of advice that the CMPA gives our members is to carefully document. So, AI scribes could have the potential to facilitate improved documentation. As these tools gain traction, they may well offer significant benefits to our members.

Also, in the area of clinical decision making, I would add that for the foreseeable future, AI will be an aid for clinicians to support and complement other relevant and reliable information and tools. From a risk management perspective, it's still important to apply sound clinical judgment, even if automated decision support is available.

Dr. Lisa Thurgur: And what if physicians don’t embrace AI, Marty? What happens then?

Mr. Martin Lapner: Good question. Looking further into the future, as AI becomes commonplace, physicians could be held accountable for not embracing AI quickly enough. We've seen examples of this in the past, including lawsuits against physicians for failing to order follow-up X-rays when X-ray technologies were becoming more common. There's potential for physicians to be caught in the precarious position of having to argue that they neither under-relied on AI tools nor over-relied on them.

Dr. Lisa Thurgur: Interesting. Have there been any real-world examples already of AI related medico-legal challenges?

Mr. Martin Lapner: Good question. There haven't been many cases involving clinical uses of AI yet. But there have been a few privacy decisions. The UK's NHS, for example, a few years ago shared data on over a million patients with Google's DeepMind to develop a system for detecting acute kidney injury. So, arguably for a socially beneficial purpose. But the privacy considerations weren't necessarily followed: the UK's Information Commissioner found that patients weren't adequately informed that their data was being used for that purpose.

Given the amounts of data used and processed by AI, we expect to see more privacy claims and more cybersecurity incidents relating to AI. Once we see claims on the clinical side, cases might mirror those that involve software, like decision-support systems. What those cases show us is that existing legal principles still focus on human actors.

So, this means healthcare providers might still be viewed as the ultimate decision makers. And there's evidence of this in the autonomous vehicle context. In the past few months, there have been a couple of cases where cars driving autonomously struck pedestrians and the drivers were charged criminally. The reason for that is that the terms of use for those cars still say the driver is expected to hold the wheel, keep their eyes on the road, and take over when needed.

The few medical negligence cases involving software tend to support this view. This may change in the future, but for now, there's a core focus on suing human beings.

Dr. Lisa Thurgur: It's a great example, thanks Marty. All right, let's leave our listeners now with a few takeaways. So, what should physicians think about before incorporating AI into their practice? Shoshanah, can you start us off with a takeaway or two?

Dr. Shoshanah Deaton: Yeah, I feel like we've just scratched the surface of AI in medicine. But again, our advice may change as things are evolving quickly. I think we should continue to think about the purpose. What is the stated purpose and objective of an AI technology? And is its use appropriate for our practice?

Think about reliability. The efficacy and safety of the tool. And is regulatory approval there? It can help mitigate risks associated with the use of AI by helping to establish that safety and effectiveness. So, look to see whether organizations have endorsed it. Think about privacy. Are there appropriate privacy safeguards, including contractual obligations?

Consider bias. Is the tool appropriate for your patients and have the vendors provided the necessary information, including about the product's intended use? Its performance and limitations, and the validity of the training data for your patient population. And finally, think about consent. Obtain consent by setting out the risks, the benefits and the limitations of the tool.

Dr. Lisa Thurgur: OK, thanks for that. Any other takeaways, Chantz?

Mr. Chantz Strong: Yeah. I mean, it's still very early in the game for AI. And regulators are still evolving their approach. We are actively engaging with other stakeholders who are working to provide physicians with greater clarity and guidance. And we have not yet seen how the courts will approach liability. So, unfortunately, there are a lot of gray areas.

However, we at the CMPA are following this issue very closely, as we know that many of you are looking at these technologies. We hope that, through this webinar, we've shed some light on some of the issues, and that you realize you have many of the skills you need to begin to assess these tools.

We, at the CMPA, will continue to support our members, and we'll release guidance as things evolve. But if you have questions, know that we're here for you. And so, give us a call.

Dr. Lisa Thurgur: I like it. Thank you for your takeaways. OK, I do want to thank our entire panel for being here with us today. I feel like we could talk about this topic all day, and perhaps we'll even be back to do webinar number two, because this is a topic that is continuing to evolve.

But it is time to head to our Q&A, and I'm sure there'll be some, you know, some great learning points there as well. Our moderator today is Elisabeth Normand, who is a registered nurse with her master's in business administration. She is a valuable member of our Safe Medical Care – Learning team and a very experienced moderator. Welcome, Elisabeth, and thank you for being here with us.

Mrs. Elisabeth Normand: Thank you, Lisa.

Dr. Lisa Thurgur: If you have not had a chance to do so, please send Elisabeth your questions through the Q&A chat function and she will pose them to our panel. Over to you, Elisabeth.

Mrs. Elisabeth Normand: Wonderful. Thank you so much. So, one of the questions that our members are asking quite a bit about – Shoshanah, I'll direct this one to you – is: Will the CMPA protect me if I use AI in my practice?

Dr. Shoshanah Deaton: So, I get a version of this question for any cutting-edge development in medicine. I think it's important to understand that the assistance the CMPA provides to its members is discretionary, and the determination as to eligibility for assistance will depend on the facts and circumstances of any given case, so we don't decide if we will assist in advance of the onset of medico-legal difficulties.

In general, though, the CMPA will assist members in the event of medico-legal difficulty arising from their medical professional work. So, what does that mean? It doesn't mean that Health Canada approval of a particular AI tool, for example, is a prerequisite for CMPA assistance, but rather, your eligibility for assistance will depend on whether you were providing medical input or practicing medicine when engaged in the activity that forms the subject of the action or the complaint. Does that make sense?

Mrs. Elisabeth Normand: It does. Thank you, Shoshanah. Our next question, I'll direct it to you, Chantz. So how do I protect my patients’ data and confidentiality with these new AI systems that are being really integrated everywhere?

Mr. Chantz Strong: Yeah, I mean, this is a key issue, and protection of patient data is something that absolutely must be considered. So, the best way to minimize your risk is to do your due diligence. Ask the vendor. Look at the terms and conditions and the privacy policy. Make sure that you can answer questions like: where does the data go? Is it retained? Is it compliant with my privacy legislation (not HIPAA)?

Do they have the right security certifications? Make sure that the contract you have with the vendor speaks to compliance with privacy legislation and has safeguards like encryption in place. Additionally, and you'll hear us say this a few more times: consent. Get consent in a way that's understandable, explaining the purpose and potential uses of the patient's information, and make sure that you've clearly described the privacy risks.

Mrs. Elisabeth Normand: Thank you, Chantz. Lisa. Here's one. So, do I need to tell my patients if I'm using AI to support my work? And what other steps would I have to take in that case?

Dr. Lisa Thurgur: OK, so, I mean, to that, I would say that, you know, the few guidelines that do exist from regulators suggest that consent generally should be obtained before using an AI tool.

I'm sure you probably guessed that I would say that. This could evolve as AI use and applications become more common, and it certainly could depend on context. But as a general rule, it is prudent practice. Now, when you do speak to your patients about the fact that you're using AI in your work, the discussion should involve all the things we talked about, right? Communication about the risks and benefits, the issue of potential bias, and privacy risks. And if patient data may be de-identified and used to improve the algorithm, maybe the vendor told you that, then this should also be explained to the patient. Very important.

Mrs. Elisabeth Normand: Thank you so much, Lisa. Now, we have a good one. I think, Marty, you might be best to answer this one. So, are there any ways to set up AI in my practice that would meet the College and medico-legal standards?

Mr. Martin Lapner: Yeah, yeah. So, I touched on this a little bit earlier, but some Colleges have offered interim guidance relating to the potential use and benefits of AI. And there are a few common threads: AI may be used to supplement a professional's work, not replace it; ensure the work product is accurate; protect privacy.

And I'm going to say this, again. It's the chorus, I think: obtain consent.

So, there's tacit acceptance that AI tools may be used if appropriate steps are taken prior to their use.

Mrs. Elisabeth Normand: Wonderful. Thank you so much, Marty. Now Chantz, I'm curious about this one, also. So, you did mention hallucination in the webinar. What does that mean?

Mr. Chantz Strong: Yeah, I mean, a hallucination just means that the AI made something up. It's a key risk of generative AI, because many times the rest of the text is so compelling. We've seen examples of this in the courts. In one widely reported case, the plaintiff's lawyer filed a brief citing various court decisions to object to a motion. Unfortunately, none of those cases existed. It turns out the plaintiff's lawyer had used generative AI, which invented the cases, and they hadn't been double-checked.

We've also seen generative AI used by some plaintiffs to prepare research briefs in medical negligence cases. These have been rejected by the courts as unreliable. So, hallucination is essentially an error that the AI introduced. It's an issue of accuracy.

Mrs. Elisabeth Normand: Wonderful. Thank you so much, Chantz. Shoshanah, here’s one: What are the medico-legal implications of using AI to write patient letters, for example?

Dr. Shoshanah Deaton: Yeah. OK, so consider using AI the same way you would treat a medical student drafting patient letters.

Make sure the tool knows your standards, your required information, your note format. Ensure the patient consents to the technology and the tool has appropriate privacy safeguards and data protection. But at the end of the day, you are still accountable for the content and the quality of the letter. So, make sure that you have the processes to verify and sign off on it.

While AI could improve efficiency, unedited letters could lead to patient harm, given AI may misinterpret information or introduce bias, or hallucinate. So, this is another reason to review the medical letter, or consider whether the use of AI is appropriate in the circumstances.

Mrs. Elisabeth Normand: Now, Chantz, I'm coming back to you. I have a question here. Am I required to be well-versed in AI to be considered either proficient or capable in my work in the future?

Mr. Chantz Strong: So, AI is a changing field, and right now AI is designed, as I think a few people have said, as an aid for clinicians. It's meant to support and complement other relevant information and tools that you may have available. That said, there are some concepts that physicians should understand, right? So, AI is very data hungry.

So, what does AI do with my patient's data? How could AI be wrong? Will it hallucinate, and do I have processes in place to catch these errors? As AI evolves, the expectation or requirements to use AI in practice may change. This will be similar to when other technologies, like ultrasound or, as we mentioned, X-rays, were introduced into practice.

So, you know, the specific obligations will likely be articulated over time, but it's unclear how long this timeline will be. We may just have to accept the possibility that it'll take a while to get clarity around the legal principles associated with using AI in healthcare.

Mrs. Elisabeth Normand: Thank you so much, Chantz. Lisa, how is AI being used now in medicine? It’s a broad question.

Dr. Lisa Thurgur: It's a broad question. I mean, we can certainly talk about some of the areas. In general, AI technologies are being used, or they're intended to be used, to complement clinical care. That really is the general principle right now. They're being used to do things like reduce administrative burden, increase diagnostic accuracy, improve treatment plans, and forecast outcomes of care, those sorts of general categories. Specifically, I think the most common thing that we're getting calls about at the CMPA, anyway, as Shoshanah mentioned, is AI scribes, right? So, those are being used quite commonly now.

There are different areas like clinical decision support and imaging analysis, for example, diagnostic imaging or adding cameras to instruments for image-guided surgery, as well as broader public health purposes, such as disease surveillance. So, the range of uses is really wide right now.

But I think the general message or the general principle is that currently AI is intended to complement clinical care and that's what we're seeing right now.

Mrs. Elisabeth Normand: Thank you so much, Lisa. I imagine we'll see even broader uses in the future. Well, our next question, it does require a little bit of context. So, even if we're not using identifiers, we can presume that those AI companies can use this aggregated information in a way we don't know about.

So, Marty, I guess I can ask you, what are the medico-legal risks of using AI for diagnostic purposes?

Mr. Martin Lapner: Thanks, Elisabeth. That's a good question. AI can support diagnosis. I mentioned earlier, in my initial comments, that the FDA, for example, has approved some tools. But they're all still intended to complement a clinician's judgment.

So, it's important to be aware of that fact and the risks. You touched on the fact that information may be aggregated. Just because it's aggregated, or de-identified, doesn't mean that there's zero privacy risk: an individual can still be indirectly identified from the information.

So, there is some risk that still exists. We also have to consider that, in the current environment, patient care should still reflect your clinical judgment. So, you need to consider things like potential biases in the tool. And we talked about whether the tool was trained on representative patient data. The tool’s performance, which hopefully you can get from the vendor. And, again, obtaining consent.

Mrs. Elisabeth Normand: Thank you so much, Marty. All right. We do have another one. So, this one I'll direct to you, Shoshanah. Can I use an AI bot to flag and respond to red-flag lab results? And if so, what if the bot misses one? How does my liability differ in this situation compared to if a physician or nurse were reviewing these results and made a similar mistake?

Dr. Shoshanah Deaton: So, this question is a bit difficult to answer because it will depend on the circumstances. But I think the difference between the bot and people is that people are usually regulated by their professional organizations. So, to keep things simple, let's assume that the people are acting within their scope and expertise and doing something that is standard practice.

So, there may not be regulatory guidance about whether a bot is a reasonable thing to use. So, let's use the framework that we talked about. We know the purpose. Now we should consider quality, privacy, bias, and consent. So, how does the bot function? Does it use simple comparison of measurements, or does it use machine learning to identify higher risk patients who should be prioritized for review?

Is there evidence from a clinical study of its reliability? Were clinical study participants representative of the intended patient population? Has the tool been endorsed by a reputable medical organization or approved by a regulator? What safeguards are in place to review the tool and monitor for degradation of the model over time? So, that's just another recap of what we talked about today, which is really helpful.

And I've heard of a tool, along those lines of the bot you mentioned, in the UK. A new type of blood test was designed to prioritize patients at higher risk of cancer, given the increasing number of patient referrals. And the tool uses machine learning to analyze a broad range of signals in the blood, and other information about the patient, to estimate the chance someone has cancer.

It seems to be a decision support tool to help triage patients, but it's currently still being evaluated to demonstrate efficacy. So, to review: a bot may be able to improve care, but it has to be carefully assessed and risks have to be mitigated before implementing it.
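
To picture what the "simple comparison of measurements" version of such a bot might look like, and the safeguard of routing flags to a human rather than acting autonomously, here is an illustrative sketch. Every threshold, test name, and range in it is invented and it is not clinical guidance.

```python
# Illustrative sketch of the "simple comparison of measurements" version of a
# red-flag lab bot. The thresholds, test names, and ranges are invented for
# illustration and are not clinical guidance.

RED_FLAG_RANGES = {
    "potassium_mmol_L": (2.8, 6.0),    # hypothetical critical range
    "hemoglobin_g_L": (70.0, 200.0),   # hypothetical critical range
}

def red_flags(results: dict[str, float]) -> list[str]:
    flags = []
    for test, value in results.items():
        low, high = RED_FLAG_RANGES.get(test, (float("-inf"), float("inf")))
        if not low <= value <= high:
            flags.append(f"{test}={value} outside ({low}, {high})")
    return flags

def triage(results: dict[str, float]) -> str:
    # Safeguard: the bot only prioritizes; a clinician still reviews every
    # result, so a flag the bot misses is caught downstream rather than lost.
    flags = red_flags(results)
    if flags:
        return "urgent physician review: " + "; ".join(flags)
    return "routine review queue"

print(triage({"potassium_mmol_L": 6.8, "hemoglobin_g_L": 120.0}))
```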

Mrs. Elisabeth Normand: That's wonderful. Thank you so much, Shoshanah. Thank you everyone for answering all of these wonderful questions that we had.

And, Lisa, I'm going to hand it over back to you.

Dr. Lisa Thurgur: OK, I guess we're out of time, aren't we? It does go by quickly. I bet there are a lot of questions we didn't get to, Elisabeth. So, I'd like to apologize for that, but please know that if you do have a question about using AI in your practice, you can call the CMPA and speak with a physician advisor like Shoshanah.

And they will help address your questions related to AI or anything at all. Also know that we will be reading the unanswered questions, and this will help us shape future content for webinars and some of our educational projects. So, thanks again for joining us. We do hope that this webinar has provided some helpful tips for you to use AI safely in your practice. Keep well.


Additional resources




Questions? Contact us at [email protected]