This article is adapted from “A Primer on Law, Risk and AI in Health Care,” published in Healthcare and Life Sciences Law Committee Update (Vol. 3 no. 1, Sept. 2018), and is reproduced by permission of the International Bar Association, London, UK. © International Bar Association.
Imagine your mobile phone could scan patients and immediately provide a diagnosis, like something out of Star Trek, or a robotic medic could perform procedures on patients without human assistance. While these innovations might still seem like science fiction, technological developments currently transforming the healthcare sector, including artificial intelligence (AI), robotics, and big data, are poised to revolutionize patient care.
While practical guidance for practitioners on the use of AI technologies in healthcare is still scarce,1 it is not too early for physicians to gain an understanding of the potential benefits and challenges that AI brings to patient care, and of the possible medical-legal risks associated with using AI technologies.
What is artificial intelligence?
AI can be broadly defined as the capacity of a machine or computer to mimic intelligent human thought processes and learn new information.2 “Machine learning” allows computers to learn from experience without being explicitly programmed to do so; common applications include image and speech recognition. “Deep learning” involves processing information and learning patterns in ways that can be tied to big data analytics.2
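To make the distinction concrete, the brief sketch below (not drawn from the article; the data, feature names, and thresholds are entirely hypothetical) shows the essence of machine learning: the program is given labelled examples rather than diagnostic rules, and infers a rule itself.

```python
# Minimal illustration of machine learning: the classifier below is never
# given a diagnostic rule; it infers one from labelled examples.
# All data here are hypothetical and for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training examples: [temperature_degC, heart_rate_bpm]
examples = [[36.8, 70], [37.0, 74], [39.2, 110], [38.9, 104]]
labels = [0, 0, 1, 1]  # 0 = well, 1 = unwell

model = DecisionTreeClassifier().fit(examples, labels)  # the "learning" step

# The model now applies the rule it inferred, not one a programmer wrote.
print(model.predict([[39.0, 108]]))  # -> [1]
```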
Opportunities and challenges
With time, AI technologies are expected to improve healthcare and change the way it is delivered.3 For example, AI is being explored, alongside other tools, as a means of increasing diagnostic accuracy, improving treatment planning, and forecasting outcomes of care.4 AI has shown particular promise for clinical application in image-intensive fields, including radiology, pathology, ophthalmology, dermatology, and image-guided surgery.3 However, evidence about the effectiveness and reliability of practical applications of AI remains limited. Despite the attention AI is receiving, many of these technologies have not yet matured enough to determine whether they can meet their potential. For example, suicide prediction models have largely been ineffective to date.5
Other challenges include AI’s inability to explain its reasoning processes, otherwise known as the “black box” effect.6 The utility of AI in patient care can be limited when an AI-assisted diagnosis is not accompanied by information that allows its reliability to be verified. The dataset some AI technologies use to “learn” can also introduce bias. For example, a dataset that unintentionally excludes patients with certain backgrounds, conditions, or characteristics may not be reliable for broader segments of the population.6
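As a toy illustration of this kind of dataset bias (not from the article; the simulated subgroups, features, and numbers are entirely hypothetical), the sketch below trains a model on data from one patient subgroup only, then evaluates it on a subgroup the training data never included:

```python
# Hypothetical sketch of dataset bias: a model trained only on subgroup A
# performs markedly worse on subgroup B, whose feature-outcome relationship
# differs. All data are simulated; nothing here reflects a real clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, shift):
    """Simulate a subgroup whose condition threshold sits at `shift`."""
    x = rng.normal(loc=shift, scale=1.0, size=(n, 1))
    noise = rng.normal(scale=0.5, size=n)
    y = (x[:, 0] + noise > shift).astype(int)  # 1 = condition present
    return x, y

x_a, y_a = simulate(1000, shift=0.0)  # subgroup A: well represented
x_b, y_b = simulate(1000, shift=2.0)  # subgroup B: excluded from training

model = LogisticRegression().fit(x_a, y_a)  # learns subgroup A's threshold only

print(f"accuracy on subgroup A: {model.score(x_a, y_a):.2f}")  # high (~0.85)
print(f"accuracy on subgroup B: {model.score(x_b, y_b):.2f}")  # near chance (~0.5)
```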
Measured approach to AI
When considering whether to use AI in your practice, it is important to be familiar with when and how it should be used, and to make that decision based on the circumstances of each patient.
While the regulation of AI remains in development, some medical regulatory authorities (Colleges) and professional associations have issued interim guidelines. For example, the College of Physicians and Surgeons of British Columbia has suggested physicians apply a grading system to assess the quality of applications (or apps) that incorporate AI.7 The College suggests using the App Evaluation Model developed by the American Psychiatric Association,8 a five-step assessment covering an app’s business model and potential advertising conflicts of interest, privacy and security, the evidence base that informs the algorithm, ease of use, and interoperability.
The Canadian Medical Association’s Guiding Principles for Physicians Recommending Mobile Health Applications to Patients9 may also be a helpful resource. The objective of using an AI-based technology should be to enhance patient care and complement the physician-patient relationship. Physicians using AI need to be mindful of their legal and professional obligations, and should discuss with the patient the appropriateness of using the AI technology and any associated privacy risks. The CMA also suggests considering whether there is evidence of an app’s safety and effectiveness, and whether it is endorsed by a professional organization, is easy to use, and demonstrates a high standard of security.
While endorsement of an AI technology by a reputable professional or regulatory organization may be a factor in evaluating whether you have complied with your professional and legal obligations, you should still review and seek advice on its suitability for clinical practice, including consideration of the following, among other things:
- What are the terms of use?
- Has the AI technology been subject to rigorous evaluation of its accuracy, consistency, and reliability?
- Does it use appropriate privacy and confidentiality safeguards and policies (e.g., patient consent, encryption, password protection)?
Complementing clinical judgment
AI offers information and recommendations based on the aggregation of a wide variety of data sources. Nevertheless, physicians must still exercise clinical judgment when making a final decision about clinical care. A CMA survey found that 6 in 10 Canadians are interested in the potential benefits of AI in healthcare, but would trust a diagnosis only if it were made by a physician.10
Before deciding to use an AI-based technology in your medical practice, it is important to evaluate any findings, recommendations, or diagnoses suggested by the tool. While AI can provide information for you to consider, you should ensure that the actual medical care provided to the patient reflects your own recommendations, based on objective evidence and sound medical judgment.
Most AI applications are designed to be clinical aids used by clinicians as appropriate to complement other relevant and reliable clinical information and tools. In today’s environment, and for the foreseeable future, AI is not intended to replace a physician’s clinical experience and thoughtful analysis of a patient’s condition.
The bottom line
- Evaluate whether the use of the AI tool is appropriate in the circumstances of each patient.
- Critically review and assess whether AI-based technologies are suited for the intended use and the nature of your practice. Consider the quality, effectiveness, and functionality of the technology; robustness of the database; reliability of the medical evidence informing the algorithm; privacy and confidentiality requirements; and applicable policies or guidelines of your College or health institution.
- AI technologies are currently intended to complement clinical care by informing your decision-making. Continue to exercise professional judgment in making clinical decisions and treatment recommendations aided by AI technologies, in accordance with the expected standard of care.
References
- Crolla D, Lapner M. A primer on law, risk and AI in healthcare. Healthcare and Life Sciences Law Committee Update. 2018 Sept;3(1).
- Pinnington D. Artificial Intelligence: What is AI and will it really replace lawyers? Lawyers’ Professional Indemnity Company (LawPRO). 2018;17(1). Available from: https://www.practicepro.ca/2018/01/artificial-intelligence-what-is-ai-and-will-it-really-replace-lawyers/
- Naylor D. On the Prospects for a (Deep) Learning Health Care System. JAMA. 2018;320(11):1099–1100. doi:10.1001/jama.2018.11103. See also: Government of Canada, Standing Senate Committee on Social Affairs, Science and Technology. Challenge Ahead: Integrating Robotics, Artificial Intelligence and 3D Printing Technologies into Canada’s Healthcare Systems. 2017 Oct. Available from: https://sencanada.ca/content/sen/committee/421/SOCI/Reports/RoboticsAI3DFinal_Web_e.pdf
- Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28(6):495–498. doi:10.1136/bmjqs-2019-009484
- Belsher BE, Smolenski DJ, Pruitt LD, et al. Prediction Models for Suicide Attempts and Deaths: A Systematic Review and Simulation. JAMA Psychiatry. 2019 Mar 13. doi:10.1001/jamapsychiatry.2019.0174
- Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28:231–237. doi:10.1136/bmjqs-2018-008370
- College of Physicians and Surgeons of British Columbia. Prescribing apps – the challenge of choice. College Connector. 2018 Nov/Dec;6(6). Available from: https://www.cpsbc.ca/for-physicians/college-connector/2018-V06-06/10
- American Psychiatric Association. App Evaluation Model. [Accessed 2019 May]. Available from: https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/app-evaluation-model
- Canadian Medical Association. Guiding Principles for Physicians Recommending Mobile Health Applications to Patients. CMA Policy, 2015. Available from: https://policybase.cma.ca/en/viewer?file=%2fdocuments%2fPolicypdf%2fPD15-13.pdf#phrase=false
- Canadian Medical Association. Health care system needs to catch up to the requirements of the Google Generation. 2018 Aug 1 [cited 2019 May 17]. Available from: https://www.cma.ca/health-care-system-needs-catch-requirements-google-generation