Artificial Intelligence, also known as AI, is reshaping various sectors, and healthcare is no exception. They say “to err is human”, but when AI gets it wrong, does that mean “to err is machine”? In Ontario, medical practitioners increasingly rely on a mix of specialized AI medical devices and more general-purpose tools like ChatGPT. These AI-enabled devices, many of which are regulated by Health Canada, are revolutionizing diagnostics, treatment planning, and even patient interactions, and Ontario doctors are being briefed en masse on the use of ChatGPT in clinical settings. As the adoption of these tools grows, it is crucial to examine their impact on medical malpractice law in Ontario. Key areas of focus include how AI influences the standard of care and how it complicates the analysis of causation in medical negligence cases.
Impact of AI on Standard of Care
In medical malpractice cases, the “standard of care” serves as the benchmark against which a healthcare provider’s conduct is measured. Traditionally, this standard has been determined by what a reasonably competent healthcare provider would have done under similar circumstances. However, the growing use of specialized AI tools is likely to reshape this yardstick considerably.
The high accuracy and efficiency of specialized AI tools could create an expectation of their widespread adoption. Just as some lawyers have resisted CaseLines and other post-COVID technological advancements in the courts, some doctors will shy away from using AI technology in their clinical practices. Doctors who opt not to use these tools, especially once they have become an industry norm, may find themselves at risk of being found negligent. For example, if an AI-based diagnostic tool becomes the standard practice for diagnosing a specific condition and a doctor chooses a different, less effective diagnostic method, that doctor risks being found to have fallen below a reasonable standard of care.
ChatGPT and similar conversational AI tools are increasingly serving various administrative and even clinical functions in healthcare settings. Doctors may use them for patient intake, initial symptom gathering, or even rudimentary decision support. These tools could become integrated into the standard workflow, influencing what is considered the standard of care. However, if an AI tool like ChatGPT fails to gather information accurately, and the doctor relies on this incorrect information, a court may find that the doctor did not meet the standard of care. Even where an AI tool has become part and parcel of clinical practice, the standard of care continues to rest with the physician rather than with the tool.
Let’s consider a primary care physician who uses ChatGPT as part of a telemedicine setup to conduct initial patient consultations. The physician integrates ChatGPT to run a basic question-and-answer session with the patient, gathering preliminary medical history and symptoms before the actual video consultation. Suppose a patient reports symptoms like persistent headaches and fatigue during the initial ChatGPT session. After reviewing the AI-gathered information, the physician concludes that the patient is simply stressed and prescribes relaxation techniques without conducting further tests.
Later, it turns out the patient had a serious underlying condition that went undiagnosed, leading to adverse health consequences. If the use of tools like ChatGPT for initial symptom gathering has become standard practice, the court might examine whether relying solely on the AI-generated data without further in-depth investigation meets the standard of care. In this case, the physician’s decision to rely on ChatGPT for preliminary assessment, without additional diagnostic steps, could be scrutinized as falling below the expected standard, particularly if that omission led to a misdiagnosis or delayed treatment of a serious condition.
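The technical layer behind such a workflow can be surprisingly thin. The following is a minimal sketch in Python, assuming the OpenAI client library (version 1.x); the model name, intake prompt, and helper function are illustrative assumptions rather than a description of any real clinical product:

```python
# Minimal sketch of an AI-assisted intake step (illustrative assumptions only).
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

INTAKE_INSTRUCTIONS = (
    "You are an intake assistant. Summarize the patient's reported symptoms, "
    "onset, duration, and relevant history for the physician. "
    "Do not offer a diagnosis or treatment advice."
)

def summarize_intake(patient_answers: str) -> str:
    """Return an AI-generated summary of the patient's reported symptoms."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": INTAKE_INSTRUCTIONS},
            {"role": "user", "content": patient_answers},
        ],
    )
    return response.choices[0].message.content

# The summary is a starting point for the physician's own assessment, not a
# diagnosis; nothing in this pipeline flags concerning symptom combinations.
print(summarize_intake("Persistent headaches for three weeks, fatigue, poor sleep."))
```

Nothing in a pipeline like this validates the patient’s answers or escalates red-flag symptoms on its own; that judgment remains with the physician, which is precisely where the standard of care analysis will focus.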
So, as AI tools become more embedded in healthcare systems, their influence extends into legal assessments of medical practices as well. In scenarios like these, the standard of care may not just be about what a reasonably competent healthcare provider would do, but also about how they integrate and balance AI tools like ChatGPT in their diagnostic and treatment processes. This adds a new dimension to how standard of care is defined and evaluated in medical malpractice lawsuits.
Impact of AI on Causation
The issue of causation in medical malpractice lawsuits involves determining whether there is a nexus between the healthcare provider’s actions and a patient’s negative outcome. At a medical malpractice trial, causation is often the subject of serious and extended debate, frequently involving multiple expert witnesses for both plaintiffs and defendants. AI adds several layers of complexity to this analysis. If a doctor follows a course of action recommended by an AI tool and a patient experiences an adverse outcome, the legal and factual analysis of causation becomes more challenging. Was the use of AI causative of the patient’s injury? Was it the doctor’s use of the AI tool, or could the AI itself be flawed? Could both be contributory factors? Would the manufacturers or programmers of the AI be responsible?
With most medical devices in Ontario regulated by Health Canada, questions of liability might extend beyond the healthcare provider and the manufacturer of the AI tool. If a device regulated and approved by Health Canada fails and leads to a poor patient outcome, what role does Health Canada play, having approved the device? While this area of law is still evolving, it raises important questions about the role of a regulatory body, such as Health Canada or the FDA, in a causation analysis.
Imagine a scenario where ChatGPT is integrated into a hospital’s patient management system and is used to collect initial symptom data for patients admitted to the emergency department. A patient comes in with chest pain and other symptoms, the data is entered into ChatGPT, and the tool categorizes the pain as non-cardiac. Relying partly on this determination, the physician places the patient on a lower priority list for cardiac evaluation. Later, the patient suffers a heart attack with a devastating outcome.
In this case, the determination of causation would require an intricate analysis. Did ChatGPT misinterpret the symptoms, leading to a lower priority triage status? Did the healthcare provider err in relying too heavily on the AI’s determination rather than carrying out more in-depth history, assessment or diagnostic tests? Or was it a combination of both, along with other factors, that caused or contributed to the adverse outcome?
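Part of what makes that analysis intricate is how mechanically an AI-generated label can flow into downstream decisions. The sketch below is purely illustrative Python; the function, the mapping logic, and the priority scale are assumptions for discussion, not a real triage system:

```python
# Hypothetical triage helper: maps an AI model's free-text symptom category
# onto a priority level (1 = most urgent). Illustrative assumptions only.
def assign_priority(ai_category: str) -> int:
    category = ai_category.lower()
    if "cardiac" in category and "non-cardiac" not in category:
        return 1  # flag for urgent cardiac evaluation
    return 3      # lower priority for anything labelled non-cardiac

# A single mislabel ("non-cardiac chest discomfort") silently lowers the
# patient's priority unless a clinician reviews and overrides it.
print(assign_priority("non-cardiac chest discomfort"))  # prints 3
```

In a causation analysis, each link in that chain, including the model’s label, the rule that consumed it, and the clinician’s decision not to override it, is a potential contributing cause.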
Adding to the complexity is the nature of machine learning algorithms, which can be thought of as a “black box” in many cases. This means that while the input (medical data) and the output (medical recommendation) are visible, the process by which the algorithm reaches its conclusion is not easily understood, even by experts. This presents a unique challenge in proving causation. If an adverse patient outcome occurs, how can one definitively say whether the AI made an error if the workings of the algorithm are not fully transparent? This could make it difficult for both plaintiffs and defendants to prove or disprove that the AI was a contributory factor in the patient’s outcome, thereby affecting the apportionment of liability.
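The point can be illustrated with a toy model. The sketch below uses scikit-learn on synthetic placeholder data (not medical data); the feature encoding and network size are arbitrary assumptions:

```python
# Toy illustration of the "black box" problem using scikit-learn.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                 # stand-in for encoded patient data
y = (X[:, 0] + X[:, 3] > 1).astype(int)   # stand-in for "condition present"

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# The input and the output are visible...
print(model.predict_proba(X[:1]))  # a probability for each class, i.e. a recommendation

# ...but the "reasoning" is several thousand learned weights with no
# human-readable explanation of why this particular patient was flagged.
print(sum(w.size for w in model.coefs_), "learned weights")
```

An expert witness asked why the model produced a given recommendation has, at best, the inputs, the outputs, and those weights to work with, which illustrates why proving or disproving an algorithmic error can be so difficult.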
One example of a medical device that uses machine learning algorithms is a system for diabetic retinopathy screening. Diabetic retinopathy is a complication of diabetes that can lead to vision loss. Traditional screening often involves a specialist examining retinal images taken with a fundus camera. The AI-based screening system, however, uses machine learning algorithms to analyze these retinal images and identify whether a patient is showing signs of diabetic retinopathy.
The system is trained on a large dataset of retinal images that have been labeled by experts as either showing signs of diabetic retinopathy or not. Through this training, the machine learning model learns to recognize the patterns and features that are indicative of the disease. Once the system is trained, it can analyze new retinal images to screen for diabetic retinopathy, often with an accuracy rate comparable to human experts.
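As a rough illustration of that training-and-screening pattern, the following Python sketch uses PyTorch with random placeholder tensors standing in for labelled retinal images; the architecture, image size, and training loop are assumptions, and a real screening system would involve a far larger dataset and rigorous validation:

```python
# Sketch of the train-then-screen pattern described above (illustrative only).
import torch
from torch import nn

# Placeholder "retinal images": 64 RGB images of 128x128 pixels, with expert
# labels 0 (no retinopathy) or 1 (signs of retinopathy).
images = torch.rand(64, 3, 128, 128)
labels = torch.randint(0, 2, (64,))

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),  # two classes: retinopathy vs. no retinopathy
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training: the model adjusts its weights to reproduce the experts' labels.
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# Screening a new image: the output is a prediction that a clinician may or
# may not review, depending on how the surrounding workflow is designed.
new_image = torch.rand(1, 3, 128, 128)
prediction = model(new_image).argmax(dim=1)
print("signs of retinopathy" if prediction.item() == 1 else "no retinopathy")
```

The learned weights, the training dataset, and any later retraining are all part of the product that a court may eventually have to scrutinize.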
In a medical malpractice case involving such a screening system, the issue of causation would likely become far more complex than in traditional cases. If, for instance, a patient was misdiagnosed by the system and subsequently suffered vision loss, the chain of responsibility would have to be dissected carefully. An expert witness would need to evaluate whether the doctor reasonably relied on the system’s findings. Was it a failure of the algorithm, or did the healthcare provider fail to incorporate other clinical assessments adequately? It is also conceivable that the machine learning model made an error because of the quality of the images, the patient’s unique retinal features, or other variables (e.g., maintenance of the machine or updating of its software), complicating the causation analysis further. Moreover, because diabetic retinopathy screening systems and similar tools are based on machine learning algorithms, they can evolve and improve over time. This ‘moving target’ nature of the technology could pose additional challenges in a causation analysis. An expert witness may need to consider the version of the system used at the time of the misdiagnosis. Was it up to date with the latest training data? Had it been validated or calibrated adequately before being deployed in a clinical setting?
Impact of AI on Expert Witness Testimony
The integration of Artificial Intelligence in healthcare in Ontario could significantly affect the role of expert witnesses in medical malpractice lawsuits. Traditionally, medical experts have based their analyses on long-standing clinical guidelines, empirical research, the collective wisdom of the medical community, and their own clinical experience to assess both standard of care and causation. However, as AI tools become more entwined with healthcare delivery, these experts will likely have to adapt their analyses to account for these technological changes.
In terms of standard of care, expert witnesses would now have to consider whether the use or non-use of an AI tool is consistent with current medical practices in Ontario. For instance, if a specialized AI tool has become commonplace in diagnosing a specific condition, an expert witness may need to assess not only whether the physician’s actions were reasonable but also whether the physician reasonably decided to use or not use the AI tool in question. This could introduce a new layer of complexity to their analysis, as the witness would need to be versed not just in medical procedures but also in the capabilities and limitations of relevant AI technologies.
Similarly, in examining causation, an expert witness will need to untangle more intricate chains of events when AI tools are involved. If an AI-based diagnostic tool offers a recommendation that a healthcare provider follows, and the patient experiences a poor outcome, determining causation becomes multi-dimensional. The expert witness will need to assess whether the fault lies with the AI tool, the healthcare provider, or some combination of the two. Additionally, the expert may need to consider the role of Health Canada’s regulatory approval in assessing the reliability and credibility of the AI tool in question.
Overall, the inclusion of AI in healthcare in Ontario requires expert witnesses to expand their areas of expertise and update their methodologies for analyzing medical malpractice cases. It is no longer just about medical knowledge and clinical practice but also about understanding the intricate interplay between healthcare providers and increasingly sophisticated technology.
Conclusion
The adoption of AI technologies, including both Health Canada-regulated medical devices and general-purpose conversational agents like ChatGPT, is bringing both promise and complexity to healthcare in Ontario. These tools have the potential to elevate the standard of care by improving diagnostic accuracy and treatment efficacy, but they could also complicate traditional legal frameworks for assessing medical malpractice. The shifting landscape raises important questions about what constitutes standard care and how causation is determined when AI tools are involved. It even puts a spotlight on the role of regulatory bodies like Health Canada. As AI technologies continue to infiltrate healthcare, we can expect to see new case law emerge that addresses these issues directly. Over time, this will provide a more nuanced understanding of how the standard of care is evolving in the age of AI and machine learning in medical practice. Until then, traditional legal principles will need to be adapted and interpreted in light of these technological advancements.