The integration of Artificial Intelligence (AI) into healthcare presents
transformative opportunities to enhance patient outcomes, optimize clinical
workflows, and reduce costs. However, the rapid adoption of AI technologies also
raises critical legal and ethical questions that must be addressed to ensure
safe, equitable, and responsible use. This article examines the complex legal
landscape surrounding AI in healthcare, including regulatory challenges,
liability concerns, intellectual property issues, and data privacy protections.
In parallel, it explores the ethical implications of AI, focusing on patient
autonomy, algorithmic bias, transparency, and the human-AI relationship in
medical decision-making. Through a combination of theoretical analysis and
real-world case studies, this article underscores the importance of developing
robust legal frameworks and ethical guidelines to navigate the challenges posed
by AI in healthcare.
The article concludes by discussing future directions for
legal and ethical governance in this evolving field, emphasizing the need for
collaboration among policymakers, healthcare professionals, and AI developers to
ensure that AI technologies are deployed in a manner that upholds the highest
standards of patient care and societal benefit.
Background on AI in Healthcare
Increasing patient demand, chronic disease, and resource constraints put
pressure on healthcare systems. Simultaneously, the use of digital health
technologies is rising, and data is expanding across all healthcare settings.
If this data were properly harnessed, healthcare practitioners could focus on
the causes of illness and keep track of the success of preventative measures
and interventions. Policymakers, legislators, and other decision-makers should
therefore be aware of this potential.
Computer and data scientists and clinical entrepreneurs argue that, for this to
happen, one of the most critical aspects of healthcare reform will be
artificial intelligence (AI), especially machine learning.
Artificial intelligence (AI) is a term used in computing to describe a computer
program's capacity to execute tasks associated with human intelligence, such as
reasoning and learning. It also includes processes such as adaptation, sensory
understanding, and interaction. Traditional computational algorithms, simply
expressed, are software programmes that follow a set of rules and consistently
do the same task, such as an electronic calculator: "if this is the input, then
this is the output."
On the other hand, an AI system learns the rules (function)
through training data (input) exposure. AI has the potential to change
healthcare by producing new and essential insights from the vast amount of
digital data created during healthcare delivery.[1]
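The contrast drawn above, between a traditional program that follows fixed rules and an AI system that learns its rules from training data, can be sketched in a few lines of illustrative Python. This is a hypothetical toy example, not any real clinical system: the `calculator` and `fit_line` functions and the data are invented purely for illustration.

```python
def calculator(a, b):
    # Traditional algorithm: the rule is written by hand.
    # "If this is the input, then this is the output."
    return a + b

def fit_line(xs, ys):
    # A minimal "learning" step: estimate a linear rule
    # (slope and intercept) from example input/output pairs
    # by ordinary least squares.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: the underlying rule (y = 2x + 1) is never written
# into the program; it is inferred from the examples.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(calculator(2, 3))        # fixed, hand-written rule: prints 5
print(slope, intercept)        # learned rule: prints 2.0 1.0
```

The addition rule is stated explicitly by the programmer, while the linear relationship is recovered from the example pairs alone; in essence, this learning of a function from data is what distinguishes machine-learning systems from conventional software.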
The current state of AI in healthcare is characterized by rapid growth and
innovation. AI is being integrated into diverse healthcare applications such as
imaging analysis, drug discovery, virtual health assistants, and robotic
surgery. Major healthcare institutions and tech companies are investing heavily
in AI research and development, aiming to improve patient care, reduce costs,
and streamline operations. Despite its potential, the deployment of AI in
healthcare is still in its early stages, with many challenges to overcome,
including technical limitations, regulatory hurdles, and ethical concerns.
Importance of Legal and Ethical Considerations
As AI continues to advance and permeate healthcare, it is crucial to address the
legal and ethical implications of its deployment. The integration of AI into
medical practice raises significant questions about the regulation of these
technologies, the allocation of liability in case of errors, and the protection
of patient data. Without clear legal frameworks, the use of AI could lead to
unintended consequences, including patient harm, privacy breaches, and
inequitable access to care.
Ethical considerations are equally important, as AI challenges traditional
notions of patient autonomy, fairness, and transparency. Issues such as
algorithmic bias, the "black box" nature of some AI systems, and the potential
for AI to replace human judgment in critical medical decisions must be carefully
considered. Ensuring that AI is used ethically in healthcare requires ongoing
dialogue among stakeholders, including healthcare professionals, AI developers,
policymakers, and patients. By addressing these legal and ethical concerns, we
can harness the full potential of AI in healthcare while safeguarding patient
rights and promoting trust in these emerging technologies.
Legal Implications of AI in Healthcare
Liability Issues
One of the most pressing legal concerns surrounding AI in healthcare is determining liability when AI systems cause harm or errors. As AI systems increasingly influence medical decisions, questions arise about who should be held responsible when things go wrong.
- Responsibility for AI-Related Errors or Harm: In cases where AI systems make incorrect diagnoses or treatment recommendations, determining liability can be complex. Should the healthcare provider who relied on the AI system be held accountable, or should the developers of the AI software bear the responsibility? Additionally, what happens when the AI system functions as intended, but the outcome is still harmful? These scenarios challenge traditional legal concepts of liability, which are based on human error or negligence.
- Legal Ramifications for Healthcare Providers and AI Developers: Healthcare providers may face legal action if they rely too heavily on AI systems without exercising proper clinical judgment, particularly if the AI's recommendations lead to patient harm. On the other hand, AI developers could be held liable for flaws in their algorithms, especially if those flaws were foreseeable or preventable. The legal landscape is still evolving in this area, with courts and lawmakers grappling with how to allocate responsibility among the various parties involved.
Intellectual Property (IP) Concerns
The use of AI in healthcare raises important questions about the ownership and protection of intellectual property (IP). As AI systems generate new medical insights, designs, and treatments, the traditional IP framework must adapt to accommodate these developments.
- Ownership of AI-Generated Innovations: AI systems can create new drugs, diagnostic methods, and even treatment protocols. However, determining who owns these innovations can be challenging. Should the AI's creators, the healthcare providers who use the AI, or the patients whose data contributed to the AI's learning process hold the rights? This question is particularly complex in cases where AI-generated innovations are created autonomously, without direct human input.
- IP Protection for AI Algorithms and Data: Protecting the underlying algorithms that power AI systems is another critical concern. Developers may seek patents or trade secret protection for their AI models, but the fast-paced nature of AI development can make it difficult to establish and enforce these rights. Additionally, the data used to train AI systems, often sourced from patients, raises concerns about ownership and use rights. Balancing the need for innovation with the protection of individual rights is an ongoing challenge in the healthcare industry.
Data Privacy and Security
AI's reliance on large datasets, often containing sensitive patient information, heightens the importance of data privacy and security in healthcare. Ensuring compliance with data protection laws and safeguarding against breaches are critical to maintaining patient trust and the integrity of AI systems.
- Compliance with Data Protection Laws (e.g., GDPR, HIPAA): AI systems in healthcare must comply with stringent data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the United States. These laws govern how personal health information (PHI) can be collected, processed, and shared. Compliance involves not only securing patient consent but also implementing robust data protection measures to prevent unauthorized access and ensure data integrity.
- Risks of Data Breaches and Unauthorized Access: The vast amounts of data required for AI systems, coupled with the interconnected nature of healthcare networks, make them attractive targets for cyberattacks. Data breaches can lead to significant legal consequences, including fines, lawsuits, and reputational damage. Furthermore, the use of AI can increase the risk of unauthorized access if the systems are not properly secured, potentially compromising patient privacy and trust.
Ethical Implications of AI in Healthcare
Patient Autonomy and Consent
One of the fundamental ethical principles in healthcare is respecting patient autonomy, which includes the right to make informed decisions about their treatment. The integration of AI into healthcare introduces new challenges in ensuring that patients fully understand and consent to AI's role in their care.
- Ensuring Informed Consent When Using AI in Treatment: AI systems often analyze complex data and provide recommendations that may not be easily understood by patients. Ensuring informed consent requires that patients are clearly informed about how AI will be used in their diagnosis or treatment, including the potential risks, benefits, and limitations of AI-driven decisions. This transparency is crucial for maintaining trust and ensuring that patients are making truly informed choices.
- The Role of Patients in Decision-Making Processes Involving AI: As AI becomes more prevalent, it is essential to define the role of patients in decision-making processes. Patients should be involved in discussions about the use of AI in their care, and their preferences and values should be respected. This includes giving patients the option to opt-out of AI-driven care if they prefer traditional methods. Involving patients in these decisions supports their autonomy and ensures that AI is used in a manner that aligns with their individual needs and expectations.
Bias and Fairness in AI
AI systems are only as good as the data they are trained on, and if this data is biased, the AI can perpetuate or even exacerbate existing inequities in healthcare.
- Addressing Algorithmic Bias and Its Impact on Marginalized Groups: AI systems can unintentionally reflect the biases present in the data they are trained on, leading to unfair treatment of marginalized groups. For example, if an AI system is trained on data primarily from a specific demographic, it may not perform as well for patients from different backgrounds, potentially leading to misdiagnosis or inadequate treatment. Addressing these biases requires careful selection and curation of training data, as well as ongoing monitoring to ensure that AI systems are equitable in their application.
- Strategies to Ensure Fairness and Equity in AI Applications: To ensure fairness, developers and healthcare providers must implement strategies that actively identify and mitigate bias in AI systems. This includes using diverse datasets for training, conducting regular audits of AI systems, and involving diverse stakeholders in the development and testing phases. Additionally, healthcare organizations should adopt guidelines and best practices that prioritize equity in AI deployment, ensuring that all patients benefit equally from these technologies.
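One of the audit strategies mentioned above, comparing a model's accuracy across demographic groups, can be sketched as follows. This is a hypothetical illustration with invented group labels and records, not a real auditing tool or dataset.

```python
from collections import defaultdict

def audit_by_group(records):
    """records: list of (group, predicted, actual) tuples.
    Returns the fraction of correct predictions per group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Illustrative records: each tuple is (demographic group,
# model prediction, ground-truth diagnosis).
records = [
    ("group_a", "benign", "benign"),
    ("group_a", "malignant", "malignant"),
    ("group_a", "benign", "benign"),
    ("group_a", "benign", "malignant"),
    ("group_b", "benign", "malignant"),
    ("group_b", "benign", "malignant"),
    ("group_b", "malignant", "malignant"),
    ("group_b", "benign", "benign"),
]
rates = audit_by_group(records)
print(rates)  # accuracy per group; a large gap flags potential bias
```

In practice such audits would use clinically meaningful metrics (e.g. false-negative rates, not just raw accuracy) and statistically adequate sample sizes, but the principle of disaggregating performance by group is the same.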
The Human-AI Interaction
The relationship between human clinicians and AI systems is a crucial aspect of ethical AI deployment in healthcare. While AI offers many advantages, it should complement, rather than replace, the human elements of care.
- Balancing Human Judgment with AI Recommendations: AI can provide valuable insights and support decision-making, but it should not replace the clinical judgment of healthcare providers. Ethical AI use requires a balance where AI acts as a tool to enhance human decision-making, rather than supplanting it. Clinicians must retain the final authority in treatment decisions, ensuring that AI recommendations are critically evaluated in the context of each patient's unique circumstances.
- The Role of Empathy and Human Touch in AI-Driven Healthcare: Healthcare is not just a science but also an art that involves empathy, compassion, and the human touch. These qualities are essential for building trust and rapport with patients, something that AI, no matter how advanced, cannot replicate. As AI systems take on more tasks in healthcare, it is important to preserve the human aspects of care, ensuring that patients do not feel reduced to mere data points in an algorithm.
Legal and Ethical Challenges
While AI offers many benefits, its implementation has also led to legal disputes
and ethical dilemmas. Examining these cases provides insight into the challenges
that need to be addressed for AI to be used effectively and responsibly in
healthcare.
Case Studies Highlighting Legal Disputes and Ethical Dilemmas
- AI in Diagnostics and Liability Issues: In one notable case, an
AI system used for diagnosing a rare condition provided an incorrect
diagnosis, leading to inappropriate treatment. The ensuing legal battle
focused on whether the healthcare provider or the AI developer should be
held liable for the error, highlighting the complexities of assigning
responsibility in AI-driven care.
- Bias in AI-Driven Treatment Recommendations: Another case
involved an AI system used to recommend treatment plans for cancer patients.
It was discovered that the system's recommendations were biased against
patients from certain demographic groups, leading to inferior treatment
options for these individuals. This case underscores the ethical imperative
to address bias in AI systems to ensure fair and equitable care.
Analysis of How These Challenges Were Addressed:
These cases illustrate the need for clear legal frameworks and ethical
guidelines to govern the use of AI in healthcare. They also highlight the
importance of continuous monitoring and auditing of AI systems to detect and
correct biases, as well as the need for collaboration between AI developers,
healthcare providers, and regulators to navigate the legal and ethical
complexities of AI deployment.
Indian Case Law:
- Justice K.S. Puttaswamy (Retd.) v. Union of India (2017):
This landmark case established the right to privacy as a fundamental right under
the Indian Constitution. The Supreme Court's ruling has significant implications
for AI in healthcare, particularly concerning the collection, storage, and
processing of patient data. Any AI system that handles personal health data must
comply with privacy norms established by this ruling. Future AI-related
disputes, especially those involving data breaches or unauthorized data usage,
are likely to reference this judgment.[5]
Judgement of the Supreme Court
On August 24, 2017, a nine-judge bench of the Supreme Court of India delivered a
unanimous verdict in the Puttaswamy case, affirming that the right to privacy is
a fundamental right guaranteed by the Constitution. Although the verdict was
unanimous, six separate concurring opinions were delivered, reflecting various
perspectives on privacy. Justice Chandrachud authored the main judgment, joined
by Justices Khehar, R.K. Agarwal, and Abdul Nazeer. The other five justices—Chelameshwar,
Bobde, Nariman, Sapre, and Kaul—offered concurring judgments, adding depth to
the Court's reasoning on privacy in modern society.
Justice Nariman emphasized the historical significance of dissenting judgments
in shaping constitutional interpretations, particularly in regard to Article 21,
which guarantees the right to life and personal liberty. He argued that privacy
is an inalienable right and that its scope must evolve in line with
technological advancements and changing social contexts. According to Nariman,
the decision in M.P. Sharma should not be read as denying the existence of a
privacy right under Part III of the Constitution, particularly because the
framers intended the Constitution to be an organic document capable of adapting
to new circumstances. Privacy, he argued, is a multifaceted right, including
physical privacy, informational privacy, and the autonomy of personal
decision-making. The right to privacy begins where liberty ends, underscoring
the importance of protecting personal autonomy.
Justice Sapre connected the concept of dignity, as enshrined in the Preamble of
the Constitution, with the need to protect individual autonomy. He viewed the
rights to Liberty, Equality, and Fraternity as interconnected and essential to
understanding privacy. For Sapre, privacy is an inherent, inalienable right that
allows individuals to live with dignity. While this right is subject to
reasonable restrictions for the sake of public interest, it remains a core
principle of personal liberty.
Justice Kaul took the position that the Constitution is a living document that
must be interpreted in a manner that reflects contemporary societal needs. He
defined privacy as a right that shields individuals from both State and
non-State actors, enabling them to make autonomous life choices. He emphasized
that privacy is not merely a common law right but an essential component of
human dignity and personal development. Kaul was particularly concerned with the
risks posed by technological advancements, noting that privacy is now more
vulnerable to intrusion through surveillance, profiling, and data collection. He
argued for a balanced regime of data protection that respects individual privacy
while accommodating legitimate State interests, particularly in matters of
national security.
The Court's decision in Puttaswamy underscores the importance of privacy in a
democracy, highlighting that autonomy over personal information is a key aspect
of individual freedom. Justice Kaul noted that fundamental rights are not
subject to majoritarianism; even if only a small section of the population is
affected, privacy must still be upheld. He also stressed that sexual autonomy is
a vital aspect of privacy, which further broadens the scope of the judgment's
implications in a digital age.
This case has far-reaching implications, especially in the age of digital health
data, where AI systems play an increasing role in healthcare. AI technologies,
which process vast amounts of patient data, will be required to comply with the
stringent privacy protections outlined in this ruling, and the judgment is
likely to serve as a foundation for future legal frameworks governing data
privacy in healthcare.[6]
- Dr. Laxman Balkrishna Joshi v. Dr. Trimbak Bapu Godbole (1969):
This case further refined the standards of medical negligence in India. The
Supreme Court held that a doctor is expected to exercise reasonable care and
skill. As AI becomes more prevalent, the courts might have to determine what
constitutes "reasonable care" when AI tools are used. For example, if an AI
system provides an incorrect diagnosis, the question may arise as to whether the
doctor's reliance on AI was reasonable or if they should have exercised
independent judgment.[7]
A medical practitioner, when offering treatment, must possess adequate skill
and knowledge and must exercise care both in deciding on the treatment and in
administering it. In this case, the first respondent's son suffered a femur
fracture on May 6, 1953, and was transported to the appellant's hospital after
an eleven-hour journey. The appellant directed two morphia injections, of
which only one was administered, assured the first respondent of the patient's
stability, and the patient, his condition having deteriorated, died at 9 P.M.
The appellant attributed the death to fat embolism.
The trial and High Courts found the appellant negligent for reducing the
fracture without anesthesia, using excessive force, and causing shock, leading
to death. The appellant's appeal argued that the High Court wrongly relied on
medical texts instead of expert testimony and that the findings were based on
misunderstanding and conjecture.
The Court upheld the High Court's decision, confirming the appellant's
negligence and the damages awarded.
Held: (1) There was nothing wrong in the High Court emphasising the opinions
of authors of well-recognised medical works instead of basing its conclusions
on the expert's evidence, as it was alleged by the appellant that the expert
was a professional rival of the appellant and was, therefore, unsympathetic
towards him. (2) The trial court and the High Court were right in holding that
the appellant was guilty of negligence and wrongful acts towards the patient
and was liable for damages, because the first respondent's case, that what the
appellant did was reduction of the fracture without giving anaesthetic and not
mere immobilisation with light traction (as was the appellant's case), was
more acceptable and consistent with the facts and circumstances of the case.
The findings included:
- Medical Knowledge: The first respondent, a medical practitioner, was present throughout the treatment and understood it.
- Reason for Leaving: The first respondent left for Dhond only after being assured that the fracture reduction was complete and the patient's condition was stable.
- Morphia Effects: The patient was likely unconscious from the morphia, contradicting the appellant's claim of patient cooperation. The second morphia injection was not given due to the first injection's unexpected effects.
- Inadequate Explanation: The appellant failed to address specific allegations about excessive force and lack of anesthesia in his response to the Medical Council.
- Apology Admission: The appellant's later apology indicated acknowledgment of mistakes, undermining his initial claim of embolism as the cause of death.
- True Cause of Death: The true cause of death was shock from the appellant's treatment, not embolism, as the symptoms did not align with embolism and were not observed.
Civil Appellate Jurisdiction: Civil Appeal No. 547 of 1965.
Appeal by special leave from the judgment and decree dated February 25, 27, 1963
of the Bombay High Court in First Appeal No. 552 of 1968.
Purshottamdas Tricumdas and I. N. Shroff, for the appellant.
Bishan Narain, B. Dutta and J. B. Dadachatnji, for the respondents.
The Judgment of the Court was delivered by Shelat, J. This appeal by special
leave raises the question of the liability of a surgeon for alleged neglect
towards his patient. It arises from the following facts.
At about sunset on May 6, 1953, Ananda, the son of respondent 1, aged about
twenty years, met with an accident on the sea beach at Palshet, a village in
Ratnagiri District, which resulted in the fracture of the femur of his left
leg. Since the sea beach was at a distance of about 1½ miles from the place
where he and his mother lived at the time, it took some time to bring a cot
and remove him to the house. Dr. Risbud, a local physician, was called at
about 8:30 or 8:45 P.M. The only treatment he gave was to tie wooden planks on
the boy's leg with a view to immobilise it and give rest.
Next day, he visited the boy and though he found him in good condition, he
advised his removal to Poona for treatment.
On May 8, 1953, Dr. Risbud procured MacIntyre splints and substituted them for
the said wooden planks. A taxi was thereafter called in which the boy Ananda
was placed in a reclining position and he, along with respondent 2 and Dr.
Risbud, started for Poona at about 1 A.M. They reached the city after a
journey of about 200 miles at about 11:30 A.M.
on May 9, 1953. By that time respondent 1 had come to Poona from Dhond where
he was practising as a medical practitioner. They took the boy first to
Tarachand Hospital where his injured leg was screened. It was found that he
had an overlapping fracture of the femur which required pin traction. The
respondents thereafter took the boy to the appellant's hospital where, in his
absence, his assistant, Dr. Irani, admitted him at 2:15 P.M. Some time
thereafter the appellant arrived and after a preliminary examination directed
Dr. Irani to give two injections of 1/8th grain of morphia and 1/200th grain
of Hyoscine H.B. at an hour's interval. Dr. Irani, however, gave only one
injection.
Ananda was thereafter removed to the X-ray room on the ground floor of the
hospital where two X-ray photos of the injured leg were taken. He was then
removed to the operation theatre on the upper floor where the injured leg was
put into plaster splints. The boy was kept in the operation theatre for a
little more than an hour and at about 5:30 P.M., after the treatment was over,
he was removed to the room assigned to him. On an assurance given to
respondent 1 that Ananda would be out of the effect of morphia by 7 P.M.,
respondent 1 left for Dhond. Respondent 2, however, remained with Ananda in
the said room.
At about 6:30 P.M. she noticed that he was finding difficulty in breathing and
was having cough. Thereupon Dr. Irani called the appellant who, finding that
the boy's condition was deteriorating, started giving emergency treatment
which continued right until 9 P.M., when the boy expired. The appellant
thereupon issued a certificate, Ext. 138, stating therein that the cause of
death was fat embolism.
Their case further was that "While putting the leg in plaster the defendant used
manual traction and used excessive force for this purpose, with the help of
three men although such traction is never done under morphia alone, but done
under proper general anesthesia. This kind of rough manipulation is calculated
to cause conditions favourable for embolism or shock and prove fatal to the
patient. The plaintiff No. 1 was given to understand that the patient would be
completely out of morphia by 7 P.M. and that he had nothing to worry about.
Plaintiff No. 1 therefore left for Dhond at about 6 P.M. the same evening." In
his written statement the appellant denied these allegations and stated that the
boy was only under the analgesic effect of the morphia injection when he was
taken to the X-ray room and his limb was put in plaster in the operation
theatre. Sometime after the morphia injection the patient was taken to the X-ray
room where X-ray plates were taken. The boy was cooperating satisfactorily. He
was thereafter removed to the operation theatre and put on the operation table.
The written statement further proceeds to state:
"Taking into consideration the history of the patient and his exhausted
condition, the defendant did not find it desirable to give a general
anesthetic. The defendant, therefore, decided to immobilise the fractured
femur by plaster of Paris bandages. The defendant accordingly reduced the
rotational deformity and held the limb in proper position with slight traction
and immobilised it in plaster spica. The hospital staff was in attendance. The
patient was cooperating satisfactorily."
The appellant denied using excessive force for manual traction, stating that the
limb was placed in plaster as immediate treatment to improve the patient's
condition. He claimed that by 6:30 P.M., Ananda's condition had worsened with
abnormal breathing, high fever, coma, and signs of cerebral embolism, despite
emergency treatment, and that Ananda died around 9 P.M. The case involved
extensive evidence, including correspondence, the appellant's letter, a
complaint to the Bombay Medical Council, and testimonies from the respondents,
Dr. Gharpure, and other Poona doctors. The nurse who attended Ananda was not
examined. The arguments referenced well-known surgical texts on fracture
treatment. The trial court's findings were as follows:
- Ananda's femur fracture occurred around 7 P.M. on May 6, 1953. He was transported home between 8:30 and 9 P.M.
- Dr. Risbud arrived within ten minutes but only immobilized the leg with planks, neglecting proper immobilization of the hip and knee joints.
- On May 8, 1953, Dr. Risbud replaced the planks with MacIntyre splints, but the thigh was swollen and red.
- Ananda was transported by taxi from Palshet to Poona, arriving at around 11:30 A.M. on May 9, 1953, after a nearly eleven-hour journey. He was first admitted to Tarachand Hospital and then to the appellant's hospital at 2:15 P.M.
- The appellant examined Ananda and ordered two morphia injections, but Dr. Irani only administered one. Preliminary examinations were conducted before starting treatment.
- Ananda received a morphia injection at 3 P.M., was in the X-ray room from 3:20 P.M. to 4 P.M., and spent about an hour in the operation theatre. He was moved to a recovery room at around 5 P.M.
- Respondent 1 stayed with Ananda throughout and left the hospital for Dhond at about 6 P.M. on the assurance given to him that the boy would come out of the morphia by about 7 P.M.
- At about 6:30 P.M. respondent 2 complained to Dr. Irani that the boy was having cough and was finding difficulty in breathing. The appellant, on being called, examined the boy and found his condition deteriorating and, therefore, gave emergency treatment from 6:30 P.M. until the boy died at 9 P.M.
On the crucial question of the treatment given to Ananda, the trial court
accepted the eyewitness account given by respondent 1 and came to the
conclusion that, notwithstanding the denial by the appellant, the appellant
had performed reduction of the fracture; that in doing so he applied, with the
help of three of his attendants, excessive force; that such reduction was done
without giving anesthetic; and that the said treatment resulted in cerebral
embolism or shock which was the proximate cause of the boy's death. The trial
court disbelieved the appellant's case that he had decided to postpone
reduction of the fracture or that his treatment consisted of immobilisation
with only light traction with plaster splints.
The trial Judge was of the view that this defence was an after-thought and was
contrary to the evidence and the circumstances of the case. On these findings he
held the appellant guilty of negligence and wrongful acts which resulted in the
death of Ananda and awarded general damages in the sum of Rs. 3,000.
In appeal, the High Court came to the conclusion that though the appellant's
case was that a thorough preliminary examination was made by him before he
started the treatment, that did not appear to be true. The reason for this
conclusion was that though Dr. Irani swore that the patient's temperature, pulse
and respiration were taken, the clinical chart, Ext. 213, showed only two dots,
one indicating that pulse was 90 and the other that respiration was 24. But the
chart did not record the temperature. If that was taken, it was hardly likely
that it would not be recorded along with pulse and respiration.
As regards the appellant's case that he had decided to delay the reduction of
the fracture and would merely immobilise the patient's leg for the time being
with light traction, the High Court agreed with the trial court that this case
also was not true. The injury was a simple fracture.
The reasons given by the appellant for his decision to delay the reduction were
that (1) there was swelling on the thigh, (2) that two days had elapsed since
the accident, (3) that there was no urgency for reduction, and (4) that the boy
was exhausted on account of the long journey. The High Court observed that there
could not have been swelling at that time for neither the clinical notes, Ext.
213, nor the case paper, Ext. 262 mentioned swelling or any other symptom which
called for delayed reduction.
Ext. 262 merely mentioned one morphia injection, one X-ray photograph and
putting the leg in plaster of Paris. The reference to one X-ray photo was
obviously incorrect as actually two such photos were taken. This error crept in
because the case paper, Ext. 262, was prepared by Dr. Irani some days after the
boy's death after the X-ray plates had been handed over on demand to respondent
1 and, therefore, were not before her when she prepared Ext. 262. Her evidence
that she had prepared that exhibit that very night was held unreliable.
The High Court concluded that the appellant reduced the fracture without proper
anesthesia and used excessive force, leading to shock and the patient's death,
while the claim of cerebral embolism was considered a cover-up. The appellant's
request to reopen the case was denied because the findings were based on
evidence, not conjecture. Respondent 1, a medical practitioner, observed the
treatment and his version was deemed more credible.
The appellant had instructed Dr. Irani to administer two morphia injections,
but her evidence was that only one was given because she forgot, not as a
deliberate omission. That part of her evidence hardly inspires confidence, for
in a case such as the present it is impossible to believe that she would forget
the appellant's instructions. The second injection was probably withheld
because the one that was given had a deeper effect on the boy than was
anticipated.
The evidence of respondent 1 was that after the boy was brought from the
operation theatre to the room assigned to him, he was assured by the appellant
that the boy was all right and would come out of the morphia effect by about
7 P.M., and that he thereupon decided to return to Dhond and did in fact leave
at 6 P.M. Both the courts accepted this part of his evidence and we see no
reason to find any fault with it.
What follows from this part of his evidence, however, is somewhat important. If
respondent 1 was assured that the boy would come out of the effect of morphia
by about 7 P.M., it must mean that the appellant's version that the boy was
cooperating throughout in the operation theatre, and was even lifting his hand
as directed, cannot be true.
The evidence showed that the morphia injection was expected to relieve pain, but
the boy remained unconscious, leading Dr. Irani to withhold the second
injection. If the appellant had only used light traction and postponed fracture
reduction, he would likely have informed respondent 1, who would not have left
for Dhond at 6 P.M. The assurance given by the appellant, which led respondent 1
to depart, implied that the fracture had been reduced and the boy's condition
was satisfactory.
The appellant's failure to inform respondent 1 of any postponement supports the
conclusion that reduction was performed, not just immobilization. The letter of
the appellant to respondent 1 dated July 17, 1953, was, in our view, rightly
highlighted by both the courts while considering the rival version of the
parties.
The appellant cited an article by Moore advocating for delayed reduction, but
the article also recommended immediate reduction if skilled supervision is
available and proper cast techniques are used—guidelines the appellant did not
follow. The High Court was justified in referencing medical texts for guidance,
despite the appellant's criticism that it should have relied solely on Dr.
Gharpure's testimony.
The High Court's use of medical literature was appropriate, especially given
concerns about Dr. Gharpure's potential bias. From the elaborate analysis of the
evidence by both the trial court and the High Court, it is impossible to say
that they did not consider the evidence before them or that their findings were
the result of conjectures or surmises or inferences unwarranted by that
evidence. We would not, therefore, be justified in reopening those concurrent
findings or reappraising the evidence.
As regards the cause of death, the respondents' case was that the boy's
condition was satisfactory at the time he was admitted to the appellant's
hospital, and that if fat embolism was the cause of death, it was due to the
heavy traction and excessive force resorted to by the appellant without
administering an anaesthetic to the boy.
The appellant's case, on the other hand, was that fat embolism must have set in
right from the time of the accident, or must have been caused by improper or
inadequate immobilisation of the leg at Palshet and the hazards of the long
journey in the taxi, and that the boy therefore died of cerebral embolism. In
this case, the death certificate issued by the appellant stated that the cause
of death was cerebral embolism. While some medical literature suggests that fat
embolism can be a cause of death following fractures, its signs are not always
clinically evident and require specific examinations.
The appellant, an experienced surgeon, did not detect any signs of fat embolism
or notify the respondent, who was also a doctor. The symptoms and signs
associated with fat embolism, which should have been present given the patient's
condition and history, were not observed. The appellant's subsequent admissions
and lack of thorough examination suggest an attempt to cover up the true cause
of death, which was likely shock from inadequate treatment. Both the trial court
and High Court found the appellant negligent and liable for damages.[8]
The appeal is dismissed with costs.
Conclusion
This article has explored the multifaceted implications of AI in healthcare,
highlighting the need for a robust legal framework that can adapt to the unique
demands of AI technologies. Issues such as liability, intellectual property, and
data privacy are critical to the responsible deployment of AI in clinical
settings. Similarly, the ethical considerations of bias, transparency, patient
autonomy, and the preservation of human elements in healthcare underscore the
complexities involved in integrating AI into medical practice.
Successful case studies demonstrate the transformative potential of AI, yet they
also serve as cautionary tales, illustrating the risks of biased algorithms,
privacy breaches, and the over-reliance on AI at the expense of human judgment.
These examples emphasize that while AI can greatly enhance healthcare, its
implementation must be guided by principles of fairness, accountability, and
continuous oversight.
In conclusion, the future of AI in healthcare is promising, but its success
depends on the collective efforts of all stakeholders. By prioritizing legal
compliance, ethical integrity, and patient-centered care, we can ensure that AI
becomes a powerful tool that enhances healthcare delivery and improves patient
outcomes, rather than a source of new risks or disparities. The journey forward
is one of collaboration, innovation, and vigilance, as we strive to harness AI's
potential while upholding the fundamental values of medical practice.
Bibliography
Books and Academic Journals
- Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation." AI Magazine, 38(3), 50-57.
Legal Cases
- Justice K.S. Puttaswamy (Retd.) v. Union of India (2017), Writ Petition (Civil) No. 494 of 2012, Supreme Court of India.
- Dr. Laxman Balkrishna Joshi v. Dr. Trimbak Bapu Godbole (1969), 1 SCR 206, Supreme Court of India.
Reports and White Papers
- World Health Organization (WHO). (2021). Ethics and Governance of Artificial Intelligence for Health: WHO Guidance.
- European Commission. (2020). White Paper on Artificial Intelligence: A European approach to excellence and trust.
Articles and News Outlets
- Lohr, S. (2019). A.I. in Health Care: Profits and Perils. The New York Times.
- Murthy, K. K. S. R. Medical Negligence and the Law.
Websites and Online Resources
- https://www.ncbi.nlm.nih.gov/
- https://nixongwiltlaw.com/
- https://www.livelaw.in/
- https://www.sciencedirect.com/science/article/pii/S0580951724000254
End Notes
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8963864/
- https://nixongwiltlaw.com/nlg-blog/2023/11/16/generative-ai-healthcare-innovations-and-legal-challenges
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8826344/
- https://www.sciencedirect.com/science/article/pii/S016517812300416X
- https://translaw.clpr.org.in/wp-content/uploads/2021/12/Justice-K.S.-Puttaswamy-.pdf
- https://www.scobserver.in/reports/k-s-puttaswamy-right-to-privacy-judgment-of-the-court-in-plain-english-ii/
- https://ijme.in/wp-content/uploads/2016/11/1279-5.pdf
- https://globalfreedomofexpression.columbia.edu/laws/india-n-d-jayal-v-union-of-india-2004-9-scc-362/
Written By: Adv. Smruti S. Kalantre