This article examines the severe inadequacy of India’s Consumer Protection Act, 2019 (CPA 2019) in addressing the novel harms occasioned by Artificial Intelligence (AI) systems now being adopted across Indian consumer markets. Given AI’s intangible nature, its ability to continuously re-learn, and a multi-staged production chain involving multiple stakeholders, the traditional concepts of product liability, product, defect, and manufacturer face major limitations.
We identify distinct AI-driven harms, such as systemic algorithmic bias, autonomous system failure, data and privacy exploitation, and lack of transparency, as near-insurmountable obstacles for consumers seeking appropriate remedies within the current consumer law framework. In response, we propose broad reforms to the CPA 2019, including expanded definitions, strict liability for high-risk AI, empowering regulatory authorities such as the Central Consumer Protection Authority (CCPA) to conduct AI audits, mandatory transparency and impact assessments, and the possible creation of new tort regimes for AI harms.
These legislative and policy changes would protect the algorithmic consumer, promote fair and competitive market practices, and foster responsible AI innovation in India through 2025 and beyond.
Introduction
Consider a situation where an older adult trying to re-enroll in their health insurance online is denied coverage. The denial is based not on any pre-existing medical condition, but on an opaque determination by an AI algorithm that has correlated their online browsing patterns and recent social media connections with elevated health risk.
Alternatively, suppose a chatbot positioned as a reliable source of financial planning advice steers a trusting consumer into irreversible financial loss. These scenarios need not be drawn from imagination; they illustrate the consumer vulnerabilities of an AI-driven India, where the dividing line between technological convenience and undue harm is growing increasingly blurry.
AI is rapidly becoming integral to the daily lives of Indian consumers. By 2025, AI will no longer be an esoteric technology; it will be a ubiquitous force mediating a substantial proportion of consumer interactions. From personalized recommendations on e-commerce websites and AI-enabled fraud detection in fintech to intelligent radiology diagnostics in healthcare and responsive functionalities in IoT-enabled devices, AI is profoundly changing markets.
As indicated by NASSCOM projections for 2025, this exponential adoption represents a critical inflection point in the evolution of India’s digital economy. Likewise, the Economic Times forecasts rapid growth in AI-mediated consumer interactions, reflecting how deeply autonomous systems are becoming embedded in everyday consumer life. This integration promises greater efficiency, personalization, and economic growth, but it also presents new risks and new types of harm that our current legal frameworks are not prepared to handle.
The crux of the problem is an inherent mismatch between our current legal framework and the specific characteristics of AI. The Consumer Protection Act, 2019 (CPA), progressive and comprehensive for its time, was enacted before the widespread deployment of autonomous, self-learning systems. As studies by legal academics and organizations such as the Centre for Internet and Society (CIS India) have observed, the CPA 2019’s traditional definitions and remedies offer little traction against intangible, algorithmic harms.
The limits of traditional consumer law are starkly visible: liability routes splinter along fragmented AI production chains, evolving AI systems behave unpredictably, and entirely new types of harm emerge, such as systemic bias, algorithmic discrimination, and the black-box nature of AI decision-making.
This paper contends that India has reached a crisis point and requires a fundamental transformation of its product liability regime. It is not sufficient to tinker with the existing law; the definition of liability itself must be reconceived for the harms associated with AI. Any new law must pragmatically balance consumer protection with continued technological innovation, so that India remains competitive in the AI space without compromising the safety and rights of its citizenry.
In light of these considerations, this paper first identifies the precise shortcomings of the CPA 2019 with respect to AI-based harms, and then develops a detailed typology of the unique harms facing the algorithmic consumer. It then proposes a pragmatic, forward-looking legislative framework that delivers practical remedies for consumers and clearly delineates culpability in an increasingly automated world, keeping justice and fairness at the center of India’s digital transformation.
Current Product Liability Framework Under CPA 2019
The Consumer Protection Act, 2019 (CPA) represented a considerable development in India’s consumer law, introducing a dedicated product liability chapter (Chapter VI, Sections 82–87). Prior to the CPA 2019, product liability claims were largely based on common law principles of negligence or on contract law.
The CPA 2019 framework was intended to afford consumers statutory rights of action against manufacturers, service providers, and sellers for damage caused by hazardous or defective products, thereby enhancing consumer protection. Commendable as this was, the framework, designed in large part for tangible goods, encounters severe difficulties when confronted with the distinguishing characteristics and complexities of Artificial Intelligence (AI) systems.
Overview of CPA 2019’s Product Liability Provisions (Sections 82–87)
The Act defines product liability as the responsibility of a product manufacturer or product seller, of any product or service, to compensate for any harm caused to a consumer by such defective product or by deficiency in services [Section 2(34), CPA 2019]. Central to this definition are several key terms:
Product (Section 2(33)): Encompasses any article or goods or substance or raw material… which may be in gaseous, liquid, or solid state, or intangible and includes electricity, or preparation for oral internal or external use, or any product which is manufactured or produced whether such product is within the premises of the manufacturer or not. The inclusion of intangible was a progressive step, intended to cover digital goods like software, yet its application to dynamic AI remains ambiguous.
Defect (Section 2(10)): Refers to any fault, imperfection or shortcoming in the quality, quantity, potency, purity or standard which is required to be maintained by or under any law for the time being in force or under any contract, express or implied, or as is claimed by the trader in any manner whatsoever in relation to any goods or product. This definition primarily contemplates a deviation from an intended state or standard.
Product Manufacturer (Section 2(36)): Includes a person who makes or manufactures any product or part thereof, assembles any product or part thereof, or puts his own mark on any product made or manufactured by any other person. It also extends to those who import products.
Harm (Section 2(22)): Defined to include injury to any person, damage to any property, and mental agony, anguish or suffering caused to any person. This is typically associated with physical or tangible losses.
Under these provisions, a product manufacturer can be held liable if the product contains a manufacturing defect, a design defect, deviates from manufacturing specifications, does not conform to an express warranty, or fails to contain adequate instructions for proper use or warnings of potential dangers [Section 84, CPA 2019]. Similarly, a product service provider can be held liable for service deficiency, negligence, or non-compliance with express warranties [Section 85, CPA 2019]. The Act also outlines specific circumstances under which a product seller can be held liable [Section 86, CPA 2019].
Why CPA 2019 Fails for AI
Despite its robust intentions, the CPA 2019’s underlying principles and definitions are profoundly ill-suited to address the complexities of AI-driven harms, creating significant legal vacuums as AI permeates consumer markets.
Product Ambiguity
The definition of product, even with the inclusion of intangible, fundamentally struggles to encompass sophisticated AI algorithms and self-learning systems. An AI is not a static software program. It is a dynamic entity that learns, adapts, and evolves post-deployment through interaction with data and environments. This continuous learning means its behavior can change in unforeseen ways long after its initial manufacture.
For instance, a chatbot providing erroneous or harmful advice, as highlighted in a pertinent case study by Nishith Desai Associates concerning chatbot liability in India, exemplifies how the product itself (the AI’s generated response or decision) is a transient outcome of an evolving system rather than a fixed, tangible good [Nishith Desai Associates, Case Study on Chatbot Liability (202X)]. This fluidity challenges the very notion of a product being manufactured in a traditional sense at a single point in time, complicating liability assessments.
Defect Challenges
Proving a defect in an AI system under the CPA 2019 is legally nebulous and often impossible for the consumer. AI-driven harms frequently do not arise from a simple coding error or a manufacturing flaw that deviates from specifications. Instead, they typically stem from:
Algorithmic Bias: Where the AI performs exactly as programmed, but the underlying training data is biased, leading to discriminatory or unfair outcomes (e.g., an AI loan approval system disproportionately rejecting certain demographics). The system is not defective in its execution but in its learned logic.
Emergent Behavior: Complex AI models can develop unforeseen behaviors and make decisions that were not explicitly programmed, particularly in autonomous systems. Such emergent properties defy the traditional concept of a defect as a deviation from an intended design or specification.
Opacity (the Black Box Problem): Many advanced AI systems operate as black boxes, where even their developers cannot fully explain the specific internal reasoning behind a particular decision. This inherent lack of transparency makes it virtually impossible for a consumer to identify, let alone prove, an imperfection or shortcoming as required by the Act, thus shifting an insurmountable burden of proof onto the harmed party.
Distributed Liability
The CPA 2019’s liability framework envisages a linear relationship between manufacturer, seller, and consumer. The AI supply chain, however, is notoriously complex and fragmented, involving multiple stakeholders: the original developer of an algorithm, the provider of the training data, the integrator who customizes and implements the AI, the cloud service provider hosting the AI, and the final deployer (e.g., a bank using an AI for credit scoring).
When harm occurs, each party can credibly argue that the defect lies with another’s contribution (e.g., “our algorithm is fine, the data was biased” or “our data is fine, the implementation was faulty”). This distributed responsibility allows actors to evade accountability, leaving the consumer without a clear defendant to pursue. The CPA’s focus on a single, identifiable manufacturer or seller is simply not equipped to navigate this multi-party ecosystem.
Comparative Insight
To underscore India’s current legal vacuum, it is critical to briefly contrast its position with more forward-looking jurisdictions. The European Union, recognizing these inherent challenges, has actively embarked on comprehensive legislative initiatives.
The EU AI Act, proposed in 2021 and formally adopted in 2024, takes a risk-based approach, imposing stringent obligations on high-risk AI systems and their providers [European Commission, Proposal for a Regulation on a European Approach for Artificial Intelligence (2021)]. Complementing this, the proposed AI Liability Directive specifically aims to modernize liability rules for damage caused by AI, notably by easing the burden of proof for victims and introducing a rebuttable presumption of causality or defect where providers fail to comply with certain obligations [European Commission, Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (2022)].
This proactive, AI-specific legislative stance stands in stark contrast to India’s reliance on a general consumer protection law, highlighting the urgent need for tailored legal reforms as of 2025.
Typology of AI-Driven Consumer Harms
As artificial intelligence systems become increasingly sophisticated and integrated into every facet of consumer life, they introduce a new array of harms that traditional legal frameworks were never designed to address. These harms are often intangible, systemic, and complex, stemming not from simple manufacturing flaws but from the very nature of algorithmic decision-making, autonomous operation, and data processing. Categorizing these distinct forms of AI-driven consumer harm is crucial for developing a precise and effective regulatory response.
Algorithmic Bias & Discrimination
Perhaps one of the most pervasive and insidious forms of AI harm is algorithmic bias, leading to unfair discrimination. This occurs when AI models, trained on historical data that reflects existing societal prejudices or incomplete datasets, inadvertently learn and perpetuate those biases. The system may appear to be making neutral, data-driven decisions, but its outputs disproportionately disadvantage certain demographic groups, often along lines of gender, caste, religion, socio-economic status, or geographical location.
Consider the burgeoning fintech sector in India, particularly in the rapidly expanding digital lending space. An AI-powered loan approval system, deployed by a non-banking financial company (NBFC) in 2025, might inadvertently deny loans to women or applicants from non-metro areas at a significantly higher rate than their credit profiles would otherwise warrant. A hypothetical RBI 2024 report on digital lending practices might reveal that such algorithms, while ostensibly evaluating creditworthiness, are assigning lower scores based on proxies for gender or location that correlate with historical lending patterns rather than individual financial risk.
For example, if historical loan defaults were higher in certain non-metro regions due to past economic disparities, the AI could learn to de-prioritize all applicants from those regions, irrespective of their current financial stability. Similarly, if women historically had less access to formal credit or different spending patterns captured in data, the AI might inadvertently penalize them.
The legal challenge here is profound: bias is systemic, not a manufacturing defect. The AI system is often functioning precisely as designed and programmed; the flaw lies not in a broken component or a coding error, but in the biased historical data it was fed or the opaque logic it developed. A consumer cannot point to a fault, imperfection, or shortcoming in the traditional sense, as the algorithm faithfully executes its function. Proving that an AI’s learning from historical data constitutes a defect under the CPA 2019’s current definition is nearly impossible. This leaves consumers with discriminatory outcomes, with no clear avenue for redress, as the discrimination is embedded in the very fabric of the supposedly objective, data-driven decision-making process.
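To make the systemic nature of such bias concrete, the following is a minimal, self-contained Python sketch, on fully simulated data, of how a credit model trained on historically skewed outcomes can penalise applicants through a proxy feature such as a non-metro pincode, even though no rule in the code ever mentions region or gender. The feature names, coefficients, and data are illustrative assumptions, not drawn from any actual lender.

```python
# Illustrative sketch only: a lending model trained on historically biased data
# penalises applicants from certain regions via a proxy feature, even though
# "region" is never used as an explicit rule. All data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Simulated applicants: income (the true signal) and a non-metro pincode flag.
income = rng.normal(50, 15, n)        # in thousands of rupees
non_metro = rng.integers(0, 2, n)     # 1 = non-metro pincode

# Historical defaults were driven by income AND by past regional disparities,
# so the training label itself encodes the old inequity.
default_prob = 1 / (1 + np.exp(0.08 * (income - 45) - 0.9 * non_metro))
defaulted = rng.random(n) < default_prob

# The model is trained only on income and the pincode-derived flag.
X = np.column_stack([income, non_metro])
model = LogisticRegression().fit(X, defaulted)

# Two applicants with identical finances, differing only in pincode.
metro_applicant = [[50, 0]]
non_metro_applicant = [[50, 1]]
print("P(default | metro)    :", model.predict_proba(metro_applicant)[0, 1])
print("P(default | non-metro):", model.predict_proba(non_metro_applicant)[0, 1])
# The non-metro applicant receives a markedly worse score despite identical
# finances: the system works "as designed", yet its learned logic discriminates.
```

The point of the sketch is doctrinal rather than technical: the model contains no fault, imperfection or shortcoming in the CPA 2019 sense, yet its learned logic faithfully reproduces the discrimination embedded in its training labels.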
Autonomous System Failures
As AI moves beyond software and into the physical world, embedded in IoT devices and robotic systems, the potential for direct physical harm due to autonomous system failures becomes a critical concern. These systems often operate with limited or no human oversight, and their malfunctions can have immediate and tangible consequences for consumers.
Take the example of malfunctioning smart home devices. By 2025, India is expected to see a significant proliferation of AI-enabled appliances, security systems, and environmental controls. Imagine a smart thermostat designed to optimize energy consumption that, due to an algorithmic error or an unexpected interaction with other household devices, suddenly overheats a room to dangerous levels, leading to property damage or even physical discomfort and distress. Or consider a smart security system that fails to detect an intruder due to a software glitch, resulting in theft and emotional trauma. A hypothetical CCPA grievance data analysis from 2025 could reveal a rising trend of consumer complaints regarding such failures, ranging from minor property damage to significant safety concerns directly attributable to the autonomous operation of these devices.
The legal challenge in these cases is that causation is complex in self-learning systems. While the harm is physical and often undeniable, attributing it directly to a defect in the traditional sense is difficult. Was the malfunction due to a flaw in the initial programming, an unforeseen interaction with the consumer’s unique home environment, or an emergent behavior of the AI as it learned and adapted over time? Unlike a mechanically defective product, where the fault can be traced to a manufacturing error or design flaw, the cause of failure in a complex, adaptive AI system can be a dynamic interplay of software, sensor data, environmental factors, and the AI’s own evolving logic. This makes it exceedingly difficult for a consumer to establish the direct causal link between a specific defect (as defined by CPA 2019) and the resulting harm, creating a significant hurdle for liability claims.
Data & Privacy Harms
AI systems are inherently data-hungry, and their ability to collect, process, and analyze vast quantities of personal information creates novel and magnified privacy risks. Beyond simple data breaches, AI-driven data processing can lead to sophisticated profiling, surveillance, and manipulative practices that undermine consumer autonomy and financial well-being.
Consider the scenario of AI profiling leading to predatory pricing. Retailers and service providers increasingly use AI to analyze individual consumer data (purchase history, browsing behavior, location data, and even inferred demographics) to create highly granular consumer profiles. By 2025, these profiles are used to dynamically adjust prices, offering different prices to different consumers for the same product or service based on their inferred willingness or ability to pay. For example, an AI might detect that a consumer living in an affluent area or using a premium device is willing to pay more for a flight ticket or a service subscription, and subtly increase the price offered to them, without their knowledge or consent. This practice, while appearing market-driven, can become predatory when it exploits vulnerabilities or creates discriminatory outcomes. The Digital Personal Data Protection (DPDP) Act, 2023, while aiming to protect personal data, may leave gaps in addressing the ethical implications and consumer harms arising from such AI-driven dynamic pricing and manipulative profiling, as its primary focus is on data processing principles rather than market fairness in the face of data exploitation.
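The mechanism can be illustrated with a minimal sketch, under assumed profile attributes and price multipliers, of how AI-driven profiling could translate into personalised prices for an identical service. The attribute names and numbers below are hypothetical and purely illustrative, not taken from any real platform.

```python
# Illustrative sketch only: AI-derived consumer profiles translated into
# personalised ("dynamic") prices for the same service. The profile attributes
# and multipliers are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    premium_device: bool      # inferred from device fingerprint / user agent
    affluent_pincode: bool    # inferred from location data
    urgency_score: float      # 0..1, inferred from repeat searches for the same item

def personalised_price(base_price: float, p: ConsumerProfile) -> float:
    """Adjusts a base fare upward for consumers inferred to tolerate higher prices."""
    multiplier = 1.0
    if p.premium_device:
        multiplier += 0.05
    if p.affluent_pincode:
        multiplier += 0.07
    multiplier += 0.10 * p.urgency_score
    return round(base_price * multiplier, 2)

base = 6000.0  # same flight, same day
frugal = ConsumerProfile(premium_device=False, affluent_pincode=False, urgency_score=0.1)
targeted = ConsumerProfile(premium_device=True, affluent_pincode=True, urgency_score=0.9)

print(personalised_price(base, frugal))    # 6060.0
print(personalised_price(base, targeted))  # 7260.0
# Neither consumer ever sees the other's price, which is why the harm
# (a subtly higher charge) is so difficult to detect or prove.
```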
The legal challenge here lies in classifying these as harms under a product liability framework and establishing their causation from an AI product. Traditional product liability focuses on physical injury or property damage. Economic exploitation through sophisticated data profiling, psychological manipulation, or subtle violations of data rights, while profoundly impactful on consumer welfare, do not fit neatly into the CPA 2019’s concept of harm. Furthermore, the causal link between the AI profiling system and the consumer’s financial disadvantage is diffuse and difficult to prove, as the harm is often a subtly higher price rather than an outright denial or a physical injury.
Opacity & Explainability
The black box nature of many advanced AI models poses a significant challenge to consumer rights, particularly the right to information and the ability to contest adverse decisions. When an AI system makes a decision that impacts a consumer—such as denying a service, suspending an account, or imposing a penalty—the rationale behind that decision is often indecipherable, even to the developers themselves. This lack of transparency, or opacity, is a harm in itself, preventing consumers from understanding why an adverse decision was made and effectively challenging it.
A pertinent example involves unexplained AI-driven account suspensions. By 2025, social media platforms, e-commerce sites, and financial service providers widely use AI to monitor user activity for violations of terms of service, fraud detection, or suspicious behavior. A user’s account might be abruptly suspended or terminated without a clear, specific, or actionable reason. Imagine a hypothetical Delhi High Court 2024 case where a freelancer’s online payment gateway account was suspended without explanation, crippling their business, only to be reinstated weeks later without clarity.
The platform merely stated it was an algorithmic decision based on risk parameters, offering no further details. This lack of transparency leaves consumers in a legal limbo, unable to appeal effectively or seek recourse, as they cannot identify what rule was broken or how the AI arrived at its conclusion.
The legal challenge here is that traditional liability frameworks do not adequately address the harm of a lack of explanation as a standalone defect or a causal factor for broader harm. The CPA 2019 assumes that if a product is defective and causes harm, this can be identified and proven. However, when the defect is the AI’s inherent opacity, and the harm is the inability to understand or contest a decision, the legal tools are insufficient. Moreover, proving that the lack of explanation itself caused tangible harm, separate from the underlying decision, is a new legal frontier. This forces consumers into a position of absolute powerlessness against an invisible, incomprehensible, and unchallengeable algorithmic authority.
Legal Challenges in Applying Traditional Liability
The preceding sections have illuminated the fundamental limitations of the Consumer Protection Act, 2019, and the distinctive nature of AI-driven harms. This section delves deeper into the doctrinal conflicts that arise when attempting to apply traditional product liability principles, designed for a mechanical age, to the nuanced and evolving landscape of Artificial Intelligence. These conflicts create significant barriers to justice for consumers and highlight the urgent need for a bespoke legal framework.
Burden of Proof: The Impenetrable Black Box
One of the most formidable obstacles for a consumer seeking redress for AI-driven harm lies in the burden of proof. Under the CPA 2019, the onus is on the claimant to demonstrate that a product was defective and that this defect directly caused the harm suffered. This presupposes a degree of transparency and traceability inherent in traditional products. However, many advanced AI systems, particularly those employing deep learning techniques, operate as black boxes. Their internal decision-making processes are so complex that even their designers and developers may not be able to fully explain why a particular output or decision was generated.
When an AI-powered loan application is denied due to alleged algorithmic bias, or a smart device malfunctions causing injury, the consumer faces an impossible task. They cannot decipher the intricate interplay of millions of data points, parameters, and algorithmic layers that led to the harmful outcome. They lack access to the training data, the model architecture, or the specific computational pathways that constitute the defect. This opacity effectively shields AI developers and deployers from accountability, as the consumer is left without the necessary evidence to demonstrate a fault, imperfection or shortcoming as required by Section 2(10) of the CPA 2019. The traditional legal principle of requiring the plaintiff to prove the defect becomes a gatekeeping mechanism that inadvertently protects the technology rather than the consumer.
Evolving Systems: The Dynamic Nature of Defect and State of the Art Defenses
Traditional product liability law typically assesses a product’s defectiveness at the time it was manufactured or placed on the market. Manufacturers can often invoke a state of the art defense, arguing that their product adhered to the highest safety and design standards available at that time. AI systems, however, fundamentally challenge this static notion of defectiveness due to their inherent ability to learn and evolve post-deployment.
Many AI models, particularly those using machine learning, continuously learn from new data and interactions, adapting their behavior over time. An AI application could be deemed perfectly safe and compliant with all known standards upon its initial release. Yet, weeks or months later, through exposure to novel data, interactions with other systems, or cumulative learning, it might develop emergent, harmful behaviors or reinforce existing biases. In such scenarios, when did the defect arise? Was it present at the moment of manufacturing, or did it evolve dynamically? This makes the state of the art defense problematic. The manufacturer could argue that the system was non-defective at the point of sale, shifting responsibility to the complex, post-market learning process. This continuous evolution means that a product that was once safe can become dangerous, and the legal framework lacks clear mechanisms to assign responsibility for defects that manifest or evolve long after the initial transaction, rendering the concept of a fixed defect ambiguous.
Multi-Stakeholder Liability: The Fragmentation of Accountability
The AI value chain is rarely linear; it is a complex, multi-layered ecosystem involving numerous actors, each contributing to the final AI-powered product or service. This distributed responsibility creates a significant challenge for traditional product liability, which largely assumes a single, identifiable manufacturer or a direct seller-consumer relationship.
Consider the following breakdown of roles in the creation and deployment of an AI system, and the inherent difficulty in assigning blame:
Algorithm Developer: Designs the core AI model and software. A flaw here could lead to fundamental errors.
Data Provider: Collects, curates, and supplies the training data. If this data is biased, incomplete, or corrupted, the AI model will inherit these flaws.
System Integrator/Deployer: Embeds the AI model into a specific product or service and configures its parameters for a particular use case. Misuse or improper integration could lead to harm.
Cloud Service Provider: Hosts the AI model and data infrastructure. A failure in their service could impact the AI’s performance.
End-User: The way a consumer uses an AI product could also contribute to, or mitigate, harm.
When an AI-driven harm occurs, a blame game often ensues. The deployer (e.g., an e-commerce platform) might point to the algorithm developer (e.g., a software company), who might then blame the data provider for biased inputs. Each entity can credibly argue that the defect originated upstream or was caused by downstream misuse, thereby avoiding direct liability under the CPA’s traditional framework for manufacturers or sellers. This fragmentation of accountability leaves the consumer in a legal quagmire, unable to pinpoint the sole responsible party, making successful litigation an arduous, often impossible, task.
Non-Physical Harms: Beyond Tangible Injury
The CPA 2019’s definition of harm (Section 2(22)) primarily contemplates tangible injury, damage to property, mental agony arising from these, or death. This focus on physical or directly quantifiable damage is ill-suited to capture the often intangible, yet profoundly impactful, harms caused by AI systems.
As discussed, AI can inflict harms such as:
Reputational Damage: An AI system might incorrectly flag an individual as fraudulent, leading to reputational harm that affects employment or creditworthiness, without direct financial or physical injury.
Psychological Distress: Algorithmic manipulation or constant surveillance (e.g., through dark patterns or pervasive data collection) can induce psychological distress, anxiety, or addiction, which are difficult to quantify under current legal definitions of harm.
Economic Exclusion/Opportunity Loss: Discriminatory algorithms that deny access to essential services (e.g., loans, housing, insurance) can lead to significant economic exclusion or loss of opportunity, even if no direct financial transaction or physical injury has occurred.
Autonomy Violations: AI systems can subtly erode consumer autonomy by shaping choices or preferences through personalized manipulation, a harm not recognized by existing product liability laws.
These types of non-physical, yet deeply impactful, harms do not fit neatly into the CPA 2019’s framework, which was primarily conceived to address physical injuries from malfunctioning goods. This leaves a vast category of AI-induced suffering without clear legal recourse, undermining the very premise of consumer protection in the digital age. The doctrinal conflicts outlined above collectively paint a clear picture: India’s current product liability framework is fundamentally misaligned with the realities of AI, necessitating comprehensive and targeted reforms to ensure justice for the algorithmic consumer.
Proposed Reforms
The preceding analysis has unequivocally demonstrated that India’s current product liability framework, anchored by the Consumer Protection Act, 2019 (CPA), is fundamentally ill-equipped to address the complexities and novel forms of harm propagated by Artificial Intelligence. The black box nature of AI, its dynamic and evolving capabilities, the fragmentation of accountability across its complex value chain, and the emergence of non-physical yet profoundly impactful harms, all necessitate a paradigm shift in regulatory thinking. As India stands on the cusp of 2025, proactive legislative and policy solutions are not merely advisable but critical to safeguarding consumer trust, ensuring market fairness, and fostering responsible technological innovation. This section outlines a comprehensive, multi-pronged reform agenda designed to create a robust product liability framework fit for the AI age.
Revisions to the Consumer Protection Act, 2019
The most direct and impactful reform involves amending the foundational legislation itself to explicitly integrate AI into its ambit, moving beyond ambiguous interpretations.
Expand Product to Include AI Software and Digital Services
The CPA 2019’s definition of product, despite including intangible goods, remains rooted in a static, transactional understanding. To truly encompass AI, the definition must be explicitly expanded to include:
AI software, algorithms, and machine learning models: Whether standalone or embedded within physical devices, clarifying that computational elements of AI are subject to product liability.
AI-driven services: Where primary value or decision-making is rendered by an AI system (e.g., AI-powered financial advisory, automated medical diagnostics, personalized digital advertising platforms), acknowledging that harm can arise from the output or functioning of an AI service.
Data processed by AI systems: Where such data forms an integral part of AI’s functionality and decision-making, particularly if it contributes to a defect.
This expansion ensures creators and deployers of AI cannot claim their offerings fall outside product liability simply because they are not physical goods.
Redefine Defect to Cover Algorithmic Bias, Explainability Failures, and Data Flaws
The current definition of defect in CPA 2019 focuses on deviations from quality, quantity, or standard—concepts ill-suited for AI. A new definition must specifically incorporate AI-specific flaws:
Algorithmic Bias: Instances where an AI system produces systematically unfair, discriminatory, or biased outcomes, even if it performs technically as specified.
Explainability Failures: Failure to provide a clear, coherent, and meaningful explanation for significant AI-driven decisions impacting consumers.
Data Flaws: AI systems relying on inadequate, incomplete, erroneous, or inappropriately acquired data leading to consumer harm.
Lack of Robustness and Security: Susceptibility to adversarial attacks, manipulation, or operational failure causing foreseeable harm.
Introduce Strict Liability for High-Risk AI
For AI applications deemed high-risk, India should introduce strict liability. Unlike negligence-based liability, consumers need not prove fault, only that a defect exists and caused harm.
Consumer Protection: Reduces burden of proof on consumers faced with AI black box complexities.
Incentivizes Responsibility: Developers bear financial risk of harm, motivating safety, fairness, and robustness.
Risk-Based Approach: High-risk AI includes critical infrastructure, essential service decisions, law enforcement applications, and embedded safety-critical products (a minimal classification sketch follows this list).
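For illustration only, the following sketch shows one way such a risk taxonomy could be operationalised in code; the tier names, domain labels, and mapping are hypothetical assumptions rather than proposed statutory categories.

```python
# Illustrative sketch only: one possible operationalisation of a risk-based
# taxonomy along the lines of the categories listed above. The enum values and
# mapping are hypothetical, not statutory text.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

HIGH_RISK_DOMAINS = {
    "credit_scoring",          # decisions on access to essential services
    "insurance_underwriting",
    "medical_diagnostics",
    "critical_infrastructure",
    "law_enforcement",
    "safety_critical_device",  # e.g. AI embedded in vehicles or appliances
}

def classify(domain: str, affects_legal_or_safety_interests: bool) -> RiskTier:
    """Assigns a liability tier; the HIGH tier would attract strict liability."""
    if domain in HIGH_RISK_DOMAINS or affects_legal_or_safety_interests:
        return RiskTier.HIGH
    if domain in {"recommendation", "chatbot_support"}:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit_scoring", affects_legal_or_safety_interests=True))   # RiskTier.HIGH
print(classify("recommendation", affects_legal_or_safety_interests=False))  # RiskTier.LIMITED
```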
Institutional Mechanisms
Effective regulation of AI requires empowered institutions capable of technical understanding, oversight, and enforcement.
Empower CCPA with AI Audit Authority
The Central Consumer Protection Authority (CCPA) should establish a dedicated AI oversight division with authority for algorithmic audits, following the RBI fintech model for proactive regulatory intervention.
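By way of illustration, one concrete check such an audit division could run is the disparate-impact ratio (the informal four-fifths rule widely used in fairness testing). The sketch below assumes a simple decision log of (group, approved) pairs; the group labels, data, and the 0.8 threshold are illustrative, not regulatory requirements.

```python
# Illustrative sketch only: a single fairness check an algorithmic audit might
# run, the disparate-impact ratio (the informal "four-fifths rule").
from collections import Counter

def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of applicants in `group` whose applications were approved."""
    totals, approved = Counter(), Counter()
    for g, ok in decisions:
        totals[g] += 1
        approved[g] += ok
    return approved[group] / totals[group]

def disparate_impact_ratio(decisions, protected: str, reference: str) -> float:
    return approval_rate(decisions, protected) / approval_rate(decisions, reference)

# (group, approved?) pairs, e.g. exported from a lender's decision log.
decisions = [("men", True)] * 70 + [("men", False)] * 30 \
          + [("women", True)] * 45 + [("women", False)] * 55

ratio = disparate_impact_ratio(decisions, protected="women", reference="men")
print(f"Disparate impact ratio: {ratio:.2f}")          # 0.64
print("Flag for review" if ratio < 0.8 else "Within threshold")
```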
Create Expert Tribunals to Adjudicate AI Disputes
Specialized tribunals comprising legal and AI/data science experts can:
Evaluate technical evidence and algorithmic outputs.
Provide expert opinions to courts and consumer commissions.
Fast-track AI-centered disputes and establish AI-specific jurisprudence.
Transparency Mandates
Require Explainability for Consumer-Facing AI Decisions
Codify the right to explanation for high-stakes AI interactions, providing meaningful, understandable reasons for adverse outcomes, with risk-calibrated detail and counterfactual explanations.
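As an illustration of what a counterfactual-style explanation could look like in practice, the following sketch generates human-readable reasons for an adverse credit decision from a simple rule set; the thresholds and applicant fields are hypothetical assumptions, and real systems would need to derive such explanations from far more complex models.

```python
# Illustrative sketch only: generating a counterfactual-style explanation for an
# adverse decision, of the kind a "right to explanation" could require.
# The rule thresholds and applicant fields are hypothetical.
RULES = {
    "monthly_income": ("at least", 30_000),
    "credit_utilisation": ("at most", 0.60),
    "months_at_current_job": ("at least", 6),
}

def explain_denial(applicant: dict) -> list[str]:
    """Returns the minimal changes under which the application would succeed."""
    reasons = []
    for field, (direction, threshold) in RULES.items():
        value = applicant[field]
        failed = value < threshold if direction == "at least" else value > threshold
        if failed:
            reasons.append(
                f"{field} is {value}; approval requires {direction} {threshold}"
            )
    return reasons or ["All criteria met"]

applicant = {"monthly_income": 28_000, "credit_utilisation": 0.72, "months_at_current_job": 14}
for reason in explain_denial(applicant):
    print("-", reason)
# - monthly_income is 28000; approval requires at least 30000
# - credit_utilisation is 0.72; approval requires at most 0.6
```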
Mandate Impact Assessments for High-Risk AI Deployments
Require companies to conduct AI Impact Assessments (AIAs) before deployment (a minimal sketch of such an assessment record follows this list), including:
System description, purpose, and intended use.
Data assessment including bias analysis.
Evaluation of potential impacts on consumer rights, safety, privacy, and fairness.
Mitigation measures, testing protocols, monitoring, and human oversight mechanisms.
CCPA oversight ensures preventive risk management rather than reactive correction.
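As a purely illustrative sketch, the assessment elements listed above could be captured as a structured, machine-readable record that the CCPA could file and audit; the field names and example values below are hypothetical assumptions, not a prescribed format.

```python
# Illustrative sketch only: the assessment elements listed above captured as a
# structured record that a regulator could file and audit. Field names and
# example values are hypothetical assumptions.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIImpactAssessment:
    system_name: str
    purpose: str
    intended_use: str
    training_data_sources: list[str]
    bias_analysis_summary: str
    consumer_rights_impacts: list[str]
    mitigation_measures: list[str]
    monitoring_plan: str
    human_oversight: str
    approved_by: str = ""
    open_risks: list[str] = field(default_factory=list)

aia = AIImpactAssessment(
    system_name="LoanScreen v2",
    purpose="Pre-screening of personal loan applications",
    intended_use="Decision support; final approval by a human officer",
    training_data_sources=["2019-2024 repayment records", "bureau scores"],
    bias_analysis_summary="Disparate impact ratio 0.91 across gender; 0.83 across regions",
    consumer_rights_impacts=["access to credit", "privacy of financial data"],
    mitigation_measures=["re-weighted training data", "quarterly fairness audit"],
    monitoring_plan="Monthly drift and fairness dashboard reported to compliance",
    human_oversight="Adverse decisions reviewed by a credit officer before issue",
)

print(json.dumps(asdict(aia), indent=2))  # filed with the regulator pre-deployment
```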
Global Alignment
Incorporate Best Practices from EU AI Act
Adopt risk-based frameworks, conformity assessments, post-market monitoring, data governance standards, and human oversight principles.
Learn from US Algorithmic Accountability Act and AI Bill of Rights
Integrate algorithmic impact assessments, bias mitigation, and civil rights protections to reinforce ethical AI governance.
In conclusion, the proposed reforms provide a comprehensive strategy to equip India’s legal and institutional framework for the AI age. By redefining core concepts, empowering regulators, mandating transparency, and aligning with global standards, India can safeguard its algorithmic consumers while fostering responsible innovation and societal well-being.
Implementation Challenges & Future Outlook
Although the suggested reforms are critical for the protection of the algorithmic consumer, the obstacles to successfully implementing them will certainly be significant. Effectively overcoming those obstacles is essential to ensuring the legislative intent of protection translates into actual protection, but does not stifle the growing AI innovation ecosystem in India.
One major concern is achieving a balance between an innovation-friendly regulatory environment and strong consumer protection. India’s fast-growing startup ecosystem, especially in AI, relies on speed and minimal regulatory burden. If regulations are overly prescriptive or applied indiscriminately across all applications, they could unintentionally stall the growth of young AI companies, dissuade investment, and push innovation outside India. The legal environment must therefore be nimble, technology-neutral where possible, and proportionate to risk, so that regulatory burdens are commensurate with potential harm.
Achieving this will require continuous dialogue and partnership among policymakers, industry, academia, and civil society to develop mechanisms that encourage responsible innovation rather than stifling it. As a starting point, regulatory sandboxes for AI, similar to those already used in the fintech sector, could allow controlled experimentation under regulatory supervision, enabling iterative learning and the testing of new AI applications.
A second, equally important challenge is capacity building in legal and regulatory bodies. The technical aspects of AI—from understanding algorithmic bias and data provenance to dealing with emergent behavior and explainability—are not part of traditional legal training. Courts, lawyers, consumer commissions, and regulators will need tailored education and ongoing training in AI technical aspects, data science, and AI ethics. Establishing interdisciplinary expert panels or separate AI divisions within consumer dispute resolution mechanisms is an important starting point. Additionally, public education will be necessary to inform consumers about their new rights and redress mechanisms related to AI.
A phased approach is advisable due to the complexity and scale of these reforms. Instead of implementing everything at once, the initial focus should be on sectors that present the greatest potential harms from AI and are significant for societal or economic impact. Start with high-impact sectors like fintech and healthcare, where AI decisions could affect people’s financial well-being, access to services, or even life-or-death situations. This approach enables regulators to gain experience, develop processes, and build expertise before applying the framework to other sectors. It also allows for the discovery of best practices and lessons learned for later phases.
As India looks ahead, there is an opportunity to move beyond regulatory compliance and become a world leader in ethical AI governance. By proactively addressing AI-related harms and establishing a strong, transparent, and fair product liability process, India can build a trusted digital space. This vision is about protecting citizens while shaping global norms and establishing new international standards for ethical AI development and deployment. Importantly, India can demonstrate how a large, diverse, and technologically advanced country can leverage AI while preserving fundamental rights and principles of justice. More broadly, India can show the world that a human-centric AI future—where technology serves humanity rather than the other way around—is achievable.
Conclusion
Artificial Intelligence’s deep integration into India’s consumer sphere may facilitate unprecedented progress and efficiency, but it also introduces unforeseen risks. As shown in this article, the Consumer Protection Act, 2019, although an advancement for its time, is largely unsuitable for the AI age. The definitions of product and defect are fundamentally shaped by a physical world, making them vague and ineffective when confronted with invisible algorithms and dynamic self-learning systems.
The outdated nature of the CPA 2019 institutionalizes a legal gap that leaves consumers exposed to new forms of AI-induced systemic harm, such as systemic algorithmic bias, multi-layered autonomous system failures, exploitative data processing, and black-box opacity that keeps consumers in the dark about how decisions are made. The evidentiary burden traditionally imposed on consumers, the post-deployment evolution of AI systems, the diffusion of accountability across multi-stakeholder ecosystems, and the rise of non-physical harms all raise insurmountable barriers for the algorithmic consumer seeking justice.
The social cost of inaction is too great. If legal reform is not implemented in a timely and holistic manner, India risks a digital economy marked by diminished consumer trust, rampant algorithmic bias, and widespread feelings of powerlessness. Such an environment would strip consumers of their rights and undermine the principles of fairness and equity that constitute a just society. It could also obstruct responsible innovation: consumers reluctant to use AI-powered services for lack of recourse would create a marketplace in which even ethical developers cannot compete with irresponsible actors.
Consequently, the call to action is clear and immediate: legislators must act proactively rather than reactively, passing reforms before AI’s entrenchment in mainstream Indian life makes the relationship between consumers and AI-driven companies impractical to regulate. By redefining product liability for the AI age, equipping regulatory bodies with domain knowledge and audit capabilities, and imposing transparency requirements anchored in accountability, policymakers can lead by example.
This is not merely about updating a law for a new era; it is about fulfilling the promise of technology to benefit all citizens, while providing protections to ensure that, as AI evolves, every Indian consumer is safeguarded and empowered. Now is the time to act decisively to shape the next decade as one where technology and consumer well-being are symbiotic.