Introduction
Artificial Intelligence (AI) has become a transformative force within the global corporate sector, revolutionizing how companies operate, strategize, and interact with stakeholders. In India, as elsewhere, AI’s integration into corporate decision-making processes, spanning domains such as finance, human resources, marketing, and compliance, has ushered in unprecedented efficiencies and the capacity for predictive analytics. Yet the autonomous and opaque qualities of AI systems pose significant legal and ethical challenges, particularly in assigning responsibility and ensuring regulatory compliance. The unique context of India, with its rapidly digitizing economy and evolving regulatory landscape, underscores the urgency of addressing these issues: Are India’s existing laws sufficient to govern AI-driven corporate actions? How should liability be assigned when AI systems make consequential decisions? And what can India learn from international regulatory approaches?
This research paper critically examines the regulatory gap in India concerning AI, accountability, and corporate decision-making. By analyzing the adequacy of current legal frameworks (including the Companies Act, 2013, the Information Technology Act, 2000, and SEBI guidelines) alongside comparative insights from the European Union’s AI Act and U.S. corporate law, the paper seeks to illuminate the deficiencies in India’s present approach and propose pathways for reform. Employing a doctrinal methodology, the analysis draws on statutory instruments, judicial decisions, and academic literature, with particular attention to the intersection of legal liability, algorithmic bias, data protection, and corporate governance. The central thesis is that India’s legal infrastructure has yet to address effectively the accountability vacuum created by AI-driven decision-making in the corporate sphere. Consequently, legislative innovation, institutional reform, and robust ethical oversight are urgently needed to bridge the regulatory gap while ensuring both innovation and accountability.
The Rise of AI in Indian Corporate Decision-Making
AI’s Transformative Role in Corporate Functions
AI technologies have rapidly permeated the Indian corporate landscape, automating tasks from risk assessment and investment analysis to recruitment and customer engagement. The deployment of machine learning algorithms and data-driven models has enabled Indian firms to optimize supply chains, personalize consumer experiences, and detect fraud with a sophistication previously unattainable. As highlighted by Mukherjee and Chang, AI’s evolution from “advisory roles to proactive execution” marks a fundamental shift, with agentic AI systems now capable of autonomously pursuing long-term goals, making complex decisions, and orchestrating multi-stage workflows without continuous human oversight. Such developments offer clear operational benefits but also disrupt established legal and ethical frameworks, raising critical questions about the locus of control, the validity of consent, and the assignment of responsibility.
The Indian Context: Opportunities and Challenges
India’s embrace of AI is driven by the twin imperatives of economic growth and global competitiveness. The government’s Digital India initiative, a burgeoning tech start-up ecosystem, and the increasing digitization of traditional sectors have created fertile ground for AI innovation. However, this rapid adoption has outpaced the evolution of legal and regulatory mechanisms. Existing statutes, such as the Companies Act, 2013 and the Information Technology Act, 2000, were crafted without anticipation of AI’s current capabilities, leaving significant gaps in areas such as algorithmic transparency, data protection, and liability allocation. The result is a regulatory field characterized by ambiguity and fragmentation, with corporations, regulators, and consumers alike navigating uncharted territory.
Legal Accountability and the “Moral Crumple Zone” of AI
The Problem of Diffused Responsibility
Central to the legal challenges posed by AI in corporate contexts is the phenomenon of the “moral crumple zone”—a condition wherein accountability for AI-driven outcomes is diffused across multiple actors, including developers, users, and third-party vendors, often leaving end-users or consumers in precarious legal and ethical positions. As Mukherjee and Chang observe, this diffusion of responsibility is aggravated by the opacity and autonomy of advanced AI systems, which can execute multi-turn workflows and adapt dynamically to unforeseen conditions. When an AI system makes or materially influences a decision—be it a hiring choice, a credit approval, or a compliance determination—traditional frameworks for responsibility attribution, which presuppose identifiable human agency, are strained to the breaking point.
These issues are not confined to theoretical speculation. For example, if an AI-powered investment platform in India autonomously reallocates client assets based on faulty data or biased algorithms, resulting in financial loss, the question arises: Who bears liability—the corporate entity deploying the system, the developers who designed it, the data providers, or the AI system itself? The “moral crumple zone” thus threatens to undermine one of the foundational principles of corporate law: that responsibility for corporate actions must be traceable and enforceable.
The Limitations of Existing Indian Legal Frameworks
Companies Act, 2013
The Companies Act, 2013 is the principal statute governing corporate conduct in India, emphasizing the duties of directors, disclosure norms, and mechanisms of shareholder protection. While the Act addresses the accountability of corporate officers and establishes internal controls, it is silent on the deployment of autonomous or semi-autonomous AI systems. The statute’s provisions on fiduciary duty, negligence, and fraud presuppose human actors as decision-makers. In the absence of explicit statutory language or interpretive guidance, it remains unclear whether or how directors or officers might be held liable for harms resulting from AI-driven decisions—especially where those decisions are opaque or the causal chain is complex.
Information Technology Act, 2000
The Information Technology Act, 2000 (IT Act) provides the primary legal framework for digital transactions and cybersecurity in India. While the Act addresses unauthorized access, data breaches, and certain cybercrimes, its scope does not extend to the governance of algorithmic decision-making or the specific risks posed by AI systems. Notably, the IT Act’s provisions on “intermediary liability” are ill-suited to scenarios where AI systems act as autonomous agents, making decisions with significant material consequences. The Act’s data protection provisions, meanwhile, have been criticized for their lack of clarity and enforceability, particularly in the context of algorithmic profiling and discrimination.
SEBI Guidelines and Financial Sector Regulation
The Securities and Exchange Board of India (SEBI) has issued various guidelines and circulars addressing algorithmic trading and the use of technology in financial markets. However, these regulations are primarily focused on market integrity and systemic risk, rather than the broader issue of accountability for AI-driven corporate actions. SEBI’s approach, although progressive in some respects, remains reactive rather than anticipatory, lacking detailed provisions on explainability, algorithmic bias, or the assignment of liability in cases of AI-induced harm.
Algorithmic Bias, Data Protection, and Ethical Imperatives
The Challenge of Algorithmic Bias
Algorithmic bias represents one of the most acute risks in AI-driven corporate decision-making. As Birhane, van Dijk, and Pasquale argue, existing AI systems are “never fully autonomous, but always human-machine systems that run on exploited human labour and environmental resources,” and they inherit the biases embedded in their training data. In the Indian context, the risk is exacerbated by the diversity and complexity of local data ecosystems, as well as historical and structural inequities. AI models used in hiring, lending, or law enforcement may inadvertently perpetuate or amplify discrimination against marginalized communities, with limited avenues for redress.
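To make the measurement problem concrete, the following minimal sketch computes a disparate-impact ratio on hypothetical hiring outcomes. The group labels, the counts, and the 0.8 threshold (borrowed from the U.S. “four-fifths” rule of thumb in employment-testing practice, not from any Indian statute) are illustrative assumptions, not legal standards.

```python
# A minimal sketch of a disparate-impact check on hypothetical hiring
# outcomes. All numbers and the 0.8 threshold are illustrative
# assumptions, not prescriptions of Indian law.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who received a positive outcome."""
    return selected / applicants

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return rate_protected / rate_reference

# Hypothetical outcomes from an AI-assisted screening tool.
rate_group_a = selection_rate(selected=45, applicants=100)  # reference group
rate_group_b = selection_rate(selected=27, applicants=100)  # protected group

ratio = disparate_impact_ratio(rate_group_b, rate_group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.60

# A ratio below 0.8 is a common red flag warranting human review,
# not by itself proof of unlawful discrimination.
if ratio < 0.8:
    print("Flag for audit: possible adverse impact on the protected group.")
```

Even so simple a check presupposes access to the system’s inputs and outcomes, which is precisely what current Indian law does not guarantee.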
The opacity of AI (“black box” models) further complicates matters. As Vincze et al. observe, “interpretable policies…advantageously facilitate transparent decision-making within automated systems. Hence, their usage is often essential for diagnosing and mitigating errors, supporting ethical and legal accountability, and fostering trust among stakeholders.” However, the corporate adoption of explainable AI (XAI) remains the exception rather than the norm, and Indian law has yet to mandate transparency or explainability in algorithmic decision-making.
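By way of contrast with a black box, the sketch below trains a deliberately shallow decision tree on invented credit data using scikit-learn. The features and labels are hypothetical, but the exported rules show what “interpretable” means in practice: the complete decision logic can be printed, disclosed, and contested.

```python
# A minimal sketch of an interpretable ("glass box") credit model.
# The features and toy data are invented for illustration; the point
# is that a shallow tree yields human-readable rules, unlike an
# opaque deep model.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["monthly_income_inr_thousands", "existing_debt_ratio"]
X = [
    [30, 0.7], [45, 0.6], [60, 0.2], [80, 0.1],
    [25, 0.8], [55, 0.3], [90, 0.4], [35, 0.5],
]
y = [0, 0, 1, 1, 0, 1, 1, 0]  # 1 = loan approved, 0 = declined

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The entire decision logic can be printed and attached to a disclosure
# or audit file, which is rarely possible with a black-box model.
print(export_text(model, feature_names=features))
```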
Data Protection and Privacy
Data is the lifeblood of AI, and robust data protection is a prerequisite for both ethical AI deployment and public trust. Yet India’s data protection regime, long anchored in the IT Act and only recently supplemented by the Digital Personal Data Protection Act, 2023 (whose implementing rules remain pending), is underdeveloped relative to the demands of the AI era. As Birhane et al. highlight, “training data, foundation to AI systems, is often sourced in questionable manners; uncompensated and unregulated. AI models built on such data necessarily inherit dataset problems such as encoding and exacerbating societal and historical stereotypes.” Until the statutory safeguards for data subject consent and purpose limitation are fully operational, and in the continued absence of any mandate for algorithmic auditability, Indian corporations remain exposed to significant legal and reputational risks.
Corporate Social Responsibility and Ethical Governance
Corporate actors are increasingly expected to go beyond mere legal compliance, embracing broader ethical responsibilities in their use of AI. As Birhane et al. emphasize, the most urgent conversation is not about “robot rights” but about “the duties and responsibilities of the corporations and powerful persons now profiting from sociotechnical systems (including, but not limited to, robots).” This ethical imperative has begun to manifest in voluntary codes of conduct and the establishment of AI ethics committees within leading firms, but such measures remain uneven and largely unenforceable absent regulatory backing.
International Regulatory Approaches: Lessons for India
The European Union: The AI Act and Beyond
The European Union’s AI Act represents the world’s most comprehensive attempt to regulate AI deployment, adopting a risk-based approach that subjects high-risk systems to stringent requirements of transparency, accountability, and human oversight. The Act mandates documentation and traceability, the provision of meaningful explanations, and the establishment of clear lines of responsibility. It also imposes severe penalties for non-compliance, signaling a shift from “soft law” to binding statutory obligations. The EU’s framework is notable for its explicit recognition of the need to “align AI-driven choices with stakeholder values, and maintain ethical safeguards.”
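To make the risk-based architecture concrete, the following sketch maps a few example use cases onto the Act’s broad tiers. The tier names track the Act’s general scheme, but this specific mapping and the summarized obligations are simplified illustrations, not legal classifications under the Act.

```python
# A minimal sketch of the EU AI Act's risk-tier logic. The example
# use cases and this mapping are simplified illustrations, not legal
# classifications under the Act.
RISK_TIERS = {
    "social_scoring_by_public_authorities": "prohibited",
    "credit_scoring": "high_risk",               # access to essential services
    "recruitment_screening": "high_risk",        # employment decisions
    "customer_service_chatbot": "limited_risk",  # transparency duties
    "spam_filtering": "minimal_risk",
}

def obligations(use_case: str) -> str:
    """Summarize, very roughly, what each tier demands of the deployer."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return {
        "prohibited": "May not be placed on the EU market.",
        "high_risk": "Risk management, documentation, human oversight, "
                     "conformity assessment.",
        "limited_risk": "Transparency: users must know they interact with AI.",
        "minimal_risk": "No specific obligations; voluntary codes encouraged.",
    }.get(tier, "Classification required before deployment.")

print(obligations("credit_scoring"))
```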
For India, the EU model offers both inspiration and caution. The modular, sector-specific approach of the AI Act could be adapted to India’s diverse regulatory context, but its success depends on institutional capacity, enforcement mechanisms, and the ability to balance innovation with risk mitigation.
The United States: Corporate Law and Liability
The U.S. approach to AI regulation remains fragmented, with sectoral statutes and agency guidance filling gaps in the absence of comprehensive federal legislation. As Mukherjee and Chang note, the U.S. Copyright Office’s 2023 policy affirms that “AI-generated works lacking substantial human authorship cannot be copyrighted,” reflecting a legal regime wary of granting personhood or rights to non-human actors. In matters of liability, U.S. corporate law continues to prioritize identifiable human agency, although the proliferation of “agentic AI” is beginning to strain traditional doctrines.
The Debate over AI Personhood and Corporate Rights
A recurring theme in the international debate concerns the analogy between AI rights and corporate rights. As Birhane et al. argue, “the best analogy to robot rights is not human rights but corporate rights, a highly controversial concept whose most important effect has been the undermining of worker, consumer, and voter rights by advancing the power of capital to exercise outsized influence on politics and law.” Granting rights or legal standing to AI systems risks further diluting accountability and complicating efforts to regulate harmful conduct. The Indian legal tradition, with its emphasis on human-centric responsibility, is thus well-advised to resist calls for AI personhood, focusing instead on mechanisms to pin responsibility on corporate actors and their human agents.
Bridging the Regulatory Gap: Proposals for Reform
Legislative and Regulatory Innovation
To address the accountability vacuum at the heart of AI-driven corporate decision-making, India must undertake significant legislative innovation. This includes:
- Amending the Companies Act, 2013: The Act should be updated to explicitly address the deployment of AI systems in corporate governance and decision-making. Directors’ duties should be expanded to include oversight of AI risk, with clear standards for due diligence and liability in relation to algorithmic decisions.
- Operationalizing a Comprehensive Data Protection Regime: Building on the Digital Personal Data Protection Act, 2023, India should notify its implementing rules and strengthen provisions for consent, data minimization, purpose limitation, and algorithmic auditability, with special attention to the challenges posed by AI.
- Sector-Specific Regulations: Regulatory bodies such as SEBI, the Reserve Bank of India (RBI), and the Insurance Regulatory and Development Authority (IRDAI) should issue detailed guidelines governing the use of AI, including requirements for explainability, bias mitigation, and human-in-the-loop review for high-stakes applications, as sketched below.
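One minimal form such human-in-the-loop review could take is a routing rule that prevents the system from acting alone on low-confidence or high-stakes decisions. The thresholds, field names, and rupee figure below are hypothetical illustrations, not drawn from any existing SEBI, RBI, or IRDAI guideline.

```python
# A minimal sketch of human-in-the-loop routing for high-stakes AI
# decisions. The cutoffs and field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # e.g. "approve_loan", "reject_claim"
    confidence: float    # model's own confidence in [0, 1]
    stake_inr: float     # monetary exposure of the decision

CONFIDENCE_CUTOFF = 0.90     # below this, a human must decide
HIGH_STAKE_INR = 1_000_000   # above this, a human must decide

def route(decision: Decision) -> str:
    """Return 'auto' if the AI may act alone, else 'human_review'."""
    if (decision.confidence < CONFIDENCE_CUTOFF
            or decision.stake_inr > HIGH_STAKE_INR):
        return "human_review"
    return "auto"

print(route(Decision("approve_loan", confidence=0.97, stake_inr=200_000)))  # auto
print(route(Decision("reject_claim", confidence=0.55, stake_inr=50_000)))   # human_review
```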
Institutional Reforms
- Establishment of AI Ethics Committees: Every large corporation deploying AI in critical decision-making processes should be required to establish an AI Ethics Committee, comprising internal and external experts, to oversee compliance with legal, ethical, and technical standards. These committees would serve as focal points for internal audit, stakeholder engagement, and liaison with regulators.
- Mandatory AI Audits: Regular, independent audits of AI systems, focusing on transparency, fairness, and performance, should be mandated, with findings disclosed to regulators and, where appropriate, to affected stakeholders (see the sketch after this list).
- Enhanced Corporate Disclosure: Public companies should be required to disclose material information regarding their use of AI, including the types of algorithms deployed, risk assessment procedures, and mechanisms for redress in case of adverse outcomes.
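What such an audit might disclose can be made concrete as a machine-readable record. Every field name and value in the sketch below is a hypothetical illustration rather than a format prescribed by any existing Indian regulation.

```python
# A minimal sketch of the kind of machine-readable record a mandatory
# AI audit could produce for regulators and affected stakeholders.
# Every field name and value is a hypothetical illustration.
import json
from datetime import date

audit_record = {
    "system": "retail_credit_scoring_v3",       # hypothetical system name
    "audit_date": date(2025, 1, 15).isoformat(),
    "auditor": "independent_third_party",
    "performance": {"accuracy": 0.91, "false_positive_rate": 0.06},
    "fairness": {"disparate_impact_ratio": 0.84, "threshold": 0.80},
    "transparency": {"model_class": "gradient_boosted_trees",
                     "explanations_available": True},
    "human_oversight": {"review_required_above_inr": 1_000_000},
    "findings": ["No adverse-impact flag this cycle."],
}

# Disclose to the regulator and, where appropriate, to stakeholders.
print(json.dumps(audit_record, indent=2))
```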
Fostering Interpretability and Human Oversight
As Vincze et al. demonstrate, interpretable AI architectures such as SMOSE “facilitate transparent decision-making within automated systems,” enabling corporations to diagnose and correct errors, support ethical and legal accountability, and foster trust among stakeholders. Indian regulators should incentivize or require the use of interpretable AI in high-risk contexts, and ensure that human oversight is embedded throughout the decision-making pipeline.
Building Capacity and Public Awareness
Finally, bridging the regulatory gap will require investments in institutional capacity and public awareness. Regulators must be equipped with the technical expertise to evaluate AI systems, and stakeholders—including consumers, employees, and civil society organizations—must be empowered to understand and challenge algorithmic decisions that affect their rights and interests.
Conclusion
The accelerating integration of AI into Indian corporate decision-making presents both immense opportunities and formidable legal and ethical challenges. The current regulatory framework—anchored in statutes conceived before the AI revolution—offers inadequate tools for ensuring accountability, transparency, and fairness. As AI systems become more autonomous and opaque, the diffusion of responsibility threatens to erode the very foundations of corporate governance and legal liability.
Drawing on comparative international experience and leading academic analysis, this paper has argued that India must undertake a concerted program of legislative, institutional, and ethical reform to bridge the regulatory gap. Key priorities include amending core corporate and data protection statutes, establishing robust oversight mechanisms, mandating transparency and interpretability, and fostering a culture of ethical responsibility within the corporate sector. Only by embracing these reforms can India realize the promise of AI-driven innovation while safeguarding the rights, interests, and trust of all stakeholders.
In the final analysis, the challenge is not to grant rights to AI, nor to absolve corporations of responsibility, but to ensure that the deployment of AI in corporate decision-making is guided by principles of accountability, transparency, and justice. As the boundaries between human and machine agency continue to blur, the imperative for clear, enforceable, and forward-looking regulation has never been more urgent.
Endnotes:
- Abbott, Ryan. “The Reasonable Robot: Artificial Intelligence and the Law.” Cambridge: Cambridge University Press, 2020.
- Birhane, Abeba, Jelle van Dijk, and Frank Pasquale. “Debunking Robot Rights Metaphysically, Ethically, and Legally.” arXiv preprint arXiv:2404.10072v1 (2024). http://arxiv.org/pdf/2404.10072v1
- Bridy, Annemarie. “Coding Creativity: Copyright and the Artificially Intelligent Author.” Stanford Technology Law Review 5 (2012): 1-28.
- Mukherjee, Anirban, and Hannah Hanwen Chang. “Agentic AI: Autonomy, Accountability, and the Algorithmic Society.” arXiv preprint arXiv:2502.00289v3 (2025). http://arxiv.org/pdf/2502.00289v3
- Vincze, Mátyás, Laura Ferrarotti, Leonardo Lucio Custode, Bruno Lepri, and Giovanni Iacca. “SMOSE: Sparse Mixture of Shallow Experts for Interpretable Reinforcement Learning in Continuous Control Tasks.” arXiv preprint arXiv:2412.13053v1 (2024). http://arxiv.org/pdf/2412.13053v1