Unpacking MeitY’s Report on AI Governance Guidelines Development
by Deepank Singhal, Senior Associate, and Raashi Vaishya, Associate
The rise of AI platforms has ushered in an era of unprecedented empowerment, granting users the ability to research and create with ease—capabilities once reserved for experts in the field. Yet, like all great innovations, AI carries a shadow, bringing forth complex challenges that demand thoughtful governance and regulation. The key to crafting effective solutions lies in first recognizing the right problems. In this regard, the Ministry of Electronics and Information Technology (“MeitY”) deserves commendation for its foresight in identifying both present and potential issues, ensuring that India remains prepared amidst the rapid global evolution of AI.
Recognizing the transformative impact of Artificial Intelligence (“AI”) on our lives, on 6th January, 2025, MeitY published the “Report on AI Governance Guidelines Development” (“Report”) for public consultation.1 This Report provides guidance on the principles of AI governance, while laying the groundwork to enable and regulate the sustainable and ethical development of AI technologies in India.
While the last date for providing comments on the Report was 27th January, 2025,2 MeitY reportedly extended this deadline till 27th February, 2025.3
Genesis of the AI Governance Guidelines Development Report
The discussion surrounding AI governance has intensified, especially with the release of this comprehensive Report. But what does AI governance mean? Simply put, it refers to the frameworks and principles that ensure AI systems operate ethically, safely, and inclusively, minimizing risks while maximizing benefits for society.
India’s diverse socio-economic landscape presents immense opportunities for AI-driven growth, but these come with significant risks. Strong governance is crucial to ensure responsible and inclusive progress. To address this, a multi-stakeholder advisory group, chaired by the Principal Scientific Adviser (“PSA”), was formed (“Advisory Group”). This Advisory Group included representatives from various ministries and sectors, tasked with crafting an ‘AI for India-Specific Regulatory Framework’ and providing strategic guidance on AI governance.
On 9th November, 2023, MeitY formally established a subcommittee on ‘AI Governance and Guidelines Development’ under the Advisory Group’s guidance. The subcommittee aimed to identify key governance issues, analyse regulatory gaps, and propose actionable strategies to ensure AI systems in India are trustworthy and accountable.
In this piece, we explore in detail the insights and recommendations offered in the Report to foster a trustworthy, innovation-driven AI ecosystem, while addressing current legal and regulatory gaps.
Need for AI Governance
From the publication of Alan Turing’s “Computing Machinery and Intelligence” in 1950, which gave rise to the Turing Test used by experts to measure machine intelligence, to present-day generative AI platforms like ChatGPT, Stability AI, and Perplexity AI, the field of Artificial Intelligence has witnessed tremendous growth. Over the last decade, advancements in machine learning, access to vast datasets, improvements in computational capabilities, progress in natural language processing, and the widespread adoption of connected devices have collectively accelerated AI’s capabilities. These developments have led to the emergence of “foundation models”, which are highly versatile AI systems, trained on vast datasets, capable of powering diverse applications, including generative AI tools that perform tasks ranging from content creation to complex decision-making.
Amidst growing concerns over AI hallucinations, i.e., the generation of incorrect or misleading outputs by AI, especially in view of minimal human control or supervision, it becomes imperative to understand the interaction mechanism between different components within the system and identify the specific component leading to any potential harm.
Core Principles of AI Governance
Aligning with the efforts of various national and global organizations working in the domain of AI governance principles, including but not limited to the Organisation for Economic Co-operation and Development (“OECD”), NITI Aayog, and the National Association of Software and Service Companies (“NASSCOM”), the subcommittee proposed the following AI governance principles:
- Transparency: AI systems should provide complete information on their development, processes, capabilities, and limitations, equipping users with the necessary disclosures.
- Accountability: Developers and deployers must ensure AI accountability, user rights, and legal compliance with clear responsibility mechanisms.
- Safety and Robustness: AI systems must be developed, deployed, and monitored for safety, reliability, and robustness to minimize risks, prevent misuse, and ensure they function as intended.
- Privacy and Security: AI systems must comply with data protection laws, respect user privacy, and ensure data quality, integrity, and security-by-design.
- Fairness and Non-Discrimination: AI systems must be fair, inclusive, and free from discrimination, bias, or undue preference toward any individual or group.
- Human-Centered Values: AI systems must have human oversight to prevent overreliance, address ethical dilemmas, and ensure legal compliance and societal well-being.
- Inclusive and Sustainable Innovation: AI development and deployment should equitably distribute benefits and support sustainable development for all.
- Digital by Design Governance: AI governance should use digital technologies and techno-legal measures to enhance regulation, compliance, and principled operations.
Operationalizing the Principles of AI Governance
The subcommittee identified three approaches for operationalizing these principles, i.e., bringing them to life:
- Lifecycle Approach: A lifecycle approach to AI governance ensures that ethical principles are effectively implemented by addressing risks at different stages: development (design, training, and testing), deployment (operational use), and diffusion (long-term impact across sectors). Considering the entire lifecycle helps in managing risks, ensuring compliance, and promoting responsible AI use.
- Ecosystem View: AI governance should take an ecosystem-wide approach, as multiple actors—such as data principals, providers, developers, deployers, and end-users—are involved throughout the AI lifecycle. Focusing on a single group in isolation limits effectiveness, whereas a holistic perspective ensures better outcomes by clarifying responsibilities and liabilities across the ecosystem.
- Leveraging Technology for Governance: A conventional governance strategy may not be sufficient given the rapid development and deployment of AI models. Integrating a “techno-legal” approach, where technology complements legal frameworks, can help mitigate risks, enhance scalability and effectiveness of governance. By using technological tools such as compliance systems, blockchain, and smart contracts, regulators can better monitor AI systems and enforce compliance across vast ecosystems. Such technology can help allocate liabilities and establish clear chains of responsibility among actors.4 While these tools support self-regulation, they need periodic reviews to ensure security, accuracy, fairness, and protection of users’ fundamental rights.
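By way of illustration only, the following Python sketch shows how a hash-chained log of “consent artefact”-style records (the concept referenced in footnote 4) could make each actor’s actions in the AI lifecycle traceable and tamper-evident, in the spirit of the techno-legal measures described above. All class names, fields, and roles are hypothetical and are not drawn from the Report or from MeitY’s Electronic Consent Framework.

```python
# Illustrative sketch only: a hash-chained "consent artefact" log for traceability.
# Names and fields are hypothetical, not taken from the Report or any MeitY framework.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ConsentArtefact:
    actor: str          # e.g., "data principal", "developer", "deployer"
    action: str         # e.g., "consent granted for training dataset D1"
    timestamp: str
    previous_hash: str  # digest of the preceding artefact, forming a chain

    def digest(self) -> str:
        # Deterministic hash over the record's contents
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


def append_artefact(chain: list[ConsentArtefact], actor: str, action: str) -> ConsentArtefact:
    """Append a new artefact linked to the hash of the previous one."""
    previous = chain[-1].digest() if chain else "GENESIS"
    artefact = ConsentArtefact(
        actor=actor,
        action=action,
        timestamp=datetime.now(timezone.utc).isoformat(),
        previous_hash=previous,
    )
    chain.append(artefact)
    return artefact


def verify_chain(chain: list[ConsentArtefact]) -> bool:
    """Tampering with any earlier record breaks every later previous_hash link."""
    return all(
        chain[i].previous_hash == chain[i - 1].digest()
        for i in range(1, len(chain))
    )


if __name__ == "__main__":
    chain: list[ConsentArtefact] = []
    append_artefact(chain, "data principal", "consent granted for dataset D1")
    append_artefact(chain, "developer", "model trained on dataset D1")
    append_artefact(chain, "deployer", "model deployed in application A1")
    print("chain intact:", verify_chain(chain))  # True unless a record is altered
```

Because each record embeds the hash of its predecessor, altering any earlier entry invalidates every later link, which is the property that makes such logs useful for establishing chains of responsibility among actors.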
Gap Analysis
Keeping in view the applicability of existing laws to the use of AI, the gap analysis in the Report evaluates whether current laws are suitable for addressing risks posed by rapidly evolving AI. This process involved identifying areas of concern where AI could amplify known harms and exploring ways to strengthen compliance and enforcement mechanisms.
To this end, three key aspects emerged as priorities:
- Ensuring existing laws are enforced effectively to address risks exacerbated by AI systems.
- Equipping regulators with awareness of the AI ecosystem, including its data, models, applications, and stakeholders.
- Acknowledging the dynamic nature of AI and adopting a whole-of-government approach to tackle emerging challenges.
With these priorities in mind, the Report subsequently examines how existing laws can be effectively enforced and adapted to address the unique risks and challenges posed by AI systems.
- Deepfakes, Fakes, and Malicious Content
The misuse of foundation models to create malicious synthetic media (e.g., deepfakes) is addressed by existing legal safeguards as outlined below:
- Information Technology Act, 2000 (“IT Act”):
- Section 66D criminalizes the use of computer resources for cheating by personation.
- Section 66E prescribes punishment for capturing, publishing, or transmitting images of private areas without consent.
- Sections 67A and 67B address the publication or transmission of obscene material, including deepfake-generated content.
- Bharatiya Nyaya Sanhita, 2023 (“BNS”):
- Identity theft and cheating by personation are covered under Sections 319 (cheating by personation), 336 (forgery for the purpose of cheating), 294 to 296 (selling/circulating/distributing obscene objects/media), and 356 (defamation) of the BNS.
- Other Relevant Laws:
- Protection of Children from Sexual Offences Act, 2012 (Section 12): Sexual harassment of children.
- Juvenile Justice Act, 2015 (Section 75): Causing harm to children.
- Copyright Act (Section 51): Infringement of copyrighted works.
- Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”):
- Rule 3(1)(b): Requires intermediaries to inform users of their privacy policies, user agreements, rules and regulations, and take reasonable efforts to prevent dissemination of certain content5 which may cause harm to users.
- Rule 3(1)(c): Mandates intermediaries to periodically (at least once every year) inform users about the effects of non-compliance with its privacy policies, user agreements, rules, and regulations.
- Rule 3(2)(b): Requires intermediaries to remove or disable access to content which is in the nature of impersonation in electronic form, including artificially morphed images, within 24 hours of receiving a user complaint.

While the legal framework appears robust for addressing malicious synthetic media, its effectiveness depends on enhanced capabilities for stakeholder compliance and enforcement by authorities. As per the Report, this creates opportunities for technological measures, such as assigning unique, immutable identifiers to different participants (e.g., content creators, publishers, etc.) to establish traceability, and embedding watermarks in AI-generated content, enabling deepfakes to be tracked and monitored. This approach may help identify when deepfakes are created unlawfully or without consent, ensuring quicker detection and removal.
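To make the traceability idea more concrete, here is a minimal, purely illustrative Python sketch of how a keyed provenance tag could bind a piece of generated content to the identifiers of its creator and the generating model, so that altered or untagged content can be flagged. It illustrates metadata-style tagging rather than robust in-content watermarking, and every key and identifier in it is hypothetical; neither the Report nor the IT Rules prescribes any particular scheme.

```python
# Illustrative sketch only: a keyed provenance tag for AI-generated content.
# Keys and identifiers are hypothetical; no specific scheme is prescribed by the Report.
import hashlib
import hmac

PROVENANCE_KEY = b"demo-signing-key"  # in practice, held securely by the generating platform


def tag_content(content: bytes, creator_id: str, model_id: str) -> str:
    """Return a provenance tag binding the content to its creator and model identifiers."""
    message = content + creator_id.encode() + model_id.encode()
    return hmac.new(PROVENANCE_KEY, message, hashlib.sha256).hexdigest()


def verify_content(content: bytes, creator_id: str, model_id: str, tag: str) -> bool:
    """True only if the content, creator, and model still match the original tag."""
    expected = tag_content(content, creator_id, model_id)
    return hmac.compare_digest(expected, tag)


if __name__ == "__main__":
    media = b"...synthetic video bytes..."
    tag = tag_content(media, creator_id="creator-001", model_id="gen-model-xyz")

    print(verify_content(media, "creator-001", "gen-model-xyz", tag))            # True
    print(verify_content(media + b"edit", "creator-001", "gen-model-xyz", tag))  # False: altered content
```

In practice, such tags would need secure key management and standardized provenance metadata to operate at the scale the Report contemplates.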
- Cybersecurity
The use of AI to compromise cybersecurity can be addressed under existing laws, primarily the IT Act, which provides for the reporting of incidents to the Indian Computer Emergency Response Team (“CERT-In”) under Section 70B, and for the protection of critical information infrastructure through the National Critical Information Infrastructure Protection Centre (“NCIIPC”) under Sections 70 and 70A.
Other relevant regulations include:
- The Digital Personal Data Protection Act, 2023 (“DPDPA”), which, inter alia, mandates data fiduciaries to implement appropriate security safeguards against potential personal data breaches.
- Various cybersecurity guidelines have also been introduced by sectoral regulators such as:
- Reserve Bank of India (“RBI”),
- Securities and Exchange Board of India (“SEBI”),
- Insurance Regulatory and Development Authority of India (“IRDAI”), and
- Department of Telecommunications (“DoT”).
AI’s ability to enable even non-technical individuals to carry out sophisticated cyberattacks highlights the need to strengthen cybersecurity measures for AI systems. To this end, providers of AI systems could benefit from clear guidance on building systems that are “secure by design”, ensuring they are secure and function as intended from the start.
- Intellectual Property Rights
Training foundation models on copyrighted works without permission can infringe the copyright holder’s exclusive rights.
Indian copyright law provides narrow exceptions under Section 52(1)(a)(i), primarily for personal or private use.
However, commercial or institutional research is not exempted. Some key concerns highlighted in the Report include:
- Whether AI systems should be allowed to train on copyrighted data without the explicit consent of each copyright holder?
- If yes, under what circumstances would such training not amount to a violation of the rights holders’ copyright?
These issues are of paramount importance, especially in view of the ongoing litigation against OpenAI before the Delhi High Court, where the court is in the process of deciding similar issues: whether training AI models on copyrighted data without the approval of rights holders amounts to copyright infringement, and whether such training qualifies as ‘fair use’ under Section 52 of the Copyright Act, 1957.6
The Report suggests that addressing these issues and introducing guardrails is vital to:
- Balance copyright holders’ rights and development of AI
- Ensure lawful use of AI systems
Further, Indian copyright law requires human authorship for copyright protection, raising questions about works generated by foundation models:
- How much human input is necessary to qualify the user/developer of an AI system as the “author” of a generated work?
- Can AI-generated works be categorized as “computer-generated works” under the Indian copyright law?
Proactive guidance from relevant authorities, such as the Copyright Office, together with consultation with relevant stakeholders, including intellectual property-focused government and private organizations and think tanks, would provide clarity and consistency in addressing these issues. Opportunities to leverage techno-legal measures to ensure responsible data use in AI training models should also be examined.
- AI-Driven Bias and Discrimination
Biases embedded in AI systems can have far-reaching consequences depending on the context and scale of deployment, particularly because such biases are often not traceable by end users. Existing laws, such as employment laws, minority protection laws, and consumer protection laws, provide safeguards against discrimination. However, AI introduces unique challenges:
- People may not easily trace or prove AI-based discrimination due to its complexity.
- AI can unintentionally reinforce existing biases, leading to legal violations (e.g., in recruitment processes where AI systems might be trained to filter out certain classes of applicants).
- Companies using AI tools may not fully understand the risks or how to address them.
Ensuring consumer rights are protected when AI is used in decision-making is an evolving subject.
Accordingly, the Report emphasises the need for a ‘whole-of-government-approach’ to effectively address concerns about AI biases.
Subcommittee’s Recommendations
To govern AI effectively, regulators need clear information on traceability of data, models, systems, and actors across the AI lifecycle, along with transparency in how liabilities and risks are managed between parties. This may help create targeted governance mechanisms.
While existing sectoral laws can address AI risks in regulated industries like health or finance, some risks may spill over across sectors, requiring a broader approach. A baseline framework ensuring transparency and accountability across the AI ecosystem, along with a whole-of-government approach, is crucial to (i) avoid inefficiencies by assessing AI systems in silos, and (ii) help in addressing cross-sectoral challenges effectively.
In this vein, the subcommittee has made the following recommendations in the Report:
- Implement a Whole-of-Government Approach by Establishing an Inter-Ministerial AI Coordination Committee/Governance Group:
To foster a unified and effective approach to AI governance, the Report recommends that MeitY and the PSA establish an empowered Inter-Ministerial AI Coordination Committee or Governance Group (“Committee”). This Committee should function as an ongoing, permanent mechanism, bringing together national-level authorities and key institutions to align efforts under a common roadmap. Such coordination is vital given the complexity of AI systems.
The Committee should prioritize coordinating efforts of the authorities and institutions to implement a whole-of-government approach, enabling regulators and departments to address AI-related risks effectively. By fostering ongoing dialogue and mapping the AI ecosystem, the group can identify gaps and challenges without overburdening stakeholders with excessive regulatory requirements. Regular meetings of the Committee should drive initiatives to:
- Strengthen existing laws to minimize AI-related harm;
- Issue joint guidance for legal clarity and certainty;
- Harmonize common terminologies and risk inventories;
- Support self-regulation aligned with responsible AI principles;
- Address gaps through coordinated multi-regulator efforts;
- Encourage AI applications for societal benefit; and
- Facilitate access to Indian-context datasets for assessing fairness and bias in AI models.
The Committee should include a mix of official members from key government bodies and non-official members representing industry, academia, and end-user perspectives. By engaging external experts, the Committee can integrate diverse insights to strengthen governance frameworks and support the responsible development of AI systems in India.
- Establishing a Technical Secretariat to Serve as a Technical Advisory Body and Coordination Point for the Committee:
The Report recommends that MeitY establish a Technical Secretariat as an advisory and coordination hub for the Committee. Comprising officials, regulators, and experts, it would oversee stakeholder mapping, trend analysis, and risk assessment in AI-related areas like online safety, data governance, and employment. It would also develop standardized metrics, engage industry for solutions, and identify regulatory gaps. The Secretariat, along with the Committee, should remain non-statutory for now, staffed by MeitY officials, consultants, and an AI Sub-Group to finalize its structure.
- Establishing an AI Incident Database as a Repository for Real-World Problems to Mitigate Risks:
The Report recommends creating an AI incident database to document real-world AI risks, focusing on mitigation rather than fault-finding. Initially, public sector AI deployments should report incidents, with private entities encouraged to contribute. AI incidents may extend beyond cybersecurity issues to include malfunctions, discrimination, privacy violations, and safety risks. Inspired by global models like the OECD AI Incidents Monitor, this database should function independently of cybersecurity frameworks, fostering learning without penalization. CERT-In may manage it under the Secretariat’s guidance.
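Purely as an illustration of what a minimal record in such a database might contain, the Python sketch below models an incident entry and an in-memory repository, with categories taken from the harms listed above (malfunctions, discrimination, privacy violations, safety risks, and cybersecurity incidents). The schema and field names are hypothetical and are not proposed by the Report.

```python
# Illustrative sketch only: a minimal AI incident record and in-memory repository.
# The schema, field names, and categories are hypothetical, derived from the harms above.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class IncidentCategory(Enum):
    MALFUNCTION = "malfunction"
    DISCRIMINATION = "discrimination"
    PRIVACY_VIOLATION = "privacy_violation"
    SAFETY_RISK = "safety_risk"
    CYBERSECURITY = "cybersecurity"


@dataclass
class AIIncident:
    reporting_entity: str        # e.g., a public sector deployer
    system_description: str      # the AI system involved, in plain terms
    category: IncidentCategory
    summary: str                 # what happened, framed for learning rather than fault-finding
    sector: str                  # e.g., "health", "finance"
    reported_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


class IncidentRepository:
    """An in-memory stand-in for the proposed incident database."""

    def __init__(self) -> None:
        self._incidents: list[AIIncident] = []

    def report(self, incident: AIIncident) -> None:
        self._incidents.append(incident)

    def by_category(self, category: IncidentCategory) -> list[AIIncident]:
        return [i for i in self._incidents if i.category == category]


if __name__ == "__main__":
    repo = IncidentRepository()
    repo.report(AIIncident(
        reporting_entity="State health department",
        system_description="Triage-assistance model used in public hospitals",
        category=IncidentCategory.DISCRIMINATION,
        summary="Model systematically deprioritised patients from one district",
        sector="health",
    ))
    print(len(repo.by_category(IncidentCategory.DISCRIMINATION)))  # 1
```

The example deliberately omits any ‘liable party’ field, mirroring the Report’s emphasis on mitigation and learning rather than penalization.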
- Collaboration of the Technical Secretariat with the Industry to Encourage Voluntary Commitments on Transparency:
The Report suggests the Secretariat collaborate with the industry to promote voluntary transparency commitments, starting with self-regulation and evaluating disclosures. This includes transparency reports, AI system purpose disclosures, red-teaming7, data monitoring, peer reviews, and security checks. Governments using AI should adopt similar governance measures, and standardized risk assessment protocols should be encouraged. Regulators should oversee these efforts, ensuring compliance while minimizing the need for strict regulations.
- Examining Technological Measures to Mitigate AI Risks:
The Secretariat should assess technological tools to mitigate AI risks through a systems-level approach, enabling real-time tracking of negative outcomes in sectors like healthcare and finance. While legal frameworks address issues like synthetic media, technological solutions can enhance compliance and enforcement. Tools such as watermarking, platform labelling, and fact-checking should be evaluated for effectiveness. Additionally, a gap analysis should identify shortcomings in prevention, detection, and reporting. This dual approach strengthens AI risk management and regulatory resilience.
- Formation of a Sub-Group with MeitY to Suggest Measures to Strengthen the Legal Framework:
The Report suggests forming a sub-group to work with MeitY on strengthening the Digital India Act (DIA) by enhancing legal, regulatory, and grievance redressal frameworks. The DIA should address AI risks, digital business models, and improve adjudication through skilled personnel and online dispute resolution. It recommends reviewing Grievance Appellate Committees (GACs) and Adjudicating Officers (AOs), considering full-time specialized roles and broader eligibility criteria. The sub-group should ensure the DIA remains future-ready while supporting digital growth.
Conclusion
As India strides toward becoming a leader in AI innovation, this Report marks a pivotal moment as it underscores that effective AI governance must prioritize ‘harm mitigation’ as its foundational regulatory principle, while operationalizing the eight principles outlined above. The Report’s recommendations are both comprehensive and timely, offering a nuanced framework for balancing innovation with responsibility. By embracing principles like transparency, accountability, and inclusivity, and through its recommendations for a whole-of-government approach, the Report reflects an ambitious, yet necessary, roadmap for the responsible development of AI in India. The Report urges regulatory and private stakeholders to collaborate on comprehensive solutions to mitigate AI-related risks. Notably, it prioritizes identifying and analysing harmful incidents to develop a robust regulatory framework rather than focusing on penalization. The recommendation to establish a full-time Technical Secretariat and a sub-group within MeitY, considering the rapid advancements in AI, is a significant step toward creating forward-looking solutions that align legal and regulatory frameworks with technological progress.
While this initiative is a timely response to the pressing ethical and regulatory challenges posed by AI, its success will depend on continuous dialogue among stakeholders, formulation of adaptive policies upon identification of gaps within the existing regulatory framework, and proactive implementation thereof. Accordingly, the challenge now lies in translating this vision into actionable outcomes that not only safeguard society against unintended harm, but also inspire confidence and creativity in the AI community.
Footnotes
[1] See: https://indiaai.gov.in/article/report-on-ai-governance-guidelines-development; Report available here.
[2] Feedback/ Comments on the Report, if any, can be submitted here.
[3] See: https://www.storyboard18.com/how-it-works/meity-extends-deadline-for-public-comments-on-report-on-ai-governance-guidelines-development-54566.htm.
[4] For example, “consent artefacts” in MeitY’s Electronic Consent Framework could be adapted for AI ecosystems to track activities and establish liability chains.
[5] Rule 3(1)(b)(i) to (xi) of the IT Rules, 2021; Available here.
[6] See, ANI Media Pvt. Ltd. v. OpenAI Inc. & Anr., CS (Comm) 1028 of 2024, before the Hon’ble High Court of Delhi.
[7] In simple terms, red-teaming is like testing a system to find its weaknesses before others can exploit them. Experts play the role of “attackers” to deliberately try to break or misuse an AI system, identify errors, or uncover biases. The goal is to make the AI safer, more reliable, and less vulnerable to harm or misuse.