IFF: New Delhi: Tuesday, 04 March 2025.
MeitY released its report ‘AI Governance Guidelines Development’ on January 6, 2025, and opened it up for public feedback to guide the development of a trustworthy and accountable AI ecosystem in India. This post summarises IFF’s response.
The rapid development of Artificial Intelligence (“AI”) has opened new possibilities in governance, public policy, industry, and civil society. Yet, these advancements also raise questions about fairness, accountability, and the safeguarding of constitutional rights. In our submission to MeitY on the Draft AI Governance Guidelines, we at the Internet Freedom Foundation (“IFF”) offered concrete recommendations to ensure that India’s approach to AI remains inclusive, transparent, and constitutionally anchored.
Deficiencies in the Consultative Process
When the consultation on AI governance was first announced, the window for comments was set at only 20 days, far too short to allow meaningful participation. Following urging from IFF and other civil society organizations, MeitY extended the deadline. However, limiting the consultation to online form-based submissions and English-language documents still excludes a broad spectrum of stakeholders. Such constraints prevent marginalized communities, smaller businesses, and non-English speakers from effectively voicing their concerns.
What We Recommend
- Longer, Multi-Lingual Consultations: In line with the Pre-legislative Consultation Policy (PLCP), 2014, consultations should be accessible in multiple languages, include offline mechanisms such as public hearings, and allow enough time for meaningful engagement.
- Greater Transparency: All submissions should be made public, and counter-comments should be encouraged to foster a more robust, transparent exchange of views.
- Greater Coherence: It is unclear if this report supersedes or complements earlier AI policy efforts, raising concerns about fragmentation. The government should create a comprehensive, unified AI policy framework that integrates and builds upon existing foundational documents.
Since 2018, multiple international bodies have endorsed high-level ethical principles for AI, covering concepts such as transparency, privacy, and accountability. Yet, these principles often remain purely aspirational if not backed by clear enforcement mechanisms. Our submission points out that while broad ideals are valuable, AI governance in India must go further, embedding constitutional norms into actual, binding rules.
What We Recommend
- Concrete Implementation Roadmaps: To ensure that transparency remains a cornerstone of India’s AI and data governance framework, it is crucial to address potential conflicts between the principles of transparency and privacy rights. For example, the amendment of Section 8(1)(j) of the Right to Information Act, 2005, under the Digital Personal Data Protection Act, 2023, may inadvertently undermine the public’s ability to access critical information, especially when it pertains to the functioning and use of AI-driven systems and the handling of personal data. Specific guidelines and legal mandates are therefore required to translate principles like “fairness” and “non-discrimination” into practice.
- Risk-Based Categorization: Similar to the EU AI Act, India’s framework should treat high-risk AI systems (e.g., those used in law enforcement or social welfare distribution) differently from low-risk applications (e.g., simple chatbots).
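To make the idea of risk-based categorization concrete, the short Python sketch below shows one way compliance obligations could scale with risk. It is a minimal illustration only: the tiers, example use cases, and obligations are hypothetical stand-ins of our own, not proposals drawn from the draft report or from IFF’s submission.

```python
# A purely illustrative sketch of risk-based categorization for AI systems.
# Tiers, examples, and obligations are hypothetical, loosely modelled on the
# EU AI Act's tiered structure; they are not taken from the draft report.
from enum import Enum


class RiskTier(Enum):
    HIGH = "high"        # e.g. predictive policing, welfare eligibility scoring
    LIMITED = "limited"  # e.g. customer-facing chatbots, recommender systems
    MINIMAL = "minimal"  # e.g. spam filters, spell checkers


# Hypothetical obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.HIGH: [
        "pre-deployment algorithmic impact assessment",
        "independent third-party audit",
        "human oversight of individual decisions",
        "public registration of the system",
    ],
    RiskTier.LIMITED: ["transparency notice informing users they are interacting with an AI system"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance checklist a deployer would face under this sketch."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

The point of such a structure is that duties scale with potential harm: a welfare-eligibility scorer attracts audit and human-oversight requirements that a spell checker does not.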
IFF has consistently advocated for a rights-oriented framework to guide tech policy. India’s Constitution, with its emphasis on liberty, equality, and social justice, provides a solid foundation for AI governance. In a context where AI influences policing, credit scoring, and even welfare distribution, upholding constitutional protections is non-negotiable. Additionally, the NITI Aayog’s Responsible AI Principles recognized constitutional morality, but the draft report remains largely silent on how to operationalize these standards.
Key Considerations
- Grounding in Articles 14, 19, and 21 of the Constitution: These Articles guarantee equality, free expression, and the right to life and personal liberty. AI systems deployed by public or private actors cannot be allowed to infringe upon these rights.
- Accountability for Both State and Private Entities: Even though constitutional law traditionally applies to public bodies, Supreme Court jurisprudence has clarified that private actors, especially those wielding significant influence over people’s rights, must also be held accountable. This is particularly relevant where tech giants wield enormous power with minimal oversight owing to monopolistic concentration. Under India’s welfare model, the state bears moral and legal obligations to safeguard fundamental rights.
The draft report suggests self-regulatory frameworks in which companies voluntarily adopt ethical guidelines. While private-sector input is valuable, a self-regulation-only model risks enabling unchecked commercial interests to overshadow public welfare. Such a light-touch framework exacerbates the risks posed by AI systems, including data misuse, bias, and manipulation.
What We Recommend
- Enforceable Standards and Independent Audits: Relying solely on voluntary initiatives leaves accountability gaps. Instead, legislation should mandate external audits, algorithmic impact assessments, and oversight mechanisms to keep corporate players in check.
- Avoiding Digital-by-Design Pitfalls: Automating governance functions has the potential to exclude those who have limited digital literacy or access. Making services “digital-by-default” must be weighed against the risk of deepening India’s existing digital divides.
India’s existing laws, whether on cybercrimes, data protection, or intellectual property, were created with a more traditional, human-centric decision-making model in mind. They often fail to anticipate AI-specific issues, ranging from mass data scraping and automated profiling to algorithmic biases in gig work.
Areas Needing Attention
- Privacy and Data Protection: Although India has passed the Digital Personal Data Protection Act, 2023, the law itself causes real harms that reduce transparency (for instance, by restricting the scope of the RTI Act). AI governance must ensure robust data safeguards and the possibility of meaningful redress for privacy violations.
- Algorithmic Fairness and Non-Discrimination: Legal provisions must address new forms of “proxy discrimination,” where AI systems treat individuals differently based on correlated data points that effectively stand in for protected attributes like caste or religion (see the illustrative sketch after this list).
- Gig Worker Protections: From ride-hailing to food delivery, AI-based algorithms have become gatekeepers for allocating tasks and determining wages, yet are rarely open to scrutiny. A law or clear regulatory guidelines are essential to safeguard gig workers’ interests and ensure fair labor practices.
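To illustrate the mechanism of proxy discrimination flagged above, the sketch below uses purely synthetic data. The “pincode” feature, the approval rule, and all numbers are hypothetical; the example only shows how a system that never sees a protected attribute can still produce starkly unequal outcomes, and how a simple disparate-impact check can surface that.

```python
# Purely illustrative sketch with synthetic data: a decision rule that never
# sees the protected attribute ("group"), yet discriminates through a
# correlated proxy feature ("pincode"). All names and numbers are hypothetical.
import random

random.seed(0)

# Synthetic applicants: group membership is correlated with pincode.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    p_proxy = 0.8 if group == "B" else 0.1  # group B mostly lives in pincode "110x"
    pincode = "110x" if random.random() < p_proxy else "560x"
    applicants.append({"group": group, "pincode": pincode})


def approve(applicant):
    """A 'neutral-looking' rule that only consults the proxy feature."""
    return applicant["pincode"] == "560x"


# Disparate-impact check: compare approval rates across groups.
rates = {}
for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    rates[g] = sum(approve(a) for a in members) / len(members)

print(rates)                                     # roughly {'A': 0.9, 'B': 0.2}
print("impact ratio:", rates["B"] / rates["A"])  # well below the 0.8 "four-fifths" rule of thumb
```

An external audit of the kind recommended earlier would look for exactly this pattern: the rule is facially neutral, yet the measured outcome gap across groups is stark.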
AI holds transformative potential for India, but unfettered growth of powerful, opaque systems can also magnify discrimination, invade privacy, and weaken civil liberties. As we at IFF emphasized in our submission, a robust AI governance framework must integrate constitutional values, inclusive public consultation, and enforceable legal standards. Self-regulation and broad ethical principles are a helpful start, but they are simply not enough to guarantee the rights and welfare of all Indians.
Important Documents