As artificial intelligence (AI) continues to drive technological innovation, reshaping industries and societies worldwide, ensuring its safety, ethical integrity, and trustworthiness has become a global imperative. Following the 2023 UK AI Safety Summit, several jurisdictions, including the UK, US, Japan, Singapore, and the European Union, have taken proactive steps by establishing or designating AI Safety Institutes (AISIs). Following the second AI Safety Summit in Seoul, the signatories of the Seoul Declaration committed to working together to establish an international network of AISIs.
While each country’s institute may have different structural and operational characteristics, their core mission remains the same – identifying, assessing, and mitigating risks posed by AI to ensure responsible development and deployment. These institutes aim to address the risks associated with AI technologies through safety assessments, risk mitigation frameworks, and fostering international collaboration.
In India, the call for establishing an AISI has been growing, supported by industry leaders, policymakers, academics and think tanks. As of October 2024, the Ministry of Electronics and Information Technology (MeitY) has initiated consultation with stakeholders, to explore the possible scope, structure, and role of such an institute. Against this backdrop, Ikigai Law hosted a multi-stakeholder roundtable discussion on “Building Trust in AI: The Role of AI Safety Institute (AISI) in India.”
This discussion brought together over 20 leading voices from industry, government, embassies, academia, civil society, independent legal consultants, think tanks and law and policy, to deliberate on India’s potential AISI. The conversation focused on the possible structure and governance model of the proposed AISI, its potential role and objectives, and the importance of aligning with global standards while contextualizing them for India-specific use cases. Participants also drew insights from global efforts, discussed the need for continuous safety evaluation, and proposed actionable recommendations for fostering trust and innovation in India’s AI ecosystem.
Key themes and insights
1. Global collaboration and India’s potential role for the Global South
A prominent theme of the roundtable was the urgent need for India to actively participate in global conversations on AI safety. Participants emphasized that if India does not engage promptly, it risks missing critical opportunities to shape international standards and frameworks. Joining the international network of AI Safety Institutes (AISIs) was seen as essential for sharing standards, exchanging knowledge, and contributing to global AI governance.
Participants highlighted India’s potential to lead Global South perspectives on AI safety. By actively engaging with international bodies, India can participate in the development of globally interoperable standards which also reflect the needs and challenges of emerging economies. Some participants also emphasised that proactive engagement in global discourse on AI safety, and participation in developing internationally relevant safety standards, would not only enhance India’s own AI ecosystem but also bolster its leadership role in the region. Some participants cautioned against blindly adopting Western-centric AI safety approaches without local contextual adaptation.
2. Objectives and the potential role of AISI
Several participants highlighted the importance of defining the objectives of AISI at the outset. Participants suggested that AISI’s primary focus could include:
- Researching and developing evaluation frameworks for assessing the safety, ethical implications and societal impact of AI systems.
- Acting as a guiding body prescribing robust guidelines for designing testing frameworks for advanced AI models, ensuring they meet safety and ethical standards suitable for India’s context.
- Promoting interoperable standards aligned with global frameworks.
- Addressing context-specific safety challenges across sectors like healthcare, agriculture, and education.
- Building capacity and prescribing guidelines on how testing frameworks for AI systems can be designed, exploring public-private-academic partnerships.
Building on MeitY’s consultation, participants expanded the discussion, noting that the AISI should be envisioned as an advisory and capacity-building body, with responsibilities to foster innovation, develop safety evaluation frameworks, and promote voluntary standards. There was alignment that the AISI could function as a supporting organisation, not undertaking any regulatory or enforcement functions. Instead, its role should be to provide robust support to existing regulatory frameworks through research and by prescribing guidance on designing safety testing frameworks for AI models. Iterative and risk-based testing, which allows developers to identify issues as they arise, adjust the AI systems, and assess effective mitigations, can serve to boost safe deployment of AI systems. Some participants emphasized that AISI could play a significant role in supporting the regulatory mechanism through research, creating sandboxes for testing new AI systems, and ensuring alignment with international standards – functioning as a potential facilitator of safety and innovation.
There was consensus that the AISI should not have regulatory powers, unlike the European Union’s AI Office under the EU AI Act. Instead, its role should be to facilitate safety and innovation by supporting regulatory mechanisms and aligning with international standards.
Some participants also highlighted that a clear division of roles could help ensure that the AISI becomes a trusted advisory body. Such a body can enable conversations with industry and alignment with India’s agile AI strategy of ‘AI for all’, which also aligns with global ethical and safety standards.
3. Structural and governance models
On the structure and governance model of the AISI, several participants suggested a hybrid model that combines centralized coordination with a hub-and-spoke mechanism. This could enable better regional representation and localized responses to challenges, and involve sectoral experts, particularly in sectors like agriculture and healthcare, where context-specific expertise is crucial for decision-making.
Additionally, some participants also discussed a consortium-based model of governance. The consortium can consist of a coalition of stakeholders such as academic institutions, industry, and government entities – working together to contribute their specific expertise and resources.
Some participants also discussed the decentralized model of governance, emphasizing transparency and accountability, while avoiding silos and integrating existing initiatives like the Digital Personal Data Protection Act and the IndiaAI Mission.
4. Safety as a continuous process
Participants stressed that AI safety requires continuous monitoring and evaluation, akin to an iterative process that ensures AI systems remain aligned with safety standards as they evolve. Unlike one-time compliance checks, AI safety must adapt to changing societal contexts, technological advancements, and emerging risks.
A participant likened AI safety to evaluating new employees, where their performance is assessed and refined over time to ensure alignment with organizational goals and ethical standards. Similarly, AI models must undergo ongoing checks and audits to ensure they operate safely and transparently in varying contexts.
The importance of flexible and agile safety frameworks was also emphasized, especially for a diverse country like India. Participants suggested that AISI could play a pivotal role in developing safety frameworks by incorporating use-case-specific safety measures and addressing the socio-cultural nuances of AI adoption in India. It was emphasized by some participants that India-specific considerations must shape our approach towards AI safety.
Drawing from the MeitY consultation, which stressed embedding safety at the design stage, participants underscored the need for dynamic safety protocols. It was noted that safety protocols involving mechanisms such as continuous post-deployment evaluation of AI systems are required to ensure that these systems perform as intended in real-world scenarios.
5. Lessons from international efforts on AISIs
Participants drew insights from global counterparts in the U.S., U.K., and Singapore, highlighting the lessons India can take while envisaging its AISI. Participants stressed the need to contextualize the adaptation of international safety standards for India-specific use cases, rather than replicating international standards wholesale. While the MeitY consultation highlighted the role of partnerships and global interoperability, the roundtable discussion delved deeper into practical takeaways from existing international AI safety institutions.
Some participants highlighted the United States’ model under the National Institute of Standards and Technology (NIST), which prioritizes rigorous scientific testing and standard-setting to ensure safety and trust. It was pointed out by some participants that an Indian AISI could adapt NIST’s approach to create evaluation and safety frameworks, tailored to India’s multilingual and socio-economic diversity.
Participants also cited the United Kingdom’s approach, which emphasizes access to technical resources, robust evaluation methodologies, and comprehensive reporting to guide its AI safety efforts. Participants further noted that India’s AISI could benefit from similar reporting mechanisms but must ensure inclusivity to address India’s broader stakeholder base, including grassroots innovators.
Meanwhile, participants also discussed Singapore’s AISI, which focuses on a sectoral approach addressing specific harms, voluntary codes of conduct, and use-case-driven research. It was noted that India could adopt a similar approach, focusing on high-priority sectors such as agriculture, healthcare, and education, while developing safety protocols relevant to Indian use cases.
While drawing from these examples, participants emphasized that India must tailor its approach to address its unique socio-economic and resource constraints, ensuring alignment with local priorities and needs.
6. Challenges and considerations
Several challenges were identified in establishing an AISI, reflecting the diverse requirements and complexities of India’s ecosystem.
Resource constraints such as limited access to data, computational infrastructure, and skilled talent emerged as critical hurdles. Participants suggested that an AISI in India should focus on scalable, cost-effective solutions that can function in resource-constrained environments.
Participants also emphasised the need for localized safety standards, particularly given India’s linguistic, cultural, and socio-economic diversity. They stressed contextualizing safety standards to match India-specific concerns and factors, reiterating that local context plays a crucial role in the use of AI systems.
Building on the MeitY consultation’s focus on consolidating existing efforts, participants emphasized the need to coordinate across ministries and sectors, reducing duplication and fragmentation. It was also pointed out that by developing safety standards aligned with international standards, the AISI can help build trust and transparency through guidance on designing testing and evaluation frameworks for AI systems and through constant stakeholder consultation – further enhancing its credibility and impact.
7. Recommendations for next steps
Participants shared key recommendations to guide the establishment and functioning of the AISI:
- AISI should implement a hybrid approach of a hub-and-spoke structure, leveraging expertise from regional centers of excellence, academia, and industry. This model would ensure localized responses to India-specific challenges while consolidating ongoing efforts across ministries to prevent duplication and fragmentation of efforts.
- To ensure a well-coordinated AI governance framework, participants emphasised the need for AISI to coordinate across the private, public, and academic sectors. It was highlighted that by prescribing guidance, AISI can align testing and evaluation frameworks with regulatory mechanisms such as the Digital Personal Data Protection Act and with initiatives under the IndiaAI Mission, providing structured guidance to stakeholders and encouraging adoption.
- Participants emphasised that AISI should function as an advisory and capacity-building body, developing risk-based assessment frameworks which encourage innovation while supporting regulatory mechanisms through research, continuous safety evaluations, and voluntary standards.
- AISI should prioritize creating safety benchmarks and evaluation tools for AI systems, tailored to India’s socio-economic, cultural, and linguistic diversity. This includes addressing concerns such as bias, misrepresentative data, and multilingual contexts, promoting digital inclusion, and aligning with India-specific use cases in high-priority sectors like healthcare, agriculture, and education.
- Building on MeitY’s emphasis on design-stage safety, participants stressed the need for dynamic safety protocols for assessing AI systems, including mechanisms for continuous post-deployment monitoring and evaluation. It was noted that such dynamic evaluation frameworks will ensure AI systems perform safely and ethically across evolving real-life scenarios.
- AISI should actively engage with international AISIs to develop globally interoperable standards, which also reflect the needs of developing nations. By leveraging India’s demographic and technological strengths, AISI can position itself as a thought-leader for the Global South, contributing scalable and context-specific solutions to global AI governance.
- Building on the discussion in MeitY’s consultation, it was highlighted that while engaging with international AISIs and adapting international AI testing and evaluation standards, AISI should focus on contextualizing international frameworks for India-specific use cases and sector-specific concerns. For instance, India can draw from the US’s NIST and its rigorous testing methods, the UK’s emphasis on technical resource access, and Singapore AISI’s sectoral approach, while tailoring these initiatives to India’s resource-constrained environments and local priorities.
- Participants further emphasised the importance of fostering collaboration across public, private, and academic sectors to ensure comprehensive understanding of ground realities and sector-specific concerns. This collaboration could include implementing comprehensive training programs to build capacity among policymakers, developers, and users.
- To foster grassroots-level innovation across sectors, participants suggested that the AISI should develop open-source tools, such as sandboxes for testing AI systems and federated data architectures, empowering innovators and promoting equitable access to crucial resources.
- It was also suggested that the AISI focus on building scalable knowledge-sharing initiatives to democratize AI safety. Initiatives such as a common platform fostering continuous dialogue, research dissemination, and stakeholder engagement can boost the development and deployment of safe AI solutions.
The roundtable discussion underscored the critical need for a multi-stakeholder approach to AI safety in India. As participants recommended, by establishing AISI as a collaborative, advisory body, India can lead the way in creating inclusive, safe, and ethical AI systems. With scalable solutions tailored to India’s unique challenges, AISI has the potential to position India as a thought leader in global AI governance while fostering trust and innovation in its own AI ecosystem.
This post has been authored by Animesh Kumar, Associate with inputs from Rutuja Pol, Principal Associate.