We organised a round-table discussion, ‘AI – Legal hurdles and policy direction for Indian entrepreneurs’, on 5 March 2024 in Bengaluru. This came against the backdrop of a widely criticised advisory issued by India’s IT ministry, asking companies to seek government permission before releasing under-tested or unreliable AI models for public use, among other requirements. (A week or so after our event, this advisory was revised to, among other things, exempt startups.) Our aim was to give the AI innovation ecosystem a platform to voice its opinions and participate in policy-making conversations on AI.
Our event brought together 20 experts, including founders and AI heads of start-ups, venture capital (VC) investors, general counsels (GCs) at start-ups and VCs, and policy advisors, to discuss their concerns with this advisory and, more importantly, gauge the Indian startup ecosystem’s wish-list for AI regulation in India. This event was part of our efforts towards driving thoughtful discourse on AI policy and law in India, shaping responsible AI regulation in the country, and engaging with key stakeholders.
Takeaways from the event
Our discussion unpacked the views of start-ups, investors and policy experts on AI governance frameworks, their impact on innovation, lessons from global governance, and the way forward for India.
1. Global AI framework: Some participants suggested developing a global framework for responsible AI as the way forward. There was some consensus that ethical principles first needed to be identified before laws could be harmonised across sectors and countries. Some speakers opined that such certainty would take time and cautioned that hurried policy-making could lead to a fractured, fragmented, and hyperlocal approach to AI regulation. Speakers said that, in the meantime, there may be merit in waiting to see how AI and its use cases evolve, increasing the focus on safety issues, and building government capacity to assess harms. Speakers also emphasised the need to contextualise this for India.
2. Graded, risk-based and harm-based AI governance: Some participants said that AI governance should not be prescriptive, as this may lead to increased compliance challenges, especially for smaller companies. Citing examples, founders shared that even where their companies are unlikely to cause harms, they are required to comply with additional obligations. For example, the requirement of appointing a Data Protection Officer (DPO) under the DPDPA was cited as a huge cost centre. Speakers said that identifying and classifying AI harms was easier than identifying user harms in data protection, and therefore suggested adopting a graded, risk-based and harm-based approach to governance.
3. Some specificity is good: Some speakers said that in addition to principle-based prescriptions, objective standards should be developed. Some encouraged the development of AI certifications, which include specific checklists for compliance.
4. Policy uncertainty can stifle innovation: Discussing one of the most recent examples of this, speakers highlighted their apprehension regarding the MeitY advisory (note that this was the version before any changes were made). Everyone agreed that it was a knee-jerk response by the government. Lawyers shared that even subsequent clarificatory statements by ministers on ambiguously worded advisories or notifications do little to calm the waters. In this case, the clarification that the advisory applied only to significant platforms and not to start-ups was itself ambiguous and led to further uncertainty. This created confusion for unicorn start-ups, which technically fall under the ambit of ‘large platform’. Some questioned the legal basis of this advisory and the sanctions applicable in case of non-compliance.
5. Investors value ‘baseline certainty’: The investor community shared that having a basic level of certainty through appropriate regulation contributes to investor confidence. Although the community has a risk-taking capacity, this certainty provides investors with a protective safety net at the back of their minds, allowing them to invest freely and spot opportunities. The investor community also shared that, instead of a prescriptive approach that stifles innovation, a risk-based, harm-based approach that allows innovation on the ground should be introduced.
6. Self-regulatory body: Speakers discussed the idea of a self-regulatory organisation (SRO) for the AI ecosystem in India as a viable model to enhance trust. Here, the SRO could develop safety parameters and share them with the government. The room was divided on the feasibility of this model. Some speakers felt that SROs may not yield on-ground benefits for the industry or the government, and that there could be competing interests to account for in an SRO model. Proposing an alternative, a speaker shared the example of the Indian National Space Promotion and Authorization Centre (IN-SPACe), an autonomous agency under the Department of Space with the mandate of both a regulator and a promoter. Sharing some of IN-SPACe’s success stories, the speaker proposed a similar setup where such a body would comprise industry players with limited tenures.
This readout is authored by Vidushi Sinha, Associate, with inputs from Rutuja Pol, Principal Associate, and Nehaa Chaudhari, Partner.
For more on the topic please reach out to us at contact@ikigailaw.com.