Report: Roundtable on ‘Bias to Fairness: Charting a path towards Responsible AI’

As artificial intelligence (AI) continues to reshape our world, bias in AI systems has emerged as a critical challenge that demands urgent attention. AI systems trained on public internet data inherit the societal biases present in that data, a known challenge in their development, and these biases can perpetuate and even amplify existing societal inequalities. Addressing them will require continuous iteration and refinement. Moreover, these biases occur across various domains, including healthcare, finance, and criminal justice. The complexity and far-reaching implications of AI bias necessitate a multistakeholder approach, drawing upon the expertise, experiences, and insights of stakeholders across academia, industry, government, and civil society.

Against this backdrop, Ikigai Law hosted a multistakeholder roundtable on ‘Bias to Fairness: Charting a path towards Responsible AI’ in New Delhi, India. The discussion explored the complexities of bias in AI systems, including defining and identifying biases across the AI lifecycle with illustrative examples, examining existing approaches to mitigating these biases, and considering specific frameworks to address biases comprehensively.

The event brought together 25 experts, including academicians from top Indian academic institutions, startup founders, researchers, current and former government officials, and law and policy experts. This diverse representation enabled a holistic conversation, bringing multiple perspectives to the complex issue of AI bias. Several key insights emerged from the discussion: the impossibility of building a ‘bias-free’ AI system, a shift in focus from eliminating biases to managing and redressing socially undesirable biases, the need to involve local communities in addressing biases, and the role of open-source AI models.

1. Challenges in defining and identifying biases: Speakers unanimously agreed that there cannot be a single definition of bias. Participants emphasized that bias is inherently contextual and subjective, varying across cultures, applications, and individual perspectives, and should be assessed in the context of specific use cases. For general-purpose AI models or systems, equal performance across all topics is not possible. On detecting and eliminating AI biases, participants highlighted that the occurrence of bias across the AI lifecycle and the “double black-box effect” complicate detection. The double black-box effect refers to the combined opacity of the vast, largely unexamined training data and the inscrutable internal workings of AI models, which together make it difficult to identify the sources of bias or explain AI decision-making. In the context of Indic languages, participants also noted that expectations of fully addressing bias are not technically feasible given the lack of Indic-language datasets.

2. Impossibility of bias-free AI systems: LLMs learn from vast amounts of data, which often reflect the societal biases present in the data sources. These datasets are typically not evenly distributed across all topics: some topics have significantly more data available than others, leading to uneven exposure of the model to different subjects. Bias is also highly culture- and region-specific, whereas AI and LLMs are global. Speakers recognized bias as fundamental to predictive systems, since it is what enables decision-making and pattern recognition. Participants emphasized that creating completely bias-free AI systems is an unattainable goal, arguing that biases are inherent in human knowledge and, by extension, in the data used to train AI models. One expert described bias as a fundamental component of all social systems that cannot, and should not, be entirely eliminated. Some experts also cautioned that attempts to remove bias could end up distorting facts. Since some biases are necessary for AI systems to function and make predictions, the challenge lies in identifying which biases are problematic or harmful and developing strategies to address them. Participants also discussed the critical role user prompts play in generating outputs from GenAI tools. Some participants argued that technology companies should not be held accountable for fixing the biases of the internet and of society. This pragmatic approach acknowledges the complexities of bias in AI while focusing on practical solutions that minimize the unintended consequences of harmful biases and ensure fairness in AI applications across diverse contexts. Instead of aiming for complete bias elimination, the concept of “bias resilience” was introduced as a more practical alternative, focused on creating a more inclusive environment for AI access and development.
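To make the point about uneven data distribution concrete, below is a minimal, illustrative sketch of how one might measure how unevenly a corpus covers different languages (the same idea applies to topics). The corpus contents and language labels are entirely hypothetical toy data, not from any dataset discussed at the roundtable:

```python
# Toy illustration: measuring coverage skew in a (hypothetical) corpus.
from collections import Counter

corpus = [
    ("english", "some web text ..."),
    ("english", "more web text ..."),
    ("english", "yet more web text ..."),
    ("hindi", "कुछ वेब पाठ ..."),
    ("santali", "..."),  # low-resource languages appear far less often
]

counts = Counter(lang for lang, _ in corpus)
total = sum(counts.values())
for lang, n in counts.most_common():
    print(f"{lang:8s} {n:3d} docs ({n / total:.0%} of corpus)")

# English dominates this toy corpus; a model trained on it is exposed far
# more to English than to Santali, so its behaviour (and its biases) will
# differ sharply across these languages.
```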

3. Distinction between bias and fairness: Several speakers emphasized that bias is inherent and often necessary in AI, as it allows systems to make predictions and classifications. Fairness, by contrast, concerns how these biases align with societal norms and expectations. Bias was described as a fundamental aspect of predictive systems, while fairness involves ensuring that these biases do not lead to unfair or discriminatory outcomes. Some biases were considered acceptable or even beneficial, while others were seen as problematic and requiring mitigation to achieve fairness. The key insight is that the goal should not be to eliminate all bias, which is impossible, but rather to create “fair” AI systems by aligning biases with ethical principles contextualised to different use cases and addressing those biases that lead to harmful or discriminatory results.
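As an illustration of how this distinction is often operationalized in practice, here is a minimal sketch of one common group-fairness metric, the demographic parity gap. The loan-approval framing, column names, and data are all hypothetical; this is a toy example, not something presented at the roundtable:

```python
# Minimal sketch: "fairness" measured as a demographic parity gap, i.e. the
# difference in positive-outcome rates between groups. All data is toy data.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions from a loan-approval model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

print(f"Approval-rate gap: {demographic_parity_gap(decisions, 'group', 'approved'):.2f}")
# Group A is approved 75% of the time, group B only 25%. The model's
# statistical biases are doing their predictive job, but whether a 0.50 gap
# is "unfair" is a normative, contextual judgment, which is precisely the
# bias-versus-fairness distinction the speakers drew.
```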

4. Evaluating AI biases in specific contexts: When assessing bias in AI systems, it is crucial to consider the purpose and context of their use. Participants emphasized that the goal should not be to eliminate all biases, but rather to ensure that AI systems operate in a manner that is fair and equitable within their specific contexts of use. This approach recognizes that some biases may be necessary or even beneficial in certain contexts, while others are undesirable and need mitigation. However, efforts to debias general-purpose AI systems can have unintended consequences: when attempting to make a system fairer or more representative, there is a risk of introducing new biases or causing the system to “forget” important functionalities.

5. Standardization versus personalization: The standardization-versus-personalization debate in AI development reveals a complex tension between fairness and functionality. One speaker argued for the value of personalization, highlighting its utility in features like autocomplete and emphasizing the importance of progress in low-resource languages even if perfect equality is not initially achievable, and cautioned against overly rigid standardization that might hinder innovation. Other participants, while agreeing on the importance of developing AI for all languages, shifted the focus from mere equality of responses to their quality, emphasizing the need to recognize and transparently address systemic quality differences as a form of bias. Both perspectives acknowledge the current limitations and biases in AI systems, particularly in language technologies.

6. Availability of resources:

a. Issue: Speakers highlighted the intricate relationship between data availability, linguistic diversity, and AI bias, particularly in the Indian context. They emphasized that the challenge of addressing bias in AI systems is deeply rooted in the quality and quantity of available data, especially for low-resource Indic languages and specific use cases. They pointed out that while universal approaches to solving bias issues are appealing, they often fall short given the diverse nature of AI applications. India faces particular challenges because many of its languages lack a digital footprint, and this scarcity of linguistic data makes it difficult to train unbiased multilingual models.

b. Solution: Speakers emphasized the need for a structured approach to bias mitigation that focuses on including more diverse, community-specific, and high-quality multilingual data. 

7. Approaches for minimizing AI biases

Given the challenges outlined above, the discussion shifted away from pursuing the impossible ideal of bias-free AI and towards effectively managing and mitigating harmful biases.

a) Cultural context and community-based solutions: The discussion on cultural differences highlighted the complex challenge of defining and addressing bias in AI across diverse global contexts. Existing bias detection techniques were found to be inadequate when applied across different cultural settings. Speakers suggested developing AI models and evaluation methods that are more culturally aware, and emphasized involving local communities in bias identification and mitigation, both in building datasets and in evaluating the outputs of AI systems. They stressed the need for evaluators from different backgrounds, including gender and age, and called for the current pool of experts to be expanded beyond technical specialists and linguists to include “culture keepers”. This approach would enable the creation of more comprehensive and culturally sensitive assessment methods, ensuring AI systems are evaluated fairly across diverse contexts. The idea of co-design labs involving local communities was proposed to adapt AI systems to specific cultural contexts and user needs.

b) Collaborating with international AI safety institutes and developing globally harmonised benchmarks: Globally harmonised benchmarking and AI safety institutes play a crucial role in addressing AI bias at a broader scale. Speakers emphasized the importance of AI safety institutes across the globe working together to develop best practices, share resources for benchmarks, and exchange findings, both to advance research on novel AI issues like bias and to elevate local concerns to a global context.

c) Open-source datasets and models: Open-source AI models and datasets were emphasized as crucial for democratizing access to AI technologies. Participants specifically noted the importance of open datasets in testing for biases across different demographics. Speakers argued that this approach promotes transparency, enables broader scrutiny, and allows for collaborative, community-driven improvement of AI systems.
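To illustrate the kind of demographic testing that open datasets enable, here is a minimal sketch of slicing an evaluation set by demographic group. The evaluation records and the model_predict stub are hypothetical placeholders, not a specific open dataset or library API:

```python
# Sketch: comparing model accuracy across demographic slices of an
# (entirely hypothetical) open, demographically annotated evaluation set.
from collections import defaultdict

def model_predict(text: str) -> str:
    # Stand-in for any model under test, open or proprietary.
    return "positive" if "good" in text else "negative"

eval_set = [
    {"text": "good service",    "label": "positive", "demographic": "group_a"},
    {"text": "poor service",    "label": "negative", "demographic": "group_a"},
    {"text": "good experience", "label": "positive", "demographic": "group_b"},
    {"text": "great product",   "label": "positive", "demographic": "group_b"},
]

correct, total = defaultdict(int), defaultdict(int)
for row in eval_set:
    total[row["demographic"]] += 1
    correct[row["demographic"]] += row["label"] == model_predict(row["text"])

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f}")

# Here group_a scores 1.00 and group_b scores 0.50; a systematic gap like
# this is exactly the kind of signal that open, demographically annotated
# datasets make visible to any auditor, not just the model's developer.
```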

d) User education: There was recognition of the importance of user understanding and decision-making in addressing AI biases, given the role of user prompts in generating outputs from GenAI tools. Participants called for increased AI literacy to help users understand and navigate potential biases in AI systems.

8. Responsibility and obligations: The discussion explored where responsibility for bias-free AI development lies, with debate over whether obligations should primarily fall on developers, deployers, or policymakers. There was recognition that addressing bias requires effort across the entire AI lifecycle and ecosystem. Suggestions were made for organizations to develop their own AI policies and for industry consortiums to promote responsible development practices. Some participants argued for a light-touch regulatory approach, while others emphasized the need for clearer guidelines and accountability mechanisms. The complexity of assigning responsibility in a rapidly evolving technological landscape was a recurring theme.

9. Flexible governance and adaptive regulation for emerging technologies: Participants discussed leveraging existing regulatory frameworks (both horizontal and sectoral) with a focus on a technology-neutral approach to AI governance, emphasizing the need for a flexible and adaptable framework. They highlighted the importance of organizations developing their own responsible AI principles and policies addressing key aspects such as inclusion, transparency, and safety. Given the rapid evolution of AI technologies, they suggested a light-touch regulatory approach (as necessary) capable of adapting to emerging risks and opportunities. Speakers also proposed prototyping environments to evaluate AI models and applications in controlled, culturally relevant scenarios, allowing for more precise bias identification and correction. The conversation also cautioned against overly rigid frameworks, noting their potential to hinder innovation or inadvertently introduce new biases.

The roundtable highlighted the complex, context-dependent nature of AI biases and the need for multifaceted solutions. The role of open-source AI models emerged as a strong theme, enhancing transparency and fostering broader participation in bias mitigation. Key takeaways included the need for adaptive, context-specific approaches to bias mitigation, the importance of cultural context in bias identification, the shift towards managing socially undesirable biases rather than eliminating all biases, the critical role of diverse perspectives in AI development and evaluation, and user education. Moving forward, efforts should focus on developing best practices and resources that can address biases across various cultural and linguistic contexts, particularly for underrepresented languages and communities. Future initiatives should promote ongoing collaboration between academia, industry, government, and civil society to ensure comprehensive approaches to AI bias mitigation. Additionally, emphasis should be placed on creating flexible, technology-neutral governance models that can keep pace with rapid technological advancements. Adopting these solutions can help build an AI ecosystem that is innovative, fair, inclusive, and reflective of global diversity.

This post has been authored by Aarya Pachisia, Associate with inputs from Rahil Chatterjee, Senior Associate, and Rutuja Pol, Principal Associate.
