IIT Madras study calls for a participatory approach to AI governance
Researchers from IIT Madras and the Vidhi Centre for Legal Policy, Delhi, recently conducted a study.
The study calls for participatory approaches in the development and governance of Artificial Intelligence in India and abroad.
The study sought to establish the need for and importance of a participatory approach to AI governance, grounding it in real-world use cases through an interdisciplinary collaboration, say sources from IIT Madras.
Operations
As operations in multiple domains get increasingly automated through AI, the various choices and decisions that go into their setup and execution can be transformed, become opaque and obscure accountability.
A participatory model highlights the importance of involving relevant stakeholders in shaping the design, implementation, and oversight of AI systems, say sources from IIT Madras.
Researchers
Researchers from the Centre for Responsible AI (CeRAI) at the Wadhwani School of Data Science and AI, IIT Madras, and the Vidhi Centre for Legal Policy, a leading think-tank on legal and tech policy, conducted this study in two parts as a collaboration between technologists, lawyers and policy researchers.
Findings
Their findings were published as pre-print papers on ‘arXiv’, an open-access archive of nearly 2.4 million scholarly articles in the fields of physics, mathematics, and computer science, among many others. The papers can be viewed at the following links – https://arxiv.org/abs/2407.13100 and https://arxiv.org/abs/2407.13103
Need for the study
Prof. B. Ravindran, Head, Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT Madras, highlighted the need for the study.
The widespread adoption of AI technologies in the public and private sectors has resulted in them significantly impacting people's lives in new and unexpected ways, he said.
In this context, it becomes important to inquire how their design, development and deployment take place. This study found that persons who will be impacted by the deployment of these systems have little to no say in how they are developed, he said.
The study identifies this as a major gap and advances the premise that a participatory approach is beneficial for building and using more responsible, safe, and human-centric AI systems, he said.
By ensuring that diverse communities are included in AI development, we can create systems that better serve everyone, particularly those who have been historically underrepresented, he said.
Increasing transparency and accountability in AI systems fosters public trust, making it easier for these technologies to gain widespread acceptance. Further, by involving a wide range of stakeholders, we can reduce risks like bias, privacy violations, and lack of explainability, making AI systems safer and more reliable, he said.
Value
Shehnaz Ahmed, Lead, Law and Technology, Vidhi Centre for Legal Policy, said, “Increasingly, there is a recognition of the value of participatory approaches in AI development and governance. However, the lack of a clear framework for implementing these principles limits their adoption.”
The report addresses these critical challenges by offering a sector-agnostic framework that answers key questions such as how to identify stakeholders, involve them throughout the AI lifecycle, and effectively integrate their feedback, she said.
The findings demonstrate how participatory processes can enhance AI solutions, particularly in areas like facial-recognition technology and healthcare. Embracing a participatory approach is the pathway to making AI truly human-centric, a core aspiration of the IndiaAI mission, she said.
Recommendations
The recommendations for implementing participatory AI include:
• Adopt a Participatory Approach to AI Governance: Engage stakeholders throughout the entire AI lifecycle, from design to deployment and beyond, to ensure that AI systems are both high-quality and fair.
• Establish Clear Mechanisms for Stakeholder Identification: Develop robust processes for identifying relevant stakeholders, guided by criteria like power, legitimacy, urgency, and potential for harm. The “decision sieve” model is a valuable tool in this process.
• Develop Effective Methods for Collating and Translating Stakeholder Input: It is crucial to create clear procedures for collecting, analyzing, and turning stakeholder feedback into actionable steps. Techniques like voting and consensus-building can be used, but it is important to be aware of their limitations and potential biases.
• Address Ethical Considerations Throughout the AI Lifecycle: Involve ethicists and social scientists from the beginning of AI development to ensure that fairness, bias mitigation, and accountability are prioritized at every stage.
• Prioritize Human Oversight and Control: Even as AI systems become more advanced, it is essential to keep humans in control, especially in sensitive areas like law enforcement and healthcare.
Papers
In the first paper, the authors investigated various issues that have arisen recently in AI governance and explored viable solutions. By analyzing how beneficial a participatory approach has been in other domains, they proposed a framework that integrates these aspects, say sources from IIT Madras.
The second paper analysed two use cases of AI solutions and their governance: one a widely deployed and well-documented solution, Facial Recognition Technology (FRT), and the other a possible future application of a relatively newer AI solution in a critical domain, say sources from IIT Madras.
Facial Recognition Technology (FRT) in Law Enforcement: The lack of transparency in how these technologies are deployed raises serious privacy concerns and risks of misuse by law enforcement.
Engaging stakeholders like civil society groups, undertrials, and legal experts can help ensure that FRT systems are deployed in ways that are fair, transparent, and respectful of individual rights, say sources from IIT Madras.
Large Language Models (LLMs) in Healthcare: In healthcare, the stakes are even higher. LLMs can sometimes generate inaccurate or fabricated information, posing significant risks when used in medical decision-making.
Furthermore, if LLMs are trained on biased data, they could exacerbate healthcare disparities. The opacity of these models’ decision-making processes further complicates matters, making it difficult to trust their outputs.
Involving doctors, patients, legal teams, and developers in the development and deployment of LLMs can lead to systems that are not only more accurate but also more equitable and transparent, say sources from IIT Madras.
S Vishnu Sharmaa now works with collegechalo.com in the news team. His work involves writing articles related to the education sector in India with a keen focus on higher education issues. Journalism has always been a passion for him. He has more than 10 years of enriching experience with various media organizations such as Eenadu, Webdunia, News Today and Infodea. He also has a strong interest in writing about defence and railway-related issues.