India’s Report on AI Governance Guidelines Development
The Internet Governance Project (IGP) filed these comments on the report of the Subcommittee on ‘AI Governance and Guidelines Development’ constituted by the Ministry of Electronics and Information Technology (MeitY).
General comments on the subcommittee’s report
What is called AI would more accurately be referred to as advanced machine learning. Machine learning combines the processing power of semiconductors, networks, software applications, and data sources to perform functions such as spam control, content moderation, language translation, search, image generation, medical diagnosis, autonomous vehicle control, and many others. Machine learning applications are not new; they have been in use for the past 30 years. Similarly, many concerns attributed to AI are characteristic of digital technology more broadly. AI is a marketing term for computing applications with diverse functions and varied consequences. “Governing AI” means governing all digital technologies, and thus directly implicates freedom of expression and the right to use information technologies for economic development. Therefore, we strongly recommend that the subcommittee recognize AI as an extension of networked computing technologies instead of treating it as a fundamentally distinct technological paradigm requiring entirely novel governance mechanisms.
Comments on specific issues and approaches outlined in the report
Balancing market and non-market objectives for regulation
Regulators usually try to balance market and non-market considerations when setting regulatory objectives. The report emphasizes responsible AI, detecting design defects, and minimizing risks and harms in order to ensure the sustainable and ethical development of AI technologies. Given that the integration of advanced machine learning capabilities across sectors is at a nascent stage, fostering innovation and competition is critical. Therefore, alongside non-market concerns, strengthening the market for emerging technologies should also be a focus of regulation. With a limited number of players developing and deploying advanced machine learning capabilities in a few specific sectors, India should prioritize policies that stimulate domestic innovation and competitiveness.
Balancing state intervention and self-regulation
Regulatory objectives may be achieved through state intervention, self-regulation, or both. The report rightly encourages self-regulation, but it also calls for greater state intervention by advocating reform of existing frameworks and/or the introduction of new laws to regulate advanced machine learning capabilities.
The report includes an analysis of existing laws and frameworks to determine how they can be applied to emerging technologies. However, focusing solely on updating existing laws ignores the ways private actors can and do take responsibility for governance. For example, many liabilities associated with the use of advanced machine learning will be, and should be, resolved via contracts rather than through broad, uniform legislation or governmental regulation. This is already evident in the licensing agreements used to acquire training data. Looking forward, consumers and deployers of commercial machine learning applications will likely negotiate liability issues with the producers and sellers of those applications.
Voluntary measures are only as effective as the industry’s intentions and capabilities for compliance and enforcement. A generic self-regulatory approach to “AI” will not work; liability and obligations should depend on the specific application and sector in question. For example, medical diagnosis applications of machine learning might work better if subjected to the same kinds of review and liability as prescription drugs or other medical technologies. Similarly, applications of machine learning to aircraft navigation should be subject to the same kinds of advance review and regulation as other aviation technologies. Whether regulation should be voluntary or state-led should be determined not by whether a use of digital technology is labeled “AI”; what matters is the potential damage a particular product or service could cause. Therefore, high-risk or sensitive applications of emerging technologies may require greater state intervention, whereas in other sectors the private sector could lead governance efforts.
Finally, the subcommittee advocates new frameworks to address challenges such as bias and discrimination arising from AI systems. However, it is important to remember that just as the Internet and computers produced new ways of disseminating illegal content and new forms of discrimination, fraud, and consumer exploitation, so will some advanced machine learning applications. Recent applications pose some new wrinkles in these classic problems of digital information and communications policy, but for the most part we need to figure out how to apply existing principles and standards, not invent new ones for a mythical generality called “AI.” For example, discrimination on the basis of gender is already illegal in India, so if an AI application can be proven to discriminate against a specific gender, it could be challenged under existing law.
Institutional design for regulation
The report suggests three approaches for developing an institutional design for AI governance. First, it proposes an Inter-Ministerial AI Coordination Committee or Governance Group to address the siloed examination of AI systems by existing departments and regulators. In principle, this is a sound approach, as it will help build a common understanding among the various authorities and institutions, especially on cross-cutting issues.
Second, the report recommends establishing a technical advisory body or Technical Secretariat responsible for developing a systems-level understanding of AI in order to identify gaps, develop metrics and protocols for AI accountability, and create an AI incident database. Many global bodies and entities have already developed such metrics and/or maintain AI incident databases. We recommend that MeitY focus on engaging with and drawing upon existing global resources instead of inventing new ones. Additionally, both the Inter-Ministerial Committee and the Technical Secretariat should be formed with appropriate checks and balances in place.
Third, the report advocates “digital by design” governance, shifting from a “command-and-control” model to a techno-legal approach. This would involve supplementing legal and regulatory regimes with appropriate technology layers across actors and systems. However, the proposed techno-legal approach assumes that technological solutions like watermarking, platform labeling, and fact-checking tools will sufficiently mitigate AI-related risks. In practice, these tools may not be sufficient to address complex socio-political issues such as bias, misuse, and accountability gaps.
Principles-based approach to governance
The subcommittee advocates a principles-based approach to developing, deploying, and using “responsible and trustworthy AI”. It aims to align its recommendations with the efforts of global and domestic institutions like the OECD and NITI Aayog, as well as industry bodies like NASSCOM. Nevertheless, the report contains broad and undefined principles such as ‘human-centered values’, ‘do no harm’, and ‘inclusive and sustainable innovation’. Notably, privacy and security, which are generally associated with the rights of the user, are also included in the proposed list of principles. Broad ethical principles should guide any and all forms of human interaction and technological development. However, there is little value in stating general principles without evidence or knowledge of the distinct problems posed by applications of machine learning in specific contexts.