India’s government is intensifying its scrutiny of foreign tech companies ahead of national elections.
A recent directive from India’s Ministry of Electronics and Information Technology (MeitY) mandates explicit government permission for deploying “under-tested” or “unreliable” AI models and software for Indian users. The advisory requires AI systems to be labeled to acknowledge their potential fallibility before deployment, warns of punitive measures for handling unlawful information, and requires platforms to incorporate a “consent popup” mechanism informing users of potential inaccuracies in AI-generated output. Additionally, it mandates metadata embedding in AI-generated content for traceability.
This move follows Google Gemini’s response to a journalist’s question about whether Prime Minister Narendra Modi’s policies were “fascist,” which sparked outrage and accusations of bias from the government. The Minister of State for IT, Rajeev Chandrasekhar, denounced Gemini’s handling of the query as “malicious” and in violation of intermediary liability regulations and criminal law provisions.
With India going to the polls in the coming months, the advisory cautions AI platforms and intermediaries to ensure that their “computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process”. Platforms must report the actions they take in response to the advisory within 15 days, and compliance with these rules is framed as necessary to ensure responsible AI usage and content moderation on AI platforms. The advisory also requires platforms to educate users about the consequences of engaging with unlawful content, including potential account suspension or legal action.
Given the rapid evolution of AI technology, the requirement of government approval for companies developing AI products has caused uncertainty within the technology sector. Technologists and industry experts have warned that the advisory ushers in a license raj that would stifle innovation and entrepreneurship in the AI sector. Although Chandrasekhar claims the advisory is intended to shield AI companies from potential liability, it is unclear whether the government’s interpretation of the liability regime as extending to AI platforms and intermediaries will hold up. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 do not explicitly bring AI models within the definition of an intermediary or a content publisher, so the advisory may well be challenged in the courts. And although Chandrasekhar has clarified that the directions primarily target “significant platforms” and that startups will not have to seek permission from the ministry, the advisory itself does not differentiate based on platform size.
The restrictions imposed on large platforms, which have the resources for testing and government approval, enable the government to strategically exert control over the development and deployment of AI technologies in India. For example, the recent introduction of Krutrim, positioned as a domestic alternative to LLM-based AI tools like ChatGPT, attracted considerable interest upon its launch last month. The initial excitement quickly waned, however, as users interacted with the platform and found responses riddled with inaccuracies and fabricated information. Krutrim’s flawed performance highlighted its unpreparedness for wide adoption, a setback that mirrors the challenges Google’s Gemini faced shortly after its debut. Yet the Krutrim incident did not receive the same level of attention as the Gemini debacle. Focusing on platforms by size alone could hinder competition in the AI sector without mitigating the harms arising from AI.
Since the enactment of the Information Technology Act (IT Act) in 2000, there have been numerous amendments, yet criticism persists that the law has failed to keep pace with emerging technologies like AI. In 2022, MeitY announced its intention to replace the IT Act with a modern legal framework tailored to India’s evolving digital landscape: the Digital India Act (DIA). Although a draft of the DIA has not been released, the law is expected to prioritize online safety, trust, accountability, and an open internet. A presentation on the proposed law suggests it conceives of AI-related harms primarily through the lens of misinformation. The legislation is likely to include provisions reinforcing algorithmic accountability and transparency and introducing human oversight of algorithmic decisions.
Alongside the overhaul of the IT Act, MeitY is contemplating amendments to the IT Rules, 2021 to establish a regulatory framework for the responsible deployment and use of AI technologies. Despite the limitations of the IT Rules, the government has used them to exert control over AI companies. Video streaming platforms like Netflix and Amazon Prime face heightened scrutiny over content deemed “vulgar” by authorities. Social media platforms are required to promptly remove deepfakes and to ensure that AI models used on their services do not facilitate the hosting of prohibited content or pose risks to electoral integrity. MeitY has also issued advisories mandating that platforms inform users about content prohibited under the IT Rules.
The government’s efforts to tighten control over the tech sector are expected to intensify in an election year. While the government claims that India’s protectionist stance under Prime Minister Modi will prevent the misuse of technology, increased state control may restrict business opportunities and prompt companies to move to other jurisdictions. Despite these regulatory challenges, India’s vast population keeps it an attractive market for tech giants, underscoring the country’s strategic significance for global corporations amid evolving regulatory landscapes across Asia.