Dispatches from the evolving digital political economy
AI Governance: Empty Gestures?
The US government released an AI Executive Order on October 31. The Halloween release comes after a year-long FUD campaign, backed partly by businessmen with a vested interest in AI, that frames AI as a frightening new technology posing existential risks to all humanity. In fact, AI is the product of incremental improvements in an evolving digital ecosystem that has been bringing together computing, data, networks and software for 70 years. As such, the EO reflects many fallacies about governance of emerging technologies, but it also does a few useful things amidst some harmless blather. Here’s a quick eight-point evaluation:
1. “AI” is not a single thing but a myriad of applications, ranging from grammar checks in word processors and cooking-video recommendations to classifying anomalous network behavior and navigating autonomous vehicles. The relatively recent commercialization of generative AI systems, such as LLMs based on the transformer architecture, has brought attention to a digital ecosystem decades in development, along with massive investment in proprietary and open source technologies. Each application has different regulatory implications and different risks. Restrictive regulations targeting a vague category called “AI” can burden all computing and the Internet while missing the real problems.
2. The EO invokes the Defense Production Act, furthering the dangerous securitization of digital technologies. It requires companies developing any “foundation model” to notify the federal government when training the model and to share the results of all red-team safety tests. But there is no precise definition of what is or is not a “foundation model,” so companies cannot know whether they must notify the federal government, and there is no way to know whether an AI model poses “a serious risk to national security, national economic security, or national public health and safety” before it has been deployed in a specific social context. You cannot construct “guardrails” for a road that hasn’t been built yet and whose pathways and uses are unknown or still being developed.
3. The EO calls for standards and methods for detecting AI-generated content and authenticating official content. We do need this, but the real work is already happening in the marketplace. Red teaming, likewise, is already being done by model providers in collaboration with outside experts.
4. Of course, the Federal government should identify risks it perceives emerging from its own use of AI and deploy mitigations; an EO is an appropriate tool for this. For instance, the EO mentions developing a National Security Memorandum to set limits and direction on military and intelligence use of AI.
5. Standards and regulations pertaining to biological threats and civil rights discrimination already exist. No special “AI” regulations are needed to enforce them. What you want to regulate is discrimination, liability and risk, not a specific technology.
6. The EO correctly recognizes a link between privacy and AI applications, but acknowledges that this issue can only be addressed via legislation. It advances no new policy ideas about how to trade off the benefits of AI-processed aggregates of data against the costs to user privacy. For example, the EO says nothing about facial recognition, which is one of the most important AI applications and one commonly used by the federal government. As someone who just had his face photographed, put into a database and “recognized” at immigration checkpoints in three different countries (the US, Taiwan and Japan), this author wonders why facial recognition – a specific application with known uses and risks – was not part of this EO.
7. A positive aspect of the EO is its call for streamlining immigration of high-skilled computing and software workers in order to promote American leadership in the field. Another is using AI systems to innovate and make government activities like contracting more efficient. But it’s unclear how promoting leadership in AI development and applications is consistent with the EO’s “AI is a big threat” tone.
8. The global governance aspirations of the EO are confused. The political, military and economic interests of national governments diverge. The US withdrawal from the WTO’s e-commerce free trade negotiations seems to have foreclosed the most promising avenue for international cooperation and aligned the US with protectionist, nationalist countries such as India. Conspicuously absent from AI experts’ dialogues to date, and mentioned only in passing in the EO, is the development of collaboration and coordination structures involving private and public actors to address the narrowly focused, transnational impacts of generative AI.
Yet Another Aadhaar Leak
The personal information of a staggering 815 million Indian citizens (roughly half of India’s population) – including sensitive data such as Aadhaar and passport details, along with names, phone numbers and addresses – is up for sale on the dark web. Reports have surfaced that the compromised Indian Council of Medical Research (ICMR) database might have been the source of the leak, but this has yet to be officially confirmed. The leak raises concerns about the vulnerability of Aadhaar, India’s unique identification number for residents and one of the world’s largest biometric identification systems. As we have highlighted in our research, such leaks underscore the urgent need for proactive data security regulations and robust cybersecurity protocols to prevent further breaches of digital public infrastructures.
Geopolitics and Poor Data Privacy Unraveling Networked Governance in Cybersecurity?
VirusTotal began as a small Spanish security company in 2004, was acquired by Google in 2012, moved under Google’s subsidiary Chronicle six years later, and in 2018 added U.S. Cyber Command as a contributor. This trajectory epitomizes the collaborative spirit of networked governance structures in cybersecurity, where actors voluntarily unite to combat malicious threats. The platform’s growth transcended geographical and institutional boundaries, helping to produce cybersecurity. As of today it brings together nearly 225 antivirus and behavioral analysis/sandbox products, crowdsourced IDS (Intrusion Detection System) and Sigma rules, file characterization tools and datasets, website/domain scanning engines and datasets, and YARA rule repositories. However, as in many areas of the digital political economy (e.g., drones), the rising specter of geopolitical competition, combined with inadequate organizational data privacy practices, may now threaten this Internet governance accomplishment. Russia’s government has announced Multiscanner, a state-led competing initiative that would offer a similar platform and functionality. Seemingly driven by VirusTotal’s operational data privacy mistakes, which exposed the PII of US and UK government users, and by accusations that the US government could snoop on VirusTotal user PII given its jurisdictional purview, the platform represents another deliberate stride toward “sovereign” cybersecurity driven by national security objectives. Multiscanner does not appear to be functional yet, but some participating Russian firms have been named, including Kaspersky, AVSoft, and Netoscope (a project of the Coordination Center for TLDs .RU/.РФ). Given the various sanctions against Russian entities, it seems impossible for it to gain widespread contributors. On the other hand, we have seen how cybersecurity-related devices and activity can be excluded from stringent sanctions. Secure, transnational networking remains important to all actors for multiple reasons. The key question is: how will this bifurcated arrangement, driven by geopolitics and poor data privacy, affect cybersecurity for non-state actors going forward?
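For technically inclined readers, the sketch below illustrates what consuming this aggregation looks like in practice: a single file lookup against VirusTotal’s public v3 REST API returns the pooled verdicts of the contributing engines. This is a minimal sketch, not a reference implementation; the endpoint and field names follow VirusTotal’s public API documentation, the API key is a placeholder, and the hash is the well-known (harmless) EICAR test file rather than real malware.

```python
# Minimal sketch: retrieving pooled engine verdicts for one file hash
# from VirusTotal's public v3 REST API. Assumes the `requests` library
# and a placeholder API key issued per-account by VirusTotal.
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder: obtain from a VirusTotal account
EICAR_MD5 = "44d88612fea8a8f36de82e1278abb02f"  # MD5 of the EICAR test file

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{EICAR_MD5}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# last_analysis_stats summarizes, by category, the verdicts of the
# ~225 contributing engines described above.
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats['malicious']}, undetected: {stats['undetected']}")
```

Multiscanner, if it becomes operational, would presumably offer a parallel but jurisdictionally separate version of exactly this kind of pooled lookup.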