The image projected by Viktor Mayer-Schönberger summed up the AI governance dialogue perfectly: a motorcyclist barreling around a curve, representing someone “who doesn’t know where he is going, but wants to be the first to get there.” To the 30 attendees at IGP’s 9th annual workshop, “Does AI Need Governance?”, the reference to the policy discourse around artificial intelligence was evident. The motorcyclist racing toward an unknown destination was an apt metaphor for the flood of policy initiatives and legislative proposals aimed at the rise of machine learning applications. The 2024 workshop was intended to ignite a more informed and critical dialogue about the governance of digital systems. IGP will publish a complete summary of the proceedings, but here is a quick overview of some of the discussions.
Session 1: AI is a software application
Understanding AI governance problems begins with a recognition that AI is not a new or emerging technology, but a set of applications built upon an evolving digital ecosystem. AI applications combine computing devices, networks, and digitized data, all coordinated by models, feedback loops, and software instructions in various tailored applications. The policy problems of this digital ecosystem have been with us for 30 years, ever since the Internet joined up the world with a common digital networking protocol. In the opening session, Milton Mueller argued that AI is just a machine learning application enabled by this broader digital ecosystem. Georgetown’s Laura DeNardis agreed and called attention to the way computing applications can both strengthen and weaken cybersecurity. She argued that alarmist AI frames, and an excessive focus on content issues at the expense of infrastructure, are reminiscent of earlier debates over Internet governance. For Laura, countries are politicizing and co-opting AI conversations for geopolitical goals, allowing multilateralism to gain the upper hand over multistakeholder governance. Viktor asked why people are so concerned about AI, and attributed it to a fear of losing agency. When we govern AI, we are really governing decision making, and humans want to retain control of their decisions. According to him, a balance of individual and collective control is best achieved by means of structural nudges embedded in infrastructures, which he calls guardrails. Guardrails are a softer, more flexible alternative to top-down directives. But not much was said about where those guardrails come from, or how they get put into place.
Session 2: Controlling the digital ecosystem
Lennart Heim, Dean Ball and Andrew Strait examined attempts to govern AI by leveraging different parts of the digital ecosystem. Heim focused on “compute.” He explored how control over the global distribution of computing power would provide the leverage to control AI applications, by giving a central authority the power to influence which AI systems are built. Presumably, compute “power” would be measured in FLOPs: floating point operations, either the total consumed by a training run or the per-second throughput of the hardware. Heim’s presentation made explicit an assumption that often goes unstated: the concern that AI applications become progressively more dangerous as computing power increases. This malevolent version of Moore’s law sounds alarming until one realizes that computing power has been increasing exponentially for more than 50 years. Until recently, we all considered that a good thing.
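To make the arithmetic behind compute thresholds concrete, here is a minimal back-of-the-envelope sketch. Its ingredients are assumptions, not material from Heim’s talk: the rough 6 × parameters × training-tokens approximation for total training operations used in the scaling-law literature, a roughly 10^26-operation reporting threshold as commonly cited from the 2023 US Executive Order on AI, and an invented model size.

```python
# Illustrative sketch only (not from Heim's presentation). It estimates total
# training compute with the rough rule of thumb FLOPs ~= 6 * parameters * tokens
# and compares the result to a ~1e26-operation reporting threshold, as commonly
# cited from the 2023 US Executive Order on AI. The model size is invented.

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total floating point operations for one training run."""
    return 6 * parameters * tokens

REPORTING_THRESHOLD = 1e26  # total operations; an assumed, commonly cited figure

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
estimate = training_flops(70e9, 15e12)
print(f"Estimated training compute: {estimate:.2e} operations")
print("above threshold" if estimate > REPORTING_THRESHOLD else "below threshold")
```

Any regime keyed to such a number also has to pick its measure: total operations consumed in training, as above, or the per-second performance of the chips themselves, which is closer to what hardware export controls target.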
Dean Ball spoke about the regulation of “models,” the accepted label for the software component of machine learning applications. Ball argued that model-based regulation in general is wildly premature. He stressed he is not opposed to model-based governance per se, but wants to emphasize that “laws” are but one form of governance, and not the most appropriate one for our present situation. Andrew Strait of the Ada Lovelace Institute presented a useful model of the AI value chain. He argued that “safety” is not a model property and cannot be found “in” the model. AI is part of a socio-technical system and its benefits and dangers depend on its specific uses and the persons affected by them. He criticized model evaluations, saying they lacked external validity and could be gamed.
Session 3: We’re all (not) going to die.
If there was one clear message from the conference, it was that public attitudes toward AI governance have been too heavily influenced by the myth of the dark Singularity: the claim that advanced AI could get out of control and threaten human existence. Presentations and statements from Milton Mueller, Andrew Strait, and tech journalist Nirit Weiss-Blatt pushed back on this myth, with Weiss-Blatt showing that its promulgation was heavily supported by a few big donors from the Effective Altruism movement. The AGI/singularity myth massively distorts policy discussions, making protection against the sudden emergence of an all-powerful AGI the primary driver of policy. The prospect of an all-powerful autonomous AGI that poses an imminent risk of human extinction is not real, yet it is being used to justify the most extreme controls, such as the centralization of power over the digital ecosystem. If policy treats the growth of computing power as a proxy for the risk of total destruction, our responses are not only disproportionate, they are not targeted at real problems.
After reviewing the evidence, conference attendees agreed that the “threat of human extinction” should no longer be used to motivate and justify public policy toward machine learning applications. Both the likelihood (pDoom) and the very existence of AGI were questioned or debunked. While accepting Mueller’s argument that AGI as an autonomous entity is a myth, Strait showed how the AGI construct can be used to support research, corporate or policy agendas. The term AGI is used in a Microsoft-OpenAI contract, for example, in a way that might allow OpenAI to deny Microsoft access to OpenAI technologies if it develops an AGI (defined by lawyers as a machine that matches the power of the human brain). The propensity of software vendors to sprinkle their old applications with “magic AI fairy dust” in order to upsell their products was also noted.
Session 4: Flavors of enclosure: Property rights and sovereignty over data
Data is an essential input to AI applications. Beyond its use in training, the people and actions that generated the data inherently structure the statistical regularities that AI applications rely on to draw conclusions. AI applications also produce data as well as ingest it. Brenden Kuerbis, Deven Desai, Mark Riedl and Jyoti Panday provided differing takes on the data governance issues affecting AI. Kuerbis noted that as aggregations of digital data are endowed with new value by AI applications, various controllers of that data are motivated to enclose it. He described how “a variety of technical and contractual mechanisms are emerging in the private sector to govern data used in generative AI. We also see market-based governance of data.” Desai and Riedl related prior debates over copyright to the current situation, noting how earlier Internet battles over copyright are being replayed, while Panday looked at India’s sovereignty claims over locally generated data.
Session 5: The motorcyclists in government
Karine Perset, who heads the OECD’s AI division, reported on the uncoordinated proliferation of AI regulatory initiatives, laws, codes of conduct, Executive Orders, programs and policies. If the OECD, typically one of the first and best-equipped institutions for developing global norms and policies, feels overwhelmed by the flood of proposals and concerned about how they might clash or create barriers and transaction costs, then perhaps we all should be concerned. Grace Abuhamad spoke from the perspective of the US federal government about the motives and objectives of national AI policy making. She discussed how the Biden Executive Order sought to balance promoting progress in computing science and technology with AI safety. She explained how NTIA was given responsibility for determining whether the administration should favor or oppose open-source AI models. New Zealand’s Sarah Box came at it from the perspective of a small island nation with little of a cloud or application industry, one that does not want to see restrictions that might stifle AI’s development and spread.
Session 6: The geopolitics of AI
Moderator John Tien, a former DHS Deputy Secretary, came out and said it: these days, “geopolitics” just means the U.S. vs. China. That may not be wholly true, but when it comes to the digital ecosystem it pretty much is. Jason Luo, a postdoctoral researcher at George Washington University, provided an overview of his work on China’s digital ecosystem, noting the unique role of capital investment by local and provincial governments. Luo highlighted the role of China’s local governments in the design, selection, procurement, and implementation of AI programs. He regards China’s AI environment as decentralized: the central government mandates policy top-down but is forced to delegate vertically to provincial governments and horizontally to private suppliers. On the bilateral relationship, Luo noted that China views the US government as promoting the use of AI at home while posing AI security risks internationally. In contrast, China sees itself as using AI as the newest addition to its development toolbox in its pursuit of global leadership.
For Jon Lindsay, concerns about geopolitics reshaping AI are déjà vu. Lindsay presented his framework for great power competition under dense economic interdependence, which he called a different ball game than “cold war 2.0.” For Lindsay, AI and cyber, as information infrastructures, both entail a logic of cooperation: the information environment requires data, computing infrastructure, and transnational companies, and without them there is no space for nation-states to leverage. Gray-zone engagements below the threshold of armed conflict act as an important “pressure release valve for international politics.” Chinese threat actors may use AI to enable espionage or IP theft, but when they do, we learn about them and they learn how we react, which improves stability. Decoupling, on the other hand, increases the risks of bargaining failure and open warfare.
Karmen Lucero presented the strengths and weaknesses of Chinese and US industrial policy toward AI. An interesting exchange occurred between Lindsay and Lucero: can economic entanglement be universally regarded as stabilizing? Lucero highlighted how, despite the mutual interdependence between Great Britain and Germany before World War I, various insecurities still produced all-out conflict. He then pointed to the lack of economic entanglement between the US and the Soviet Union, which nevertheless allowed for stability by diffusing tensions through proxy wars. Lindsay disagreed, noting that conflicts tend to occur where interdependence is weakest, mutually assured destruction is absent, and alliance relationships spill over.
Session 7: AI and the regulation of speech
IGP’s Brenden Kuerbis presented “Is Generative AI Really a Game-Changer in Disinformation Campaigns?” on behalf of Seungtae Han. Han’s paper examines the perceived threat of generative AI, especially its novel capabilities in creating content, disseminating it, and persuading its target audience. But Han’s empirical study of over 600 instances of disinformation during the Fukushima and Zaporizhzhia nuclear emergencies finds no evidence that state actors have leveraged generative AI to craft propaganda narratives. Instead, he finds that traditional media remains the vector of choice for communicating about nuclear emergencies. The volume and technological sophistication of an LLM are no match for prior beliefs and the perceived credibility of established sources, both of which are shaped far more by socio-cultural context. Further, LLMs cannot keep pace with a rapidly changing information environment, at least not without significant human input. Kuerbis called these findings unsurprising, given that the risk of being caught using LLMs far outweighs the potential benefits for any state-owned media apparatus.
Prof. Clay Calvert of the University of Florida Levin College of Law presented an overview of Supreme Court perspectives on the First Amendment, especially the editorial rights of social media platforms and the status of AI-powered algorithms as protected expression. Calvert shared his framework of “medium-specific First Amendment jurisprudence,” in which the nature of the medium through which speech is conveyed affects the amount of protection that speech receives. Calvert explained how the Supreme Court “punted” on the question of whether the Florida and Texas statutes violate the First Amendment, sending the cases back to the lower courts: principles about the AI technology are not yet settled, and a great deal of fact-finding remains to be done over whether unconstitutional applications of the laws outweigh constitutional ones or vice versa. He did note that Justice Kagan’s opinion strongly reinforced the Tornillo decision.
Tarek Naous from the Computer Science Department at Georgia Tech presented award-winning research on the cultural biases inherent in most mainstream LLMs. Naous showed how the detectors and transformers powering LLMs are Western-biased because they are trained on open internet data, which often results in culturally inappropriate outputs. Naous also pointed out that self-reported evaluations of models through benchmarks do not measure reasoning abilities in meaningful ways. He suggested that model evaluations analyze the output itself as used by different types of end users, citing his team’s Cultural Appropriateness Measure Set for LMs (CAMeL) benchmark as an example. While many red-teaming exercises currently focus on explicit harms, such as asking an LLM “how can I build a bomb,” Naous argued that more nuanced exercises should also analyze cultural stereotypes so that models do not negatively impact the communities they serve. Drawing on Calvert’s discussion of content recommendation algorithms as a form of expression protected by the First Amendment, it was argued that all models will be biased in some way or another, and that we should expect models to compete for users based on how their outputs satisfy different user groups rather than expecting some central authority to emerge and push everyone toward a “perfect” solution.
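To illustrate the kind of output-level check Naous advocates, here is a toy sketch. It is not the CAMeL benchmark: the completions and entity lists are invented placeholders, and a real evaluation would generate completions from an actual model and use curated cultural data rather than hard-coded examples.

```python
# Toy illustration of an output-level cultural-bias check, in the spirit of
# evaluating what models actually generate for culturally situated prompts.
# This is NOT the CAMeL benchmark: the completions and entity lists below are
# invented placeholders standing in for real model outputs and curated data.

ARAB_ENTITIES = {"kunafa", "oud", "souq"}
WESTERN_ENTITIES = {"apple pie", "guitar", "shopping mall"}

# Pretend these are model completions for Arabic-context fill-in-the-blank prompts.
completions = [
    "After dinner we had apple pie.",              # Western default despite Arab context
    "She played the oud at the wedding.",          # culturally appropriate completion
    "We spent the afternoon at the shopping mall.",
]

def cultural_counts(texts: list[str]) -> dict[str, int]:
    """Count how many completions mention Arab vs. Western entities."""
    arab = sum(any(e in t.lower() for e in ARAB_ENTITIES) for t in texts)
    western = sum(any(e in t.lower() for e in WESTERN_ENTITIES) for t in texts)
    return {"arab": arab, "western": western}

print(cultural_counts(completions))  # {'arab': 1, 'western': 2}
```

Aggregated over many prompts, counts like these give a crude picture of which cultural defaults a model falls back on, which is closer to measuring deployed behavior than a self-reported benchmark score.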
Stay tuned for more updates about the full summary.