U.S. AI Regulation Looks a Lot Like Content Moderation
On July 21, the White House announced it had secured Voluntary Commitments from several leading companies to help manage risks posed by artificial intelligence. The seven companies signing on are: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The Biden administration claims credit for jawboning them into this agreement. Under commitments covering “safety, security and trust,” the firms will, among other things:
Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas.
Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards.
Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.
Incent third-party discovery and reporting of issues and vulnerabilities.
Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, such as robust provenance, watermarking, or both (a toy sketch of one provenance approach follows this list).
Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks such as effects on fairness and bias.
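To make the provenance idea concrete, here is a minimal, illustrative sketch of a signed sidecar manifest: the generator publishes a record binding a media file’s hash to an “AI-generated” claim, and anyone holding the verification key can check it. This is a toy scheme with hypothetical names and a symmetric key, not any signatory’s actual implementation; real systems (C2PA-style manifests, for instance) use asymmetric signatures and embed the record in the file’s metadata.

```python
import hashlib, hmac, json

# Hypothetical key held by the AI provider; real schemes use asymmetric keys.
PROVIDER_KEY = b"hypothetical-provider-signing-key"

def make_provenance_record(media_bytes: bytes) -> dict:
    """Create a signed sidecar record declaring the media AI-generated."""
    claims = {
        "generator": "example-model-v1",  # hypothetical model name
        "ai_generated": True,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    serialized = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(PROVIDER_KEY, serialized, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def check_provenance(media_bytes: bytes, record: dict) -> bool:
    """Verify the record is authentic and still matches the media bytes."""
    serialized = json.dumps(record["claims"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, serialized, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, record["tag"])
    matches = record["claims"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches

image = b"...generated image bytes..."
rec = make_provenance_record(image)
assert check_provenance(image, rec)              # intact media verifies
assert not check_provenance(image + b"x", rec)   # edited media no longer matches
```

As the last line shows, this approach is brittle by design: any modification to the media breaks the hash binding, which is why the commitments also mention watermarking, an orthogonal technique that embeds the signal in the content itself.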
Despite some howls of protest from those convinced that government regulation is the only path forward, voluntary commitments by the leading developers are really the only intervention that makes sense right now. Those calling loudly for “AI regulation” have never been able to specify exactly what they would regulate, which agency would do it, and what criteria would be used. Crafting specific legislation or rules, and anticipating their results, under conditions of rapid technological change is nearly impossible. Voluntary commitments are consistent with the concept of networked governance, which tends to prevail when there is no clear hierarchical source of authority and there are complex interactions and trade-offs among a distributed set of decision makers. Many of the activities noted above occur in cybersecurity as well.
Voluntary commitments have long been used by agencies to achieve public policy objectives; the White House has obtained them in areas requiring collective action, such as climate change, cybersecurity, and now AI.
Systems that combine compute, data, and algorithms (aka “AI”) are ubiquitous in the modern economy. The firms above have clear economic incentives to commercialize various AI applications. The US government is aligned with this, as long as the tune is consistent with its political objectives. Microsoft, currently in the catbird seat thanks to its OpenAI partnership and perhaps currying favor with the “new Washington consensus,” went even further than the commitments. Predictably for an incumbent, it gave full-throated support to a regime that would license providers of AI data centers based on capability- or compute-based thresholds, impose Know-Your-Customer requirements, and use them to enforce export controls. Ah, the sweet sounds and sour notes of “AI” policy making.
AI Governance in East Asia
By Seungtae Han. The 2023 G7 Summit left a strong impression, showcasing global leaders’ unity in addressing generative AI and global AI governance. The summit’s communiqué acknowledged the varied approaches and policy instruments among G7 members for achieving the common goal of trustworthy AI. Some G7 countries are implementing comprehensive laws like the European Union’s ‘AI Act,’ while the United States relies on “voluntary commitments.” In contrast, East Asian countries are making quiet progress in their AI governance efforts.
In July, South Korea’s Science, ICT, Broadcasting and Communications Committee proposed an ‘AI Act’ and transferred it to the Legislation and Judiciary Committee for final review – one step away from the National Assembly’s vote. If the bill passes, it would become the first statutory foundation governing the South Korean AI industry. In April, the Liberal Democratic Party (LDP) – the ruling party in Japan for most of the last seven decades – published ‘The AI White Paper,’ providing policy guidelines for the Japanese government. As a follow-up, Japan’s Digital Committee is currently reviewing and updating existing laws and regulations. The distinctive AI strategies of these two tech-oriented Asian democracies offer a productive comparison with their counterparts in the US and EU.
South Korea’s ‘Bill to Foster the AI Industry and Secure Trust’ consists of 32 provisions, with three key points. 1) Article 11 permits private companies and academic research centers to engage in AI research and development without stringent obligations unless they pose substantial risks to human life, rights, or security. 2) Articles 15–16 and 19–21 enable the government to proactively nurture AI professionals and support AI ventures by amending laws, offering financial aid, establishing international networks, and creating an AI technology hub. 3) Article 6 mandates the creation of an independent AI Committee under the Prime Minister’s authority to assess AI development in the private sector and oversee regulations, budgets, and laws pertaining to AI. Committee members, appointed by the President, are experts drawn from industry, academia, government, and the legal profession.
In Japan, the LDP’s White Paper highlights a commitment to fostering Japanese AI industries while mitigating risks through sector-specific law amendments rather than one-size-fits-all obligations. First, existing laws such as the Digital Platform Transparency Act and the Financial Instruments and Exchange Act already require companies to take a more proactive approach to algorithmic risk management on their own. Second, further sector-specific amendments to AI industry laws are made by the Digital Committee, which includes the Prime Minister, cabinet members, and experts from various scientific fields. Finally, Japan recently established a new AI Council directly under the office of the Prime Minister. While the Digital Committee focuses on specific laws and regulations, the AI Council sets general guidelines and rules for fostering the Japanese AI industry and minimizing risk. The council includes participants from academia, the AI industry, the legal profession, Cabinet members, and the Prime Minister himself.
In contrast to the European ‘AI Act,’ South Korea and Japan are prioritizing the development of their domestic AI industries and offering private companies the flexibility to establish their own safeguards, avoiding strict universal rules. Furthermore, neither framework contains a specific provision categorizing levels of AI risk. A key aspect of their AI governance approach lies in the prominent role of committees, which bring multiple stakeholders into AI governance to express their interests and manage diverse perspectives. This approach may enable more flexible, agile, and effective policy implementation for emerging technology issues than reliance solely on formal legislative processes.
Nevertheless, there are lingering uncertainties about the contrasting AI governance strategies of the United States and East Asian countries. While the US approach remains relatively hands-off, South Korea’s and Japan’s strategies give the committees the power to ease or tighten AI regulations and allocate budgets for financial assistance, which may grant those governments substantial control over the industry. How much independence private companies and research institutions retain in their AI research and development remains unclear amid this state influence.
This Is What Chinese Espionage Really Looks Like
Microsoft’s 365 cloud email environment was the vector through which Chinese-attributed hackers broke into U.S. government agency email accounts to conduct espionage last month. The hackers obtained a Microsoft signing key, which let them forge authentication tokens and access email inboxes as if they were the account owners. Commerce Secretary Raimondo and the State Department seem to have been targeted. Microsoft still has not disclosed how the attackers got that key, saying only that access was possible because of a “validation error in Microsoft code.” A euphemism for a zero day? The incident has generated a lot of criticism from the cybersecurity community, with one noted expert saying Microsoft is “no longer effectively handling vulnerability patching.”
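To see why a stolen signing key is so damaging, here is a minimal sketch of bearer-token authentication: the service trusts any token whose signature verifies under the signing key, so whoever holds the key can mint valid tokens for any account. This toy uses a symmetric HMAC key and hypothetical names; the real incident involved an asymmetric consumer signing key that, per Microsoft, was improperly accepted because of the validation error.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """URL-safe base64 without padding, as used in JWT-style tokens."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_token(signing_key: bytes, claims: dict) -> str:
    """Create a signed token; anyone holding signing_key can do this."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(signing_key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_token(signing_key: bytes, token: str) -> bool:
    """Server-side check: a valid signature means the claims are trusted."""
    header, payload, sig = token.split(".")
    expected = hmac.new(signing_key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

# The key is all that separates the legitimate identity service from an attacker:
KEY = b"hypothetical-stolen-signing-key"
forged = mint_token(KEY, {"sub": "secretary@agency.example.gov", "exp": time.time() + 3600})
assert verify_token(KEY, forged)  # the mail service accepts the forged token
```

Note the asymmetry of the failure: nothing downstream of the key is compromised in any detectable way, because every forged token is, cryptographically, a perfectly valid token.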
We don’t want to pile on, but this incident provides valuable perspective on the claim that TikTok is a tool of Chinese espionage. Espionage involves professional efforts to break into guarded secrets or data, and it targets important actors in government and the military. It does not target content openly published on a short-video social media app by ordinary people, because that data has little intelligence value.
Pakistan Pushes Through With Data Protection Law
On the recommendation of Pakistan’s Ministry of Information Technology and Telecommunication, the Federal Cabinet has granted in-principle approval to the Personal Data Protection Bill. The legislation intends to ensure the security of users’ data by prohibiting its sharing with any entity, company, or government agency without explicit consent. Additionally, the bill proposes the establishment of a National Commission for Personal Data Protection (NCPDP) to safeguard consumers’ private information and address grievances through a civil court. Civil society and industry stakeholders have previously voiced concern over the scope and applicability of the draft legislation. The broad definitions of critical and sensitive personal data, the data localization requirements, and their impact on cross-border data flows have raised apprehensions. The bill will be scrutinized by the Cabinet Committee for Legislative Cases (CCLC) before being presented again before the Cabinet.