The Narrative: September 15, 2023

Reports on the evolving digital political economy
A Victory for Free Speech on Social Media
A federal appeals court has upheld a finding that U.S. federal officials violated the First Amendment by coercing or strongly encouraging social-media platforms to censor content. The court also narrowed the scope and targets of the lower court’s preliminary injunction, finding it “both vague and broader than necessary to remedy the Plaintiffs’ injuries.”
The decision singled out the President’s office, the FBI, and the CDC for overstepping their powers, but absolved the National Institute of Allergy and Infectious Diseases (NIAID), the Cybersecurity and Infrastructure Security Agency (CISA), and the State Department.
Unless overturned by the Supreme Court, the decision erects an important safeguard against governmental attempts to turn the dominant social media platforms into tools of public opinion management. The Court wrote “… the Supreme Court has rarely been faced with a coordinated campaign of this magnitude orchestrated by federal officials that jeopardized a fundamental aspect of American life.” Invoking the “close nexus test,” which makes private editorial decisions unconstitutional if they are coerced or significantly encouraged by the government, the Court wrote:

“We find that the White House, acting in concert with the Surgeon General’s office, likely (1) coerced the platforms to make their moderation decisions by way of intimidating messages and threats of adverse consequences, and (2) significantly encouraged the platforms’ decisions by commandeering their decision-making processes, both in violation of the First Amendment.
“We find that the FBI, too, likely (1) coerced the platforms into moderating content, and (2) encouraged them to do so by effecting changes to their moderation policies, both in violation of the First Amendment.
“We find that, although not plainly coercive, the CDC officials likely significantly encouraged the platforms’ moderation decisions, meaning they violated the First Amendment. … Ultimately, the platforms came to heavily rely on the CDC [and] adopted rule changes meant to implement the CDC’s guidance.”

The appeals court ruled that there was not enough evidence that NIAID, the State Department, and CISA coerced or significantly encouraged the platforms. Dr. Fauci’s NIAID was just trying to promote its own view, and State Department officials did not flag specific content for censorship or suggest policy changes. Although CISA flagged content for social-media platforms, the court held that its conduct was an “attempt to convince,” not an “attempt to coerce.”
The Court (correctly, in our opinion) ruled that the July 4 preliminary injunction issued by the District Court was too vague and broader than necessary. The court wrote, “It is axiomatic that an injunction is overbroad if it enjoins a defendant from engaging in legal conduct. Nine of the preliminary injunction’s ten prohibitions risk doing just that. Moreover, many of the provisions are duplicative of each other and thus unnecessary.” It added, “The injunction’s carve outs do not solve its clarity and scope problems. Although they seem to greenlight legal speech, the carve outs, too, include vague terms and appear to authorize activities that the injunction otherwise prohibits on its face.” The new, modified injunction reads:
“Defendants, and their employees and agents, shall take no actions, formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to remove, delete, suppress, or reduce, including through altering their algorithms, posted social-media content containing protected free speech. That includes, but is not limited to, compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.” 
That wording seems to us to be right on target. The injunction’s effective date was delayed to allow the Biden administration to appeal to the Supreme Court, however, which unfortunately the government chose to do. It is disturbing that our government is so intent on retaining the power to manipulate social media content.
Google Antitrust and the Ghost of Microsoft 
Three years after the Trump administration initiated an antitrust lawsuit, the trial of Google has begun. The trial is supposed to determine whether Google’s dominance in search and advertising came from illegal exclusionary acts. Defenders of Google cite the relative ease with which users can access other search engines and say it is used by 90% of the market because it is the best; detractors point to the power of default settings and the huge sums Google pays to Apple and others to be the default search engine. 
We think the Justice Department’s case for consumer harm is extraordinarily weak, but whatever the courts decide about that, we need to focus on the question of what an appropriate remedy would be if Google is found guilty. The DoJ’s request for relief says only “Enjoin Google from continuing to engage in the anticompetitive practices.” So, what if Google stops the allegedly anticompetitive practices and still remains dominant? European antitrust attacks on Google have produced enormous fines and a few structural adjustments, but they have had no effect on its dominance of search. The DoJ also says “Enter structural relief as needed to cure any anticompetitive harm.” OK, what “structural relief”?
Supporters of the lawsuit wistfully invoke the Microsoft case from 23 years ago, which the government won, sort of. It should be noted, however, that despite the court’s 2000 finding of monopoly power, proposals to “break up” Microsoft were quickly abandoned once antitrust authorities started thinking about what that would entail. A forced separation of applications and OS monitored by federal regulators did not sound like a good way to run the software industry. All the lawsuit accomplished, ultimately, was a settlement in which Microsoft made the Internet Explorer browser a distinct application outside the OS.
Yet for all that, we are still debating whether the 2001 Microsoft antitrust settlement did anything. Those who think it did claim that the encounter with antitrust law and the separate browser made it easier for new players such as Google and Mozilla to arise. Those who think it did not matter point to the changing techno-economic conditions over the 14-year-long case: middleware browsers were a disruptive technical change, and Microsoft could not have stopped the rise of Google even if the government had done nothing, because the industry was progressing inevitably toward a network interface in which browser and cloud-based applications would undermine the desktop monopoly. They also assert that the Netscape Navigator browser lost its competition with Microsoft not because of the software firm’s unfair tactics, but because Navigator was full of bugs and performance problems by 1998.
Market conditions change. It took three years just to move from lawsuit to trial in this case. The Microsoft trial started in 1998, the settlement was implemented in 2002, and the consent decree didn’t expire until 2012. If this trial drags on, will the market conditions of 2020 still be relevant? We offer Google free advice: save yourself the hassle; settle this one out of court by generalizing the European solution everywhere, prompting consumers to choose a search engine when setting up a device. Google would lose an insignificant number of users, and it could use the money it saves on lawyers to focus on the emerging market for Large Language Models (LLMs), which seem to have emerged despite Google’s alleged stifling of innovation.
What we have here is Schumpeterian competition based on new production functions, not neoclassical competition over margins. The Google trial replays another round of antitrust lawyers’ inability to come to grips with the effects of direct and indirect network externalities. In a separate fast-tracked suit set for trial in 2024, the Justice Department will focus on Google’s alleged abuse of the ad tech market, where publishers monetize eyeballs by selling ad space to advertisers. We will explore this more nuanced antitrust suit in a forthcoming blog post.
An “ambitious” technology agenda for the G20 devoid of civil society
The 2023 G20 Leaders’ Summit in India, where the African Union was welcomed as a full-fledged group member, concluded with the issuance of the New Delhi Declaration. India’s ability to achieve consensus and secure support from global leaders for a rules-based and inclusive global trade system, while advocating for fair competition and discouraging protectionism amid ongoing geopolitical tensions, is being hailed as a significant diplomatic accomplishment.
In addition to addressing challenges related to economic growth, sustainable development, and climate change, the declaration emphasizes the importance of technological transformation and digital public infrastructure (DPI), with a specific focus on responsible AI development, digital security, and Central Bank Digital Currencies (CBDCs). The document defines DPI as a continually evolving concept and a collection of shared digital systems created and leveraged by both the public and private sectors. These systems are based on secure and resilient infrastructure and can be constructed using open standards, specifications, and open-source software to facilitate the delivery of services at a societal scale.
The leaders have endorsed voluntary and non-binding policy recommendations to advance DPI, acknowledged the significance of the free flow of data with trust and cross-border data flows while respecting relevant legal frameworks, and reaffirmed the role of data for development. G20 countries have also committed to integrating DPI into the Financial Inclusion Action Plan for the next three years and have adopted the G20 Framework for Systems of Digital Public Infrastructure, a voluntary framework that serves as a suggested guideline for the development, deployment, and governance of DPI. India has also put forward a proposal to establish a Global DPI Repository and has introduced the One Future Alliance, which aims to support the deployment of DPI in low- and middle-income countries.
However, the G20’s failure to adequately involve civil society in its decision-making processes and policy discussions is a significant shortcoming. The G20 represents some of the world’s largest economies and wields considerable influence over global policies, yet it operates without the direct representation of civil society organizations, which are essential stakeholders in addressing complex global issues. G20 meetings and discussions are typically conducted behind closed doors, without opportunities for civil society organizations to observe or contribute. This exclusion undermines the effectiveness, transparency, and legitimacy of this influential international forum.
Assessing disinformation: Logically’s report on Fukushima 
On August 24, Japan initiated the release of treated wastewater from the Fukushima Nuclear Power Plant into the ocean, with the support of the Japanese government, the scientific community, and the International Atomic Energy Agency (IAEA). Logically, a British tech startup specializing in identifying disinformation, published a report on China’s propaganda campaign related to the Fukushima wastewater release. Since early 2023, IGP has also monitored instances of disinformation regarding the Fukushima issue as part of the IAEA’s Coordinated Research Project. We aim to cross-reference Logically’s findings with our own comprehensive account of disinformation practices, including data from Ukraine’s Zaporizhzhia Nuclear Power Station.
Logically’s analysis reveals a series of concerted efforts by Chinese officials, state media, and pro-China influencers to spread disinformation and narratives about Japan’s Fukushima wastewater release. Logically utilized their AI-driven threat intelligence platform and conducted primary and secondary research to scrutinize the narratives amplified by Chinese state officials and media. They observed content on platforms such as Weibo, Meta, and X, which included identifying paid advertisements through Meta’s ad library (an illustrative sketch of such a query follows the findings below). They found social media posts that:

Challenged the IAEA’s safety report as flawed and cast doubt on the IAEA’s support for Japan’s plan.
Claimed that the wastewater release will contaminate Japanese seafood.
Amplified concerns about the plan expressed by Japanese fishermen and South Korean and Chinese people.
Referred to “treated wastewater” as “nuclear-contaminated water.”

They also found examples of a Chinese propaganda campaign in traditional media, finding that:

Between January and August 2023, the Global Times published 126 English articles, and the People’s Daily produced 74 articles in English and 60 in Japanese related to the Fukushima wastewater release.
China Central Television and other Chinese organizations ran at least 22 paid advertisements on Meta about the risks posed by the wastewater release.
There was a 1509% increase in posts mentioning “Fukushima” by Chinese state media, officials, and pro-China influencers.

Examples of top Weibo hashtags related to the Fukushima wastewater release included:

“Japan will use 70 billion yen to deal with negative information about nuclear-contaminated water” – 430 million reads.
“Are China’s Japanese restaurants going out of business in droves?” – 320 million reads.
“Provinces most affected by Japan’s nuclear sewage” – 130 million reads.
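
For readers curious what a reproducible version of the paid-advertisement monitoring might look like, below is a minimal sketch of a query against Meta’s public Ad Library API. This is our own illustration, not Logically’s undisclosed tooling; the endpoint, parameter names, and fields follow Meta’s published documentation as we understand it, and the access token and search terms are placeholders.

```python
# Minimal sketch (our assumption, not Logically's method): search Meta's Ad Library
# for paid ads mentioning the Fukushima wastewater release.
import requests

AD_LIBRARY_URL = "https://graph.facebook.com/v18.0/ads_archive"  # Meta Ad Library API endpoint
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; requires Ad Library API access

params = {
    "search_terms": "Fukushima wastewater",       # keyword match against ad text
    "ad_reached_countries": '["JP","KR","US"]',   # countries where ads were delivered
    "ad_active_status": "ALL",                    # include ads that are no longer running
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,publisher_platforms",
    "limit": 100,
    "access_token": ACCESS_TOKEN,
}

response = requests.get(AD_LIBRARY_URL, params=params, timeout=30)
response.raise_for_status()

# Print the sponsoring page and a snippet of each ad's text for manual review.
for ad in response.json().get("data", []):
    bodies = ad.get("ad_creative_bodies") or ["(no creative text returned)"]
    print(f"{ad.get('page_name', 'unknown page')}: {bodies[0][:120]}")
```

To our knowledge, access to the Ad Library API requires identity verification with Meta, and coverage is generally limited to ads about social issues, elections, or politics, so a query like this is a starting point rather than a complete census of paid content.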

The data and methods used by Logically aren’t divulged, a common problem in the cottage industry of disinfo monitoring. Logically’s report about this campaign is not inconsistent with IGP’s findings to date. IGP has reviewed over 200 governmental and corporate statements and content pieces from various online sources, encompassing both traditional media and social media platforms across East Asian nations. Our findings also underscore that Chinese state media disseminated inaccurate and provocative narratives concerning the Fukushima wastewater release, including casting doubt on the IAEA’s independence, emphasizing Japan’s lobbying activities within the IAEA and the scientific community, and amplifying public resentment in South Korea, Japan, and China.
However, Logically’s report also has a limitation in grasping the broader Fukushima wastewater disinformation context. To fully comprehend ongoing disinformation activities related to the Fukushima wastewater release, we should broaden our perspective beyond state-led actors like the Chinese government and also examine non-state actors. IGP has found that many of the examples and revelations in Logically’s report could also apply to regions like South Korea and Japan, where governments support the wastewater release plan. For instance, South Korean media have highlighted the potential harmful consequences of releasing Fukushima wastewater, emphasizing its potential to adversely affect marine life and disrupt the local ecosystem. Japanese media also extensively covered the apprehensions raised by the Japanese fishing industry well in advance of the commencement of the wastewater release. Essentially, IGP suspects that individuals and media outlets in East Asia actively engage in debate, misinformation, and possibly even disinformation, irrespective of their governments’ positions, and that they play an equally important role in spreading it. For example, we have identified several civic groups in South Korea and Japan that ran paid advertisements on Meta to propagate false narratives concerning the wastewater release and environmental concerns. The IGP team believes that disinformation surrounding the Fukushima wastewater release is more complex than just state-led disinformation.
Huawei Mate 60 Pro Leaps over the “High Fence” 
Huawei took DC insiders by surprise with the release of its flagship Mate 60 Pro. In China, the Mate 60 Pro was framed as a success that broke the US’s coordinated chip manufacturing blockade. US export controls have definitely hindered Huawei, blocking its access to TSMC chips, lithography equipment, and the latest design automation intellectual property, but a teardown investigation of the Mate 60 Pro has shown that sanctions have not prevented Huawei’s chip supplier SMIC from manufacturing 7nm chips with decent parametric yields.
While SMIC is not yet self-sufficient, loopholes in current export control policies made it possible for it to source equipment typically used for 28nm processes and adapt it to the more advanced 7nm process.
The Mate 60 Pro release was timed to coincide with U.S. Commerce Secretary Gina Raimondo’s visit to Beijing. The US government has begun looking into the “character and composition” of the phone. Despite the clear failure of the export controls’ stated goals, National Security Advisor Jake Sullivan maintained that the United States “should continue on its course of a ‘small yard, high fence’ set of technology restrictions focused narrowly on national security concerns (…) regardless of the outcome.”
Any additions to the existing export control regime targeting SMIC will raise the stakes of the technological standoff between China and the US. The only way to fully close these loopholes would be sweeping measures that restrict even the basic equipment and tools of chip-making on an unprecedented scale. Such an unfortunate outcome would only serve to heighten tensions, harm US manufacturers, and accelerate decoupling. After the recent Huawei release, any further US export controls will only strengthen Beijing’s tendency to pursue technological self-sufficiency and reinforce its distrust of US intentions.
Don’t Rely on Foreign Policy Think Tanks for Global AI Governance Advice
The venerable Bulletin of the Atomic Scientists has published its September issue, which explores The Hype, Peril, and Promise of Artificial Intelligence. In it, Rumtin Sepasspour of @CSERCambridge and the defense-policy-oriented think tank Global Catastrophic Risk Policy authors a “premium” (apparently the latest iteration of data, compute, and algorithms only impacts premium readership?) piece, “A reality check and a way forward for the global governance of artificial intelligence”. It makes several useful observations about the need for targeted and focused governance and for clearly identifying “what policy outcomes are being sought and which institutional functions are needed to reach those outcomes.” Yet, in typical fashion for the AI community, the author confines the proposed solutions to multilateral ones. We continue to be surprised at the AI community’s lack of awareness of the range of institutionalized global governance options, particularly the institutions engaged in Internet governance (which is arguably a hell of a lot closer topically to AI than hypothetical “existential” threats), where recognition of stakeholder incentives results in active, voluntary participation in collective action to address a variety of serious transnational problems. For some background on how and why these networked governance structures work where hierarchical (e.g., state-led) solutions don’t, start with our 2013 International Studies Review article.
 