15 May, 2023
More Wrong Thinking on Generative AI Risks
Civil society is in trouble if the musings of an anonymous philosopher on the LessWrong website (most likely a national security official with access to a closed CISA meeting) reflect the current zeitgeist inside governments concerning generative AI. The piece asks the question, “Are Advances in LLMs a National Security Risk?” While there are legitimate reasons to be concerned about generative AI risks (and even to take action, like requiring system cards), the author argues that only governments have the ability to mitigate possible harms, and that they should do so by controlling the technology. Troublingly, the author ignores their own evidence in reaching that false conclusion. For example, if cybersecurity insurance premiums are indeed growing, that means decentralized but interconnected actors have even stronger incentives to develop defenses instead of transferring risk.

The author goes on to argue the need for international diplomacy with adversaries (which is fine in principle), citing nuclear arrangements and claiming that “under even modest assumptions [LLMs] constitute a threat to order larger than any other weapons system to date.” The conceptualization and logic here are baffling. Yes, militaries are using AI in their operations, and we should all be concerned about it, but the fantasy that algorithms equate to kinetic weapons that can annihilate hundreds of thousands or millions of humans in seconds is absurd (remember the similar “Cyber Pearl Harbor” threat that has not materialized?). Moreover, low-level interstate cyber conflict is omnipresent because the benefits of engaging in it outweigh the costs and the consequences are limited, if any; states exist in a condition of anarchy. Thus the more likely outcome is that, even if adversarial states were to develop international mechanisms (e.g., norms) for controlling generative AI, they would continue to use the technology to their advantage when “necessary,” while civil society endures the associated costs of those controls (e.g., a Manhattan Project for AI safety).
GIG-ARTS 2023
We will be presenting our analysis on “WebPKI and Non-Governmental Governance of Trust on the Internet” at the GIG-ARTS conference on Tuesday, May 16, 2023. The conference theme revolves around the Governance of Cybersecurity, and our analysis delves into the development of transnational, cooperative, private-sector-driven governance within the Certificate Authority and Browser Forum (CAB Forum). We investigate how this governance structure addresses collective action problems in order to promote the adoption of security standards.
Over the past decade, the Forum has led several notable initiatives, including the Network Security Requirements, the steadily evolving Baseline Requirements for certificate issuance, and the recent implementation of Certificate Transparency. The Forum has managed these reforms through a distinctive governance framework that grants voting rights to both certificate producers and consumers.
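As a concrete illustration of what Certificate Transparency means at the protocol level (this sketch is illustrative and is not part of our study), a client can check whether the certificate a server presents carries embedded Signed Certificate Timestamps (SCTs), the proofs of public logging that major browsers now require. A minimal sketch in Python, assuming the pyca/cryptography package and using example.com as a placeholder hostname:

```python
import socket
import ssl

from cryptography import x509
from cryptography.x509.oid import ExtensionOID


def embedded_scts(hostname: str, port: int = 443):
    """Fetch a server's leaf certificate and return its embedded SCTs, if any."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            der_cert = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der_cert)
    try:
        # Per RFC 6962, SCTs are embedded in an X.509 extension of the certificate.
        ext = cert.extensions.get_extension_for_oid(
            ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
        )
    except x509.ExtensionNotFound:
        return []
    return list(ext.value)


if __name__ == "__main__":
    # example.com is a placeholder; any publicly trusted HTTPS host should work.
    for sct in embedded_scts("example.com"):
        print(sct.log_id.hex(), sct.timestamp)
```

A publicly trusted certificate with no SCTs is rejected by Chrome and Safari today; that browser-side enforcement, more than any single written rule, is what made Certificate Transparency effectively mandatory for CAs.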
Our study employs a mixed-methods approach to characterize the Forum’s stakeholders, governance mechanisms, and voting patterns. Our presentation includes initial findings on external factors that influence the Forum, such as market share among Certificate Authorities, interoperability across browser root stores, security incidents, and alternative governance platforms, as well as consistent themes that emerged from qualitative analysis and semi-structured interviews. These themes encompass a preference for consensus-based decision-making, power dynamics between Certificate Authorities and Browsers, and the challenges faced by non-native English speakers in a diverse forum. We synthesize these findings to outline potential opportunities, dynamics of industry self-governance, social trust considerations, and risks to the sustainability of the Forum. Finally, we conclude by offering policy recommendations.
Oversight Board Consultation
IGP attended the Oversight Board’s “Shaheed” PAO Asia Roundtable, focusing on Facebook and Instagram’s moderation of the word “shaheed” in reference to people on Meta’s “Dangerous Individuals and Organizations” list. Meta translates “shaheed” as “martyr” in English, and the term accounts for more content removals under the Community Standards than any other single word or phrase on Meta’s platforms. The roundtable was attended by several human rights and legal institutions, civil society groups, and digital rights organizations based in Asia. This case provides an opportunity to highlight how ambiguous terms like “shaheed” are used to automate the censorship of individuals who are designated as dangerous by the government. Meta’s policies can negatively impact disadvantaged and marginalized communities and lead to extra-judicial censorship of legitimate speech.
The discussions focused on the context in which the word is used; for example, freedom fighters in India and Pakistan are referred to as shaheed. Shaheed is also an honorific given to soldiers who have fallen in the line of duty, to citizens who are casualties of war or terrorism, and to heroes who died saving lives during disasters. While context of use is important, it does not address the broad censorship enabled by ambiguous terms like shaheed.
For example, any community fighting for self-determination or standing up against the state’s might may use the term “shaheed,” or martyr, to describe its compatriots. Self-determination movements and secession efforts are viewed and treated as threats to the state until they become politically viable options. Enabling such broad censorship will therefore prolong power struggles that are often rooted in histories and identities that exist and operate outside of social media.
IGP raised the Burhan Wani case as an example that could be useful in understanding the censorship implications of this case and the wide powers that platforms hold. The Indian government shut down the internet to block conversation and protests in Jammu and Kashmir (J&K), and simultaneously, Meta’s platforms, Facebook and Instagram, censored conversation beyond J&K. Though it is not clear whether Meta’s moderation team carried out the blocks using the term “martyr,” the company’s official statement acknowledges that it removed content that praised or was deemed supportive of terrorists, terror groups, and the like. As the case highlights, Meta’s global team exercises a certain amount of judgment: posts from J&K all needed to be framed in a context that condemns these “terrorist organizations” and their “violent activities.” This resulted in legitimate speech by journalists and citizens being shut down.
Another related point raised by IGP in the context of the “Dangerous Individuals and Organizations” community standard is the August 2019 amendment of India’s terror law, the Unlawful Activities (Prevention) Act (UAPA). The amended law includes provisions enabling the designation of an individual as a “terrorist” and has been used to label critics, activists, journalists, academics, and ordinary citizens as such. The UAPA repackages ideas as crimes and enables the government to subvert the principles of justice and due process.
The UAPA is being challenged in the courts, but with the courts siding with the state over individual liberty, the law looks like it is here to stay. Meta needs to make a decision on terms like “shaheed” bearing in mind how such measures contribute to shaping restrictions on liberty in India and elsewhere. By allowing broad censorship based on ambiguous terms, social media companies are wading into political conflict.