IGP at AoIR 2024

From October 30 to November 2, the 2024 Association of Internet Researchers (AoIR) conference was held at the University of Sheffield in Sheffield, United Kingdom. This year marked the 25th anniversary of the AoIR conference, and it set a record as the largest in its history, drawing approximately 700 attendees from around the world. The conference featured over 130 sessions spread across three consecutive days, covering internet research topics such as AI and platform studies, digital culture, and online communities. Particular focus was placed on the impact of digital technologies on society, politics, the digital economy, and human behavior. The conference’s themes reflected current issues in internet studies, including AI governance, content moderation, digital industries, mis/disinformation, online public opinion, digital methods, platform labor, and social movements.
Session: Governing Mis/Disinformation
On behalf of IGP, I presented our research titled ‘From Black to White: Dissecting Disinformation in Nuclear Emergencies’ during the Governing Mis/Disinformation session. The study examines state-sponsored disinformation campaigns during nuclear emergencies through propaganda models, focusing on incidents at the Zaporizhzhia Nuclear Power Plant (ZNPP) and the Fukushima Daiichi Nuclear Power Plant (FNPP).
Our analysis, using the Legitimating Source Model (LSM) and Deflective Source Model (DSM), revealed significant differences between the ZNPP and FNPP cases in terms of disinformation lifespan and actor involvement. Despite active state participation in disinformation campaigns, these efforts showed limited impact and failed to achieve policy changes in targeted nations. Contrary to common assumptions, we found that social media platforms played a minimal role in state propaganda efforts. Instead, traditional media served as the primary channel for disinformation, leveraging strong institutional trust among domestic audiences. When social media was used, the narratives merely echoed those found in traditional media outlets. The research found no evidence of AI-enabled disinformation in these nuclear emergencies. However, our theoretical analysis suggests that while generative AI could potentially enhance certain aspects of the DSM, it might simultaneously undermine the LSM’s core legitimization mechanisms.
My session featured two research groups from Europe, represented by four panelists: Daria Dergacheva and Christian Katzenbach from the University of Bremen and the Alexander von Humboldt Institute in Germany, and Joanne Juai and Cornelia Brantner from Karlstad University in Sweden. Daria Dergacheva’s presentation, “Governing and defining misinformation,” examined how major social media platforms’ approaches to misinformation governance evolved from their inception through 2023, revealing that early efforts focused primarily on combating spam and impersonation. The second presentation, Joanne Juai’s “The dark side of LLM-powered chatbots,” used the 2024 Taiwan presidential election as a case study to analyze Microsoft’s Copilot responses across five languages. It found concerning disparities in content accuracy, including higher rates of false information in Traditional Chinese content than in Simplified Chinese, and notably biased information about Taiwan’s election in German-language responses.
Discussion
Reaction 1
Our presentation received notably different responses from two distinct groups: scholars familiar with propaganda theories offered positive feedback and engaged in deeper theoretical discussions, while those less familiar with these frameworks raised methodological questions. I noticed that most scholars, particularly those presenting in other sessions, predominantly employ quantitative methods and either rely on master’s and undergraduate students for data collection or obtain data from external sources. My approach of personally collecting qualitative data over two years drew surprise from the audience. While I acknowledge that quantitative methods and student assistance can enable faster collection of larger datasets, my experience suggests that qualitative approaches are particularly valuable for propaganda research because they allow a deeper understanding of narrative contexts and nuances.
The study of disinformation as a propaganda tactic extends far beyond quantifying the volume of propaganda messages. At its core, it is about understanding how false information is strategically crafted to resonate with target audiences’ cultural, political, and socioeconomic backgrounds, and how it potentially influences their perceptions. Simply analyzing the style and tone of false information, or measuring its prevalence while overlooking context, proves less effective for understanding its true impact.
Propaganda research, although more time-intensive, requires researchers to personally examine each false narrative and understand how it connects with the target audience’s social background. Our Fukushima disinformation research illustrates this point well. By analyzing public responses to disinformation about Japan’s nuclear wastewater release, we discovered that people weren’t actually rejecting scientific evidence from experts and international organizations. Instead, they were expressing a deeper, fundamental distrust of the Japanese government. Without personally reading and processing each piece of disinformation, we might have incorrectly concluded that people in South Korea and China were simply anti-science. Our empirical approach to disinformation research demands that researchers immerse themselves in the disinformation landscape. This means experiencing it firsthand, accumulating data personally, and developing a nuanced understanding through careful examination of each piece of content.
Reaction 2
Panelist Daria Dergacheva raised a crucial question about our conclusion, suggesting that the failure to achieve policy changes or shift public perceptions does not necessarily mean disinformation campaigns are harmless or have not affected people’s thinking. While I acknowledged that disinformation campaigns may influence people’s thoughts and potentially pose threats during nuclear emergencies, I argued that we cannot classify disinformation as a threat solely based on its ability to generate noise and create instability.
The proliferation of false information during periods of social vulnerability is, in fact, a natural phenomenon in liberal democratic societies. It is both normal and healthy for people to express diverse viewpoints on specific issues. Moreover, state-sponsored disinformation, as a propaganda tactic, is fundamentally a goal-oriented activity designed to achieve specific, calibrated, and visible changes, whether strategic in the long run or tactical in the short run. Even if it creates emotional turbulence, disinformation that lacks sufficient persuasive power to gain public support, fails to alter policies, and does not undermine public trust in government communications cannot be considered successful as propaganda, nor classified as a threat. Given that we cannot directly observe and measure the psychological impact of disinformation, behavioral change must remain the primary criterion for evaluating the success of disinformation campaigns. As I concluded in my response: democracy is inherently noisy.
Reaction 3
Another interesting question came from a fellow Korean researcher at the University of Massachusetts-Amherst, who studies disinformation targeting ethnic minorities, particularly Asian American populations. She noted that my research seemed overly focused on top-down disinformation distribution through traditional media outlets while potentially underestimating social media’s role in bottom-up disinformation campaigns. During our post-session chat, I learned that she had previously studied how South Korean left-wing groups use social media for influence operations to counter mainstream media narratives. Her perspective seemed influenced by left-wing rhetoric in South Korea and revealed some grievances toward mainstream media.
In response to her question, I acknowledged that our study emphasizes the significant role of mainstream media over social media, confirming the presence of top-down information flows. While she was correct that individuals and civic groups disseminated disinformation in our case studies, it is important to clarify that these findings emerged from observation rather than being an intended focus of our research.
First, we observed few influential and distinctive disinformation narratives originating from social media when compared to those propagated by mainstream media. In the nuclear emergencies we analyzed, much of the disinformation circulating on social media was largely a replication of narratives first disseminated by mainstream media sources. Second, the boundary between mainstream media and social media is becoming increasingly blurred. Most mainstream media outlets now share their content on social media platforms, allowing users to react, share, and even repurpose that content. This convergence complicates the traditional distinction between top-down and bottom-up information dissemination.
Third, while social media does facilitate bottom-up disinformation campaigns, our observations showed that posts created by individuals and civic groups often had limited reach. The audience engaging with these posts typically consisted of followers who already shared similar views, raising questions about the broader influence of these campaigns on public perception. Conversely, despite a prevailing sense of mistrust in mainstream media, these outlets continue to maintain substantial followership and provoke significant reactions to their content. This engagement far surpasses that of lesser-known accounts, suggesting that mainstream media retains a dominant role in shaping public discourse.
Takeaway
The most significant insight I gained from this session came from Joanne Juai’s presentation, “The Dark Side of LLM-Powered Chatbots: Misinformation, Biases, and Content Moderation Challenges in Political Information Retrieval.” The study, which used the 2024 Taiwanese presidential election as a case study, examined the complexities and implications of employing LLM-based chatbots for political information retrieval. The research team conducted an in-depth analysis of Microsoft’s Copilot responses across five languages: English, Traditional Chinese, Simplified Chinese, German, and Swedish. Their findings revealed notable discrepancies in content accuracy and response behavior among these languages, with significantly higher rates of misinformation in Traditional Chinese (the written form used in Taiwan) than in Simplified Chinese (used in mainland China). Moreover, responses in German exhibited more pronounced bias when presenting information related to the Taiwanese presidential election than those in other European languages.
I posed a question to Joanne, suggesting that her findings might imply that generative AI’s current limitations—specifically, its unreliability and linguistic disparities—could deter its use in disinformation campaigns. I referenced my own research from this year’s Internet Governance Project (IGP) conference presentation, “Is Generative AI Really a Game-Changer in Disinformation Campaigns?” which highlighted the inconsistent capabilities of various LLMs in comprehending East Asian languages and cultural subtleties. This technological imperfection, I argued, could reduce the likelihood of propagandists leveraging generative AI for disinformation, as these inconsistencies undermine the efficacy of such campaigns across diverse linguistic and cultural regions.
While Joanne acknowledged that LLMs display technological limitations and uneven performance across different languages, platforms, and cultural contexts, she disagreed with my assessment. We both agreed that her research underscores the instability of LLMs as tools for sophisticated disinformation campaigns; however, she argued that these linguistic discrepancies and informational inequalities could actually incentivize propagandists to exploit them. By crafting chatbots tailored to disseminate disinformation to specific linguistic communities (such as speakers of Traditional Chinese), while ensuring accurate information is provided to other language groups, propagandists could strategically use these biases to their advantage. This approach would enable micro-targeting of specific audiences while minimizing exposure risks. For instance, the Chinese government might exploit such discrepancies to spread false information exclusively in Traditional Chinese, targeting the Taiwanese population while evading detection by Western analysts and intelligence agencies who may lack proficiency in Traditional Chinese.
Joanne’s response was the most impactful takeaway I had from this year’s conference. Her answer prompted deep reflection, and I realized that her argument was quite compelling. While she acknowledged that there is no concrete evidence of China using this approach to spread disinformation during the Taiwanese presidential election, she pointed out that technological imbalances can be exploited in various ways based on the intentions of propagandists. Since technology is inherently neutral, its applications are boundless and depend on how users choose to utilize it. Her research provided me with fresh insights and presented significant findings that could help fill the gaps in my own study on AI-powered disinformation.
Conclusion
Overall, my presentation at AoIR was a success. I gained new insights into AI-powered disinformation research, which allowed me to reassess my current claims from multiple perspectives. The most profound takeaway was the realization that even with shared data and observations, researchers can draw fundamentally different conclusions. Joanne and I, for instance, reached contrasting assessments from our common observations about the limitations of LLMs and the nature of state-led disinformation. Such academic exchanges are immensely enriching, and I am grateful for the chance to represent IGP at the conference, where I both learned from and contributed to scholars in related fields.