Tags: Experimental Authorship, Narrative Change, Reality Check

Misinformed Choice: Understanding it as a ‘Socio-Technical Imaginary’

Published: January 29, 2024

Photo by Jon Tyson on Unsplash (edited)


by Anushree Majumdar

Misinformation Project Series 1: The Making of Misinformed Choice

In the first blog in this series on disinformation in political elections in South and Southeast Asia, we introduced the information stack, which is made up of seven layers: self-recognition, process, infrastructure, representation, variety, verification, and assurance. Together, these layers form the basis of “informed choice”, a fundamental element of the electoral process. Misinformed Choice occurs when any of these layers comes under attack, whether from fake news spread by bad actors and state-sponsored information operators; misleading election manifestos of different political parties; posts on a variety of social media platforms; unverified information shared via messaging apps; television channels furthering certain narratives due to inherent biases; or, occasionally, wilful ignorance that arises from consuming a vast amount of information across mediums.

What sets Misinformed Choice apart from other forms of information manipulation and control is the way in which the subject/citizen is convinced that they have studied the information in a rational and objective manner and have reached a conclusion that can be proved in a number of ways, even if that particular opinion or information cycle is likely to crumble under scrutiny. Misinformed Choice occurs not as a result of information scarcity but of information overload, which creates a condition of suspended belief, causing harm to the people making those choices.

While the democratic nature of popular social media platforms and their ease of access have resulted in millions of users across countries (in 2023, approximately 4.9 billion people used social media worldwide[1]), these spaces are also being weaponized by political parties, information operators, organized troll armies, and individuals to overwhelm citizens during election season by inundating their feeds and timelines with an excess of information. The sheer volume makes it difficult for journalists, fact-checkers, and individuals to verify this content, and a large chunk of it passes muster without any digital checks and balances on social media platforms.

Therefore, it is crucial to acknowledge the nature of informational ecosystems instead of simply placing blame on the user, citing gullibility or laziness as the reason they were misled. In this study, we emphasize that delegating responsibility onto the user, whether by dismissing their activities or by forcing them to undergo digital literacy training, is insufficient; instead, we must understand Misinformed Choice as a ‘socio-technical imaginary.’

In their 2015 publication, Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power, Sheila Jasanoff and Sang-Hyun Kim introduced a framework of sociotechnical imaginaries as “collectively held, institutionally stabilized, and publicly performed visions of desirable futures.”[2] In turn, we posit that Misinformed Choice has to be considered as a complex interplay of multiple stakeholders who create the conditions, structures, practices, and processes that enable and naturalize Misinformed Choice as a part of our everyday experiences. A pattern of how Misinformed Choice is created and perpetuated with the help of digital technologies has emerged, and in our scouting report, we explore this phenomenon by dividing it into five clusters: information ecosystems, cybernetic verification loops, affect mining, lack of digital authority, and the infrastructure of disenfranchisement. A brief primer on these clusters follows:

  1. Information ecosystems

The rise of new structures of information dissemination runs parallel to the steady decline of legacy media; for many decades, the latter was synonymous with veracity, and its reportage was rooted in facts. Digital ecosystems naturalize the aesthetic conditions of misinformation: context collapse, ‘friend-of-a-friend’ networking that makes information feel immediate, meme cultures that transfer information far removed from its provenance, and the flattening of information sources, which makes it difficult to distinguish the quality and rigor of the information being produced. In the flattened space of the computational network, because all information is presented as equally true, all information becomes suspect, and users have great difficulty discerning how to trust these ecosystems.

  2. Cybernetic verification loops

Digital information forms a cybernetic feedback loop with other digital information, establishing causal and correlative pathways of verification and interoperability. This causes information events to turn into information cascades, where, through repetition and virality, false information can be distributed at a speed that over-inscribes existing meanings and produces entirely new ones, which replace existing facts. Additionally, false information can appear truthful because it is supported by a number of digital verification sources, which are themselves unverified and can be created to produce an assurance of knowledge that only serves to continuously weave a web of bad data.

  3. Affect mining

Social media platforms can activate every user as a potential conduit through whom misinformation spreads, especially when users are in a heightened emotional state and share false information with people they are familiar with. This affective state of being, stemming from intimacy, empathy, and identification, allows the information to bypass any checks and balances the receiver might otherwise have performed, because the presumption is that the sender must have verified the data before circulating it.

  4. Lack of digital authority

When people experience a Misinformed Choice event, they are not certain whom to approach to get their doubts verified. Online sources, even when presented through official channels, are viewed with suspicion, especially in countries where state-sponsored propaganda is heavy or there is mistrust of the current government. Moreover, legal frameworks lag behind, redress through legal structures is inefficient, and the punitive actions that are available have little impact.

Groups and collectives working to combat disinformation have flagged a key issue: there is no common database or network through which corrected reports can be spread. Even though information should carry its traces, there are very few mechanisms for flagging information as contaminated even after it has been disproven, and hence it spreads at a speed that is almost impossible to counter. Making fact-checkers dependent on the very platforms that incentivize the spread of misinformation does not make for the responsible building of a body of fact-checked information.

  5. Infrastructure of disenfranchisement

Through desk research and interviews with stakeholders, we have learned that in many parts of South and Southeast Asia, digital infrastructures are actively being used as tools of disenfranchisement. Almost all of the nine countries reviewed exercise draconian Internet shutdowns, erasure and censorship of information, and unaccountable state control of digital infrastructure, which prevents people from engaging with democratic modes of information-making.

Many big tech companies collude with these states and engage in the active erasure of fundamental rights and civil liberties; there appears to be no multi-stakeholder body that can define global standards and establish structures of accountability. The information groups fighting on the ground to keep systems open, democratic, and humane work in extremely low-resource conditions, often against the intersecting powers of these technology companies and the governments they partner with.

These five categories can be applied to the ‘stacks’ of Misinformed Choice. In the next and final blog in this series, we will look at the ways in which these elements affect different demographics. While our case studies reflect the digital information systems, geopolitical contexts, and urgencies of specific countries and regions in Asia, our research has found that they are relevant to, and resonate with, many other parts of the world, where questions of digital information, democratic participation, and people’s engagement are leading to unprecedented challenges in ensuring free and fair futures.