The Making of Misinformed Choice

Dec 5, 2023 2:41 AM

Anushree Majumdar



Photo by Element5 Digital on Unsplash

An exploration of mis/disinformation and digital authorship in South Asian and Southeast Asian elections

In the last decade, as digital technologies became ubiquitous in newly networked ecosystems across South Asia and Southeast Asia, we have witnessed a dramatic rise in mis/disinformation during election cycles in this vast region. The political playground has become digital, and the emergence of mis/disinformation events has created a crisis of faith that damages democratic processes. This, in turn, further polarizes societies and communities, erodes trust in electoral practices, and hollows out civic engagement with almost all public institutions. The fundamental pillars of democracy, from free and fair elections and universal and equal suffrage to transparent processes and accountability, are threatened by mis/disinformation that attacks narratives well beyond those of politicians and parties, harming perceptions of individual and national identity as well.

Earlier this year, the Digital Narratives Studio’s Experimental Authorship pillar embarked on a project with Digital Asia Hub and the Bertelsmann Stiftung to study mis/disinformation events in election cycles in this region, viewing these phenomena through a lens of crises and responses. The research was supplemented by interviews with a number of stakeholders, including fact-checkers, journalists, academics, health workers, and data analysts, from nine regions: India, Pakistan, Sri Lanka, the Philippines, Taiwan, Malaysia, Indonesia, Thailand, and Japan. We examined the conditions under which an electorate participates in democratic processes, and studied how information travels through different digital platforms: how it can become contaminated at different points in its journey, and how it is distorted by bad actors, information operators, and ordinary people.

This allowed us to introduce the concept of “Misinformed Choice”: a sustained manipulation of user behaviour through the complex, sophisticated, and relentless deployment of computational networks and persuasive algorithms. The result is a persistent problem of people making choices which, to them, appear informed, rational, and agential, but which emerge from an ecosystem of mis/disinformation shaped by digital technologies. In the past, political parties geared their efforts towards bringing citizens to the voting booth, where they would exercise their right to vote after making an “informed choice” based on information and material such as party manifestos, news items and opinion published by media houses, and other conversations in the public sphere. But the aim of long-term disinformation campaigns is to turn the voter into a manipulator of the information fed to them via digital spaces of engagement. A victim of disinformation turns into an offender, spreading fake news, false information, and concocted histories to their networks, simply because they are convinced of their own opinions and choices.

This brings us to the following questions: who is the author in mis/disinformation cycles, and how has the author function changed? To answer them, we looked at how information systems have changed with the advent of digital technologies. As traditional sources of information decline in number and weaken in influence, a spate of digital platforms has rushed in to take their place, ushering in selective information without being required to provide any trail of facts or verified sources. We cite the work of Duncan Watts, who argues that computational networks are networks of simulation, and that simulation produces new understandings of reality but cannot be held accountable if it does not offer truth. This ties in with what Jacques Rancière called “information without signature”: one of the biggest challenges of the present digital age is that we are overburdened with information without signatures, and much of our time is spent simply verifying where it came from. Instead, we must explore how the prevalence of digital technologies has resulted in informational overload, causing a flattening of computational systems in which all information appears equally true, and how the democratization of access has become confused with the democratization of production.

One of the ways in which we studied the different author functions in mis/disinformation cycles was by identifying informed choice as an “information stack.” Borrowing from Benjamin Bratton’s suggestion that we understand global computing systems as a stack, we came up with seven layers that together constitute informed choice: self-recognition, process, infrastructure, representation, variety, verification, and assurance. Then, by delving into how disinformation campaigns work across these layers, we could demonstrate how each layer can be independently contaminated, and how the contamination of one layer corrupts the entire stack. What unifies the different elements of disinformation campaigns is the intent to cause harm, to disrupt due process, and to further the use of digital technologies in a way that corrodes mental autonomy.

In the second and third parts of this blog series, we will examine how these campaigns, with their different author functions, have played out across South Asia and Southeast Asia, with results that are alarmingly similar and yet locally distinct.