By Fernanda Odilla, Research Fellow, University of Bologna
In Philip K. Dick’s 1956 novella “The Minority Report,” the Department of Precrime is a government agency that harnesses the visions of those who can see future events before they occur. Nearly seventy years later, Dick’s dystopian future may have become partially true.
To be sure, people don’t possess psychic powers that can feed police officers information about upcoming crimes. But machines can run automated mathematical models on data from past events, on the assumption that patterns will repeat to a degree, thereby enabling law enforcement to predict and prevent a wide range of misconduct, including certain types of corruption.
Why does this matter to both anti-corruption practitioners and scholars? For starters, we’re still operating in the dark: there’s too little current research that opens the curtain on anti-corruption algorithms. Just as other researchers have interrogated facial recognition and risk assessment algorithm-based tools associated with everything from credit scoring to sentencing and bail decisions, we must do so with automated systems designed to identify and curb corruption.
If we want better anti-corruption efforts, we need to understand how these artificial intelligence (AI) tools operate, who created them, and their impacts and risks. Moreover, we must know who is monitoring the monitors. Otherwise, we risk perpetuating the ills we seek to cure.
AI in AC: Hopes and Frontiers
Right now, numerous countries deploy anti-corruption (AC) tools based on AI. In Ukraine and Brazil, applications identify fraud in public procurement through suspicious patterns. In Mexico, the Tax Administration Service uses AI to detect companies that conduct fraudulent operations. The Nigeria Customs Service piloted a project to use machine learning to identify irregularities on imports and the Brazilian Revenue Service is applying a natural language processing technique to generate automated reports on possible customs fraud. In Europe, the Datacros project developed an anti-corporate crime prototype with a strong predictive capacity to identify companies linked to individuals with criminal records. And Global Witness and its partners are fighting corruption in the Democratic Republic of Congo’s mining sector through a computer program that draws on satellite imagery.
Some experts believe that digital technologies are changing the “corruption game.” To Vinay Sharma, senior advisor at the World Bank, “AI could help prevent and mitigate corruption risks as early as possible.” Oxford Insights not only lists AI as “the next frontier in anti-corruption” but also stresses that it “has the potential to recapture millions, and possibly billions, of dollars for governments and therefore citizens across the world.” In 2019, the US Government Accountability Office launched an Innovation Lab that sought to analyze multiple large data sets simultaneously to identify and curb improper payments; techniques discovered in the lab have “the potential to save billions of dollars for taxpayers across the full spectrum of federal programs.”
Toward a more responsible evaluation of AI in anti-corruption
Despite these optimistic hopes for AI’s role in current and future anti-corruption efforts, researchers have by and large yet to evaluate this brave new world. One exception, a recent article in Nature, outlines some of the promises and perils of AI-based anti-corruption tools, offering a starting point to critically assess such technologies.
As a scholar, I find the scarcity of research on AI in anti-corruption worrisome. Here’s where I think we need further research—and why it’s relevant for both anti-corruption practitioners and academics.
According to the still-incipient research on AI in the anti-corruption field, what differentiates AI from classic static technologies is its ability to process data with a degree of autonomy, with or without supervision, and its capacity to learn.
The appeal of this capacity to decide autonomously shouldn’t be dismissed. The application of AI tools to anti-corruption programming can assist or even replace human decision-makers: in principle, AI tools are impartial, rapid and immune to fatigue. But for AI to truly benefit anti-corruption efforts, we still must know in detail the nature of the growing number of AI applications, calculate their efficiency and efficacy, and guarantee the integrity of such technologies.
Aside from the need for a more elaborate definition, we need less hype and more critical reflection on the use of AI in anti-corruption. To this end, I propose three essential research needs for evaluating AI’s role in anti-corruption.
1. Assessing the nature of AI-based anti-corruption tools
Unfortunately, there have been few attempts to map these emerging technologies for anti-corruption, transparency and accountability. The tools currently in use target specific types of corruption and tend to work with audit trails or hypothesis sets based on what has previously been observed, mainly to identify unauthorized expenditures, illicit enrichment, conflicts of interest, fraud in public procurement, and licensing scams.
A comprehensive mapping of initiatives would help us better understand the types of corruption AI is being deployed against, as well as who has developed these goal-driven tools (and what the inputs, processing techniques and outputs of such technologies are). In this regard, separating bottom-up initiatives (those led by, say, activists, journalists and citizens to hold public officials accountable) from top-down ones (e.g., government agencies that seek to improve horizontal accountability by preventing and predicting misconduct) may be a starting point. However, we should also question the quality of the data being used and who is designing, deploying, operating and controlling these tools.
By systematically mapping these digital anti-corruption tools, researchers can help anti-corruption practitioners more accurately assess the nature and efficacy of particular anti-corruption efforts.
2. Measuring AI performance to prevent and detect corruption
AI often works with metrics related to probability, accuracy, precision, sensitivity, and specificity. What about AI in anti-corruption? Should it use these same metrics or develop new ones? Which systems of measurement should be used for tools designed to prevent versus those designed to detect misconduct associated with corruption? Which databases offer valid and reliable data to be used as input? What is an acceptable number of false positives and false negatives for corruption?
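For readers less familiar with these metrics, a minimal Python sketch shows how precision, sensitivity and specificity are computed from a binary confusion matrix. The counts below describe a hypothetical corruption-flagging tool and are invented purely for illustration.

```python
# Minimal sketch: standard classification metrics for a hypothetical
# corruption-flagging tool. All counts are invented for illustration.

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute precision, sensitivity (recall) and specificity
    from the four cells of a binary confusion matrix."""
    return {
        # Of the cases the tool flagged, how many were truly corrupt?
        "precision": tp / (tp + fp),
        # Of the truly corrupt cases, how many did the tool catch?
        "sensitivity": tp / (tp + fn),
        # Of the clean cases, how many did the tool correctly clear?
        "specificity": tn / (tn + fp),
    }

# Hypothetical audit of 1,000 transactions:
# 30 true hits, 10 false alarms, 950 correctly cleared, 10 missed.
m = confusion_metrics(tp=30, fp=10, tn=950, fn=10)
```

In this invented example the tool looks strong on specificity but only catches three out of four corrupt cases, which is exactly the kind of trade-off the questions above ask us to make explicit.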
We know by now that some systems are better at detecting certain suspicious actions than others. Take the case of the Chinese AI system “Zero Trust,” which cross-references big data to evaluate millions of government workers. Jointly developed and deployed by the Chinese Academy of Sciences and the Chinese Communist Party’s internal control institutions, the system has access to over a hundred sensitive databases and caught 8,721 government employees engaging in misconduct such as embezzlement, abuse of power, misuse of government funds and nepotism. “Zero Trust,” however, is better at detecting property transfers and land acquisitions than other suspicious actions; moreover, and worryingly, the program cannot readily explain its processes.
Without greater transparency regarding the metrics used to assess AI anti-corruption tools, we won’t know whether these systems are carrying out the tasks and operations for which they were designed, and their opacity and risk of bias will only grow.
3. Adding accountability to AI-based anti-corruption tools
Anti-corruption scholars are accustomed to asking, “Who’s controlling the controllers?” This question of accountability very much applies to AI in anti-corruption.
We need to consider humans as crucial components of automated systems. This includes having people in charge of tasks from programming to maintenance, as well as informing citizens about decisions made, their outcomes and possible impacts. We also need human oversight over inputs and outcomes of algorithmic systems.
In a recent working paper, I looked at the use of AI to fight corruption and improve accountability in Brazil and observed a low level of concern among developers regarding possible unintended biases. Findings like these are worrying because a lack of accountability increases the risk not only of biased or unfair algorithms but also of the misuse of individual data.
A goal of anti-corruption scholarship should be to assess the governance and integrity of AI tools used to curb corruption. Practitioners should take into consideration potential risks and unexpected outputs when building and using these tools. More importantly, these tools need to be auditable and explainable.
Brave New World?
AI in anti-corruption is not exclusively a matter of mitigating risks and hazards. In 2016, a group of tech-savvy and concerned Brazilians launched Operation Love Serenade (OSA). Its developers built an application, written mainly in Python and named Rosie, that first extracts and merges data from open public and private databases, then applies hypotheses (audit trails) and test-driven development processes to estimate the “probability of corruption” of Brazilian congresspeople’s expenses. Rosie publishes the outcomes of its analysis on an online dashboard named Jarbas and uses an automated Twitter account to invite its 40,600 followers to use OSA’s tools and data processes to hold politicians accountable. In addition, the initiative has mobilised over 500 people in an open Telegram group (recently transitioned to Discord), where developers, activists and journalists share ideas, doubts, and technical solutions. Operation Love Serenade illustrates the promise AI holds for bottom-up, public-minded anti-corruption efforts.
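The audit-trail approach just described, in which each hypothesis is a testable rule applied to an expense record, can be sketched in a few lines of Python. This is an illustration only, not OSA’s actual code: the rule names, field names and the simple “share of triggered hypotheses” score are all my own assumptions.

```python
# Illustrative sketch of an audit-trail (hypothesis-based) scorer in the
# spirit of Rosie. Rules, field names and the scoring formula are
# hypothetical, not taken from OSA's codebase.
from typing import Callable

Expense = dict  # one reimbursement record, as a plain dict

def exceeds_ceiling(e: Expense) -> bool:
    # Hypothesis: the reimbursement is above a (hypothetical) monthly ceiling.
    return e["amount"] > e["monthly_ceiling"]

def suspiciously_round(e: Expense) -> bool:
    # Hypothesis: perfectly round values can hint at invented receipts.
    return e["amount"] >= 100 and e["amount"] % 100 == 0

def supplier_blacklisted(e: Expense) -> bool:
    # Hypothesis: the supplier appears on a (hypothetical) watch list.
    return e["supplier_id"] in e.get("blacklist", set())

RULES: list[Callable[[Expense], bool]] = [
    exceeds_ceiling, suspiciously_round, supplier_blacklisted,
]

def suspicion_score(e: Expense) -> float:
    """Share of audit-trail hypotheses the expense triggers (0.0 to 1.0)."""
    return sum(rule(e) for rule in RULES) / len(RULES)

# Invented example: an over-ceiling, suspiciously round expense.
example = {"amount": 500.0, "monthly_ceiling": 400.0,
           "supplier_id": "X-1", "blacklist": set()}
score = suspicion_score(example)
```

The design point is that each hypothesis stays a small, independently testable function, which is what makes a tool of this kind auditable in the sense argued for above; the aggregation into a single score is where the contestable judgment calls live.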
And yet whatever promise AI holds for anti-corruption, we must not forsake our critical and investigative responsibilities. To do otherwise is to leave the door wide open for minority reports in the form of low-quality bits of information produced by opaque and even biased tools. Which is its own form of ethical corruption.
Fernanda Odilla obtained her doctoral degree in Social Science and Public Policy at the Brazil Institute at King’s College London. She has a Master’s in Criminology and Criminal Justice from King’s College London, a pre-masters in Crime and Public Safety from UFMG (Universidade Federal de Minas Gerais, Brazil), and a BA First Class Honors in Journalism from PUC Minas (Brazil). She is currently a research affiliate at King's Brazil Institute and a research fellow on a project supported by the European Research Council (ERC) at the University of Bologna. Within the ERC project, Fernanda Odilla investigates how digital media, algorithms, and artificial intelligence support the fight against corruption from the grassroots across the world. Her research interests are control of corruption, accountability, and new technologies in the context of anti-corruption, integrity and quality of government. Prior to her academic career, she worked as a multimedia producer for the Brazilian desk at the BBC in London and as a reporter for daily newspapers in Brazil, where she dedicated herself to investigating and exposing political corruption. She is the author of Pizzolato – Não Existe Plano Infalível (2014), on the escape and imprisonment of the only person sentenced for the Mensalão scandal who fled Brazil.