Algorithmic Accountability: Who Watches the Machine?

20 February 2020

Computers increasingly make decisions by themselves. A new research project headed by Professor Christian Fieseler will look at how businesses, regulators and users should deal with algorithms that cannot be understood.

Photo: The Project Group from BI Research Centre for Internet and Society.

In order to remain competitive, businesses and other organizations must adopt more efficient means of decision-making. One way to do this is to hand the reins over to machines that make decisions partly or completely without human involvement. Adopters include the news and media sector, financial traders, health care companies and public welfare organizations.

Machines do not reason like humans. Whereas we can backtrack our decision-making processes through explainable steps involving comprehensible data, complex algorithms can be black boxes. You feed a black box some data at one end, and it spits out a result at the other – what happens inside is anyone’s guess.

Outcomes of algorithmic decisions can be mundane, such as a movie recommendation, or life-altering, such as the suggested length of a prison sentence or the probability of being affected by an illness. In both cases, consumers, citizens and patients often struggle to understand and meaningfully retrace algorithmic decision-making.

Legitimate, participatory and inclusive algorithms

The research project ‘Algorithmic Accountability: Designing Governance for Responsible Digital Transformation’ sets out to create a framework that organizations, regulators and communities can use to take concrete steps towards accountable decision-making processes.

In order to do this, Professor Fieseler and colleagues will investigate how both organizations and stakeholders can shape and implement AI and algorithmic technologies in a way that is transparent, comprehensible and ultimately accountable.

The project has been awarded NOK 10 million from the Research Council of Norway and will run until the end of 2023. It is based at the BI Research Centre for Internet and Society. In addition to Professor Fieseler, participants include Associate Professors Christoph Lutz and Alexander Buhmann, and Assistant Professor Eliane Bucher.

The project is being carried out in collaboration with the KIN Center for Digital Innovation at the Vrije Universiteit Amsterdam (VU), where Professor Marleen Huysman and Assistant Professors Mark Boons and Ella Hafermalz research organisational implementations of algorithmic accountability.

External partners are Harvard’s Berkman Klein Center for Internet and Society, the Humboldt Institute for Internet and Society, the University of St. Gallen, the University of Surrey, the University of Groningen and the University of Leipzig.

Read more:

BI Research Centre for Internet and Society

BI Business Review: Addressing the reputational risks of Artificial Intelligence
