Research Topic

The Future of Work: Robotics and AI

New technologies increasingly allow for the automation of cognitive tasks previously reserved for human workers. We investigate how this may improve workers’ performance and well-being, reconfigure education systems, and alter social interactions.

Description

In this research stream, we are investigating robotics and AI from a social and ethical perspective. Our research is embedded in current discussions on the ramifications of smart technology, for example its potential for disrupting labor markets, affecting social inclusion, and changing our sense of privacy. While fairness, accountability, and transparency (FAT*) has emerged as an important interdisciplinary research community in the study of algorithms and AI, the embodied aspects of smart technology are often neglected. In our study of social robots, we pay particular attention to the material realities of the technology and its users, as well as to the social environment.

We use conceptual, qualitative, and quantitative methodologies to approach the topic from a multi-stakeholder angle. This has meant connecting and co-authoring with researchers across disciplines, including engineering, law, ethics, and philosophy. While the research on this topic is in its early stages and ongoing, initial results have already been published in prestigious proceedings and edited volumes.

 

Sub-Topics and Findings

The following key sub-topics have emerged, each with specific research questions and findings.

  • Privacy and Social Robots

How do embodied robots that interact with humans affect our sense of privacy? Which new privacy risks emerge? What technological, social, and legal approaches can help tackle these risks? We first approached these questions from a conceptual perspective, drawing on actor-network theory, the value-sensitive design literature, and previous research on privacy and social robots to advance our understanding of privacy-sensitive robotics as an emerging research field. We argued that, compared with non-embodied smart technology, social robots present heightened privacy risks in terms of physical privacy and social bonding. We also gauged expert voices through in-depth conversations at conference workshops and conducted surveys on non-users' privacy fears. Taken together, the findings show the importance of paying attention to privacy questions in the development process of social robots and call for a holistic approach that combines ethical, legal, and technological perspectives.

  • Ethical, Legal and Social Aspects of Robots

What are the key ethical, legal, and social (ELS) challenges of social robots? How can these ELS challenges be addressed? In addition to privacy, we were interested in ELS challenges and potential solutions more broadly. Since ELS research on social robots is a broad and interdisciplinary field, we approached the topic from a participatory, action-research-oriented perspective. Conducting workshops at leading robotics and AI conferences with academics and practitioners alike, we found evidence for five major ELS areas: (1) privacy and security, (2) legal uncertainty, including liability questions, (3) autonomy and agency, (4) economic implications, and (5) human-robot interaction, including the replacement of human-human interaction. Within each category, specific challenges emerged. For example, discussions on autonomy and agency centered on the question of legal personhood for social robots as well as on hierarchies in decision-making processes. Recommendations to address these ELS challenges were of both a legal and a technological nature.

  • Transparency in AI Systems and Robots

What does transparency mean in the context of increasingly complex AI systems and social robots? What are the legal and social parameters of transparency? What are its antecedents and outcomes? In this new and ongoing research area, we are currently working on a multi-disciplinary literature review, including the development of a framework for transparency that considers temporal and social aspects. Surveying the legal, philosophical, organizational, information systems, and HCI literatures, we found that transparency is a contested and complex construct that is difficult to implement in AI systems, and one on which the GDPR remains vague. Transparency is often equated with explainability, neglecting important power dynamics and dark sides.

  • AI and Inclusion

Findings for this sub-topic will follow shortly.

 

Collaborating Institutions

University of Zurich, Queen Mary University of London, Oregon State University

 

Activities and Career Paths

The research on this topic generated a lively stream of activities and opened up new career paths for all members involved.

Christoph Lutz, who started working on the Fair Labor project as a postdoc at the beginning of 2016, established this topic in close exchange with the collaborating institutions. Following up on research from his doctoral dissertation, which had resulted in a proceedings article on privacy and social robots, Christoph continued this research stream at the Nordic Centre. The research was presented at major international conferences such as We Robot, New Friends, ACM HRI, IEEE ARSO, AoIR, and three pre-conferences of the ICA human-machine communication group. Within this research stream, Christoph, in collaboration with Eduard Fosch Villaronga and Aurelia Tamò-Larrieux, organized several workshops on ELS aspects of social robots in 2016 and 2017. Several early-stage publications emerged from Christoph's work on this topic, and more are in the making. As a result of his scientific achievements, Christoph was promoted to associate professor at BI Norwegian Business School in 2018. He will continue the research on social robots, privacy, and transparency in the Toppforsk project Future Ways of Working in the Digital Economy.

Throughout the project, Christian Fieseler was strongly engaged in the Global Network of Internet and Society Research Centers and its ongoing focus on AI and inclusion. Christian strengthened the presence of the Nordic research area in that interdisciplinary and quickly growing community. As a result of his outstanding scientific achievements, both before and within the Fair Labor project, Christian was promoted to full professor at BI Norwegian Business School in 2018. He will continue his work on AI and inclusion as project leader of the Toppforsk project Future Ways of Working in the Digital Economy.

 

Key Publications

Fosch Villaronga, E., Tamò-Larrieux, A., & Lutz, C. (2018). Did I tell you my new therapist is a robot? Ethical, legal, and societal issues of healthcare and therapeutic robots. Working paper, SSRN Electronic Journal.

Lutz, C., & Tamò-Larrieux, A. (2018). Communicating with robots: ANTalyzing the interaction between healthcare robots and humans with regards to privacy. In A. Guzman (Ed.), Human-Machine Communication: Rethinking Communication, Technology, and Ourselves (pp. 145–165). Bruxelles, Bern, Berlin, Frankfurt am Main, New York, Oxford, Wien: Peter Lang.

Rueben, M., Aroyo, A., Lutz, C., Schmölz, J., Van Cleynenbreugel, P., Corti, A., Agrawal, S., & Smart, W. (2018). Themes and research directions in privacy-sensitive robotics. Proceedings of the 2018 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Genoa, 27–28 September.

Lutz, C., & Tamò, A. (2015). RoboCode-ethicists – Privacy-friendly robots, an ethical responsibility of engineers? Proceedings of the 2015 ACM Web Science Conference, Oxford, 28 June–1 July, 1–12.