Artificial intelligence (AI) increasingly permeates our daily lives. AI underlies the algorithms we use to search for information, learn new languages and skills, shop for products and services, or interact with appliances. Like all technologies, AI comes with benefits and costs.
When we consider commercial AI applications as consumers, we tend to focus on the personal benefits they bring us. We readily adopt biometric face scanning or digital fingerprinting for access to our personal devices because it makes interacting with them easier, with little thought for the threats to privacy or human rights these technologies pose.
AI also plays an increasing role in the public space. Public officials rely on AI to provide services such as e-government, healthcare, and education, or to power public infrastructure like traffic control and waste management. Global investment in smart cities is anticipated to reach US$2.51 trillion by 2025.
As carefree as we seem to be with most commercial AI, like that in our smartphones, we are more reluctant to accept public or government use of AI in our daily lives. In a new research project, we explore why using artificial intelligence as part of a public service is so different from using the same AI as part of a commercial service.
Do the benefits outweigh the costs?
Public investment in AI is usually justified by reference to its societal benefits. For example, smart surveillance cameras that can recognize an individual's identity are presented as helping to make our streets safer. But we find that the added value of societal benefits is not appreciated as highly as direct personal benefits (e.g., convenience or ease of use).
In our research we see that people are very sensitive to the personal costs of surveillance technologies, such as loss of privacy and personal control, but they seem much less concerned about these costs in commercial settings and particularly in public applications directed toward infrastructure.
For surveillance-type AI, perceived costs in terms of fears and privacy harms outweigh overall perceived benefits, whereas for commercial and infrastructure-related AI the benefits are emphasized over the costs.
The people we surveyed felt the least 'exploited' and most 'served' by AI directed at public infrastructure (e.g., traffic control, air quality monitoring, water management), mostly because they perceived the costs to themselves as the lowest — lower even than for commercial AI applications like chatbots or apps.
These findings show how subjective the evaluation of public AI can be: AI infrastructure like smart streetlights and traffic sensors may optimize energy consumption and track criminal activity, but it can also monitor people's movements and record video of an area, just as surveillance cameras do.
A communication problem?
Interestingly, our results suggest that public governance of AI technologies may still be preferred in contexts that clearly breach privacy, such as monitoring by smart cameras: the public trusts the government to implement such technology more than it would trust, say, a company implementing the same.
We also show that in contexts with high personal costs, increasing perceived societal benefits increases support for AI. In particular, acceptance of smart surveillance cameras was stronger for public applications (e.g., on a street, or in underground and public transportation stations) than for commercial contexts (e.g., monitoring people in stores).
We also find that government initiatives to increase transparency (clarifying what the data is used for and who holds it) and to anonymize personal data help increase acceptance of public AI technologies.
Our initial findings, and the follow-up research we plan, can help marketers and AI developers craft applications that resonate with users, minimize perceived costs, and maximize perceived benefits. They can also inform policymakers on when and how to deploy AI in public settings, taking privacy, transparency, and societal benefits into account.