Artificial intelligence (AI) continues to be a pressing topic, perhaps more so now than ever with the recent buzz around AI applications such as ChatGPT, a language-model chatbot that generates “human-like” text from a user’s inputs.
Technologies like this and other AI applications are likely to have significant consequences for both daily life and work practices. But AI’s potential is complicated: it is only realized through an understanding of the work context, its practices, and how AI interacts with them.
New generations of AI have self-learning capabilities
There are several ongoing conversations about how we should approach AI technologies, especially due to developments in the more recent generations of AI systems. These systems are powered by so-called “deep learning”, a machine-learning approach in which computers learn directly from examples, in essence through “learning by doing”.
In other words, these systems have self-learning capabilities, typically built by analysing large datasets. The technological characteristics of deep learning (e.g. its self-learning and the opacity of the resulting systems) matter when trying to predict how AI may change our work practices. Nevertheless, we also need to look at the specific contexts in which such technologies are used.
Practice-based theories offer valuable insights into AI in the workplace, particularly in four key aspects:
1. The interplay of AI models, data, and work practices
AI models and datasets are not standalone entities: they derive their characteristics from their relationships with each other and with people’s work practices. Neither complex models nor vast datasets possess inherent qualities or identities; these emerge through the work practices that create and maintain them.
For instance, if we develop an AI model to prioritize tasks in a project, the distinctions made within the dataset are likely shaped by work practices, such as how project complexity relates to resource allocation.
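To make this concrete, here is a minimal sketch of such a prioritization model. The feature names (`complexity`, `resource_gap`) and the weights are hypothetical: they stand in for assumptions a team would derive from its own work practices, and a team with different practices would encode different features and weights.

```python
# A toy task-prioritization model. The features and weights below are
# hypothetical stand-ins for distinctions drawn from work practices
# (e.g. that complex, under-resourced tasks should come first).

def priority_score(task, weights):
    """Weighted sum of task features; a higher score means more urgent."""
    return sum(weights[name] * value for name, value in task.items())

# Hypothetical weights reflecting one team's practice of prioritizing
# complex, under-resourced work. The model has no inherent notion of
# "priority" -- it inherits one from these practice-derived choices.
weights = {"complexity": 0.6, "resource_gap": 0.4}

tasks = {
    "migrate database": {"complexity": 0.9, "resource_gap": 0.7},
    "update docs": {"complexity": 0.2, "resource_gap": 0.1},
}

ranked = sorted(tasks, key=lambda t: priority_score(tasks[t], weights), reverse=True)
print(ranked)  # → ['migrate database', 'update docs']
```

The point is not the arithmetic but where the numbers come from: every weight is a judgment about what matters at work, imported into the model.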
2. Adapting AI models to evolving work practices
Work practices evolve over time, and so should AI models and data. Just like priorities change in a project, the criteria for AI models must adapt as we gain more knowledge and experience.
This poses a challenge for AI, as the data it relies on may change, necessitating updates to the models. It raises questions such as “How quickly do circumstances change?” and “Which relationships are dynamic, and which remain relatively stable?”
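One common way to notice that the data has drifted away from what a model was trained on is to compare a feature's recent distribution against its historical one. The sketch below is illustrative: the feature (task durations), the comparison (relative shift in the mean), and the threshold are all assumptions, not a standard method.

```python
from statistics import mean

# A minimal drift check, assuming we can compare a feature's values in
# historical training data against recently observed values. The 25%
# threshold is an illustrative choice, not a recommended default.

def drifted(historical, recent, threshold=0.25):
    """Flag drift when the mean shifts by more than `threshold`
    relative to the historical mean."""
    base = mean(historical)
    shift = abs(mean(recent) - base)
    return shift / abs(base) > threshold

# Task durations in days: recent practice has slowed things down,
# so a model trained on the historical data is now out of date.
historical_durations = [2.0, 3.0, 2.5, 3.5]   # mean 2.75
recent_durations = [4.0, 5.0, 4.5]            # mean 4.5

if drifted(historical_durations, recent_durations):
    print("work practices have shifted -- consider retraining the model")
```

A check like this answers the “how quickly do circumstances change?” question empirically, one relationship at a time.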
3. Navigating blurred boundaries in work and AI
The boundaries between different aspects of work can become blurred. Because the relationships between entities are shaped by work practices, ambiguity can arise. Just as project requirements may have fuzzy edges, it is not always clear where one aspect ends and another begins.
These blurred boundaries often raise ethical questions and expose power dynamics within the data, AI models, and work practices. For example, when prioritizing tasks, we may need to examine what factors determine the importance of one task over another, and whether certain work practices or groups hold more influence in shaping these priorities.
4. AI adaptation challenges in a shifting world
AI models tend to rely on historical data while operating in a world where data structures and work practices constantly evolve. This raises the question of how well AI can adapt to these changing relationships.
By closely monitoring emerging work practices associated with data and AI models, we can identify critical issues that may arise if our models fail to keep up with evolving relationships. A practice-centered perspective prompts us to ask which relationships are most significant and how the practices that create and maintain them are organized.
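In practice, such monitoring often means tracking how well the model’s predictions hold up as new observations arrive. The sketch below assumes we can log prediction errors over time; the window size and tolerance are hypothetical, and the `ErrorMonitor` class is an illustration rather than an established tool.

```python
from collections import deque

# A sketch of monitoring a deployed model, assuming prediction errors
# are logged as actual outcomes become known. When the rolling error
# exceeds a (hypothetical) tolerance, the model has likely fallen out
# of step with current work practices.

class ErrorMonitor:
    def __init__(self, window=5, tolerance=1.0):
        self.errors = deque(maxlen=window)  # keep only recent errors
        self.tolerance = tolerance

    def record(self, predicted, actual):
        self.errors.append(abs(predicted - actual))

    def stale(self):
        """True when the rolling mean error exceeds the tolerance."""
        return bool(self.errors) and sum(self.errors) / len(self.errors) > self.tolerance

# The model keeps predicting 3.0 days, but actual durations are rising
# as practices change, so the rolling error grows.
monitor = ErrorMonitor(window=3, tolerance=1.0)
for predicted, actual in [(3.0, 3.2), (3.0, 4.5), (3.0, 5.1)]:
    monitor.record(predicted, actual)
print(monitor.stale())  # → True
```

A growing rolling error is exactly the signal the practice-centered perspective asks for: evidence that the relationships the model encodes no longer match the relationships people enact at work.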
The article is written by Emma Skjelten Daasvatn, research assistant at the Nordic Centre for Internet and Society.