Excerpt from course description

Deep Learning and Explainable AI

Introduction

In recent years, deep learning models have achieved state-of-the-art results in tasks such as image classification, representation learning, and data generation.

In this course we will study different deep learning models, such as convolutional neural networks, recurrent neural networks, and autoencoders, as well as probabilistic graphical models for deep learning such as variational autoencoders. In addition, this course provides students with the skills to implement and train deep learning models by exposing them to Python libraries such as TensorFlow. Finally, since some real-world applications require models to be interpretable, this course addresses the concept of explainable artificial intelligence (XAI).
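To illustrate the kind of workflow the course refers to, the following is a minimal sketch of defining and training a small classifier with the Keras API in TensorFlow. The data, architecture, and hyperparameters are purely illustrative and not part of the course material.

```python
# Minimal sketch: build and train a small binary classifier with Keras.
# All data and model choices here are illustrative assumptions.
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 4 features, labels from a simple rule.
X = np.random.rand(100, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

# Predictions come back as one probability per sample.
print(model.predict(X[:3], verbose=0).shape)  # (3, 1)
```

The same compile/fit/predict pattern carries over to the larger architectures covered in the course.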

Course content

To address the learning outcomes listed above, the course will have the following content:

Generative modeling: Many datasets have limited training data or limited labeled data. The course introduces generative modeling techniques that address this challenge by learning models capable of creating new observations.

State-of-the-art deep learning architectures: Different data types and different input/output pairs require different architectures, e.g., a neural network architecture designed for time-series is not necessarily a good fit for tabular data. In this course, we will introduce and use state-of-the-art neural network architectures that fit different types of data, which will enable the student to make better choices when creating new models.

Explainable AI: Many real-world use cases require a model to be explainable. For example, the GDPR requires that a decision made by an automated system be understandable to a layperson. This part of the course will introduce some basic techniques for probing and understanding predictions made by a machine learning system.
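One basic, model-agnostic probing technique of the kind this part of the course refers to is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses a hypothetical stand-in "model" (a fixed linear scorer) rather than a trained network, purely to show the mechanics.

```python
# Sketch of permutation feature importance on a toy model.
# The "trained model" is a hypothetical fixed linear scorer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
true_w = np.array([2.0, 0.0, 0.5])        # feature 1 is irrelevant by design
y = (X @ true_w > 0).astype(int)

def model_predict(X):
    """Stand-in for a trained model: thresholded linear score."""
    return (X @ true_w > 0).astype(int)

baseline_acc = (model_predict(X) == y).mean()

for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's link to y
    drop = baseline_acc - (model_predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Features the model relies on show a large accuracy drop when shuffled, while the irrelevant feature shows essentially none.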

Probabilistic Machine Learning: Probabilistic modeling and deep learning have been successfully coupled in methodologies such as the variational autoencoder (VAE). VAEs are useful not only as generative models but also for making predictions on binomial or multinomial outcomes. Because VAE predictions are drawn from a probability density function, we can say something about their degree of certainty.
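The idea that sampling from a probability distribution lets us quantify certainty can be sketched in a few lines. This is a toy illustration, not a real VAE: the "decoder" is a hypothetical fixed function standing in for a trained network, and the Gaussian plays the role of an approximate posterior over the latent variable.

```python
# Toy sketch: Monte Carlo predictive distribution from a latent Gaussian.
# The decoder is a hypothetical stand-in for a trained VAE decoder.
import numpy as np

rng = np.random.default_rng(42)

def decoder(z):
    """Maps a latent value z to a Bernoulli success probability."""
    return 1.0 / (1.0 + np.exp(-(1.5 * z + 0.2)))

# Assumed approximate posterior q(z|x): a Gaussian N(mu, sigma^2).
mu, sigma = 0.4, 0.8
z_samples = rng.normal(mu, sigma, size=10_000)

p_samples = decoder(z_samples)          # samples from the predictive distribution
print(f"mean prediction  : {p_samples.mean():.3f}")
print(f"uncertainty (std): {p_samples.std():.3f}")
```

The spread of the sampled predictions, here summarized by the standard deviation, is what allows statements about the model's degree of certainty.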

Disclaimer

This is an excerpt from the complete course description for the course. If you are an active student at BI, you can find the complete course description, with information on e.g. learning goals, learning process, curriculum and exam, at portal.bi.no. We reserve the right to make changes to this description.