INF-8605 Interpretability in deep learning - 5 ECTS

Artificial intelligence (AI) and deep learning approaches are often considered black boxes, i.e. algorithms that accomplish learning tasks but cannot explain how. However, as AI/deep learning is increasingly adopted to accomplish cognitive tasks on behalf of human beings, it is becoming important that deep learning models are understandable by humans, so that artificial and human intelligence can co-exist and collaborate. In critical tasks, such as deriving a correct medical diagnosis and prognosis from given data, collaboration between artificial and human intelligence is imperative so that the suggestions or decisions from artificial intelligence are both more accurate and more trustworthy.

Registration is closed.


1st meeting (online): May 8th, 10:15-12:00

2nd meeting (physical): June 12th-16th, 08:15-17:00


This course covers a range of important topics in interpretable deep learning, equipping students with knowledge of approaches that can be used to explain deep learning models, as well as of deep learning approaches that are inherently more explainable than others. In addition, students will gain practical experience in applying selected approaches for explaining deep learning, preparing them to adapt to the rapid pace of technology development in the field of explainable artificial intelligence/interpretable deep learning.

Introductory concepts

  • Explainability and interpretability
  • Black box models versus explainable models
  • Knowledge versus performance, and the need for explainability
  • Example case(s)

Model-agnostic approaches

  • Knowledge abstraction and encoding
  • Interpretation and visualization of abstract encoding - concepts and techniques, visual explanation
  • Feature importance and feature interaction (see the sketch after this list)
  • Counterfactual methods
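To give a concrete flavour of the model-agnostic techniques above, here is a minimal sketch of permutation feature importance. It assumes only that "model" is a fitted estimator with a predict method and that "metric" is a score function of the form metric(y_true, y_pred); both names are placeholders, and this is an illustrative sketch rather than prescribed course material.

    import numpy as np

    def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
        """Shuffle one feature at a time and measure how much the model's
        score drops; a larger drop indicates a more important feature."""
        rng = np.random.default_rng(seed)
        baseline = metric(y, model.predict(X))      # score on intact data
        importances = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, j])           # break the feature-target link
                drops.append(baseline - metric(y, model.predict(X_perm)))
            importances[j] = np.mean(drops)         # average over repeats
        return importances

Because the method only queries model.predict, it treats the model as a black box, which is exactly what makes it model-agnostic.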

Neural networks and explainability

  • Network visualization and neuron interaction
  • Tracing and explainable backpropagation
  • Depth of network, and abstraction
  • Class activation maps
  • Saliency and attention models (see the sketch after this list)
  • Fuzzy neural networks - type 1 and type 2
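As an illustration of gradient-based saliency from the list above, the following minimal PyTorch sketch computes how sensitive a classifier's class score is to each input pixel. It assumes "model" is any image classifier mapping a (1, C, H, W) batch to class scores and "image" is a (C, H, W) tensor; the function name and arguments are illustrative, not a prescribed implementation.

    import torch

    def saliency_map(model, image, target_class):
        """Gradient-based saliency: |d(class score)/d(pixel)| highlights
        the input pixels the network's decision is most sensitive to."""
        model.eval()
        x = image.clone().unsqueeze(0).requires_grad_(True)  # (1, C, H, W)
        score = model(x)[0, target_class]  # scalar score for the chosen class
        score.backward()                   # backpropagate to the input
        grad = x.grad[0]                   # (C, H, W) gradient w.r.t. the image
        return grad.abs().amax(dim=0)      # collapse channels -> (H, W) map

Class activation maps follow a similar spirit but aggregate gradients or activations at an internal convolutional layer rather than at the raw pixels.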

Self-reading

  • Two research articles - comparison, weaknesses, strengths, application domain

Extensive lab work, self-exercises, and group work for competence development are also included.


Knowledge:

A general educational aim of the course is to equip students with knowledge and skills in interpretable artificial intelligence/deep learning: both choosing explainable approaches when solving a neural learning problem, and developing an explanation for an existing knowledge model built with a black-box approach. This will enable students to understand, work with, and solve deep learning tasks with a balance of explainability and accuracy, as needed.

This course offers a brief introduction to explainable/interpretable deep learning. It fills a knowledge gap for those who want to learn more about deep learning and to develop trustworthy, reliable deep learning models. The course recognizes the significance of interpretable models for computationally intensive deep learning architectures, for the analysis and comprehension of complex biological applications, and for the cross-disciplinary collaborations needed in future biotech, medicine, and AI.


The course is supported by the Digital Life Norway Research School, and a limited number of travel grants are available for our members.


Contact course coordinator:

Dilip K. Prasad dilip.prasad@uit.no
