Notice

Subject Dr. Seong Tae Kim (TUM) gives an invited talk on interpretable deep learning on Dec. 3
Date 2019-12-02
Speaker: Dr. Seong Tae Kim (Technical University of Munich)

Title: Interpretable deep learning: What happens inside deep neural networks?

Abstract: Recently, deep learning research has achieved superior performance in a variety of applications. Despite these successes, current deep learning approaches have their limitations and challenges. The lack of interpretability (the so-called 'black-box model' problem) is a representative limitation of current deep learning studies. In other words, it is difficult for users to understand how deep networks make a particular decision. In safety-critical tasks (e.g., medical image analysis, autonomous vehicles, and biometrics), it is very important to interpret the predictions of deep networks because incorrect predictions could lead to dangerous consequences. Therefore, improving the transparency of deep networks is required to ensure that their behavior can be trusted. To this end, a number of research efforts in the machine learning and computer vision communities have been devoted to increasing the interpretability of deep neural networks. In this talk, Dr. Kim will outline some possible research directions for increasing the interpretability of deep networks in safety-critical applications.