2019 ICCV Workshop on

Interpreting and Explaining Visual Artificial Intelligence Models

Saturday, November 2nd, 2019

@ COEX 318AB, Seoul, Korea


Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from researchers, practitioners, and regulators. Many advanced visual artificial intelligence systems are perceived as black boxes. Researchers would like to interpret what an AI model has learned in order to identify biases and failure modes and to improve the model. Many government agencies also pay special attention to the topic. In May 2018, the EU enacted the General Data Protection Regulation (GDPR), which mandates a right to explanation for decisions made by machine learning models. The US agency DARPA launched its Explainable Artificial Intelligence program in 2017. The Ministry of Science and ICT (MSIT) of South Korea established an Explainable Artificial Intelligence Center.

Recently, several models and algorithms have been introduced to explain or interpret the decisions of complex artificial intelligence systems. Explainable artificial intelligence systems can now explain the decisions of autonomous systems such as self-driving cars and game agents trained by deep reinforcement learning. Saliency maps built by attribution methods such as network gradients, DeConvNet, Layer-wise Relevance Propagation, PatternAttribution, and RISE can identify the inputs most relevant to the decisions of classification or regression models. Bayesian model composition methods can learn to automatically decompose input data into a composition of explainable base models, for example in human pose estimation. Model-agnostic methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP explain complex deep learning models by providing the importance of input features. Network Dissection and GAN Dissection provide human-friendly interpretations of the internal nodes of deep neural networks and deep generative models.
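As a toy illustration of the perturbation-based attribution idea behind methods such as RISE (a simplified sketch, not any particular published implementation), the snippet below occludes square patches of an input and records the drop in a black-box model's score; the `toy_model` and patch size are hypothetical stand-ins.

```python
import numpy as np

def occlusion_saliency(model, image, patch=8):
    """Attribute a black-box score to image regions by zeroing out
    square patches and recording the score drop for each patch
    (a minimal perturbation-based attribution, in the spirit of RISE)."""
    h, w = image.shape
    base = model(image)                       # score of the unperturbed input
    saliency = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0   # zero out one patch
            saliency[i:i+patch, j:j+patch] = base - model(occluded)
    return saliency

# Hypothetical black-box "model": scores an image by the total
# brightness of its top-left 8x8 quadrant.
def toy_model(img):
    return float(img[:8, :8].sum())

img = np.ones((16, 16))
saliency = occlusion_saliency(toy_model, img, patch=8)
# Only the top-left patch changes the score when occluded, so all
# attribution concentrates there.
```

Real attribution methods differ in how perturbations are sampled and aggregated (RISE uses many random masks rather than a single patch grid), but the underlying question is the same: how much does the output depend on each input region?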

The present workshop aims to survey recent advances in explainable/interpretable artificial intelligence and to establish new theoretical foundations for interpreting and understanding visual artificial intelligence models, including deep neural networks. We will also discuss future research directions and applications of explainable visual artificial intelligence.

Topics of interest include, but are not limited to:
- Explaining the decisions of visual deep learning models
- Interpretable deep learning models
- Machine learning/deep learning models that generate human-friendly explanations
- Bayesian model composition/decomposition methods
- Model-agnostic explanation methods
- Evaluation of explainable AI models
- Causal analysis of complex AI/ML systems
- Practical applications of explainable AI


Organizers

Jaesik Choi @UNIST
Seong-Whan Lee @Korea Univ.
K.-R. Müller @TU Berlin
Seongju Hwang @KAIST
Bohyung Han @SNU
David Bau @MIT
Ludwig Schubert @OpenAI
Yong Man Ro @KAIST
Invited Speakers

Trevor Darrell
UC Berkeley

Wojciech Samek
Head of Machine Learning Group
Fraunhofer Heinrich Hertz Institute

David Bau
PhD student, MIT

Ludwig Schubert
Software Engineer, OpenAI

Paper Submission

Task                          Deadline
Paper Submission              July 26, 2019 (11:59 PM PST)
Final Decisions to Authors    August 22, 2019

Submission page: https://cmt3.research.microsoft.com/VXAI2019/

Call for papers

This workshop is a full-day event, which will include invited talks and poster presentations of accepted papers.
We call for full papers of up to 6 pages or extended abstracts of 2-4 pages. Papers accepted by this workshop may be resubmitted to other conferences or journals. Please submit your papers at https://cmt3.research.microsoft.com/VXAI2019/

- Submission deadline: July 26, 2019 (11:59 PM PST)
- Notification date: August 22, 2019 (Thursday)


Time Title
08:30 - 08:45 Opening Remarks
08:45 - 09:15 Invited Talk 1
09:15 - 10:00 Invited Talk 2
10:00 - 10:30 Coffee Break
10:30 - 10:45 Invited Talk 3
10:45 - 11:15 Poster Spotlights
11:15 - 11:45 Poster Session
11:45 - 12:45 Lunch
12:45 - 13:15 Poster Session
13:15 - 14:00 Invited Talk 4
14:00 - 14:45 Invited Talk 5
14:45 - 15:15 Coffee Break
15:15 - 16:00 Invited Talk 6
16:00 - 16:45 Tutorial on Open Source Projects in Explainable Artificial Intelligence
16:45 - 17:00 Closing Remarks

UNIST Explainable Artificial Intelligence Center
    Jaesik Choi / jaesik@unist.ac.kr / 052-217-2180
    GyeongEun Lee / socool@unist.ac.kr / 052-217-2196
    Sohee Cho / shcho@unist.ac.kr / 052-217-2258