2019 ICCV Workshop on

Interpreting and Explaining Visual Artificial Intelligence Models


Saturday, November 2nd, 2019

@ COEX 318AB, Seoul, Korea

Invitation

Explainable and interpretable machine learning models and algorithms are important topics that have received growing attention from the research, industry, and government communities. Many advanced visual artificial intelligence systems are perceived as black boxes. Researchers would like to interpret what an AI model has learned in order to identify biases and failure modes and to improve the model. Government agencies also pay special attention to the topic. In May 2018, the EU enacted the General Data Protection Regulation (GDPR), which mandates a right to explanation for decisions made by machine learning models. DARPA, a US agency, launched an Explainable Artificial Intelligence program in 2017. The Ministry of Science and ICT (MSIT) of South Korea established an Explainable Artificial Intelligence Center.

Recently, several models and algorithms have been introduced to explain or interpret the decisions of complex artificial intelligence systems. Explainable Artificial Intelligence systems can now explain the decisions of autonomous systems such as self-driving cars and game agents trained by deep reinforcement learning. Saliency maps built by attribution methods such as network gradients, DeConvNet, Layer-wise Relevance Propagation, PatternAttribution, and RISE can identify the inputs most relevant to the decisions of classification or regression models. Bayesian model composition methods can automatically decompose input data into a composition of explainable base models, for example in human pose estimation. Model-agnostic methods such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP make complex deep learning models more interpretable by quantifying the importance of input features. Network Dissection and GAN Dissection provide human-friendly interpretations of internal nodes in deep neural networks and deep generative models.
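To illustrate the perturbation-based attribution idea behind methods such as RISE, the sketch below estimates a saliency map for a toy model by scoring many randomly masked copies of an input and weighting each pixel by the scores of the masks that kept it. The `toy_model`, image size, and mask parameters are illustrative assumptions for this sketch, not part of any cited method's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(image):
    # Stand-in "classifier": its score depends only on the top-left 4x4 patch,
    # so a faithful saliency map should highlight that region.
    return image[:4, :4].sum()

def rise_saliency(model, image, n_masks=500, p_keep=0.5):
    # Estimate per-pixel importance by scoring randomly masked inputs and
    # accumulating each mask's score at the pixels that mask kept.
    h, w = image.shape
    saliency = np.zeros((h, w))
    masks = rng.random((n_masks, h, w)) < p_keep  # random binary keep-masks
    for mask in masks:
        saliency += model(image * mask) * mask
    return saliency / (n_masks * p_keep)  # normalize by expected keep count

image = rng.random((8, 8))
sal = rise_saliency(toy_model, image)
print(sal[:4, :4].mean(), sal[4:, 4:].mean())
```

Because only the top-left patch influences the toy model's score, pixels there accumulate systematically higher saliency than the rest of the image; the same estimator applies to any black-box model that maps an image to a scalar score.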

This workshop aims to review recent advances in explainable/interpretable artificial intelligence and to establish new theoretical foundations for interpreting and understanding visual artificial intelligence models, including deep neural networks. We will also discuss future research directions and applications of explainable visual artificial intelligence.

Topics of interest include, but are not limited to, the following.
- Explaining the decision of visual deep learning models
- Interpretable deep learning models
- Machine learning/deep learning models that generate human-friendly explanations
- Bayesian model composition/decomposition methods
- Model-agnostic explanation methods
- Evaluation of explainable AI models
- Causal analysis of complex AI/ML systems
- Practical applications of explainable AI

Organizers

Jaesik Choi @UNIST
Seong-Whan Lee @Korea Univ.
K.-R. Müller @TU Berlin
Seongju Hwang @KAIST
Bohyung Han @SNU
David Bau @MIT
Ludwig Schubert @OpenAI
Yong Man Ro @KAIST
Invited Speakers


Trevor Darrell
Professor
UC Berkeley

Wojciech Samek
Head of Machine Learning Group
Fraunhofer Heinrich Hertz Institute

David Bau
PhD student
MIT

Ludwig Schubert
Software Engineer
OpenAI

TBD
Paper Submission

Call for papers

This workshop is a full-day event, which will include invited talks and poster presentations of accepted papers. Papers accepted by this workshop can be resubmitted to other conferences or journals. Please refer to the Submission Guidelines below.

Important dates
Task Deadline
Paper Submission Deadline July 26, 2019 (11:59PM PST)
Final Decisions to Authors August 22, 2019
CMT Submissions Website
https://cmt3.research.microsoft.com/VXAI2019/


Submission Guidelines

All submissions will be handled electronically via the conference's CMT website. By submitting a paper, the authors agree to the policies stated on this website. The submission deadline is Friday, July 26, 2019.

This workshop invites full research papers of at most 8 pages or extended abstracts of at most 4 pages, in the ICCV style. Additional pages containing only cited references are allowed. Reviewing is single-blind, so authors' names and affiliations appear on the paper.

Please refer to the following files for detailed formatting instructions:


Submissions do not need to be anonymized. Papers that do not use the template or that exceed eight pages (excluding references) will be rejected without review.

1) Paper submission and review site:

Submission Site (bookmark or save this URL!)

Please make sure that your browser has cookies and Javascript enabled.

Please add "email@msr-cmt.org" to your list of safe senders (whitelist) to prevent important email announcements from being blocked by spam filters.

Log into CMT3 at https://cmt3.research.microsoft.com. If you do not see “International Conference on Computer Vision (ICCV 2019)” in the conference list already, click on the “All Conferences” tab and find it there.

2) Setting up your profile: You can update your User Profile, Email, and Password by clicking on your name in the upper-right inside the Author Console and choosing the appropriate option under “General”.

3) Domain Conflicts: When you log in for the first time, you will be asked to enter your conflict domain information. You will not be able to submit any paper without entering this information. We need to ensure conflict-free reviewing of all papers. At any time, you can update this information by clicking on your name in the upper-right and entering “Domain Conflicts” under ICCV 2019.

It is the primary author's responsibility to ensure that all authors on their paper have registered their institutional conflicts into CMT3. Each author should list domains of all institutions they have worked for, or have had very close collaboration with, within the last 3 years (example: mit.edu; ox.ac.uk; microsoft.com). DO NOT enter the domain of email providers such as gmail.com. This institutional conflict information will be used in conjunction with prior authorship conflict information to resolve assignments to both reviewers and area chairs. If a paper is found to have an undeclared or incorrect institutional conflict, the paper may be summarily rejected.

4) Creating a paper submission: This step must be completed by the paper registration deadline. After this deadline, you will not be able to register new papers, but you will be able to edit the information for existing papers.

(a) Click the “+ Create new submission” button in the upper-left to create a new submission. There, you will be prompted to enter the title, abstract, authors, and subject areas. You are strongly encouraged to finalize the author list by the registration deadline.

(b) Check with your co-authors to make sure that: (1) you add them with their correct CMT3 email; and (2) they have entered their domain conflicts into CMT3 for ICCV 2019. If you add an author with an email that is not in CMT3 and the name and organization are not automatically filled, they are not yet in the system; before completing the requested information to add them, check that they do not already have an account under a different email.

(c) Enter subject (topic) areas for your paper. You must include at least one primary area. This information is used to help assign ACs and reviewers.

Program


Time Title
08:30 - 08:45 Opening Remarks
08:45 - 09:15 Invited Talk 1
09:15 - 10:00 Invited Talk 2
10:00 - 10:30 Coffee Break
10:30 - 10:45 Invited Talk 3
10:45 - 11:15 Poster Spotlights
11:15 - 11:45 Poster Session
11:45 - 12:45 Lunch
12:45 - 13:15 Poster Session
13:15 - 14:00 Invited Talk 4
14:00 - 14:45 Invited Talk 5
14:45 - 15:15 Coffee Break
15:15 - 16:00 Invited Talk 6
16:00 - 16:45 Tutorial on Open Source Projects in Explainable Artificial Intelligence
16:45 - 17:00 Closing Remarks
Contact

UNIST Explainable Artificial Intelligence Center
    Jaesik Choi / jaesik@unist.ac.kr / 052-217-2180
    GyeongEun Lee / socool@unist.ac.kr / 052-217-2196
    Sohee Cho / shcho@unist.ac.kr / 052-217-2258