In conjunction with KDD 2019
August 4-8, 2019 - Anchorage, Alaska, USA

Explainable AI (XAI)

for Fairness, Accountability, & Transparency

INTRODUCTION


With the increasing availability and efficiency of modern computing, machine learning systems have become more powerful and complex, often involving a large number of model features and parameters. The problem with most of these learning algorithms is that they are trained via an optimization procedure over training examples and inherently lack representations that humans can understand, so their predictions are opaque to end users. This creates a need either for learning systems with explainability built in, or for meta-systems that can generate explanations from existing black-box models, so that users can gain insight into how complex systems arrive at their decisions. Such systems will be foundational for providing insight into fairness and transparency, and for allowing predictions to be appropriately managed and trusted.
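
As one concrete illustration of the second option, the short sketch below (in Python with scikit-learn; the dataset and model choice are illustrative assumptions, not methods endorsed by the workshop) computes a model-agnostic, post-hoc explanation, permutation feature importance, over a trained black-box model:

    # A minimal sketch of a post-hoc "meta-system" explanation: permutation
    # feature importance computed over a black-box model. The dataset and
    # model below are illustrative assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train an opaque model; the explainer treats it purely as a black box.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure the drop in held-out accuracy;
    # larger drops indicate features the model relies on more heavily.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"feature {i}: {result.importances_mean[i]:.4f}"
              f" +/- {result.importances_std[i]:.4f}")

Inherently interpretable alternatives, such as sparse linear models or small decision trees, avoid the need for such a meta-system, at the cost of a more constrained model class.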

This workshop will provide a forum for sharing recent progress in XAI methods, how those methods can be applied in industrial AI/ML, and where their limitations lie. Some of the questions this workshop will address include:

  • How should explainable systems or meta-systems be designed?
  • How should complex systems’ explainability and accountability be measured and communicated to end users?
  • What is the relationship between fairness and explainability? Are there unified approaches that can address both?
  • How are recent state-of-the-art advances in explainability currently applied in industrial machine learning problems? Are these methods applicable to deployed AI/ML systems, or only to models designed with explainability in mind?
  • What are the current limits of applying explainability approaches in different industrial application domains? What are some promising directions for addressing these limits?

The workshop will open with invited talks on the state of the art in XAI, followed by contributed talks. The panel session will discuss regulatory requirements and their implications for XAI in industrial settings, the current challenges in applying XAI approaches to various problems, and the potential for future developments and applications. The audience will come away with a practical understanding of the state of the art, as well as perspectives on current challenges and limitations.

SCHEDULE AND LOCATION


Monday, August 5

08:00am - 12:00pm

Arteaga - Street Level, Egan

(for last-minute changes, please refer to the official KDD app)

07:55-08:00 Opening Remarks
Organizing Committee
08:00-08:30 Keynote: Responsible Use of Explainable Artificial Intelligence
Patrick Hall
08:30-08:45 Contributed Talk: Parametrised Data Sampling for Fairness Optimisation
Carlos Vladimiro Gonzalez Zelaya, Paolo Missier, and Dennis Prangle
08:45-09:00 Contributed Talk: What Makes a Good Explanation? The Lemonade Stand Game As a Platform for Evaluating XAI
Talia Tron, Daniel Ben David, Tzvika Barenholz, and Yehezkel S. Resheff
09:00-09:30 Keynote: Interpretability - now what?
Been Kim
09:30-09:45 Coffee Break
09:45-10:00 Contributed Talk: Fairness Is Not Static: Deeper Understanding of Long Term Fairness via Agents, Environments, and Auditors
Alexander D'Amour, Yoni Halpern, Hansa Srinivasan, Pallavi Baljekar, James Atwood, and D. Sculley
10:00-10:15 Contributed Talk: On Fairness in Budget-Constrained Decision Making
Michiel Bakker, Alejandro Noriega-Campero, Duy Patrick Tu, Prasanna Sattigeri, Kush Varshney, and Alex Pentland
10:15-10:45 Keynote: Every Data Set is Flawed: The Importance of Intelligibility in Machine Learning
Rich Caruana
10:45-11:25 Panel Discussion: Fairness, Accountability, and Transparency in AI
Been Kim, Cynthia Rudin, Patrick Hall, and Rich Caruana. Moderator: Nhung Ho
11:25-11:30 Closing Remarks
Organizing Committee
11:30-12:00 Poster Session
Invited Presenters

KEYNOTE SPEAKERS AND PANELISTS


Rich Caruana

Rich Caruana is a Senior Researcher at Microsoft Research. Before joining Microsoft, Rich was on the faculty in the Computer Science Department at Cornell University, at UCLA's Medical School, and at CMU's Center for Learning and Discovery. Rich's Ph.D. is from Carnegie Mellon University, where he worked with Tom Mitchell and Herb Simon. His thesis on Multi-Task Learning helped create interest in a new subfield of machine learning called Transfer Learning. Rich received an NSF CAREER Award in 2004 (for Meta Clustering), best paper awards in 2005 (with Alex Niculescu-Mizil), 2007 (with Daria Sorokina), and 2014 (with Todd Kulesza, Saleema Amershi, Danyel Fisher, and Denis Charles), co-chaired KDD in 2007 (with Xindong Wu), and serves as area chair for NIPS, ICML, and KDD. His current research focus is on learning for medical decision making, transparent modeling, deep learning, and computational ecology.

KEYNOTE: Every Data Set is Flawed: The Importance of Intelligibility in Machine Learning

Patrick Hall

Patrick Hall is a senior director for data science products at H2O.ai, where he leads efforts to make machine learning more understandable and trustworthy for practitioners and consumers. Patrick is an author of the popular e-booklet An Introduction to Machine Learning Interpretability and a frequent contributor to O'Reilly Ideas on the subjects of transparency, model management, and security for machine learning. He's also a member of the multi-institution AI risk and security (AIRS) financial services working group and an award-winning lecturer in the Department of Decision Sciences at George Washington University. Patrick studied computational chemistry at the University of Illinois before graduating from the Institute for Advanced Analytics at North Carolina State University.

KEYNOTE: Responsible Use of Explainable Artificial Intelligence

Explainable artificial intelligence (XAI) enables human learning from machine learning, human appeal of automated model decisions, regulatory compliance, and white-hat hacking and security audits of ML models. XAI techniques have been implemented in numerous open-source and commercial packages, and XAI is also an important, mandatory, or embedded aspect of commercial predictive modeling in industries like financial services. However, like many technologies, XAI can be misused, particularly as a faulty safeguard for harmful black boxes and for other malevolent purposes like model stealing, reconstruction of sensitive training data, and “fairwashing”. This presentation discusses a few guidelines and best practices to help practitioners avoid unintentional misuse, identify intentional abuse, and generally make the most of currently available XAI techniques.

Been Kim

Been Kim is a research scientist on the Google Brain Team. Her research focuses on improving interpretability in machine learning, both by building interpretability methods for already-trained models and by building inherently interpretable models. She holds MS and PhD degrees from MIT. Been has given tutorials on interpretability at ICML 2017, at the Deep Learning Summer School at the University of Toronto and the Vector Institute in 2018, and at CVPR 2018. She is one of the executive board members of Women in Machine Learning (WiML), and helps with various ML conferences as a workshop chair, an area chair, a steering committee member, and a program chair.

KEYNOTE: Interpretability - now what?

In this talk, I hope to reflect on some of the progress made in the field of interpretable machine learning: where we are going as a field, and what we need to be aware of and careful about as we make progress. With that perspective, I will then discuss some of my recent work on 1) sanity-checking popular methods and 2) developing interpretability methods that are friendlier to lay people.

Cynthia Rudin

Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD from Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named by Businessinsider.com as one of the 12 most impressive professors at MIT in 2015. She is past chair of both the INFORMS Data Mining Section and the Statistical Learning and Data Science Section of the American Statistical Association. She has also served on committees for DARPA, the National Institute of Justice, and AAAI, as well as on three committees for the National Academies of Sciences, Engineering, and Medicine: the Committee on Applied and Theoretical Statistics, the Committee on Law and Justice, and the Committee on Analytic Research Foundations for the Next-Generation Electric Grid. She is a fellow of the American Statistical Association and a fellow of the Institute of Mathematical Statistics. She will be the Thomas Langford Lecturer at Duke University during the 2019-2020 academic year.

CALL FOR PAPERS


Description:

As AI takes on a more prominent role in society, the need for transparency in AI systems is of increasing importance. The ability to explain a model’s output has become a necessity in many applications. The explainable AI (XAI) workshop invites submissions highlighting the benefits, common roadblocks, and recent advances in explainable AI.

We welcome papers on a range of XAI topics, from theoretical approaches to making machine learning models interpretable, to novel applications of explainable AI techniques in production settings. Submissions focusing on fairness in machine learning systems are also welcome. Topics of interest include, but are not limited to:

  • Techniques for making machine learning systems interpretable; surveys of existing techniques or explorations of trends.
  • Quantifying or visualizing interpretability. Strategies to effectively communicate the algorithm’s explanations in a human-understandable way.
  • Novel applications of existing techniques.
  • Limitations, in practice, of some or all existing techniques.
  • Trade-offs between privacy and transparency.
  • Formalizations of fairness, bias, discrimination; trade-offs and relationships among them.
  • Design interventions to mitigate biases in systems, or to discourage biased user behavior.
  • Methods and tools for ensuring that algorithms comply with fairness policies.
  • Techniques for guaranteeing accountability without necessitating undesirable transparency.
  • Techniques for ethical, autonomous A/B testing.

If you would like clarification of any topic, or of whether your submission falls within the scope of this workshop, feel free to contact us at xai.kdd2019@gmail.com. Accepted papers will be part of the conference proceedings.

Student presenters will be eligible for a travel grant of up to $2500 to present their work at the workshop. To be eligible for the grant, you must have full-time student status or be a recent graduate (graduated after April 1, 2018).

Submission Guidelines:

Authors are requested to submit papers up to a total of six (6) pages, including all content and references. Papers must be in PDF format and formatted according to the new Standard ACM Conference Proceedings Template. In addition, authors can provide an optional two (2) page supplement at the end of their submitted paper (in the same PDF file, starting at page 7 in the case of a 6-page paper) focused on reproducibility.

This workshop will follow a peer review process similar to the KDD Applied Data Science Track. Reviews are not double-blind, and author names and affiliations should be listed on the submission.

Important Policies:

This workshop strongly encourages original and reproducible work. We will also adhere to all of KDD’s policies on Reproducibility, Authorship, Dual Submissions, Conflicts of Interest, Retraction, Attendance, and Copyrights.

Submissions have closed

DATES


Paper Submission Deadline: May 26, 2019, 11:59 PM AoE (extended from May 19, 2019)

Author Notification: June 1, 2019

Camera-ready Submissions: June 22, 2019

Workshop Date: August 5, 2019

ORGANIZERS


If you have any questions, please feel free to contact the organizing committee at xai.kdd2019@gmail.com


Joy Rimchala

Dr. Joy Rimchala is a Staff Data Scientist at Intuit, working on applying Computer Vision and Natural Language Processing techniques to solve information extraction problems. Joy has a long-standing interest in approaches for training complex models in data-efficient and training-efficient ways using domain knowledge, machine teaching, and meta-learning. Prior to Intuit, Joy did her PhD dissertation on applying computer vision techniques and sequence-based modeling to analyze object trajectories in microscopy videos.

Jineet Doshi

Jineet Doshi is a Data Scientist at Intuit building machine learning models for Security, Risk, and Fraud. He has worked on the challenging problems of scaling machine learning models and making them run closer to real time. He holds a Master’s degree in Information Technology from Carnegie Mellon University. Prior to joining Intuit, he conducted research on making machine learning models more interpretable and on using NLP techniques to summarize the terms and conditions of various websites.

Qiang Zhu

Dr. Qiang Zhu is a Director of Data Science at Intuit, where he leads a diverse team of Data Scientists, Machine Learning Engineers, and Data Analysts with the goal of leveraging data to power prosperity around the world. Qiang has 10+ years of experience in Data Science. He has published 20+ works (including the SIGKDD 2012 Best Paper) and received 1,500+ citations.

Diane Chang

Dr. Diane Chang is a Distinguished Data Scientist at Intuit, currently focusing on using Artificial Intelligence and Machine Learning in the Security, Risk, and Fraud group. During her time at Intuit, she helped launch a direct lending business by using Machine Learning to identify low-risk customers. Diane has broad experience in giving talks, organizing workshops, and moderating or participating in panels. She was a lead organizer of our CUI workshop last year. She also served on the KDD Applied Data Science Track PC for two years.

Nick Hoh

Dr. Nick Hoh is a Senior Data Scientist at Intuit, and currently works on Machine Learning applications in Security, Risk, and Fraud. Nick is passionate about teaching, and he seeks out opportunities to share Machine Learning knowledge with the broader public. Prior to his time at Intuit, Nick worked with Udacity to develop and deliver content as a Session Lead for their Machine Learning Nanodegree.

Conrad De Peuter

Conrad De Peuter is a Senior Data Scientist at Intuit, currently focusing on Natural Language Processing and language modeling, and their applications in the tax domain. During his time at Intuit, he has worked on launching a human-in-the-loop compliance law extractor, in addition to building custom entity resolution solutions. He holds an MS in Data Science from Columbia University, as well as undergraduate Math and CS degrees from Duke University.

Shir Meir Lador

Shir Meir Lador is a Data Science Team Lead at Intuit, as well as a WiDS Tel Aviv ambassador and the co-founder of the PyData Tel Aviv meetups. She is the co-host of “Unsupervised,” a podcast about data science in Israel, and gives talks at various machine learning and data science conferences and meetups. Shir holds an M.Sc. in electrical and computer engineering, with a major in machine learning and signal processing, from Ben-Gurion University.

Sambarta Dasgupta

Dr. Sambarta Dasgupta is a Data Scientist at Intuit, where he builds forecasting and anomaly detection algorithms for time series data. His research interests include time series forecasting (especially hierarchical forecasting, meta-learning for time series, and neural net-based methods), deep learning, kernel-based learning algorithms, and operations research. His PhD is from Iowa State University, where his graduate research focused on building real-time time series analysis models for sensor data.