About

The safe deployment of autonomous physical systems in real-world scenarios requires them to be explainable and trustworthy, especially in critical domains. In contrast with "black-box" systems, explainable and trustworthy autonomous physical systems lend themselves to easy assessment by system designers and regulators. This paves the way for improvements that can lead to enhanced performance as well as increased public trust.

In this one-day virtual workshop, we aim to gather a globally distributed group of researchers and practitioners to discuss the opportunities and social challenges in the design, implementation, and deployment of explainable and trustworthy autonomous physical systems, especially in a post-pandemic era. Interactions will be fostered through panel discussions and a series of spotlight talks. We welcome all to register here to attend. Our workshop's access code is "AccessW14". We will be using Delegate Connect to run the virtual workshop. See the note on Accessibility below from the CHI Workshop chairs:

Update from CHI Workshop chairs (March 2nd) - "Dear potential delegates, please note the workshop organisers are in discussion with the overall workshops chairs (who are discussing with the Accessibility Chairs and the General Chairs) for ACM CHI 2021. Please note the current set of technologies listed should not be a barrier to your participation. Please apply to attend and we will discuss your accessibility needs with you. As chairs for this particular workshop we are committed to being inclusive so we will work with our delegates to ensure participation for all as CHI 2021 is for all. This might mean we drop our plans to use any particular technology if we cannot find a suitable way to make the experience for delegates inclusive."

We are currently providing support (in the form of a registration fee waiver) for some students and early career researchers to attend our workshop. Click here to apply.

Call for papers

We invite researchers to submit their recent work on the following topics: accountability and trust in autonomous systems, AI ethics, algorithmic transparency, human factors in explanation generation and presentation, explainability grounded in the social sciences, explainable planning, context-aware and situation-aware explanations, and interaction design for explainable autonomous systems.

The organisers and the program committee will review submitted papers. We will ensure that reviewers are not assigned papers with which they have any affiliation. Selected authors of accepted papers will be asked to record a short video presentation of their talk. All accepted papers will be published on our website.
Please submit an extended abstract using the CHI conference paper template as specified here. Papers should be 2 to 3 pages long, excluding references. Submissions should be anonymised. Please submit your paper through the EasyChair submission site.
The EasyChair submission site will open on January 1st, 2021. Some authors of accepted papers will be asked to present their work in a spotlight talk. Authors of accepted papers are also invited to submit an extended version of their work to the Journal of Responsible Technology.

IMPORTANT DATES:
Submission deadline: 11:59PM BST, February 11th, 2021 (extended from February 1st) (Submission site)
Acceptance notification: February 22nd, 2021 (extended from February 18th)
Workshop: Friday, May 7th, 2021

Schedule

All times are in British Summer Time (BST)
Session 1
13:00 - 13:05 Welcome note by organisers
13:05 - 13:30 Tim Miller
Professor, The University of Melbourne
Explainable artificial intelligence: beware the inmates running the asylum
13:30 - 13:55 Paul Luff
Professor, King's College London
Planning and Situating Actions: challenges for explanation and trustworthiness in autonomous systems
13:55 - 14:20 Alun Preece
Professor, Cardiff University
"If it walks like a duck and quacks like a duck...": Coherent Multimodal Explanations for Trustable Machine Teammates
14:20 - 14:45 Spotlight Talk Selected authors present their work
14:45 - 15:00 Coffee Break 1
Session 2
15:00 - 15:25 Masoumeh Mansouri
Assistant Professor, University of Birmingham
Trustworthy and Explainable Autonomous Robotic Systems: Requirements and Solutions
15:25 - 15:50 Bastian Pfleging
Assistant Professor, Eindhoven University of Technology
HCI challenges of automated vehicles – how can we trust and understand what they do?
15:50 - 16:15 Erik Vinkhuyzen
Senior Researcher, Nissan Motor Corporation
Normal Traffic Assumptions
16:15 - 16:40 Panel Discussion 1 A panel discussion around the topics from the keynotes
16:40 - 16:55 Coffee Break 2
Session 3
16:55 - 17:20 Ehud Sharlin
Professor, University of Calgary
Autonomous Vehicles and Pedestrians: From Obstacle Avoidance to Interaction
17:20 - 17:45 Katie Driggs-Campbell
Assistant Professor, University of Illinois at Urbana-Champaign
Building Trust in Autonomous Systems through Communication and Validation
17:45 - 18:10 Panel Discussion 2 A panel discussion around the topics from the keynotes
18:10 - 18:15 Wrap-up Closing remarks
18:15 - 18:30 Virtual Drink (optional) Networking time

Paper Presentations


Designing Interactions with Autonomous Physical Systems
Marius Hoggenmueller, Tram Thi Minh Tran, Luke Hespanhol, Martin Tomitsch

On the Way to Improving Experimental Protocols to Evaluate Users’ Trust in AI-Assisted Decision Making
Oleksandra Vereschak, Gilles Bailly, Baptiste Caramiaux

Socially Acceptable Robot Navigation Across Dedicated and Shared Infrastructure
Tommaso Colombino, Danilo Gallo, Shreepriya Shreepriya, Antonietta Grasso, Cecile Boulard

COVID-Labels: Explainable and Trustworthy Mechanisms for Rebuilding Businesses during the COVID-19 Pandemic
Veronica Sih, Aarathi Prasad

Organisers



  • Daniel Omeiza

    Daniel Omeiza
    daniel.omeiza@cs.ox.ac.uk

is a PhD student at the University of Oxford researching explainability in autonomous driving.

  • Sule Anjomshoae

    Sule Anjomshoae
    sule.anjomshoae@umu.se

    is a PhD student in the Explainable AI (XAI) research team at Umeå University. Her research focuses on generating and presenting human-understandable explanations for the predictions made by black-box algorithms.

  • Konrad Kollnig

    Konrad Kollnig
    konrad.kollnig@cs.ox.ac.uk

    is a PhD student at the University of Oxford. With a background in computer science and mathematics, he researches how to increase user participation in the design of our day-to-day technological architecture.

  • Oana-Maria Camburu

    Oana-Maria Camburu
    oana-maria.camburu@cs.ox.ac.uk

    is a post-doctoral researcher at the University of Oxford and a co-investigator at the Alan Turing Institute on the Natural Language Explanations for Deep Neural Networks project. Oana’s research focuses mainly on explainability and natural language processing.

  • Kary Främling

    Kary Främling
    kary.framling@umu.se

    is a professor and the head of the Explainable AI (XAI) team at Umeå University in Sweden. Kary is also the founder and head of the Adaptive Systems of Intelligent Agents team at Aalto University, Finland.

  • Lars Kunze

    Lars Kunze
    lars@robots.ox.ac.uk

    Doctor Lars Kunze is a Departmental Lecturer in Robotics in the Oxford Robotics Institute (ORI) and the Department of Engineering Science at the University of Oxford. He is also the lead of the Cognitive Robotics Group in the ORI.

Click here to view the workshop summary report.

Support

This workshop is also partly supported by the Sense-Assess-eXplain (SAX) project, which is funded by the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York.