Explanation in Artificial Intelligence: A Human-Centred Approach
Grant number: DP190103414 | Funding period: 2019 - 2022
This project aims to produce validated methods for creating human-centred explanations of decisions made by artificial intelligence (AI). Trial deployments of AI systems have created a need to explain how those systems reach their decisions, because many were developed with insufficient consideration of how their decision logic would be explained to people. This project positions 'explainable AI' at the intersection of human-computer interaction, computer science and cognitive psychology. The expected outcomes are new methods, models and algorithms for explaining different types of AI models to people. This project should result in improved understanding of, and trust in, AI decisions.
Related publications (3)
Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors
Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A Ehinger, Benjamin IP Rubinstein
Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form. This ..
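The publication's title refers to non-negative concept activation vectors, i.e. non-negative directions in a CNN layer's channel space obtained by factorising its (non-negative, post-ReLU) activations. The paper's actual algorithm is not reproduced here; the following is only a minimal NumPy sketch of the underlying idea, non-negative matrix factorisation of activation data, with random arrays standing in for real CNN feature maps and all shapes chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for flattened CNN activations: rows are spatial positions pooled
# across images, columns are channels. Post-ReLU activations are non-negative,
# which is what makes NMF applicable. Real data would come from a trained CNN.
A = rng.random((200, 64))

def nmf(A, k, iters=300, eps=1e-9):
    """Plain multiplicative-update NMF: A ~= W @ H with W, H >= 0.
    Each row of H is one non-negative 'concept' direction in channel space;
    each row of W gives the concept strengths at one spatial position."""
    n, m = A.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        # Standard Lee-Seung updates; eps guards against division by zero.
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(A, k=5)
rel_err = np.linalg.norm(A - W @ H) / np.linalg.norm(A)
```

Because both factors stay non-negative, concepts combine additively (parts-based), which is what makes the resulting directions easier to interpret than arbitrary signed components.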