Unraveling The Black Box - Building Understandable AI Through Strategic Explanation and User-based Design
Date
2024
Authors
Abstract
The pervasive integration of Artificial Intelligence (AI) in society presents
both opportunities and challenges, with the black-box issue emerging as a
significant obstacle in realizing the full potential of AI. The opaque nature of
AI decision-making processes impedes user understanding, particularly
among non-technical individuals, raising concerns about the reliability of AI
recommendations. Helping users understand AI decision-making has
therefore become an urgent task. This thesis aims to guide developers in
constructing AI that users can understand. Toward building
understandable AI, researchers have proposed many theories, methods, and
frameworks in existing research. However, current research still has
limitations and challenges. To address these challenges and fulfill the
research aim, the thesis begins with a discussion of transparency and
interpretability, then elaborates on how to explain strategically to users
along three dimensions: algorithm simplification, appropriate information
disclosure, and high-level collaboration. Furthermore, the thesis surveys
users in
four high-stakes areas, establishing AI explainability principles across
three stages: conceptualization, construction, and measurement. In addition
to these primary contributions, the thesis presents supporting work,
including challenges faced by explainable AI, user-centered development,
and automation trust. This work lays a solid foundation for addressing the
research questions and achieving the research objectives, while also
opening avenues for future research.
Keywords
understandable AI, transparency, interpretability, explainability strategy, high-stakes areas, user-based AI, XAI, automation trust