Liangzhi Li is a specially appointed assistant professor with the Institute for Datability Science (IDS), Osaka University, Japan. He received the B.S. and M.S. degrees in Computer Science from South China University of Technology (SCUT), China, in 2012 and 2016, respectively, and the Ph.D. degree in Engineering from Muroran Institute of Technology, Japan, in 2019. After graduation, he worked as a researcher (2019-2021) and is now an assistant professor (2021-present) at Osaka University. His main research interests include computer vision, explainable AI, and medical imaging. He has received best paper awards from FCST 2017 and the IEEE Sapporo Section (2018).
Ph.D. in Engineering, 2019
Muroran Institute of Technology, Japan
M.S. in Computer Science, 2016
South China University of Technology (SCUT)
B.S. in Computer Science, 2012
South China University of Technology (SCUT)
Explainable artificial intelligence has gained increasing attention in recent years. However, most existing methods are based on gradients or intermediate features, which are not directly involved in the decision-making process of the classifier. In this paper, we propose a slot attention-based classifier called SCOUTER for transparent yet accurate classification. Two major differences from other attention-based methods are: (a) SCOUTER’s explanation is involved in the final confidence for each category, offering a more intuitive interpretation, and (b) all categories have corresponding positive or negative explanations, which tell “why the image is of a certain category” or “why the image is not of a certain category.” We design a new loss tailored for SCOUTER that controls the model’s behavior to switch between positive and negative explanations, as well as the size of explanatory regions. Experimental results show that SCOUTER gives better visual explanations while maintaining good accuracy on small and medium-sized datasets.
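The core idea — that the attention map itself produces the per-class confidence, with a sign controlling positive vs. negative explanations — can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper’s actual architecture: the sigmoid attention, the single update round, the `lam` sign switch, and the `area_loss` helper are all simplifying assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scouter_like_forward(features, slots, lam=1.0):
    """Toy slot-attention scoring round (illustrative only).

    Each class slot attends over flattened spatial features, and the
    summed attention directly becomes that class's confidence, so the
    explanation map and the prediction share the same quantity.
    lam = +1.0 mimics positive explanations, lam = -1.0 negative ones.

    features: (N, D) flattened spatial features
    slots:    (C, D) one learnable slot per category
    """
    attn = sigmoid(slots @ features.T)   # (C, N) per-class attention maps
    logits = lam * attn.sum(axis=1)      # confidence = attended evidence
    return logits, attn

def area_loss(attn):
    """Hypothetical area penalty: discourages large explanatory regions."""
    return attn.mean()
```

In this sketch the attention map is not a post-hoc visualization: shrinking it (via `area_loss`) directly lowers the class confidence, which mirrors the abstract’s point that the explanation participates in the final decision.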