TY - GEN
T1 - Generating Smooth Interpretability Map for Explainable Image Segmentation
AU - Okamoto, Takaya
AU - Gu, Chunzhi
AU - Yu, Jun
AU - Zhang, Chao
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Interpreting the decisions made by deep neural networks (DNNs) has recently received wide attention. Specifically, this field aims to reveal the black box of the decision-making process of DNNs to facilitate reliable real-world applications. One recent method, U-Noise, realizes this by introducing an additional model to interpret the image segmentation process. Assuming that pixels important for segmentation should not be hindered by noise, such a model learns a noise mask as an interpretability map that identifies the pixels to which noise can be added. However, U-Noise treats all pixels independently during noise mask learning, which can cause the interpretability map to be less smooth and continuous. In this study, we propose a smoothing loss to better guide interpretability learning. It introduces a new assumption that pixels important for segmentation are also likely to be spatially close. We draw inspiration from the bilateral filter to design the smoothing loss, which enables a two-fold smoothing strategy with regard to spatial location and pixel intensity. Experiments on a medical image segmentation dataset demonstrate that our method generates a smoother yet more accurate interpretability map than prior methods.
UR - http://www.scopus.com/inward/record.url?scp=85179756419&partnerID=8YFLogxK
DO - 10.1109/GCCE59613.2023.10315524
M3 - Conference contribution
AN - SCOPUS:85179756419
T3 - GCCE 2023 - 2023 IEEE 12th Global Conference on Consumer Electronics
SP - 1023
EP - 1025
BT - GCCE 2023 - 2023 IEEE 12th Global Conference on Consumer Electronics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th IEEE Global Conference on Consumer Electronics, GCCE 2023
Y2 - 10 October 2023 through 13 October 2023
ER -