
Russian Journal for Personalized Medicine


Explanation models as a component of the intelligent computer-aided diagnosis systems in medicine: a brief review

https://doi.org/10.18705/2782-3806-2022-2-6-23-32

Abstract

The paper reviews the most important and effective approaches and models for explaining and interpreting the diagnostic results produced by intelligent computer-aided diagnosis systems. Such models are needed because an intelligent diagnosis system is itself a "black box": the physician needs not only the patient's diagnosis, but also an understanding of why that diagnosis was reached and which elements of the patient's data are most significant for it. The paper surveys the main approaches to explaining the predictions of machine learning models, both in general and as applied to medicine, and shows how the kind of available patient information affects the choice of explanation model. Models for visual and tabular data are considered, as are models that explain by example. The aim of the work is to review the main explanation models and their dependence on the kind of patient information available.
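To make the "black box" problem concrete, the sketch below illustrates one widely used family of explanation models: fitting an interpretable linear surrogate to a black-box classifier in the neighborhood of a single patient record, the idea behind LIME-style local explanations. Everything here (feature names, synthetic data, kernel width) is an illustrative assumption, not the implementation of any particular method covered by the review.

```python
# A minimal sketch of a LIME-style local surrogate explanation for a
# black-box diagnostic model on tabular data. All data and names are
# synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic "patient" records with four illustrative features.
feature_names = ["age", "blood_pressure", "glucose", "bmi"]
X = rng.normal(size=(500, 4))
# Hypothetical ground truth: diagnosis driven mainly by glucose and BMI.
y = (0.2 * X[:, 0] + 1.5 * X[:, 2] + 1.0 * X[:, 3] > 0).astype(int)

# The "black box": accurate, but opaque to the physician.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000, sigma=0.5):
    """Fit an interpretable linear surrogate to the black box near x."""
    # Perturb the patient record and query the black box on the perturbations.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.size))
    p = model.predict_proba(Z)[:, 1]
    # Weight perturbed points by their proximity to x (Gaussian kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # The surrogate's coefficients serve as local feature importances.
    return Ridge(alpha=1.0).fit(Z, p, sample_weight=w).coef_

x_patient = X[0]
for name, coef in zip(feature_names, explain_locally(black_box, x_patient)):
    print(f"{name:>15s}: {coef:+.3f}")
```

For a given patient, the sign and magnitude of each coefficient indicate which features pushed the black-box model toward or away from the predicted diagnosis, which is exactly the kind of information the abstract argues a physician needs alongside the diagnosis itself.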

About the Authors

L. V. Utkin
Peter the Great St. Petersburg Polytechnic University
Russia

Lev V. Utkin, Dr. Sci. (Eng.), Professor, Higher School of Artificial Intelligence

29 Politekhnicheskaya St., St. Petersburg, 195251



Yu. I. Krylova
Peter the Great St. Petersburg Polytechnic University
Russia

Yulia I. Krylova, PhD student

St. Petersburg



A. V. Konstantinov
Peter the Great St. Petersburg Polytechnic University
Russia

Andrei V. Konstantinov, PhD student

St. Petersburg





For citation:


Utkin L.V., Krylova J.Y., Konstantinov A.V. Explanation models as a component of the intelligent computer-aided diagnosis systems in medicine: a brief review. Russian Journal for Personalized Medicine. 2022;2(6):23-32. (In Russ.) https://doi.org/10.18705/2782-3806-2022-2-6-23-32



Content is licensed under a Creative Commons Attribution 4.0 License.


ISSN 2782-3806 (Print)
ISSN 2782-3814 (Online)