Abstract
Automated medical coding, a high-dimensional multilabel clinical task, requires explicit interpretability. Existing works often rely on local interpretability methods and fail to provide comprehensive explanations of the overall mechanism behind each label prediction within a multilabel set. We propose a mechanistic interpretability module called DIctionary Label Attention (DILA) that disentangles uninterpretable dense embeddings into a sparse embedding space, where each nonzero element (a dictionary feature) represents a globally learned medical concept. Through human evaluations, we show that our sparse embeddings are more human-understandable than their dense counterparts by at least 50 percent. Our automated dictionary feature identification pipeline, leveraging large language models (LLMs), uncovers thousands of learned medical concepts by examining and summarizing the highest-activating tokens for each dictionary feature. We represent the relationships between dictionary features and medical codes through a sparse interpretable matrix, enhancing our global understanding of the model's predictions while maintaining competitive performance and scalability without extensive human annotation.
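The sketch below is a minimal, illustrative PyTorch rendering of the idea described in the abstract: dense token embeddings are projected into an overcomplete sparse dictionary space, and per-code attention is computed over those dictionary activations, with a sparse matrix linking dictionary features to medical codes. All names and dimensions (`DictionaryLabelAttention`, `n_dict`, the loss terms mentioned in the comments) are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only; module and parameter names are assumptions,
# not the DILA authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DictionaryLabelAttention(nn.Module):
    """Disentangle dense token embeddings into a sparse dictionary space,
    then score each medical code with attention over dictionary features."""

    def __init__(self, d_model: int, n_dict: int, n_labels: int):
        super().__init__()
        # Overcomplete dictionary: encoder maps dense embeddings into a much
        # larger sparse space; decoder reconstructs the dense embedding.
        self.encoder = nn.Linear(d_model, n_dict)
        self.decoder = nn.Linear(n_dict, d_model, bias=False)
        # Sparse interpretable matrix relating dictionary features to codes.
        self.label_weights = nn.Parameter(torch.empty(n_labels, n_dict))
        nn.init.xavier_uniform_(self.label_weights)

    def forward(self, hidden: torch.Tensor):
        # hidden: (batch, seq_len, d_model) dense token embeddings.
        z = F.relu(self.encoder(hidden))        # sparse dictionary activations
        recon = self.decoder(z)                 # reconstruction for a sparse-autoencoder loss
        # Per-label attention over tokens, computed in the sparse space.
        attn = torch.softmax(z @ self.label_weights.T, dim=1)  # (batch, seq, labels)
        pooled = torch.einsum("bsl,bsd->bld", attn, z)         # label-specific dictionary features
        logits = (pooled * self.label_weights).sum(-1)         # (batch, labels)
        return logits, z, recon


# Hypothetical usage: train with BCE on the logits plus an L1 penalty on z and
# an MSE reconstruction loss on recon to keep dictionary activations sparse.
model = DictionaryLabelAttention(d_model=768, n_dict=4096, n_labels=50)
tokens = torch.randn(2, 128, 768)
logits, z, recon = model(tokens)
print(logits.shape, z.shape)  # torch.Size([2, 50]) torch.Size([2, 128, 4096])
```

Under this reading, global interpretability comes from two artifacts: the sparse activations `z`, whose dictionary features can be named by an LLM from their highest-activating tokens, and `label_weights`, the sparse matrix tying those features to individual medical codes.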
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1014-1038 |
| Number of pages | 25 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 259 |
| State | Published - 2024 |
| Event | 4th Machine Learning for Health Symposium, ML4H 2024 - Vancouver, Canada, Dec 15-16, 2024 |
Keywords
- Interpretability
- Medical Coding
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability