TY - GEN
T1 - Diagnosing failures of fairness transfer across distribution shift in real-world medical settings
AU - Schrouff, Jessica
AU - Harris, Natalie
AU - Koyejo, Oluwasanmi
AU - Alabdulmohsin, Ibrahim
AU - Schnider, Eva
AU - Opsahl-Ong, Krista
AU - Brown, Alex
AU - Roy, Subhrajit
AU - Mincu, Diana
AU - Chen, Christina
AU - Dieng, Awa
AU - Liu, Yuan
AU - Natarajan, Vivek
AU - Karthikesalingam, Alan
AU - Heller, Katherine
AU - Chiappa, Silvia
AU - D'Amour, Alexander
N1 - The authors would like to acknowledge and thank Lucas Dixon, Noah Broestl, Sara Mahdavi, Nenad Tomasev, Cameron Chen, Stephen Pfohl, Matt Kusner, Victor Veitch, Jon Deaton, Shannon Sequeira, Abhijit Guha Roy, Jan Freyberg, Aaron Loh, Martin Seneviratne, Patricia MacWilliams, Yun Liu, Christopher Semturs, Dale Webster, Greg Corrado and Marian Croak for their contributions to this effort. This work was funded by Google.
PY - 2022
Y1 - 2022
N2 - Diagnosing and mitigating changes in model fairness under distribution shift is an important component of the safe deployment of machine learning in healthcare settings. Importantly, the success of any mitigation strategy strongly depends on the structure of the shift. Despite this, there has been little discussion of how to empirically assess the structure of a distribution shift that one is encountering in practice. In this work, we adopt a causal framing to motivate conditional independence tests as a key tool for characterizing distribution shifts. Using our approach in two medical applications, we show that this knowledge can help diagnose failures of fairness transfer, including cases where real-world shifts are more complex than is often assumed in the literature. Based on these results, we discuss potential remedies at each step of the machine learning pipeline.
UR - http://www.scopus.com/inward/record.url?scp=85150953100&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150953100&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85150953100
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -