Assessment of bias and equity of medical imaging algorithms
AZEIA
Duration:
14.06.2021 - 30.09.2022
Predictive algorithms risk behaving in biased ways when exposed to certain input patterns, which may result in unfair or inequitable decisions. To avoid these undesirable behaviors, special care must be taken during model design and training to identify potential sources of bias. In healthcare contexts, such biases may lead to unequal attention to already vulnerable populations, further widening existing inequalities.
The AZEIA project proposes a comprehensive study of current methodologies for identifying and correcting bias in intelligent systems, with a focus on their application in medical imaging. To support this, a platform will be developed to analyze bias-related risks during the creation of predictive models. The platform will also allow models to be trained in a way that mitigates the detected risks, as well as evaluate their fairness once deployed.
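As an illustration of the kind of fairness evaluation such a platform might perform once a model is deployed, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between subgroups. The metric choice and the group labels are illustrative assumptions, not the project's actual methodology.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across subgroups.

    predictions: list of 0/1 model outputs
    groups: parallel list of subgroup labels (e.g. demographic categories)
    Returns 0.0 when all subgroups receive positive predictions at the
    same rate; larger values indicate greater disparity.
    """
    rates = {}
    for g in set(groups):
        # Positive-prediction rate within subgroup g
        outputs = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outputs) / len(outputs)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "A" is flagged positive 3 times out of 4,
# group "B" only 1 time out of 4, giving a disparity of 0.75 - 0.25 = 0.5.
preds = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # → 0.5
```

A production platform would typically track several such metrics (e.g. equalized odds, predictive parity) rather than a single number, since different fairness criteria can conflict with one another.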
The system will be validated through demonstrations in two use cases: breast cancer screening using mammography and the diagnosis of breast lesions in anatomical pathology samples.


