Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-23393
Full metadata record
dc.contributor.author: Wehrli, Samuel
dc.contributor.author: Hertweck, Corinna
dc.contributor.author: Amirian, Mohammadreza
dc.contributor.author: Glüge, Stefan
dc.contributor.author: Stadelmann, Thilo
dc.date.accessioned: 2021-11-03T12:39:05Z
dc.date.available: 2021-11-03T12:39:05Z
dc.date.issued: 2021-10-27
dc.identifier.issn: 2730-5953
dc.identifier.issn: 2730-5961
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/23393
dc.description.abstract: Face Recognition (FR) is increasingly influencing our lives: we use it to unlock our phones; the police use it to identify suspects. Two main concerns are associated with this increase in facial recognition: (1) these systems are typically less accurate for marginalized groups, which can be described as “bias”, and (2) the increased surveillance these systems enable. Our paper is concerned with the first issue. Specifically, we explore an intuitive technique for reducing this bias, namely “blinding” models to sensitive features such as gender or race, and show why this cannot be equated with reducing bias. Even when not designed for this task, facial recognition models can deduce sensitive features such as gender or race from pictures of faces, simply because they are trained to determine the “similarity” of pictures. This means that people with similar skin tones, similar hair length, etc. will be seen as similar by facial recognition models. When confronted with biased decision-making by humans, one approach taken in job application screening is to “blind” the human decision-makers to sensitive attributes such as gender and race by not showing them pictures of the applicants. Based on a similar idea, one might think that if facial recognition models were less aware of these sensitive features, the difference in accuracy between groups would decrease. We evaluate this assumption, which has already entered the scientific literature as a valid de-biasing method, by measuring how “aware” models are of sensitive features and correlating this with differences in accuracy. In particular, we blind pre-trained models to make them less aware of sensitive attributes. We find that awareness and accuracy do not positively correlate, i.e., that bias ≠ awareness. In fact, blinding barely affects accuracy in our experiments. The seemingly simple solution of decreasing bias in facial recognition rates by reducing awareness of sensitive features thus does not work in practice: trying to ignore sensitive attributes is not a viable concept for less biased FR. (An illustrative sketch of measuring such “awareness” follows the metadata record below.)
dc.language.iso: en
dc.publisher: Springer
dc.relation.ispartof: AI and Ethics
dc.rights: http://creativecommons.org/licenses/by/4.0/
dc.subject: Fairness
dc.subject: Convolutional neural network
dc.subject: Discrimination
dc.subject: Ethnic bias
dc.subject: Gender bias
dc.subject.ddc: 006: Special computer methods
dc.subject.ddc: 170: Ethics
dc.title: Bias, awareness, and ignorance in deep-learning-based face recognition
dc.type: Article in a scientific journal
dcterms.type: Text
zhaw.departement: Life Sciences und Facility Management
zhaw.departement: School of Engineering
zhaw.departement: Soziale Arbeit
zhaw.organisationalunit: Centre for Artificial Intelligence (CAI)
zhaw.organisationalunit: Institut für Computational Life Sciences (ICLS)
zhaw.organisationalunit: Institut für Datenanalyse und Prozessdesign (IDP)
dc.identifier.doi: 10.1007/s43681-021-00108-6
dc.identifier.doi: 10.21256/zhaw-23393
zhaw.funding.eu: No
zhaw.issue: 3
zhaw.originated.zhaw: Yes
zhaw.pages.end: 522
zhaw.pages.start: 509
zhaw.publication.status: publishedVersion
zhaw.volume: 2
zhaw.publication.review: Peer review (publication)
zhaw.webfeed: Biosensor Analysis & Digital Health
zhaw.webfeed: Machine Perception and Cognition
zhaw.webfeed: Datalab
zhaw.webfeed: Predictive Analytics
zhaw.webfeed: ZHAW digital
zhaw.funding.zhaw: Libra: A One-Tool Solution for MLD4 Compliance
zhaw.author.additional: No
zhaw.display.portrait: Yes
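The abstract's notion of model "awareness" can be operationalized, for illustration, as the accuracy of a probe classifier that predicts a sensitive attribute from a model's face embeddings; one can then check whether models with higher awareness also show a larger accuracy gap between groups. The minimal sketch below shows that idea on purely synthetic data. The function names (awareness_score, accuracy_gap), the logistic-regression probe, and all data are invented placeholders, not the authors' pipeline or results.

# Illustrative only: a hypothetical probe-based "awareness" measure and its
# correlation with a per-group accuracy gap, computed on synthetic data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def awareness_score(embeddings, sensitive):
    """Accuracy of a linear probe predicting the sensitive attribute from face
    embeddings; chance level (~0.5 for two groups) would mean the model is
    effectively 'blind' to the attribute."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, sensitive, test_size=0.3, random_state=0, stratify=sensitive)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

def accuracy_gap(correct, sensitive):
    """Absolute difference in recognition accuracy between the two groups."""
    return abs(correct[sensitive == 0].mean() - correct[sensitive == 1].mean())

# Synthetic stand-ins for several FR models with varying attribute leakage.
n, d = 2000, 128
awareness, gaps = [], []
for _ in range(8):
    groups = rng.integers(0, 2, size=n)                       # invented sensitive attribute
    emb = rng.normal(size=(n, d))
    emb[:, :4] += rng.uniform(0.0, 1.5) * groups[:, None]     # attribute leaks into embeddings
    true_gap = rng.uniform(0.0, 0.10)
    correct = (rng.random(n) < (0.95 - true_gap * groups)).astype(float)  # invented match outcomes
    awareness.append(awareness_score(emb, groups))
    gaps.append(accuracy_gap(correct, groups))

# The paper's question in miniature: do awareness and the accuracy gap move together?
r, p = pearsonr(awareness, gaps)
print(f"awareness:     {np.round(awareness, 3)}")
print(f"accuracy gaps: {np.round(gaps, 3)}")
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

On synthetic data like this, the leak strength and the accuracy gap are drawn independently, so the probe score and the gap need not correlate; the paper reports an analogous decoupling for real pre-trained models.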
Appears in collections: Publikationen Soziale Arbeit

Files in This Item:
2021_Wehrli-etal_Deep-learning-face-recognition.pdf (1.68 MB, Adobe PDF)
Wehrli, S., Hertweck, C., Amirian, M., Glüge, S., & Stadelmann, T. (2021). Bias, awareness, and ignorance in deep-learning-based face recognition. AI and Ethics, 2(3), 509–522. https://doi.org/10.1007/s43681-021-00108-6
Wehrli, S. et al. (2021) ‘Bias, awareness, and ignorance in deep-learning-based face recognition’, AI and Ethics, 2(3), pp. 509–522. Available at: https://doi.org/10.1007/s43681-021-00108-6.
S. Wehrli, C. Hertweck, M. Amirian, S. Glüge, and T. Stadelmann, “Bias, awareness, and ignorance in deep-learning-based face recognition,” AI and Ethics, vol. 2, no. 3, pp. 509–522, Oct. 2021, doi: 10.1007/s43681-021-00108-6.
WEHRLI, Samuel, Corinna HERTWECK, Mohammadreza AMIRIAN, Stefan GLÜGE and Thilo STADELMANN, 2021. Bias, awareness, and ignorance in deep-learning-based face recognition. AI and Ethics. 27 October 2021. Vol. 2, No. 3, pp. 509–522. DOI 10.1007/s43681-021-00108-6
Wehrli, Samuel, Corinna Hertweck, Mohammadreza Amirian, Stefan Glüge, and Thilo Stadelmann. 2021. “Bias, Awareness, and Ignorance in Deep-Learning-Based Face Recognition.” AI and Ethics 2 (3): 509–22. https://doi.org/10.1007/s43681-021-00108-6.
Wehrli, Samuel, et al. “Bias, Awareness, and Ignorance in Deep-Learning-Based Face Recognition.” AI and Ethics, vol. 2, no. 3, Oct. 2021, pp. 509–22, https://doi.org/10.1007/s43681-021-00108-6.

