Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-27395
Full metadata record
dc.contributor.author: Loi, Michele
dc.contributor.author: Heitz, Christoph
dc.date.accessioned: 2023-03-20T13:59:07Z
dc.date.available: 2023-03-20T13:59:07Z
dc.date.issued: 2022
dc.identifier.isbn: 978-1-4503-9352-2
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/27395
dc.description.abstract: In this paper, we provide a moral analysis of two criteria of statistical fairness debated in the machine learning literature: (1) calibration between groups and (2) equality of false positive and false negative rates between groups. We focus on moral arguments in support of either measure. The conflict between group calibration and false positive/false negative rate equality is one of the core issues in the debate about group fairness definitions among practitioners. For any thorough moral analysis, the meaning of the term “fairness” has to be made explicit and defined properly. In this paper, we equate fairness with (non-)discrimination, which is a legitimate understanding in the discussion about group fairness. More specifically, we equate it with “prima facie wrongful discrimination” in the sense in which Lippert-Rasmussen uses this definition. We argue that a violation of group calibration may be unfair in some cases, but not in others. Our argument analyzes in detail two specific hypothetical examples of the use of predictions in decision making. The most important practical implication is that between-group calibration is defensible as a bias standard in some cases but not in others; we show this by referring to examples in which the violation of between-group calibration is discriminatory, and others in which it is not. This is in line with claims already advanced in the literature that algorithmic fairness should be defined in a way that is sensitive to context. It follows that arguments based on examples in which fairness requires between-group calibration, or equality in the false positive/false negative rates, do not generalize: group calibration may be a fairness requirement in one case, but not in another.
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.rights: http://creativecommons.org/licenses/by/4.0/
dc.subject: Equalized odds
dc.subject: Fairness
dc.subject: Prediction
dc.subject: Calibration
dc.subject: Equal opportunity
dc.subject.ddc: 006: Spezielle Computerverfahren
dc.title: Is calibration a fairness requirement? : an argument from the point of view of moral philosophy and decision theory
dc.type: Conference paper
dcterms.type: Text
zhaw.departement: School of Engineering
zhaw.organisationalunit: Institut für Datenanalyse und Prozessdesign (IDP)
dc.identifier.doi: 10.1145/3531146.3533245
dc.identifier.doi: 10.21256/zhaw-27395
zhaw.conference.details: 5th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Seoul, Republic of Korea, 21-24 June 2022
zhaw.funding.eu: No
zhaw.originated.zhaw: Yes
zhaw.pages.end: 2034
zhaw.pages.start: 2026
zhaw.publication.status: publishedVersion
zhaw.publication.review: Peer review (publication)
zhaw.title.proceedings: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency
zhaw.funding.snf: 187473
zhaw.funding.zhaw: Socially acceptable AI and fairness trade-offs in predictive analytics
zhaw.author.additional: No
zhaw.display.portrait: Yes
Appears in collections: Publikationen School of Engineering

Files in This Item:
2022_Loi-Heitz_Is-calibration-a-fairness-requirement.pdf (314.82 kB, Adobe PDF)
Loi, M., & Heitz, C. (2022). Is calibration a fairness requirement? : an argument from the point of view of moral philosophy and decision theory [Conference paper]. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 2026–2034. https://doi.org/10.1145/3531146.3533245


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.