Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-29512
Full metadata record

DC Field | Value | Language
dc.contributor.author | Baumann, Joachim | -
dc.contributor.author | Loi, Michele | -
dc.date.accessioned | 2024-01-04T15:07:15Z | -
dc.date.available | 2024-01-04T15:07:15Z | -
dc.date.issued | 2023-06-19 | -
dc.identifier.issn | 2210-5433 | de_CH
dc.identifier.issn | 2210-5441 | de_CH
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/29512 | -
dc.description | Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch) | de_CH
dc.description.abstract | Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to people unless special efforts are made to avoid them. Insurance is no exception. In this paper, we provide a thorough analysis of algorithmic fairness in the case of insurance premiums. We ask what "fairness" might mean in this context and how the fairness of a premium system can be measured. For this, we apply the established fairness frameworks of the fair machine learning literature to the case of insurance premiums and show which of the existing fairness criteria can be applied to assess the fairness of insurance premiums. We argue that two of the often-discussed group fairness criteria, independence (also called statistical parity or demographic parity) and separation (also known as equalized odds), are not normatively appropriate for insurance premiums. Instead, we propose the sufficiency criterion (also known as well-calibration) as a morally defensible alternative that allows us to test for systematic biases in premiums towards certain groups based on the risk they bring to the pool. In addition, we clarify the connection between group fairness and different degrees of personalization. Our findings enable insurers to assess the fairness properties of their risk models, helping them avoid reputational damage resulting from potentially unfair and discriminatory premium systems. | de_CH
dc.language.iso | en | de_CH
dc.publisher | Springer | de_CH
dc.relation.ispartof | Philosophy & Technology | de_CH
dc.rights | https://creativecommons.org/licenses/by/4.0/ | de_CH
dc.subject | Actuarial fairness | de_CH
dc.subject | Algorithmic fairness | de_CH
dc.subject | Group fairness criteria | de_CH
dc.subject | Moral philosophy | de_CH
dc.subject | Prediction-based decision making | de_CH
dc.subject | Risk | de_CH
dc.subject | Sufficiency | de_CH
dc.subject.ddc | 006: Special computer methods | de_CH
dc.subject.ddc | 170: Ethics | de_CH
dc.title | Fairness and risk : an ethical argument for a group fairness definition insurers can use | de_CH
dc.type | Article in a scientific journal | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Institut für Datenanalyse und Prozessdesign (IDP) | de_CH
dc.identifier.doi | 10.1007/s13347-023-00624-9 | de_CH
dc.identifier.doi | 10.21256/zhaw-29512 | -
dc.identifier.pmid | 37346393 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.issue | 45 | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.publication.status | publishedVersion | de_CH
zhaw.volume | 36 | de_CH
zhaw.publication.review | Peer review (publication) | de_CH
zhaw.funding.snf | 187473 | de_CH
zhaw.funding.zhaw | Socially acceptable AI and fairness trade-offs in predictive analytics | de_CH
zhaw.funding.zhaw | Algorithmic Fairness in data-based decision making: Combining ethics and technology | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
Appears in collections: Publikationen School of Engineering

Files in This Item:
File: 2023_Baumann-Loi_Fairness-risk-group-fairness-definition-insurers.pdf (932.5 kB, Adobe PDF)
Baumann, J., & Loi, M. (2023). Fairness and risk : an ethical argument for a group fairness definition insurers can use. Philosophy & Technology, 36(45). https://doi.org/10.1007/s13347-023-00624-9


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
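The abstract proposes sufficiency (well-calibration within groups) as the group fairness criterion insurers can test. As a rough, illustrative sketch only (the function name, binning scheme, and data below are hypothetical, not taken from the paper), such a check might group policyholders by predicted risk score and compare observed claim rates across protected groups within each score bin:

```python
from collections import defaultdict


def calibration_by_group(scores, outcomes, groups, n_bins=2):
    """Illustrative sufficiency check: for each group and score bin,
    compare the mean predicted risk to the observed claim rate.
    Sufficiency holds (approximately) when, within each bin, observed
    rates match the predicted risk regardless of group membership.
    Scores are assumed to lie in [0, 1); binning is equal-width."""
    # group -> bin -> [sum of scores, sum of outcomes, count]
    cells = defaultdict(lambda: defaultdict(lambda: [0.0, 0.0, 0]))
    for s, y, g in zip(scores, outcomes, groups):
        b = min(int(s * n_bins), n_bins - 1)  # clamp s == 1.0 into the top bin
        cell = cells[g][b]
        cell[0] += s
        cell[1] += y
        cell[2] += 1
    # Return, per group and bin: (mean predicted risk, observed claim rate)
    return {
        g: {b: (c[0] / c[2], c[1] / c[2]) for b, c in bins.items()}
        for g, bins in cells.items()
    }
```

Comparing the two numbers in each cell across groups gives a first-pass indication of whether premiums are systematically biased for or against a group relative to the risk it brings to the pool.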