Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-29507
Full metadata record
dc.contributor.author: Baumann, Joachim
dc.contributor.author: Castelnovo, Alessandro
dc.contributor.author: Crupi, Riccardo
dc.contributor.author: Inverardi, Nicole
dc.contributor.author: Regoli, Daniele
dc.date.accessioned: 2024-01-04T14:55:39Z
dc.date.available: 2024-01-04T14:55:39Z
dc.date.issued: 2023-06-15
dc.identifier.isbn: 979-8-4007-0192-4
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/29507
dc.description.abstract: Nowadays, Machine Learning (ML) systems are widely used in various businesses and are increasingly being adopted to make decisions that can significantly impact people's lives. However, these decision-making systems rely on data-driven learning, which poses a risk of propagating the bias embedded in the data. Despite various attempts by the algorithmic fairness community to outline different types of bias in data and algorithms, there is still a limited understanding of how these biases relate to the fairness of ML-based decision-making systems. In addition, efforts to mitigate bias and unfairness are often agnostic to the specific type(s) of bias present in the data. This paper explores the nature of fundamental types of bias, discussing their relationship to moral and technical frameworks. To prevent harmful consequences, it is essential to comprehend how and where bias is introduced throughout the entire modelling pipeline and possibly how to mitigate it. Our primary contribution is a framework for generating synthetic datasets with different forms of biases. We use our proposed synthetic data generator to perform experiments in different scenarios to showcase the interconnection between biases and their effect on performance and fairness evaluations. Furthermore, we provide initial insights into mitigating specific types of bias through post-processing techniques. The implementation of the synthetic data generator and experiments can be found at https://github.com/rcrupiISP/BiasOnDemand. (For a rough, hypothetical illustration of the bias-injection idea, see the Python sketch after this metadata record.)
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.rights: https://creativecommons.org/licenses/by/4.0/
dc.subject: Bias
dc.subject: Fairness
dc.subject: Synthetic data
dc.subject: Moral worldview
dc.subject.ddc: 006: Special computer methods
dc.title: Bias on demand : a modelling framework that generates synthetic data with bias
dc.type: Conference paper
dcterms.type: Text
zhaw.departement: School of Engineering
zhaw.organisationalunit: Institut für Datenanalyse und Prozessdesign (IDP)
dc.identifier.doi: 10.1145/3593013.3594058
dc.identifier.doi: 10.21256/zhaw-29507
zhaw.conference.details: 6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Chicago, USA, 12-15 June 2023
zhaw.funding.eu: No
zhaw.originated.zhaw: Yes
zhaw.pages.end: 1013
zhaw.pages.start: 1002
zhaw.publication.status: publishedVersion
zhaw.publication.review: Peer review (publication)
zhaw.title.proceedings: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
zhaw.funding.snf: 187473
zhaw.funding.zhaw: Socially acceptable AI and fairness trade-offs in predictive analytics
zhaw.funding.zhaw: Algorithmic Fairness in data-based decision making: Combining ethics and technology
zhaw.author.additional: No
zhaw.display.portrait: Yes
zhaw.relation.references: https://github.com/rcrupiISP/BiasOnDemand
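
The abstract above describes a generator that produces synthetic datasets with tunable forms of bias. As a rough illustration of that idea only, here is a minimal, self-contained Python sketch. It does not reproduce the BiasOnDemand API; the function name generate_biased_dataset and the measurement_bias parameter are hypothetical. It injects a measurement bias into the observed proxy feature for one group and shows how a model trained on the biased data shifts a simple fairness metric (demographic parity difference).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def generate_biased_dataset(n=10_000, measurement_bias=0.0):
    """Toy generator (hypothetical names, not the BiasOnDemand API):
    A is a binary protected attribute, R is the true feature that drives
    the label Y, and Q is the observed proxy of R, distorted for group
    A=1 by `measurement_bias`."""
    A = rng.integers(0, 2, size=n)                 # protected attribute
    R = rng.normal(0.0, 1.0, size=n)               # true underlying feature
    Y = (R + rng.normal(0.0, 0.5, size=n) > 0).astype(int)      # unbiased label
    Q = R - measurement_bias * A + rng.normal(0.0, 0.2, size=n)  # biased proxy
    return Q.reshape(-1, 1), Y, A

for mb in (0.0, 1.0):
    X, Y, A = generate_biased_dataset(measurement_bias=mb)
    X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
        X, Y, A, test_size=0.3, random_state=0)
    pred = LogisticRegression().fit(X_tr, y_tr).predict(X_te)
    acc = (pred == y_te).mean()
    # Demographic parity difference: gap in positive-prediction rates.
    dpd = pred[a_te == 0].mean() - pred[a_te == 1].mean()
    print(f"measurement_bias={mb}: accuracy={acc:.3f}, DP difference={dpd:.3f}")

With measurement_bias=0.0 the two groups receive positive predictions at similar rates; with measurement_bias=1.0 the distorted proxy depresses predictions for group A=1, so the demographic parity gap widens and accuracy falls. This mirrors, in miniature, the paper's point that the specific type of injected bias drives performance and fairness evaluations; the actual framework covers a much richer set of bias forms.
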
Appears in collections: Publikationen School of Engineering

Files in This Item:
2023_Baumann-etal_Bias-on-demand_ACM.pdf (413.9 kB, Adobe PDF)
APA: Baumann, J., Castelnovo, A., Crupi, R., Inverardi, N., & Regoli, D. (2023). Bias on demand : a modelling framework that generates synthetic data with bias [Conference paper]. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1002–1013. https://doi.org/10.1145/3593013.3594058
Harvard: Baumann, J. et al. (2023) ‘Bias on demand : a modelling framework that generates synthetic data with bias’, in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, pp. 1002–1013. Available at: https://doi.org/10.1145/3593013.3594058.
IEEE: J. Baumann, A. Castelnovo, R. Crupi, N. Inverardi, and D. Regoli, “Bias on demand : a modelling framework that generates synthetic data with bias,” in Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Jun. 2023, pp. 1002–1013. doi: 10.1145/3593013.3594058.
ISO 690: BAUMANN, Joachim, Alessandro CASTELNOVO, Riccardo CRUPI, Nicole INVERARDI and Daniele REGOLI, 2023. Bias on demand : a modelling framework that generates synthetic data with bias. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. Conference paper. Association for Computing Machinery. 15 June 2023. pp. 1002–1013. ISBN 979-8-4007-0192-4
Chicago: Baumann, Joachim, Alessandro Castelnovo, Riccardo Crupi, Nicole Inverardi, and Daniele Regoli. 2023. “Bias on Demand : A Modelling Framework That Generates Synthetic Data with Bias.” Conference paper. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1002–13. Association for Computing Machinery. https://doi.org/10.1145/3593013.3594058.
MLA: Baumann, Joachim, et al. “Bias on Demand : A Modelling Framework That Generates Synthetic Data with Bias.” Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2023, pp. 1002–13, https://doi.org/10.1145/3593013.3594058.

