Full metadata record
DC Field | Value | Language
dc.contributor.author | Denzel, Philipp | -
dc.contributor.author | Brunner, Stefan | -
dc.contributor.author | Luley, Paul-Philipp | -
dc.contributor.author | Frischknecht-Gruber, Carmen | -
dc.contributor.author | Reif, Monika Ulrike | -
dc.contributor.author | Schilling, Frank-Peter | -
dc.contributor.author | Amini, Amin | -
dc.contributor.author | Repetto, Marco | -
dc.contributor.author | Iranfar, Arman | -
dc.contributor.author | Weng, Joanna | -
dc.contributor.author | Chavarriaga, Ricardo | -
dc.date.accessioned | 2023-12-01T16:11:50Z | -
dc.date.available | 2023-12-01T16:11:50Z | -
dc.date.issued | 2023-11-02 | -
dc.identifier.uri | https://digitalcollection.zhaw.ch/handle/11475/29258 | -
dc.description.abstract | Explainability has been recognized as one of the key tenets for the development of trustworthy AI systems for health-related applications. As regulation for AI is being developed, organizations deploying health-oriented AI systems will have to comply with requirements on transparency and explainability. However, despite the imminent introduction of these regulations, actionable guidelines for assessing compliance are still lacking. We present ongoing work on developing a framework for assessing and certifying the transparency of AI systems. This framework makes an explicit link between the foreseen certification requirements and validated processes, algorithms, and methods for assessing the compliance of AI systems, resulting in a concrete workflow for performing AI certification. It is based on an analysis of the proposed AI regulation in the EU, recommended practices, and ISO standards, complemented by empirically validated state-of-the-art algorithmic methods for explainable AI in real-world applications. Stakeholders have unique requirements for the explainability of AI systems, while a wide variety of explainable AI methodologies is available, suited to different stakeholder preferences, data modalities, applications, and purposes. The nuanced selection of relevant methodologies is therefore an indispensable consideration in this framework, within which a taxonomy has been developed to guide the selection of the appropriate and applicable set of methods. The framework and the application of the taxonomy are illustrated through several health-related use cases. Take the case of a skin lesion classification system, involving as stakeholders the patient, the dermatologist, the developers, and the authorities. Here, several considerations guide the choice of methods: which stakeholder should receive the explanation, whether the model is directly interpretable, whether intrinsic or post-hoc explanatory methods are required, and whether explanations should be local or global. Among the methods suitable for the dermatologist based on these considerations: in cases where local explanations are required and deep learning methods are used, our framework will point to methods such as SHAP or LIME that illustrate which features in the image led to the model’s decision. Such methods might indicate that a lesion was classified as malignant due to its asymmetric shape, ill-defined border, or irregular colour. Likewise, developers and regulators may require other types of explanations, for example explanations of global behavior in conjunction with local behavior, or feature-level explanations that identify model weaknesses throughout the learning and verification stages. In essence, the presented framework will provide a concrete guide for researchers, developers, and certification bodies in the development, validation, and certification of explainable, transparent AI systems and promote the adoption of best practices for responsible AI innovation. | de_CH
dc.language.iso | en | de_CH
dc.rights | Licence according to publishing contract | de_CH
dc.subject | Explainable artificial intelligence | de_CH
dc.subject | Transparency | de_CH
dc.subject | Trustworthy AI | de_CH
dc.subject | EU AI Act | de_CH
dc.subject.ddc | 006: Special computer methods | de_CH
dc.title | A framework for assessing and certifying explainability of health-oriented AI systems | de_CH
dc.type | Conference: Other | de_CH
dcterms.type | Text | de_CH
zhaw.departement | School of Engineering | de_CH
zhaw.organisationalunit | Centre for Artificial Intelligence (CAI) | de_CH
zhaw.organisationalunit | Institut für Angewandte Mathematik und Physik (IAMP) | de_CH
zhaw.conference.details | Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023 | de_CH
zhaw.funding.eu | No | de_CH
zhaw.originated.zhaw | Yes | de_CH
zhaw.publication.status | publishedVersion | de_CH
zhaw.publication.review | Peer review (Abstract) | de_CH
zhaw.webfeed | Intelligent Vision Systems | de_CH
zhaw.webfeed | Responsible Artificial Intelligence Innovation | de_CH
zhaw.funding.zhaw | certAInty – A Certification Scheme for AI systems | de_CH
zhaw.author.additional | No | de_CH
zhaw.display.portrait | Yes | de_CH
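The abstract above mentions local, post-hoc explanation methods such as SHAP or LIME for a skin lesion classifier. As a minimal sketch of what such an explanation looks like in practice, the snippet below applies LIME's image explainer to a generic classifier; the `classifier` object, the `image` array, and the parameter values are hypothetical placeholders, not part of the framework described in the record.

```python
# Minimal sketch: local post-hoc explanation of an image classifier with LIME.
# `classifier` and `image` are hypothetical placeholders; any model exposing a
# batched predict() over HxWx3 arrays would work.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Batch of HxWx3 images -> per-class probabilities, shape (N, n_classes).
    return classifier.predict(np.stack(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,               # HxWx3 uint8 array, e.g. a dermoscopic lesion image
    predict_fn,
    top_labels=2,        # explain the two highest-scoring classes
    num_samples=1000,    # perturbations used to fit the local surrogate model
)

# Overlay the superpixels that pushed the prediction toward the top class,
# e.g. regions showing asymmetry, border irregularity, or colour variation.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,      # number of superpixels to highlight
)
overlay = mark_boundaries(img / 255.0, mask)
```

The resulting `overlay` marks the image regions that contributed most to the prediction, the kind of feature-level evidence the abstract suggests presenting to a dermatologist.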
Appears in collections: Publications School of Engineering

Files in This Item:
There are no files associated with this item.
APA: Denzel, P., Brunner, S., Luley, P.-P., Frischknecht-Gruber, C., Reif, M. U., Schilling, F.-P., Amini, A., Repetto, M., Iranfar, A., Weng, J., & Chavarriaga, R. (2023, November 2). A framework for assessing and certifying explainability of health-oriented AI systems. Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023.
Harvard: Denzel, P. et al. (2023) ‘A framework for assessing and certifying explainability of health-oriented AI systems’, in Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023.
IEEE: P. Denzel et al., “A framework for assessing and certifying explainability of health-oriented AI systems,” in Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023, Nov. 2023.
ISO 690: DENZEL, Philipp, Stefan BRUNNER, Paul-Philipp LULEY, Carmen FRISCHKNECHT-GRUBER, Monika Ulrike REIF, Frank-Peter SCHILLING, Amin AMINI, Marco REPETTO, Arman IRANFAR, Joanna WENG and Ricardo CHAVARRIAGA, 2023. A framework for assessing and certifying explainability of health-oriented AI systems. In: Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023. Conference presentation. 2 November 2023.
Chicago: Denzel, Philipp, Stefan Brunner, Paul-Philipp Luley, Carmen Frischknecht-Gruber, Monika Ulrike Reif, Frank-Peter Schilling, Amin Amini, et al. 2023. “A Framework for Assessing and Certifying Explainability of Health-Oriented AI Systems.” Conference presentation. In Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023.
MLA: Denzel, Philipp, et al. “A Framework for Assessing and Certifying Explainability of Health-Oriented AI Systems.” Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023, 2023.

