Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-20217
Publication type: Conference paper
Type of review: Peer review (abstract)
New version available at: https://digitalcollection.zhaw.ch/handle/11475/22061
Title: Two to trust : AutoML for safe modelling and interpretable deep learning for robustness
Authors: Amirian, Mohammadreza
Tuggener, Lukas
Chavarriaga, Ricardo
Satyawan, Yvan Putra
Schilling, Frank-Peter
Schwenker, Friedhelm
Stadelmann, Thilo
et al.: No
DOI: 10.21256/zhaw-20217
Proceedings: Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020
Conference details: 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Santiago de Compostela, Spain, 29-30 August 2020
Issue Date: Aug-2020
Publisher / Ed. Institution: Springer
Language: English
Subjects: Automated deep learning; AutoDL; Adversarial attacks
Subject (DDC): 006: Special computer methods
Abstract: With great power comes great responsibility. The success of machine learning, especially deep learning, in research and practice has attracted a great deal of interest, which in turn necessitates increased trust. Sources of mistrust include matters of model genesis ("Is this really the appropriate model?") and interpretability ("Why did the model come to this conclusion?", "Is the model safe from being easily fooled by adversaries?"). In this paper, two partners for the trustworthiness tango are presented: recent advances and ideas, as well as practical applications in industry, in (a) Automated machine learning (AutoML), a powerful tool to optimize deep neural network architectures and fine-tune hyperparameters, which promises to build models in a safer and more comprehensive way; (b) Interpretability of neural network outputs, which addresses the vital question regarding the reasoning behind model predictions and provides insights to improve robustness against adversarial attacks.
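As a minimal illustration of the second theme (robustness against adversarial attacks), the sketch below implements the fast gradient sign method (FGSM), a standard single-step attack; the toy classifier, random input, and epsilon value are placeholder assumptions for demonstration and are not taken from the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in classifier and data; purely illustrative, not from the paper.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)  # stand-in input with gradient tracking
y = torch.tensor([1])                       # stand-in label

# Forward and backward pass to obtain the gradient of the loss w.r.t. the input.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: perturb the input along the sign of its gradient (epsilon is illustrative).
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

A single sign-of-gradient step like this is the simplest way to probe how easily a model's prediction can be flipped by a small input perturbation, which is one of the sources of mistrust the abstract refers to.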
URI: https://digitalcollection.zhaw.ch/handle/11475/20217
Fulltext version: Accepted version
License (according to publishing contract): Licence according to publishing contract
Departement: School of Engineering
Organisational Unit: 
Published as part of the ZHAW project: Ada – Advanced Algorithms for an Artificial Data Analyst
QualitAI - Quality control of industrial products via deep learning on images
Appears in collections: Publikationen School of Engineering

Files in This Item:
File: 2020_Amirian_etal_AutoML-for-safe-modelling_TAILOR_ECAI.pdf (Accepted Version, 2.88 MB, Adobe PDF)
Amirian, M., Tuggener, L., Chavarriaga, R., Satyawan, Y. P., Schilling, F.-P., Schwenker, F., & Stadelmann, T. (2020, August). Two to trust : AutoML for safe modelling and interpretable deep learning for robustness. Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. https://doi.org/10.21256/zhaw-20217
Amirian, M. et al. (2020) ‘Two to trust : AutoML for safe modelling and interpretable deep learning for robustness’, in Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. Springer. Available at: https://doi.org/10.21256/zhaw-20217.
M. Amirian et al., “Two to trust : AutoML for safe modelling and interpretable deep learning for robustness,” in Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Aug. 2020. doi: 10.21256/zhaw-20217.
AMIRIAN, Mohammadreza, Lukas TUGGENER, Ricardo CHAVARRIAGA, Yvan Putra SATYAWAN, Frank-Peter SCHILLING, Friedhelm SCHWENKER and Thilo STADELMANN, 2020. Two to trust : AutoML for safe modelling and interpretable deep learning for robustness. In: Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. Conference paper. Springer. August 2020
Amirian, Mohammadreza, Lukas Tuggener, Ricardo Chavarriaga, Yvan Putra Satyawan, Frank-Peter Schilling, Friedhelm Schwenker, and Thilo Stadelmann. 2020. “Two to Trust : AutoML for Safe Modelling and Interpretable Deep Learning for Robustness.” Conference paper. In Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020. Springer. https://doi.org/10.21256/zhaw-20217.
Amirian, Mohammadreza, et al. “Two to Trust : AutoML for Safe Modelling and Interpretable Deep Learning for Robustness.” Proceedings of the 1st TAILOR Workshop on Trustworthy AI at ECAI 2020, Springer, 2020, https://doi.org/10.21256/zhaw-20217.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.