Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-30985
Publication type: Conference paper
Type of review: Peer review (publication)
Title: Improving resilience and robustness in artificial intelligence systems through adversarial training and verification
Authors: Brunner, Stefan; Reif, Monika Ulrike; Rejzek, Martin
et al.: No
DOI: 10.21256/zhaw-30985
Proceedings: Advances in Reliability, Safety and Security, Part 4
Editors of the parent work: Kołowrocki, Krzysztof; Dąbrowska, Ewa
Page(s): 39
Pages to: 48
Conference details: 34th European Safety and Reliability Conference (ESREL), Cracow, Poland, 23-27 June 2024
Issue Date: Jun-2024
Publisher / Ed. Institution: Polish Safety and Reliability Association, Gdynia
ISBN: 978-83-68136-16-6; 978-83-68136-03-6
Language: English
Subjects: Machine learning; Adversarial perturbation; Deep learning; Neural network resilience
Subject (DDC): 006: Special computer methods
Abstract: This contribution presents a comprehensive review of the applicability of adversarial perturbations in the training and verification of neural networks. Adversarial perturbations, designed to deliberately manipulate inputs, have emerged as a powerful tool for improving the robustness and generalization of neural networks. This review systematically examines the utilization of adversarial perturbations in both the training and verification phases of neural network development. In the training phase, adversarial perturbations have been harnessed to enhance model resilience against adversarial attacks by augmenting the training dataset with perturbed examples. Various techniques, such as adversarial training and robust optimization, are explored for their effectiveness in fortifying neural networks against both traditional and advanced adversarial attacks. In the verification realm, adversarial perturbations offer a novel approach for assessing model reliability and safety. Adversarial examples generated during verification expose vulnerabilities and aid in uncovering potential shortcomings of neural network architectures. This review delves into the evaluation of various state-of-the-art adversarial perturbations for different model architectures and datasets and provides a comprehensive analysis of their applications in both training and verification of neural networks. By providing a thorough overview of their benefits, limitations, and evolving methodologies, this review not only contributes to a deeper understanding of the pivotal role adversarial perturbations play in enhancing the robustness and resilience of neural networks, but also provides a basis for selecting the appropriate perturbation for specific tasks such as training or verification.
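The adversarial-training idea summarized in the abstract (augmenting the training set with deliberately perturbed examples) can be illustrated with a minimal sketch. The code below is not from the paper: it uses a toy logistic-regression model in place of a neural network, and an FGSM-style perturbation (stepping each input in the sign of the loss gradient). All names, data, and hyperparameters are illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # Toy "model": logistic regression on 2-D inputs
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

def input_gradient(w, b, x, y):
    # Gradient of binary cross-entropy w.r.t. the INPUT: (p - y) * w
    p = predict(w, b, x)
    return [(p - y) * w[0], (p - y) * w[1]]

def fgsm(w, b, x, y, eps):
    # FGSM-style perturbation: step by eps in the sign of the input gradient,
    # i.e. the direction that increases the loss for this example
    g = input_gradient(w, b, x, y)
    return [x[0] + eps * (1 if g[0] >= 0 else -1),
            x[1] + eps * (1 if g[1] >= 0 else -1)]

def adversarial_train(data, eps=0.1, lr=0.5, epochs=200):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            # Train on the clean example AND its adversarial counterpart
            for xi in (x, fgsm(w, b, x, y, eps)):
                p = predict(w, b, xi)
                w[0] -= lr * (p - y) * xi[0]
                w[1] -= lr * (p - y) * xi[1]
                b -= lr * (p - y)
    return w, b

random.seed(0)
# Linearly separable toy data: label 1 iff x0 + x1 > 0
points = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(20)]
data = [(x, 1 if x[0] + x[1] > 0 else 0) for x in points]

w, b = adversarial_train(data)
acc = sum((predict(w, b, x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

The paper's scope is of course much broader (state-of-the-art perturbations, deep architectures, and verification use cases); this sketch only shows the core training-time augmentation loop.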
URI: https://esrel2024.com/wp-content/uploads/articles/part4/improving-resilience-and-robustness-in-artificial-intelligence-systems-through-adversarial-training-and-verification.pdf ; https://digitalcollection.zhaw.ch/handle/11475/30985
Fulltext version: Published version
License (according to publishing contract): Licence according to publishing contract
Departement: School of Engineering
Organisational Unit: Institute of Applied Mathematics and Physics (IAMP)
Appears in collections: Publikationen School of Engineering
Files in This Item:
File | Size | Format
---|---|---
2024_Brunner-etal_Improving-resilience-and-robustness-in-artificial-intelligence-systems.pdf | 2.52 MB | Adobe PDF
Brunner, S., Reif, M. U., & Rejzek, M. (2024). Improving resilience and robustness in artificial intelligence systems through adversarial training and verification [Conference paper]. In K. Kołowrocki & E. Dąbrowska (Eds.), Advances in Reliability, Safety and Security, Part 4 (pp. 39–48). Polish Safety and Reliability Association. https://doi.org/10.21256/zhaw-30985
Brunner, S., Reif, M.U. and Rejzek, M. (2024) ‘Improving resilience and robustness in artificial intelligence systems through adversarial training and verification’, in K. Kołowrocki and E. Dąbrowska (eds) Advances in Reliability, Safety and Security, Part 4. Gdynia: Polish Safety and Reliability Association, pp. 39–48. Available at: https://doi.org/10.21256/zhaw-30985.
S. Brunner, M. U. Reif, and M. Rejzek, “Improving resilience and robustness in artificial intelligence systems through adversarial training and verification,” in Advances in Reliability, Safety and Security, Part 4, Jun. 2024, pp. 39–48. doi: 10.21256/zhaw-30985.
BRUNNER, Stefan, Monika Ulrike REIF and Martin REJZEK, 2024. Improving resilience and robustness in artificial intelligence systems through adversarial training and verification. In: Krzysztof KOŁOWROCKI and Ewa DĄBROWSKA (eds.), Advances in Reliability, Safety and Security, Part 4 [online]. Conference paper. Gdynia: Polish Safety and Reliability Association. June 2024. pp. 39–48. ISBN 978-83-68136-16-6. Available at: https://esrel2024.com/wp-content/uploads/articles/part4/improving-resilience-and-robustness-in-artificial-intelligence-systems-through-adversarial-training-and-verification.pdf
Brunner, Stefan, Monika Ulrike Reif, and Martin Rejzek. 2024. “Improving Resilience and Robustness in Artificial Intelligence Systems through Adversarial Training and Verification.” Conference paper. In Advances in Reliability, Safety and Security, Part 4, edited by Krzysztof Kołowrocki and Ewa Dąbrowska, 39–48. Gdynia: Polish Safety and Reliability Association. https://doi.org/10.21256/zhaw-30985.
Brunner, Stefan, et al. “Improving Resilience and Robustness in Artificial Intelligence Systems through Adversarial Training and Verification.” Advances in Reliability, Safety and Security, Part 4, edited by Krzysztof Kołowrocki and Ewa Dąbrowska, Polish Safety and Reliability Association, 2024, pp. 39–48, https://doi.org/10.21256/zhaw-30985.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.