Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-25396
Full metadata record
dc.contributor.author: Doran, Hans Dermot
dc.date.accessioned: 2022-08-05T10:25:09Z
dc.date.available: 2022-08-05T10:25:09Z
dc.date.issued: 2022-06-23
dc.identifier.isbn: 978-3-645-50194-1
dc.identifier.uri: https://digitalcollection.zhaw.ch/handle/11475/25396
dc.description.abstract: In this relatively informal discussion paper we summarise issues in the domains of safety and security in machine learning that will affect industry sectors in the next five to ten years. Various products using neural network classification, most often in vision-related applications but also in predictive maintenance, have been researched and applied in real-world applications in recent years. Nevertheless, reports of underlying problems in both safety- and security-related domains, for instance adversarial attacks, have unsettled early adopters and are threatening to hinder wider-scale adoption of this technology. The problem for real-world applicability lies in being able to assess the risk of applying these technologies. In this discussion paper we describe the process of arriving at a machine-learnt neural network classifier, pointing out safety and security vulnerabilities in that workflow and citing relevant research where appropriate.
dc.language.iso: en
dc.publisher: WEKA
dc.rights: Licence according to publishing contract
dc.subject: Safety
dc.subject: Security
dc.subject: Deep learning
dc.subject: Neural network classifier
dc.subject: Functional safety
dc.subject.ddc: 006: Special computer methods
dc.title: Security and safety aspects of AI in industry applications
dc.type: Conference: Paper
dcterms.type: Text
zhaw.departement: School of Engineering
zhaw.organisationalunit: Institute of Embedded Systems (InES)
dc.identifier.doi: 10.48550/arXiv.2207.10809
dc.identifier.doi: 10.21256/zhaw-25396
zhaw.conference.details: Embedded World Conference, Nuremberg, Germany, 21-23 June 2022
zhaw.funding.eu: No
zhaw.originated.zhaw: Yes
zhaw.pages.end: 378
zhaw.pages.start: 373
zhaw.parentwork.editor: Sikora, Axel
zhaw.publication.status: publishedVersion
zhaw.publication.review: Peer review (Abstract)
zhaw.title.proceedings: Proceedings of Embedded World Conference 2022
zhaw.author.additional: No
zhaw.display.portrait: Yes
Appears in collections:Publikationen School of Engineering

Files in This Item:
2022_Doran_Security-safety-aspects-AI-industry-applications_EmbeddedWorld.pdf (710.2 kB, Adobe PDF)
Doran, H. D. (2022). Security and safety aspects of AI in industry applications [Conference paper]. In A. Sikora (Ed.), Proceedings of Embedded World Conference 2022 (pp. 373–378). WEKA. https://doi.org/10.48550/arXiv.2207.10809


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.