DSpace Collection:
https://digitalcollection.zhaw.ch/handle/11475/1090
2024-03-28T19:39:33Z
https://digitalcollection.zhaw.ch/handle/11475/30412
Title: An industrial educational laboratory at Ducati Foundation : narrative approaches to mechanics based upon continuum physics
Authors: Corni, Federico; Fuchs, Hans Ulrich; Savino, Giovanni
Abstract: This is a description of the conceptual foundations used for designing a novel learning environment for mechanics implemented as an Industrial Educational Laboratory – called Fisica in Moto (FiM) – at the Ducati Foundation in Bologna. In this paper, we will describe the motivation for and design of the conceptual approach to mechanics used in the lab – as such, the paper is theoretical in nature. The goal of FiM is to provide an approach to the teaching of mechanics based upon imaginative structures found in continuum physics suitable to engineering and science. We show how continuum physics creates models of mechanical phenomena by using momentum and angular momentum as primitive quantities. We analyse this approach in terms of cognitive linguistic concepts such as conceptual metaphor and narrative framing of macroscopic physical phenomena. The model discussed here has been used in the didactical design of the actual lab and raises questions for an investigation of student learning of mechanics in a narrative setting.
Date: 2017-01-01
https://digitalcollection.zhaw.ch/handle/11475/30408
Title: ScalaGrad : a statically typed automatic differentiation library for safer data science
Authors: Meyer, Benjamin; Stadelmann, Thilo; Lüthi, Marcel
Abstract: While the data science ecosystem is dominated by programming languages that do not feature a strong type system, it is widely agreed that using strongly typed programming languages leads to more maintainable and less error-prone code and ultimately more trustworthy results. We believe Scala 3 would be an excellent contender for data science in a strongly typed language, but it lacks a general automatic differentiation library, e.g., for gradient-based learning. We present ScalaGrad, a general and type-safe automatic differentiation library designed for Scala. It builds on and improves a novel approach from the functional programming community using immutable duals, which is conceptually simple, asymptotically optimal and allows differentiation of higher-order code. We demonstrate the ease of use, robust performance, and versatility of ScalaGrad through its applications to deep learning, higher-order optimization, and gradient-based sampling. Specifically, we show an execution speed comparable to PyTorch for a simple deep learning use case, capabilities for higher-order differentiation, and opportunities to design more specialized libraries decoupled from ScalaGrad. As data science challenges evolve in complexity, ScalaGrad provides a pathway to harness the inherent advantages of strongly typed languages, ensuring both robustness and maintainability.
Date: 2024-05-31
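The immutable-dual approach that ScalaGrad builds on can be illustrated in a few lines. The sketch below is a minimal forward-mode illustration in Python (ScalaGrad itself is Scala, and its actual API is not shown here); the names `Dual` and `derivative` are hypothetical choices for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable dual number: value plus derivative part
class Dual:
    value: float
    deriv: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    """Forward-mode derivative of f at x, obtained by seeding a dual."""
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + 3x) = 2x + 3, so 7.0 at x = 2
print(derivative(lambda x: x * x + 3 * x, 2.0))
```

Because each operation returns a fresh `Dual`, no mutable tape is needed, which is what makes the approach conceptually simple and type-safe.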
https://digitalcollection.zhaw.ch/handle/11475/30407
Title: Deep neural networks for automatic speaker recognition do not learn supra-segmental temporal features
Authors: Neururer, Daniel; Dellwo, Volker; Stadelmann, Thilo
Abstract: While deep neural networks have shown impressive results in automatic speaker recognition and related tasks, it is dissatisfactory how little is understood about what exactly is responsible for these results. Part of the success has been attributed in prior work to their capability to model supra-segmental temporal information (SST), i.e., learn rhythmic-prosodic characteristics of speech in addition to spectral features. In this paper, we (i) present and apply a novel test to quantify to what extent the performance of state-of-the-art neural networks for speaker recognition can be explained by modeling SST; and (ii) present several means to force respective nets to focus more on SST and evaluate their merits. We find that a variety of CNN- and RNN-based neural network architectures for speaker recognition do not model SST to any sufficient degree, even when forced. The results provide a highly relevant basis for impactful future research into better exploitation of the full speech signal and give insights into the inner workings of such networks, enhancing explainability of deep learning for speech technologies.
Date: 2024-03-26
https://digitalcollection.zhaw.ch/handle/11475/30396
Title: Accounting and billing challenges in large scale emerging cloud technologies
Authors: Serhiienko, Oleksii; Harsh, Piyush
Abstract: Billing models that can easily adapt to emerging market opportunities are essential to the long-term survival of any business. Accounting and billing is also one of the few processes with a wide impact on legal and regulatory compliance, revenue lines, and customer retention models of all businesses. In the era of rapid technology shifts, with the emergence of Fog and Edge deployment models and the marriage of IoT and cloud, which promises smart-everything everywhere, it is paramount to understand what new challenges must be addressed within any billing framework. In this paper, we list several emerging challenges which should be overcome in architecting any future-ready billing platform. We also briefly present an analysis of a few technologies which could be used in prototyping such a solution. We present our proof-of-concept experiment along with initial results highlighting the feasibility of our proposed architecture towards a scalable billing framework for massively distributed IoT applications at the edge.
Date: 2020-05-01
https://digitalcollection.zhaw.ch/handle/11475/30395
Title: Integrating uncertainty in deep neural networks for MRI based stroke analysis
Authors: Herzog, Lisa; Murina, Elvis; Dürr, Oliver; Wegener, Susanne; Sick, Beate
Abstract: At present, the majority of the proposed Deep Learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, a quantification of the reliability of automated image analysis is essential, in particular in medicine when physicians rely on the results for making critical treatment decisions. In this work, we provide an entire framework to diagnose ischemic stroke patients incorporating Bayesian uncertainty into the analysis procedure. We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images with corresponding uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods are proposed and evaluated, which combine the individual image-level predictions. Those methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image level, representing a significant improvement of 2% over a non-Bayesian counterpart. The best patient aggregation method yielded 95.89% accuracy. Integrating uncertainty information about image predictions in aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter out critical patient diagnoses that should be examined more closely by a medical doctor. We therefore recommend using Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for the detection of uncertain aggregations at the patient level.
Date: 2020-10-01
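The idea of propagating image-level uncertainty into a patient-level aggregate can be sketched as follows. This is a minimal illustration, not one of the aggregation methods evaluated in the paper: it assumes binary predictive entropy as the uncertainty measure and a certainty-weighted average as the aggregation rule, and all function names are hypothetical.

```python
import math

def entropy(p):
    """Binary predictive entropy in bits (0 = certain, 1 = maximally uncertain)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def image_prediction(mc_probs):
    """Mean lesion probability and uncertainty over Monte Carlo forward passes."""
    p = sum(mc_probs) / len(mc_probs)
    return p, entropy(p)

def patient_score(images_mc_probs):
    """Certainty-weighted average of the image-level predictions."""
    preds = [image_prediction(mc) for mc in images_mc_probs]
    weights = [1.0 - u for _, u in preds]
    if sum(weights) == 0:
        return 0.5  # every image maximally uncertain: undecided
    return sum(w * p for (p, _), w in zip(preds, weights)) / sum(weights)

# two confident lesion images outweigh one highly uncertain counter-example
score = patient_score([[0.9, 0.95, 0.92], [0.88, 0.9, 0.86], [0.5, 0.55, 0.45]])
print(round(score, 3))
```

Down-weighting uncertain images pushes ambiguous patients toward higher aggregate uncertainty, which is the behavior the abstract describes as useful for flagging cases for closer examination.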
https://digitalcollection.zhaw.ch/handle/11475/30388
Title: Custom static analysis to enhance insight into the usage of in-house libraries
Authors: van de Laar, Piërre; Corvino, Rosilde; Mooij, Arjan J.; van Wezep, Hans; Rosmalen, Raymond
Abstract: For software maintenance and evolution, insight into the codebase is crucial. One way to enhance insight is the application of static analysis to extract and visualize program-specific relations from the code itself, such as call graphs and inheritance trees. Yet, software often contains in-house libraries: unique, domain-specific libraries whose usage is typically scattered throughout the codebase. To provide sufficient insight into the usage of those libraries, the static analysis must be customized with domain-specific information.
In this paper, we propose a method to enhance insight into the usage of in-house libraries by producing custom overviews. Furthermore, we describe three exploratory case studies targeting industrial C++ and Ada codebases, in which the method was developed, evolved, and validated.
The method prescribes how to create custom overviews using static analysis iteratively, starting from a user-provided, initial specification of proper library usage using code patterns. As a safeguard, the method includes cross-checks to detect code fragments that deviate from proper library usage. Whenever such a deviating library usage is found, the code owners determine whether that deviating library usage should be added to the specification of proper library usage or the code fragment should be made compliant. The latter alternative makes both the codebase more regular and keeps the custom static analysis simpler. The method creates custom overviews that reveal opportunities to improve the usage of the in-house libraries, e.g., the removal of domain-specific redundant code which cannot be detected using generic tools, such as compilers and linters.
We observed that industrial codebases are regular enough to create custom overviews using static analysis in the three exploratory case studies. Furthermore, we observed that the cross-checks, which detect deviating library usage, ensure the validity and completeness of the custom overviews. We conclude that producing custom overviews for in-house libraries using the method is valuable and feasible.
Date: 2024-06-01
https://digitalcollection.zhaw.ch/handle/11475/30377
Title: Towards automated information security governance
Authors: Trammell, Ariane; Gehring, Benjamin; Isele, Marco; Spielmann, Yvo; Zahnd, Valentin
Abstract: Securing a company is not an easy task. Many organizations such as NIST, CIS, or ISO offer frameworks that provide comprehensive security measures. However, those frameworks are generally large and require expert knowledge to be tailored to a given organization. Since such experts are rare, we propose an automated solution that selects security controls and prioritizes them according to an organization's needs. We performed initial steps towards the implementation of the proposed solution by evaluating how Natural Language Processing can be used to select security controls that are relevant for the assets of a company and by showing that we can prioritize the selected controls based on the current threat landscape. We expect the proposed solution to be a major benefit for all organizations that intend to improve their security posture but are limited in specialized personnel.
Date: 2024-01-01
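A drastically simplified sketch of the control-selection idea: match asset descriptions against control texts and weight the matches by the current threat landscape. The Jaccard overlap, the threshold, and all names below are illustrative assumptions, not the NLP pipeline used in the paper.

```python
def tokenize(text):
    return {w.strip(".,").lower() for w in text.split()}

def relevance(asset, control):
    """Jaccard overlap between an asset description and a control text."""
    a, c = tokenize(asset), tokenize(control)
    return len(a & c) / len(a | c)

def select_controls(assets, controls, threat_weight, threshold=0.05):
    """Rank controls matching any asset, weighted by current threat level."""
    scored = []
    for name, text in controls.items():
        match = max(relevance(asset, text) for asset in assets)
        if match >= threshold:
            scored.append((match * threat_weight.get(name, 1.0), name))
    return [name for _, name in sorted(scored, reverse=True)]

assets = ["public web server handling customer data"]
controls = {
    "patch web server software": "keep web server software up to date",
    "encrypt customer data at rest": "encrypt stored customer data",
    "secure wireless access points": "configure wireless network security",
}
threat_weight = {"patch web server software": 2.0}  # e.g. active exploitation
print(select_controls(assets, controls, threat_weight))
```

A real system would replace the token overlap with proper NLP similarity, but the selection-then-prioritization structure is the same.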
https://digitalcollection.zhaw.ch/handle/11475/30376
Title: FeedMeter : evaluating the quality of community-driven threat intelligence
Authors: Rüedlinger, Andreas; Klauser, Rebecca; Lamprakis, Pavlos; Happe, Markus; Tellenbach, Bernhard; Veyisoglu, Onur; Trammell, Ariane
Abstract: A sound understanding of the adversary in the form of cyber threat intelligence (CTI) is key to successful cyber defense. Various sources of CTI exist; however, there is no state-of-the-art method to approximate feed quality in an automated and continuous way. In addition, finding, combining and maintaining relevant feeds is very laborious and impedes taking advantage of the full potential of existing feeds. We propose FeedMeter, a platform that collects, normalizes, and aggregates threat intelligence feeds and continuously monitors them using eight descriptive metrics that approximate the feed quality. The platform aims to reduce the workload of duplicated manual processing and maintenance tasks and shares valuable insights about threat intelligence feeds. Our evaluation of a FeedMeter prototype with more than 150 OSINT sources, conducted over four years, shows that the platform has a real benefit for the community and that the metrics are promising approximations of source quality. A comparison with a prevalent commercial threat intelligence feed further strengthens this finding.
Date: 2024-01-01
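A toy version of descriptive feed metrics might look as follows. The three indicators below (volume, average age, uniqueness relative to other feeds) are illustrative assumptions, not FeedMeter's eight metrics, which are not enumerated in the abstract.

```python
from datetime import date

def feed_metrics(feed, other_feeds, today):
    """Three toy quality indicators for a threat indicator feed.

    feed: dict mapping indicator -> date first seen.
    other_feeds: list of indicator sets from competing feeds.
    """
    indicators = set(feed)
    others = set().union(*other_feeds) if other_feeds else set()
    volume = len(indicators)
    # timeliness: average indicator age in days (lower = fresher)
    avg_age = sum((today - d).days for d in feed.values()) / volume
    # uniqueness: share of indicators no other feed provides
    uniqueness = len(indicators - others) / volume
    return {"volume": volume, "avg_age_days": avg_age, "uniqueness": uniqueness}

feed = {"1.2.3.4": date(2024, 1, 1), "evil.example": date(2024, 1, 9)}
others = [{"1.2.3.4"}, {"5.6.7.8"}]
m = feed_metrics(feed, others, today=date(2024, 1, 11))
print(m)
```

Recomputing such metrics on every ingest is what allows feed quality to be monitored continuously rather than assessed once.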
https://digitalcollection.zhaw.ch/handle/11475/30368
Title: Picture of the month : Dumbo the elephant
Authors: Friso, Francesca
Date: 2024-01-01
https://digitalcollection.zhaw.ch/handle/11475/30356
Title: The influence of rotator cuff tear type and weight bearing on shoulder biomechanics in an ex vivo simulator experiment
Authors: Genter, Jeremy; Croci, Eleonora; Oberreiter, Birgit; Eckers, Franziska; Bühler, Dominik; Gascho, Dominic; Müller, Andreas M.; Mündermann, Annegret; Baumgartner, Daniel
Abstract: Glenohumeral biomechanics after rotator cuff (RC) tears have not been fully elucidated. This study aimed to investigate the muscle compensatory mechanism in weight-bearing shoulders with RC tears and assess the induced pathomechanics (i.e., glenohumeral translation, joint instability, center of force (CoF), joint reaction force).
An experimental glenohumeral simulator with a muscle-mimicking cable system was used to simulate 30° scaption motion. Eight fresh-frozen shoulders were prepared and mounted in the simulator. Specimen-specific scapular anthropometry was used to test six RC tear types, with the intact RC serving as the control, and three weight-bearing loads, with the non-weight-bearing condition serving as the control. Glenohumeral translation was calculated using the instantaneous helical axis. CoF, muscle forces, and joint reaction forces were measured using force sensors integrated into the simulator. Linear mixed-effects models (RC tear type and weight bearing) with random effects (specimen and sex) were used to assess differences in glenohumeral biomechanics.
RC tears did not change the glenohumeral translation (p > 0.05) but shifted the CoF superiorly (p ≤ 0.005). Glenohumeral translation and joint reaction forces increased with increasing weight bearing (p < 0.001). RC and deltoid muscle forces increased with the presence of RC tears (p ≤ 0.046) and increased weight bearing (p ≤ 0.042).
The synergistic muscles compensated for the torn RC tendons, and the glenohumeral translation remained comparable to that for the intact RC tendons. However, in RC tears, the more superior CoF was close to where glenoid erosion occurs in RC tear patients with secondary osteoarthritis. These findings underscore the importance of early detection and precise management of RC tears.
Date: 2024-01-01
https://digitalcollection.zhaw.ch/handle/11475/30353
Title: Data driven value creation in industrial services including remanufacturing
Authors: Stucki, Melissa; Meierhofer, Jürg; Gal, Barna; Gallina, Viola; Eisl, Stefanie
Abstract: In the era of the twin transition, companies face complex challenges. Economical resource usage and efficiency are particularly important in the manufacturing sector. Product-service systems are seen as a promising solution that can meet the expectations regarding efficiency in a sustainable way. However, there is huge potential regarding the evaluation of sustainable value creation. The paper focuses on the value creation process in product-service systems including offers with remanufactured products.
Against this background, the goal of this paper is to describe how data-driven industrial services can create sustainable value in the sense of the triple bottom line. Based on previous work, a quantitative model for the assessment and optimization of this value creation is extended to include additional remanufacturing strategies. The value optimization model integrates the different perspectives of provider, customer, and society. The numerical evaluation of this model for a specific application case shows that economic and ecological value creation can be jointly achieved and optimized, and which service arrangements lead to this optimization.
Date: 2024-01-01
https://digitalcollection.zhaw.ch/handle/11475/30351
Title: Möglichkeiten des PFAS-Ersatzes und wissenschaftliche Ansätze
Authors: Schneider, Toni
Date: 2024-03-14
https://digitalcollection.zhaw.ch/handle/11475/30348
Title: Ökologischer & wirtschaftlicher Nutzen mit industriellen Smart Services
Authors: Meierhofer, Jürg; Stucki, Melissa
Abstract: Smart services help industrial companies create more economic value for their customers, partners, and themselves. Beyond that, these services can also generate ecological benefits, e.g., through the optimized operation or more efficient maintenance of products. A prerequisite, however, is that the economic and ecological goals are pursued jointly and systematically when designing the industrial services.
Date: 2024-03-01
https://digitalcollection.zhaw.ch/handle/11475/30347
Title: Data-driven information extraction and enrichment of molecular profiling data for cancer cell lines
Authors: Smith, Ellery; Paloots, Rahel; Giagkos, Dimitris; Baudis, Michael; Stockinger, Kurt
Abstract: With the proliferation of research means and computational methodologies, published biomedical literature is growing exponentially in number and volume. Cancer cell lines are frequently used models in biological and medical research that are currently applied for a wide range of purposes, from studies of cellular mechanisms to drug development, which has led to a wealth of related data and publications. Sifting through large quantities of text to gather relevant information on cell lines of interest is tedious and extremely slow when performed by humans. Hence, novel computational information extraction and correlation mechanisms are required to boost meaningful knowledge extraction.
In this work, we present the design, implementation and application of a novel data extraction and exploration system. This system extracts deep semantic relations between textual entities from scientific literature to enrich existing structured clinical data concerning cancer cell lines. We introduce a new public data exploration portal, which enables automatic linking of genomic copy number variant plots with ranked, related entities such as affected genes. Each relation is accompanied by literature-derived evidence, allowing for deep, yet rapid, literature search, using existing structured data as a springboard.
Date: 2024-03-01
https://digitalcollection.zhaw.ch/handle/11475/30344
Title: Modular and portable time-resolved fluorescence measurement system
Authors: Hagen, Raphael; Spano, Fabrizio; Bonmarin, Mathias; Fehr, Daniel
Abstract: With the increasing importance of monitoring-based preventive medicine and advances in the development of fluorometric assays, small and more affordable time-resolved fluorescence measurement techniques are gaining acceptance in biomedical applications [1], [2]. Often these devices are only designed to detect basic properties of a marker and do not have the essential features that would enable the detection of more complex processes in a fluorometric assay.
Further description: [1] O. Alonso, N. Franch, J. Canals, K. Arias-Alpízar, E. de la Serna, E. Baldrich, and A. Diéguez, “An internet of things-based intensity and time-resolved fluorescence reader for point-of-care testing,” Biosensors and Bioelectronics, vol. 154, p. 112074, Apr. 2020.
[2] D. Xiao, Z. Zang, N. Sapermsap, Q. Wang, W. Xie, Y. Chen, and D. D. U. Li, “Dynamic fluorescence lifetime sensing with CMOS single-photon avalanche diode arrays and deep learning processors,” Biomedical Optics Express, vol. 12, pp. 3450–3462, June 2021.
Date: 2023-12-15
https://digitalcollection.zhaw.ch/handle/11475/30341
Title: Design and validation of isomorphic crystal library for nonlinear optics and THz wave generation
Authors: Shin, Bong‐Rim; Yu, In Cheol; Puc, Uros; Kim, Won Tae; Yoon, Woojin; Kim, Chaeyoon; Yun, Hoseop; Kim, Dongwook; Jazbinsek, Mojca; Rotermund, Fabian; Kwon, O‐Pil
Abstract: Development of new organic crystals possessing large second-order optical nonlinearity is very challenging because of the strong tendency of centrosymmetric dipole–dipole molecular assembly in crystals. This tendency makes it difficult to develop various analogous crystals that allow fine tuning of optical and physical properties to enhance the device performance. A design approach of an isomorphic crystal library consisting of 11 highly efficient nonlinear optical salt crystals is reported. Analyzing the so-called isomorphic tolerance space in previously reported mother crystals (PMnXQ chromophores, where PM denotes the piperidin-4-ylmethanol electron donor and n corresponds to the substituted position of the halogen (X) group on the quinolinium (Q) electron acceptor), various substituents are introduced into the PMnXQ crystals at different positions, considering their space-filling characteristics and interionic interaction ability. All 11 PMnXQ crystals exhibit an isomorphic (or pseudo-isomorphic) crystal structure, in which the cationic chromophores form a perfectly parallel assembly for maximizing the second-order nonlinear optical susceptibility. The optical, physical, and crystal characteristics of the newly designed, synthesized, and grown isomorphic PMnXQ crystals show both similarities and differences. Excellent THz wave-generation performance is demonstrated in both kHz- and MHz-repetition optical pump systems with the new PMnXQ crystals. Therefore, the design approach using isomorphic tolerance space is very attractive for developing diverse isomorphic analogous organic crystals.
Date: 2023-07-03
https://digitalcollection.zhaw.ch/handle/11475/30340
Title: Local rigidity by flexibility : unusual design for organic THz‐device materials
Authors: Kim, Dong‐Joo; Yu, In Cheol; Jazbinsek, Mojca; Kim, Chaeyoon; Yoon, Woojin; Yun, Hoseop; Kim, Sang‐Wook; Kim, Dongwook; Rotermund, Fabian; Kwon, O‐Pil
Abstract: Terahertz (THz) waves interact with molecular phonon vibrations of organic matter. When designing organic THz-device materials, conformationally flexible groups (CFGs) are in most cases avoided. CFGs create many low-energy conformers with high conformational entropy, which results in many large phonon vibration modes that lead to undesired self-absorption of THz waves. Here, nonpolar CFGs with only weak intermolecular interaction capability are unusually introduced into organic THz-device materials utilized for efficient THz wave generation. Newly designed THz-source crystals possess nonpolar methylene (CH2)n units with high conformational flexibility. Compared to previously reported benchmark crystals without methylene CFGs, introducing methylene CFGs significantly reduces the void volume in the newly designed crystals. This leads to the suppression of molecular phonon vibrations below 2.0 THz (i.e., introducing flexibility results in local rigidity). At infrared pump wavelengths, the new CFG-containing crystals generate a strong THz electric field that is one order of magnitude stronger than that generated in inorganic ZnTe crystals. CFG-containing crystals exhibit a flatter spectral shape of the generated THz wave than benchmark crystals without methylene CFGs. Therefore, the introduction of CFGs is a very intriguing design strategy for organic THz-device materials to reduce the limitations caused by phonon vibrations.
Date: 2023-11-06
https://digitalcollection.zhaw.ch/handle/11475/30337
Title: Variations of skin thermal diffusivity on different skin regions
Authors: Bajrami, Dardan; Zubiaga, Asier; Renggli, Timon; Kirsch, Christoph; Spano, Fabrizio; Fehr, Daniel; von Schulthess, Patrick; Lindhorst-Peters, Alisa; Huber, Stephanie; Roider, Elisabeth; Rossi, René M.; Navarini, Alexander A.; Bonmarin, Mathias
Abstract: Background and Objective: Skin thermal diffusivity plays a crucial role in various applications, including laser therapy and cryogenic skin cooling. This study investigates the correlation between skin thermal diffusivity and two important skin parameters, melanin content and erythema, in a cohort of 102 participants.
Methods: An in-house developed device based on transient temperature measurement was used to assess thermal diffusivity at different body locations. Melanin content and erythema were measured using a colorimeter. Statistical analysis was performed to examine potential correlations.
Results: The results showed that the measured thermal diffusivity values were consistent with previous reports, with variations observed among subjects. No significant correlation was found between thermal diffusivity and melanin content or erythema. This suggests that other factors, such as skin hydration or epidermis thickness, may have a more dominant influence on skin thermal properties.
Conclusion: This research provides valuable insights into the complex interplay between skin thermal properties and physiological parameters, with potential implications for cosmetic and clinical dermatology applications.
Date: 2024-03-01
https://digitalcollection.zhaw.ch/handle/11475/30336
Title: Ex vivo experimental strategies for assessing unconstrained shoulder biomechanics : a scoping review
Authors: Genter, Jeremy; Croci, Eleonora; Ewald, Hannah; Müller, Andreas M.; Mündermann, Annegret; Baumgartner, Daniel
Abstract: Background: Biomechanical studies of the shoulder often choose an ex vivo approach, especially when investigating the active and passive contribution of individual muscles. Although various simulators of the glenohumeral joint and its muscles have been developed, to date a testing standard has not been established. The objective of this scoping review was to present an overview of methodological and experimental studies describing ex vivo simulators that assess unconstrained, muscle-driven shoulder biomechanics.
Methods: All studies with ex vivo or mechanical simulation experiments using an unconstrained glenohumeral joint simulator and active components mimicking the muscles were included in this scoping review. Static experiments and humeral motion imposed through an external guide, e.g., a robotic device, were excluded.
Results: Nine different glenohumeral simulators were identified in 51 studies after the screening process. We identified four control strategies characterized by: (a) using a primary loader to determine the secondary loaders with constant force ratios; (b) using variable muscle force ratios according to electromyography; (c) calibrating the muscle path profile and controlling each motor according to this profile; or (d) using muscle optimization.
Conclusion: The simulators with control strategy (b) (n = 1) or (d) (n = 2) appear most promising due to their capability to mimic physiological muscle loads.
Date: 2023-07-01
https://digitalcollection.zhaw.ch/handle/11475/30333
Title: Fatigue lifetime analysis of POM gears for generalized tooth root loading and shapes
Authors: Eberlein, Robert; Düzel, Sven
Abstract: Today, commonly used calculation methods for the determination of the tooth root load capacity of polymer gears (e.g. VDI 2736) are based on the same assumptions as for steel gears. Due to the widely varying material properties of polymers in terms of non-linear material behavior and rate dependency, the predicted lifetimes of polymer gears are inaccurate [1]. In a previous study, rate-dependent nonlinear viscoplastic finite element (FE) modeling of POM allowed the quantification of material influences that are not considered in standard assumptions for metal parts [2]. Based on a physically motivated material model, a lifetime model was developed and validated to predict tooth root fracture as a function of rotational speed. In the present study, the existing lifetime model is extended and validated to include the torque (tooth loading) and notch (tooth root shape) dependency. It is found that a cyclic increase in the cumulative strain energy density occurs in the polymer gear tooth root, as has been observed for uniaxial loading [3]. This strain energy density can be used as a damage variable that causes crack initiation and leads to tooth root fracture. Using the lifetime model, the evolution of the damage variable(s) can be predicted to derive a critical level of loading cycles, thus forming a damage criterion. Numerical results show that the damage criterion is independent of speed, load and geometric influences, and its validity is proven by comparison with experimental results obtained from a gear test bench.
Further description: References:
[1] A. Pogačnik, and U. Kissling, “The future of plastic gear standardization,” 2022, pp. 1149–1160. doi: https://doi.org/10.51202/9783181023891
[2] S. Düzel, R. Eberlein, and H.-J. Dennig, ”Fatigue life analysis of POM gears with transient material modelling”, accepted for publication in AIP Conf Proc, 2023
[3] R. Shrestha, J. Simsiriwong, and N. Shamsaei, “Fatigue modeling for a thermoplastic polymer under mean strain and variable amplitude loadings,” Int J Fatigue, vol. 100, pp. 429–443, Jul. 2017, doi: 10.1016/j.ijfatigue.2017.03.047
Date: 2024-01-01
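The damage-accumulation idea behind such a criterion can be sketched numerically: a per-cycle increment of cumulative strain energy density is summed until a critical level is reached. The constant increment and all values below are hypothetical simplifications; in the study, the increment follows from the viscoplastic FE model rather than being a fixed number.

```python
def cycles_to_failure(energy_per_cycle, critical_energy):
    """Closed-form cycle count for a constant per-cycle energy increment."""
    return critical_energy / energy_per_cycle

def simulate(energy_per_cycle, critical_energy):
    """Accumulate the damage variable cycle by cycle until the criterion is met."""
    damage, cycles = 0.0, 0
    while damage < critical_energy:
        damage += energy_per_cycle
        cycles += 1
    return cycles

# hypothetical values: 0.25 units of strain energy density accumulated per
# cycle against a critical level of 1000 units
print(simulate(0.25, 1000.0))
```

The appeal of such a criterion, as the abstract notes, is that the critical level itself is independent of speed, load and geometry; only the per-cycle increment changes with the operating condition.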
https://digitalcollection.zhaw.ch/handle/11475/30332
Title: Fatigue life analysis of POM gears with transient material modeling
Authors: Düzel, Sven; Eberlein, Robert; Dennig, Hans-Jörg
Abstract: Today's standard calculation methods for investigating the load carrying capacity of polymer gears (such as VDI 2736) are based on the same assumptions as for steel gears. Due to strongly varying material properties of polymers regarding stiffness level, nonlinearity and rate dependency, the predicted lifetimes of polymer gears are inaccurate. In the current study, a rate-dependent nonlinear viscoplastic finite element (FE) model of polyoxymethylene (POM) allows quantifying the material influences not considered by standard assumptions for metal parts. By deploying such a nonlinear material model for POM, a POM-steel gear pairing is investigated and additionally validated on a gear test rig. Different rotational speeds but constant brake torque are investigated to analyse the influence of inertia under dynamic conditions as well as of the rate-dependent material properties on the fatigue lifetime. An accelerated approach for repeated transient FE modelling of the gear meshing process makes it possible to investigate critical stress and strain states over consecutive cycles, specifically focusing on tooth root breakage. It turns out that a cyclic increase of (plastic) strains occurs in the polymer gear tooth roots. Through an extrapolation of the local maximum principal strain, the strain state of the failure-causing cycles can be analysed and forms the basis for a fatigue life analysis of POM gears. In combination with a strain-based failure criterion, the admissible number of cycles of POM gears for various rotational speeds is predicted and validated against experimental results obtained from the gear test rig.
Date: 2023-01-01
https://digitalcollection.zhaw.ch/handle/11475/30331
Title: Picture of the month : snail
Authors: Friso, Francesca
Date: 2023-04-01
Is it ops that make data science scientific?
https://digitalcollection.zhaw.ch/handle/11475/30330
Title: Is it ops that make data science scientific?
Authors: Doemer, Manuel; Kempf, David
Abstract: The primary challenge of data science as a new paradigm of scientific discovery is the rigorous and practically usable documentation of the data product development process. A broader and more consistent implementation of principles from modern software development and operations will allow the field to mature as a truly scientific discipline.
Date: 2023-06-21
Musculoskeletal model-based control strategy of an over-actuated glenohumeral simulator to assess joint biomechanics
https://digitalcollection.zhaw.ch/handle/11475/30327
Title: Musculoskeletal model-based control strategy of an over-actuated glenohumeral simulator to assess joint biomechanics
Authors: Genter, Jeremy; Rauter, Georg; Müller, Andreas M.; Mündermann, Annegret; Baumgartner, Daniel
Abstract: Determining the acting shoulder and muscle forces in vivo is very complex. In this study, we developed a control strategy for a glenohumeral simulator for ex vivo experiments that can mimic physiological glenohumeral motion and overcome the problem of over-actuation. The system includes ten muscle portions actuated via cables to induce upper arm motion in three degrees of freedom, including scapula rotation. A real-time optimizer was implemented to handle the over-actuation of the glenohumeral joint while ensuring a minimum of muscle tension. The functionality of the real-time optimizer was also used to simulate different extents of rotator cuff tears. Joint reaction forces were consistent with in vivo measurements. These results demonstrate the feasibility and added value of implementing a real-time optimizer for using in vivo data to drive a shoulder simulator.
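The over-actuation problem described in the abstract — more cables than controlled degrees of freedom, each with a minimum pretension — can be illustrated as a small constrained least-squares sketch. This is a generic illustration, not the authors' implementation; the moment-arm matrix `A` and all parameter names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def resolve_cable_forces(A, tau, f_min=1.0):
    """Distribute a desired joint torque tau over n redundant cables:
    minimise total squared tension subject to A @ f = tau and a minimum
    pretension f >= f_min on every cable (the over-actuation problem)."""
    n = A.shape[1]
    res = minimize(
        lambda f: 0.5 * f @ f,                       # minimal muscle tension
        x0=np.full(n, f_min + 1.0),
        jac=lambda f: f,
        constraints=[{"type": "eq", "fun": lambda f: A @ f - tau}],
        bounds=[(f_min, None)] * n,
        method="SLSQP",
    )
    return res.x

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 10))          # hypothetical moment-arm matrix: 3 DOF x 10 cables
f_true = 1.0 + rng.uniform(size=10)   # a tension pattern above pretension
tau = A @ f_true                      # hence a joint torque known to be reachable
f = resolve_cable_forces(A, tau)
```

A real-time implementation would replace the general-purpose SLSQP call with a dedicated quadratic-programming solver, but the structure of the problem is the same.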
Further description: Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch)
Date: 2023-01-01
Modular and portable time-resolved fluorescence measurement system
https://digitalcollection.zhaw.ch/handle/11475/30326
Title: Modular and portable time-resolved fluorescence measurement system
Authors: Hagen, Raphael; Spano, Fabrizio; Bonmarin, Mathias; Fehr, Daniel
Abstract: Time-resolved fluorescence measurements are not only able to overcome the limitations of intensity-based applications, but can also resolve information about the sample that is not possible with steady-state data. As monitoring-based preventive medicine becomes more popular, efforts have been made to develop smaller and more affordable measurement methods that could potentially be integrated into point-of-care devices. In this work, we present a portable, low-cost demonstration device that can perform time-resolved fluorescence measurements in the frequency domain. The device combines the flexibility and advanced measurement modes of a desktop sensor system with the small form factor of a portable point-of-care device. Initial measurements show promising results, but further testing is needed to fully assess the performance of the device.
Date: 2023-10-25
Evaluation of the GRAMM/GRAL model for high-resolution wind fields in Heidelberg, Germany
https://digitalcollection.zhaw.ch/handle/11475/30324
Title: Evaluation of the GRAMM/GRAL model for high-resolution wind fields in Heidelberg, Germany
Authors: May, Maximilian; Wald, Simone; Suter, Ivo; Brunner, Dominik; Vardag, Sanam N.
Abstract: Independent verification of mitigation efforts for climate and air quality action in cities relies on inferring emissions from atmospheric concentration measurements. As emissions are dispersed in the atmosphere before they reach an instrument, the quantitative estimation of emissions requires an understanding of the atmospheric transport and associated uncertainties. In this study, we analyse the catalogue of steady-state flow fields generated by the Graz Mesoscale Model (GRAMM) coupled to the Graz Lagrangian Model (GRAL) for an entire year in Heidelberg, Germany. We use a loss function for the wind field selection, which assigns a best-matching catalogue entry to any given hour by exploiting observation data. We introduce a new loss function which finds an optimal balance between differences in wind speed and wind direction. We evaluate the performance of the model based on 15 meteorological measurement sites, of which 14 are in the inner high-resolution and building-resolving GRAL domain (12.5 km × 12.5 km, 10 m resolution). Performance metrics include mean bias (MB) and root mean square errors (RMSEs) of simulated and observed wind speed and wind direction for all individual stations. On average, we find a mean underestimation of wind speed of 0.14 m s−1, corresponding to about 7 % of the mean wind speed, and a mean RMSE of 1.03 m s−1. For wind direction, a mean overall bias smaller than 1° is achieved, but individual stations show larger biases (mean absolute bias: 37°), especially at stations where wind speeds are low on average. Evaluation benchmarks for mean biases of wind direction and wind speed of mesoscale models provided by the European Environment Agency (EEA) are met at 11 and 14 out of 15 stations, respectively, at low measurement heights. Recently suggested extended benchmarks for complex terrain are met at almost all stations.
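The catalogue-selection step described above can be sketched as follows. The paper's actual loss function and its speed/direction weighting are not reproduced here; the quadratic form and the weight `w_dir` are illustrative assumptions only:

```python
import numpy as np

def angular_diff(a, b):
    """Smallest absolute difference between wind directions, in degrees."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def select_flow_field(obs_speed, obs_dir, cat_speed, cat_dir, w_dir=0.05):
    """Return the index of the catalogue entry minimising a combined loss
    over all stations for one hour of observations.
    cat_speed/cat_dir: (n_entries, n_stations) catalogue values at the
    station locations; w_dir trades direction error (deg^2) against
    speed error (m^2 s^-2)."""
    loss = ((cat_speed - obs_speed) ** 2
            + w_dir * angular_diff(cat_dir, obs_dir) ** 2).mean(axis=1)
    return int(np.argmin(loss))

# Tiny synthetic example: two stations, two catalogue entries.
obs_speed, obs_dir = np.array([2.0, 3.0]), np.array([350.0, 10.0])
cat_speed = np.array([[5.0, 5.0], [2.1, 2.9]])
cat_dir = np.array([[180.0, 180.0], [355.0, 5.0]])
best = select_flow_field(obs_speed, obs_dir, cat_speed, cat_dir)
```

The wrap-around handling in `angular_diff` is the one non-obvious detail: a naive difference would score 350° vs 10° as a 340° error instead of 20°.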
Additionally, for the first time, we analyse the model's ability to simulate the vertical wind profile, and we analyse the benefit of implementing a wind profile measurement into the process. We find that the model does not fully capture the vertical profile in our setting. We further study the required measurement network size and find that a high number (> 6) of meteorological stations improves the selection of flow fields over the entire GRAL domain substantially. The comprehensive analysis of the wind fields in the GRAL domain conducted here forms the basis for a detailed quantitative analysis of greenhouse gas and air pollutant emissions using the GRAMM/GRAL modelling framework.
Date: 2023-12-01
Development, intercomparison and analysis of city emission inventories in support of independent verification of city greenhouse gas budgets
https://digitalcollection.zhaw.ch/handle/11475/30323
Title: Development, intercomparison and analysis of city emission inventories in support of independent verification of city greenhouse gas budgets
Authors: Denier van der Gon, Hugo; Dröge, Rianne; Super, Ingrid; Droste, Arjan; Brunner, Dominik; Suter, Ivo; Constantin, Lionel; Perrussel, Olivier; Sanchez, Olivier; Chen, Jia; Aigner, Patrick; Kühbacher, Daniel
Abstract: The ICOS-cities PAUL project aims to support the European Green Deal by solving specific scientific and technological problems related to the observation and verification of greenhouse gas (GHG) emissions from densely populated urban landscapes. To this end, comprehensive city observatories, applying various in situ and ground-based remote sensing GHG measurement technologies, will be developed and evaluated in a relatively large (Paris), medium (Munich) and small (Zürich) city. A critical input for the optimal design of such observatories is a set of complete, spatially explicit, state-of-the-art city emission inventories for greenhouse gases and co-emitted species. Currently, the emission data available for European cities vary considerably in source sector completeness, spatial resolution, base year and temporal disaggregation. Our target resolution in the ICOS-cities PAUL project is 100 × 100 m at hourly resolution for a recent year such as 2018 or 2019, to avoid the impact of the Covid-19 pandemic. Such data would allow evaluation of the city budget and of more detailed district-level budgets, which can support tailored climate action plans. For Paris (3 × 3 km) and Zurich (100 × 100 m), emission inventories are developed by AIRPARIF and by EMPA in collaboration with the municipality of Zurich, respectively. The emission inventory for Munich is based on the downscaling of the 1 × 1 km TNO-GHGco inventory, where key source sectors are stepwise replaced by bottom-up estimates by TUM and TNO. Here we harmonize source sectors and evaluate and intercompare the emission inventories of the three cities. We identify dominant source sectors and potentially missing sources, and determine ratios between GHG and co-emitted species necessary for source sector attribution.
Furthermore, we compare the results against downscaled national reported emission data in line with the official reporting to the UNFCCC, and draw conclusions on consistency between national-scale and city-scale inventories. Lessons learned will lead to the development of a more general methodology to provide city emission data to other European cities and, as part of the overall ICOS-cities objective, robust observation-based methods for quantifying city GHG emissions and sinks to assess the impact of city climate actions.
Date: 2023-04-28
From observations to modeling : investigating the heat mitigation potential of public spray mist cooling in Zurich
https://digitalcollection.zhaw.ch/handle/11475/30322
Title: From observations to modeling : investigating the heat mitigation potential of public spray mist cooling in Zurich
Authors: Suter, Ivo; Drossaart van Dusseldorp, Saskia; Anet, Julien
Abstract: Residents of urban areas are disproportionately affected by heat stress due to the combination of global warming and increasing urbanisation[1]. This not only affects the quality of life, but also poses a significant health risk and has been shown to lead to increased mortality rates[2]. However, due to the complex nature of urban climate, the impact of heat mitigation interventions can vary depending on the local conditions and is thus hard to predict. In this study, a real-world implementation of a spray mist cooling system in the city of Zurich is investigated. A ring carrying 180 high-pressure nozzles was installed on a public square, as shown in figure 1a. Studies on spray mist cooling are scarce and inconclusive, as its effect depends on various operational, environmental and experimental factors[3]. State-of-the-art measuring stations[4] have been deployed for continuous measurements of temperature, humidity and other parameters during summer 2022, as shown in figure 1b. The measurements showed a weak cooling effect that was most pronounced south of the mist cloud, as shown in figure 2. A mean effect of -0.7°C was measured, with the strongest cooling of up to -2.5°C. The impact of the cloud was most pronounced at 25°C. A dependency on relative humidity and wind direction was measured, with the largest effect observed at low relative humidity downwind of the misting system. Outside of the operational hours no temperature difference was observed.
The field experiment supports model development as an ideal case for model validation. The effect of the misting system on heat and moisture fluxes has been implemented into the urban LES model PALM[1]. The parameterised cooling system in PALM was then used to investigate variations in placement, weather conditions and the amount of sprayed water.
Date: 2023-04-25
Design of high-performance organic nonlinear optical and terahertz crystals by controlling the van der Waals volume
https://digitalcollection.zhaw.ch/handle/11475/30320
Title: Design of high-performance organic nonlinear optical and terahertz crystals by controlling the van der Waals volume
Authors: Shin, Bong-Rim; Puc, Uros; Park, Yu-Jin; Kim, Dong-Joo; Lee, Chae-Won; Yoon, Woojin; Yun, Hoseop; Kim, Chaeyoon; Rotermund, Fabian; Jazbinsek, Mojca; Kwon, O-Pil
Abstract: In the development of new organic crystals for nonlinear optical and terahertz (THz) applications, it is very challenging to achieve the essentially required non-centrosymmetric molecular arrangement. Moreover, the resulting crystal structure is mostly unpredictable due to highly dipolar molecular components with complex functional substituents. In this work, new organic salt crystals with top-level macroscopic optical nonlinearity are designed rationally, by controlling the van der Waals volume (VvdW) rather than by trial and error. When the VvdW of the molecular ionic components varies, the corresponding crystal symmetry shows a clear trend: a change from centrosymmetric to non-centrosymmetric and back to centrosymmetric. All non-centrosymmetric crystals exhibit an isomorphic P1 crystal structure with an excellent macroscopic second-order nonlinear optical response. Apart from the top-level macroscopic optical nonlinearity, new organic crystals introducing highly electronegative fluorinated substituents with strong secondary bonding ability show excellent performance in efficient and broadband THz wave generation, high crystal density, high thermal stability, and good bulk crystal growth ability.
Date: 2023-12-06
Bidisperse extension of the kissing number problem
https://digitalcollection.zhaw.ch/handle/11475/30319
Title: Bidisperse extension of the kissing number problem
Authors: Schneider, Johannes Josef; Barrow, David Anthony; Li, Jin; Weyland, Mathias; Flumini, Dandolo; Eggenberger Hotz, Peter; Füchslin, Rudolf Marcel
Abstract: In the so-called kissing number problem, the question of the maximum number of spheres of the same size that can touch a sphere in their midst without overlaps is investigated. While in one and two dimensions the kissing numbers can be easily determined as 2 and 6, respectively, the dispute between Isaac Newton and David Gregory from 1694 as to whether the kissing number in three dimensions is 12 or 13 could only be resolved, in favor of Newton, in 1953. Exact kissing numbers are known for only a few higher dimensions.
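The feasibility test at the heart of such packing questions — do n spheres of radius r all touch a central sphere of radius R without overlapping each other? — reduces to a pairwise distance check on the sphere centres. A minimal sketch (function names are ours; the 12-vertex icosahedron configuration serves as a known valid monodisperse example):

```python
import numpy as np

def is_valid_kissing(dirs, R, r):
    """True if spheres of radius r, centred at distance R + r along the
    given directions, touch the central sphere of radius R without
    mutually overlapping (pairwise centre distance >= 2 r)."""
    u = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    c = (R + r) * u                                   # surrounding-sphere centres
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # ignore self-distances
    return bool((d >= 2 * r - 1e-9).all())

# The 12 icosahedron vertices realise the monodisperse kissing number in 3D.
phi = (1 + 5 ** 0.5) / 2
ico = np.array([(0.0, s1, s2 * phi) for s1 in (-1, 1) for s2 in (-1, 1)])
ico = np.vstack([ico, np.roll(ico, 1, axis=1), np.roll(ico, 2, axis=1)])
ok_equal = is_valid_kissing(ico, R=1.0, r=1.0)   # monodisperse: fits
ok_small = is_valid_kissing(ico, R=0.8, r=1.0)   # smaller centre sphere: overlaps
```

In a Simulated Annealing or Threshold Accepting run, this predicate (or a smoothed overlap penalty derived from it) is what each proposed move of a surrounding sphere is evaluated against.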
We consider a bidisperse extension of the kissing number problem in three dimensions, where the sphere in the center has a larger radius than the surrounding spheres, and again pose the question of the maximum number of surrounding spheres that can touch the sphere in their midst without overlap. To determine this maximum number for various ratios between the radii of the center sphere and the surrounding spheres, we develop a heuristic optimization algorithm based on Simulated Annealing and its deterministic variant Threshold Accepting, which have been used to achieve excellent results in other sphere packing problems. This maximum number also serves as an upper bound for the number of contacts between spheres of different sizes in polydisperse systems with a given ratio between the radii of the largest and the smallest sphere.
Date: 2023-08-07
Theoretical evaluation of the impact of diverse treatment conditions by calculation of the tumor control probability (TCP) of simulated cervical cancer Hyperthermia-Radiotherapy (HT-RT) treatments in-silico
https://digitalcollection.zhaw.ch/handle/11475/30308
Title: Theoretical evaluation of the impact of diverse treatment conditions by calculation of the tumor control probability (TCP) of simulated cervical cancer Hyperthermia-Radiotherapy (HT-RT) treatments in-silico
Authors: Mingo Barba, Sergio; Ademaj, Adela; Marder, Dietmar; Riesterer, Oliver; Lattuada, Marco; Füchslin, Rudolf Marcel; Petri-Fink, Alke; Scheidegger, Stephan
Abstract: Introduction: Hyperthermia (HT) induces various cellular biological processes, such as repair impairment and direct HT cell killing. In this context, in-silico biophysical models that translate deviations in the treatment conditions into clinical outcome variations may be used to study the extent of such processes and their influence on combined hyperthermia plus radiotherapy (HT + RT) treatments under varying conditions.
Methods: An extended linear-quadratic model calibrated for SiHa and HeLa cell lines (cervical cancer) was used to theoretically study the impact of varying HT treatment conditions on radiosensitization and direct HT cell killing effect. Simulated patients were generated to compute the Tumor Control Probability (TCP) under different HT conditions (number of HT sessions, temperature and time interval), which were randomly selected within margins based on reported patient data.
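The extended, HT-aware model itself is not reproduced here, but the scaffolding such simulations build on — tumor control probability from linear-quadratic cell survival via Poisson statistics — can be sketched as follows. Parameter values are illustrative only, not the calibrated SiHa/HeLa values:

```python
import numpy as np

def tcp_poisson(n_fractions, dose_per_fx, alpha, beta, n_clonogens):
    """Poisson tumour-control probability under the plain linear-quadratic
    model: per-fraction surviving fraction S = exp(-(alpha*d + beta*d**2)),
    TCP = exp(-N0 * S**n)."""
    s = np.exp(-(alpha * dose_per_fx + beta * dose_per_fx ** 2))
    return float(np.exp(-n_clonogens * s ** n_fractions))

# Illustrative fractionation: 25 x 2 Gy with generic LQ parameters.
tcp = tcp_poisson(n_fractions=25, dose_per_fx=2.0,
                  alpha=0.35, beta=0.035, n_clonogens=1e7)
```

An HT + RT extension of the kind used in the study would modify `alpha`/`beta` (radiosensitization) and `n_clonogens` (direct HT cell killing) as functions of the thermal dose; the Poisson outer layer stays the same.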
Results: Under the studied conditions, model-based simulations suggested a treatment improvement with a total CEM43 thermal dose of approximately 10 min. Additionally, for a given thermal dose, TCP increased with the number of HT sessions. Furthermore, in the simulations, we showed that the TCP dependence on the temperature/time interval is more correlated with the mean value than with the minimum/maximum value and that comparing the treatment outcome with the mean temperature can be an excellent strategy for studying the time interval effect.
Conclusion: The use of thermoradiobiological models allows us to theoretically study the impact of varying thermal conditions on HT + RT treatment outcomes. This approach can be used to optimize HT treatments, design clinical trials, and interpret patient data.
Date: 2024-01-01
Soot aerosol from commercial aviation engines are poor ice nucleating particles at cirrus cloud temperatures
https://digitalcollection.zhaw.ch/handle/11475/30301
Title: Soot aerosol from commercial aviation engines are poor ice nucleating particles at cirrus cloud temperatures
Authors: Testa, Baptiste; Durdina, Lukas; Alpert, Peter A.; Mahrt, Fabian; Dreimol, Christopher H.; Edebeli, Jacinta; Spirig, Curdin; Decker, Zachary C. J.; Anet, Julien; Kanji, Zamin A.
Abstract: Ice nucleating particles catalyse ice formation in clouds, affecting climate through radiative forcing from aerosol-cloud interactions. Aviation directly emits particles into the upper troposphere, where ice formation conditions are favourable. Previous studies have used proxies of aviation soot to estimate its ice nucleation activity; however, soot from modern in-use commercial aircraft engines has not been investigated quantitatively. In this work, we sample aviation soot particles at ground level from different commercial aircraft engines to test their ice nucleation ability at temperatures ≤ 228 K, as a function of engine thrust and soot particle size. Additionally, soot particles were catalytically stripped to reveal the impact of mixing state on their ice nucleation ability. Particle physical and chemical properties were further characterised and related to the ice nucleation properties. The results show that aviation soot nucleates ice at or above the relative humidity conditions required for homogeneous freezing of solution droplets (RHhom). We attribute this to a mesopore paucity inhibiting pore condensation and to the sulfur content, which suppresses freezing. Only large soot aggregates (400 nm) emitted under 30–100 % thrust conditions for a subset of engines (2/10) nucleate ice via pore condensation and freezing. For those specific engines, the presence of hydrophilic chemical groups facilitates the nucleation. Aviation soot emitted at thrust ≥ 100 % (sea level thrust) nucleates ice at or above RHhom. Overall, our results suggest that aviation soot will not contribute to natural cirrus formation and can be used in models to update impacts of soot-cirrus clouds.
Date: 2023-11-06
Experimental platform P-HIL for BESS-interfaced active distribution grids
https://digitalcollection.zhaw.ch/handle/11475/30300
Title: Experimental platform P-HIL for BESS-interfaced active distribution grids
Authors: Ibañez, Alfredo Velazquez; Rodriguez Rodriguez, Juan R.; Paternina, Mario; Segundo Sevilla, Felix Rafael; Korba, Petr
Abstract: Power Hardware in the Loop (P-HIL) systems and Battery Energy Storage Systems (BESS) are essential tools in the transition to a more sustainable and efficient energy matrix. These systems work together to analyze the integration of intermittent renewable energy sources to improve the stability and reliability of electrical grids. This, in turn, contributes to the reduction of greenhouse gas emissions and the development of a low-carbon economy. This paper presents an experimental working platform and real-time simulation based on P-HIL technology and scaled power electronics prototypes. The platform allows the analysis of the interaction of BESS devices in electrical distribution grids. A case study is presented to demonstrate the combination of two main qualities of a BESS: improving voltage stability and reducing peak demand. To this end, an experimental 1 kW BESS consisting of a Dual Active Bridge (DAB) and a Voltage Source Converter (VSC) is connected to a 13-bus IEEE distribution network. The attained results demonstrate the ability of the platform to bridge two areas of electrical engineering and highlight its significant advantages.
Date: 2023-01-01
Correction for particle loss in a regulatory aviation nvPM emissions system using measured particle size
https://digitalcollection.zhaw.ch/handle/11475/30293
Title: Correction for particle loss in a regulatory aviation nvPM emissions system using measured particle size
Authors: Durand, Eliot; Durdina, Lukas; Smallwood, Greg; Johnson, Mark; Spirig, Curdin; Edebeli, Jacinta; Roth, Manuel; Brem, Benjamin; Sevcenco, Yura; Crayford, Andrew
Abstract: To reduce the adverse impact of civil aviation on local air quality and human health, a new international standard for non-volatile Particulate Matter (nvPM) number and mass emissions was recently adopted. A system loss correction method, which accounts for the significant size-dependent particle loss, is also detailed to predict nvPM emissions representative of those at engine exit for emissions inventory purposes. As Particle-Size-Distribution (PSD) measurement is currently not prescribed, the existing loss correction method uses the nvPM number and mass measurements along with several assumptions to predict a PSD, resulting in significant uncertainty.
Three new system loss correction methodologies using measured PSD were developed and compared with the existing regulatory method using certification-like nvPM data reported by the Swiss and European nvPM reference systems for thirty-two civil turbofan engines representative of the current fleet. Additionally, the PSD statistics of three sizing instruments typically used in these systems (SMPS, DMS500 and EEPS) were compared on a generic aero-engine combustor rig.
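A measured-PSD-based loss correction of the kind compared here can be sketched as a ratio of integrals over the size distribution. The lognormal PSD parameters and the penetration curve `eta` below are hypothetical stand-ins, not the regulatory functions:

```python
import numpy as np

def k_sl_number(gmd_nm, gsd, penetration):
    """Number-based system-loss correction factor from a measured lognormal
    PSD: ratio of the engine-exit particle number to the number surviving
    the sampling system, k = integral n(d) dd / integral n(d)*eta(d) dd."""
    d = np.logspace(0.0, 3.0, 2000)                    # 1 nm .. 1 um grid
    n = np.exp(-0.5 * (np.log(d / gmd_nm) / np.log(gsd)) ** 2) / d  # lognormal shape
    w = np.gradient(d)                                 # integration weights
    return float((n * w).sum() / ((n * penetration(d)) * w).sum())

# Hypothetical penetration curve: strong diffusional loss of the smallest particles.
eta = lambda d: 1.0 - np.exp(-d / 15.0)
k_small = k_sl_number(gmd_nm=20.0, gsd=1.8, penetration=eta)   # small-GMD engine
k_large = k_sl_number(gmd_nm=60.0, gsd=1.8, penetration=eta)   # large-GMD engine
```

The normalisation of the PSD cancels in the ratio, which is why only the lognormal shape is needed; the sketch also shows why small mean particle sizes drive large correction factors.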
General agreement between the three new PSD loss correction methods was observed, with both nvPM number- and mass-based system loss correction factors (kSL_num and kSL_mass) within ±10% reported across the engines tested. By comparison, the existing regulatory method was seen to underpredict kSL_num by up to 67% and overpredict kSL_mass by up to 49% when compared with the measured-PSD-based methods, typically driven by low nvPM mass concentrations and small particle size. In terms of the particle sizing instrument inter-comparison, an agreement of ±2 nm for the GMD and ±0.08 for the GSD was observed across a range of particle sizes on the combustor rig. However, it was seen that these differences can result in a 19% bias for kSL_num and 8% for kSL_mass for the measured-PSD-based methods, highlighting the need for further work towards the standardisation of PSD measurement for regulatory purposes.
Date: 2023-01-24
The effect of electrified mobility on the relationship between traffic conditions and energy consumption
https://digitalcollection.zhaw.ch/handle/11475/30290
Title: The effect of electrified mobility on the relationship between traffic conditions and energy consumption
Authors: Fiori, Chiara; Arcidiacono, Vincenzo; Fontaras, Georgios; Makridis, Michail; Mattas, Konstantinos; Marzano, Vittorio; Thiel, Christian; Ciuffo, Biagio
Abstract: Decreasing road transport's harmful effects on the environment and health and reducing road accidents are major policy priorities. A variety of technologies could drastically improve air quality and reduce the energy consumption and CO2 emissions of road vehicles: in this respect, a prominent trend leverages Electric Vehicles (EVs), supported by improved performance and energy efficiency through connectivity and automation. A noteworthy research question in the transition from Internal Combustion Engine Vehicles (ICEVs) to the alternative technologies is to understand how Intelligent Transport Systems and other traffic-related measures can contribute to the reduction of fuel consumption and greenhouse gas emissions. In fact, a widely acknowledged tenet assumes that congestion removal or mitigation in the presence of ICEVs also implies a reduction of transport-related externalities. This paper explores whether this effect still holds for EVs, by performing an analysis of energy consumption over different vehicle trajectories, under both congested and free-flow conditions. Calculations are carried out using two vehicle simulators: the VT-CPEM (Virginia Tech Comprehensive Power-based Energy consumption Model) for EVs and the CO2MPAS (CO2 Model for Passenger and commercial vehicle Simulation) simulator for ICEVs; for both the electric and conventional cases, passenger and freight/commercial powertrains have been analysed. Results are presented on real and simulated data related to four powertrain-vehicle combinations, in terms of general trends of energy/fuel consumption versus speed. Interestingly, results show that, differently from ICEVs, the relationship between congestion and energy consumption for EVs can change, with higher energy consumption connected to an increased average traffic speed.
Date: 2019-01-01
A new class of organic crystals with extremely large hyperpolarizability : efficient THz wave generation with wide flat‐spectral‐band
https://digitalcollection.zhaw.ch/handle/11475/30289
Title: A new class of organic crystals with extremely large hyperpolarizability : efficient THz wave generation with wide flat‐spectral‐band
Authors: Kim, Seung‐Jun; Yu, In Cheol; Kim, Dong‐Joo; Jazbinsek, Mojca; Yoon, Woojin; Yun, Hoseop; Kim, Dongwook; Rotermund, Fabian; Kwon, O‐Pil
Abstract: In organic π-conjugated crystals, enhancing molecular optical nonlinearity of chromophores (e.g., first hyperpolarizability β ≥ 300 × 10−30 esu) in most cases unfortunately results in zero macroscopic optical nonlinearity, which is a bottleneck in organic nonlinear optics. In this study, a new class of nonlinear optical organic crystals introducing a chromophore possessing an extremely large first hyperpolarizability is reported. With the newly designed 4-(4-(4-(hydroxymethyl)piperidin-1-yl)styryl)-1-(pyrimidin-2-yl)pyridin-1-ium (PMPR) chromophore, incorporating a head-to-tail cation-anion O-H⋯O hydrogen-bonding synthon and an optimal selection of molecular anion into crystals results in extremely large macroscopic optical nonlinearity with an effective first hyperpolarizability of 335 × 10−30 esu. This is in sharp contrast to the zero value for previously reported analogous crystals. An ultrathin PMPR crystal with a thickness of ≈10 µm exhibits excellent terahertz (THz) wave generation performance. Both i) broadband THz wave generation with a wide flat-spectral-band in the range of 0.7–3.4 THz defined at −3 dB and a high upper cut-off generation frequency of > 7 THz as well as ii) high generation efficiency (5 times higher THz amplitude than a ZnTe crystal with mm-scale thickness) are simultaneously achieved. Therefore, new PMPR crystals are highly promising materials for diverse applications in nonlinear optics and THz photonics.
Date: 2023-01-01
Dichlorinated organic‐salt terahertz sources for THz spectroscopy
https://digitalcollection.zhaw.ch/handle/11475/30288
Title: Dichlorinated organic‐salt terahertz sources for THz spectroscopy
Authors: Shin, Bong‐Rim; Yu, In Cheol; Jazbinsek, Mojca; Yoon, Woojin; Yun, Hoseop; Kim, Sang‐Wook; Kim, Dongwook; Rotermund, Fabian; Kwon, O‐Pil
Abstract: Although in terahertz (THz) source materials molecular anions significantly influence the performance of THz generation, only limited classes of molecular counter-anions have been reported. Here, utilizing dichlorinated molecular anions in THz generators is reported for the first time, to the best of our knowledge. In these new crystals, two dichlorinated molecular anions with different molecular symmetries, asymmetric 3,4-dichlorobenzenesulfonate (34DCS) and symmetric 3,5-dichlorobenzenesulfonate (35DCS), are incorporated with a 2-(4-hydroxystyryl)-1-methylquinolinium (OHQ) cation possessing top-level molecular optical nonlinearity. OHQ-34DCS exhibits a strong nonlinear optical response, in contrast to OHQ-35DCS. In OHQ-34DCS crystals, the dichlorinated groups form strong halogen bonds (XBs) and hydrogen bonds (HBs), which are beneficial for suppressing molecular (phonon) vibrations. The optical-to-THz conversion efficiency of the OHQ-34DCS crystals is extremely high, comparable to that of the benchmark organic THz generators. Moreover, the THz emission spectra from the OHQ-34DCS crystals, compared to those of previously reported benchmark analogous crystals, are more strongly modulated toward a flatter shape, with substantially reduced spectral dimples. Therefore, the introduction of dichlorinated molecular anions is an efficient approach for the design of highly efficient electro-optic salt crystals as broadband THz wave sources.
Date: 2023-02-01
A generic machine learning framework for fully-unsupervised anomaly detection with contaminated data
https://digitalcollection.zhaw.ch/handle/11475/30284
Title: A generic machine learning framework for fully-unsupervised anomaly detection with contaminated data
Authors: Ulmer, Markus; Zgraggen, Jannik; Goren Huber, Lilach
Abstract: Anomaly detection (AD) tasks have been solved using machine learning algorithms in various domains and applications. The great majority of these algorithms use normal data to train a residual-based model, and assign anomaly scores to unseen samples based on their dissimilarity with the learned normal regime. The underlying assumption of these approaches is that anomaly-free data is available for training. This is, however, often not the case in real-world operational settings, where the training data may be contaminated with a certain fraction of abnormal samples. Training with contaminated data, in turn, inevitably leads to a deteriorated AD performance of the residual-based algorithms.
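One way such a refinement can work — this is a generic sketch under assumed names, not the specific framework of the paper — is to alternate between fitting a residual-based model on the currently kept samples and trimming the highest-scoring fraction, here with a PCA reconstruction as the residual model:

```python
import numpy as np

def refine_training_set(X, contamination_guess=0.1, n_components=2, n_iters=5):
    """Iteratively refine a contaminated training set: fit a residual-based
    model (here: PCA reconstruction) on the currently kept samples, score
    every sample by its reconstruction error, and drop the highest-scoring
    fraction before refitting.  Returns a boolean keep-mask."""
    keep = np.ones(len(X), dtype=bool)
    for _ in range(n_iters):
        mu = X[keep].mean(axis=0)
        _, _, vt = np.linalg.svd(X[keep] - mu, full_matrices=False)
        P = vt[:n_components]                      # principal subspace of kept data
        recon = (X - mu) @ P.T @ P + mu
        score = ((X - recon) ** 2).sum(axis=1)     # residual anomaly score
        keep = score <= np.quantile(score, 1.0 - contamination_guess)
    return keep

# Synthetic check: 200 normal samples on a 2-D subspace of R^5, 20 anomalies.
rng = np.random.default_rng(1)
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
normal += 0.01 * rng.normal(size=normal.shape)     # small sensor noise
anomalies = 3.0 * rng.normal(size=(20, 5))         # off-subspace contamination
X = np.vstack([normal, anomalies])
keep = refine_training_set(X)
```

Any residual-based model (autoencoder, forecasting model, etc.) can take the place of the PCA step; the refinement loop itself is model-agnostic.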
In this paper we introduce a framework for a fully unsupervised refinement of contaminated training data for AD tasks. The framework is generic and can be applied to any residual-based machine learning model. We demonstrate the application of the framework to two public datasets of multivariate time series machine data from different application fields. We show its clear superiority over the naive approach of training with contaminated data without refinement. Moreover, we compare it to the ideal, unrealistic reference in which anomaly-free data would be available for training. Since the approach exploits information from the anomalies, and not only from the normal regime, it is comparable to, and often outperforms, this ideal baseline.
Date: 2024-01-26
Scalable deployment of deep learning algorithms for predictive maintenance in commercial machine fleets : bridging the research-industry gap
https://digitalcollection.zhaw.ch/handle/11475/30283
Title: Scalable deployment of deep learning algorithms for predictive maintenance in commercial machine fleets : bridging the research-industry gap
Authors: Goren Huber, Lilach
Abstract: Developing deep learning algorithms for predictive maintenance of industrial systems is a growing trend in numerous application fields. Whereas applied research methods have been rapidly advancing, implementations in commercial systems are still lagging behind. One of the main reasons for this delay is the fact that most methodological advances have been focusing on the development of data-driven algorithms for fault detection, diagnosis, or prognosis, ignoring some of the crucial aspects that are required for scaling these algorithms to large fleets of multi-component heterogeneous machines under varying operating conditions, and making sure that their implementation is technically feasible.
In this tutorial, we will elaborate on some of these aspects and discuss possible approaches to address them. We will provide the background to data analytical techniques that enable the scalable deployment of deep learning algorithms in commercial machine fleets. Some examples are transfer learning, fleet-level algorithms, physics-informed deep learning, and uncertainty quantification. We will demonstrate these general concepts using concrete use-cases that apply them to operational data from commercial machine fleets.2022-11-01T00:00:00ZDeep learning for predictive maintenance : scalable implementation in operational setups
https://digitalcollection.zhaw.ch/handle/11475/30282
Title: Deep learning for predictive maintenance : scalable implementation in operational setups
Authors: Goren Huber, Lilach
Abstract: The "Industry 4.0" revolution has facilitated the access of companies and the public sector to relatively cheap solutions for data acquisition, data storage, cloud computing, and efficient machine learning algorithms. This is one of the main reasons for the growing interest we observe in implementing intelligent data-driven decisions in industry. One of the most popular applications of such intelligent decision support is condition-based maintenance (CBM) and predictive maintenance (PdM). CBM and PdM are data-driven approaches to maintenance planning that are seen as the natural replacement of the traditional reactive and preventive (time-based) maintenance approaches in the era of big data. As a research team specializing in applied R&D projects in this field, we have acquired extensive experience in collaborating with industry partners and with the public sector. In recent years, our collaborations have made us realize that the implementation of CBM and PdM in commercial setups is still at a rather premature stage. This stands in contrast to the large body of scientific literature that has been emerging, suggesting advanced machine learning and deep learning (DL) algorithms for PdM.
This gap between state-of-the-art research on the one hand and industrial implementation on the other hand has several origins. An important one is that the technological and algorithmic development is driven primarily by academia and less by industry. This stands in contrast to other applications of DL such as image recognition, speech recognition, and gaming, which are driven by industry giants like Google, Meta, Apple, and Microsoft. As a result, much of the published research in the field of CBM and PdM is developed and tested on synthetic data rather than with real operational data from industry. At our university of applied sciences, research projects are dominated by operational data from real industrial systems. We thus repeatedly encounter a new set of challenges that are often ignored in the scientific literature. In this tutorial we would like to shed light on some of these challenges and suggest possible ways to address them:
1. Dealing with the lack of labeled historical faults.
2. Effective combination of domain knowledge for fault isolation.
3. Upscaling the Fault Detection and Isolation (FDI) algorithms to multi-component systems.
4. Upscaling FDI algorithms to heterogeneous machine fleets.
5. Dealing with data scarcity.
6. Quantifying uncertainty in fault detection problems.
We will discuss these challenges using concrete examples from two very different systems: (i) wind turbines and (ii) aircraft engines. The first use case relies entirely on real field data for the development, validation and testing of the algorithms. The second use case is applied to a publicly available data set. This allows us to share the code with the participants of the tutorial and to walk them through the important points listed above. Thus, the last hour of the tutorial will be dedicated to reviewing the code and practicing its usage together with the participants.
While demonstrating the above challenges using concrete use cases, we will keep stressing their generic nature and broad relevance in various applications of PdM and CBM, such that the participants can take a clear message and attempt to apply the concepts in their own field.
Further description: Workshop2023-06-22T00:00:00ZSourcing decisions for plant-based food alternatives in the context of the triple bottom line
https://digitalcollection.zhaw.ch/handle/11475/30281
Title: Sourcing decisions for plant-based food alternatives in the context of the triple bottom line
Authors: Rühlin, Viola; Weingart, Joel Niklas; Scherrer, Maike
Abstract: Due to growing concerns about the environmental impacts of livestock farming, there has been a shift in dietary habits with a growing demand for plant-based alternatives. Using the example of chicken and mozzarella as well as their plant-based counterparts, this study evaluates their raw material sourcing strategies from a holistic sustainability perspective. The results reveal different trade-offs within and between the three sustainability dimensions studied. However, the study clearly shows that from an environmental perspective, the plant-based alternatives are always more advantageous. From an economic and social perspective, the results were less clear.2023-07-01T00:00:00ZThe effect of coopetition on driving distance in logistics in urban areas
https://digitalcollection.zhaw.ch/handle/11475/30280
Title: The effect of coopetition on driving distance in logistics in urban areas
Authors: Steiner, Albert; Weingart, Joel Niklas; Huber, Marius; Scherrer, Maike
Abstract: Due to population growth and urbanisation, our cities have become increasingly dense in recent years. As private motorised transport shares the road infrastructure with freight transport, the relative availability of road infrastructure for logistics service providers has decreased. The logistics areas that were inside the city have been pushed to the outskirts to make room for residential and office buildings in the inner cities. These developments have led to logistics service providers having to deliver an increasing amount of goods to cities, but also having to cover increasing distances due to the logistics sprawl (Aljohani & Thompson, 2016). The logistics service providers are convinced that they have optimised their routes and see no further potential for optimisation. Cooperation with other logistics service providers for further optimisation of delivery efficiency related to reduced driving distance, time savings, or shared vehicles is largely unthinkable for the logistics service providers. The paper at hand aims to shed light on the question of whether coopetition, i.e., cooperation between competitors (Luo, 2007; Scherrer, 2023; Scherrer et al., 2021), can lead to further optimisation potential emphasising the reduction of driving distance in urban areas.2023-10-01T00:00:00ZSo you want your private LLM at home? : a survey and benchmark of methods for efficient GPTs
https://digitalcollection.zhaw.ch/handle/11475/30279
Title: So you want your private LLM at home? : a survey and benchmark of methods for efficient GPTs
Authors: Tuggener, Lukas; Sager, Pascal; Taoudi-Benchekroun, Yassine; Grewe, Benjamin F.; Stadelmann, Thilo
Abstract: At least since the introduction of ChatGPT, the abilities of generative large language models (LLMs), sometimes called GPTs, are at the center of the attention of AI researchers, entrepreneurs, and others. However, for many applications, it is not possible to call an existing LLM service via an API due to data protection concerns or when no task-appropriate LLM exists. On the other hand, deploying or training a private LLM is often prohibitively computationally expensive. In this paper, we give an overview of the most important recent methodologies that help reduce the computational footprint of LLMs. We further present extensive benchmarks for seven methods from two of the most important areas of recent progress: model quantization and low-rank adapters, showcasing how it is possible to leverage state-of-the-art LLMs with limited resources. Our benchmarks include resource consumption metrics (e.g. GPU memory usage), a state-of-the-art quantitative performance evaluation as well as a qualitative performance study conducted by eight individual human raters. Our evaluations show that quantization has a profound effect on GPU memory requirements. However, we also show that these quantization methods, contrary to how they are advertised, cause a noticeable loss in text quality. We further show that low-rank adapters allow effective model fine-tuning with moderate compute resources. For methods that require less than 16 GB of GPU memory, we provide easy-to-use Jupyter notebooks that allow anyone to deploy and fine-tune state-of-the-art LLMs on the Google Colab free tier within minutes without any prior experience or infrastructure.2024-05-31T00:00:00ZImpact analysis of wind turbines subjected to ship collision and blast loading
https://digitalcollection.zhaw.ch/handle/11475/30278
Title: Impact analysis of wind turbines subjected to ship collision and blast loading
Authors: Mehreganian, Navid; Safa, Yasser; Boiger, Gernot Kurt
Abstract: The structural integrity of Offshore Wind Turbines (OWT) is of prime significance, due to the significant dynamic stresses generated by extreme blast and impact loads. The detrimental damage of such loads emanates from either large inelastic localized deformations or global rotations which lead to the collapse of the wind turbine onto the ship. In some circumstances, however, a combination of the two is imminent. In this work, we examine two scenarios of impact and blast phenomena on offshore wind turbines. In the first, the influence of gravity loads, wind velocity, vessel angle of attack, and its initial momentum on the localized and global deformations is discussed. A numerical FE model is developed to further investigate the damage of the OWT struck by a commercial ship. In the second scenario, we develop a mathematical model to capture the blast response of cylindrical shells utilized in the OWT. By decomposing the load into a spatial part with constant magnitude and a temporal part characterized by a piecewise function, the analytical solution is sought for two distinguishable phases. The validation of the analytical model against the FE one shows the capability of the former to capture the dynamic plastic collapse of the shell with a good degree of accuracy.2024-01-18T00:00:00ZCustomizing the human-avatar mapping based on EEG error related potentials during avatar-based interaction
https://digitalcollection.zhaw.ch/handle/11475/30277
Title: Customizing the human-avatar mapping based on EEG error related potentials during avatar-based interaction
Authors: Iwane, Fumiaki; Porssut, Thibault; Blanke, Olaf; Chavarriaga, Ricardo; Millan, Jose Del R.; Herbelin, Bruno; Boulic, Ronan
Abstract: Objective. A key challenge of virtual reality (VR) applications is to maintain a reliable human-avatar mapping. Users may lose the sense of controlling (sense of agency), owning (sense of body ownership), or being located (sense of self-location) inside the virtual body when they perceive erroneous interaction, i.e. Break-in-embodiment (BiE). However, the way to detect such an inadequate event is currently limited to questionnaires or spontaneous reports from users. The ability to implicitly detect BiE in real-time enables us to adjust human-avatar mapping without interruption.
Approach. We propose and empirically demonstrate a novel Brain Computer Interface (BCI) approach that monitors the occurrence of BiE based on the users' brain oscillatory activity in real-time to adjust the human-avatar mapping in VR. We collected EEG data of 37 participants while they performed reaching movements with their avatar with different magnitude of distortion.
Main results. Our BCI approach seamlessly predicts the occurrence of BiE under varying magnitudes of erroneous interaction. The mapping has been customized by a BCI-reinforcement learning (RL) closed-loop system to prevent BiE from occurring. Furthermore, a non-personalized BCI decoder generalizes to new users, enabling "Plug-and-Play" ErrP-based non-invasive BCI. The proposed VR system allows customization of human-avatar mapping without personalized BCI decoders or spontaneous reports.
Significance. We anticipate that our newly developed VR-BCI can be useful to maintain an engaging avatar-based interaction and a compelling immersive experience while detecting when users notice a problem and seamlessly correcting it.2024-02-22T00:00:00ZDetecting anomalies in time series using kernel density approaches
https://digitalcollection.zhaw.ch/handle/11475/30276
Title: Detecting anomalies in time series using kernel density approaches
Authors: Frehner, Robin; Wu, Kesheng; Sim, Alexander; Kim, Jinoh; Stockinger, Kurt
Abstract: This paper introduces a novel anomaly detection approach tailored for time series data with exclusive reliance on normal events during training. Our key innovation lies in the application of kernel-density estimation (KDE) to scrutinize reconstruction errors, providing an empirically derived probability distribution for normal events post-reconstruction. This non-parametric density estimation technique offers a nuanced understanding of anomaly detection, differentiating it from prevalent threshold-based mechanisms in existing methodologies. In post-training, events are encoded, decoded, and evaluated against the estimated density, providing a comprehensive notion of normality. In addition, we propose a data augmentation strategy involving variational autoencoder-generated events and a smoothing step for enhanced model robustness. The significance of our autoencoder-based approach is evident in its capacity to learn normal representation without prior anomaly knowledge. Through the KDE step on reconstruction errors, our method addresses the versatility of anomalies, departing from assumptions tied to larger reconstruction errors for anomalous events. Our proposed likelihood measure then distinguishes normal from anomalous events, providing a concise yet comprehensive anomaly detection solution. The extensive experimental results support the feasibility of our proposed method, yielding significantly improved classification performance by nearly 10% on the UCR benchmark data.2024-03-01T00:00:00ZNetwork dynamics of positive energy districts : a coevolutionary business ecosystem analysis
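The core KDE step described above can be sketched as follows. The gamma-distributed stand-in for reconstruction errors and the 1st-percentile likelihood threshold are illustrative assumptions, not the paper's actual model or cut-off.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Stand-in for per-event reconstruction errors of a trained autoencoder;
# in the paper's setting, training errors stem from normal events only.
train_errors = rng.gamma(shape=2.0, scale=0.5, size=2000)

# Empirical density of "normal" reconstruction errors via KDE.
kde = gaussian_kde(train_errors)

# Hypothetical decision rule: flag events whose error likelihood falls
# below the 1st percentile of the training-set likelihoods.
threshold = np.quantile(kde(train_errors), 0.01)

test_errors = np.array([0.8, 1.1, 9.0])   # last error is far out of distribution
is_anomaly = kde(test_errors) < threshold
```

Unlike a fixed residual threshold, the likelihood rule can in principle flag errors that are unusually small as well as unusually large, which matches the paper's point about not assuming larger errors for anomalies.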
https://digitalcollection.zhaw.ch/handle/11475/30274
Title: Network dynamics of positive energy districts : a coevolutionary business ecosystem analysis
Authors: Zapata Riveros, Juliana; Scacco, Paulet Michelle; Ulli-Beer, Silvia
Abstract: Introduction: Amid the rising interest in sustainable urban development, Positive Energy Districts (PEDs) have become a focus of research. This study examines the dynamic processes that influence the development and scalability of PEDs from a co-evolutionary business ecosystem perspective.
Methods: To delve into the dynamics of Positive Energy Districts, we applied the business ecosystem framework to a real-world case study, namely the Hunziker Areal. Our research methodology involved the development and validation of a high-level conceptual model. This was achieved through workshops and guided interviews with experts engaged in pilot and research projects related to PEDs.
Results: The study highlights the significance of employing a systemic approach to evaluate the potential of PEDs in enhancing housing sustainability while creating value for diverse stakeholders. Through the utilization of causal loop diagrams, key feedback loops explaining the diffusion of PEDs are identified. Moreover, the study reveals varying perceptions of PED utility among stakeholders, who assess the impact using different Key Performance Indicators (KPIs) such as CO2 target achievement and well-being. Key factors influencing technology adoption, such as low prosumer electricity unit cost, are also identified.
Discussion: Our findings further shed light on crucial aspects affecting value capture and the attractiveness of the ecosystem to investors. Additionally, the study underscores the critical role of supportive policies and regulations in facilitating the diffusion and scalability of Positive Energy Districts.2024-01-01T00:00:00ZDesign elements for supply network resilience : a unified reference framework for quantitative simulation modelling
https://digitalcollection.zhaw.ch/handle/11475/30273
Title: Design elements for supply network resilience : a unified reference framework for quantitative simulation modelling
Authors: Doege, Patrick; Weingart, Joel Niklas; Scherrer, Maike
Abstract: Existing research on supply network resilience is predominantly qualitative, with limited attention given to quantitative methods such as simulation and optimisation techniques. Based on an exploratory search of the literature, this paper proposes a novel framework for capturing interdependencies and dynamics among different components leading to increased supply network resilience. The framework serves as a reference for building quantitative resilience models and is essential for understanding how different design elements contribute to overall network resilience. It further advances the understanding of supply network resilience and serves as a comprehensive guide for researchers and practitioners when building quantitative supply network resilience models.2023-07-01T00:00:00ZA combined experimental and numerical method for tailoring the multi-scale mechanical properties of soft solid liquid composites
https://digitalcollection.zhaw.ch/handle/11475/30272
Title: A combined experimental and numerical method for tailoring the multi-scale mechanical properties of soft solid liquid composites
Authors: Cauquil, Eléonore; Kiener, Luca; Hofmann, Jonas; Spano, Fabrizio; Röhrnbauer, Barbara
Abstract: Solid liquid composites are motivated by a variety of multi-physics applications including research in mechanobiology. From a mechanical perspective, liquid inclusions in a matrix affect both its global and local properties - the latter seen as local stiffness variations. It is known that stiffness variations in substrates are sensed by cells and incite cell migration so-called durotaxis. To investigate this complex interplay, detailed knowledge is needed on the local mechanical properties of the substrate. In this study, a combined experimental and numerical approach is proposed to characterize and tailor the local and global mechanical properties of a soft solid liquid composite.
Polydimethylsiloxane (PDMS) membranes with a regular pattern of liquid inclusions of two different sizes (1.1 mm, 0.5 mm) were produced according to the procedure reported elsewhere. Planar tension tests were performed, resulting in a biaxial state of stress representative of loading conditions of biological membranes. After preconditioning for 9 cycles, samples were strained quasi-statically to 30% nominal strain. In addition to the global force and displacement data, local deformations were evaluated using a digital image correlation system.
A numerical model based on a representative unit cell approach was built using a commercial finite element software. The unit cell was modeled as a 3D cuboid containing a spherical inclusion. For the PDMS, a Neo-Hookean material was chosen, which was fitted to test data of pure PDMS. The liquid inclusion was modeled using built-in element types. The model was validated applying both the global force response and the local deformation pattern. A numerical parameter study was performed, varying the size and density of the inclusions.
The numerical model was shown to excellently reproduce both the global force response and the local deformation pattern (Figure). Apart from the parameter of the Neo-Hookean model, no further fitting of parameters was needed, resulting in a simple and robust modeling approach. The parameter study revealed the potential to tailor a wide variety of biaxial global stiffnesses (0.20-0.44 MPa) and to fine-tune local stiffness gradients.
Current limitations are the reproducibility of the PDMS properties and the small experimental basis (n=3). However, the feasibility of the approach as well as the excellent predictive capabilities of the model have been shown. This experimental and numerical framework shall be used to investigate phenomena such as durotaxis incited by specifically tailored stiffness gradients and thus, contribute to quantitative research in mechanobiology.2023-09-01T00:00:00ZMechanical properties of pelvic implants : interaction between implants and tissue
https://digitalcollection.zhaw.ch/handle/11475/30271
Title: Mechanical properties of pelvic implants : interaction between implants and tissue
Authors: Röhrnbauer, Barbara
Abstract: Pelvic implants, mainly meshes, are load-bearing structures. Therefore, it is well accepted that their mechanical properties play an important role in determining the outcome after implantation. Next to strength, different aspects of mechanical behavior, such as the non-linear stress-strain relationship, anisotropy, or strain history, influence the function and the compatibility of the mesh in vivo. Besides, due to the manufacturing technique, meshes are hierarchical, multi-scale structures, which is reflected also in their mechanical properties. Deformation patterns at low length scales, the length scales of the cells, might differ significantly from the macro-scale deformation and might contribute to the reported complications. Thus, aiming at mechanically biocompatible implant designs, these lower length scales need to be accounted for. Alternative materials derived by electrospinning, featuring a completely different micro- and nanostructure compared to meshes, have recently been suggested for pelvic floor repair. Though promising outcomes regarding host immune response have been reported, their mechanical biocompatibility remains to be shown.2023-01-01T00:00:00ZMultiphysics simulation-based investigation of electro-static precipitation phenomena in the context of coating standard automotive rims
https://digitalcollection.zhaw.ch/handle/11475/30270
Title: Multiphysics simulation-based investigation of electro-static precipitation phenomena in the context of coating standard automotive rims
Authors: Boiger, Gernot Kurt; Siyahhan, Bercan; Schubiger, Alain; Hostettler, Marco; Fallah, A.S.; Khawaja, H.; Moatamedi, Mojtaba
Abstract: In this extended study, electrostatic precipitation, a cornerstone technology in industrial coating applications, is examined with enhanced depth and breadth, targeting its applicability in coating standard automotive rims. Utilizing an advanced Eulerian-Lagrangian, Extended Discrete Element Method, Finite Volume solver constructed within the OpenFOAM CFD-framework, we present a holistic computational model that incorporates various facets such as airflow dynamics, coating-particle interactions, and intricate particle-substrate phenomena like blow-off and corona formation. Enabled by Massive Simultaneous Cloud Computing technology, our solver permits concurrent exploration of a wide array of industrially relevant conditions.
This research goes beyond earlier studies by encompassing not only variations in "Mean Powder Particle Diameters" and "Powder Particle Density," but also conducting a more expansive simulation sweep that incorporates changes in "Particle Diameter Deviation" and "Applied Voltage." This allows for a nuanced understanding of sensitivities and uncertainties linked to these parameters. We apply this comprehensive modeling approach to scrutinize single-burst powder coating on a typical metallic, automotive rim substrate. The study delivers intricate predictions and visualizations of coating patterns, efficiencies, and homogeneity across a range of conditions.
Our findings offer valuable insights for optimizing powder properties, which hold considerable implications for material suppliers in the coating industry. Despite these advances, certain limitations remain, underscoring the need for further research in this vital domain.2023-12-14T00:00:00ZDank Grundlagenforschung zum idealen Netz?
https://digitalcollection.zhaw.ch/handle/11475/30258
Title: Dank Grundlagenforschung zum idealen Netz?
Authors: Röhrnbauer, Barbara2024-01-01T00:00:00ZQuantifying energy-saving measures in office buildings by simulation in 2D cross sections
https://digitalcollection.zhaw.ch/handle/11475/30253
Title: Quantifying energy-saving measures in office buildings by simulation in 2D cross sections
Authors: Witzig, Andreas; Tello, Camilo; Schranz, Franziska; Bruderer, Johannes; Haase, Matthias
Abstract: A methodology is presented to analyse the thermal behaviour of buildings with the goal of quantifying energy-saving measures. The solid structure of the building is modelled with finite elements to fully account for its ability to store energy and to accurately predict heat loss through thermal bridges. Air flow in the rooms is approximated by a lumped element model with three dynamical nodes per room. The dynamic model also contains the control algorithm for the HVAC system and predicts the net primary energy consumption for heating and cooling of the building for any time period. The new simulation scheme has the advantage of avoiding U-values and thermal bridge coefficients, instead using well-known physical material parameters. It has the potential to use 2D and 3D geometries with appropriate automatic processing from BIM models. Simulations are validated by comparison to IDA ICE and temperature measurements. This work aims to discuss novel approaches to disseminating building simulation more widely.2023-09-01T00:00:00ZA semi transient methodology for dual time stepping of particle and flow field simulations of an Eulerian-Lagrangian multiphysics solver
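The lumped-element part of the abstract above can be illustrated with a toy room model. The three dynamic nodes (air, wall mass, floor mass), the heat capacities, and the conductances below are all hypothetical values chosen for illustration, not parameters from the paper.

```python
import numpy as np

# Hypothetical one-room lumped-element model with three dynamic
# temperature nodes: air, wall mass, floor mass.
C = np.array([5e5, 5e7, 3e7])             # heat capacities [J/K]
K_aw, K_af, K_wo = 300.0, 200.0, 50.0     # conductances [W/K]: air-wall, air-floor, wall-outdoor

def step(T, T_out, Q_heat, dt=60.0):
    """One explicit-Euler step of the three-node energy balance."""
    Ta, Tw, Tf = T
    dTa = (K_aw * (Tw - Ta) + K_af * (Tf - Ta) + Q_heat) / C[0]
    dTw = (K_aw * (Ta - Tw) + K_wo * (T_out - Tw)) / C[1]
    dTf = K_af * (Ta - Tf) / C[2]
    return T + dt * np.array([dTa, dTw, dTf])

T = np.array([20.0, 19.0, 19.5])          # initial node temperatures [degC]
for _ in range(24 * 60):                  # one day in 1-minute steps
    T = step(T, T_out=5.0, Q_heat=1500.0) # constant heating, cold outdoors
```

In the paper's scheme the solid nodes would instead come from the finite-element model of the building envelope; the sketch only shows how few state variables the per-room air model needs.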
https://digitalcollection.zhaw.ch/handle/11475/30252
Title: A semi transient methodology for dual time stepping of particle and flow field simulations of an Eulerian-Lagrangian multiphysics solver
Authors: Siyahhan, Bercan; Boiger, Gernot; Fallah, Arash; Khawaja, Hassan; Moatamedi, Moji
Abstract: Many industrial applications involve particles transported by a carrier fluid flow with additional multi-physical effects such as electromagnetics. The simulation of such processes is computationally expensive, especially because of the diverse dimensional and time scales involved. In this study, the time scale for the fluid flow to be stably simulated is shown to be up to 2 orders of magnitude higher than the time scale for a spherical particle to assume the carrier flow velocity. A semi-transient solution methodology has been devised, utilizing a dual time stepping approach for the flow and particle simulations. In this methodology, the flow field is first simulated with the larger time step, saving the resultant fields at regular intervals to serve as snapshots of the flow. Between each pair of snapshots, the flow is then treated as steady state, facilitating the calculation of the particle trajectory based on the resultant forces. This approach is especially suitable for applications where the particle cloud density is low enough not to have a significant effect on the flow field, warranting a one-way coupling. The accuracy of the method is established by comparing key performance parameters, such as coating transfer efficiency and the homogeneity of the coating, with those obtained from a fully transient simulation. The saving potential in terms of computational resources is also quantified.2023-12-01T00:00:00ZHolistic analysis of organised misinformation activity in social networks
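The dual time stepping idea above can be sketched in one dimension. The flow field, the time steps, and the particle response time below are illustrative assumptions: the flow is advanced with a large step and stored as snapshots, and between snapshots it is held frozen while the particle is sub-stepped with Stokes drag (one-way coupling).

```python
import numpy as np

DT, dt = 1e-2, 1e-4        # flow step is 100x the particle step
tau_p = 5e-4               # particle response time (Stokes drag) [s]

def flow_velocity(t):
    """Stand-in for the CFD solution: a slowly varying carrier flow."""
    return 1.0 + 0.2 * np.sin(2 * np.pi * t)

# Phase 1: "solve" the flow with the large step and store snapshots.
n_snapshots = 50
snapshots = [flow_velocity(k * DT) for k in range(n_snapshots)]

# Phase 2: integrate the particle with the small step, treating the
# flow as steady between consecutive snapshots.
n_sub = round(DT / dt)
v = 0.0                    # particle starts at rest
for u in snapshots:
    for _ in range(n_sub):
        v += dt * (u - v) / tau_p   # explicit Euler on Stokes drag

# After many response times, the particle tracks the carrier flow.
u_final = snapshots[-1]
```

Because the particle relaxes to the frozen flow well within one snapshot interval here, the cheap semi-transient result is indistinguishable from a fully transient one, which is the regime the paper exploits.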
https://digitalcollection.zhaw.ch/handle/11475/30251
Title: Holistic analysis of organised misinformation activity in social networks
Authors: Peñas, Anselmo; Deriu, Jan; Sharma, Rajesh; Valentin, Guilhem; Reyes-Montesinos, Julio
Abstract: To tackle the problem of disinformation, society must be aware not only of the existence of intentional misinformation campaigns, but also of the agents that introduce the misleading information, their supporting media, the nodes they use in social networks, the propaganda techniques they employ and their overall narratives and intentions. Disinformation is a challenge that must be addressed holistically: identifying and describing a disinformation campaign requires studying misinformation locally, at the message level, as well as globally, by modelling its propagation process to identify its sources and main players. In this paper, we argue that the integration of these two levels of analysis hinges on studying underlying features such as disinformation’s intentionality, and benefited and injured agents. Taking these features into account could make automated decisions more explainable for end users and analysts. Moreover, simultaneously identifying misleading messages, knowing their narratives and hidden intentions, modelling their diffusion in social networks, and monitoring the sources of disinformation will also allow a faster reaction, even anticipation, against the spreading of disinformation.2023-01-01T00:00:00ZText-to-speech pipeline for Swiss German : a comparison
https://digitalcollection.zhaw.ch/handle/11475/30250
Title: Text-to-speech pipeline for Swiss German : a comparison
Authors: Bollinger, Tobias; Deriu, Jan Milan; Vogel, Manfred
Abstract: In this work, we studied the synthesis of Swiss German speech using different Text-to-Speech (TTS) models. We evaluated the TTS models on three corpora, and we found, that VITS models performed best, hence, using them for further testing. We also introduce a new method to evaluate TTS models by letting the discriminator of a trained vocoder GAN model predict whether a given waveform is human or synthesized. In summary, our best model delivers speech synthesis for different Swiss German dialects with previously unachieved quality.2023-06-01T00:00:00ZNavigation needs for the unpiloted airspace
https://digitalcollection.zhaw.ch/handle/11475/30248
Title: Navigation needs for the unpiloted airspace
Authors: Osechas, Okuary; Felux, Michael; McGraw, Gary
Abstract: The paper discusses the challenges of providing resilient navigation services to Advanced Air Mobility (AAM) and Uncrewed Aeronautical Systems (UAS) in various types of environments in, around and above urban areas. The analysis shows that at least three different types of urban/suburban environments are relevant, and the paper proposes a criterion to distinguish them, based on how much surrounding buildings influence radio propagation in a particular volume. Finally, the paper also proposes pathways to solving the foreseeable problems with three concepts that will enable higher navigation accuracy in urban environments: a low-power terrestrial navigation aid based on multifunction communication systems, a GNSS augmentation service with reduced time-to-alert, and a landing system for AAM aircraft and drones in urban environments.2024-01-01T00:00:00ZTest-retest reliability of isometric shoulder muscle strength during abduction and rotation tasks measured using the Biodex dynamometer
https://digitalcollection.zhaw.ch/handle/11475/30247
Title: Test-retest reliability of isometric shoulder muscle strength during abduction and rotation tasks measured using the Biodex dynamometer
Authors: Croci, Eleonora; Born, Patrick; Eckers, Franziska; Nüesch, Corina; Baumgartner, Daniel; Müller, Andreas Marc; Mündermann, Annegret
Abstract: Background: The Constant score (CS) is often used clinically to assess shoulder function and includes a muscle strength assessment only for abduction. The aim of this study was to evaluate the test-retest reliability of isometric shoulder muscle strength during various positions of abduction and rotation with the Biodex dynamometer and to determine their correlation with the strength assessment of the CS.
Methods: Ten young healthy subjects participated in this study. Isometric shoulder muscle strength was measured during 3 repetitions for abduction at 10° and 30° abduction in the scapular plane (with extended elbow and hand in neutral position) and for internal and external rotation (with the arm at 15° abduction in the scapular plane and elbow flexed at 90°). Muscle strength tests with the Biodex dynamometer were performed in 2 different sessions. The CS was acquired only in the first session. Intraclass correlation coefficients (ICCs) with 95% confidence interval, limits of agreement, and paired t tests for repeated tests of each abduction and rotation task were calculated. Pearson's correlation between the strength parameter of the CS and isometric muscle strength was investigated.
Results: Muscle strength did not differ between tests (P > .05), with good to very good reliability for abduction at 10° and 30°, external rotation and internal rotation (ICC > 0.7 for all). A moderate correlation of the strength parameter of the CS with all isometric shoulder strength parameters was observed (r > 0.5 for all).
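The test-retest reliability reported above is based on intraclass correlation coefficients. As a minimal sketch of how such an ICC can be computed for a two-session design, the function below implements the standard two-way random-effects, absolute-agreement, single-measurement form ICC(2,1); this is an illustration of the statistic, not the authors' analysis code, and the exact ICC form they used is an assumption.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `data` is a list of subjects, each a list of k session measurements.
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    sess_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    # Partition total sum of squares into subject, session and residual parts.
    ss_r = k * sum((m - grand) ** 2 for m in subj_means)
    ss_c = n * sum((m - grand) ** 2 for m in sess_means)
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ss_e = ss_tot - ss_r - ss_c
    ms_r = ss_r / (n - 1)            # between-subjects mean square
    ms_c = ss_c / (k - 1)            # between-sessions mean square
    ms_e = ss_e / ((n - 1) * (k - 1))  # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfectly repeated measurements yield an ICC of 1.0, while session-to-session noise pulls the value below 1; an ICC above 0.7, as reported in the abstract, is conventionally read as good reliability.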
Conclusion: Shoulder muscle strength measurements for abduction and rotation with the Biodex dynamometer are reproducible and correlate with the strength assessment of the CS. Therefore, these isometric muscle strength tests can be further employed to investigate the effect of different shoulder joint pathologies on muscle strength. These measurements consider a more comprehensive functionality of the rotator cuff than the single strength evaluation in abduction within the CS, as both abduction and rotation are assessed. Potentially, this would allow for a more precise differentiation between the various outcomes of rotator cuff tears.
2023-10-10T00:00:00Z
https://digitalcollection.zhaw.ch/handle/11475/30246
Title: Navigating the ocean of biases : political bias attribution in language models via causal structures
Authors: Jenny, David F.; Billeter, Yann; Sachan, Mrinmaya; Schölkopf, Bernhard; Jin, Zhijing
Abstract: The rapid advancement of Large Language Models (LLMs) has sparked intense debate regarding their ability to perceive and interpret complex socio-political landscapes. In this study, we undertake an exploration of decision-making processes and inherent biases within LLMs, exemplified by ChatGPT, specifically contextualizing our analysis within political debates. We aim not to critique or validate LLMs' values, but rather to discern how they interpret and adjudicate "good arguments." By applying Activity Dependency Networks (ADNs), we extract the LLMs' implicit criteria for such assessments and illustrate how normative values influence these perceptions. We discuss the consequences of our findings for human-AI alignment and bias mitigation.
2023-11-15T00:00:00Z
https://digitalcollection.zhaw.ch/handle/11475/30245
Title: Fully automatic algorithm for detecting and tracking anatomical shoulder landmarks on fluoroscopy images with artificial intelligence
Authors: Croci, Eleonora; Hess, Hanspeter; Warmuth, Fabian; Künzler, Marina; Börlin, Sean; Baumgartner, Daniel; Müller, Andreas Marc; Gerber, Kate; Mündermann, Annegret
Abstract: Objective: Patients with rotator cuff tears often present with glenohumeral joint instability. Assessing anatomic angles and shoulder kinematics from fluoroscopy requires labelling of specific landmarks in each image. This study aimed to develop an artificial intelligence model for automatic landmark detection from fluoroscopic images for motion tracking of the scapula and humeral head.
Materials and methods: Fluoroscopic images were acquired for both shoulders of 25 participants (N = 12 patients with unilateral rotator cuff tear, 6 men, mean (standard deviation) age: 63.7 ± 9.7 years; 13 asymptomatic subjects, 7 men, 58.2 ± 8.9 years) during a 30° arm abduction and adduction movement in the scapular plane with and without handheld weights of 2 and 4 kg. A 3D full-resolution convolutional neural network (nnU-Net) was trained to automatically locate five landmarks (glenohumeral joint centre, humeral shaft, inferior and superior edges of the glenoid and most lateral point of the acromion) and a calibration sphere.
Results: The nnU-Net was trained with ground-truth data from 6021 fluoroscopic images of 40 shoulders and tested with 1925 fluoroscopic images of 10 shoulders. The automatic landmark detection algorithm achieved an accuracy above inter-rater variability and slightly below intra-rater variability. All landmarks and the calibration sphere were located within 1.5 mm, except the humeral shaft landmark, which was located within 9.6 mm; nevertheless, the resulting differences in abduction angles were within 1°.
Conclusion: The proposed algorithm detects the desired landmarks on fluoroscopic images with sufficient accuracy and can therefore be applied to automatically assess shoulder motion, scapular rotation or glenohumeral translation in the scapular plane.
Clinical relevance statement: This nnU-net algorithm facilitates efficient and objective identification and tracking of anatomical landmarks on fluoroscopic images necessary for measuring clinically relevant anatomical configuration (e.g. critical shoulder angle) and enables investigation of dynamic glenohumeral joint stability in pathological shoulders.
Key Points:
• Anatomical configuration and glenohumeral joint stability are often a concern after rotator cuff tears.
• Artificial intelligence applied to fluoroscopic images helps to identify and track anatomical landmarks during dynamic movements.
• The developed automatic landmark detection algorithm optimised the labelling procedures and is suitable for clinical application.
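The study above tracks landmarks in pixel coordinates and uses a calibration sphere of known physical size to express errors in millimetres. The following sketch illustrates that conversion step under stated assumptions: it is not the authors' pipeline, and the 25 mm sphere diameter, function names and coordinates are hypothetical placeholders.

```python
import math

# Assumed physical diameter of the calibration sphere (illustrative value).
SPHERE_DIAMETER_MM = 25.0

def mm_per_pixel(sphere_diameter_px):
    """Image scale derived from the calibration sphere's apparent size."""
    return SPHERE_DIAMETER_MM / sphere_diameter_px

def landmark_error_mm(pred_px, truth_px, scale_mm_per_px):
    """Euclidean distance between predicted and ground-truth landmarks, in mm."""
    dx = pred_px[0] - truth_px[0]
    dy = pred_px[1] - truth_px[1]
    return math.hypot(dx, dy) * scale_mm_per_px
```

With such a scale factor, a per-landmark error in millimetres can be compared directly against the 1.5 mm and 9.6 mm bounds reported in the abstract.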
Further description: Acquired as part of the Swiss National Licences (http://www.nationallizenzen.ch)
2023-08-11T00:00:00Z