Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-22850
Full metadata record
DC Field                    Value                                                   Language
dc.contributor.author       Hirsa, Ali                                              -
dc.contributor.author       Osterrieder, Jörg                                       -
dc.contributor.author       Hadji Misheva, Branka                                   -
dc.contributor.author       Posth, Jan-Alexander                                    -
dc.date.accessioned         2021-07-22T13:21:42Z                                    -
dc.date.available           2021-07-22T13:21:42Z                                    -
dc.date.issued              2021                                                    -
dc.identifier.other         arXiv:2106.08437v1                                      de_CH
dc.identifier.uri           https://arxiv.org/abs/2106.08437                        de_CH
dc.identifier.uri           https://digitalcollection.zhaw.ch/handle/11475/22850    -
dc.description.abstract     Financial trading has been widely analyzed for decades, with market participants and academics always looking for advanced methods to improve trading performance. Deep reinforcement learning (DRL), a recently reinvigorated method with significant success in multiple domains, has yet to show its benefit in the financial markets. We use a deep Q-network (DQN) to design long-short trading strategies for futures contracts. The state space consists of volatility-normalized daily returns, with buying or selling being the reinforcement learning action and the total reward defined as the cumulative profits from our actions. Our trading strategy is trained and tested both on real and simulated price series, and we compare the results with an index benchmark. We analyze how training based on a combination of artificial data and actual price series can be successfully deployed in real markets. The trained reinforcement learning agent is applied to trading the E-mini S&P 500 continuous futures contract. Our results in this study are preliminary and need further improvement.    de_CH
dc.format.extent            18                                                      de_CH
dc.language.iso             en                                                      de_CH
dc.publisher                arXiv                                                   de_CH
dc.rights                   http://creativecommons.org/licenses/by-nc-nd/4.0/       de_CH
dc.subject                  Deep reinforcement learning                             de_CH
dc.subject                  Deep Q-network                                          de_CH
dc.subject                  Financial trading                                       de_CH
dc.subject                  Future                                                  de_CH
dc.subject.ddc              006: Special computer methods                           de_CH
dc.subject.ddc              332.6: Investment                                       de_CH
dc.title                    Deep reinforcement learning on a multi-asset environment for trading    de_CH
dc.type                     Working paper – expert report – study                   de_CH
dcterms.type                Text                                                    de_CH
zhaw.departement            School of Engineering                                   de_CH
zhaw.departement            School of Management and Law                            de_CH
zhaw.organisationalunit     Institut für Datenanalyse und Prozessdesign (IDP)       de_CH
zhaw.organisationalunit     Institut für Wealth & Asset Management (IWA)            de_CH
dc.identifier.doi           10.21256/zhaw-22850                                     -
zhaw.funding.eu             No                                                      de_CH
zhaw.originated.zhaw        Yes                                                     de_CH
zhaw.author.additional      No                                                      de_CH
zhaw.display.portrait       Yes                                                     de_CH
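The abstract specifies the reinforcement-learning setup concretely: states are volatility-normalized daily returns, actions are buy (+1) or sell (-1), and the total reward is the cumulative profit from those actions. A minimal sketch of that state and reward construction follows; this is not the authors' code — the helper names `normalized_returns` and `cumulative_reward` and the 20-day volatility window are illustrative assumptions, and the DQN itself is omitted.

```python
import numpy as np

def normalized_returns(prices, vol_window=20):
    """State features: daily returns scaled by trailing rolling volatility."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]          # simple daily returns
    states = np.empty(len(returns) - vol_window)
    for t in range(vol_window, len(returns)):
        vol = returns[t - vol_window:t].std()        # trailing volatility estimate
        states[t - vol_window] = returns[t] / vol if vol > 0 else 0.0
    return states

def cumulative_reward(returns, actions):
    """Total reward: sum of per-step profits, with actions +1 (buy) or -1 (sell)."""
    return float(np.sum(np.asarray(actions) * np.asarray(returns)))
```

In a DQN setting, `normalized_returns` would feed the Q-network's input and `cumulative_reward` would accumulate the per-step rewards the agent is trained to maximize.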
Appears in collections: Publications, School of Management and Law

Files in This Item:
File                                             Size     Format
2021_Hirsa-etal_Deep-reinforcement-learning.pdf  1.04 MB  Adobe PDF
APA: Hirsa, A., Osterrieder, J., Hadji Misheva, B., & Posth, J.-A. (2021). Deep reinforcement learning on a multi-asset environment for trading. arXiv. https://doi.org/10.21256/zhaw-22850
Harvard: Hirsa, A. et al. (2021) Deep reinforcement learning on a multi-asset environment for trading. arXiv. Available at: https://doi.org/10.21256/zhaw-22850.
IEEE: A. Hirsa, J. Osterrieder, B. Hadji Misheva, and J.-A. Posth, “Deep reinforcement learning on a multi-asset environment for trading,” arXiv, 2021. doi: 10.21256/zhaw-22850.
ISO 690: HIRSA, Ali, Jörg OSTERRIEDER, Branka HADJI MISHEVA and Jan-Alexander POSTH, 2021. Deep reinforcement learning on a multi-asset environment for trading [online]. arXiv. Available at: https://arxiv.org/abs/2106.08437
Chicago: Hirsa, Ali, Jörg Osterrieder, Branka Hadji Misheva, and Jan-Alexander Posth. 2021. “Deep Reinforcement Learning on a Multi-Asset Environment for Trading.” arXiv. https://doi.org/10.21256/zhaw-22850.
MLA: Hirsa, Ali, et al. Deep Reinforcement Learning on a Multi-Asset Environment for Trading. arXiv, 2021, https://doi.org/10.21256/zhaw-22850.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.