Factor Rotation
Together with RAM Active Investments (RAM AI), we will explore the use of Reinforcement Learning (RL) for factor investing. The document below gives a brief overview of the use-case specification as discussed with RAM AI, some related literature, and potential challenges we will face during this project.
Determining Factor Allocation
- Research Question 1: Based on a static multi-factor model, can RL improve on the static factor loadings?
- Research Question 2: Based on a dynamic multi-factor model, can RL achieve better results through optimized factor rotations?
Possible further specification: Determining Equity Portfolio Weights
- Research Question: Can RL be used to directly determine the asset weights of an equity portfolio when factor data is provided as inputs?
Possible further specification: MaxESG for Investment Strategies
- Research Question: Can RL be used to optimize the ESG/Sustainability profile of an investment portfolio, given that relevant ESG information is provided in the form of additional factors?
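The distinction between Research Questions 1 and 2 can be illustrated with a toy computation. All numbers, factor names, and weight choices below are hypothetical; the point is only that static loadings apply one weight vector to every period, while a rotation policy (such as an RL agent) may choose new weights each period:

```python
# Hypothetical per-period returns for three factors: value, momentum, quality.
factor_returns = [
    [0.010, -0.005, 0.002],
    [-0.004, 0.012, 0.001],
    [0.006, 0.003, -0.002],
]

# RQ1: static loadings, fixed for all periods (equal weight here).
static_w = [1 / 3, 1 / 3, 1 / 3]
static_pnl = [sum(w * r for w, r in zip(static_w, period))
              for period in factor_returns]

# RQ2: dynamic loadings, re-chosen each period (e.g. by an RL policy).
dynamic_w = [
    [0.6, 0.2, 0.2],  # tilt towards value
    [0.1, 0.8, 0.1],  # rotate into momentum
    [0.5, 0.4, 0.1],
]
dynamic_pnl = [sum(w * r for w, r in zip(wt, period))
               for wt, period in zip(dynamic_w, factor_returns)]
```

In this constructed example the rotation happens to pick the winning factor each period, so the dynamic allocation outperforms the static one; whether a learned policy can do so out of sample is exactly the research question.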
Having discussed the potential use cases, RAM AI has shown the most interest in a dynamic factor allocation strategy based on RL. We will therefore focus on this use case first.
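To make the dynamic factor allocation use case concrete, a minimal environment sketch is given below. The class name, state definition (a trailing window of factor returns), and reward (next-period portfolio return) are all assumptions for illustration, loosely following the Gym-style reset/step convention rather than any specific library API:

```python
import random

class FactorRotationEnv:
    """Sketch of a factor-rotation environment (hypothetical interface).

    State:  trailing window of factor returns.
    Action: allocation weights over the factors.
    Reward: next-period portfolio return under those weights.
    """

    def __init__(self, factor_returns, window=12):
        self.factor_returns = factor_returns  # list of per-period return lists
        self.window = window
        self.t = window

    def reset(self):
        self.t = self.window
        return self._observation()

    def _observation(self):
        return self.factor_returns[self.t - self.window:self.t]

    def step(self, weights):
        # Normalise the action so the allocation sums to one.
        total = sum(weights)
        weights = [w / total for w in weights]
        reward = sum(w * r for w, r in zip(weights, self.factor_returns[self.t]))
        self.t += 1
        done = self.t >= len(self.factor_returns)
        return (self._observation() if not done else None), reward, done

# Usage with random synthetic data and a naive equal-weight policy:
random.seed(0)
data = [[random.gauss(0.002, 0.02) for _ in range(3)] for _ in range(60)]
env = FactorRotationEnv(data, window=12)
obs, total_return, done = env.reset(), 0.0, False
while not done:
    obs, reward, done = env.step([1.0, 1.0, 1.0])  # equal weight each step
    total_return += reward
```

A real implementation would add transaction costs, a richer state (e.g. macro or valuation signals), and constraints on the weights; this skeleton only fixes the interface an RL agent would train against.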
Existing Literature and Frameworks:
The existing literature on RL for factor investing specifically is very thin. Some attempts have been made, such as [NOW], but that work appears to be unfinished. There is a longer list of research on RL for portfolio management in general, and hence for equity trading, published over the last few years [SAT], but none of it focuses on factor investing specifically. Most RL approaches to stock trading are also based on an unrealistically small and/or selective number of stocks [LZC] and/or consider only price-related data as inputs to the RL agent [ZHU].
One promising project is the openly available library FinRL [XIA], which is very close to the aim of this project. The usefulness and applicability of its modules for this project will, however, have to be explored in more detail.
Challenges:
- How to deal with large observation and action spaces?
- How to deal with factor ambiguity and overlap/correlation?
- Very little training data is available, which likely calls for a realistic simulator / artificial market data to train the agent.
- How to incorporate the agent's influence on the environment?
- There is no direct definition of an "Episode" in factor investing. Can we make this task episodic?
- How to shape rewards?
- How to avoid overfitting and gain confidence in future performance?
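On the reward-shaping question, one concrete option is a per-step reward that combines the realised portfolio return with penalties for risk and turnover. The functional form and all coefficients below are placeholders to be tuned, not a settled design:

```python
def shaped_reward(portfolio_return, prev_weights, new_weights,
                  risk_penalty=0.5, cost_per_unit_turnover=0.001):
    """Sketch of a shaped per-step reward: realised return, minus a
    quadratic risk penalty, minus linear transaction costs on turnover.
    All coefficients are hypothetical and would need tuning."""
    turnover = sum(abs(n - p) for n, p in zip(new_weights, prev_weights))
    return (portfolio_return
            - risk_penalty * portfolio_return ** 2
            - cost_per_unit_turnover * turnover)

# With identical returns, rotating fully out of one factor is penalised
# relative to holding the current allocation:
r_hold = shaped_reward(0.01, [0.5, 0.5], [0.5, 0.5])
r_rotate = shaped_reward(0.01, [0.5, 0.5], [1.0, 0.0])
```

The turnover term also partially addresses the agent's influence on the environment, since aggressive rotation is made expensive rather than free.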
Further questions to be addressed:
Which equity universe to focus on? –> This will depend on data availability, but usually a US stock universe is used.
Where do we get the data from? –> We can use Refinitiv, but for the purpose of our publications on the website, public data would be preferred. An option could be to standardize data from Refinitiv and ask for permission to publish it, as done by [GUI]. This also depends heavily on the scope we agree on; e.g., for any ESG-related topic, additional data would need to be sourced.
Design choices for a simple multi-factor model. –> To be made together with RAM AI.
We need to strike a balance between a realistic backtesting tool and not spending too much time implementing the necessary infrastructure before RL can even be investigated. Potentially, we can rely on the implementations of [XIA] or existing in-house solutions to get started faster.
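To keep the initial backtesting infrastructure lightweight, a simple walk-forward evaluation scheme may suffice before adopting [XIA] or an in-house engine, and it also bears directly on the overfitting concern above. The helper below is a sketch with illustrative window lengths:

```python
def walk_forward_splits(n_periods, train_len, test_len):
    """Yield (train_indices, test_indices) pairs for walk-forward
    evaluation; the agent is retrained on each train window and
    evaluated only on the subsequent unseen test window."""
    start = 0
    while start + train_len + test_len <= n_periods:
        train = list(range(start, start + train_len))
        test = list(range(start + train_len, start + train_len + test_len))
        yield train, test
        start += test_len  # roll forward by one test block

# e.g. 10 periods, train on 4, evaluate on the next 2 -> three folds
splits = list(walk_forward_splits(10, 4, 2))
```

Because each test block lies strictly after its training window, out-of-sample performance aggregated over the folds gives a less overfit estimate than a single in-sample backtest.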
The code for this use case can be found on GitHub: Link
References:
- [GUI] Tony Guida and Guillaume Coqueret (2021) "Machine Learning for Factor Investing", http://www.mlfactor.com/.
- [LZC] Jian Li, Kun Zhang, and Laiwan Chan (2007) "Independent Factor Reinforcement Learning for Portfolio Management", http://dl.ifip.org/db/conf/ideal/ideal2007/LiZC07.pdf.
- [NOW] Nowicki, Pier (2019) "Deep Reinforcement Learning Framework for Factor Investing", https://cs230.stanford.edu/projects_fall_2019/reports/26251841.pdf.
- [SAT] Sato, Yoshiharu (2019) "Model-Free Reinforcement Learning for Financial Portfolios: A Brief Survey", https://arxiv.org/abs/1904.04973.
- [XIA] Xiao-Yang Liu, Hongyang Yang, Qian Chen, Runjia Zhang, Liuqing Yang, Bowen Xiao, Christina Dan Wang (2020) "FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance", https://arxiv.org/abs/2011.09607.
- [ZHU] Zhuoran Xiong, Xiao-Yang Liu, Shan Zhong, Hongyang Yang, Anwar Walid (2018) "Practical Deep Reinforcement Learning Approach for Stock Trading", https://arxiv.org/abs/1811.07522.