FinRL: Financial Reinforcement Learning Framework.🔥

FinRL: Deep Reinforcement Learning for Quantitative Finance

Disclaimer: Nothing herein is financial advice, and it is NOT a recommendation to trade real money. Please use common sense and always consult a professional before trading or investing.

Our Mission: to efficiently automate trading. We continuously develop and share code for finance.

Our Vision: The AI community has accumulated an ocean of open-source code over the past decade. We believe that applying this intellectual and engineering property to finance will initiate a paradigm shift from the conventional trading routine to an automated machine-learning approach, even RLOps in finance.

FinRL is the first open-source framework to demonstrate the great potential of applying deep reinforcement learning in quantitative finance. We help practitioners establish the development pipeline of trading strategies using deep reinforcement learning (DRL). A DRL agent learns by continuously interacting with an environment in a trial-and-error manner, making sequential decisions under uncertainty, and achieving a balance between exploration and exploitation.
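For readers new to DRL, this trial-and-error loop can be sketched with a standard OpenAI gym-style environment. The loop below is only an illustration (not FinRL's API) and assumes the classic gym interface, where step returns observation, reward, done, and info; FinRL's trading environments expose the same reset/step interface.

```python
# Illustrative only: the generic DRL trial-and-error loop on a gym-style environment.
# "CartPole-v1" is a placeholder; a FinRL trading environment could be swapped in.
import gym

env = gym.make("CartPole-v1")
state = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()            # exploration: sample a random action
    state, reward, done, info = env.step(action)  # the environment returns feedback
    episode_return += reward                      # accumulate reward over the episode
print(f"episode return: {episode_return}")
```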

News: We will release code for both paper trading and live trading. Please let us know your coding needs.

Join us to discuss FinRL on the AI4Finance mailing list and the AI4Finance Slack channel.

Follow us on WeChat:

The ecosystem of FinRL:

FinRL 1.0: entry-level for beginners, with a demonstrative and educational purpose.

FinRL 2.0: intermediate-level for full-stack developers and professionals, ElegantRL.

FinRL 3.0: advanced-level for investment banks and hedge funds, a cloud-native solution FinRL-podracer.

FinRL 0.0: hundreds of training/testing/trading environments in FinRL-Meta.

FinRL provides a unified framework for various markets, SOTA DRL algorithms, finance tasks (portfolio allocation, cryptocurrency trading, high-frequency trading), live trading support, etc.

Outline

  • Tutorials
  • News

Overview

A video about the FinRL library is available on the AI4Finance YouTube Channel for quantitative finance.

Supported Data Sources:

| Data Source | Type | Range and Frequency | Request Limits | Raw Data |
| --- | --- | --- | --- | --- |
| Yahoo! Finance | US Securities | Frequency-specific, 1min | 2,000/hour | OHLCV |
| CCXT | Cryptocurrency | API-specific, 1min | API-specific | OHLCV |
| WRDS.TAQ | US Securities | 2003-now, 1ms | 5 requests each time | Intraday Trades |
| Alpaca | US Stocks, ETFs | 2015-now, 1min | Account-specific | OHLCV |
| RiceQuant | CN Securities | 2005-now, 1ms | Account-specific | OHLCV |
| JoinQuant | CN Securities | 2005-now, 1min | 3 requests each time | OHLCV |
| QuantConnect | US Securities | 1998-now, 1s | NA | OHLCV |
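For a quick look at the raw OHLCV data, the snippet below pulls daily bars from Yahoo! Finance using the yfinance package; the ticker and date range are placeholders, and FinRL's own data downloaders wrap similar calls.

```python
# A minimal sketch: download daily OHLCV bars from Yahoo! Finance via yfinance.
# Ticker and dates are placeholders, not part of FinRL's API.
import yfinance as yf

df = yf.download("AAPL", start="2020-01-01", end="2021-01-01", interval="1d")
print(df[["Open", "High", "Low", "Close", "Volume"]].head())
```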

DRL Algorithms

ElegantRL implements Deep Q-Learning (DQN), Double DQN, DDPG, A2C, SAC, PPO, TD3, GAE, MADDPG, etc., using PyTorch.
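For orientation, FinRL's agents follow the usual train-then-predict pattern. The sketch below uses Stable-Baselines3 (which FinRL also supports, per the version history below) on a placeholder gym environment; one of FinRL's trading environments can be dropped in with the same calls.

```python
# A minimal sketch of training a DRL agent with Stable-Baselines3 (PPO).
# "CartPole-v1" is a placeholder; a FinRL trading environment exposes the
# same gym interface and could be substituted.
import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)     # pick the algorithm and policy network
model.learn(total_timesteps=10_000)          # trial-and-error training
obs = env.reset()
action, _ = model.predict(obs, deterministic=True)  # act with the trained policy
```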

Status Update

Version History
  • 2021-08-25 0.3.1: PyTorch version with a three-layer architecture: apps (financial tasks), drl_agents (DRL algorithms), and neo_finrl (gym environments).
  • 2020-12-14: Upgraded to PyTorch with Stable-Baselines3; TensorFlow 1.0 support is removed for now, and TensorFlow 2.0 support is under development.
  • 2020-11-27 0.1: Beta version with TensorFlow 1.5.

Installation
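The installation commands did not survive in this copy. Assuming the finrl package on PyPI referenced by the badge above, a typical setup is:

```
pip install finrl
```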

Contributions

  • FinRL is the first open-source framework to demonstrate the great potential of applying DRL algorithms in quantitative finance. We build an ecosystem around the FinRL framework, which seeds the rapidly growing AI4Finance community.
  • The application layer provides interfaces for users to customize FinRL for their own trading tasks. An automated backtesting tool and performance metrics are provided to help quantitative traders iterate their strategies at a high turnover rate. Profitable trading strategies are reproducible, and hands-on tutorials are provided in a beginner-friendly fashion. Adjusting the trained models to rapidly changing markets is also possible.
  • The agent layer provides state-of-the-art DRL algorithms that are adapted to finance with fine-tuned hyperparameters. Users can add new DRL algorithms.
  • The environment layer includes not only a collection of historical data APIs, but also live trading APIs. They are reconfigured into standard OpenAI gym-style environments. Moreover, it incorporates market frictions and allows users to customize the trading time granularity (a minimal gym-style sketch follows this list).
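To make the environment layer concrete, here is a hedged, hypothetical skeleton of a gym-style trading environment. The state layout, action meaning, and reward below are illustrative choices, not FinRL's actual environment classes.

```python
# Hypothetical skeleton of a gym-style trading environment (illustrative only).
# State: current price, cash, and share holdings.
# Action: target fraction of portfolio value held in the asset, in [-1, 1].
# Reward: change in portfolio value after the price moves one step.
import gym
import numpy as np
from gym import spaces

class MinimalTradingEnv(gym.Env):
    def __init__(self, prices, initial_cash=10_000.0):
        super().__init__()
        self.prices = np.asarray(prices, dtype=np.float32)
        self.initial_cash = initial_cash
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float32)

    def reset(self):
        self.t = 0
        self.cash = self.initial_cash
        self.shares = 0.0
        return self._obs()

    def step(self, action):
        price = self.prices[self.t]
        value = self.cash + self.shares * price
        target_shares = float(action[0]) * value / price   # rebalance to the target exposure
        self.cash = value - target_shares * price
        self.shares = target_shares
        self.t += 1
        new_value = self.cash + self.shares * self.prices[self.t]
        reward = new_value - value                          # reward: change in portfolio value
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.array([self.prices[self.t], self.cash, self.shares], dtype=np.float32)

# Usage example: env = MinimalTradingEnv(prices=np.linspace(100.0, 110.0, 50))
```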

Publications

We have published FinTech papers (see Google Scholar) that led to this project:

  • FinRL-Meta: Data-driven deep reinforcement learning in quantitative finance, Data-Centric AI Workshop, NeurIPS 2021.
  • Explainable deep reinforcement learning for portfolio management: An empirical approach, paper, ACM International Conference on AI in Finance, ICAIF 2021.
  • FinRL-Podracer: High performance and scalable deep reinforcement learning for quantitative finance, ACM International Conference on AI in Finance, ICAIF 2021.
  • FinRL: Deep reinforcement learning framework to automate trading in quantitative finance, ACM International Conference on AI in Finance, ICAIF 2021.
  • FinRL: A deep reinforcement learning library for automated stock trading in quantitative finance, Deep RL Workshop, NeurIPS 2020.
  • Deep reinforcement learning for automated stock trading: An ensemble strategy, paper and codes, ACM International Conference on AI in Finance, ICAIF 2020.
  • Multi-agent reinforcement learning for liquidation strategy analysis, paper and codes, Workshop on Applications and Infrastructure for Multi-Agent Learning, ICML 2019.
  • Practical deep reinforcement learning approach for stock trading, paper and codes, Workshop on Challenges and Opportunities for AI in Financial Services, NeurIPS 2018.

Citing FinRL

@article{finrl2020,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Chen, Qian and Zhang, Runjia and Yang, Liuqing and Xiao, Bowen and Wang, Christina Dan},
    title   = {{FinRL}: A deep reinforcement learning library for automated stock trading in quantitative finance},
    journal = {Deep RL Workshop, NeurIPS 2020},
    year    = {2020}
}
@article{liu2021finrl,
    author  = {Liu, Xiao-Yang and Yang, Hongyang and Gao, Jiechao and Wang, Christina Dan},
    title   = {{FinRL}: Deep reinforcement learning framework to automate trading in quantitative finance},
    journal = {ACM International Conference on AI in Finance (ICAIF)},
    year    = {2021}
}

To Contribute

Welcome to join the AI4Finance Foundation community!

Please check the Contributing Guidelines.

Contributors

Thanks to our contributors!

LICENSE

MIT License
