Alternate data source error
chatRG opened this issue · 2 comments
Hi,
I am using the following snippet to fetch Yahoo data and process it in the same way as the CSV data source class:
```python
class YAHOODailyBarDataSource(object):
    def __init__(self, csv_dir='', asset_type='Equity', adjust_prices=True, csv_symbols=None):
        self.csv_dir = csv_dir
        self.asset_type = asset_type
        self.adjust_prices = adjust_prices
        self.csv_symbols = csv_symbols
        self.asset_bar_frames = self._load_data()
        self.asset_bid_ask_frames = {}

    def _load_data(self):
        # Fetch a daily bar DataFrame for each requested symbol
        asset_frames = {}
        for symbol in self.csv_symbols:
            asset_frames[symbol] = get_ticker_data(symbol)
        return asset_frames
```
I don't see any NaN values in the data, yet I get this error:
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-97-532501a3d301> in <module>()
----> 1 sixtyforty()

/usr/local/lib/python3.6/dist-packages/qstrader/portcon/order_sizer/dollar_weighted.py in __call__(self, dt, weights)
    161                 )  # Long only
    162                 asset_quantity = int(
--> 163                     np.floor(after_cost_dollar_weight / asset_price)
    164                 )
    165

ValueError: cannot convert float NaN to integer
```
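For reference, the failing step can be reproduced in isolation, which suggests the NaN is coming from `asset_price` at some timestamp rather than from the sizing arithmetic itself:

```python
import numpy as np

# NaN propagates through the division and np.floor, and int() then
# refuses the conversion -- the exact error raised in the order sizer.
asset_price = float('nan')
int(np.floor(10000.0 / asset_price))  # ValueError: cannot convert float NaN to integer
```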
- How do I fix this?
- Are there any future development plans for alternate data sources other than static CSV?
Thanks! :)
Hi @chatRG,
Thanks for your question. This `ValueError` generally indicates either that the CSV file for a particular ticker cannot be found in the chosen directory, or that the specified backtest start date is earlier than the start of the data.
Our intent is to provide more helpful errors for these situations in future releases.
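In the meantime, a quick way to check for the second cause is to compare the backtest start date against the first available bar of each loaded frame. This is a minimal sketch, assuming each frame is indexed by date as in your snippet; `start_dt` is a placeholder for your backtest start:

```python
import pandas as pd

# Placeholder backtest start; match the tz-awareness of your frame index
start_dt = pd.Timestamp('2010-01-01')

data_source = YAHOODailyBarDataSource(csv_symbols=['SPY', 'AGG'])
for symbol, frame in data_source.asset_bar_frames.items():
    first_bar = frame.index.min()
    if start_dt < first_bar:
        print(
            f"{symbol}: backtest starts {start_dt.date()} but data only "
            f"begins {first_bar.date()} -- prices before then will be NaN"
        )
```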
Kind regards,
Mike.
Hi @chatRG,
Just to let you know that I've now added a more helpful `ValueError` for the above issue and released it in v0.1.4 of QSTrader. It now mentions that the backtest start date is likely earlier than the earliest pricing date of a particular asset. Here is the relevant commit: 7b5cd2b
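Conceptually, the guard is along these lines (a rough illustration only, not the actual code; see the commit above for the real change):

```python
import numpy as np

def _sized_quantity(after_cost_dollar_weight, asset_price, asset, dt):
    # Hypothetical helper: fail loudly with a descriptive message
    # instead of letting NaN reach int() inside the order sizer
    if np.isnan(asset_price):
        raise ValueError(
            f'Asset price for "{asset}" at timestamp "{dt}" is NaN. '
            'The backtest start date is likely earlier than the earliest '
            'pricing date for this asset.'
        )
    return int(np.floor(after_cost_dollar_weight / asset_price))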
Also, I forgot to answer your second question! Yes, there are plans to add data sources other than static CSV. Initially this will likely be MySQL or PostgreSQL, but it will depend on what the community is generally interested in.
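As a rough idea of the direction, a SQL-backed source could mirror the CSV interface above. This is purely a sketch; the `daily_bars` table and its column names are assumptions, not QSTrader API:

```python
import pandas as pd
from sqlalchemy import create_engine, text

class SQLDailyBarDataSource(object):
    """Hypothetical daily bar source reading from a PostgreSQL/MySQL table."""

    def __init__(self, conn_str, symbols):
        self.engine = create_engine(conn_str)
        self.symbols = symbols
        self.asset_bar_frames = self._load_data()

    def _load_data(self):
        # One date-indexed OHLCV DataFrame per symbol, as with the CSV source
        query = text(
            "SELECT date, open, high, low, close, volume "
            "FROM daily_bars WHERE symbol = :sym ORDER BY date"
        )
        frames = {}
        for symbol in self.symbols:
            frames[symbol] = pd.read_sql(
                query, self.engine,
                params={"sym": symbol},
                index_col="date", parse_dates=["date"],
            )
        return frames
```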
Kind regards,
Mike.