shilewenuw/get_all_tickers

get_tickers() not working properly

Opened this issue · 11 comments

When running get_tickers() the length of the returned list is 19959.
When you run set() on the returned list, the new length is 6653.
Also, for each of the exchanges (AMEX, NYSE, NASDAQ) the same list of 6653 tickers is returned.
6653 * 3 = 19959, so it looks like the same tickers are being repeated for every exchange. A minimal way to reproduce the count check, assuming the module is imported as gt as in the traceback below:
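    from get_all_tickers import get_tickers as gt

    tickers = gt.get_tickers()
    print(len(tickers))       # reported above as 19959
    print(len(set(tickers)))  # reported above as 6653, i.e. every ticker appears three times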

get_tickers() is throwing an error:
list_of_tickers=gt.get_tickers()
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\get_all_tickers\get_tickers.py", line 73, in get_tickers
tickers_list.extend(__exchange2list('nyse'))
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\get_all_tickers\get_tickers.py", line 138, in __exchange2list
df = __exchange2df(exchange)
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\get_all_tickers\get_tickers.py", line 134, in __exchange2df
df = pd.read_csv(data, sep=",")
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\pandas\io\parsers.py", line 605, in read_csv
return _read(filepath_or_buffer, kwds)
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\pandas\io\parsers.py", line 463, in _read
return parser.read(nrows)
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\pandas\io\parsers.py", line 1052, in read
index, columns, col_dict = self._engine.read(nrows)
File "C:\Users\Taborda\AppData\Roaming\Python\Python39\site-packages\pandas\io\parsers.py", line 2056, in read
data = self._reader.read(nrows)
File "pandas_libs\parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read
File "pandas_libs\parsers.pyx", line 771, in pandas._libs.parsers.TextReader._read_low_memory
File "pandas_libs\parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows
File "pandas_libs\parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows
File "pandas_libs\parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 24, saw 46

Having the same issue

Maybe the page changed.
The lib expects CSV but gets HTML with paged data.

I couldn't find a direct link to the CSV download, although the download button is there. A quick check against the screener page (sketch; the User-Agent header is just a guess to avoid being blocked) shows it serves HTML rather than CSV, which would explain the tokenizing error:

https://www.nasdaq.com/market-activity/stocks/screener
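    import requests

    url = "https://www.nasdaq.com/market-activity/stocks/screener"
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    print(resp.status_code)
    print(resp.headers.get("Content-Type"))  # text/html, not text/csv
    print(resp.text[:200])                   # start of an HTML page, no CSV rows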

hinxx commented

This link gets me the JSON: https://api.nasdaq.com/api/screener/stocks?tableonly=true&limit=25&offset=0&download=true.
As for the CSV, it seems the page now uses JS to convert the JSON into CSV and serves that to the user. Not sure whether the CSV can be obtained directly from a URL, though. A rough sketch of pulling tickers from that JSON endpoint follows; the data -> rows layout and the "symbol" field name are assumptions, and the API may want browser-like headers:
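    import requests
    import pandas as pd

    URL = ("https://api.nasdaq.com/api/screener/stocks"
           "?tableonly=true&limit=25&offset=0&download=true")
    # Browser-like headers; the API may not answer plain scripted requests (assumption).
    HEADERS = {"User-Agent": "Mozilla/5.0", "Accept": "application/json"}

    resp = requests.get(URL, headers=HEADERS, timeout=30)
    payload = resp.json()

    # Assumed layout: {"data": {"rows": [{"symbol": ..., "name": ..., ...}, ...]}}
    rows = payload["data"]["rows"]
    df = pd.DataFrame(rows)
    print(len(df))
    print(df["symbol"].tolist()[:10])  # "symbol" field name is an assumption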

I was able to fix the issue by using the new URL @hinxx found; other functions also had to change, since the data from the JSON is structured a little differently.

https://github.com/dbondi/get_all_tickers/blob/master/get_all_tickers/get_tickers.py

Edit:
I changed the code so you can now search with multiple filters, including mktcap, analyst rating, country, region, and sector. I also removed some functions.

Here is the link to my older edit, which works with all of the functions in this repository. For anyone who wants to filter server-side instead, the screener endpoint itself appears to accept query parameters; a sketch is below, but the parameter names (exchange, sector) are assumptions, so check the screener page's network requests for the real ones.
https://github.com/dbondi/get_all_tickers/blob/699baf2a6f508d0f5a8b5a27348e738c4e39956e/get_all_tickers/get_tickers.py
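    import requests

    BASE = "https://api.nasdaq.com/api/screener/stocks"
    # Filter parameter names/values below are guesses; verify them against the
    # requests the screener page makes in the browser.
    params = {
        "tableonly": "true",
        "limit": "25",
        "offset": "0",
        "download": "true",
        "exchange": "NASDAQ",    # assumption
        "sector": "technology",  # assumption
    }
    resp = requests.get(BASE, params=params,
                        headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
    rows = resp.json()["data"]["rows"]
    print(len(rows))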

You can use this file:
get_tickers.py.txt

I took @JaisinhBhosale9712's change, cleaned up the debug code in it, and opened a PR bumping the version to 1.7: #17

Feel free to use the referenced branch until this gets merged, rather than copy-pasting the code: pip install git+https://github.com/rikbrown/get_all_tickers@nasdaq-fix

@dbondi's solution worked for everything except filtering by region. Any idea how to fix that part?

I'm still running into the error when getting all tickers, even after using the .py files and the .py.txt file above. I still get the tokenizing error:
ParserError: Error tokenizing data. C error: Expected 1 fields in line 5, saw 46


@rikbrown's branch works, thank you.


I have the same tokenizing error. Were you able to fix it somehow? Thanks.