atreadw1492/yahoo_fin

get_quote_table not working

~/Library/Python/3.9/lib/python/site-packages/yahoo_fin/stock_info.py in get_quote_table(ticker, dict_result, headers)
293 tables = pd.read_html(requests.get(site, headers=headers).text)
294
--> 295 data = tables[0].append(tables[1])
296
297 data.columns = ["attribute" , "value"]

IndexError: list index out of range

Hi there, I'm not very experienced in this, but I believe get_quote_table is failing with "list index out of range" because it scrapes Yahoo Finance, and Yahoo recently changed the quote page layout, so pd.read_html no longer finds the tables the function expects. This probably needs an update from the maintainers. I'm working on scraping the data from the page source directly; I'll share my code once it's done, and perhaps we can update yahoo_fin as well.
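In case it helps anyone confirm the diagnosis, here is a minimal check (just a sketch, assuming only requests and pandas, with AAPL as an example ticker) that shows how many tables pd.read_html can still find on the quote page; anything fewer than two reproduces the IndexError above:

import requests
import pandas as pd

site = "https://finance.yahoo.com/quote/AAPL?p=AAPL"
html = requests.get(site, headers={"User-agent": "Mozilla/5.0"}).text

try:
    tables = pd.read_html(html)
    # tables[0].append(tables[1]) in stock_info.py needs at least two tables
    print("pd.read_html found", len(tables), "table(s)")
except ValueError as err:
    # pd.read_html raises ValueError when it finds no tables at all
    print("pd.read_html found no tables:", err)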

The following code works with pandas 2+:

import requests
import pandas as pd
from bs4 import BeautifulSoup
import yahoo_fin.stock_info as si


def get_quote_table(ticker, dict_result=True, headers={'User-agent': 'Mozilla/5.0'}) -> dict:

    '''Scrapes data elements found on Yahoo Finance's quote page
       of input ticker

       @param: ticker
       @param: dict_result = True
    '''

    site = "https://finance.yahoo.com/quote/" + ticker + "?p=" + ticker

    html = requests.get(site, headers=headers).text

    soup = BeautifulSoup(html, 'lxml')

    # Select the quote-statistics section and collect each label/value pair
    quote_statistics = soup.find("div", {"data-testid": "quote-statistics"})
    stats = {}
    for row in quote_statistics.find_all("li"):
        spans = row.find_all("span")
        stats[spans[0].text] = spans[1].text

    data = pd.DataFrame([stats])

    data = data.drop_duplicates().reset_index(drop=True)

    data = data.transpose()

    data = data.reset_index()

    data.columns = ["attribute", "value"]

    # Append the live quote price as one more attribute/value row
    quote_price = pd.DataFrame(["Quote Price", si.get_live_price(ticker)]).transpose()
    quote_price.columns = data.columns.copy()

    data = pd.concat([data, quote_price], ignore_index=True)

    data = data.drop_duplicates().reset_index(drop=True)

    data["value"] = data.value.map(si.force_float)

    if dict_result:
        result = {key: val for key, val in zip(data.attribute, data.value)}
        return result

    return data
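
A quick way to try it out (assuming the function above is defined in the current session; AAPL is just an example ticker):

info = get_quote_table("AAPL")
for attribute, value in info.items():
    print(attribute, value)

# Or get the raw two-column DataFrame instead of a dict
table = get_quote_table("AAPL", dict_result=False)
print(table)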