mwaskom/seaborn-data

ValueError: '{name}' is not one of the example datasets.

oguzhari opened this issue · 8 comments

Hi!

I tried the load_dataset function, but I got a "ValueError: 'tips' is not one of the example datasets." error. At first I thought it was an issue with my computer, so I tried Colab, but the error is still there.


You can check the Python and seaborn versions here: [screenshot of version info]

Duplicate issue here. I teach a Python class and was able to collect 17 more reports of this issue.

[Screenshots: version info and the sns.load_dataset() error]

Those are both old versions of seaborn. I cannot replicate on the current version.

The issue actually happens in Anaconda. I think the latest version in conda is 0.12.2. Are there any plans to update the version in conda?

That's not something I have any control over.

@connorfryar I suggest running the !pip install seaborn --upgrade command on Colab to force the seaborn update. I think it solves the problem.

@mwaskom You were right, the problem occurs in old versions. Do you have any suggestions other than updating? Our students are beginners; they use Anaconda, and I am not sure they can handle updating with pip.

It looks like GitHub changed something about the HTML it returns from the repository homepage, and the list of files can no longer be parsed from it. That breaks get_dataset_names on < v0.13. (Fortunately, the approach we use to enumerate the valid dataset names changed for unrelated reasons prior to the v0.13 release, which is why it works fine on the latest version.) If you look at the source for load_dataset, it has:

    if cache:
        cache_path = os.path.join(get_data_home(data_home), os.path.basename(url))
        if not os.path.exists(cache_path):
            if name not in get_dataset_names():
                raise ValueError(f"'{name}' is not one of the example datasets.")
            urlretrieve(url, cache_path)
        full_path = cache_path
    else:
        full_path = url

So get_dataset_names only runs when you are trying to load from the cache and the dataset has not previously been cached. A little weird, but I think it suggests a few workarounds:

  1. Do a one-time setup where you cache all the datasets and then use load_dataset as normal
  2. Use cache=False every time you want to load data, e.g. load_dataset("tips", cache=False)
  3. Change your instructional materials to use pd.read_csv and point it at a path. The raw URL on this repo still works, e.g. pd.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv"); or you could clone the repo and distribute the datasets to your students so that they have them locally. (Both of these are sketched just below.)
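
For concreteness, workarounds (2) and (3) look like this in a notebook, using the tips dataset from the example above:

    import pandas as pd
    import seaborn as sns

    # Workaround (2): bypass the cache, so the broken get_dataset_names
    # check never runs (it only happens inside the `if cache:` branch)
    tips = sns.load_dataset("tips", cache=False)

    # Workaround (3): read the raw CSV from this repo directly with pandas
    tips = pd.read_csv(
        "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv"
    )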

Depending on your teaching setup, (1) is probably the easiest because it's a one-time thing, and then all commands should work as expected. (Although if you're using Colab, I don't know how persistent the cache will be; it may not work there.) It could look as simple as doing (in a Jupyter notebook):

import seaborn as sns

cache_dir = sns.get_data_home()
!git clone https://github.com/mwaskom/seaborn-data.git $cache_dir

But if students don't have git installed, that won't work; you'll want to cook up a simple script that enumerates the datasets you want and does pd.read_csv(web_path).to_csv(f"{data_home}/{name}.csv") for each one.
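
A minimal sketch of that script, assuming the raw-URL pattern shown above; the dataset list is illustrative, so swap in whatever your course materials actually use:

    import os

    import pandas as pd
    import seaborn as sns

    # Illustrative list; replace with the datasets your class needs.
    datasets = ["tips", "penguins", "flights"]

    raw_url = "https://raw.githubusercontent.com/mwaskom/seaborn-data/master/{name}.csv"

    # load_dataset checks this directory before calling get_dataset_names,
    # so pre-populating it sidesteps the broken check entirely.
    data_home = sns.get_data_home()

    for name in datasets:
        df = pd.read_csv(raw_url.format(name=name))
        df.to_csv(os.path.join(data_home, f"{name}.csv"), index=False)

After running this once, sns.load_dataset("tips") and friends should work as usual, because the files are already in the cache.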

Actually, never mind: I had an idea and I believe I've fixed this going forward. Hope that solves your problems, and sorry for the trouble.

It looks solved! Thanks for your attention!