ContinuumIO/anaconda-package-data

Binder examples not working

Closed this issue · 0 comments

The examples in the Binder notebook are failing with this error:

>>> df = dd.read_parquet('s3://anaconda-package-data/conda/hourly/2018/12/2018-12-31.parquet',
...                      storage_options={'anon': True})
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-3-37350afb994b> in <module>
      1 df = dd.read_parquet('s3://anaconda-package-data/conda/hourly/2018/12/2018-12-31.parquet',
----> 2                      storage_options={'anon': True})

/srv/conda/envs/notebook/lib/python3.7/site-packages/dask/dataframe/io/parquet/core.py in read_parquet(path, columns, filters, categories, index, storage_options, engine, gather_statistics, **kwargs)
    135     if hasattr(path, "name"):
    136         path = stringify_path(path)
--> 137     fs, _, paths = get_fs_token_paths(path, mode="rb", storage_options=storage_options)
    138 
    139     paths = sorted(paths, key=natural_sort_key)  # numeric rather than glob ordering

/srv/conda/envs/notebook/lib/python3.7/site-packages/fsspec/core.py in get_fs_token_paths(urlpath, mode, num, name_function, storage_options, protocol)
    313         cls = get_filesystem_class(protocol)
    314 
--> 315         options = cls._get_kwargs_from_urls(urlpath)
    316         path = cls._strip_protocol(urlpath)
    317         update_storage_options(options, storage_options)

AttributeError: type object 'S3FileSystem' has no attribute '_get_kwargs_from_urls'
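For context, `get_fs_token_paths` calls a `_get_kwargs_from_urls` classmethod on the filesystem class, and a filesystem class that predates that hook fails in exactly this way. A minimal sketch of the contract, using toy classes rather than the real s3fs/fsspec API:

```python
class OldFileSystem:
    """Mimics a filesystem class from before fsspec required the hook."""
    pass


class NewFileSystem:
    """Provides the classmethod that fsspec's get_fs_token_paths calls."""

    @classmethod
    def _get_kwargs_from_urls(cls, urlpath):
        # Real filesystems parse connection kwargs out of the URL;
        # this toy version contributes none.
        return {}


def resolve(cls, urlpath):
    # Mirrors the failing call at fsspec/core.py line 315 in the traceback.
    return cls._get_kwargs_from_urls(urlpath)


print(resolve(NewFileSystem, "s3://bucket/key"))  # prints {}

try:
    resolve(OldFileSystem, "s3://bucket/key")
except AttributeError as exc:
    print(exc)  # same failure mode as the traceback above
```

This reproduces the `AttributeError: type object ... has no attribute '_get_kwargs_from_urls'` shape of the failure without needing S3 access.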

I suspect s3fs changed its API in a recent release; its version should be pinned in environment.yml.
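If that turns out to be the cause, the fix would be a version pin in the repo's environment.yml. A sketch of what that might look like; the package list and the pin value are placeholders, not verified versions:

```yaml
# environment.yml (sketch; the actual file and working pin need verification)
name: notebook
dependencies:
  - python=3.7
  - dask
  - pyarrow
  - s3fs=0.2.*   # placeholder pin: replace with a version known to match the installed fsspec
```

Pinning only s3fs may not be enough if fsspec and dask also need to agree; whichever combination is verified to work in Binder should be pinned together.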