unexpected keyword argument 'use_gpu'
Closed this issue · 4 comments
bobermayer commented
Hi,
I'm trying to run the tutorial, but it fails during vae.train() with the message Trainer.__init__() got an unexpected keyword argument 'use_gpu'.
TypeError Traceback (most recent call last)
Cell In[5], line 3
1 VELOVI.setup_anndata(adata, spliced_layer="Ms", unspliced_layer="Mu")
2 vae = VELOVI(adata)
----> 3 vae.train()
File ~/miniconda3/envs/brain_organoids/lib/python3.11/site-packages/velovi/_model.py:187, in VELOVI.train(self, max_epochs, lr, weight_decay, use_gpu, train_size, validation_size, batch_size, early_stopping, gradient_clip_val, plan_kwargs, **trainer_kwargs)
183 es = "early_stopping"
184 trainer_kwargs[es] = (
185 early_stopping if es not in trainer_kwargs.keys() else trainer_kwargs[es]
186 )
--> 187 runner = TrainRunner(
188 self,
189 training_plan=training_plan,
190 data_splitter=data_splitter,
191 max_epochs=max_epochs,
192 use_gpu=use_gpu,
193 **trainer_kwargs,
194 )
195 return runner()
File ~/miniconda3/envs/brain_organoids/lib/python3.11/site-packages/scvi/train/_trainrunner.py:82, in TrainRunner.__init__(self, model, training_plan, data_splitter, max_epochs, accelerator, devices, **trainer_kwargs)
79 if getattr(self.training_plan, "reduce_lr_on_plateau", False):
80 trainer_kwargs["learning_rate_monitor"] = True
---> 82 self.trainer = self._trainer_cls(
83 max_epochs=max_epochs,
84 accelerator=accelerator,
85 devices=lightning_devices,
86 **trainer_kwargs,
87 )
88 # currently set for MetricsCallback
89 self.trainer._model = model
File ~/miniconda3/envs/brain_organoids/lib/python3.11/site-packages/scvi/train/_trainer.py:171, in Trainer.__init__(self, accelerator, devices, benchmark, check_val_every_n_epoch, max_epochs, default_root_dir, enable_checkpointing, checkpointing_monitor, num_sanity_val_steps, enable_model_summary, early_stopping, early_stopping_monitor, early_stopping_min_delta, early_stopping_patience, early_stopping_mode, additional_val_metrics, enable_progress_bar, progress_bar_refresh_rate, simple_progress_bar, logger, log_every_n_steps, learning_rate_monitor, **kwargs)
168 if logger is None:
169 logger = SimpleLogger()
--> 171 super().__init__(
172 accelerator=accelerator,
173 devices=devices,
174 benchmark=benchmark,
175 check_val_every_n_epoch=check_val_every_n_epoch,
176 max_epochs=max_epochs,
177 default_root_dir=default_root_dir,
178 enable_checkpointing=enable_checkpointing,
179 num_sanity_val_steps=num_sanity_val_steps,
180 enable_model_summary=enable_model_summary,
181 logger=logger,
182 log_every_n_steps=log_every_n_steps,
183 enable_progress_bar=enable_progress_bar,
184 callbacks=callbacks,
185 **kwargs,
186 )
File ~/miniconda3/envs/brain_organoids/lib/python3.11/site-packages/lightning/pytorch/utilities/argparse.py:70, in _defaults_from_env_vars.<locals>.insert_env_defaults(self, *args, **kwargs)
67 kwargs = dict(list(env_variables.items()) + list(kwargs.items()))
69 # all args were already moved to kwargs
---> 70 return fn(self, **kwargs)
TypeError: Trainer.__init__() got an unexpected keyword argument 'use_gpu'
I'm not able to use vae.train(use_gpu=False) either (same error).
Any ideas? Thanks for your help!
Here are my versions:
sc.logging.print_versions()
-----
anndata 0.10.5.post1
scanpy 1.9.8
-----
IPython 8.21.0
PIL 10.2.0
absl NA
aiohttp 3.9.3
aiosignal 1.3.1
annotated_types 0.6.0
anyio NA
asttokens NA
attr 23.2.0
backoff 2.2.1
brotli 1.1.0
bs4 4.12.3
certifi 2024.02.02
cffi 1.16.0
charset_normalizer 3.3.2
chex 0.1.8
click 8.1.7
colorama 0.4.6
comm 0.2.1
contextlib2 NA
croniter NA
cycler 0.12.1
cython_runtime NA
dateutil 2.8.2
decorator 5.1.1
deepdiff 6.7.1
defusedxml 0.7.1
docrep 0.3.2
etils 1.7.0
executing 2.0.1
fastapi 0.109.2
flax 0.8.1
frozenlist 1.4.1
fsspec 2024.2.0
gmpy2 2.1.2
google NA
h5py 3.10.0
idna 3.6
importlib_resources NA
ipywidgets 8.1.2
jax 0.4.23
jaxlib 0.4.23.dev20240125
jedi 0.19.1
joblib 1.3.2
kiwisolver 1.4.5
lightning 2.0.9.post0
lightning_cloud NA
lightning_utilities 0.10.1
llvmlite 0.42.0
matplotlib 3.8.3
ml_collections NA
ml_dtypes 0.3.2
mpi4py 3.1.5
mpl_toolkits NA
mpmath 1.3.0
msgpack 1.0.7
mudata 0.2.3
multidict 6.0.5
multipart 0.0.9
multipledispatch 0.6.0
natsort 8.4.0
numba 0.59.0
numpy 1.26.4
numpyro 0.13.2
opt_einsum v3.3.0
optax 0.1.9
ordered_set 4.1.0
orjson 3.9.10
packaging 23.2
pandas 2.2.0
parso 0.8.3
patsy 0.5.6
pickleshare 0.7.5
pkg_resources NA
prompt_toolkit 3.0.42
psutil 5.9.8
pure_eval 0.2.2
pycparser 2.21
pydantic 2.1.1
pydantic_core 2.4.0
pygments 2.17.2
pynndescent 0.5.11
pyparsing 3.1.1
pyro 1.9.0+f02dfb9
pytz 2024.1
requests 2.31.0
rich NA
scipy 1.12.0
scvelo 0.3.1
scvi 1.1.1
seaborn 0.13.2
session_info 1.0.0
six 1.16.0
sklearn 1.1.3
sniffio 1.3.0
socks 1.7.1
soupsieve 2.5
sparse 0.15.1
sphinxcontrib NA
stack_data 0.6.2
starlette 0.36.3
statsmodels 0.14.1
sympy 1.12
threadpoolctl 3.3.0
toolz 0.12.1
torch 2.1.2.post301
torchgen NA
torchmetrics 1.2.1
tqdm 4.66.2
traitlets 5.14.1
tree 0.1.8
typing_extensions NA
umap 0.5.5
urllib3 2.2.1
uvicorn 0.27.1
velovi 0.3.0
wcwidth 0.2.13
websocket 1.7.0
websockets 12.0
wrapt 1.16.0
yaml 6.0.1
yarl 1.9.4
-----
Python 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Linux-6.1.0-1033-oem-x86_64-with-glibc2.35
-----
Session information updated at 2024-03-04 13:31
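From the traceback it looks like velovi 0.3.0 still passes use_gpu down to scvi-tools' TrainRunner, while scvi 1.1.1 only accepts accelerator/devices there. In case it's useful to others hitting this, here is a minimal workaround sketch, assuming an older scvi-tools release still accepts use_gpu (the version bound below is a guess, not something I've verified):

```python
# Workaround sketch (untested): pin scvi-tools to a release that still accepted `use_gpu`,
# e.g.  pip install "scvi-tools<1.1" velovi==0.3.0
from velovi import VELOVI

# adata: an AnnData with "Ms"/"Mu" layers, prepared as in the velovi tutorial
VELOVI.setup_anndata(adata, spliced_layer="Ms", unspliced_layer="Mu")
vae = VELOVI(adata)
vae.train()  # with an older scvi-tools, the forwarded `use_gpu` kwarg is accepted again
```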
lzygenomics commented
I have the same problem : )
bobermayer commented
martinkim0 commented
Hi, sorry about this - I've added a fix in #24 and will be re-releasing.
martinkim0 commented
Going to close this issue now since 0.3.1 includes a fix.
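For anyone finding this later: after upgrading, hardware selection goes through the Lightning-style accelerator/devices arguments that current scvi-tools uses instead of use_gpu. A minimal sketch, assuming velovi 0.3.1 forwards these trainer kwargs (check the train() signature of your installed version):

```python
# pip install -U velovi   # 0.3.1 or later includes the fix referenced above
from velovi import VELOVI

# adata: an AnnData with "Ms"/"Mu" layers, as in the tutorial
VELOVI.setup_anndata(adata, spliced_layer="Ms", unspliced_layer="Mu")
vae = VELOVI(adata)

# CPU-only training; for a GPU, accelerator="gpu", devices=1 is the usual Lightning spelling
vae.train(accelerator="cpu", devices=1)
```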