scverse/scvi-tools

Autotune no longer seems to accept a learning rate argument

Closed this issue · 3 comments

Previously, you could sample the learning rate from a distribution. When I try to reproduce similar behavior with the current autotune, it throws an error during model training.

model_cls = scvi.model.SCVI

model_cls.setup_anndata(adata, layer="counts", batch_key='sample_id')

search_space = {
    "model_params": {
        "n_hidden": tune.choice([64, 128, 256]),
        "n_layers": tune.choice([1, 2, 3, 4]),
        "n_latent": tune.choice([10, 20, 30, 40, 50]),
        "gene_likelihood": tune.choice(["nb", "zinb"]),
    },
    "train_params": {
        "max_epochs": 100,
        "lr": tune.loguniform(1e-4, 1e-2),  # placing "lr" here triggers the error below
    },
}
(raylet) Traceback (most recent call last):
  File "python/ray/_raylet.pyx", line 1807, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1908, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1813, in ray._raylet.execute_task
  File "python/ray/_raylet.pyx", line 1754, in ray._raylet.execute_task.function_executor
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/_private/function_manager.py", line 726, in actor_method_executor
    return method(__ray_actor, *args, **kwargs)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/tune/trainable/trainable.py", line 342, in train
    raise skipped from exception_cause(skipped)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/air/_internal/util.py", line 88, in run
    self._ret = self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 115, in <lambda>
    training_func=lambda: self._trainable_func(self.config),
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/util/tracing/tracing_helper.py", line 467, in _resume_span
    return method(self, *_args, **_kwargs)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/tune/trainable/function_trainable.py", line 332, in _trainable_func
    output = fn()
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/ray/tune/trainable/util.py", line 138, in inner
    return trainable(config, **fn_kwargs)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/scvi/autotune/_experiment.py", line 551, in _trainable
    model.train(**train_params)
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/scvi/model/base/_training_mixin.py", line 136, in train
    runner = self._train_runner_cls(
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/scvi/train/_trainrunner.py", line 81, in __init__
    self.trainer = self._trainer_cls(
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/scvi/train/_trainer.py", line 153, in __init__
    super().__init__(
  File "/home/ubuntu/miniconda3/envs/scanpy/lib/python3.10/site-packages/lightning/pytorch/utilities/argparse.py", line 70, in insert_env_defaults
    return fn(self, **kwargs)
TypeError: Trainer.__init__() got an unexpected keyword argument 'lr'
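
For context, the traceback boils down to keyword routing: `model.train()` forwards unrecognized keyword arguments on to the Lightning `Trainer` constructor, which has no `lr` parameter. A minimal, self-contained sketch of that failure mode (the class and function below are hypothetical stand-ins, not scvi-tools internals):

```python
class FakeTrainer:
    """Stand-in for a Trainer that knows nothing about `lr`."""

    def __init__(self, max_epochs=400, accelerator="auto"):
        self.max_epochs = max_epochs


def train(max_epochs=400, plan_kwargs=None, **trainer_kwargs):
    # Anything not explicitly named lands in trainer_kwargs and is
    # forwarded to the trainer constructor -- including a stray "lr".
    return FakeTrainer(max_epochs=max_epochs, **trainer_kwargs)


try:
    train(max_epochs=100, lr=1e-3)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'lr'
```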

Versions:

scvi-tools 1.2.0

Hi @LinearParadox ,
Try to do something like:

search_space = {
    "model_params": {
        "n_hidden": tune.choice([64, 128, 256]),
        "n_layers": tune.choice([1, 2, 3, 4]),
        "n_latent": tune.choice([10, 20, 30, 40, 50]),
        "gene_likelihood": tune.choice(["nb", "zinb"]),
    },
    "train_params": {
        "max_epochs": 100,
        "plan_kwargs": {"lr": tune.loguniform(1e-4, 1e-2)},
    },
}

That should work.
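
To illustrate why this placement works, here is a minimal sketch of the intended routing (again with hypothetical stand-in classes, not the real scvi-tools or Lightning objects): optimizer settings travel inside `plan_kwargs` and are consumed by the training plan, so the trainer only ever sees arguments it accepts.

```python
class FakePlan:
    """Stand-in for a training plan that owns optimizer settings."""

    def __init__(self, lr=1e-3):
        self.lr = lr


class FakeTrainer:
    """Stand-in for a Trainer that knows nothing about `lr`."""

    def __init__(self, max_epochs=400):
        self.max_epochs = max_epochs


def train(max_epochs=400, plan_kwargs=None, **trainer_kwargs):
    # plan_kwargs is unpacked into the plan; everything else goes to the trainer
    plan = FakePlan(**(plan_kwargs or {}))
    trainer = FakeTrainer(max_epochs=max_epochs, **trainer_kwargs)
    return plan, trainer


plan, trainer = train(max_epochs=100, plan_kwargs={"lr": 3e-4})
print(plan.lr)  # 0.0003
```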

Got it, that worked! I think the tutorial for autotune may have a slight typo related to this:

The search space is:

search_space = {
    "model_params": {"n_hidden": tune.choice([64, 128, 256]), "n_layers": tune.choice([1, 2, 3])},
    "train_params": {"max_epochs": 100},
}

It says we might get these two models if we run two samples:

model1 = {
    "n_hidden": 64,
    "n_layers": 1,
    "lr": 0.001,
}
model2 = {
    "n_hidden": 64,
    "n_layers": 3,
    "lr": 0.0001,
}

The learning rate is not set in the search space, yet these two models show different learning rates. I'm not sure whether it's an extra zero in the tutorial, or whether autotune automatically varies the learning rate.

You are correct, we will change that, thanks for noticing!