cambridgeltl/sapbert

requirements.txt doesn't help resolve the requirements.

raven44099 opened this issue · 1 comment

I cannot install the correct requirements to run the Google Colab notebook you kindly prepared. Below I summarize where it fails.

First cell ERROR:

tensorflow 2.9.2 requires protobuf<3.20,>=3.9.2, but you have protobuf 3.20.3 which is incompatible.
tensorflow 2.9.2 requires tensorboard<2.10,>=2.9, but you have tensorboard 2.11.2 which is incompatible.
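
If it helps, I suppose this first conflict could be silenced by pinning the two packages back to what tensorflow 2.9.2 asks for, along these lines (untested, the exact pins are my guess):

!pip install "protobuf>=3.9.2,<3.20" "tensorboard>=2.9,<2.10"

though I don't know whether the notebook actually needs the newer protobuf.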

I don't know whether the error above actually poses a problem; however, I then get another
ERROR (something like):

ModuleNotFoundError: No module named 'torchtext.legacy'.
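
As far as I can tell, torchtext.legacy only exists up to torchtext 0.11.x (it was removed in 0.12), so the torchtext preinstalled in Colab is simply too new for this import. A quick sanity check (nothing specific to this repo):

import torchtext
print(torchtext.__version__)  # torchtext.legacy is only present in <= 0.11.x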

Subsequently I downgraded torchtext (per a Stack Overflow recommendation) using !pip install torchtext==0.10.0, which gives me this
ERROR:
torchvision 0.14.1+cu116 requires torch==1.13.1, but you have torch 1.9.0 which is incompatible.

So I upgraded torch to 1.13.1.
Then I get:

     11 from torchtext.data.utils import RandomShuffler
     12 from .example import Example
---> 13 from torchtext.utils import download_from_url, unicode_csv_reader
     14 
     15 

ImportError: cannot import name 'unicode_csv_reader' from 'torchtext.utils' (/usr/local/lib/python3.8/dist-packages/torchtext/utils.py)
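
My guess (unverified) is that the partial upgrade left me with a mix of torchtext versions: the module doing the import expects the old API, while the torchtext.utils that is actually installed no longer exports unicode_csv_reader. A minimal check under that assumption:

import torchtext
import torchtext.utils
print(torchtext.__version__)
print(hasattr(torchtext.utils, "unicode_csv_reader"))  # False here, matching the ImportError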

At some point I also get:

torchvision 0.14.1+cu116 requires torch==1.13.1, but you have torch 1.7.0 which is incompatible.
torchtext 0.10.0 requires torch==1.9.0, but you have torch 1.7.0 which is incompatible.
torchaudio 0.13.1+cu116 requires torch==1.13.1, but you have torch 1.7.0 which is incompatible.
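
If the goal is to keep torchtext 0.10.0 (which still ships torchtext.legacy), then as I understand it the whole stack has to be pinned to the matching torch 1.9.0 family, roughly like this (my assumption, not tested end-to-end in this notebook):

!pip install torch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 torchtext==0.10.0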

I've noticed that I can keep executing the notebook past these errors. However, I was not able to run the whole notebook, since I got an error in the following cell:

# Initialize the trainer and model
trainer = Trainer(**cfg.trainer)
exp_manager(trainer, cfg.get("exp_manager", None))
model = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)

ERROR:

INFO:pytorch_lightning.utilities.rank_zero:Using bfloat16 Automatic Mixed Precision (AMP)
INFO:pytorch_lightning.utilities.rank_zero:GPU available: False, used: False
INFO:pytorch_lightning.utilities.rank_zero:TPU available: False, using: 0 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
[NeMo I 2023-02-06 02:31:00 exp_manager:362] Experiments will be logged at SelfAlignmentPretrainingTinyExample/2023-02-06_02-14-21
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-33-8586e3adaa65> in <module>
      1 # Initialize the trainer and model
      2 trainer = Trainer(**cfg.trainer)
----> 3 exp_manager(trainer, cfg.get("exp_manager", None))
      4 model = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)

3 frames
/usr/local/lib/python3.8/dist-packages/lightning_fabric/loggers/tensorboard.py in __init__(self, root_dir, name, version, default_hp_metric, prefix, sub_dir, **kwargs)
     91     ):
     92         if not _TENSORBOARD_AVAILABLE and not _TENSORBOARDX_AVAILABLE:
---> 93             raise ModuleNotFoundError(
     94                 "Neither `tensorboard` nor `tensorboardX` is available. Try `pip install`ing either."
     95             )

ModuleNotFoundError: Neither `tensorboard` nor `tensorboardX` is available. Try `pip install`ing either.
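
This last error at least looks straightforward: tensorboard apparently got uninstalled by one of the earlier downgrades. I assume reinstalling it with the pin that tensorflow 2.9.2 expects would satisfy the exp_manager call:

!pip install "tensorboard>=2.9,<2.10"

(or pip install tensorboardX, as the message suggests), probably followed by a runtime restart before rerunning the cell.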