dvmazur/mixtral-offloading

Update Requirements.txt


Please update requirements.txt for running on V100 GPUs in Colab. OpenAI has released triton 2.2.0, which is not compatible with V100 GPUs. I ran into this issue in my notebook, and after investigating it I had to apply a new version limit on torch. It should be:

torch>=2.1.0,<2.2.0
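
In a Colab notebook, one way to apply this (just an example cell, not part of the repo's setup) is to install the pinned torch before the rest of the requirements, so that the matching pre-2.2 triton is pulled in as its dependency:

!pip install "torch>=2.1.0,<2.2.0"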

You can find the issue here.
The error was this:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-12-e4c6296ba548> in <cell line: 10>()
     12   start_time = time.time()
     13   with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
---> 14     output = model.generate(**model_inputs, max_length=500)[0]
     15   duration += float(time.time() - start_time)
     16   total_length += len(output)

25 frames
/usr/local/lib/python3.10/dist-packages/triton/compiler/compiler.py in ttgir_to_llir(mod, extern_libs, target, tma_infos)
    165     # TODO: separate tritongpu_to_llvmir for different backends
    166     if _is_cuda(target):
--> 167         return translate_triton_gpu_to_llvmir(mod, target.capability, tma_infos, runtime.TARGET.NVVM)
    168     else:
    169         return translate_triton_gpu_to_llvmir(mod, 0, TMAInfos(), runtime.TARGET.ROCDL)

IndexError: map::at
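
For anyone who wants to confirm whether their Colab runtime is affected before running the model, here is a minimal sketch (my own, not from the repo) that prints the installed torch/triton versions and the GPU's compute capability. The assumption that only compute capability 7.0 (V100) is affected comes from this report, not from triton's documentation:

import torch
import triton
from packaging import version

# Report the runtime's torch/triton versions and the GPU's compute capability.
major, minor = torch.cuda.get_device_capability()
print(f"torch {torch.__version__}, triton {triton.__version__}, "
      f"compute capability {major}.{minor}")

# V100 reports compute capability 7.0; triton 2.2.0 is the version that
# failed there in this report, so warn about that combination.
# (Whether other architectures are affected is an open question here.)
if (major, minor) <= (7, 0) and version.parse(triton.__version__) >= version.parse("2.2.0"):
    print("This torch/triton combination has been reported to fail on V100; "
          "consider pinning torch>=2.1.0,<2.2.0 so a compatible triton is installed.")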