Issues
Does this fine-tuning code work on a single A6000 GPU for LLaMA-2-7B with LoRA?
#359 opened by 01choco - 2
Dimension mismatch on AlphaTensor
#356 opened by ywsslr - 0
nebullvm LICENSE and commercial use?
#355 opened by betogulliver - 0
Forward Forward Algorithm Questions
#352 opened by and-rewsmith - 0
[Speedster] TensorRT OSError: [WinError 127] The specified procedure could not be found
#353 opened by nemeziz69 - 5
[Speedster] With Hugging Face notebook code on nebulydocker/nebullvm container: RuntimeError: Expected all tensors to be on the same device
#349 opened by trent-s - 2
[Speedster] _dl_check_map_versions assertion error with optimize_model and ONNX compilers
#346 opened by trent-s - 0
YOLOv8-Pose Model
#348 opened by saim212 - 2
torch 2.0 support in Speedster
#347 opened by lucasjinreal - 10
YOLOv8 + Nebuly | AttributeError: type object 'DummyClass' has no attribute 'models'
#342 opened by scraus - 3
[ChatLLaMA] Training process halted
#309 opened by MuffinC - 3
[ChatLLaMA] Training the actor with the llama-7b model: error initializing torch.distributed using env:// rendezvous
#292 opened by balcklive - 0
Evaluating accuracy of only the reward model
#343 opened by HannahKirk - 1
Issues with accelerate and deepspeed training
#331 opened by swang99 - 3
[Chatllama] Merge the datasets to create more insightful training data
#321 opened by PierpaoloSorbellini - 3
[Chatllama] KL Divergence equation
#298 opened by mountinyy - 8
[ChatLLaMA] RLHF Training: Prompt too long
#299 opened by swang99 - 2
[ChatLLaMA] How to enable inference with the models
#335 opened by yangzhipeng1108 - 1
Support for torch 2.0
#325 opened by lminer - 2
[ChatLLaMA] RLHF Training: dimension mismatch
#312 opened by BigRoddy - 1
The rlhf_accelerate branch can't run with multiple GPUs
#288 opened by balcklive - 0
[Speedster] Make Speedster optimize_model() return InferenceLearner also for StableDiffusion models
#313 opened by Telemaco019 - 1
Module not found: chatllama.rlhf.dataset
#324 opened by MuffinC - 1
Add Support for PEFT fine-tuning
#295 opened by PierpaoloSorbellini - 3
[ChatLLaMA] Train RL on multi-GPU
#307 opened by EthanChen1234 - 0
[Nebullvm] Add option to Nebullvm auto-installer for installing **all** libraries
#310 opened by Telemaco019 - 2
[ChatLLaMA] No GPU Detected issue
#302 opened by MuffinC - 1
[Speedster] Speedster usage
#285 opened by Ludobico - 3
Tokenizer Issue
#297 opened by yradwan147 - 1
[Chatllama] discounted_rewards with PPO
#290 opened by mountinyy - 1
How to run inference?
#286 opened by wallon-ai - 2
[Speedster] Example notebooks not working
#282 opened by LuigiCerone