h2oai/h2o-llmstudio
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://h2oai.github.io/h2o-llmstudio/
Python · Apache-2.0
Issues
[FEATURE] Add (experimental) FP8 support
#748 opened by psinger - 1
Compare zero-epoch predictions with fine-tuned predictions, along with a validation score comparison
#695 opened by meganjkurka - 0
[DOCS] Duplicate questions in the FAQ
#734 opened by meganjkurka - 0
[FEATURE] Freezing layers
#727 opened by psinger - 0
[BUG] Memory allocation left resident in GPU(s) after model upload to HuggingFace
#736 opened by tmostak - 1
[FEATURE] Connection with LLM DataStudio
#735 opened by meganjkurka - 0
[FEATURE] Select multiple training dataframes
#733 opened by psinger - 0
[FEATURE] Implement SimPO
#732 opened by psinger - 0
[UX] Screen hangs when you click Download Model
#728 opened by meganjkurka - 1
[BUG] Code rendering in the validation prediction insights replaces characters
#722 opened by pascal-pfeiffer - 9
[BUG] HuggingFace export does not preserve bfloat16 weights but converts to float16 silently when using CPU for upload
#702 opened by tmostak - 0
[CODE IMPROVEMENT] Custom HF model for classification
#713 opened by psinger - 3
[FEATURE] Add multi-GPU support: split the model across GPUs without using DeepSpeed/FSDP
#710 opened by Quetzalcohuatl - 1
Data Format section has a broken link
#701 opened by cemremengu - 0
[FEATURE] Option to plot train/eval plots with epoch instead of step on x-axis
#700 opened by tmostak - 1
[FEATURE] Support for minimum learning rate
#671 opened by tmostak - 1
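Issue #671 asks for a minimum learning rate. A common way to support this is a cosine schedule that decays to a floor `lr_min` instead of to zero; the sketch below is an illustration of that idea, not H2O LLM Studio's actual scheduler code, and the function name and parameters are hypothetical.

```python
import math

def cosine_lr(step: int, total_steps: int, lr_max: float, lr_min: float = 0.0) -> float:
    # Cosine decay from lr_max down to lr_min (the floor), rather than to zero.
    # At step 0 this returns lr_max; at total_steps it returns lr_min.
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos_factor
```

This mirrors the `eta_min` parameter of PyTorch's `CosineAnnealingLR`, which exposes the same floor for stock schedulers.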
[FEATURE] Use local LLM deployment as Judge
#694 opened by pascal-pfeiffer - 1
[FEATURE] Option for not saving checkpoint
#691 opened by psinger - 8
ValueError: invalid literal for int() with base 10: ‘Failed to initialize NVML: Unknown Error’
#668 opened by jldroid19 - 4
[FEATURE] Option to add BOS token
#636 opened by psinger - 0
[BUG] Chat window generation parameters not updated
#679 opened by psinger - 0
[FEATURE] Random validation sample for chat interface
#683 opened by psinger - 0
[BUG] UI freezes when using "Stop streaming" button with a text in the input box
#674 opened by pascal-pfeiffer - 0
[FEATURE] Fine-tune CohereForCausalLM Models
#677 opened by pascal-pfeiffer - 8
[BUG] Exporting / downloading a model larger than the available VRAM (trained with DeepSpeed) fails
#670 opened by AZ777xx - 0
[CODE IMPROVEMENT] Default for max_time
#653 opened by psinger - 0
[FEATURE] Mixed Precision Dtype
#672 opened by psinger - 2
[BUG] Mixed precision not working with bfloat16
#628 opened by maxjeblick - 0
[CODE IMPROVEMENT] Sort data files alphabetically
#665 opened by psinger - 1
[BUG] Pipenv missing as a requirement for the `make llmstudio` command
#669 opened by pascal-pfeiffer - 0
[FEATURE] Add additional digit of precision for specifying learning rate and other parameters
#661 opened by tmostak - 0
[FEATURE] Add danube2 to default model list
#658 opened by pascal-pfeiffer - 0
[BUG] Scheduler should consider gradient accumulation while assigning `epoch_steps`?
#663 opened by rohitgr7 - 0
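The point behind #663: with gradient accumulation, one optimizer step consumes several micro-batches, so the number of scheduler steps per epoch shrinks accordingly. A minimal sketch of the corrected computation (the function name is hypothetical, not LLM Studio's API):

```python
import math

def epoch_steps(num_samples: int, batch_size: int, grad_accumulation: int = 1) -> int:
    # Optimizer (and therefore scheduler) steps per epoch:
    # micro-batches per epoch, divided by how many micro-batches
    # are accumulated before each optimizer step.
    micro_batches = math.ceil(num_samples / batch_size)
    return math.ceil(micro_batches / grad_accumulation)
```

With 1,000 samples and batch size 8, an accumulation factor of 4 cuts the per-epoch steps from 125 to 32; a scheduler sized from raw micro-batch counts would decay four times too slowly.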
[CODE IMPROVEMENT] Flash_attn installation may be wrong if the wheel is cached
#651 opened by pascal-pfeiffer - 0
[BUG] Tokenizer config has add_bos_token=true while LLM Studio is training with add_special_tokens=False
#644 opened by pascal-pfeiffer - 2
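The mismatch in #644 is that the exported `tokenizer_config.json` advertises `add_bos_token=true` while training tokenized text with `add_special_tokens=False`, so inference may prepend a BOS token the model never saw. A small hedged sketch of a consistency check (the helper is hypothetical, not part of LLM Studio):

```python
def bos_mismatch(tokenizer_config: dict, add_special_tokens: bool) -> bool:
    # True when the saved tokenizer config promises a BOS token
    # (add_bos_token=true) but training encoded text with
    # add_special_tokens=False, i.e. the model was never trained
    # with that BOS token prepended.
    return bool(tokenizer_config.get("add_bos_token", False)) and not add_special_tokens
```

Running such a check before export would flag configs whose inference-time tokenization diverges from the training-time setting.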
[DOCS] Adding the description syntax
#621 opened by shaunyogeshwaran - 2
[BUG] Error when pushing model to HuggingFace
#635 opened by jeffwang0516 - 1
[FEATURE] Training with QLoRA + FSDP
#631 opened by pascal-pfeiffer - 2
[BUG] Recent regression causing an error when loading tokenizer for Deepseek models
#623 opened by tmostak