bitsandbytes

There are 20 repositories under the bitsandbytes topic.

  • dvgodoy/FineTuningLLMs

    Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"

Language: Jupyter Notebook
  • bobazooba/xllm

    🦖 X—LLM: Cutting Edge & Easy LLM Finetuning

Language: Python
  • shaheennabi/Production-Ready-Instruction-Finetuning-of-Meta-Llama-3.2-3B-Instruct-Project

Instruction fine-tuning of Meta Llama 3.2-3B Instruct on Kannada conversations: the model is tailored to follow instructions in Kannada, improving its ability to generate relevant, context-aware responses to conversational inputs. Fine-tuned on the Kannada Instruct dataset. Happy finetuning 🎋

Language: Jupyter Notebook
  • AkimfromParis/RAG-Japanese

Open-source RAG with LlamaIndex for a Japanese LLM in a low-resource setting

Language: Jupyter Notebook
  • antonio-f/Orca2

    Orca 2 on Colab

Language: Jupyter Notebook
  • eljandoubi/AI-Photo-Editing-with-Inpainting

    A web app that allows you to select a subject and then change its background, OR keep the background and change the subject.

Language: Jupyter Notebook
  • dasdristanta13/LLM-Lora-PEFT_accumulate

LLM-Lora-PEFT_accumulate explores optimizations for Large Language Models (LLMs) using PEFT, LoRA, and QLoRA. Contribute experiments and implementations to enhance LLM efficiency, join the discussions, and help push the boundaries of LLM optimization. Let's make LLMs more efficient together! A minimal QLoRA setup along these lines is sketched after this entry.

Language: Jupyter Notebook
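As a point of reference, a minimal QLoRA-style setup combining bitsandbytes 4-bit loading with a PEFT LoRA adapter might look like the sketch below. The base model name and hyperparameters are placeholders, not taken from the repository.

```python
# Minimal QLoRA-style setup: 4-bit base model + LoRA adapters (placeholder model name).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the frozen base model in 4-bit NF4 via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach small trainable LoRA adapters on top of the frozen 4-bit weights.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Only the small adapter matrices are trained, which is what keeps the memory footprint low.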
  • ryan-air/Alpaca-3B-Fine-Tuned

This project provides code and a Colab notebook for fine-tuning the 3B-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, using Hugging Face's PEFT library, so that it can be trained with fewer computational resources and fewer trainable parameters.

Language: Jupyter Notebook
  • bobazooba/shurale

Conversational AI model for open-domain dialogs

Language: Python
  • arham-kk/llama2-qlora-sft

A model fine-tuned from the "TinyPixel/Llama-2-7B-bf16-sharded" base model on the "timdettmers/openassistant-guanaco" dataset

  • ryan-air/Alpaca-350M-Fine-Tuned

This project provides code and a Colab notebook for fine-tuning the 350M-parameter Alpaca model originally developed at Stanford University. The model was adapted with LoRA, using Hugging Face's PEFT library, so that it can be trained with fewer computational resources and fewer trainable parameters.

Language: Jupyter Notebook
  • to-aoki/bitsandbytes

bitsandbytes modified for Jetson Orin. A generic 8-bit loading example is sketched after this entry.

Language: Python
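Independently of any Jetson-specific patches, loading a model in 8-bit through a working bitsandbytes build usually goes through transformers, roughly as in the sketch below; the model name is a placeholder and not tied to this repository.

```python
# Load a causal LM with bitsandbytes 8-bit (LLM.int8()) weight quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "facebook/opt-1.3b"  # placeholder model for illustration

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Quick smoke test: generate a few tokens from a prompt.
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```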
  • Bushra-Butt-17/BudgetBuddy-Finance-Chatbot

    Budget Buddy is a finance chatbot built using Chainlit and the LLaMA language model. It analyzes PDF documents, such as bank statements and budget reports, to provide personalized financial advice and insights. The chatbot is integrated with Hugging Face for model management, offering an interactive way to manage personal finances.

Language: Python
  • kedir/Specialized-Immigration-Assistant

Provides specialized assistance in the field of immigration law using a large language model

Language: Jupyter Notebook
  • lpalbou/model-quantizer

    Effortlessly quantize, benchmark, and publish Hugging Face models with cross-platform support for CPU/GPU. Reduce model size by 75% while maintaining performance.

Language: Python
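For context on the headline figure above (an assumption about what it refers to): quantizing 16-bit weights at 2 bytes per parameter down to 4-bit at roughly 0.5 bytes per parameter is where a ~75% size reduction typically comes from, e.g. a 7B-parameter model shrinking from about 14 GB to about 3.5 GB, ignoring per-block scaling overhead.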
  • Md-Emon-Hasan/Fine-Tuning

End-to-end fine-tuning of Hugging Face models using LoRA, QLoRA, quantization, and PEFT techniques, optimized for low-memory environments and efficient model deployment. A sketch of merging a trained adapter for deployment follows this entry.

Language: Jupyter Notebook
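One common deployment step implied by entries like this one is merging a trained LoRA/QLoRA adapter back into the base weights so that inference no longer needs the PEFT runtime. A hedged sketch, with placeholder model name and paths:

```python
# Merge a trained LoRA/QLoRA adapter into the base model for deployment.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
adapter_path = "./lora-adapter"               # placeholder adapter checkpoint

# Load the base model in full/half precision, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_path)

# Fold the adapter weights into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
AutoTokenizer.from_pretrained(base_model_name).save_pretrained("./merged-model")
```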
  • 403errors/CancerCareAI

    An AI-powered system for extracting cancer-related information from patient Electronic Health Record (EHR) notes

Language: Jupyter Notebook
  • Varun0157/quantisation

Experiments in quantisation covering quantisation from scratch, bitsandbytes, and llama.cpp; a toy from-scratch example is sketched after this entry. [Assignment 4 of Advanced Natural Language Processing, IIIT-H Monsoon '24]

Language: Python
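A toy version of the "quantisation from scratch" part is per-tensor absmax int8 quantization, which fits in a few lines of PyTorch. This is an illustrative sketch, not the repository's code.

```python
# Per-tensor absmax int8 quantize/dequantize, from scratch, in PyTorch.
import torch

def absmax_quantize(x: torch.Tensor):
    # Scale so that the largest magnitude maps to 127, then round to int8.
    scale = 127.0 / x.abs().max().clamp(min=1e-8)
    q = torch.clamp((x * scale).round(), -127, 127).to(torch.int8)
    return q, scale

def absmax_dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Invert the scaling to recover an approximation of the original tensor.
    return q.to(torch.float32) / scale

w = torch.randn(4, 4)
q, scale = absmax_quantize(w)
w_hat = absmax_dequantize(q, scale)
print("max abs error:", (w - w_hat).abs().max().item())
```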
  • edcalderin/HuggingFace_RAGFlow

This project implements a classic Retrieval-Augmented Generation (RAG) system using HuggingFace models with quantization techniques. The system processes PDF documents, extracts their content, and enables interactive question answering through a Streamlit web application. A stripped-down sketch of the RAG core follows this entry.

Language: Python
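A stripped-down version of the RAG core described above (retrieval by embedding similarity plus generation with a 4-bit quantized model) might look like the following sketch. Model names and the prompt format are assumptions, and the PDF parsing and Streamlit layers are omitted.

```python
# Minimal RAG core: embed chunks, retrieve the closest one, answer with a 4-bit model.
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

chunks = [
    "bitsandbytes provides 8-bit and 4-bit quantization for PyTorch models.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

# 1) Embed the chunks and the question, retrieve the best match by cosine similarity.
embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
chunk_emb = embedder.encode(chunks, convert_to_tensor=True)

question = "What does bitsandbytes do?"
q_emb = embedder.encode(question, convert_to_tensor=True)
best = int(util.cos_sim(q_emb, chunk_emb).argmax())

# 2) Generate an answer with a quantized causal LM, conditioning on the retrieved chunk.
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder model
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

prompt = f"Context: {chunks[best]}\nQuestion: {question}\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```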