Pinned Repositories
lecture-pytorch_basic
[lecture] pytorch_basic
airbnb-clone
Cloning Airbnb using Python, Django, and Tailwind
AKT_clone
AKT_clone
alpaca-lora
Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware (a minimal LoRA sketch follows this list)
cl_bert_kt
cl_bert_kt
debate_bot
debate_bot
Deep_knowledge_tracing_baseline
Deep_knowledge_tracing_baseline
gara2
gara2
lm-trainer-v3
lm-trainer-v3
MonaCoBERT
Monotonic Attention based ConvBERT for Knowledge Tracing
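
For the alpaca-lora entry above: a minimal sketch of what LoRA instruct-tuning on consumer hardware typically looks like with the Hugging Face peft library. The checkpoint name, target modules, and hyperparameters here are illustrative assumptions, not the repository's actual configuration.

# Minimal LoRA fine-tuning sketch (illustrative; not alpaca-lora's exact setup).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, load_in_8bit=True, device_map="auto")

# LoRA freezes the base weights and trains small rank-decomposition matrices
# injected into the attention projections, which is what makes 7B-scale
# instruct-tuning feasible on a single consumer GPU.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                    lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of base parameters

From here, training proceeds as an ordinary causal-LM loop over an instruction dataset; only the small adapter weights need to be saved and shared.
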
codingchild2424's Repositories
codingchild2424/debate_bot
debate_bot
codingchild2424/Deep_knowledge_tracing_baseline
Deep_knowledge_tracing_baseline
codingchild2424/alpaca-lora
Code for reproducing the Stanford Alpaca InstructLLaMA result on consumer hardware
codingchild2424/auto_gpt_stable
auto_gpt_stable
codingchild2424/automated-interpretability
codingchild2424/cl_bert_kt
cl_bert_kt
codingchild2424/lm-trainer-v2
lm-trainer-v2
codingchild2424/lm-trainer-v3
lm-trainer-v3
codingchild2424/bitsandbytes
8-bit CUDA functions for PyTorch
codingchild2424/ddpm_practice
ddpm_practice
codingchild2424/gpt-4-vision-for-eval
gpt-4-vision-for-eval
codingchild2424/KoAlpaca
KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
codingchild2424/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
codingchild2424/LLaVA-Docent-v1
LLaVA-Docent-v1
codingchild2424/LOMO
LOMO: LOw-Memory Optimization
codingchild2424/math_scoring_with_gpt
math_scoring_with_gpt
codingchild2424/MEGABYTE-pytorch
Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch
codingchild2424/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
codingchild2424/mlm-trainer
mlm-trainer
codingchild2424/nebullvm
Plug and play modules to optimize the performance of your AI systems 🚀
codingchild2424/Open-Llama
The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.
codingchild2424/oslo-1
OSLO: Open Source for Large-scale Optimization
codingchild2424/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
codingchild2424/phoenix
ML Observability in a Notebook - Uncover Insights, Surface Problems, Monitor, and Fine-Tune your Generative LLM, CV, and Tabular Models
codingchild2424/self-instruct
Aligning pretrained language models with instruction data generated by themselves.
codingchild2424/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
codingchild2424/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
codingchild2424/trl
Train transformer language models with reinforcement learning.
codingchild2424/vision
Clean, reproducible, boilerplate-free deep learning project template.
codingchild2424/whisper-diarization
Automatic Speech Recognition with Speaker Diarization based on OpenAI Whisper
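
For the whisper-diarization entry above: the repository pairs OpenAI Whisper transcripts with speaker labels. Below is a rough sketch of that general idea using whisper together with pyannote.audio as the diarizer; the actual repository's pipeline differs, and the model names, file path, and midpoint-merge heuristic are assumptions for illustration.

# Rough ASR + speaker-diarization sketch (illustrative; not this repo's exact pipeline).
import whisper
from pyannote.audio import Pipeline

audio_path = "meeting.wav"  # hypothetical input file

# 1) Transcribe; Whisper returns segments with start/end timestamps.
asr_model = whisper.load_model("base")
result = asr_model.transcribe(audio_path)

# 2) Diarize: who spoke when (pyannote pretrained pipelines require a HF auth token).
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization")
diarization = diarizer(audio_path)

# 3) Naive merge: label each ASR segment with the speaker turn covering its midpoint.
def speaker_at(t):
    for turn, _, speaker in diarization.itertracks(yield_label=True):
        if turn.start <= t <= turn.end:
            return speaker
    return "UNKNOWN"

for seg in result["segments"]:
    midpoint = (seg["start"] + seg["end"]) / 2
    print(f'{speaker_at(midpoint)}: {seg["text"].strip()}')

A production pipeline also has to handle segments that straddle speaker turns and overlapped speech, which is where purpose-built projects like this one go beyond the naive merge.
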