Pinned Repositories
awesome-mm-chat
A collection of multimodal (MM) + Chat resources
awesome-python-cn
A comprehensive Chinese-language collection of Python resources, covering web frameworks, web crawlers, web content extraction, template engines, databases, data visualization, image processing, text processing, natural language processing, machine learning, logging, code analysis, and more
deep_learning_codesegment
Code segments commonly used in deep learning algorithms (PyTorch/NumPy)
DeepLearning-500-questions
Deep Learning 500 Questions: a Q&A-style treatment of frequently used topics in probability, linear algebra, machine learning, deep learning, computer vision, and other popular areas, written to help both the author and interested readers. The book comprises 15 chapters and nearly 200,000 characters. Given the author's limited expertise, readers are kindly asked to point out any errors. Work in progress. For collaboration inquiries, contact scutjy2015@163.com. All rights reserved; infringement will be pursued. Tan 2018.06
miniloader
A minimal dataloader implementation
mmdetection
OpenMMLab Detection Toolbox and Benchmark
mmdetection-mini
A minimal learning edition of mmdetection
mmyolo
OpenMMLab YOLO series toolbox and benchmark
xtuner
XTuner is a toolkit for efficiently fine-tuning LLMs
yolov5-comment
An annotated version of yolov5
hhaAndroid's Repositories
hhaAndroid/awesome-mm-chat
A collection of multimodal (MM) + Chat resources
hhaAndroid/mmdetection
OpenMMLab Detection Toolbox and Benchmark
hhaAndroid/xtuner
XTuner is a toolkit for efficiently fine-tuning LLMs
hhaAndroid/llm-course
A course for getting into Large Language Models (LLMs), with roadmaps and Colab notebooks.
hhaAndroid/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
hhaAndroid/DeepSpeedExamples
Example models using DeepSpeed
hhaAndroid/llama3
The official Meta Llama 3 GitHub site
hhaAndroid/ReAlign
Reformatted Alignment
hhaAndroid/VLMEvalKit
Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks
hhaAndroid/InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance
hhaAndroid/Janus
Janus-Series: Unified Multimodal Understanding and Generation Models
hhaAndroid/Liger-Kernel
Efficient Triton Kernels for LLM Training
hhaAndroid/LISA
Project Page for "LISA: Reasoning Segmentation via Large Language Model"
hhaAndroid/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
hhaAndroid/LLaVA
Visual Instruction Tuning: a Large Language-and-Vision Assistant built toward multimodal GPT-4-level capabilities.
hhaAndroid/llava-phi
hhaAndroid/LLMs-from-scratch
Implementing a ChatGPT-like LLM from scratch, step by step
hhaAndroid/lmms-eval
Accelerating the development of large multimodal models (LMMs) with lmms-eval
hhaAndroid/long-context-attention
USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference
hhaAndroid/lvlm-interpret
hhaAndroid/MHA2MLA
Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
hhaAndroid/ml-ferret
hhaAndroid/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, GLM4.5, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (Qwen2.5-VL, Qwen2.5-Omni, Qwen2-Audio, InternVL3, Ovis2.5, Llava, GLM4v, Phi4, ...) (AAAI 2025).
hhaAndroid/ring-flash-attention
Ring attention implementation with flash attention
hhaAndroid/slime
slime is an LLM post-training framework aimed at scaling RL.
hhaAndroid/torchgpipe
A GPipe implementation in PyTorch
hhaAndroid/torchtitan
A native PyTorch Library for large model training
hhaAndroid/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
hhaAndroid/trl
Train transformer language models with reinforcement learning.
hhaAndroid/verl
verl: Volcano Engine Reinforcement Learning for LLMs