Pinned Repositories
awesome-LLM-resourses
🧑‍🚀 Summary of the world's best LLM resources.
ColossalAI
Making large AI models cheaper, faster and more accessible
DeepSeek-Coder
DeepSeek Coder: Let the Code Write Itself
DeepSeek-Coder-V2
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
DeepSeek-LLM
DeepSeek LLM: Let there be answers
DeepSeek-Math
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
DeepSeek-V2
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek-V3
DeepSeek-VL
DeepSeek-VL: Towards Real-World Vision-Language Understanding
OBB
Fitting an Oriented Bounding Box to points
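The OBB repository above is described as fitting an oriented bounding box to points. One common way to do this is the covariance/PCA approach, sketched below in NumPy; this is a generic method and an assumption on my part, not necessarily the algorithm the repository implements.

```python
import numpy as np

def fit_obb(points):
    """Fit an oriented bounding box to a point set via PCA.

    Generic covariance-based sketch (not necessarily the OBB repo's
    method). Returns (center, axes, half_extents): `axes` holds the
    box's orthonormal axis directions as rows, `half_extents` the
    half-widths of the box along each axis.
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    centered = pts - mean
    # Eigenvectors of the covariance matrix give candidate box axes.
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs.T                        # each row is one axis direction
    local = centered @ axes.T               # project points into the box frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = mean + ((lo + hi) / 2) @ axes  # box center back in world coordinates
    half_extents = (hi - lo) / 2
    return center, axes, half_extents
```

PCA-aligned boxes are fast but not guaranteed minimal-volume; exact minimal OBBs need rotating-calipers-style searches over orientations.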
SXSlinux's Repositories
SXSlinux/OBB
Fitting an Oriented Bounding Box to points
SXSlinux/awesome-LLM-resourses
🧑‍🚀 Summary of the world's best LLM resources.
SXSlinux/ColossalAI
Making large AI models cheaper, faster and more accessible
SXSlinux/DeepSeek-Coder
DeepSeek Coder: Let the Code Write Itself
SXSlinux/DeepSeek-Coder-V2
DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence
SXSlinux/DeepSeek-LLM
DeepSeek LLM: Let there be answers
SXSlinux/DeepSeek-Math
DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
SXSlinux/DeepSeek-V2
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
SXSlinux/DeepSeek-V3
SXSlinux/DeepSeek-VL
DeepSeek-VL: Towards Real-World Vision-Language Understanding
SXSlinux/DotNet.Revit
.NET code for Autodesk Revit
SXSlinux/draco
Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
SXSlinux/Genesis
A generative world for general-purpose robotics & embodied AI learning.
SXSlinux/Grokking-Deep-Learning
This repository accompanies the book "Grokking Deep Learning"
SXSlinux/LLMs-from-scratch
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
SXSlinux/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
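The LLMs-from-scratch repository above builds a ChatGPT-like model step by step in PyTorch. The core mechanism of such a model is causal scaled dot-product self-attention; the following is a minimal single-head sketch in NumPy for brevity, with illustrative weight matrices `Wq`/`Wk`/`Wv` that are my own assumption, not the repository's actual parameters or code.

```python
import numpy as np

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head causal scaled dot-product attention.

    Minimal NumPy sketch of the mechanism (the repository itself uses
    PyTorch). x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head).
    Returns the attended values, shape (seq_len, d_head).
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_head = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_head)      # similarity of each query to each key
    # Causal mask: each position may attend only to itself and earlier positions.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Because of the causal mask, the output at position 0 is exactly its own value vector; later positions mix in information from earlier tokens only.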