tyqscc's Stars
tysonwu/stack-orderflow
Orderflow chart GUI using finplot and pyqtgraph
murtazayusuf/OrderflowChart
Plot orderflow footprint charts using plotly in python.
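A footprint chart essentially encodes the volume traded at each price level within each time bar. As a rough, hedged illustration of that idea only (synthetic data and plain plotly calls, not OrderflowChart's own API), a minimal volume-at-price heatmap might look like this:

```python
# Minimal footprint-style heatmap: volume traded at each price level per time bar.
# Illustrative sketch only -- synthetic data and generic plotly, not the OrderflowChart API.
import numpy as np
import plotly.graph_objects as go

rng = np.random.default_rng(0)
bars = [f"09:{m:02d}" for m in range(0, 30, 5)]               # time bins (x axis)
prices = np.arange(100.0, 101.0, 0.1)                          # price levels (y axis)
volume = rng.integers(0, 50, size=(len(prices), len(bars)))    # volume at price per bar

fig = go.Figure(
    go.Heatmap(z=volume, x=bars, y=prices,
               colorscale="Blues", colorbar=dict(title="Volume"))
)
fig.update_layout(title="Volume-at-price (footprint-style) heatmap",
                  xaxis_title="Time bar", yaxis_title="Price")
fig.show()
```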
tile-ai/tilelang
Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
Tangent-Wei/crypto_info
The most complete collection of blockchain/crypto tools and information resources on the web: cryptocurrencies, app registration for OKX, Binance, and Gate, NFTs, DeFi, crypto wallets, Bitcoin, and beginner tutorials. Continuously updated.
deepseek-ai/DeepSeek-R1
GothenburgBitFactory/taskwarrior
Taskwarrior - Command line Task Management
fasiondog/hikyuu
Hikyuu Quant Framework: a high-speed, open-source quantitative trading research framework based on C++/Python; strategy components can be reused as assets, allowing a library of strategy assets to be built up quickly.
kungfu-origin/kungfu
Kungfu Trader
pytorch/torchtitan
A PyTorch native library for large model training
wondertrader/wondertrader
WonderTrader: a one-stop framework for quantitative research, development, and trading
zhihu/ZhiLight
A highly optimized LLM inference acceleration engine for Llama and its variants.
m1guelpf/auto-subtitle
Automatically generate and overlay subtitles for any video.
chenyme/Chenyme-AAVT
A fully automated (audio/)video translation project: Whisper transcribes the speech, a large AI model translates the subtitles, and the subtitles are then merged back into the video to produce a translated version.
qinL-cdy/auto_ai_subtitle
nicolargo/glances
Glances, an eye on your system: a top/htop alternative for GNU/Linux, BSD, macOS, and Windows operating systems.
junegunn/fzf
🌸 A command-line fuzzy finder
lm-sys/FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
0cch/moderncpp_public
Additional materials for the book 《现代C++语言核心特性解析》 (Analysis of Core Features of Modern C++)
meta-llama/llama-models
Utilities intended for use with Llama models.
MooreThreads/torch_musa
torch_musa is an open-source extension of PyTorch that enables PyTorch to make full use of the compute power of MooreThreads GPUs.
karpathy/LLM101n
LLM101n: Let's build a Storyteller
KoboldAI/KoboldAI-Client
For GGUF support, see KoboldCPP: https://github.com/LostRuins/koboldcpp
NVIDIA/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
huggingface/trl
Train transformer language models with reinforcement learning.
unslothai/unsloth
Finetune Llama 4, DeepSeek-R1, Gemma 3 & Reasoning LLMs 2x faster with 70% less memory! 🦥
hiyouga/LLaMA-Factory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
anilshanbhag/gpu-topk
Efficient Top-K implementation on the GPU
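For context, top-k selection returns the k largest (or smallest) elements of an array without fully sorting it. The snippet below shows the primitive on a GPU using torch.topk as a stand-in; it is not gpu-topk's own API, just a reference for what the repo accelerates.

```python
# Illustration of the top-k primitive on a GPU using PyTorch's built-in kernel.
# Not the gpu-topk API -- just a reference for what the operation computes.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
scores = torch.rand(1_000_000, device=device)

# Largest k values and their indices, without sorting the whole array.
values, indices = torch.topk(scores, k=10, largest=True)
print(values.cpu(), indices.cpu())
```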
gyatskov/radix-sort
GPU-optimized implementation of Radix Sort via OpenCL
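As a reminder of how least-significant-digit (LSD) radix sort works — bucket the keys by one digit at a time, from least to most significant, with a stable pass per digit — here is a short CPU-side sketch in Python. The repo parallelizes the same per-digit counting/scatter passes as OpenCL kernels, which this sketch does not attempt to reproduce.

```python
# CPU-side sketch of LSD radix sort for non-negative integers.
# The OpenCL repo runs the equivalent per-digit passes as GPU kernels.
def radix_sort(values, bits_per_pass=8):
    if not values:
        return []
    base = 1 << bits_per_pass
    out = list(values)
    max_val = max(out)
    shift = 0
    while (max_val >> shift) > 0:
        # Stable bucketing pass on the current digit.
        buckets = [[] for _ in range(base)]
        for v in out:
            buckets[(v >> shift) & (base - 1)].append(v)
        out = [v for bucket in buckets for v in bucket]
        shift += bits_per_pass
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
```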
intel/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, vLLM, DeepSpeed, Axolotl, etc.
bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading