BrightXiaoHan
Graduated from NUDT (National University of Defense Technology). Working on machine translation.
Ifun Game, China.
Pinned Repositories
AdapterMT
Implementation of [Simple, Scalable Adaptation for Neural Machine Translation](https://arxiv.org/abs/1909.08478)
CMakeTutorial
A hands-on CMake tutorial in Chinese.
FaceDetector
A re-implementation of MTCNN, with joint training, a tutorial, and deployment included.
FaceRecognizer
Deep face recognition.
fast-chatglm
Faster ChatGLM-6B with CTranslate2
HOME
My Personal Home Directory.
MachineTranslationTutorial
A Jupyter Notebook tutorial on machine translation, in Chinese.
neovim_as_ide
Neovim configuration for C++ and Python development, built from scratch.
optimum-ascend
Optimized inference with Ascend and Hugging Face
TaskDrivenChatBot
Task-driven chatbot service.
BrightXiaoHan's Repositories
BrightXiaoHan/CMakeTutorial
A hands-on CMake tutorial in Chinese.
BrightXiaoHan/optimum-ascend
Optimized inference with Ascend and Hugging Face
BrightXiaoHan/fast-chatglm
Faster ChatGLM-6B with CTranslate2
BrightXiaoHan/Ascend-text-generation-inference
huggingface/text-generation-inference adapted for Ascend NPUs.
BrightXiaoHan/HOME
My Personal Home Directory.
BrightXiaoHan/pytorch-npu
Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch
BrightXiaoHan/Blogs
My personal blog.
BrightXiaoHan/elasticsearch-jieba-plugin
jieba analysis plugin for Elasticsearch 7.0.0, 6.4.0, 6.0.0, 5.4.0, 5.3.0, 5.2.2, 5.2.1, 5.2, 5.1.2, 5.1.1
BrightXiaoHan/speaker-verification
Speaker verification using pyannote.
BrightXiaoHan/ChatGLM-DocMT
BrightXiaoHan/ChatGLM-Efficient-Tuning
Fine-tuning ChatGLM-6B with PEFT (efficient ChatGLM fine-tuning based on PEFT).
BrightXiaoHan/CTranslate2
Fast inference engine for Transformer models
BrightXiaoHan/faster-whisper
Faster Whisper transcription with CTranslate2
BrightXiaoHan/fastllm
A pure C++ cross-platform LLM acceleration library with Python bindings; ChatGLM-6B-class models can exceed 10000 tokens/s on a single GPU; supports GLM, LLaMA, and MOSS base models and runs smoothly on mobile devices.
BrightXiaoHan/flash-rwkv
BrightXiaoHan/inference
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to run inference with any open-source language, speech recognition, or multimodal model, whether in the cloud, on-premises, or on your laptop.
BrightXiaoHan/langchain
🦜🔗 Build context-aware reasoning applications
BrightXiaoHan/langchain-ChatGLM
langchain-ChatGLM: local-knowledge-based ChatGLM question answering with LangChain.
BrightXiaoHan/lightllm
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.
BrightXiaoHan/nanoRWKV
The nanoGPT-style implementation of RWKV Language Model - an RNN with GPT-level LLM performance.
BrightXiaoHan/NvChad
Blazing-fast Neovim config providing solid defaults and a beautiful UI, enhancing your Neovim experience.
BrightXiaoHan/nvchad-starter
Starter config for NvChad
BrightXiaoHan/optimum
🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools
BrightXiaoHan/ragflow
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
BrightXiaoHan/rwkv.c
Inference Llama 2 in one file of pure C
BrightXiaoHan/sacrebleu
Reference BLEU implementation that auto-downloads test sets and reports a version string to facilitate cross-lab comparisons
BrightXiaoHan/setfit
Efficient few-shot learning with Sentence Transformers
BrightXiaoHan/ssr-command-client
:airplane: The command-line client for SSR, based on Python 3.
BrightXiaoHan/SwissArmyTransformer
SwissArmyTransformer is a flexible and powerful library to develop your own Transformer variants.
BrightXiaoHan/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs