jyy0553
Focus on natural language processing, information retrieval, Learning-to-Rank
Tianjin University · Tianjin
jyy0553's Stars
eseckel/ai-for-grant-writing
A curated list of resources for using LLMs to develop more competitive grant applications.
THUDM/GLM-4
GLM-4 series: Open Multilingual Multimodal Chat LMs
Clouditera/SecGPT
SecGPT: a large language model for cybersecurity
AsajuHuishi/Nonlinear-information-processing-technology-Chapter3
Homework for Chapter 3 of Nonlinear Information Processing Technology
yongzhuo/char-similar
Chinese character glyph/pinyin/semantic similarity (single characters; usable for data augmentation and for building confusion sets in CSC typo detection and correction tasks)
contr4l/SimilarCharacter
Compares the 6,700 most common Chinese characters by pronunciation and shape, and outputs lists of similar-sounding and similar-looking characters. # similar characters
houbb/nlp-hanzi-similar
The hanzi similarity tool (computes similarity between Chinese characters, with a look-alike-character algorithm; useful for correcting handwritten-character recognition, text obfuscation, etc.)
vpncn/vpncn.github.io
2024 ** recommendations for VPN and firewall-circumvention software, with pitfalls to avoid; stable and reliable. Compares SSR airports, Lantern, V2Ray, LaoWang VPN, self-built VPS ladders, and other circumvention tools; ** latest recommended VPN downloads for accessing ChatGPT.
InternLM/lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
wgwang/awesome-LLMs-In-China
** large language models
span-man/ebooks
tickstep/aliyunpan
A command-line client for Aliyun Drive (aliyunpan), with JavaScript plugin support and sync-backup functionality.
QwenLM/Qwen
The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Dao-AILab/flash-attention
Fast and memory-efficient exact attention
baichuan-inc/Baichuan-13B
A 13B large language model developed by Baichuan Intelligent Technology
meta-llama/llama
Inference code for Llama models
lilongxian/BaiYang-chatGLM2-6B
(1) Rotary position embedding encoder with elastic-interval normalization + PEFT LoRA quantized training, improving support for tens of thousands of tokens. (2) Evidence-theory interpretable learning to strengthen the model's complex logical reasoning. (3) Compatible with the Alpaca data format.
THUDM/GLM
GLM (General Language Model)
THUDM/ChatGLM2-6B
ChatGLM2-6B: An Open Bilingual Chat LLM
baichuan-inc/Baichuan-7B
A large-scale 7B pretraining language model developed by BaiChuan-Inc.
bigganbing/Fairseq_MorphTE
[NeurIPS 2022] MorphTE: Injecting Morphology in Tensorized Embeddings
wenge-research/YAYI
YaYi LLMs: secure, reliable domain-specific large models for customers, based on the LLaMA 2 & BLOOM model families trained on large-scale Chinese-English multi-domain instruction data, developed by the Zhongke Wenge algorithm team. (Repo for YaYi Chinese LLMs based on LLaMA 2 & BLOOM)
ssbuild/chatglm_finetuning
ChatGLM-6B fine-tuning and Alpaca fine-tuning
hkust-nlp/ceval
Official GitHub repo for C-Eval, a Chinese evaluation suite for foundation models [NeurIPS 2023]
huawei-noah/Efficient-AI-Backbones
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
mymusise/ChatGLM-Tuning
A fine-tuning scheme based on ChatGLM-6B + LoRA
OpenMOSS/MOSS
An open-source tool-augmented conversational language model from Fudan University