Pinned Repositories
AG-LayCast
Augmented Queue-based Transmission and Transcoding Optimization for Livecast Services Based on Cloud-Edge-Crowd Integration
ChatGLM-Peft-Tuning
cvor
A simple implementation of control variates operator (CVor)
DAOA
Decentralized asynchronous optimization for dynamic adaptive multimedia streaming over information centric networking
FedLive
FedLive: A Federated Transmission Framework for Panoramic Livecast with Reinforced Variational Inference
GNNlive
Learning to Stream...
legal-intelligence
marl_stock
ns3-gym
Prototype-SchedulingModule-TCSVT
Scheduling module of the prototype system (Augmented Queue-based Transmission and Transcoding Optimization for Livecast Services Based on Cloud-Edge-Crowd Integration)
uglyghost's Repositories
uglyghost/ChatGLM-Peft-Tuning
uglyghost/legal-intelligence
uglyghost/GNNlive
Learning to Stream...
uglyghost/cvor
A simple implementation of control variates operator (CVor)
uglyghost/open-webui
User-friendly WebUI for LLMs (Formerly Ollama WebUI)
uglyghost/RWKV_xy
Just for testing
uglyghost/uglyghost.github.io
Academic homepage (Chinese version)
uglyghost/AGOD
AI-Generated Optimization Decision
uglyghost/AI-teacher
uglyghost/BScontract
uglyghost/ChatGLM-6B
ChatGLM-6B: An Open-Source Bilingual Dialogue Language Model
uglyghost/ChatGPT
Reverse-engineered ChatGPT API
uglyghost/chatgpt-on-wechat
uglyghost/ChatGPTWeb
uglyghost/clear_data
uglyghost/code_nlp
uglyghost/ColossalAI
Making large AI models cheaper, faster and more accessible
uglyghost/FedCPMD
uglyghost/FL-bench_FedSper
uglyghost/langchain-examples
Basic examples of prompt engineering leveraging langchain: https://www.langchain.com/
uglyghost/llm_paper_website
uglyghost/MARLlib
One repository is all that is necessary for Multi-agent Reinforcement Learning (MARL)
uglyghost/Multi-Agent-Transformer
uglyghost/paper_check
uglyghost/Qwen
The official repo of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud.
uglyghost/RTGCN-new
uglyghost/RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
uglyghost/simulation_li
uglyghost/test_page
uglyghost/visual-chatgpt
VisualChatGPT