Pinned Repositories
academic-kickstart
academic-njucode
analytics-zoo
Distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray
attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
Cardionvascular_DataMining
CS-Notes-PDF
PDF version of https://github.com/CyC2018/CS-Notes for offline reading
CSIT5900_Assignment
Heart-Disease-Data-Mining
ucore
Tsinghua University operating systems course labs (OS Kernel Labs)
sgwhat's Repositories
sgwhat/Heart-Disease-Data-Mining
sgwhat/analytics-zoo
Distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray
sgwhat/Cardionvascular_DataMining
sgwhat/CSIT5900_Assignment
sgwhat/ucore
Tsinghua University operating systems course labs (OS Kernel Labs)
sgwhat/academic-kickstart
sgwhat/academic-njucode
sgwhat/attention_sinks
Extend existing LLMs way beyond the original training length with constant memory usage, without retraining
sgwhat/CS-Notes-PDF
PDF version of https://github.com/CyC2018/CS-Notes for offline reading
sgwhat/cs231n.github.io
Public-facing notes page
sgwhat/Data-Paralle-Cpp
A personal translation of "Data Parallel C++"
sgwhat/datasets
A collection of datasets for ML problem solving
sgwhat/fullstack-tutorial
🚀 Full-stack tutorial 2021: backend tech stack / path to architect / full-stack developer community; spring, autumn and campus recruiting interview prep
sgwhat/indoor-location-competition-20
Indoor Location Competition 2.0
sgwhat/inference
Reference implementations of MLPerf™ inference benchmarks
sgwhat/ipex-llm
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.
sgwhat/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
sgwhat/migrate-examples
sgwhat/models
Model Zoo for Intel® Architecture: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors
sgwhat/NLP_Paper_Retrial
A collection of recently published NLP papers
sgwhat/ollama
Get up and running with Llama 2, Mistral, and other large language models locally.
sgwhat/sentimen-analysis-based-on-sentiment-lexicon-and-deep-learning
sgwhat/tensorflow
Computation using data flow graphs for scalable machine learning
sgwhat/text-generation-webui
A Gradio Web UI for running local LLMs on Intel GPUs (e.g., a local PC with iGPU, or a discrete GPU such as Arc, Flex and Max) using IPEX-LLM.