davidgxue
Open source contributor to transformers & GPTQModel || Machine Learning Engineer @ Astronomer. Prev @ HiveAI, Meta & Amazon
New York, NY
Pinned Repositories
ask-astro
An end-to-end LLM reference implementation providing a Q&A interface for Airflow and Astronomer
AutoGPTQ
Open source contribution fork: an easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
cors-anywhere
CORS Anywhere is a Node.js reverse proxy which adds CORS headers to the proxied request.
Face-Mask-Detection
Housing-Prices-Predictive-ML
An end-to-end predictive modeling walkthrough of the Boston House Prices regression problem in Python
Linusky17.github.io
UVA-Study-Buddy-Finder
Built with the Django framework, the UVA Study Buddy Finder is a web app that helps UVA students find study partners based on their needs.
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
sage
AI copilot designed to enhance developer productivity and streamline onboarding processes
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
davidgxue's Repositories
davidgxue/UVA-Study-Buddy-Finder
Built with the Django framework, the UVA Study Buddy Finder is a web app that helps UVA students find study partners based on their needs.
davidgxue/AutoGPTQ
Open source contribution fork: an easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
davidgxue/cors-anywhere
CORS Anywhere is a Node.js reverse proxy which adds CORS headers to the proxied request.
davidgxue/Face-Mask-Detection
davidgxue/Housing-Prices-Predictive-ML
An end-to-end predictive modeling walkthrough of the Boston House Prices regression problem in Python
davidgxue/Linusky17.github.io
davidgxue/LLM-Patching-Scripts
A collection of scripts and code I use to improve, modify, and patch LLMs
davidgxue/transformers
Fork of 🤗 Transformers for open source contribution PRs
davidgxue/vllm
Open Source Contribution Fork for vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs