Pinned Repositories
ai-lab-recipes
Examples for building and running LLM services and applications locally with Podman
aotriton
Ahead of Time (AOT) Triton Math Library
bard-cli
Google Bard in the terminal
bugzilla
Go client for the Bugzilla API
content-resolver-input
Configuration files for Feedback Pipeline
cputopology
Linux systemd service to configure CPU topology at boot time
FBGEMM
FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
fromager
Build your own Python wheels
go-gitlab
A GitLab API client enabling Go programs to interact with GitLab in a simple and uniform way
staging-next-unisys
staging-next plus Unisys changes
prarit's Repositories
prarit/staging-next-unisys
staging-next plus Unisys changes
prarit/ai-lab-recipes
Examples for building and running LLM services and applications locally with Podman
prarit/aotriton
Ahead of Time (AOT) Triton Math Library
prarit/bard-cli
Google Bard in the terminal
prarit/bugzilla
Go client for the Bugzilla API
prarit/content-resolver-input
Configuration files for Feedback Pipeline
prarit/cputopology
Linux systemd service to configure CPU topology at boot time
prarit/FBGEMM
FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
prarit/fromager
Build your own Python wheels
prarit/go-gitlab
A GitLab API client enabling Go programs to interact with GitLab in a simple and uniform way
prarit/go-jira
Go client library for Atlassian Jira
prarit/grubby
prarit/jira-cli
🔥 [WIP] Feature-rich interactive Jira command line.
prarit/lab
Lab wraps Git or Hub, making it simple to clone, fork, and interact with repositories on GitLab
prarit/mcelog
Linux kernel machine check handling middleware
prarit/pesign
Linux tools for signed PE-COFF binaries
prarit/rpm
prarit/tuned
Tuning Profile Delivery Mechanism for Linux
prarit/v3simple-spatial
prarit/instructlab-sdg
Python library for Synthetic Data Generation
prarit/llvm-triton
prarit/viper
Go configuration with fangs
prarit/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs