Pinned Repositories
graal
Graal: High-Performance Polyglot Runtime :rocket: :trophy:
native_leaks
Native Memory Leaks Examples
open-box
Generalized and Efficient Blackbox Optimization System [SIGKDD'21].
sample-java-programs
Sample Java programs to demonstrate performance issues
SHARK
SHARK - High Performance Machine Learning Distribution
TencentKona-8
Tencent Kona is a no-cost, production-ready distribution of the Open Java Development Kit (OpenJDK), with long-term support (LTS) and quarterly updates. Tencent Kona serves as the default JDK internally at Tencent Cloud for cloud computing and other Java applications.
TencentKonaSMSuite
Tencent Kona SM Suite is a set of Java security providers that support the SM2, SM3, and SM4 algorithms, as well as the TLCP/GMSSL, TLS 1.3 (RFC 8998), and TLS 1.2 protocols.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
felixxfyang's Repositories
felixxfyang/sample-java-programs
Sample Java programs to demonstrate performance issues
felixxfyang/graal
Graal: High-Performance Polyglot Runtime :rocket: :trophy:
felixxfyang/native_leaks
Native Memory Leaks Examples
felixxfyang/open-box
Generalized and Efficient Blackbox Optimization System [SIGKDD'21].
felixxfyang/SHARK
SHARK - High Performance Machine Learning Distribution
felixxfyang/TencentKona-8
Tencent Kona is a no-cost, production-ready distribution of the Open Java Development Kit (OpenJDK), with long-term support (LTS) and quarterly updates. Tencent Kona serves as the default JDK internally at Tencent Cloud for cloud computing and other Java applications.
felixxfyang/TencentKonaSMSuite
Tencent Kona SM Suite is a set of Java security providers that support the SM2, SM3, and SM4 algorithms, as well as the TLCP/GMSSL, TLS 1.3 (RFC 8998), and TLS 1.2 protocols.
felixxfyang/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs