Pinned Repositories
benchmark-octane
Octane benchmark for Node.js
c-ares
A C library for asynchronous DNS requests
Ch3nYuY.github.io
cpuinfo
CPU INFOrmation library (x86/x86-64/ARM/ARM64, Linux/Windows/Android/macOS/iOS)
Dockerfiles
Optimized media, analytics and graphics software stack images. Use the dockerfile(s) in your project or as a recipe book for bare metal installation.
dsa-perf-micros
learning-v8
Project for learning V8 internals
llama2.c
Inference Llama 2 in one file of pure C
node
Node.js JavaScript runtime ✨🐢🚀✨
tech-learning-journey
Ch3nYuY's Repositories
Ch3nYuY/benchmark-octane
Octane benchmark for Node.js
Ch3nYuY/c-ares
A C library for asynchronous DNS requests
Ch3nYuY/Ch3nYuY.github.io
Ch3nYuY/cpuinfo
CPU INFOrmation library (x86/x86-64/ARM/ARM64, Linux/Windows/Android/macOS/iOS)
Ch3nYuY/Dockerfiles
Optimized media, analytics and graphics software stack images. Use the dockerfile(s) in your project or as a recipe book for bare metal installation.
Ch3nYuY/dsa-perf-micros
Ch3nYuY/learning-v8
Project for learning V8 internals
Ch3nYuY/llama2.c
Inference Llama 2 in one file of pure C
Ch3nYuY/node
Node.js JavaScript runtime ✨🐢🚀✨
Ch3nYuY/tech-learning-journey
Ch3nYuY/tensorflow
An Open Source Machine Learning Framework for Everyone
Ch3nYuY/Use-LLMs-in-Colab
🤖 A collection of many large language models ready to use in Colab | LLMs is all you need.
Ch3nYuY/v8-internals
Documentation of V8 internals for compiler developers
Ch3nYuY/wasm-micro-runtime
WebAssembly Micro Runtime (WAMR)
Ch3nYuY/XNNPACK
High-efficiency floating-point neural network inference operators for mobile, server, and Web