Pinned Repositories
airframe
Libraries for Building Full-Fledged Scala Applications
apt-boto-s3
The fast and simple S3 transport for apt.
arch-pkgs
Fork of https://github.com/archlinux/svntogit-community
cs231n
My (incomplete) solutions for Stanford CS231n assignments
gcj
Google Code Jam solutions in Rust
graceful
Shutdown gracefully
httprouter
A high-performance HTTP request router that scales well
libni
Low-level building blocks for efficient and scalable C++ applications
monolith
ByteDance's Recommendation System
rotor-capnp
mio-based async stream for Cap'n Proto messages.
0x1997's Repositories
0x1997/rotor-capnp
mio-based async stream for Cap'n Proto messages.
0x1997/graceful
Shutdown gracefully
0x1997/monolith
ByteDance's Recommendation System
0x1997/arch-pkgs
Fork of https://github.com/archlinux/svntogit-community
0x1997/baidu-netdisk-high-speed
:zap: A Chrome extension for high-speed downloads from Baidu Netdisk
0x1997/c-ares-feedstock
0x1997/calibre
0x1997/catch-feedstock
0x1997/clipp-feedstock
0x1997/concurrentqueue-feedstock
0x1997/grpc-cpp-feedstock
0x1997/incubator-mxnet
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, JavaScript and more
0x1997/lego
Let's Encrypt client and ACME library written in Go
0x1997/libhdfs3-feedstock
A conda-smithy repository for libhdfs3.
0x1997/librdkafka-feedstock
A conda-smithy repository for librdkafka.
0x1997/lingua
The most accurate natural language detection library for Java and the JVM, suitable for long and short text alike
0x1997/mleap
MLeap: Deploy Spark Pipelines to Production
0x1997/protobuf-feedstock
A conda-smithy repository for protobuf.
0x1997/pyhash-feedstock
0x1997/python-confluent-kafka-feedstock
A conda-smithy repository for python-confluent-kafka.
0x1997/spacemacs
An Emacs distribution - The best editor is neither Emacs nor Vim, it's Emacs *and* Vim!
0x1997/tensorflow-DeepFM
Tensorflow implementation of DeepFM for CTR prediction.
0x1997/tensorflow-model-server
0x1997/tensorflow_recipes
Tensorflow conda recipes
0x1997/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
0x1997/tensorrtllm_backend
The Triton TensorRT-LLM Backend
0x1997/text-generation-inference
Large Language Model Text Generation Inference
0x1997/triton-inference-server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
0x1997/yakuake
Yakuake with some custom patches
0x1997/yaml-cpp-feedstock