Pinned Repositories
fabric
Read-only mirror of https://gerrit.hyperledger.org/r/#/admin/projects/fabric
fabric-ca
fabric-gerrit
fabric-protos
fabric-protos-go
fabric-rfcs
RFC process for Hyperledger Fabric. The RFC (request for comments) process is intended to provide a consistent and controlled path for major changes to Fabric and other official project components. https://wiki.hyperledger.org/display/fabric
fabric-samples
fabric-smart-client
The Fabric Smart Client is a new Fabric client that lets you focus on business processes and simplifies the development of Fabric-based distributed applications.
fabric-token-sdk
The Fabric Token SDK is a set of APIs and services that lets developers create token-based distributed applications on Hyperledger Fabric.
fastsafetensors
High-performance safetensors model loader
manish-sethi's Repositories
manish-sethi/fabric
Read-only mirror of https://gerrit.hyperledger.org/r/#/admin/projects/fabric
manish-sethi/fabric-ca
manish-sethi/fabric-gerrit
manish-sethi/fabric-protos
manish-sethi/fabric-protos-go
manish-sethi/fabric-rfcs
RFC process for Hyperledger Fabric. The RFC (request for comments) process is intended to provide a consistent and controlled path for major changes to Fabric and other official project components. https://wiki.hyperledger.org/display/fabric
manish-sethi/fabric-samples
manish-sethi/fabric-smart-client
The Fabric Smart Client is a new Fabric client that lets you focus on business processes and simplifies the development of Fabric-based distributed applications.
manish-sethi/fabric-token-sdk
The Fabric Token SDK is a set of APIs and services that lets developers create token-based distributed applications on Hyperledger Fabric.
manish-sethi/foundation-model-stack
manish-sethi/goleveldb
LevelDB key/value database in Go.
manish-sethi/hyperledger-archives-fabric
Blockchain fabric code
manish-sethi/ibm-vllm
A fork of github.com/vllm-project/vllm
manish-sethi/pebble
RocksDB/LevelDB inspired key-value database in Go
manish-sethi/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs