Pinned Repositories
admin3
A lightweight admin framework. The backend is built on Java 17 and Spring Boot 3.0; the frontend uses TypeScript, Vite 3, Vue 3, and Element Plus. It provides best-practice implementations of only the basics (login sessions, user management, role management, permission and resource management, event logs, and object storage) without excessive abstraction, making it well suited for secondary development, freelance projects, and source-code study.
ai-tools
asio
Asio C++ Library
blind_watermark
Blind & invisible watermarking for images; the watermark can be extracted without the original image!
ByteTransformer
Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052
cpy3
Go bindings to the CPython-3 API
csproj
DeepDanbooru
AI-based multi-label girl-image classification system, implemented with TensorFlow.
design_patterns
Design patterns in C++
quickfix2
tangjicheng46's Repositories
tangjicheng46/quickfix2
tangjicheng46/ai-tools
tangjicheng46/asio
Asio C++ Library
tangjicheng46/ByteTransformer
Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052
tangjicheng46/csproj
tangjicheng46/myserver
server backend
tangjicheng46/op
tangjicheng46/sagebox
Essential tools for every programmer's toolkit.
tangjicheng46/EssentialCSharp
This project contains the source code for the book Essential C# by Mark Michaelis (Addison-Wesley).
tangjicheng46/generative-models
Generative Models by Stability AI
tangjicheng46/gh-favorite-excises
tangjicheng46/go-ast-book
:books: "Go Language Customization Guide" (formerly: Introduction to the Go Syntax Tree / free open-source book / advanced Go / mastering the abstract syntax tree / Go AST)
tangjicheng46/gohub
tangjicheng46/gopher-lua
GopherLua: VM and compiler for Lua in Go
tangjicheng46/hugo
The world’s fastest framework for building websites.
tangjicheng46/inplace_vector
inplace vector: A dynamically-resizable vector with fixed capacity and embedded storage.
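The data structure described above can be sketched in a few lines. This is a hypothetical illustration of the embedded-storage idea (the class and method names are my own, not the repository's actual API): elements live in storage embedded in the object itself, so capacity is a compile-time constant and no heap allocation ever occurs.

```cpp
#include <array>
#include <cstddef>
#include <stdexcept>

// Minimal sketch of an inplace vector: a std::array provides the
// embedded storage, and a separate count tracks how many slots are
// in use. push_back fails (rather than reallocating) at capacity.
template <typename T, std::size_t Capacity>
class InplaceVector {
public:
    void push_back(const T& value) {
        if (size_ == Capacity)
            throw std::length_error("fixed capacity exceeded");
        storage_[size_++] = value;
    }
    void pop_back() { --size_; }
    T& operator[](std::size_t i) { return storage_[i]; }
    std::size_t size() const { return size_; }
    static constexpr std::size_t capacity() { return Capacity; }

private:
    std::array<T, Capacity> storage_{};  // embedded storage, no heap
    std::size_t size_ = 0;               // number of slots in use
};
```

Because the storage is inline, such a container is trivially usable in contexts where allocation is forbidden (embedded systems, real-time code), at the cost of a hard capacity limit.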
tangjicheng46/kitetsu
tangjicheng46/leet
tangjicheng46/md-doc1
tangjicheng46/pg-cpp
tangjicheng46/powergrid
tangjicheng46/sagemaker-distributed-training-workshop
Hands-on workshop for distributed training and hosting on SageMaker
tangjicheng46/sd-cpu-test
tangjicheng46/sd_deploy
Stable Diffusion web UI
tangjicheng46/sd_trt
tangjicheng46/starlark-go
Starlark in Go: the Starlark configuration language, implemented in Go
tangjicheng46/tangjicheng46.github.io
tangjicheng46/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
tangjicheng46/winter
tangjicheng46/x