Pinned Repositories
adanet
Fast and flexible AutoML with learning guarantees.
akg
AKG (Auto Kernel Generator) is an optimizer for operators in Deep Learning Networks, which provides the ability to automatically fuse ops with specific patterns.
albert_zh
A Lite BERT for Self-Supervised Learning of Language Representations; large-scale pretrained Chinese ALBERT models.
AliceMind
api-data
A curated collection of commonly used APIs, currently in four categories: WeChat-related, data and analytics, developer tools, and everyday services (e.g., weather forecasts, document generation, ID-card recognition, proxy IPs). Also collects various datasets, such as classical Chinese poetry, word lists, sensitive-word lists, medical vocabularies, and CET-4/6 English-Chinese dictionary data.
AutoKernel
AutoKernel is a simple, easy-to-use, low-barrier automatic operator optimization tool that improves the deployment efficiency of deep learning algorithms.
awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
BERT-keras
Keras implementation of BERT (Bidirectional Encoder Representations from Transformers)
bert_language_understanding
Pre-training of Deep Bidirectional Transformers for Language Understanding
VIAtoCOCO
Convert JSON annotation files created by the VIA tool into COCO-format dataset JSON files.
wxyhv's Repositories
wxyhv/akg
AKG (Auto Kernel Generator) is an optimizer for operators in Deep Learning Networks, which provides the ability to automatically fuse ops with specific patterns.
wxyhv/AliceMind
wxyhv/awesome-tensor-compilers
A list of awesome compiler projects and papers for tensor computation and deep learning.
wxyhv/BlogLearning
My personal learning journey, focusing on interesting image processing algorithms, motion capture, and machine learning.
wxyhv/bolt
10x faster matrix and vector operations.
wxyhv/CS-Notes
:books: Essential fundamentals for technical interviews: LeetCode, operating systems, computer networks, and system design.
wxyhv/EET
Easy and Efficient Transformer: a scalable inference solution for large NLP models
wxyhv/EMLL
Edge Machine Learning Library
wxyhv/FasterStereoCuda-Library
A fast stereo matching library accelerated with CUDA, built around the SemiGlobalMatching (SGM) algorithm. It is not only far faster than conventional CPU-based SGM but also uses significantly less memory, so it can reach real-time frame rates on lower-resolution (megapixel) images and is fully capable of processing images of ten megapixels or more.
wxyhv/FasterTransformer
Transformer related optimization, including BERT, GPT
wxyhv/HowToCook
A programmer's guide to cooking at home (in Chinese).
wxyhv/inter-operator-scheduler
[MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration
wxyhv/KSAI-Lite
KSAI Lite is a deep learning inference framework from Kingsoft, based on TensorFlow Lite.
wxyhv/MASTER-pytorch
Code for the paper "MASTER: Multi-Aspect Non-local Network for Scene Text Recognition" (Pattern Recognition 2021)
wxyhv/NN-CUDA-Example
Several simple examples for popular neural network toolkits calling custom CUDA operators.
wxyhv/nnfusion
A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description.
wxyhv/nni
An open-source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
wxyhv/oneflow
OneFlow is a performance-centered and open-source deep learning framework.
wxyhv/onnx
Open standard for machine learning interoperability
wxyhv/onnx-simplifier
Simplify your ONNX model
wxyhv/PL-Compiler-Resource
Resources on programming languages and compiler techniques (continuously updated).
wxyhv/portrait-matting-unet-flask
Portrait matting implemented with UNet in PyTorch.
wxyhv/pylint
It's not just a linter that annoys you!
wxyhv/RASP
An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers"
wxyhv/service-streamer
Boosting your web services for deep learning applications.
wxyhv/taichi
Productive & portable programming language for high-performance, sparse & differentiable computing on CPUs & GPUs
wxyhv/tedukuri
Resource community for the book Advanced Guide to Algorithm Contests (《算法竞赛进阶指南》).
wxyhv/TensorFlowTTS
:stuck_out_tongue_closed_eyes: TensorFlowTTS: Real-time state-of-the-art speech synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German; easy to adapt to other languages)
wxyhv/TNN
TNN: a lightweight, high-performance deep learning inference framework for mobile devices, developed by Tencent Youtu Lab and Guangying Lab. TNN is distinguished by several outstanding features, including cross-platform capability, high performance, model compression, and code pruning. Built on ncnn and Rapidnet, TNN further strengthens support and performance optimization for mobile devices, while drawing on the extensibility and high performance of existing open-source frameworks. TNN has been deployed in multiple Tencent apps, such as Mobile QQ, Weishi, and Pitu. Contributions are welcome; collaborate with us to make TNN a better framework.
wxyhv/TRTorch
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT