Pinned Repositories
addons
Useful extra functionality for TensorFlow 2.0 maintained by SIG-addons
automl
Google Brain AutoML
chartjs-web-components
Web components for Chart.js
ci_edit
A terminal text editor with mouse support and Ctrl+Q to quit.
darknet
Convolutional Neural Networks
mobilenetv2-yolov3
YOLOv3 with MobileNetV2 and EfficientNet
MobilenetV3
A TensorFlow implementation of MobileNetV3
MyBatis-Node
A personal fork of MyBatisNodeJs with custom modifications
fsx950223's Repositories
fsx950223/addons
Useful extra functionality for TensorFlow 2.0 maintained by SIG-addons
fsx950223/tensorflow
An Open Source Machine Learning Framework for Everyone
fsx950223/app
fsx950223/cancer_detection
fsx950223/composable_kernel
fsx950223/composer
Supercharge Your Model Training
fsx950223/disaster_tweets
fsx950223/examples
Fast and flexible reference benchmarks
fsx950223/flash-attention
Fast and memory-efficient exact attention
fsx950223/FlexGen
Running large language models on a single GPU for throughput-oriented scenarios.
fsx950223/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
fsx950223/keras-io
Keras documentation, hosted live at keras.io
fsx950223/llama
Inference code for LLaMA models
fsx950223/llm-foundry
LLM training code for MosaicML foundation models
fsx950223/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
fsx950223/monet_painting
fsx950223/Movielens
fsx950223/nccl-rccl-parser
Tool to run rccl-tests/nccl-tests based on logs from an application
fsx950223/nypd
fsx950223/penguins_sklearn
fsx950223/profilerv1
fsx950223/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
fsx950223/rocBLASTER
fsx950223/rocm_docker
fsx950223/titanic_nmf
fsx950223/triton
Development repository for the Triton language and compiler
fsx950223/tsl
fsx950223/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
fsx950223/web_app
fsx950223/xformers