xla
There are 42 repositories under the xla topic.
elixir-nx/nx
Multi-dimensional arrays (tensors) and numerical definitions for Elixir
pytorch/xla
Enabling PyTorch on XLA Devices (e.g. Google TPU)
n2cholas/awesome-jax
A curated list of resources for JAX (https://github.com/google/jax)
gordicaleksa/get-started-with-JAX
Makes it easy to get started with JAX, Flax, and Haiku. Contains the author's "Machine Learning with JAX" tutorial series (YouTube videos and Jupyter notebooks), as well as content the author found useful while learning the JAX ecosystem.
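Several of the JAX repos above revolve around `jax.jit`, which traces a Python function and compiles it with XLA. A minimal sketch (assuming `jax` is installed; the `selu` function is just an illustrative example, not code from any of the listed repos):

```python
import jax
import jax.numpy as jnp

@jax.jit  # trace once, then compile the traced computation with XLA
def selu(x, alpha=1.67, lmbda=1.05):
    # SELU activation: scaled ELU, expressed with jnp ops so XLA can fuse them
    return lmbda * jnp.where(x > 0, x, alpha * jnp.exp(x) - alpha)

x = jnp.arange(5.0)
print(selu(x).shape)  # (5,)
```

The first call triggers compilation; subsequent calls with the same input shapes and dtypes reuse the compiled XLA executable.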
mpi4jax/mpi4jax
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python :zap:
dfm/extending-jax
Extending JAX with custom C++ and CUDA code
JuliaGPU/XLA.jl
Julia on TPUs
kamalkraj/ALBERT-TF2.0
ALBERT model pretraining and fine-tuning using TF 2.0
HomebrewNLP/revlib
Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offloading
inoryy/tensorflow-optimized-wheels
TensorFlow wheels built for the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA, and XLA
gomlx/gomlx
GoMLX -- Accelerated ML Libraries for Go
flaport/sax
S + Autograd + XLA :: S-parameter based frequency domain circuit simulations and optimizations using JAX.
huanghuidmml/tfbert
Pretrained-model wrapper based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision. Flexible training, validation, and prediction.
sayakpaul/keras-xla-benchmarks
Presents comprehensive benchmarks of XLA-compatible pre-trained models in Keras.
DifferentiableUniverseInitiative/jaxDecomp
JAX bindings for the NVIDIA cuDecomp library
scala-network/scala-pool
Official Scala pool repository
onnx/onnx-xla
XLA integration of Open Neural Network Exchange (ONNX)
sseung0703/TF2-jit-compile-on-multi-gpu
TensorFlow 2 training code with JIT compilation on multiple GPUs.
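In TensorFlow 2, the XLA JIT compilation that several of these repos rely on is enabled per function via `tf.function(jit_compile=True)`. A minimal sketch (assuming TensorFlow is installed; `dense_step` is a hypothetical example function, not from the repo above):

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # compile this function's graph with XLA
def dense_step(x, w):
    # matmul + ReLU; XLA can fuse these into a single kernel
    return tf.nn.relu(tf.matmul(x, w))

x = tf.random.normal((8, 16))
w = tf.random.normal((16, 4))
print(dense_step(x, w).shape)  # (8, 4)
```

For multi-GPU training as in the repo above, the same `jit_compile=True` function is typically run inside a `tf.distribute.MirroredStrategy` scope.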
sayakpaul/you-dont-know-tensorflow
Contains materials for my talk "You don't know TensorFlow".
sayakpaul/xla-benchmark-sd
Provides code to serialize the different models involved in Stable Diffusion as SavedModels and to compile them with XLA.
AlibabaPAI/FlashModels
Fast and easy distributed model training examples.
jhn-nt/data-snax
Versatile Data Ingestion Pipelines for Jax
kmkolasinski/tensorflow-nanoGPT
Example of how to train GPT-2 (XLA + AMP), export it as a SavedModel, and serve it with TensorFlow Serving
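The export step mentioned above — saving an XLA-compiled function as a SavedModel — can be sketched as follows (assuming TensorFlow is installed; `TinyModel` is a hypothetical stand-in for a real model, not the repo's GPT-2 code):

```python
import tempfile
import tensorflow as tf

class TinyModel(tf.Module):
    def __init__(self):
        self.w = tf.Variable(tf.random.normal((16, 4)))

    # a concrete input signature lets the SavedModel be served without retracing
    @tf.function(input_signature=[tf.TensorSpec((None, 16), tf.float32)],
                 jit_compile=True)
    def __call__(self, x):
        return tf.matmul(x, self.w)

m = TinyModel()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(m, export_dir)      # what TensorFlow Serving loads
restored = tf.saved_model.load(export_dir)
print(restored(tf.zeros((2, 16))).shape)  # (2, 4)
```

TensorFlow Serving then loads the exported directory directly; no Python is needed at serving time.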
scala-network/StellitePay-API
DEPRECATED ⛔️
googleinterns/paksha
Compiling JAX to WebAssembly for exploring client-side machine learning
gottingen/tf-reading
Notes from reading the TensorFlow source code
mugithi/google-terraform-pytorch-tpu
Automated provisioner of a Google Cloud TPU environment for training in PyTorch
InikoPro/mineveruscoinonarm
Mine Verus Coin on ARM devices such as the Raspberry Pi, tablets, and mobile phones.
jhashekhar/multilingual-clf
Classification of a multilingual dataset trained only on English training data using pre-trained models. The model is trained on TPUs using PyTorch and the torch_xla library.
13X-Labs/gpt2-text-generation-xla
As the quality of large language models increases, so do our expectations of what they can do. Text generation has drawn attention since the release of OpenAI's GPT-2, and for good reason: these models can be used for summarization, translation, and even real-time learning in some language tasks.
nguyentruonglau/keras-classify
An optimal choice for 🛰 classification problems.
VertexC/dl-infer-perf
Deep learning inference performance analysis
aklein4/MonArc
A practical method for training energy-based language models.
AndreiMoraru123/Super-Resolution
Modern Graph TensorFlow implementation of Super-Resolution GAN
ReturnToFirst/FastTFWorkflow
Tutorial on how to make slow TensorFlow training faster
wcxve/xspex
Access the Xspec models and corresponding JAX/XLA ops.