Erebyel
Information and data professional specializing in natural language processing
Madrid
Erebyel's Stars
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
JohnSnowLabs/spark-nlp
State of the Art Natural Language Processing
RasaHQ/financial-demo
A demo for a financial services bot
PlanTL-GOB-ES/lm-spanish
Official source for Spanish Language Models and resources made @ BSC-TEMU within the "Plan de las Tecnologías del Lenguaje" (Plan-TL).
HarisIqbal88/PlotNeuralNet
Latex code for making neural networks diagrams
somosnlp/nlp-de-cero-a-cien
Hands-on course: NLP from zero to one hundred 🤗
huggingface/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
EleutherAI/gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
EleutherAI/gpt-neo
An implementation of model parallel GPT-2 and GPT-3-style models using the mesh-tensorflow library.
kingoflolz/mesh-transformer-jax
Model parallel transformers in JAX and Haiku
thunil/TecoGAN
This repo contains source code and materials for the TEmporally COherent GAN SIGGRAPH project.
sdv-dev/CTGAN
Conditional GAN for generating synthetic tabular data.
google/oboe
Oboe is a C++ library that makes it easy to build high-performance audio apps on Android.
apache/incubator-marvin
Apache Marvin-AI
google-research/recsim_ng
RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems
fastai/fastbook
The fastai book, published as Jupyter Notebooks
fastai/course20
Deep Learning for Coders, 2020, the website
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
microsoft/huggingface-transformers
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0.
microsoft/nlp-recipes
Natural Language Processing Best Practices & Examples
recommenders-team/recommenders
Best Practices on Recommendation Systems
microsoft/computervision-recipes
Best Practices, code samples, and documentation for Computer Vision.
openai/sparse_attention
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
openai/gpt-3
GPT-3: Language Models are Few-Shot Learners
openai/gpt-2
Code for the paper "Language Models are Unsupervised Multitask Learners"
NVIDIA/OpenSeq2Seq
Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
NVIDIA/semantic-segmentation
NVIDIA Semantic Segmentation monorepo
NVIDIA/FasterTransformer
Transformer related optimization, including BERT, GPT
NVIDIA/data-science-stack
NVIDIA Data Science stack tools
NVIDIA/DeepLearningExamples
State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.