Hands on NLP with Transformers

This repository contains code for the O'Reilly Live Online Training for Hands on NLP with Transformers

This training will provide an introduction to the transformer architecture, which is currently considered state of the art for modern NLP tasks. We will take a deep dive into what makes the transformer uniquely suited to processing natural language, including the attention mechanism and encoder-decoder architectures. We will see several examples of how people and companies are using transformers to solve a wide variety of NLP tasks, including holding conversations, image captioning, reading comprehension, and more.
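The attention mechanism mentioned above can be sketched in a few lines of NumPy. This is a minimal, illustrative version of scaled dot-product attention (softmax(QK^T / sqrt(d_k)) V), not code from the notebooks:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# toy example: 3 query tokens attending over 4 key/value tokens, d_k = 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one context vector per query token: (3, 8)
```

Each row of the attention weights is a probability distribution over the input tokens, which is what lets the model decide which words to "look at" when encoding each position.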

This training will feature several code-driven examples of transformer-derived architectures including BERT, GPT, T5, and the Vision Transformer. Each of our case studies will be inspired by real use cases and will lean on transfer learning to expedite our process while using actionable metrics to drive results.
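The transfer-learning idea the case studies lean on can be illustrated with a toy NumPy example: keep a "pretrained" feature extractor frozen and train only a small classification head on top. The random projection here is a hypothetical stand-in for a pretrained encoder such as BERT; this is a conceptual sketch, not the notebook code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen "pretrained" layer: a fixed random projection standing in
# for a real pretrained encoder (hypothetical, for illustration only).
W_frozen = rng.normal(size=(10, 16))
def extract(X):
    return np.tanh(X @ W_frozen)  # frozen features, never updated

# toy binary-classification data
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Trainable head: logistic regression on the frozen features,
# fit with plain gradient descent.
feats = extract(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    grad = p - y
    w -= 0.1 * feats.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = ((p > 0.5) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Only the head's parameters are updated, which is why fine-tuning on top of a pretrained model converges quickly even with modest labeled data.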

Notebooks

Classification with BERT

Classification with XLNET

Off-the-shelf NLP with T5

Generating LaTeX with GPT2

Image Captioning with Vision Transformers

3rd Party Transformer Models

Instructor

Sinan Ozdemir is currently the Director of Data Science at Directly, managing the AI and machine learning models that power the company’s intelligent customer support platform. Sinan is a former lecturer of Data Science at Johns Hopkins University and the author of multiple textbooks on data science and machine learning. Additionally, he is the founder of the recently acquired Kylie.ai, an enterprise-grade conversational AI platform with RPA capabilities. He holds a Master’s Degree in Pure Mathematics from Johns Hopkins University and is based in San Francisco, CA.