
One line for thousands of state-of-the-art NLP models in hundreds of languages: the fastest and most accurate way to solve text problems.


NLU: The Power of Spark NLP, the Simplicity of Python

John Snow Labs' NLU is a Python library for applying state-of-the-art text mining directly on any dataframe with a single line of code. As a facade of the award-winning Spark NLP library, it comes with 1000+ pretrained models in 100+ languages, all production-grade, scalable, and trainable, with everything in 1 line of code.

NLU in Action

See how easy it is to use any of the thousands of models in 1 line of code. There are hundreds of tutorials and simple examples you can copy and paste into your projects to achieve state-of-the-art results easily.

NLU & Streamlit in Action

This 1 line lets you visualize and play with 1000+ SOTA NLU & NLP models in 200 languages for Named Entity Recognition, Dependency Trees & Parts of Speech, Classification for 100+ problems, Text Summarization & Question Answering using T5, Translation with Marian, Text Similarity Matrix using BERT, ALBERT, ELMO, XLNET, ELECTRA and others of the 100+ word embeddings, and much more, using Streamlit.

streamlit run https://raw.githubusercontent.com/JohnSnowLabs/nlu/master/examples/streamlit/01_dashboard.py

NLU provides a tight and simple integration with Streamlit, which enables building powerful web apps in just 1 line of code that showcase NLU's models. See the NLU & Streamlit documentation or the NLU & Streamlit examples section.
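
If you prefer launching the dashboard from your own code, the same integration is exposed as a method on any loaded pipeline. The snippet below is a minimal sketch based on the NLU & Streamlit docs; the 'ner' reference is just one example from the NLU namespace, and the script must be started with streamlit run.

import nlu
# Launch the interactive Streamlit dashboard for a loaded pipeline.
# Save this in a script and start it with: streamlit run my_dashboard.py
nlu.load('ner').viz_streamlit("Donald Trump and Angela Merkel met in Berlin")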

All NLU resources overview

Take a look at the official NLU page: https://nlu.johnsnowlabs.com/ for user documentation and examples.

Resource: Description
Install NLU: Just run pip install nlu pyspark==3.0.2
The NLU Namespace: Find all the names of models you can load with nlu.load()
The nlu.load(<Model>) function: Load any of the 1000+ models in 1 line
The nlu.load(<Model>).predict(data) function: Predict on strings, lists of strings, NumPy arrays, Pandas, Modin and Spark DataFrames
The nlu.load(<train.Model>).fit(data) function: Train a text classifier for 2-class, N-class or multi-label problems, Named Entity Recognition, or Part of Speech tagging
The nlu.load(<Model>).viz(data) function: Visualize the results of Word Embedding Similarity Matrix, Named Entity Recognizers, Dependency Trees & Parts of Speech, Entity Resolution, Entity Linking or Entity Status Assertion (see the sketch after this table)
The nlu.load(<Model>).viz_streamlit(data) function: Display an interactive GUI which lets you explore and test every model and feature in NLU in 1 click
General Concepts: General concepts in NLU
The latest release notes: Newest features added to NLU
Overview of NLU 1-liner examples: Most commonly used models and their results
Overview of NLU 1-liner examples for healthcare models: Most commonly used healthcare models and their results
Overview of all NLU tutorials and examples: 100+ tutorials on how to use NLU on text datasets for various problems and from various sources like Twitter, Chinese News, Crypto News Headlines, Airline Traffic communication, Product review classifier training, and more
Connect with us on Slack: Problems, questions or suggestions? We have a very active and helpful community of 2000+ AI enthusiasts putting NLU, Spark NLP & Spark OCR to good use
Discussion Forum: Want a more in-depth discussion with the community? Post a thread in our discussion forum
John Snow Labs Medium: Articles and tutorials on NLU, Spark NLP and Spark OCR
John Snow Labs YouTube: Videos and tutorials on NLU, Spark NLP and Spark OCR
NLU Website: The official NLU website
GitHub Issues: Report a bug
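
A quick sketch of the viz() entry from the table above, for static in-notebook visualizations. The 'ner' reference and the example sentence are illustrative; which visualization is rendered depends on the components in the loaded pipeline.

import nlu
# Render the named entities found in the text as an HTML visualization
# (works out of the box in Jupyter and Colab notebooks)
nlu.load('ner').viz("Donald Trump from America and Angela Merkel from Germany do not share many opinions.")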

Getting Started with NLU

To get your hands on the power of NLU, you just need to install it via pip and ensure Java 8 is installed and properly configured. Check out the Quickstart for more information.

pip install nlu pyspark==3.0.2

Load and predict with any model in 1 line

import nlu
# Downloads the pretrained sentiment pipeline on first use and returns
# the prediction as a Pandas DataFrame
nlu.load('sentiment').predict('I love NLU! <3')

Load and predict with multiple models in 1 line

Get 6 different embeddings in 1 line and use them for downstream data science tasks!

nlu.load('bert elmo albert xlnet glove use').predict('I love NLU! <3') 
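
Since the result comes back as a Pandas DataFrame, the embeddings can be wired straight into downstream tooling. The sketch below assumes that behaviour; the exact embedding column names depend on the loaded models, so inspect the columns first.

import nlu
# Each embedding model contributes its own column(s) to the returned DataFrame
df = nlu.load('bert elmo albert xlnet glove use').predict('I love NLU! <3', output_level='document')
# Check which embedding columns were generated before feeding them to a downstream model
print(df.columns)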

What kind of models does NLU provide?

NLU provides everything a data scientist might wish for in one line of code!

  • 1000+ pre-trained models
  • 100+ of the latest NLP word embeddings (BERT, ELMO, ALBERT, XLNET, GLOVE, BIOBERT, ELECTRA, COVIDBERT) and different variations of them
  • 50+ of the latest NLP sentence embeddings (BERT, ELECTRA, USE) and different variations of them
  • 100+ Classifiers (NER, POS, Emotion, Sarcasm, Questions, Spam)
  • 300+ Supported Languages
  • Summarize Text and Answer Questions with T5 (see the sketch after this list)
  • Labeled and Unlabeled Dependency parsing
  • Various Text Cleaning and Pre-Processing methods like Stemming, Lemmatizing, Normalizing, Filtering, Cleaning pipelines and more
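
As an example of the T5 line above, summarization can also be loaded by name. This is a minimal sketch: 'summarize' is the T5 summarization alias documented in the NLU namespace, but if it is unavailable in your NLU version, load a T5 model reference directly instead.

import nlu
# T5-based text summarization in 1 line; 'summarize' configures T5 for the summarization task
nlu.load('summarize').predict(
    'Spark NLP is an open-source text processing library for advanced natural language processing. '
    'It provides production-grade, scalable, and trainable versions of the latest research in NLP.'
)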

Classifiers trained on many different datasets

Choose the right tool for the right task! Whether you analyze movies or Twitter, NLU has the right model for you (see the usage sketch after this list)!

  • trec6 classifier
  • trec10 classifier
  • spam classifier
  • fake news classifier
  • emotion classifier
  • cyberbullying classifier
  • sarcasm classifier
  • sentiment classifier for movies
  • IMDB Movie Sentiment classifier
  • Twitter sentiment classifier
  • NER pretrained on OntoNotes
  • NER trainable on CoNLL
  • Language classifier for 20 languages, trained on the Wiki 20 Lang dataset
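
Each of these classifiers is loaded by its namespace reference. The snippet below is a hedged sketch using the 'emotion' reference; others such as 'spam', 'sarcasm' or 'cyberbullying' follow the same pattern.

import nlu
# Load the pretrained emotion classifier and predict on a list of strings
nlu.load('emotion').predict([
    'I love this movie, it made my day!',
    'This is the worst airline I have ever flown with.'
])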

Utilities for the Data Science NLU applications

Working with text data can sometimes be quite a dirty job. NLU helps you keep your hands clean by providing lots of components that take data-engineering-intensive tasks off your plate (a usage sketch follows the list below).

  • Datetime Matcher
  • Pattern Matcher
  • Chunk Matcher
  • Phrase Matcher
  • Stopword Cleaners
  • Pattern Cleaners
  • Slang Cleaner
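
Several of these cleaners can be chained in a single load call, as sketched below. The 'stopwords' and 'lemma' references are taken from the NLU namespace; the exact output column names may differ between releases.

import nlu
# Remove stopwords and lemmatize in one pipeline; space-separated references
# are resolved against the NLU namespace and composed into a single pipeline
nlu.load('stopwords lemma').predict('NLU is taking care of the dirty data engineering jobs for us!')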

Where can I see all models available in NLU?

For all models that NLU can load, see the NLU Namespace, the John Snow Labs Models Hub, or go straight to the source.

Supported Data Types

  • Pandas DataFrame and Series
  • Spark DataFrames
  • Modin with Ray backend
  • Modin with Dask backend
  • Numpy arrays
  • Strings and lists of strings
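
For example, a Pandas DataFrame can be handed to predict() directly. The sketch below assumes the text to annotate lives in a column named 'text'; consult the General Concepts docs for how NLU picks up columns in your version.

import nlu
import pandas as pd

# Predict on a Pandas DataFrame; the text is assumed to live in a 'text' column
df = pd.DataFrame({'text': ['NLU is simple.', 'Spark NLP is scalable.']})
nlu.load('sentiment').predict(df)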

NLU Demos on Datasets

NLU component examples

Check out the following notebooks for examples of how to work with NLU.

NLU Training Examples

Binary Class Text Classification training

Multi Class Text Classification training

Multi Label Text Classification training

Named Entity Recognition training (NER)

Part of Speech tagger training (POS)
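
All of these training workflows follow the same 1-line pattern via the train.<Model> references. The snippet below is a hedged sketch of binary text classification training: it uses the 'train.sentiment' reference and assumes the training DataFrame holds inputs in a 'text' column and labels in a 'y' column, as described in the NLU training docs.

import nlu
import pandas as pd

# Minimal binary text classification training sketch; 'text' holds the inputs
# and 'y' holds the string labels, following the NLU training convention
train_df = pd.DataFrame({
    'text': ['I love this product', 'This was a terrible experience'],
    'y':    ['positive', 'negative'],
})
trained_pipe = nlu.load('train.sentiment').fit(train_df)
trained_pipe.predict('The support team was amazing!')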

NLU Applications Examples

NLU Demos on Datasets

NLU examples grouped by component

The following are Google Colab examples which showcase each NLU component and some applications.

Named Entity Recognition (NER)

Part of speech (POS)

Sequence2Sequence

Classifiers

Word Embeddings

Sentence Embeddings

Dependency Parsing

Text Pre-Processing and Cleaning

Chunkers

Matchers

Need help?

Simple NLU Demos

Features in NLU Overview

  • Tokenization
  • Trainable Word Segmentation
  • Stop Words Removal
  • Token Normalizer
  • Document Normalizer
  • Stemmer
  • Lemmatizer
  • NGrams
  • Regex Matching
  • Text Matching
  • Chunking
  • Date Matcher
  • Sentence Detector
  • Deep Sentence Detector (Deep learning)
  • Dependency parsing (Labeled/unlabeled)
  • Part-of-speech tagging
  • Sentiment Detection (ML models)
  • Spell Checker (ML and DL models)
  • Word Embeddings (GloVe and Word2Vec)
  • BERT Embeddings (TF Hub models)
  • ELMO Embeddings (TF Hub models)
  • ALBERT Embeddings (TF Hub models)
  • XLNet Embeddings
  • Universal Sentence Encoder (TF Hub models)
  • BERT Sentence Embeddings (42 TF Hub models)
  • Sentence Embeddings
  • Chunk Embeddings
  • Unsupervised keyword extraction
  • Language Detection & Identification (up to 375 languages)
  • Multi-class Sentiment analysis (Deep learning)
  • Multi-label Sentiment analysis (Deep learning)
  • Multi-class Text Classification (Deep learning)
  • Neural Machine Translation
  • Text-To-Text Transfer Transformer (Google T5)
  • Named entity recognition (Deep learning)
  • Easy TensorFlow integration
  • GPU Support
  • Full integration with Spark ML functions
  • 1000+ pre-trained models in 200+ languages!
  • Multi-lingual NER models: Arabic, Chinese, Danish, Dutch, English, Finnish, French, German, Hebrew, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Urdu and more
  • Natural Language inference
  • Coreference resolution
  • Sentence Completion
  • Word sense disambiguation
  • Clinical entity recognition
  • Clinical Entity Linking
  • Entity normalization
  • Assertion Status Detection
  • De-identification
  • Relation Extraction
  • Clinical Entity Resolution
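
Many of the features above are likewise available as 1-line loads. As a final hedged example, language identification can be loaded via the 'lang' reference; the exact output columns depend on the model version.

import nlu
# Detect the language of each input string; 'lang' resolves to the
# pretrained language identification model in the NLU namespace
nlu.load('lang').predict([
    'NLU is an open-source text processing library.',
    'NLU ist eine Open-Source-Bibliothek für Textverarbeitung.'
])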

Citation

We have published a paper that you can cite for the NLU library:

@article{KOCAMAN2021100058,
    title = {Spark NLP: Natural language understanding at scale},
    journal = {Software Impacts},
    pages = {100058},
    year = {2021},
    issn = {2665-9638},
    doi = {https://doi.org/10.1016/j.simpa.2021.100058},
    url = {https://www.sciencedirect.com/science/article/pii/S2665963821000063},
    author = {Veysel Kocaman and David Talby},
    keywords = {Spark, Natural language processing, Deep learning, Tensorflow, Cluster},
    abstract = {Spark NLP is a Natural Language Processing (NLP) library built on top of Apache Spark ML. It provides simple, performant & accurate NLP annotations for machine learning pipelines that can scale easily in a distributed environment. Spark NLP comes with 1100+ pretrained pipelines and models in more than 192+ languages. It supports nearly all the NLP tasks and modules that can be used seamlessly in a cluster. Downloaded more than 2.7 million times and experiencing 9x growth since January 2020, Spark NLP is used by 54% of healthcare organizations as the world’s most widely used NLP library in the enterprise.}
}