Various ways to summarise text using libraries available for Python:
- pyteaser
- sumy
- gensim
- pytldr
- XLNET
- BERT
- GPT2
Install the required packages:
pip install sumy
pip install gensim
pip install pyteaser
pip install pytldr
pip install bert-extractive-summarizer
pip install spacy==2.0.12
pip install transformers==2.2.0
Pyteaser has two functions:
- Summarize: takes a title and the article text and returns a summary
- SummarizeUrl: takes a URL and summarizes the content of the page
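A minimal sketch of how these can be called (Python 2 only, since pyteaser does not support Python 3; article_title, article_text and the URL below are placeholders):

```python
from pyteaser import Summarize, SummarizeUrl

# Summarize() takes the title and the body of an article and
# returns a list of the most relevant sentences.
summary = Summarize(article_title, article_text)

# SummarizeUrl() downloads the page at the given URL and summarizes it.
summary_from_url = SummarizeUrl("http://example.com/some-article")

print(summary)
print(summary_from_url)
```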
Sumy provides various preprocessing utilities and summarizers:
- sumytoken: sumy's Tokenizer, used to tokenize the text
- get_stop_words: removes stop words from the text
- stemmer: stems the words
- LexRankSummarizer: summarizes based on lexical ranking (LexRank)
- LsaSummarizer: summarizes based on latent semantic analysis
- LuhnSummarizer: summarizes based on Luhn's algorithm
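A minimal sketch using LexRankSummarizer (`text` is a placeholder for the document to summarize; the other summarizers are drop-in replacements):

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.nlp.stemmers import Stemmer
from sumy.utils import get_stop_words
from sumy.summarizers.lex_rank import LexRankSummarizer

# Parse the raw text and build an English summarizer with stemming and stop words.
parser = PlaintextParser.from_string(text, Tokenizer("english"))
summarizer = LexRankSummarizer(Stemmer("english"))
summarizer.stop_words = get_stop_words("english")

# Print a 3-sentence summary.
for sentence in summarizer(parser.document, 3):
    print(sentence)
```

LsaSummarizer and LuhnSummarizer live in sumy.summarizers.lsa and sumy.summarizers.luhn and are used the same way.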
gensim provides a summarize function which can be imported and used directly.
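For example (this relies on gensim.summarization, which exists in gensim 3.x but was removed in gensim 4.0; `text` is a placeholder):

```python
from gensim.summarization import summarize

# ratio controls what fraction of the original sentences to keep;
# alternatively pass word_count=100 to cap the summary length.
print(summarize(text, ratio=0.2))
```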
pytldr, like sumy, ships its own NLP utilities such as a tokenizer.
Here we have used TextRankSummarizer, RelevanceSummarizer and LsaSummarizer from pytldr.
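A rough sketch of the pytldr usage (Python 2 only; the module paths and the `length` argument follow pytldr's documented API, and `text` is a placeholder):

```python
from pytldr.summarize.textrank import TextRankSummarizer
from pytldr.summarize.relevance import RelevanceSummarizer
from pytldr.summarize.lsa import LsaSummarizer

# Each summarizer exposes a summarize() method; length is the number
# of sentences to return.
textrank_summary = TextRankSummarizer().summarize(text, length=3)
relevance_summary = RelevanceSummarizer().summarize(text, length=3)
lsa_summary = LsaSummarizer().summarize(text, length=3)
```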
XLNet is an auto-regressive language model which outputs the joint probability of a sequence of tokens based on the transformer architecture with recurrence.
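One way to use XLNet for extractive summarization is through the bert-extractive-summarizer package installed above; a sketch assuming its TransformerSummarizer wrapper and the xlnet-base-cased checkpoint (`text` is a placeholder):

```python
from summarizer import TransformerSummarizer

# TransformerSummarizer wraps a Hugging Face model, embeds each sentence
# and picks the most representative ones as the summary.
model = TransformerSummarizer(transformer_type="XLNet",
                              transformer_model_key="xlnet-base-cased")
print(model(text, min_length=60))
```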
Extractive text summarization refers to pulling the most relevant sentences out of a large document while retaining its most important information. BERT (Bidirectional Encoder Representations from Transformers) introduces a rather advanced approach to performing NLP tasks.
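A minimal sketch using the Summarizer class from bert-extractive-summarizer (the default pretrained BERT model is downloaded on first use; `text` is a placeholder):

```python
from summarizer import Summarizer

# Summarizer() loads a pretrained BERT model, embeds each sentence and
# clusters the embeddings to select an extractive summary.
model = Summarizer()
print(model(text, min_length=60))
```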
GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained to predict the next word. We can use this ability to summarize text.
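GPT-2 can be plugged into the same extractive pipeline; a sketch assuming bert-extractive-summarizer's TransformerSummarizer and the gpt2-medium checkpoint (`text` is a placeholder):

```python
from summarizer import TransformerSummarizer

# Same wrapper as for XLNet, but backed by a GPT-2 model.
model = TransformerSummarizer(transformer_type="GPT2",
                              transformer_model_key="gpt2-medium")
print(model(text, min_length=60))
```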
If you are using Python 3, run main.py from the "for_python3" folder; otherwise test by running "summarize.py" or the notebook named "Text Summarizer Notebook.ipynb".
PS: pytldr and pyteaser do not work with Python 3.