Here is an example of some typical output you can expect to see.
Before you get started, you need to install the Python blessings library, which is used to colorize the terminal output.
$ sudo pip install blessings
$ python wordsworth.py --filename textfile.txt --top 50
$ python wordsworth.py -f textfile.txt -t 50
$ python wordsworth.py --filename textfile.txt --ntuple 10
$ python wordsworth.py -f textfile.txt -n 10
$ python wordsworth.py --filename textfile.txt --ignore the,a,--
$ python wordsworth.py -f textfile.txt -i the,a,--
$ python wordsworth.py --filename textfile.txt --ignore ,--
$ python wordsworth.py -f textfile.txt -i ,--
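The commands above all perform the same basic word-frequency analysis; the long and short option forms are equivalent. As a rough illustration of the kind of counting involved, here is a minimal Python sketch with an ignore list and a top-N cutoff. It is not wordsworth.py's actual code; the filename and ignore words are placeholders taken from the examples above.

from collections import Counter

# Minimal sketch of word-frequency counting; not the real wordsworth.py code.
IGNORE = {'the', 'a', '--'}                  # analogous to --ignore the,a,--

with open('textfile.txt') as f:              # placeholder filename
    words = f.read().lower().split()

counts = Counter(w for w in words if w not in IGNORE)
for word, count in counts.most_common(50):   # analogous to --top 50
    print(word, count)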
wordsworth-nltk.py provides extended analysis, including a frequency analysis of verbs, nouns, adjectives, pronouns, etc. To run this script you will need to install the Python Natural Language Toolkit (NLTK) and the Brown and Punkt datasets, which are used for tokenizing and tagging the text. Fortunately, these are very simple to install; a sketch of what such an analysis looks like follows the installation steps below.
Step 1. Install NLTK
$ sudo pip install nltk
Step 2. Launch the Python interpreter
$ python
Step 3. Download the Brown and Punkt datasets
>>> import nltk
>>> nltk.download('brown')
>>> nltk.download('punkt')
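With NLTK and the two datasets in place, a part-of-speech frequency analysis can be put together along the following lines. This is only a sketch of the general technique, not the code in wordsworth-nltk.py; the unigram tagger trained on the Brown corpus and the filename are assumptions made for illustration.

from collections import Counter
import nltk
from nltk.corpus import brown

# Train a simple unigram tagger on the Brown corpus (an assumed choice,
# not necessarily what wordsworth-nltk.py does).
tagger = nltk.UnigramTagger(brown.tagged_sents(categories='news'))

with open('textfile.txt') as f:               # placeholder filename
    tokens = nltk.word_tokenize(f.read())     # Punkt-based tokenization

# Count Brown tagset labels, e.g. NN (noun), VB (verb), JJ (adjective).
tag_counts = Counter(tag for _, tag in tagger.tag(tokens) if tag is not None)
print(tag_counts.most_common(10))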