In this lab, we'll learn how to use various NLP techniques to generate descriptive statistics to explore a text corpus!
You will be able to:
- Generate common corpus statistics using NLTK
- Use a count vectorization strategy to create a bag of words
- Compare two different text corpora using corpus statistics generated by NLTK
In this lab, we'll load two different text corpora from NLTK's library of various texts, and then explore and compare each corpus using some basic statistical measures and techniques common in NLP. Let's get started!
In the cell below:
- Import `nltk`
- Import `gutenberg` and `stopwords` from `nltk.corpus`
- Import everything (`*`) from `nltk.collocations`
- Import `FreqDist` and `word_tokenize` from `nltk`
- Import the `string` and `re` libraries
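If you get stuck, the imports described above might look something like the following sketch. Depending on your environment, you may also need to download the relevant NLTK data packages first.

```python
import nltk
from nltk.corpus import gutenberg, stopwords
from nltk.collocations import *
from nltk import FreqDist, word_tokenize
import string
import re

# If the corpora/stopwords aren't available locally yet, uncomment these:
# nltk.download('gutenberg')
# nltk.download('stopwords')
# nltk.download('punkt')
```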
Now, let's take a look at the corpora available to us. There are many, many corpora available inside of nltk's `corpus` module. For this lab, we'll make use of the texts contained in `corpus.gutenberg` -- 18 different (complete) corpora that can be found on the Project Gutenberg website.
To see the file ids for each of the corpora inside of `gutenberg`, we can call the `.fileids()` method. Do this now in the cell below.
file_ids = None
file_ids
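For reference, one possible way to fill in this cell (assuming the imports above ran successfully):

```python
# List all of the Project Gutenberg file ids bundled with NLTK
file_ids = gutenberg.fileids()
file_ids
```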
Great! For the first part of this lab, we'll be working with Shakespeare's Macbeth, a tragedy about a pair of ambitious social climbers.
To load the actual corpus, we need to pass the file id for Macbeth into `gutenberg.raw()`.
Do this now in the cell below. Then, print the first 1000 characters of the text to ensure it loaded correctly, and get a feel for what our text data looks like.
macbeth_text = None
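One possible way to complete this cell, assuming the Macbeth file id you found in the list above is `'shakespeare-macbeth.txt'`:

```python
# Load the full raw text of Macbeth as a single string
macbeth_text = gutenberg.raw('shakespeare-macbeth.txt')

# Print the first 1000 characters to get a feel for the data
print(macbeth_text[:1000])
```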
Question: Look at the text snippet above. What do you notice about it? Are there any issues you see that we'll need to deal with during the preprocessing steps?
Write your answer below this line:
Yes, there are. Some of the words are hyphenated. If we just use basic tokenization, then it will split hyphenated words into individual tokens. There are also numbers that act as metadata about which witch is speaking -- we'll need to remove these.
Looking at the text output above shows us a few things that we'll need to deal with during the preprocessing and tokenization steps -- specifically:
- Capitalization -- we'll need to lowercase all words.
- Apostrophes -- we'll need to write some basic regex in order to capture words that contain apostrophes as a single token. In the interest of time, a pattern has been provided for you. Use the following pattern: `"([a-zA-Z]+(?:'[a-z]+)?)"`
- Numbers -- we'll want to remove these, as they generally appear as stage direction to tell us which witch is speaking.
In the cell below:
- Store the pattern shown above in the appropriate variable
- Use `nltk.regexp_tokenize()` and pass in our text and the `pattern`
pattern = None
macbeth_tokens_raw = None
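Here is a minimal sketch of how this cell might look, using the pattern provided above:

```python
# Pattern that keeps words with an internal apostrophe (e.g. "there's") as single tokens
pattern = "([a-zA-Z]+(?:'[a-z]+)?)"

# Tokenize the raw text using the regex pattern
macbeth_tokens_raw = nltk.regexp_tokenize(macbeth_text, pattern)
macbeth_tokens_raw[:10]
```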
Great! Now that we have our tokens, we need to lowercase them. In the cell below, use a list comprehension and the `.lower()` method on every word token in `macbeth_tokens_raw`. Store this inside `macbeth_tokens`.
macbeth_tokens = None
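One way to complete this cell, assuming the previous cells have been run:

```python
# Lowercase every token so that 'The' and 'the' are counted together
macbeth_tokens = [word.lower() for word in macbeth_tokens_raw]
macbeth_tokens[:10]
```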
Now that we've done some basic cleaning and tokenization, let's go ahead and create a Frequency Distribution to see the number of times each word is used in this play. This frequency distribution is an example of a Bag of Words, which you've worked with in previous labs.
In the cell below:
- Use `FreqDist()` and pass in `macbeth_tokens` as the input
- Display the frequency distribution to see what it looks like
macbeth_freqdist = None
macbeth_freqdist.most_common(50)
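If you need a nudge, the cell above could be completed like this:

```python
# Build a frequency distribution (a bag of words) from the lowercased tokens
macbeth_freqdist = FreqDist(macbeth_tokens)
macbeth_freqdist.most_common(50)
```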
Well, that doesn't tell us very much! The top 10 most used words in Macbeth are all stop words. They don't contain any interesting information, and essentially just act as the "connective tissue" between the words that really matter in any text. Let's try removing the stopwords and punctuation, and then creating another frequency distribution that contains only the important words.
We've already imported the `stopwords` module. We can access all of the stopwords using the `stopwords.words()` method -- however, we don't want to use the whole thing, as this contains all stopwords in every language supported by NLTK. We don't need to check for and remove any Finnish or Japanese stop words, as this text is in English. To avoid unnecessarily long runtimes, we'll just use the English subset of stopwords by passing the parameter `'english'` into `stopwords.words()`.
In the cell below:
- Get all the `'english'` stopwords from `stopwords.words()` and store them in the appropriate variable below. They will be stored as a list by default
- We'll also want to remove all punctuation. Create a list version of `string.punctuation` and add it to our stopwords list
- Finally, we'll also remove numbers. Create a list that contains numbers 0-9 (as strings!), and add this to the stopwords list as well
- Use another list comprehension to get words out of `macbeth_tokens`, as long as they are not in `stopwords_list`
stopwords_list = None
stopwords_list += None
stopwords_list += None
macbeth_words_stopped = None
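A possible solution sketch for the cell above (the exact contents of the stopword list may differ slightly depending on your NLTK version):

```python
# Start from NLTK's English stopword list
stopwords_list = stopwords.words('english')

# Add punctuation characters and the digits 0-9 (as strings)
stopwords_list += list(string.punctuation)
stopwords_list += [str(i) for i in range(10)]

# Keep only tokens that are not stopwords, punctuation, or digits
macbeth_words_stopped = [word for word in macbeth_tokens if word not in stopwords_list]
```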
Great! Now, let's create another frequency distribution using `macbeth_words_stopped`, and then inspect the top 50 most common words to see if removing stopwords and punctuation has helped.
Do this now in the cell below.
macbeth_stopped_freqdist = None
macbeth_stopped_freqdist.most_common(50)
This is definitely an improvement! You may be wondering why `'Macb'` shows up as the number 1 most used token. If you inspect Macbeth on Project Gutenberg and search for `'Macb'`, you'll soon discover that the source text uses `Macb` as stage direction for any line spoken by Macbeth's character. This means that `'Macb'` is actually stage direction, so under normal circumstances we would need to ask ourselves whether it is worth removing or keeping it. In the interest of time for this lab, we'll leave it be.
Now that we have a frequency distribution, we can easily answer some basic questions about the text. Let's answer some basic questions about Macbeth below, before we move onto creating bigrams.
What is the size of the total vocabulary used in Macbeth, once all stopwords have been removed?
Compute this in the cell below.
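One way to compute this, assuming the frequency distribution above has been created, is to count the unique keys in the distribution (the variable name below is just illustrative):

```python
# Total vocabulary size after stopword removal: the number of unique tokens
vocabulary_size = len(macbeth_stopped_freqdist)
vocabulary_size
```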
Knowing the frequency with which each word is used is somewhat informative, but without the context of how many words are used in total, it doesn't tell us much. One way we can adjust for this is to use Normalized Word Frequency, which we can compute by dividing each word frequency by the total number of words.
Compute this now in the cell below, and display the normalized word frequency for the top 50 words.
total_word_count = None
macbeth_top_50 = None
print('Word\t\t\tNormalized Frequency')
for word in macbeth_top_50:
    normalized_frequency = None
    print('{} \t\t\t {:.4}'.format(None, None))
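Here's a hedged sketch of how the cell above could be filled in:

```python
# Total number of tokens after stopword removal
total_word_count = sum(macbeth_stopped_freqdist.values())

# The 50 most common (word, count) pairs
macbeth_top_50 = macbeth_stopped_freqdist.most_common(50)

print('Word\t\t\tNormalized Frequency')
for word in macbeth_top_50:
    # word is a (token, count) tuple; divide the count by the total token count
    normalized_frequency = word[1] / total_word_count
    print('{} \t\t\t {:.4}'.format(word[0], normalized_frequency))
```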
Knowing individual word frequencies is somewhat informative, but in practice, some of these tokens are actually parts of larger phrases that should be treated as a single unit. Let's create some bigrams, and see which combinations of words are most telling.
In the cell below:
- We'll begin by aliasing a particularly long method name to make it easier to call. Store `nltk.collocations.BigramAssocMeasures()` inside of the variable `bigram_measures`
- Next, we'll need to create a finder. Pass `macbeth_words_stopped` into `BigramCollocationFinder.from_words()` and assign the result to `macbeth_finder`
- Once we have a finder, we can use it to compute bigram scores, so we can see the combinations that occur most frequently. Call the `macbeth_finder` object's `score_ngrams()` method and pass in `bigram_measures.raw_freq` as the input
- Display the first 50 elements in the `macbeth_scored` list to see the 50 most common bigrams in Macbeth
bigram_measures = None
macbeth_finder = None
macbeth_scored = None
# Display the first 50 elements of macbeth_scored
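A possible sketch of the bigram cell, assuming the earlier cells have been run:

```python
# Alias the association-measures class for convenience
bigram_measures = nltk.collocations.BigramAssocMeasures()

# Build a collocation finder over the stopped tokens
macbeth_finder = BigramCollocationFinder.from_words(macbeth_words_stopped)

# Score every bigram by its raw frequency in the corpus
macbeth_scored = macbeth_finder.score_ngrams(bigram_measures.raw_freq)

# Display the first 50 elements of macbeth_scored
macbeth_scored[:50]
```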
These look a bit more interesting. We can see here that some of the most common ones are stage directions, such as 'Enter Macbeth' and 'Exeunt Scena', while others seem to be common phrases used in the play.
To wrap up our initial examination of Macbeth, let's end by calculating Mutual Information Scores.
To calculate mutual information scores, we'll need to first create a frequency filter, so that we only examine bigrams that occur more than a set number of times -- for our purposes, we'll set this limit to 5.
In NLTK, mutual information is often referred to as `pmi`, for Pointwise Mutual Information. Calculating PMI scores works much the same way that we created bigrams, with a few notable differences.
In the cell below:
- We'll start by creating another finder for PMI. Pass `macbeth_words_stopped` as the input to `BigramCollocationFinder.from_words()`. Store this in the variable `macbeth_pmi_finder`
- Once we have our finder, we'll need to apply our frequency filter. Call `macbeth_pmi_finder`'s `apply_freq_filter()` and pass in the number `5` as the input
- Now, we can use the finder to calculate PMI scores. Use the PMI finder's `.score_ngrams()` method, and pass in `bigram_measures.pmi` as the argument. Store this in `macbeth_pmi_scored`
- Examine the first 50 elements in `macbeth_pmi_scored`
macbeth_pmi_finder = None
macbeth_pmi_scored = None
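And a sketch of one way the PMI cell could look:

```python
# New finder over the same stopped tokens
macbeth_pmi_finder = BigramCollocationFinder.from_words(macbeth_words_stopped)

# Only keep bigrams that appear at least 5 times
macbeth_pmi_finder.apply_freq_filter(5)

# Score the remaining bigrams by Pointwise Mutual Information
macbeth_pmi_scored = macbeth_pmi_finder.score_ngrams(bigram_measures.pmi)

# Examine the first 50 elements
macbeth_pmi_scored[:50]
```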
Now that we've worked through generating some baseline corpus statistics for one corpus, it's up to you to select a second corpus and generate your own corpus statistics, and then compare and contrast the two. For simplicity's sake, we recommend you stick to a corpus from `nltk.corpus.gutenberg` -- although comparing the diction found in a classic work of fiction to something like a presidential State of the Union address could be interesting, it's not really an apples-to-apples comparison, and those corpora could also require additional preprocessing steps that are outside the scope of this lab.
In the cells below:
- Select another corpus from `gutenberg.fileids()`
- Clean, preprocess, tokenize, and generate corpus statistics for this new corpus
- Perform a comparative analysis using the Macbeth statistics we generated above and your new corpus statistics. How are they similar? How are they different? Was there anything interesting or surprising that you found in your comparison? Create at least one meaningful visualization comparing the two corpora (see the sketch after this list for one possible starting point)
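To get you started, here is a hedged sketch of one possible comparison. It assumes you pick `'austen-emma.txt'` as your second corpus (any other `gutenberg` file id works the same way), reuse the same preprocessing pipeline and `stopwords_list` from above, and choose a side-by-side bar chart of normalized frequencies with `matplotlib` as the visualization -- all of these are just illustrative choices, not the required solution.

```python
import matplotlib.pyplot as plt

# Load and preprocess the second corpus with the same pipeline used for Macbeth
emma_text = gutenberg.raw('austen-emma.txt')
emma_tokens = [w.lower() for w in nltk.regexp_tokenize(emma_text, pattern)]
emma_words_stopped = [w for w in emma_tokens if w not in stopwords_list]
emma_stopped_freqdist = FreqDist(emma_words_stopped)

# Helper: top-n words with normalized frequencies for a given frequency distribution
def top_normalized(freqdist, n=25):
    total = sum(freqdist.values())
    return [(word, count / total) for word, count in freqdist.most_common(n)]

macbeth_top = top_normalized(macbeth_stopped_freqdist)
emma_top = top_normalized(emma_stopped_freqdist)

# Side-by-side bar charts of normalized word frequencies for the two corpora
fig, axes = plt.subplots(1, 2, figsize=(14, 5), sharey=True)
for ax, top, title in zip(axes, [macbeth_top, emma_top], ['Macbeth', 'Emma']):
    words, freqs = zip(*top)
    ax.bar(words, freqs)
    ax.set_title('Top 25 Words in {}'.format(title))
    ax.tick_params(axis='x', labelrotation=90)
axes[0].set_ylabel('Normalized Frequency')
plt.tight_layout()
plt.show()
```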
In this lab, we used our newfound NLP skills to generate some statistics specific to text data, and used them to compare two different works!