This is the first course of the Natural Language Processing Specialization.
Use a simple method to classify positive or negative sentiment in tweets
Use a more advanced model for sentiment analysis
Use vector space models to discover relationships between words and use principal component analysis (PCA) to reduce the dimensionality of the vector space and visualize those relationships
Write a simple English-to-French translation algorithm using pre-computed word embeddings and locality sensitive hashing to relate words via approximate k-nearest neighbors search
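The PCA step mentioned above can be sketched in plain NumPy: center the vectors, take the eigenvectors of the covariance matrix, and project onto the top components. The toy word vectors here are random stand-ins, not the course's actual embeddings.

```python
import numpy as np

def pca(X, n_components=2):
    # Center the data so the covariance is computed about the mean
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]           # descending variance
    components = eigvecs[:, order[:n_components]]
    # Project the centered data onto the top principal components
    return Xc @ components

# Toy "word vectors": 10 words in a 50-dimensional space
rng = np.random.default_rng(0)
vectors = rng.normal(size=(10, 50))
reduced = pca(vectors, 2)
print(reduced.shape)  # (10, 2) -- ready for a 2-D scatter plot
```

The 2-D output is what gets scatter-plotted to visualize relationships between words.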
This is the second course of the Natural Language Processing Specialization.
Create a simple auto-correct algorithm using minimum edit distance and dynamic programming
Apply the Viterbi algorithm for part-of-speech (POS) tagging, an important task in computational linguistics
Write a better auto-complete algorithm using an N-gram model (similar models are used for translation, determining the author of a text, and speech recognition)
Write your own Word2Vec model that uses a neural network to compute word embeddings using a continuous bag-of-words model
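The minimum edit distance behind the auto-correct objective is a classic dynamic-programming table: cell (i, j) holds the cheapest way to turn the first i characters of one string into the first j of the other. A minimal sketch with unit costs (the course may weight insertions, deletions, and substitutions differently):

```python
def min_edit_distance(a, b):
    # d[i][j] = min edits to turn a[:i] into b[:j]
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i              # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j              # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[m][n]

print(min_edit_distance("kitten", "sitting"))  # 3
```

An auto-correct system ranks candidate corrections by this distance (plus a word-probability model) and suggests the cheapest plausible edit.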
This is the third course in the Natural Language Processing Specialization.
Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets
Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model
Train a recurrent neural network to perform NER using LSTMs with linear layers
Use so-called ‘Siamese’ LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning
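In a Siamese setup, two weight-sharing LSTM encoders each map a question to a vector, and a similarity score on the two vectors decides whether the questions mean the same thing. A minimal sketch of just the scoring step, with made-up encoder outputs and an assumed decision threshold:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between the two encoded question vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pretend these came out of the shared LSTM encoder
q1 = np.array([0.2, 0.9, 0.4])
q2 = np.array([0.25, 0.85, 0.5])   # paraphrase: points in a similar direction
q3 = np.array([-0.7, 0.1, -0.6])   # unrelated question

threshold = 0.8  # hypothetical duplicate-detection threshold
print(cosine_similarity(q1, q2) > threshold)  # True: flagged as duplicates
print(cosine_similarity(q1, q3) > threshold)  # False
```

The training objective (e.g., a triplet or contrastive loss) pushes paraphrase pairs above the threshold and non-paraphrases below it; only the scoring step is shown here.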
This is the fourth course in the Natural Language Processing Specialization.
Translate complete English sentences into French using an encoder/decoder attention model
Build a transformer model to summarize text
Use T5 and BERT models to perform question answering
Build a chatbot using a reformer model
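The attention models listed above (encoder/decoder translator, transformer summarizer, reformer chatbot) all build on scaled dot-product attention: softmax(QKᵀ/√d_k)V. A minimal NumPy sketch with toy shapes; the dimensions and random inputs are illustrative assumptions:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 16))   # 4 query positions, d_k = 16
K = rng.normal(size=(6, 16))   # 6 key positions
V = rng.normal(size=(6, 8))    # values with d_v = 8

out, weights = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one attended value vector per query
```

Each output row is a weighted average of the value vectors, with the weights for every query summing to 1. The reformer's contribution is making this step efficient for long sequences via LSH-bucketed attention, which is not shown here.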