Create a tokenize function that splits a sentence into words.
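A minimal sketch of such a tokenize function; the exact cleaning rules (lowercasing, stripping punctuation) are assumptions, not part of the task statement:

```python
import re

def tokenize(sentence):
    """Split a sentence into lowercase word tokens."""
    sentence = sentence.lower()
    # Drop punctuation, keep word characters and whitespace, then split.
    sentence = re.sub(r"[^\w\s]", "", sentence)
    return sentence.split()

tokenize("The cat sat on the mat.")
# ['the', 'cat', 'sat', 'on', 'the', 'mat']
```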
Implement an "ngram" function that groups a list of tokens into N-grams.
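One possible sketch, returning each N-gram as a tuple of consecutive tokens (the output type is an assumption):

```python
def ngram(tokens, n):
    """Return all N-grams (tuples of n consecutive tokens) from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

ngram(["the", "cat", "sat", "on", "the", "mat"], 2)
# [('the', 'cat'), ('cat', 'sat'), ('sat', 'on'), ('on', 'the'), ('the', 'mat')]
```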
Implement a one_hot_encode function that creates a numerical (one-hot) vector for each token in a list of tokens.
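A minimal sketch, assuming the vocabulary is built from the input token list itself and each token maps to one binary vector:

```python
def one_hot_encode(tokens):
    """Map each token to a one-hot vector over the vocabulary of `tokens`."""
    vocab = sorted(set(tokens))
    index = {token: i for i, token in enumerate(vocab)}
    vectors = []
    for token in tokens:
        vec = [0] * len(vocab)   # one position per vocabulary word
        vec[index[token]] = 1    # mark the position of this token
        vectors.append(vec)
    return vectors

one_hot_encode(["the", "cat", "sat", "the"])
# [[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```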
Train a sequential model with embeddings on the IMDB data found in data/external/imdb/. Produce the model performance metrics and the training and validation accuracy curves within the Jupyter notebook.
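A hedged sketch of such a model, assuming the directory follows the standard aclImdb layout (data/external/imdb/train/pos, .../neg), that TensorFlow/Keras and matplotlib are available, and that the vocabulary size, sequence length, and hyperparameters below are illustrative choices rather than prescribed values:

```python
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

max_tokens, seq_len, batch_size = 10000, 200, 32

# Load the raw text reviews; the validation set is split off the training data.
train_ds = keras.utils.text_dataset_from_directory(
    "data/external/imdb/train", validation_split=0.2, subset="training",
    seed=42, batch_size=batch_size)
val_ds = keras.utils.text_dataset_from_directory(
    "data/external/imdb/train", validation_split=0.2, subset="validation",
    seed=42, batch_size=batch_size)

# Turn each review into a fixed-length sequence of integer word indices.
vectorize = layers.TextVectorization(
    max_tokens=max_tokens, output_sequence_length=seq_len)
vectorize.adapt(train_ds.map(lambda x, y: x))
int_train_ds = train_ds.map(lambda x, y: (vectorize(x), y))
int_val_ds = val_ds.map(lambda x, y: (vectorize(x), y))

# Sequential classifier: embedding layer + average pooling + dense head.
model = keras.Sequential([
    layers.Embedding(max_tokens, 32),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(int_train_ds, validation_data=int_val_ds, epochs=10)

# Training vs. validation accuracy curves from the History object.
plt.plot(history.history["accuracy"], label="train")
plt.plot(history.history["val_accuracy"], label="validation")
plt.xlabel("epoch"); plt.ylabel("accuracy"); plt.legend(); plt.show()
```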
Fit the same data with an LSTM layer. Produce the model performance metrics and the training and validation accuracy curves within the Jupyter notebook.
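A sketch under the same assumptions as above, reusing `max_tokens`, `int_train_ds`, and `int_val_ds` from the previous snippet and swapping the pooling layers for an LSTM:

```python
from tensorflow import keras
from tensorflow.keras import layers

lstm_model = keras.Sequential([
    layers.Embedding(max_tokens, 32),
    layers.LSTM(32),                        # recurrent layer over the embedded sequence
    layers.Dense(1, activation="sigmoid"),
])
lstm_model.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
lstm_history = lstm_model.fit(int_train_ds, validation_data=int_val_ds, epochs=10)
# Plot lstm_history.history["accuracy"] / ["val_accuracy"] as before.
```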
Fit the same data with a simple 1D convnet. Produce the model performance metrics and the training and validation accuracy curves within the Jupyter notebook.
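A sketch under the same assumptions, again reusing `max_tokens`, `int_train_ds`, and `int_val_ds`; the kernel sizes and filter counts are illustrative:

```python
from tensorflow import keras
from tensorflow.keras import layers

conv_model = keras.Sequential([
    layers.Embedding(max_tokens, 32),
    layers.Conv1D(32, 7, activation="relu"),   # local n-gram-like feature detectors
    layers.MaxPooling1D(5),
    layers.Conv1D(32, 7, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
conv_model.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
conv_history = conv_model.fit(int_train_ds, validation_data=int_val_ds, epochs=10)
# Plot conv_history.history["accuracy"] / ["val_accuracy"] as before.
```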