Continuous-Representation-Experiment

In this project I train and evaluate three different methods for next-word prediction using LSTMs with continuous-valued inputs and outputs. With what I call sequence-to-token (S2T), an input sequence is encoded as float values (embeddings) and used to predict a final masked token in the sequence.
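
A minimal sketch of the S2T idea is shown below, assuming a PyTorch setup; the model names, dimensions, and nearest-neighbor decoding step here are illustrative assumptions, not the repo's actual implementation. The LSTM consumes the embedded (continuous) input sequence and its final hidden state is projected to a continuous vector, which stands in for the masked token.

```python
import torch
import torch.nn as nn

class S2TModel(nn.Module):
    """Hypothetical S2T sketch: embed a token sequence as continuous vectors,
    run an LSTM over it, and predict a continuous embedding for the masked token."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # tokens -> float vectors
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_embed = nn.Linear(hidden_dim, embed_dim)      # continuous output

    def forward(self, input_ids):
        x = self.embed(input_ids)                 # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                # final hidden state summarizes the context
        return self.to_embed(h_n[-1])             # predicted embedding for the masked token

    def decode(self, pred_embed):
        # Map the predicted continuous vector back to the nearest vocabulary token.
        dists = torch.cdist(pred_embed, self.embed.weight)    # (batch, vocab_size)
        return dists.argmin(dim=-1)

# Usage: predict the token that follows a short context.
model = S2TModel()
context = torch.randint(0, 10000, (1, 8))         # one 8-token input sequence
pred = model(context)                             # continuous prediction
token_id = model.decode(pred)                     # nearest token in embedding space
```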
