This is an example of a seq-to-seq language model in PyTorch, provided as a Jupyter notebook. The dataset consists of 50k movie reviews from IMDB, each labeled as positive or negative (encoded as 0 and 1). The model can be built with either an LSTM or a GRU. In the last part you can feed a new sentence to the model and get the predicted label. The dataset is referenced in the code and has also been uploaded to Drive.
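As a rough illustration of the architecture described above, here is a minimal sketch of an embedding + LSTM/GRU classifier in PyTorch. The class name `SentimentRNN` and the dimensions are placeholders, not the exact layers used in the notebook:

```python
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    """Sketch: embedding -> LSTM or GRU -> linear head with one logit per review."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 rnn_type="lstm", num_layers=1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        rnn_cls = nn.LSTM if rnn_type == "lstm" else nn.GRU  # switch between the two variants
        self.rnn = rnn_cls(embed_dim, hidden_dim, num_layers=num_layers,
                           batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)  # single logit: negative vs. positive

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded review
        embedded = self.embedding(token_ids)
        output, _ = self.rnn(embedded)
        last_hidden = output[:, -1, :]           # hidden state at the final time step
        return self.fc(last_hidden).squeeze(1)   # raw logit per review
```

Training would then use `nn.BCEWithLogitsLoss` on these logits against the 0/1 labels.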
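For the last part (classifying a new sentence), a hedged sketch of the inference step is shown below. The helpers `vocab` and `tokenize` are hypothetical stand-ins for whatever preprocessing the notebook builds; swap in your own vocabulary and tokenizer:

```python
import torch

def predict(model, sentence, vocab, tokenize, device="cpu"):
    """Classify a raw sentence with a trained model (assumed helpers: vocab, tokenize)."""
    model.eval()
    # Map tokens to ids; unknown tokens fall back to an <unk> id (assumed to exist).
    ids = [vocab.get(tok, vocab.get("<unk>", 1)) for tok in tokenize(sentence)]
    batch = torch.tensor(ids, dtype=torch.long, device=device).unsqueeze(0)  # add batch dim
    with torch.no_grad():
        prob = torch.sigmoid(model(batch)).item()  # probability of the positive class
    return ("positive" if prob >= 0.5 else "negative"), prob

# Example usage (names are illustrative):
# label, prob = predict(model, "A beautiful, moving film.", vocab, str.split)
```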