GluonNLP is a toolkit that enables easy text preprocessing, dataset loading, and neural model building to help you speed up your Natural Language Processing (NLP) research.
Make sure you have Python 2.7 or Python 3.6 and a recent version of MXNet.
You can install MXNet and GluonNLP using pip:

pip install --pre --upgrade mxnet
pip install gluonnlp
GluonNLP documentation is available at our website.
For questions and comments, please visit our forum (and its Chinese version). For bug reports, please submit GitHub issues.
GluonNLP has been developed by community members, and everyone is more than welcome to contribute. Together we can make GluonNLP better and more user-friendly for more users.
Read our contributing guide to learn about our development process, how to propose bug fixes and improvements, and how to build and test your changes to GluonNLP.
Join our contributors.
Check out how to use GluonNLP for your own research or projects.
If you are new to Gluon, please check out our 60-minute crash course.
To get started quickly, refer to the runnable notebook examples in Examples.
For advanced examples, check out our Scripts.
For experienced users, check out our API Notes.
Load the WikiText-2 dataset, for example:
>>> import gluonnlp as nlp
>>> train = nlp.data.WikiText2(segment='train')
>>> train[0][0:5]
['=', 'Valkyria', 'Chronicles', 'III', '=']
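The validation and test splits can be loaded the same way; a minimal sketch, assuming the segment names 'val' and 'test' follow the same convention as 'train':

>>> val = nlp.data.WikiText2(segment='val')
>>> test = nlp.data.WikiText2(segment='test')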
Build a vocabulary from the above dataset, for example:
>>> vocab = nlp.Vocab(counter=nlp.data.Counter(train[0]))
>>> vocab
Vocab(size=33280, unk="<unk>", reserved="['<pad>', '<bos>', '<eos>']")
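The vocabulary maps tokens to integer indices and back. A minimal sketch of both lookups, assuming Vocab's indexing and to_tokens APIs:

>>> # Map the first five tokens to indices, then back to tokens (lookup APIs assumed)
>>> indices = vocab[train[0][0:5]]
>>> tokens = vocab.to_tokens(indices)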
From the model package, apply a standard RNN language model to the above dataset:
>>> # Arguments: mode, vocab_size, embed_size, hidden_size, num_layers, dropout, tie_weights
>>> model = nlp.model.language_model.StandardRNN('lstm', len(vocab),
...                                              200, 200, 2, 0.5, True)
>>> model
StandardRNN(
  (embedding): HybridSequential(
    (0): Embedding(33280 -> 200, float32)
    (1): Dropout(p = 0.5, axes=())
  )
  (encoder): LSTM(200 -> 200, TNC, num_layers=2, dropout=0.5)
  (decoder): HybridSequential(
    (0): Dense(200 -> 33280, linear)
  )
)
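Before the model can process token indices, its parameters need to be initialized and an initial recurrent state created. A minimal sketch of one forward pass, assuming the usual Gluon initialize/begin_state workflow; the (sequence_length, batch_size) input layout matches the TNC layout shown above:

>>> import mxnet as mx
>>> model.initialize(mx.init.Xavier())
>>> # Shape the first five token indices as (sequence_length=5, batch_size=1)
>>> inputs = mx.nd.array(vocab[train[0][0:5]]).reshape(5, 1)
>>> hidden = model.begin_state(batch_size=1, func=mx.nd.zeros)
>>> output, hidden = model(inputs, hidden)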
For example, load a pre-trained GloVe word embedding, one of the state-of-the-art English word embeddings:
>>> glove = nlp.embedding.create('glove', source='glove.6B.50d')
>>> # Obtain vectors for 'baby' in the GloVe word embedding
>>> type(glove['baby'])
<class 'mxnet.ndarray.ndarray.NDArray'>
>>> glove['baby'].shape
(50,)
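The loaded vectors can also be attached to the vocabulary built earlier, so that every in-vocabulary token gets an embedding; a minimal sketch, assuming Vocab's set_embedding API:

>>> # Attach GloVe vectors to the vocabulary (set_embedding API assumed)
>>> vocab.set_embedding(glove)
>>> vec = vocab.embedding['baby']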