Urdu, Kashmiri and Maithili Support
anuragshas opened this issue · 22 comments
I would like to contribute Urdu and Kashmiri, which are also among the official languages of India and have Indian origins.
I have started working on the Urdu language using the NLP for Marathi repo. I have gathered around 350K Wikipedia article links and am in the process of scraping those articles. I have also added multiprocessing support for gathering articles.
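Roughly, the scraping part looks like this (a simplified sketch, not the exact script; the link-file name and the paragraph-based parsing are placeholders):

```python
import multiprocessing as mp

import requests
from bs4 import BeautifulSoup


def fetch_article(url):
    """Download one Wikipedia article and return its paragraph text."""
    try:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return "\n".join(p.get_text() for p in soup.find_all("p"))
    except requests.RequestException:
        return ""


if __name__ == "__main__":
    # Hypothetical file: one gathered Wikipedia article URL per line.
    with open("urdu_article_links.txt", encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]

    # Fetch articles in parallel across worker processes.
    with mp.Pool(processes=8) as pool:
        articles = pool.map(fetch_article, urls)

    with open("urdu_articles.txt", "w", encoding="utf-8") as out:
        out.write("\n".join(a for a in articles if a))
```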
Thanks for the initiative, shout out if you need any help!
I am running out of memory while creating a TextLMDataBunch with only 100K articles and a 32K vocabulary. How much memory is required to create the data for a language model?
It's advisable not to go beyond a vocab length of 30k. Are you talking about GPU memory? I have a GTX 1080 Ti with 11 GB of memory, on which I trained all the models. You can train your models on Google Colab if you're having trouble with memory on your own GPU. If you were talking about RAM, I think 16 GB should be fine!
Thank you for the information. The issue was that a single file had over 350K characters, which could not be tokenized, numericalized, and loaded into main memory at once, so I had to select fewer sentences and it worked.
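For anyone hitting the same issue, a rough sketch of the workaround under fastai v1 (the CSV name and text column are placeholders, and the vocab is capped at 30k as suggested above):

```python
import pandas as pd
from fastai.text import TextLMDataBunch

# Corpus split into shorter chunks beforehand, instead of one huge file
# that cannot be tokenized and numericalized in memory at once.
df = pd.read_csv("urdu_sentences.csv")  # hypothetical: one text chunk per row in a 'text' column
train_df = df.sample(frac=0.9, random_state=42)
valid_df = df.drop(train_df.index)

# Cap the vocabulary at 30k tokens to keep memory usage manageable.
data_lm = TextLMDataBunch.from_df(
    path=".",
    train_df=train_df,
    valid_df=valid_df,
    text_cols="text",
    max_vocab=30000,
)
```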
I have also created an LM for the Maithili language with a perplexity of 50. I am searching for news data for a classification task.
In parallel, I am training on the Urdu language with 150K articles.
The Kashmiri language has only 350 articles, and I think that won't be enough to create a language model.
Good to know that it worked.
Yes, 350 articles seems too few. See if you can get data from somewhere else: news articles, govt. websites, etc.
I have completed the LM for Urdu, and here is the link.
Resources for the Kashmiri language are very scarce and some of them are paid; there are e-paper websites, but those only have images. I am searching for more resources if possible, or else I will have to drop it.
I am working on scraping Maithili-language news websites.
@anuragshas Thanks for the contribution! Would you like to raise a PR to add your model to iNLTK? (I can help you with the process.)
You are welcome. I am really happy that I will be able to raise my first PR on GitHub.
After going through the code, I guess I will have to change the config.py file, but I am unsure whether I will have to fine-tune the all_languages_identifying_model.
@anuragshas Don't worry about all_languages_identifying_model. I will be fine-tuning it to add the Tamil language to iNLTK; I will tune it for Urdu as well.
As far as the LM is concerned, we'll not be able to add it to iNLTK in its current form. Can you follow these instructions, upload the saved model to Dropbox, and share its link with me?
Shout out if you need any help!
@anuragshas You've been working on an LM for Maithili as well, right? Can you share the Wikipedia dataset you would have prepared for it? Tuning the language-classifier model again for Maithili will take time, and I was about to train it for Tamil, Urdu, and Telugu, so I thought if we can add Maithili as well, that would be great!
@goru001 Here is the link to MaithiliWikiArticles.
I have been busy searching for a job; I will create the PR for the Urdu LM as soon as I am free.
@anuragshas No issues! Good luck :).
@anuragshas Once you've imported the Tokenizer, you need to load the pretrained model which you would have saved the last time, and then export. That is, just to be very clear, you don't need to retrain, just do learn.load('your_saved_model_after_training.pth')
And then,
learn.export('export.pkl').
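Putting it together, roughly (a sketch assuming a fastai v1 AWD_LSTM learner rebuilt on the same DataBunch used for training; 'ur_wiki_lm' is just a placeholder name):

```python
from fastai.text import language_model_learner, AWD_LSTM

# Rebuild the learner on the same DataBunch that was used for training,
# then load the saved weights -- no retraining needed.
learn = language_model_learner(data_lm, AWD_LSTM, pretrained=False)
learn.load("ur_wiki_lm")  # placeholder name of the model saved after training

# Serialize the whole learner (model, vocab, processing) for use in iNLTK.
learn.export("export.pkl")
```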
Once you have this 'export.pkl' and 'tokenizer.model'(which is a result of unsupervised training by sentencepiece), upload these to Dropbox, and then,
- Add your language and language code to the config file
- Add both the links to the config file
- The model I pushed today does contain language-identifying capabilities for Urdu, so nothing to do on that front.
- Check that all the functions are running fine for Urdu.
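For the last point, a quick sanity check could look roughly like this (assuming the usual iNLTK entry points behave for Urdu the same way they do for the other languages):

```python
from inltk.inltk import setup, tokenize, identify_language

# One-time download of the Urdu model files referenced in the config.
setup("ur")

sample = "یہ ایک مثال ہے"  # "This is an example" -- any short Urdu sentence works
print(tokenize(sample, "ur"))     # should return sentencepiece tokens
print(identify_language(sample))  # should come back as 'ur' once the classifier is tuned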
The __pycache__ and .idea folders are already present in the repo; shouldn't those be removed?
Yes, you're right. That was a mistake when I'd first committed. I've removed those now! Thanks!
I am working on Maithili.
Does iNLTK now support Maithili?
@ankur220693 Not yet! Feel free to contribute and raise a PR. Let me know if you need some help along the way.
@ankur220693 I am actually short of data for working on Maithili. The model that I had created was overfitting, so I had to put it on hold. If you can help us gather Maithili text from a reliable source, please let us know.
@anuragshas would you have any updates regarding language resources for Kashmiri?
For Kashmiri there is not enough data available publicly to work on; check OSCAR or the Wikipedia dump to see if data is available. Last time I scraped it, there were only 350 articles.
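A quick way to check the current size is the standard MediaWiki statistics endpoint, e.g. (a small sketch):

```python
import requests

# Query the MediaWiki siteinfo API for article statistics on ks.wikipedia.org.
resp = requests.get(
    "https://ks.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "meta": "siteinfo",
        "siprop": "statistics",
        "format": "json",
    },
    timeout=10,
)
stats = resp.json()["query"]["statistics"]
print("Content articles:", stats["articles"])
print("Total pages:", stats["pages"])
```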
Dear @anuragshas, I think the Kashmiri Wikipedia has increased in size now.