module dsconcept missing
Opened this issue · 14 comments
Sorry, should have put it here.
~/concept-tagging-api/tests$ python context.py
Traceback (most recent call last):
File "context.py", line 8, in
import app
File "/home/ubuntu/concept-tagging-api/service/app.py", line 6, in
import dsconcept.get_metrics as gm
ModuleNotFoundError: No module named 'dsconcept'
and re: other repo
fatal: unable to access 'https://developer.nasa.gov/DataSquad/classifier_scripts.git/': Could not resolve host: developer.nasa.gov
ERROR: Command errored out with exit status 128: git clone -q https://developer.nasa.gov/DataSquad/classifier_scripts.git
Did you try installing the library with this command?
pip install git+https://github.com/nasa/concept-tagging-training.git@v1.0.3-open_source_release#egg=dsconcept
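If the install succeeds, a quick way to confirm the package is importable is a minimal check like the following (just a sketch; the dsconcept.get_metrics submodule name comes from the traceback above):
import importlib

# Confirm both the package and the submodule imported by service/app.py resolve.
for name in ("dsconcept", "dsconcept.get_metrics"):
    module = importlib.import_module(name)
    print(f"{name} loaded from {module.__file__}")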
What was the exact command you executed that resulted in the fatal: unable to access error?
Trying to run the app ... no, I have not tried that one; will give it a shot later today, thanks.
That appears to have worked, thank you Anthony. Now I just have to wait to be allowed to access it to try it.
Glad to hear it. Sorry about those misleading links. Would love to hear how it ends up working.
Anthony, I have it running now. Have to seriously test it soon; probably have a project full of documents coming too.
Speaking of which, do you have a recommended size limit for firing a batch at it, do you think?
e.g. context being: if this generally works on short abstracts, and I start throwing long reports at it to see what comes out, what is a reasonable length for the model to handle, that sort of thing? Is a 10MB text dump of a report roughly like 10K 200-word abstracts?
Seems at the moment approx. 4MB might be a limit (at least at this machine's capacity) for a single text payload. Sizes range from 1KB up to 7MB or so in the 7000-odd documents I am trying.
This is one example: https://mer-env.s3-ap-southeast-2.amazonaws.com/ENV01111.pdf-textract-text.txt
To optimize speed, my tests (using abstracts) indicated that a batch size of about 700 documents is ideal. More than that gave diminishing returns on speed. If you are working with longer documents, the ideal batch size may be lower. However, I would recommend automatically summarizing the text before sending it to the API, as other tests indicated that this is best for getting the most accurate tags.
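For example, a minimal batching sketch along those lines; the endpoint path and JSON payload shape below are assumptions for illustration and may not match the actual concept-tagging-api routes:
import requests

API_URL = "http://localhost:5000/findterms"  # hypothetical endpoint, adjust to your deployment
BATCH_SIZE = 700  # batch size that tested well on abstracts, per the comment above


def batched(items, size):
    """Yield successive slices of `items` with at most `size` elements."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


def tag_documents(texts):
    """Send documents to the tagging service in batches and collect the responses."""
    results = []
    for batch in batched(texts, BATCH_SIZE):
        resp = requests.post(API_URL, json={"text": batch})  # payload shape is a guess
        resp.raise_for_status()
        results.extend(resp.json())
    return results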
Thanks Anthony, so maybe a megabyte or so in that case?
I am going to do the summarizer as well for the same set and compare.
I'm not totally sure, but that sounds about right.
Seems like docs around 3MB, with presumably lots of words etc., can run a 32GB machine out of memory and core dump it.
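One way to guard against that is a size check before sending (a minimal sketch, assuming plain-text documents; the ~1MB cap is just the figure discussed above, not an official limit):
MAX_BYTES = 1_000_000  # roughly 1MB, per the discussion above


def safe_to_send(text: str, limit: int = MAX_BYTES) -> bool:
    """Return True if the UTF-8 encoded text is under the size limit."""
    return len(text.encode("utf-8")) <= limit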
Notes on installing this repo: change scm_version to False to get it to install.
sudo apt-get install cargo
(for the Rust package manager, for example)
and, if running locally: python3 -m spacy download en_core_web_sm (a quick load check is sketched below)
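A minimal sketch to confirm the spaCy model downloaded above is usable in the local environment:
import spacy

# Load the small English model fetched by the download command above.
nlp = spacy.load("en_core_web_sm")
doc = nlp("Concept tagging test sentence about orbital mechanics.")
print([token.lemma_ for token in doc])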