More information about MEDOC is available on the OMICTools website or in MEDOC's publication on arXiv.org.
Thanks to rafspiny for his many corrections and feedback!
MEDLINE is a database of scientific articles released by the NIH. PubMed is the most common way to query this database, used daily by scientists around the world.
The NIH provides free APIs for building automatic queries; however, a local relational database can be more efficient.
The aim of this project is to download the XML files provided by MEDLINE on an FTP server and to build a relational MySQL database from their content.
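The per-file pipeline is: download a *.xml.gz archive, extract it, parse the XML, and insert the articles into the database. A minimal sketch of the extract-and-parse steps, using a tiny in-memory sample rather than a real baseline file and Python's standard library instead of MEDOC's own code (the element names follow the MEDLINE DTD, but the structure is heavily simplified):

```python
import gzip
import xml.etree.ElementTree as ET

# Hypothetical miniature of a MEDLINE baseline file
# (real *.xml.gz files hold on the order of 30,000 citations).
sample = (b"<MedlineCitationSet>"
          b"<MedlineCitation><PMID>123</PMID></MedlineCitation>"
          b"</MedlineCitationSet>")
compressed = gzip.compress(sample)

# Extract the archive, then parse the XML and collect article IDs
root = ET.fromstring(gzip.decompress(compressed))
pmids = [c.findtext("PMID") for c in root.iter("MedlineCitation")]
print(pmids)  # ['123']
```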
The first step is to clone this GitHub repository on your local machine.
Open a terminal:
git clone "https://github.com/MrMimic/MEDOC"
cd ./MEDOC
This section covers prerequisites and the installation procedure.
XML parsing libraries may be needed. You can install them on any Debian-derived system with:
sudo apt-get install libxml2-dev libxslt1-dev zlib1g-dev
You may also need python-dev, which can be installed the same way:
sudo apt-get install python-dev
The second step is to install external dependencies and to cythonize python functions.
To do so, run SETUP.py:
cd /path/to/MEDOC
python3 utils/SETUP.py build_ext --inplace
This script will:
- Check for pip3 and give command to install it
- Check for Cython and give command to install it
- Check for pymysql and give command to install it
- Check for bs4 and give command to install it
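The checks above can be sketched roughly as follows; the exact messages and pip3 hints are illustrative, not SETUP.py's actual output:

```python
import importlib.util

# Modules SETUP.py looks for, with the pip3 command suggested when one is missing
required = {"Cython": "pip3 install Cython",
            "pymysql": "pip3 install pymysql",
            "bs4": "pip3 install bs4"}

for module, hint in required.items():
    if importlib.util.find_spec(module) is None:
        print(f"{module} is missing; install it with: {hint}")
    else:
        print(f"{module} found")
```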
There is no need to Cythonize the functions anymore; they have already been optimized.
Alternatively, you can use the requirements.txt file shipped with the project. Simply run the following command from the MEDOC folder:
pip3 install -r requirements.txt
bs4==0.0.1
beautifulsoup4==4.6.0
Cython==0.27.2
html5lib==0.999999999
lxml==3.5.0
PyMySQL==0.7.11
Before you can run the code, you should take a look at the parameters.json file and customize it according to your environment.
In addition, if you already have a user with access to the DB you wish to create, you can change the schema file to reflect that.
You can change the DB_USER and DB_PASSWORD fields with the following commands. Suppose your credentials are my_custom_user/my_secret_password:
export MEDOC_SQL_FILE='database_creation.sql'
sed -i'' -e "s/\bDB_USER\b/my_custom_user/g" $MEDOC_SQL_FILE
sed -i'' -e "s/\bDB_PASSWORD\b/my_secret_password/g" $MEDOC_SQL_FILE
NOTE: If python3 is your default, you do not need to specify python3 or pip3; just use python and pip.
Open the parameters.json file and change the complete path value, including your /home/xxx/...
If your computer has 16 GB of RAM or more, you can set 'insert_command_limit' to '1000' or greater.
Leave the database name as 'pubmed', but change the MySQL password to yours.
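For illustration, the customized parameters.json could look roughly like the sketch below; every key name other than 'insert_command_limit' is hypothetical, so match them against the file shipped with the repository:

```json
{
    "complete_path": "/home/xxx/MEDOC",
    "database_name": "pubmed",
    "database_password": "my_secret_password",
    "insert_command_limit": 1000
}
```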
Then, simply execute:
python3 __execution__.py
The first lines should concern the database creation and the number of files to download.
Then, the regular output for one file should look like:
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - DOWNLOADING FILE
Downloading baseline/medline17n0216.xml.gz ..
Elapsed time: 12.32 sec for module: download
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - FILE EXTRACTION
Elapsed time: 0.42 sec for module: extract
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - XML FILE PARSING
Elapsed time: 72.47 sec for module: parse
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - SQL INSERTION
10000 articles inserted for file baseline/medline17n0216.xml.gz
20000 articles inserted for file baseline/medline17n0216.xml.gz
30000 articles inserted for file baseline/medline17n0216.xml.gz
Total time for file medline17n0216.xml.gz: 5.29 min
The program stops running because of 'Segmentation fault (core dumped)'
Parsing a file with 30K articles takes some time and RAM (if you know a parser more RAM-friendly than lxml, open a PR). Open the file /lib_medline/python_functions/E_parse_xml.py and go to the line:
soup = BeautifulSoup(file_content, 'lxml')
Change 'lxml' to 'html.parser' and re-launch SETUP.py.
Alternatively, lower the 'insert_command_limit' parameter to insert values into the database more often, thus reducing RAM usage.
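Another RAM-friendly option is a streaming parser such as xml.etree.ElementTree.iterparse from the standard library, which processes elements one at a time instead of holding the whole document in memory. A minimal sketch on a tiny in-memory sample (element names follow the MEDLINE DTD, but the structure is simplified):

```python
import io
import xml.etree.ElementTree as ET

# Hypothetical miniature of a MEDLINE file, streamed from memory
xml_data = io.BytesIO(b"<MedlineCitationSet>"
                      b"<MedlineCitation><PMID>111</PMID></MedlineCitation>"
                      b"<MedlineCitation><PMID>222</PMID></MedlineCitation>"
                      b"</MedlineCitationSet>")

pmids = []
for event, elem in ET.iterparse(xml_data, events=("end",)):
    if elem.tag == "MedlineCitation":
        pmid = elem.find("PMID")
        if pmid is not None:
            pmids.append(pmid.text)
        elem.clear()  # free the memory held by the processed element

print(pmids)  # ['111', '222']
```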
SQL insertions take a very long time (more than 15 min per file)
Recreate the SQL database after dropping it by running the following command:
DROP DATABASE pubmed ;
Then, comment out every line about indexes (CREATE INDEX) or foreign keys (ALTER TABLE) in the SQL creation file. Indexes slow down insertions.
When the database is full, run the index and alter commands one at a time.
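The benefit of deferring index creation can be illustrated with Python's built-in sqlite3 module (MEDOC itself targets MySQL, but the principle is the same): bulk-insert into an index-free table first, then build the index once afterwards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE medline_citation (pmid INTEGER, title TEXT)")

# Bulk-insert first, with no indexes defined...
cur.executemany("INSERT INTO medline_citation VALUES (?, ?)",
                [(i, f"Article {i}") for i in range(10000)])
conn.commit()

# ...then create the index once, after the data is loaded
cur.execute("CREATE INDEX idx_pmid ON medline_citation (pmid)")
conn.commit()

cur.execute("SELECT COUNT(*) FROM medline_citation")
print(cur.fetchone()[0])  # 10000
```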
Problem installing lxml
Make sure you have all the right dependencies installed
On Debian-based machines, try running:
sudo apt-get install python-dev libxml2-dev libxslt1-dev zlib1g-dev
Packages contributed to the original MEDOC project
- Install MySQL on macOS
- brew install mysql
- mysql -u root -p
- SHOW DATABASES;
- Install pyenv on macOS
- brew update
- brew install pyenv
- brew install pyenv-virtualenv
- CFLAGS="-I$(xcrun --show-sdk-path)/usr/include" pyenv install -v 3.6.4
- pyenv install 2.7.10
- pyenv versions
- Then, when you need a certain version:
- pyenv local 3.5.0
- Update bash_profile
- if which pyenv > /dev/null; then eval "$(pyenv init -)"; fi
- if which pyenv-virtualenv-init > /dev/null; then eval "$(pyenv virtualenv-init -)"; fi
- export PYENV_ROOT="$HOME/.pyenv"
- export PATH="$PYENV_ROOT/bin:$PATH"
- eval "$(pyenv init -)"
- update the configuration file
- src/test/p3mResource
- /Users/dsm/DGit/MEDOC/contrib/basic/src/test/p3mResource/configuration1b.cfg
- run the unit test
- src/test/p3m
- medoc1b.py:
- misc
- medoc9x.py : a sample unit test and logging in python