Provides high speed filtering and aggregation over data. See the ActiveData Wiki Page for project details.
Branch | Status |
---|---|
master | |
dev | |
frontend6 | |
ActiveData is a service! You can certainly set up your own service, but it is easier to use Mozilla's!
curl -XPOST -d "{\"from\":\"unittest\"}" http://activedata.allizom.org/query
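The body is an ActiveData JSON query: from names the table being queried, and clauses such as select and limit refine it. Here is a slightly richer example against the same public service (the column names are only illustrative and must match fields that actually exist in the unittest index):
curl -XPOST -d "{\"from\":\"unittest\", \"select\":[\"build.branch\",\"run.suite\"], \"limit\":10}" http://activedata.allizom.org/query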
- Python 2.7 installed
- Elasticsearch version 6.x
Elasticsearch has a configuration file at config/elasticsearch.yml. You must modify it to handle a high number of scripts:
script.painless.regex.enabled: true
script.max_compilations_rate: 10000/1m
We enable compression for faster transfer speeds:
http.compression: true
And it is a good idea to give your cluster a unique name so it does not join others on your local network:
cluster.name: lahnakoski_dev
Then you can run Elasticsearch:
c:\elasticsearch>bin\elasticsearch
Elasticsearch listens on port 9200. Verify it is working:
curl http://localhost:9200
You should see a response similar to this (the version numbers will reflect your installation):
{
"status" : 200,
"name" : "dev",
"cluster_name" : "lahnakoski_dev",
"version" : {
"number" : "1.7.5",
"build_hash" : "00f95f4ffca6de89d68b7ccaf80d148f1f70e4d4",
"build_timestamp" : "2016-02-02T09:55:30Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
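You can also ask Elasticsearch for its cluster health; on a single-node development machine a yellow status is normal and nothing to worry about:
curl "http://localhost:9200/_cluster/health?pretty"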
It is still too early for a PyPi install, so please clone the master branch from GitHub:
git clone https://github.com/klahnakoski/ActiveData.git
git checkout master
and install your requirements:
pip install -r requirements.txt
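If you prefer to keep these dependencies out of your system Python, a virtualenv works too (a sketch, assuming the virtualenv tool is installed; the activation command differs on Windows):
virtualenv .venv
source .venv/bin/activate
pip install -r requirements.txt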
The ActiveData service requires a configuration file that points to the default Elasticsearch index. You can find a few sample config files in resources/config; simple_settings.json is the simplest one:
{
"flask":{
"host":"0.0.0.0",
"port":5000,
"debug":false,
"threaded":true,
"processes":1
},
"constants":{
"pyLibrary.env.http.default_headers":{"From":"https://wiki.mozilla.org/Auto-tools/Projects/ActiveData"}
},
"elasticsearch":{
"host":"http://localhost",
"port":9200,
"index":"unittest",
"type":"test_result",
"debug":true
}
...<snip>...
}
The elasticsearch property must be updated to point to a specific cluster, index, and type. It is used as a default, and to find other indexes by name.
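For example, pointing the service at a different cluster only requires changing that block; the host and index below are placeholders for your own values:
"elasticsearch":{
    "host":"http://my-es-cluster.example.com",
    "port":9200,
    "index":"unittest",
    "type":"test_result",
    "debug":false
}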
Jump to your git project directory, set your PYTHONPATH, and run app.py:
cd ~/ActiveData
export PYTHONPATH=.
python active_data/app.py --settings=resources/config/simple_settings.json
If your Elasticsearch cluster has no records, you must add some before you can run queries.
Create a table in Elasticsearch with one record:
curl -XPUT "http://localhost:9200/movies/movie/1" -d "{\"name\":\"The Parent Trap\",\"released\":\"29 July 1998\",\"imdb\":\"http://www.imdb.com/title/tt0120783/\",\"rating\":\"PG\",\"director\":{\"name\":\"Nancy Meyers\",\"dob\":\"December 8, 1949\"}}"
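You can confirm the document was stored by fetching it back directly from Elasticsearch:
curl "http://localhost:9200/movies/movie/1?pretty"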
Assuming you used the defaults, you can verify the service is up if you can access the Query Tool at http://localhost:5000/tools/query.html. You may use it to send queries to your instance of the service. For example:
{"from":"movies"}
The GitHub repo also includes the test suite, and you can run it against your service if you wish. The tests will create indexes on your cluster, which are filled, queried, and destroyed:
cd ~/ActiveData
export PYTHONPATH=.
# OPTIONAL, TEST_SETTINGS already defaults to this file
export TEST_SETTINGS=tests/config/test_simple_settings.json
python -m unittest discover -v -s tests
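If you only want a subset of the suite, unittest's discovery pattern flag narrows the run; the pattern below is just an example, so use one that matches the test files you care about:
python -m unittest discover -v -s tests -p "test_jx*.py"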