Base experiment code for web experiments hosted on Google App Engine for Python (2.7)
- Python 2.7
- Google AppEngineLauncher for Python
- Google App Engine command-line tools (for downloading data from the server); install these when prompted on first starting Google AppEngineLauncher
- Open Google AppEngineLauncher
- File -> Add Existing Application
- Navigate to this folder
- Click Add
- Click Run in AppEngineLauncher
- Navigate in browser to localhost:8080
- Generate some data
- To inspect the data you created, in the App Engine Launcher click on SDK Console and then on Datastore Viewer
- Go to https://appengine.google.com/ and click Create Application
- Application Identifier - only you use it, but note it down because you will need it later
- Application Title - this will appear as the tab label in the web browser for your experiment
- Edit the first line of app.yaml to match your Application Identifier
- In Google App Engine, click on Deploy and then enter the necessary credentials
- To update your experiment:
- Edit app.yaml to have a new version number (usually by adding one)
- In Google App Engine, click Deploy
- click Dashboard
- On the dashboard, click Versions (under Main in the left bar)
- Set your new version as Default
Instead of clicking Deploy, you can also deploy from the command line: appcfg.py update . --noauth_local_webserver
Note: If you upload a new version of your experiment, it will still share the same datastore as your previous experiments. To remove the existing data in the datastore, either create a new experiment (with a different application identifier) or delete all data in the datastore.
- Open the dashboard for your experiment (via Google App Engine)
- Click Datastore Viewer (under Data in the left bar)
- Enjoy
To download the data from the server, enter this at the command line:
appcfg.py download_data --config_file=bulkloader.yaml --filename=data.csv --kind=DataObject --url=http://<app_name>.appspot.com/_ah/remote_api
You should substitute the name of your experiment for <app_name> above.
Note: The local testing server in Google App Engine currently doesn't support batch download.
If you change the data being written you'll have to re-create the download data file (bulkloader.yaml)
appcfg.py create_bulkloader_config --filename=bulkloader.yaml --url=http://<app_name>.appspot.com/_ah/remote_api
Then, in the new bulkloader.yaml, set the connector line to connector: csv and set the delimiter to tab.
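For reference, the edited part of the generated bulkloader.yaml might look roughly like this (a hedged sketch; the exact option names come from the file appcfg.py generates, and the export_options/dialect lines here are assumptions — edit the generated file rather than copying this verbatim):

```yaml
transformers:
- kind: DataObject
  connector: csv
  connector_options:
    # tab-delimited output; adjust the option block that
    # create_bulkloader_config actually produced
    export_options:
      dialect: excel-tab
```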
- In exp folder:
- index.html: html of the experiment that is loaded by backend.py (you should modify this for your experiment)
- app.yaml: (needs to be changed to update your app name and version number)
- defines app name and version number
- Handlers section - specify how URLs should be routed to the files in your folder (can be left alone)
- Libraries section - which libraries are used (can be left alone)
- Builtins - turns remote_api on (necessary for data download)
- backend.py: (can be left alone in most cases)
- loads index.html (via JINJA) and displays it
- defines the structure of the data to save (DataObject section)
- is the code that first gets called when people go to the experiment page
- backend.pyc: is automatically generated by python based on backend.py
- bulkloader.yaml: (can be left alone)
- tells the data downloader how the data from Google will be formatted
- must match the data structure defined in backend.py (buttons/sliders/etc.)
- In css folder:
- style.css: specifies CSS for index.html and the experiment (usually can be left alone)
- In js folder:
- init_exp.js: only javascript file directly loaded by index.html (except for JQuery), loads all other javascript files. This file is unlikely to require changes.
- demographics.js: functions for displaying the demographics questions
- instructions.js: functions for displaying the instructions and instruction checks
- exp_logic.js: functions to move the experiment through the various stages
- trial_fcns.js: functions defining what to do on a training and test trial
- slider_fcns.js: functions to control the slider
- canvas_fcns.js: functions to control the html canvas being drawn on
- In analysis folder:
- read.R: reads in the raw file downloaded from App Engine and parses the JSON to create (and save) an RData object (read.R calls parser.py). You might need to change input_file (line 2) to match the name of the file you downloaded from Google App Engine if the name has changed.
- parser.py: parses the raw GAE result (it assumes a tab-delimited file) into a CSV file suitable to be read into R (by read.R). It takes two parameters: the file to read and the name of the file to write the results to.
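As an illustration of the kind of transformation parser.py performs, here is a minimal, hypothetical sketch in the same spirit (the column name "data" and the record fields are assumptions made for the example; the repository's parser.py is the reference):

```python
# Hypothetical sketch of the parsing step: read a tab-delimited GAE export
# and expand a JSON-encoded column into flat records. Column names are
# assumptions for illustration only.
import csv
import io
import json

def parse_gae_export(raw_text):
    """Parse tab-delimited export text, expanding the JSON 'data' column."""
    reader = csv.DictReader(io.StringIO(raw_text), delimiter="\t")
    rows = []
    for row in reader:
        record = json.loads(row.pop("data"))  # expand the JSON payload
        record.update(row)                    # keep the plain columns too
        rows.append(record)
    return rows
```

The real parser.py then writes such records out as CSV for read.R; this sketch stops at the parsed records.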
- makefile: a simple makefile that runs read.R from the command line. Simply type:
make
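The makefile is likely little more than a rule of this shape (a sketch; the actual target name and whether it invokes Rscript or R CMD BATCH are assumptions):

```make
# minimal sketch -- the repository's own makefile is the reference
all:
	Rscript read.R
```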