# JSON Expressions for BigQuery

## Status

**June 2020** - Can insert JSON documents into BigQuery while managing the schema. Queries are not supported yet.

## Overview

The library is intended to manage multiple BigQuery tables to give the illusion of one table with a dynamically managed schema.
## Definitions

- **partition** - Big data is split into separate containers based on age. This allows queries on recent data to use fewer resources, and allows old data to be dropped quickly.
- **cluster** - Another name for the sorted order of the data in a partition. Sorting by the most commonly used lookup will make queries faster.
- **id** - The set of columns that identifies the document.

## Configuration

- **table** - Any name you wish to give to this table series.
- **top_level_fields** - A dictionary from deep property path to top-level property name. BigQuery demands that control columns (such as the partition field) are top-level, which may not be the case for your documents. See the sketch after this list.
- **partition**
  - **field** - The dot-delimited field used to partition the tables (must be a `time` datatype).
  - **expire** - When BigQuery will automatically drop your data.
- **id** - The identification of documents:
  - **field** - The set of columns that uniquely identify this document.
  - **version** - The column used to determine the age of a document, so that newer documents replace older ones.
- **cluster** - The columns used to sort the partitions.
- **schema** - A `{name: type}` dictionary; needed when there is no data, because BigQuery demands column definitions.
- **sharded** - Boolean; set to `true` if you allow this library to track multiple tables. This allows for schema migration (expansion only), and for faster inserts from a multitude of machines.
- **account_info** - The information BigQuery provides to connect.
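The full example below leaves `top_level_fields` empty, so here is a hedged sketch of a non-empty mapping. The `metadata.submit_time` path is hypothetical, and the exact encoding of dotted paths follows the library's own path convention:

```json
{
    "top_level_fields": {
        "metadata.submit_time": "submit_time"
    }
}
```

This would lift the nested `metadata.submit_time` value into a top-level `submit_time` column, allowing it to serve as the partition field.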
This is a complicated example; see `tests/config.json` for a minimal one.

```json
{
    "table": "my_table_name",
    "top_level_fields": {},
    "partition": {
        "field": "submit_time",
        "expire": "2year"
    },
    "id": {
        "field": "id",
        "version": "last_modified"
    },
    "cluster": [
        "id",
        "last_modified"
    ],
    "schema": {
        "id": "integer",
        "submit_time": "time",
        "last_modified": "time"
    },
    "sharded": true,
    "account_info": {
        "private_key_id": {
            "$ref": "env://BIGQUERY_PRIVATE_KEY_ID"
        },
        "private_key": {
            "$ref": "env://BIGQUERY_PRIVATE_KEY"
        },
        "type": "service_account",
        "project_id": "my-project-id",
        "client_email": "me@my_project.iam.gserviceaccount.com",
        "client_id": "12345",
        "auth_uri": "https://accounts.google.com/o/oauth2/auth",
        "token_uri": "https://oauth2.googleapis.com/token",
        "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
        "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-project.iam.gserviceaccount.com"
    }
}
```
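The `$ref` properties are JSON references in the style of the author's `mo-json-config` library, which resolves `env://` and `file://` URLs while loading a config file. A minimal sketch, assuming `mo-json-config` is installed and the configuration above is saved as `config.json`:

```python
# Sketch only: mo_json_config.get() loads a config file and expands
# {"$ref": ...} entries, e.g. pulling BIGQUERY_PRIVATE_KEY_ID and
# BIGQUERY_PRIVATE_KEY from environment variables
import mo_json_config

settings = mo_json_config.get("file://config.json")

# the result supports dot-access to properties (mo-dots style)
assert settings.account_info.private_key_id is not None
```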
## Usage

**Setup the Dataset** with an application name:

```python
from jx_bigquery import bigquery  # assuming this module layout for the library

dataset = bigquery.Dataset(
    dataset=application_name,
    kwargs=settings
)
```
**Create a table:**

```python
destination = dataset.get_or_create_table(settings.destination)
```
**Insert documents as you please:**

```python
destination.extend(documents)
```
**Request a merge when done:**

```python
destination.merge()
```
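Putting these steps together: a hedged end-to-end sketch, continuing from the setup above. The document values and the nested `details` property are illustrative; with `sharded: true`, new properties are expected to trigger schema expansion rather than errors:

```python
# documents are plain (possibly nested) JSON-like structures
documents = [
    {"id": 1, "submit_time": 1591000000, "last_modified": 1591000000},
    {
        "id": 2,
        "submit_time": 1591000100,
        "last_modified": 1591000200,
        # not in the declared schema; the dynamically managed schema
        # grows to accommodate it (expansion only)
        "details": {"status": "new"},
    },
]

destination.extend(documents)

# consolidate the shard tables into the canonical table; given the
# id settings above, documents are identified by "id" and versioned
# by "last_modified"
destination.merge()
```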
## Development

Fork and clone this repo:

```bash
git clone https://github.com/klahnakoski/jx-bigquery.git
cd jx-bigquery
pip install -r requirements.txt
```
You will require a Google API key to run the tests. The Google Cloud Console will allow you to generate a service account key and download it as a JSON file. Update `tests/config.json` to point to that file:
```json
{
    "destination": {
        "account_info": {
            "$ref": "file:///e:/moz-fx-dev-ekyle-treeherder-a838a7718652.json"
        }
    },
    "constants": {},
    "debug": {
        "trace": true
    }
}
```
Then you can run the tests:

```bash
python -m unittest discover tests
```
**NOTE** - The tests will create a `testing` dataset and generate/drop tables.