Idiomatic Python client for Google Cloud Platform services.
This client supports the following Google Cloud Platform services:
- Google Cloud Datastore
- Google Cloud Storage
- Google Cloud Pub/Sub
- Google BigQuery
- Google Cloud Resource Manager
- Google Stackdriver Logging
- Google Stackdriver Monitoring
If you need support for other Google APIs, check out the Google APIs Python Client library.
$ pip install --upgrade google-cloud
Example applications:
- getting-started-python - A sample and tutorial that demonstrates how to build a complete web application using Cloud Datastore, Cloud Storage, and Cloud Pub/Sub and deploy it to Google App Engine or Google Compute Engine.
- google-cloud-python-expenses-demo - A sample expenses demo using Cloud Datastore and Cloud Storage.
With google-cloud-python we try to make authentication as painless as possible. Check out the Authentication section in our documentation to learn more. You may also find the authentication document shared by all the google-cloud-* libraries to be helpful.
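As a minimal sketch (the keyfile path below is a placeholder), each client can pick up application default credentials from the environment, or you can point it at a service account JSON keyfile explicitly:

from google.cloud import storage

# Uses application default credentials discovered from the environment.
client = storage.Client()

# Or, explicitly load credentials from a service account JSON keyfile
# (the path here is hypothetical).
client = storage.Client.from_service_account_json('/path/to/keyfile.json')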
Google Cloud Datastore (Datastore API docs) is a fully managed, schemaless database for storing non-relational data. Cloud Datastore automatically scales with your users and supports ACID transactions, high availability of reads and writes, strong consistency for reads and ancestor queries, and eventual consistency for all other queries.
See the google-cloud-python API datastore documentation to learn how to interact with Cloud Datastore using this Client Library.
See the official Google Cloud Datastore documentation for more details on how to activate Cloud Datastore for your project.
from google.cloud import datastore

client = datastore.Client()

# Create, populate and persist an entity
entity = datastore.Entity(key=client.key('EntityKind'))
entity.update({
    'foo': u'bar',
    'baz': 1337,
    'qux': False,
})
client.put(entity)  # API request: save the entity

# Then query for entities
query = client.query(kind='EntityKind')
for result in query.fetch():
    print(result)
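You can also fetch a single entity directly by its key; a minimal sketch using the client from the example above, where the kind and numeric ID are illustrative:

# Look up one entity by its complete key.
key = client.key('EntityKind', 1234)
entity = client.get(key)  # returns None if nothing is stored under the key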
Google Cloud Storage (Storage API docs) allows you to store data on Google infrastructure with very high reliability, performance and availability, and can be used to distribute large data objects to users via direct download.
See the google-cloud-python API storage documentation to learn how to connect to Cloud Storage using this Client Library.
You need to create a Google Cloud Storage bucket to use this client library. Follow along with the official Google Cloud Storage documentation to learn how to create a bucket.
from google.cloud import storage
client = storage.Client()
bucket = client.get_bucket('bucket-id-here')
# Then do other things...
blob = bucket.get_blob('remote/path/to/file.txt')
print(blob.download_as_string())
blob.upload_from_string('New contents!')
blob2 = bucket.blob('remote/path/storage.txt')
blob2.upload_from_filename(filename='/local/path.txt')
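You can also iterate over the objects in a bucket, for example to list every blob's name (continuing with the bucket from the example above):

for blob in bucket.list_blobs():
    print(blob.name)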
Google Cloud Pub/Sub (Pub/Sub API docs) is designed to provide reliable, many-to-many, asynchronous messaging between applications. Publisher applications can send messages to a topic and other applications can subscribe to that topic to receive the messages. By decoupling senders and receivers, Google Cloud Pub/Sub allows developers to communicate between independently written applications.
See the google-cloud-python API Pub/Sub documentation to learn how to connect to Cloud Pub/Sub using this Client Library.
To get started with this API, you'll need to create a topic:
from google.cloud import pubsub

client = pubsub.Client()
topic = client.topic('topic_name')
topic.create()  # API request: create the topic

# Publish a message with optional attributes.
topic.publish('this is the message_payload',
              attr1='value1', attr2='value2')
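To receive those messages, attach a subscription to the topic and pull from it. A short sketch continuing from the example above, where the subscription name is illustrative:

# Create a subscription on the topic, then pull and acknowledge messages.
subscription = topic.subscription('subscription_name')
subscription.create()

for ack_id, message in subscription.pull():
    print(message.data)
    subscription.acknowledge([ack_id])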
Querying massive datasets can be time consuming and expensive without the right hardware and infrastructure. Google BigQuery (BigQuery API docs) solves this problem by enabling super-fast, SQL-like queries against append-only tables, using the processing power of Google's infrastructure.
This package is still being implemented, but it is almost complete!
from google.cloud import bigquery
from google.cloud.bigquery import SchemaField

client = bigquery.Client()

dataset = client.dataset('dataset_name')
dataset.create()  # API request

SCHEMA = [
    SchemaField('full_name', 'STRING', mode='REQUIRED'),
    SchemaField('age', 'INTEGER', mode='REQUIRED'),
]
table = dataset.table('table_name', SCHEMA)
table.create()

with open('csv_file', 'rb') as readable:
    table.upload_from_file(
        readable, source_format='CSV', skip_leading_rows=1)

# Perform a synchronous query.
QUERY = (
    'SELECT name FROM [bigquery-public-data:usa_names.usa_1910_2013] '
    'WHERE state = "TX"')
TIMEOUT_MS = 1000  # how long to wait for the query to finish

query = client.run_sync_query('%s LIMIT 100' % QUERY)
query.timeout_ms = TIMEOUT_MS
query.run()

for row in query.rows:
    print(row)
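Longer-running queries can also be issued as asynchronous jobs and polled for completion; a sketch continuing from the example above, where the job ID and the one-second poll interval are arbitrary choices:

import time
import uuid

job = client.run_async_query(str(uuid.uuid4()), QUERY)
job.begin()  # API request: start the job

# Poll until the job finishes; results then land in the job's destination table.
while True:
    job.reload()  # refresh the job's status from the API
    if job.state == 'DONE':
        break
    time.sleep(1)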
See the google-cloud-python API BigQuery documentation to learn how to connect to BigQuery using this Client Library.
The Cloud Resource Manager API (Resource Manager API docs) provides methods that you can use to programmatically manage your projects on Google Cloud Platform.
See the google-cloud-python API Resource Manager documentation to learn how to manage projects using this Client Library.
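For example, iterating over every project you have access to takes just a few lines; a minimal sketch using this package's resource_manager module:

from google.cloud import resource_manager

client = resource_manager.Client()
for project in client.list_projects():
    print(project.project_id)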
Stackdriver Logging API (Logging API docs) allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud Platform.
from google.cloud import logging

client = logging.Client()
logger = client.logger('log_name')
logger.log_text("A simple entry")  # API call

Example of fetching entries:

entries, token = logger.list_entries()
for entry in entries:
    print(entry.payload)
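Loggers can also write structured (JSON) payloads, and deleting a logger removes its entries; a brief sketch continuing from the example above, where the payload fields are illustrative:

# Write a structured entry, then delete the logger and all of its entries.
logger.log_struct({'message': 'My second entry', 'weather': 'partly cloudy'})
logger.delete()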
See the google-cloud-python API logging documentation to learn how to connect to Stackdriver Logging using this Client Library.
Stackdriver Monitoring (Monitoring API docs) collects metrics, events, and metadata from Google Cloud Platform, Amazon Web Services (AWS), hosted uptime probes, application instrumentation, and a variety of common application components including Cassandra, Nginx, Apache Web Server, Elasticsearch and many others. Stackdriver ingests that data and generates insights via dashboards, charts, and alerts.
This package currently supports all Monitoring API operations other than writing custom metrics.
List available metric types:
from google.cloud import monitoring
client = monitoring.Client()
for descriptor in client.list_metric_descriptors():
    print(descriptor.type)
Display CPU utilization across your GCE instances during the last five minutes:
metric = 'compute.googleapis.com/instance/cpu/utilization'
query = client.query(metric, minutes=5)
print(query.as_dataframe())
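Note that as_dataframe() requires pandas to be installed. Without it, you can iterate the query directly to inspect the raw time series; a sketch, where the resource label name is an assumption about the metric being queried:

for timeseries in query:
    # Each result is one time series per monitored resource.
    print(timeseries.resource.labels.get('instance_id'))
    for point in timeseries.points:
        print(point.value)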
See the google-cloud-python API monitoring documentation to learn how to connect to Stackdriver Monitoring using this Client Library.
Contributions to this library are always welcome and highly encouraged.
See CONTRIBUTING for more information on how to get started.
Apache 2.0 - See LICENSE for more information.