Google Cloud BigQuery Client for Java

Java idiomatic client for Cloud BigQuery.

Quickstart

If you are using Maven with the BOM, add this to your pom.xml file:

<!--  Using libraries-bom to manage versions.
See https://github.com/GoogleCloudPlatform/cloud-opensource-java/wiki/The-Google-Cloud-Platform-Libraries-BOM -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.google.cloud</groupId>
      <artifactId>libraries-bom</artifactId>
      <version>8.0.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-bigquery</artifactId>
  </dependency>
</dependencies>

If you are using Maven without the BOM, add this to your dependencies:

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquery</artifactId>
  <version>1.116.3</version>
</dependency>

If you are using Gradle, add this to your dependencies:

implementation 'com.google.cloud:google-cloud-bigquery:1.116.3'

If you are using SBT, add this to your dependencies:

libraryDependencies += "com.google.cloud" % "google-cloud-bigquery" % "1.116.3"

Authentication

See the Authentication section in the base directory's README.

Getting Started

Prerequisites

You will need a Google Cloud Platform Console project with the Cloud BigQuery API enabled, and billing must be enabled on the project in order to use BigQuery. Follow these instructions to get your project set up. You will also need to set up your local development environment by installing the Google Cloud SDK and running the following commands on the command line: gcloud auth login and gcloud config set project [YOUR PROJECT ID].

Installation and setup

You'll need to obtain the google-cloud-bigquery library. See the Quickstart section to add google-cloud-bigquery as a dependency in your code.
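The snippets in the sections below assume that a BigQuery client has already been created and is available in a variable named `bigquery`. A minimal sketch using Application Default Credentials:

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;

// Instantiate a client. With no explicit configuration, the library picks up
// Application Default Credentials and the project configured via gcloud.
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

// Alternatively, set the project ID explicitly:
// BigQuery bigquery = BigQueryOptions.newBuilder()
//     .setProjectId("my-project-id")
//     .build()
//     .getService();
```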

About Cloud BigQuery

Cloud BigQuery is a fully managed, serverless, low-cost data analytics service. Data can be streamed into BigQuery at millions of rows per second to enable real-time analysis, and queries can scale to petabyte-sized datasets.

See the Cloud BigQuery client library docs to learn how to use the library.

Creating a dataset

With BigQuery you can create datasets. A dataset is a grouping mechanism that holds zero or more tables. Add the following import at the top of your file:

import com.google.cloud.bigquery.DatasetInfo;

Then, to create the dataset, use the following code:

// Create a dataset
String datasetId = "my_dataset_id";
bigquery.create(DatasetInfo.newBuilder(datasetId).build());
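The DatasetInfo builder also lets you set metadata before creating the dataset; for example, a geographic location and a description (the values shown here are illustrative):

```java
import com.google.cloud.bigquery.DatasetInfo;

DatasetInfo datasetInfo = DatasetInfo.newBuilder("my_dataset_id")
    .setLocation("US")                  // where the dataset's data is stored
    .setDescription("Example dataset")  // free-form description
    .build();
// Create it with the client from the earlier snippet:
// bigquery.create(datasetInfo);
```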

Creating a table

With BigQuery you can create different types of tables: normal tables with an associated schema, external tables backed by data stored in Google Cloud Storage, and view tables created from a BigQuery SQL query. In this code snippet we show how to create a normal table with only one string field. Add the following imports at the top of your file:

import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

Then add the following code to create the table:

TableId tableId = TableId.of(datasetId, "my_table_id");
// Table field definition
Field stringField = Field.of("StringField", LegacySQLTypeName.STRING);
// Table schema definition
Schema schema = Schema.of(stringField);
// Create a table
StandardTableDefinition tableDefinition = StandardTableDefinition.of(schema);
Table createdTable = bigquery.create(TableInfo.of(tableId, tableDefinition));
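To verify later that the table exists (or to guard against creating it twice), you can look it up by ID; getTable returns null when the table is not found:

```java
// Look the table up by ID; getTable returns null if it does not exist
Table table = bigquery.getTable(tableId);
if (table != null) {
  System.out.println("Table found: " + table.getTableId().getTable());
} else {
  System.out.println("Table not found");
}
```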

Loading data into a table

BigQuery provides several ways to load data into a table: streaming rows or loading data from a Google Cloud Storage file. In this code snippet we show how to stream rows into a table. Add the following imports at the top of your file:

import com.google.cloud.bigquery.InsertAllRequest;
import com.google.cloud.bigquery.InsertAllResponse;

import java.util.HashMap;
import java.util.Map;

Then add the following code to insert data:

Map<String, Object> firstRow = new HashMap<>();
Map<String, Object> secondRow = new HashMap<>();
firstRow.put("StringField", "value1");
secondRow.put("StringField", "value2");
// Create an insert request
InsertAllRequest insertRequest = InsertAllRequest.newBuilder(tableId)
    .addRow(firstRow)
    .addRow(secondRow)
    .build();
// Insert rows
InsertAllResponse insertResponse = bigquery.insertAll(insertRequest);
// Check if errors occurred
if (insertResponse.hasErrors()) {
  System.out.println("Errors occurred while inserting rows");
}
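Beyond the boolean flag, InsertAllResponse reports which rows failed: getInsertErrors() maps the index of each failed row to its list of BigQueryError values, which is usually more actionable when debugging. A sketch:

```java
import com.google.cloud.bigquery.BigQueryError;

import java.util.List;
import java.util.Map;

// getInsertErrors() maps the index of each failed row to its errors
for (Map.Entry<Long, List<BigQueryError>> entry :
    insertResponse.getInsertErrors().entrySet()) {
  System.out.println("Row " + entry.getKey() + " failed: " + entry.getValue());
}
```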

Querying data

BigQuery lets you query data either directly or through a Query Job. In this code snippet we show how to run a query directly and wait for the result. Add the following imports at the top of your file:

import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;

Then add the following code to run the query and wait for the result:

// Create a query request
QueryJobConfiguration queryConfig =
    QueryJobConfiguration.newBuilder("SELECT my_column FROM my_dataset_id.my_table_id").build();
// Read rows
System.out.println("Table rows:");
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  System.out.println(row);
}
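Each FieldValueList can also be indexed by field name rather than printed whole. For example, to read the column from the query above as a string (the column name is taken from that snippet):

```java
// Values can be fetched by field name as well as by index
for (FieldValueList row : bigquery.query(queryConfig).iterateAll()) {
  String value = row.get("my_column").getStringValue();
  System.out.println("my_column: " + value);
}
```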

Complete source code

In InsertDataAndQueryTable.java we put together all the code shown above into one program. The program assumes that you are running on Compute Engine or from your own desktop. To run the example on App Engine, move the code from the main method into your application's servlet class and change the print statements to display on your web page.

Samples

Samples are in the samples/ directory. The samples' README.md has instructions for running the samples.

Available samples:

Add Column Load Append
Add Empty Column
Auth Drive Scope
Auth Snippets
Browse Table
Cancel Job
Copy Multiple Tables
Copy Table
Create Clustered Table
Create Dataset
Create Job
Create Model
Create Partitioned Table
Create Range Partitioned Table
Create Routine
Create Routine DDL
Create Table
Create Table Without Schema
Create View
Dataset Exists
Delete Dataset
Delete Model
Delete Routine
Delete Table
Extract Table To Csv
Extract Table To Json
Get Dataset Info
Get Job
Get Model
Get Routine
Get Table
Get View
Inserting Data Types
List Datasets
List Models
List Tables
Load Csv From Gcs
Load Csv From Gcs Truncate
Load Local File
Load Parquet
Load Parquet Replace Table
Load Partitioned Table
Load Table Clustered
Nested Repeated Schema
Query
Query Batch
Query Clustered Table
Query With Named Parameters
Query With Positional Parameters
Query With Structs Parameters
Quickstart Sample
Relax Column Mode
Relax Table Query
Run Legacy Query
Save Query To Table
Simple App
Simple Query
Table Insert Rows
Update Dataset Access
Update Dataset Description
Update Dataset Expiration
Update Table DML
Update Table Description
Update Table Expiration

Troubleshooting

To get help, follow the instructions in the shared Troubleshooting document.

Java Versions

Java 7 or above is required to use this client.

Versioning

This library follows Semantic Versioning.

Contributing

Contributions to this library are always welcome and highly encouraged.

See CONTRIBUTING for more information on how to get started.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.

License

Apache 2.0 - See LICENSE for more information.

CI Status

Kokoro CI builds are run against Java 7, Java 8, Java 8 (OS X), Java 8 (Windows), and Java 11.