GoogleCloudDataproc/spark-bigquery-connector

AWS Glue - Indirect write mode errors


I am running my PySpark script on AWS Glue using spark-3.3-bigquery-0.35.1.jar. I tested direct write mode and it worked perfectly.

Now I need to create a JSON field, which requires indirect write mode and the temporaryGcsBucket parameter.
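For reference, the write call looks roughly like this (the bucket name is a placeholder):

```python
# Minimal sketch of the indirect write; "my-temp-bucket" is a placeholder
# for a pre-existing GCS bucket the job can write to.
df.write \
    .format("bigquery") \
    .option("writeMethod", "indirect") \
    .option("temporaryGcsBucket", "my-temp-bucket") \
    .mode("append") \
    .save("transactions.test")
```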

Here is where the problems started. I added two additional jars (attached to the job as sketched below the list):

  • gcs-connector-hadoop3-2.2.11.jar, which resolved the error: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "gs"
  • google-http-client-1.43.3.jar, which resolved the error: An error occurred while calling o397.save. com/google/api/client/http/HttpRequestInitializer
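For context, this is roughly how the jars are attached to the Glue job; the job name, role, and S3 paths are placeholders:

```python
# Sketch of attaching extra jars to an existing Glue job via boto3.
# Job name, role, and S3 paths are hypothetical; --extra-jars takes a
# comma-separated list of S3 paths.
import boto3

glue = boto3.client("glue")
glue.update_job(
    JobName="bq-indirect-write-test",  # placeholder
    JobUpdate={
        "Role": "GlueJobRole",  # placeholder
        "Command": {
            "Name": "glueetl",
            "ScriptLocation": "s3://my-bucket/scripts/test_s3_bq.py",
        },
        "GlueVersion": "4.0",  # Glue 4.0 runs Spark 3.3
        "DefaultArguments": {
            "--extra-jars": ",".join([
                "s3://my-bucket/jars/spark-3.3-bigquery-0.35.1.jar",
                "s3://my-bucket/jars/gcs-connector-hadoop3-2.2.11.jar",
                "s3://my-bucket/jars/google-http-client-1.43.3.jar",
            ]),
        },
    },
)
```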

Now I am stuck on this one: An error occurred while calling o397.save. com/google/api/client/googleapis/auth/oauth2/GoogleCredential

Do I need to add another jar? If so, which one? com.google.api.client.googleapis.auth.oauth2.GoogleCredential seems to be deprecated. Or am I doing something wrong?
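From what I can tell, that class normally ships in the google-api-client jar, so that may be the missing piece. Here is a quick local check I used to see which downloaded jar actually bundles a class (jar paths are placeholders for local copies):

```python
# Check whether a jar bundles a given class; jar paths are placeholders
# for locally downloaded copies of the jars attached to the job.
import zipfile

def jar_contains(jar_path: str, class_name: str) -> bool:
    # Class files live inside the jar under a path derived from the FQCN.
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()

target = "com.google.api.client.googleapis.auth.oauth2.GoogleCredential"
for jar in ["gcs-connector-hadoop3-2.2.11.jar", "google-http-client-1.43.3.jar"]:
    print(jar, "->", jar_contains(jar, target))
```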

Can you please share the full stack trace?

@davidrabinowitz please find the full stack trace:

24/01/19 07:54:44 INFO ProcessLauncher: enhancedFailureReason: Error Category: UNCLASSIFIED_ERROR; An error occurred while calling o397.save. com/google/api/client/googleapis/auth/oauth2/GoogleCredential
24/01/19 07:54:44 INFO ProcessLauncher: Error Category: UNCLASSIFIED_ERROR
24/01/19 07:54:44 INFO ProcessLauncher: ExceptionErrorMessage failureReason: An error occurred while calling o397.save. com/google/api/client/googleapis/auth/oauth2/GoogleCredential
24/01/19 07:54:44 INFO ProcessLauncher: postprocessing
24/01/19 07:54:44 INFO ProcessLauncher: Enhance failure reason and emit cloudwatch error metrics.
24/01/19 07:54:44 WARN OOMExceptionHandler: Failed to extract executor id from error message.
24/01/19 07:54:44 ERROR ProcessLauncher: Error from Python:Traceback (most recent call last):
  File "/tmp/test_s3_bq.py", line 41, in <module>
    .save("transactions.test")
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 968, in save
    self._jwrite.save(path)
  File "/opt/amazon/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
    return_value = get_return_value(
  File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 190, in deco
    return f(*a, **kw)
  File "/opt/amazon/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o397.save.
: java.lang.NoClassDefFoundError: com/google/api/client/googleapis/auth/oauth2/GoogleCredential
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2625)
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2590)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686)
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3492)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3527)
	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:173)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3635)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3582)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:547)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:373)
	at com.google.cloud.spark.bigquery.SparkBigQueryUtil.getUniqueGcsPath(SparkBigQueryUtil.java:143)
	at com.google.cloud.spark.bigquery.SparkBigQueryUtil.createGcsPath(SparkBigQueryUtil.java:124)
	at com.google.cloud.spark.bigquery.write.BigQueryWriteHelper.<init>(BigQueryWriteHelper.java:89)
	at com.google.cloud.spark.bigquery.write.BigQueryDeprecatedIndirectInsertableRelation.insert(BigQueryDeprecatedIndirectInsertableRelation.java:41)
	at com.google.cloud.spark.bigquery.write.CreatableRelationProviderHelper.createRelation(CreatableRelationProviderHelper.java:58)
	at com.google.cloud.spark.bigquery.v2.Spark31BigQueryTableProvider.createRelation(Spark31BigQueryTableProvider.java:65)
	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:103)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
	at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:100)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:96)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:615)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:177)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:615)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:591)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:96)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:83)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:81)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:124)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:357)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: com.google.api.client.googleapis.auth.oauth2.GoogleCredential
	at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
	... 65 more

As this issue is with the GCS connector, please open a bug in that repository or contact AWS to fix it.