qubole/s3-sqs-connector

Getting error: "Error occured while creating Amazon SQS Client null"


20/06/23 17:12:38 ERROR MicroBatchExecution: Query [id = 99480fa7-a0c7-4dff-8282-ffad4fd30629, runId = 95b4cfe6-e950-4986-843c-69739a873d52] terminated with error
org.apache.spark.SparkException: Error occured while creating Amazon SQS Client null
at org.apache.spark.sql.streaming.sqs.SqsClient.createSqsClient(SqsClient.scala:227)
at org.apache.spark.sql.streaming.sqs.SqsClient.<init>(SqsClient.scala:54)
at org.apache.spark.sql.streaming.sqs.SqsSource.<init>(SqsSource.scala:53)
at org.apache.spark.sql.streaming.sqs.SqsSourceProvider.createSource(SqsSourceProvider.scala:47)
at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:255)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:88)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:85)
at scala.collection.mutable.HashMap.getOrElseUpdate(HashMap.scala:79)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:85)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:83)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:286)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:71)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:285)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:275)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:83)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:65)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:269)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)

Any ideas on how to resolve this issue?

@jiweicao-outreach

I have a few questions:

  1. How are you running this? Are you using an AWS EC2 cluster with an IAM role?
  2. If you're using the same IAM role for EMR and SQS and want to use your cluster's credentials, you need to set the option useInstanceProfileCredentials to true.

The real error is being suppressed here because exception.getMessage returns null for some exceptions, which is why the message ends in "null". I can fix that code. Meanwhile, let us know whether setting useInstanceProfileCredentials to true works for you.
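For reference, a minimal sketch of a query with that option set. The format and option names here are taken from this connector's README; the queue URL, region, and schema are placeholders you'd replace with your own:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{StringType, StructType}

val spark = SparkSession.builder().appName("s3-sqs-example").getOrCreate()

// Placeholder schema for the files landing in S3
val fileSchema = new StructType()
  .add("id", StringType)
  .add("value", StringType)

val inputDf = spark
  .readStream
  .format("s3-sqs")
  .schema(fileSchema)
  .option("fileFormat", "json")
  .option("sqsUrl", "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue") // placeholder
  .option("region", "us-east-1")                                                 // placeholder
  // use the cluster's instance profile (IAM role) for the SQS client
  .option("useInstanceProfileCredentials", "true")
  .load()
```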
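And for the suppression itself, the kind of fix I have in mind is to fall back to the exception's toString when getMessage is null. A sketch with a hypothetical helper name, not the connector's actual code:

```scala
import org.apache.spark.SparkException

// Hypothetical wrapper: keep the real cause and avoid printing a bare "null"
// when an exception (e.g. NullPointerException) has no message.
def wrapSqsClientError[T](build: => T): T =
  try build
  catch {
    case e: Exception =>
      val detail = Option(e.getMessage).getOrElse(e.toString)
      throw new SparkException(s"Error occured while creating Amazon SQS Client $detail", e)
  }
```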

@abhishekd0907
Thanks for helping out. I'm running the Spark job on EMR. After setting useInstanceProfileCredentials to true, that exception is resolved.
But now I hit a different error:

20/06/23 22:27:02 ERROR MicroBatchExecution: Query [id = c78622e1-247d-48f4-9aff-de8da865dbf2, runId = c5bbbc1b-d792-4457-b9ab-7686ecbaf74e] terminated with error
java.lang.NoSuchMethodError: org.apache.spark.sql.Dataset$.ofRows$default$3()Ljava/lang/String;
at org.apache.spark.sql.streaming.sqs.SqsSource.getBatch(SqsSource.scala:80)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$10.apply(MicroBatchExecution.scala:438)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$10.apply(MicroBatchExecution.scala:434)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at org.apache.spark.sql.execution.streaming.StreamProgress.flatMap(StreamProgress.scala:25)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:434)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3.apply(MicroBatchExecution.scala:434)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:433)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
Exception in thread "stream execution thread for [id = c78622e1-247d-48f4-9aff-de8da865dbf2, runId = c5bbbc1b-d792-4457-b9ab-7686ecbaf74e]" java.lang.NoSuchMethodError: org.apache.spark.sql.Dataset$.ofRows$default$3()Ljava/lang/String;

I'm running on emr-5.30.0, which uses Spark 2.4.5.

@jiweicao-outreach
Can you please take a look at this issue? The same problem was hit by another user running the S3-SQS connector with Zeppelin on EMR, but it went away when the job was submitted via spark-submit.

org.apache.spark.sql.Dataset$.ofRows is available in open-source Spark 2.4.5. Maybe it is not on the classpath in EMR when the job is run via Zeppelin.
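For what it's worth, the $default$3 suffix in the error is the Scala-compiler-generated method for a third parameter's default value, so the jar appears to have been compiled against a Spark build whose Dataset.ofRows takes a third parameter. A quick (hypothetical) way to check what the cluster's Spark actually exposes is to paste this into spark-shell on the cluster:

```scala
// List the ofRows methods visible to this JVM's classpath, including any
// compiler-generated $default$ methods; run in spark-shell on the cluster.
Class.forName("org.apache.spark.sql.Dataset$")
  .getMethods
  .filter(_.getName.contains("ofRows"))
  .foreach(println)
```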

I still have the issue when using spark-submit. :(
I'm moving on to other approaches for now, but I'll close the issue.