guardian/riff-raff

Lambda - Artifact not found before uploading to S3 if region not eu-west-1

Closed this issue · 7 comments

mchv commented

I am trying to deploy in us-east-1 an AWS Lambda which is bundled as an uber jar. For a reason I thought I had figured out (not anymore), the deploy is failing on the first step, i.e. finding the artifact in S3 to upload to the bucket:

(screenshot of the failed deploy log, 2017-01-26)

stacks:
- aws-billing
regions:
- us-east-1
deployments:
  aws-budgets:
    type: aws-lambda
    parameters:
      fileName: aws-budgets-assembly-0.1.jar
      bucket: aws-billing-dist
      functions:
        PROD:
          name: AWS_Budgets
          filename: aws-budgets.jar


I have tried to reproduce the issue by adding a test in magenta.deployment_type.LambdaTest, but the test passes as expected.

  it should "produce an S3 upload task (Mariot)" in {
    val data2: Map[String, JsValue] = Map(
      "bucket" -> JsString("aws-billing-dist"),
      "fileName" -> JsString("aws-budgets-assembly-0.1.jar"),
      "functionNames" -> Json.arr("MyFunction-")
    )

    val app2 = Seq(App("lambda"))
    val pkg2 = DeploymentPackage("lambda", app2, data2, "aws-lambda",
      S3Path("artifact-bucket", "test/123/lambda"), true, deploymentTypes)
    val defaultRegion2 = Region("us-east-1")

    val tasks = Lambda.actionsMap("uploadLambda").taskGenerator(
      pkg2,
      DeploymentResources(reporter, lookupEmpty, artifactClient),
      DeployTarget(parameters(PROD), NamedStack("aws-billing"), defaultRegion2)
    )
    tasks should be (List(
      S3Upload(
        Region("us-east-1"),
        bucket = "aws-billing-dist",
        paths = Seq(S3Path("artifact-bucket", "test/123/lambda/aws-budgets-assembly-0.1.jar") -> s"aws-billing/PROD/lambda/aws-budgets-assembly-0.1.jar")
      )
    ))
  }

I have also checked that fileName is parsed correctly, which left me 🤔, thinking I had missed something obvious, until I realised that S3Upload is not intended to transfer a file cross-region!

mchv commented

Of course this is another example where karma hits back, as I am the one who initially attempted to implement this feature!

mchv commented

Looking again at the S3Upload code, there is a specific S3 client, artifactClient, used to retrieve artifacts, which is correctly configured with the eu-west-1 region, so S3Upload should handle uploading an artifact to another region just fine.

@sihil any idea what I am missing?

sihil commented

This exception is happening in the call to updateFunctionCode, which is made after the artifact has been uploaded to S3. It seems like the Lambda call doesn't have read permission on the bucket.

Is the bucket in the same AWS region as the lambda? Not sure if it needs to be....

sihil commented

Sorry - I didn't read your message properly. I'll look more.

sihil commented

OK - there are two ways of using the lambda deployment type.

One is to use the functions parameter; the other is to use fileName and functionNames. This config mixes the two: it uses functions and fileName. fileName points to a file that exists, but it is ignored because functionNames is not used. The filename specified in the functions map doesn't exist, so nothing is uploaded.
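For illustration, the two shapes look roughly like this, with parameter names taken from the config and test above (the exact semantics, in particular how the trailing dash in functionNames is expanded, are assumptions to check against the riff-raff docs). The first shape names the artifact file per stage inside the functions map, so that file must actually exist in the artifact:

```yaml
deployments:
  aws-budgets:
    type: aws-lambda
    parameters:
      bucket: aws-billing-dist
      functions:
        PROD:
          name: AWS_Budgets
          filename: aws-budgets-assembly-0.1.jar  # must match the artifact, not aws-budgets.jar
```

The second shape uses a single fileName for all stages, with functionNames (note the trailing dash, as in the test's "MyFunction-") presumably completed with the stage name:

```yaml
deployments:
  aws-budgets:
    type: aws-lambda
    parameters:
      bucket: aws-billing-dist
      fileName: aws-budgets-assembly-0.1.jar
      functionNames:
        - AWS_Budgets-
```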

mchv commented

@sihil thanks! Reading the code with this explanation, I understand why I got confused and why it was not working as I expected! 💯

sihil commented

There are a couple of things here:

  1. We should improve the docs to make this behaviour clearer
  2. We should consider making S3Upload fail or warn in the situation where an explicit file is given that doesn't exist
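Point 2 could be sketched roughly like this. This is a minimal, self-contained illustration, not riff-raff's actual API: S3Object and resolveExplicitFile are hypothetical names, standing in for wherever the deployment type resolves the explicitly named file against the artifact contents.

```scala
// Hypothetical sketch: resolve an explicitly named file against the keys
// actually present in the artifact, and fail with a descriptive message
// instead of silently generating no upload task.
case class S3Object(key: String)

def resolveExplicitFile(
    fileName: String,
    artifactContents: Seq[S3Object]
): Either[String, S3Object] =
  // match on the last path segment of each key
  artifactContents.find(_.key.split('/').lastOption.contains(fileName)) match {
    case Some(obj) => Right(obj)
    case None =>
      Left(
        s"Artifact file '$fileName' not found in artifact " +
          s"(contents: ${artifactContents.map(_.key).mkString(", ")})"
      )
  }
```

The Either makes the caller decide whether a missing file is a hard failure or just a logged warning, which keeps both options from point 2 open.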