aws-samples/sagemaker-custom-project-templates

[Bug] Batch-Inference Template - Model name in Pipeline not being passed

georschi opened this issue · 2 comments

When deploying this template using a model registry with an approved model, the deployed SageMaker Pipeline does not contain the name of the SageMaker Model to use for batch inference.

The problem can be pinpointed to this line in the CloudFormation template:
https://github.com/aws-samples/sagemaker-custom-project-templates/blob/main/batch-inference/seedcode/endpoint-config-template.yml#L70

      PipelineDefinition: 
        PipelineDefinitionBody: !Sub
          - ${PipelineDefinitionBody}
          - { ModelName: !GetAtt ModelToDeploy.ModelName }

This doesn't work as expected, since CloudFormation cannot perform a nested substitution. The expectation here is that ${PipelineDefinitionBody} is substituted with the JSON value of the variable, and that ModelName is then replaced inside that JSON. However, this "double" substitution is not possible with CloudFormation: Fn::Sub makes a single pass over the template string and never re-scans substituted values.
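To illustrate the limitation, here's a minimal Python sketch; the fn_sub helper is hypothetical and only mimics CloudFormation's single-pass substitution behavior:

    import re

    # Hypothetical helper mimicking Fn::Sub: a single pass over the template
    # string; replacement values are inserted literally and never re-scanned.
    def fn_sub(template, variables):
        return re.sub(
            r"\$\{(\w+)\}",
            lambda m: variables.get(m.group(1), m.group(0)),
            template,
        )

    # The pipeline definition JSON carries its own ${ModelName} placeholder.
    pipeline_body = '{"ModelName": "${ModelName}"}'

    result = fn_sub(
        "${PipelineDefinitionBody}",
        {"PipelineDefinitionBody": pipeline_body, "ModelName": "my-model"},
    )

    # ${PipelineDefinitionBody} is replaced, but the ${ModelName} inside the
    # substituted value is left untouched:
    print(result)  # {"ModelName": "${ModelName}"}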

I can think of two solutions to this.

1st - quick and dirty (tested and working, but still not recommended for a template solution).
Make the following changes:

In seedcode/endpoint-config-template.yml, replace lines 70-72 with:

        PipelineDefinitionBody:
          !Join
          - ''
          - - '{'
            - '"Parameters": [{"Name": "ModelName", "Type": "String", "DefaultValue": "'
            - !GetAtt ModelToDeploy.ModelName
            - '"}, {"Name": "BatchInstanceCount", "Type": "Integer", "DefaultValue": 1},'
            - ' {"Name": "BatchInstanceType", "Type": "String", "DefaultValue": "ml.m5.xlarge"},'
            - ' {"Name": "InputPath", "Type": "String", "DefaultValue": "s3://sagemaker-servicecatalog-seedcode-eu-west-1/dataset/abalone-dataset.csv"},'
            - ' {"Name": "OutputPath", "Type": "String"}],'
            - !Ref PipelineDefinitionBody
            - '}'

In seedcode/build.py, add at line 175:

    import json

    # Quick and dirty: hard-code the model name from the deployed stack
    # in place of the ${ModelName} placeholder.
    pipeline_definition = pipeline_definition.replace("${ModelName}", "ModelToDeploy-HHu6ShVCAbPJ")

    # Drop the "Parameters" block and strip the outer braces, so the !Join
    # above can re-insert the parameters and wrap the fragment back in {}.
    temp = json.loads(pipeline_definition)
    temp.pop("Parameters")
    pipeline_definition = json.dumps(temp)[1:-1]
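To make the effect of that snippet concrete, here's a standalone sketch using a hypothetical, abbreviated pipeline definition:

    import json

    # Hypothetical, abbreviated definition to show the before/after shape.
    definition = json.dumps({
        "Version": "2020-12-01",
        "Parameters": [{"Name": "ModelName", "Type": "String"}],
        "Steps": [{"Name": "BatchTransform", "Type": "Transform"}],
    })

    temp = json.loads(definition)
    temp.pop("Parameters")             # drop the parameters block...
    fragment = json.dumps(temp)[1:-1]  # ...and strip the outer braces

    # The !Join in the template wraps this fragment back into a full JSON
    # object while re-inserting "Parameters" with the resolved model name.
    print(fragment)
    # "Version": "2020-12-01", "Steps": [{"Name": "BatchTransform", "Type": "Transform"}]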

Essentially, what I'm proposing here is to break apart the pipeline definition, remove the parameters, and re-insert them in the CloudFormation template while "injecting" the model name.
Happy to explain in more detail if this doesn't seem clear.

2nd - more of a rearchitecture
Given the CloudFormation limitation explained above, this template could be re-architected so that, instead of a single batch transform step, the inference pipeline has the following three steps (sketched below):

1. LambdaStep to read the latest model (or the latest model package ARN can be passed as input)
2. CreateModelStep to create a model
3. TransformStep for the actual transformation

With this change, the SageMaker Model is created within the pipeline execution, so it doesn't need to be created at deployment time. Creating a model adds minimal overhead to the whole process, so it wouldn't affect the performance of the execution.
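For reference, here is a minimal sketch of this three-step pipeline with the SageMaker Python SDK (untested; the Lambda function, its ARN, the S3 paths, and the resource names are illustrative assumptions):

    import sagemaker
    from sagemaker.inputs import CreateModelInput, TransformInput
    from sagemaker.lambda_helper import Lambda
    from sagemaker.model import Model
    from sagemaker.transformer import Transformer
    from sagemaker.workflow.lambda_step import (
        LambdaOutput,
        LambdaOutputTypeEnum,
        LambdaStep,
    )
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import CreateModelStep, TransformStep

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()

    # Step 1: a Lambda function (to be written) that looks up the latest
    # approved model package and returns its container image and artifact.
    step_get_model = LambdaStep(
        name="GetLatestApprovedModel",
        lambda_func=Lambda(function_arn="arn:aws:lambda:<region>:<account>:function:get-latest-model"),
        inputs={"ModelPackageGroup": "my-model-group"},
        outputs=[
            LambdaOutput("ImageUri", LambdaOutputTypeEnum.String),
            LambdaOutput("ModelDataUrl", LambdaOutputTypeEnum.String),
        ],
    )

    # Step 2: create the SageMaker Model inside the execution, so nothing
    # has to exist at CloudFormation deployment time.
    model = Model(
        image_uri=step_get_model.properties.Outputs["ImageUri"],
        model_data=step_get_model.properties.Outputs["ModelDataUrl"],
        role=role,
        sagemaker_session=session,
    )
    step_create_model = CreateModelStep(
        name="CreateModel",
        model=model,
        inputs=CreateModelInput(instance_type="ml.m5.xlarge"),
    )

    # Step 3: run the batch transform against the freshly created model.
    transformer = Transformer(
        model_name=step_create_model.properties.ModelName,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-bucket/batch-output",
        sagemaker_session=session,
    )
    step_transform = TransformStep(
        name="BatchTransform",
        transformer=transformer,
        inputs=TransformInput(data="s3://my-bucket/batch-input"),
    )

    pipeline = Pipeline(
        name="BatchInferencePipeline",
        steps=[step_get_model, step_create_model, step_transform],
        sagemaker_session=session,
    )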

An example of this can be seen here: https://github.com/aws-samples/sagemaker-pipelines-callback-step-for-batch-transform/blob/main/batch_transform_with_lambdastep.ipynb (scroll to bottom to see the pipeline diagram)

@giuseppe-zappia I see you've opened PR#25 updating the batch inference template. Have you addressed this issue in that update as well?