patrickjohncyh/fashion-clip

Consume Hugging Face model via AWS SageMaker and Lambda

Closed this issue · 2 comments

I realize this might be out of scope, but I'm hoping someone can point me in the right direction.

I have deployed the Hugging Face model to SageMaker and I'm calling it via a Lambda function. However, what inputs does the model expect for zero-shot image classification? I'm assuming I need an image URL or a base64-encoded image somewhere?

What should the payload look like?

{
 "inputs": "????"
}
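
My guess, based on the transformers zero-shot-image-classification pipeline, is that it wants an image plus candidate labels, so something roughly like the snippet below. The URL and labels are made up, and I'm not sure whether the SageMaker toolkit accepts a URL here or needs a base64-encoded image instead:

{
 "inputs": "https://example.com/dress.jpg",
 "parameters": {
   "candidate_labels": ["dress", "shirt", "shoes"]
 }
}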

Lambda code:

import os
import boto3
import json

# Name of the deployed SageMaker endpoint, set as a Lambda environment variable
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    print("event: " + json.dumps(event))

    # Forward the incoming event to the endpoint as the JSON request body
    payload = json.dumps(event)
    print(payload)

    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='application/json',
                                       Body=bytes(payload, 'utf-8'))
    print(response)

    # Read and decode the model's JSON response
    result = json.loads(response['Body'].read().decode())
    print(result)

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }
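
Note that the handler just forwards the Lambda event to the endpoint verbatim, so whatever shape the event has is exactly what the model server receives, e.g. the guessed payload above.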
vinid commented

Hi @roger-rodriguez!

I am not sure I know how to help (maybe @patrickjohncyh?), but I think this is a good question for the HF transformers repo.

Our model has the same architecture as the HF CLIP (https://huggingface.co/openai/clip-vit-base-patch32), so what works for the general model should also work for FashionCLIP.
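
For what it's worth, here is a minimal local sketch with the transformers zero-shot-image-classification pipeline (the image URL and labels are placeholders); whatever input format works here should also transfer to the deployed model:

from transformers import pipeline

# Same interface as openai/clip-vit-base-patch32, just pointing at FashionCLIP
classifier = pipeline("zero-shot-image-classification",
                      model="patrickjohncyh/fashion-clip")

# The pipeline accepts an image URL, local path, or PIL image plus candidate labels
result = classifier("https://example.com/dress.jpg",
                    candidate_labels=["a dress", "a shirt", "a pair of shoes"])

print(result)  # list of {"score": ..., "label": ...} dicts, highest score first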

roger-rodriguez commented

Thanks for the quick reply @vinid! We can close this one. I ended up downloading the model and deploying it with a Docker-based Lambda.
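
In case it is useful to anyone landing here, a rough sketch of what such a container-based Lambda handler can look like, assuming the FashionCLIP weights are copied into the image at /opt/model and the event carries hypothetical image_url and candidate_labels fields:

import json
from transformers import pipeline

# Assumption: the model weights are baked into the container image at /opt/model,
# so the function needs no SageMaker endpoint at all
classifier = pipeline("zero-shot-image-classification", model="/opt/model")

def lambda_handler(event, context):
    # Accept either a raw dict event or an API Gateway event with a JSON string body
    body = json.loads(event["body"]) if isinstance(event.get("body"), str) else event

    result = classifier(body["image_url"],
                        candidate_labels=body["candidate_labels"])

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }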