philschmid/serverless-bert-huggingface-aws-lambda-docker

Local run using Docker and curl: error

kshitijzutshi opened this issue · 3 comments

After running `docker run -p 8080:8080 bert-lambda` and then sending a curl POST request:

curl --request POST \
  --url http://localhost:8080/2015-03-31/functions/function/invocations \
  --header 'Content-Type: application/json' \
  --data '{"body":"{\"context\":\"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).\",\n\"question\":\"What is the GLUE score for Bert?\"\n}"}'
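As an aside, the nested payload in the curl command above can be built programmatically. A minimal Python sketch (field names follow the curl example; the inner JSON is serialized to a string because the event carries `body` as text, which is why the curl data contains escaped quotes):

```python
import json

# Context truncated for brevity; use the full abstract from the curl example.
context = (
    "We introduce a new language representation model called BERT, which "
    "stands for Bidirectional Encoder Representations from Transformers."
)
question = "What is the GLUE score for Bert?"

# The event wraps the payload the way API Gateway does: the handler receives
# it under "body" as a JSON *string*, so the inner payload is serialized twice.
inner = json.dumps({"context": context, "question": question})
event = {"body": inner}
payload = json.dumps(event)

# POST `payload` to http://localhost:8080/2015-03-31/functions/function/invocations
```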

I get the following error about the model path: unable to load `./model` from config.

Hey @kshitijzutshi,

it looks like you don't have the model stored inside the container at `model/`
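For anyone hitting the same error: the image needs the model files baked in under `model/` before `docker build`. A minimal sketch of materializing them (the model id below is an assumption for illustration, not necessarily the one used in this repo):

```python
from pathlib import Path

def save_model(model_id: str = "distilbert-base-uncased-distilled-squad",
               target: str = "./model") -> Path:
    """Download tokenizer and model with transformers and save them to `target`."""
    # Imported lazily so the sketch can be read/run without transformers installed.
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    path = Path(target)
    path.mkdir(parents=True, exist_ok=True)
    AutoTokenizer.from_pretrained(model_id).save_pretrained(path)
    AutoModelForQuestionAnswering.from_pretrained(model_id).save_pretrained(path)
    return path
```

After running this once, rebuild the image so the `model/` directory is copied into the container.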

@philschmid Thanks for the response. It seems it was an issue with the version of the transformers library I had installed. I was able to resolve it. ✅

⌛ I wanted to get some guidance from you regarding the following:

I am new to the AWS ecosystem and Hugging Face, having worked with GCP in the past. I am looking to implement an end-to-end big data pipeline project using AWS services like Lambda, API Gateway, S3, and SageMaker. 🧠

Could you share some project ideas that I could implement? I'd really appreciate your input. 💡

Thanks again!

We did a workshop series at the end of last year on how to productionize Transformers with Amazon SageMaker, including DevOps workflows from training to inference.

You can find the workshop here: https://github.com/philschmid/huggingface-sagemaker-workshop-series
It includes videos and code for all 3 workshops we did.