aws-samples/zero-administration-inference-with-aws-lambda-for-hugging-face

Input Sample for Question Answering Pipeline

ishwara-bhat opened this issue · 1 comment

Hi,
This is a great project. It worked for sentiment analysis example. However, my need is question answering use case.

I created myquestionanswer.py as below:

import json
from transformers import pipeline

summarizer = pipeline("question-answering")

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": summarizer(event['article'])[0]
    }
    return response

That is, the only change I made is the string passed to pipeline; it is now 'question-answering'.

What JSON input format should be given in the Lambda test event? I tried the following; both failed:

  1. { "context": "My name is Rama. Sita is his wife", "question": " what is your name?"}
  2. {"context": questions": [ "What is the name?", "Who is his wife?"] }

I looked at other Hugging Face examples, but they aren't applicable here since they feed inputs directly into the model.

Thanks in advance.

Your handler should be something like this:

from transformers import pipeline

question_answerer = pipeline("question-answering")

def handler(event, context):
    response = {
        "statusCode": 200,
        "body": question_answerer({
            'question': event['question'],
            'context': event['context']
        })
    }
    return response

and the answer would be:
{'score': 0.95555555, 'start': 16, 'end': 20, 'answer': 'Sita'}
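The Lambda test event only needs the two keys the handler reads. Below is a minimal local sketch of that round trip; the transformers pipeline is replaced by a stub so it runs without downloading a model, but the stub returns the same score/start/end/answer dict shape the real pipeline does (the hard-coded answer is illustrative only):

```python
import json

# Stub standing in for transformers.pipeline("question-answering").
# The real pipeline does extractive QA; this just returns a fixed span
# so the handler contract can be exercised locally.
def question_answerer(inputs):
    context = inputs['context']
    answer = "Sita"  # placeholder answer for this illustration
    start = context.find(answer)
    return {
        'score': 0.95,
        'start': start,
        'end': start + len(answer),
        'answer': answer,
    }

def handler(event, context):
    # Read the same two keys the Lambda test event supplies.
    return {
        "statusCode": 200,
        "body": question_answerer({
            'question': event['question'],
            'context': event['context'],
        }),
    }

# This dict is exactly what you would paste into the Lambda console's
# test-event editor:
event = {
    "context": "My name is Rama. Sita is his wife",
    "question": "Who is his wife?",
}
result = handler(event, None)
print(json.dumps(result))
```

With the real pipeline, the same event shape applies. Note also that the transformers question-answering pipeline accepts a list of questions against one shared context and then returns a list of such dicts, which covers the multi-question case attempted above.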