yya518/FinBERT

sentiment predictions are not consistent

fanyuyu opened this issue · 1 comment

I am using your sentiment model to predict sentences from earnings calls. There are two sentences:

  • The probability of neutral is .99 for the sentence 'Thanks, Martin.'
  • The probability of positive is .94 for the sentence 'Thank you.'

I am trying to understand why it gives quite different labels. Initially I thought it was label confusion, but you already answered that question in Issue #17.
Could you explain more about how you fine-tuned the model for analyst tone, and what data you used for the classification model? Thank you!

I cannot reproduce the issue you described. I get Neutral for both sentences:

from transformers import BertForSequenceClassification, BertTokenizer, pipeline

finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')
nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)

results = nlp(['Thank you.',
               'Thanks, Martin.'])
print(results)
# [{'label': 'Neutral', 'score': 0.8304300308227539}, {'label': 'Neutral', 'score': 0.9677242040634155}]
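For context on why two similar "thanks" sentences can receive different labels: the pipeline's score is just a softmax over the model's three class logits, so a small shift in the logits near a decision boundary flips the argmax label while both probabilities stay high. A minimal sketch of that effect, using made-up logit values (not actual FinBERT outputs):

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ['Neutral', 'Positive', 'Negative']

# Hypothetical logits for two short "thanks" sentences:
# a small shift between Neutral and Positive flips the predicted label.
logits_a = [2.0, 1.5, -1.0]
logits_b = [1.5, 2.0, -1.0]

for logits in (logits_a, logits_b):
    probs = softmax(logits)
    label = labels[probs.index(max(probs))]
    print(label, [round(p, 3) for p in probs])
```

This is why near-boundary sentences like brief pleasantries are sensitive to tiny wording changes; the model has no strong tonal evidence either way.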