HF

OpenVINO

  1. Load a text generation model from Hugging Face's Transformers library (specifically, the TinyLlama model).
  2. Tokenize a given text input using the model's tokenizer.
  3. Convert the loaded model to OpenVINO IR format for inference on Intel hardware.
  4. Perform inference with the converted OpenVINO model.
  5. Print the decoded output from the inference.

Reads

reddit1

llamaHF

TinyLlama Repo

TinyLlama HF Model

Hugging Face Model Hub with OpenVINO

Hugging Face Model Hub with OpenVINO -> Requirements

Twitter-roBERTa-base for Sentiment Analysis