This demo provides endpoints for interacting with Hugging Face models for chat and text summarization. It uses the Hugging Face API to perform these tasks and is designed to be run locally (this was just a small project to familiarize myself with Hugging Face and how it works).
- Chat Endpoint: Allows users to interact with a chat model.
- Summarize Endpoint: Provides text summarization using a pre-trained model.
Requirements:

- Python 3.8 or higher
- FastAPI
- Hugging Face Hub
- Requests
Setup:

- Clone the Repository:

  ```bash
  git clone <repository-url>
  cd <repository-directory>
  ```
- Create and Activate a Virtual Environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```
- Install Dependencies:

  ```bash
  pip install fastapi uvicorn huggingface_hub requests
  ```
- Set Environment Variables: Make sure you have a Hugging Face API key and set it in your environment:

  ```bash
  export HF_API_KEY=<your-huggingface-api-key>
  ```

  On Windows:

  ```cmd
  set HF_API_KEY=<your-huggingface-api-key>
  ```
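In the application code, the key can then be read from the environment. A minimal sketch (the variable name matches the setup step above):

```python
import os

# Read the Hugging Face API key exported in the setup step above.
hf_api_key = os.environ.get("HF_API_KEY")
if not hf_api_key:
    raise RuntimeError("HF_API_KEY is not set; see the setup instructions above.")
```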
To start the FastAPI server, run:

```bash
uvicorn main:app --reload
```

Replace `main` with the name of your Python file if it's different.
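For reference, here is a minimal sketch of what such a `main.py` might look like, assuming the endpoints call the Hugging Face Inference API through `huggingface_hub`'s `InferenceClient`; the model names below are placeholders, not necessarily the ones this demo uses:

```python
import os

from fastapi import FastAPI
from huggingface_hub import InferenceClient
from pydantic import BaseModel

app = FastAPI()
# Uses the HF_API_KEY environment variable set in the setup steps above.
client = InferenceClient(token=os.environ["HF_API_KEY"])

class TextIn(BaseModel):
    text: str

@app.post("/chat/")
def chat(body: TextIn):
    # Placeholder chat model; substitute whichever chat model you prefer.
    result = client.chat_completion(
        messages=[{"role": "user", "content": body.text}],
        model="HuggingFaceH4/zephyr-7b-beta",
    )
    return {"response": result.choices[0].message.content}

@app.post("/summarize/")
def summarize(body: TextIn):
    # Placeholder summarization model.
    result = client.summarization(body.text, model="facebook/bart-large-cnn")
    return {"summary": result.summary_text}
```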
Endpoints:

- POST /chat/: Interact with the chat model.

  Request Body:

  ```json
  { "text": "Your message here" }
  ```

  Response:

  ```json
  { "response": "Model's response here" }
  ```
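  With the server running, you can try it with curl (localhost:8000 is uvicorn's default address):

  ```bash
  curl -X POST http://localhost:8000/chat/ \
    -H "Content-Type: application/json" \
    -d '{"text": "Your message here"}'
  ```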
- POST /summarize/: Summarize the provided text.

  Request Body:

  ```json
  { "text": "Text to be summarized" }
  ```

  Response:

  ```json
  { "summary": "Summary of the provided text" }
  ```
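  Similarly, for summarization:

  ```bash
  curl -X POST http://localhost:8000/summarize/ \
    -H "Content-Type: application/json" \
    -d '{"text": "Text to be summarized"}'
  ```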
- If you encounter any issues, make sure your Hugging Face API key is correct and that you have set it properly in your environment.