Demo video: `demo.mov`
MetaModel is a web application that streamlines the extraction and generation of structured data from unstructured text or images. It leverages advanced language models to parse information according to the provided schema, which can be defined either in plain language or through a user-friendly visual interface.
Built on top of instructor and pydantic, MetaModel creates dynamic Pydantic models for constraining and validating data. It also integrates with litellm to support language models from various providers.
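As a rough sketch of how that stack fits together (this is not MetaModel's actual code; the model name and schema fields below are placeholders):

```python
# Illustrative sketch only: the general instructor + Pydantic + litellm pattern
# that MetaModel builds on, with placeholder schema fields and model name.
import instructor
from litellm import completion
from pydantic import Field, create_model

# Dynamically build a Pydantic model, similar in spirit to how MetaModel
# turns a user-defined schema into a validation model.
Person = create_model(
    "Person",
    name=(str, Field(description="Full name")),
    age=(int, Field(ge=0, description="Age in years")),
)

# instructor wraps litellm's completion call so the LLM response is parsed
# and validated against the Pydantic model, retrying on validation errors.
client = instructor.from_litellm(completion)
person = client.chat.completions.create(
    model="gpt-4o-mini",  # any litellm-supported model
    response_model=Person,
    messages=[{"role": "user", "content": "Jane Doe is a 34-year-old engineer."}],
)
print(person.model_dump())  # e.g. {'name': 'Jane Doe', 'age': 34}
```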
Key features:

- Intuitive Schema Definition: Easily define complex data structures using MetaModel's JSON format. Specify data types, constraints, nested schemas, and more. Or, describe your schema in plain language, and let language models generate it for you!
- LLM-Powered Data Extraction: Parse text or images into structured data using language models from various providers, supported by litellm.
- Built-in Validation: Ensure data integrity with Pydantic's built-in data validation against your schema constraints.
- Interactive Web Interface: A user-friendly interface allows you to easily create, edit, and test your schemas.
- Streamlined Workflow: Seamlessly integrate data extraction into your applications and workflows using MetaModel's backend API. Define schemas, send parse requests, and receive structured data effortlessly (see the sketch after this list).
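For a sense of what that integration could look like from Python, here is a minimal client sketch. The endpoint path, payload shape, and schema format are illustrative assumptions rather than MetaModel's documented API; if the backend is FastAPI (as the `uvicorn` command below suggests), its interactive docs at `/docs` list the actual routes.

```python
# Hypothetical client sketch: the endpoint path and payload shape are assumptions,
# not MetaModel's documented API. Consult the backend's own API docs.
import requests

API_URL = "http://localhost:8000"  # matches VITE_API_URL from the setup steps

payload = {
    # Placeholder schema: field names and types here are illustrative only.
    "schema": {
        "name": {"type": "string", "description": "Full name"},
        "age": {"type": "integer", "minimum": 0},
    },
    "text": "Jane Doe is a 34-year-old engineer.",
}

response = requests.post(f"{API_URL}/parse", json=payload)  # '/parse' is a guess
response.raise_for_status()
print(response.json())
```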
To run MetaModel locally, you will need:

- Node.js (v20 or later)
- Python (v3.11 or later)
- Docker and Docker Compose (optional, for containerized deployment)
1. Clone the repository:

   ```bash
   git clone https://github.com/lazyhope/metamodel.git
   cd metamodel
   ```

2. Set up the frontend:

   ```bash
   cd frontend
   echo "VITE_API_URL=http://localhost:8000" > .env  # Set the API URL
   npm install
   ```

3. Set up the backend:

   ```bash
   cd ../backend
   echo "BACKEND_CORS_ORIGINS=http://localhost,http://localhost:5173" > .env  # Optional: set the CORS origins (separated by commas)
   pip install -r requirements.txt
   ```

4. Start the backend server:

   ```bash
   cd backend
   uvicorn app.main:app --reload
   ```

5. In a new terminal, start the frontend development server:

   ```bash
   cd frontend
   npm run dev
   ```

6. Open your browser and navigate to http://localhost:5173 to use the app.
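Once both servers are running, you can quickly confirm that the backend is reachable at the URL you set in `VITE_API_URL`. This is a minimal, standard-library-only sketch; it only checks that something answers on that address and is not specific to MetaModel:

```python
# Minimal connectivity check for the backend started above.
# It only verifies that a server is listening on the configured URL.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

API_URL = "http://localhost:8000"  # same value as VITE_API_URL in frontend/.env

try:
    with urlopen(API_URL, timeout=5) as resp:
        print(f"Backend reachable, HTTP status {resp.status}")
except HTTPError as err:
    # Any HTTP response (even 404) still means the server is up.
    print(f"Backend reachable, HTTP status {err.code}")
except URLError as err:
    print(f"Backend not reachable: {err.reason}")
```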
For complex schema definitions and parsing, language models may require multiple attempts. If you deploy on Vercel, adjust the default `maxDuration` in your Vercel project settings from 10 seconds to 60 seconds to prevent timeouts during retry attempts.
To deploy the application using Docker:
1. Ensure Docker and Docker Compose are installed on your system.

2. Edit the `.env` file in the root directory and set your environment variables, for example:

   ```
   VITE_API_URL=http://localhost:8000
   BACKEND_CORS_ORIGINS="http://localhost,http://localhost:5173"
   ```

3. Run the following command in the root directory:

   ```bash
   docker compose up --build
   ```

4. Access the application at http://localhost:80.
It is also possible to deploy the frontend and backend separately using their respective Dockerfiles and environment variables.
Once the app is running, you can:

- Choose language models and enter your API key in the settings.
- Customize other parameters for optimal performance.
- Use the schema builder interface to create your own data structure.
- Interact with the AI chat to refine your schema or parse data.
- Import existing JSON schemas or export your created schemas.
Planned:

- Add tests