AI Chat Hub - Universal Multi-Provider Assistant - React, TypeScript, Vite Full-Stack Project (Multi-Model AI Chatbot with Business Insights & Performance Dashboard)
A modern, responsive AI chatbot application supporting multiple AI providers (Google Gemini, Groq, OpenRouter, Hugging Face, and OpenAI) with persistent chat history. Built with React, TypeScript, and Vite, it includes a business-insights analytics and performance dashboard, a typewriter effect, and animated icons for a polished user experience.
- Live-Demo: https://multi-ai-chat-hub.vercel.app/
- Overview
- Features
- Technologies Used
- Project Structure
- Installation
- Configuration
- Usage
- Project Walkthrough
- Component Details
- API Integration
- Reusing Components
- Deployment
- Conclusion
AI Chat Hub is a comprehensive, production-ready chat application that integrates with multiple AI providers, offering users the flexibility to choose their preferred AI model or let the system automatically select the best available option. The application features a modern UI with dark theme, responsive design, chat history management, and real-time AI interactions.
- Multi-Provider AI Support: Seamlessly switch between Google Gemini, Groq, OpenRouter, Hugging Face, and OpenAI
- Auto Fallback System: Automatic provider switching when one fails
- Chat History Management: Save and manage multiple conversation threads with local storage persistence
- Real-time Typing Indicator: Visual feedback when AI is processing
- Emoji Picker: Add emojis to messages with an intuitive picker interface
- Business Insights Dashboard: Real-time analytics and performance metrics for admin monitoring
- Anonymous Session Tracking: Track usage patterns, API calls, and user engagement without authentication
- Responsive Design: Optimized for desktop, tablet, and mobile devices
- Dark Theme: Modern dark UI with gradient accents
- Typewriter Effect: Elegant text animation for enhanced user experience
- Collapsible Sidebar: Efficient space management on all screen sizes
- Tooltip System: Informative tooltips for better UX
- React 18.3.1: Modern UI library with hooks
- TypeScript 5.6.3: Type-safe JavaScript
- Vite 7.1.12: Fast build tool and dev server
- Lucide React: Modern icon library
- Prisma ORM: Type-safe database access
- PostgreSQL (Neon): Serverless PostgreSQL database
- Vercel Serverless Functions: API endpoints for analytics
- Emoji Mart: Professional emoji picker component
- Boxicons: Modern icon library
- Font Awesome: Additional icon set
- ESLint: Code quality and linting
- TypeScript ESLint: TypeScript-specific linting rules
ai-chat-bot/
├── api/ # Vercel serverless functions
│ ├── events.ts # POST /api/events - Track analytics events
│ ├── usage.ts # GET /api/usage - Usage statistics
│ ├── insights.ts # GET /api/insights - Provider analytics
│ ├── providers.ts # GET /api/providers - Provider details
│ └── dashboard.ts # GET /api/dashboard - All analytics data
├── prisma/
│ └── schema.prisma # Database schema (PostgreSQL)
├── public/
│ ├── ai.svg # Background SVG
│ ├── chatbot.svg # App icon
│ └── favicon.ico # Browser favicon
├── src/
│ ├── Components/
│ │ ├── ChatBotApp.tsx # Main chat interface
│ │ ├── ChatBotApp.css # Chat styles
│ │ ├── ChatBotStart.tsx # Welcome screen
│ │ ├── ChatBotStart.css # Welcome styles
│ │ ├── BusinessInsights.tsx # Analytics dashboard
│ │ ├── BusinessInsights.css # Dashboard styles
│ │ ├── Tooltip.tsx # Tooltip component
│ │ ├── Tooltip.css # Tooltip styles
│ │ ├── TypingIndicator.tsx
│ │ └── TypingIndicator.css
│ ├── hooks/
│ │ └── useTypewriter.ts # Typewriter animation hook
│ ├── services/
│ │ ├── aiService.ts # AI API integration
│ │ └── aiProviders.ts # Provider configurations
│ ├── App.tsx # Root component
│ ├── main.tsx # Entry point
│ ├── index.css # Global styles
│ └── vite-env.d.ts # TypeScript environment types
├── .env # Environment variables
├── index.html # HTML template
├── package.json # Dependencies & scripts
├── tsconfig.json # TypeScript config
├── vite.config.ts # Vite configuration
└── README.md # This file
- Node.js (v18 or higher)
- npm or yarn package manager
- Git (for cloning)
1. Clone the repository
git clone https://github.com/yourusername/ai-chat-bot.git
cd ai-chat-bot
2. Install dependencies
npm install
3. Configure environment variables (see the Configuration section below)
4. Start the development server
npm run dev
5. Open your browser and navigate to http://localhost:5173 (or the port shown in the terminal)
Create a .env file in the root directory with your AI provider API keys and database connection:
# Google Gemini AI API (1.5M free tokens/month)
VITE_GEMINI_API_KEY=your_gemini_api_key_here
# Groq API (Fast Llama 3 - Always-free daily quota)
VITE_GROQ_API_KEY=your_groq_api_key_here
# OpenRouter API (Multi-model aggregator)
VITE_OPENROUTER_API_KEY=your_openrouter_api_key_here
# Hugging Face Inference API
VITE_HUGGINGFACE_API_KEY=your_huggingface_api_key_here
# OpenAI API
VITE_OPENAI_API_KEY=your_openai_api_key_here
# PostgreSQL Database Connection (for Analytics)
DATABASE_URL=postgresql://username:password@hostname:port/database?sslmode=require
Note: For local development, the analytics API endpoints will not work (they require Vercel deployment). The frontend handles this gracefully with dev-mode guards.
- Visit Google AI Studio
- Sign in with your Google account
- Click "Create API Key"
- Copy and paste it into the .env file
- Visit Groq Console
- Sign up or log in
- Navigate to API Keys section
- Create and copy your API key
- Visit OpenRouter.ai
- Sign up for an account
- Go to Keys section
- Create a new key
Important: Hugging Face has migrated to a new Inference Providers API. The old endpoint is deprecated.
- Visit Hugging Face
- Create an account
- Go to Settings > Access Tokens (or hf.co/settings/tokens)
- Create a fine-grained token with "Make calls to Inference Providers" permission
- Copy and paste it into the .env file
Note: The app now uses the new OpenAI-compatible endpoint at https://router.huggingface.co/v1/chat/completions for better reliability and access to multiple models. The old api-inference.huggingface.co endpoint was deprecated in January 2025 and returns 404 errors. The app automatically tries 16 different free-tier models in order until one responds successfully (6 primary models + 10 fallback models).
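The try-models-in-order behavior can be sketched roughly like this (the `tryModels` helper and the injected `callModel` function are illustrative names, not the app's actual API):

```typescript
// Illustrative sketch of trying Hugging Face models in order until one responds.
// Model IDs come from the lists in this README; the helper names are hypothetical.
const HF_MODELS: string[] = [
  "meta-llama/Llama-3.1-8B-Instruct",
  "mistralai/Mistral-7B-Instruct-v0.3",
  "HuggingFaceH4/zephyr-7b-beta",
  // ...the remaining primary and fallback models
];

async function tryModels(
  models: string[],
  callModel: (model: string) => Promise<string>
): Promise<string> {
  let lastError: unknown;
  for (const model of models) {
    try {
      // In the real app, callModel would POST to
      // https://router.huggingface.co/v1/chat/completions with { model, messages }.
      return await callModel(model);
    } catch (err) {
      lastError = err; // remember the failure and move on to the next model
    }
  }
  throw lastError ?? new Error("No Hugging Face model responded");
}
```

Because each attempt is isolated in a try/catch, a deprecated or rate-limited model simply falls through to the next one in the list.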
- Visit OpenAI Platform
- Sign up or log in
- Go to API Keys section
- Create a new secret key
The Business Insights feature requires a PostgreSQL database for storing analytics data.
- Visit Neon Console
- Sign up for a free account (generous free tier)
- Create a new project
- Copy the connection string from your dashboard
- Paste it into your .env file as DATABASE_URL
Note: The connection string format should be:
DATABASE_URL=postgresql://username:password@hostname:port/database?sslmode=require
Set up the database schema:
npx prisma generate
npx prisma db push
⚠️ Never commit your .env file to version control
- The .env file is already in .gitignore
- You can use any combination of providers (at least one is required)
- The app will automatically fall back to other available providers if one fails
- The database is only needed for the Business Insights analytics feature
- Analytics tracking is anonymous and uses session-based tracking (no user accounts required)
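Anonymous session tracking of this kind usually boils down to persisting a random ID per browser. A minimal sketch (the storage key and function names here are hypothetical, not the app's real ones):

```typescript
// Hypothetical sketch of anonymous session tracking: persist a random ID in
// storage so the same browser maps to the same session, with no account needed.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function getSessionId(
  store: KVStore,
  generate: () => string = () => crypto.randomUUID()
): string {
  const existing = store.getItem("ai-chat-session-id");
  if (existing) return existing; // reuse the stable, anonymous ID
  const id = generate();
  store.setItem("ai-chat-session-id", id);
  return id;
}
```

In the browser, `localStorage` would be passed as the store; the injected `generate` parameter just makes the sketch testable.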
1. Start the application
npm run dev
2. Click "Get Started" on the welcome screen
3. Select an AI provider from the dropdown menu in the header
4. Type your message in the input field
5. Send the message by:
- Pressing the Enter key
- Clicking the send button
- Using the emoji picker to add emojis
- Create New Chat: Click the "+" button in the sidebar
- Switch Between Chats: Click on any chat in the sidebar
- Delete Chat: Click the "X" icon on any chat item
- Toggle Sidebar: Click the hamburger menu button
- Auto Provider Selection: Leave "Auto" selected for automatic fallback
- Business Insights Dashboard: Click the "📊 Insights" button in the header to view analytics
The Business Insights page provides comprehensive analytics and performance metrics:
- Total events and sessions
- Recent activity (24h)
- Active sessions
- Storage usage
- Uptime indicator
- API calls by provider
- Success/failure rates
- Average response times
- Provider performance comparison
- Local storage usage
- Total messages and sessions
- Performance metrics
- Most-used providers
- Success rates per provider
- Average response times
- Usage trends
- Hourly activity breakdown
- Peak usage times
- Daily event trends
- Visual activity charts
- Average events per session
- Session duration
- Total conversations
- Engagement patterns
- Total errors
- Errors by provider
- Overall success rate
- Error patterns
- Fast/Normal/Slow request distribution
- Min/Median/Max duration
- Performance metrics
Note: The Business Insights feature requires Vercel deployment with a configured PostgreSQL database. It tracks anonymous session data and does not require user authentication.
User Input → ChatBotApp Component → AI Service → Provider API → Response → Display
The application uses React hooks for state management:
- App.tsx: Manages global chat state, active chat, and chat list
- ChatBotApp.tsx: Manages message state, input value, typing indicators
- Local Storage: Persists chat history and messages
- Message Creation: User types message → stored in state
- API Call: Message sent to AI service → provider selected
- Response Handling: AI response received → displayed to user
- Persistence: All messages saved to localStorage
- Chat Management: Multiple chats managed with unique IDs
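The persistence step above can be sketched as a simple JSON round-trip (the `Chat` shape and storage key here are assumptions for illustration; the real app defines its own types):

```typescript
// Sketch of localStorage chat persistence: serialize on save, parse on load,
// and fall back to an empty list if the stored data is missing or corrupted.
interface Message {
  type: "prompt" | "response";
  text: string;
  timestamp: string;
}
interface Chat {
  id: string;
  messages: Message[];
}
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const CHATS_KEY = "chats"; // hypothetical key

function saveChats(store: KVStore, chats: Chat[]): void {
  store.setItem(CHATS_KEY, JSON.stringify(chats));
}

function loadChats(store: KVStore): Chat[] {
  const raw = store.getItem(CHATS_KEY);
  if (!raw) return [];
  try {
    return JSON.parse(raw) as Chat[];
  } catch {
    return []; // corrupted data: start fresh rather than crash
  }
}
```

In the browser, `localStorage` itself satisfies the `KVStore` shape, so `loadChats(localStorage)` would restore the chat list on startup.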
App
├── ChatBotStart (Initial Screen)
└── ChatBotApp (Main Application)
├── Chat List Sidebar
│ ├── Chat List Header
│ └── Chat List Items
└── Chat Window
├── Chat Header
├── Messages Area
└── Input Area
├── Emoji Picker
├── Input Field
└── Send Button
Location: src/Components/ChatBotApp.tsx
Purpose: Main chat interface component
Key Features:
- Manages chat messages and input
- Handles AI provider selection
- Manages emoji picker visibility
- Controls sidebar collapse/expand
- Implements message sending and receiving
Props:
interface ChatBotAppProps {
chats: Chat[];
setChats: React.Dispatch<React.SetStateAction<Chat[]>>;
activeChat: string | null;
setActiveChat: React.Dispatch<React.SetStateAction<string | null>>;
onNewChat: (initialMessage?: string) => void;
}
Key Methods:
- sendMessage(): Sends the message to the AI and handles the response
- handleKeyDown(): Handles the Enter key press
- handleEmojiSelect(): Adds the selected emoji to the input
- handleClickOutside(): Closes open dropdowns and pickers
Location: src/Components/ChatBotStart.tsx
Purpose: Welcome screen with typewriter animation
Key Features:
- Animated typewriter effect
- "Get Started" button
- Gradient background with SVG
Props:
interface ChatBotStartProps {
onStartChat: () => void;
}
Location: src/Components/Tooltip.tsx
Purpose: Reusable tooltip component
Usage:
<Tooltip text="Tooltip text" position="top">
<button>Hover me</button>
</Tooltip>
Features:
- Dynamic positioning
- Auto-adjustment for fixed elements
- Smooth animations
- Multiple positions (top, bottom, left, right)
Location: src/Components/TypingIndicator.tsx
Purpose: Shows animated dots when AI is typing
Location: src/Components/BusinessInsights.tsx
Purpose: Analytics dashboard for monitoring user activity and performance
Key Features:
- Real-time analytics from PostgreSQL database
- 8 different tabs for various metrics
- Anonymous session tracking
- Provider performance analytics
- Hourly activity charts
- Error monitoring
Props:
interface BusinessInsightsProps {
onBack: () => void;
}
Tabs:
- Overview: Total events, sessions, storage, uptime
- Provider Analytics: Per-provider statistics
- Storage & Performance: Local storage and performance metrics
- Usage Patterns: Provider usage trends
- Time & Trends: Hourly/daily activity charts
- User Engagement: Session metrics
- Error Monitoring: Error tracking by provider
- Performance: Request speed distribution
Location: src/hooks/useTypewriter.ts
Purpose: Creates typewriter animation effect
Usage:
const { displayText, isComplete } = useTypewriter({
text: "Your text here",
speed: 50,
delay: 500,
});
Features:
- Configurable typing speed
- Optional delay before starting
- Returns completion status
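The core of a hook like this is pure timing logic: given elapsed time, how much of the text should be visible? A framework-agnostic sketch (the parameter semantics are assumptions based on the usage above; the real hook wraps something like this in a React interval):

```typescript
// Framework-agnostic sketch of typewriter timing. A React hook would call
// this on each tick and re-render with the returned slice of the text.
function typewriterFrame(
  text: string,
  elapsedMs: number,
  speedMs: number,
  delayMs = 0
): { displayText: string; isComplete: boolean } {
  const active = Math.max(0, elapsedMs - delayMs); // time spent actually typing
  const chars = Math.min(text.length, Math.floor(active / speedMs));
  return { displayText: text.slice(0, chars), isComplete: chars === text.length };
}
```

Keeping the timing math separate from React makes the animation easy to unit-test without rendering anything.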
The project includes serverless API endpoints for tracking and analytics:
Location: api/ directory (Vercel serverless functions)
- POST /api/events: Track analytics events
  - Records: API calls, success/failure, duration, provider
  - Creates/updates session records
  - Used by the frontend to log all user interactions
- GET /api/usage: Fetch usage statistics
  - Returns: Total events, sessions, recent activity
  - Aggregates session and event data
- GET /api/insights: Fetch provider insights
  - Returns: Provider stats, success rates, daily trends
  - Calculates provider performance metrics
- GET /api/providers: Fetch detailed provider data
  - Returns: Individual provider analytics
  - Includes: Total calls, success/failure counts, average duration
- GET /api/dashboard: Fetch all dashboard data
  - Returns: Combined data from all analytics endpoints
  - Single request for the complete dashboard view
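The validation flow of the events endpoint can be sketched as below. This is a hedged illustration, not the project's actual handler: the real function uses Prisma against PostgreSQL, while here an injected store stands in for the database and all names/shapes are assumptions.

```typescript
// Sketch of a POST /api/events-style handler: reject wrong methods, validate
// required fields, persist the event, and report the outcome via status codes.
interface AnalyticsEvent {
  sessionId: string;
  eventType: string;
  provider?: string;
  success?: boolean;
  duration?: number; // milliseconds, matching the Event schema below
}
interface EventStore {
  create(event: AnalyticsEvent): Promise<void>;
}

async function handleEvents(
  method: string,
  body: unknown,
  store: EventStore
): Promise<{ status: number; json: object }> {
  if (method !== "POST") {
    return { status: 405, json: { error: "Method not allowed" } };
  }
  const event = body as Partial<AnalyticsEvent>;
  if (!event?.sessionId || !event?.eventType) {
    return { status: 400, json: { error: "sessionId and eventType are required" } };
  }
  await store.create({ success: true, ...event } as AnalyticsEvent);
  return { status: 201, json: { ok: true } };
}
```

A real Vercel function would read `req.method` and `req.body` and write the status with `res.status(...).json(...)`; separating the logic from the transport keeps it testable.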
The analytics system uses PostgreSQL with the following schema:
model Event {
id String @id @default(uuid())
sessionId String
eventType String
provider String?
success Boolean @default(true)
duration Int? // Duration in milliseconds
metadata String? // JSON string for additional data
timestamp DateTime @default(now())
}
model Session {
sessionId String @id @unique
userAgent String?
platform String?
startedAt DateTime @default(now())
lastSeen DateTime @updatedAt
}
Location: src/services/aiService.ts
Purpose: Centralized AI API integration with fallback mechanism
Location: src/services/aiProviders.ts
Purpose: Defines all AI provider configurations
Supported Providers:
- Google Gemini (gemini-2.0-flash)
  - Endpoint: https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent
  - Model: gemini-2.0-flash
- Groq (llama-3.1-8b-instant)
  - Endpoint: https://api.groq.com/openai/v1/chat/completions
  - Model: llama-3.1-8b-instant
- OpenRouter (meta-llama/llama-3.2-3b-instruct:free)
  - Endpoint: https://openrouter.ai/api/v1/chat/completions
  - Model: meta-llama/llama-3.2-3b-instruct:free
- Hugging Face (16 models with fallback, via the new Inference Providers API)
  - Endpoint: https://router.huggingface.co/v1/chat/completions (OpenAI-compatible)
  - Primary Models: meta-llama/Llama-3.1-8B-Instruct, mistralai/Mistral-7B-Instruct-v0.3, HuggingFaceH4/zephyr-7b-beta, tiiuae/falcon-7b-instruct, google/gemma-2b-it, NousResearch/Hermes-2-Pro-Mistral-7B
  - Fallback Models: mistralai/Mistral-7B-Instruct-v0.2, google/gemma-2b, google/gemma-7b, mistralai/Mixtral-8x7B-Instruct-v0.1, tiiuae/falcon-7b, microsoft/phi-1_5, bigscience/bloomz-560m, HuggingFaceH4/zephyr-7b-alpha, tiiuae/falcon-40b-instruct, facebook/bart-large-cnn
- OpenAI (gpt-4o-mini)
  - Endpoint: https://api.openai.com/v1/responses
  - Model: gpt-4o-mini
The AI service automatically tries providers in this order:
- Gemini
- Groq
- OpenRouter
- Hugging Face
- OpenAI
If one provider fails, it automatically tries the next available provider.
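The fallback order described above can be sketched as follows (the `getWithFallback` name and the injected call function are illustrative, not the service's actual API):

```typescript
// Sketch of auto-fallback: try the preferred provider first (if any), then the
// default order, returning the first successful response.
const PROVIDER_ORDER = ["gemini", "groq", "openrouter", "huggingface", "openai"] as const;
type Provider = (typeof PROVIDER_ORDER)[number];

async function getWithFallback(
  message: string,
  call: (provider: Provider, message: string) => Promise<string>,
  preferred?: Provider
): Promise<{ provider: Provider; content: string }> {
  const order: Provider[] = preferred
    ? [preferred, ...PROVIDER_ORDER.filter((p) => p !== preferred)]
    : [...PROVIDER_ORDER];
  let lastError: unknown;
  for (const provider of order) {
    try {
      return { provider, content: await call(provider, message) };
    } catch (err) {
      lastError = err; // quota exhausted, network error, etc. - try the next one
    }
  }
  throw lastError ?? new Error("All providers failed");
}
```

Returning the provider alongside the content lets the UI show which model actually answered after a fallback.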
This project is designed with reusable components that can be easily integrated into other projects.
import Tooltip from "./Components/Tooltip";
function MyComponent() {
return (
<Tooltip text="This is a tooltip" position="top">
<button>Hover me</button>
</Tooltip>
);
}
import { useTypewriter } from "./hooks/useTypewriter";
function MyComponent() {
const { displayText, isComplete } = useTypewriter({
text: "Loading...",
speed: 50,
delay: 0,
});
return <div>{displayText}</div>;
}
import { aiService } from "./services/aiService";
import { AIProvider } from "./services/aiProviders";
async function sendMessage(message: string) {
try {
const response = await aiService.getChatResponse(message, "gemini");
console.log(response.content);
} catch (error) {
console.error("Error:", error);
}
}
Here's how to integrate this chat system into your own project:
- Copy the Components folder to your project
- Copy the services folder for AI integration
- Copy the hooks folder for reusable hooks
- Update API keys in your .env file
- Import and use components as needed
npm run build
This creates an optimized production build in the dist/ folder.
1. Install the Vercel CLI
npm install -g vercel
2. Set up the database
- Create a Neon PostgreSQL database
- Copy your connection string
- Update your DATABASE_URL in .env
3. Deploy
vercel
4. Set environment variables in Vercel
- Go to your Vercel project settings
- Navigate to Environment Variables
- Add all your API keys from the .env file
- Important: Add DATABASE_URL from your Neon database
5. Run database migrations
After deployment, trigger a build that runs Prisma:
npx prisma generate
npx prisma db push
Or add to your package.json:
"scripts": { "vercel-build": "prisma generate && prisma db push && npm run build" }
- Connect the GitHub repository to Netlify
- Build command: npm run build
- Publish directory: dist
- Add environment variables in the site settings
1. Install gh-pages
npm install --save-dev gh-pages
2. Add a deploy script to package.json
"deploy": "vite build && gh-pages -d dist"
3. Run the deploy
npm run deploy
This project demonstrates:
- React component architecture and state management
- TypeScript type safety and interfaces
- API integration with multiple providers
- Responsive design principles
- Modern UI/UX patterns
- Local storage management
- Custom React hooks
- Error handling and fallback mechanisms
- Serverless API development (Vercel Functions)
- PostgreSQL database integration (Prisma ORM)
- Analytics and performance monitoring
- Anonymous session tracking
- Modular Architecture: Well-organized component structure
- Type Safety: Leveraging TypeScript for better code quality
- User Experience: Smooth animations and responsive design
- Scalability: Easy to add new AI providers or features
- Analytics: Real-time performance monitoring with anonymous tracking
- Full-Stack: Complete solution with frontend, backend, and database
- Best Practices: Following React and TypeScript conventions
- Add analytics dashboard (Business Insights)
- Implement anonymous session tracking
- Add PostgreSQL database integration
- Add authentication system
- Implement user accounts
- Add message search functionality
- Create chat export feature
- Add voice input/output
- Implement markdown support
- Add code syntax highlighting
- Create mobile app version
- Add real-time charts and visualizations
Feel free to use this project repository and extend this project further!
If you have any questions or want to share your work, reach out via GitHub or my portfolio at https://arnob-mahmud.vercel.app/.
Enjoy building and learning! 🚀
Thank you! 😊