transcription-with-whisper-learnweb3DAO

In this lesson, we'll build a simple bot that generates transcriptions from audio files. We'll be using Whisper by OpenAI for this purpose.
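At a high level, the bot sends an uploaded audio file to OpenAI's Whisper API and returns the transcribed text. The sketch below shows one way this could look as a Next.js route handler using the official openai package; the route path (app/api/transcribe/route.ts), the "audio" form field name, and the error handling are illustrative assumptions rather than the exact code used in this lesson.

// app/api/transcribe/route.ts - illustrative sketch, not necessarily the lesson's code
import { NextResponse } from "next/server";
import OpenAI from "openai";

// The SDK reads OPENAI_API_KEY from the environment by default.
const openai = new OpenAI();

export async function POST(request: Request) {
  // Expect a multipart form with the audio file under the "audio" field (assumed name).
  const formData = await request.formData();
  const audio = formData.get("audio");

  if (!(audio instanceof File)) {
    return NextResponse.json({ error: "No audio file provided" }, { status: 400 });
  }

  // Forward the file to OpenAI's Whisper transcription endpoint.
  const transcription = await openai.audio.transcriptions.create({
    file: audio,
    model: "whisper-1",
  });

  return NextResponse.json({ text: transcription.text });
}

The handler expects a multipart/form-data POST and needs an OPENAI_API_KEY environment variable (for example in .env.local).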


This is a Next.js project bootstrapped with create-next-app.

Transcription with Whisper

UI improvements will be added after learning React and Tailwind CSS.

Getting Started

First, run the development server:

npm run dev
# or
yarn dev
# or
pnpm dev

Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.
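For example, a minimal app/page.tsx for this project might render a file-upload form and call the transcription route sketched above. This is only a sketch of the idea; the actual page in the repository may be organized differently.

// app/page.tsx - minimal sketch assuming the /api/transcribe route above
"use client";

import { useState } from "react";

export default function Home() {
  const [text, setText] = useState("");

  async function handleSubmit(event: React.FormEvent<HTMLFormElement>) {
    event.preventDefault();
    // Send the selected audio file to the transcription API route.
    const formData = new FormData(event.currentTarget);
    const res = await fetch("/api/transcribe", { method: "POST", body: formData });
    const data = await res.json();
    setText(data.text ?? data.error);
  }

  return (
    <main>
      <form onSubmit={handleSubmit}>
        <input type="file" name="audio" accept="audio/*" />
        <button type="submit">Transcribe</button>
      </form>
      <p>{text}</p>
    </main>
  );
}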

This project uses next/font to automatically optimize and load Inter, a custom Google Font.
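For reference, this is roughly how the default root layout generated by create-next-app loads Inter with next/font (simplified here; the project's actual app/layout.tsx may also include metadata and global styles).

// app/layout.tsx - simplified version of the create-next-app default layout
import { Inter } from "next/font/google";

// next/font downloads and self-hosts the font at build time.
const inter = Inter({ subsets: ["latin"] });

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body className={inter.className}>{children}</body>
    </html>
  );
}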

Learn More

To learn more about Next.js, take a look at the following resources:

Next.js Documentation (https://nextjs.org/docs) - learn about Next.js features and API.
Learn Next.js (https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out the Next.js GitHub repository - your feedback and contributions are welcome!

Deploy on Vercel

The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.

Check out our Next.js deployment documentation for more details.