This project runs a Faster Whisper model locally, exposing a local REST endpoint. You can launch the service by running e.g. `python faster-whisper.py`.
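For orientation, local transcription with the faster-whisper Python library looks roughly like the sketch below; the model size, device, and compute type are illustrative assumptions, not necessarily what `faster-whisper.py` uses.

```python
# Sketch of the faster-whisper library API the service builds on.
# "base", CPU, and int8 are assumptions - pick what fits your hardware.
from faster_whisper import WhisperModel

model = WhisperModel("base", device="cpu", compute_type="int8")
segments, info = model.transcribe("sample.wav", beam_size=5)

print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```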
- Windows
  - FFMpeg - you can run `winget install ffmpeg` to install the package
- MacOS
  - `brew install portaudio` - needed to install pyaudio
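To confirm these prerequisites are in place before continuing, a quick check along the following lines can help (a sketch; it only verifies that ffmpeg is on the PATH and that pyaudio imports):

```python
# Sanity check for the native prerequisites (not part of the repo).
import shutil

if shutil.which("ffmpeg") is None:
    print("ffmpeg not found on PATH - install it first (winget/brew)")
else:
    print("ffmpeg OK")

try:
    import pyaudio  # needs portaudio on MacOS/Linux
    print("pyaudio OK")
except ImportError as err:
    print(f"pyaudio not importable yet: {err}")
```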
Option 1: Batch file
- Windows:
  - Run `./setup.cmd`
- MacOS/Linux:
  - Run `source setup.sh`
Option 2: Manual steps
- Create and activate a python virtual environment.
- Run `pip config --site set global.extra-index-url https://download.pytorch.org/whl/cu121`
- Run `pip install -r requirements.txt`
Test if the installation worked by starting the backend: `python faster-whisper.py`.
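Once the backend is running, you can also send it a request directly. The port, route, and payload in this sketch are placeholders, not the service's documented interface; check `faster-whisper.py` for the endpoint it actually exposes.

```python
# Hypothetical smoke test - the URL and form field name are assumptions.
import requests

with open("sample.wav", "rb") as f:
    resp = requests.post("http://localhost:8080/transcribe", files={"file": f})

print(resp.status_code, resp.text)
```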
You can connect to the whisper service using the example "whisperClient" project in the `ts/examples` folder. To use it:
- Go to the repo's `ts/examples/whisperClient` folder
- Build the project using `pnpm run build`
- Start the web UI using `pnpm run start`
This web client captures audio from the microphone, sends it to the local service for transcription, and shows the result.
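If you would rather exercise the service without the web client, a small Python script can play the same role. The sketch below assumes pyaudio is installed and simply records a fixed-length clip from the default microphone into a WAV file, which you can then submit to the service for transcription.

```python
# Record a short clip from the default microphone and save it as a WAV file.
# 16 kHz mono and a 5-second duration are assumptions, not service requirements.
import wave
import pyaudio

RATE, CHANNELS, CHUNK, SECONDS = 16000, 1, 1024, 5

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=CHANNELS, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)
frames = [stream.read(CHUNK) for _ in range(int(RATE / CHUNK * SECONDS))]
sample_width = pa.get_sample_size(pyaudio.paInt16)
stream.stop_stream()
stream.close()
pa.terminate()

with wave.open("mic_clip.wav", "wb") as wf:
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(sample_width)
    wf.setframerate(RATE)
    wf.writeframes(b"".join(frames))

print("Saved mic_clip.wav - send this file to the whisper service to transcribe it.")
```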