Web version of rhasspy/piper running locally in the browser.
- Phoneme Generation
- WAV Audio Output
- Expression / Emotions Generation
- WebWorker Support
npm install piper-tts-web
To use PiperTTS client-side in your project, copy the necessary files into your public directory.
If you're using Webpack (for example via Next.js), install copy-webpack-plugin and modify your config like this:
const CopyPlugin = require('copy-webpack-plugin');

const nextConfig = {
  webpack: (config) => {
    config.plugins.push(
      new CopyPlugin({
        patterns: [
          {
            from: 'node_modules/piper-tts-web/dist/onnx',
            to: '../public/'
          },
          {
            from: 'node_modules/piper-tts-web/dist/piper',
            to: '../public/'
          },
          {
            from: 'node_modules/piper-tts-web/dist/worker',
            to: '../public/'
          },
        ],
      })
    );
    return config;
  },
};

module.exports = nextConfig;
For Vite, install vite-plugin-static-copy and modify your config like this:
import { defineConfig } from 'vite';
import { viteStaticCopy } from 'vite-plugin-static-copy';

export default defineConfig({
  plugins: [
    viteStaticCopy({
      targets: [
        {
          src: 'node_modules/piper-tts-web/dist/onnx',
          dest: '.'
        },
        {
          src: 'node_modules/piper-tts-web/dist/piper',
          dest: '.'
        },
        {
          src: 'node_modules/piper-tts-web/dist/worker',
          dest: '.'
        },
      ]
    }),
  ],
});
Other build tools may require a different configuration; check your build tool's documentation for how to copy files into your public directory.
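If you'd rather not touch your bundler config at all, you can also copy the files with a small Node script in your build pipeline. A minimal sketch, assuming a public/ static directory and an onnx/, piper/, worker/ subfolder layout (the script name and destination are placeholders, adjust them to your project and to how you configure the library):

// copy-piper-assets.mjs (hypothetical name) -- run before your build, e.g. "node copy-piper-assets.mjs"
import { cpSync } from 'node:fs';

for (const dir of ['onnx', 'piper', 'worker']) {
  // "public" is an assumption; use whatever directory your server exposes as static files
  cpSync(`node_modules/piper-tts-web/dist/${dir}`, `public/${dir}`, { recursive: true });
}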
Basic:
import { PiperWebEngine } from 'piper-tts-web';

const engine = new PiperWebEngine();

const text = 'This is a test!';
const voice = 'en_US-libritts_r-medium';
const speaker = 0;

// Generate WAV audio and phoneme data for the given text, voice and speaker
const response = await engine.generate(text, voice, speaker);
console.log(response);

// Derive expression/emotion data from the generated phonemes
const expressions = await engine.expressions(response.phonemeData);
console.log(expressions);
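To actually hear the result, the generated WAV data can be wrapped in a Blob and played through an Audio element. A minimal sketch, assuming the response exposes the raw WAV bytes; the wav property name below is a placeholder, so check the logged response for the real field:

// "response.wav" is hypothetical -- replace it with the field that holds the WAV data
const blob = new Blob([response.wav], { type: 'audio/wav' });
const audio = new Audio(URL.createObjectURL(blob));
await audio.play();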
Basic with WebGPU:
import { PiperWebEngine, OnnxWebGPURuntime } from 'piper-tts-web';
// Run ONNX inference on WebGPU instead of the default runtime
const engine = new PiperWebEngine({
  onnxRuntime: new OnnxWebGPURuntime(),
});
const text = 'This is a test!';
const voice = 'en_US-libritts_r-medium';
const speaker = 0;
const response = await engine.generate(text, voice, speaker);
console.log(response);
const expressions = await engine.expressions(response.phonemeData);
console.log(expressions);
Advanced with WebWorker, WebGPU and VoiceProvider:
import { PiperWebWorkerEngine, OnnxWebGPUWorkerRuntime, HuggingFaceVoiceProvider } from 'piper-tts-web';
// Voice provider backed by Hugging Face
const voiceProvider = new HuggingFaceVoiceProvider();
const voices = await voiceProvider.list();
console.log(voices);

// Engine that runs generation in a WebWorker, with ONNX inference on WebGPU
const engine = new PiperWebWorkerEngine({
  onnxRuntime: new OnnxWebGPUWorkerRuntime(),
  voiceProvider,
});
const text = 'This is a test!';
const voice = 'en_US-libritts_r-medium';
const speaker = 0;
const response = await engine.generate(text, voice, speaker);
console.log(response);
const expressions = await engine.expressions(response.phonemeData);
console.log(expressions);
Also take a look at the example for more details.
Vite Dev Server:
npm run dev
Vite Build Distribution:
npm run build
Build Piper-Phonemize with Docker:
npm run build:phonemize
piper-tts-web:
- Jonas Plamann [@Poket-Jony]
piper-wasm:
- Jozef Chutka [@jozefchutka]
- David Christ [@DavidCks]
vits-web:
- Konstantin Paulus [@k9p5]
MIT