Run AI models natively across multiple JavaScript environments, including Node.js, Expo, and Bun. The SDK abstracts away platform complexity while providing consistent AI capabilities, whether you're building desktop apps, mobile apps, or server applications.
Decentralization that doesn’t get in the way
We baked the entire Pear stack (by Holepunch) into the SDK to enable decentralized model sharing, delegated inference, and decentralized vector databases. P2P is native but optional: you can still run RAG using Chroma, LanceDB, or SQLite-vector, and fetch models from the most common providers or from your filesystem.
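As a sketch of the filesystem path: assuming `loadModel` accepts a local file path as `modelSrc` (the call shape mirrors the TTS example in this section; the path and the `"llm"` model type are illustrative assumptions, not confirmed SDK values):

```javascript
import { loadModel, unloadModel } from "@qvac/sdk";

// Sketch only: load a model from the local filesystem instead of a
// bundled model constant. The path and "llm" modelType are assumptions.
const modelId = await loadModel({
  modelSrc: "./models/my-model.onnx",
  modelType: "llm",
});

// ...run inference against modelId here...

await unloadModel({ modelId });
```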
Create distributed AI inference networks where devices can provide or consume AI services. Enable resource sharing across the network, allowing lightweight devices to access powerful AI models running on other peers.
Seamlessly integrate multiple AI capabilities, including completion, transcription, tool calling, embeddings and retrieval, translation, vision, and text-to-speech, through a single entry point. Streaming and multimodal inputs are also supported.
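A sketch of the single-entry-point idea: the same `loadModel`/`unloadModel` calls handle every capability, with `modelType` selecting which one. The `"stt"` type and the model source strings below are illustrative assumptions based on the TTS example in this section, not confirmed SDK values:

```javascript
import { loadModel, unloadModel } from "@qvac/sdk";

// Sketch only: one entry point for different capabilities; only
// modelType (and the model source) changes per capability.
const ttsId = await loadModel({
  modelSrc: "./models/tts-model.onnx", // hypothetical local model
  modelType: "tts",
});
const sttId = await loadModel({
  modelSrc: "./models/stt-model.onnx", // hypothetical local model
  modelType: "stt",                    // assumed type name for transcription
});

// ...use the same SDK surface for synthesis and transcription...

await unloadModel({ modelId: ttsId });
await unloadModel({ modelId: sttId });
```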
```javascript
import {
  loadModel,
  unloadModel,
  textToSpeech,
  TTS_PIPER_NORMAN_EN_US_ONNX_MEDIUM,
  TTS_PIPER_NORMAN_EN_US_ONNX_MEDIUM_CONFIG,
} from "@qvac/sdk";

const eSpeakDataPath = "some path";

// Load the Piper TTS model with its config and eSpeak data.
const modelId = await loadModel({
  modelSrc: TTS_PIPER_NORMAN_EN_US_ONNX_MEDIUM,
  modelType: "tts",
  configSrc: TTS_PIPER_NORMAN_EN_US_ONNX_MEDIUM_CONFIG,
  eSpeakDataPath,
  modelConfig: {
    language: "en",
  },
});

// Synthesize speech; with stream: false the full audio is buffered.
const result = textToSpeech({
  modelId,
  text: "QVAC SDK is the canonical entry point to QVAC",
  inputType: "text",
  stream: false,
});

const audioBuffer = await result.buffer;

// Free the model once done.
await unloadModel({ modelId });
```