Installation
Get started with WebLLM.io by installing the core SDK package.
Core Package
Install @webllm-io/sdk using your preferred package manager:
```bash
# npm
npm install @webllm-io/sdk

# pnpm
pnpm add @webllm-io/sdk

# yarn
yarn add @webllm-io/sdk
```

Optional Peer Dependency
For local inference support, you need to install @mlc-ai/web-llm as a peer dependency:
```bash
# npm
npm install @mlc-ai/web-llm

# pnpm
pnpm add @mlc-ai/web-llm

# yarn
yarn add @mlc-ai/web-llm
```

TypeScript Support
TypeScript definitions are included out of the box. No additional @types packages are needed.
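Because the definitions ship with the package, you can derive option types directly from the exported functions. Here is a minimal sketch; `ClientConfig` is just a local alias built with TypeScript's `Parameters` utility type, not an exported SDK type, and the option names mirror the verification example later on this page:

```ts
import { createClient } from '@webllm-io/sdk';

// Derive the configuration type from createClient itself; ClientConfig is a
// local alias for illustration, not a type exported by the SDK.
type ClientConfig = Parameters<typeof createClient>[0];

// Option names below mirror the verification example on this page.
const config: ClientConfig = {
  local: 'auto',
  cloud: { baseURL: 'https://api.openai.com/v1', apiKey: 'sk-...' },
};

const client = createClient(config);
```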
Browser Requirements
Local Inference (WebGPU)
To run models locally in the browser, you need:
- Chrome 113+ or Edge 113+
- WebGPU support enabled (usually on by default; see the detection sketch after this list)
- Sufficient VRAM (device scoring adapts model selection to the available resources)
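To confirm that a given browser can actually run models locally, you can feature-detect WebGPU before opting into local mode. This sketch uses only the standard `navigator.gpu` browser API and is independent of the SDK:

```ts
// Depending on your TypeScript "lib" settings you may need the @webgpu/types
// package for navigator.gpu to type-check; at runtime this works in any
// WebGPU-capable browser (e.g. Chrome/Edge 113+).
async function supportsWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu;
  if (!gpu) return false;

  // requestAdapter() resolves to null when no suitable GPU adapter is available.
  const adapter = await gpu.requestAdapter();
  return adapter !== null;
}
```

You could gate local mode behind a check like this, although the SDK's own device scoring mentioned above may already account for unsupported browsers.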
Cloud Mode
Cloud inference works in all modern browsers with no special requirements:
- Chrome, Firefox, Safari, Edge (latest versions)
- Mobile browsers (iOS Safari, Chrome Mobile)
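In browsers without WebGPU, the same `createClient` call shown in the verification example below can be used with just the `cloud` block. Note the assumption here: this sketch presumes that omitting `local` results in a cloud-only client, which this page does not explicitly state.

```ts
import { createClient } from '@webllm-io/sdk';

// Cloud-only setup: the @mlc-ai/web-llm peer dependency is not required.
// Omitting `local` to force cloud inference is an assumption about the
// SDK's defaults, not documented behavior.
const client = createClient({
  cloud: {
    baseURL: 'https://api.openai.com/v1',
    apiKey: 'sk-...', // use your own key, ideally via a server-side proxy rather than shipping it to the browser
  },
});
```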
Verification
After installation, verify everything is set up correctly:
```ts
import { createClient } from '@webllm-io/sdk';

const client = createClient({
  local: 'auto',
  cloud: {
    baseURL: 'https://api.openai.com/v1',
    apiKey: 'sk-...',
  },
});

console.log('WebLLM client created successfully!');
```

Next Steps
- Quick Start — Make your first completion
- Playground — Try the interactive demo
- Configuration — Learn about advanced options