Installation

Get started with WebLLM.io by installing the core SDK package.

Core Package

Install @webllm-io/sdk using your preferred package manager:

# npm
npm install @webllm-io/sdk
# pnpm
pnpm add @webllm-io/sdk
# yarn
yarn add @webllm-io/sdk

Optional Peer Dependency

For local inference support, you need to install @mlc-ai/web-llm as a peer dependency:

# npm
npm install @mlc-ai/web-llm
# pnpm
pnpm add @mlc-ai/web-llm
# yarn
yarn add @mlc-ai/web-llm
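
If you want a quick sanity check that the peer dependency resolves in your bundler, a dynamic import works. This is only an illustrative check, not part of the @webllm-io/sdk API:

// Sanity check (illustrative only): confirm @mlc-ai/web-llm resolves.
// Run this from any ES module in your app; it does not load a model.
import('@mlc-ai/web-llm')
  .then(() => console.log('@mlc-ai/web-llm resolved; local inference can be enabled'))
  .catch(() => console.log('@mlc-ai/web-llm not installed; only cloud mode will be available'));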

TypeScript Support

TypeScript definitions are included out of the box. No additional @types packages are needed.

Browser Requirements

Local Inference (WebGPU)

To run models locally in the browser, you need the following (a quick feature check is sketched after the list):

  • Chrome 113+ or Edge 113+
  • WebGPU support enabled (usually on by default)
  • Sufficient VRAM (device scoring adapts model selection to available resources)
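
The sketch below is a minimal runtime check for WebGPU, independent of the SDK. navigator.gpu is the standard WebGPU entry point; the cast is only there to avoid requiring @webgpu/types in a plain TypeScript setup:

// Detect whether WebGPU is usable before opting into local inference.
async function hasWebGPU(): Promise<boolean> {
  const gpu = (navigator as any).gpu;   // cast avoids needing @webgpu/types
  if (!gpu) return false;               // API missing: older browser or disabled
  const adapter = await gpu.requestAdapter();
  return adapter !== null;              // null means no suitable GPU was found
}

hasWebGPU().then((ok) =>
  console.log(ok ? 'WebGPU available: local inference is possible' : 'No WebGPU: use cloud mode'),
);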

Cloud Mode

Cloud inference works in all modern browsers with no special requirements:

  • Chrome, Firefox, Safari, Edge (latest versions)
  • Mobile browsers (iOS Safari, Chrome Mobile)
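
Cloud mode needs nothing beyond the core package. A cloud-only client might look like the sketch below; whether the local option can be omitted entirely is an assumption, and the baseURL and apiKey values are placeholders:

import { createClient } from '@webllm-io/sdk';

// Cloud-only sketch (assumption: `local` may be omitted).
const client = createClient({
  cloud: { baseURL: 'https://api.openai.com/v1', apiKey: 'sk-...' },
});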

Verification

After installation, verify everything is set up correctly:

import { createClient } from '@webllm-io/sdk';

const client = createClient({
  local: 'auto',
  cloud: { baseURL: 'https://api.openai.com/v1', apiKey: 'sk-...' },
});

console.log('WebLLM client created successfully!');
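
If the import resolves and createClient returns without throwing, the installation is working; actual model loading and inference are covered in the next sections.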

Next Steps