Chrome Nano AI Errors: Complete Troubleshooting Guide
Keywords: Chrome Nano AI errors, Chrome Nano AI troubleshooting, LanguageModel API errors, Gemini Nano debugging, Chrome AI error codes
You've integrated Chrome's built-in LanguageModel API into your application, but instead of AI-powered magic, you're getting error messages. Don't worry—you're not alone. Chrome Nano AI's on-device architecture introduces unique error scenarios that differ from traditional cloud API failures.
This comprehensive troubleshooting guide covers every common Chrome Nano AI error, explains why it happens, and provides actionable solutions to get your on-device AI integration working smoothly.
Table of Contents
- Understanding Chrome Nano AI Error Types
- Quick Error Reference Table
- Troubleshooting Decision Flowchart
- Common Error 1: LanguageModel API Not Available
- Common Error 2: NotAllowedError - User Gesture Required
- Common Error 3: Model Download Issues
- Common Error 4: Availability Status Errors
- Common Error 5: Session Creation Failures
- Common Error 6: Prompt Execution Errors
- Common Error 7: Streaming Response Issues
- Common Error 8: Memory and Resource Errors
- Advanced Debugging Techniques
- Implementation Best Practices
- Error Prevention Strategies
- Testing and Validation
- Frequently Asked Questions
- Next Steps
Reading Time: ~20 minutes | Difficulty: Intermediate | Last Updated: January 10, 2026
Understanding Chrome Nano AI Error Types
Chrome Nano AI errors fall into three main categories, each requiring different troubleshooting approaches:
1. Availability Errors
These occur before you even attempt to use the API, indicating environmental or configuration issues:
- API Not Available: Browser doesn't support LanguageModel API
- Model Unavailable: Device doesn't support on-device AI
- Download Required: Model needs to be downloaded before use
2. Permission Errors
These happen during session creation due to browser security requirements:
- NotAllowedError: User gesture required for model initialization
- SecurityError: Context security requirements not met
3. Runtime Errors
These occur during actual AI operations:
- Session Failures: Problems creating or maintaining AI sessions
- Prompt Errors: Issues executing prompts or streaming responses
- Resource Errors: Memory, timeout, or resource exhaustion issues
Understanding which category your error falls into helps you apply the right solution quickly.
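As a rough illustration (not part of the API), the triage above can be expressed as a small classifier. The name/message patterns here are assumptions based on the errors covered in this guide:

```typescript
type NanoAiErrorCategory = "availability" | "permission" | "runtime";

// Hypothetical triage helper: route an error to the right troubleshooting category.
// The matching rules mirror the error types discussed in this guide.
function categorizeNanoAiError(error: { name?: string; message?: string }): NanoAiErrorCategory {
  const name = error.name ?? "";
  const message = error.message ?? "";
  // Availability errors surface before the API is usable at all
  if (name === "ReferenceError" || /not available|unavailable|download/i.test(message)) {
    return "availability";
  }
  // Permission errors come from browser security requirements
  if (name === "NotAllowedError" || name === "SecurityError") {
    return "permission";
  }
  // Everything else happens during actual AI operations
  return "runtime";
}
```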
Quick Error Reference Table
| Error Type | Common Cause | Quick Fix | Severity |
|---|---|---|---|
| LanguageModel is not defined | API not available in browser | Update to Chrome 138+ or check chrome://settings/ai | Critical |
| NotAllowedError | No user gesture when creating session | Create session inside click/user event handler | High |
| availability: "unavailable" | Device doesn't support AI | Use fallback to cloud LLM providers | Critical |
| availability: "downloadable" | Model not downloaded | Trigger download with user gesture | Medium |
| availability: "downloading" | Download in progress | Wait for completion or show progress | Low |
| Session creation timeout | Model initializing or busy | Implement retry with exponential backoff | Medium |
| Prompt execution failed | Malformed prompt or context | Validate prompt structure and length | Medium |
| Streaming interrupted | Connection or resource issue | Implement abort handling and cleanup | Low |
| Out of memory | Too many concurrent sessions | Implement session pooling and cleanup | High |
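Several of the quick fixes above call for retry with exponential backoff. A minimal sketch of that delay schedule (the base and cap values are illustrative, not mandated by the API):

```typescript
// Illustrative backoff schedule: the delay doubles on each attempt,
// with a cap so a busy model never stalls the UI indefinitely.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}
```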
Troubleshooting Decision Flowchart
Start
↓
Is LanguageModel available? (window.LanguageModel exists)
├─ No → Error: API Not Available → Solution: Update Chrome or enable AI features
└─ Yes
↓
Check availability status (await LanguageModel.availability())
├─ "unavailable" → Error: Device unsupported → Solution: Use cloud fallback
├─ "downloadable" → Error: Model not downloaded → Solution: Trigger download
├─ "downloading" → Info: Download in progress → Solution: Wait or show progress
└─ "readily-available"
↓
Create session (await LanguageModel.create())
├─ NotAllowedError → Error: No user gesture → Solution: Call in user event handler
├─ Timeout/Failure → Error: Resource issue → Solution: Retry with backoff
└─ Success
↓
Execute prompt (await session.prompt())
├─ Execution error → Check prompt structure and length
└─ Success → Continue operation
This flowchart represents the logical troubleshooting path. Each branch corresponds to specific error scenarios covered in detail below.
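The availability branch of the flowchart can be sketched as a pure function (using the status strings as this guide reports them; treat unknown values defensively):

```typescript
type NextStep = "create-session" | "prompt-download" | "wait-for-download" | "use-fallback";

// Maps an availability status to the flowchart's next action.
// Unknown or future status values fall through to the cloud fallback.
function nextStepFor(status: string): NextStep {
  switch (status) {
    case "readily-available":
      return "create-session";
    case "downloadable":
      return "prompt-download";
    case "downloading":
      return "wait-for-download";
    default:
      return "use-fallback";
  }
}
```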
Common Error 1: LanguageModel API Not Available
Error Manifestation
// Console error:
ReferenceError: LanguageModel is not defined
// Or runtime check fails:
if (typeof window !== "undefined" && "LanguageModel" in window) {
// This block never executes
}
Root Causes
- Chrome Version Too Old: LanguageModel API requires Chrome 138+
- Built-in AI Disabled: User has disabled AI features in Chrome settings
- Extension Context Issues: Attempting to access from incorrect execution context
- Origin Trial Not Enabled: Web apps need origin trial token (extensions don't)
Diagnostic Steps
// Step 1: Check if running in browser context
if (typeof window === "undefined") {
console.error("Not running in browser context");
}
// Step 2: Check Chrome version
const chromeVersion = navigator.userAgent.match(/Chrome\/(\d+)/)?.[1];
if (chromeVersion && parseInt(chromeVersion) < 138) {
console.error(`Chrome version ${chromeVersion} detected. Need 138+`);
}
// Step 3: Check API availability
if (!("LanguageModel" in window)) {
console.error("LanguageModel API not available");
console.log("Check chrome://settings/ai to enable Built-in AI");
}
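Step 2's user-agent check can be factored into a testable helper. Note that user-agent sniffing is only a heuristic; the feature detection in Step 3 remains the authoritative check:

```typescript
// Extract the Chrome major version from a user-agent string, or null if absent.
function chromeMajorVersion(userAgent: string): number | null {
  const match = userAgent.match(/Chrome\/(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

// Minimum Chrome version with the LanguageModel API, per this guide.
const MIN_CHROME = 138;

function meetsMinimumChrome(userAgent: string): boolean {
  const major = chromeMajorVersion(userAgent);
  return major !== null && major >= MIN_CHROME;
}
```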
Solutions
Solution 1: Update Chrome
# Check current version
chrome://version
# Update to Chrome 138+
# Navigate to: chrome://settings/help
# Chrome will auto-update to latest version
Solution 2: Enable Built-in AI Features
# Navigate to: chrome://settings/ai
# Ensure "Help me write" and "AI features" are enabled
# Restart Chrome after enabling
Solution 3: Implement Feature Detection
class NanoAiService {
static isAvailable(): boolean {
return typeof window !== "undefined" && "LanguageModel" in window;
}
async initialize() {
if (!NanoAiService.isAvailable()) {
throw new Error(
"Chrome Nano AI not available. " +
"Update to Chrome 138+ and enable AI features at chrome://settings/ai"
);
}
// Proceed with initialization
const availability = await window.LanguageModel.availability();
return availability;
}
}
Solution 4: Implement Graceful Fallback
class HybridAiService {
async prompt(text: string): Promise<string> {
// Try on-device AI first
if (NanoAiService.isAvailable()) {
try {
return await this.nanoAi.prompt(text);
} catch (error) {
console.warn("On-device AI failed, falling back to cloud", error);
}
}
// Fallback to cloud API
return await this.cloudAi.prompt(text);
}
}
Prevention
- Always implement feature detection before attempting to use the API
- Provide clear user guidance when API is unavailable
- Design your application with fallback strategies
- Test on minimum supported Chrome version (138)
Common Error 2: NotAllowedError - User Gesture Required
Error Manifestation
// Attempting to create session outside user gesture
async function initializeAi() {
const session = await LanguageModel.create();
// Error: NotAllowedError: Failed to execute 'create' on 'LanguageModel'
}
// Called from non-user-initiated context
setTimeout(initializeAi, 1000); // Will fail
Root Causes
This is Chrome's most common Nano AI error. The NotAllowedError occurs when:
- Model in "downloadable" state: First-time use requires user gesture to trigger download
- Model in "downloading" state: User gesture needed to show download UI
- Session Created Outside Gesture Context: Call happens after async operations that lose gesture context
Understanding User Gestures
A user gesture is a synchronous browser event triggered by direct user interaction:
Valid User Gestures:
- Click events (click)
- Keyboard events (keydown, keypress)
- Touch events (touchstart, touchend)
- Form submissions
Invalid Contexts (No Gesture):
- setTimeout/setInterval callbacks
- Promise resolution callbacks (after async operations)
- Network request callbacks
- Initialization code that runs on page load
Diagnostic Steps
// Detect when you're in user gesture context
function hasUserGesture(): boolean {
// navigator.userActivation reflects transient user activation (Chrome 72+).
// Probing with side effects (e.g. audio.play()) is unreliable here because
// play() rejects asynchronously rather than throwing synchronously.
return navigator.userActivation?.isActive ?? false;
}
// Check during session creation
async function createSessionWithCheck() {
console.log("Has user gesture:", hasUserGesture());
try {
const session = await LanguageModel.create();
console.log("Session created successfully");
return session;
} catch (error) {
if (error.name === "NotAllowedError") {
console.error("NotAllowedError - user gesture required");
}
throw error;
}
}
Solutions
Solution 1: Create Session Immediately in Event Handler
// ❌ INCORRECT: Gesture context lost after await
button.addEventListener('click', async () => {
const data = await fetchSomeData(); // Async operation
const session = await LanguageModel.create(); // NotAllowedError!
});
// ✅ CORRECT: Create session BEFORE async operations
button.addEventListener('click', async () => {
const session = await LanguageModel.create(); // In gesture context
const data = await fetchSomeData(); // Async OK after session created
await session.prompt(`Process: ${data}`);
});
Solution 2: Pre-Initialize Session on First Interaction
class NanoAiService {
private session: LanguageModelSession | null = null;
async ensureSession(): Promise<LanguageModelSession> {
if (this.session) {
return this.session;
}
// Must be called in user gesture context first time
this.session = await LanguageModel.create();
return this.session;
}
async prompt(text: string): Promise<string> {
const session = await this.ensureSession();
return await session.prompt(text);
}
}
// Initialize on first user interaction
button.addEventListener('click', async () => {
await aiService.ensureSession(); // Creates session if needed
// Subsequent calls can use existing session
const result = await aiService.prompt("Hello");
});
Solution 3: Prompt User to Enable AI
async function initializeWithUserPrompt() {
const availability = await LanguageModel.availability();
if (availability === "downloadable" || availability === "downloading") {
// Show UI to prompt user action
showNotification({
title: "AI Features Need Setup",
message: "Click to enable on-device AI features",
action: async () => {
// This click provides the user gesture
try {
const session = await LanguageModel.create();
console.log("AI enabled successfully");
return session;
} catch (error) {
console.error("Failed to enable AI", error);
}
}
});
}
}
Solution 4: Check Availability First
async function safeSessionCreation() {
// Check availability before attempting creation
const availability = await LanguageModel.availability();
if (availability === "readily-available") {
// Safe to create without user gesture
return await LanguageModel.create();
}
// Requires user gesture - inform user
throw new Error(
`Model not ready (${availability}). ` +
"User interaction required to initialize AI features."
);
}
Prevention
- Check availability status before creating sessions
- Create sessions early in user interaction flow
- Cache sessions for reuse across operations
- Provide clear UI when user action is needed
- Test initialization in various scenarios
Common Error 3: Model Download Issues
Error Manifestation
// Availability check shows:
const availability = await LanguageModel.availability();
// Returns: "downloadable" or "downloading"
// User can't use AI features yet
const session = await LanguageModel.create();
// May hang, timeout, or throw NotAllowedError
Root Causes
- First-Time Use: Model hasn't been downloaded to device yet
- Incomplete Download: Previous download was interrupted
- Storage Issues: Insufficient disk space for model files
- Network Problems: Download connection issues
- Chrome Updates: Model update in progress after Chrome update
Model Download Requirements
- Storage Space: ~200-500MB depending on device and model version
- Network Connection: Required for initial download (not for usage after download)
- Download Time: 2-10 minutes depending on connection speed
- User Gesture: Required to initiate download
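The storage requirement above can be checked before prompting the user for a download. A sketch using the numbers returned by navigator.storage.estimate(); the 500MB default is the upper end of the range quoted above:

```typescript
// Decide whether the device likely has room for the model download.
// quotaBytes and usageBytes come from navigator.storage.estimate().
function hasRoomForModel(quotaBytes: number, usageBytes: number, modelMB = 500): boolean {
  const freeBytes = quotaBytes - usageBytes;
  return freeBytes >= modelMB * 1024 * 1024;
}
```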
Diagnostic Steps
async function diagnoseDownloadIssues() {
const availability = await LanguageModel.availability();
console.log("Model availability:", availability);
// Check storage quota
if ('storage' in navigator && 'estimate' in navigator.storage) {
const estimate = await navigator.storage.estimate();
// quota and usage are optional in the spec, so default them to 0
const freeBytes = (estimate.quota ?? 0) - (estimate.usage ?? 0);
const availableMB = (freeBytes / 1024 / 1024).toFixed(2);
console.log(`Available storage: ${availableMB}MB`);
if (freeBytes < 500 * 1024 * 1024) {
console.warn("Low disk space may prevent model download");
}
}
// Check network status
if ('connection' in navigator) {
console.log("Connection type:", navigator.connection?.effectiveType);
console.log("Online status:", navigator.onLine);
}
}
Solutions
Solution 1: Monitor Download Progress
async function createSessionWithProgress() {
const availability = await LanguageModel.availability();
if (availability === "downloadable") {
console.log("Model needs download - requires user gesture");
}
const session = await LanguageModel.create({
monitor(monitor) {
monitor.addEventListener("downloadprogress", (e) => {
const percent = (e.loaded / e.total * 100).toFixed(1);
console.log(`Download progress: ${percent}%`);
// Update UI with progress
updateProgressBar(percent);
});
}
});
return session;
}
Solution 2: Implement Download State Management
class DownloadManager {
private downloadState: {
status: 'not_started' | 'in_progress' | 'completed' | 'failed';
progress: number;
error?: string;
} = { status: 'not_started', progress: 0 };
async initiateDownload(): Promise<LanguageModelSession> {
this.downloadState.status = 'in_progress';
try {
const session = await LanguageModel.create({
monitor: (monitor) => {
monitor.addEventListener("downloadprogress", (e) => {
this.downloadState.progress = (e.loaded / e.total * 100);
this.notifyListeners();
});
}
});
this.downloadState.status = 'completed';
this.downloadState.progress = 100;
return session;
} catch (error) {
this.downloadState.status = 'failed';
this.downloadState.error = error.message;
throw error;
}
}
getDownloadState() {
return this.downloadState;
}
}
Solution 3: User-Friendly Download UI
async function initializeWithUI() {
const availability = await LanguageModel.availability();
switch (availability) {
case "readily-available":
return await LanguageModel.create();
case "downloadable":
return await showDownloadPrompt({
title: "Enable On-Device AI",
message: "Download AI model (~250MB) for offline use?",
onConfirm: async () => {
showProgressModal();
return await LanguageModel.create({
monitor: (m) => {
m.addEventListener("downloadprogress", (e) => {
updateProgressModal(e.loaded / e.total * 100);
});
}
});
}
});
case "downloading":
return await showWaitingUI({
message: "AI model downloading... Please wait."
});
case "unavailable":
throw new Error("On-device AI not supported on this device");
}
}
Solution 4: Handle Download Interruption
async function robustDownload(maxRetries = 3): Promise<LanguageModelSession> {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
console.log(`Download attempt ${attempt}/${maxRetries}`);
const session = await LanguageModel.create({
monitor: (monitor) => {
monitor.addEventListener("downloadprogress", (e) => {
console.log(`Progress: ${(e.loaded/e.total*100).toFixed(1)}%`);
});
}
});
console.log("Download completed successfully");
return session;
} catch (error) {
console.error(`Download attempt ${attempt} failed:`, error);
if (attempt === maxRetries) {
throw new Error(
`Failed to download AI model after ${maxRetries} attempts. ` +
"Check your internet connection and available storage space."
);
}
// Wait before retry (exponential backoff)
await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
}
}
}
Prevention
- Check availability before prompting for download
- Provide clear download UI with progress indication
- Implement retry logic for interrupted downloads
- Check storage space before initiating download
- Cache availability status to avoid repeated checks
- Handle offline scenarios gracefully
Common Error 4: Availability Status Errors
Error Manifestation
// Availability check returns unexpected or changing values
const availability = await LanguageModel.availability();
console.log(availability); // May be "unavailable", "downloading", etc.
// Or availability changes during operation
const initial = await LanguageModel.availability(); // "readily-available"
// ... some time passes ...
const later = await LanguageModel.availability(); // "downloading" (update in progress)
Root Causes
- Device Limitations: Hardware doesn't support on-device AI
- Model Updates: Chrome updating the AI model in background
- Resource Constraints: System temporarily unable to load model
- Configuration Changes: User disabled AI features
- Race Conditions: Checking availability while state is transitioning
Availability Status Values
| Status | Meaning | Action Required |
|---|---|---|
| "readily-available" | Model ready to use immediately | Create session normally |
| "downloadable" | Model can be downloaded | Trigger download with user gesture |
| "downloading" | Download in progress | Wait for completion or show progress |
| "unavailable" | Device doesn't support AI | Use fallback solution |
Solutions
Solution 1: Comprehensive Availability Handler
class AvailabilityManager {
async checkAndHandle(): Promise<'ready' | 'downloading' | 'unavailable'> {
const availability = await LanguageModel.availability();
switch (availability) {
case "readily-available":
console.log("✓ AI model ready");
return 'ready';
case "downloadable":
console.log("⚠ Model needs download");
await this.promptUserForDownload();
return 'downloading';
case "downloading":
console.log("⏳ Model download in progress");
await this.waitForDownload();
return await this.checkAndHandle(); // Recheck after download
case "unavailable":
console.log("✗ On-device AI not supported");
return 'unavailable';
default:
console.warn("Unknown availability status:", availability);
return 'unavailable';
}
}
private async waitForDownload(): Promise<void> {
// Poll until download completes or fails
const maxWait = 300000; // 5 minutes
const startTime = Date.now();
while (Date.now() - startTime < maxWait) {
await new Promise(resolve => setTimeout(resolve, 2000));
const status = await LanguageModel.availability();
if (status === "readily-available") {
return;
}
if (status === "unavailable") {
throw new Error("Download failed - model became unavailable");
}
}
throw new Error("Download timeout - exceeded maximum wait time");
}
}
Solution 2: Availability Caching with Invalidation
class CachedAvailability {
private cache: {
status: string;
timestamp: number;
} | null = null;
private readonly CACHE_DURATION = 60000; // 1 minute
async getAvailability(forceRefresh = false): Promise<string> {
// Return cached value if fresh
if (!forceRefresh && this.cache) {
const age = Date.now() - this.cache.timestamp;
if (age < this.CACHE_DURATION) {
return this.cache.status;
}
}
// Fetch fresh status
const status = await LanguageModel.availability();
this.cache = {
status,
timestamp: Date.now()
};
return status;
}
invalidate() {
this.cache = null;
}
}
Solution 3: Fallback Strategy Based on Availability
class AdaptiveAiService {
async prompt(text: string): Promise<string> {
const availability = await LanguageModel.availability();
if (availability === "readily-available") {
// Use on-device AI
return await this.promptWithNanoAi(text);
}
if (availability === "downloadable" || availability === "downloading") {
// Offer choice: wait for download or use cloud
return await this.promptUserChoice({
local: () => this.waitAndUseNanoAi(text),
cloud: () => this.promptWithCloudAi(text)
});
}
// Fallback to cloud for "unavailable"
return await this.promptWithCloudAi(text);
}
}
Prevention
- Cache availability checks to reduce overhead
- Handle all possible status values including unknown ones
- Implement state change listeners if monitoring over time
- Provide clear user feedback for each status
- Design for status transitions (especially during updates)
Common Error 5: Session Creation Failures
Error Manifestation
// Session creation fails with various errors
try {
const session = await LanguageModel.create();
} catch (error) {
// Possible errors:
// - Timeout
// - Resource unavailable
// - Invalid parameters
// - System busy
}
Root Causes
- Resource Exhaustion: Too many concurrent sessions
- Invalid Parameters: Incorrect temperature, topK, or other config
- System State: Model in transition or updating
- Memory Constraints: Insufficient memory for new session
- Timing Issues: Creating sessions too rapidly
Diagnostic Steps
async function diagnoseSessionCreation() {
console.log("Diagnosing session creation...");
// Check availability
const availability = await LanguageModel.availability();
console.log("Availability:", availability);
// Get default parameters
try {
const params = await LanguageModel.params();
console.log("Default params:", params);
} catch (error) {
console.error("Failed to get default params:", error);
}
// Attempt session creation with logging
const startTime = Date.now();
try {
const session = await LanguageModel.create({
temperature: 0.7,
topK: 5
});
console.log(`Session created in ${Date.now() - startTime}ms`);
return session;
} catch (error) {
console.error(`Session creation failed after ${Date.now() - startTime}ms:`, error);
throw error;
}
}
Solutions
Solution 1: Use Default Parameters Safely
async function createSessionWithDefaults() {
try {
// Get default parameters first
const defaultParams = await LanguageModel.params();
const session = await LanguageModel.create({
temperature: defaultParams.defaultTemperature ?? 0.7,
topK: defaultParams.defaultTopK ?? 5,
initialPrompts: []
});
return session;
} catch (error) {
console.error("Failed to create session with defaults:", error);
// Retry with minimal config
return await LanguageModel.create();
}
}
Solution 2: Implement Session Pooling
class SessionPool {
private sessions: Set<LanguageModelSession> = new Set();
private readonly maxSessions = 3;
async getSession(): Promise<LanguageModelSession> {
// Create a new session while the pool is under its limit
if (this.sessions.size < this.maxSessions) {
const session = await LanguageModel.create();
this.sessions.add(session);
return session;
}
// Pool is full: reuse an existing session
return Array.from(this.sessions)[0];
}
releaseSession(session: LanguageModelSession) {
session.destroy();
this.sessions.delete(session);
}
cleanup() {
this.sessions.forEach(s => s.destroy());
this.sessions.clear();
}
}
Solution 3: Retry with Exponential Backoff
async function createSessionWithRetry(
maxRetries = 3,
baseDelay = 1000
): Promise<LanguageModelSession> {
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
const session = await LanguageModel.create();
return session;
} catch (error) {
console.warn(`Session creation attempt ${attempt} failed:`, error);
if (attempt === maxRetries) {
throw new Error(
`Failed to create session after ${maxRetries} attempts: ${error.message}`
);
}
// Exponential backoff
const delay = baseDelay * Math.pow(2, attempt - 1);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
}
Solution 4: Validate Parameters Before Creation
async function createSessionWithValidation(customParams?: {
temperature?: number;
topK?: number;
}) {
// Get valid ranges reported by the API
const defaults = await LanguageModel.params();
// Validate temperature against the reported maximum
const temperature = customParams?.temperature ?? defaults.defaultTemperature ?? 0.7;
const maxTemperature = defaults.maxTemperature ?? 2;
if (temperature < 0 || temperature > maxTemperature) {
throw new Error(`Invalid temperature: ${temperature}. Must be between 0 and ${maxTemperature}`);
}
// Validate topK against the reported maximum
const topK = customParams?.topK ?? defaults.defaultTopK ?? 5;
const maxTopK = defaults.maxTopK ?? 100;
if (topK < 1 || topK > maxTopK) {
throw new Error(`Invalid topK: ${topK}. Must be between 1 and ${maxTopK}`);
}
return await LanguageModel.create({
temperature,
topK,
initialPrompts: []
});
}
Prevention
- Limit concurrent sessions to avoid resource exhaustion
- Use default parameters when unsure of valid ranges
- Implement session reuse for multiple operations
- Add retry logic with backoff for transient failures
- Clean up sessions promptly when done
- Monitor session lifecycle for leaks
Common Error 6: Prompt Execution Errors
Error Manifestation
const session = await LanguageModel.create();
try {
const response = await session.prompt("Your prompt here");
} catch (error) {
// Possible errors:
// - Prompt too long
// - Invalid characters or encoding
// - Session destroyed
// - Timeout
}
Root Causes
- Prompt Length: Exceeds model context window
- Invalid Content: Special characters or encoding issues
- Session State: Session was destroyed or invalidated
- Resource Limits: Model busy or out of resources
- Timeout: Prompt takes too long to process
Solutions
Solution 1: Validate Prompt Length
class PromptValidator {
private readonly MAX_TOKENS = 8000; // Approximate
private readonly CHARS_PER_TOKEN = 4; // Rough estimate
validatePrompt(prompt: string): { valid: boolean; reason?: string } {
// Check length
const estimatedTokens = prompt.length / this.CHARS_PER_TOKEN;
if (estimatedTokens > this.MAX_TOKENS) {
return {
valid: false,
reason: `Prompt too long (~${Math.round(estimatedTokens)} tokens, max ${this.MAX_TOKENS})`
};
}
// Check for empty prompt
if (!prompt.trim()) {
return {
valid: false,
reason: "Prompt cannot be empty"
};
}
return { valid: true };
}
async safePrompt(
session: LanguageModelSession,
prompt: string
): Promise<string> {
const validation = this.validatePrompt(prompt);
if (!validation.valid) {
throw new Error(`Invalid prompt: ${validation.reason}`);
}
return await session.prompt(prompt);
}
}
Solution 2: Chunk Long Content
async function promptWithLongContent(
session: LanguageModelSession,
longContent: string,
maxChunkSize = 30000 // characters
): Promise<string> {
if (longContent.length <= maxChunkSize) {
return await session.prompt(longContent);
}
// Split into chunks
const chunks: string[] = [];
for (let i = 0; i < longContent.length; i += maxChunkSize) {
chunks.push(longContent.slice(i, i + maxChunkSize));
}
console.log(`Processing ${chunks.length} chunks...`);
// Process each chunk
const results: string[] = [];
for (let i = 0; i < chunks.length; i++) {
console.log(`Processing chunk ${i + 1}/${chunks.length}`);
const result = await session.prompt(
`Summarize this part (${i + 1}/${chunks.length}):\n${chunks[i]}`
);
results.push(result);
}
// Combine results
if (results.length === 1) {
return results[0];
}
return await session.prompt(
`Combine these summaries into one:\n${results.join('\n\n')}`
);
}
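The chunking step in Solution 2 can be pulled out as a pure helper, which makes the split logic easy to unit test in isolation:

```typescript
// Split long content into fixed-size character chunks (same logic as Solution 2).
function splitIntoChunks(content: string, maxChunkSize = 30000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < content.length; i += maxChunkSize) {
    chunks.push(content.slice(i, i + maxChunkSize));
  }
  return chunks;
}
```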
Solution 3: Implement Timeout and Abort
async function promptWithTimeout(
session: LanguageModelSession,
prompt: string,
timeoutMs = 30000
): Promise<string> {
return await Promise.race([
session.prompt(prompt),
new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error(`Prompt execution timeout after ${timeoutMs}ms`)),
timeoutMs
)
)
]);
}
// With abort signal
async function promptWithAbort(
session: LanguageModelSession,
prompt: string,
signal?: AbortSignal
): Promise<string> {
if (signal?.aborted) {
throw new Error("Operation aborted");
}
// Start prompt execution
const promptPromise = session.prompt(prompt);
// Listen for abort
if (signal) {
const abortPromise = new Promise<never>((_, reject) => {
signal.addEventListener('abort', () => {
reject(new Error("Operation aborted by user"));
});
});
return await Promise.race([promptPromise, abortPromise]);
}
return await promptPromise;
}
Solution 4: Handle Session State Errors
class ManagedSession {
private session: LanguageModelSession | null = null;
private isValid = false;
async initialize() {
this.session = await LanguageModel.create();
this.isValid = true;
}
async prompt(text: string): Promise<string> {
if (!this.isValid || !this.session) {
throw new Error("Session not initialized or destroyed");
}
try {
return await this.session.prompt(text);
} catch (error) {
console.error("Prompt execution failed:", error);
// Check if session needs recreation
if (error.message.includes("session") || error.message.includes("destroyed")) {
console.log("Attempting to recreate session...");
await this.initialize();
return await this.session!.prompt(text);
}
throw error;
}
}
destroy() {
if (this.session) {
this.session.destroy();
this.session = null;
this.isValid = false;
}
}
}
Prevention
- Validate prompt length before execution
- Implement chunking for long content
- Add timeout handling to prevent hanging
- Monitor session state before each prompt
- Sanitize input to remove problematic characters
- Implement retry logic for transient failures
Common Error 7: Streaming Response Issues
Error Manifestation
try {
for await (const chunk of session.promptStreaming("Your prompt")) {
console.log(chunk);
// Streaming may:
// - Stop prematurely
// - Throw errors mid-stream
// - Provide incomplete data
}
} catch (error) {
console.error("Streaming failed:", error);
}
Root Causes
- Connection Interruption: Network or resource issues
- User Cancellation: User navigates away or closes UI
- Memory Pressure: System running low on resources
- Session Invalidation: Session destroyed during streaming
- Error Handling: Unhandled errors in stream processing
Solutions
Solution 1: Robust Streaming with Error Handling
async function* robustStreaming(
session: LanguageModelSession,
prompt: string
): AsyncIterable<string> {
let accumulatedContent = "";
let chunkCount = 0;
try {
for await (const chunk of session.promptStreaming(prompt)) {
chunkCount++;
accumulatedContent += chunk;
yield accumulatedContent;
}
console.log(`Streaming completed: ${chunkCount} chunks received`);
} catch (error) {
console.error("Streaming interrupted:", error);
// Return accumulated content even if stream fails
if (accumulatedContent) {
console.log("Returning partial result");
yield accumulatedContent;
}
throw error;
}
}
Solution 2: Streaming with Abort Signal
async function* streamingWithAbort(
session: LanguageModelSession,
prompt: string,
signal?: AbortSignal
): AsyncIterable<string> {
if (signal?.aborted) {
throw new Error("Operation aborted before start");
}
let accumulatedContent = "";
try {
for await (const chunk of session.promptStreaming(prompt)) {
// Check abort signal
if (signal?.aborted) {
console.log("Streaming aborted by user");
throw new Error("Streaming aborted");
}
accumulatedContent += chunk;
yield accumulatedContent;
}
} catch (error) {
// Cleanup on error
console.error("Streaming error:", error);
throw error;
}
}
// Usage
const abortController = new AbortController();
// Cancel button handler
cancelButton.addEventListener('click', () => {
abortController.abort();
});
// Use streaming
try {
for await (const content of streamingWithAbort(
session,
"Your prompt",
abortController.signal
)) {
updateUI(content);
}
} catch (error) {
if (error.message.includes("aborted")) {
console.log("User cancelled operation");
}
}
Solution 3: Streaming with Timeout
async function* streamingWithTimeout(
session: LanguageModelSession,
prompt: string,
chunkTimeoutMs = 5000
): AsyncIterable<string> {
let accumulatedContent = "";
const iterator = session.promptStreaming(prompt)[Symbol.asyncIterator]();
while (true) {
// Race the next chunk against a per-chunk timeout; a stalled stream
// rejects instead of silently ending as if the response were complete
const result = await Promise.race([
iterator.next(),
new Promise<never>((_, reject) =>
setTimeout(
() => reject(new Error(`Stream stalled: no chunk for ${chunkTimeoutMs}ms`)),
chunkTimeoutMs
)
)
]);
if (result.done) {
break;
}
accumulatedContent += result.value;
yield accumulatedContent;
}
}
Solution 4: Fallback to Non-Streaming
class AdaptivePrompting {
async executeWithFallback(
session: LanguageModelSession,
prompt: string,
preferStreaming = true
): Promise<string> {
if (preferStreaming) {
try {
// Try streaming first, accumulating chunks as they arrive
let result = "";
for await (const chunk of session.promptStreaming(prompt)) {
result += chunk;
}
return result;
} catch (error) {
console.warn("Streaming failed, falling back to non-streaming:", error);
}
}
// Fallback to regular prompt
return await session.prompt(prompt);
}
}
Prevention
- Implement abort handling for user cancellations
- Add timeout logic to detect stalled streams
- Accumulate content to preserve partial results
- Handle errors gracefully mid-stream
- Provide fallback to non-streaming when needed
- Test streaming under various network conditions
Common Error 8: Memory and Resource Errors
Error Manifestation
// Multiple sessions or operations causing:
// - Out of memory errors
// - Browser tab crashes
// - Slow performance
// - Session creation failures
Root Causes
- Session Leaks: Not destroying sessions after use
- Too Many Concurrent Sessions: Exceeding system resources
- Large Context: Processing very large prompts or documents
- Memory Accumulation: Long-running operations without cleanup
- Browser Limitations: Tab memory limits
Diagnostic Steps
class ResourceMonitor {
private sessions = new Set<LanguageModelSession>();
trackSession(session: LanguageModelSession) {
this.sessions.add(session);
console.log(`Active sessions: ${this.sessions.size}`);
}
releaseSession(session: LanguageModelSession) {
session.destroy();
this.sessions.delete(session);
console.log(`Active sessions: ${this.sessions.size}`);
}
async checkMemoryUsage() {
if ('memory' in performance) {
const memory = (performance as any).memory;
console.log('Memory usage:', {
used: `${(memory.usedJSHeapSize / 1024 / 1024).toFixed(2)}MB`,
total: `${(memory.totalJSHeapSize / 1024 / 1024).toFixed(2)}MB`,
limit: `${(memory.jsHeapSizeLimit / 1024 / 1024).toFixed(2)}MB`
});
}
}
cleanup() {
console.log(`Cleaning up ${this.sessions.size} sessions`);
this.sessions.forEach(s => s.destroy());
this.sessions.clear();
}
}
Solutions
Solution 1: Implement Session Lifecycle Management
class SessionManager {
private session: LanguageModelSession | null = null;
private operationCount = 0;
private readonly MAX_OPERATIONS = 100; // Recreate after N operations
async getSession(): Promise<LanguageModelSession> {
// Recreate session periodically to prevent memory buildup
if (this.session && this.operationCount >= this.MAX_OPERATIONS) {
console.log("Recreating session after max operations");
this.destroy();
}
if (!this.session) {
this.session = await LanguageModel.create();
this.operationCount = 0;
}
return this.session;
}
async execute(prompt: string): Promise<string> {
const session = await this.getSession();
this.operationCount++;
return await session.prompt(prompt);
}
destroy() {
if (this.session) {
this.session.destroy();
this.session = null;
this.operationCount = 0;
}
}
}
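Rotation after MAX_OPERATIONS is easy to verify without the browser. The sketch below runs a trimmed copy of the manager against a stub model object (the stub is ours; it is not the real LanguageModel API) and counts how many sessions get created over 250 operations:

```typescript
// Stub standing in for LanguageModel so the rotation logic runs anywhere.
let created = 0;
const FakeModel = {
  async create() {
    created++;
    return {
      async prompt(_text: string) { return "ok"; },
      destroy() {}
    };
  }
};

type FakeSession = Awaited<ReturnType<typeof FakeModel.create>>;

class RotatingSessionManager {
  private session: FakeSession | null = null;
  private operationCount = 0;
  private readonly MAX_OPERATIONS = 100;

  async execute(prompt: string): Promise<string> {
    // Rotate the session once it has served MAX_OPERATIONS prompts
    if (this.session && this.operationCount >= this.MAX_OPERATIONS) {
      this.session.destroy();
      this.session = null;
    }
    if (!this.session) {
      this.session = await FakeModel.create();
      this.operationCount = 0;
    }
    this.operationCount++;
    return this.session.prompt(prompt);
  }
}

const run = (async () => {
  const manager = new RotatingSessionManager();
  for (let i = 0; i < 250; i++) {
    await manager.execute("ping");
  }
  return created;
})();
run.then(total => console.log(`sessions created: ${total}`)); // → sessions created: 3
```

250 operations at 100 per session means two rotations, so exactly three sessions are created.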
Solution 2: Limit Concurrent Operations
class RateLimitedExecutor {
private queue: Array<() => Promise<void>> = [];
private activeCount = 0;
private readonly maxConcurrent = 2;
async execute<T>(operation: () => Promise<T>): Promise<T> {
// Wait if at capacity
while (this.activeCount >= this.maxConcurrent) {
await new Promise(resolve => setTimeout(resolve, 100));
}
this.activeCount++;
try {
return await operation();
} finally {
this.activeCount--;
}
}
}
// Usage
const executor = new RateLimitedExecutor();
// These will execute max 2 at a time
await Promise.all([
executor.execute(() => processTask1()),
executor.execute(() => processTask2()),
executor.execute(() => processTask3()),
executor.execute(() => processTask4())
]);
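Because the limiter is plain TypeScript, its guarantee can be checked without any AI calls. This self-contained sketch reuses the executor from above with dummy 50ms tasks and records the peak number running at once:

```typescript
// Compact copy of the executor above, for a standalone check.
class RateLimitedExecutor {
  private activeCount = 0;
  private readonly maxConcurrent = 2;
  async execute<T>(operation: () => Promise<T>): Promise<T> {
    while (this.activeCount >= this.maxConcurrent) {
      await new Promise(resolve => setTimeout(resolve, 10));
    }
    this.activeCount++;
    try {
      return await operation();
    } finally {
      this.activeCount--;
    }
  }
}

let active = 0;
let peak = 0;
// Dummy task: tracks how many copies run concurrently.
const task = () => new Promise<void>(resolve => {
  active++;
  peak = Math.max(peak, active);
  setTimeout(() => { active--; resolve(); }, 50);
});

const executor = new RateLimitedExecutor();
const run = Promise.all(
  [1, 2, 3, 4, 5].map(() => executor.execute(task))
).then(() => peak);
run.then(p => console.log(`peak concurrency: ${p}`)); // → peak concurrency: 2
```

Five tasks are submitted, but the recorded peak never exceeds the configured limit of two.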
Solution 3: Implement Memory-Efficient Chunking
async function processLargeDataset(
items: string[],
batchSize = 10
): Promise<string[]> {
const results: string[] = [];
// Process in batches to limit memory usage
for (let i = 0; i < items.length; i += batchSize) {
console.log(`Processing batch ${Math.floor(i / batchSize) + 1}`);
const batch = items.slice(i, i + batchSize);
// Create session for batch
const session = await LanguageModel.create();
try {
// Process batch items
for (const item of batch) {
const result = await session.prompt(`Process: ${item}`);
results.push(result);
}
} finally {
// Cleanup immediately after batch
session.destroy();
// Allow garbage collection
await new Promise(resolve => setTimeout(resolve, 100));
}
}
return results;
}
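The slice arithmetic above is where off-by-one bugs creep in, so it helps to isolate it in a pure helper (the helper name is ours, not part of any API):

```typescript
// Split items into consecutive batches of at most `batchSize` elements.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

console.log(JSON.stringify(toBatches([1, 2, 3, 4, 5], 2))); // → [[1,2],[3,4],[5]]
```

A final short batch is preserved rather than dropped, which is exactly what the loop in processLargeDataset relies on.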
Solution 4: Automatic Cleanup on Navigation
class AutoCleanupManager {
private sessions = new Set<LanguageModelSession>();
constructor() {
// Cleanup on page unload
window.addEventListener('beforeunload', () => {
this.cleanup();
});
// Cleanup on navigation
if ('navigation' in window) {
(window as any).navigation.addEventListener('navigate', () => {
this.cleanup();
});
}
}
registerSession(session: LanguageModelSession) {
this.sessions.add(session);
}
cleanup() {
console.log(`Cleaning up ${this.sessions.size} sessions`);
this.sessions.forEach(s => {
try {
s.destroy();
} catch (error) {
console.error("Error destroying session:", error);
}
});
this.sessions.clear();
}
}
Prevention
- Always destroy sessions when done
- Limit concurrent sessions to 2-3 maximum
- Implement cleanup on navigation/unmount
- Monitor memory usage during development
- Use session pooling instead of creating many sessions
- Process large datasets in batches
- Recreate sessions periodically for long-running operations
Advanced Debugging Techniques
Enable Verbose Logging
class DebugLogger {
private enabled = false;
enable() {
this.enabled = true;
}
log(category: string, message: string, data?: any) {
if (!this.enabled) return;
console.log(`[${category}] ${message}`, data || '');
}
async createDebugSession() {
this.log('Session', 'Checking availability...');
const availability = await LanguageModel.availability();
this.log('Session', 'Availability result', availability);
this.log('Session', 'Getting default params...');
const params = await LanguageModel.params();
this.log('Session', 'Default params', params);
this.log('Session', 'Creating session...');
const startTime = Date.now();
try {
const session = await LanguageModel.create({
monitor: (monitor) => {
monitor.addEventListener('downloadprogress', (e) => {
this.log('Download', 'Progress', {
loaded: e.loaded,
total: e.total,
percent: (e.loaded / e.total * 100).toFixed(1)
});
});
}
});
this.log('Session', 'Created successfully', {
duration: Date.now() - startTime
});
return session;
} catch (error) {
this.log('Session', 'Creation failed', {
error: error.message,
duration: Date.now() - startTime
});
throw error;
}
}
}
Chrome DevTools Integration
// Expose debug interface to console
(window as any).__nanoAiDebug = {
async checkStatus() {
console.group('Chrome Nano AI Status');
console.log('API Available:', 'LanguageModel' in window);
if ('LanguageModel' in window) {
const availability = await LanguageModel.availability();
console.log('Availability:', availability);
const params = await LanguageModel.params();
console.log('Default Params:', params);
}
if ('memory' in performance) {
const memory = (performance as any).memory;
console.log('Memory Usage:', {
used: `${(memory.usedJSHeapSize / 1024 / 1024).toFixed(2)}MB`,
limit: `${(memory.jsHeapSizeLimit / 1024 / 1024).toFixed(2)}MB`
});
}
console.groupEnd();
},
async testSession() {
console.log('Creating test session...');
const session = await LanguageModel.create();
console.log('Executing test prompt...');
const response = await session.prompt('Say "test successful"');
console.log('Response:', response);
session.destroy();
console.log('Test completed successfully');
}
};
// Usage in console:
// __nanoAiDebug.checkStatus()
// __nanoAiDebug.testSession()
Performance Profiling
class PerformanceProfiler {
private metrics: Map<string, number[]> = new Map();
async profileOperation<T>(
name: string,
operation: () => Promise<T>
): Promise<T> {
const startTime = performance.now();
try {
const result = await operation();
const duration = performance.now() - startTime;
this.recordMetric(name, duration);
console.log(`${name}: ${duration.toFixed(2)}ms`);
return result;
} catch (error) {
const duration = performance.now() - startTime;
console.error(`${name} failed after ${duration.toFixed(2)}ms:`, error);
throw error;
}
}
private recordMetric(name: string, duration: number) {
if (!this.metrics.has(name)) {
this.metrics.set(name, []);
}
this.metrics.get(name)!.push(duration);
}
getStats(name: string) {
const values = this.metrics.get(name) || [];
if (values.length === 0) return null;
const sorted = [...values].sort((a, b) => a - b);
const sum = values.reduce((a, b) => a + b, 0);
return {
count: values.length,
avg: sum / values.length,
min: sorted[0],
max: sorted[sorted.length - 1],
median: sorted[Math.floor(sorted.length / 2)]
};
}
printReport() {
console.group('Performance Report');
for (const [name, _] of this.metrics) {
const stats = this.getStats(name);
console.log(`${name}:`, stats);
}
console.groupEnd();
}
}
// Usage
const profiler = new PerformanceProfiler();
await profiler.profileOperation('Session Creation', async () => {
return await LanguageModel.create();
});
await profiler.profileOperation('Prompt Execution', async () => {
return await session.prompt('Test');
});
profiler.printReport();
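The summary statistics in getStats are pure arithmetic, so they can be sanity-checked with synthetic timings and no session at all. Note that for even-length samples this median takes the upper-middle element, matching the profiler above:

```typescript
// Same math as PerformanceProfiler.getStats, extracted for a quick check.
function computeStats(values: number[]) {
  const sorted = [...values].sort((a, b) => a - b);
  const sum = values.reduce((a, b) => a + b, 0);
  return {
    count: values.length,
    avg: sum / values.length,
    min: sorted[0],
    max: sorted[sorted.length - 1],
    median: sorted[Math.floor(sorted.length / 2)]
  };
}

const s = computeStats([30, 10, 50, 20, 40]);
console.log(s); // → { count: 5, avg: 30, min: 10, max: 50, median: 30 }
```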
Implementation Best Practices
1. Comprehensive Error Handling Wrapper
class RobustNanoAiService {
private session: LanguageModelSession | null = null;
async initialize(): Promise<boolean> {
try {
// Check API availability
if (!('LanguageModel' in window)) {
console.error('LanguageModel API not available');
return false;
}
// Check model availability
const availability = await LanguageModel.availability();
if (availability === 'unavailable') {
console.error('On-device AI not supported');
return false;
}
if (availability !== 'available') {
console.warn(`Model status: ${availability}`);
return false;
}
// Create session
this.session = await LanguageModel.create();
return true;
} catch (error) {
console.error('Failed to initialize Nano AI:', error);
return false;
}
}
async prompt(text: string): Promise<string | null> {
try {
if (!this.session) {
const initialized = await this.initialize();
if (!initialized) {
return null;
}
}
return await this.session!.prompt(text);
} catch (error) {
console.error('Prompt execution failed:', error);
// Try to recover by recreating the session
try {
this.destroy();
const recovered = await this.initialize();
if (!recovered || !this.session) {
return null;
}
return await this.session.prompt(text);
} catch (retryError) {
console.error('Recovery failed:', retryError);
return null;
}
}
}
destroy() {
if (this.session) {
this.session.destroy();
this.session = null;
}
}
}
2. User-Friendly Error Messages
function getUserFriendlyError(error: Error): string {
const errorMap: Record<string, string> = {
'LanguageModel is not defined':
'Your browser doesn\'t support on-device AI. Please update to Chrome 138+ and enable AI features in chrome://settings/ai',
'NotAllowedError':
'AI features need to be activated. Please click a button or interact with the page first.',
'availability: unavailable':
'Your device doesn\'t support on-device AI. Try using cloud-based AI instead.',
'availability: downloadable':
'AI model needs to be downloaded (a large one-time download).',
'availability: downloading':
'AI model is downloading. This may take a few minutes.',
'Session creation failed':
'Failed to start AI session. Please try again or reload the page.',
'Prompt too long':
'Your request is too long. Please try breaking it into smaller parts.',
'Out of memory':
'Your device is running low on memory. Please close some tabs and try again.'
};
// Find matching error
for (const [key, message] of Object.entries(errorMap)) {
if (error.message.includes(key) || error.name === key) {
return message;
}
}
return 'An unexpected error occurred. Please try again.';
}
// Usage
try {
await aiService.prompt("Your prompt");
} catch (error) {
const userMessage = getUserFriendlyError(error);
showNotification(userMessage);
}
3. Telemetry and Monitoring
class AiTelemetry {
private events: Array<{
type: string;
timestamp: number;
data: any;
}> = [];
recordEvent(type: string, data?: any) {
this.events.push({
type,
timestamp: Date.now(),
data
});
}
async trackSession() {
this.recordEvent('session_start');
try {
const availability = await LanguageModel.availability();
this.recordEvent('availability_check', { availability });
const session = await LanguageModel.create();
this.recordEvent('session_created');
return session;
} catch (error) {
this.recordEvent('session_failed', {
error: error.message,
name: error.name
});
throw error;
}
}
async trackPrompt(session: LanguageModelSession, prompt: string) {
const startTime = Date.now();
this.recordEvent('prompt_start', {
promptLength: prompt.length
});
try {
const response = await session.prompt(prompt);
this.recordEvent('prompt_success', {
duration: Date.now() - startTime,
responseLength: response.length
});
return response;
} catch (error) {
this.recordEvent('prompt_failed', {
duration: Date.now() - startTime,
error: error.message
});
throw error;
}
}
getReport() {
const sessionAttempts = this.events.filter(e =>
e.type === 'session_start'
).length;
const sessionSuccesses = this.events.filter(e =>
e.type === 'session_created'
).length;
const promptAttempts = this.events.filter(e =>
e.type === 'prompt_start'
).length;
const promptSuccesses = this.events.filter(e =>
e.type === 'prompt_success'
).length;
return {
sessionSuccessRate: sessionAttempts > 0
? `${(sessionSuccesses / sessionAttempts * 100).toFixed(1)}%`
: 'n/a',
promptSuccessRate: promptAttempts > 0
? `${(promptSuccesses / promptAttempts * 100).toFixed(1)}%`
: 'n/a',
totalEvents: this.events.length
};
}
}
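The success-rate computation in getReport can be exercised without any AI calls. This trimmed-down version (same event names as AiTelemetry above; the function name is ours) also guards the zero-attempts case, which would otherwise render as "NaN%":

```typescript
interface TelemetryEvent {
  type: string;
  timestamp: number;
}

// Ratio of success events to start events, as a percent string.
function successRate(
  events: TelemetryEvent[],
  startType: string,
  successType: string
): string {
  const attempts = events.filter(e => e.type === startType).length;
  const successes = events.filter(e => e.type === successType).length;
  if (attempts === 0) return "n/a";
  return `${(successes / attempts * 100).toFixed(1)}%`;
}

const events: TelemetryEvent[] = [
  { type: "prompt_start", timestamp: 1 },
  { type: "prompt_success", timestamp: 2 },
  { type: "prompt_start", timestamp: 3 },
  { type: "prompt_failed", timestamp: 4 }
];
console.log(successRate(events, "prompt_start", "prompt_success")); // → 50.0%
```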
Error Prevention Strategies
1. Pre-Flight Checks
class PreFlightChecker {
async runChecks(): Promise<{
passed: boolean;
issues: string[];
}> {
const issues: string[] = [];
// Check 1: Browser support
if (!('LanguageModel' in window)) {
issues.push('Browser does not support LanguageModel API');
}
// Check 2: Chrome version
const chromeVersion = navigator.userAgent.match(/Chrome\/(\d+)/)?.[1];
if (chromeVersion && parseInt(chromeVersion) < 138) {
issues.push(`Chrome version ${chromeVersion} is too old (need 138+)`);
}
// Check 3: Model availability
if ('LanguageModel' in window) {
const availability = await LanguageModel.availability();
if (availability === 'unavailable') {
issues.push('On-device AI not supported on this device');
} else if (availability !== 'available') {
issues.push(`Model not ready (status: ${availability})`);
}
}
// Check 4: Memory
if ('memory' in performance) {
const memory = (performance as any).memory;
const available = memory.jsHeapSizeLimit - memory.usedJSHeapSize;
const availableMB = available / 1024 / 1024;
if (availableMB < 100) {
issues.push(`Low memory available (${availableMB.toFixed(0)}MB)`);
}
}
return {
passed: issues.length === 0,
issues
};
}
}
// Usage
const checker = new PreFlightChecker();
const result = await checker.runChecks();
if (!result.passed) {
console.warn('Pre-flight checks failed:', result.issues);
// Show warning to user or use fallback
}
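Check 2 deserves extra care: other Chromium browsers (Edge, Brave) also carry a Chrome/ token in their user agent, so the parsed number is a hint rather than proof you are running Chrome. The parsing itself is pure and easy to test:

```typescript
// Extract the major Chrome version from a user-agent string, or null.
function chromeMajorVersion(userAgent: string): number | null {
  const match = userAgent.match(/Chrome\/(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

const ua =
  "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 " +
  "(KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36";
console.log(chromeMajorVersion(ua));                              // → 138
console.log(chromeMajorVersion("Mozilla/5.0 ... Firefox/128.0")); // → null
```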
2. Health Monitoring
class HealthMonitor {
private checkInterval: number | null = null;
startMonitoring(intervalMs = 60000) {
this.checkInterval = window.setInterval(async () => {
await this.runHealthCheck();
}, intervalMs);
}
stopMonitoring() {
if (this.checkInterval !== null) {
clearInterval(this.checkInterval);
this.checkInterval = null;
}
}
private async runHealthCheck() {
const health = {
apiAvailable: 'LanguageModel' in window,
availability: null as string | null,
memory: null as any
};
if (health.apiAvailable) {
health.availability = await LanguageModel.availability();
}
if ('memory' in performance) {
const memory = (performance as any).memory;
health.memory = {
used: memory.usedJSHeapSize,
limit: memory.jsHeapSizeLimit,
percent: (memory.usedJSHeapSize / memory.jsHeapSizeLimit * 100).toFixed(1)
};
}
console.log('Health check:', health);
// Alert on issues
if (health.memory && parseFloat(health.memory.percent) > 90) {
console.warn('Memory usage above 90%');
}
return health;
}
}
Testing and Validation
Unit Testing Error Scenarios
// Example test suite using Vitest
import { describe, it, expect, vi } from 'vitest';
describe('NanoAiService Error Handling', () => {
it('should handle API not available', async () => {
// Mock LanguageModel not existing
const originalLanguageModel = (window as any).LanguageModel;
delete (window as any).LanguageModel;
const service = new NanoAiService();
await expect(
service.initialize()
).rejects.toThrow('LanguageModel API not available');
// Restore
(window as any).LanguageModel = originalLanguageModel;
});
it('should handle NotAllowedError', async () => {
// Mock LanguageModel.create to throw NotAllowedError
vi.spyOn(window.LanguageModel, 'create').mockRejectedValue(
Object.assign(new Error('User gesture required'), {
name: 'NotAllowedError'
})
);
const service = new NanoAiService();
await expect(
service.createSession()
).rejects.toThrow('user gesture');
});
it('should handle unavailable status', async () => {
// Mock availability check
vi.spyOn(window.LanguageModel, 'availability')
.mockResolvedValue('unavailable');
const service = new NanoAiService();
const available = await service.checkAvailability();
expect(available).toBe(false);
});
it('should retry on transient failures', async () => {
let attempts = 0;
// Fail twice, then succeed
vi.spyOn(window.LanguageModel, 'create').mockImplementation(async () => {
attempts++;
if (attempts < 3) {
throw new Error('Transient error');
}
return { prompt: vi.fn(), destroy: vi.fn() };
});
const service = new NanoAiService();
const session = await service.createSessionWithRetry(3);
expect(attempts).toBe(3);
expect(session).toBeDefined();
});
});
Integration Testing
// Integration test with real API
async function runIntegrationTests() {
console.group('Integration Tests');
try {
// Test 1: API availability
console.log('Test 1: Checking API availability...');
const available = 'LanguageModel' in window;
console.log(available ? '✓ PASS' : '✗ FAIL');
if (!available) {
console.log('Skipping remaining tests - API not available');
return;
}
// Test 2: Availability status
console.log('Test 2: Checking availability status...');
const status = await LanguageModel.availability();
console.log(`Status: ${status}`);
console.log(status !== 'unavailable' ? '✓ PASS' : '✗ FAIL');
if (status !== 'available') {
console.log('Skipping remaining tests - model not ready');
return;
}
// Test 3: Session creation
console.log('Test 3: Creating session...');
const session = await LanguageModel.create();
console.log('✓ PASS');
// Test 4: Prompt execution
console.log('Test 4: Executing prompt...');
const response = await session.prompt('Say "test"');
console.log(`Response: ${response}`);
console.log(response ? '✓ PASS' : '✗ FAIL');
// Test 5: Cleanup
console.log('Test 5: Cleaning up session...');
session.destroy();
console.log('✓ PASS');
console.log('\n✅ All tests passed');
} catch (error) {
console.error('❌ Test failed:', error);
} finally {
console.groupEnd();
}
}
// Run tests
runIntegrationTests();
Frequently Asked Questions
Q1: Why am I getting "LanguageModel is not defined" even though I have Chrome 138+?
A: This error typically occurs because:
- Built-in AI features are disabled in Chrome settings
- You're in an incompatible context (Service Worker without proper scope)
- Your Chrome installation hasn't fully updated yet
Solution: Navigate to chrome://settings/ai and ensure AI features are enabled, then restart Chrome completely (navigate to chrome://restart rather than just closing and reopening a window).
Q2: How do I fix NotAllowedError when creating sessions?
A: NotAllowedError requires a user gesture. You must create the session inside a direct user event handler (click, keypress, etc.), not after async operations.
Example:
// ✅ Correct
button.onclick = async () => {
const session = await LanguageModel.create(); // In gesture
const data = await fetchData(); // After is OK
};
// ❌ Wrong
button.onclick = async () => {
const data = await fetchData(); // Loses gesture
const session = await LanguageModel.create(); // Fails
};
Q3: The model status is "downloadable" - how do I trigger the download?
A: Call LanguageModel.create() within a user gesture context. This will prompt the user to download the model (if needed) and show progress. The download only needs to happen once per device.
button.onclick = async () => {
const session = await LanguageModel.create({
monitor: (m) => {
m.addEventListener('downloadprogress', (e) => {
console.log(`${(e.loaded/e.total*100).toFixed(1)}%`);
});
}
});
};
Q4: My prompts fail with no clear error message - how do I debug this?
A: Enable detailed logging and check:
- Prompt length (may exceed context window)
- Session validity (not destroyed)
- Special characters or encoding issues
- System resources (memory)
Use the debugging wrapper from the Advanced Debugging section to get detailed logs.
Q5: Can I use Chrome Nano AI in a Service Worker or Web Worker?
A: Currently, the LanguageModel API is primarily designed for use in page contexts and Chrome Extensions. Service Worker support may be limited. For browser automation, use the API from your extension's background script or side panel.
Q6: How many concurrent sessions can I create?
A: There's no hard limit, but practical limits exist based on:
- Device memory (each session uses ~100-300MB)
- Processing power
- Browser stability
Recommendation: Limit to 2-3 concurrent sessions maximum. Use session pooling and reuse for better performance.
Q7: What should I do if availability status is "unavailable"?
A: This means the device doesn't support on-device AI. Your options:
- Fallback to cloud-based LLM providers (OpenAI, Claude, Gemini)
- Use a hybrid approach where complex tasks use cloud APIs
- Inform the user that on-device AI isn't available
Never assume Chrome Nano AI is available; always implement fallbacks.
Q8: How do I handle errors during streaming responses?
A: Implement robust error handling with accumulated content:
let accumulated = "";
try {
for await (const chunk of session.promptStreaming(prompt)) {
accumulated += chunk;
updateUI(accumulated);
}
} catch (error) {
// Return partial result
if (accumulated) {
console.log("Returning partial result");
return accumulated;
}
throw error;
}
Q9: My extension works in development but fails in production - why?
A: Common causes:
- Different Chrome versions (dev vs production users)
- Model not downloaded on user's device
- Missing user gesture handling in production flows
- Resource constraints on user's device
Solution: Add comprehensive availability checks, user-friendly error messages, and fallback strategies. Test on minimum supported Chrome version (138).
Q10: How do I prevent memory leaks with Chrome Nano AI?
A: Follow these practices:
- Always call session.destroy() when done
- Implement cleanup on navigation/unmount
- Limit concurrent sessions
- Recreate sessions periodically for long-running operations
- Monitor memory usage during development
Use the SessionPool pattern from the Memory and Resource Errors section.
Q11: Can I catch and retry specific error types automatically?
A: Yes, implement a retry wrapper with error classification:
async function promptWithAutoRetry(
session: LanguageModelSession,
prompt: string,
maxRetries = 3
): Promise<string> {
const retryableErrors = [
'timeout',
'busy',
'temporary',
'resource'
];
for (let attempt = 1; attempt <= maxRetries; attempt++) {
try {
return await session.prompt(prompt);
} catch (error) {
const isRetryable = retryableErrors.some(keyword =>
error.message.toLowerCase().includes(keyword)
);
if (!isRetryable || attempt === maxRetries) {
throw error;
}
await new Promise(r => setTimeout(r, 1000 * attempt));
}
}
}
Q12: What's the difference between session.prompt() and session.promptStreaming()?
A: Both execute prompts but with different response patterns:
prompt():
- Returns complete response at once
- Simpler to use
- Better for short responses
- No intermediate updates
promptStreaming():
- Returns AsyncIterable for incremental updates
- Better for long responses
- Provides real-time user feedback
- Requires more complex handling
Choose based on your UI needs: streaming for chat interfaces, prompt() for background tasks.
Next Steps
Implementing Production-Ready Error Handling
Now that you understand Chrome Nano AI errors, implement these patterns:
1. Feature Detection Layer
   - Check API availability before use
   - Detect Chrome version and capabilities
   - Provide clear user messaging
2. Graceful Fallback Strategy
   - Design for hybrid AI architecture
   - Implement cloud API fallbacks
   - Handle transitions smoothly
3. Comprehensive Logging
   - Track error patterns
   - Monitor success rates
   - Profile performance
4. User Experience
   - User-friendly error messages
   - Progress indicators for downloads
   - Clear calls-to-action
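The detection-and-fallback decision in the first two patterns can be reduced to one small pure function. The names and the "cloud" fallback are illustrative, with availability strings as returned by recent Chrome builds:

```typescript
type Availability = "unavailable" | "downloadable" | "downloading" | "available";
type Provider = "on-device" | "cloud";

// Decide which backend to use from the detection results alone.
// Anything short of a ready model falls back to cloud for now;
// callers can re-check once a download completes.
function pickProvider(
  apiPresent: boolean,
  availability: Availability | null
): Provider {
  if (!apiPresent || availability !== "available") return "cloud";
  return "on-device";
}

console.log(pickProvider(true, "available"));    // → on-device
console.log(pickProvider(true, "downloadable")); // → cloud
console.log(pickProvider(false, null));          // → cloud
```

Keeping this decision in one place makes the fallback path testable and easy to extend (for example, routing "downloading" to a progress UI instead of straight to cloud).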
Related Resources
Continue your Chrome Nano AI journey:
- Chrome Nano AI: Technical Deep Dive - Complete guide to on-device AI integration
- Multi-Agent Browser Automation - Build sophisticated automation with multiple AI agents
- Privacy-First Automation - Leverage on-device AI for complete privacy
- Flexible LLM Provider Integration - Implement hybrid cloud and local AI
- Natural Language Automation - Control browsers with plain English
Conclusion
Chrome Nano AI's LanguageModel API brings powerful on-device AI capabilities to browser automation, but it introduces unique error scenarios that differ from traditional cloud APIs. Understanding these errors—from NotAllowedError to availability states to resource management—is essential for building robust applications.
The key to successful Chrome Nano AI integration is defensive programming: always check availability, handle errors gracefully, implement fallbacks, and provide clear user feedback. With the patterns and solutions in this guide, you can build production-ready applications that leverage on-device AI while handling errors elegantly.
Remember that Chrome Nano AI is still evolving. Error patterns may change, new status values may be introduced, and capabilities will expand. Build your applications with flexibility in mind, monitor errors in production, and be ready to adapt as the platform matures.
Most importantly, always provide fallback options. On-device AI is powerful when available, but not all users will have it. Design your browser automation workflows to work with or without Chrome Nano AI, and you'll create resilient applications that work for everyone.
Need Help? If you're still experiencing Chrome Nano AI errors after trying these solutions, check the Chrome AI Community Forums or inspect your specific error messages with the debugging tools provided in this guide.
Real-World Implementation: See these patterns in action in Onpiste, a Chrome extension that successfully implements Chrome Nano AI with comprehensive error handling and fallback strategies. Install from Chrome Web Store
