TypeScript SDK

The official TypeScript/JavaScript SDK for Runcrate. Zero runtime dependencies, native fetch, full type safety.

Installation

npm install @runcrate/sdk
Requires Node.js 18+.

Quick Start

import Runcrate from '@runcrate/sdk';

const rc = new Runcrate({ apiKey: 'rc_live_YOUR_API_KEY' });

// Chat completion
const response = await rc.models.chatCompletion({
  model: 'deepseek-ai/DeepSeek-V3',
  messages: [{ role: 'user', content: 'Hello!' }],
});
console.log(response.choices[0].message.content);

// List GPU instances
const instances = await rc.instances.list();
instances.forEach(i => console.log(`${i.name} — ${i.status}`));
You can also set the RUNCRATE_API_KEY environment variable instead of passing apiKey.

Configuration

const rc = new Runcrate({
  apiKey: 'rc_live_...',                       // or RUNCRATE_API_KEY env var
  baseUrl: 'https://runcrate.ai',              // infrastructure API
  inferenceUrl: 'https://api.runcrate.ai',     // model inference API
  timeout: 30,                                 // seconds
  maxRetries: 3,                               // retry on 429/5xx
  customHeaders: {},                           // extra headers
  environment: 'production',                   // optional — target a specific environment
});

Environments

API keys are workspace-scoped. By default, requests target the workspace’s default environment (usually main). Pass environment at client construction to target a different one:
// Default environment
const rc = new Runcrate({ apiKey: 'rc_live_...' });

// Specific environments
const staging = new Runcrate({ apiKey: 'rc_live_...', environment: 'staging' });
const prod    = new Runcrate({ apiKey: 'rc_live_...', environment: 'production' });

// Each client only sees resources in its own environment
await staging.instances.list();   // only staging instances
await prod.instances.list();      // only production instances
Environment-scoped: instances, crates, storage volumes. Workspace-wide: SSH keys, billing, API keys, templates.

Model Inference

All inference methods hit api.runcrate.ai.

Chat Completions

const response = await rc.models.chatCompletion({
  model: 'deepseek-ai/DeepSeek-V3',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  maxTokens: 500,
  temperature: 0.7,
});
console.log(response.choices[0].message.content);

Streaming

const stream = await rc.models.chatCompletion({
  model: 'deepseek-ai/DeepSeek-V3',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  const content = (chunk.choices as any[])?.[0]?.delta?.content ?? '';
  process.stdout.write(content);
}
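The delta-concatenation in the loop above can be wrapped in a small reusable helper. A minimal sketch: the chunk shape (choices[0].delta.content) is taken from the streaming example above, and the helper itself is SDK-agnostic, so it works with any async iterable of chunks.

```typescript
// Collect a streamed chat completion into a single string.
// Accepts any async iterable whose chunks look like the ones above.
async function collectStream(
  stream: AsyncIterable<{ choices?: { delta?: { content?: string } }[] }>,
): Promise<string> {
  let text = '';
  for await (const chunk of stream) {
    // Missing choices or empty deltas contribute nothing.
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return text;
}

// Usage with the SDK (see the example above):
// const full = await collectStream(stream);
```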

Image Generation

const image = await rc.models.generateImage({
  model: 'black-forest-labs/FLUX.1-schnell',
  prompt: 'A futuristic cityscape at sunset',
  width: 1024,
  height: 768,
});

// Save directly — handles base64, data URIs, and URLs automatically
await image.save('output.webp');

Model-Specific Parameters

All inference methods accept extra parameters that get passed through to the provider. Different models support different parameters:
// Seed for reproducibility
const image = await rc.models.generateImage({
  model: 'black-forest-labs/FLUX.1-schnell',
  prompt: 'A cat in space',
  seed: 42,
  numInferenceSteps: 4,
  guidance: 3.5,
});

// Image editing — pass a file path, URL, or base64 string
const edited = await rc.models.generateImage({
  model: 'black-forest-labs/FLUX.1-kontext-pro',
  prompt: 'Make the sky purple',
  image: './photo.png',              // file path (auto base64-encoded)
});

// Image editing with URL
const result = await rc.models.generateImage({
  model: 'Wan-AI/Wan2.6-Image-Edit',
  prompt: 'Remove the background',
  image: 'https://example.com/photo.png',  // URL (passed as-is)
});
The image, startImage, mask, and controlImage fields accept three formats:
  • File path — "./photo.png" (auto-detected, read and base64-encoded)
  • URL — "https://..." (passed through as-is)
  • Base64 string — raw base64 data (passed through as-is)
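The auto-detection described above can be illustrated with a small classifier. This is not the SDK's actual implementation, just a sketch of how the three formats can be told apart; the path heuristic (leading ./, ../, /, or a drive letter) is an assumption.

```typescript
// Classify an image input string into one of the three accepted formats.
// Illustration only — the SDK performs its own detection internally.
function detectImageInput(value: string): 'url' | 'path' | 'base64' {
  if (/^https?:\/\//.test(value)) return 'url';
  // Treat ./, ../, absolute, and Windows drive-letter prefixes as paths.
  if (/^(\.{1,2}\/|\/|[A-Za-z]:\\)/.test(value)) return 'path';
  // Anything else is assumed to be raw base64 data.
  return 'base64';
}
```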

Video Generation

// Submit, poll, and save in one call
const job = await rc.models.generateVideoAndSave('output.mp4', {
  model: 'google/veo-3.0',
  prompt: 'A drone flying over mountains',
  duration: 8,
  onStatus: (j) => console.log(`Status: ${j.status}`),
});
Or manage the lifecycle manually:
const job = await rc.models.generateVideo({
  model: 'google/veo-3.0',
  prompt: 'Ocean waves at sunset',
});

// Poll until done
let status = job;
while (status.status !== 'completed' && status.status !== 'failed') {
  await new Promise(r => setTimeout(r, 5000));
  status = await rc.models.getVideoStatus(job.id);
}

// Download
const videoBuffer = await rc.models.downloadVideo(job.id);
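The manual polling loop above generalizes to a small helper with an interval and a timeout. The helper is plain TypeScript; only the commented usage line assumes the getVideoStatus call shown above. The default interval and timeout values are arbitrary choices, not SDK defaults.

```typescript
// Poll any status-bearing job until it completes or fails.
// Throws if the deadline passes first.
async function pollUntilDone<T extends { status: string }>(
  fetchStatus: () => Promise<T>,
  { intervalMs = 5000, timeoutMs = 15 * 60 * 1000 } = {},
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (true) {
    const s = await fetchStatus();
    if (s.status === 'completed' || s.status === 'failed') return s;
    if (Date.now() > deadline) throw new Error('Polling timed out');
    await new Promise(r => setTimeout(r, intervalMs));
  }
}

// const final = await pollUntilDone(() => rc.models.getVideoStatus(job.id));
```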
Extra parameters (e.g., seed, negative_prompt, image for image-to-video) are passed through:
const job = await rc.models.generateVideo({
  model: 'some-img2vid-model',
  prompt: 'Animate this scene',
  image: './first_frame.png',   // image-to-video
  seed: 42,
  negative_prompt: 'blurry',
});

Text-to-Speech

// One-liner save
await rc.models.textToSpeechAndSave('speech.mp3', {
  model: 'hexgrad/Kokoro-82M',
  input: 'Hello from Runcrate!',
  voice: 'af_heart',
});

// Or get raw Buffer
const audio = await rc.models.textToSpeech({
  model: 'hexgrad/Kokoro-82M',
  input: 'Hello from Runcrate!',
  voice: 'af_heart',
});
Extra parameters like speed or language are passed through:
await rc.models.textToSpeechAndSave('speech.mp3', {
  model: 'hexgrad/Kokoro-82M',
  input: 'Hello!',
  voice: 'af_heart',
  speed: 1.5,           // model-specific
  language: 'en',       // model-specific
});

Transcription

import { readFile } from 'node:fs/promises';

const audioFile = await readFile('recording.wav');
const result = await rc.models.transcribe({
  model: 'openai/whisper-1',
  file: audioFile,
  filename: 'recording.wav',
  language: 'en',              // hint language
  responseFormat: 'srt',       // text, json, srt, vtt
});
console.log(result.text);

Infrastructure Management

GPU Instances

// List instances
const instances = await rc.instances.list();
const filtered = await rc.instances.list({ search: 'training' });

// Browse available GPU types
const types = await rc.instances.listTypes({ gpuType: 'A100' });
types.forEach(t => console.log(`${t.id} — ${t.gpuType} x${t.gpuCount} — $${t.hourlyRate}/hr`));

// Create an instance
const instance = await rc.instances.create({
  name: 'training-run',
  sshKeyId: 'your-key-id',
  gpuType: 'A100',
  gpuCount: 1,
  startupCommands: ['pip install torch'],
});
console.log(`Created: ${instance.id} — ${instance.status}`);

// Check status
const status = await rc.instances.getStatus(instance.id);
console.log(`Status: ${status.status}, IP: ${status.ip}`);

// Terminate
await rc.instances.terminate(instance.id);
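The listTypes call above returns an hourlyRate per GPU type, so a small selector can pick the cheapest match before creating an instance. The selection logic below is plain TypeScript; the commented lines assume the SDK calls shown above.

```typescript
// Pick the lowest-cost entry from an array of GPU types.
// Returns undefined for an empty list.
function cheapest<T extends { hourlyRate: number }>(types: T[]): T | undefined {
  return types.reduce<T | undefined>(
    (best, t) => (best === undefined || t.hourlyRate < best.hourlyRate ? t : best),
    undefined,
  );
}

// const a100s = await rc.instances.listTypes({ gpuType: 'A100' });
// const pick = cheapest(a100s);
```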

SSH Keys

// List keys
const keys = await rc.sshKeys.list();

// Add a key
const key = await rc.sshKeys.create({
  name: 'my-laptop',
  publicKey: 'ssh-ed25519 AAAA...',
});

// Delete a key
await rc.sshKeys.delete(key.id);

Storage

Storage volumes are environment-scoped. Your workspace’s storage provider (AWS S3, Wasabi, or Backblaze B2) must be configured in the dashboard first — the SDK picks it up automatically.
// List available regions (with friendly names)
const regions = await rc.storage.listRegions();
regions.forEach(r => console.log(`${r.name} (${r.provider})`));

// List volumes in the current environment
const volumes = await rc.storage.list();
const filtered = await rc.storage.list({ search: 'datasets' });

// Get a specific volume
const volume = await rc.storage.get('volume-id');

// Create a 100GB volume
const created = await rc.storage.create({
  name: 'datasets',
  sizeGb: 100,
  region: 'us-east-1',
});

// Resize (increase capacity only)
await rc.storage.resize(created.id, 200);

// Delete (refunds unused prepaid days pro-rata)
const result = await rc.storage.delete(created.id);
console.log(`Refunded $${result.refundAmount}`);
Billing: $0.03/GB/month, charged weekly in advance. Deletion refunds the unused portion of the current billing week.
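As a back-of-envelope check on the note above: the $0.03/GB/month rate is from the docs, but the monthly-to-weekly conversion (12/52) and the per-day refund split below are assumptions, not the billing engine's exact formula.

```typescript
// Rough cost sketch for storage volumes. Assumes a weekly charge of
// (monthly rate * 12 / 52) and a simple per-day pro-rata refund —
// both are guesses at the proration, not the documented formula.
const RATE_PER_GB_MONTH = 0.03;

function weeklyCharge(sizeGb: number): number {
  return sizeGb * RATE_PER_GB_MONTH * 12 / 52;
}

function prorataRefund(sizeGb: number, unusedDays: number): number {
  return weeklyCharge(sizeGb) * unusedDays / 7;
}

// A 100GB volume costs about $0.69/week under this approximation.
```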

Billing

// Check balance
const balance = await rc.billing.getBalance();
console.log(`Credits: $${balance.creditsBalance}`);

// List transactions (paginated)
const txns = await rc.billing.listTransactions({ limit: 20 });
txns.data.forEach(t => console.log(`${t.type}: $${t.amount}`));
console.log(`Has more: ${txns.hasMore}`);

// Usage summary
const usage = await rc.billing.usage({ from: '2025-01-01', to: '2025-01-31' });
console.log(`Total cost: $${usage.totalCost}`);

Templates

// List templates with search
const templates = await rc.templates.list({
  search: 'pytorch',
  category: 'ml',
  page: 1,
  pageSize: 10,
});
templates.data.forEach(t => console.log(`${t.name} — ${t.category}`));
console.log(`Total: ${templates.total}`);
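The page, pageSize, and total fields above are enough to walk every page of a paginated list. A sketch, with the list function injected so it works for any page-based endpoint; only the commented line assumes rc.templates.list.

```typescript
// Fetch all pages of a page/pageSize-style paginated endpoint.
// Stops when `total` items have been collected or a page comes back empty.
async function listAll<T>(
  list: (opts: { page: number; pageSize: number }) => Promise<{ data: T[]; total: number }>,
  pageSize = 50,
): Promise<T[]> {
  const all: T[] = [];
  for (let page = 1; ; page++) {
    const res = await list({ page, pageSize });
    all.push(...res.data);
    if (all.length >= res.total || res.data.length === 0) break;
  }
  return all;
}

// const everything = await listAll(opts => rc.templates.list(opts));
```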

Error Handling

All API errors include the actual error message from the server — never a generic fallback.
import {
  NotFoundError,
  AuthenticationError,
  RateLimitError,
  InsufficientCreditsError,
  BadRequestError,
  UnprocessableEntityError,
} from '@runcrate/sdk';

try {
  await rc.instances.get('nonexistent');
} catch (err) {
  if (err instanceof NotFoundError) {
    console.log(`Not found: ${err.message}`);
  } else if (err instanceof AuthenticationError) {
    console.log('Invalid API key');
  } else if (err instanceof RateLimitError) {
    console.log('Rate limited — retry later');
  } else if (err instanceof InsufficientCreditsError) {
    console.log(`Not enough credits: ${err.message}`);
  } else if (err instanceof UnprocessableEntityError) {
    console.log(`Validation error: ${err.message}`);  // e.g. invalid model params
  } else if (err instanceof BadRequestError) {
    console.log(`Bad request: ${err.message}`);
  }
}
Every error exposes:
  • err.message — human-readable error description from the API
  • err.statusCode — HTTP status code
  • err.code — machine-readable error code (e.g. not_found, rate_limited)
  • err.details — additional details (when available)
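Since every error carries a statusCode, an application-level backoff wrapper can key off it. The SDK already retries 429/5xx internally (maxRetries), so this sketch is only useful for longer waits than the built-in retries allow; the attempt counts and delays are illustrative, not SDK behavior.

```typescript
// Retry a call on rate limits (statusCode 429) with exponential backoff.
// Rethrows immediately for any other error, or once attempts run out.
async function withBackoff<T>(
  fn: () => Promise<T>,
  { attempts = 5, baseDelayMs = 1000 } = {},
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { statusCode?: number }).statusCode;
      if (status !== 429 || i + 1 >= attempts) throw err;
      // 1s, 2s, 4s, ... between attempts (with the defaults above).
      await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
}

// const instances = await withBackoff(() => rc.instances.list());
```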

Error Hierarchy

Exception                 Status Code  Description
BadRequestError           400          Invalid parameters
AuthenticationError       401          Invalid or missing API key
InsufficientCreditsError  402          Not enough credits
PermissionDeniedError     403          Insufficient permissions
NotFoundError             404          Resource not found
ConflictError             409          Resource conflict
UnprocessableEntityError  422          Validation error (e.g. invalid model params)
RateLimitError            429          Rate limit exceeded
InternalServerError       500          Server error
ConnectionError           —            Network failure
TimeoutError              —            Request timed out