Building a TypeScript SDK that developers love to use

Learn how rebuilding the same SDK in Go, PHP, and TypeScript revealed the patterns that truly matter.

by Steve McDougall

I've built the same SDK three times: first in Go, then in PHP, and now in TypeScript. Each time, I was convinced I already understood the problem, and each one proved I still had something fundamental to learn.

This isn't one of those "here's the perfect code" articles. Instead, it’s a clear look at what I discovered through trial, error, and iteration: what worked well, what didn't, and why I'd do things differently next time.

What most SDK articles get wrong

Many SDK tutorials present a polished class hierarchy and stop there. They rarely explain why they made those architectural decisions, what trade-offs they considered, or which approaches failed along the way. You don’t see the rough drafts, the rewrites, or the mistakes that shaped the final design.

I made those mistakes myself. The first version of this TypeScript SDK used axios. It worked flawlessly until I ran Vite’s bundle analyzer and saw the impact: roughly 30KB for an HTTP client that could be replaced with ofetch, a 5KB alternative. That wasn’t a robustness choice; it was an unexamined assumption.

So I rebuilt it. And when the class-only design made common workflows harder than they needed to be, I rebuilt it again. Those iterations revealed what a modern, lightweight, and maintainable TypeScript SDK should actually look like.

What building SDKs in three languages taught me

Before writing any code, it’s worth understanding what building this same SDK in Go, PHP, and TypeScript taught me, because those lessons shaped every decision that follows.

Go taught me that explicit is better than clever. You can’t hide complexity in Go. Error handling is unavoidable, dependencies are always visible, and the language pushes you to think about architecture early. Coming back to TypeScript after the Go version, I knew exactly where custom error classes were needed and why factory patterns made sense.

PHP taught me that standards enable ecosystems. Working with PSR-18 and HTTPlug demonstrated the power of standardized HTTP interfaces. When everyone agrees on a contract, swapping implementations becomes effortless. That’s exactly why ofetch’s alignment with the Fetch standard matters.

TypeScript taught me that types are documentation. After Go’s static typing and PHP’s gradual typing, TypeScript feels like the ideal middle ground: strict enough to catch mistakes, flexible enough to avoid boilerplate. Well-designed types make behavior obvious without reading a single line of implementation.

Put these together, and you get a clear picture: explicit error handling + standard interfaces + strong typing = SDK that's actually pleasant to use.

What we're actually building

The Sevalla API manages applications, databases, deployments—the usual infrastructure stack. But here’s what makes this interesting: this isn’t just another walkthrough of wrapping HTTP endpoints in TypeScript classes. That alone would be painfully boring.

What we’re building goes deeper:

  • Why factory functions beat classes for configuration
  • How helper functions can encode real best practices
  • When to use classes vs. functions (and why both deserve a place)
  • Why bundle size matters far more than most developers think
  • How to turn errors into something genuinely useful instead of generic HTTP failures
  • Why Vite is the only tool that makes sense for modern SDK development

The goal is a real SDK—the kind I’d want to use in production. Something that feels simple when you’re getting started and powerful when you need to scale up.

The first big decision: Ditching axios for ofetch

The first major mistake I made was building the initial version of this SDK with axios simply because “everyone uses it.”

The problem is that “everyone uses it” isn’t a good architectural reason for anything. Plenty of people still use jQuery too, but that doesn’t make it a sensible default in 2025.

Version 1.0 shipped with axios. It worked perfectly—until I ran Vite’s build analyzer and saw what it was actually costing me: around 30KB for an HTTP client. In environments where edge functions have size limits and cold starts depend on every kilobyte, that’s a poor trade-off.

There had to be a better option.

Enter ofetch: about 5KB, built directly on top of the Fetch standard, and compatible across Node, Deno, Cloudflare Workers, and the browser. And honestly, its API feels cleaner than axios.

Here’s what convinced me:

  • It throws on non-2xx responses by default. With axios, you end up checking error.response, error.request, or error.message. With ofetch, you catch a clean error and move on.

    try {
      const data = await ofetch("/api/users")
      // Already parsed JSON, no .data property needed
    } catch (error) {
      // It threw because the status wasn't 2xx, clean and simple
    }
    
  • It parses JSON automatically. If the response is JSON, you get the parsed object—no .data accessor, no manual .json() calls.

  • It has built-in retry logic. Three retries with exponential backoff out of the box. With axios, you’re adding plugins or rolling it yourself.

  • It's fully tree-shakeable and ESM-first. This is huge. When Vite builds your library, it can eliminate unused pieces of ofetch. axios, being CommonJS, bundles in its entirety every time.
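To make "exponential backoff" concrete, here's the general shape of the delay schedule. This is an illustrative sketch of the technique, not ofetch's actual internals (its real retry timing and options may differ):

```typescript
// Illustrative only: the delay doubles on each attempt, capped at a maximum.
// This is the general idea behind exponential backoff, not ofetch's code.
function backoffDelay(attempt: number, baseMs = 1000, capMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, capMs)
}

// attempt 0 → 1000ms, attempt 1 → 2000ms, attempt 2 → 4000ms
```

The cap matters: without it, a flaky endpoint would eventually have clients sleeping for minutes between attempts.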

The truth is, axios is comfortable. It’s familiar. But “comfortable” isn’t the same as “optimal.” When you build an SDK that other developers depend on, decisions about dependencies directly affect their bundle sizes, cold start times, and deployment environments.

So I rebuilt it using ofetch—and the SDK improved in every measurable way.

Building with Vite: The only sensible choice

Before getting into the SDK architecture, we need to talk about the build tool. Because if you’re building a TypeScript library in 2025 and you’re not using Vite, you’re making things harder than they need to be.

I’ve built libraries with webpack, Rollup, and esbuild. All of them can work, but Vite is built for the modern JavaScript ecosystem, and it shows. It solves the problems that other tools still make you work around.

Here’s why Vite is the right choice for SDK development:

  • Native ESM support. No complicated configuration, no Babel maze. You write modern JavaScript, and Vite outputs modern JavaScript.
  • Extremely fast builds. Development rebuilds are instant thanks to esbuild, while production builds are optimized by Rollup. You get speed and quality without sacrificing either.
  • Reliable tree-shaking. When users import your SDK, they only bundle the pieces they use. Unused exports disappear automatically—critical for library code.
  • Built-in library mode. No wrestling with webpack’s output settings or a pile of Rollup plugins. build.lib handles ESM and CJS builds with minimal config.

Here’s the vite.config.ts used for this SDK:

import { defineConfig } from "vite"
import { resolve } from "path"
import dts from "vite-plugin-dts"

export default defineConfig({
  build: {
    lib: {
      entry: resolve(__dirname, "src/index.ts"),
      name: "Sevalla",
      formats: ["es", "cjs"],
      fileName: (format) => `sevalla.${format}.js`,
    },
    rollupOptions: {
      external: ["ofetch"],
      output: {
        globals: {
          ofetch: "ofetch",
        },
      },
    },
    sourcemap: true,
    minify: "esbuild",
  },
  plugins: [dts({ rollupTypes: true })],
})

Here’s what each part does:

  • build.lib tells Vite this is a library, not an application, and configures the output formats.

  • formats: ['es', 'cjs'] produces both ESM and CommonJS builds—modern tools use ESM, older Node versions still expect CJS.

  • external: ['ofetch'] marks ofetch as a peer dependency so users install it themselves, avoiding duplicate bundles.

  • vite-plugin-dts generates clean TypeScript declaration files and combines them into a single output.

  • sourcemap: true makes debugging far easier for end users.

Run npm run build, and Vite produces:

dist/
  sevalla.es.js       (5.2 KB)
  sevalla.es.js.map
  sevalla.cjs.js      (5.4 KB)
  sevalla.cjs.js.map
  sevalla.d.ts

Roughly 5KB of SDK code—small, fast, and portable.
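For consumers to resolve those files, the package needs an exports map along these lines. This is a minimal sketch; the paths assume the dist output shown above:

```json
{
  "main": "./dist/sevalla.cjs.js",
  "module": "./dist/sevalla.es.js",
  "types": "./dist/sevalla.d.ts",
  "exports": {
    ".": {
      "types": "./dist/sevalla.d.ts",
      "import": "./dist/sevalla.es.js",
      "require": "./dist/sevalla.cjs.js"
    }
  }
}
```

The conditional `exports` entry is what lets modern bundlers pick the ESM build while older Node tooling falls back to CJS.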

Compare that to webpack, where you’d be maintaining separate configs for dev and prod, bolting on loaders for TypeScript, dealing with CommonJS/ESM compatibility, and configuring an entire pipeline just to emit .d.ts files.

Vite gets out of your way. And when the build tool disappears, you can focus on what actually matters: building a great SDK instead of fighting a configuration file.

Starting with types, not code

Here’s something Go taught me early: if you start writing implementation code before you understand your data structures, you’re setting yourself up for pain.

In Go, you define your structs first. You think about the data flowing through the system. You make the types explicit, then write the code that works with those types. TypeScript lets you follow the same discipline—yet most people skip straight to classes and methods and only retrofit types once errors start piling up.

Don’t do that.

I start every SDK with a types.ts file:

export interface Application {
  id: string
  name: string
  repository_url: string
  branch: string
  status: "pending" | "building" | "running" | "stopped" | "failed"
  url: string
  created_at: string
  updated_at: string
  replicas: number
  plan: "hobby" | "starter" | "pro" | "business" | "enterprise"
  region: "us-central" | "us-east" | "europe-west" | "asia-south"
  port?: number
  ssl_enabled: boolean
  cdn_enabled: boolean
}

Notice the literal types: status: 'pending' | 'building' | 'running' | 'stopped' | 'failed'. Not status: string.

Why? Because when you type app.status ===, your editor shows exactly five valid options. No typos. No guessing. The type system does the guardrail work for you.

This is also why I prefer literal unions over enums. TypeScript enums compile into runtime objects with a strange numeric/string duality. Literal unions stay simple at runtime while offering strong type safety.
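A quick sketch of why literal unions behave well at runtime (the type names here are illustrative, mirroring the Application interface above):

```typescript
// A literal union is just a plain string at runtime, with compile-time guardrails.
type AppStatus = "pending" | "building" | "running" | "stopped" | "failed"

function isTerminal(status: AppStatus): boolean {
  return status === "stopped" || status === "failed"
}

// An exhaustive switch: if a sixth status is added to the union later,
// TypeScript flags this function until the new case is handled.
function describe(status: AppStatus): string {
  switch (status) {
    case "pending":
    case "building":
      return "in progress"
    case "running":
      return "healthy"
    case "stopped":
    case "failed":
      return "needs attention"
  }
}
```

Compare that to an enum, which compiles into a runtime lookup object you now ship in every bundle that touches the type.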

Now compare request vs response types:

export interface CreateApplicationRequest {
  name: string
  repository_url: string
  branch: string
  plan?: Application["plan"] // Optional, has a default
  region?: Application["region"]
  replicas?: number
  port?: number
  build_command?: string
  start_command?: string
  ssl_enabled?: boolean
  cdn_enabled?: boolean
}

Request types and response types are related, but not identical. Request types have optional fields with defaults. Response types have IDs, timestamps, and computed fields. Keeping them separate avoids partial types and confusing conditionals that make TypeScript miserable.

Use type references to stay DRY:

plan?: Application['plan']

This means “use whatever type Application.plan uses.” If you later add a new plan tier, both types stay in sync automatically. No duplication, no drift.

I learned this the hard way when building the PHP SDK. I updated the Application schema to include a new plan tier but forgot to update the request schema. The SDK compiled fine but quietly stopped supporting a valid option. Type references eliminate this entire class of mistakes.

Pagination types follow the same pattern:

export interface PaginationParams {
  page?: number
  per_page?: number
  sort?: string
  order?: "asc" | "desc"
}

export interface PaginatedResponse<T> {
  data: T[]
  meta: {
    current_page: number
    per_page: number
    total: number
    total_pages: number
  }
  links: {
    first: string
    last: string
    next?: string
    prev?: string
  }
}

Every list endpoint returns a PaginatedResponse<Something>. One type, used everywhere, always consistent.
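And because the shape is uniform, one small helper can walk any list endpoint. This is a sketch that assumes only the PaginatedResponse shape above; fetchPage stands in for any list call, such as a page-aware wrapper around sevalla.applications.list:

```typescript
interface PaginatedResponse<T> {
  data: T[]
  meta: {
    current_page: number
    per_page: number
    total: number
    total_pages: number
  }
  links: { first: string; last: string; next?: string; prev?: string }
}

// Yields every item across every page by following meta.total_pages.
async function* paginate<T>(
  fetchPage: (page: number) => Promise<PaginatedResponse<T>>,
): AsyncGenerator<T> {
  let page = 1
  for (;;) {
    const res = await fetchPage(page)
    yield* res.data
    if (page >= res.meta.total_pages) break
    page += 1
  }
}
```

Usage would look like `for await (const app of paginate((p) => sevalla.applications.list({ page: p }))) { ... }`, and it works identically for databases, deployments, or any future list endpoint.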

Starting with types forces you to think about your API's shape before you write the code. It makes the implementation clearer because you know exactly what you're working with.

The factory pattern: Because configuration is hard

Here’s a controversial opinion: most SDK constructors are terrible.

// Don't do this
const client = new ApiClient({
  apiKey: "key",
  baseUrl: "url",
  timeout: 30000,
  retries: 3,
  retryDelay: 1000,
  maxRetryDelay: 30000,
  retryStatusCodes: [429, 500, 502, 503, 504],
  headers: {
    /* ... */
  },
  // 15 more options
})

This falls apart fast because:

  • Configuration is mixed with instantiation
  • Testing requires mocking the entire class
  • Creating multiple clients with different configs becomes messy
  • The constructor turns into a dumping ground for options

A much cleaner solution is a factory function:

export interface SevallaConfig {
  apiKey: string
  baseUrl?: string
  timeout?: number
  retry?: number
  debug?: boolean
}

export function createHttpClient(config: SevallaConfig) {
  const {
    apiKey,
    baseUrl = "https://api.sevalla.com/v1",
    timeout = 30000,
    retry = 3,
    debug = false,
  } = config

  return ofetch.create({
    baseURL: baseUrl,
    timeout,
    retry,

    onRequest({ options }) {
      options.headers = {
        ...options.headers,
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
        Accept: "application/json",
      }

      if (debug) {
        console.log(
          "[Sevalla SDK]",
          options.method,
          options.baseURL + (options.url || ""),
        )
      }
    },

    onResponse({ response, options }) {
      if (debug) {
        console.log(
          "[Sevalla SDK]",
          response.status,
          options.method,
          options.url,
        )
      }
    },

    onResponseError({ response }) {
      const data = response._data as SevallaError

      throw new SevallaApiError(
        data.message || "An error occurred",
        response.status,
        data.code || "UNKNOWN_ERROR",
        data.details,
      )
    },
  })
}

Why this is better:

  • It’s just a function—no new, no this, no prototype surprises
  • Defaults are explicit and easy to see
  • It’s composable: you can wrap, extend, or mock it effortlessly
  • Testing becomes trivial: pass in config, get a client

The factory returns a fully configured ofetch instance. Authentication, retry logic, and error handling are consistent everywhere. Configure it once, and it works across your entire SDK.
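To see why testing gets trivial, consider that any SDK code written against the client just needs something callable. This sketch simplifies the client signature for illustration (the real one is ofetch's), and listApps is a hypothetical stand-in for an SDK method:

```typescript
// Simplified client signature for illustration; the real client is an ofetch instance.
type HttpClient = (
  url: string,
  opts?: { method?: string; query?: unknown },
) => Promise<any>

// Hypothetical SDK code written against the client.
async function listApps(client: HttpClient) {
  return client("/applications", { method: "GET" })
}

// In a test, a hand-rolled stub records calls and returns canned data.
// No mocking library, no class hierarchy to fake out.
const calls: string[] = []
const stub: HttpClient = async (url) => {
  calls.push(url)
  return { data: [] }
}
```

Run `await listApps(stub)`, then assert on `calls` and the returned data. Swapping the stub for the real factory output changes nothing about the calling code.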

Interceptors: The secret sauce

The real power of the factory is in the interceptors. This is where the SDK becomes cohesive instead of repetitive.

Request interceptor adds authentication to every request:

onRequest({ options }) {
  options.headers = {
    ...options.headers,
    'Authorization': `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
    'Accept': 'application/json',
  };
}

Without this, you'd be writing:

await ofetch("/applications", {
  headers: { Authorization: `Bearer ${apiKey}` },
})

...in every single method. Thirty times. And when you need to change how auth works, you get to update thirty methods. Fun!

The interceptor does it once, in one place. Every request gets authenticated. You never forget it. You never typo it.

Debug logging when you need it:

if (debug) {
  console.log("[Sevalla SDK]", options.method, options.url)
}

I added this after losing hours debugging a production issue caused by a wrong base URL. A simple debug: true flag would have caught it immediately.

Now users can enable debug mode:

const sevalla = new Sevalla({
  apiKey: "key",
  debug: true, // See every request and response
})

And they get visibility into what the SDK is doing. This saves so much debugging time.

Error interceptor transforms garbage into gold:

onResponseError({ response }) {
  const data = response._data as SevallaError;

  throw new SevallaApiError(
    data.message || 'An error occurred',
    response.status,
    data.code || 'UNKNOWN_ERROR',
    data.details
  );
}

Without this, your users catch raw HTTP errors and have to dig through .response.data.message or whatever structure your API uses. With this, they catch SevallaApiError instances that have a predictable structure.

It's the difference between:

// Bad
try {
  await fetch("/api/applications")
} catch (error) {
  // What do I even check here? error.response? error.message?
}

And:

// Good
try {
  await sevalla.applications.create(config)
} catch (error) {
  if (error instanceof SevallaApiError) {
    console.log(error.code, error.status, error.details)
  }
}

The error interceptor is you, the SDK author, taking responsibility for your API's error format so your users don't have to.

Custom error classes: Make errors useful

Speaking of errors, let's build that SevallaApiError class properly:

export interface SevallaError {
  message: string
  code: string
  details?: Record<string, unknown>
}

export class SevallaApiError extends Error {
  constructor(
    message: string,
    public readonly status: number,
    public readonly code: string,
    public readonly details?: Record<string, unknown>,
  ) {
    super(message)
    this.name = "SevallaApiError"

    // Maintains proper stack trace in V8
    if (Error.captureStackTrace) {
      Error.captureStackTrace(this, SevallaApiError)
    }
  }

  /**
   * Check if error is a validation error
   */
  isValidationError(): boolean {
    return this.status === 422 || this.code === "VALIDATION_ERROR"
  }

  /**
   * Check if error is a rate limit error
   */
  isRateLimitError(): boolean {
    return this.status === 429 || this.code === "RATE_LIMIT_EXCEEDED"
  }

  /**
   * Check if error is an authentication error
   */
  isAuthError(): boolean {
    return this.status === 401 || this.code === "UNAUTHORIZED"
  }

  /**
   * Get field-specific validation errors
   */
  getValidationErrors(): Record<string, string[]> | undefined {
    if (!this.isValidationError() || !this.details) {
      return undefined
    }
    return this.details as Record<string, string[]>
  }
}

Now your users can write clean error handling:

try {
  await sevalla.applications.create(config)
} catch (error) {
  if (!(error instanceof SevallaApiError)) {
    throw error // Network error, timeout, etc.
  }

  if (error.isValidationError()) {
    const fieldErrors = error.getValidationErrors()
    console.error("Validation failed:", fieldErrors)
    // Show field-specific errors to user
  } else if (error.isRateLimitError()) {
    console.error("Rate limited, backing off...")
    // Implement exponential backoff
  } else if (error.isAuthError()) {
    console.error("Authentication failed")
    // Redirect to login or refresh token
  } else {
    console.error("API error:", error.message)
  }
}

The helper methods make error handling readable. You're not checking magic status codes or error strings; you're asking semantic questions: "Is this a validation error?" "Is this rate limiting?"

This is how error handling should work. Clear categories, useful information, actionable responses.

Resource classes: Namespacing with structure

Once you have a configured HTTP client, you could just export it and call it a day:

export const sevalla = createHttpClient({ apiKey: "key" })

// Usage
await sevalla("/applications", { method: "POST", body: config })

This works, but it’s not great. There’s no structure, no discoverability, and no type-safe surface for requests. Users have to remember URLs and HTTP methods for every operation.

Instead, we introduce resource classes:

export type HttpClient = ReturnType<typeof createHttpClient>

export class ApplicationsResource {
  constructor(private client: HttpClient) {}

  async list(
    params?: PaginationParams,
  ): Promise<PaginatedResponse<Application>> {
    return this.client("/applications", {
      method: "GET",
      query: params,
    })
  }

  async get(id: string): Promise<Application> {
    return this.client(`/applications/${id}`, {
      method: "GET",
    })
  }

  async create(data: CreateApplicationRequest): Promise<Application> {
    return this.client("/applications", {
      method: "POST",
      body: data,
    })
  }

  async update(
    id: string,
    data: Partial<CreateApplicationRequest>,
  ): Promise<Application> {
    return this.client(`/applications/${id}`, {
      method: "PATCH",
      body: data,
    })
  }

  async delete(id: string): Promise<void> {
    return this.client(`/applications/${id}`, {
      method: "DELETE",
    })
  }

  async deploy(id: string): Promise<Deployment> {
    return this.client(`/applications/${id}/deploy`, {
      method: "POST",
    })
  }

  async scale(id: string, replicas: number): Promise<Application> {
    return this.client(`/applications/${id}/scale`, {
      method: "POST",
      body: { replicas },
    })
  }

  async logs(id: string, lines?: number): Promise<{ logs: string }> {
    return this.client(`/applications/${id}/logs`, {
      method: "GET",
      query: lines ? { lines } : undefined,
    })
  }

  async restart(id: string): Promise<Application> {
    return this.client(`/applications/${id}/restart`, {
      method: "POST",
    })
  }

  async rollback(id: string, deploymentId: string): Promise<Deployment> {
    return this.client(`/applications/${id}/rollback`, {
      method: "POST",
      body: { deployment_id: deploymentId },
    })
  }

  async deployments(
    id: string,
    params?: PaginationParams,
  ): Promise<PaginatedResponse<Deployment>> {
    return this.client(`/applications/${id}/deployments`, {
      method: "GET",
      query: params,
    })
  }

  async setEnvironmentVariables(
    id: string,
    variables: Record<string, string>,
  ): Promise<Application> {
    return this.client(`/applications/${id}/environment`, {
      method: "PUT",
      body: { variables },
    })
  }

  async getEnvironmentVariables(id: string): Promise<Record<string, string>> {
    return this.client(`/applications/${id}/environment`, {
      method: "GET",
    })
  }
}

Each method is thin. It knows:

  • The URL
  • The HTTP method
  • The request/response types

No business logic. No clever abstractions. Just well-typed API calls.

So why classes instead of just exporting functions?

I wrestled with this. Functions are simpler on paper: createApplication(), deployApplication(), etc. But classes buy you namespacing and discoverability.

With classes:

sevalla.applications.   // IDE shows: create, deploy, scale, list, get...

Your IDE immediately shows everything you can do with applications: create, deploy, scale, list, get, and so on.

Without classes:

// How do I know what's available? Read the docs?
createApplication()
deployApplication()
scaleApplication()
// Plus all the database functions, deployment functions...

The class groups related methods together, which makes the SDK surface self-documenting.

But keep them thin. The temptation is to add helper methods and business logic to these classes. Don't. Each method should map directly to an API endpoint. The class is a namespace, not a service layer.

The “smart” workflows live somewhere else.

The main SDK class: Assembly required

Now we tie everything together:

export class Sevalla {
  private client: HttpClient

  public readonly applications: ApplicationsResource
  public readonly databases: DatabasesResource
  public readonly deployments: DeploymentsResource

  constructor(config: SevallaConfig) {
    this.client = createHttpClient(config)
    this.applications = new ApplicationsResource(this.client)
    this.databases = new DatabasesResource(this.client)
    this.deployments = new DeploymentsResource(this.client)
  }
}

// Clean export for users
export function createClient(config: SevallaConfig): Sevalla {
  return new Sevalla(config)
}

Simple, clean: one configured client, multiple resources.

The readonly keyword matters:

sevalla.applications = somethingElse // TypeScript error

You'd be surprised how often people try to do weird things if you don't stop them.

I also added a createClient() factory function. Some developers prefer new Sevalla(), others prefer createClient(). Support both. It costs nothing.

Now your users write:

const sevalla = createClient({ apiKey: "key" })
await sevalla.applications.create(config)
await sevalla.databases.create({ name: "db", type: "postgresql" })

Or:

const sevalla = new Sevalla({ apiKey: "key" })
await sevalla.applications.deploy("app-id")

Structured. Predictable. Type-safe. Everything you want in an SDK.

Helper Functions: Where the Magic Happens

Here's where my thinking evolved after building the PHP and Go versions.

Resource classes give you low-level control. But most people don't need low-level control most of the time. They're doing common things: creating an app and deploying it. Setting up a database and connecting an app to it. Deploying with rollback capability.

In Laravel, I got used to facades and services that make these common flows effortless. I wanted the same ergonomics in the SDK.

So I added helper functions in a helpers.ts file:

export async function createAndDeploy(
  sevalla: Sevalla,
  config: CreateApplicationRequest,
): Promise<{ application: Application; deployment: Deployment }> {
  const application = await sevalla.applications.create(config)
  const deployment = await sevalla.applications.deploy(application.id)
  return { application, deployment }
}

Now instead of:

const app = await sevalla.applications.create(config)
const deployment = await sevalla.applications.deploy(app.id)

You write:

const { application, deployment } = await createAndDeploy(sevalla, config)

"That's barely any savings!" you might say. And you'd be right, for this simple example.

But look at provisionFullStack():

export async function provisionFullStack(
  sevalla: Sevalla,
  appConfig: CreateApplicationRequest,
  dbConfig: CreateDatabaseRequest,
): Promise<{
  application: Application
  database: Database
  deployment: Deployment
  credentials: DatabaseCredentials
}> {
  // Create database first
  const database = await sevalla.databases.create(dbConfig)

  // Wait for database to be ready
  let dbStatus = database
  while (dbStatus.status === "provisioning") {
    await new Promise((resolve) => setTimeout(resolve, 5000))
    dbStatus = await sevalla.databases.get(database.id)
  }

  if (dbStatus.status === "failed") {
    throw new Error("Database provisioning failed")
  }

  // Get credentials
  const credentials = await sevalla.databases.getCredentials(database.id)

  // Create application with database connection
  const application = await sevalla.applications.create({
    ...appConfig,
  })

  // Set environment variables
  await sevalla.applications.setEnvironmentVariables(application.id, {
    DATABASE_URL: credentials.connection_string,
    DATABASE_HOST: credentials.host,
    DATABASE_PORT: credentials.port.toString(),
    DATABASE_NAME: credentials.database,
    DATABASE_USER: credentials.username,
    DATABASE_PASSWORD: credentials.password,
  })

  // Deploy
  const deployment = await sevalla.applications.deploy(application.id)

  return { application, database, deployment, credentials }
}

This helper encodes a best practice: create the database first, wait for it to be ready, get the credentials, inject them into the app's environment, then deploy.

Without it, every user has to figure out this workflow. Many will get it wrong (deploy first, then try to add env vars). Some will forget to wait for the database to be ready. Others won't structure the environment variables correctly.

The helper does it right, once, for everyone.

Or look at deployWithRollback():

export interface DeploymentOptions {
  pollInterval?: number
  timeout?: number
  onStatusChange?: (status: string) => void
}

export async function deployWithRollback(
  sevalla: Sevalla,
  applicationId: string,
  options: DeploymentOptions = {},
): Promise<Deployment> {
  const { pollInterval = 5000, timeout = 600000, onStatusChange } = options

  // Get last successful deployment
  const history = await sevalla.applications.deployments(applicationId, {
    per_page: 20,
  })
  const lastSuccessful = history.data.find((d) => d.status === "success")

  // Start deployment
  const deployment = await sevalla.applications.deploy(applicationId)

  // Poll until complete
  let current = deployment
  const startTime = Date.now()

  while (current.status === "pending" || current.status === "building") {
    if (Date.now() - startTime > timeout) {
      throw new Error(`Deployment timeout after ${timeout}ms`)
    }

    await new Promise((resolve) => setTimeout(resolve, pollInterval))

    const latest = await sevalla.deployments.get(current.id)

    if (latest.status !== current.status && onStatusChange) {
      onStatusChange(latest.status)
    }

    current = latest
  }

  // Handle failure
  if (current.status === "failed") {
    if (lastSuccessful) {
      await sevalla.applications.rollback(applicationId, lastSuccessful.id)
      throw new Error(
        `Deployment ${current.id} failed. Rolled back to ${lastSuccessful.id}`,
      )
    }
    throw new Error(
      `Deployment ${current.id} failed. No previous deployment to rollback to.`,
    )
  }

  return current
}

This is production-ready deployment with automatic rollback. Most developers won't implement this correctly on their own. They'll deploy, check once, and move on. They won't handle timeouts. They won't automatically rollback on failure. They won't provide status callbacks.

This is the point of helper functions: they encode best practices.

You're not just wrapping API calls — you're saying "here's the right way to do this thing."

And because they're separate from the resource classes, users can choose:

// Low-level control
await sevalla.applications.deploy("app-id")

// High-level safety
await deployWithRollback(sevalla, "app-id", {
  onStatusChange: (status) => console.log("Status:", status),
})

Both are valid. Both have their place. The SDK supports both.

Real usage: What this looks like

Here’s how the SDK feels in practice.

Simple case — deploy an app:

import { createClient } from "@sevalla/sdk"

const sevalla = createClient({
  apiKey: process.env.SEVALLA_API_KEY!,
})

const app = await sevalla.applications.create({
  name: "my-api",
  repository_url: "https://github.com/user/my-api",
  branch: "main",
  port: 3000,
  plan: "starter",
})

await sevalla.applications.deploy(app.id)

Medium complexity — use a helper:

import { createClient, createAndDeploy } from "@sevalla/sdk"

const sevalla = createClient({ apiKey: process.env.SEVALLA_API_KEY! })

const { application, deployment } = await createAndDeploy(sevalla, {
  name: "my-api",
  repository_url: "https://github.com/user/my-api",
  branch: "main",
  port: 3000,
})

console.log(`Deployed: ${application.url}`)

Production deployment — safe with rollback:

import { createClient, deployWithRollback } from "@sevalla/sdk"

const sevalla = createClient({ apiKey: process.env.SEVALLA_API_KEY! })

try {
  const deployment = await deployWithRollback(sevalla, "app-id", {
    pollInterval: 10000,
    timeout: 600000,
    onStatusChange: (status) => {
      console.log(`Deployment status: ${status}`)
    },
  })

  console.log("✓ Deployed successfully:", deployment.id)
} catch (error) {
  if (error instanceof SevallaApiError) {
    console.error("✗ Deployment failed:", error.message)
  } else {
    console.error("✗ Deployment failed and was rolled back")
  }
}

Full stack — app + database:

import { createClient, provisionFullStack } from "@sevalla/sdk"

const sevalla = createClient({ apiKey: process.env.SEVALLA_API_KEY! })

const { application, database, deployment } = await provisionFullStack(
  sevalla,
  {
    name: "my-app",
    repository_url: "https://github.com/user/app",
    branch: "main",
    port: 3000,
  },
  {
    name: "my-db",
    type: "postgresql",
    version: "15",
    size: "small",
  },
)

console.log("App:", application.url)
console.log("Database:", database.id)
console.log("Deployment:", deployment.status)

The SDK scales with you. Start simple, go deeper when you need to.

Package structure and exports

Here's the src/ structure that gives maximum clarity for contributors and maximum tree-shaking for consumers:

src/
  index.ts          # Main exports
  client.ts         # createHttpClient factory
  sevalla.ts        # Sevalla class
  types.ts          # All TypeScript interfaces
  errors.ts         # SevallaApiError class
  resources/
    applications.ts
    databases.ts
    deployments.ts
  helpers/
    deployment.ts   # deployWithRollback, etc.
    provisioning.ts # provisionFullStack, etc.

And my index.ts:

// Core exports
export { Sevalla, createClient } from "./sevalla"
export { createHttpClient } from "./client"
export { SevallaApiError } from "./errors"

// Types
export type {
  SevallaConfig,
  Application,
  Database,
  Deployment,
  CreateApplicationRequest,
  CreateDatabaseRequest,
  PaginationParams,
  PaginatedResponse,
} from "./types"

// Resources (for advanced use cases)
export { ApplicationsResource } from "./resources/applications"
export { DatabasesResource } from "./resources/databases"
export { DeploymentsResource } from "./resources/deployments"

// Helpers
export {
  createAndDeploy,
  deployWithRollback,
  provisionFullStack,
} from "./helpers"

This structure gives users:

  1. The main entry point: createClient() or new Sevalla()
  2. All the types they need for TypeScript
  3. Helper functions for common workflows
  4. Advanced exports (like individual resources), if they need them

Because everything is ESM and properly exported, Vite's tree-shaking eliminates unused code. If a user only imports createClient and createAndDeploy, they don't get the entire DatabasesResource class in their bundle.

Your package.json should look like this:

{
  "name": "@sevalla/sdk",
  "version": "1.0.0",
  "type": "module",
  "main": "./dist/sevalla.cjs.js",
  "module": "./dist/sevalla.es.js",
  "types": "./dist/sevalla.d.ts",
  "exports": {
    ".": {
      "types": "./dist/sevalla.d.ts",
      "import": "./dist/sevalla.es.js",
      "require": "./dist/sevalla.cjs.js"
    }
  },
  "files": ["dist"],
  "scripts": {
    "build": "vite build",
    "dev": "vite build --watch",
    "test": "vitest",
    "typecheck": "tsc --noEmit"
  },
  "peerDependencies": {
    "ofetch": "^1.3.0"
  },
  "devDependencies": {
    "@types/node": "^20.10.0",
    "ofetch": "^1.3.0",
    "typescript": "^5.3.0",
    "vite": "^5.0.0",
    "vite-plugin-dts": "^3.7.0",
    "vitest": "^1.0.0"
  }
}

Key points:

  • "type": "module" makes Node treat .js files as ESM
  • exports field provides proper conditional exports
  • types field points to the declaration file
  • peerDependencies for ofetch - users install it themselves
  • files array ensures only dist/ is published

This provides users with a modern ESM experience while maintaining compatibility with CommonJS for older Node versions.
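For completeness, here's a minimal vite.config.ts in library mode that would produce the dual ESM/CJS output described above. The entry path and dts options are assumptions; adjust them to your project:

```typescript
// vite.config.ts: library-mode build producing the ES and CJS bundles
// referenced in package.json. Assumes src/index.ts is the entry point.
import { defineConfig } from "vite"
import dts from "vite-plugin-dts"

export default defineConfig({
  plugins: [
    // Emit a single rolled-up .d.ts alongside the JS bundles
    dts({ rollupTypes: true }),
  ],
  build: {
    lib: {
      entry: "src/index.ts",
      name: "Sevalla",
      // Matches "./dist/sevalla.es.js" and "./dist/sevalla.cjs.js"
      fileName: (format) => `sevalla.${format}.js`,
      formats: ["es", "cjs"],
    },
    rollupOptions: {
      // Keep the peer dependency out of the bundle; users install ofetch
      external: ["ofetch"],
    },
  },
})
```

Marking ofetch as external is what keeps the bundle at 5KB: the peer dependency is resolved from the consumer's node_modules instead of being baked in.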

Testing your SDK

I usually write tests after the first working version, and they immediately pay off. The goal is simple: test the behavior of the HTTP layer and the helper logic—not the internals of fetch or the API itself.

Here's how I test SDKs:

import { describe, it, expect, vi, beforeEach } from "vitest"
import { createHttpClient } from "../src/client"
import { SevallaApiError } from "../src/errors"

describe("HTTP Client", () => {
  it("adds authentication header", async () => {
    const client = createHttpClient({
      apiKey: "test-key",
      baseUrl: "https://api.test.com",
    })

    // Minimal fetch stub: ofetch reads the content-type header, then the body
    global.fetch = vi.fn().mockResolvedValue({
      ok: true,
      status: 200,
      headers: new Headers({ "content-type": "application/json" }),
      text: async () => JSON.stringify({ id: "123" }),
      json: async () => ({ id: "123" }),
    })

    await client("/test")

    expect(global.fetch).toHaveBeenCalledWith(
      expect.any(String),
      expect.objectContaining({
        headers: expect.objectContaining({
          Authorization: "Bearer test-key",
        }),
      }),
    )
  })

  it("transforms API errors", async () => {
    const client = createHttpClient({
      apiKey: "test-key",
    })

    global.fetch = vi.fn().mockResolvedValue({
      ok: false,
      status: 422,
      headers: new Headers({ "content-type": "application/json" }),
      text: async () =>
        JSON.stringify({
          message: "Validation failed",
          code: "VALIDATION_ERROR",
          details: { name: ["Name is required"] },
        }),
    })

    await expect(client("/test")).rejects.toThrow(SevallaApiError)
  })
})

The key is mocking at the HTTP level, not the SDK level. You want to test that your interceptors work, that errors transform correctly, that auth headers get added.

For helper functions:

import { deployWithRollback } from "../src/helpers"

describe("deployWithRollback", () => {
  it("rolls back on deployment failure", async () => {
    const mockSevalla = {
      applications: {
        deploy: vi.fn().mockResolvedValue({ id: "dep-1", status: "pending" }),
        deployments: vi.fn().mockResolvedValue({
          data: [{ id: "dep-old", status: "success" }],
        }),
        rollback: vi.fn().mockResolvedValue({ id: "dep-old" }),
      },
      deployments: {
        get: vi.fn().mockResolvedValue({ id: "dep-1", status: "failed" }),
      },
    } as any

    await expect(deployWithRollback(mockSevalla, "app-1")).rejects.toThrow(
      /rolled back/i,
    )

    expect(mockSevalla.applications.rollback).toHaveBeenCalledWith(
      "app-1",
      "dep-old",
    )
  })
})

Mock the entire SDK, test the helper logic. You're not testing the HTTP layer here; you're testing the business logic.

What I'd do differently next time

Building this SDK three times taught me a lot, but I still made mistakes. Here’s what I’d improve in the next iteration:

1. Better rate limit handling

ofetch can retry automatically, but it doesn't honor Retry-After headers out of the box. I should have added something like:

onResponseError({ response }) {
  if (response.status === 429) {
    const retryAfter = response.headers.get("Retry-After")
    if (retryAfter) {
      const delay = parseInt(retryAfter, 10) * 1000
      // Store this delay and use it in retry logic
    }
  }
}
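The header parsing itself is straightforward. Here's a sketch of a standalone helper (the name retryAfterMs is mine, not part of the SDK) that handles both the delta-seconds and HTTP-date forms of Retry-After:

```typescript
// Hypothetical helper: turn a Retry-After header into a delay in milliseconds.
// The header can be delta-seconds ("120") or an HTTP date.
function retryAfterMs(header: string | null, fallbackMs = 1000): number {
  if (!header) return fallbackMs

  // Delta-seconds form, e.g. "Retry-After: 2"
  const seconds = Number(header)
  if (!Number.isNaN(seconds)) return Math.max(0, seconds * 1000)

  // HTTP-date form, e.g. "Retry-After: Wed, 21 Oct 2026 07:28:00 GMT"
  const date = Date.parse(header)
  if (!Number.isNaN(date)) return Math.max(0, date - Date.now())

  return fallbackMs
}
```

The computed delay could then feed ofetch's retryDelay option, or a hand-rolled retry loop around the client.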

2. Add debug mode earlier

The debug: true option saved my ass so many times during development. I should have added it from day one instead of using console.log everywhere.
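A minimal version is cheap to add from the start. This sketch (the names are hypothetical, not the SDK's actual API) shows the pattern: gate all logging behind one flag, with an injectable sink so tests can capture output:

```typescript
// Hypothetical debug logger: one flag gates every log line, and the sink is
// injectable so tests (or users) can redirect output away from console.log.
function createLogger(
  debug: boolean,
  sink: (message: string) => void = console.log,
) {
  return (message: string): void => {
    if (debug) sink(`[sevalla] ${message}`)
  }
}

// In the client factory, every interceptor can then share one logger:
// const log = createLogger(config.debug ?? false)
// log(`GET ${url}`)
```

Because the flag lives in one place, turning it off for production is a single config change rather than a hunt for stray console.log calls.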

3. Add pagination helpers

Most list endpoints return paginated results. I should have added helpers for iterating through pages:

export async function* paginateAll<T>(
  fetcher: (params: PaginationParams) => Promise<PaginatedResponse<T>>,
) {
  let page = 1
  let hasMore = true

  while (hasMore) {
    const response = await fetcher({ page, per_page: 100 })

    for (const item of response.data) {
      yield item
    }

    hasMore = response.meta.current_page < response.meta.total_pages
    page++
  }
}

// Usage: wrap the method in an arrow so `this` stays bound to the resource
for await (const app of paginateAll((params) => sevalla.applications.list(params))) {
  console.log(app.name)
}

4. Version types separately

Right now, types and implementation live together. If the API changes, I have to bump the entire SDK version. Better: version the types separately so users can upgrade types without upgrading the SDK.

5. Add request/response hooks

Let users inject their own logic:

const sevalla = createClient({
  apiKey: "key",
  hooks: {
    beforeRequest: (options) => {
      // Custom logic before each request
    },
    afterResponse: (response) => {
      // Custom logic after each response
    },
  },
})

But you know what? That's fine. Perfect is the enemy of shipped. The SDK works well, solves real problems, and I can improve it later.

The cross-language perspective changed everything

Here's the thing: I wouldn't have built this TypeScript SDK this way if I hadn't built the Go and PHP versions first.

Go showed me factories work better than classes for configuration. In Go, you don't have constructors — you have factory functions. This forced me to think about configuration differently, and when I came back to TypeScript, I brought that pattern with me.

PHP showed me the value of standard interfaces. PSR-18's ClientInterface means any HTTP client can be swapped in. ofetch's alignment with the Fetch standard achieves the same thing - it's not inventing new concepts, it's embracing existing ones.

TypeScript showed me that types can replace documentation. In Go, you read godoc. In PHP, you read docblocks. In TypeScript, you hover over a function and see its signature. Good types are self-documenting.

If I'd only built this in TypeScript, I'd probably have made a pure class-based SDK with axios, thrown in some promises, and called it done. It would work, but it wouldn't be as good.

Building the same thing three times forces you to separate essential complexity from accidental complexity. The essential part (wrapping an HTTP API) stays the same. The accidental part (language-specific patterns, library choices, bundling concerns) changes every time.

When you rebuild, you keep the essential and improve the accidental.

Why Vite makes all of this possible

I want to come back to the build tool for a second, because it's easy to underestimate how much your build tool affects your SDK's quality.

With webpack, I'd be fighting configuration. Different configs for development and production. Plugins for TypeScript, plugins for declaration files, plugins for tree-shaking. And at the end of it all, I'd probably still have a bigger bundle than I wanted.

With Rollup directly, I'd have more control but more complexity. I'd need to configure every plugin myself, manage the build pipeline, handle type generation separately.

Vite just works. It's opinionated in the right ways:

  • ESM by default - the web platform's standard
  • esbuild for speed - development builds are instant
  • Rollup for production - optimized, tree-shakeable output
  • TypeScript support - just works, no configuration
  • Plugin ecosystem - easy to extend when needed

When your build tool gets out of your way, you can focus on building a good SDK instead of fighting your tooling.

And when you run vite build, you get exactly what you want: a small, optimized, tree-shakeable library that works everywhere.

Bundle size: The number that matters

Let me hammer this point home one more time: bundle size matters.

"But Steve," you might say, "30KB isn't that much."

Here's why you're wrong:

  • Edge functions have size limits. Cloudflare Workers caps at 1MB for the free tier. Vercel Edge Functions have similar limits. Every kilobyte in your dependencies is a kilobyte you can't use for your code.
  • Cold starts are real. Larger bundles take longer to load and parse. In serverless environments, this directly affects your P99 latencies. Users notice 100ms differences.
  • Mobile users exist. That SDK you bundle in your web app? Mobile users download it over cellular. Every kilobyte costs them real money.
  • Bundle budget is limited. Most teams have bundle size budgets. If your SDK is 30KB and theirs is 500KB, you just ate 6% of their budget. Make it count.
  • Composition compounds. One 30KB dependency doesn't seem bad. Ten of them and you're at 300KB. Choose lightweight dependencies, and your users can compose more freely.

Let me show you the Vite build output for this SDK:

$ npm run build

vite v5.0.0 building for production...
✓ 23 modules transformed.
dist/sevalla.es.js    5.23 kB │ gzip: 2.01 kB
dist/sevalla.cjs.js   5.41 kB │ gzip: 2.08 kB
dist/sevalla.d.ts     2.15 kB
✓ built in 234ms

5KB. That's the entire SDK. Not 30KB for a single dependency — 5KB for the complete SDK including all resources, all helpers, all error handling.

And when you import just what you need:

import { createClient, createAndDeploy } from "@sevalla/sdk"

Tree-shaking reduces it even further. You might end up with 2-3KB in your bundle.

This is only possible because:

  1. ofetch is tiny (5KB vs axios's 30KB)
  2. We're ESM-first (tree-shaking works)
  3. Vite optimizes automatically (no manual intervention)
  4. We don't bundle dependencies (ofetch is a peer dependency)

Choose your dependencies carefully. Choose your build tool carefully. Your users will thank you.

Wrapping up

Let me leave you with the principles that made this SDK work:

1. Start with types. Good types tell you what a function does without reading the implementation. Define your data structures first, then write the code that operates on them.
2. Factories beat constructors for configuration. They're more testable, more composable, and clearer about defaults.
3. Interceptors are your friends. Put authentication, error handling, and retry logic in one place. Every request benefits.
4. Custom errors are worth it. SevallaApiError with helper methods beats raw HTTP errors every time.
5. Classes for namespacing, functions for helpers. Give users both low-level control and high-level convenience. They'll use both.
6. Use Vite. Modern build tool for modern libraries. Fast, simple, works everywhere.
7. Choose lightweight dependencies. ofetch over axios. Standards over custom solutions. Your users' bundle sizes depend on your choices.
8. Helper functions encode best practices. Don't just wrap API calls - show users the right way to do things.
9. Debug mode from day one. You'll need it. Your users will need it. Add debug: true early.
10. Build it, ship it, improve it. You won't get it right the first time. That's fine. Ship something useful, learn from usage, iterate.

This SDK started with axios and pure classes. It evolved to ofetch, Vite, and hybrid patterns. It got better by being rebuilt.

If you're building an SDK, don't aim for perfection. Aim for useful. Ship it. Learn from it. Improve it.

And for the love of all that is holy, check your bundle size.

The complete SDK is available on GitHub if you want to dig into the code or see how this plays out in other languages.

Now go build an SDK that developers actually enjoy using.
