# UseBetterDev — Complete Documentation

> Open-source TypeScript libraries for building production-ready SaaS applications.

This file contains the complete documentation for all products. Use it for single-fetch AI consumption.

---

# General

## Core Concepts

Every SaaS application needs the same infrastructure: tenancy, audit logs, webhooks, transactional email. These are hard problems — row-level security policies, change-data capture, reliable delivery guarantees — and most teams rebuild them from scratch for every project.

UseBetter provides production-grade implementations for each of these concerns. Every product follows the same API pattern, runs in your database, and plugs into your existing ORM and framework. Learn one product and the rest feel immediately familiar.

## Same pattern, every product

Every UseBetter product starts with a `better*()` factory function. You pass in your database adapter and configuration, wire up middleware, and your application code stays clean.

**Tenant:**

```ts title="src/tenant.ts"
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { betterTenant } from "@usebetterdev/tenant";
import { drizzleDatabase } from "@usebetterdev/tenant/drizzle";
import { createHonoMiddleware } from "@usebetterdev/tenant/hono";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

const tenant = betterTenant({
  database: drizzleDatabase(db),
  tenantResolver: { header: "x-tenant-id" },
});

app.use("/api/*", createHonoMiddleware(tenant));

// Every query is now scoped to the current tenant —
// Postgres RLS enforces isolation at the database level
app.get("/api/projects", async (c) => {
  const projects = await tenant.getDatabase()
    .select().from(projectsTable);
  return c.json(projects);
});
```

**Audit:**

```ts title="src/audit.ts"
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { betterAudit } from "@usebetterdev/audit";
import {
  drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "orders"],
});

// Wrap your database — insert/update/delete are captured automatically
const auditedDb = withAuditProxy(db, audit.captureLog);

await auditedDb.insert(usersTable).values({ name: "Alice" });
// → audit_logs entry: INSERT on users, after: { name: "Alice" }
```

The structure is the same in both: create an instance with `better*()`, pass a database adapter, and plug it into your app. Tenant uses middleware for request scoping. Audit uses a database proxy for automatic capture. The underlying pattern — configure once, then use your ORM normally — is identical.

## Your database, not a service

UseBetter products store everything in your own PostgreSQL database. There is no external service, no data leaving your infrastructure, and no vendor dashboard between you and your data.

Each product ships a CLI that generates migration SQL for your ORM. You review the SQL, apply it with your existing migration tooling, and verify the setup:

**Tenant:**

```bash
# Generate RLS policies and triggers
npx @usebetterdev/tenant-cli migrate -o ./migrations/rls

# Run your ORM's migration tool to apply the SQL, then verify setup
npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL
```

**Audit:**

```bash
# Preview the audit_logs table migration
npx @usebetterdev/audit-cli migrate --dry-run

# Run your ORM's migration tool to apply the SQL, then verify setup
npx @usebetterdev/audit-cli check --database-url $DATABASE_URL
```

Because the data lives in your database, you can inspect it with plain SQL, back it up with your existing tools, and query it from any application that has database access:

```sql
-- Who modified the users table in the last hour?
SELECT actor_id, operation, record_id, created_at
FROM audit_logs
WHERE table_name = 'users'
  AND created_at > now() - interval '1 hour'
ORDER BY created_at DESC;
```

## Swap anything

Every product is built from independent layers. Each layer has a single responsibility, and you can swap any layer without affecting the others:

| Layer | Responsibility | Examples |
|-------|---------------|----------|
| **Core** | Types, context, adapter contracts. Zero runtime dependencies. | `tenant-core`, `audit-core` |
| **ORM adapters** | Implement the core contract for a specific ORM. | `tenant-drizzle`, `tenant-prisma`, `audit-drizzle` |
| **Framework adapters** | Middleware for web frameworks. | `tenant-hono`, `tenant-express`, `tenant-next` |
| **CLI** | Migrations, health checks, scaffolding. | `tenant-cli`, `audit-cli` |
| **Main umbrella** | Re-exports everything via subpath imports. | `@usebetterdev/tenant`, `@usebetterdev/audit` |

Dependencies flow inward — framework adapters depend on core, but core never depends on adapters. This means switching frameworks is a one-line change:

**Hono:**

```ts
import { createHonoMiddleware } from "@usebetterdev/tenant/hono";

app.use("/api/*", createHonoMiddleware(tenant));
```

**Express:**

```ts
import { createExpressMiddleware } from "@usebetterdev/tenant/express";

app.use("/api", createExpressMiddleware(tenant));
```

**Next.js:**

```ts
import { withTenant } from "@usebetterdev/tenant/next";

export const GET = withTenant(tenant, async (req) => {
  const db = tenant.getDatabase();
  return Response.json(await db.select().from(projectsTable));
});
```

Your `betterTenant()` configuration, database adapter, and application logic stay exactly the same — only the framework import changes.

## Type-safe end to end

All packages are written in strict TypeScript with `noUncheckedIndexedAccess` and `exactOptionalPropertyTypes`. Types flow from your schema through the adapter to your application code.

When you call `tenant.getDatabase()`, you get back a fully typed Drizzle or Prisma client scoped to the current tenant. When you call `audit.query()`, the query builder returns typed results. Configuration errors are caught at compile time, not at runtime:

```ts
const tenant = betterTenant({
  database: drizzleDatabase(db),
  tenantResolver: { header: "x-tenant-id" },
});

// Fully typed — db is your Drizzle client with all table types preserved
const db = tenant.getDatabase();
const projects = await db.select().from(projectsTable);
//    ^? { id: number; name: string; tenantId: string }[]

// Query builder is typed too
const logs = await audit.query()
  .resource("users")
  .operation("DELETE")
  .since("24h")
  .list();
//    ^? { entries: AuditLogEntry[] }
```

No `any`, no type assertions, no casting. If your schema changes, the compiler tells you everywhere that needs updating.

## Next steps

Pick your starting point:

- [Tenant — Getting Started](https://docs.usebetter.dev/tenant/getting-started/) — request-scoped multi-tenancy with Postgres RLS
- [Audit — Introduction](https://docs.usebetter.dev/audit/introduction/) — automatic mutation logging for compliance and debugging

---

# Audit

> Auto-capture every database mutation with actor tracking, before/after snapshots, and compliance tagging — stored in your own database.

## Introduction

> **Coming soon:** UseBetter Audit is under active development. Follow the project on GitHub for updates.

UseBetter Audit is an open-source library that adds compliance-ready audit logging to your TypeScript application without touching your business logic. It intercepts every INSERT, UPDATE, and DELETE through your ORM, tags each entry with the current actor, and stores everything in your own database — no external service, no per-event pricing, no data leaving your infrastructure.

## Key features

- **ORM auto-capture** — a transparent proxy (Drizzle) or extension (Prisma) intercepts every mutation.
  No `captureLog()` calls scattered through your code.
- **Actor tracking** — framework middleware extracts the current user from each request (JWT, header, or cookie) and stores it in `AsyncLocalStorage`. Every audit entry is tagged automatically, with no explicit passing required. By default, if extraction fails the request proceeds without an actor — switch to fail-closed via `onError` if attribution is mandatory for your compliance requirements.
- **Before/after snapshots** — each entry records the full record state before and after the mutation as typed `beforeData` and `afterData` snapshots.
- **Enrichment** — attach human-readable labels, severity levels, and dynamic descriptions to specific operations. Configured once, applied everywhere.
- **Compliance-ready** — tag operations with `gdpr`, `soc2`, `hipaa`, and other compliance frameworks. Redact sensitive columns from snapshots on a per-table basis.
- **Your own database** — audit logs are stored in an `audit_logs` table alongside your application data. Postgres, MySQL, and SQLite are all supported.
- **Queryable history** — filter by actor, resource, operation, or time range via `audit.query()` or the CLI.
- **Plugin-driven** — extend with custom enrichers, exporters, and retention policies.

## How it works

1. A request arrives. Framework middleware (Hono, Express, or Next.js) extracts the actor ID from the `Authorization` header or a custom source and stores it in `AsyncLocalStorage`.
2. Your handler runs a mutation — `db.insert(users).values(...)` or `prisma.user.create(...)` — against the audited client.
3. The ORM proxy or extension intercepts the mutation, reads the actor from `AsyncLocalStorage`, captures before/after state, and writes a typed entry to `audit_logs`.
4. Audit entries are queryable immediately via `audit.query()` or the audit CLI export command.
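
The propagation in steps 1–3 relies on Node's `AsyncLocalStorage`. Here is a minimal mechanism sketch — the `withActor` and `captureEntry` names are illustrative stand-ins, not the library's actual API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

// Holds the actor for the current async call chain
const actorStore = new AsyncLocalStorage<{ actorId: string }>();

// What the framework middleware does conceptually: bind the extracted
// actor to the scope in which the request handler runs
function withActor<T>(actorId: string, handler: () => T): T {
  return actorStore.run({ actorId }, handler);
}

// What the ORM proxy does conceptually: read the actor without it
// being passed through any function arguments
function captureEntry(tableName: string, operation: string) {
  const actorId = actorStore.getStore()?.actorId ?? null;
  return { tableName, operation, actorId };
}

const entry = withActor("user-42", () => captureEntry("users", "INSERT"));
// → { tableName: "users", operation: "INSERT", actorId: "user-42" }
```

Because the store follows the async call chain rather than the HTTP request object, the same mechanism keeps working inside `await`ed database calls — and it is why background jobs, which have no request, need the actor set explicitly.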

Because capture happens at the ORM layer, not the HTTP layer, it works in background jobs and cron tasks too — just set the actor explicitly with [`setAuditContext()`](https://docs.usebetter.dev/audit/guides/actor-context/).

## Library, not a service

Most audit logging solutions are SaaS products: you send events to their API, pay per event, and your data lives on their servers. UseBetter Audit is a library:

- **No external API calls** — mutations are captured in-process and written directly to your database.
- **No vendor lock-in** — your audit data is in a plain SQL table you own. Query it with any tool.
- **No per-event pricing** — capture as much as you want; your only cost is storage.
- **No schema surprises** — the `audit_logs` table is generated by the CLI and lives in your migrations. You can inspect and extend it.

## Architecture

| Layer | Package | Role |
|---|---|---|
| **Core** | `@usebetterdev/audit-core` | Event model, adapter contract, enrichment config, query API. Zero runtime deps. |
| **ORM adapters** | `@usebetterdev/audit-drizzle`, `@usebetterdev/audit-prisma` | Intercept mutations, write `audit_logs`, expose the `auditLogs` table schema. |
| **Framework adapters** | `audit-hono`, `audit-express`, `audit-next` | Middleware that extracts the actor per request and stores it in context. |
| **CLI** | `@usebetterdev/audit-cli` | Migrations, health check, stats, export, purge. |
| **Umbrella** | `@usebetterdev/audit` | Single install, subpath exports for all adapters. |

You install the umbrella package (`@usebetterdev/audit`) and import adapters via subpath exports like `@usebetterdev/audit/drizzle` and `@usebetterdev/audit/hono`.

## Next steps

- [Installation](https://docs.usebetter.dev/audit/installation/) — install the package and its peer dependencies
- [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — wire up audit logging in your app in minutes
- [Configuration](https://docs.usebetter.dev/audit/configuration/) — core options, enrichment, retention, and hooks
- [Actor Context](https://docs.usebetter.dev/audit/actor-context/) — automatic actor propagation via AsyncLocalStorage
- [Enrichment](https://docs.usebetter.dev/audit/enrichment/) — human-readable labels, severity, compliance tags, and redaction
- [Compliance Overview](https://docs.usebetter.dev/audit/compliance/overview/) — map SOC 2, HIPAA, GDPR, and PCI DSS requirements to Better Audit features
- [Adapters](https://docs.usebetter.dev/audit/adapters/) — per-adapter setup for Drizzle, Prisma, Hono, Express, and Next.js
- [How Audit Works](https://docs.usebetter.dev/audit/internals/how-audit-works/) — interactive walkthrough of the three-layer pipeline

---

## Installation

## Install the package

**npm:**

```bash
npm install @usebetterdev/audit
```

**pnpm:**

```bash
pnpm add @usebetterdev/audit
```

**yarn:**

```bash
yarn add @usebetterdev/audit
```

**bun:**

```bash
bun add @usebetterdev/audit
```

The main package (`@usebetterdev/audit`) includes the core library and all adapters via subpath exports. The CLI (`@usebetterdev/audit-cli`) is used via `npx` for generating migrations and managing audit data — no installation required.

## Peer dependencies

You need a database driver and (optionally) a framework.
Install the ones you use:

### ORM adapter

**Drizzle + pg:**

```bash
npm install drizzle-orm pg
```

```ts
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
```

**Drizzle + postgres.js:**

```bash
npm install drizzle-orm postgres
```

```ts
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
```

**Drizzle + better-sqlite3:**

```bash
npm install drizzle-orm better-sqlite3
```

```ts
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
```

**Prisma:**

```bash
npm install @prisma/client
```

Requires `@prisma/client` >= 5.0.0.

```ts
import { prismaAuditAdapter, withAuditExtension } from "@usebetterdev/audit/prisma";
```

### Framework adapter

**Hono:**

```bash
npm install hono
```

Requires `hono` >= 4.

```ts
import { betterAuditHono } from "@usebetterdev/audit/hono";
```

**Express:**

```bash
npm install express
```

Requires `express` >= 4.

```ts
import { betterAuditExpress } from "@usebetterdev/audit/express";
```

**Next.js:**

```bash
npm install next
```

Requires `next` >= 14.

```ts
import { createAuditMiddleware, withAuditRoute, withAudit } from "@usebetterdev/audit/next";
```

## Requirements

- **Node.js 22+** (also supports Bun and Deno)
- **PostgreSQL 13+**, **MySQL**, or **SQLite**
- **TypeScript 5+** (recommended, but not required)

## Subpath exports

All adapters are available through the umbrella package via subpath exports:

```ts
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
import { prismaAuditAdapter, withAuditExtension } from "@usebetterdev/audit/prisma";
import { betterAuditHono } from "@usebetterdev/audit/hono";
import { betterAuditExpress } from "@usebetterdev/audit/express";
import { createAuditMiddleware, withAuditRoute, withAudit } from "@usebetterdev/audit/next";
```

## Next steps

- [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — wire up audit logging in your app

---

## Quick Start

This guide walks you through adding audit logging to an existing application. By the end, every INSERT, UPDATE, and DELETE will be automatically captured — with actor tracking, before/after snapshots, and compliance tagging — stored in your own database.

## Prerequisites

- `@usebetterdev/audit` and peer dependencies [installed](https://docs.usebetter.dev/audit/installation/)
- A running PostgreSQL 13+ database (MySQL and SQLite are also supported — see [Installation](https://docs.usebetter.dev/audit/installation/))
- An existing application with tables you want to audit

## Overview

Better Audit works in three layers:

1. **ORM adapter** — writes audit log entries to an `audit_logs` table in your database
2. **ORM proxy/extension** — transparently intercepts mutations so you don't litter your code with manual `captureLog()` calls
3. **Framework middleware** — extracts the current actor (user) from each request via JWT, header, or cookie

**1. Generate the `audit_logs` migration**

The CLI auto-detects your ORM and database dialect:

**Drizzle:**

```bash
# Generate a custom migration file
npx drizzle-kit generate --custom --name=audit_logs --prefix=none
npx @usebetterdev/audit-cli migrate -o drizzle/_audit_logs.sql

# Apply the migration
npx drizzle-kit migrate
```

**Prisma:**

```bash
# Create a draft migration (--create-only generates the file without applying)
npx prisma migrate dev --create-only --name audit_logs

# Fill it with the audit_logs table DDL
npx @usebetterdev/audit-cli migrate \
  -o prisma/migrations/*_audit_logs/migration.sql

# Apply the migration
npx prisma migrate dev
```

:::tip[Preview first]
Run `npx @usebetterdev/audit-cli migrate --dry-run` to print the SQL to stdout without writing any files.
:::

**2. Wire up audit**

Create the audit instance, wrap your ORM client for automatic capture, and add actor tracking middleware.

**Drizzle + Hono:**

```ts title="src/server.ts"
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { Hono } from "hono";
import { betterAudit } from "@usebetterdev/audit";
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
import { betterAuditHono } from "@usebetterdev/audit/hono";
import { usersTable } from "./schema";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// 1. Create audit instance
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users"],
});

// 2. Wrap the database for automatic capture
const auditedDb = withAuditProxy(db, audit.captureLog);

// 3. Set up the app with actor tracking
const app = new Hono();
app.use("*", betterAuditHono());

app.post("/users", async (c) => {
  const body = await c.req.json();
  await auditedDb.insert(usersTable).values(body);
  return c.json({ ok: true });
});
```

**Drizzle + Express:**

```ts title="src/server.ts"
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import express from "express";
import { betterAudit } from "@usebetterdev/audit";
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";
import { betterAuditExpress } from "@usebetterdev/audit/express";
import { usersTable } from "./schema";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// 1. Create audit instance
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users"],
});

// 2. Wrap the database for automatic capture
const auditedDb = withAuditProxy(db, audit.captureLog);

// 3. Set up the app with actor tracking
const app = express();
app.use(express.json());
app.use(betterAuditExpress());

app.post("/users", async (req, res, next) => {
  try {
    await auditedDb.insert(usersTable).values(req.body);
    res.json({ ok: true });
  } catch (error) {
    next(error);
  }
});
```

**Prisma + Hono:**

```ts title="src/server.ts"
import { PrismaClient } from "./generated/prisma/client.js";
import { Hono } from "hono";
import { betterAudit } from "@usebetterdev/audit";
import { prismaAuditAdapter, withAuditExtension } from "@usebetterdev/audit/prisma";
import { betterAuditHono } from "@usebetterdev/audit/hono";

const prisma = new PrismaClient();

// 1. Create audit instance
const audit = betterAudit({
  database: prismaAuditAdapter(prisma),
  auditTables: ["users"],
});

// 2. Extend Prisma for automatic capture
const auditedPrisma = withAuditExtension(prisma, audit.captureLog);

// 3. Set up the app with actor tracking
const app = new Hono();
app.use("*", betterAuditHono());

app.post("/users", async (c) => {
  const body = await c.req.json();
  await auditedPrisma.user.create({ data: body });
  return c.json({ ok: true });
});
```

**Prisma + Express:**

```ts title="src/server.ts"
import { PrismaClient } from "./generated/prisma/client.js";
import express from "express";
import { betterAudit } from "@usebetterdev/audit";
import { prismaAuditAdapter, withAuditExtension } from "@usebetterdev/audit/prisma";
import { betterAuditExpress } from "@usebetterdev/audit/express";

const prisma = new PrismaClient();

// 1. Create audit instance
const audit = betterAudit({
  database: prismaAuditAdapter(prisma),
  auditTables: ["users"],
});

// 2. Extend Prisma for automatic capture
const auditedPrisma = withAuditExtension(prisma, audit.captureLog);

// 3. Set up the app with actor tracking
const app = express();
app.use(express.json());
app.use(betterAuditExpress());

app.post("/users", async (req, res, next) => {
  try {
    await auditedPrisma.user.create({ data: req.body });
    res.json({ ok: true });
  } catch (error) {
    next(error);
  }
});
```

**Drizzle + Next.js:**

```ts title="middleware.ts"
import { createAuditMiddleware } from "@usebetterdev/audit/next";

// Extracts actor from JWT, forwards as x-better-audit-actor-id header
export default createAuditMiddleware();

export const config = { matcher: "/api/:path*" };
```

```ts title="lib/audit.ts"
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
import { betterAudit } from "@usebetterdev/audit";
import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
export const db = drizzle(pool);

export const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users"],
});

export const auditedDb = withAuditProxy(db, audit.captureLog);
```

```ts title="app/api/users/route.ts"
import { NextRequest } from "next/server";
import { withAuditRoute, fromHeader, AUDIT_ACTOR_HEADER } from "@usebetterdev/audit/next";
import { auditedDb } from "@/lib/audit";
import { usersTable } from "@/schema";

async function handler(request: NextRequest) {
  const body = await request.json();
  await auditedDb.insert(usersTable).values(body);
  return Response.json({ ok: true });
}

// Reads actor from the header set by middleware.ts
export const POST = withAuditRoute(handler, {
  extractor: { actor: fromHeader(AUDIT_ACTOR_HEADER) },
});
```

All adapters extract the `sub` claim from `Authorization: Bearer <token>` by default.
To customize:

**Hono:**

```ts
import { fromHeader } from "@usebetterdev/audit";

app.use("*", betterAuditHono({
  extractor: { actor: fromHeader("x-user-id") },
}));
```

**Express:**

```ts
import { fromHeader } from "@usebetterdev/audit";

app.use(betterAuditExpress({
  extractor: { actor: fromHeader("x-user-id") },
}));
```

:::caution[Fail-open by default]
If actor extraction fails, the request proceeds without audit context. Override with `onError` if you need fail-closed behavior.
:::

**3. Test it**

Make a mutation and then query the audit log:

```bash
# Create a user (triggers audit capture)
curl -X POST http://localhost:3000/users \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1c2VyLTQyIn0.abc" \
  -d '{"id": "1", "name": "Alice"}'
```

Query the logs programmatically:

```ts
const result = await audit.query()
  .resource("users")
  .since("1h")
  .list();

console.log(result.entries);
// [{ id: "...", tableName: "users", operation: "INSERT",
//    recordId: "1", actorId: "user-42", afterData: { id: "1", name: "Alice" }, ... }]
```

Or export via the CLI:

```bash
npx @usebetterdev/audit-cli export --since 1h --format json
```

You should see output like:

```json
[
  {
    "id": "a1b2c3",
    "timestamp": "2025-01-15T10:30:00.000Z",
    "tableName": "users",
    "operation": "INSERT",
    "recordId": "1",
    "actorId": "user-42",
    "afterData": { "id": "1", "name": "Alice" }
  }
]
```

## What just happened?

1. The CLI generated the `audit_logs` table in your database — a single table with columns for timestamps, operations, before/after snapshots, actor IDs, and compliance metadata.
2. The ORM proxy (Drizzle) or extension (Prisma) transparently intercepted every `INSERT`, `UPDATE`, and `DELETE` and wrote an audit log entry.
3. The framework middleware (Hono or Express) extracted the actor ID from the JWT and stored it in `AsyncLocalStorage`. The audit log entry was tagged with `actorId: "user-42"` without you passing it explicitly.
4. All audit data lives in your database — no external service, no vendor lock-in.

## Optional: enrichment and compliance

Add human-readable labels, severity levels, and compliance tags to specific operations:

```ts title="src/audit.ts"
audit.enrich("users", "DELETE", {
  label: "User account deleted",
  severity: "critical",
  compliance: ["gdpr", "soc2"],
  redact: ["password", "ssn"],
});

audit.enrich("users", "UPDATE", {
  label: "User profile updated",
  severity: "medium",
  description: ({ before, after, actorId }) =>
    `Actor ${actorId} changed user name from ${before?.name} to ${after?.name}`,
});
```

## Optional: retention policy

Automatically purge old audit entries:

```ts title="src/audit.ts"
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users"],
  retention: { days: 365 },
});
```

Then run the purge command on a schedule:

```bash
npx @usebetterdev/audit-cli purge
```

## Next steps

- [How Audit Works](https://docs.usebetter.dev/audit/internals/how-audit-works/) — interactive walkthrough of the ORM proxy → actor context → audit_logs pipeline
- [Adapters](https://docs.usebetter.dev/audit/adapters/) — ORM adapter setup, automatic capture, custom extractors, and error handling
- [CLI & Migrations](https://docs.usebetter.dev/audit/cli/) — generate the audit_logs migration, verify setup, export data, and purge old entries

---

## Configuration

## `betterAudit()` options

All configuration is passed to the `betterAudit()` factory. Only `database` and `auditTables` are required.

```ts title="src/audit.ts"
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "orders", "payments"],
  asyncWrite: false,
  maxQueryLimit: 1000,
  retention: { days: 365 },
  onError: (error) => logger.error("Audit write failed", error),
});
```

| Option | Type | Default | Description |
|---|---|---|---|
| `database` | `AuditDatabaseAdapter` | — | **Required.** ORM adapter that handles writing and querying audit logs. |
| `auditTables` | `string[]` | — | **Required.** SQL table names to audit. Events for unlisted tables are silently skipped. |
| `asyncWrite` | `boolean` | `false` | When `true`, writes are fire-and-forget. Per-call `asyncWrite` overrides this. |
| `maxQueryLimit` | `number` | `1000` | Hard upper-bound for `query().limit(n)`. Throws if `n` exceeds this value. |
| `retention` | `RetentionPolicy` | — | Retention policy for automatic purge. See [Retention policy](#retention-policy). |
| `onError` | `(error: unknown) => void` | `console.error` | Called when an async write or `afterLog` hook fails. |
| `beforeLog` | `BeforeLogHook[]` | `[]` | Hooks that run before each log is written. See [Lifecycle hooks](#lifecycle-hooks). |
| `afterLog` | `AfterLogHook[]` | `[]` | Hooks that run after each log is written. See [Lifecycle hooks](#lifecycle-hooks). |
| `console` | `ConsoleRegistration` | — | Console integration. See [Console integration](#console-integration). |

## Table filtering

The `auditTables` array is an allowlist of **SQL table names** that should be audited. Any mutation on a table not in this list is silently ignored — no error, no log entry.

```ts
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "orders"],
});

// INSERT into "users"    → captured
// INSERT into "orders"   → captured
// INSERT into "sessions" → silently skipped
```

## Enrichment

Enrichment adds human-readable labels, severity levels, compliance tags, and dynamic descriptions to audit entries. Register enrichment rules with `audit.enrich()`. See the [Enrichment guide](https://docs.usebetter.dev/audit/enrichment/) for a full walkthrough with examples.

```ts title="src/audit.ts"
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "payments"],
});

audit.enrich("users", "DELETE", {
  label: "User account deleted",
  severity: "critical",
  compliance: ["gdpr", "soc2"],
  redact: ["password", "ssn"],
});

audit.enrich("payments", "*", {
  label: "Payment mutation",
  severity: "high",
  compliance: ["pci"],
});
```

### Enrichment options

| Option | Type | Description |
|---|---|---|
| `label` | `string` | Human-readable label for the audit entry. |
| `description` | `(context) => string` | Dynamic description. Receives `{ before, after, diff, actorId, metadata }`. |
| `severity` | `"low" \| "medium" \| "high" \| "critical"` | Severity level for the operation. |
| `compliance` | `string[]` | Compliance tags (e.g., `"gdpr"`, `"soc2"`, `"hipaa"`, `"pci"`). |
| `notify` | `boolean` | Flag for downstream notification integrations. |
| `redact` | `string[]` | Field names to remove from `beforeData`/`afterData`. Mutually exclusive with `include`. |
| `include` | `string[]` | Field names to keep — all others are removed. Mutually exclusive with `redact`. |

### Dynamic descriptions

The `description` function receives context about the mutation:

```ts
audit.enrich("users", "UPDATE", {
  label: "User profile updated",
  severity: "medium",
  description: ({ before, after, diff, actorId }) =>
    `Actor ${actorId} changed fields: ${diff?.changedFields.join(", ")}`,
});
```

### Specificity tiers

When multiple enrichment rules match an event, they are resolved from least to most specific. Scalar values (like `label`) are overwritten by more specific rules. Array values (like `compliance`) are concatenated and deduplicated.

| Tier | Pattern | Matches |
|---|---|---|
| 1 (least specific) | `"*", "*"` | All tables, all operations |
| 2 | `"*", "DELETE"` | All tables, specific operation |
| 3 | `"users", "*"` | Specific table, all operations |
| 4 (most specific) | `"users", "DELETE"` | Specific table, specific operation |

```ts
// Tier 1: global default
audit.enrich("*", "*", {
  severity: "low",
  compliance: ["soc2"],
});

// Tier 4: specific override
audit.enrich("users", "DELETE", {
  severity: "critical", // overwrites "low"
  compliance: ["gdpr"], // merged → ["soc2", "gdpr"]
});
```

## Field redaction

Control which fields appear in `beforeData` and `afterData` snapshots. Two modes are available — use one or the other, not both.

### Blocklist (`redact`)

Remove specific fields. Everything else is kept:

```ts
audit.enrich("users", "*", {
  redact: ["password", "ssn", "secret_key"],
});

// beforeData: { id: "1", name: "Alice", password: "hash" }
// → stored as: { id: "1", name: "Alice" } (password removed)
```

### Allowlist (`include`)

Keep only the listed fields. Everything else is removed:

```ts
audit.enrich("users", "*", {
  include: ["id", "name", "email"],
});

// beforeData: { id: "1", name: "Alice", email: "alice@example.com", password: "hash", ssn: "123" }
// → stored as: { id: "1", name: "Alice", email: "alice@example.com" }
```

> **Caution:** `redact` and `include` are mutually exclusive. Setting both on the same enrichment rule will throw an error at registration time.

When fields are redacted, the audit entry's `redactedFields` column records which fields were removed — useful for compliance audits that need to prove sensitive data was excluded.
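
The blocklist/allowlist semantics above can be sketched as a single filtering pass. This is an illustrative model — the `applyFieldPolicy` helper is hypothetical, not the library's internal code:

```typescript
type Snapshot = Record<string, unknown>;

// Apply a redact (blocklist) OR include (allowlist) policy to a snapshot,
// recording which fields were dropped.
function applyFieldPolicy(
  snapshot: Snapshot,
  policy: { redact?: string[]; include?: string[] },
): { data: Snapshot; redactedFields: string[] } {
  if (policy.redact && policy.include) {
    // Mirrors the registration-time error for mutually exclusive options
    throw new Error("`redact` and `include` are mutually exclusive");
  }
  const data: Snapshot = {};
  const redactedFields: string[] = [];
  for (const [key, value] of Object.entries(snapshot)) {
    const drop = policy.redact
      ? policy.redact.includes(key)          // blocklist: drop listed fields
      : policy.include
        ? !policy.include.includes(key)      // allowlist: drop unlisted fields
        : false;                             // no policy: keep everything
    if (drop) redactedFields.push(key);
    else data[key] = value;
  }
  return { data, redactedFields };
}

const { data, redactedFields } = applyFieldPolicy(
  { id: "1", name: "Alice", password: "hash" },
  { redact: ["password"] },
);
// data           → { id: "1", name: "Alice" }
// redactedFields → ["password"]
```

Recording `redactedFields` next to the filtered snapshot is what distinguishes "this field was deliberately excluded" from "this field was never present" — the property compliance reviews care about.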

## Retention policy

Configure automatic purge of old audit entries by setting a `retention` window:

```ts
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "orders"],
  retention: { days: 365 },
});
```

| Option | Type | Default | Description |
|---|---|---|---|
| `days` | `number` | — | **Required.** Purge entries older than this many days. Must be a positive integer. |
| `tables` | `string[]` | all tables | When set, only purge entries for these specific tables. |

See [Retention Policies](https://docs.usebetter.dev/audit/compliance/retention/) for table-scoped retention, automated purge scheduling, archiving strategies, and legal hold patterns.

## Lifecycle hooks

Hooks let you observe or transform audit entries at write time. There are two hook points: **before** the log is written and **after**.

### Config-time hooks

Pass hook arrays when creating the audit instance. These run for every audit entry:

```ts title="src/audit.ts"
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users"],
  beforeLog: [
    (log) => {
      // Mutate the log before it's written
      log.metadata = { ...log.metadata, environment: "production" };
    },
  ],
  afterLog: [
    (log) => {
      // Observe the written log (read-only snapshot)
      metrics.increment("audit.entries", { table: log.tableName });
    },
  ],
});
```

### Runtime hooks

Register hooks dynamically after creation. Returns a dispose function to unregister:

```ts
// Register
const dispose = audit.onBeforeLog((log) => {
  log.metadata = { ...log.metadata, requestId: getCurrentRequestId() };
});

// Later: unregister
dispose();
```

### Hook behavior

| Hook | Can mutate? | Error behavior |
|---|---|---|
| `beforeLog` | Yes | Errors **abort** the write — the entry is not stored. |
| `afterLog` | No (read-only snapshot) | The entry is already written. In async mode, errors are passed to `onError`. In sync mode, errors propagate to the caller. |

Hooks run sequentially in registration order.
`beforeLog` hooks see post-enrichment, post-redaction data. ## Async writes By default, `captureLog()` awaits the database write. Enable async mode for fire-and-forget behavior: ```ts const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], asyncWrite: true, }); ``` When `asyncWrite` is `true`, `captureLog()` returns immediately without waiting for the write to complete. Individual `captureLog()` calls can override the global setting: ```ts // Global async, but force this specific write to be synchronous await audit.captureLog({ tableName: "users", operation: "INSERT", recordId: "1", asyncWrite: false, }); ``` ## Error handling The `onError` callback is called when an async write or an `afterLog` hook fails. If not set, errors are logged to `console.error` with sanitized output (no PII). ```ts const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], onError: (error) => logger.error("Audit error", error), }); ``` In synchronous mode (`asyncWrite: false`), write errors and `afterLog` hook errors propagate to the caller as thrown exceptions. `onError` is only invoked for errors that cannot be thrown — async writes and async `afterLog` failures. ## Manual context The framework middleware (Hono/Express) automatically injects the actor from the HTTP request. For code that runs outside a request — background jobs, cron tasks, CLI scripts — use `audit.withContext()` to set actor identity manually. You can also enrich context mid-request with `mergeAuditContext()` and read it with `getAuditContext()`. See the [Actor Context](https://docs.usebetter.dev/audit/actor-context/) guide for full details, examples, and troubleshooting. ## ORM adapter options The ORM proxy (Drizzle) and extension (Prisma) accept additional options beyond the core `betterAudit()` config. 
**Drizzle:** ```ts title="src/audit.ts" import { betterAudit } from "@usebetterdev/audit"; import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle"; const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders"], }); const auditedDb = withAuditProxy(db, audit.captureLog, { primaryKey: "id", onError: (error) => logger.error("Audit proxy error", error), onMissingRecordId: "warn", skipBeforeState: ["large_events"], maxBeforeStateRows: 1000, }); ``` | Option | Type | Default | Description | |---|---|---|---| | `primaryKey` | `string` | `"id"` | Fallback primary key column name for record ID extraction. | | `onError` | `(error: unknown) => void` | `console.error` | Called when audit capture fails. | | `onMissingRecordId` | `"warn" \| "skip" \| "throw"` | `"warn"` | What to do when the record ID cannot be determined. | | `skipBeforeState` | `string[]` | `[]` | Table names to skip before-state SELECT for (improves performance for large tables). | | `maxBeforeStateRows` | `number` | `1000` | Safety limit for before-state SELECT queries. | **Prisma:** ```ts title="src/audit.ts" import { PrismaClient } from "./generated/prisma/client.js"; import { betterAudit } from "@usebetterdev/audit"; import { prismaAuditAdapter, withAuditExtension } from "@usebetterdev/audit/prisma"; const prisma = new PrismaClient(); const audit = betterAudit({ database: prismaAuditAdapter(prisma), auditTables: ["users", "orders"], }); const auditedPrisma = withAuditExtension(prisma, audit.captureLog, { bulkMode: "per-row", onError: (error) => logger.error("Audit extension error", error), skipBeforeCapture: ["large_events"], maxBeforeStateRows: 100, tableNameTransform: (modelName) => modelName.toLowerCase() + "s", }); ``` | Option | Type | Default | Description | |---|---|---|---| | `bulkMode` | `"per-row" \| "bulk"` | `"per-row"` | How `createMany`/`updateMany`/`deleteMany` are logged. 
`"per-row"` creates one entry per row; `"bulk"` creates one entry for the whole operation. | | `onError` | `(error: unknown) => void` | `console.error` | Called when audit capture fails. | | `metadata` | `Record` | — | Extra metadata merged into every log entry from this extension. | | `tableNameTransform` | `(modelName: string) => string` | auto-detect | Maps Prisma model name to SQL table name. Overrides auto-detection from `@@map`. | | `skipBeforeCapture` | `string[]` | `[]` | SQL table names to skip before-state capture for (no extra `findUnique`/`findMany`). | | `maxBeforeStateRows` | `number` | `100` | Max rows fetched by before-state `findMany`. | :::note[Model name → table name] Prisma extensions receive model names (e.g., `"User"`), not SQL table names (`"users"`). The adapter auto-detects the mapping from `_runtimeDataModel` (populated by `@@map`). Use `tableNameTransform` only when auto-detection doesn't match your schema. ::: ## Console integration Connect Better Audit to [UseBetter Console](https://docs.usebetter.dev/console/getting-started/) for a web-based audit dashboard: ```ts title="src/audit.ts" const consoleInstance = betterConsole({ connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH ?? "", sessions: { autoApprove: process.env.NODE_ENV === "development" }, }); const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], console: consoleInstance, }); ``` This registers audit dashboard endpoints with the console automatically. ## Next steps - [Actor Context](https://docs.usebetter.dev/audit/actor-context/) — how actor identity propagates through your request lifecycle - [Adapters](https://docs.usebetter.dev/audit/adapters/) — adapter-specific setup, API reference, and error handling - [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — working example with ORM + framework middleware in one page --- ## Actor Context Every audit log entry should record **who** performed the action. 
Better Audit uses Node.js `AsyncLocalStorage` to propagate actor identity through your entire request lifecycle — once the middleware extracts an actor, every `captureLog()` call in that request automatically receives it. No manual passing required.

## How it works

When a request arrives, the framework middleware:

1. Extracts the actor identity from the request (JWT, cookie, header, or custom logic)
2. Creates an `AuditContext` and stores it in `AsyncLocalStorage`
3. Runs the rest of the request inside that scope

Any code that calls `captureLog()` — whether in a route handler, a service layer, or a deeply nested utility — automatically picks up the context. When the request ends, the scope is cleaned up.

```txt title="Request lifecycle"
Request → Middleware extracts actor → AsyncLocalStorage scope created
  └─ Route handler
      └─ Service layer
          └─ captureLog() ← actorId attached automatically
```

### The `AuditContext` type

The context carries more than just the actor. All fields are optional:

| Field | Type | Description |
|---|---|---|
| `actorId` | `string` | User or system identifier performing the action. |
| `label` | `string` | Human-readable label for the event (e.g., `"User updated profile"`). |
| `reason` | `string` | Justification for the action (e.g., `"GDPR deletion request"`). |
| `compliance` | `string[]` | Compliance framework tags (e.g., `["soc2", "gdpr"]`). |
| `metadata` | `Record<string, unknown>` | Arbitrary key-value data merged into each entry. |

When `captureLog()` runs, per-call fields override context fields. For example, passing `actorId` directly to `captureLog()` takes precedence over the context's `actorId`.

## Actor extraction

The middleware accepts an `extractor` option that controls how it identifies the actor from each request.
With no configuration, it decodes the `sub` claim from the `Authorization: Bearer <token>` header:

**Hono:**

```ts title="src/server.ts"
import { Hono } from "hono";
import { betterAuditHono } from "@usebetterdev/audit/hono";

const app = new Hono();

// Reads `sub` from Authorization: Bearer <token>
app.use("*", betterAuditHono());
```

**Express:**

```ts title="src/server.ts"
import express from "express";
import { betterAuditExpress } from "@usebetterdev/audit/express";

const app = express();

// Reads `sub` from Authorization: Bearer <token>
app.use(betterAuditExpress());
```

Better Audit ships three built-in extractors — `fromBearerToken`, `fromHeader`, and `fromCookie` — and supports custom extractor functions for full control. See the [Adapters](https://docs.usebetter.dev/audit/adapters/) guide for the full extractor reference, custom extractors, and error handling options.

## Enriching context mid-request

The middleware sets the initial context, but you can add more fields later in the request lifecycle using `mergeAuditContext()` and read the current context with `getAuditContext()`.

### `mergeAuditContext()`

Merges additional fields into the current context for the duration of a callback.
Override properties take precedence over existing ones: **Hono:** ```ts title="src/routes/users.ts" import { Hono } from "hono"; import { mergeAuditContext } from "@usebetterdev/audit"; const app = new Hono(); app.delete("/users/:id", async (c) => { const reason = c.req.header("x-deletion-reason"); await mergeAuditContext( { reason, compliance: ["gdpr"] }, async () => { // All captureLog() calls inside this callback // include reason and compliance tags await deleteUser(c.req.param("id")); }, ); return c.json({ ok: true }); }); ``` **Express:** ```ts title="src/routes/users.ts" import express from "express"; import { mergeAuditContext } from "@usebetterdev/audit"; const router = express.Router(); router.delete("/users/:id", async (req, res, next) => { try { const reason = req.headers["x-deletion-reason"] as string | undefined; await mergeAuditContext( { reason, compliance: ["gdpr"] }, async () => { // All captureLog() calls inside this callback // include reason and compliance tags await deleteUser(req.params.id); }, ); res.json({ ok: true }); } catch (error) { next(error); } }); ``` ### `getAuditContext()` Returns the current context, or `undefined` if called outside a scope: ```ts title="src/services/user-service.ts" function deleteUser(userId: string) { const context = getAuditContext(); if (context?.actorId) { logger.info(`User ${userId} deleted by ${context.actorId}`); } // ... 
} ``` ### `audit.withContext()` For code running **outside** a request — background jobs, cron tasks, CLI scripts — use `audit.withContext()` to create a context scope manually: ```ts title="src/jobs/cleanup.ts" async function runCleanupJob() { await audit.withContext( { actorId: "system:cleanup-job", reason: "Scheduled daily cleanup", compliance: ["gdpr"], metadata: { jobId: "cleanup-2025-01-15" }, }, async () => { // All captureLog() calls here receive the context await deactivateExpiredAccounts(); }, ); } ``` ## Behavior and design decisions ### Fail-open Extraction errors never crash the request. If an extractor throws or returns `undefined`, the request proceeds normally — audit entries are captured without an `actorId`. Use the `onError` option on the middleware to log extraction failures. See [Adapters — Error handling](https://docs.usebetter.dev/audit/adapters/#error-handling) for details. ### Request isolation Each request gets its own `AsyncLocalStorage` scope. Concurrent requests never leak context between each other — even under high concurrency, each request's `actorId` stays isolated. ### Scope lifetime - **Hono:** The scope naturally ends when the middleware's `next()` completes. - **Express:** The adapter keeps the scope open until the response finishes (via `response.on('finish'/'close')`). This ensures context survives across `await` boundaries in async route handlers. ## Troubleshooting If `actorId` is `undefined`, context disappears after `await`, or you see the wrong actor in logs, see the [Troubleshooting](https://docs.usebetter.dev/audit/troubleshooting/) guide for detailed diagnosis and fixes. 
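The isolation and scope-lifetime guarantees above come from Node's `AsyncLocalStorage` primitive itself. A minimal standalone sketch — plain `node:async_hooks`, no library code — shows context surviving `await` boundaries and staying isolated across concurrent calls:

```typescript
// Standalone demonstration of the AsyncLocalStorage mechanism the adapters
// build on. Illustrative only — the library's wrappers add more on top.
import { AsyncLocalStorage } from "node:async_hooks";

type Ctx = { actorId?: string };
const als = new AsyncLocalStorage<Ctx>();

// Deeply nested code reads the ambient context — no parameter passing needed
function currentActor(): string {
  return als.getStore()?.actorId ?? "unknown";
}

async function handleRequest(actorId: string): Promise<string> {
  // Each "request" gets its own scope
  return als.run({ actorId }, async () => {
    await Promise.resolve(); // context survives await boundaries
    return currentActor();
  });
}

// Concurrent "requests" never leak context into each other
const results = await Promise.all([handleRequest("alice"), handleRequest("bob")]);
// results: ["alice", "bob"]; outside any scope, currentActor() is "unknown"
```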
## Next steps - [Troubleshooting](https://docs.usebetter.dev/audit/troubleshooting/) — common actor context issues and how to fix them - [Adapters](https://docs.usebetter.dev/audit/adapters/) — extractors, custom extractors, error handling, and API reference - [Configuration](https://docs.usebetter.dev/audit/configuration/) — enrichment, retention, hooks, and ORM adapter tuning - [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — working example with ORM + framework middleware in one page --- ## Enrichment ORM adapters capture raw audit data — table name, operation, before/after snapshots. Enrichment layers on human-readable meaning: labels, severity levels, compliance tags, and dynamic descriptions. You register rules once and they apply automatically to every matching entry, without touching your route handlers or service code. ## `audit.enrich()` — the API ```ts audit.enrich(table, operation, config) ``` - **`table`** — SQL table name, or `"*"` for all tables. - **`operation`** — `"INSERT"`, `"UPDATE"`, `"DELETE"`, or `"*"` for all operations. - **`config`** — enrichment options (see sections below). Three typical patterns: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "payments", "orders"], }); // Pattern 1: specific table + specific operation audit.enrich("users", "DELETE", { label: "User account deleted", severity: "critical", compliance: ["gdpr"], }); // Pattern 2: specific table + all operations audit.enrich("payments", "*", { severity: "high", compliance: ["pci"], }); // Pattern 3: global default — all tables, all operations audit.enrich("*", "*", { severity: "low", compliance: ["soc2"], }); ``` You can call `audit.enrich()` as many times as you like. Rules for the same table/operation pair accumulate — they are merged according to the [specificity rules](#specificity-tiers-and-rule-merging) below. 
## Human-readable labels The `label` field attaches a short, human-readable name to an audit entry. Without enrichment, entries only carry raw table names and operations: ```jsonc // Without enrichment { "tableName": "users", "operation": "DELETE", "recordId": "u_123" } // With enrichment { "tableName": "users", "operation": "DELETE", "recordId": "u_123", "label": "User account deleted" } ``` Set a label per rule: ```ts audit.enrich("users", "INSERT", { label: "New user registered" }); audit.enrich("users", "UPDATE", { label: "User profile updated" }); audit.enrich("users", "DELETE", { label: "User account deleted" }); ``` **Precedence:** if `label` is passed directly to `captureLog()`, it takes precedence over the registry label. The registry fills the gap when no per-call label is set. ## Severity levels Four severity levels are available: `"low"`, `"medium"`, `"high"`, `"critical"`. | Operation | Suggested severity | |---|---| | `INSERT` | `"low"` | | `UPDATE` (non-sensitive) | `"medium"` | | `DELETE` | `"high"` | | `DELETE` on PII tables | `"critical"` | | Bulk mutations | `"high"` or `"critical"` | ```ts audit.enrich("*", "INSERT", { severity: "low" }); audit.enrich("*", "UPDATE", { severity: "medium" }); audit.enrich("*", "DELETE", { severity: "high" }); // Override for sensitive tables audit.enrich("users", "DELETE", { severity: "critical" }); audit.enrich("payments", "*", { severity: "high" }); ``` ## Compliance tags The `compliance` field attaches framework tags to entries. Common values: `"gdpr"`, `"soc2"`, `"hipaa"`, `"pci"`. 
When multiple rules match, compliance arrays are **concatenated and deduplicated** — more specific rules add to, rather than replace, tags from less specific ones:

```ts
// Tier 1: global default — applies to every entry
audit.enrich("*", "*", {
  compliance: ["soc2"],
});

// Tier 4: specific rule — adds "gdpr" for user deletes
audit.enrich("users", "DELETE", {
  compliance: ["gdpr"],
});

// Result for users DELETE: ["soc2", "gdpr"]
// Result for orders INSERT: ["soc2"]
```

Tag values are freeform strings — use whatever your compliance team standardises on.

## Dynamic descriptions

The `description` option is a function called at write time with context about the mutation. Use it to generate richer, event-specific descriptions beyond what a static label provides:

```ts
audit.enrich("users", "UPDATE", {
  label: "User profile updated",
  description: ({ before, after, diff, actorId }) => {
    const fields = diff?.changedFields.join(", ") ?? "unknown fields";
    return `Actor ${actorId ?? "unknown"} changed: ${fields}`;
  },
});
```

### `EnrichmentDescriptionContext`

| Field | Type | Description |
|---|---|---|
| `before` | `Record<string, unknown> \| undefined` | Pre-mutation row snapshot. |
| `after` | `Record<string, unknown> \| undefined` | Post-mutation row snapshot. |
| `diff` | `{ changedFields: string[] } \| undefined` | Fields that changed between `before` and `after`. |
| `actorId` | `string \| undefined` | Actor from the current context. |
| `metadata` | `Record<string, unknown> \| undefined` | Merged metadata from the current context. |

> **Caution:** The `before`, `after`, and `diff` fields contain **post-redaction** data. If you use `redact` or `include` on the same rule (or a less specific rule), redacted fields will be absent from what the description function sees. Use this intentionally — do not read sensitive fields inside `description` that you also redact.

**Error handling:** if the description function throws, the error is swallowed and `description` is left unset on the log entry. The entry is still written.
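For intuition, here is a hypothetical sketch of how a `diff.changedFields` value could be derived from the two snapshots. The library's actual diffing algorithm may differ; `changedFields` is an invented helper:

```typescript
// Hypothetical derivation of diff.changedFields — the real algorithm may differ.
function changedFields(
  before: Record<string, unknown> | undefined,
  after: Record<string, unknown> | undefined,
): string[] {
  const keys = new Set([...Object.keys(before ?? {}), ...Object.keys(after ?? {})]);
  // A field counts as changed when its serialised value differs between snapshots
  return [...keys].filter(
    (key) => JSON.stringify(before?.[key]) !== JSON.stringify(after?.[key]),
  );
}

const diff = changedFields(
  { id: "1", name: "Alice", email: "alice@example.com" },
  { id: "1", name: "Alice B.", email: "alice@example.com" },
);
// diff: ["name"]
```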
## Notifications The `notify` flag marks an entry for downstream notification integrations. Setting it to `true` does not trigger anything on its own — it is a signal for external systems (webhooks, alerting pipelines) that consume the audit log to act on the entry: ```ts audit.enrich("users", "DELETE", { label: "User account deleted", severity: "critical", notify: true, }); ``` Like `label` and `severity`, `notify` is a scalar — more specific tiers overwrite less specific ones. ## Custom metadata There are two ways to attach metadata to audit entries. ### Per-request metadata via context Use `mergeAuditContext()` to inject metadata into the current request scope. Every `captureLog()` call inside the callback inherits it: **Hono:** ```ts title="src/routes/users.ts" import { Hono } from "hono"; import { mergeAuditContext } from "@usebetterdev/audit"; const app = new Hono(); app.delete("/users/:id", async (c) => { await mergeAuditContext( { metadata: { requestId: c.req.header("x-request-id"), region: "eu-west-1" } }, async () => { await deleteUser(c.req.param("id")); }, ); return c.json({ ok: true }); }); ``` **Express:** ```ts title="src/routes/users.ts" import express from "express"; import { mergeAuditContext } from "@usebetterdev/audit"; const router = express.Router(); router.delete("/users/:id", async (req, res, next) => { try { await mergeAuditContext( { metadata: { requestId: req.headers["x-request-id"], region: "eu-west-1" } }, async () => { await deleteUser(req.params.id); }, ); res.json({ ok: true }); } catch (error) { next(error); } }); ``` See the [Actor Context](https://docs.usebetter.dev/audit/actor-context/) guide for full details on context propagation. 
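The merge behaviour used above can be pictured with a small sketch. `mergeContext` is a hypothetical helper, not the library's implementation; the assumption (per the documented behaviour) is that override properties take precedence, and metadata objects are assumed here to combine key by key:

```typescript
// Illustrative model of context merging — not the library's internals.
type AuditContext = {
  actorId?: string;
  reason?: string;
  compliance?: string[];
  metadata?: Record<string, unknown>;
};

function mergeContext(base: AuditContext, override: AuditContext): AuditContext {
  return {
    ...base,
    ...override, // override properties win over existing ones
    metadata: { ...base.metadata, ...override.metadata },
  };
}

const merged = mergeContext(
  { actorId: "user-1", metadata: { requestId: "req-9" } },
  { reason: "GDPR deletion request", metadata: { region: "eu-west-1" } },
);
// merged keeps actorId, gains reason, and carries both metadata keys
```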
### Static metadata via `beforeLog` hooks For metadata that is the same for every entry — environment, service name, version — use a `beforeLog` hook: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], beforeLog: [ (log) => { log.metadata = { ...log.metadata, environment: process.env.NODE_ENV, service: "api", }; }, ], }); ``` > **Note:** Metadata is **not** subject to redaction. `redact` and `include` only apply to `beforeData` and `afterData`. Avoid storing PII directly in metadata. ## Specificity tiers and rule merging When multiple rules match an event, they are resolved from least to most specific: | Tier | Pattern | Matches | |---|---|---| | 1 (least specific) | `"*", "*"` | All tables, all operations | | 2 | `"*", "DELETE"` | All tables, specific operation | | 3 | `"users", "*"` | Specific table, all operations | | 4 (most specific) | `"users", "DELETE"` | Specific table, specific operation | **Merge rules:** - **Scalar fields** (`label`, `severity`, `description`, `notify`): last-write-wins — more specific tiers overwrite less specific ones. - **Array fields** (`compliance`, `redact`, `include`): concatenated and deduplicated across all matching tiers. ```ts // Tier 1 audit.enrich("*", "*", { severity: "low", compliance: ["soc2"], }); // Tier 3 audit.enrich("users", "*", { severity: "medium", // overwrites "low" compliance: ["internal"], // merged → ["soc2", "internal"] }); // Tier 4 audit.enrich("users", "DELETE", { severity: "critical", // overwrites "medium" compliance: ["gdpr"], // merged → ["soc2", "internal", "gdpr"] }); // users DELETE resolves to: severity="critical", compliance=["soc2", "internal", "gdpr"] // users INSERT resolves to: severity="medium", compliance=["soc2", "internal"] // orders DELETE resolves to: severity="low", compliance=["soc2"] ``` ## Field redaction Control which fields appear in `beforeData` and `afterData` snapshots. 
Two modes are available — use one or the other. ### Blocklist (`redact`) Remove specific fields. Everything else is kept: ```ts audit.enrich("users", "*", { redact: ["password", "ssn", "secret_key"], }); // beforeData: { id: "1", name: "Alice", password: "hash" } // stored as: { id: "1", name: "Alice" } ``` ### Allowlist (`include`) Keep only the listed fields. Everything else is removed: ```ts audit.enrich("users", "*", { include: ["id", "email", "name"], }); // beforeData: { id: "1", name: "Alice", email: "alice@example.com", password: "hash", ssn: "123" } // stored as: { id: "1", name: "Alice", email: "alice@example.com" } ``` > **Caution:** `redact` and `include` are mutually exclusive. Setting both on the same enrichment rule throws at registration time. Setting them on different rules that resolve to the same event also throws at write time. > **Caution:** Only top-level keys are matched. To redact a nested field like `profile.ssn`, list the parent key `"profile"` — the entire object is removed. When fields are redacted, the `redactedFields` column on the audit entry records which fields were removed. This is useful for compliance audits that need proof sensitive data was excluded. ## Processing order When a log entry is written, enrichment is applied in this order: 1. **Field redaction** — `redact` / `include` applied to `beforeData`, `afterData`, `diff.changedFields` 2. **Description function** — called with post-redaction data; result stored in `description` (only if not already set per-call) 3. **Scalar and array fields** — `label`, `severity`, `compliance`, `notify` applied (only fill gaps; per-call values and context values take precedence) 4. 
**`beforeLog` hooks** — run on the fully enriched log ## Next steps - [Compliance Overview](https://docs.usebetter.dev/audit/compliance/overview/) — map SOC 2, HIPAA, GDPR, and PCI DSS requirements to Better Audit features - [Configuration](https://docs.usebetter.dev/audit/configuration/) — full `betterAudit()` options reference, retention, and lifecycle hooks - [Actor Context](https://docs.usebetter.dev/audit/actor-context/) — per-request metadata, `mergeAuditContext()`, and `audit.withContext()` - [Adapters](https://docs.usebetter.dev/audit/adapters/) — middleware setup, actor extractors, and error handling --- ## Adapters Better Audit has two kinds of adapters: - **ORM adapters** — connect audit to your database (`drizzleAuditAdapter`, `prismaAuditAdapter`) and intercept mutations for automatic capture (`withAuditProxy`, `withAuditExtension`) - **Framework adapters** — middleware that extracts the current actor from each HTTP request and stores it in `AsyncLocalStorage` so every audit entry is tagged automatically --- ## ORM adapters ### Drizzle Install peer dependencies if you haven't already: **pg:** ```bash npm install drizzle-orm pg ``` **postgres.js:** ```bash npm install drizzle-orm postgres ``` **SQLite:** ```bash npm install drizzle-orm better-sqlite3 ``` **Wiring up the adapter:** **pg:** ```ts title="lib/audit.ts" import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { betterAudit } from "@usebetterdev/audit"; import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle"; import { usersTable } from "./schema"; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const db = drizzle(pool); export const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], }); // Wrap db — insert/update/delete are now captured automatically export const auditedDb = withAuditProxy(db, audit.captureLog); ``` **postgres.js:** ```ts title="lib/audit.ts" import { drizzle } from 
"drizzle-orm/postgres-js"; import postgres from "postgres"; import { betterAudit } from "@usebetterdev/audit"; import { drizzleAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle"; const client = postgres(process.env.DATABASE_URL); const db = drizzle(client); export const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], }); export const auditedDb = withAuditProxy(db, audit.captureLog); ``` **SQLite:** ```ts title="lib/audit.ts" import Database from "better-sqlite3"; import { drizzle } from "drizzle-orm/better-sqlite3"; import { betterAudit } from "@usebetterdev/audit"; import { drizzleSqliteAuditAdapter, withAuditProxy } from "@usebetterdev/audit/drizzle"; const sqlite = new Database("./dev.db"); const db = drizzle(sqlite); export const audit = betterAudit({ database: drizzleSqliteAuditAdapter(db), auditTables: ["users"], }); export const auditedDb = withAuditProxy(db, audit.captureLog); ``` **`drizzleAuditAdapter(db)`** connects audit to the `audit_logs` table — it handles `writeLog`, `queryLogs`, `getLogById`, `getStats`, and `purgeLogs`. **`withAuditProxy(db, captureLog)`** wraps the Drizzle database with a transparent proxy that intercepts `db.insert()`, `db.update()`, and `db.delete()`. It also wraps `db.transaction()` so the proxy carries into nested transactions. The proxy reads the before-state via a `SELECT` before each `UPDATE` or `DELETE`. **Proxy options:** | Option | Type | Default | Description | |--------|------|---------|-------------| | `onError` | `(error: unknown) => void` | `console.error` | Called when audit capture fails. Errors are always swallowed. | | `onMissingRecordId` | `"warn" \| "skip" \| "throw"` | `"warn"` | What to do when the primary key cannot be detected. | | `skipBeforeState` | `string[]` | `[]` | Table names to skip the pre-mutation SELECT for (high-throughput tables). 
| | `maxBeforeStateRows` | `number` | `1000` | If the pre-mutation SELECT returns more rows than this, skip before-state capture. | ```ts const auditedDb = withAuditProxy(db, audit.captureLog, { onMissingRecordId: "skip", skipBeforeState: ["events", "metrics"], }); ``` ### Prisma Install peer dependencies: ```bash npm install @prisma/client ``` Requires `@prisma/client` >= 5.0.0. ```ts title="lib/audit.ts" const prisma = new PrismaClient(); export const audit = betterAudit({ database: prismaAuditAdapter(prisma), auditTables: ["users"], }); // Extend Prisma — all mutations are now captured automatically export const auditedPrisma = withAuditExtension(prisma, audit.captureLog); ``` **`prismaAuditAdapter(prisma)`** connects audit to the `audit_logs` table using `$executeRawUnsafe` and `$queryRawUnsafe` — no Prisma model is generated for `audit_logs`. It handles `writeLog`, `queryLogs`, `getLogById`, `getStats`, and `purgeLogs`. **`withAuditExtension(prisma, captureLog)`** uses Prisma's `$extends` API to intercept all mutations across all models. For `update` and `upsert`, a `findUnique` is issued before the mutation to capture before-state. For `updateMany` and `deleteMany`, a `findMany` captures per-row state up to `maxBeforeStateRows`. **Extension options:** | Option | Type | Default | Description | |--------|------|---------|-------------| | `bulkMode` | `"per-row" \| "bulk"` | `"per-row"` | `"per-row"` emits one audit entry per row; `"bulk"` emits a single entry for the whole operation. | | `onError` | `(error: unknown) => void` | `console.error` | Called when audit capture fails. Errors are always swallowed. | | `skipBeforeCapture` | `string[]` | `[]` | SQL table names to skip the pre-mutation lookup for. | | `maxBeforeStateRows` | `number` | `100` | If before-state lookup exceeds this many rows, falls back to a single bulk entry. | | `tableNameTransform` | `(modelName: string) => string` | auto-detect | Override the model → table name mapping. 
Auto-detection reads `@@map` directives. | ```ts const auditedPrisma = withAuditExtension(prisma, audit.captureLog, { bulkMode: "bulk", skipBeforeCapture: ["events", "metrics"], }); ``` > **`auditTables` uses SQL table names:** `betterAudit({ auditTables: [...] })` filters entries by table name. With Prisma, use the SQL table name (from `@@map` directives), not the Prisma model name. `withAuditExtension` auto-detects this mapping from `_runtimeDataModel`, so in most cases it just works. --- ## Framework adapters The framework adapter's job is to extract the current actor (user or service) from each HTTP request and store it in `AsyncLocalStorage`. Every audit entry captured during that request automatically receives the actor — you don't pass it explicitly. All adapters share the same behaviour: - Extract actor via the configured `ContextExtractor` - Wrap the request handler inside `runWithAuditContext()` so `getAuditContext()` returns the actor anywhere in the call tree - **Fail open** — if extraction fails or yields nothing, the request proceeds without context ### Hono ```ts title="src/server.ts" const app = new Hono(); // Reads `sub` from Authorization: Bearer by default app.use("*", betterAuditHono()); app.post("/users", async (c) => { const body = await c.req.json(); // actorId is automatically attached from the JWT await auditedDb.insert(usersTable).values(body); return c.json({ ok: true }, 201); }); ``` ### Express ```ts title="src/server.ts" const app = express(); app.use(express.json()); // Reads `sub` from Authorization: Bearer by default app.use(betterAuditExpress()); app.post("/users", async (req, res, next) => { try { // actorId is automatically attached from the JWT await auditedDb.insert(usersTable).values(req.body); res.status(201).json({ ok: true }); } catch (error) { next(error); } }); ``` > **ALS scope and async handlers:** The Express adapter keeps the `AsyncLocalStorage` scope open until the response finishes (via 
`response.on('finish'/'close')`). This means audit context is available even after `await` inside async route handlers — the context does not reset at the first `await` boundary. --- ### Next.js App Router Next.js App Router has three distinct execution contexts that each need a different approach. #### The ALS propagation constraint Next.js edge middleware runs in a **separate V8 isolate** from route handlers. `AsyncLocalStorage` set in middleware does not carry over into route handlers or server actions. The solution is a two-part pattern: middleware extracts the actor and forwards it as a request header; the route handler wrapper reads that header and sets up ALS. ``` request → Edge Middleware: extract actor → set x-better-audit-actor-id header → Route Handler: read header → runWithAuditContext() → getAuditContext() available here ``` #### Pattern 1 — Middleware + route handlers (recommended for APIs) ```ts title="middleware.ts" export default createAuditMiddleware(); export const config = { matcher: "/api/:path*" }; ``` ```ts title="app/api/orders/route.ts" async function handler(request: NextRequest) { const body = await request.json(); await auditedDb.insert(ordersTable).values(body); return Response.json({ ok: true }); } export const POST = withAuditRoute(handler, { extractor: { actor: fromHeader(AUDIT_ACTOR_HEADER) }, }); ``` Use `AUDIT_ACTOR_HEADER` (the exported constant `"x-better-audit-actor-id"`) in both places so the header name never gets out of sync. > **Spoof prevention:** `createAuditMiddleware` always overwrites the actor header on the forwarded request — including when extraction fails (it sets it to `""`). This prevents clients from injecting a fake actor id by sending the header directly. Never trust `x-better-audit-actor-id` on incoming requests unless your middleware is in the path. #### Pattern 2 — Route handler only (no middleware) Skip the middleware entirely and extract from the request directly. 
Useful for standalone routes or apps without a `middleware.ts`. ```ts title="app/api/orders/route.ts" // Reads `sub` from Authorization: Bearer by default async function handler(request: NextRequest) { const body = await request.json(); await auditedDb.insert(ordersTable).values(body); return Response.json({ ok: true }); } export const POST = withAuditRoute(handler); ``` #### Pattern 3 — Server actions Server actions don't receive a `Request` object. `withAudit` reads all request headers via `next/headers` and constructs a synthetic request for the extractor. ```ts title="app/actions.ts" "use server"; export const createOrder = withAudit(async (formData: FormData) => { const ctx = getAuditContext(); // { actorId: "user-123" } await audit.captureLog({ tableName: "orders", operation: "INSERT", recordId: "ord-1", after: {} }); }); ``` #### Custom extractors All three wrappers accept the same `extractor` option. The extractor receives a Web-standard `Request` and returns a string or `undefined`. ```ts // Different JWT claim withAuditRoute(handler, { extractor: { actor: fromBearerToken("user_id") } }); // Plain header (e.g. from a trusted API gateway) withAuditRoute(handler, { extractor: { actor: fromHeader("x-user-id") } }); // Cookie-based session withAudit(action, { extractor: { actor: fromCookie("session_id") } }); ``` --- ## Actor extraction All adapters accept the same `extractor` option. The extractor receives a Web-standard `Request` and returns the actor identifier as a string (or `undefined`). ### Default: JWT Bearer token With no options, all adapters decode `sub` from the `Authorization: Bearer ` header. The token is decoded **without** signature verification — that is the auth layer's responsibility. **Hono:** ```ts app.use("*", betterAuditHono()); // Authorization: Bearer eyJ... → actorId = jwt.sub ``` **Express:** ```ts app.use(betterAuditExpress()); // Authorization: Bearer eyJ... 
→ actorId = jwt.sub ``` **Next.js:** ```ts // middleware.ts export default createAuditMiddleware(); // Authorization: Bearer eyJ... → actorId = jwt.sub ``` ### Custom JWT claim **Hono:** ```ts import { fromBearerToken } from "@usebetterdev/audit"; app.use("*", betterAuditHono({ extractor: { actor: fromBearerToken("user_id") } })); ``` **Express:** ```ts import { fromBearerToken } from "@usebetterdev/audit"; app.use(betterAuditExpress({ extractor: { actor: fromBearerToken("user_id") } })); ``` **Next.js:** ```ts import { fromBearerToken } from "@usebetterdev/audit/next"; export default createAuditMiddleware({ extractor: { actor: fromBearerToken("user_id") } }); ``` ### Header-based extraction Use `fromHeader` when the actor identity is passed as a plain request header (common behind API gateways): > **Only safe behind a trusted gateway:** `fromHeader` reads the header value exactly as received. If clients can set that header directly, they can spoof any actor identity. Only use `fromHeader` when an upstream gateway or load balancer controls the header and strips any client-supplied value before forwarding the request. 
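If you cannot guarantee a stripping gateway, a custom extractor can refuse the header unless the request also proves it passed through the gateway. The sketch below is illustrative only — the `x-gateway-auth` header name and shared-secret scheme are assumptions, not library API; the function just matches the documented `ValueExtractor` shape (`(request: Request) => string | undefined`):

```typescript
// Hypothetical hardening: only trust x-user-id when a gateway-injected
// shared-secret header checks out. Header names and the secret scheme
// are assumptions for illustration.
const GATEWAY_SECRET = "change-me"; // normally process.env.GATEWAY_SHARED_SECRET

export function gatewayVerifiedActor(request: Request): string | undefined {
  if (request.headers.get("x-gateway-auth") !== GATEWAY_SECRET) {
    return undefined; // fail open: request proceeds without audit context
  }
  return request.headers.get("x-user-id") ?? undefined;
}
```

Pass it anywhere an `extractor: { actor: ... }` option is accepted.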
**Hono:** ```ts import { fromHeader } from "@usebetterdev/audit"; app.use("*", betterAuditHono({ extractor: { actor: fromHeader("x-user-id") } })); ``` **Express:** ```ts import { fromHeader } from "@usebetterdev/audit"; app.use(betterAuditExpress({ extractor: { actor: fromHeader("x-user-id") } })); ``` **Next.js:** ```ts import { fromHeader } from "@usebetterdev/audit/next"; // Route handler only — reads actor from a gateway-injected header export const GET = withAuditRoute(handler, { extractor: { actor: fromHeader("x-user-id") }, }); ``` ### Cookie-based extraction Use `fromCookie` for session-based auth where the actor ID lives in a cookie: **Hono:** ```ts import { fromCookie } from "@usebetterdev/audit"; app.use("*", betterAuditHono({ extractor: { actor: fromCookie("session_id") } })); ``` **Express:** ```ts import { fromCookie } from "@usebetterdev/audit"; app.use(betterAuditExpress({ extractor: { actor: fromCookie("session_id") } })); ``` **Next.js:** ```ts import { fromCookie } from "@usebetterdev/audit/next"; // Works in withAudit (server actions) — cookies() are read via next/headers export const createOrder = withAudit(action, { extractor: { actor: fromCookie("session_id") }, }); ``` ### Custom extractor function Write your own `ValueExtractor` for full control. 
It receives a Web-standard `Request` and returns a string or `undefined`: **Hono:** ```ts app.use( "*", betterAuditHono({ extractor: { actor: async (request) => { const apiKey = request.headers.get("x-api-key"); if (!apiKey) return undefined; const owner = await resolveApiKeyOwner(apiKey); return owner?.id; }, }, }), ); ``` **Express:** ```ts app.use( betterAuditExpress({ extractor: { actor: async (request) => { const apiKey = request.headers.get("x-api-key"); if (!apiKey) return undefined; const owner = await resolveApiKeyOwner(apiKey); return owner?.id; }, }, }), ); ``` **Next.js:** ```ts export const POST = withAuditRoute(handler, { extractor: { actor: async (request) => { const apiKey = request.headers.get("x-api-key"); if (!apiKey) return undefined; const owner = await resolveApiKeyOwner(apiKey); return owner?.id; }, }, }); ``` --- ## Error handling Extraction errors never break the request. By default all adapters **fail open** — if an extractor throws, the request proceeds without audit context. Use `onError` to log or report extraction failures: **Hono:** ```ts app.use( "*", betterAuditHono({ onError: (error) => console.error("Audit extraction failed:", error), }), ); ``` **Express:** ```ts app.use( betterAuditExpress({ onError: (error) => console.error("Audit extraction failed:", error), }), ); ``` **Next.js:** ```ts export default createAuditMiddleware({ onError: (error) => console.error("Audit extraction failed:", error), }); ``` --- ## API reference ### `drizzleAuditAdapter(db)` Creates an `AuditDatabaseAdapter` backed by a Drizzle pg database. Accepts any Drizzle database instance with `insert`, `select`, and `delete` support. ### `drizzleSqliteAuditAdapter(db)` Creates an `AuditDatabaseAdapter` backed by a Drizzle SQLite database. ### `withAuditProxy(db, captureLog, options?)` Wraps a Drizzle database (or transaction) with a transparent proxy that intercepts `insert`, `update`, and `delete`. 
Returns a new database handle with identical types — use it everywhere in place of the original `db`. The proxy propagates into nested `db.transaction()` calls automatically. ### `prismaAuditAdapter(prisma)` Creates an `AuditDatabaseAdapter` backed by a Prisma client. Uses `$executeRawUnsafe` and `$queryRawUnsafe` for precise PostgreSQL type casting — all user-supplied values are passed as bound parameters, never string-interpolated. No Prisma schema changes required. ### `withAuditExtension(prisma, captureLog, options?)` Wraps a Prisma client using `$extends` to intercept all mutations across all models. Returns a new extended client of the same type — use it everywhere in place of the original `prisma`. ### `betterAuditHono(options?)` Convenience wrapper for Hono. Equivalent to `createHonoMiddleware(options)`. ### `createHonoMiddleware(options?)` Returns a Hono-compatible middleware function `(context, next) => Promise`. ### `betterAuditExpress(options?)` Convenience wrapper for Express. Equivalent to `createExpressMiddleware(options)`. ### `createExpressMiddleware(options?)` Returns an Express-compatible middleware function `(req, res, next) => Promise`. ### `betterAuditNext(options?)` / `createAuditMiddleware(options?)` Returns a Next.js edge middleware function `(request: NextRequest) => Promise`. Always overwrites the actor header on the forwarded request to prevent client spoofing. Additional option: | Option | Type | Description | | ------------- | -------- | ------------------------------------------------------------------------------------ | | `actorHeader` | `string` | Header name for forwarding the actor id. Defaults to `AUDIT_ACTOR_HEADER`. | ### `withAuditRoute(handler, options?)` Wraps an App Router route handler `(request: NextRequest, context) => Promise`. Extracts actor from the request and runs the handler inside an ALS scope. ### `withAudit(action, options?)` Wraps a server action `(...args) => Promise`. 
Reads all request headers via `next/headers`, extracts actor, and runs the action inside an ALS scope. ### `AUDIT_ACTOR_HEADER` Exported string constant `"x-better-audit-actor-id"` — the default header used to forward the actor id between middleware and route handlers. **Shared options (all framework adapters):** | Option | Type | Description | | ----------- | -------------------------- | ---------------------------------------------------- | | `extractor` | `ContextExtractor` | Actor extractor config. Defaults to JWT `sub` claim. | | `onError` | `(error: unknown) => void` | Called when an extractor throws. Defaults to no-op. | ## Next steps - [Actor Context](https://docs.usebetter.dev/audit/actor-context/) — how context propagates, enriching mid-request, and troubleshooting - [Configuration](https://docs.usebetter.dev/audit/configuration/) — enrichment, retention, hooks, and ORM adapter tuning - [Troubleshooting](https://docs.usebetter.dev/audit/troubleshooting/) — common issues with capture, actor context, and migrations - [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — working example with ORM + framework middleware in one page --- ## CLI & Migrations The audit CLI generates the `audit_logs` migration, verifies your schema, exports log entries, and purges old data — available install-free via `npx` without adding anything to your `package.json`. ## Migration workflow 1. **Generate the `audit_logs` migration** The CLI auto-detects your ORM and dialect from `package.json` and `DATABASE_URL`. :::tip[Dry run first] Run `npx @usebetterdev/audit-cli migrate --dry-run` to print the SQL to stdout without writing any files. 
:::

**Drizzle:**

```bash
# Create an empty custom migration file for Drizzle Kit to manage
npx drizzle-kit generate --custom --name=audit_logs --prefix=none

# Fill it with the audit_logs DDL
npx @usebetterdev/audit-cli migrate -o drizzle/_audit_logs.sql
```

**Prisma:**

```bash
# Create a draft migration without applying it
npx prisma migrate dev --create-only --name audit_logs

# Fill it with the audit_logs DDL
npx @usebetterdev/audit-cli migrate \
  -o prisma/migrations/*_audit_logs/migration.sql
```

2. **Apply the migration**

**Drizzle:**

```bash
npx drizzle-kit migrate
```

**Prisma:**

```bash
npx prisma migrate dev
```

3. **Verify with check**

```bash
npx @usebetterdev/audit-cli check --database-url $DATABASE_URL
```

All checks should pass before wiring up the audit instance in your application.

## Commands

### migrate

Generates the `audit_logs` table DDL for your database dialect. Outputs SQL to a file or stdout.

| Flag | Default | Description |
|------|---------|-------------|
| `-o, --output <path>` | — | File path or directory to write the SQL (omit to print to stdout). Supports glob patterns (e.g., `prisma/migrations/*_audit_logs/migration.sql`). When a directory is given, the CLI generates a timestamped filename. |
| `--adapter <orm>` | auto-detected | ORM to target: `drizzle` or `prisma`. |
| `--dialect <dialect>` | auto-detected | Database dialect: `postgres`, `mysql`, or `sqlite`. |
| `--dry-run` | false | Print SQL to stdout without writing a file. |

```bash
# Preview the SQL without writing a file
npx @usebetterdev/audit-cli migrate --dry-run

# Write to a specific directory
npx @usebetterdev/audit-cli migrate -o drizzle/

# Explicit adapter and dialect override
npx @usebetterdev/audit-cli migrate --adapter drizzle --dialect postgres -o ./migrations/
```

ORM and dialect are inferred from installed packages (`drizzle-orm`, `@prisma/client`) and the `DATABASE_URL` environment variable. Pass `--adapter` and `--dialect` explicitly to override.
If auto-detection fails, the CLI exits with a clear error listing which flag to add.

### check

Connects to your database and verifies that the `audit_logs` table exists with the expected schema. Reports any missing columns or indexes.

| Flag | Default | Description |
|------|---------|-------------|
| `--database-url <url>` | `$DATABASE_URL` | Connection string to your database. |

```bash
npx @usebetterdev/audit-cli check --database-url $DATABASE_URL
```

A passing run prints `✓` for each check. Any `✗` line identifies the missing column or index — re-run `migrate`, apply the migration, and run `check` again. See [Quick Start](https://docs.usebetter.dev/audit/quick-start/) for the full migration workflow.

### stats

Reports aggregate counts and storage size for the `audit_logs` table.

> **Coming soon:** `stats` prints "not yet implemented" in the current release. The command is reserved for a future release that will add row counts, size estimates, and per-table breakdowns.

### purge

Deletes audit entries older than the configured retention period. Always run with `--dry-run` first to see how many rows would be deleted. See [Retention Policies](https://docs.usebetter.dev/audit/compliance/retention/) for scheduling and archiving strategies.

| Flag | Default | Description |
|------|---------|-------------|
| `--database-url <url>` | `$DATABASE_URL` | Connection string to your database. |
| `--since <when>` | from config | Override the cutoff for this run. Accepts ISO dates (`2025-01-01`) or duration shorthands (`90d`, `4w`, `3m`, `1y`). |
| `--batch-size <n>` | `1000` | Rows per DELETE batch. |
| `--dry-run` | false | Print the number of rows that would be deleted without deleting anything. |
| `--yes` | false | Skip the confirmation prompt. Required for non-interactive use (CI, cron jobs). |

```bash
# See how many rows would be deleted (safe — no changes made)
npx @usebetterdev/audit-cli purge --dry-run --since 90d --database-url $DATABASE_URL

# Delete entries older than 90 days (prompts for confirmation)
npx @usebetterdev/audit-cli purge --since 90d --database-url $DATABASE_URL

# Non-interactive deletion (CI/cron — skips confirmation prompt)
npx @usebetterdev/audit-cli purge --since 90d --yes --database-url $DATABASE_URL
```

> **Caution:** `purge` permanently deletes rows. Always run `--dry-run` first. Pass `--yes` only in automated pipelines after validating the row count looks correct.

### export

Exports audit log entries to stdout in JSON or CSV format. Supports filtering by time range, table, actor, severity, and compliance tag. Use either `--since` (relative) or `--from`/`--to` (absolute range), not both.

| Flag | Default | Description |
|------|---------|-------------|
| `--database-url <url>` | `$DATABASE_URL` | Connection string to your database. |
| `--format <format>` | `json` | Output format: `json` or `csv`. |
| `--since <when>` | _(none)_ | Relative time filter, e.g. `1h`, `7d`, `30d`. |
| `--from <timestamp>` | _(none)_ | Absolute start timestamp (ISO 8601). |
| `--to <timestamp>` | _(none)_ | Absolute end timestamp (ISO 8601). |
| `--table <name>` | _(none)_ | Filter by table name. |
| `--actor <id>` | _(none)_ | Filter by actor ID. |
| `--severity <level>` | _(none)_ | Filter by severity: `low`, `medium`, `high`, `critical`. |
| `--compliance <tag>` | _(none)_ | Filter by compliance tag, e.g. `gdpr`, `soc2`. |
| `-o, --output <path>` | — | File path to write output (omit to print to stdout). |

> **Sensitive data:** Exported files contain actor IDs and before/after record snapshots. Store them with appropriate access controls and do not commit them to version control.
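# See
The relative `--since` values accepted by `purge` and `export` follow a simple `<number><unit>` shape. A rough model of that syntax (illustrative only — this is not the CLI's actual parser, and month/year lengths are approximated):

```typescript
// Models duration shorthands like "1h", "7d", "4w", "3m", "1y".
// Approximations: a month is 30 days, a year 365 days.
const UNIT_MS: Record<string, number> = {
  h: 60 * 60 * 1000,
  d: 24 * 60 * 60 * 1000,
  w: 7 * 24 * 60 * 60 * 1000,
  m: 30 * 24 * 60 * 60 * 1000,
  y: 365 * 24 * 60 * 60 * 1000,
};

export function sinceToCutoff(shorthand: string, now = Date.now()): Date {
  const match = /^(\d+)([hdwmy])$/.exec(shorthand);
  if (!match) throw new Error(`invalid duration shorthand: ${shorthand}`);
  return new Date(now - Number(match[1]) * UNIT_MS[match[2]]);
}
```

So `--since 90d` selects everything at or after `now − 90 × 24h`.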
```bash
# Export all entries from the last hour as JSON
npx @usebetterdev/audit-cli export --since 1h --database-url $DATABASE_URL

# Export critical-severity entries from the last 7 days as CSV
npx @usebetterdev/audit-cli export \
  --since 7d \
  --severity critical \
  --format csv \
  --database-url $DATABASE_URL

# Export GDPR-tagged entries for a specific actor in a date range, saved to a file
npx @usebetterdev/audit-cli export \
  --from 2025-01-01T00:00:00Z \
  --to 2025-01-31T23:59:59Z \
  --compliance gdpr \
  --actor user-42 \
  --format json \
  -o audit-export.json \
  --database-url $DATABASE_URL
```

---

## Troubleshooting

## `actorId` is `undefined` in logs

Audit entries are written but `actorId` is `undefined` (or `null` in the database). This means the framework middleware could not extract an actor from the request — or the middleware was not mounted.

### Common causes

**Middleware not mounted.** The audit middleware must be registered before your route handlers:

**Hono:**

```ts title="src/server.ts"
import { Hono } from "hono";
import { betterAuditHono } from "@usebetterdev/audit/hono";

const app = new Hono();

// Must come before route handlers
app.use("*", betterAuditHono());
```

**Express:**

```ts title="src/server.ts"
import express from "express";
import { betterAuditExpress } from "@usebetterdev/audit/express";

const app = express();
app.use(express.json());

// Must come before route handlers
app.use(betterAuditExpress());
```

**Next.js:**

```ts title="middleware.ts"
import { createAuditMiddleware } from "@usebetterdev/audit/next";

export default createAuditMiddleware();
export const config = { matcher: "/api/:path*" };
```

**Token or header missing from request.** The default extractor reads `sub` from an `Authorization: Bearer <token>` header.
If your requests don't carry a JWT, configure a different extractor: ```ts title="src/server.ts" // Use a plain header from your API gateway app.use("*", betterAuditHono({ extractor: { actor: fromHeader("x-user-id") } })); ``` **Extractor returning `undefined`.** Add `onError` to surface extraction failures: ```ts title="src/server.ts" app.use("*", betterAuditHono({ onError: (error) => console.error("Audit extraction failed:", error), })); ``` **Code outside the request scope.** `getAuditContext()` returns `undefined` when called outside a request — for example, in a module-level initializer or a background job. Use `audit.withContext()` for non-request code: ```ts title="src/jobs/cleanup.ts" await audit.withContext({ actorId: "system:cleanup-job" }, async () => { // captureLog() calls here receive the actorId await deactivateExpiredAccounts(); }); ``` --- ## Events not being captured Mutations happen in the database but no audit entries appear. ### Table not in `auditTables` > **Caution:** `auditTables` is an allowlist — mutations on unlisted tables produce no error and no log entry. This is the most common reason events appear to be missing. The `auditTables` array controls which tables are audited. Mutations on tables not in this list are silently skipped: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders"], }); // INSERT into "users" → captured // INSERT into "payments" → silently skipped (not in auditTables) ``` **Fix:** Add the missing table name to `auditTables`. Use the SQL table name, not the ORM model name. ### Not using the audited database handle The audit proxy only intercepts mutations through the wrapped database. 
If you use the original `db` instead of `auditedDb`, nothing is captured: ```ts title="src/audit.ts" const auditedDb = withAuditProxy(db, audit.captureLog); // Captured — uses audited handle await auditedDb.insert(usersTable).values({ name: "Alice" }); // NOT captured — uses original handle await db.insert(usersTable).values({ name: "Alice" }); ``` **Fix:** Replace all `db` references with `auditedDb` in your route handlers and services. ### Prisma model name vs SQL table name With Prisma, `auditTables` uses the SQL table name (from `@@map`), not the Prisma model name. If your model is `User` but `@@map("users")` maps it to `users`, use `"users"` in `auditTables`: ```ts title="src/audit.ts" const audit = betterAudit({ database: prismaAuditAdapter(prisma), auditTables: ["users"], // SQL table name, not "User" }); ``` The `withAuditExtension` auto-detects the model-to-table mapping from `_runtimeDataModel`. If auto-detection fails, use `tableNameTransform`: ```ts title="src/audit.ts" const auditedPrisma = withAuditExtension(prisma, audit.captureLog, { tableNameTransform: (modelName) => modelName.toLowerCase() + "s", }); ``` --- ## `beforeData` is missing for updates and deletes Audit entries for `UPDATE` and `DELETE` operations show `null` for `beforeData`. ### Drizzle: table in `skipBeforeState` If the table is listed in `skipBeforeState`, the proxy skips the pre-mutation `SELECT`: ```ts title="src/audit.ts" const auditedDb = withAuditProxy(db, audit.captureLog, { skipBeforeState: ["events"], // "events" table won't have beforeData }); ``` **Fix:** Remove the table from `skipBeforeState` if you need before-state capture. 
### Prisma: table in `skipBeforeCapture` Same concept — Prisma skips the `findUnique`/`findMany` before the mutation: ```ts title="src/audit.ts" const auditedPrisma = withAuditExtension(prisma, audit.captureLog, { skipBeforeCapture: ["events"], // "events" table won't have beforeData }); ``` ### Too many rows affected When an `UPDATE` or `DELETE` affects more rows than `maxBeforeStateRows`, the before-state capture is skipped to avoid expensive queries. The default limits are 1000 (Drizzle) and 100 (Prisma). **Fix:** Increase `maxBeforeStateRows` if your use case requires it, but be aware of the performance impact: ```ts title="src/audit.ts" // Drizzle const auditedDb = withAuditProxy(db, audit.captureLog, { maxBeforeStateRows: 5000, }); // Prisma const auditedPrisma = withAuditExtension(prisma, audit.captureLog, { maxBeforeStateRows: 500, }); ``` --- ## `recordId` is empty or missing The audit entry is captured but `recordId` is empty. ### Primary key not detected The Drizzle proxy tries to extract the primary key from the mutation result. If the table uses a column name other than `id`, set `primaryKey`: ```ts title="src/audit.ts" const auditedDb = withAuditProxy(db, audit.captureLog, { primaryKey: "uuid", // default is "id" }); ``` ### Controlling the behavior Use `onMissingRecordId` to control what happens when the record ID cannot be determined: ```ts title="src/audit.ts" const auditedDb = withAuditProxy(db, audit.captureLog, { onMissingRecordId: "warn", // log a warning (default) // onMissingRecordId: "skip", // skip the audit entry entirely // onMissingRecordId: "throw", // throw an error }); ``` --- ## `redact` and `include` conflict ``` redact and include are mutually exclusive ``` An enrichment rule has both `redact` and `include` set. 
These are two different modes — use one or the other: ```ts title="src/audit.ts" // Blocklist — remove specific fields, keep everything else audit.enrich("users", "*", { redact: ["password", "ssn"], }); // Allowlist — keep only listed fields, remove everything else audit.enrich("users", "*", { include: ["id", "name", "email"], }); ``` --- ## Async write errors are silent When `asyncWrite` is `true`, `captureLog()` returns immediately without waiting for the write. If the write fails, the error is swallowed by default. Also check `beforeLog` hooks — if a `beforeLog` hook throws, the entry is silently dropped regardless of `asyncWrite`. **Fix:** Always configure `onError` when using async writes: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users"], asyncWrite: true, onError: (error) => { logger.error("Audit write failed", error); metrics.increment("audit.write_errors"); }, }); ``` --- ## Context lost after `await` (Express) The Express adapter extends the `AsyncLocalStorage` scope to cover the full response lifecycle. If context disappears after an `await`, check: - **Middleware order.** `betterAuditExpress()` must be mounted **before** any middleware or route handler that needs context. - **Custom wrappers.** If another middleware wraps `next()` in a way that detaches from the `AsyncLocalStorage` scope, the context is lost. Move the audit middleware earlier in the chain. --- ## Migration issues ### `audit_logs` table not found ``` check: FAIL - table "audit_logs" not found ``` The migration has not been applied. 
Run the migration workflow: **Drizzle:** ```bash # Generate the migration npx drizzle-kit generate --custom --name=audit_logs --prefix=none npx @usebetterdev/audit-cli migrate -o drizzle/_audit_logs.sql # Apply it npx drizzle-kit migrate ``` **Prisma:** ```bash # Create a draft migration npx prisma migrate dev --create-only --name audit_logs # Fill it with the audit DDL npx @usebetterdev/audit-cli migrate \ -o prisma/migrations/*_audit_logs/migration.sql # Apply it npx prisma migrate dev ``` Verify with: ```bash npx @usebetterdev/audit-cli check --database-url $DATABASE_URL ``` ### ORM or dialect auto-detection fails The CLI infers the ORM from installed packages (`drizzle-orm`, `@prisma/client`) and the dialect from `DATABASE_URL`. If auto-detection fails, pass the flags explicitly: ```bash npx @usebetterdev/audit-cli migrate --adapter drizzle --dialect postgres -o drizzle/ ``` ---
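Dialect inference from `DATABASE_URL` generally comes down to the connection-string scheme. A rough sketch of the idea — not the CLI's real detection code, and the scheme coverage here is an assumption:

```typescript
// Illustrative dialect inference from a connection string's scheme.
export function inferDialect(
  url: string,
): "postgres" | "mysql" | "sqlite" | undefined {
  if (/^postgres(ql)?:/.test(url)) return "postgres";
  if (/^mysql:/.test(url)) return "mysql";
  if (/^(file|sqlite):/.test(url)) return "sqlite";
  return undefined; // unknown scheme — pass --dialect explicitly
}
```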
## Performance tuning

### Before-state SELECTs are slow

For high-throughput tables where the pre-mutation `SELECT` adds unacceptable latency, skip before-state capture:

```ts title="src/audit.ts"
// Drizzle
const auditedDb = withAuditProxy(db, audit.captureLog, {
  skipBeforeState: ["events", "metrics"],
});

// Prisma
const auditedPrisma = withAuditExtension(prisma, audit.captureLog, {
  skipBeforeCapture: ["events", "metrics"],
});
```

The audit entry is still written — it just won't include `beforeData` or the computed `diff`.

### Async writes for non-critical tables

Enable `asyncWrite` globally or per-call to avoid blocking the request on the database write:

```ts title="src/audit.ts"
// Global — all writes are fire-and-forget
const audit = betterAudit({
  database: drizzleAuditAdapter(db),
  auditTables: ["users", "orders"],
  asyncWrite: true,
  onError: (error) => logger.error("Audit write failed", error),
});

// Per-call — force sync for critical operations
await audit.captureLog({
  tableName: "payments",
  operation: "DELETE",
  recordId: "pay-1",
  asyncWrite: false, // overrides global asyncWrite
});
```

### Bulk mode for Prisma

When `createMany`, `updateMany`, or `deleteMany` affect many rows, `"per-row"` mode (the default) creates one audit entry per row. Switch to `"bulk"` mode for a single entry per operation:

```ts title="src/audit.ts"
const auditedPrisma = withAuditExtension(prisma, audit.captureLog, {
  bulkMode: "bulk",
});
```
---

## CLI errors

### Database URL required

```
check requires --database-url or DATABASE_URL environment variable
```

Pass the URL via flag or environment variable:

```bash
# Via flag
npx @usebetterdev/audit-cli check --database-url postgres://user:pass@localhost:5432/mydb

# Via environment variable
export DATABASE_URL=postgres://user:pass@localhost:5432/mydb
npx @usebetterdev/audit-cli check
```

### Purge safety

The `purge` command permanently deletes rows. Always preview first:

```bash
# Preview — no rows deleted
npx @usebetterdev/audit-cli purge --dry-run --since 90d --database-url $DATABASE_URL

# Delete (prompts for confirmation)
npx @usebetterdev/audit-cli purge --since 90d --database-url $DATABASE_URL

# Non-interactive (CI/cron)
npx @usebetterdev/audit-cli purge --since 90d --yes --database-url $DATABASE_URL
```

---

## Wrong actor in audit logs (Next.js)

Audit entries show an unexpected `actorId` — or an actor you didn't set. This can happen when clients spoof the actor header.

> **Caution:** If you skip the Next.js edge middleware and use `withAuditRoute` alone with `fromHeader`, clients can inject any actor identity by sending the header directly. Always use `createAuditMiddleware` in production to prevent spoofing.

`createAuditMiddleware` always overwrites the `x-better-audit-actor-id` header on the forwarded request — even when extraction fails (it sets it to `""`). This prevents clients from injecting a fake actor by sending the header directly.

```ts title="app/api/orders/route.ts"
// Safe — reads from JWT, not a spoofable header
export const POST = withAuditRoute(handler);

// Unsafe without middleware — clients can set x-user-id directly
export const POST = withAuditRoute(handler, {
  extractor: { actor: fromHeader("x-user-id") },
});
```

---

## Overview

After reading this page you will know which tables to audit, which compliance tags to apply, and how to generate evidence exports for SOC 2, HIPAA, GDPR, and PCI DSS assessments.
## Framework requirements at a glance | Requirement | SOC 2 | HIPAA | GDPR | PCI DSS | Better Audit feature | |---|---|---|---|---|---| | Record every data mutation | CC7.2 | §164.312(b) | Art. 30 | 10.2 | [ORM auto-capture](https://docs.usebetter.dev/audit/quick-start/) | | Track the acting user | CC6.1 | §164.312(a)(1) | Art. 5(2) | 10.1 | [Actor context](https://docs.usebetter.dev/audit/actor-context/) | | Capture before/after state | CC7.2 | §164.312(b) | Art. 17 | 10.3 | Before/after snapshots | | Protect sensitive fields | CC6.5 | §164.312(a)(2)(iv) | Art. 25 | 3.4 | [Field redaction](https://docs.usebetter.dev/audit/enrichment/#field-redaction) | | Tag entries by framework | CC7.1 | §164.530(j) | Art. 30 | 10.2 | [Compliance tags](https://docs.usebetter.dev/audit/enrichment/#compliance-tags) | | Query and export evidence | CC7.3 | §164.530(j) | Art. 15 | 10.7 | [Query & export](https://docs.usebetter.dev/audit/querying/) | | Retain logs for required period | CC7.4 | §164.530(j)(2) | Art. 5(1)(e) | 10.7 | [Retention policy](https://docs.usebetter.dev/audit/configuration/) | | Alert on critical events | CC7.3 | §164.308(a)(6)(ii) | — | 10.6 | [Notify flag](https://docs.usebetter.dev/audit/enrichment/#notifications) | > **Caution:** Compliance tags are freeform strings. Standardize casing across your team — use lowercase (`gdpr`, not `GDPR`). A typo or casing mismatch means entries silently won't appear in `.compliance()` queries. Also make sure every audited table is listed in `auditTables` — mutations on unlisted tables are silently skipped. ## What to capture and why Not every table needs the same level of detail. Focus your audit surface on data that matters to your compliance posture. The snippets below assume an `audit` instance created with `betterAudit()` — see the [full example](#putting-it-together) at the bottom for a complete setup. 
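Since tags are freeform strings, one low-tech safeguard against the casing pitfall above is a shared constant map, so a `"GDPR"` vs `"gdpr"` typo can't slip into an `enrich()` call. This is a team convention sketch, not a library feature — the `TAG` name is an assumption:

```typescript
// Hypothetical convention: centralize tag strings so casing mismatches
// fail in review (or at the type level) instead of silently vanishing
// from .compliance() queries.
export const TAG = {
  gdpr: "gdpr",
  soc2: "soc2",
  hipaa: "hipaa",
  pci: "pci",
} as const;

export type ComplianceTag = (typeof TAG)[keyof typeof TAG];

// Usage: audit.enrich("users", "DELETE", { compliance: [TAG.gdpr, TAG.soc2] });
```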
### Identity and access **Tables:** `users`, `roles`, `permissions`, `sessions` Every compliance framework requires tracking who has access and how that access changes. Capture account creation, role assignments, permission grants/revokes, and session events. ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "roles", "permissions", "sessions"], }); audit.enrich("users", "DELETE", { label: "User account deleted", severity: "critical", compliance: ["gdpr", "soc2", "hipaa"], notify: true, }); audit.enrich("roles", "*", { severity: "high", compliance: ["soc2"], }); audit.enrich("permissions", "*", { severity: "high", compliance: ["soc2"], }); ``` ### Protected health information (HIPAA) **Tables:** `patients`, `medical_records`, `prescriptions`, `appointments` HIPAA requires tracking every access and modification to PHI. Combine compliance tags with field redaction to log the event without storing the sensitive data itself. ```ts title="src/audit.ts" audit.enrich("patients", "*", { severity: "high", compliance: ["hipaa"], redact: ["ssn", "insurance_id"], }); audit.enrich("medical_records", "*", { severity: "critical", compliance: ["hipaa"], redact: ["diagnosis", "treatment_notes"], notify: true, }); ``` ### Financial and payment data (PCI DSS) **Tables:** `payments`, `invoices`, `subscriptions`, `refunds` PCI DSS demands a record of all access to cardholder data environments. Redact card numbers and use compliance tags so you can filter and export PCI-specific entries during assessments. ```ts title="src/audit.ts" audit.enrich("payments", "*", { severity: "high", compliance: ["pci"], redact: ["card_number", "cvv", "card_expiry"], }); audit.enrich("refunds", "*", { severity: "high", compliance: ["pci", "soc2"], }); ``` ### Personal data (GDPR) **Tables:** `users`, `profiles`, `consent_records`, `data_exports` GDPR requires demonstrating lawful processing and honoring data subject rights. 
Audit logs provide evidence that deletions, consent changes, and data exports happened as requested. ```ts title="src/audit.ts" audit.enrich("consent_records", "*", { severity: "high", compliance: ["gdpr"], }); audit.enrich("profiles", "UPDATE", { label: "Personal data updated", compliance: ["gdpr"], description: ({ diff }) => { const fields = diff?.changedFields.join(", ") ?? "unknown fields"; return `Profile fields changed: ${fields}`; }, }); audit.enrich("profiles", "DELETE", { label: "Personal data erased", severity: "critical", compliance: ["gdpr"], notify: true, }); ``` ## Generating evidence When an auditor requests evidence, you need to produce filtered, time-bounded exports that match the scope of the control being assessed. Better Audit's query builder and export engine handle this without custom SQL. ### Compliance-scoped queries Use `.compliance()` to pull entries tagged with a specific framework. Tags use AND semantics, so `.compliance("gdpr", "hipaa")` returns only entries tagged with both: ```ts title="src/routes/audit.ts" // All GDPR-relevant events in the last quarter const gdprQuery = audit.query() .compliance("gdpr") .since("90d"); // SOC 2 critical events this year const soc2Query = audit.query() .compliance("soc2") .severity("critical") .since("1y"); ``` ### Exporting for auditors Generate downloadable reports scoped to a framework, time range, and severity: ```ts title="src/routes/audit.ts" // CSV export for SOC 2 assessment — last 12 months, high and critical only const soc2Evidence = audit.query() .compliance("soc2") .severity("high", "critical") .since("1y"); const response = audit.exportResponse({ format: "csv", query: soc2Evidence, filename: "soc2-evidence-2025", }); ``` For HIPAA audits, export PHI access logs with redacted fields — the `redactedFields` column in each entry proves sensitive data was excluded from snapshots: ```ts title="src/routes/audit.ts" const hipaaEvidence = audit.query() .compliance("hipaa") .since("1y"); const 
result = await audit.export({ format: "json", jsonStyle: "array", query: hipaaEvidence, output: "string", }); ``` ### Retention alignment Compliance frameworks impose minimum retention periods: | Framework | Minimum retention | |---|---| | SOC 2 | 1 year (typical) | | HIPAA | 6 years | | GDPR | As short as necessary (data minimization) | | PCI DSS | 1 year (3 months immediately accessible) | Configure Better Audit's retention policy to match your strictest requirement, then use the CLI purge command with date filters for framework-specific cleanup: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "payments", "patients"], retention: { days: 2190, // ~6 years — HIPAA, the strictest requirement }, }); ``` > **Caution:** GDPR's data minimization principle may conflict with longer retention requirements from other frameworks. Work with your legal team to define per-table retention policies that satisfy all applicable regulations. 
## Putting it together Full example: multi-framework compliance setup A typical compliance setup combines global defaults with table-specific overrides: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "roles", "payments", "patients", "consent_records"], retention: { days: 2190 }, }); // Global baseline — every entry gets SOC 2 tagging and low severity audit.enrich("*", "*", { severity: "low", compliance: ["soc2"], }); // Destructive operations are high severity by default audit.enrich("*", "DELETE", { severity: "high", }); // Identity tables — SOC 2 cares about access control changes audit.enrich("roles", "*", { severity: "high" }); audit.enrich("users", "DELETE", { severity: "critical", compliance: ["gdpr"], notify: true, }); // Healthcare tables — HIPAA with redaction audit.enrich("patients", "*", { compliance: ["hipaa"], redact: ["ssn", "insurance_id"], }); // Payment tables — PCI DSS with redaction audit.enrich("payments", "*", { severity: "high", compliance: ["pci"], redact: ["card_number", "cvv"], }); // Consent — GDPR tracking audit.enrich("consent_records", "*", { severity: "high", compliance: ["gdpr"], }); ``` With this configuration, a `users DELETE` entry resolves to `severity: "critical"`, `compliance: ["soc2", "gdpr"]`, `notify: true` — ready for both SOC 2 and GDPR evidence pulls. ## Next steps --- ## Retention Policies Configure how long audit entries are kept, schedule automated purges, archive entries before deletion, and implement legal holds — all covered on this page. ## Configuring retention windows Set a retention window when creating your audit instance. 
Entries older than the configured number of days become eligible for purging: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders", "payments"], retention: { days: 365 }, }); ``` | Option | Type | Default | Description | |---|---|---|---| | `days` | `number` | — | **Required.** Purge entries older than this many days. Must be a positive integer. | | `tables` | `string[]` | all tables | When set, only purge entries for these specific tables. | ### Table-scoped retention Different tables may have different compliance requirements. Use `tables` to scope the retention policy to specific tables: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders", "sessions", "api_requests"], retention: { days: 90, tables: ["sessions", "api_requests"], }, }); ``` Entries for tables **not** in the list are kept indefinitely. To purge all audited tables, omit `tables`. ### Common retention windows | Regulation | Typical requirement | Suggested `days` | |---|---|---| | SOC 2 | 1 year | `365` | | GDPR | As short as possible (data minimization) | `90`–`365` | | HIPAA | 6 years | `2190` | | PCI DSS | 1 year (3 months immediately accessible) | `365` | | Internal ops | Varies | `30`–`90` | > **Note:** These are common starting points, not legal advice. Verify retention requirements with your compliance team before deploying to production. ## Automated purge scheduling The retention config declares the rules. To actually delete old entries, run the purge command via the CLI: ```bash # Preview what would be deleted (safe — no changes made) npx @usebetterdev/audit-cli purge --dry-run # Delete entries older than the configured retention period npx @usebetterdev/audit-cli purge --yes ``` ### Purge CLI reference | Flag | Default | Description | |------|---------|-------------| | `--database-url <url>` | `$DATABASE_URL` | Connection string to your database.
| | `--since <date>` | from config | Override the cutoff for this run. Accepts ISO dates (`2025-01-01`) or duration shorthands (`90d`, `4w`, `3m`, `1y`). | | `--batch-size <n>` | `1000` | Rows per DELETE batch. | | `--dry-run` | `false` | Print the number of eligible rows without deleting anything. | | `--yes` | `false` | Skip the confirmation prompt. Required for non-interactive use. | > **Caution:** `purge` permanently deletes rows. Always run `--dry-run` first to verify the row count looks correct. Pass `--yes` only in automated pipelines. ### Setting up a cron job Schedule the purge command to run automatically. The `--yes` flag is required for non-interactive execution: **GitHub Actions:** ```yaml title=".github/workflows/audit-purge.yml" name: Audit log purge on: schedule: - cron: "0 3 * * *" # Daily at 3 AM UTC jobs: purge: runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: pnpm/action-setup@v4 - run: pnpm install --frozen-lockfile - run: npx @usebetterdev/audit-cli purge --yes env: DATABASE_URL: ${{ secrets.DATABASE_URL }} ``` **System cron:** ```bash title="crontab -e" # Daily at 3 AM — purge audit entries older than the configured retention period 0 3 * * * cd /path/to/project && DATABASE_URL="postgresql://..." npx @usebetterdev/audit-cli purge --yes >> /var/log/audit-purge.log 2>&1 ``` > **Note:** Run `--dry-run` manually before deploying a cron schedule. This confirms your retention config resolves correctly and the row count is what you expect. ### How purge works The CLI uses **batched deletes** (1,000 rows per batch by default) to avoid holding long row-level locks on large tables. Each batch runs a `DELETE … WHERE id IN (SELECT id … LIMIT n)` query, which works across PostgreSQL, MySQL, and SQLite. Progress is reported to stderr every 10 batches. Priority for resolving the cutoff date: 1. `--since` flag (if provided) 2. `retention.days` from your `better.config` file If neither is set, the command exits with: `No retention policy configured.
Pass --since or set audit.retention.days in your better.config file.` ## Archiving strategies For regulations that require long-term access to historical entries, archive before purging. ### Export before purge Use the CLI export command to save entries before they are deleted: ```bash # Export entries older than 1 year to a JSON file, then purge them npx @usebetterdev/audit-cli export \ --since 365d \ --format json \ -o archive-older-than-365d.json \ --database-url $DATABASE_URL npx @usebetterdev/audit-cli purge --since 365d --yes --database-url $DATABASE_URL ``` ### Archive-then-purge script Combine export and purge into a single script for your CI pipeline: ```bash title="scripts/archive-and-purge.sh" #!/usr/bin/env bash set -euo pipefail ARCHIVE_DIR="./audit-archives" DATE=$(date -u +%Y-%m-%d) RETENTION_DAYS=365 # GNU date || macOS date CUTOFF=$(date -u -d "-${RETENTION_DAYS} days" +%Y-%m-%dT00:00:00Z 2>/dev/null \ || date -u -v-${RETENTION_DAYS}d +%Y-%m-%dT00:00:00Z) mkdir -p "$ARCHIVE_DIR" echo "Exporting entries older than ${RETENTION_DAYS} days..." npx @usebetterdev/audit-cli export \ --to "$CUTOFF" \ --format json \ -o "${ARCHIVE_DIR}/audit-archive-${DATE}.json" \ --database-url "$DATABASE_URL" echo "Purging exported entries..." npx @usebetterdev/audit-cli purge \ --since "${RETENTION_DAYS}d" \ --yes \ --database-url "$DATABASE_URL" echo "Done. Archive saved to ${ARCHIVE_DIR}/audit-archive-${DATE}.json" ``` > **Sensitive data:** Archive files contain actor IDs and before/after record snapshots. Store them with appropriate access controls — encrypted object storage (S3, GCS) with restricted IAM policies. Do not commit archives to version control. 
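Archives that serve as audit evidence also benefit from an integrity record. A generic sketch — not a Better Audit feature — that computes a SHA-256 digest you can store alongside the archive:

```ts
import { createHash } from "node:crypto";

// SHA-256 digest of the archive bytes. Recording the digest next to the file
// makes later tampering or truncation detectable.
function archiveDigest(bytes: Buffer | string): string {
  return createHash("sha256").update(bytes).digest("hex");
}
```

For a file on disk, pass `fs.readFileSync(path)` and keep the resulting hex string with your evidence log.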
### Cold storage recommendations | Storage tier | Use case | Example | |---|---|---| | Hot (database) | Active queries, dashboards, real-time alerts | Your primary database | | Warm (object storage) | Compliance audits, investigation lookbacks | S3 Standard, GCS Standard | | Cold (glacier) | Long-term legal retention | S3 Glacier, GCS Archive | Move archives to progressively cheaper storage tiers as they age. Most compliance audits only need warm-tier access. ## Legal hold patterns A legal hold suspends normal retention rules for entries relevant to an ongoing investigation or litigation. While Better Audit does not enforce legal holds at the database level, you can implement them using compliance tags and scoped retention. ### Tag entries for legal hold Use enrichment to tag entries that should be preserved: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders", "payments"], retention: { days: 90, tables: ["orders"], // Only auto-purge orders }, }); // Tag sensitive operations for legal preservation audit.enrich("users", "DELETE", { label: "User account deleted", severity: "critical", compliance: ["gdpr", "legal-hold"], }); audit.enrich("payments", "*", { severity: "high", compliance: ["pci", "legal-hold"], }); ``` ### Exclude held entries from purge Scope your retention policy to only purge tables that are not under legal hold. Tables not listed in `retention.tables` are kept indefinitely: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders", "payments", "sessions", "api_requests"], retention: { days: 90, tables: ["sessions", "api_requests"], // Only purge these tables }, // users, orders, payments → kept indefinitely (legal hold) }); ``` > **Danger:** The `--since` CLI flag bypasses the `tables` filter from your config — it purges **all** tables.
When legal holds are active, always rely on the config-defined retention policy and do not pass `--since` to the purge command. ### Export held entries for legal review Use compliance tag filtering to extract entries relevant to a legal matter: ```bash # Export all entries tagged with legal-hold npx @usebetterdev/audit-cli export \ --compliance legal-hold \ --format json \ -o legal-hold-export.json \ --database-url $DATABASE_URL # Export legal-hold entries for a specific actor npx @usebetterdev/audit-cli export \ --compliance legal-hold \ --actor user-42 \ --format json \ -o legal-hold-user-42.json \ --database-url $DATABASE_URL ``` ### Lifting a legal hold When the hold is lifted, remove the table exclusion from your retention config so those entries become eligible for normal purging again: 1. Confirm with your legal team that the hold can be released. 2. Update your retention config to include the previously held tables: ```ts title="src/audit.ts" const audit = betterAudit({ database: drizzleAuditAdapter(db), auditTables: ["users", "orders", "payments", "sessions", "api_requests"], retention: { days: 90, // tables omitted → all audited tables are now eligible for purging }, }); ``` 3. Run a dry-run to verify what will be purged: ```bash npx @usebetterdev/audit-cli purge --dry-run ``` 4. Export a final archive of the held entries before purging: ```bash npx @usebetterdev/audit-cli export \ --compliance legal-hold \ --format json \ -o legal-hold-final-archive.json \ --database-url $DATABASE_URL ``` 5. Run the purge: ```bash npx @usebetterdev/audit-cli purge --yes ``` ## Next steps --- ## How Audit Works UseBetter Audit captures every INSERT, UPDATE, and DELETE transparently — without you adding `captureLog()` calls to each route or service. This page walks through exactly what happens on every mutation so you understand what is recorded, how actor identity is tracked, what enrichment layers on, and what guarantees you get. 
If you just want to get started, skip to [Quick Start](https://docs.usebetter.dev/audit/quick-start/). Come back here when you want to understand what happens beneath the surface. ## The problem with manual audit logging The most common approach to audit logging is to call a logging function manually after each mutation: ```ts await db.update(usersTable).set({ name }).where(eq(usersTable.id, id)); await audit.log({ table: "users", operation: "UPDATE", actorId: req.user.id }); ``` It works, but it has the same flaw as WHERE-clause multi-tenancy: **a single missed call leaves a gap in your audit trail**. The more mutations your application has, the more likely someone will forget one — especially across teams, across services, and over time. UseBetter Audit moves capture out of your application code and into the ORM layer. Every mutation goes through a proxy or extension that intercepts it automatically. Even if a route handler does not call anything explicitly, the audit entry is written. ## What happens on every mutation Three layers collaborate on every INSERT, UPDATE, or DELETE: 1. **ORM proxy / extension** — wraps your Drizzle or Prisma client and intercepts every write before it reaches the database. 2. **AsyncLocalStorage** — carries the actor identity (set by framework middleware at the top of the request) through every `await` boundary without parameter passing. 3. **Adapter write** — the adapter builds a structured `audit_log` row — including enrichment rules — and writes it to your database. ## Step by step ### 1.
ORM proxy intercepts the mutation When you wrap your ORM client with `withAuditProxy` (Drizzle) or `withAuditExtension` (Prisma), every subsequent mutation goes through a proxy layer before it reaches the database driver. **Drizzle** uses a JavaScript `Proxy` object to intercept `db.insert()`, `db.update()`, and `db.delete()` calls. The proxy executes the original query and, if the table is in `auditTables`, immediately runs capture with the result. **Prisma** uses a Prisma Client Extension (`$extends`) to add `beforeQuery` and `afterQuery` hooks on write operations. Same result, different mechanism — the adapter abstracts this away. Neither approach requires you to modify your route handlers. Wrap once at setup time; capture happens automatically from that point forward. ### 2. Actor pulled from AsyncLocalStorage Node.js `AsyncLocalStorage` propagates values through async call chains without explicit parameter passing. The framework middleware sets up a context scope at the very beginning of each request: ``` Request arrives └─ betterAuditHono() middleware └─ Extracts actorId from Authorization: Bearer └─ Creates AuditContext { actorId: "user-42" } └─ Stores context in AsyncLocalStorage └─ Route handler runs └─ auditedDb.insert(usersTable).values(body) └─ Proxy intercepts └─ AsyncLocalStorage.getStore() → { actorId: "user-42" } └─ captureLog({ actorId: "user-42", ... }) ``` The context is scoped to the request. When the request ends, the scope is cleaned up. Concurrent requests each have their own scope — context never leaks between them. ### 3. Before/after snapshot captured For **INSERT**, the proxy reads the new row from the query result — this is `afterData`. `beforeData` is `null`. For **DELETE**, the proxy issues a `SELECT` before the delete executes to capture the current state of the row. This becomes `beforeData`; `afterData` is `null`. 
For **UPDATE**, the proxy reads both the previous state (pre-query SELECT) and the new state (post-query result or re-fetch). Both appear in the entry so reviewers can see exactly what changed, field by field. The snapshot is always the full row — not just the changed columns. Every entry is self-contained: it tells the complete story of the record at that point in time. ### 4. Enrichment rules applied Before the entry is written, the adapter checks the enrichment registry. Rules registered with `audit.enrich()` are matched by table name and operation: ```ts audit.enrich("users", "INSERT", { label: "New user registered", severity: "low", compliance: ["soc2"], }); audit.enrich("users", "DELETE", { label: "User account deleted", severity: "critical", compliance: ["gdpr", "soc2"], redact: ["email", "phone"], }); ``` Enrichment fields are merged into the entry: `label`, `severity`, `compliance`, and any redacted fields are removed from `beforeData` / `afterData` before storage. Enrichment is declarative and registered once at startup. Your route handlers do not know it is happening. ### 5. Audit entry written to `audit_logs` After enrichment, the adapter inserts a row into `audit_logs`: ```ts { id: string; // UUID timestamp: Date; // server clock at capture time tableName: string; // e.g. 
"users" operation: "INSERT" | "UPDATE" | "DELETE"; recordId: string; // primary key of the mutated row actorId: string | null; // from AsyncLocalStorage, or null if extraction failed beforeData: Record | null; // redacted fields removed afterData: Record | null; // redacted fields removed label: string | undefined; severity: "low" | "medium" | "high" | "critical" | undefined; compliance: string[] | undefined; } ``` You can query this table directly or use `audit.query()`: ```ts const result = await audit.query() .resource("users") .actor("user-42") .since("24h") .list(); ``` ## Why you can trust this ### A missing `captureLog()` call cannot create a gap Capture is delegated to the proxy layer, not your application code. There is no `captureLog()` to forget. Every write that goes through `auditedDb` is captured. ### Request isolation is a Node.js guarantee `AsyncLocalStorage` is a Node.js built-in. Its isolation guarantee is the same one that makes session stores and request-scoped loggers safe under high concurrency. One request's `actorId` is never visible to another request. ### Fail-open does not mean silent failure If actor extraction fails, the request proceeds and the entry is still written — with `actorId: null`. This is an explicit signal that attribution was unavailable, not that capture was skipped. To fail-closed, configure `onError` on the middleware. ### Enrichment is append-only, not a filter Enrichment rules add fields to the stored entry; they never suppress or delay the write. The `redact` option removes sensitive field values from `beforeData` / `afterData`, but the row itself is always written. You cannot accidentally configure enrichment in a way that drops log entries. 
## Summary | What | How | Why it is reliable | |---|---|---| | Automatic capture | ORM proxy / Prisma extension intercepts all writes | No `captureLog()` to forget | | Actor attribution | `AsyncLocalStorage` propagates actor from middleware | No parameter passing; concurrent requests never share context | | Before/after snapshots | Pre-query SELECT (UPDATE/DELETE) + post-query result (INSERT/UPDATE) | Full row state at each point in time | | Fail-open | Missing actor → `actorId: null`, request still proceeds | Audit trail has no gaps | | Enrichment | Declarative rules registered once at startup | Route handlers never need to know | | Storage | Your own database | No external service; query with your existing tooling | ## Next steps - [Actor Context](https://docs.usebetter.dev/audit/guides/actor-context/) — extractors, `mergeAuditContext()`, and background job contexts - [Enrichment](https://docs.usebetter.dev/audit/guides/enrichment/) — labels, severity, compliance tags, and field redaction in detail - [Adapters](https://docs.usebetter.dev/audit/guides/adapters/) — ORM adapter reference and error handling - [Quick Start](https://docs.usebetter.dev/audit/quick-start/) — working example with ORM + framework middleware in one page --- ## Architecture UseBetter Audit's automatic capture rests on five subsystems. This reference covers each one in engineering depth: the `captureLog()` pipeline, the Drizzle ORM proxy, AsyncLocalStorage (ALS) bridging per framework, the enrichment registry's tier resolution, and the storage adapter contract. > **Note:** For a narrative walkthrough with a live playground, see [How Audit Works](https://docs.usebetter.dev/audit/internals/how-audit-works/). ## Overview — subsystem map *(Diagram: framework adapters populate the ALS context consumed by the capture engine; the capture engine calls `registry.resolve()` on the enrichment registry and `database.writeLog()` on the storage adapter; the ORM proxy implements the adapter contract.)* Dependency direction: framework adapters and ORM proxy depend on core.
Core has zero runtime dependencies — all SQL lives in adapter packages. ## Auto-capture engine ### `betterAudit()` factory `betterAudit()` in `core/src/better-audit.ts` is the top-level factory. On each call it: 1. Validates the config — checks retention policy consistency, warns when `asyncWrite: true` is set without `onError`. 2. Creates a `Set` of audited table names for O(1) membership checks. 3. Initialises a new `EnrichmentRegistry`. 4. Returns a `BetterAuditInstance` object with all methods closed over the local config. There is no shared global state — two `betterAudit()` calls produce fully independent instances. ### `captureLog()` pipeline Every mutation flows through `captureLog()`. The pipeline executes 10 steps in order: | Step | Action | |------|--------| | 1 | **Early return** — skip if `tableName` not in `auditTables`; throw if `recordId` is empty | | 2 | **Normalize** — call `normalizeInput()` to enforce valid before/after data per operation | | 3 | **Get context** — read the current `AuditContext` from ALS via `getAuditContext()` | | 4 | **Merge** — coalesce per-call fields with ALS context fields; per-call values win on conflict | | 5 | **Assemble** — build the `AuditLog` with a UUID, server timestamp, and all resolved fields | | 6 | **Diff** — for UPDATE only, call `computeDiff(before, after)` to produce `changedFields` | | 7 | **Enrich** — resolve the enrichment registry, then call `applyEnrichment()` (redact → describe → scalars) | | 8 | **beforeLog hooks** — run each hook sequentially; hooks may mutate the log; an error aborts the write | | 9 | **Write** — call `database.writeLog(log)` synchronously, or fire-and-forget when `asyncWrite` is true. 
Steps 9 and 10 share `writeAndRunAfterHooks()`; when `asyncWrite` is true it runs without `await` and errors are routed to `onError` or `console.error` | | 10 | **afterLog hooks** — run each hook sequentially with a `Readonly<AuditLog>` snapshot | ### `computeDiff()` — `core/src/diff.ts` `computeDiff(before, after)` compares two row snapshots and returns the names of fields that changed: - Iterates the union of all keys from both objects. - Compares values with `JSON.stringify` — order-sensitive for objects and arrays. - A field present in only one snapshot counts as changed. - Non-serializable values (e.g. circular references) are always treated as changed; the `catch` branch handles the `JSON.stringify` throw. The result is `{ changedFields: string[] }` stored as `AuditLog.diff`. Diff is computed only for UPDATE operations. ### `normalizeInput()` — `core/src/normalize.ts` `normalizeInput()` enforces which data fields are meaningful per operation before the log is assembled: | Operation | `beforeData` | `afterData` | |-----------|-------------|-------------| | `INSERT` | absent — field omitted from log | kept | | `DELETE` | kept | absent — field omitted from log | | `UPDATE` | kept | kept | Even if the caller passes both fields, a DELETE entry will never have `afterData` in the stored log. Fields are omitted (absent from the object), not set to `null`.
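The diff semantics above can be reproduced in a few lines. A standalone sketch of the documented behavior — `computeRowDiff` is a hypothetical name, not the library export:

```ts
type Row = Record<string, unknown>;

// Union of keys, JSON.stringify comparison, non-serializable values ⇒ changed
function computeRowDiff(before: Row, after: Row): { changedFields: string[] } {
  const keys = new Set([...Object.keys(before), ...Object.keys(after)]);
  const changedFields: string[] = [];
  for (const key of keys) {
    if (!(key in before) || !(key in after)) {
      // Present in only one snapshot — counts as changed
      changedFields.push(key);
      continue;
    }
    try {
      if (JSON.stringify(before[key]) !== JSON.stringify(after[key])) {
        changedFields.push(key);
      }
    } catch {
      // Circular references etc. cannot be serialized — treat as changed
      changedFields.push(key);
    }
  }
  return { changedFields };
}
```

Note the order-sensitivity: `{ a: 1, b: 2 }` and `{ b: 2, a: 1 }` serialize differently and would count as changed, matching the `JSON.stringify` comparison described above.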
## AsyncLocalStorage context flow ### Core primitives — `core/src/context.ts` ```ts title="packages/audit/core/src/context.ts" // Read the current AuditContext, or undefined when outside a request scope getAuditContext(): AuditContext | undefined // Run fn inside a new ALS scope with the given context runWithAuditContext<T>(context: AuditContext, fn: () => T): Promise<T> // Merge additional fields into the current scope (or create one) and run fn // Fields in override take precedence over the existing context mergeAuditContext<T>(override: Partial<AuditContext>, fn: () => T): Promise<T> ``` A single `AsyncLocalStorage` instance is created at module load time. `storage.run(context, fn)` establishes a scope for the duration of `fn`; nested calls to `runWithAuditContext` are safe — each scope is fully isolated from concurrent sibling scopes. `mergeAuditContext` reads the current store and spreads the override, so it works both inside and outside an existing scope. ### Per-framework bridging **Hono** — `context.req.raw` is a standard Web `Request`. The middleware passes it directly to the shared `handleMiddleware()` from core: ```ts title="packages/audit/hono/src/index.ts" await handleMiddleware(extractor, context.req.raw, next, handlerOptions); ``` **Express** — Express uses Node.js `IncomingMessage`, which is not a Web `Request`. The adapter bridges it with `toWebRequest(req)`: - Normalises `string | string[] | number | undefined` header values to plain `string`. - Reconstructs a URL from `req.originalUrl ?? req.url` and `req.hostname`. - Constructs `new Request(url, { method, headers })`. The ALS scope must stay open until after the route handler finishes.
The adapter wraps `next()` in a Promise that resolves only when `response.on('finish')` or `response.on('close')` fires: ```ts title="packages/audit/express/src/index.ts" const nextWrapper = () => new Promise((resolve) => { response.on("finish", resolve); response.on("close", resolve); next(); }); await handleMiddleware(extractor, webRequest, nextWrapper, handlerOptions); ``` **Next.js** — Next.js middleware runs in the Edge Runtime, a separate execution context from route handlers. ALS context set in middleware does not propagate into route handlers. The adapter uses three integration points: 1. **Edge middleware (`createAuditMiddleware`)** — extracts the actor and forwards it as the `x-better-audit-actor-id` request header. The header is always overwritten — set to an empty string when extraction fails — to prevent clients from spoofing the actor id. 2. **Route Handlers (`withAuditRoute`)** — wraps the route handler, extracts the actor from the `NextRequest` (JWT or forwarded header), and calls `runWithAuditContext()` for the duration of the handler. 3. **Server Actions (`withAudit`)** — server actions have no `Request` object. The wrapper reads headers via `next/headers`, constructs a synthetic `new Request("http://localhost", { headers })`, runs the extractor against it, then wraps the action in `runWithAuditContext()`. If `headers()` throws (outside a Next.js request context), the action runs without audit context. > **Caution:** When `withAuditRoute` or `withAudit` use `fromHeader(AUDIT_ACTOR_HEADER)` as the extractor, they trust the forwarded header value. Deploy `createAuditMiddleware` to ensure the header is always overwritten by server-side extraction before it reaches route handlers. **Fail-open guarantee.** All framework adapters route extractor errors through `safeExtract()`, which catches any thrown error and returns `undefined`. If extraction fails, the request proceeds without an ALS context. The log entry is still written — with `actorId` absent. 
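The fail-open wrapper amounts to a few lines. A sketch of the documented behavior (not the package source — `safeExtractSketch` and `ActorContext` are illustrative names):

```ts
type ActorContext = { actorId: string };
type Extractor = (request: Request) => ActorContext | Promise<ActorContext>;

// Any throw from the extractor becomes `undefined`: the request proceeds,
// and the audit entry is written without an actorId.
async function safeExtractSketch(
  extractor: Extractor,
  request: Request,
): Promise<ActorContext | undefined> {
  try {
    return await extractor(request);
  } catch {
    return undefined;
  }
}
```

Because both sync throws and rejected promises funnel through the same `catch`, a misbehaving JWT parser can degrade attribution but never take down the request path.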
## Enrichment registry ### `EnrichmentRegistry` — `core/src/enrichment-registry.ts` The registry is a `Map` keyed by `"table:OPERATION"`. Multiple rules may be registered for the same key — they accumulate in registration order. ### Four specificity tiers When `registry.resolve(table, operation)` is called, it collects configs from four keys in ascending specificity order: | Tier | Key pattern | Meaning | |------|-------------|---------| | 1 (lowest) | `*:*` | Any table, any operation | | 2 | `*:OP` | Any table, specific operation | | 3 | `table:*` | Specific table, any operation | | 4 (highest) | `table:OP` | Exact table + operation match | All matching configs from all tiers are collected into a flat list and passed to `mergeEnrichmentConfigs()`. Within each tier, configs are ordered by registration sequence. ### Merge strategy | Field | Strategy | |-------|----------| | `label`, `severity`, `notify`, `description` | Last-write-wins — more specific tier overrides less specific | | `compliance` array | Concatenate across all tiers, then deduplicate via `Set` | | `redact` array | Concatenate across all tiers, then deduplicate via `Set` | | `include` array | Concatenate across all tiers, then deduplicate via `Set` | > **Caution:** `redact` and `include` are mutually exclusive per registration. If the merged result contains both (from registrations at different tiers), `resolve()` throws at call time with a descriptive message. ### Application order — `applyEnrichment()` `applyEnrichment(log, resolved)` applies the merged enrichment to the log in a fixed order: 1. **Redact** — remove listed fields (`redact`) or remove all unlisted fields (`include`) from `beforeData`, `afterData`, and `diff.changedFields`. Removed field names are recorded in `log.redactedFields` (sorted). 2. **Description** — call the `description(ctx)` function with structurally-cloned, post-redaction snapshots. The description function sees only the data that will be stored. 
If the function throws, the description is left unset and the log is written anyway. 3. **Scalars** — apply `label`, `severity`, `notify` only when not already set on the log (enrichment fills gaps; per-call values take precedence). `compliance` is concatenated and deduplicated with any existing value. Enrichment never suppresses the write — it only adds or transforms fields. ### `EnrichmentConfig` field reference | Field | Type | Description | |-------|------|-------------| | `label` | `string` | Human-readable event label | | `description` | `(ctx: EnrichmentDescriptionContext) => string` | Dynamic description computed after redaction | | `severity` | `"low" \| "medium" \| "high" \| "critical"` | Severity classification | | `compliance` | `string[]` | Compliance framework tags (e.g. `["soc2", "gdpr"]`) | | `notify` | `boolean` | Mark for notification routing | | `redact` | `string[]` | Top-level field names to remove from data snapshots | | `include` | `string[]` | Top-level field names to keep; all others removed | `redact` and `include` match top-level keys only. To redact a nested field like `profile.ssn`, list the parent key `"profile"`. Redaction does not apply to `metadata`. ## Storage abstraction ### `AuditDatabaseAdapter` — `core/src/types.ts` ```ts title="packages/audit/core/src/types.ts" interface AuditDatabaseAdapter { writeLog(log: AuditLog): Promise<void>; // required queryLogs?(spec: AuditQuerySpec): Promise<AuditLog[]>; getLogById?(id: string): Promise<AuditLog | null>; getStats?(options?: { since?: Date }): Promise<AuditStats>; purgeLogs?(options: { before: Date; tableName?: string }): Promise<{ deletedCount: number }>; } ``` Only `writeLog` is required. All other methods are optional — the core engine checks for their presence and throws a descriptive error when a missing method is called.
| Method | Required | Used by |
|--------|----------|---------|
| `writeLog` | Yes | `captureLog()` pipeline |
| `queryLogs` | No | `audit.query()`, `audit.export()`, `audit.exportResponse()` |
| `getLogById` | No | Console dashboard |
| `getStats` | No | Console dashboard |
| `purgeLogs` | No | `audit-cli purge`, retention scheduler |

### Sync vs async write modes

By default, `captureLog()` awaits `database.writeLog(log)` before returning. The mutation does not complete until the audit entry is durably written.

When `asyncWrite: true` is configured (at the instance level or per-call via `CaptureLogInput.asyncWrite`), `writeLog()` is called without `await`. The mutation returns immediately and the write completes in the background. Errors are caught and routed to `onError`, or logged to `console.error` if `onError` is not set.

> **Caution:** `asyncWrite: true` means write failures are invisible to the caller. Always set `onError` to route failures to your error tracking system.

### `beforeLog` / `afterLog` hooks

Hooks are stored as ordered arrays and run sequentially:

- **`beforeLog`** — receives the fully assembled, enriched, post-redaction log. May mutate the log (e.g. add a custom field). Errors are routed to `onError`, and a throwing hook aborts the write (see caution below).
- **`afterLog`** — receives `Readonly<AuditLog>` after the write completes. Mutations have no effect. Errors are caught and routed to `onError`.

> **Caution:** If a `beforeLog` hook throws, the write is aborted entirely — no audit entry is stored for that mutation. Keep hooks lightweight and wrap any external calls in `try/catch` if they must be non-blocking.

`onBeforeLog(hook)` and `onAfterLog(hook)` both return a dispose function to unregister the hook.
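The ordered-array-with-dispose pattern described above can be sketched in isolation. This is an illustrative model of the mechanism, not the library's internals:

```typescript
type Hook<T> = (log: T) => void;

// Minimal hook registry: hooks run sequentially in registration order,
// and register() returns a dispose function that removes the hook.
function createHookRegistry<T>() {
  const hooks: Hook<T>[] = [];
  return {
    register(hook: Hook<T>): () => void {
      hooks.push(hook);
      return () => {
        const i = hooks.indexOf(hook);
        if (i !== -1) hooks.splice(i, 1);
      };
    },
    run(log: T, onError?: (e: unknown) => void) {
      // Iterate over a copy so a hook disposing itself mid-run is safe.
      for (const hook of [...hooks]) {
        try {
          hook(log);
        } catch (e) {
          // afterLog-style handling: catch and route, keep running.
          // (A real beforeLog hook that throws would abort the write instead.)
          onError?.(e);
        }
      }
    },
  };
}
```

The returned disposer is why `onBeforeLog`/`onAfterLog` callers can cleanly unregister without needing a hook identifier.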
### Stats implementation (Drizzle)

`drizzleAuditAdapter.getStats()` in `drizzle/src/adapter.ts` runs 6 aggregation queries in parallel via `Promise.all()`:

| Query | Computes |
|-------|---------|
| 1 | `totalLogs` (COUNT), `tablesAudited` (COUNT DISTINCT) |
| 2 | `eventsPerDay` grouped by `date_trunc('day', timestamp)`, up to 365 entries |
| 3 | `topActors` grouped by `actorId`, top 10, NULL actors excluded |
| 4 | `topTables` grouped by `tableName`, top 10 |
| 5 | `operationBreakdown` grouped by `operation` |
| 6 | `severityBreakdown` grouped by `severity`, NULL severity excluded |

Results are assembled by `assembleStats()` from core into the `AuditStats` shape. All SQL lives in adapter packages — the zero-dependency core never imports a database driver or ORM.

---

# Console

> Optional self-hosted admin dashboard backend for UseBetterDev products. No data leaves your server.

## Introduction

UseBetter Console is a **completely optional** self-hosted backend that connects your app to the Console UI at [`console.usebetter.dev`](https://console.usebetter.dev). **No data leaves your infrastructure** — the UI runs in your browser and all API calls go directly to your server via `/.well-known/better/*` endpoints.

## Key features

- **Optional** — not required for any UseBetterDev product. Add it when you want a visual admin dashboard.
- **No data leaves your server** — the hosted UI at `console.usebetter.dev` is a static frontend. It makes API calls to YOUR server only.
- **5-minute setup** — one middleware, one CLI command, and you're connected.
- **Multiple auth methods** — auto-approve for development, magic link for production.
- **Permission-based access** — three levels (`read`, `write`, `admin`) with a hierarchical model.
- **Product endpoint registration** — UseBetterDev products (like [UseBetter Tenant](https://docs.usebetter.dev/tenant/getting-started/)) register endpoints that the Console UI discovers automatically.
- **CORS-restricted by default** — only `console.usebetter.dev` can call your console endpoints. Add custom origins if needed. ## How it works 1. **Add the middleware** — mount `createConsoleMiddleware()` in your Hono app. It intercepts `/.well-known/better/*` requests. 2. **Set the token hash** — run `npx @usebetterdev/console-cli init` to generate a connection token. Store the hash in your environment. 3. **Open the Console UI** — visit [`console.usebetter.dev`](https://console.usebetter.dev) and enter your server URL (e.g., `https://myapp.com`). 4. **Authenticate** — the UI calls your server's session endpoints. In development, auto-approve issues a token instantly. In production, magic link sends a 6-character code to your email. 5. **Manage via registered endpoints** — once authenticated, the UI discovers your registered products and presents their data through typed endpoints. ## Architecture | Layer | Package | Role | | ------------------- | ------------------------------- | ------------------------------------------------------------- | | **Core** | `@usebetterdev/console` | Config, session management, routing, CORS. Zero runtime deps. | | **Drizzle adapter** | `@usebetterdev/console/drizzle` | Session and magic link storage via Drizzle ORM. | | **Hono middleware** | `@usebetterdev/console/hono` | Thin adapter between Hono Request/Response and Console. | | **CLI** | `@usebetterdev/console-cli` | Token generation, migration SQL, setup verification. | You install the main package (`@usebetterdev/console`) and import adapters via subpath exports. The CLI is a separate package used via `npx`. 
## Next steps - [Installation](https://docs.usebetter.dev/console/installation/) — install the package and its peer dependencies - [Quick Start](https://docs.usebetter.dev/console/quick-start/) — connect your app to UseBetter Console in 5 minutes --- ## Installation ## Install the package **npm:** ```bash npm install @usebetterdev/console ``` **pnpm:** ```bash pnpm add @usebetterdev/console ``` **yarn:** ```bash yarn add @usebetterdev/console ``` **bun:** ```bash bun add @usebetterdev/console ``` The main package (`@usebetterdev/console`) includes the core library and all adapters via subpath exports. The CLI (`@usebetterdev/console-cli`) is used via `npx` — no installation required. ## Peer dependencies ### Hono (required) UseBetter Console uses Hono middleware to intercept `/.well-known/better/*` requests: ```bash npm install hono ``` ### Drizzle + pg (optional) Only needed if you use the **magic link** authentication flow in production. Auto-approve mode is stateless and needs no database at all. ```bash npm install drizzle-orm pg ``` > **Note:** If you only use `autoApprove` (development mode), you don't need Drizzle, pg, or any database tables. Auto-approve issues a stateless JWT immediately. 
## Requirements

- **Node.js 22+** (also supports Bun and Deno)
- **PostgreSQL 13+** (only needed for magic link auth — auto-approve needs no database)
- **TypeScript 5+** (recommended, but not required)

## Subpath exports

All adapters are available through the main package via subpath exports:

| Import | Contents |
| --------------------------------- | ----------------------------------------------------------- |
| `@usebetterdev/console` | Core: `betterConsole`, types, config |
| `@usebetterdev/console/drizzle` | `drizzleConsoleAdapter`, schema tables |
| `@usebetterdev/console/hono` | `createConsoleMiddleware` |

## Next steps

- [Quick Start](https://docs.usebetter.dev/console/quick-start/) — connect your app to UseBetter Console in 5 minutes

---

## Quick Start

This guide walks you through adding UseBetter Console to an existing Hono application. Choose the flow that matches your environment — development (instant setup, no database) or production (magic link auth with Drizzle).

**Development (auto-approve):**

Auto-approve issues a stateless JWT immediately — no database tables, no email flow. Use this for local development.

1. **Generate a connection token**

```bash
npx @usebetterdev/console-cli init --email admin@localhost -n
```

This prints two environment variables. Copy `BETTER_CONSOLE_TOKEN_HASH` — you'll need it in the next step.

2. **Add to your `.env`**

```bash title=".env"
BETTER_CONSOLE_TOKEN_HASH=sha256:e5f6a7b8...
```

3. **Add the middleware**

```ts title="src/index.ts"
import { Hono } from "hono";
import { betterConsole } from "@usebetterdev/console";
import { createConsoleMiddleware } from "@usebetterdev/console/hono";

const app = new Hono();

const consoleInstance = betterConsole({
  connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!,
  sessions: { autoApprove: process.env.NODE_ENV === "development" },
});

app.use("*", createConsoleMiddleware(consoleInstance));

// Your existing routes below...
app.get("/", (c) => c.text("Hello!"));

export default app;
```

4. **Verify the health endpoint**

```bash
curl http://localhost:3000/.well-known/better/console/health
```

You should see `{"status":"ok"}`.

5. **Open the Console UI**

Visit [`console.usebetter.dev`](https://console.usebetter.dev), enter your server URL (e.g., `http://localhost:3000`), and authenticate. Auto-approve grants a session token instantly.

> **Caution:** Auto-approve is blocked outside of development. It throws `ConsoleAutoApproveInProductionError` when `NODE_ENV` is not `"development"` or `"test"`. Use magic link auth for production — see the production flow below.

**Production (magic link):**

Magic link uses a 3-step handshake with email verification. Requires Drizzle and a PostgreSQL database.

1. **Generate a connection token**

```bash
npx @usebetterdev/console-cli init --email admin@myapp.com
```

This prints two environment variables. Copy both.

2. **Add to your `.env`**

```bash title=".env"
BETTER_CONSOLE_TOKEN_HASH=sha256:e5f6a7b8...
BETTER_CONSOLE_ALLOWED_EMAILS=admin@myapp.com
```

3. **Create the database tables**

Generate the migration SQL and apply it:

```bash
# Option A: Generate a file, then apply with your migration tool
npx @usebetterdev/console-cli migrate -o drizzle/console_tables.sql

# Option B: Pipe directly to psql
npx @usebetterdev/console-cli migrate --dry-run | psql $DATABASE_URL
```

This creates two tables: `better_console_sessions` and `better_console_magic_links`.

4.
**Add the middleware with Drizzle adapter** ```ts title="src/index.ts" import { Hono } from "hono"; import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { betterConsole } from "@usebetterdev/console"; import { drizzleConsoleAdapter } from "@usebetterdev/console/drizzle"; import { createConsoleMiddleware } from "@usebetterdev/console/hono"; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const db = drizzle(pool); const app = new Hono(); const consoleInstance = betterConsole({ adapter: drizzleConsoleAdapter(db), connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, sessions: { magicLink: { allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!, }, }, }); app.use("*", createConsoleMiddleware(consoleInstance)); // Your existing routes below... app.get("/", (c) => c.text("Hello!")); export default app; ``` 5. **Verify the setup** ```bash npx @usebetterdev/console-cli check --database-url $DATABASE_URL ``` All checks should pass. 6. **Open the Console UI** Visit [`console.usebetter.dev`](https://console.usebetter.dev), enter your server URL (e.g., `https://myapp.com`), and authenticate via magic link. A 6-character code is sent to your email — enter it in the UI to complete authentication. ## What just happened? 1. The CLI generated a connection token pair. The **hash** is stored in your environment and used server-side to sign/verify session JWTs. The raw token is only shown once during init. 2. The middleware intercepts all `/.well-known/better/*` requests and delegates them to `handleConsoleRequest()`. 3. The Console UI at `console.usebetter.dev` calls your server's endpoints to authenticate and discover registered products. **No data leaves your infrastructure** — the UI is a static frontend that talks directly to YOUR server. 
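The relationship between the raw connection token and the stored hash can be illustrated with Node's `crypto` module. This is a sketch of the documented `sha256:<64 hex chars>` format only — the library's exact hashing routine is an assumption here:

```typescript
import { createHash } from "node:crypto";

// Illustrative only: derives the documented `sha256:<64 hex chars>` format
// from a raw connection token. The library's exact routine may differ.
function hashConnectionToken(rawToken: string): string {
  const digest = createHash("sha256").update(rawToken, "utf8").digest("hex");
  return `sha256:${digest}`;
}

// The raw token is shown once by `init` and kept by the Console UI;
// only the hash goes in your environment:
// BETTER_CONSOLE_TOKEN_HASH=sha256:<64 hex chars>
```

Because only a one-way digest is stored server-side, a leaked environment file does not reveal the raw token itself.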
## Next steps - [Configuration](https://docs.usebetter.dev/console/configuration/) — all config options, permissions, CORS, environment variables - [Authentication](https://docs.usebetter.dev/console/authentication/) — auto-approve, magic link, custom auth hook - [Product Registration](https://docs.usebetter.dev/console/product-registration/) — expose data to the Console UI - [CLI](https://docs.usebetter.dev/console/cli/) — all CLI commands --- ## Configuration ## betterConsole() config The `betterConsole()` factory accepts a `BetterConsoleConfig` object: ```ts const consoleInstance = betterConsole({ connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, adapter: drizzleConsoleAdapter(db), // optional for auto-approve sessions: { autoApprove: process.env.NODE_ENV === "development", // or magicLink: { allowedEmails: "admin@myapp.com", }, // or (upcoming — not yet implemented) // authenticate: async (request) => { ... }, tokenLifetime: "24h", }, allowedOrigins: ["https://console.usebetter.dev"], allowedActions: ["read", "write", "admin"], onError: (error) => console.error(error), }); ``` ### Config reference | Option | Type | Default | Description | | --- | --- | --- | --- | | `connectionTokenHash` | `string` | _(required)_ | SHA-256 hash of the connection token. Format: `sha256:<64 hex chars>`. Generated by `npx @usebetterdev/console-cli init`. | | `adapter` | `ConsoleAdapter` | `undefined` | Database adapter. Required for magic link sessions. Not needed for auto-approve. | | `sessions` | `ConsoleSessionConfig` | _(required)_ | At least one session method must be configured. | | `sessions.autoApprove` | `boolean` | `false` | Issue a stateless JWT immediately. Dev only — throws in production. | | `sessions.magicLink` | `MagicLinkConfig` | `undefined` | Enable magic link authentication. Requires `adapter`. | | `sessions.authenticate` | `(request) => Promise<{email, permissions} \| null>` | `undefined` | Custom auth hook for existing auth systems (SSO, OAuth). 
_(upcoming — not yet implemented)_ |
| `sessions.tokenLifetime` | `string` | `"24h"` | Duration string for session tokens. Range: `1h` to `7d`. Accepts: `"1h"`, `"8h"`, `"24h"`, `"30m"`, `"7d"`. |
| `allowedOrigins` | `string[]` | `["https://console.usebetter.dev"]` | Origins allowed for CORS. The Console UI origin is included by default. |
| `allowedActions` | `ConsolePermission[]` | `["read", "write", "admin"]` | Permission levels granted to sessions. |
| `onError` | `(error: unknown) => void` | `undefined` | Called when `handleConsoleRequest` catches an unexpected error. |

### Magic link config

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `allowedEmails` | `string \| string[]` | _(required)_ | Email addresses or comma-separated list allowed to authenticate. |
| `sendMagicLinkEmail` | `(data: {email, sessionId, code}) => Promise<void>` | `undefined` | Optional — overrides the default Console email relay with your own email sender. When omitted, the code is delivered automatically via `console.usebetter.dev`. |
| `maxAttempts` | `number` | `5` | Maximum failed code verification attempts before the magic link is locked out. |

## Permissions

UseBetter Console uses a hierarchical permission model with three levels:

| Level | Includes | Use case |
| --- | --- | --- |
| `read` | Read only | View dashboards, list data |
| `write` | Read + Write | Create, update resources |
| `admin` | Read + Write + Admin | Full access, manage settings |

Higher levels include all lower levels. A session with `admin` permission can access endpoints requiring `read` or `write`.

Configure which permissions sessions receive via `allowedActions`:

```ts
const consoleInstance = betterConsole({
  // ...
allowedActions: ["read", "write"], // sessions cannot perform admin actions }); ``` Product endpoints specify their required permission level: ```ts consoleInstance.registerProduct({ id: "tenant", name: "UseBetter Tenant", endpoints: [ { method: "GET", path: "/tenants", requiredPermission: "read", // any session can access handler: async (req) => ({ status: 200, body: await listTenants() }), }, { method: "DELETE", path: "/tenants/:id", requiredPermission: "admin", // only admin sessions handler: async (req) => { await deleteTenant(req.params.id); return { status: 204, body: null }; }, }, ], }); ``` ## CORS By default, only `https://console.usebetter.dev` is allowed as a CORS origin. This means only the official Console UI can call your console endpoints from a browser. To add custom origins (e.g., a self-hosted Console UI or local development): ```ts const consoleInstance = betterConsole({ // ... allowedOrigins: [ "https://console.usebetter.dev", // keep the default "http://localhost:5173", // local dev "https://admin.myapp.com", // custom UI ], }); ``` CORS headers are automatically applied to all `/.well-known/better/*` responses, including `OPTIONS` preflight requests. ## Environment variables | Variable | Required | Description | | --- | --- | --- | | `BETTER_CONSOLE_TOKEN_HASH` | Always | SHA-256 hash of the connection token. Format: `sha256:<64 hex chars>` or bare `<64 hex chars>`. | | `BETTER_CONSOLE_ALLOWED_EMAILS` | Magic link mode | Comma-separated admin email addresses allowed to authenticate. | | `DATABASE_URL` | Magic link mode | Postgres connection string for session and magic link storage. 
|

## Next steps

- [Authentication](https://docs.usebetter.dev/console/authentication/) — auto-approve, magic link, custom auth hook
- [Product Registration](https://docs.usebetter.dev/console/product-registration/) — expose data to the Console UI
- [CLI](https://docs.usebetter.dev/console/cli/) — all CLI commands

---

## Authentication

UseBetter Console supports three authentication methods. You configure them in the `sessions` option of `betterConsole()`. At least one method must be enabled.

## Auto-approve (development)

Auto-approve issues a stateless JWT immediately when a session is initiated — no database, no email flow. It is intended for **local development only**.

```ts
const consoleInstance = betterConsole({
  connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!,
  sessions: { autoApprove: process.env.NODE_ENV === "development" },
});
```

> **Caution:** Auto-approve throws `ConsoleAutoApproveInProductionError` when `NODE_ENV` is not `"development"` or `"test"`. Never use it in production — any client could obtain an admin session.

## Magic link (production)

Magic link uses a 3-step handshake with email verification. It requires a database adapter (Drizzle) for storing session and magic link records.

```ts
const consoleInstance = betterConsole({
  adapter: drizzleConsoleAdapter(db),
  connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!,
  sessions: {
    magicLink: {
      allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!,
    },
  },
});
```

## Authentication flows

The scenarios below step through each authentication flow, showing the HTTP request and response at each stage.

### Auto-approve flow

#### Scenario: Happy path — instant JWT

1. User enters email and clicks "Sign in" in Console UI.
2. Request: `POST /console/session/init` with body `{ "email": "admin@myapp.com" }`
3.
Response: `200 OK` with `{ "sessionToken": "eyJhbG...", "expiresIn": 86400 }` 4. Server signs a stateless JWT immediately — no database, no email required. Determined by `sessions.autoApprove` config. #### Scenario: Production error — auto-approve blocked 1. User enters email and clicks "Sign in" in Console UI. 2. Request: `POST /console/session/init` with body `{ "email": "admin@myapp.com" }` 3. Response: `403 Forbidden` with `{ "error": "ConsoleAutoApproveInProductionError" }` 4. Auto-approve checks NODE_ENV before signing. Rejects with 403 in production to prevent unauthorized access. ### Magic link flow #### Scenario: Happy path — 3-step handshake Step 1 — Init session: 1. User enters email and clicks "Sign in." 2. Request: `POST /console/session/init` with body `{ "email": "admin@myapp.com" }` 3. Response: `200 OK` with `{ "sessionId": "sess_abc123" }` 4. Server generates a random 6-character code, hashes it, and stores a magic link record. Code is sent via email relay. Step 2 — Verify code: 1. User opens email, copies the 6-character code, and enters it in Console UI. 2. Request: `POST /console/session/verify` with body `{ "sessionId": "sess_abc123", "code": "A1B2C3" }` 3. Response: `200 OK` with `{ "status": "verified" }` 4. Server validates code against stored hash. Decrements remaining attempts (4 of 5 left). Step 3 — Claim JWT: 1. Console UI automatically claims the session after verification succeeds. 2. Request: `POST /console/session/claim` with body `{ "sessionId": "sess_abc123" }` 3. Response: `200 OK` with `{ "sessionToken": "eyJhbG...", "expiresIn": 86400 }` 4. Server creates a database session record and signs a JWT. The magic link record is consumed. #### Scenario: Wrong code 1. Init: `POST /console/session/init` → `200 OK` with `{ "sessionId": "sess_abc123" }` 2. Verify: `POST /console/session/verify` with body `{ "sessionId": "sess_abc123", "code": "WRONG1" }` 3. 
Response: `401 Unauthorized` with `{ "error": "ConsoleMagicLinkInvalidCodeError", "remainingAttempts": 4 }` 4. Code hash does not match. Attempt counter decremented (4 of 5 remaining). User can retry. #### Scenario: Email not allowed 1. Request: `POST /console/session/init` with body `{ "email": "hacker@evil.com" }` 2. Response: `403 Forbidden` with `{ "error": "ConsoleEmailNotAllowedError" }` 3. Email checked against allowedEmails config. Rejected before any code is generated or stored. #### Scenario: Brute force lockout 1. Init: `POST /console/session/init` → `200 OK` with `{ "sessionId": "sess_abc123" }` 2. After 5 failed verify attempts, last response: `401` with `{ "error": "ConsoleMagicLinkInvalidCodeError", "remainingAttempts": 0 }` 3. Next attempt: `POST /console/session/verify` → `429 Too Many Requests` with `{ "error": "ConsoleMagicLinkLockedError" }` 4. All attempts exhausted. The magic link record is locked. User must start over with a new `POST /session/init`. ### Email allow-list Only emails matching the `allowedEmails` list can initiate a session. Emails that don't match receive a `ConsoleEmailNotAllowedError`. You can use exact addresses or glob patterns with `*` wildcards. ```ts sessions: { magicLink: { // Single email allowedEmails: "admin@myapp.com", // Multiple emails allowedEmails: ["admin@myapp.com", "ops@myapp.com"], // Glob pattern — allow any email from a domain allowedEmails: "*@myapp.com", // Mix exact emails and patterns allowedEmails: ["admin@myapp.com", "*@eng.myapp.com"], // From environment variable (comma-separated) allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!, }, }, ``` The universal wildcard `*` allows any email to initiate a session. 
In production, this requires explicit opt-in: ```ts sessions: { magicLink: { allowedEmails: "*", allowUnauthenticatedEmails: true, }, }, ``` > **Caution:** Using `allowedEmails: "*"` without `allowUnauthenticatedEmails: true` throws a `WILDCARD_EMAIL_IN_PRODUCTION` error outside development/test environments. Always restrict to specific emails or domain patterns in production. ### Email delivery When `sendMagicLinkEmail` is not provided, the verification code is delivered automatically via the **Console email relay** (`console.usebetter.dev`). The relay only sends to emails that have a registered Console account — no additional configuration is needed. The Console UI sends `appName` and `baseUrl` in the session init request. These are used to construct the verification link in the email. ### Custom email sender To use your own email provider instead of the Console email relay, provide the `sendMagicLinkEmail` callback: ```ts sessions: { magicLink: { allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!, sendMagicLinkEmail: async ({ email, sessionId, code }) => { await sendEmail({ to: email, subject: "Your Console access code", body: `Your verification code is: ${code}`, }); }, }, }, ``` ### Brute-force protection Magic link verification has a configurable attempt limit. After `maxAttempts` failed code entries, the magic link is locked out and a new session must be initiated. ```ts sessions: { magicLink: { allowedEmails: "admin@myapp.com", maxAttempts: 3, // default: 5 }, }, ``` The magic link code expires after 10 minutes regardless of attempts. ## Custom authenticate hook > **Note:** Custom authentication via `sessions.authenticate` is planned but not yet implemented. Currently, use **auto-approve** for development and **magic link** for production. This section will be updated when the feature ships. ## Session management ### Token lifetime Session tokens expire after 24 hours by default. 
Configure with `tokenLifetime`: ```ts sessions: { magicLink: { allowedEmails: "admin@myapp.com" }, tokenLifetime: "8h", // valid range: 1h to 7d }, ``` ### Token rotation To rotate the connection token (e.g., if compromised): ```bash npx @usebetterdev/console-cli token rotate ``` This generates a new token pair. Update `BETTER_CONSOLE_TOKEN_HASH` in your environment and restart your server. **All existing sessions are invalidated** because session JWTs are signed with the connection token secret. ## Next steps - [Configuration](https://docs.usebetter.dev/console/configuration/) — full config reference, CORS, environment variables - [CLI](https://docs.usebetter.dev/console/cli/) — all CLI commands including token management - [Troubleshooting](https://docs.usebetter.dev/console/troubleshooting/) — common auth issues and fixes --- ## Product Registration UseBetter Console discovers what data to show by querying registered product endpoints. Products register themselves with the console instance, and the Console UI automatically discovers them via the `/capabilities` endpoint. ## Registering UseBetter Tenant If you are using `@usebetterdev/tenant`, it has built-in support for UseBetter Console. Simply pass your console instance to the `betterTenant` configuration: ```ts // 1. Create the console instance const consoleInstance = betterConsole({ connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, sessions: { autoApprove: process.env.NODE_ENV === "development" }, }); // 2. Pass it to UseBetter Tenant const tenant = betterTenant({ database: ..., tenantResolver: ..., console: consoleInstance, // <--- Registers tenant endpoints automatically }); ``` This automatically registers the `tenant` product with all its endpoints (`/tenants`, `/tenants/:id`, etc.). 
## Registering custom products

For your own products or custom features, use `registerProduct()` on the console instance:

```ts
const consoleInstance = betterConsole({
  connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!,
  sessions: { autoApprove: process.env.NODE_ENV === "development" },
});

consoleInstance.registerProduct({
  id: "my-product",
  name: "My Custom Product",
  endpoints: [
    {
      method: "GET",
      path: "/items",
      requiredPermission: "read",
      handler: async (request) => {
        // ... implementation ...
        return { status: 200, body: [] };
      },
    },
    // ... more endpoints ...
  ],
});
```

### ConsoleProduct interface

| Field | Type | Description |
| --- | --- | --- |
| `id` | `string` | Short identifier. Used in the route path (e.g., `"tenant"` → `/.well-known/better/tenant/*`). |
| `name` | `string` | Display name shown in the Console UI (e.g., `"UseBetter Tenant"`). |
| `endpoints` | `ConsoleProductEndpoint[]` | Array of endpoint definitions. |

### ConsoleProductEndpoint interface

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `method` | `"GET" \| "POST" \| "PATCH" \| "DELETE"` | _(required)_ | HTTP method. |
| `path` | `string` | _(required)_ | Path relative to the product mount. Supports `:param` segments (e.g., `"/tenants/:id"`). |
| `handler` | `(request: AuthenticatedConsoleRequest) => Promise<{ status: number; body: unknown }>` | _(required)_ | Request handler. Receives an authenticated request with `session` and `params` populated. |
| `requiredPermission` | `ConsolePermission` | `"read"` | Minimum permission level required to access this endpoint. |

### Handler request

Product endpoint handlers receive an `AuthenticatedConsoleRequest` — a `ConsoleRequest` with a guaranteed `session` field:

```ts
handler: async (request) => {
  // request.session is always defined (auth is handled by the router)
  const { email, permissions } = request.session;

  // URL params from :param segments
  const tenantId = request.params.id;

  // Query string parameters
  const limit = request.query.limit;

  // Request body (parsed JSON)
  const body = request.body;

  return { status: 200, body: { data: "..." } };
}
```

## Route pattern

Registered product endpoints are served under:

```
/.well-known/better/<product-id><endpoint-path>
```

For example, a product with `id: "tenant"` and an endpoint with `path: "/tenants"` is accessible at:

```
GET /.well-known/better/tenant/tenants
```

All product endpoints **require authentication** — the router verifies the session token and checks permissions before calling your handler.

## Built-in routes

UseBetter Console registers these routes automatically. These routes are unauthenticated — they handle the session handshake and discovery before a session exists.

| Method | Path | Description |
| --- | --- | --- |
| `GET` | `/console/health` | Health check. Returns `{"status":"ok"}`. |
| `GET` | `/console/capabilities` | Returns registered products, auth methods, and permissions. |
| `POST` | `/console/session/init` | Initiates a session (auto-approve or magic link). |
| `POST` | `/console/session/verify` | Verifies a magic link code. Only registered when adapter is configured. |
| `GET` | `/console/session/poll` | Polls magic link session status. Only registered when adapter is configured. |
| `POST` | `/console/session/claim` | Claims a verified magic link session. Only registered when adapter is configured. |

All routes are prefixed with `/.well-known/better/` by the Hono middleware.
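The `:param` segments in endpoint paths can be matched with only a few lines of logic. The sketch below is illustrative — it is not the library's actual router:

```typescript
// Match a pattern like "/tenants/:id" against a concrete path.
// Returns the captured params, or null when the path does not match.
function matchPath(
  pattern: string,
  path: string,
): Record<string, string> | null {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      // ":id" captures the corresponding path segment
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

// matchPath("/tenants/:id", "/tenants/t_42") → { id: "t_42" }
```

A matcher like this is what lets a handler read `request.params.id` for an endpoint registered with `path: "/tenants/:id"`.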
## Capabilities endpoint

The capabilities endpoint (`GET /.well-known/better/console/capabilities`) returns information about your Console setup:

```json
{
  "products": ["tenant"],
  "authMethods": ["magic_link"],
  "permissions": ["read", "write", "admin"]
}
```

The Console UI calls this endpoint to discover what products are available and which authentication method to use.

## Next steps

- [Configuration](https://docs.usebetter.dev/console/configuration/) — full config reference, permissions, CORS
- [Authentication](https://docs.usebetter.dev/console/authentication/) — auth methods and session management
- [Architecture](https://docs.usebetter.dev/console/architecture/) — request flow and routing internals

---

## CLI

The CLI (`@usebetterdev/console-cli`) provides commands for setting up and managing UseBetter Console. Use it via `npx` — no installation required.

```bash
npx @usebetterdev/console-cli
```

## Commands

### `init` — generate token and config

Generates a connection token pair and prints the environment variables needed to run Console.

```bash
npx @usebetterdev/console-cli init
npx @usebetterdev/console-cli init --email admin@myapp.com
npx @usebetterdev/console-cli init --email admin@myapp.com -n  # non-interactive
```

| Flag | Description |
| --- | --- |
| `--email <email>` | Admin email address (skips the prompt) |
| `-n, --non-interactive` | Disable all prompts; requires `--email` |

**Output:**

```
Connection token (save this — it won't be shown again):
a1b2c3d4...

BETTER_CONSOLE_TOKEN_HASH=sha256:e5f6a7b8...
BETTER_CONSOLE_ALLOWED_EMAILS=admin@myapp.com
```

The **token hash** is the SHA-256 digest stored in your environment. It is used server-side to sign and verify session JWTs. The raw connection token is only shown once during init — only the hash is needed at runtime.

### `migrate` — generate table SQL

Generates the SQL to create the `better_console_sessions` and `better_console_magic_links` tables.
Only needed if you use the magic link auth flow — auto-approve is stateless and needs no tables.

```bash
npx @usebetterdev/console-cli migrate --dry-run             # print to stdout
npx @usebetterdev/console-cli migrate                       # write to ./console-migrations/<timestamp>_better_console_tables.sql
npx @usebetterdev/console-cli migrate -o path/to/file.sql   # write to specific file
npx @usebetterdev/console-cli migrate --force               # overwrite if file exists
```

| Flag | Description |
| --- | --- |
| `--dry-run` | Print SQL to stdout instead of writing a file |
| `-o, --output <path>` | File (`.sql`) or directory path |
| `--force` | Overwrite an existing file without error |

The generated SQL is idempotent (`CREATE TABLE IF NOT EXISTS`, `CREATE INDEX IF NOT EXISTS`).

**Tables created:**

| Table | Purpose |
| --- | --- |
| `better_console_sessions` | Stores claimed magic-link sessions (email, token hash, permissions, expiry) |
| `better_console_magic_links` | Tracks the init → verify → claim flow (code hash, session correlation ID, failed attempts) |

### `token generate` — new token pair

Generates a fresh connection token pair without the full init flow. Useful for scripting or CI.

```bash
npx @usebetterdev/console-cli token generate
```

Prints the raw token and the `BETTER_CONSOLE_TOKEN_HASH=sha256:...` line.

### `token rotate` — rotate and invalidate

Same as `generate` but prints a warning that rotating the token **invalidates all existing console sessions**.

```bash
npx @usebetterdev/console-cli token rotate
```

After rotating, update `BETTER_CONSOLE_TOKEN_HASH` in your environment and restart your server.

### `check` — verify setup

Validates that the database tables, indexes, and environment variables are set up correctly.

```bash
npx @usebetterdev/console-cli check --database-url postgres://...
# or with DATABASE_URL in the environment: npx @usebetterdev/console-cli check ``` | Flag | Description | | --- | --- | | `--database-url <url>` | Postgres connection string (default: `DATABASE_URL` env) | **Checks performed:** | Category | What it verifies | | --- | --- | | Tables | `better_console_sessions` and `better_console_magic_links` exist with all required columns | | Indexes | `token_hash`, `session_id`, and `expires_at` indexes exist | | Env vars | `BETTER_CONSOLE_TOKEN_HASH` is set and has valid format (`sha256:<64 hex>` or bare `<64 hex>`) | | Env vars | `BETTER_CONSOLE_ALLOWED_EMAILS` is set (warning only — only needed for magic link mode) | Exits with code 1 if any check fails. ## Programmatic API The CLI modules are also importable for use in scripts or custom tooling: ```ts // Generate a token pair const { token, hash } = await generateTokenPair(); // Get the migration SQL as a string const sql = generateConsoleMigrationSql(); // Run checks programmatically const { passed, results, warnings } = await runConsoleCheck(databaseUrl); // Run init (non-interactive) const output = await runInit({ email: "admin@myapp.com", nonInteractive: true }); ``` ## Next steps - [Configuration](https://docs.usebetter.dev/console/configuration/) — full config reference, CORS, environment variables - [Troubleshooting](https://docs.usebetter.dev/console/troubleshooting/) — common issues and fixes - [Architecture](https://docs.usebetter.dev/console/architecture/) — how the routing and auth internals work --- ## Troubleshooting ## Invalid or expired session token ``` {"error":"Invalid or expired session"} ``` The session JWT could not be verified. This happens when: - **Token has expired.** Sessions expire after `tokenLifetime` (default 24 hours). Re-authenticate via the Console UI. - **Connection token was rotated.** After running `npx @usebetterdev/console-cli token rotate`, all existing sessions are invalidated because JWTs are signed with the connection token secret.
Update `BETTER_CONSOLE_TOKEN_HASH` in your environment, restart your server, and re-authenticate. - **Wrong `BETTER_CONSOLE_TOKEN_HASH`.** Ensure the environment variable matches the hash generated by `init` or `token generate`. --- ## Auto-approve blocked in production ``` ConsoleAutoApproveInProductionError: autoApprove is enabled outside of development. ``` This error is thrown during `betterConsole()` construction — your server will not start. Auto-approve is only allowed when `NODE_ENV` is `"development"` or `"test"`. **Fix:** Either set `NODE_ENV=development` for local dev, or switch to magic link auth: ```ts const consoleInstance = betterConsole({ adapter: drizzleConsoleAdapter(db), connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, sessions: { magicLink: { allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!, }, }, }); ``` --- ## Email not in allowed list ``` {"error":"Email \"user@example.com\" is not in the allowed list.","code":"EMAIL_NOT_ALLOWED"} ``` The email address used to initiate a magic link session is not in the `allowedEmails` configuration. **Fix:** Add the email to `BETTER_CONSOLE_ALLOWED_EMAILS`: ```bash title=".env" BETTER_CONSOLE_ALLOWED_EMAILS=admin@myapp.com,ops@myapp.com ``` Then restart your server. --- ## Adapter required for magic link ``` {"error":"Magic link sessions require a database adapter.","code":"ADAPTER_REQUIRED"} ``` Magic link sessions store session and magic link records in the database. You must provide a database adapter. 
**Fix:** Add the Drizzle adapter: ```ts const consoleInstance = betterConsole({ adapter: drizzleConsoleAdapter(db), // add this connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, sessions: { magicLink: { allowedEmails: process.env.BETTER_CONSOLE_ALLOWED_EMAILS!, }, }, }); ``` --- ## Database tables missing ``` check: FAIL - table "better_console_sessions" not found check: FAIL - table "better_console_magic_links" not found ``` The console tables have not been created in your database. **Fix:** Run the migration: ```bash # Option A: Pipe directly to psql npx @usebetterdev/console-cli migrate --dry-run | psql $DATABASE_URL # Option B: Generate a file npx @usebetterdev/console-cli migrate -o drizzle/console_tables.sql # Then apply via your migration tool or psql ``` Verify with: ```bash npx @usebetterdev/console-cli check --database-url $DATABASE_URL ``` --- ## CORS errors ``` Access to fetch at 'https://myapp.com/.well-known/better/console/capabilities' from origin 'https://console.usebetter.dev' has been blocked by CORS policy ``` The Console UI origin is not in the `allowedOrigins` list, or CORS headers are being stripped by a reverse proxy. ### Common causes **Reverse proxy strips headers.** If you use nginx, Cloudflare, or another proxy, ensure it forwards CORS headers from your application. Do not add separate CORS headers at the proxy level — let UseBetter Console handle them. **Custom origins not included.** If you host the Console UI yourself or access it from a custom domain: ```ts const consoleInstance = betterConsole({ // ... 
allowedOrigins: [ "https://console.usebetter.dev", // default — keep this "https://admin.myapp.com", // your custom origin ], }); ``` **Missing middleware.** The Console middleware must be mounted before any other middleware that might handle `/.well-known/better/*` routes: ```ts app.use("*", createConsoleMiddleware(consoleInstance)); ``` --- ## Session expired ``` {"error":"Invalid or expired session"} ``` The database session record has passed its `expires_at` timestamp. Re-authenticate via the Console UI. To extend session lifetime, configure `tokenLifetime`: ```ts sessions: { magicLink: { allowedEmails: "admin@myapp.com" }, tokenLifetime: "7d", // max 7 days (default: 24h) }, ``` --- ## Health endpoint returns 404 ```bash curl http://localhost:3000/.well-known/better/console/health # 404 Not Found ``` The Console middleware is not intercepting requests. ### Common causes **Middleware not mounted.** Ensure `createConsoleMiddleware()` is called and applied to your Hono app: ```ts app.use("*", createConsoleMiddleware(consoleInstance)); ``` **Wrong base path.** The middleware intercepts `/.well-known/better/*` by default. If your app is behind a path prefix (e.g., `/api`), the full path would be `/api/.well-known/better/console/health`. **App not running.** Verify your server is actually running on the expected port. --- ## Permission denied ``` {"error":"Insufficient permissions"} ``` Your session does not have the required permission level for the endpoint you're accessing. The permission hierarchy is `read` < `write` < `admin`. **Fix:** Check what permission the endpoint requires and ensure your session has at least that level. The `allowedActions` config determines the maximum permission level granted to sessions: ```ts const consoleInstance = betterConsole({ // ... 
allowedActions: ["read", "write", "admin"], // default — grants all levels }); ``` --- ## No session method configured ``` {"error":"No session method configured.","code":"NO_SESSION_METHOD"} ``` You must enable at least one of: `autoApprove`, `magicLink`, or `authenticate` in the `sessions` config. **Fix:** ```ts sessions: { autoApprove: process.env.NODE_ENV === "development", // or magicLink: { allowedEmails: "admin@myapp.com" }, // for production // or authenticate: async (request) => { /* ... */ }, // custom auth }, ``` --- ## Email relay failed ``` {"error":"Console email relay returned 404: email not registered","code":"EMAIL_RELAY_FAILED"} ``` The Console email relay could not deliver the verification code. Common causes: - **Email not registered in Console.** The relay only sends to emails with an existing account at `console.usebetter.dev`. Sign up at [console.usebetter.dev](https://console.usebetter.dev) with the same email address listed in `allowedEmails`. - **Rate limited (429).** Too many magic link requests in a short period. Wait a few minutes and try again. - **Network error.** Your server could not reach `api.usebetter.dev`. Check your firewall and DNS settings. **Fix:** If you prefer to bypass the relay entirely, provide your own `sendMagicLinkEmail` callback: ```ts sessions: { magicLink: { allowedEmails: "admin@myapp.com", sendMagicLinkEmail: async ({ email, code }) => { await yourEmailService.send({ to: email, body: `Code: ${code}` }); }, }, }, ``` --- ## Weak secret in production ``` {"error":"connectionTokenHash secret is too short","code":"WEAK_SECRET"} ``` The connection token hash resolves to a secret shorter than 32 characters. This check is enforced when `NODE_ENV` is not `"development"` or `"test"`. **Fix:** Generate a new, strong token: ```bash npx @usebetterdev/console-cli token generate ``` Update `BETTER_CONSOLE_TOKEN_HASH` in your environment with the new value. 
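A strong token is simply sufficient randomness. As a rough sketch (using Node's `crypto` module; this is illustrative, not the CLI's actual implementation), a token/hash pair of the same shape can be produced like this:

```typescript
// Illustrative only: the CLI's real implementation may differ.
import { randomBytes, createHash } from "node:crypto";

// 32 random bytes → 64 hex characters, well above the 32-character minimum
const token = randomBytes(32).toString("hex");

// Only the SHA-256 digest (prefixed with "sha256:") is kept in the environment
const hash = "sha256:" + createHash("sha256").update(token).digest("hex");

console.log(token.length); // 64
console.log(hash.length); // 71 ("sha256:" + 64 hex characters)
```

The raw `token` would be shown once; only `hash` belongs in `BETTER_CONSOLE_TOKEN_HASH`.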
## Next steps - [Configuration](https://docs.usebetter.dev/console/configuration/) — full config reference - [Authentication](https://docs.usebetter.dev/console/authentication/) — auth methods and session management - [CLI](https://docs.usebetter.dev/console/cli/) — all CLI commands --- ## Architecture This page explains the internal mechanisms of UseBetter Console: the request flow through the middleware and core, how routing and authentication work, and the security model. ## Request flow When a request arrives at your Hono app, it passes through the Console middleware: 1. **Middleware intercept** — `createConsoleMiddleware()` checks if the URL starts with `/.well-known/better/`. If not, the request passes through to your normal routes. 2. **Request conversion** — the middleware converts the Hono `Request` into a `ConsoleRequest` (method, path, headers, query, body) and strips the `/.well-known/better` prefix from the path. 3. **Route matching** — `ConsoleRouter.match()` finds a registered route matching the method and path. If no route matches, a 404 is returned. 4. **CORS** — for `OPTIONS` requests, preflight CORS headers are returned immediately (204). For all other requests, CORS headers are added to the response based on `allowedOrigins`. 5. **Authentication** — if the route requires auth (`requiresAuth: true`), the router extracts the `Authorization: Bearer <token>` header, verifies the JWT, checks permissions, and attaches the `session` to the request. 6. **Handler execution** — the matched handler receives the enriched request and returns a `ConsoleResponse`. The middleware converts it back to a standard `Response`. ## Middleware as thin adapter The Hono middleware is intentionally minimal.
It performs only two tasks: - **Convert** between Hono's `Request`/`Response` and Console's `ConsoleRequest`/`ConsoleResponse` - **Route** requests that start with the base path (`/.well-known/better/`) to `handleConsoleRequest()` All routing, authentication, CORS handling, and error recovery live inside the core `handleConsoleRequest()` function. This design means: - The middleware has no knowledge of routes, auth, or session logic - Adding support for other frameworks (Express, Fastify) requires only a thin adapter - Testing the core does not require an HTTP server ## Routing `ConsoleRouter` is a simple pattern-matching router that registers two kinds of routes: ### Console routes Built-in routes registered at startup, prefixed with `/console/`: ``` GET /console/health GET /console/capabilities POST /console/session/init POST /console/session/verify (only with adapter) GET /console/session/poll (only with adapter) POST /console/session/claim (only with adapter) ``` Console routes are **unauthenticated** — they handle the session handshake and health checks. ### Product routes Registered via `registerProduct()`, prefixed with `/<product>/`: ``` GET /tenant/tenants GET /tenant/tenants/:id POST /tenant/tenants DELETE /tenant/tenants/:id ``` Product routes are **always authenticated** and require a valid session token with the specified permission level. ### Path matching Routes support `:param` segments for dynamic path parameters: ``` Pattern: /tenant/tenants/:id Actual: /tenant/tenants/550e8400-e29b-41d4-a716-446655440000 Params: { id: "550e8400-e29b-41d4-a716-446655440000" } ``` The router performs exact segment matching — the pattern and actual path must have the same number of segments. ## Authentication internals ### Auto-approve (stateless JWT) In auto-approve mode, `initSession()` signs a JWT immediately using the connection token secret.
The JWT payload contains: - `sessionId` — random UUID - `email` — from the request body (defaults to `"dev@localhost"`) - `permissions` — from `allowedActions` config - `expiresAt` — current time + `tokenLifetime` No database interaction occurs. The JWT is verified on each subsequent request by decoding and checking the signature and expiry. ### Magic link (database-backed) Magic link sessions use the database adapter for persistent storage: 1. **Init** — generates a random 6-character code, SHA-256 hashes it, stores a `ConsoleMagicLink` record with a unique `sessionId`, and sends the raw code to the user's email. 2. **Verify** — the user submits the code. The server hashes it and compares against the stored hash. Failed attempts are tracked; after `maxAttempts` failures, the magic link is locked. 3. **Claim** — creates a `ConsoleSession` record in the database with a new session token hash. Returns a signed JWT to the client. The claim is idempotent — it uses `WHERE token_hash IS NULL` to prevent double-claiming. Session verification on subsequent requests: the JWT is decoded, the token hash is computed, and the session is looked up in the database by token hash. If the session exists and hasn't expired, the request proceeds. ## CORS CORS is handled at the core level, not in the middleware. Every response from `handleConsoleRequest()` includes CORS headers when the request's `Origin` header matches an entry in `allowedOrigins`. - **Preflight (OPTIONS)** — returns 204 with `Access-Control-Allow-Origin`, `Access-Control-Allow-Methods` (GET, POST, PATCH, DELETE, PUT, OPTIONS), `Access-Control-Allow-Headers` (Content-Type, Authorization), and `Access-Control-Max-Age`. - **Normal requests** — `Access-Control-Allow-Origin` is set to the matched origin. If no origin matches, no CORS headers are added (the browser blocks the request). The default allowed origin is `https://console.usebetter.dev`. 
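The origin check described above can be sketched in a few lines. This is a simplified stand-in for illustration, not the library's code; the function name `corsHeaders` is hypothetical:

```typescript
// Hypothetical sketch of core-level CORS handling; not the actual implementation.
function corsHeaders(
  requestOrigin: string | undefined,
  allowedOrigins: string[],
): Record<string, string> {
  // No match: return no CORS headers, and the browser blocks the response
  if (!requestOrigin || !allowedOrigins.includes(requestOrigin)) return {};
  // Match: echo back the matched origin plus the allowed methods and headers
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET, POST, PATCH, DELETE, PUT, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
}

const allowed = ["https://console.usebetter.dev"];
console.log(corsHeaders("https://console.usebetter.dev", allowed)["Access-Control-Allow-Origin"]);
// → "https://console.usebetter.dev"
console.log(Object.keys(corsHeaders("https://evil.example", allowed)).length); // → 0
```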
## Security model ### Token hashing The connection token is never stored in plaintext. During `init`, the CLI generates a random token and its SHA-256 hash. Only the hash (`sha256:<hash>`) is stored in the environment. The raw token is shown once and then discarded. At runtime, the hash is used as the JWT signing secret. This means: - The JWT cannot be forged without knowing the hash - Rotating the hash invalidates all existing JWTs - The original raw token is not needed after setup ### Strength enforcement In production (`NODE_ENV` is not `"development"` or `"test"`), the connection token secret must be at least 32 characters. Shorter secrets throw `ConsoleWeakSecretError` at startup. ### Brute-force protection Magic link code verification tracks failed attempts per magic link. After `maxAttempts` (default 5), the magic link is locked — no further verification attempts are accepted. The user must initiate a new session. Magic link codes expire after 10 minutes regardless of attempts. ### CORS restriction By default, only `https://console.usebetter.dev` can make cross-origin requests to your console endpoints. This prevents unauthorized frontends from accessing your data. ### Auto-approve restriction Auto-approve is blocked in production via `ConsoleAutoApproveInProductionError`. This prevents accidental deployment of a configuration that grants instant admin access to anyone. ## Next steps - [Configuration](https://docs.usebetter.dev/console/configuration/) — full config reference, permissions, CORS - [Authentication](https://docs.usebetter.dev/console/authentication/) — auth methods and session management - [Troubleshooting](https://docs.usebetter.dev/console/troubleshooting/) — common issues and fixes --- # Tenant > Request-scoped multi-tenancy for TypeScript and Postgres, powered by Row-Level Security. ## Introduction UseBetter Tenant is an open-source library that adds multi-tenancy to your Postgres application in minutes.
Instead of manually adding `WHERE tenant_id = ?` to every query, UseBetter Tenant uses **Postgres Row-Level Security (RLS)** to enforce tenant boundaries at the database level. ## Key features - **Database-enforced isolation** — RLS policies ensure tenants can never access each other's data, even if your application code has a bug. - **Zero WHERE clauses** — queries are automatically scoped to the current tenant. Write `db.select().from(projects)` and only that tenant's projects come back. - **Request-scoped context** — tenant identity is resolved from incoming requests (header, subdomain, path, JWT, or custom) and propagated via `AsyncLocalStorage`. - **Framework adapters** — drop-in middleware for Hono, Express, and Next.js App Router. - **ORM adapters** — first-class support for Drizzle ORM and Prisma, with transaction-scoped tenant context. - **CLI tooling** — generate migrations, verify your RLS setup, and seed tenants from the command line. - **Admin operations** — `runAs` and `runAsSystem` for background jobs, cron tasks, and cross-tenant admin work. ## How it works 1. A request arrives with a tenant identifier (header, subdomain, path, JWT, or custom function). 2. Middleware resolves the identifier to a tenant UUID. 3. The adapter opens a database transaction and runs `SELECT set_config('app.current_tenant', '<tenant-uuid>', true)` — the function form of `SET LOCAL`. 4. Your handler executes — every query is automatically filtered by RLS to the current tenant. 5. The transaction commits and the session variable is cleared (safe for connection pooling). Because the tenant context is transaction-scoped (`SET LOCAL`), it works safely with connection pools like `pg.Pool` — no cross-request leakage. > **Deep dive:** [How RLS Works](https://docs.usebetter.dev/tenant/how-rls-works/) explains each step in detail — why connection pooling is safe, why missing WHERE clauses can't leak data, how admin bypass works, and includes an interactive playground.
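The transaction lifecycle in steps 3–5 can be sketched with a stub query runner in place of a real connection. This is a simplified illustration of the pattern, not the adapter's actual code; `withTenantTransaction` and `QueryRunner` are hypothetical names:

```typescript
// Illustrative sketch: how a tenant-scoped transaction brackets the handler.
type QueryRunner = (sql: string, params: unknown[]) => Promise<void>;

async function withTenantTransaction(
  run: QueryRunner,
  tenantId: string,
  handler: () => Promise<void>,
): Promise<void> {
  await run("BEGIN", []);
  try {
    // Third argument `true` makes the setting transaction-local (SET LOCAL
    // semantics): it is cleared on COMMIT/ROLLBACK, so pooled connections
    // never carry tenant context into the next request.
    await run("SELECT set_config('app.current_tenant', $1, true)", [tenantId]);
    await handler(); // queries issued here are RLS-filtered to tenantId
    await run("COMMIT", []);
  } catch (err) {
    await run("ROLLBACK", []);
    throw err;
  }
}

// Record statements with a stub runner to show the order of operations
const issued: string[] = [];
const stub: QueryRunner = async (sql) => { issued.push(sql); };
await withTenantTransaction(stub, "d4f8e2a1-3b5c-4e7f-9a1d-6c8b2e4f0a3d", async () => {
  await stub("SELECT * FROM projects", []);
});
console.log(issued);
// → ["BEGIN", "SELECT set_config('app.current_tenant', $1, true)",
//    "SELECT * FROM projects", "COMMIT"]
```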
## Architecture UseBetter Tenant follows a layered architecture: | Layer | Package | Role | | ---------------------- | ---------------------------------------------- | -------------------------------------------------------------------- | | **Core** | `@usebetterdev/tenant-core` | Context, resolver, adapter contract, API. Zero runtime dependencies. | | **ORM adapters** | `tenant-drizzle`, `tenant-prisma` | Transaction wrapping, `SET LOCAL`, RLS bypass. | | **Framework adapters** | `tenant-hono`, `tenant-express`, `tenant-next` | Middleware that resolves tenant and delegates to the adapter. | | **CLI** | `@usebetterdev/tenant-cli` | Migrations, verification, seeding. | | **Umbrella** | `@usebetterdev/tenant` | Single install, subpath exports for everything. | You install the umbrella package (`@usebetterdev/tenant`) and import adapters via subpath exports like `@usebetterdev/tenant/drizzle` and `@usebetterdev/tenant/hono`. ## Next steps - [Installation](https://docs.usebetter.dev/tenant/installation/) — install the package and its peer dependencies - [Quick Start](https://docs.usebetter.dev/tenant/quick-start/) — get a working multi-tenant app in 5 minutes - [How RLS Works](https://docs.usebetter.dev/tenant/how-rls-works/) — understand what happens under the hood and why you can trust it --- ## Installation ## Install the package **npm:** ```bash npm install @usebetterdev/tenant ``` **pnpm:** ```bash pnpm add @usebetterdev/tenant ``` **yarn:** ```bash yarn add @usebetterdev/tenant ``` **bun:** ```bash bun add @usebetterdev/tenant ``` The main package (`@usebetterdev/tenant`) includes the core library and all adapters via subpath exports. The CLI (`@usebetterdev/tenant-cli`) is used via `npx` for generating migrations and verifying your setup — no installation required. ## Peer dependencies You need a database driver and (optionally) a framework. 
Install the ones you use: ### ORM adapter **Drizzle + pg:** ```bash npm install drizzle-orm pg ``` Import: `@usebetterdev/tenant/drizzle` **Drizzle + postgres.js:** ```bash npm install drizzle-orm postgres ``` Import: `@usebetterdev/tenant/drizzle` **Prisma:** ```bash npm install @prisma/client @prisma/adapter-pg ``` Requires Prisma 7+ (`@prisma/client` >= 7.0.0 and `@prisma/adapter-pg` >= 7.0.0). Import: `@usebetterdev/tenant/prisma` ### Framework adapter **Hono:** ```bash npm install hono ``` Import: `@usebetterdev/tenant/hono` **Express:** ```bash npm install express ``` Requires Express 5+ (`express` >= 5.0.0). Import: `@usebetterdev/tenant/express` **Next.js:** Next.js is already installed in your project. Import: `@usebetterdev/tenant/next` ## Requirements - **Node.js 22+** (also supports Bun and Deno) - **PostgreSQL 13+** (RLS, session variables, `SET LOCAL`) - **TypeScript 5+** (recommended, but not required) - **Non-superuser database role** for your application connection > **Superusers bypass RLS:** PostgreSQL superusers (like the default `postgres` user) bypass **all** Row-Level Security policies. Your application must connect as a regular (non-superuser) role for RLS to enforce tenant isolation. See [Quick Start — Prerequisites](https://docs.usebetter.dev/tenant/quick-start/#prerequisites) for how to create one. 
## Subpath exports All adapters are available through the umbrella package via subpath exports: | Import | Contents | |---|---| | `@usebetterdev/tenant` | Core: `betterTenant`, `getContext`, `runAs`, `runAsSystem` | | `@usebetterdev/tenant/drizzle` | `drizzleDatabase`, `tenantsTable`, `tenantId` | | `@usebetterdev/tenant/prisma` | `prismaDatabase` | | `@usebetterdev/tenant/hono` | `createHonoMiddleware` | | `@usebetterdev/tenant/express` | `createExpressMiddleware` | | `@usebetterdev/tenant/next` | `withTenant` | ## Next steps - [Quick Start](https://docs.usebetter.dev/tenant/quick-start/) — wire up a working multi-tenant app --- ## Quick Start This guide walks you through adding multi-tenancy to an existing Postgres application. By the end, your queries will be automatically scoped to the current tenant via RLS — no `WHERE tenant_id = ?` needed. > **Want to understand what's happening under the hood?:** [How RLS Works](https://docs.usebetter.dev/tenant/how-rls-works/) walks through the full request lifecycle and includes an interactive playground where you can run simulated queries. ## Prerequisites - A running PostgreSQL 13+ database - An existing application with tables you want to make tenant-scoped - Node.js 22+ - A **non-superuser** database role for your application > **Superusers bypass RLS:** PostgreSQL superusers (like the default `postgres` user in Docker) bypass **all** Row-Level Security policies, even with `FORCE ROW LEVEL SECURITY`. Your `DATABASE_URL` must connect as a regular (non-superuser) role. 
If you don't have one yet, create an application role: ```sql CREATE ROLE app_user WITH LOGIN PASSWORD 'app_password'; GRANT CONNECT ON DATABASE mydb TO app_user; GRANT USAGE ON SCHEMA public TO app_user; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user; ``` Then use `DATABASE_URL=postgresql://app_user:app_password@localhost:5432/mydb`. 1. **Initialize config** ```bash npx @usebetterdev/tenant-cli init --database-url $DATABASE_URL ``` This connects to your database, detects your tables, and creates `better-tenant.config.json`: ```json { "tenantTables": ["projects", "tasks"] } ``` 2. **Set up schema and RLS** Your ORM manages the schema (tables, columns). The CLI generates RLS policies and triggers. **Drizzle:** Add `tenantsTable`, `tenantId`, and `.enableRLS()` to each tenant-scoped table in your Drizzle schema: ```ts import { pgTable, serial, text } from "drizzle-orm/pg-core"; import { tenantsTable, tenantId } from "@usebetterdev/tenant/drizzle"; export { tenantsTable }; export const projectsTable = pgTable("projects", { id: serial("id").primaryKey(), name: text("name").notNull(), ...tenantId, }).enableRLS(); ``` `...tenantId` adds a `tenant_id` column: `UUID NOT NULL`, references `tenants(id)`, with a default from the PostgreSQL session variable `app.current_tenant`. You can write it manually instead — see the [Manual tab in CLI & Migrations](https://docs.usebetter.dev/tenant/cli/). **Unique constraints:** If your tables have `UNIQUE` constraints (e.g., on `email` or `slug`), you likely need to convert them to per-tenant composites — see [Configuration — Unique constraints](https://docs.usebetter.dev/tenant/configuration/#unique-constraints). 
Then generate and apply migrations: ```bash # Schema migration (creates tenants table + tenant_id columns) npx drizzle-kit generate npx drizzle-kit migrate # RLS migration (policies + triggers) npx drizzle-kit generate --custom --name=better_tenant_rls --prefix=none npx @usebetterdev/tenant-cli migrate -o drizzle/_better_tenant_rls.sql npx drizzle-kit migrate ``` **Prisma:** Add the `Tenant` model and `tenantId` to each tenant-scoped model in `schema.prisma`: ```prisma model Tenant { id String @id @default(uuid()) @db.Uuid name String slug String @unique createdAt DateTime @default(now()) @map("created_at") @db.Timestamptz projects Project[] @@map("tenants") } model Project { id Int @id @default(autoincrement()) name String tenantId String @map("tenant_id") @db.Uuid tenant Tenant @relation(fields: [tenantId], references: [id]) @@map("projects") } ``` Then generate and apply migrations: ```bash # Schema migration (creates tenants table + tenant_id columns) npx prisma migrate dev --name setup # Create a draft migration for RLS (--create-only generates the file without applying) npx prisma migrate dev --create-only --name better_tenant_rls # Fill it with RLS policies + triggers npx @usebetterdev/tenant-cli migrate \ -o prisma/migrations/*_better_tenant_rls/migration.sql # Apply the RLS migration npx prisma migrate dev ``` 3. **Verify the setup** ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` The check command runs 10+ validations to confirm RLS is correctly configured. 4. **Seed a test tenant** ```bash npx @usebetterdev/tenant-cli seed --name "Acme Corp" --database-url $DATABASE_URL ``` ```ansi ✓ Created tenant: Acme Corp (acme-corp) d4f8e2a1-3b5c-4e7f-9a1d-6c8b2e4f0a3d ``` Copy the UUID — you'll use it to test your app in the last step. 5. 
**Wire up the tenant instance** **pg + Hono:** ```ts import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { Hono } from "hono"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from "@usebetterdev/tenant/drizzle"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; import { projectsTable } from "./schema"; // your Drizzle table definition const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const database = drizzle(pool); const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); const app = new Hono(); app.use("*", createHonoMiddleware(tenant)); app.get("/projects", async (c) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return c.json(projects); }); ``` **postgres.js + Hono:** ```ts import { drizzle } from "drizzle-orm/postgres-js"; import postgres from "postgres"; import { Hono } from "hono"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from "@usebetterdev/tenant/drizzle"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; import { projectsTable } from "./schema"; // your Drizzle table definition const client = postgres(process.env.DATABASE_URL); const database = drizzle(client); const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); const app = new Hono(); app.use("*", createHonoMiddleware(tenant)); app.get("/projects", async (c) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return c.json(projects); }); ``` **pg + Express:** ```ts import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import express from "express"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from "@usebetterdev/tenant/drizzle"; import { createExpressMiddleware } from 
"@usebetterdev/tenant/express"; import { projectsTable } from "./schema"; // your Drizzle table definition const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const database = drizzle(pool); const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); const app = express(); app.use(createExpressMiddleware(tenant)); app.get("/projects", async (req, res) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); res.json(projects); }); ``` **Prisma + Express:** ```ts import { PrismaClient } from "./generated/prisma/client.js"; import { PrismaPg } from "@prisma/adapter-pg"; import express from "express"; import { betterTenant } from "@usebetterdev/tenant"; import { prismaDatabase } from "@usebetterdev/tenant/prisma"; import { createExpressMiddleware } from "@usebetterdev/tenant/express"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); const app = express(); app.use(express.json()); app.use(createExpressMiddleware(tenant)); app.get("/projects", async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const projects = await db.project.findMany(); res.json(projects); }); ``` `getDatabase()` is fully typed via generics — no cast needed. See the [Prisma guide](https://docs.usebetter.dev/tenant/prisma/#getdatabase-typing) for details. 
**Prisma + Hono:** ```ts import { PrismaClient } from "./generated/prisma/client.js"; import { PrismaPg } from "@prisma/adapter-pg"; import { Hono } from "hono"; import { betterTenant } from "@usebetterdev/tenant"; import { prismaDatabase } from "@usebetterdev/tenant/prisma"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); const app = new Hono(); app.use("*", createHonoMiddleware(tenant)); app.get("/projects", async (c) => { const db = tenant.getDatabase(); if (!db) { return c.json({ error: "No tenant-scoped database" }, 500); } const projects = await db.project.findMany(); return c.json(projects); }); ``` **Prisma + Next.js:** ```ts // lib/tenant.ts import { PrismaClient } from "./generated/prisma/client.js"; import { PrismaPg } from "@prisma/adapter-pg"; import { betterTenant } from "@usebetterdev/tenant"; import { prismaDatabase } from "@usebetterdev/tenant/prisma"; const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); export const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); // app/api/projects/route.ts import { withTenant } from "@usebetterdev/tenant/next"; import { tenant } from "@/lib/tenant"; export const GET = withTenant(tenant, async () => { const db = tenant.getDatabase(); if (!db) { return Response.json({ error: "No tenant-scoped database" }, { status: 500 }); } const projects = await db.project.findMany(); return Response.json(projects); }); ``` **pg + Next.js:** ```ts // lib/tenant.ts import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from 
"@usebetterdev/tenant/drizzle"; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const database = drizzle(pool); export const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); // app/api/projects/route.ts import { withTenant } from "@usebetterdev/tenant/next"; import { tenant } from "@/lib/tenant"; import { projectsTable } from "@/schema"; // your Drizzle table definition export const GET = withTenant(tenant, async (request) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return Response.json(projects); }); ``` 6. **Test it** Use the UUID from the seed output: ```bash curl -H "x-tenant-id: d4f8e2a1-3b5c-4e7f-9a1d-6c8b2e4f0a3d" http://localhost:3000/projects ``` The response contains only projects belonging to that tenant. No `WHERE` clause needed — RLS handles it. ## What just happened? 1. Your ORM created the `tenants` table and `tenant_id` columns. The CLI generated RLS policies for the `tenants` lookup table (open SELECT, writes require `bypass_rls`) and RLS policies and triggers for your tenant-scoped tables. 2. Your app resolves the tenant ID from the `x-tenant-id` header on each request. 3. The adapter opens a transaction and runs `SELECT set_config('app.current_tenant', '', true)` — the function form of `SET LOCAL`. (Drizzle uses a Drizzle transaction; Prisma uses an interactive `$transaction`.) 4. RLS automatically filters every query to the current tenant's rows. 5. When the transaction commits, the session variable is cleared — safe for connection pooling. 
## Next steps - [How RLS Works](https://docs.usebetter.dev/tenant/how-rls-works/) — understand the full request lifecycle, why connection pooling is safe, and what Postgres guarantees you get for free - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, admin operations - [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/) — detailed usage for each framework - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — all CLI commands and workflows --- ## Configuration ## Tenant resolver The resolver determines how tenant identity is extracted from incoming requests. You configure it when creating the `betterTenant` instance. ### Resolution order When multiple strategies are configured, they are tried in this order: 1. **Header** — `x-tenant-id` (or custom header name) 2. **Path** — URL path segment (e.g., `/t/:tenantId/*`) 3. **Subdomain** — first subdomain (e.g., `acme.app.com` → `acme`) 4. **JWT** — claim from a decoded JWT 5. **Custom** — your own function The first strategy that returns a non-empty value wins. ### Strategies ```ts const tenant = betterTenant({ database: drizzleDatabase(db), tenantResolver: { // From a request header header: "x-tenant-id", // From a URL path segment path: "/t/:tenantId/*", // From subdomain (acme.app.com → "acme") subdomain: true, // From a JWT claim jwt: { claim: "tenant_id" }, // Custom function custom: (req) => extractTenantFromRequest(req), }, }); ``` You typically only need one strategy. The most common patterns: | Pattern | Strategy | Example | |---|---|---| | API with header | `header: "x-tenant-id"` | `curl -H "x-tenant-id: <uuid>"` | | Subdomain routing | `subdomain: true` | `acme.app.com` | | Path-based routing | `path: "/t/:tenantId/*"` | `/t/acme/projects` | | Auth-based | `jwt: { claim: "tenant_id" }` | Tenant ID embedded in token | ### Slug-to-UUID resolution RLS requires a UUID for `SET LOCAL`.
When your resolver returns a slug (like `"acme"` from a subdomain), UseBetter Tenant automatically looks it up in the tenants table and uses the matching UUID. This works out of the box — the database provider always includes a tenant repository: ```ts const tenant = betterTenant({ database: drizzleDatabase(db), tenantResolver: { subdomain: true }, }); // acme.app.com → extracts "acme" → finds tenant by slug → uses its UUID for RLS ``` If the identifier is already a UUID, it passes through unchanged — no lookup needed. ### Custom ID resolution For non-standard mappings (e.g., custom domains → tenant UUIDs), use `resolveToId`: ```ts tenantResolver: { custom: (req) => req.host, // "client.com" resolveToId: async (domain) => { const mapping = await lookupCustomDomain(domain); return mapping.tenantId; // UUID }, } ``` When `resolveToId` is provided, auto-resolution is skipped entirely. The library trusts your transform. ## Tenant API CRUD operations on the tenants table are available via `tenant.api`. All API calls run with RLS bypass (`runAsSystem`): ```ts // Create a tenant const created = await tenant.api.createTenant({ name: "Acme Corp", slug: "acme", }); // List tenants (paginated) const tenants = await tenant.api.listTenants({ limit: 20, offset: 0 }); // Update a tenant await tenant.api.updateTenant(created.id, { name: "Acme Inc", slug: "acme-inc", }); // Delete a tenant await tenant.api.deleteTenant(created.id); ``` | Method | Description | |---|---| | `createTenant({ name, slug })` | Create a tenant. Both fields required. Returns the created tenant. | | `listTenants({ limit?, offset? })` | List tenants. Default limit 50. | | `updateTenant(id, { name?, slug? })` | Update a tenant by ID. Returns updated tenant. | | `deleteTenant(id)` | Delete a tenant by ID. | > **Caution:** All API calls run with RLS bypass (`runAsSystem`). **Restrict these endpoints to admin users** — do not expose them to regular tenant users. 
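The slug-versus-UUID decision described above can be sketched in a few lines. Everything here is illustrative — the regex and the `resolveTenantId`/`getBySlug` names are assumptions, not the library's internals:

```typescript
// Illustrative sketch of the auto-resolution rule: UUIDs pass through
// untouched; anything else is treated as a slug and looked up.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

type SlugLookup = (slug: string) => Promise<string | undefined>;

async function resolveTenantId(
  identifier: string,
  getBySlug: SlugLookup,
): Promise<string> {
  if (UUID_RE.test(identifier)) return identifier; // already a UUID — no lookup
  const id = await getBySlug(identifier); // slug → UUID via the tenants table
  if (!id) throw new Error(`unknown tenant: ${identifier}`);
  return id;
}
```

With a custom `resolveToId`, your function replaces this whole path — the library trusts whatever UUID you return.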
## Admin operations ### `runAs` — impersonate a tenant Run a function as a specific tenant. Useful for background jobs and cron tasks: **Drizzle:** ```ts await tenant.runAs(tenantId, async (db) => { const projects = await db.select().from(projectsTable); // scoped to the specified tenant }); ``` **Prisma:** ```ts await tenant.runAs(tenantId, async (db) => { const projects = await db.project.findMany(); // scoped to the specified tenant }); ``` The `tenantId` must be a valid UUID. `runAs` does not resolve slugs. ### `runAsSystem` — bypass RLS Run a function with RLS bypass for cross-tenant operations: **Drizzle:** ```ts await tenant.runAsSystem(async (db) => { const allProjects = await db.select().from(projectsTable); // returns projects from ALL tenants }); ``` **Prisma:** ```ts await tenant.runAsSystem(async (db) => { const allProjects = await db.project.findMany(); // returns projects from ALL tenants }); ``` > **Caution:** `runAsSystem` bypasses all tenant isolation. Use it only for admin dashboards, cron jobs, migrations, and system-level operations. If you expose an endpoint using `runAsSystem`, protect it with authentication and authorization. ## Context access Inside a tenant-scoped request or `runAs` call, you can access the current tenant context from anywhere in the call tree: ```ts // Get the current tenant context const ctx = tenant.getContext(); ctx.tenantId; // "550e8400-..." ctx.tenant; // { id, name, slug, createdAt } ctx.isSystem; // false (true inside runAsSystem) // Get the tenant-scoped database handle const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); ``` Both `getContext()` and `getDatabase()` return `undefined` when called outside a tenant scope (e.g., outside middleware or a `runAs` block). See [Troubleshooting — Context is undefined](https://docs.usebetter.dev/tenant/troubleshooting/#context-is-undefined) for common causes. 
This works because UseBetter Tenant uses `AsyncLocalStorage` to propagate context through the call stack. No need to pass the database handle or tenant ID through function arguments. ### getDatabase() with Prisma `getDatabase()` is fully typed via generics when you pass a typed `PrismaClient` to `prismaDatabase()`. No casting needed: ```ts const db = tenant.getDatabase(); if (!db) { throw new Error("No tenant-scoped database"); } const projects = await db.project.findMany(); // Full type safety ``` See the [Prisma guide](https://docs.usebetter.dev/tenant/prisma/#getdatabase-typing) for the recommended wrapper pattern. ## Non-tenant tables Not every table in your application needs tenant isolation. Lookup tables, feature flags, global settings, or shared reference data typically have no `tenant_id` column and no RLS policies. **These tables work through `tenant.getDatabase()` with no extra setup.** RLS is opt-in per table in Postgres. The `SET LOCAL app.current_tenant` variable is set on the transaction, but only tables with `ENABLE ROW LEVEL SECURITY` and a matching policy actually filter rows. Tables without RLS ignore the session variable entirely — all rows are visible. ```ts app.get("/projects", async (c) => { const db = tenant.getDatabase(); // Tenant-scoped (table has RLS) → only this tenant's rows const projects = await db.select().from(projectsTable); // Shared (no RLS on this table) → all rows visible const categories = await db.select().from(categoriesTable); return c.json({ projects, categories }); }); ``` ### Recommended: wrap `getDatabase()` as your default database handle Since `tenant.getDatabase()` returns a standard ORM transaction handle, you can use it as **the single database access point** for all queries in a request — tenant-scoped and shared alike. 
A thin wrapper makes this ergonomic: ```ts title="src/database.ts" export function getDatabase() { const database = tenant.getDatabase(); if (!database) { throw new Error( "No active tenant context — call getDatabase() inside a request or runAs/runAsSystem block", ); } return database; } ``` Then use `getDatabase()` everywhere in your handlers and services: ```ts title="src/handlers/projects.ts" export async function listProjects() { const projects = await getDatabase().select().from(projectsTable); // RLS-filtered const categories = await getDatabase().select().from(categoriesTable); // no RLS, all rows return { projects, categories }; } ``` This pattern gives you: - **Single access point** — no separate connection pool for shared tables, no confusion about which database handle to use. - **Consistent transactions** — reads from shared tables participate in the same transaction as tenant-scoped reads, giving you a consistent snapshot. > **Note:** If you need to query the database **outside** a request context entirely (e.g., a health-check endpoint or a standalone script that doesn't use middleware), use a separate database connection. For background jobs and cron, prefer `tenant.runAs()` or `tenant.runAsSystem()` instead. ## Unique constraints When adding `tenant_id` to existing tables, you need to decide how existing `UNIQUE` constraints should behave: - **Per-tenant uniqueness** (most common): Two tenants can have users with the same email. Convert the constraint to a composite `UNIQUE(tenant_id, email)`. - **Global uniqueness**: No two users across any tenant can share an email. Keep the existing `UNIQUE(email)` constraint as-is. UseBetter Tenant does not modify unique constraints automatically — you must update your schema. 
**Drizzle:** ```ts import { pgTable, uuid, varchar, unique } from "drizzle-orm/pg-core"; import { tenantId } from "@usebetterdev/tenant/drizzle"; // Per-tenant unique email export const usersTable = pgTable("users", { id: uuid("id").defaultRandom().primaryKey(), ...tenantId, email: varchar("email", { length: 255 }).notNull(), }, (table) => [ unique().on(table.tenantId, table.email), ]).enableRLS(); ``` This replaces the original `UNIQUE(email)` with `UNIQUE(tenant_id, email)`. Two tenants can now have users with the same email, but the same email cannot appear twice within a single tenant. **Prisma:** ```prisma model User { id String @id @default(uuid()) @db.Uuid tenantId String @map("tenant_id") @db.Uuid email String tenant Tenant @relation(fields: [tenantId], references: [id]) @@unique([tenantId, email]) @@map("users") } ``` The `@@unique([tenantId, email])` replaces a simple `@unique` on `email`. Remove the original `@unique` from the `email` field. > **Caution:** If you keep a global `UNIQUE` constraint (e.g., `UNIQUE(email)` without `tenant_id`), inserting a user with the same email in a different tenant will fail with a unique violation error. This is correct if you want global uniqueness, but unexpected if you assumed tenant isolation covers it — RLS filters reads, not constraint checks. ## Telemetry UseBetter Tenant collects anonymous telemetry by default (library version and runtime info). Opt out with: ```ts const tenant = betterTenant({ // ... telemetry: { enabled: false }, }); ``` Or via environment variable: ```bash BETTER_TENANT_TELEMETRY=0 ``` ## Console integration UseBetter Tenant integrates seamlessly with [UseBetter Console](https://docs.usebetter.dev/console/getting-started/) to provide a web-based admin dashboard for your tenants. 
Pass your `betterConsole` instance to the `console` configuration option: ```ts const consoleInstance = betterConsole({ connectionTokenHash: process.env.BETTER_CONSOLE_TOKEN_HASH!, sessions: { autoApprove: process.env.NODE_ENV === "development" }, }); const tenant = betterTenant({ database: ..., tenantResolver: ..., console: consoleInstance, // <--- Registers tenant endpoints automatically }); ``` This automatically registers the `tenant` product with the console, exposing endpoints for listing, creating, updating, and deleting tenants via the Console UI. ## Next steps - [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/) — per-adapter setup for Drizzle, Prisma, Hono, Express, and Next.js - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — generate migrations, verify RLS, and seed tenants - [Architecture](https://docs.usebetter.dev/tenant/architecture/) — how transaction-scoped RLS works under the hood --- ## Framework Adapters UseBetter Tenant has two kinds of adapters: - **ORM adapters** — handle transactions, `SET LOCAL`, and RLS bypass - **Framework adapters** — middleware that resolves the tenant and delegates to the ORM adapter ## ORM adapters ### Drizzle The Drizzle adapter wraps your queries in a transaction with `set_config('app.current_tenant', '<tenant-id>', true)` — the function form of `SET LOCAL`. It works with any Postgres driver that Drizzle supports.
**pg:** ```ts import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from "@usebetterdev/tenant/drizzle"; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const database = drizzle(pool); export const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); ``` **postgres.js:** ```ts import { drizzle } from "drizzle-orm/postgres-js"; import postgres from "postgres"; import { betterTenant } from "@usebetterdev/tenant"; import { drizzleDatabase } from "@usebetterdev/tenant/drizzle"; const client = postgres(process.env.DATABASE_URL); const database = drizzle(client); export const tenant = betterTenant({ database: drizzleDatabase(database), tenantResolver: { header: "x-tenant-id" }, }); ``` `drizzleDatabase(database)` bundles the adapter and tenant repository into a single `database` provider. If you use a custom tenants table, pass it via the `table` option: ```ts export const tenant = betterTenant({ database: drizzleDatabase(database, { table: myCustomTenantsTable }), tenantResolver: { header: "x-tenant-id" }, }); ``` **What it does under the hood:** - `runWithTenant(tenantId, fn)` — opens a Drizzle transaction, runs `SELECT set_config('app.current_tenant', '<tenant-id>', true)`, executes `fn` with the transaction handle, then commits. - `runAsSystem(fn)` — opens a transaction with `SELECT set_config('app.bypass_rls', 'true', true)` for admin operations. **Tenant repository:** `drizzleDatabase()` includes a built-in tenant repository that provides `getBySlug` for slug-to-UUID resolution and powers the `tenant.api` CRUD operations. The tenants table must match the CLI-generated schema: `id` (UUID), `name`, `slug`, `created_at`. **Peer dependencies:** `drizzle-orm` and either `pg` or `postgres` ### Prisma The Prisma adapter uses interactive transactions with `$executeRaw` for session variables.
Requires **Prisma 7+** with `@prisma/adapter-pg`. ```ts const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); export const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); ``` `prismaDatabase(prisma)` bundles the adapter and tenant repository into a single `database` provider. If you use a custom table name, pass it via the `tableName` option: ```ts export const tenant = betterTenant({ database: prismaDatabase(prisma, { tableName: "my_tenants" }), tenantResolver: { header: "x-tenant-id" }, }); ``` **What it does under the hood:** - `runWithTenant(tenantId, fn)` — opens a Prisma interactive `$transaction`, runs `SET LOCAL` via `$executeRaw`, executes `fn` with the transaction client. - `runAsSystem(fn)` — same pattern with `app.bypass_rls = 'true'`. **getDatabase() return type:** `getDatabase()` is fully typed via generics — when you pass a typed `PrismaClient` to `prismaDatabase()`, the transaction client type flows through to `getDatabase()` with all model methods. No casting needed. See the [Prisma guide](https://docs.usebetter.dev/tenant/prisma/#getdatabase-typing) for the wrapper pattern. **Peer dependencies:** `@prisma/client` (>= 7.0.0) and `@prisma/adapter-pg` (>= 7.0.0) For connection customization (SSL, pool settings), see the [Prisma guide](https://docs.usebetter.dev/tenant/prisma/#connection-settings). --- ## Framework adapters Framework adapters are middleware that: 1. Extract the tenant identifier from the request using your configured resolver 2. Resolve the identifier to a UUID (slug lookup if needed) 3. Delegate to the ORM adapter to open a transaction with `SET LOCAL` 4. 
Run your handler inside that transaction ### Hono **Drizzle:** ```ts import { Hono } from "hono"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; import { tenant } from "./tenant"; // your betterTenant instance import { projectsTable } from "./schema"; const app = new Hono(); // Apply to specific routes app.use("/api/*", createHonoMiddleware(tenant)); app.get("/api/projects", async (c) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return c.json(projects); }); ``` **Prisma:** ```ts import { Hono } from "hono"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; import { tenant } from "./tenant"; const app = new Hono(); app.use("/api/*", createHonoMiddleware(tenant)); app.get("/api/projects", async (c) => { const db = tenant.getDatabase(); if (!db) { return c.json({ error: "No tenant-scoped database" }, 500); } const projects = await db.project.findMany(); return c.json(projects); }); ``` ### Express **Drizzle:** ```ts import express from "express"; import { createExpressMiddleware } from "@usebetterdev/tenant/express"; import { tenant } from "./tenant"; // your betterTenant instance import { projectsTable } from "./schema"; const app = express(); // Apply to specific routes app.use("/api", createExpressMiddleware(tenant)); app.get("/api/projects", async (req, res) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); res.json(projects); }); ``` **Prisma:** ```ts import express from "express"; import { createExpressMiddleware } from "@usebetterdev/tenant/express"; import { tenant } from "./tenant"; const app = express(); app.use(express.json()); app.use("/api", createExpressMiddleware(tenant)); app.get("/api/projects", async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const projects = await db.project.findMany(); res.json(projects); }); app.post("/api/projects", 
async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const project = await db.project.create({ data: { name: req.body.name } }); res.status(201).json(project); }); ``` ### Next.js App Router Next.js uses a per-route wrapper instead of global middleware: **Drizzle:** ```ts // app/api/projects/route.ts import { withTenant } from "@usebetterdev/tenant/next"; import { tenant } from "@/lib/tenant"; import { projectsTable } from "@/schema"; export const GET = withTenant(tenant, async (request) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return Response.json(projects); }); export const POST = withTenant(tenant, async (request) => { const body = await request.json(); const db = tenant.getDatabase(); await db.insert(projectsTable).values(body); return Response.json({ ok: true }, { status: 201 }); }); ``` **Prisma:** ```ts // app/api/projects/route.ts import { withTenant } from "@usebetterdev/tenant/next"; import { tenant } from "@/lib/tenant"; export const GET = withTenant(tenant, async () => { const db = tenant.getDatabase(); if (!db) { return Response.json({ error: "No tenant-scoped database" }, { status: 500 }); } const projects = await db.project.findMany(); return Response.json(projects); }); export const POST = withTenant(tenant, async (request) => { const body = await request.json(); const db = tenant.getDatabase(); if (!db) { return Response.json({ error: "No tenant-scoped database" }, { status: 500 }); } const project = await db.project.create({ data: body }); return Response.json(project, { status: 201 }); }); ``` ## Mixing tenant and non-tenant routes Most apps have routes that don't need tenant context — health checks, global config, onboarding, authentication. 
The tenant middleware **requires** a valid tenant identifier on every request it handles, so applying it too broadly (e.g., `app.use("*", ...)`) will reject requests to non-tenant routes. **The fix: scope the middleware to only the routes that need it.** Non-tenant routes use your database connection directly — they don't go through the tenant middleware at all. **Hono:** ```ts import { Hono } from "hono"; import { createHonoMiddleware } from "@usebetterdev/tenant/hono"; import { tenant } from "./tenant"; import { db } from "./database"; import { projectsTable, globalConfigTable } from "./schema"; const app = new Hono(); // Non-tenant routes — no middleware, use db directly app.get("/api/health", (c) => c.json({ ok: true })); app.get("/api/global-config", async (c) => { const config = await db.select().from(globalConfigTable); return c.json(config); }); // Tenant routes — middleware scoped to these paths app.use("/api/projects/*", createHonoMiddleware(tenant)); app.use("/api/tasks/*", createHonoMiddleware(tenant)); app.get("/api/projects", async (c) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return c.json(projects); }); ``` **Express:** ```ts import express from "express"; import { createExpressMiddleware } from "@usebetterdev/tenant/express"; import { tenant } from "./tenant"; import { db } from "./database"; import { projectsTable, globalConfigTable } from "./schema"; const app = express(); // Non-tenant routes — no middleware, use db directly app.get("/api/health", (req, res) => res.json({ ok: true })); app.get("/api/global-config", async (req, res) => { const config = await db.select().from(globalConfigTable); res.json(config); }); // Tenant routes — middleware scoped via router const tenantRouter = express.Router(); tenantRouter.use(createExpressMiddleware(tenant)); tenantRouter.get("/projects", async (req, res) => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); 
res.json(projects); }); app.use("/api", tenantRouter); ``` **Next.js:** Next.js uses per-route wrappers, so this works naturally — only wrap routes that need tenant context: ```ts // app/api/projects/route.ts — tenant-scoped import { withTenant } from "@usebetterdev/tenant/next"; import { tenant } from "@/lib/tenant"; import { projectsTable } from "@/schema"; export const GET = withTenant(tenant, async () => { const db = tenant.getDatabase(); const projects = await db.select().from(projectsTable); return Response.json(projects); }); ``` ```ts // app/api/global-config/route.ts — no tenant needed import { db } from "@/lib/database"; import { globalConfigTable } from "@/schema"; export async function GET() { const config = await db.select().from(globalConfigTable); return Response.json(config); } ``` > **Caution:** Avoid `app.use("*", createHonoMiddleware(tenant))` or `app.use(createExpressMiddleware(tenant))` without a path — this forces tenant resolution on every route, including non-tenant ones. Requests without a tenant identifier will be rejected with a 404. ## Choosing your stack | ORM | Driver | Framework | Best for | |---|---|---|---| | Drizzle | pg | Hono | Traditional Node.js APIs | | Drizzle | postgres.js | Hono | Lightweight APIs, edge-compatible | | Drizzle | pg | Express | Existing Express apps, REST APIs | | Drizzle | pg / postgres.js | Next.js | Full-stack React apps | | Prisma | (managed by Prisma) | Express | Prisma-first projects, existing Express apps | | Prisma | (managed by Prisma) | Hono | Prisma-first projects, lightweight APIs | | Prisma | (managed by Prisma) | Next.js | Next.js + Prisma projects | For Prisma-specific details (schema requirements, `getDatabase()` typing, migration workflow), see the dedicated [Prisma guide](https://docs.usebetter.dev/tenant/prisma/). 
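Whichever stack you pick, the framework adapters all reduce to the same four steps listed at the start of this section. A framework-agnostic sketch — `makeMiddleware`, `IncomingReq`, and the callback types are hypothetical names standing in for the real adapters:

```typescript
// Framework-agnostic sketch of what every adapter's middleware does —
// illustrative only, not the library's implementation.
interface IncomingReq { headers: Record<string, string> }

type ResolveToUuid = (identifier: string) => Promise<string>;
type RunWithTenant = (tenantId: string, fn: () => Promise<void>) => Promise<void>;

function makeMiddleware(resolveToUuid: ResolveToUuid, runWithTenant: RunWithTenant) {
  return async (req: IncomingReq, next: () => Promise<void>): Promise<void> => {
    const identifier = req.headers["x-tenant-id"]; // 1. extract the identifier
    if (!identifier) throw new Error("no tenant identifier on request");
    const tenantId = await resolveToUuid(identifier); // 2. slug → UUID if needed
    await runWithTenant(tenantId, next); // 3 + 4. SET LOCAL transaction, handler inside it
  };
}
```

The real adapters differ only in how they plug this shape into each framework's middleware or route-wrapper API.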
## Next steps - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, and admin operations - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — generate migrations, verify RLS, and seed tenants - [Architecture](https://docs.usebetter.dev/tenant/architecture/) — how transaction-scoped RLS and bypass work under the hood --- ## Prisma Guide > **Prisma 7+ required:** This adapter requires **Prisma 7** with the `prisma-client` generator and `@prisma/adapter-pg`. All examples import `PrismaClient` from the generated output path (e.g., `./generated/prisma/client.js`) — adjust to match your `output` configuration in `schema.prisma`. This guide covers Prisma-specific details that go beyond the general [Quick Start](https://docs.usebetter.dev/tenant/quick-start/). If you haven't set up UseBetter Tenant yet, start there first. ## Schema requirements ### Tenant model (optional) UseBetter Tenant manages the tenants table via raw SQL — you don't _need_ a Prisma model for it. But adding one gives you type-safe queries and relations: ```prisma model Tenant { id String @id @default(uuid()) @db.Uuid name String slug String @unique createdAt DateTime @default(now()) @map("created_at") @db.Timestamptz users User[] projects Project[] @@map("tenants") } ``` The table must have columns `id` (UUID), `name`, `slug` (unique), and `created_at` — matching the CLI-generated schema. 
### tenant_id on scoped models Every tenant-scoped model needs a `tenantId` field mapped to the `tenant_id` column with a `@default(dbgenerated(...))` that reads the session variable: **Without Tenant relation (recommended):** ```prisma model Project { id Int @id @default(autoincrement()) name String tenantId String @default(dbgenerated("(current_setting('app.current_tenant'::text, true))::uuid")) @map("tenant_id") @db.Uuid @@map("projects") } ``` With this pattern, Prisma generates `tenantId` as **optional** in `ProjectCreateInput` and `ProjectUncheckedCreateInput`. You can omit it from `create()` calls — the trigger fills it at INSERT time: ```ts await db.project.create({ data: { name: "Project Alpha" } }); // ✅ compiles ``` The DB-level foreign key constraint (created by the CLI migration) still enforces referential integrity. You lose the ability to do `include: { tenant: true }` in Prisma queries, but inside a tenant-scoped context you already know which tenant you're working with. **With Tenant relation:** ```prisma model Project { id Int @id @default(autoincrement()) name String tenantId String @default(dbgenerated("(current_setting('app.current_tenant'::text, true))::uuid")) @map("tenant_id") @db.Uuid tenant Tenant @relation(fields: [tenantId], references: [id]) @@map("projects") } ``` This gives you `include: { tenant: true }` support and Prisma-level relation queries. > **tenantId is required in TypeScript types:** When a model has a `@relation` to `Tenant`, Prisma generates `tenantId` as **required** in `UncheckedCreateInput` and replaces it with a required `tenant` relation in `CreateInput` — regardless of `@default(dbgenerated(...))`. This is a Prisma limitation: `dbgenerated()` defaults are not recognized as making FK fields optional when a relation exists. > > At the **SQL level**, the trigger still fills `tenant_id` automatically. The issue is only at the **TypeScript type level**. 
> > Use the `WithOptionalTenant` utility type exported from `@usebetterdev/tenant/prisma` to work around this: > > ```ts > // Type-safe wrapper — tenantId becomes optional > type CreateProject = WithOptionalTenant<ProjectUncheckedCreateInput>; > > const data: CreateProject = { name: "Project Alpha" }; > await db.project.create({ data: data as ProjectUncheckedCreateInput }); > ``` > > Or pass `tenantId` explicitly — the trigger overrides the value anyway: > > ```ts > const tenantId = tenant.getContext()!.tenantId; > await db.project.create({ data: { name: "Project Alpha", tenantId } }); > ``` > **Always use @map("tenant_id"):** The RLS policies and `set_tenant_id()` trigger reference the column as `tenant_id` (snake_case). Prisma's default camelCase mapping (`tenantId` → `tenantId` column) will **not** match. Always add `@map("tenant_id")`. > **The trigger handles tenant_id at INSERT time:** The `set_tenant_id()` trigger auto-populates `tenant_id` on INSERT from `current_setting('app.current_tenant')`. Inside a tenant-scoped request or `runAs` block, the value is always set correctly. The trigger also **prevents overriding** — even if you pass a different `tenantId`, the trigger sets it to the current session tenant. ### Table name mapping Prisma maps model names directly to PostgreSQL table names by default: model `User` → table `"User"` (PascalCase, double-quoted). This works but can cause confusion with the CLI config. **Recommendation:** Add `@@map("lowercase_name")` to all models: ```prisma model User { // ... @@map("users") } ``` This makes PostgreSQL table names lowercase, matching standard conventions and simplifying CLI config. See [CLI table names](#cli-table-names) below.
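The mapping rule above is small enough to state as code: `@@map` wins, otherwise the PascalCase model name is used verbatim (and is case-sensitive in Postgres, hence the quoting). A hypothetical helper, not part of the CLI:

```typescript
// Hypothetical illustration of Prisma's table-naming rule described above:
// with @@map the mapped name is the Postgres table name; without it, the
// model name is used as-is (case-sensitive, so it appears quoted in SQL).
function pgTableName(modelName: string, mappedName?: string): string {
  return mappedName ?? modelName;
}
```

Whatever this rule yields for each model is the exact string that belongs in the CLI's `tenantTables` config.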
### Unique constraints With multi-tenancy, unique constraints should typically be scoped per tenant: ```prisma model User { id String @id @default(uuid()) @db.Uuid email String tenantId String @default(dbgenerated("(current_setting('app.current_tenant'::text, true))::uuid")) @map("tenant_id") @db.Uuid @@unique([tenantId, email]) // Two tenants can have the same email @@map("users") } ``` Remove any `@unique` on the `email` field — replace it with the composite `@@unique([tenantId, email])`. See [Configuration — Unique constraints](https://docs.usebetter.dev/tenant/configuration/#unique-constraints) for details. ### All referencing tables need RLS If `Task` belongs to a tenant-aware `Project`, `Task` also needs RLS. Without it, a query like `db.task.findMany()` returns tasks from all tenants — data leaks through child tables even if the parent is protected. **Rule of thumb:** if a table has a foreign key to a tenant-scoped table, it should be tenant-scoped too. > **Child table data leak example:** Consider a `Comment` model with foreign keys to `User` and `Task` (both tenant-aware). Without RLS on `Comment`: > - `db.comment.findMany()` returns comments from **all** tenants > - Only the joined/included User and Task data gets filtered by their own RLS > - Comment `body` text leaks across tenants > > **If you choose not to add RLS to a child table**, never expose a top-level query for it. Always query through a tenant-scoped parent: > ```ts > // Safe — RLS filters Task first, only current tenant's tasks + comments returned > const tasks = await db.task.findMany({ include: { comments: true } }); > > // UNSAFE — returns comments from all tenants > const comments = await db.comment.findMany(); > ``` > > The recommended approach is to add `tenantId` + RLS to all child tables. ## getDatabase() typing `prismaDatabase(prisma)` preserves the Prisma transaction client type through generics. 
When you pass a typed `PrismaClient`, `getDatabase()` returns the full transaction client with all model methods — no manual casting needed: ```ts title="src/tenant.ts" const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); export const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); ``` ```ts title="src/routes/projects.ts" app.get("/projects", async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const projects = await db.project.findMany(); // Full autocomplete + types res.json(projects); }); ``` ### Wrapper pattern For cleaner handlers, create a wrapper that throws on missing context: ```ts title="src/database.ts" export function getDatabase() { const db = tenant.getDatabase(); if (!db) { throw new Error( "No active tenant context — call getDatabase() inside a request or runAs/runAsSystem block" ); } return db; } ``` ```ts title="src/routes/projects.ts" app.get("/projects", async (req, res) => { const db = getDatabase(); const projects = await db.project.findMany(); res.json(projects); }); app.post("/projects", async (req, res) => { const db = getDatabase(); const project = await db.project.create({ data: { name: req.body.name } }); res.status(201).json(project); }); ``` ## CLI table names The CLI config `tenantTables` must use **actual PostgreSQL table names**, not Prisma model names. 
| Prisma model | `@@map` | PG table | Config value |
|---|---|---|---|
| `User` | _(none)_ | `"User"` | `"User"` |
| `User` | `@@map("users")` | `users` | `"users"` |
| `Project` | `@@map("projects")` | `projects` | `"projects"` |

**With `@@map` (recommended):**

```json title="better-tenant.config.json"
{ "tenantTables": ["users", "projects", "tasks", "comments"] }
```

**Without `@@map` (Prisma defaults):**

```json title="better-tenant.config.json"
{ "tenantTables": ["User", "Project", "Task", "Comment"] }
```

The CLI wraps names via `quoteIdent()` in the generated SQL, so both formats work. Just match the actual PostgreSQL table name.

## Migrating existing data

If your database already has rows, adding a `tenant_id NOT NULL` column will fail — existing rows have no value for the new column. Use a multi-step migration:

1. **Add the column as nullable** (edit the generated migration SQL before applying, or use a separate migration):

   ```sql
   ALTER TABLE "projects" ADD COLUMN "tenant_id" UUID;
   ```

2. **Create a default tenant** to assign existing data to:

   ```bash
   npx @usebetterdev/tenant-cli seed --name "Default" --slug "default" --database-url $DATABASE_URL
   ```

3. **Backfill existing rows** with the default tenant's ID:

   ```sql
   UPDATE "projects" SET "tenant_id" = '<default-tenant-id>' WHERE "tenant_id" IS NULL;
   ```

4.
**Add the NOT NULL constraint and foreign key:**

   ```sql
   ALTER TABLE "projects" ALTER COLUMN "tenant_id" SET NOT NULL;
   ALTER TABLE "projects" ADD CONSTRAINT "projects_tenant_id_fkey"
     FOREIGN KEY ("tenant_id") REFERENCES "tenants"("id");
   ```

With Prisma, use [`--create-only`](https://www.prisma.io/docs/orm/prisma-migrate/workflows/customizing-migrations) to generate a draft migration, then edit the SQL to include the backfill steps before applying:

```bash
# Generate draft migration (does NOT apply yet)
npx prisma migrate dev --create-only --name add_multitenancy

# Edit prisma/migrations/*_add_multitenancy/migration.sql:
# - Change "ADD COLUMN tenant_id UUID NOT NULL" to "ADD COLUMN tenant_id UUID"
# - Add UPDATE ... SET tenant_id = '<default-tenant-id>' after each ADD COLUMN
# - Add ALTER COLUMN tenant_id SET NOT NULL after each UPDATE

# Apply the edited migration
npx prisma migrate dev
```

For a fresh database (development), you can skip this — the column add with `NOT NULL` works when the table is empty.

## Migration workflow

Prisma doesn't support custom SQL migration files like Drizzle's `--custom` flag. RLS policies and triggers must be applied alongside Prisma's migration system.

### Recommended: embed in Prisma migration history

This workflow uses Prisma's [`--create-only`](https://www.prisma.io/docs/orm/prisma-migrate/workflows/customizing-migrations) flag to create a draft migration you can edit before applying.

Prisma 7 uses `prisma.config.ts` for database URL and CLI configuration. Create it at your project root:

```ts title="prisma.config.ts"
import "dotenv/config";
import { defineConfig, env } from "prisma/config";

export default defineConfig({
  schema: "prisma/schema.prisma",
  migrations: {
    path: "prisma/migrations",
  },
  datasource: {
    url: env("DATABASE_URL"),
  },
});
```

1. **Run your schema migration** (adds `tenantId` columns, Tenant model, etc.):

   ```bash
   npx prisma migrate dev --name add_multitenancy
   ```

2.
**Create a draft migration** for RLS (generates an empty migration file you'll fill with RLS SQL): ```bash npx prisma migrate dev --create-only --name better_tenant_rls ``` 3. **Replace the empty migration SQL with RLS policies:** ```bash npx @usebetterdev/tenant-cli migrate \ -o prisma/migrations/*_better_tenant_rls/migration.sql ``` 4. **Apply the RLS migration:** ```bash npx prisma migrate dev ``` 5. **Verify:** ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` ### Alternative: apply directly via psql If you don't need RLS tracked in Prisma's migration history: ```bash npx @usebetterdev/tenant-cli migrate -o ./rls psql $DATABASE_URL -f ./rls/*_better_tenant.sql ``` ### Adding a table later When adding a new tenant-scoped model after initial setup: 1. **Add the model** to `schema.prisma` with `tenantId`, `@default(dbgenerated(...))`, and `@@map`: ```prisma model Comment { id Int @id @default(autoincrement()) body String tenantId String @default(dbgenerated("(current_setting('app.current_tenant'::text, true))::uuid")) @map("tenant_id") @db.Uuid @@map("comments") } ``` 2. **Apply schema migration:** ```bash npx prisma migrate dev --name add_comments ``` 3. **Create a draft RLS migration:** ```bash npx prisma migrate dev --create-only --name comments_rls npx @usebetterdev/tenant-cli add-table comments \ -o prisma/migrations/*_comments_rls/migration.sql ``` 4. **Apply and verify:** ```bash npx prisma migrate dev npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` 5. **Add** `"comments"` to `tenantTables` in your config. ## Connection settings Prisma 7 requires `@prisma/adapter-pg` for PostgreSQL. 
You can customize SSL and connection pool behaviour via the adapter options: ```ts const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL, ssl: { rejectUnauthorized: false }, // for self-signed certs }); const prisma = new PrismaClient({ adapter }); ``` The `SET LOCAL` inside the transaction is handled at the PostgreSQL protocol level, so it works correctly through the adapter. ## Generated client import path Prisma 7 requires a custom `output` path in the generator block — `PrismaClient` is no longer available from `@prisma/client`: ```prisma generator client { provider = "prisma-client" output = "../generated/prisma" } ``` The import path depends on your `output` configuration and the file you're importing from. `prismaDatabase()` accepts any object matching `PrismaClientLike` (supports `$transaction`, `$executeRaw`, etc.) — it doesn't care where `PrismaClient` was imported from. ## Seeding Post-tenancy, all inserts to tenant-scoped tables need tenant context. The `set_tenant_id()` trigger requires `app.current_tenant` to be set. 
### Via CLI ```bash npx @usebetterdev/tenant-cli seed --name "Acme Corp" --database-url $DATABASE_URL # → Created tenant: Acme Corp (acme-corp) # → d4f8e2a1-3b5c-4e7f-9a1d-6c8b2e4f0a3d ``` ### Via tenant API + runAs ```ts title="prisma/seed.ts" async function main() { // Create tenants (runs via runAsSystem internally) const acme = await tenant.api.createTenant({ name: "Acme Corp", slug: "acme" }); // Seed tenant-scoped data — db is fully typed via generics await tenant.runAs(acme.id, async (db) => { await db.user.create({ data: { email: "admin@acme.com", name: "Admin" }, }); await db.project.create({ data: { name: "Project Alpha" }, }); }); } main(); ``` ## Full example: Express + Prisma ```ts title="src/tenant.ts" const adapter = new PrismaPg({ connectionString: process.env.DATABASE_URL }); const prisma = new PrismaClient({ adapter }); export const tenant = betterTenant({ database: prismaDatabase(prisma), tenantResolver: { header: "x-tenant-id" }, }); ``` ```ts title="src/index.ts" const app = express(); app.use(express.json()); // Health check — no tenant needed app.get("/health", (req, res) => res.json({ ok: true })); // Admin routes — no tenant middleware app.post("/admin/tenants", async (req, res) => { const created = await tenant.api.createTenant({ name: req.body.name, slug: req.body.slug, }); res.status(201).json(created); }); app.get("/admin/tenants", async (req, res) => { const tenants = await tenant.api.listTenants(); res.json(tenants); }); // Tenant-scoped routes app.use(createExpressMiddleware(tenant)); app.get("/projects", async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const projects = await db.project.findMany(); res.json(projects); }); app.post("/projects", async (req, res) => { const db = tenant.getDatabase(); if (!db) { return res.status(500).json({ error: "No tenant-scoped database" }); } const project = await db.project.create({ data: { name: req.body.name 
} });
  res.status(201).json(project);
});

app.get("/api/tenant", (req, res) => {
  const ctx = tenant.getContext();
  res.json({ tenantId: ctx?.tenantId ?? null, tenant: ctx?.tenant ?? null });
});

app.listen(3000, () => console.log("Listening on http://localhost:3000"));
```

Test it:

```bash
# Create a tenant
curl -X POST http://localhost:3000/admin/tenants \
  -H "Content-Type: application/json" \
  -d '{"name": "Acme Corp", "slug": "acme"}'

# Query with tenant header (use the id returned by the create call)
curl -H "x-tenant-id: <tenant-id>" http://localhost:3000/projects

# Without header → 404
curl http://localhost:3000/projects
```

## Common gotchas

| Issue | Cause | Fix |
|---|---|---|
| RLS returns all rows | Connecting as superuser (`postgres`) | Use a [non-superuser role](https://docs.usebetter.dev/tenant/quick-start/#prerequisites) |
| `tenant_id` column not found by trigger | Missing `@map("tenant_id")` on `tenantId` field | Add `@map("tenant_id")` to every tenant-scoped model |
| CLI config doesn't match tables | Using Prisma model names instead of PG table names | Use `@@map("lowercase")` and match in config |
| `@unique` on email fails across tenants | Unique not scoped to tenant | Use `@@unique([tenantId, email])` instead |
| Data leaks through child tables | Child table (Task, Comment) missing RLS | Add `tenantId` + RLS to all tables referencing tenant-aware parents |
| `getDatabase()` has no model methods | Generic inference failed or untyped `PrismaClient` | Pass a typed `PrismaClient` to `prismaDatabase()` — [see above](#getdatabase-typing) |
| Prisma migration doesn't include RLS | Prisma has no custom SQL migration support | Apply RLS separately ([see workflow](#migration-workflow)) |
| Seed script fails post-tenancy | Inserts need `app.current_tenant` set | Use `tenant.runAs()` or `tenant.api.createTenant()` |
| `NOT NULL` column add fails | Existing rows have no `tenant_id` | [Migrate existing data](#migrating-existing-data) — add nullable, backfill, then set NOT NULL |
| Generated client import path |
Imports not from `@prisma/client` | `prismaDatabase()` accepts any `PrismaClientLike` instance — [see above](#generated-client-import-path) | | `tenantId` required in `create()` types | Model has `@relation` to Tenant | Use `WithOptionalTenant` type utility or remove the relation — [see above](#tenant_id-on-scoped-models) | | `tenantId` in `create()` ignored | `set_tenant_id()` trigger overrides the value | Don't pass `tenantId` in `data` — the trigger sets it from the session variable | ## Next steps - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, admin operations - [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/) — per-adapter setup for all frameworks - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — all CLI commands and workflows - [Troubleshooting](https://docs.usebetter.dev/tenant/troubleshooting/) — common issues and fixes --- ## CLI & Migrations The CLI (`@usebetterdev/tenant-cli`) generates RLS policies and triggers for your tenant-scoped tables. Your ORM manages the schema (tables, columns) — the CLI handles everything else. ## Configuration The CLI reads from `better-tenant.config.json` in your project root: ```json title="better-tenant.config.json" { "tenantTables": ["projects", "tasks"] } ``` Alternatively, add a `"betterTenant"` key to your `package.json`: ```json title="package.json" { "betterTenant": { "tenantTables": ["projects", "tasks"] } } ``` `tenantTables` lists the tables that should be tenant-scoped. The CLI generates RLS policies and triggers for each one. > **Prisma table names:** Use **actual PostgreSQL table names**, not Prisma model names. Prisma maps model `User` to table `"User"` (PascalCase) by default. If you use `@@map("users")`, the PG table name is `users`. 
> | Prisma model | `@@map` | PG table | Config value |
> |---|---|---|---|
> | `User` | _(none)_ | `"User"` | `"User"` |
> | `User` | `@@map("users")` | `users` | `"users"` |
>
> **Recommendation:** Use `@@map("lowercase")` on all models, then use lowercase names in config.

## Commands

### `init` — create config interactively

Connects to your database, detects tables, and creates `better-tenant.config.json`. It also detects your ORM (Drizzle or Prisma) and shows tailored next steps:

```bash
npx @usebetterdev/tenant-cli init --database-url $DATABASE_URL
npx @usebetterdev/tenant-cli init  # prompts for DATABASE_URL
```

#### Non-interactive mode

Pass `-n` (or `--non-interactive`) to disable all interactive prompts. This is useful for CI/CD pipelines and LLM coding agents (Claude Code, Cursor, etc.) that cannot respond to prompts. In this mode `--tables` and `--orm` are required — the CLI fails fast with a clear error if either is missing.

```bash
npx @usebetterdev/tenant-cli init -n \
  --tables "projects,tasks,users" \
  --orm drizzle
```

| Flag | Description |
|------|-------------|
| `-n, --non-interactive` | Disable all prompts (requires `--tables` and `--orm`) |
| `--tables <tables>` | Comma-separated table names (required with `-n`) |
| `--orm <orm>` | `drizzle` or `prisma` (required with `-n`) |
| `--overwrite` | Overwrite existing config without prompting |

No database connection is needed in non-interactive mode.
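Conceptually, non-interactive `init` just turns the flag values into the config file shown earlier. A hypothetical sketch of that transformation, assuming the `--tables` value is simply split on commas (the `configFromFlags` helper is illustrative, not a real CLI export):

```ts
// Hypothetical sketch: derive the better-tenant.config.json shape from a
// --tables flag value by splitting on commas and trimming whitespace.
function configFromFlags(tables: string): { tenantTables: string[] } {
  return {
    tenantTables: tables
      .split(",")
      .map((t) => t.trim())
      .filter((t) => t.length > 0),
  };
}

configFromFlags("projects,tasks,users");
// → { tenantTables: ["projects", "tasks", "users"] }
```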
### `migrate` — generate RLS migration Generates SQL for RLS policies on the `tenants` lookup table (open SELECT, writes require `bypass_rls`), plus RLS policies, triggers, and the `set_tenant_id()` function for all tenant-scoped tables in `tenantTables`: ```bash # Preview the SQL npx @usebetterdev/tenant-cli migrate --dry-run # Write to directory (auto-generates timestamped filename) npx @usebetterdev/tenant-cli migrate -o ./rls # Write to specific file npx @usebetterdev/tenant-cli migrate -o drizzle/0001_rls.sql ``` The generated migration includes: - RLS policies for the `tenants` lookup table: `rls_tenants_read` (open SELECT) and `rls_tenants_write` (requires `bypass_rls`) - `set_tenant_id()` function that auto-populates `tenant_id` on INSERT - `ENABLE ROW LEVEL SECURITY` and `FORCE ROW LEVEL SECURITY` on each tenant-scoped table - RLS policy with `USING` and `WITH CHECK` clauses (tenant isolation + bypass support) - `set_tenant_id_trigger` on each tenant-scoped table > **Note:** The CLI does **not** create the `tenants` table or add `tenant_id` columns — your ORM schema handles that. See the workflow sections below. ### `add-table` — add RLS to a new table Generates RLS SQL for a single table. Use this when you add a new tenant-scoped table after initial setup: ```bash # Preview npx @usebetterdev/tenant-cli add-table comments --dry-run # Write to directory (auto-generates timestamped filename) npx @usebetterdev/tenant-cli add-table comments -o ./rls # Write to specific file npx @usebetterdev/tenant-cli add-table comments -o drizzle/0002_comments_rls.sql ``` After running `add-table`, add the table name to `tenantTables` in your config. 
### `check` — verify database setup Runs 10+ validations against your database to confirm RLS is correctly configured: ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` Checks include: - `tenants` table exists with correct columns - `tenants` table has RLS enabled, forced, and both policies (`rls_tenants_read`, `rls_tenants_write`) - `tenant_id` column exists on each tenant-scoped table - RLS is enabled and forced on each tenant-scoped table - Policies have correct `USING` and `WITH CHECK` clauses with `bypass_rls` - `set_tenant_id()` trigger is attached ### `seed` — insert a test tenant Creates a tenant record using `runAsSystem` (RLS bypass): ```bash npx @usebetterdev/tenant-cli seed --name "Acme Corp" --database-url $DATABASE_URL ``` ## Workflow Your ORM owns the schema (tables, columns). The CLI generates only RLS policies and triggers. **Drizzle:** 1. **Run `init`** to create `better-tenant.config.json` 2. **Add `tenantsTable`, `tenantId`, and `.enableRLS()` to each tenant-scoped table:** **Using helper:** ```ts import { tenantsTable, tenantId } from "@usebetterdev/tenant/drizzle"; export { tenantsTable }; export const projectsTable = pgTable("projects", { id: serial("id").primaryKey(), name: text("name").notNull(), ...tenantId, }).enableRLS(); ``` **Manual:** ```ts import { sql } from "drizzle-orm"; import { pgTable, serial, text, uuid } from "drizzle-orm/pg-core"; import { tenantsTable } from "@usebetterdev/tenant/drizzle"; export { tenantsTable }; export const projectsTable = pgTable("projects", { id: serial("id").primaryKey(), name: text("name").notNull(), tenantId: uuid("tenant_id") .notNull() .references(() => tenantsTable.id) .default(sql`(current_setting('app.current_tenant', true))::uuid`), }).enableRLS(); ``` 3. **Generate and apply schema migration:** ```bash npx drizzle-kit generate npx drizzle-kit migrate ``` 4. 
**Create a custom migration for RLS.** The `--prefix=none` flag is required — without it, Drizzle Kit adds a numeric prefix to the filename and the `-o` path in the next step won't match: ```bash npx drizzle-kit generate --custom --name=better_tenant_rls --prefix=none ``` 5. **Fill the empty migration with RLS SQL.** The filename must match exactly what Drizzle Kit generated in the previous step: ```bash npx @usebetterdev/tenant-cli migrate -o drizzle/_better_tenant_rls.sql ``` 6. **Apply the RLS migration:** ```bash npx drizzle-kit migrate ``` 7. **Verify setup:** ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` **Prisma:** 1. **Run `init`** to create `better-tenant.config.json` 2. **Add the Tenant model to `schema.prisma`:** ```prisma model Tenant { id String @id @default(uuid()) @db.Uuid name String slug String @unique createdAt DateTime @default(now()) @map("created_at") @db.Timestamptz projects Project[] @@map("tenants") } ``` 3. **Add `tenantId` to each tenant-scoped model** (and a reverse relation on `Tenant`): ```prisma tenantId String @map("tenant_id") @db.Uuid tenant Tenant @relation(fields: [tenantId], references: [id]) ``` 4. **Generate and apply schema migration:** ```bash npx prisma migrate dev --name setup ``` 5. **Create a draft migration** for RLS using [`--create-only`](https://www.prisma.io/docs/orm/prisma-migrate/workflows/customizing-migrations): ```bash npx prisma migrate dev --create-only --name better_tenant_rls ``` 6. **Replace the empty migration SQL with RLS policies:** ```bash npx @usebetterdev/tenant-cli migrate \ -o prisma/migrations/*_better_tenant_rls/migration.sql ``` 7. **Apply the RLS migration:** ```bash npx prisma migrate dev ``` 8. **Verify setup:** ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` ### Adding a table later When you add a new tenant-scoped table after initial setup: **Drizzle:** 1. 
**Add the table to your Drizzle schema** (with `tenant_id` column): ```ts import { tenantId } from "@usebetterdev/tenant/drizzle"; export const commentsTable = pgTable("comments", { id: serial("id").primaryKey(), body: text("body").notNull(), ...tenantId, }).enableRLS(); ``` 2. **Generate and apply the schema migration:** ```bash npx drizzle-kit generate npx drizzle-kit migrate ``` 3. **Create a custom migration for RLS.** The `--prefix=none` flag is required — without it, Drizzle Kit adds a numeric prefix and the `-o` path in the next step won't match: ```bash npx drizzle-kit generate --custom --name=comments_rls --prefix=none ``` 4. **Fill the empty migration with RLS SQL.** The filename must match exactly what Drizzle Kit generated in the previous step: ```bash npx @usebetterdev/tenant-cli add-table comments -o drizzle/_comments_rls.sql ``` 5. **Apply the RLS migration:** ```bash npx drizzle-kit migrate ``` 6. **Add the table name to `tenantTables` in your config.** **Prisma:** 1. **Add the model to `schema.prisma`** (with `tenantId` field): ```prisma model Comment { id Int @id @default(autoincrement()) body String tenantId String @map("tenant_id") @db.Uuid tenant Tenant @relation(fields: [tenantId], references: [id]) @@map("comments") } ``` Also add `comments Comment[]` to your existing `Tenant` model. 2. **Generate and apply the schema migration:** ```bash npx prisma migrate dev --name add_comments ``` 3. **Create a draft RLS migration** using [`--create-only`](https://www.prisma.io/docs/orm/prisma-migrate/workflows/customizing-migrations): ```bash npx prisma migrate dev --create-only --name comments_rls ``` 4. **Replace the empty migration SQL with RLS policies:** ```bash npx @usebetterdev/tenant-cli add-table comments \ -o prisma/migrations/*_comments_rls/migration.sql ``` 5. **Apply the RLS migration:** ```bash npx prisma migrate dev ``` 6. 
**Add the table name to `tenantTables` in your config.**

## Programmatic API

The CLI exports its commands as functions for use in scripts or custom tooling.

## Next steps

- [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, and admin operations
- [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/) — per-adapter setup for Drizzle, Prisma, Hono, Express, and Next.js
- [Architecture](https://docs.usebetter.dev/tenant/architecture/) — how the generated SQL and RLS policies work under the hood

---

## Troubleshooting

## Tenant could not be resolved

```
better-tenant: tenant could not be resolved from request
```

This means none of your configured resolver strategies found a tenant identifier on the incoming request. The middleware responds with a `404` (or `401` if configured via `missingTenantStatus`).

### Common causes

**Header is missing or misspelled.** If you use `header: "x-tenant-id"`, the request must include that exact header:

```bash
# Wrong — header name doesn't match
curl -H "X-Tenant: abc" http://localhost:3000/projects

# Correct
curl -H "x-tenant-id: abc" http://localhost:3000/projects
```

**Subdomain not detected on localhost.** The subdomain resolver needs at least 3 hostname segments. `localhost` and `example.com` don't have a subdomain:

| Host | Extracted subdomain |
| -------------------- | ---------------------------- |
| `acme.app.com` | `acme` |
| `acme.app.localhost` | `acme` |
| `acme.localhost` | _nothing — only 2 segments_ |
| `localhost` | _nothing — too few segments_ |
| `example.com` | _nothing — only 2 segments_ |

**Path resolver doesn't match.** The path pattern must include `:tenantId`.
Check that the segment index is correct:

```ts
// Matches /t/acme/projects — tenantId is "acme"
tenantResolver: { path: "/t/:tenantId/*" }

// Does NOT match /projects/acme — wrong segment
```

**JWT claim is missing or not a string.** The resolver decodes the JWT payload and reads the configured claim. It returns nothing if the claim doesn't exist, is not a string, or the token is malformed.

**Resolution order matters.** Strategies are tried in this order: header → path → subdomain → JWT → custom. The first one that returns a non-empty value wins. If you configure multiple strategies, an earlier one may match before the one you expect.

### Customizing the error response

By default, all framework adapters return a `404` with a JSON body. You can change the status or handle it yourself:

```ts
// Change status to 401
createHonoMiddleware(tenant, { missingTenantStatus: 401 });
createExpressMiddleware(tenant, { missingTenantStatus: 401 });
withTenant(tenant, handler, { missingTenantStatus: 401 });

// Full custom handling (Hono)
createHonoMiddleware(tenant, {
  onMissingTenant: (c) => c.json({ error: "Unknown workspace" }, 404),
});

// Full custom handling (Express)
createExpressMiddleware(tenant, {
  onMissingTenant: (req, res) => {
    res.status(403).json({ error: "Unknown workspace" });
  },
});
```

---

## RLS returns all rows

Queries return data from **all** tenants instead of filtering to the current tenant — tenant isolation is not working.

### Check if you're connecting as a superuser

PostgreSQL superusers bypass **all** RLS policies, even with `FORCE ROW LEVEL SECURITY`. The default `postgres` user in most Docker and cloud setups is a superuser.

Check your current role:

```sql
SELECT current_user, usesuper FROM pg_user WHERE usename = current_user;
```

If `usesuper` is `true`, RLS will never filter rows for that connection.
Create a non-superuser application role: ```sql CREATE ROLE app_user WITH LOGIN PASSWORD '**********'; GRANT CONNECT ON DATABASE mydb TO app_user; GRANT USAGE ON SCHEMA public TO app_user; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user; ``` Then update your `DATABASE_URL` to use the new role. ### Middleware is not applied If the middleware is not on the route's path, no transaction with `SET LOCAL` is opened and RLS has no tenant to filter by. Verify your middleware is applied to the correct routes (see [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/)). --- ## RLS blocks all rows Queries return empty results even though rows exist in the database. This usually means the RLS policy can't match `app.current_tenant` to the rows' `tenant_id`. ### Run the check command first ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` The check command runs 10+ validations and pinpoints the exact issue. 
Here's what each failure means and how to fix it:

| Failure | Meaning | Fix |
|---|---|---|
| `tenants table not found` | The `tenants` table doesn't exist | Run the migration: `psql $DATABASE_URL -f ./migrations/*_better_tenant.sql` |
| `tenants table missing column: <column>` | The tenants table is missing a required column (`id`, `name`, `slug`, `created_at`) | Re-run the migration or add the column manually |
| `table <table> not found` | A table listed in `tenantTables` doesn't exist in the database | Create the table first, then re-run the migration |
| `column tenant_id not found` | The table is missing the `tenant_id` column | Re-run the migration or add: `ALTER TABLE <table> ADD COLUMN tenant_id UUID NOT NULL REFERENCES tenants(id)` |
| `ROW LEVEL SECURITY not enabled` | RLS is not turned on for this table | Run: `ALTER TABLE <table> ENABLE ROW LEVEL SECURITY` |
| `FORCE ROW LEVEL SECURITY not enabled` | RLS can be bypassed by the table owner role | Run: `ALTER TABLE <table> FORCE ROW LEVEL SECURITY` |
| `no RLS policy found` | No policy exists on this table | Re-run the migration to generate the policy |
| `policy missing USING expression` | The policy doesn't filter reads | Re-create the policy with a `USING` clause |
| `policy missing WITH CHECK expression` | The policy doesn't validate writes | Re-create the policy with a `WITH CHECK` clause |
| `policy should allow bypass_rls for runAsSystem` | The policy doesn't include the `app.bypass_rls` escape hatch | Re-create the policy with `OR current_setting('app.bypass_rls', true) = 'true'` in both clauses |
| `trigger set_tenant_id_trigger not found` | The auto-populate trigger is missing | Re-run the migration to create it |

### Slug lookup fails silently

If you use subdomain or path-based resolution, the resolver
extracts a slug (like `"acme"`) and looks it up in the `tenants` table to get the UUID. If no tenant with that slug exists, resolution returns `undefined` and you get a "tenant could not be resolved" error — not an "empty results" error.

Make sure the tenant exists:

```bash
npx @usebetterdev/tenant-cli seed --name "Acme Corp" --slug "acme" --database-url $DATABASE_URL
```

---

## Connection pooling

UseBetter Tenant uses `set_config('app.current_tenant', '<tenant-id>', true)`, where the third argument `true` means **transaction-local**. The session variable is automatically cleared when the transaction commits. This makes it safe with connection pools — the next request gets a fresh transaction with no leftover state.

### PgBouncer

PgBouncer must run in **transaction mode** for `SET LOCAL` / `set_config(..., true)` to work correctly. Note that PgBouncer's default `pool_mode` is **session**; in session mode, the session variable persists across transactions on the same connection, which can leak tenant context between requests.

```ini
# pgbouncer.ini
pool_mode = transaction   # correct — SET LOCAL is cleared per transaction
# pool_mode = session     # wrong — session variables persist across requests
```

### No cross-request leakage

Each request gets its own database transaction. The `app.current_tenant` variable is scoped to that transaction and invisible to other concurrent requests on the same connection. When the transaction ends (commit or rollback), the variable is gone.

---

## Context is undefined

`tenant.getContext()` and `tenant.getDatabase()` return `undefined` when called outside a tenant scope.
### Common causes **Calling outside middleware.** These methods only work inside a request handled by tenant middleware, or inside a `runAs` / `runAsSystem` block: ```ts // This works — inside middleware scope app.get("/projects", async (c) => { const db = tenant.getDatabase(); // returns the scoped database }); // This does NOT work — outside any scope const db = tenant.getDatabase(); // undefined ``` **Calling inside setTimeout or detached async.** UseBetter Tenant uses `AsyncLocalStorage` to propagate context. Some patterns break the async chain: ```ts app.get("/projects", async (c) => { // Works — same async context const db = tenant.getDatabase(); // Does NOT work — setTimeout creates a new async context setTimeout(() => { const db = tenant.getDatabase(); // undefined }, 1000); }); ``` If you need to run tenant-scoped work outside the request lifecycle, capture the tenant ID and use `runAs`: ```ts app.get("/projects", async (c) => { const ctx = tenant.getContext(); const tenantId = ctx.tenantId; // Schedule work with explicit tenant scope setTimeout(async () => { await tenant.runAs(tenantId, async (db) => { // tenant context is available here }); }, 1000); }); ``` --- ## Prisma: tenant_id column not found ``` column "tenant_id" of relation "User" does not exist ``` The `set_tenant_id()` trigger and RLS policies reference the column as `tenant_id` (snake_case). Prisma's default field mapping uses camelCase (`tenantId` → column `tenantId`). **Fix:** Add `@map("tenant_id")` to every `tenantId` field in your Prisma schema: ```prisma model User { tenantId String @map("tenant_id") @db.Uuid // ... 
} ``` --- ## Prisma: table name mismatch in CLI config ``` table "users" not found ``` The CLI config `tenantTables` must use the actual PostgreSQL table name, not the Prisma model name: - Prisma model `User` (no `@@map`) → PG table `"User"` → config: `"User"` - Prisma model `User` + `@@map("users")` → PG table `users` → config: `"users"` **Recommendation:** Add `@@map("lowercase")` to all models and use lowercase names in config. See [CLI & Migrations — Configuration](https://docs.usebetter.dev/tenant/cli/#configuration). --- ## Prisma: getDatabase() has no model methods ```ts const db = tenant.getDatabase(); db.project.findMany(); // TS error: Property 'project' does not exist on type 'unknown' ``` `getDatabase()` is fully typed via generics when you pass a typed `PrismaClient` to `prismaDatabase()`. If you're seeing `unknown`, make sure the generic is inferred correctly — pass the PrismaClient instance directly to `prismaDatabase(prisma)`. See the [Prisma guide](https://docs.usebetter.dev/tenant/prisma/#getdatabase-typing) for details. --- ## Prisma: seed script fails after adding tenancy ``` null value in column "tenant_id" violates not-null constraint ``` After adding RLS, the `set_tenant_id()` trigger needs `app.current_tenant` to be set in the transaction. Direct `prisma.user.create()` calls outside a tenant context have no transaction with `SET LOCAL`, so the trigger can't populate `tenant_id`. **Fix:** Use `tenant.runAs()` or `tenant.api.createTenant()` for seeding: ```ts const acme = await tenant.api.createTenant({ name: "Acme", slug: "acme" }); await tenant.runAs(acme.id, async (db) => { await db.user.create({ data: { email: "admin@acme.com", name: "Admin" } }); }); ``` See the [Prisma guide — Seeding](https://docs.usebetter.dev/tenant/prisma/#seeding) for full examples. --- ## Prisma: RLS migration not tracked Prisma doesn't support custom SQL migration files like Drizzle. 
If you apply RLS via `psql` directly, Prisma's migration history won't know about it, and `prisma migrate dev` may warn about drift. **Fix:** Embed RLS SQL in a Prisma migration directory. See [Prisma guide — Migration workflow](https://docs.usebetter.dev/tenant/prisma/#migration-workflow) for step-by-step instructions. --- ## Drizzle migration journal out of sync ``` Error: No file ./drizzle/0001_better_tenant_rls.sql found in ./drizzle folder ``` This means Drizzle Kit's journal (`drizzle/meta/_journal.json`) references a migration file that doesn't exist on disk. This typically happens when a previous `drizzle-kit generate` run was partially completed or when migration files were manually deleted. ### How to fix 1. Open `drizzle/meta/_journal.json` and remove the entry referencing the missing file. 2. Delete the corresponding snapshot file from `drizzle/meta/` if one exists. 3. Re-run the workflow from the `drizzle-kit generate` step. ### Prevention Always use `--prefix=none` when creating custom RLS migrations. Without it, Drizzle Kit adds a numeric or timestamp prefix to the filename, and the `tenant-cli migrate -o` path won't match. See [CLI & Migrations — Workflow](https://docs.usebetter.dev/tenant/cli/#workflow) for the correct steps. --- ## CLI errors ### No config found ``` better-tenant: No config found. ``` The CLI looks for configuration in this order: 1. `better-tenant.config.json` in the current directory 2. A `"betterTenant"` key in `package.json` Fix: run `init` to create the config interactively, or create it manually: ```json title="better-tenant.config.json" { "tenantTables": ["projects", "tasks"] } ``` ### Invalid config ``` better-tenant: Invalid JSON in better-tenant.config.json: ... better-tenant: config must have tenantTables (string[]) ``` The config must be a valid JSON object with a `tenantTables` array of table name strings. Check for trailing commas, missing quotes, or a non-array value. 
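The two error messages above boil down to a few lines of validation. As a rough illustration of what the CLI checks (not its actual source; the function name `parseConfig` is invented for this sketch), the logic amounts to:

```typescript
// Hypothetical sketch of the CLI's config validation — not the real implementation.
function parseConfig(raw: string): { tenantTables: string[] } {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch (err) {
    // Trailing commas and missing quotes fail here
    throw new Error(`better-tenant: Invalid JSON in better-tenant.config.json: ${err}`);
  }
  const tables = (parsed as { tenantTables?: unknown })?.tenantTables;
  // tenantTables must be an array of strings — anything else is rejected
  if (!Array.isArray(tables) || !tables.every((t) => typeof t === "string")) {
    throw new Error("better-tenant: config must have tenantTables (string[])");
  }
  return { tenantTables: tables as string[] };
}
```

If you hit either error, running the config string through `JSON.parse` in a Node REPL is usually the fastest way to find the offending character.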
### Database URL required ``` check requires --database-url or DATABASE_URL environment variable seed requires --database-url or DATABASE_URL environment variable ``` Pass the URL via flag or environment variable. It must use a `postgres://` or `postgresql://` protocol: ```bash # Via flag npx @usebetterdev/tenant-cli check --database-url postgres://user:pass@localhost:5432/mydb # Via environment variable export DATABASE_URL=postgres://user:pass@localhost:5432/mydb npx @usebetterdev/tenant-cli check ``` ## Next steps - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — all CLI commands and workflows - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies and tenant API - [Architecture](https://docs.usebetter.dev/tenant/architecture/) — how RLS and session variables work under the hood --- ## How RLS Works UseBetter Tenant relies on a Postgres feature called **Row-Level Security (RLS)** to enforce tenant isolation at the database level. This page walks through exactly what happens on every request so you can see why this approach is safe, how it handles edge cases, and what Postgres guarantees you get for free. If you just want to get started, skip to [Quick Start](https://docs.usebetter.dev/tenant/quick-start/). Come back here when you want to understand what is happening beneath the surface. ## The problem with WHERE clauses The most common way to implement multi-tenancy is to add `WHERE tenant_id = ?` to every query. It works, but it has a critical flaw: **a single missing WHERE clause exposes all tenants' data**. The more queries your application has, the more likely someone will forget one, especially across teams and over time. UseBetter Tenant moves tenant filtering from your application code into the database itself. Even if your code has a bug — a forgotten filter, a bad JOIN, a raw SQL query — Postgres blocks access to other tenants' rows before they ever leave the database. 
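To make the failure mode concrete, here is a tiny in-memory sketch (plain TypeScript, not the library or an ORM; all names are invented for illustration) contrasting filtering at call sites with filtering enforced at the data layer:

```typescript
type Row = { id: number; tenantId: string; name: string };

const projects: Row[] = [
  { id: 1, tenantId: "t-acme", name: "Acme site" },
  { id: 2, tenantId: "t-globex", name: "Globex app" },
];

// Pattern 1: filtering in application code. Every call site must remember
// to pass the tenant; a forgotten argument silently returns everything.
function listProjectsManual(tenantId?: string): Row[] {
  return tenantId ? projects.filter((r) => r.tenantId === tenantId) : projects;
}

// Pattern 2: filtering enforced at the data layer, the way an RLS policy is
// evaluated for every row. No code path skips the predicate.
function listProjectsEnforced(currentTenant: string | undefined): Row[] {
  return projects.filter((r) => r.tenantId === currentTenant);
}

const leaked = listProjectsManual();              // forgot the filter: both tenants' rows
const isolated = listProjectsEnforced(undefined); // no tenant set: zero rows returned
```

With enforced filtering, the worst case of a missing tenant is an empty result, not a cross-tenant leak.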
## What happens on every request

Here is the full lifecycle, from HTTP request to database query and back: the middleware resolves the tenant, the adapter opens a transaction and sets a transaction-local session variable, Postgres RLS filters every row against it, and the commit clears all state. Every step is deliberate. The next sections explain the "why" behind each one.

## Example scenarios

The four scenarios below show exactly what SQL executes and which rows come back, using the same logic UseBetter Tenant runs on every request.

#### Scenario 1: SELECT rows

Query all projects — RLS filters to the current tenant.

```sql
BEGIN;
SELECT set_config('app.current_tenant', '550e8400-e29b-41d4-a716-446655440000', true);
SELECT * FROM projects;
-- RLS policy filters: only rows where tenant_id = '550e8400-...' are returned.
-- Other tenants' rows are invisible — no WHERE clause needed in application code.
COMMIT;
-- Transaction ends, app.current_tenant is cleared. Connection returns to pool clean.
```

#### Scenario 2: INSERT a row

Insert a new project — the trigger auto-sets tenant_id from the session variable.

```sql
BEGIN;
SELECT set_config('app.current_tenant', '550e8400-e29b-41d4-a716-446655440000', true);
INSERT INTO projects (name) VALUES ('New Project');
-- The BEFORE INSERT trigger reads app.current_tenant and sets tenant_id automatically.
-- You never need to pass tenant_id in your application code.
COMMIT;
```

#### Scenario 3: Cross-tenant INSERT

Try to insert a row with a different tenant_id — WITH CHECK rejects it.

```sql
BEGIN;
SELECT set_config('app.current_tenant', '550e8400-e29b-41d4-a716-446655440000', true);
INSERT INTO projects (name, tenant_id) VALUES ('Sneaky', '7c9e6679-7425-40de-944b-e07fc1f90ae7');
-- ERROR: new row violates row-level security policy for table "projects"
-- The WITH CHECK clause rejects the row because tenant_id does not match app.current_tenant.
ROLLBACK; ``` #### Scenario 4: Admin bypass runAsSystem sets app.bypass_rls — all rows become visible in this transaction. ```sql BEGIN; SELECT set_config('app.bypass_rls', 'true', true); SELECT * FROM projects; -- All rows from ALL tenants are returned. -- The RLS policy's OR clause checks app.bypass_rls and allows access. COMMIT; -- bypass_rls is cleared when the transaction ends — next request has normal tenant isolation. ``` ## Step by step ### 1. Middleware intercepts the request Your framework adapter (Hono, Express, or Next.js) wraps your route handler. Before your code runs, the middleware calls `tenant.handleRequest()`, which kicks off tenant resolution and sets up the database context. Your handler only executes after the tenant is confirmed. ### 2. Tenant resolution: identifier to UUID The resolver extracts a raw identifier from the request. This could be a header value, a path segment, a subdomain, a JWT claim, or the return value of a custom function. Whatever it is, RLS needs a UUID — so the library normalizes it: - **Already a UUID?** Pass it through. - **A slug like `"acme"`?** Look it up in the `tenants` table and return its UUID. - **Custom `resolveToId` configured?** Call your function — it has full control. Slug lookups run inside a separate `runAsSystem` transaction so that the `tenants` table (which has its own RLS policies) is accessible. This lookup happens once per request, before the user-facing transaction starts. ### 3. Transaction + SET LOCAL: the key mechanism This is the core of the approach. The adapter (Drizzle or Prisma) does two things: 1. **Opens a database transaction** — `BEGIN` 2. **Sets a session variable scoped to that transaction:** ```sql SELECT set_config('app.current_tenant', '550e8400-...', true); ``` The third argument, `true`, is what makes this safe. It tells Postgres to scope this variable to the **current transaction only**. When the transaction ends — whether by `COMMIT` or `ROLLBACK` — the variable disappears. 
This is the SQL standard `SET LOCAL` behavior, just invoked via the function form so ORMs can call it as a regular query. **Why this matters:** - **Connection pool safety.** When the transaction finishes, the connection returns to the pool with no leftover state. The next request that picks up the same connection gets its own transaction and its own `app.current_tenant` value. - **No cross-request leakage.** Even if two requests run concurrently on the same connection pool, each has its own transaction and its own session variable. Postgres guarantees transaction isolation. - **No global state.** Unlike `SET` (without `LOCAL`), which persists for the entire session/connection, `SET LOCAL` is confined to the transaction. ### 4. RLS policy enforcement With `app.current_tenant` set inside the transaction, Postgres RLS takes over. The CLI generates a policy on each tenant-scoped table that looks like this: ```sql CREATE POLICY "rls_tenant_projects" ON "projects" FOR ALL USING ( (tenant_id)::text = current_setting('app.current_tenant', true) OR current_setting('app.bypass_rls', true) = 'true' ) WITH CHECK ( (tenant_id)::text = current_setting('app.current_tenant', true) OR current_setting('app.bypass_rls', true) = 'true' ); ``` Here is what each clause does: | Clause | When it runs | What it checks | | --- | --- | --- | | **USING** | On SELECT, UPDATE, DELETE | "Can this row be seen?" — only rows whose `tenant_id` matches `app.current_tenant` | | **WITH CHECK** | On INSERT, UPDATE | "Can this row be written?" — prevents inserting or updating rows with a different `tenant_id` | Both clauses also check `app.bypass_rls` — this is the escape hatch for admin operations (covered below). **What Postgres guarantees:** - The policy is evaluated **for every row**, on every query, regardless of how the query is written. A `SELECT *`, a complex JOIN, a subquery, a CTE — they all go through RLS. 
- `FORCE ROW LEVEL SECURITY` ensures the policy applies even to the role that owns the table. Without `FORCE`, table owners would bypass RLS silently. ### 5. Auto-population of tenant_id The CLI also generates a trigger on each tenant-scoped table: ```sql CREATE OR REPLACE FUNCTION set_tenant_id() RETURNS TRIGGER AS $$ BEGIN IF NEW.tenant_id IS NULL AND current_setting('app.current_tenant', true) IS NOT NULL THEN NEW.tenant_id := current_setting('app.current_tenant', true)::uuid; END IF; RETURN NEW; END; $$ LANGUAGE plpgsql; CREATE TRIGGER set_tenant_id_trigger BEFORE INSERT ON "projects" FOR EACH ROW EXECUTE PROCEDURE set_tenant_id(); ``` This means your application code never has to set `tenant_id` manually. When you insert a row, the trigger reads the current tenant from the session variable and stamps it automatically. And because the `WITH CHECK` clause also validates, even if your code tries to set a wrong `tenant_id`, Postgres rejects it. ### 6. Commit and cleanup When the transaction commits, the `SET LOCAL` variable is gone. The connection goes back to the pool with zero tenant-related state. The next request starts clean. ## Why you can trust this Here are the properties that make this approach reliable, and the Postgres mechanisms behind each one. ### Isolation is enforced by the database, not your code RLS policies run inside Postgres, after your query is planned, before rows are returned. Your ORM, your application code, and your middleware cannot circumvent them (unless you connect as a superuser — see [the caveat below](#the-superuser-caveat)). ### A missing WHERE clause cannot leak data With the `WHERE tenant_id = ?` pattern, a forgotten filter leaks data. With RLS, Postgres adds the filter implicitly. Write `SELECT * FROM projects` — you only get the current tenant's rows. This is the single biggest advantage. ### Connection pooling is safe The `true` parameter in `set_config(..., true)` scopes the variable to the transaction. 
When the transaction ends, the variable is erased. The next request on that connection starts fresh. `SET LOCAL` semantics are long-standing core Postgres behavior; RLS itself has been available since Postgres 9.5.

### Concurrent requests cannot interfere

Two requests running simultaneously each have their own transaction. Postgres transaction isolation guarantees that one transaction's `SET LOCAL` is invisible to another. In any case, two transactions never run on the same connection at the same time: connection pools serialize transactions per connection.

### Admin operations use a session flag, not a superuser

When you need to work across tenants (creating a tenant, running a cron job, seeding data), `runAsSystem` does not escalate privileges. It sets a second transaction-scoped variable:

```sql
SELECT set_config('app.bypass_rls', 'true', true);
```

The RLS policies include an `OR` clause that checks this flag. This means:

- The bypass is still **transaction-scoped** — it cannot leak to other requests.
- The database role stays the same — no superuser, no role switching.
- If an attacker compromises a normal request, they cannot set `app.bypass_rls` because only `runAsSystem` does that in code, and the setting is cleared when the transaction ends.

> **Caution:** If you expose an endpoint that calls `runAsSystem`, that endpoint can read and write all tenant data. Protect it with strong authentication and access control.

### The tenants table has its own defense layer

The `tenants` lookup table uses two separate policies:

1. **Read:** open to all (`USING (true)`) — your application needs to resolve slugs and list tenants.
2. **Write:** requires `app.bypass_rls = 'true'` — only `runAsSystem` can create, update, or delete tenants.

Even if an attacker can execute raw SQL as the application role, they cannot modify the `tenants` table without the bypass flag.

## The superuser caveat

PostgreSQL **superusers bypass all RLS policies**, regardless of `FORCE ROW LEVEL SECURITY`.
This is by design in Postgres — superusers are unrestricted. Your application must connect as a **regular (non-superuser) role**. If you are using the default `postgres` user in Docker, create a dedicated application role: ```sql CREATE ROLE app_user WITH LOGIN PASSWORD 'app_password'; GRANT CONNECT ON DATABASE mydb TO app_user; GRANT USAGE ON SCHEMA public TO app_user; GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user; ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user; ``` Then use `DATABASE_URL=postgresql://app_user:app_password@localhost:5432/mydb`. > **Danger:** If your `DATABASE_URL` connects as a superuser, RLS policies are silently ignored and tenant isolation does not work. The CLI `check` command detects this — run `npx @usebetterdev/tenant-cli check` to verify your setup. ## Tables without RLS RLS is opt-in per table. Only tables listed in `tenantTables` in your `better-tenant.config.json` (and processed by the CLI `migrate` command) get RLS policies, a `tenant_id` column, and the auto-populate trigger. Everything else behaves like a normal Postgres table. This means you can mix tenant-scoped and shared tables in the same transaction: ```ts const db = tenant.getDatabase(); // projects has RLS → filtered to current tenant const projects = await db.select().from(projectsTable); // categories has no RLS → all rows visible const categories = await db.select().from(categoriesTable); ``` The session variable (`app.current_tenant`) is still set, but tables without policies simply ignore it. 
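The combined behavior, RLS tables filtered by the session variables and non-RLS tables untouched, can be modeled in a few lines. This is an illustrative TypeScript model of the policy logic described above, not the library's code; every name here is invented:

```typescript
type Settings = { currentTenant?: string; bypassRls?: string };

// Models the generated policy's USING clause: tenant match OR bypass flag.
// Tables without a policy ignore the session variables entirely.
function rowVisible(
  row: { tenantId: string },
  hasRlsPolicy: boolean,
  settings: Settings,
): boolean {
  if (!hasRlsPolicy) return true; // no policy: all rows visible
  return (
    row.tenantId === settings.currentTenant ||
    settings.bypassRls === "true"
  );
}

const row = { tenantId: "t-acme" };
const noPolicy = rowVisible(row, false, {});                              // visible
const sameTenant = rowVisible(row, true, { currentTenant: "t-acme" });    // visible
const otherTenant = rowVisible(row, true, { currentTenant: "t-globex" }); // hidden
const bypass = rowVisible(row, true, { bypassRls: "true" });              // visible
```

The same four cases map directly onto the playground scenarios earlier on this page: a normal SELECT, a cross-tenant read, and a `runAsSystem` bypass.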
## Verifying your setup The CLI includes a `check` command that validates your RLS configuration: ```bash npx @usebetterdev/tenant-cli check --database-url $DATABASE_URL ``` It verifies: - RLS is enabled and forced on all configured tables - Policies exist with the correct `USING` and `WITH CHECK` clauses - The `set_tenant_id` trigger is present - The database role is not a superuser Run this in CI or after migrations to catch configuration drift. ## Summary | What | How | Why it is safe | | --- | --- | --- | | Tenant context | `SET LOCAL app.current_tenant` inside a transaction | Transaction-scoped — cleared on commit, invisible to other transactions | | Row filtering | RLS policies with `USING` and `WITH CHECK` | Postgres enforces on every row, every query, regardless of how the query is written | | Auto-populate | `BEFORE INSERT` trigger reads session variable | Application cannot forget or override `tenant_id` | | Admin bypass | `SET LOCAL app.bypass_rls` in a separate transaction | Same transaction-scoping guarantees — cannot leak | | Connection pooling | `set_config(..., true)` = `SET LOCAL` | Variable is erased when transaction ends — next request starts clean | | Table owner bypass | `FORCE ROW LEVEL SECURITY` | Even the table owner role goes through RLS | ## Next steps - [Architecture](https://docs.usebetter.dev/tenant/architecture/) — detailed reference for every mechanism: policies, triggers, slug resolution, and the summary table - [Quick Start](https://docs.usebetter.dev/tenant/quick-start/) — get a working multi-tenant app in 5 minutes - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, and admin operations - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — generate and verify your RLS setup --- ## Architecture This page is a detailed reference for the database-level mechanisms that power UseBetter Tenant: transaction-scoped session variables, Row-Level Security policies, RLS bypass for 
admin operations, and slug-to-UUID resolution.

> **Note:** For a narrative walkthrough of the full request lifecycle and why this approach is safe, start with [How RLS Works](https://docs.usebetter.dev/tenant/how-rls-works/).

## Request-scoped tenant and SET LOCAL

For each request (or explicit `runWithTenant` / `runAs` call), the adapter runs your code inside a **transaction**. At the start of that transaction it runs:

```sql
SELECT set_config('app.current_tenant', '<tenant-uuid>', true);
```

The third argument `true` means **local to the transaction**: the setting is only visible inside that transaction and is automatically cleared when the transaction ends.

- **Pooling-safe:** Connection pools can reuse connections; the next request gets a new transaction and its own `app.current_tenant`.
- **No cross-request leakage:** Session state is transaction-scoped, not connection-scoped.

Your tenant-scoped queries run in that same transaction, so Postgres RLS can read `current_setting('app.current_tenant', true)` and restrict rows to that tenant.

## RLS: USING, WITH CHECK, and FORCE ROW LEVEL SECURITY

### Tenants lookup table

The `tenants` table is a lookup table (no `tenant_id` column). The CLI generates two PERMISSIVE policies for it:

1. **`rls_tenants_read`** — `FOR SELECT USING (true)`. All roles can read tenants — needed for tenant resolution (slug-to-UUID lookup) and dashboard listing.
2. **`rls_tenants_write`** — `FOR ALL` with `USING` and `WITH CHECK` requiring `current_setting('app.bypass_rls', true) = 'true'`. Only system operations (`runAsSystem`) can INSERT, UPDATE, or DELETE tenants.

This provides defense-in-depth: even direct database access as the application role cannot modify tenants without `bypass_rls`.

### Tenant-scoped tables

Each tenant-scoped table needs:

1. **`tenant_id` column** — `UUID NOT NULL REFERENCES tenants(id)`. Your ORM schema defines this (see [Quick Start](https://docs.usebetter.dev/tenant/quick-start/)).
2.
**Row Level Security** — `ENABLE ROW LEVEL SECURITY` and `FORCE ROW LEVEL SECURITY`. Generated by the CLI. `FORCE` means RLS applies to the table owner role too — without it, the role that owns the table would bypass policies. Note that PostgreSQL **superusers always bypass RLS** regardless of `FORCE`; your application must connect as a regular (non-superuser) role. 3. **Policy** — one policy for `ALL` (SELECT, INSERT, UPDATE, DELETE) generated by the CLI, with: - **USING:** Rows are visible when `(tenant_id)::text = current_setting('app.current_tenant', true)` (or when bypass is set — see below). - **WITH CHECK:** New/updated rows must satisfy the same condition, so inserts and updates cannot set `tenant_id` to another tenant. The `set_tenant_id()` trigger (also generated by the CLI) sets `NEW.tenant_id` from `current_setting('app.current_tenant', true)` on INSERT, so application code does not have to pass `tenant_id` manually (and cannot override it). ## runAsSystem and RLS bypass Some operations must see or change data across tenants: creating/updating/listing/deleting tenants, seeding, or admin/cron jobs. Doing that with a **superuser** would be a security anti-pattern. UseBetter Tenant uses a **session flag** instead. ### Session flag: `app.bypass_rls` The adapter's `runAsSystem(fn)` runs `fn` inside a transaction that first runs: ```sql SELECT set_config('app.bypass_rls', 'true', true); ``` Again, `true` = local to the transaction, so the flag is cleared when the transaction ends. The CLI-generated RLS policies include an **OR** so that rows are allowed when either: - the row's `tenant_id` matches `app.current_tenant`, **or** - `current_setting('app.bypass_rls', true) = 'true'`. 
Example policy (conceptually):

```sql
USING (
  (tenant_id)::text = current_setting('app.current_tenant', true)
  OR current_setting('app.bypass_rls', true) = 'true'
)
WITH CHECK (
  (tenant_id)::text = current_setting('app.current_tenant', true)
  OR current_setting('app.bypass_rls', true) = 'true'
)
```

When the adapter runs with `app.bypass_rls = 'true'`, the same RLS policies allow access to all rows in that transaction. **No superuser or special role is required**; the app role just needs the usual table privileges.

### When to use runAsSystem

- **Use for:** `tenant.api.*` (create/update/list/delete tenants), CLI seed, migrations, or cron jobs that must touch multiple tenants.
- **Do not use** for normal request handling. Normal requests should use `runWithTenant` (or framework middleware that does), so RLS restricts data to a single tenant.

> **Caution:** If you expose an endpoint that calls `runAsSystem`, that endpoint effectively has full read/write to all tenant data. Protect it with authentication and access control.

## Non-tenant-aware tables

RLS is opt-in per table in Postgres. When the adapter runs `SET LOCAL app.current_tenant = '<tenant-uuid>'`, only tables with `ENABLE ROW LEVEL SECURITY` and a matching policy are affected. Tables without RLS policies ignore the session variable — all rows remain visible regardless of which tenant is active.

This means you can freely mix tenant-scoped and shared tables in the same transaction:

```ts
const db = tenant.getDatabase();

// Table has RLS → filtered to current tenant
const projects = await db.select().from(projectsTable);

// Table has no RLS → all rows visible
const categories = await db.select().from(categoriesTable);
```

**Which tables have RLS?** Only the tables listed in `tenantTables` in your `better-tenant.config.json` (and processed by the CLI `migrate` command) get RLS policies, a `tenant_id` column, and the `set_tenant_id()` trigger. Everything else is untouched and behaves like a normal Postgres table.
For the recommended usage pattern (wrapping `getDatabase()` as your default `db()` handle), see [Configuration — Non-tenant tables](https://docs.usebetter.dev/tenant/configuration/#non-tenant-tables). --- ## Slug-to-UUID resolution The resolver extracts a raw identifier from the request (header, subdomain, path, JWT, or custom). This identifier may be a UUID or a slug (e.g., `"acme"` from `acme.app.com`). Since RLS requires a UUID, the library normalizes the identifier before it reaches the adapter. ### How it works | Identifier | What happens | | ------------------------------------ | ------------------------------------------------------------------------- | | UUID (e.g. `550e8400-...`) | Passes through unchanged | | Slug (e.g. `"acme"`) | Looked up via `runAsSystem → getBySlug(slug)` → returns the tenant's UUID | | Any value + `resolveToId` configured | `resolveToId` is called instead — skips all auto-resolution | | Non-UUID without `resolveToId` | Looked up via `getBySlug` | ### Where it applies - **`resolveTenant(request)`** — returns the normalized UUID (or undefined). - **`handleRequest(request, next)`** — uses the normalized UUID for `SET LOCAL` and RLS. - **`runAs(tenantId, fn)`** — passes `tenantId` through as-is; callers must provide a valid UUID. ### `resolveToId` escape hatch For custom mappings (e.g., custom domains → tenant UUIDs), configure `resolveToId` on the tenant resolver: ```ts tenantResolver: { custom: (req) => req.host, resolveToId: async (domain) => { const mapping = await lookupCustomDomain(domain); return mapping.tenantId; }, } ``` `resolveToId` always takes precedence over auto-resolution. When provided, the library does not check if the identifier is a UUID or look up by slug — it trusts the transform. 
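The resolution order in the table above can be sketched as a single function. This is an illustrative sketch of the described precedence rules, not library code: the real lookups are async and hit the database, while here `getBySlug` is a synchronous stand-in and every name is invented:

```typescript
// Standard 8-4-4-4-12 hex UUID shape
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

type ResolverOpts = {
  resolveToId?: (identifier: string) => string | undefined; // always wins when set
  getBySlug: (slug: string) => { id: string } | undefined;  // tenants-table lookup
};

function normalizeIdentifier(
  identifier: string,
  opts: ResolverOpts,
): string | undefined {
  if (opts.resolveToId) return opts.resolveToId(identifier); // skips all auto-resolution
  if (UUID_RE.test(identifier)) return identifier;           // UUIDs pass through
  return opts.getBySlug(identifier)?.id;                     // anything else: slug lookup
}

const tenants = new Map([["acme", { id: "550e8400-e29b-41d4-a716-446655440000" }]]);
const opts: ResolverOpts = { getBySlug: (slug) => tenants.get(slug) };

const direct = normalizeIdentifier("550e8400-e29b-41d4-a716-446655440000", opts);
const fromSlug = normalizeIdentifier("acme", opts);
```

Note how `resolveToId`, when present, short-circuits before the UUID test: that is why the docs say it "trusts the transform" and never falls back to slug lookup.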
## Summary | Mechanism | Purpose | | ------------------------ | ---------------------------------------------------------------------------------------------------------------- | | `app.current_tenant` | Set per transaction by the adapter; RLS uses it to restrict rows to one tenant. | | `app.bypass_rls` | Set per transaction by the adapter in `runAsSystem`; policies allow all rows when `'true'`. | | Transaction-scoped | Both settings use `set_config(..., true)` so they are local to the transaction and safe with connection pooling. | | FORCE ROW LEVEL SECURITY | Ensures RLS applies to all roles, including table owner. | | Tenants table policies | Open SELECT for resolution; writes require `bypass_rls` (defense-in-depth for the lookup table). | | runAsSystem | For admin/cron only; uses session flag, not superuser. | | Non-tenant tables | Tables without RLS work through `getDatabase()` unchanged — session variables are ignored. | ## Next steps - [Configuration](https://docs.usebetter.dev/tenant/configuration/) — resolver strategies, tenant API, and admin operations - [Framework Adapters](https://docs.usebetter.dev/tenant/adapters/) — per-adapter setup for Drizzle, Prisma, Hono, Express, and Next.js - [CLI & Migrations](https://docs.usebetter.dev/tenant/cli/) — all CLI commands and workflows --- # Webhooks > Self-hosted webhook delivery with payload signing, automatic retries, and delivery logging — backed by your own database. ## Introduction > **Coming soon:** UseBetter Webhooks is under active development. Follow the project on GitHub for updates. Add webhook delivery to your TypeScript app — payload signing, retries, and delivery logging included — without depending on an external service. UseBetter Webhooks runs inside your application as a library. You define event types, your customers register endpoints, and the library handles the rest. Everything is written directly to your own database. 
No external API calls, no per-message fees, no data leaving your infrastructure. Works with any database your ORM adapter supports — Postgres, MySQL, SQLite, and more. ## Concepts - **Event** — a typed payload your application emits when something happens (e.g., `invoice.paid`, `user.created`). Events are defined in code with full TypeScript types. - **Endpoint** — a URL your customer registers to receive events. Each endpoint subscribes to one or more event types. - **Delivery attempt** — a single HTTP POST to an endpoint. The library records the request, response status, headers, and body for every attempt. - **Signing secret** — each endpoint gets a unique HMAC secret. The library signs every payload so receivers can verify it was not tampered with. ## Why self-hosted? Most webhook solutions are SaaS products: you push events to their API, they fan out delivery, and you pay per message. UseBetter Webhooks takes a different approach: - **Data ownership** — webhook payloads, delivery logs, and endpoint configurations live in your database. Query them with any tool. - **No per-message pricing** — deliver as many events as you want. Your only cost is compute and storage you already pay for. - **No vendor lock-in** — the library is a dependency, not a platform. Swap it out without migrating data from a third-party service. - **No external calls** — events are signed and dispatched from your own infrastructure. Payloads never pass through a middleman. ## Architecture | Layer | Package | Role | | ---------------------- | -------------------------------------------------------------- | --------------------------------------------------------------------------------- | | **Core** | `@usebetterdev/webhook-core` | Event model, adapter contract, signing, retry logic. Zero runtime deps. | | **ORM adapters** | `@usebetterdev/webhook-drizzle`, `@usebetterdev/webhook-prisma`| Store endpoints, events, and delivery attempts. Own the database schema. 
|
| **Framework adapters** | `@usebetterdev/webhook-hono`, `@usebetterdev/webhook-express`, `@usebetterdev/webhook-next` | Routes for endpoint management and event ingestion. |
| **Client library** | `@usebetterdev/webhook-client` | React components for endpoint management UI. |
| **CLI** | `@usebetterdev/webhook-cli` | Migrations, health check, replay failed deliveries. |
| **Umbrella** | `@usebetterdev/webhook` | Single install, subpath exports for all adapters. |

You install the umbrella package (`@usebetterdev/webhook`) and import adapters via subpath exports like `@usebetterdev/webhook/drizzle` and `@usebetterdev/webhook/hono`.

## When NOT to use this

UseBetter Webhooks is designed for teams that want full control over their webhook infrastructure. If that does not describe you, consider the alternatives:

- **You need webhooks in under five minutes and don't want to manage infrastructure** — a hosted service like [Svix](https://www.svix.com/) handles delivery, retries, and a management dashboard out of the box.
- **You only send a handful of webhook types to a few endpoints** — a simple HTTP client with basic retry logic may be all you need.
- **You need global edge delivery with sub-100ms latency guarantees** — a dedicated delivery network will outperform an in-process library.

If you want to own your data, avoid per-message costs, and keep webhook delivery inside your existing infrastructure, this library is built for that.

---

## Installation

> **Coming soon:** UseBetter Webhooks is under active development. Package names and exports shown below reflect the planned API and may change before release.
## Prerequisites

- **Node.js 22+** (also supports Bun and Deno)
- **PostgreSQL 13+**, **MySQL**, or **SQLite**
- **TypeScript 5+** (recommended, but not required)
- An existing application with a supported ORM and framework

## Install the package

**npm:**
```bash
npm install @usebetterdev/webhook
```

**pnpm:**
```bash
pnpm add @usebetterdev/webhook
```

**yarn:**
```bash
yarn add @usebetterdev/webhook
```

**bun:**
```bash
bun add @usebetterdev/webhook
```

The main package (`@usebetterdev/webhook`) includes the core library and all adapters via subpath exports. The CLI (`@usebetterdev/webhook-cli`) is used via `npx` for generating migrations and managing webhook data — no installation required.

## Peer dependencies

You need a database driver and (optionally) a framework. Install the ones you use:

### ORM adapter

**Drizzle + pg:**
```bash
npm install drizzle-orm pg
```
```ts
import { drizzleWebhookAdapter } from "@usebetterdev/webhook/drizzle";
```

**Drizzle + postgres.js:**
```bash
npm install drizzle-orm postgres
```
```ts
import { drizzleWebhookAdapter } from "@usebetterdev/webhook/drizzle";
```

**Drizzle + better-sqlite3:**
```bash
npm install drizzle-orm better-sqlite3
```
```ts
import { drizzleWebhookAdapter } from "@usebetterdev/webhook/drizzle";
```

**Prisma:**
```bash
npm install @prisma/client
```
Requires `@prisma/client` >= 5.0.0.
```ts
import { prismaWebhookAdapter } from "@usebetterdev/webhook/prisma";
```

**Kysely:**
```bash
npm install kysely pg
```
```ts
import { kyselyWebhookAdapter } from "@usebetterdev/webhook/kysely";
```

### Framework adapter

**Hono:**
```bash
npm install hono
```
Requires `hono` >= 4.
```ts
import { betterWebhooksHono } from "@usebetterdev/webhook/hono";
```

**Express:**
```bash
npm install express
```
Requires `express` >= 4.
```ts
import { betterWebhooksExpress } from "@usebetterdev/webhook/express";
```

**Next.js:** Next.js is already installed in your project.
```ts import { betterWebhooksNext } from "@usebetterdev/webhooks/next"; ``` ## Run migrations The CLI reads your database connection from the `DATABASE_URL` environment variable. After installing, run it to create the required database tables: ```bash npx @usebetterdev/webhooks-cli migrate ```
Example output (may vary): ```text ✔ Connected to database ✔ Created table: webhook_events ✔ Created table: webhook_endpoints ✔ Created table: webhook_deliveries ✔ Migrations complete ```
> **Connection issues?** If the command fails, verify that `DATABASE_URL` is set in your environment and that the database is running and reachable.
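Before debugging the database itself, it is worth confirming that the connection string is even well-formed. The following is a minimal sketch (not part of the library) that parses `DATABASE_URL` with Node's built-in WHATWG `URL` class; the function name and output format are illustrative:

```typescript
// Sanity-check the shape of a Postgres connection string before
// blaming the database. Typical form:
//   postgres://user:password@host:5432/dbname
export function describeDatabaseUrl(raw: string): string {
  const url = new URL(raw); // throws TypeError if the string is not a URL
  if (url.protocol !== "postgres:" && url.protocol !== "postgresql:") {
    throw new Error(`Unexpected protocol: ${url.protocol}`);
  }
  const host = url.hostname || "localhost";
  const port = url.port || "5432"; // Postgres default port
  const database = url.pathname.replace(/^\//, "") || "(none)";
  return `host=${host} port=${port} db=${database}`;
}

console.log(describeDatabaseUrl("postgres://app:secret@localhost:5432/mydb"));
// → host=localhost port=5432 db=mydb
```

A `TypeError` here means the variable is malformed (or empty); a clean parse points the investigation toward the database itself.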
## Peer dependency reference | Adapter | Peer dependency | Minimum version | |---|---|---| | `@usebetterdev/webhooks/drizzle` | `drizzle-orm` | 0.30+ | | `@usebetterdev/webhooks/drizzle` | `pg` or `postgres` or `better-sqlite3` | — | | `@usebetterdev/webhooks/prisma` | `@prisma/client` | 5.0.0 | | `@usebetterdev/webhooks/kysely` | `kysely` | 0.27+ | | `@usebetterdev/webhooks/kysely` | `pg` | — | | `@usebetterdev/webhooks/hono` | `hono` | 4.0.0 | | `@usebetterdev/webhooks/express` | `express` | 4.0.0 | | `@usebetterdev/webhooks/next` | `next` | 14.0.0 |
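Putting the table together for one common combination: a Drizzle + Postgres + Hono project would end up with dependencies roughly like the following. This is an illustrative sketch, not a recommendation — the library version is a placeholder (the package is unreleased) and the other versions simply satisfy the minimums above:

```json
{
  "dependencies": {
    "@usebetterdev/webhooks": "latest",
    "drizzle-orm": "^0.30.0",
    "pg": "^8.11.0",
    "hono": "^4.0.0"
  }
}
```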
--- ## Quick Start This guide walks you through adding webhooks to an existing application. By the end, you'll have an event defined, an endpoint registered, a webhook delivered, and its signature verified. ## Prerequisites - A running PostgreSQL 13+ database - Node.js 22+ - An existing application with a database connection 1. **Install** ```bash npm install @usebetterdev/webhook @usebetterdev/webhook-verify zod ``` Install the adapter for your ORM: **Drizzle:** ```bash npm install @usebetterdev/webhook-drizzle ``` **Prisma:** ```bash npm install @usebetterdev/webhook-prisma ``` You'll install a framework adapter in step 5. 2. **Run migrations** ```bash npx @usebetterdev/webhook-cli migrate --database-url $DATABASE_URL ``` This creates the `webhook_endpoints`, `webhook_deliveries`, and `webhook_delivery_attempts` tables. 3. **Define an event** ```ts title="src/webhook/events.ts" import { z } from "zod"; import type { EventMap } from "@usebetterdev/webhook"; export const events = { "user.created": { description: "A new user was created", schema: z.object({ id: z.string(), email: z.string(), name: z.string(), }), }, } satisfies EventMap; ``` 4.
**Create the webhook instance** **Drizzle:** ```ts title="src/webhook/instance.ts" import { drizzle } from "drizzle-orm/node-postgres"; import { Pool } from "pg"; import { webhook, PollingRunner, parseEncryptionKeyFromEnv } from "@usebetterdev/webhook"; import { drizzleWebhookAdapter } from "@usebetterdev/webhook/drizzle"; import { events } from "./events.js"; const pool = new Pool({ connectionString: process.env.DATABASE_URL }); const db = drizzle(pool); const encryption = parseEncryptionKeyFromEnv(process.env.WEBHOOK_ENCRYPTION_KEY!); const adapter = drizzleWebhookAdapter(db); const runner = new PollingRunner({ adapter, encryption, interval: 2000, concurrency: 5, }); export const webhookInstance = webhook({ events, adapter, jobRunner: runner, encryption, }); runner.start(); process.on("SIGTERM", () => void runner.stop()); process.on("SIGINT", () => void runner.stop()); ``` **Prisma:** ```ts title="src/webhook/instance.ts" import { PrismaClient } from "@prisma/client"; import { webhook, PollingRunner, parseEncryptionKeyFromEnv } from "@usebetterdev/webhook"; import { prismaWebhookAdapter } from "@usebetterdev/webhook/prisma"; import { events } from "./events.js"; const prisma = new PrismaClient(); const encryption = parseEncryptionKeyFromEnv(process.env.WEBHOOK_ENCRYPTION_KEY!); const adapter = prismaWebhookAdapter(prisma); const runner = new PollingRunner({ adapter, encryption, interval: 2000, concurrency: 5, }); export const webhookInstance = webhook({ events, adapter, jobRunner: runner, encryption, }); runner.start(); process.on("SIGTERM", () => void runner.stop()); process.on("SIGINT", () => void runner.stop()); ``` :::danger[Encryption key is required] `WEBHOOK_ENCRYPTION_KEY` encrypts endpoint secrets at rest. Generate one with: ```bash node -e "console.log('v1:' + require('node:crypto').randomBytes(32).toString('hex'))" ``` (On Node 22, the global `crypto` is the WebCrypto API, which has no `randomBytes`, so the `node:crypto` module must be required explicitly.) Store the key securely. Losing it means existing endpoint secrets become unrecoverable. ::: 5.
**Mount the API routes** **Hono:** ```bash npm install @usebetterdev/webhook-hono ``` ```ts title="src/index.ts" import { Hono } from "hono"; import { createWebhookMiddleware } from "@usebetterdev/webhook-hono"; import { webhookInstance } from "./webhook/instance.js"; const app = new Hono(); app.use("*", createWebhookMiddleware(webhookInstance, { basePath: "/api/webhooks", })); export default app; ``` **Express:** ```bash npm install @usebetterdev/webhook-node ``` ```ts title="src/index.ts" import express from "express"; import { toNodeHandler } from "@usebetterdev/webhook-node"; import { webhookInstance } from "./webhook/instance.js"; const app = express(); app.use("/api/webhooks", toNodeHandler(webhookInstance)); app.listen(3000); ``` **Next.js:** ```bash npm install @usebetterdev/webhook-next ``` ```ts title="app/api/webhooks/[...path]/route.ts" import { toNextJsHandler } from "@usebetterdev/webhook-next"; import { webhookInstance } from "@/webhook/instance"; export const { GET, POST, PATCH, DELETE } = toNextJsHandler(webhookInstance, { basePath: "/api/webhooks", }); ``` 6. **Register an endpoint** Start your server, then create an endpoint that subscribes to `user.created` events: ```bash curl -s -X POST http://localhost:3000/api/webhooks/endpoints \ -H "Content-Type: application/json" \ -d '{ "url": "https://example.com/my-webhook", "events": ["user.created"] }' ``` ```json title="Response" { "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "url": "https://example.com/my-webhook", "events": ["user.created"], "secret": "5f2b9a3e1c7d84f09b3e6a2d5c8f1e7a4b0d6c9f2a5e8b1d4f7a0c3e6b9d2f5", "status": "active", "createdAt": "2026-03-16T14:30:00.000Z", "updatedAt": "2026-03-16T14:30:00.000Z" } ``` :::caution The `secret` is returned only once. Save it — you'll need it to verify signatures on the receiving end. ::: :::note Endpoint URLs must use HTTPS in production. `http://localhost:*` is allowed for development. ::: 7. 
**Send your first event** Trigger an event from a route handler in your application: **Hono:** ```ts title="src/index.ts" import { webhookInstance } from "./webhook/instance.js"; app.post("/users", async (c) => { const user = { id: "usr_001", email: "alice@example.com", name: "Alice" }; // ... save user to database ... await webhookInstance.send("user.created", user); return c.json(user, 201); }); ``` **Express:** ```ts title="src/index.ts" import { webhookInstance } from "./webhook/instance.js"; app.post("/users", async (req, res) => { const user = { id: "usr_001", email: "alice@example.com", name: "Alice" }; // ... save user to database ... await webhookInstance.send("user.created", user); res.status(201).json(user); }); ``` **Next.js:** ```ts title="app/api/users/route.ts" import { webhookInstance } from "@/webhook/instance"; export async function POST() { const user = { id: "usr_001", email: "alice@example.com", name: "Alice" }; // ... save user to database ... await webhookInstance.send("user.created", user); return Response.json(user, { status: 201 }); } ``` The `PollingRunner` picks up the delivery and POSTs it to every subscribed endpoint. 8. **Verify delivery** Check the delivery log via the API (use the endpoint ID from step 6): ```bash curl -s "http://localhost:3000/api/webhooks/deliveries?endpointId=a1b2c3d4-e5f6-7890-abcd-ef1234567890" ``` ```json title="Response" { "data": [ { "id": "d8e9f0a1-b2c3-4567-890a-bcdef1234567", "endpointId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", "eventName": "user.created", "status": "delivered", "attempts": 1, "maxAttempts": 5, "createdAt": "2026-03-16T14:30:01.000Z", "updatedAt": "2026-03-16T14:30:01.000Z" } ] } ``` A `"status": "delivered"` confirms the webhook reached your endpoint. 9. **Verify the signature (consumer side)** On the receiving end, verify that the webhook payload is authentic using `@usebetterdev/webhook-verify`. 
The `assertWebhookSignature` function throws a `WebhookSignatureError` if verification fails. Use `verifyWebhookSignature` instead if you prefer a boolean return. **Hono:** ```ts title="consumer/src/index.ts" import { Hono } from "hono"; import { assertWebhookSignature } from "@usebetterdev/webhook-verify"; const app = new Hono(); app.post("/my-webhook", async (c) => { await assertWebhookSignature({ headers: Object.fromEntries(c.req.raw.headers), body: await c.req.text(), secret: process.env.WEBHOOK_SECRET!, // the secret from step 6 }); const event = await c.req.json(); console.log("Received:", event.type, event.data); return c.json({ received: true }); }); ``` **Express:** ```ts title="consumer/src/index.ts" import express from "express"; import { assertWebhookSignature } from "@usebetterdev/webhook-verify"; const app = express(); app.post("/my-webhook", express.text({ type: "application/json" }), async (req, res) => { await assertWebhookSignature({ headers: req.headers, body: req.body, secret: process.env.WEBHOOK_SECRET!, // the secret from step 6 }); const event = JSON.parse(req.body); console.log("Received:", event.type, event.data); res.json({ received: true }); }); ``` **Next.js:** ```ts title="consumer/app/api/my-webhook/route.ts" import { assertWebhookSignature } from "@usebetterdev/webhook-verify"; export async function POST(request: Request) { const body = await request.text(); await assertWebhookSignature({ headers: Object.fromEntries(request.headers), body, secret: process.env.WEBHOOK_SECRET!, // the secret from step 6 }); const event = JSON.parse(body); console.log("Received:", event.type, event.data); return Response.json({ received: true }); } ``` The signature is verified using HMAC-SHA256 with the `X-Webhook-Signature` and `X-Webhook-Timestamp` headers. If the signature is invalid or the timestamp is too old (default: 5 minutes), verification fails. ## What just happened? 1. You defined a `user.created` event with a typed Zod schema. 2. 
The webhook instance registered the event and connected to your database via the ORM adapter. 3. When you called `webhookInstance.send()`, a delivery row was created for each subscribed endpoint. 4. The `PollingRunner` picked up pending deliveries, signed the payload with HMAC-SHA256 using the endpoint's secret, and POSTed it with `X-Webhook-ID`, `X-Webhook-Timestamp`, and `X-Webhook-Signature` headers. 5. Failed deliveries are retried automatically with exponential backoff (up to 5 attempts by default). ## Next steps - [Configuration](https://docs.usebetter.dev/webhooks/configuration/) — encryption keys, retry strategies, batch size, hooks - [Defining Events](https://docs.usebetter.dev/webhooks/defining-events/) — event schemas, payload validation, event catalog - [Security](https://docs.usebetter.dev/webhooks/security/) — signature scheme details, key rotation, HTTPS enforcement --- # Mail > Transactional email with React component templates, any provider, and a local preview server. ## Introduction > **Coming soon:** UseBetter Email is under active development. Follow the project on GitHub for updates. ## What is UseBetter Email? UseBetter Email is a library, not a service. Templates are React components (`.tsx` files) with type-safe variables, stored alongside your app code. Send via any provider (Resend, Postmark, SES, SMTP) through a unified API. Every send is logged to your own database. A local preview server (`npx @usebetterdev/email dev`) renders all your templates with sample data in a browser — no more "send test email to myself" loops. Works with any database your ORM adapter supports — Postgres, MySQL, SQLite, and more. 
### Planned features - **React component templates** — `.tsx` files with type-safe variables, colocated with your app - **Any provider** — Resend, Postmark, SES, SMTP through a unified send API - **Local preview server** — render templates with sample data in a browser - **Send logging** — every email logged to your own database with delivery status - **Type-safe end-to-end** — template variables checked at compile time - **Plugin-driven** — extend with custom providers, renderers, and tracking
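The template API itself is not yet published, but the "type-safe variables" idea in the feature list can be illustrated in plain TypeScript: a template carries the shape of its variables as a type parameter, so rendering with missing or wrong fields fails at compile time rather than producing a broken email. Everything below (`Template`, `welcomeEmail`, `render`) is a hypothetical sketch of the concept, not the library's API:

```typescript
// Hypothetical illustration of type-safe template variables —
// NOT the actual UseBetter Email API.
type Template<Vars> = {
  name: string;
  render: (vars: Vars) => string;
};

// The template declares exactly which variables it needs.
const welcomeEmail: Template<{ name: string; activationUrl: string }> = {
  name: "welcome",
  render: ({ name, activationUrl }) =>
    `Hi ${name}, activate your account: ${activationUrl}`,
};

// OK — the variables match the declared shape:
const body = welcomeEmail.render({
  name: "Alice",
  activationUrl: "https://example.com/activate",
});
console.log(body);
// → Hi Alice, activate your account: https://example.com/activate

// Compile-time error — `activationUrl` is missing:
// welcomeEmail.render({ name: "Alice" });
```

The planned library applies the same principle with React components instead of string templates, so the compiler checks template variables the same way it checks component props.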