
How Audit Works

UseBetter Audit captures every INSERT, UPDATE, and DELETE transparently — without you adding captureLog() calls to each route or service. This page walks through exactly what happens on every mutation so you understand what is recorded, how actor identity is tracked, what enrichment layers on, and what guarantees you get.

If you just want to get started, skip to Quick Start. Come back here when you want to understand what happens beneath the surface.

The most common approach to audit logging is to call a logging function manually after each mutation:

await db.update(usersTable).set({ name }).where(eq(usersTable.id, id));
await audit.log({ table: "users", operation: "UPDATE", actorId: req.user.id });

It works, but it has the same flaw as WHERE-clause multi-tenancy: a single missed call leaves a gap in your audit trail. The more mutations your application has, the more likely someone will forget one — especially across teams, across services, and over time.

UseBetter Audit moves capture out of your application code and into the ORM layer. Every mutation goes through a proxy or extension that intercepts it automatically. Even if a route handler does not call anything explicitly, the audit entry is written.

Three layers collaborate on every INSERT, UPDATE, or DELETE:

  1. ORM proxy / extension — wraps your Drizzle or Prisma client and intercepts every write before it reaches the database.
  2. AsyncLocalStorage — carries the actor identity (set by framework middleware at the top of the request) through every await boundary without parameter passing.
  3. Adapter write — the adapter builds a structured audit_log row — including enrichment rules — and writes it to your database.

The interactive playground on this page lets you run mutations, switch actors, enable enrichment rules, and watch what lands in audit_logs. The log history at the bottom accumulates entries across runs — the same view you get when you query audit.query() in your application. Running an INSERT scenario, for example, shows the ORM proxy intercepting the write and producing an entry with beforeData: null and afterData set to the new row.

When you wrap your ORM client with withAuditProxy (Drizzle) or withAuditExtension (Prisma), every subsequent mutation goes through a proxy layer before it reaches the database driver.

Drizzle uses a JavaScript Proxy object to intercept db.insert(), db.update(), and db.delete() calls. The proxy executes the original query and, if the table is in auditTables, immediately runs capture with the result.

Prisma uses a Prisma Client Extension ($extends) to add beforeQuery and afterQuery hooks on write operations. Same result, different mechanism — the adapter abstracts this away.

Neither approach requires you to modify your route handlers. Wrap once at setup time; capture happens automatically from that point forward.
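The Proxy technique can be sketched in miniature. This is a toy illustration, not the real adapter: withToyAuditProxy, the table-name-first call shape, and the capture callback are all simplified stand-ins (Drizzle's real proxy wraps the query-builder chain, not a flat method call).

```typescript
type CaptureFn = (op: string, result: unknown) => void;

// Toy version of the interception idea: wrap a db object so that calls to
// insert/update/delete run the original query, then invoke capture with
// the result — without the caller changing anything.
function withToyAuditProxy<T extends object>(
  db: T,
  auditTables: Set<string>,
  capture: CaptureFn,
): T {
  const auditedOps = new Set(["insert", "update", "delete"]);
  return new Proxy(db, {
    get(target, prop, receiver) {
      const original = Reflect.get(target, prop, receiver);
      if (typeof original !== "function" || !auditedOps.has(String(prop))) {
        return original; // reads and other members pass through untouched
      }
      return (table: string, ...args: unknown[]) => {
        const result = (original as (...a: unknown[]) => unknown).call(target, table, ...args);
        if (auditTables.has(table)) capture(String(prop), result); // capture after the write
        return result;
      };
    },
  });
}
```

The caller keeps using the same object it always had; writes to tables outside auditTables are executed but not captured.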

Node.js AsyncLocalStorage propagates values through async call chains without explicit parameter passing. The framework middleware sets up a context scope at the very beginning of each request:

Request arrives
└─ betterAuditHono() middleware
   ├─ Extracts actorId from Authorization: Bearer <jwt>
   ├─ Creates AuditContext { actorId: "user-42" }
   ├─ Stores context in AsyncLocalStorage
   └─ Route handler runs
      └─ auditedDb.insert(usersTable).values(body)
         └─ Proxy intercepts
            ├─ AsyncLocalStorage.getStore() → { actorId: "user-42" }
            └─ captureLog({ actorId: "user-42", ... })

The context is scoped to the request. When the request ends, the scope is cleaned up. Concurrent requests each have their own scope — context never leaks between them.
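The isolation guarantee is easy to demonstrate with the Node.js built-in directly. handleRequest below is a stand-in for the middleware-plus-handler pair, not part of the library API:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

type AuditContext = { actorId: string };
const auditContext = new AsyncLocalStorage<AuditContext>();

// Simulates what the framework middleware does: open a context scope per
// "request", then read the actor back deep inside the async call chain.
async function handleRequest(actorId: string): Promise<string | null> {
  return auditContext.run({ actorId }, async () => {
    // Yield to the event loop so concurrent requests interleave.
    await new Promise((resolve) => setTimeout(resolve, 5));
    // This is what the proxy's capture step would see for this request.
    return auditContext.getStore()?.actorId ?? null;
  });
}
```

Running two requests concurrently — Promise.all([handleRequest("user-42"), handleRequest("user-7")]) — each resolves to its own actor, even though their awaits interleave on the same event loop.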

For INSERT, the proxy reads the new row from the query result — this is afterData. beforeData is null.

For DELETE, the proxy issues a SELECT before the delete executes to capture the current state of the row. This becomes beforeData; afterData is null.

For UPDATE, the proxy reads both the previous state (pre-query SELECT) and the new state (post-query result or re-fetch). Both appear in the entry so reviewers can see exactly what changed, field by field.

The snapshot is always the full row — not just the changed columns. Every entry is self-contained: it tells the complete story of the record at that point in time.
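The UPDATE snapshot rule can be shown with a toy in-memory table standing in for your database (the real adapter issues SQL SELECTs instead; auditedUpdate is a hypothetical name for illustration):

```typescript
type Row = Record<string, unknown>;

// Toy in-memory "users" table.
const users = new Map<string, Row>([
  ["u1", { id: "u1", name: "Ada", email: "ada@example.com" }],
]);

function auditedUpdate(id: string, patch: Row) {
  const beforeData = { ...users.get(id)! };       // pre-query SELECT: full previous row
  users.set(id, { ...users.get(id)!, ...patch }); // the actual UPDATE
  const afterData = { ...users.get(id)! };        // post-query result: full new row
  return { operation: "UPDATE" as const, recordId: id, beforeData, afterData };
}
```

After auditedUpdate("u1", { name: "Grace" }), both snapshots still carry the untouched email field — the entry records the whole row, not just the changed column.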

Before the entry is written, the adapter checks the enrichment registry. Rules registered with audit.enrich() are matched by table name and operation:

audit.enrich("users", "INSERT", {
  label: "New user registered",
  severity: "low",
  compliance: ["soc2"],
});

audit.enrich("users", "DELETE", {
  label: "User account deleted",
  severity: "critical",
  compliance: ["gdpr", "soc2"],
  redact: ["email", "phone"],
});

The enrichment fields (label, severity, compliance) are merged into the entry, and any fields listed under redact are removed from beforeData / afterData before storage.

Enrichment is declarative and registered once at startup. Your route handlers do not know it is happening.
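The matching-and-merging step can be sketched as a small registry. This is a simplified model of the behavior described above, not the library's internals; the "table:operation" key and applyEnrichment are illustrative:

```typescript
type Severity = "low" | "medium" | "high" | "critical";
type EnrichmentRule = { label?: string; severity?: Severity; compliance?: string[]; redact?: string[] };
type Entry = {
  tableName: string;
  operation: string;
  beforeData: Record<string, unknown> | null;
  afterData: Record<string, unknown> | null;
  label?: string;
  severity?: Severity;
  compliance?: string[];
};

// Toy registry keyed by "table:operation", mirroring audit.enrich().
const rules = new Map<string, EnrichmentRule>();
const enrich = (table: string, op: string, rule: EnrichmentRule) =>
  rules.set(`${table}:${op}`, rule);

function applyEnrichment(entry: Entry): Entry {
  const rule = rules.get(`${entry.tableName}:${entry.operation}`);
  if (!rule) return entry; // no rule: the entry is written unchanged
  for (const field of rule.redact ?? []) {
    // Redaction strips field values; the row itself is still written.
    if (entry.beforeData) delete entry.beforeData[field];
    if (entry.afterData) delete entry.afterData[field];
  }
  return { ...entry, label: rule.label, severity: rule.severity, compliance: rule.compliance };
}
```

Note that the unmatched path and the matched path both return an entry — enrichment can add or strip fields, but it never produces "no entry".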

After enrichment, the adapter inserts a row into audit_logs:

{
  id: string;             // UUID
  timestamp: Date;        // server clock at capture time
  tableName: string;      // e.g. "users"
  operation: "INSERT" | "UPDATE" | "DELETE";
  recordId: string;       // primary key of the mutated row
  actorId: string | null; // from AsyncLocalStorage, or null if extraction failed
  beforeData: Record<string, unknown> | null; // redacted fields removed
  afterData: Record<string, unknown> | null;  // redacted fields removed
  label: string | undefined;
  severity: "low" | "medium" | "high" | "critical" | undefined;
  compliance: string[] | undefined;
}

You can query this table directly or use audit.query():

const result = await audit.query()
  .resource("users")
  .actor("user-42")
  .since("24h")
  .list();

A missing captureLog() call cannot create a gap


Capture is delegated to the proxy layer, not your application code. There is no captureLog() to forget. Every write that goes through auditedDb is captured.

AsyncLocalStorage is a Node.js built-in. Its isolation guarantee is the same one that makes session stores and request-scoped loggers safe under high concurrency. One request’s actorId is never visible to another request.

If actor extraction fails, the request proceeds and the entry is still written — with actorId: null. This is an explicit signal that attribution was unavailable, not that capture was skipped. To fail closed instead, configure onError on the middleware.
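The fail-open rule amounts to a null-coalescing step at capture time. A minimal sketch, with buildEntry as a hypothetical stand-in for the adapter's entry construction:

```typescript
type AuditContext = { actorId: string };

// A missing context scope never blocks the write; it just produces an entry
// whose actorId is explicitly null ("attribution unavailable").
function buildEntry(
  store: AuditContext | undefined,
  tableName: string,
  operation: "INSERT" | "UPDATE" | "DELETE",
) {
  return { tableName, operation, actorId: store?.actorId ?? null };
}
```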

Enrichment rules add fields to the stored entry; they never suppress or delay the write. The redact option removes sensitive field values from beforeData / afterData, but the row itself is always written. You cannot accidentally configure enrichment in a way that drops log entries.

What happens, how, and why it is reliable:

  • Automatic capture: the ORM proxy / Prisma extension intercepts all writes, so there is no captureLog() to forget.
  • Actor attribution: AsyncLocalStorage propagates the actor from middleware with no parameter passing; concurrent requests never share context.
  • Before/after snapshots: a pre-query SELECT (UPDATE/DELETE) plus the post-query result (INSERT/UPDATE) record the full row state at each point in time.
  • Fail-open: a missing actor yields actorId: null and the request still proceeds, so the audit trail has no gaps.
  • Enrichment: declarative rules are registered once at startup; route handlers never need to know.
  • Storage: entries live in your own database — no external service; query with your existing tooling.
  • Actor Context — extractors, mergeAuditContext(), and background job contexts
  • Enrichment — labels, severity, compliance tags, and field redaction in detail
  • Adapters — ORM adapter reference and error handling
  • Quick Start — working example with ORM + framework middleware in one page