security · compliance · audit-logging · gdpr · nis2 · enterprise

Audit Logging and Compliance Trails for Your SaaS: The Complete Guide

By SaaS Masters · 23 March 2026 · 9 min read

Every serious SaaS needs it, but few founders think about it early enough: audit logging. Who did what, when, and what was the previous value? Once you're dealing with enterprise customers, GDPR requests, or security incidents, a proper audit trail isn't "nice to have" — it's essential.

In this article, we'll build an audit logging system step by step that's scalable, searchable, and compliance-ready.

Why audit logging is indispensable

Audit logs answer the question: "What happened?" They're crucial for:

  • Compliance: GDPR requires you to demonstrate who accessed or modified personal data. NIS2 sets requirements for incident traceability.
  • Security: During a breach, you need to know exactly which data was accessed and by whom.
  • Customer questions: "Who changed my account settings?" — without an audit log, you can only guess.
  • Debugging: Sometimes an audit log is the only way to understand how data ended up in a particular state.
  • Enterprise sales: Large customers routinely ask about audit logging in their security questionnaires.

What should you log?

Not everything belongs in your audit log. Focus on state-changing actions and sensitive read operations:

Always log

  • User creation, modification, deletion
  • Role and permission changes
  • Login attempts (successful and failed)
  • Data exports and bulk operations
  • Billing and subscription changes
  • API key creation and revocation

Optionally log

  • Read operations on sensitive data (PII, financial)
  • Configuration changes
  • Searches on personal data

Don't log

  • Every pageview or API call (that's analytics, not audit)
  • Health checks and internal system calls

The data model

A good audit log event contains enough context to stand on its own — you should be able to understand it without pulling up the rest of your database.

interface AuditEvent {
  id: string;
  timestamp: Date;
  
  // Who
  actorId: string;
  actorType: 'user' | 'admin' | 'system' | 'api_key';
  actorEmail?: string;
  
  // What
  action: string; // e.g., 'user.role.updated'
  resourceType: string; // e.g., 'user'
  resourceId: string;
  
  // Context
  tenantId: string;
  ipAddress?: string;
  userAgent?: string;
  requestId?: string;
  
  // Details
  changes?: {
    field: string;
    oldValue: any;
    newValue: any;
  }[];
  metadata?: Record<string, any>;
}

Action naming conventions

Use a consistent resource.subresource.action notation:

user.created
user.updated
user.deleted
user.role.updated
team.member.added
team.member.removed
invoice.created
invoice.paid
settings.billing.updated
api_key.created
api_key.revoked
auth.login.success
auth.login.failed
auth.mfa.enabled
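A convention only helps if it's enforced. One option is to validate action names at write time — the regex and helper below are an illustrative sketch, not part of the audit service itself:

```typescript
// Illustrative validator for the resource.subresource.action convention:
// two or three lowercase segments separated by dots, underscores allowed.
const ACTION_PATTERN = /^[a-z_]+(\.[a-z_]+){1,2}$/;

export function isValidAction(action: string): boolean {
  return ACTION_PATTERN.test(action);
}
```

Calling this inside `logAudit()` (and throwing in development, warning in production) catches typos like `userCreated` before they pollute your log.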

Implementation in practice

Step 1: Database table

CREATE TABLE audit_logs (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  
  actor_id TEXT NOT NULL,
  actor_type TEXT NOT NULL,
  actor_email TEXT,
  
  action TEXT NOT NULL,
  resource_type TEXT NOT NULL,
  resource_id TEXT NOT NULL,
  
  tenant_id TEXT NOT NULL,
  ip_address INET,
  user_agent TEXT,
  request_id TEXT,
  
  changes JSONB,
  metadata JSONB,
  
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for fast lookups
CREATE INDEX idx_audit_tenant_timestamp 
  ON audit_logs (tenant_id, timestamp DESC);
CREATE INDEX idx_audit_actor 
  ON audit_logs (actor_id, timestamp DESC);
CREATE INDEX idx_audit_resource 
  ON audit_logs (resource_type, resource_id, timestamp DESC);
CREATE INDEX idx_audit_action 
  ON audit_logs (action, timestamp DESC);

Step 2: Audit service

// lib/audit.ts
import { db } from './db';

interface LogAuditParams {
  actorId: string;
  actorType: 'user' | 'admin' | 'system' | 'api_key';
  actorEmail?: string;
  action: string;
  resourceType: string;
  resourceId: string;
  tenantId: string;
  ipAddress?: string;
  userAgent?: string;
  requestId?: string;
  changes?: { field: string; oldValue: any; newValue: any }[];
  metadata?: Record<string, any>;
}

export async function logAudit(params: LogAuditParams): Promise<void> {
  // Audit logging should never block your main flow
  try {
    await db.auditLog.create({
      data: {
        actorId: params.actorId,
        actorType: params.actorType,
        actorEmail: params.actorEmail,
        action: params.action,
        resourceType: params.resourceType,
        resourceId: params.resourceId,
        tenantId: params.tenantId,
        ipAddress: params.ipAddress,
        userAgent: params.userAgent,
        requestId: params.requestId,
        changes: params.changes ?? undefined,
        metadata: params.metadata ?? undefined,
      },
    });
  } catch (error) {
    // Log to error tracking, but let the main flow continue
    console.error('Audit logging failed:', error);
    // Optional: send to a fallback (e.g., a queue)
  }
}
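The fallback mentioned in the catch block could look like the sketch below: buffer failed writes in memory and retry them from a scheduled job. This is an illustration of the pattern — in production a durable queue (SQS, Redis streams, etc.) is the safer choice, since an in-memory buffer is lost on restart.

```typescript
// Illustrative fallback for logAudit's catch block: keep failed audit
// events in memory and retry them later so they aren't silently dropped.
type PendingEvent = Record<string, unknown>;

const pending: PendingEvent[] = [];

export function bufferFailedAudit(event: PendingEvent): void {
  pending.push(event);
}

// Retry buffered events in order; if a write throws, the remaining
// events stay buffered for the next flush attempt.
export async function flushPendingAudits(
  write: (e: PendingEvent) => Promise<void>
): Promise<number> {
  let flushed = 0;
  while (pending.length > 0) {
    await write(pending[0]);
    pending.shift();
    flushed++;
  }
  return flushed;
}
```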

Step 3: Middleware for automatic context

// middleware/audit-context.ts
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

interface AuditContext {
  actorId: string;
  actorType: 'user' | 'admin' | 'system' | 'api_key';
  actorEmail?: string;
  tenantId: string;
  ipAddress?: string;
  userAgent?: string;
  requestId: string;
}

export const auditStorage = new AsyncLocalStorage<AuditContext>();

export function auditMiddleware(req, res, next) {
  const context: AuditContext = {
    actorId: req.user?.id ?? 'anonymous',
    actorType: req.apiKey ? 'api_key' : 'user',
    actorEmail: req.user?.email,
    tenantId: req.tenant?.id ?? 'unknown',
    ipAddress: req.ip,
    userAgent: req.headers['user-agent'],
    requestId: req.headers['x-request-id'] ?? randomUUID(),
  };
  
  auditStorage.run(context, () => next());
}

Now you can retrieve the context anywhere in your code:

import { auditStorage } from '../middleware/audit-context';
import { logAudit } from '../lib/audit';
import { db } from '../lib/db';

export async function updateUserRole(userId: string, newRole: string) {
  const ctx = auditStorage.getStore();
  if (!ctx) throw new Error('Audit context missing — is auditMiddleware installed?');

  const user = await db.user.findUnique({ where: { id: userId } });
  if (!user) throw new Error(`User ${userId} not found`);
  const oldRole = user.role;
  
  await db.user.update({
    where: { id: userId },
    data: { role: newRole },
  });
  
  await logAudit({
    ...ctx,
    action: 'user.role.updated',
    resourceType: 'user',
    resourceId: userId,
    changes: [{ field: 'role', oldValue: oldRole, newValue: newRole }],
  });
}

Automatically detecting changes

Manually tracking old/new values is error-prone. Automate it:

function detectChanges(
  oldObj: Record<string, any>,
  newObj: Record<string, any>,
  fields: string[]
): { field: string; oldValue: any; newValue: any }[] {
  const changes: { field: string; oldValue: any; newValue: any }[] = [];
  
  for (const field of fields) {
    const oldVal = oldObj[field];
    const newVal = newObj[field];
    
    if (JSON.stringify(oldVal) !== JSON.stringify(newVal)) {
      changes.push({
        field,
        oldValue: oldVal,
        newValue: newVal,
      });
    }
  }
  
  return changes;
}

// Usage:
const oldUser = await db.user.findUnique({ where: { id } });
const updatedUser = await db.user.update({ where: { id }, data: updates });

const changes = detectChanges(oldUser, updatedUser, [
  'name', 'email', 'role', 'status'
]);

if (changes.length > 0) {
  await logAudit({
    ...ctx,
    action: 'user.updated',
    resourceType: 'user',
    resourceId: id,
    changes,
  });
}

Sensitive data in audit logs

Be careful: audit logs themselves contain data you need to protect. Some guidelines:

What NOT to put in your audit log

  • Passwords (not even hashed)
  • Full credit card numbers
  • Social security numbers
  • API keys (log only the last 4 characters)

Applying masking

function maskSensitive(value: string, type: 'email' | 'key' | 'card'): string {
  switch (type) {
    case 'email': {
      const [user, domain] = value.split('@');
      return user[0] + '***@' + domain;
    }
    case 'key':
      return '***' + value.slice(-4);
    case 'card':
      return '****-****-****-' + value.slice(-4);
    default:
      return '***';
  }
}
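Masking is easiest to enforce if it happens in one place, right before events reach `logAudit()`. The wrapper below sketches that; the field-to-mask-type mapping is illustrative and should be adapted to your own schema.

```typescript
// Hypothetical wrapper: mask sensitive fields in a changes array
// before it is passed to logAudit. FIELD_MASKS is an assumption —
// map it to whatever fields your own models expose.
type MaskType = 'email' | 'key' | 'card';
type Change = { field: string; oldValue: any; newValue: any };

// maskSensitive as defined above
function maskSensitive(value: string, type: MaskType): string {
  switch (type) {
    case 'email': {
      const [user, domain] = value.split('@');
      return user[0] + '***@' + domain;
    }
    case 'key':
      return '***' + value.slice(-4);
    case 'card':
      return '****-****-****-' + value.slice(-4);
    default:
      return '***';
  }
}

const FIELD_MASKS: Record<string, MaskType> = {
  email: 'email',
  apiKey: 'key',
  cardNumber: 'card',
};

export function maskChanges(changes: Change[]): Change[] {
  return changes.map((c) => {
    const type = FIELD_MASKS[c.field];
    if (!type) return c; // non-sensitive fields pass through untouched
    return {
      field: c.field,
      oldValue: typeof c.oldValue === 'string' ? maskSensitive(c.oldValue, type) : c.oldValue,
      newValue: typeof c.newValue === 'string' ? maskSensitive(c.newValue, type) : c.newValue,
    };
  });
}
```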

Making audit logs searchable

An audit log you can't search is useless. Build an API for your admin panel:

// api/admin/audit-logs.ts
export async function getAuditLogs(params: {
  tenantId: string;
  actorId?: string;
  resourceType?: string;
  resourceId?: string;
  action?: string;
  startDate?: Date;
  endDate?: Date;
  page?: number;
  limit?: number;
}) {
  const where: any = { tenantId: params.tenantId };
  
  if (params.actorId) where.actorId = params.actorId;
  if (params.resourceType) where.resourceType = params.resourceType;
  if (params.resourceId) where.resourceId = params.resourceId;
  if (params.action) where.action = { startsWith: params.action };
  if (params.startDate || params.endDate) {
    where.timestamp = {};
    if (params.startDate) where.timestamp.gte = params.startDate;
    if (params.endDate) where.timestamp.lte = params.endDate;
  }

  const limit = params.limit ?? 50;
  const offset = ((params.page ?? 1) - 1) * limit;

  const [logs, total] = await Promise.all([
    db.auditLog.findMany({
      where,
      orderBy: { timestamp: 'desc' },
      take: limit,
      skip: offset,
    }),
    db.auditLog.count({ where }),
  ]);

  return { logs, total, page: params.page ?? 1, totalPages: Math.ceil(total / limit) };
}

Retention and scaling

Audit logs grow fast. Plan ahead:

Time-based partitioning

-- PostgreSQL partitioning by month
CREATE TABLE audit_logs (
  id UUID NOT NULL DEFAULT gen_random_uuid(),
  timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  -- ... other columns
) PARTITION BY RANGE (timestamp);

CREATE TABLE audit_logs_2026_03 PARTITION OF audit_logs
  FOR VALUES FROM ('2026-03-01') TO ('2026-04-01');

CREATE TABLE audit_logs_2026_04 PARTITION OF audit_logs
  FOR VALUES FROM ('2026-04-01') TO ('2026-05-01');
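Creating partitions by hand doesn't scale; a scheduled job should create next month's partition ahead of time. The helper below only generates the DDL (matching the table and naming above) — how you execute it, or whether you use an extension like pg_partman instead, depends on your stack.

```typescript
// Sketch: build next month's partition DDL so a cron job can create
// partitions before they're needed. Dates are computed in UTC to match
// the TIMESTAMPTZ partition key.
export function nextPartitionSql(from: Date): string {
  const start = new Date(Date.UTC(from.getUTCFullYear(), from.getUTCMonth() + 1, 1));
  const end = new Date(Date.UTC(start.getUTCFullYear(), start.getUTCMonth() + 1, 1));
  const label = `${start.getUTCFullYear()}_${String(start.getUTCMonth() + 1).padStart(2, '0')}`;
  const fmt = (d: Date) => d.toISOString().slice(0, 10);
  return (
    `CREATE TABLE IF NOT EXISTS audit_logs_${label} PARTITION OF audit_logs\n` +
    `  FOR VALUES FROM ('${fmt(start)}') TO ('${fmt(end)}');`
  );
}
```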

Retention policy

  • Hot storage (database): last 90 days — fast and searchable
  • Warm storage (S3/object storage): 90 days to 2 years — JSON exports, still searchable
  • Cold storage (archive): 2-7 years — compressed, for compliance

// Monthly archival cron job
async function archiveOldAuditLogs() {
  const cutoffDate = new Date();
  cutoffDate.setDate(cutoffDate.getDate() - 90);
  
  // Export to S3
  const oldLogs = await db.auditLog.findMany({
    where: { timestamp: { lt: cutoffDate } },
  });
  
  await uploadToS3(
    `audit-archive/${cutoffDate.toISOString().slice(0, 7)}.jsonl`,
    oldLogs.map(l => JSON.stringify(l)).join('\n')
  );
  
  // Remove from database
  await db.auditLog.deleteMany({
    where: { timestamp: { lt: cutoffDate } },
  });
}

Audit logs as a feature for your customers

Smart SaaS companies also make audit logs visible to their customers. This is a premium feature that enterprise customers expect:

  • Activity feed: show team members who did what
  • Security log: login history, MFA changes, API key usage
  • Compliance export: CSV/JSON export for auditors
  • Webhooks: send audit events to your customer's SIEM

This can literally be the difference in closing an enterprise deal. Customers will pay extra for it.
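The webhook idea could look like the sketch below: POST each audit event to a customer-configured endpoint, signed with a shared secret so the customer's SIEM can verify authenticity. The `WebhookTarget` shape and header name are assumptions for illustration.

```typescript
// Sketch: deliver an audit event to a customer's SIEM endpoint,
// signed with HMAC-SHA256 over the request body.
import { createHmac } from 'node:crypto';

interface WebhookTarget {
  url: string;
  secret: string; // shared with the customer for signature verification
}

export function signPayload(body: string, secret: string): string {
  return createHmac('sha256', secret).update(body).digest('hex');
}

export async function deliverAuditEvent(
  event: Record<string, unknown>,
  target: WebhookTarget
): Promise<Response> {
  const body = JSON.stringify(event);
  return fetch(target.url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The customer recomputes this signature before trusting the event
      'X-Audit-Signature': signPayload(body, target.secret),
    },
    body,
  });
}
```

Delivery should be retried with backoff on failure — a customer's endpoint being down must never block your own write path.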

Immutability: audit logs are sacred

An audit log you can modify is worthless. Ensure immutability:

  1. No UPDATE or DELETE on the audit_logs table (except for archival)
  2. Database permissions: the application user only has INSERT rights
  3. Optional: use append-only storage or blockchain-style hashing

-- Separate database user for audit writing
CREATE ROLE audit_writer;
GRANT INSERT ON audit_logs TO audit_writer;
-- No UPDATE, DELETE, or TRUNCATE
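The "blockchain-style hashing" option from point 3 can be sketched simply: each event stores a hash over its own content plus the previous event's hash, so editing or removing any row breaks every hash after it. Which fields you hash, and where you anchor the chain, are design choices — this is an illustration, not a full tamper-evidence scheme.

```typescript
// Sketch of hash chaining for tamper evidence: hash = SHA-256 over
// the previous event's hash concatenated with this event's JSON.
import { createHash } from 'node:crypto';

export function chainHash(prevHash: string, event: Record<string, unknown>): string {
  return createHash('sha256')
    .update(prevHash)
    .update(JSON.stringify(event))
    .digest('hex');
}

// Verify an ordered list of events against their stored hashes.
export function verifyChain(
  events: { payload: Record<string, unknown>; hash: string }[],
  genesis = ''
): boolean {
  let prev = genesis;
  for (const e of events) {
    if (chainHash(prev, e.payload) !== e.hash) return false;
    prev = e.hash;
  }
  return true;
}
```

An auditor (or a nightly job) can then re-verify the chain; a mismatch pinpoints the first tampered event.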

Production checklist

Before taking audit logging live:

  • All state-changing actions are logged
  • Login/logout and failed attempts are logged
  • Sensitive data is masked
  • Audit logs are immutable (no update/delete)
  • Retention policy is defined and automated
  • Admin panel has an audit log viewer
  • Indexes are in place for common queries
  • Audit logging fails gracefully (doesn't block main flow)
  • Partitioning or archival is set up for growth
  • Export functionality for compliance audits

Conclusion

Audit logging is one of those things you'd rather implement too early than too late. Start simple — a table, a `logAudit()` function, and consistent naming conventions — and expand as you grow.

The investment pays for itself at your first security incident, your first enterprise customer, or your first GDPR request. And with the NIS2 directive affecting more and more companies, the question isn't if you need audit logging, but when.

Start today. Your future self will thank you.