PostgreSQL RLS with JWTs for Granular API Authorization
The Fragility of Application-Layer Authorization
In modern multi-tenant SaaS applications, authorization is a critical, cross-cutting concern. The standard approach involves middleware in the application layer that inspects an incoming request, validates a user's session (often via a JWT), and then injects authorization constraints into every subsequent database query. For a senior engineer, this pattern feels uncomfortably familiar—and fragile.
Consider a typical Node.js/Express implementation for a project management tool:
// A typical, but flawed, service layer function
async function getTask(taskId, user) {
  // 1. Fetch the task
  const { rows: [task] } = await db.query('SELECT * FROM tasks WHERE id = $1', [taskId]);
  if (!task) { throw new NotFoundError('Task not found'); }

  // 2. Fetch the project it belongs to
  const { rows: [project] } = await db.query('SELECT * FROM projects WHERE id = $1', [task.project_id]);

  // 3. APPLICATION-LAYER AUTHORIZATION CHECK
  if (project.organization_id !== user.organization_id) {
    throw new UnauthorizedError('Access denied');
  }

  // Add more checks for user roles, etc.
  // const { rows: [membership] } = await db.query('SELECT role FROM project_members WHERE ...');
  // if (user.role !== 'admin' && !membership) { ... }

  return task;
}
This approach suffers from several fundamental problems:

* Repetition: the same tenancy and role checks are re-implemented in every service function, and they inevitably drift out of sync.
* Extra round trips: queries like the project lookup above exist purely to authorize, adding latency to every request.
* Fragility: a single developer who forgets a check, or omits a manual WHERE organization_id = ? clause, creates a critical security vulnerability.

This is a solved problem, but the solution requires a paradigm shift: move the authorization logic from the application into the database itself. By leveraging PostgreSQL's powerful Row-Level Security (RLS) feature, we can create a robust, centralized, and highly performant authorization layer that is impossible for application code to bypass.
This article will demonstrate how to build this system by securely propagating JWT claims into the database session to drive dynamic RLS policies.
The Database-Centric Authorization Pattern with RLS and JWTs
Row-Level Security (RLS) is a PostgreSQL feature that allows you to define policies on a table, restricting which rows users can view, modify, or delete. When RLS is enabled on a table, every query against it from a role subject to RLS (superusers, roles with BYPASSRLS, and, by default, the table owner are exempt) has the policy's security predicate implicitly added to its WHERE clause.
The core of our pattern is to make these policies dynamic, based on the identity of the application user making the request. Since our application authenticates users via JWTs, the claims within the token (e.g., user_id, organization_id, role) are the perfect source of truth for our RLS policies.
The flow looks like this:
- The client sends a request with a JWT in the Authorization: Bearer header.
- An API middleware verifies the JWT and extracts its claims.
- The middleware writes those claims into transaction-local configuration variables on the database connection.
- The application runs its queries without any manual WHERE organization_id = ? clauses.
- PostgreSQL sees the query, applies the RLS policy, which reads the session-level variables, and automatically filters the results. The application code is blissfully unaware, receiving only the data it's authorized to see.
This creates a hard security boundary at the data layer. No matter how flawed the application logic is, queries issued through the application's database role cannot return another tenant's data.
Implementation Deep Dive: The Core Mechanism
Let's build this system from the ground up. We'll use a schema for a multi-tenant project management application.
1. Schema Setup
First, our database schema. Notice the organization_id on almost every table, which is the cornerstone of our multi-tenancy strategy.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE organizations (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  name TEXT NOT NULL
);

CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  organization_id UUID NOT NULL REFERENCES organizations(id),
  email TEXT NOT NULL UNIQUE,
  role TEXT NOT NULL DEFAULT 'member' -- e.g., 'admin', 'member'
);

CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  organization_id UUID NOT NULL REFERENCES organizations(id),
  name TEXT NOT NULL
);

CREATE TABLE tasks (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  project_id UUID NOT NULL REFERENCES projects(id),
  -- We can get organization_id via a JOIN, but denormalizing it here
  -- can simplify RLS policies and improve performance.
  organization_id UUID NOT NULL REFERENCES organizations(id),
  title TEXT NOT NULL,
  assignee_id UUID REFERENCES users(id)
);
-- Create a dedicated, non-superuser role for our application
CREATE ROLE app_user NOLOGIN;
GRANT CONNECT ON DATABASE my_database TO app_user;
GRANT USAGE ON SCHEMA public TO app_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO app_user;
-- Note: We will grant permissions on future tables as well.
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO app_user;
2. The JWT Bridge: Propagating Claims to PostgreSQL
This is the most critical part of the pattern. How do we securely pass JWT claims to the database session? The wrong way is to try to embed them in query comments or use temporary tables. The right way is to use session-level configuration variables.

While you can issue a SET my.vars.user_id = '...' per claim, juggling many ad-hoc variables quickly gets messy and conflict-prone. The most robust method is to store all claims as a single JSON object under one namespaced setting, written with set_config() and read back with current_setting().
Here is a production-ready Node.js middleware using the node-postgres (pg) library. This pattern ensures that every time a connection is checked out from the pool for a request, it is properly configured with the request's context.
// file: db.js
const { Pool } = require('pg');

const pool = new Pool({ /* connection details */ });

// A wrapper to get a client from the pool and set the JWT claims
async function getClient(jwtClaims) {
  const client = await pool.connect();
  // Start a transaction so the setting below is scoped to this
  // transaction on this connection.
  await client.query('BEGIN');
  // Use a single JSON object for all claims. It's cleaner and more extensible.
  // The third argument 'true' to set_config makes the setting local to the
  // current transaction, so it cannot leak between pooled connections.
  const claimsJson = JSON.stringify(jwtClaims || {});
  await client.query(`SELECT set_config('app.jwt.claims', $1, true)`, [claimsJson]);
  return client;
}

module.exports = { getClient };
// file: authMiddleware.js
const jwt = require('jsonwebtoken');
const { getClient } = require('./db');

async function attachDbClient(req, res, next) {
  let client;
  try {
    const token = req.headers.authorization?.split(' ')[1];
    const claims = token ? jwt.verify(token, process.env.JWT_SECRET) : {};
    // Attach a configured client to the request object
    client = await getClient(claims);
    req.dbClient = client;
    next();
  } catch (err) {
    if (client) {
      await client.query('ROLLBACK');
      client.release();
    }
    return res.status(401).send('Authentication failed.');
  }
}
// file: transactionMiddleware.js
// This middleware ensures every request is wrapped in a transaction
// and the client is released.
// CAVEAT: Express's next() does not return a promise, so `await next()`
// will not pause until the route handler finishes. The shape below works
// as written in promise-aware frameworks like Koa; with Express, commit
// and release in the route handler itself or on the response's 'finish' event.
async function handleTransaction(req, res, next) {
  if (!req.dbClient) {
    return next(new Error('Database client not attached to request.'));
  }
  try {
    // The actual route handler runs here
    await next();
    // If the handler didn't send a response, commit.
    if (!res.headersSent) {
      await req.dbClient.query('COMMIT');
    }
  } catch (error) {
    console.error('Transaction failed, rolling back:', error);
    await req.dbClient.query('ROLLBACK');
    if (!res.headersSent) {
      res.status(500).send('An internal error occurred.');
    }
  } finally {
    req.dbClient.release();
  }
}
// file: server.js
// app.use(attachDbClient);
// app.use(handleTransaction);
//
// app.get('/projects', async (req, res) => {
//   // Notice: no WHERE clause for tenancy!
//   const { rows } = await req.dbClient.query('SELECT * FROM projects');
//   res.json(rows);
// });
This setup is robust. Every request gets a dedicated, transaction-scoped client from the pool. We set the claims, the business logic runs, and then we commit/rollback and release the client. The set_config variable is local to the transaction, so it won't leak between requests even if connection pooling gets complicated.
3. Crafting Granular RLS Policies
Now that our claims are available inside the database via current_setting('app.jwt.claims', true), we can write our policies.
First, we need some helper functions to safely extract claims from the JSONB object. Creating functions is superior to casting directly in each policy: they are reusable, easier to test, and can be marked LEAKPROOF.
-- Keep the helper functions in a dedicated schema
CREATE SCHEMA IF NOT EXISTS auth;
GRANT USAGE ON SCHEMA auth TO app_user;

-- Helper function to get the current organization ID
CREATE OR REPLACE FUNCTION auth.organization_id()
RETURNS UUID AS $$
BEGIN
  -- The 'true' flag makes current_setting return NULL if the setting is absent
  RETURN (current_setting('app.jwt.claims', true)::jsonb ->> 'org_id')::uuid;
EXCEPTION
  -- Handle cases where the claim is missing or not a valid UUID
  WHEN OTHERS THEN
    RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE LEAKPROOF;

-- Helper function to get the current user's role
CREATE OR REPLACE FUNCTION auth.user_role()
RETURNS TEXT AS $$
BEGIN
  RETURN current_setting('app.jwt.claims', true)::jsonb ->> 'role';
EXCEPTION
  WHEN OTHERS THEN
    RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE LEAKPROOF;

-- Helper function to get the current user ID
CREATE OR REPLACE FUNCTION auth.user_id()
RETURNS UUID AS $$
BEGIN
  RETURN (current_setting('app.jwt.claims', true)::jsonb ->> 'sub')::uuid;
EXCEPTION
  WHEN OTHERS THEN
    RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE LEAKPROOF;
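The null-on-garbage behavior these helpers aim for can be modeled in plain JavaScript, which is handy for unit-testing your claim-handling logic without a database. Note that getClaim is a hypothetical helper written for this sketch, not part of any library:

```javascript
// A plain-JavaScript model of the auth.* helpers above: given the raw
// value of the app.jwt.claims setting, extract one claim, returning null
// for missing or malformed input (mirroring the plpgsql EXCEPTION blocks).
function getClaim(rawSetting, key) {
  try {
    const claims = JSON.parse(rawSetting ?? '{}');
    return claims[key] ?? null;
  } catch {
    return null; // malformed JSON behaves like a missing setting
  }
}

// Example raw setting value as the middleware would write it
const raw = '{"sub":"u-1","org_id":"org-1","role":"member"}';
```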
Note on LEAKPROOF: this attribute declares that a function has no side effects and reveals no information about its arguments, not even through error messages. That promise lets the planner evaluate the function early, alongside security-relevant predicates, without risking an information leak. Be aware that it is an assertion you make rather than something PostgreSQL verifies, and only superusers can apply it, so reserve it for small, audited functions like these.
Now, let's enable RLS and create the policies.
-- Enable RLS on the tables
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;
-- Policy for projects: Users can only see projects in their own organization.
CREATE POLICY select_projects_for_organization
ON projects
FOR SELECT
USING (organization_id = auth.organization_id());
-- Policy for tasks: thanks to the denormalized organization_id column,
-- this stays a simple equality check instead of a JOIN against projects.
CREATE POLICY select_tasks_for_organization
ON tasks
FOR SELECT
USING (
  organization_id = auth.organization_id()
);
-- Now for INSERTs, UPDATEs, DELETEs. We need to be more restrictive.
-- A user must be an 'admin' to create a project.
CREATE POLICY insert_projects_for_admins
ON projects
FOR INSERT
WITH CHECK (
organization_id = auth.organization_id() AND
auth.user_role() = 'admin'
);
-- A user can update a project's name if they are an admin in that org.
CREATE POLICY update_projects_for_admins
ON projects
FOR UPDATE
USING (organization_id = auth.organization_id() AND auth.user_role() = 'admin')
WITH CHECK (organization_id = auth.organization_id()); -- Ensure they can't change the org_id
-- A member can update tasks within their organization.
-- Note: an RLS policy cannot compare the old and new versions of a row,
-- and a subquery on the same table inside the policy would raise
-- "infinite recursion detected in policy". To stop members from moving
-- a task to another project, use a BEFORE UPDATE trigger instead.
CREATE POLICY update_tasks_for_members
ON tasks
FOR UPDATE
USING (organization_id = auth.organization_id())
WITH CHECK (organization_id = auth.organization_id());
USING applies to rows that already exist in the table (for SELECT, UPDATE, DELETE). WITH CHECK applies to rows as they will exist after the command completes: newly inserted rows, or the post-update image of modified rows. A well-scoped UPDATE policy usually needs both.
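To make the USING / WITH CHECK split concrete, here is a hypothetical JavaScript model of how an UPDATE is evaluated under an admin-only policy like update_projects_for_admins. The predicates mirror the SQL above; all names are illustrative:

```javascript
// Model: for an UPDATE, the USING predicate gates which existing rows are
// visible and updatable; WITH CHECK validates the row's post-update image.
const usingPred = (row, c) => row.organization_id === c.org_id && c.role === 'admin';
const withCheckPred = (row, c) => row.organization_id === c.org_id;

function tryUpdate(oldRow, newRow, claims) {
  if (!usingPred(oldRow, claims)) return 'row invisible (0 rows updated)';
  if (!withCheckPred(newRow, claims)) return 'ERROR: new row violates policy';
  return 'updated';
}

const claims = { org_id: 'org-1', role: 'admin' };
const row = { id: 'p1', organization_id: 'org-1', name: 'Apollo' };
```

Renaming a project succeeds, but trying to move it to another organization fails WITH CHECK, and a row from another tenant is simply invisible.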
With these policies in place, our application code becomes dramatically simpler:
// The new, simpler service layer function
async function getTask(taskId, req) {
  // The RLS policy is applied automatically by PostgreSQL.
  // If the user is not in the right organization, this query will return no rows.
  const { rows } = await req.dbClient.query('SELECT * FROM tasks WHERE id = $1', [taskId]);
  if (rows.length === 0) {
    throw new NotFoundError('Task not found or access denied');
  }
  return rows[0];
}
The authorization logic is gone. The code is simpler, safer, and more maintainable.
Performance Considerations and Benchmarking
RLS is not free. It adds predicates to your queries, and the query planner must account for them. Poorly implemented RLS can cripple your application's performance.
1. `EXPLAIN ANALYZE` is Your Best Friend
Always analyze the query plans for your key queries with RLS enabled. Let's see the effect of our policy on a simple SELECT.
-- Simulate an application user's session
BEGIN;
SELECT set_config('app.jwt.claims', '{"org_id": "...", "role": "admin"}', true);
-- Analyze the query
EXPLAIN ANALYZE SELECT * FROM projects WHERE name LIKE 'A%';
ROLLBACK;
The output will show the RLS policy predicate being applied (assuming an index such as projects_name_idx on name):
Bitmap Heap Scan on projects  (cost=...)
  Recheck Cond: (name ~~ 'A%')
  Filter: (organization_id = auth.organization_id())
  ->  Bitmap Index Scan on projects_name_idx  (cost=...)
        Index Cond: (name ~~ 'A%')
Notice the Filter line. This is RLS in action. The planner is using an index on name and then filtering the results based on the RLS policy.
2. Indexing for Policies
This is the single most important performance optimization. The columns used in your RLS policies MUST be indexed. In our case, organization_id is used in almost every policy. Without an index on (organization_id), PostgreSQL would have to perform a full table scan for every single query on an RLS-enabled table, just to evaluate the policy. This would be disastrous.
-- Critical indexes for our RLS policies
CREATE INDEX ON users (organization_id);
CREATE INDEX ON projects (organization_id);
CREATE INDEX ON tasks (organization_id);
CREATE INDEX ON tasks (project_id);
With these indexes, the planner can efficiently find the relevant subset of data for the current organization before executing the rest of the query.
3. Function Volatility and Inlining
The volatility of your helper functions (STABLE, IMMUTABLE, VOLATILE) has a significant impact.
* VOLATILE (the default): the planner assumes the result can change at any time and may re-evaluate the function for every row. Never use this for RLS helper functions.
* STABLE: the function's result is assumed not to change within a single statement, so the planner may evaluate it just once per query. This is the right choice for our auth.* functions, since the JWT claims cannot change mid-query.
* IMMUTABLE: the result is guaranteed to be the same forever for the same inputs, allowing aggressive caching and constant-folding. Not suitable for functions that read current_setting(), but ideal for pure utility functions.
PostgreSQL can often inline simple SQL functions, which is the most performant option. By keeping our helper functions simple, we encourage the planner to do this.
4. Benchmarking the Overhead
Let's run a simple benchmark using k6 to demonstrate the impact. Assume we have a /projects endpoint.
// k6_script.js
import http from 'k6/http';
import { check } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

const JWT = '...'; // A valid JWT for a test user

export default function () {
  const res = http.get('http://localhost:3000/projects', {
    headers: { Authorization: `Bearer ${JWT}` },
  });
  check(res, { 'status was 200': (r) => r.status === 200 });
}
Run this benchmark against two versions of your code:
- With RLS enabled and proper indexes.
- With RLS disabled, relying on manual WHERE organization_id = ? clauses in the application code.

In a well-configured system with correct indexing, the performance difference should be negligible (typically under 5% overhead). The security and maintainability gains far outweigh this minor cost.
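To turn the two benchmark runs into a single overhead number, a quick calculation over the median request durations from k6's summary suffices. The latency figures below are illustrative placeholders, not measured results:

```javascript
// Percentage overhead of the RLS run relative to the manual-WHERE baseline.
function overheadPct(baselineMs, rlsMs) {
  return ((rlsMs - baselineMs) / baselineMs) * 100;
}

const baselineP50 = 12.0; // median latency, manual WHERE clauses (ms)
const rlsP50 = 12.4;      // median latency, RLS-enabled with indexes (ms)
const overhead = overheadPct(baselineP50, rlsP50);
```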
Advanced Patterns and Edge Cases
Implementing this pattern in a real-world system requires handling several important edge cases.
1. The `SUPERUSER` and `BYPASSRLS` Problem
A PostgreSQL SUPERUSER or any role with the BYPASSRLS attribute will ignore all RLS policies, and the table owner bypasses them too unless you run ALTER TABLE ... FORCE ROW LEVEL SECURITY. This is a massive security risk if misconfigured. The database role used by your application (app_user in our example) must NOT be a superuser, must NOT own the tables, and must NOT have BYPASSRLS.
This creates a challenge for administrative tasks.
2. Migrations and Background Jobs
How do you run database migrations or background jobs that need to operate across all tenants?
* Pattern 1: The Privileged Migration Role (Recommended)
Create a separate database role used only by your migration tool (e.g., migrator_role). This role can be granted BYPASSRLS, or it can simply own the tables, since owners bypass RLS unless FORCE ROW LEVEL SECURITY is set. Your application never uses these credentials. This maintains a strong separation of concerns.
CREATE ROLE migrator_role LOGIN PASSWORD '...';
ALTER ROLE migrator_role BYPASSRLS;
-- Grant necessary permissions
GRANT ALL ON ALL TABLES IN SCHEMA public TO migrator_role;
* Pattern 2: Temporarily Disabling RLS
In a migration script, you can temporarily disable RLS for a specific table. This is explicit and auditable.
-- in a migration file
ALTER TABLE tasks DISABLE ROW LEVEL SECURITY;
-- ... perform cross-tenant data backfill ...
ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;
* Pattern 3: The "System" Role for Background Jobs
For a background worker process, you can't use a user's JWT. Instead, the worker can authenticate to the API (or directly to the DB) and use a special, internal JWT or set a specific claim.
// In a background worker
const systemClaims = { role: 'system_internal' };
const client = await getClient(systemClaims);
// ... do work ...
You would then add policies that explicitly grant access to this role.
CREATE POLICY select_all_for_system
ON tasks
FOR SELECT
USING (auth.user_role() = 'system_internal');
PostgreSQL automatically combines multiple permissive policies on the same table and command with OR, but reasoning about how several policies interact gets complex as they accumulate. The privileged role pattern is often cleaner.
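Conceptually, the combined effect of the tenant policy and the system policy is a single OR-ed predicate. Here is a hypothetical JavaScript mirror of that combination, useful for reasoning and tests:

```javascript
// Model of two permissive SELECT policies on tasks, OR-ed together:
// the system policy grants the internal role access to every row, and
// the tenant policy grants access within the caller's organization.
const canSelectTask = (row, c) =>
  c.role === 'system_internal' ||      // system policy
  row.organization_id === c.org_id;    // tenant policy

const task = { id: 't1', organization_id: 'org-1' };
```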
3. Handling Cross-Tenant Data Access
What if a support user needs to temporarily access a customer's organization? This is a classic impersonation scenario. You can handle this by adding an optional impersonated_org_id claim to the JWT.
-- A new helper function
CREATE OR REPLACE FUNCTION auth.effective_organization_id()
RETURNS UUID AS $$
DECLARE
  impersonated_id TEXT;
BEGIN
  impersonated_id := current_setting('app.jwt.claims', true)::jsonb ->> 'impersonated_org_id';
  IF impersonated_id IS NOT NULL AND auth.user_role() = 'support_admin' THEN
    RETURN impersonated_id::uuid;
  END IF;
  RETURN (current_setting('app.jwt.claims', true)::jsonb ->> 'org_id')::uuid;
EXCEPTION
  WHEN OTHERS THEN
    RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE LEAKPROOF;
-- Update the existing policy in place to use the new function
ALTER POLICY select_projects_for_organization
ON projects
USING (organization_id = auth.effective_organization_id());
This keeps the authorization logic clean and centralized in the database, rather than scattered across your application code.
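If you want to unit-test the impersonation rules without a database, the SQL function above can be mirrored in plain JavaScript. This is a sketch; effectiveOrganizationId is a hypothetical helper, not part of any library:

```javascript
// Mirror of auth.effective_organization_id(): support admins may
// impersonate another organization; everyone else resolves to their own.
function effectiveOrganizationId(claims) {
  if (claims.impersonated_org_id && claims.role === 'support_admin') {
    return claims.impersonated_org_id;
  }
  return claims.org_id ?? null;
}
```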
Conclusion: A Paradigm Shift in Authorization
By moving authorization logic from the application to the database using PostgreSQL RLS and JWT claims, we achieve a fundamentally more secure and maintainable system. This pattern isn't a silver bullet—it doesn't replace all application-level validation—but it provides an exceptionally strong foundation for data access control, especially in multi-tenant environments.
The benefits are compelling for any senior engineering team:
* Unbreakable Security: The database, not the application, enforces tenancy. Accidental data leaks from a buggy API endpoint are virtually impossible.
* Simplified Application Code: Data access logic is stripped of repetitive, error-prone authorization checks, making developers more productive and code easier to reason about.
* Single Source of Truth: Authorization rules live alongside the data they protect, making them easy to audit, test, and manage.
* High Performance: When implemented correctly with proper indexing, the performance overhead is minimal.
Adopting this pattern requires a deep understanding of PostgreSQL and a disciplined approach to session management. But for complex applications where data security is paramount, the investment pays dividends, creating a robust and scalable architecture that will stand the test of time.