PostgreSQL RLS with JWTs for Granular API Authorization
Decoupling Authorization: The Case for Database-Enforced Policies
In modern multi-tenant SaaS applications, authorization logic is a notorious source of complexity and bugs. Senior engineers have all seen it: endless if statements, complex ORM query manipulations, and scattered permission checks that litter the application layer. Every new feature or data model change requires a painstaking review of this fragile logic. A single mistake in a WHERE clause can lead to catastrophic data leaks between tenants.
The core problem is a violation of the principle of locality. Authorization rules are fundamentally about the data, yet we implement them far away in the application code. The solution is to move the logic to the data itself. This is where PostgreSQL's Row-Level Security (RLS) becomes a game-changer.
This article is not an introduction to RLS. It assumes you understand its basic purpose. Instead, we will construct a production-ready, high-performance pattern for integrating RLS with a stateless API authenticated via JSON Web Tokens (JWTs). We will build a secure bridge that translates JWT claims into dynamic, per-request database policies, effectively turning PostgreSQL into an active participant in your authorization architecture.
We will cover:
- The core transactional pattern for securely passing JWT claims to RLS policies.
- Crafting sophisticated policies for multi-tenancy, ownership, and role-based access.
- Advanced performance tuning to mitigate the overhead of RLS in high-throughput systems.
- Navigating critical edge cases like service accounts, background workers, and internal admin tools.
The Architectural Pattern: A JWT-to-RLS Bridge
The fundamental challenge is making the stateless, request-scoped information from a JWT available to the database session in a secure and efficient manner. We cannot simply thread the claims into every query as extra WHERE parameters; that would be unwieldy and would reintroduce the scattered, error-prone filtering we are trying to eliminate. The solution lies in using PostgreSQL's session configuration parameters within a transaction.
Here’s the high-level data flow for every authenticated API request:
1. The client sends a request with a JWT in the Authorization: Bearer header.
2. Middleware verifies the token and extracts the claims we care about (sub for user ID, org_id for tenant ID, role).
3. The API checks out a connection from the pool and opens a transaction (BEGIN).
4. The claims are applied with SET LOCAL, which ensures the setting only persists for the current transaction. This is critical for correctness when using connection pools, as the connection will be returned to the pool with its settings reset.
5. The handler runs its ordinary business queries (e.g., SELECT * FROM projects;).
6. The transaction is committed (COMMIT) or rolled back (ROLLBACK), and the connection is released to the pool. The SET LOCAL variables are automatically discarded.
This pattern provides a powerful, secure, and isolated context for every API request.
Why `SET LOCAL` is Non-Negotiable
In any production environment, you are using a database connection pool. If you were to use SET instead of SET LOCAL, the configuration parameter would persist on that connection for its entire lifetime. When that connection is later checked out from the pool to serve a different user, it would still have the old user's claims, leading to a massive security breach. SET LOCAL guarantees that the settings are transient and tied strictly to the lifecycle of a single transaction.
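A quick psql illustration of the scoping (the claim values are placeholders): a value applied with SET LOCAL vanishes the moment the transaction ends, so nothing carries over to whichever request checks out this connection next.
BEGIN;
SET LOCAL request.jwt.claims = '{"sub": "...", "org_id": "...", "role": "member"}';
SELECT current_setting('request.jwt.claims', true);  -- visible inside the transaction
COMMIT;
SELECT current_setting('request.jwt.claims', true);  -- empty/NULL again: nothing leaks to the next checkout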
Schema and Policy Foundation
Let's model a simple multi-tenant project management system. We have organizations, users, projects, and tasks.
-- Enable pgcrypto for gen_random_uuid() on PostgreSQL 12 and older (13+ has it built in)
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Organizations (Tenants)
CREATE TABLE organizations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL
);
-- Users
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
organization_id UUID NOT NULL REFERENCES organizations(id),
email TEXT NOT NULL UNIQUE,
role TEXT NOT NULL DEFAULT 'member' -- e.g., 'member', 'admin'
);
-- Projects
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
organization_id UUID NOT NULL REFERENCES organizations(id),
name TEXT NOT NULL
);
-- A join table for project membership
CREATE TABLE project_members (
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
PRIMARY KEY (project_id, user_id)
);
-- Tasks
CREATE TABLE tasks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
assignee_id UUID REFERENCES users(id),
title TEXT NOT NULL
);
-- Enable RLS on all relevant tables
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE project_members ENABLE ROW LEVEL SECURITY;
ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;
Crafting Production-Grade RLS Policies
Our policies will read from a JSONB object we'll set in the session. This is more flexible than setting individual parameters. We'll use the key request.jwt.claims.
First, a helper function to safely retrieve the current user's ID. This simplifies policies and can be optimized by the planner.
-- Helper function to get the current user ID from the JWT claims.
-- Defined as STABLE so the planner can evaluate it once per statement when it appears in policies.
CREATE SCHEMA IF NOT EXISTS auth;
CREATE OR REPLACE FUNCTION auth.current_user_id()
RETURNS UUID AS $$
BEGIN
-- The 'missing_ok' flag prevents an error if the setting is not found.
RETURN (current_setting('request.jwt.claims', true)::jsonb ->> 'sub')::uuid;
EXCEPTION
-- Handle cases where the claim is not a valid UUID or is missing.
WHEN OTHERS THEN
RETURN NULL;
END;
$$ LANGUAGE plpgsql STABLE;
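A quick sanity check of the helper (the UUID is arbitrary): with no claims set it returns NULL rather than raising an error, and inside a transaction with claims applied it returns the sub value.
SELECT auth.current_user_id();  -- NULL: request.jwt.claims is not set
BEGIN;
SET LOCAL request.jwt.claims = '{"sub": "1b4e28ba-2fa1-11d2-883f-0016d3cca427"}';
SELECT auth.current_user_id();  -- 1b4e28ba-2fa1-11d2-883f-0016d3cca427
ROLLBACK;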
Now, let's build our policies, from simple to complex.
Policy 1: Tenant Isolation (The Foundation)
Every query must be strictly confined to the user's organization. This is our most important security boundary.
-- We'll create a generic policy that applies to all actions (SELECT, INSERT, UPDATE, DELETE)
CREATE POLICY tenant_isolation_policy ON projects
FOR ALL
USING (organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid)
WITH CHECK (organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid);
-- Apply the same logic to other tables
CREATE POLICY tenant_isolation_policy ON users
FOR ALL
USING (organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid)
WITH CHECK (organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid);
-- Tasks and project_members have no organization_id column of their own, so their
-- policies must derive tenancy through the parent project (a tasks sketch follows the write policies below).
The USING clause applies to SELECT, UPDATE, and DELETE operations, filtering which rows are visible/modifiable. The WITH CHECK clause applies to INSERT and UPDATE, ensuring that new or modified rows adhere to the policy.
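A quick psql sketch of the two clauses in action (the UUIDs are placeholders for two different tenants):
BEGIN;
SET LOCAL request.jwt.claims =
  '{"sub": "11111111-1111-1111-1111-111111111111", "org_id": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa", "role": "admin"}';
-- USING: results are silently filtered to the caller's organization
SELECT count(*) FROM projects;
-- WITH CHECK: writing a row for another organization fails with
-- "new row violates row-level security policy for table "projects""
INSERT INTO projects (organization_id, name)
VALUES ('bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', 'Not my org');
ROLLBACK;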
Policy 2: Role-Based Access (Admins vs. Members)
Now for more granular logic. An organization 'admin' should be able to see all projects in their organization, but a 'member' should only see projects they are explicitly assigned to via the project_members table.
We need to modify the policy on the projects table.
-- Drop the old policy before creating a new one
DROP POLICY tenant_isolation_policy ON projects;
CREATE POLICY project_access_policy ON projects
FOR SELECT
USING (
-- First, always enforce tenant isolation
organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid
AND (
-- Admins can see all projects in the org
(current_setting('request.jwt.claims', true)::jsonb ->> 'role') = 'admin'
OR
-- Members can see projects they are a part of
EXISTS (
SELECT 1
FROM project_members pm
WHERE pm.project_id = projects.id AND pm.user_id = auth.current_user_id()
)
)
);
-- We still need policies for write operations
CREATE POLICY project_insert_policy ON projects
FOR INSERT
WITH CHECK (
organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid
-- Let's say only admins can create projects
AND (current_setting('request.jwt.claims', true)::jsonb ->> 'role') = 'admin'
);
CREATE POLICY project_update_policy ON projects
FOR UPDATE
USING (
-- Don't rely on the SELECT policy here: the UPDATE policy's USING clause
-- independently controls which existing rows an UPDATE may target.
organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid
AND (current_setting('request.jwt.claims', true)::jsonb ->> 'role') = 'admin'
)
WITH CHECK (
-- The modified row must remain inside the admin's organization
organization_id = (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid
AND (current_setting('request.jwt.claims', true)::jsonb ->> 'role') = 'admin'
);
This demonstrates the power of composing complex boolean logic directly in SQL. This logic is now centralized, enforced for every query, and impossible for application code to bypass.
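To sketch the composition one step further (illustrative only; the exact rules are up to your domain), a tasks policy can lean on the parent table's RLS, because a subquery inside a policy that reads another RLS-protected table is itself filtered by that table's policies. Just avoid policies that reference each other in a cycle, which PostgreSQL rejects with an infinite-recursion error.
CREATE POLICY task_select_policy ON tasks
FOR SELECT
USING (
  EXISTS (
    SELECT 1
    FROM projects p
    WHERE p.id = tasks.project_id
    -- projects is RLS-protected, so this subquery only sees projects the
    -- current user may see; tasks simply inherit that visibility.
  )
);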
Application Layer Implementation (Node.js & `node-postgres`)
Let's implement the server-side bridge. We'll use Express.js and the pg library.
const express = require('express');
const jwt = require('jsonwebtoken'); // e.g., jsonwebtoken library
const { Pool } = require('pg');
const app = express();
const port = 3000;
const JWT_SECRET = 'your-super-secret-key'; // In production, use environment variables
const pool = new Pool({
user: 'db_user',
host: 'localhost',
database: 'my_app_db',
password: 'db_password',
port: 5432,
});
// Middleware to validate JWT and prepare RLS context
const authMiddleware = (req, res, next) => {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.sendStatus(401); // Unauthorized
}
jwt.verify(token, JWT_SECRET, (err, user) => {
if (err) {
return res.sendStatus(403); // Forbidden
}
// Attach user claims to the request object for later use
req.userClaims = {
sub: user.userId,
org_id: user.organizationId,
role: user.role,
};
next();
});
};
// A robust database query function that wraps queries in a transaction
// and sets the RLS context.
async function queryWithRLS(claims, sql, params) {
if (!claims) {
throw new Error('User claims are required for RLS-enabled queries.');
}
const client = await pool.connect();
try {
await client.query('BEGIN');
// Crucially, serialize the claims and scope them to this transaction only.
const claimsJson = JSON.stringify(claims);
// SET LOCAL cannot take bind parameters, so use set_config(..., is_local = true),
// which is equivalent and lets us pass the value safely as $1.
await client.query(`SELECT set_config('request.jwt.claims', $1, true)`, [claimsJson]);
// Now execute the application's intended query
const result = await client.query(sql, params);
await client.query('COMMIT');
return result;
} catch (e) {
await client.query('ROLLBACK');
throw e;
} finally {
client.release();
}
}
// Example API endpoint
app.get('/api/projects', authMiddleware, async (req, res) => {
try {
// The application code is beautifully simple. It doesn't know about RLS.
const sql = 'SELECT id, name FROM projects;';
const { rows } = await queryWithRLS(req.userClaims, sql);
res.json(rows);
} catch (error) {
console.error(error);
res.status(500).send('Internal Server Error');
}
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
Look at the /api/projects endpoint. Its business logic is a plain SELECT from projects. It has no WHERE user_id = ? or WHERE organization_id = ? clauses. The authorization is completely transparent to the application developer writing the endpoint, which dramatically reduces the chance of errors.
Performance Deep Dive: Taming RLS Overhead
Row-Level Security is not free. Every query against an RLS-enabled table forces the planner to incorporate the policy's USING clause. If not handled carefully, this can lead to significant performance degradation.
The Problem with `current_setting()`
The current_setting() function is marked as STABLE, not IMMUTABLE. A STABLE function is guaranteed to return the same result within a single statement, but its result can change between statements, so the planner cannot fold it into a constant at plan time. This can prevent certain optimizations, such as pre-evaluating the value or pushing it down into subqueries and index conditions.
Benchmark Scenario:
Let's compare a direct WHERE clause to an RLS policy on a table with 10 million projects and an index on organization_id.
- Direct filter: SELECT * FROM projects WHERE organization_id = 'some-uuid'; Result: extremely fast; uses the index on organization_id (~1-2 ms).
- RLS filter: SELECT * FROM projects; with the tenant_isolation_policy active. Result: potentially slower; the planner may be more conservative (~5-15 ms).
While this difference seems small, it can become a major bottleneck under high load.
Optimization 1: `IMMUTABLE` Helper Functions
We can give the query planner more confidence by wrapping current_setting() calls in our own IMMUTABLE functions. This is a strong hint to the planner that the value will not change for the duration of the query.
Warning: This is technically a lie to the planner: IMMUTABLE promises a result that never changes, while ours is set per transaction. It works in practice because we set the value exactly once, before any query in the transaction, and never change it. Be aware, though, that IMMUTABLE expressions can be constant-folded into cached plans (for example with prepared statements), so verify the behavior with your driver before relying on this optimization.
CREATE OR REPLACE FUNCTION get_current_org_id()
RETURNS UUID AS $$
SELECT (current_setting('request.jwt.claims', true)::jsonb ->> 'org_id')::uuid;
$$ LANGUAGE sql IMMUTABLE;
-- Now, rewrite the policy to use this function:
DROP POLICY tenant_isolation_policy ON projects;
CREATE POLICY tenant_isolation_policy ON projects
FOR ALL
USING (organization_id = get_current_org_id())
WITH CHECK (organization_id = get_current_org_id());
By marking the function IMMUTABLE, the planner is more likely to treat get_current_org_id() as a constant for the query's duration, leading to better plan generation, often matching the performance of a direct WHERE clause.
Optimization 2: Strategic Indexing
This cannot be overstated: any columns used in your RLS policies MUST be indexed.
- For the tenant_isolation_policy, the organization_id column is critical. It should have a B-tree index.
- For the project_access_policy, the EXISTS subquery on project_members is evaluated for every candidate row, so project_members(project_id, user_id) needs a composite index to be efficient.
A sketch of matching index definitions follows below. Then use EXPLAIN ANALYZE liberally: if you see a Sequential Scan on a large table where you expect an Index Scan, your RLS policy is likely missing a corresponding index.
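For this article's schema, a minimal set of supporting indexes might look like this (index names are arbitrary):
CREATE INDEX idx_projects_organization_id ON projects (organization_id);
CREATE INDEX idx_users_organization_id ON users (organization_id);
CREATE INDEX idx_tasks_project_id ON tasks (project_id);
-- The PRIMARY KEY (project_id, user_id) on project_members already serves the
-- EXISTS lookup in project_access_policy; an index led by user_id helps queries
-- driven by the current user instead:
CREATE INDEX idx_project_members_user_project ON project_members (user_id, project_id);
With those in place, check the plan for the plain, policy-filtered query: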
EXPLAIN ANALYZE SELECT * FROM projects;
Run this from a client where you've set the session variable to simulate a real request and inspect the query plan.
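For example, in psql (the claim values below are placeholder UUIDs):
BEGIN;
SET LOCAL request.jwt.claims =
  '{"sub": "11111111-1111-1111-1111-111111111111", "org_id": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa", "role": "member"}';
EXPLAIN ANALYZE SELECT * FROM projects;
ROLLBACK;
-- Expect an Index Scan or Bitmap Index Scan on the organization_id index,
-- not a Seq Scan on projects.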
Advanced Edge Cases and Production Patterns
Real-world systems have complexities beyond simple user requests.
Case 1: The Superuser / Background Worker Problem
By default, RLS policies do not apply to table owners or to roles with the SUPERUSER or BYPASSRLS attributes (owners can be opted in with ALTER TABLE ... FORCE ROW LEVEL SECURITY). This is often desired for database maintenance, but what about a background worker that needs to process data across all tenants (e.g., a nightly billing job)?
Solution: Create dedicated service roles.
CREATE ROLE background_worker LOGIN PASSWORD '...';
GRANT CONNECT ON DATABASE my_app_db TO background_worker;
GRANT USAGE ON SCHEMA public TO background_worker;
-- Grant only the specific permissions needed
GRANT SELECT, UPDATE ON tasks TO background_worker;
-- Then, grant the BYPASSRLS attribute to this service role:
ALTER ROLE background_worker WITH BYPASSRLS;
Now, when your worker connects to the database using this background_worker role, it will bypass all RLS policies. This is a clean, explicit way to manage exceptions, far superior to disabling RLS on tables temporarily.
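A quick way to confirm which roles bypass policies, and to sanity-check that the application's own db_user role from the pool configuration does not, is to query pg_roles:
SELECT rolname, rolsuper, rolbypassrls
FROM pg_roles
WHERE rolname IN ('background_worker', 'db_user');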
Case 2: The Internal Admin Dashboard
Your customer support team needs an admin panel to view data from any tenant to resolve issues. They are not superusers.
Solution: Use a privileged role and an RLS policy that checks for it.
"role": "support_admin". -- On the projects table, for example
CREATE POLICY support_access_policy ON projects
FOR SELECT
USING (
-- Allow access if the role is support_admin
(current_setting('request.jwt.claims', true)::jsonb ->> 'role') = 'support_admin'
OR
-- Otherwise, apply the standard tenant isolation
organization_id = get_current_org_id()
);
This pattern allows you to selectively override tenancy rules based on claims present in the token, keeping the logic centralized in the database policy.
Case 3: Complex Joins and Leaky Data
What happens when you join an RLS-protected table with a public, non-protected one? Consider a public tags table and a project_tags join table linking it to projects.
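For concreteness, here is one hypothetical shape for those two tables (neither has RLS enabled):
CREATE TABLE tags (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tag_name TEXT NOT NULL
);
CREATE TABLE project_tags (
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
  PRIMARY KEY (project_id, tag_id)
);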
SELECT p.name, t.tag_name
FROM projects p
JOIN project_tags pt ON p.id = pt.project_id
JOIN tags t ON pt.tag_id = t.id;
RLS works as expected here. The policy on projects is applied first, filtering the set of projects to only those in the user's tenant. The subsequent joins will only operate on this pre-filtered set. The danger arises from misconfigured policies on joined tables. If project_tags also had an RLS policy, but it was subtly different from the projects policy, you could get unexpected results or no results at all.
Best Practice: Ensure that for any given query, the chain of RLS policies across joined tables is logically consistent.
Case 4: Gracefully Handling Missing Claims
What if a token is issued without an org_id, or the claims setting was never applied at all? If the parameter is absent, current_setting('request.jwt.claims') without the second argument raises an error and aborts the transaction; the current_setting('key', true) form (with missing_ok set to true) is crucial because it returns NULL instead. A missing org_id key then simply yields NULL from the ->> operator, while a present but malformed value would still fail the ::uuid cast.
Your policies should be written to handle these NULLs. For example:
USING (organization_id = get_current_org_id() AND get_current_org_id() IS NOT NULL)
This ensures that if the org_id is missing from the claims, the policy evaluates to FALSE (or NULL, which is treated as FALSE), returning no rows, rather than crashing the query.
Conclusion: A Paradigm Shift in Authorization
Integrating JWTs with PostgreSQL's Row-Level Security is more than just a clever trick; it's a paradigm shift. By co-locating authorization rules with the data, you create a single source of truth that is enforced universally, regardless of how the data is accessed—be it through your main API, a new microservice, a direct database connection, or a reporting tool.
This pattern yields a system that is more secure (policies are enforced at the data layer and cannot be bypassed by application code), simpler to reason about (endpoint handlers carry no authorization clauses), and easier to audit (every rule lives in one place, in SQL).
While it requires a deeper understanding of PostgreSQL and careful performance tuning, the investment pays dividends in the form of a robust, secure, and elegant authorization architecture. It's a powerful pattern that every senior engineer building multi-tenant systems should have in their toolkit.