PostgreSQL RLS with JWTs for Performant Multi-Tenant SaaS Auth
The Hard Stop: Database-Enforced Multi-Tenancy
In multi-tenant SaaS architectures, data isolation is not a feature; it's a foundational requirement. The most common implementation pattern involves adding a tenant_id to every relevant table and meticulously ensuring every single database query includes a WHERE tenant_id = ? clause. While functional, this application-layer enforcement is fragile. A single missing clause in a complex JOIN or a hastily written analytics query can lead to catastrophic data leaks across tenants. The cost of a single mistake is unacceptable.
This is where PostgreSQL's Row-Level Security (RLS) becomes a strategic architectural choice, not just a security feature. By moving the tenancy check from the application to the database itself, we create a hard security boundary that is impossible to bypass through application code bugs. The database guarantees that a user's connection can never see or modify data belonging to another tenant.
This article is not an introduction to RLS. It assumes you understand its basic purpose. Instead, we will architect and implement a production-ready system that leverages JSON Web Tokens (JWTs) to dynamically inform RLS policies in a performant, scalable, and operationally sound manner. We will focus on the intricate details: securely passing claims, optimizing policy performance, handling role-based access, and solving complex edge cases like administrative access and controlled cross-tenant data sharing.
Architectural Pattern: Propagating JWT State to the Database Session
The core of this pattern is establishing a trusted channel to communicate the authenticated user's identity and permissions (from the JWT) to PostgreSQL for every transaction. We cannot simply pass the JWT as a query parameter for every statement; that would be inefficient and would tightly couple our application logic to authorization.
Instead, we leverage PostgreSQL's configuration parameters, which can be scoped to a session or to a single transaction. The flow is as follows:
1. The client sends a request with its JWT in the Authorization header.
2. Application middleware verifies the token's signature and decodes its claims.
3. The request handler acquires a connection from the pool and, inside a transaction, issues a SET command (via set_config()) on the acquired connection. This command stores the JWT claims in a custom, namespaced configuration variable.
4. RLS policies read that variable via current_setting() to decide which rows the statement may see or modify.
5. The connection is released back to the pool with no lingering state.
Node.js and `node-postgres` Implementation
Let's see this in practice using Node.js with the popular pg library. The key is to wrap the connection logic to ensure state is set and cleared reliably.
1. Database Connection Wrapper
First, we'll create a higher-order function that takes a JWT payload and a database operation, and handles the session setup and teardown.
// db.js
const { Pool } = require('pg');

const pool = new Pool({
  // Your connection details
  connectionString: process.env.DATABASE_URL,
});

// This is the core of our pattern.
// It acquires a client, sets the transaction context, runs the operation,
// and ensures cleanup.
async function runInTransactionWithRLS(jwtPayload, operation) {
  const client = await pool.connect();
  try {
    // Start a transaction
    await client.query('BEGIN');

    // Set the context variable. We stringify the JSON payload.
    // Using a namespaced variable like 'myapp.jwt.claims' is best practice.
    // The value is passed as a query parameter to prevent SQL injection.
    // is_local = true scopes the setting to the current transaction only.
    await client.query(`SELECT set_config('myapp.jwt.claims', $1, true)`, [JSON.stringify(jwtPayload)]);

    // Execute the actual business logic passed as the 'operation' callback
    const result = await operation(client);

    // Commit the transaction
    await client.query('COMMIT');
    return result;
  } catch (e) {
    // If anything fails, roll back
    await client.query('ROLLBACK');
    throw e;
  } finally {
    // ALWAYS release the client back to the pool.
    // The transaction-local setting was discarded at COMMIT/ROLLBACK,
    // so no claims leak to the next request that checks out this connection.
    client.release();
  }
}

module.exports = { pool, runInTransactionWithRLS };
Note on set_config: the third parameter, is_local, is set to true, which scopes the setting to the current transaction. This matters because of connection pooling: client.release() returns the connection to the pool without ending the PostgreSQL session, so a session-scoped setting (is_local = false) would survive and be visible to whichever request checks out that connection next. Because the wrapper always runs inside BEGIN ... COMMIT/ROLLBACK, the transaction-local setting is discarded automatically when the transaction ends, and no manual RESET is required.
2. Express.js Middleware Integration
Now, let's integrate this into an Express.js application.
// server.js
const express = require('express');
const jwt = require('jsonwebtoken'); // e.g., jsonwebtoken library
const { runInTransactionWithRLS } = require('./db');

const app = express();
const JWT_SECRET = process.env.JWT_SECRET;

// A middleware to verify JWT and attach payload to the request object
const authMiddleware = (req, res, next) => {
  try {
    const authHeader = req.headers.authorization;
    if (!authHeader || !authHeader.startsWith('Bearer ')) {
      return res.status(401).send('Unauthorized');
    }
    const token = authHeader.split(' ')[1];
    // In production, you'd have more robust verification (audience, issuer, etc.)
    const payload = jwt.verify(token, JWT_SECRET);
    req.user = payload; // Attach decoded payload to request
    next();
  } catch (error) {
    res.status(401).send('Invalid token');
  }
};

app.use(express.json());

// Example route to get projects for the authenticated user's tenant
app.get('/projects', authMiddleware, async (req, res) => {
  try {
    const projects = await runInTransactionWithRLS(req.user, async (client) => {
      // Inside this callback, RLS is active and configured.
      // This query does NOT need a 'WHERE tenant_id = ...' clause.
      const result = await client.query('SELECT id, name, created_at FROM projects');
      return result.rows;
    });
    res.json(projects);
  } catch (error) {
    console.error('Error fetching projects:', error);
    res.status(500).send('Internal Server Error');
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Server running on port ${PORT}`));
This architecture cleanly separates concerns. The route handler knows nothing about RLS; it just executes a simple query. The runInTransactionWithRLS wrapper transparently enforces the security context.
Core Implementation: Crafting Performant RLS Policies
With the application architecture in place, we can now define the database objects that enforce our policies.
1. Schema Definition
Let's assume a standard SaaS schema:
-- Use UUIDs for primary keys to prevent enumeration attacks
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE TABLE tenants (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL
);

CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
  email TEXT NOT NULL UNIQUE,
  role TEXT NOT NULL DEFAULT 'member' -- e.g., 'member', 'admin'
);

CREATE TABLE projects (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
  owner_id UUID REFERENCES users(id) ON DELETE SET NULL,
  name TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- CRITICAL: Index the columns used in RLS policies
CREATE INDEX idx_users_tenant_id ON users(tenant_id);
CREATE INDEX idx_projects_tenant_id ON projects(tenant_id);
CREATE INDEX idx_projects_owner_id ON projects(owner_id);
2. Stable Helper Functions for Claim Extraction
Calling current_setting('myapp.jwt.claims')::jsonb ->> 'tid' directly in every policy is verbose, and the query planner has a harder time optimizing repeated complex expressions. The solution is to wrap this logic in functions marked STABLE. STABLE tells PostgreSQL that the function returns the same result for every row within a single statement, which lets the planner treat the call like a constant for that statement (for example, using it in an index condition) instead of re-evaluating the JSON extraction row by row. IMMUTABLE would be incorrect here, because the result depends on per-request session state.
-- The helpers live in a dedicated schema
CREATE SCHEMA IF NOT EXISTS auth;

-- Function to get the tenant ID from the JWT claims
CREATE OR REPLACE FUNCTION auth.current_tenant_id()
RETURNS UUID AS $$
  SELECT (NULLIF(current_setting('myapp.jwt.claims', true), '')::jsonb ->> 'tid')::uuid;
$$ LANGUAGE SQL STABLE;

-- Function to get the user ID (subject) from the JWT claims
CREATE OR REPLACE FUNCTION auth.current_user_id()
RETURNS UUID AS $$
  SELECT (NULLIF(current_setting('myapp.jwt.claims', true), '')::jsonb ->> 'sub')::uuid;
$$ LANGUAGE SQL STABLE;

-- Function to get the user's role from the JWT claims
CREATE OR REPLACE FUNCTION auth.current_user_role()
RETURNS TEXT AS $$
  SELECT NULLIF(current_setting('myapp.jwt.claims', true), '')::jsonb ->> 'role';
$$ LANGUAGE SQL STABLE;
We place these in a dedicated auth schema for organization. The true in current_setting('...', true) tells PostgreSQL to return NULL instead of raising an error when the setting has never been defined, and the NULLIF(..., '') guard handles the case where the variable exists but is empty (which can happen on a pooled connection after a previous transaction set and then discarded it). Either way, the helpers degrade to NULL rather than throwing when a connection is used without the RLS context set.
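A quick psql check of this fallback behavior; this is a sketch that assumes a fresh connection with no claims configured and an ordinary role that is subject to RLS. Once the policies below are in place, a NULL tenant ID matches no rows, so an unconfigured connection sees nothing rather than everything.
-- No claims have been set on this connection
SELECT auth.current_tenant_id();  -- NULL
-- With the tenant isolation policy from the next section applied:
SELECT count(*) FROM projects;    -- 0, because tenant_id = NULL is never true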
3. Basic Tenant Isolation Policy
Now we can create a simple, elegant policy.
-- First, enable RLS on the table
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
-- Force RLS for table owners as well (good practice)
ALTER TABLE projects FORCE ROW LEVEL SECURITY;
-- Create the policy
CREATE POLICY tenant_isolation_policy ON projects
FOR ALL -- Applies to SELECT, INSERT, UPDATE, DELETE
USING (tenant_id = auth.current_tenant_id())
WITH CHECK (tenant_id = auth.current_tenant_id());
USING (expression): This clause applies to rows that are read by a query (SELECT) or are the target of an UPDATE or DELETE. If the expression evaluates to true for a given row, that row is visible/modifiable.
WITH CHECK (expression): This clause applies to rows being added by an INSERT or UPDATE. If the expression evaluates to false, the operation is rejected with an error. This is critical to prevent a user from one tenant from inserting data and assigning it to another tenant.
With this policy in place, a query like SELECT * FROM projects implicitly behaves like SELECT * FROM projects WHERE tenant_id = auth.current_tenant_id().
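To make WITH CHECK concrete, here is the failure you should expect when a new row targets the wrong tenant. This is an illustrative snippet with hypothetical UUIDs, and it assumes the connection is a regular role subject to RLS whose claims identify a Tenant A user:
-- Claims say tid = Tenant A, but the new row targets Tenant B
INSERT INTO projects (tenant_id, name)
VALUES ('00000000-0000-0000-0000-0000000000bb', 'Sneaky project');
-- ERROR:  new row violates row-level security policy for table "projects"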
Advanced Policies and Performance Deep Dive
Simple tenant isolation is just the start. Real-world applications require more granular, role-based permissions.
Role-Based Access within a Tenant
Let's define a policy where 'admin' users can see all projects in their tenant, but 'member' users can only see projects they own.
-- Drop the basic tenant isolation policy first. Permissive policies on the same
-- table are combined with OR, so leaving it in place would undo the new restrictions.
DROP POLICY IF EXISTS tenant_isolation_policy ON projects;

-- Create a more sophisticated, role-based policy
CREATE POLICY projects_access_policy ON projects
FOR ALL
USING (
  tenant_id = auth.current_tenant_id() AND (
    auth.current_user_role() = 'admin' OR
    owner_id = auth.current_user_id()
  )
)
WITH CHECK (
  tenant_id = auth.current_tenant_id()
  -- Optionally, tighten this to:
  --   tenant_id = auth.current_tenant_id() AND owner_id = auth.current_user_id()
  -- to prevent users from creating projects on behalf of others, even admins.
);
This single policy now elegantly handles multiple permission levels. The logic is centralized, declarative, and enforced by the database.
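A quick way to watch the policy branch from psql; this is a sketch with hypothetical UUIDs, and it assumes you connect as an ordinary role that is subject to RLS:
-- As a member: only rows whose owner_id matches the 'sub' claim are visible
BEGIN;
SELECT set_config('myapp.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000001", "tid": "00000000-0000-0000-0000-0000000000aa", "role": "member"}',
  true);
SELECT count(*) FROM projects;  -- only this user's projects
ROLLBACK;

-- As an admin in the same tenant: every project in the tenant is visible
BEGIN;
SELECT set_config('myapp.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000002", "tid": "00000000-0000-0000-0000-0000000000aa", "role": "admin"}',
  true);
SELECT count(*) FROM projects;  -- all of the tenant's projects
ROLLBACK;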
Performance Analysis: The Absolute Necessity of Indexing
RLS policies are effectively invisible WHERE clauses added to your queries. Just like any WHERE clause, their performance is dictated by the availability of suitable indexes.
Consider the query SELECT * FROM projects WHERE name LIKE 'Q1%'; under our role-based policy.
The effective query plan becomes:
SELECT * FROM projects WHERE name LIKE 'Q1%' AND (tenant_id = auth.current_tenant_id() AND (auth.current_user_role() = 'admin' OR owner_id = auth.current_user_id()));
Without an index on tenant_id, PostgreSQL has no choice but to sequentially scan the entire projects table, evaluating the RLS policy for every single row. This is a performance disaster on large tables.
With composite indexes, for example one on (tenant_id, owner_id) and a separate one on (tenant_id, name), the picture changes: the planner can use the tenant_id part of the index to immediately filter down to the relevant tenant's data, and then use the second column (owner_id or name) to satisfy the rest of the query.
Let's prove it with EXPLAIN ANALYZE:
Imagine a projects table with 10 million rows spread across 10,000 tenants, roughly 1,000 rows per tenant.
Scenario 1: Without the tenant_id index
EXPLAIN ANALYZE SELECT id, name FROM projects WHERE name LIKE 'Q1%';
-- OUTPUT (Simplified)
Seq Scan on projects  (cost=0.00..250000.00 rows=10 width=32) (actual time=1.20..2500.00 rows=12 loops=1)
  Filter: ((name ~~ 'Q1%') AND (tenant_id = auth.current_tenant_id()) AND ((auth.current_user_role() = 'admin') OR (owner_id = auth.current_user_id())))
  Rows Removed by Filter: 9999988
Planning Time: 0.15 ms
Execution Time: 2501.30 ms
Nothing indexes tenant_id (or name), so PostgreSQL scans all 10 million rows and evaluates the RLS policy for each one before throwing almost all of them away.
Scenario 2: With CREATE INDEX ON projects (tenant_id);
EXPLAIN ANALYZE SELECT id, name FROM projects WHERE name LIKE 'Q1%';
-- OUTPUT (Simplified)
Bitmap Heap Scan on projects  (cost=13.20..3200.00 rows=10 width=32) (actual time=0.40..3.10 rows=12 loops=1)
  Recheck Cond: (tenant_id = auth.current_tenant_id())
  Filter: ((name ~~ 'Q1%') AND ((auth.current_user_role() = 'admin') OR (owner_id = auth.current_user_id())))
  Rows Removed by Filter: 988
  ->  Bitmap Index Scan on idx_projects_tenant_id  (cost=0.00..13.19 rows=1000 width=0) (actual time=0.25..0.25 rows=1000 loops=1)
        Index Cond: (tenant_id = auth.current_tenant_id())
Planning Time: 0.20 ms
Execution Time: 3.30 ms
The planner now uses the tenant_id index to narrow the scan to the current tenant's roughly 1,000 rows and applies the remaining filters only to those. (Unique lookups were never the real problem: for WHERE id = 'some-uuid' the planner simply uses projects_pkey and applies the policy as a cheap per-row filter.)
What about a more common query, like listing projects?
EXPLAIN ANALYZE SELECT id FROM projects ORDER BY created_at DESC LIMIT 10;
Without an index on (tenant_id, created_at), this will be slow. It will have to find all projects for the tenant, sort them, then take 10. With a composite index, it's a lightning-fast index scan.
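As a concrete sketch (the index name here is my own suggestion), this is the composite index that serves the listing query above; the tenant_id equality injected by the RLS policy pins the leading column, and the DESC ordering lets the planner read the newest rows first and stop after ten:
-- Serves "latest projects for the current tenant" directly from the index
CREATE INDEX idx_projects_tenant_created_at ON projects (tenant_id, created_at DESC);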
Rule of Thumb: Any column used in a USING clause of an RLS policy must be a leading column in an index. For multi-column policies, composite indexes are your best friend.
Edge Cases and Production Scenarios
1. The 'Superuser' Problem: Administrative Bypass
Your internal support tools, background workers, or analytics pipelines may need to operate on data across all tenants. How do they bypass RLS?
Option A (Recommended): The BYPASSRLS Attribute
Create a dedicated database role for these services and grant it the BYPASSRLS attribute. This is the cleanest, most explicit, and most secure method.
-- Create a role for a backend service
CREATE ROLE service_worker_role LOGIN PASSWORD '...';
-- Grant it necessary permissions on tables
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO service_worker_role;
-- Grant the bypass privilege
ALTER ROLE service_worker_role BYPASSRLS;
When your service connects to the database using this service_worker_role, RLS policies are simply not applied to its queries. No special logic is needed.
Option B (Use with Caution): Policy-Based Bypass
You could build a bypass into your policy itself.
-- In your JWT, you might issue a special 'system_admin' role
CREATE POLICY projects_access_policy ON projects
FOR ALL
USING (
  auth.current_user_role() = 'system_admin' OR
  (tenant_id = auth.current_tenant_id() AND (...))
)
...
This works, but it complicates the policy and tightly couples administrative access to your JWT issuing logic. The BYPASSRLS attribute provides better separation of concerns.
2. The Shared Data Problem: Controlled Cross-Tenant Access
What if a user from Tenant A needs to be granted read-only access to a specific project in Tenant B? Our rigid tenant_id = auth.current_tenant_id() model breaks down here.
To solve this, we introduce a linking table.
CREATE TABLE project_shares (
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
  user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  -- We include tenant_id here for efficient lookups
  tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
  permission TEXT NOT NULL DEFAULT 'read', -- e.g., 'read', 'write'
  PRIMARY KEY (project_id, user_id)
);
CREATE INDEX idx_project_shares_user_id ON project_shares(user_id);
Now, we can craft a more sophisticated policy using an EXISTS subquery.
DROP POLICY projects_access_policy ON projects;
CREATE POLICY projects_universal_access_policy ON projects
FOR SELECT -- Create a separate, more restrictive policy for UPDATE/DELETE
USING (
  -- Case 1: The user is accessing a project within their own tenant
  (tenant_id = auth.current_tenant_id()) OR
  -- Case 2: The project has been explicitly shared with the user
  EXISTS (
    SELECT 1
    FROM project_shares ps
    WHERE ps.project_id = projects.id
      AND ps.user_id = auth.current_user_id()
  )
);
Performance Warning: Subqueries in RLS policies can be performance killers if not written carefully. The EXISTS clause here is relatively efficient because it is correlated to the outer row on projects.id, and the primary key on (project_id, user_id) turns each inner lookup into a single index probe. The separate index on project_shares(user_id) helps queries that approach the data from the user's side (e.g., "all projects shared with me").
For UPDATE and DELETE, you would likely want a more restrictive policy, perhaps only allowing the project's direct owner.
-- CREATE POLICY accepts a single command per policy, so UPDATE and DELETE each get their own.
CREATE POLICY projects_update_policy ON projects
FOR UPDATE
USING (tenant_id = auth.current_tenant_id() AND owner_id = auth.current_user_id());

CREATE POLICY projects_delete_policy ON projects
FOR DELETE
USING (tenant_id = auth.current_tenant_id() AND owner_id = auth.current_user_id());
3. Operational Tooling: Migrations and Testing
Migrations: RLS policies, the helper functions, and the ENABLE/FORCE statements are schema objects and belong in your migration tool (e.g., node-pg-migrate) just like CREATE TABLE statements. Always write DROP POLICY IF EXISTS ... before CREATE POLICY ... to make your migrations idempotent.
Testing: Treat your policies as code that must be exercised by integration tests (a psql sketch of the negative case follows this list):
1. Create two tenants (Tenant A, Tenant B) with their own users and projects.
2. Write a test helper that wraps database queries with the RLS session context for a specific user's JWT.
3. Test Positive Case: As User A, query for projects. Assert that you only see projects from Tenant A.
4. Test Negative Case: As User A, attempt to query for a specific project ID belonging to Tenant B. Assert that the query returns zero rows.
5. Test WITH CHECK: As User A, attempt to INSERT a project with tenant_id set to Tenant B's ID. Assert that the database throws a new row violates row-level security policy error.
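Before automating, the negative case (step 4) is easy to sanity-check by hand. A psql sketch with hypothetical UUIDs, assuming a non-superuser connection that is subject to RLS:
BEGIN;
-- Claims identify a Tenant A admin
SELECT set_config('myapp.jwt.claims',
  '{"sub": "00000000-0000-0000-0000-000000000001", "tid": "00000000-0000-0000-0000-0000000000aa", "role": "admin"}',
  true);
-- Ask for a project that belongs to Tenant B: RLS filters it out, so zero rows and no error
SELECT * FROM projects WHERE id = '00000000-0000-0000-0000-0000000000b1';
ROLLBACK;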
Conclusion: A Robust Foundation for Secure SaaS
Implementing multi-tenancy with PostgreSQL RLS and JWTs is a significant architectural decision that pays dividends in security and maintainability. By moving authorization logic from the fallible application layer to the hardened database layer, you establish a security posture that is fundamentally more robust. The application code becomes simpler, as it can query data naively, trusting the database to enforce isolation.
While the initial setup is more complex than adding WHERE clauses, the long-term benefits are clear: a security boundary that application bugs cannot cross, simpler queries throughout the codebase, and authorization logic that lives in one declarative, testable place.
This pattern is not a silver bullet. It requires careful planning, rigorous testing, and a deep understanding of PostgreSQL's query planner. But for any senior engineer building a serious multi-tenant SaaS platform, it represents a powerful, production-proven approach to building a secure and scalable foundation.