Postgres RLS with JWTs for Fine-Grained API Authorization
Decoupling Authorization from Application Logic: The RLS/JWT Pattern
In modern multi-tenant architectures, authorization is a pervasive cross-cutting concern. The conventional approach involves scattering if (user.can('action', resource)) checks throughout the application layer. While functional, this pattern suffers from several drawbacks: code duplication, potential for inconsistencies, and a tight coupling between business logic and authorization rules. A more robust and secure pattern centralizes this logic at the data layer itself, ensuring that no matter how the data is accessed—via an API, a background job, or a direct database connection—the rules are consistently enforced.
This is where PostgreSQL's Row-Level Security (RLS) becomes a game-changer. RLS allows you to define policies on a table that dictate which rows a user is allowed to view, insert, update, or delete. The core challenge, however, is bridging the stateless world of web applications (typically authenticated via JWTs) with the stateful session model of PostgreSQL.
This article details a production-ready pattern for achieving this synergy. We will not cover the basics of JWTs or SQL. Instead, we'll focus on the advanced mechanics of securely passing JWT claims into the PostgreSQL session and crafting sophisticated, performant RLS policies that handle complex, real-world authorization scenarios.
The Architectural Blueprint: Passing Claims via Session Configuration
The fundamental mechanism for linking our application's user context to PostgreSQL's RLS is the use of runtime configuration parameters. We can set custom, namespaced parameters within a database session that our RLS policies can then reference.
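At its core, the mechanism is just two built-in functions. Here is a minimal sketch in plain SQL, using the same app.current_user_id name adopted later in this article and a placeholder value:
-- Store a claim as a custom, namespaced parameter; the third argument false
-- makes the setting session-scoped rather than transaction-scoped.
SELECT set_config('app.current_user_id', 'some-user-uuid', false);
-- Read it back; passing true as the second argument returns NULL instead of
-- raising an error when the parameter has never been set.
SELECT current_setting('app.current_user_id', true);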
The flow is as follows:
- A client authenticates with the API, receiving a JWT.
- The client includes the token in the Authorization header of each subsequent request.
- An API middleware intercepts the request, validates the JWT, and decodes its payload (the claims).
- Before executing any business logic, the middleware acquires a database connection from the pool.
- The middleware issues a SET command (via set_config) to store relevant claims (e.g., user_id, tenant_id, role) as session-local configuration variables.
- The request handler runs its queries (e.g., SELECT * FROM projects;) without any explicit WHERE clauses for authorization.
- PostgreSQL automatically applies the RLS policies, which read the session variables to filter the results. The application code remains blissfully unaware of the underlying authorization complexity.
Here’s a practical implementation of this middleware using Node.js, Express, and the node-postgres (pg) library.
// middleware/auth.js
const jwt = require('jsonwebtoken');
const pool = require('../db/pool'); // Your configured pg.Pool instance
// A higher-order function to wrap our database logic
// This ensures session variables are set and cleaned up properly.
const withRLS = (handler) => async (req, res) => {
const authHeader = req.headers.authorization;
if (!authHeader || !authHeader.startsWith('Bearer ')) {
return res.status(401).send('Unauthorized: No token provided');
}
const token = authHeader.split(' ')[1];
let claims;
try {
claims = jwt.verify(token, process.env.JWT_SECRET);
} catch (err) {
return res.status(401).send('Unauthorized: Invalid token');
}
// We need a dedicated client for the entire request to maintain session state.
const client = await pool.connect();
req.dbClient = client; // Attach client to the request object
try {
// Pass claim values as bound parameters to set_config so user data can never
// be injected into SQL. The third argument 'false' makes the setting
// session-wide rather than local to the current transaction.
await client.query(`SELECT set_config('app.jwt.claims', $1, false)`, [JSON.stringify(claims)]);
// For frequently accessed claims, setting them directly can be more performant
// as it avoids JSON parsing within the policy.
await client.query(`SELECT set_config('app.current_user_id', $1, false)`, [claims.sub]);
await client.query(`SELECT set_config('app.current_tenant_id', $1, false)`, [claims.tid]);
await handler(req, res);
} catch (error) {
console.error('Error during RLS-enabled request:', error);
res.status(500).send('Internal Server Error');
} finally {
// CRITICAL: Release the client back to the pool.
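// Because is_local was false, the app.* settings persist on this session.
// If any code path could reuse a pooled connection without going through this
// middleware, consider issuing RESET ALL here before releasing the client.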
if (req.dbClient) {
req.dbClient.release();
}
}
};
// Example API Route
// routes/projects.js
const express = require('express');
const router = express.Router();
router.get('/projects', withRLS(async (req, res) => {
// Notice the query has no authorization clauses. RLS handles it.
const { rows } = await req.dbClient.query('SELECT id, name, tenant_id FROM projects');
res.json(rows);
}));
module.exports = router;
Key Implementation Details:
* Connection Handling: We must acquire a single client from the pool and use it for the entire request lifecycle. If we were to get a new client for each query, the session settings would be lost.
* Passing Claims: Instead of setting each claim individually, we pass the entire JWT claims object as a JSON string to app.jwt.claims. This is flexible and avoids littering the session with numerous variables. For highly accessed claims like user_id and tenant_id, we set them as top-level variables for easier access and slightly better performance in our policies.
* set_config: We use the set_config(key, value, is_local) function. Setting is_local to false makes the variable persist for the life of the session, not just the current transaction.
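If a policy ever needs a claim that was not promoted to its own variable, it can pull it out of the JSON blob. The following helper is a hypothetical sketch (not part of the schema defined below), assuming the claims were stored under app.jwt.claims as in the middleware above:
-- Hypothetical helper: extract a single claim from the JSON stored in app.jwt.claims.
CREATE OR REPLACE FUNCTION jwt_claim(claim TEXT) RETURNS TEXT AS $$
SELECT nullif(current_setting('app.jwt.claims', true), '')::jsonb ->> claim;
$$ LANGUAGE SQL STABLE;
-- Usage inside a policy or query: jwt_claim('sub'), jwt_claim('role')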
Designing Multi-Layered RLS Policies
With the user context established in the session, we can now define our security policies. RLS policies are attached to tables and are either PERMISSIVE or RESTRICTIVE. For our purposes, PERMISSIVE policies are generally sufficient; they are combined with OR, meaning a row is accessible if any permissive policy allows it. RLS is default-deny, so if no policy grants access, the action is blocked.
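RESTRICTIVE policies are worth knowing about even though this article only uses permissive ones: they are ANDed with whatever the permissive policies allow. A hedged sketch, referencing the projects table and current_tenant_id() helper defined in the schema below, might pin every operation to the caller's tenant regardless of any other policy:
-- Sketch: a restrictive policy is ANDed with all permissive policies, so no
-- permissive rule can ever expose rows outside the caller's tenant.
CREATE POLICY enforce_tenant_boundary
ON projects
AS RESTRICTIVE
FOR ALL
USING (tenant_id = current_tenant_id());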
Let's model a typical multi-tenant SaaS application with tenants, users, projects, and project memberships.
Schema Definition:
CREATE TABLE tenants (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL
);
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
email TEXT NOT NULL UNIQUE
);
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
name TEXT NOT NULL
);
CREATE TYPE project_role AS ENUM ('admin', 'editor', 'viewer');
CREATE TABLE project_members (
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
user_id UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,
role project_role NOT NULL,
PRIMARY KEY (project_id, user_id)
);
-- Helper function to get current user_id from session cleanly
CREATE OR REPLACE FUNCTION current_user_id() RETURNS UUID AS $$
SELECT nullif(current_setting('app.current_user_id', true), '')::UUID;
$$ LANGUAGE SQL STABLE;
-- Helper function to get current tenant_id from session cleanly
CREATE OR REPLACE FUNCTION current_tenant_id() RETURNS UUID AS $$
SELECT nullif(current_setting('app.current_tenant_id', true), '')::UUID;
$$ LANGUAGE SQL STABLE;
Note on Helper Functions: Creating STABLE functions like current_user_id() is a best practice. It encapsulates the logic of retrieving and casting the session variable. The STABLE keyword tells the planner that the function returns the same result for the same arguments within a single statement, allowing for better optimization.
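A useful side effect of the missing_ok form used in these helpers: if the middleware never set the variable, current_setting returns NULL (or an empty string, which nullif converts to NULL), so policy comparisons evaluate to NULL and no rows qualify; the system fails closed. A quick check, assuming a non-privileged role and the policies defined below:
-- In a session where app.current_user_id was never set:
SELECT current_user_id();   -- NULL
SELECT * FROM projects;     -- 0 rows: every policy comparison against NULL is not true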
Policy Implementation: From Simple Tenancy to RBAC
First, we enable RLS on the tables that need protection.
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
ALTER TABLE project_members ENABLE ROW LEVEL SECURITY;
1. Basic Tenancy Policy on projects
This policy ensures users can only see projects belonging to their tenant.
CREATE POLICY select_projects_for_tenant
ON projects
FOR SELECT
USING (tenant_id = current_tenant_id());
This is simple and effective. However, it's too broad. It allows any user in a tenant to see all projects within that tenant. We need more granularity.
2. Fine-Grained RBAC Policy on projects
Let's refine this. A user should only be able to see projects they are explicitly a member of.
-- Drop the old policy first
DROP POLICY IF EXISTS select_projects_for_tenant ON projects;
CREATE POLICY select_projects_if_member
ON projects
FOR SELECT
USING (
EXISTS (
SELECT 1
FROM project_members
WHERE project_id = projects.id AND user_id = current_user_id()
)
);
This USING clause now checks for the existence of a corresponding entry in the project_members table. This correctly scopes SELECT access.
3. Policies for Write Operations (INSERT, UPDATE, DELETE)
Write operations require WITH CHECK clauses to validate incoming data, in addition to USING clauses to scope which rows can be targeted.
-- Policy for INSERTING new projects
CREATE POLICY insert_projects_for_tenant
ON projects
FOR INSERT
WITH CHECK (tenant_id = current_tenant_id());
-- Policy for UPDATING projects
CREATE POLICY update_projects_if_admin_or_editor
ON projects
FOR UPDATE
USING (
EXISTS (
SELECT 1
FROM project_members
WHERE project_id = projects.id
AND user_id = current_user_id()
AND role IN ('admin', 'editor')
)
)
WITH CHECK (tenant_id = current_tenant_id()); -- Also ensure they can't change the tenant_id
-- Policy for DELETING projects
CREATE POLICY delete_projects_if_admin
ON projects
FOR DELETE
USING (
EXISTS (
SELECT 1
FROM project_members
WHERE project_id = projects.id
AND user_id = current_user_id()
AND role = 'admin'
)
);
Analysis of Write Policies:
* INSERT: The WITH CHECK clause ensures that a user can only create a project within their own tenant. The tenant_id must match the one from their JWT claims.
* UPDATE: The USING clause restricts which rows can be targeted for an update (only projects where the user is an 'admin' or 'editor'). The WITH CHECK clause prevents the user from modifying the row in a way that would make it violate the policy (e.g., assigning it to a different tenant).
* DELETE: This is the most restrictive, requiring the 'admin' role. It only needs a USING clause to identify the target rows.
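To make the WITH CHECK behavior concrete, here is what a cross-tenant insert looks like from the client's point of view; the UUID is a placeholder for a tenant other than the one in app.current_tenant_id:
-- Rejected by the WITH CHECK clause of insert_projects_for_tenant:
INSERT INTO projects (tenant_id, name)
VALUES ('00000000-0000-0000-0000-000000000000', 'Rogue project');
-- ERROR:  new row violates row-level security policy for table "projects"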
Performance Deep Dive: Analyzing and Optimizing RLS
Row-Level Security is not free. The policy conditions are added to every query against the protected table. Poorly written policies can severely degrade database performance.
Let's analyze the SELECT policy on our projects table.
-- Ensure we have indexes on the columns used in policies. The primary key on
-- project_members (project_id, user_id) already covers lookups by project_id;
-- a composite index led by user_id matches the policy's EXISTS lookup.
CREATE INDEX ON projects (tenant_id);
CREATE INDEX ON project_members (user_id, project_id);
-- Set session variables for a test run
SET app.current_user_id = 'some-user-uuid';
SET app.current_tenant_id = 'some-tenant-uuid';
EXPLAIN ANALYZE SELECT * FROM projects;
The query plan will reveal how PostgreSQL is executing the policy. Without proper indexing, the EXISTS subquery in our policy would trigger a sequential scan on project_members for every single row in projects, resulting in a nested loop and catastrophic performance on large tables. With the correct indexes, the planner should choose a much more efficient plan, likely using an Index Scan on project_members within the subquery.
Example Optimized Plan Output (Conceptual):
Seq Scan on projects (cost=0.00..35529.50 rows=1000 width=56) (actual time=0.038..15.869 rows=5 loops=1)
Filter: (SubPlan 1)
Rows Removed by Filter: 99995
SubPlan 1
-> Index Scan using project_members_user_id_project_id_idx on project_members (cost=0.43..8.45 rows=1 width=0) (actual time=0.003..0.003 rows=1 loops=100000)
Index Cond: ((user_id = 'some-user-uuid'::uuid) AND (project_id = projects.id))
Planning Time: 0.231 ms
Execution Time: 15.918 ms
Key Optimization Strategies:
* Index every policy predicate: Any column used in a policy's WHERE or JOIN condition is a candidate for an index. This is the single most important performance factor.
* Prefer STABLE or IMMUTABLE functions: As we did with current_user_id(), marking functions appropriately allows the planner to cache their results and avoid re-evaluation for every row.
* Avoid VOLATILE functions (the default) in policies unless absolutely necessary, as they force re-evaluation per-row and can inhibit many query optimizations.
* Denormalize where it simplifies policies: Sometimes it is worth duplicating a column (e.g., adding tenant_id to project_members) to simplify policy checks.
Advanced Edge Cases and Production Gotchas
Implementing RLS in a real system requires handling several non-obvious scenarios.
1. The Superuser and Service Account Dilemma
By default, superusers and table owners bypass RLS. This is necessary for administrative tasks like schema migrations. However, what about internal services or background jobs that need broad access but shouldn't run as a full superuser?
Solution A: The BYPASS RLS Attribute
You can create a specific role for services and grant it the BYPASS RLS attribute. This is powerful but coarse-grained.
CREATE ROLE service_worker LOGIN PASSWORD '...';
ALTER ROLE service_worker BYPASSRLS;
This role will now ignore all RLS policies. Use it with extreme caution.
Solution B: Policy-Aware Service Logic
A more secure pattern is to make your policies aware of a service account context. We can modify our JWT middleware and policies to handle this.
* Your authentication service issues a special JWT for the service account with a claim like "is_service_account": true.
* The middleware sets a session variable: SET app.is_service_account = 'true'.
* Your policies include an escape hatch:
CREATE OR REPLACE FUNCTION is_service_account() RETURNS BOOLEAN AS $$
SELECT current_setting('app.is_service_account', true) = 'true';
$$ LANGUAGE SQL STABLE;
-- Revised SELECT policy for projects
DROP POLICY select_projects_if_member ON projects;
CREATE POLICY select_projects_policy ON projects FOR SELECT USING (
is_service_account() OR -- Escape hatch for service accounts
EXISTS (
SELECT 1
FROM project_members
WHERE project_id = projects.id AND user_id = current_user_id()
)
);
This approach is more auditable and allows for finer control than the all-or-nothing BYPASS RLS.
2. The Connection Pooling Trap (PgBouncer)
This is the most critical operational issue. Many popular connection poolers, notably PgBouncer when run in transaction pooling mode (a very common configuration), will break this RLS pattern. In transaction pooling, a client is assigned a connection only for the duration of a single transaction. When the transaction commits, the connection is returned to the pool, and its session state (including our app.current_user_id setting) may be wiped or given to a completely different application user for their next transaction.
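One workaround, previewed here before the list of solutions, is to make the settings transaction-local so nothing survives the transaction boundary at which the pooler hands the connection to someone else. A minimal sketch, assuming the same app.current_user_id parameter:
BEGIN;
-- Passing true as the third argument (the SET LOCAL form) scopes the setting
-- to this transaction; it vanishes at COMMIT or ROLLBACK.
SELECT set_config('app.current_user_id', 'some-user-uuid', true);
SELECT id, name FROM projects;  -- RLS sees the claim only inside this transaction
COMMIT;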
Solutions:
* Manage the session in the application: acquire a dedicated client from an application-side pool (such as node-postgres's pg.Pool) and ensure your application architecture correctly manages the lifecycle of a client connection for the duration of a single API request, as shown in our middleware example. This is often the most practical solution.
* Scope settings to a transaction: wrap each request in a transaction and use SET LOCAL (or set_config with is_local = true, as sketched above) to scope the settings. This is less common for typical web APIs but can be useful for complex background jobs.
3. Views and Functions: `security_definer` vs `security_invoker`
RLS interaction with other database objects can be tricky.
* Views: RLS policies on the underlying tables of a view are applied when the view is queried, but they are evaluated as the view's owner rather than the caller unless the view is created with the security_invoker option (PostgreSQL 15+).
* Functions: This is more complex. Functions can be defined as SECURITY INVOKER (the default) or SECURITY DEFINER.
* SECURITY INVOKER: The function executes with the permissions of the user *calling* it. RLS policies are applied as expected.
* SECURITY DEFINER: The function executes with the permissions of the user who *defined* it. This can be used to intentionally grant elevated, temporary permissions. For example, a SECURITY DEFINER function owned by a user with BYPASS RLS could perform an action that the calling user normally couldn't. This is an extremely powerful tool for encapsulating privileged operations but is also a potential security vulnerability if not written carefully. Use SET search_path inside SECURITY DEFINER functions to prevent search path hijacking attacks, as sketched below.
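As a hedged illustration of that last point (the function and its purpose are made up for this example, not part of the schema above), a SECURITY DEFINER function should pin its search_path as part of its definition:
-- Illustrative only: a privileged helper owned by a suitably trusted role.
CREATE OR REPLACE FUNCTION admin_project_count(p_tenant UUID) RETURNS BIGINT AS $$
SELECT count(*) FROM projects WHERE tenant_id = p_tenant;
$$ LANGUAGE SQL STABLE
SECURITY DEFINER
SET search_path = public;  -- pinned search_path blocks hijacking via malicious schemas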
Conclusion: A Paradigm Shift in Authorization
Integrating PostgreSQL RLS with JWT-based authentication moves authorization from a scattered application-level concern to a centralized, consistent, and powerful data-layer enforcement mechanism. While the initial setup is more complex than simple application checks, the long-term benefits are substantial:
* Security: Authorization rules are enforced right next to the data, minimizing the surface area for bugs and vulnerabilities.
* Consistency: The same rules apply whether data is accessed via your API, a data analytics tool, or a direct database query.
* Simplicity: Application code is simplified, focusing purely on business logic without being cluttered by repetitive authorization checks.
Successfully implementing this pattern requires a deep understanding of PostgreSQL's session model, query planner, and the operational realities of connection pooling. By carefully designing performant policies, handling administrative access patterns, and navigating the pitfalls of connection management, you can build a highly secure and maintainable authorization system that scales with your application's complexity.