PostgreSQL RLS with JWTs for Granular API Authorization
The Fallacy of Application-Layer Authorization
For years, the standard pattern for multi-tenant data isolation has been deceptively simple: diligently add a WHERE tenant_id = :current_tenant_id clause to every single database query. This approach, while functional, is a house of cards. It's brittle, error-prone, and decentralizes your core security model across countless repositories, functions, and developers. A single missed WHERE clause in a complex JOIN or a hastily written analytics query can lead to catastrophic data leaks.
Senior engineers know that relying on developer discipline alone for fundamental security is a losing strategy. The principle of least privilege should be enforced at the lowest possible layer—the database itself. This is where PostgreSQL's Row-Level Security (RLS) becomes a game-changer.
This article presents a production-proven architecture for offloading authorization logic directly to PostgreSQL. We will use signed JSON Web Tokens (JWTs) as the source of truth for user identity and permissions, propagate this context securely to the database session, and use RLS policies to enforce access control automatically and transparently for every query. We will move beyond simple examples and dive into complex RBAC, performance tuning, and the subtle edge cases that separate a proof-of-concept from a production-ready system.
Part 1: Architecture Overview and Schema Foundation
Our goal is to make the database itself aware of the current user's identity and permissions for the duration of a single API request. The flow is as follows:
1. A user authenticates, and the API issues a signed JWT containing claims for user_id, tenant_id, and role.
2. The client sends this JWT in the Authorization header of every subsequent request.
3. The API validates the JWT and, inside the request's database transaction, exposes the claims to PostgreSQL via SET LOCAL.
4. The application issues plain queries, with no explicit tenant_id or user_id filtering.
5. PostgreSQL automatically applies the equivalent of WHERE clauses to each query based on the session context and the RLS policies defined on the tables.
The Database Schema
Let's start with a canonical multi-tenant schema. Note the tenant_id and creator_id columns, which will be central to our policies.
-- A unique identifier for each tenant in our system
CREATE TABLE tenants (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Users belong to a single tenant
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
    email TEXT NOT NULL UNIQUE,
    -- In a real app, this would be a hashed password
    password_hash TEXT NOT NULL,
    -- 'admin' or 'member'
    role TEXT NOT NULL CHECK (role IN ('admin', 'member')),
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Projects are the primary resource, owned by a tenant and a user
CREATE TABLE projects (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id UUID NOT NULL REFERENCES tenants(id) ON DELETE CASCADE,
    creator_id UUID NOT NULL REFERENCES users(id) ON DELETE RESTRICT,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

-- Create indexes for performance
CREATE INDEX ON users(tenant_id);
CREATE INDEX ON projects(tenant_id);
CREATE INDEX ON projects(creator_id);
The JWT Structure
Our JWT payload will be the source of truth for the user's session. It must contain all the necessary claims for our RLS policies. A typical payload after a user logs in would look like this:
{
  "sub": "a1b2c3d4-e5f6-a7b8-c9d0-e1f2a3b4c5d6",  // user_id
  "tenant_id": "f0e9d8c7-b6a5-f4e3-d2c1-b0a9f8e7d6c5",
  "role": "admin",
  "iat": 1678886400,
  "exp": 1678890000
}
This token unambiguously identifies the user, their tenant, and their role within that tenant.
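Because these claims will flow straight into database session state, the API should sanity-check their shape after verifying the token's signature with its JWT library. A minimal sketch of such a check (the UUID pattern and the allowed-role set are illustrative assumptions, not part of any library API):

```javascript
// Shape-check for the JWT claims used by our RLS policies.
// Assumes signature verification has already been performed.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
const ALLOWED_ROLES = new Set(['admin', 'member']); // illustrative whitelist

function validateClaims(payload) {
  return (
    typeof payload === 'object' && payload !== null &&
    UUID_RE.test(payload.sub) &&        // user_id must be a UUID
    UUID_RE.test(payload.tenant_id) &&  // tenant_id must be a UUID
    ALLOWED_ROLES.has(payload.role)     // role must be a known value
  );
}
```

Rejecting malformed claims here means garbage never reaches the database session, which keeps the RLS casts (::uuid) from failing mid-request.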
Part 2: Securely Passing JWT Claims to PostgreSQL
This is the most critical and often misunderstood part of the implementation. We cannot simply thread the claims through every business query as parameters; that would defeat the purpose of transparent RLS. Instead, we set them once per transaction as session-level configuration variables that the policies can read.
PostgreSQL provides the SET command for this, but SET is session-wide and can leak context between requests if you're using a connection pool. The correct, transaction-safe command is SET LOCAL.
SET LOCAL sets a configuration parameter for the current transaction only. When the transaction commits or rolls back, the setting reverts to its previous value. This is perfect for web requests, where each request is typically wrapped in its own transaction. One caveat: SET LOCAL cannot take bind parameters, so from application code it is safer to call the equivalent function set_config('name', value, true) in a parameterized query than to interpolate claim values into a SET statement.
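A quick psql session illustrates the scoping (a sketch; the parameter name follows the app.claims.* namespace we use below):

```sql
BEGIN;
SET LOCAL app.claims.tenant_id = 'f0e9d8c7-b6a5-f4e3-d2c1-b0a9f8e7d6c5';
SELECT current_setting('app.claims.tenant_id');        -- the tenant id
COMMIT;
-- After COMMIT the setting has reverted. The second argument (missing_ok)
-- makes current_setting tolerate an unset parameter instead of erroring:
SELECT current_setting('app.claims.tenant_id', true);  -- no longer set
```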
We'll create a helper function in our API to set these claims. Here's an example using Node.js with the pg library.
// db.js - Database client setup
const { Pool } = require('pg');
const pool = new Pool({
// your connection config
});
// A helper to get a client, begin a transaction, and set RLS context
async function getClientWithRLS(jwtPayload) {
  const client = await pool.connect();
  // Start a transaction
  await client.query('BEGIN');
  try {
    // Transaction-scoped settings are CRITICAL with connection pooling.
    // set_config(name, value, true) is equivalent to SET LOCAL, but unlike
    // SET LOCAL it accepts bind parameters, which closes the SQL injection
    // hole that string-interpolating claim values would open.
    // We use a custom namespace 'app.claims.*' to avoid conflicts.
    await client.query(
      `SELECT set_config('app.claims.tenant_id', $1, true),
              set_config('app.claims.user_id',   $2, true),
              set_config('app.claims.role',      $3, true)`,
      [jwtPayload.tenant_id, jwtPayload.sub, jwtPayload.role]
    );
    // IMPORTANT: PostgreSQL settings are always strings. We'll cast them
    // back to the correct types (e.g. ::uuid) inside the RLS policies.
    return client;
  } catch (err) {
    // If setting claims fails, roll back and release the connection
    await client.query('ROLLBACK');
    client.release();
    throw new Error('Failed to set RLS context');
  }
}
// In your Express middleware or request handler:
app.get('/projects', async (req, res) => {
  let client;
  try {
    // Assume req.user is populated by JWT validation middleware
    client = await getClientWithRLS(req.user);
    // Application logic now runs with RLS enabled.
    // Notice NO `WHERE tenant_id = ...` clause!
    const { rows } = await client.query('SELECT id, name FROM projects');
    await client.query('COMMIT');
    res.json(rows);
  } catch (err) {
    if (client) {
      await client.query('ROLLBACK');
    }
    console.error(err);
    res.status(500).send('Internal Server Error');
  } finally {
    if (client) {
      client.release();
    }
  }
});
Security Consideration: The database user connecting from your application (api_user in this example) should be a non-privileged role. It should NOT be the table owner or a superuser. This role is granted SELECT, INSERT, UPDATE, DELETE on the tables it needs, but nothing more. That way, even a successful SQL injection through the application cannot alter or disable the RLS policies themselves.
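Concretely, the application role can be provisioned along these lines (role name, schema, and the password placeholder are illustrative):

```sql
-- A dedicated, non-privileged login role for the API. It owns no tables,
-- so it cannot ALTER them or drop/disable their RLS policies.
CREATE ROLE api_user LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA public TO api_user;
GRANT SELECT, INSERT, UPDATE, DELETE
    ON tenants, users, projects TO api_user;
```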
Part 3: Implementing Core RLS Policies
With the context in place, we can now define the security rules. RLS policies are attached to a table and define boolean expressions that must be true for a row to be visible (USING clause) or modifiable (WITH CHECK clause).
First, we must enable RLS on each table we want to protect.
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
-- By default, this is restrictive. No rows are visible until a policy is created.
-- It's also good practice to force RLS for the table owner.
ALTER TABLE users FORCE ROW LEVEL SECURITY;
ALTER TABLE projects FORCE ROW LEVEL SECURITY;
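With RLS enabled but no policies yet defined, the tables are default-deny for any role subject to RLS. A quick sanity check, run as api_user (illustrative):

```sql
-- RLS enabled, zero policies: every row is filtered out for api_user
SELECT count(*) FROM projects;  -- returns 0 regardless of table contents
-- Writes fail outright rather than silently vanishing:
INSERT INTO projects (tenant_id, creator_id, name)
    VALUES (gen_random_uuid(), gen_random_uuid(), 'x');
-- ERROR:  new row violates row-level security policy for table "projects"
```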
Now, let's create our first, most fundamental policy: a user can only interact with data within their own tenant.
We'll use the current_setting() function to retrieve the variables we set with SET LOCAL. We must cast the string value to the correct type (e.g., ::uuid).
-- Policy for 'projects' table
CREATE POLICY tenant_isolation_policy ON projects
    AS PERMISSIVE  -- 'PERMISSIVE' policies combine with OR; 'RESTRICTIVE' with AND
    FOR ALL        -- Applies to SELECT, INSERT, UPDATE, DELETE
    TO api_user    -- Apply this policy only to our application's role
    USING (
        tenant_id = current_setting('app.claims.tenant_id')::uuid
    )
    WITH CHECK (
        tenant_id = current_setting('app.claims.tenant_id')::uuid
    );
Dissecting the Policy:
* ON projects: The policy is for the projects table.
* FOR ALL: The policy applies to all command types. You can specify FOR SELECT, FOR INSERT, etc., for more granular rules.
* TO api_user: This is crucial for security. The policy only applies to our application's user role. This means administrative roles (like for migrations) are not affected.
* USING (...): This clause is for SELECT, UPDATE, and DELETE. A row is only visible or affectable if this expression returns true.
* WITH CHECK (...): This clause is for INSERT and UPDATE. It prevents a user from creating or moving a row *outside* their permitted scope. For example, it stops them from inserting a project with a different tenant_id.
Now, when our application runs SELECT * FROM projects;, PostgreSQL applies the policy predicate as if we had written:
SELECT * FROM projects WHERE tenant_id = current_setting('app.claims.tenant_id')::uuid;
This is completely transparent to the application developer. The security is now centralized and unavoidable.
Part 4: Advanced RBAC Policies
Simple tenant isolation is powerful, but real applications require more complex role-based access control (RBAC). Let's extend our model to handle the 'admin' vs. 'member' roles defined in our JWT.
Requirement:
* An admin can see and edit *all* projects within their tenant.
* A member can only see and edit projects *they created*.
To achieve this, we can create multiple PERMISSIVE policies. If any permissive policy for a given command evaluates to true, the operation is allowed.
First, let's drop the simple policy and create more specific ones.
DROP POLICY tenant_isolation_policy ON projects;
-- Policy for tenant admins
CREATE POLICY admin_full_access ON projects
    AS PERMISSIVE
    FOR ALL
    TO api_user
    USING (
        -- User must be an 'admin' AND within the correct tenant
        current_setting('app.claims.role') = 'admin' AND
        tenant_id = current_setting('app.claims.tenant_id')::uuid
    )
    WITH CHECK (
        current_setting('app.claims.role') = 'admin' AND
        tenant_id = current_setting('app.claims.tenant_id')::uuid
    );

-- Policy for tenant members
CREATE POLICY member_own_projects_access ON projects
    AS PERMISSIVE
    FOR ALL
    TO api_user
    USING (
        -- User must be a 'member' AND the creator of the project
        current_setting('app.claims.role') = 'member' AND
        creator_id = current_setting('app.claims.user_id')::uuid
    )
    WITH CHECK (
        -- When creating/updating, they can only assign themselves as the creator
        current_setting('app.claims.role') = 'member' AND
        creator_id = current_setting('app.claims.user_id')::uuid AND
        -- Also ensure they don't try to insert into another tenant
        tenant_id = current_setting('app.claims.tenant_id')::uuid
    );
Now, the logic is as follows:
* If a user with an admin role connects, the admin_full_access policy's USING clause will be true for all projects in their tenant, so they see everything.
* If a user with a member role connects, the admin_full_access policy is false. The database then checks member_own_projects_access. Its USING clause will only be true for projects where creator_id matches their own user_id.
This model is incredibly expressive. You can model complex ownership, team-based access, or feature-flag-based permissions by adding more JWT claims and corresponding RLS policies.
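One practical habit is to mirror the policy logic in a tiny application-side function for unit tests, so changes to the authorization model can be exercised without a database. This mirror is an illustrative sketch of the two PERMISSIVE policies above, not a replacement for the database-enforced rules:

```javascript
// Mirrors the admin_full_access and member_own_projects_access policies.
// claims: { role, tenant_id, user_id }  project: { tenant_id, creator_id }
function canAccessProject(claims, project) {
  const adminPolicy =
    claims.role === 'admin' && project.tenant_id === claims.tenant_id;
  const memberPolicy =
    claims.role === 'member' && project.creator_id === claims.user_id;
  // PERMISSIVE policies combine with OR: either one grants access.
  return adminPolicy || memberPolicy;
}
```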
Part 5: Performance, Indexing, and Connection Pooling
Moving logic into the database always raises performance questions. Fortunately, RLS is implemented efficiently and can often be faster than application-layer filtering.
RLS and the Query Planner
The PostgreSQL query planner is fully aware of RLS policies. The policy's USING clause is merged into the query's WHERE clause before the query plan is generated. This allows the planner to use indexes that match the policy predicates.
Consider our member policy:
USING (creator_id = current_setting('app.claims.user_id')::uuid)
When a member runs SELECT * FROM projects ORDER BY created_at DESC;, the planner sees the effective query as:
SELECT * FROM projects WHERE creator_id = '...' ORDER BY created_at DESC;
If we have an index on (creator_id, created_at), the planner can use it for a highly efficient index scan. An application-layer approach might fetch all tenant projects and filter/sort in memory, which is far less scalable.
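Assuming members commonly list their own projects ordered by recency, a matching composite index might look like the following (the index name is illustrative):

```sql
-- Supports the effective query:
--   WHERE creator_id = $1 ORDER BY created_at DESC
CREATE INDEX projects_creator_recent_idx
    ON projects (creator_id, created_at DESC);

-- Verify that the RLS-injected qual actually uses the index:
EXPLAIN SELECT id, name FROM projects ORDER BY created_at DESC;
```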
Benchmarking current_setting()
The current_setting() function is marked as STABLE, meaning its result is consistent within a single query. Its overhead is minimal, typically measured in microseconds. For 99% of applications, this overhead is negligible compared to query execution time and network I/O. Do not prematurely optimize this away.
Connection Pooling (PgBouncer)
A major concern for senior engineers is compatibility with transaction-level connection poolers like PgBouncer. Because all claim settings are scoped to the transaction with SET LOCAL, this architecture stays safe behind a pooler.
* Session Pooling: Not recommended here; a connection is pinned to one client for its entire session, which wastes pool capacity. Session state never leaks between different clients, and SET LOCAL still scopes our claims to each transaction.
* Transaction Pooling: This is the most common and efficient mode. A client gets a connection for the duration of a single transaction. Since SET LOCAL is transaction-scoped, when the transaction ends and the connection is returned to the pool, our custom settings are automatically cleared. The next client to get that connection starts with a clean slate. This is the ideal scenario.
* Statement Pooling: Not compatible. Each statement gets a connection, so SET LOCAL would not persist between the SET command and the actual business query.
Conclusion: Use SET LOCAL and PgBouncer in transaction pooling mode for a scalable and safe setup.
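A minimal PgBouncer configuration for this setup might look like the following (database name, host, and sizing values are placeholders):

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
pool_mode = transaction   ; SET LOCAL state is cleared at transaction end
max_client_conn = 500
default_pool_size = 20
```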
Part 6: Advanced Edge Cases and Production Hardening
The `security_barrier` View Attribute and `LEAKPROOF` Functions
This is a subtle but critical security feature. Imagine you have a view that relies on a leaky function, one that can reveal information about its arguments, for example through its error messages:
-- A function that might throw an error based on its input
CREATE FUNCTION is_valid_project_name(name TEXT) RETURNS BOOLEAN AS $$
BEGIN
    IF LENGTH(name) < 3 THEN
        RAISE EXCEPTION 'Project name too short';
    END IF;
    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;

-- A view that uses this function
CREATE VIEW valid_projects AS
SELECT * FROM projects WHERE is_valid_project_name(name);
Now, a malicious user in tenant_A could try to probe for project names in tenant_B:
SELECT * FROM valid_projects WHERE id = 'project_id_from_tenant_B';
Against the RLS-protected table itself, this probe fails: PostgreSQL treats RLS policy predicates as security barriers, so only functions and operators marked LEAKPROOF can be evaluated before them, and is_valid_project_name never runs on tenant_B's rows. A plain view gives no such guarantee. When a view's own WHERE clause carries the security filtering (instead of, or in addition to, RLS), the planner is free to evaluate a cheap user-supplied or view-level function before that filter. If the probed project's name is too short, the function raises an error, leaking that the project exists and that its name is short. This is a side-channel attack.
To close this gap for views whose WHERE clause is security-relevant, set the security_barrier option:
ALTER VIEW valid_projects SET (security_barrier = true);
-- Note: there is no SECURITY BARRIER attribute for functions. The related
-- function attribute is LEAKPROOF, which asserts that a function reveals
-- nothing about its arguments, not even through error messages, and may
-- therefore be pushed below a security barrier. Only superusers can set it,
-- and is_valid_project_name must NOT be marked LEAKPROOF: its exception
-- text leaks information about its input.
The security_barrier option forces the view's own WHERE clause to be evaluated before any conditions the caller adds, closing the side-channel at a minor performance cost, since the planner loses some freedom to reorder predicates.
Superuser Access and Migrations
Your migration tools and administrative roles should not be subject to RLS. There are two ways to handle this:
1. Scope policies with TO api_user. If you connect as a different role (e.g., postgres or a dedicated migration_user), the policies simply don't apply.
2. The BYPASSRLS attribute. You can grant a specific role the ability to bypass all RLS policies. This is extremely powerful and should be used with caution: ALTER ROLE migration_user BYPASSRLS; (Superusers bypass RLS unconditionally.)
Differentiating "Not Found" vs. "Forbidden"
One downside of RLS is that it treats non-existent rows and forbidden rows identically: it returns an empty result set. If your API needs to return a 404 Not Found for non-existent items but a 403 Forbidden for existing but inaccessible items, RLS alone is insufficient.
The common pattern to solve this is to perform a lightweight secondary check:
// In an Express handler for GET /projects/:id
const { id } = req.params;
const client = await getClientWithRLS(req.user);
// This query will return a row ONLY if it exists AND the user has access
const { rows } = await client.query('SELECT id FROM projects WHERE id = $1', [id]);
if (rows.length > 0) {
  // User has access, proceed to fetch full data
  res.json(fullProjectData);
} else {
  // The result was empty. Was the row not found, or merely forbidden?
  // Answering requires a lookup that is not subject to RLS: for example a
  // SECURITY DEFINER function owned by a BYPASSRLS role, or, more
  // pragmatically, a separate DB client (adminClient here) connected as a
  // role to which the policies do not apply.
  const { rowCount } = await adminClient.query('SELECT 1 FROM projects WHERE id = $1', [id]);
  if (rowCount > 0) {
    res.status(403).send('Forbidden');
  } else {
    res.status(404).send('Not Found');
  }
}
This adds complexity and is a trade-off. For many internal APIs or list-based endpoints, simply returning an empty set is sufficient and more secure.
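If you prefer to keep the existence check in the database, the SECURITY DEFINER approach mentioned above can be sketched as follows (function and role names are illustrative; the owning role must bypass RLS, e.g. via BYPASSRLS):

```sql
-- SECURITY DEFINER runs with the owner's privileges, so callers learn
-- only a boolean, never any row data. Pinning search_path is standard
-- hardening for SECURITY DEFINER functions.
CREATE FUNCTION project_exists(pid UUID) RETURNS BOOLEAN AS $$
    SELECT EXISTS (SELECT 1 FROM projects WHERE id = pid);
$$ LANGUAGE sql STABLE SECURITY DEFINER SET search_path = public;

-- Lock it down: only the API role may call it.
REVOKE ALL ON FUNCTION project_exists(UUID) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION project_exists(UUID) TO api_user;
```

With this in place, the handler needs only one extra query through the normal client instead of a second privileged connection.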
Conclusion: A Paradigm Shift in Application Security
By embedding authorization logic directly into PostgreSQL with Row-Level Security and driving it with JWT claims, we achieve a superior security posture. This pattern centralizes your most critical business rules in one place, making them auditable, robust, and immune to common application-layer bugs.
The application code becomes dramatically simpler—it no longer needs to be aware of tenancy or user roles when querying data. It simply asks for what it wants, and the database guarantees that only the appropriate data is returned.
While the initial setup is more involved than peppering your code with WHERE clauses, the long-term benefits in security, maintainability, and peace of mind are immeasurable. It represents a fundamental shift from suggesting security guidelines at the application layer to enforcing them at the data layer.