Postgres RLS with JSONB for Dynamic Attribute-Based Access Control
The Inevitable Ceiling of Role-Based Access Control (RBAC)
For senior engineers who have built and maintained complex applications, the limitations of traditional Role-Based Access Control (RBAC) are not a theoretical concept—they are a source of technical debt and architectural friction. While RBAC is effective for simple systems (admin, editor, viewer), it breaks down under the pressure of real-world complexity:
* Role Explosion: As requirements become more granular, the number of roles explodes. You end up with roles like project_123_billing_admin_us_east_region, leading to an unmanageable permission matrix.
* Context-Insensitivity: RBAC is static. It cannot easily answer questions like, "Can this user edit this document only during business hours?" or "Can this support agent view customer data only if an active support ticket exists?"
* Attribute-Dependent Logic: Access often depends on the attributes of the resource itself. For example, a user can only access documents marked with sensitivity: 'public' unless they have a specific clearance: 'secret' attribute. Implementing this in RBAC often requires hardcoding logic into the application layer, which is brittle and bypassable.
This forces authorization logic out of a centralized, secure layer and scatters it across microservices, API gateways, and front-end components. It becomes inconsistent, difficult to audit, and prone to security vulnerabilities.
The solution is to shift the paradigm from who the user is (their role) to what the user can do based on their attributes and the context of the request. This is Attribute-Based Access Control (ABAC).
This article details a production-ready pattern for implementing a robust ABAC system directly within PostgreSQL, using two of its most powerful features: Row-Level Security (RLS) and the jsonb data type.
The ABAC Paradigm: RLS + JSONB + Session Context
Our architecture hinges on a simple yet powerful trifecta:
* Flexible attribute storage: User permissions and resource attributes live in jsonb columns. This schema-on-read approach allows us to define and evolve complex permission structures without costly schema migrations.
* Row-Level Security: RLS policies act as an implicit, mandatory WHERE clause, ensuring that a user can only ever see or modify the rows they are authorized to access, regardless of the client application (API, direct DB connection, analytics tool).
* Session context: The application passes the authenticated user's claims to PostgreSQL via session variables (read with current_setting). The RLS policies then use these session variables to make dynamic, per-request authorization decisions.

This moves authorization from the application's imperative code to the database's declarative layer, creating a single, auditable, and highly secure source of truth for access control.
Section 1: Schema Design for a Production ABAC System
Let's design a schema for a multi-tenant document management system. Our requirements:
* Users belong to an organization.
* Documents have a set of arbitrary attributes (e.g., project_id, region, sensitivity_level).
* A user's permissions are complex: they can have read/write access to specific projects, be limited to certain regions, and have clearance levels.
Here is our core schema:
-- Enable the pgcrypto extension for UUIDs
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- Organizations table
CREATE TABLE organizations (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL
);
-- Users table with a jsonb column for permissions
CREATE TABLE users (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
organization_id UUID NOT NULL REFERENCES organizations(id),
email TEXT NOT NULL UNIQUE,
-- permissions stores user-specific attributes and rights
permissions JSONB NOT NULL DEFAULT '{}'::jsonb
);
-- Documents table with a jsonb column for resource attributes
CREATE TABLE documents (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
organization_id UUID NOT NULL REFERENCES organizations(id),
title TEXT NOT NULL,
content TEXT,
-- attributes stores resource-specific metadata for policy evaluation
attributes JSONB NOT NULL DEFAULT '{}'::jsonb,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Enable RLS on the tables we want to protect
ALTER TABLE users ENABLE ROW LEVEL SECURITY;
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
Dissecting the `jsonb` Structures
The power of this model lies in the structure of our permissions and attributes columns.
Example users.permissions object:
{
"projects": {
"proj_alpha": ["read", "write"],
"proj_beta": ["read"]
},
"regions": ["us-east-1", "eu-west-2"],
"clearance_level": 3,
"is_org_admin": false
}
This structure is incredibly flexible. We can grant project-specific rights, define geographical access, assign numerical clearance levels, and add boolean flags without altering the table schema.
Example documents.attributes object:
{
"project_id": "proj_alpha",
"region": "us-east-1",
"sensitivity_level": 2,
"status": "published"
}
This allows us to tag resources with metadata that our RLS policies will use for enforcement.
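To make the semantics concrete before we write any SQL, here is a small JavaScript sketch that evaluates the two example objects above the way our policies will. The function name is illustrative, and this mirror is for intuition only; the database, not application code, will do the enforcing.

```javascript
// Illustrative only: a plain-JS mirror of the checks our SQL policies will perform.
// `permissions` is a users.permissions object; `attributes` is documents.attributes.
function canRead(permissions, attributes) {
  // SQL equivalent: (permissions -> 'projects') ? (attributes ->> 'project_id')
  const projects = permissions.projects || {};
  if (!(attributes.project_id in projects)) return false;
  // SQL equivalent: COALESCE(clearance_level, 0) >= COALESCE(sensitivity_level, 0)
  return (permissions.clearance_level ?? 0) >= (attributes.sensitivity_level ?? 0);
}

const permissions = {
  projects: { proj_alpha: ['read', 'write'], proj_beta: ['read'] },
  regions: ['us-east-1', 'eu-west-2'],
  clearance_level: 3,
  is_org_admin: false
};
const attributes = {
  project_id: 'proj_alpha',
  region: 'us-east-1',
  sensitivity_level: 2,
  status: 'published'
};

console.log(canRead(permissions, attributes)); // true: proj_alpha key exists and 3 >= 2
console.log(canRead(permissions, { project_id: 'proj_gamma' })); // false: no such project key
```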
Section 2: Implementing Core RLS Policies
With our schema in place, we can define the rules. The first step is to establish how the application will pass user context to PostgreSQL.
The Session Context Bridge
Our application will set a session variable, for example abac.user_claims, after a user is authenticated. This variable will contain a JSON string of the user's ID, organization ID, and their permissions.
Here's how a Node.js Express middleware might do it:
// Middleware to set PostgreSQL session context
async function setDbUserContext(req, res, next) {
  if (!req.user) { // Assuming req.user is populated by a previous auth middleware
    return next();
  }
  const client = await pool.connect();
  try {
    const claims = {
      user_id: req.user.id,
      org_id: req.user.organization_id,
      permissions: req.user.permissions // This is the JSONB object from the users table
    };
    // IMPORTANT: a transaction-scoped setting (set_config(..., true), the
    // equivalent of SET LOCAL) only survives inside an explicit transaction,
    // so we open one that spans the request. This also prevents context
    // bleeding between requests that share a pooled connection.
    await client.query('BEGIN');
    // Use the parameterized set_config() form, never string interpolation,
    // which would be an SQL injection vector through the claims payload.
    await client.query("SELECT set_config('abac.user_claims', $1, true)", [
      JSON.stringify(claims)
    ]);
    // Store the client on the request object for the route handler to use
    req.dbClient = client;
    // Commit and release when the response finishes, so the transaction
    // (and the setting) covers the entire request lifecycle.
    res.on('finish', async () => {
      try {
        await client.query('COMMIT');
      } finally {
        client.release();
      }
    });
    next();
  } catch (err) {
    console.error('Failed to set DB user context', err);
    client.release();
    res.status(500).send('Internal Server Error');
  }
}
Now, inside our PostgreSQL policies, we can access this context:
-- Helper function to get the current user's claims (with caching)
CREATE OR REPLACE FUNCTION abac_get_claims() RETURNS JSONB AS $$
DECLARE
claims_str TEXT;
BEGIN
-- current_setting will return NULL if not set. The 'true' flag means
-- it won't error if the setting is missing.
claims_str := current_setting('abac.user_claims', true);
-- Return a default empty object if not set, preventing errors in policies
IF claims_str IS NULL OR claims_str = '' THEN
RETURN '{}'::jsonb;
END IF;
RETURN claims_str::jsonb;
END;
$$ LANGUAGE plpgsql STABLE;
Note: We use STABLE to allow PostgreSQL to cache the result of the function within a single query, avoiding repeated parsing of the JSON string.
Policy 1: Basic Tenancy and Read Access
Let's create our first policy. A user can see a document if:
- The document belongs to their organization.
- The user's permissions.projects object contains the document's attributes.project_id as a key.

CREATE POLICY select_documents
ON documents
FOR SELECT
USING (
-- Rule 1: Enforce organization tenancy
organization_id = (abac_get_claims() ->> 'org_id')::UUID
AND
-- Rule 2: Check project-level access
-- The `?` operator checks if a string exists as a top-level key in a JSONB object.
(abac_get_claims() -> 'permissions' -> 'projects') ? (attributes ->> 'project_id')
);
Let's break down the USING clause:
* (abac_get_claims() ->> 'org_id')::UUID: We extract the org_id from our session claims JSON, cast it to UUID, and ensure it matches the document's organization_id. This is our fundamental multi-tenancy barrier.
* (abac_get_claims() -> 'permissions' -> 'projects') ? (attributes ->> 'project_id'): This is the core of our ABAC logic.
* abac_get_claims() -> 'permissions' -> 'projects': Navigates into the user's permissions to get the projects object.
* attributes ->> 'project_id': Extracts the project_id value from the document's attributes.
* ?: The JSONB "exists" operator checks if the right-hand side string ('proj_alpha') exists as a key in the left-hand side JSONB object ({"proj_alpha": [...], "proj_beta": [...]}).
With this policy active, a SELECT * FROM documents; executed by a user with access only to proj_alpha will automatically be filtered by the database to return only proj_alpha documents.
Section 3: Advanced Policy Logic and Encapsulation
Real-world scenarios are more complex. Let's build a policy for updates that incorporates multiple attribute checks.
Policy 2: Conditional Write Access
A user can update a document if:
- All read conditions are met.
- The user's permission array for that project contains "write".
- The user's clearance_level is greater than or equal to the document's sensitivity_level.

CREATE POLICY update_documents
ON documents
FOR UPDATE
USING (
-- Reuse the tenancy check
organization_id = (abac_get_claims() ->> 'org_id')::UUID
)
WITH CHECK (
-- NOTE: when WITH CHECK is given explicitly, it REPLACES (it does not
-- extend) the USING expression for validating the new row version, so we
-- repeat the tenancy check before adding the stricter write-specific rules.
organization_id = (abac_get_claims() ->> 'org_id')::UUID
AND
-- Rule 2: Check for 'write' permission in the project's permission array
-- The `@>` operator checks if the left JSONB contains the right JSONB.
(abac_get_claims() -> 'permissions' -> 'projects' -> (attributes ->> 'project_id')) @> '"write"'::jsonb
AND
-- Rule 3: Check clearance level against sensitivity level
-- We coalesce to 0 to handle cases where the attribute might be missing.
COALESCE((abac_get_claims() -> 'permissions' ->> 'clearance_level')::int, 0)
>=
COALESCE((attributes ->> 'sensitivity_level')::int, 0)
);
Key Concepts Introduced:
* WITH CHECK vs. USING: For UPDATE, USING determines which existing rows are visible to (and therefore updatable by) the command, while WITH CHECK validates the new or updated row version. If WITH CHECK is omitted, the USING expression is applied to new rows as well; if it is specified, it replaces USING for that check. Either way, an UPDATE cannot move a row outside the set the policy permits.
* JSONB Path Navigation (->): We drill down into the JSONB object: ... -> 'projects' -> (attributes ->> 'project_id'). This dynamically selects the correct project's permission array based on the document being updated.
* JSONB Contains Operator (@>): jsonb_array @> '"value"'::jsonb is the canonical way to check if a JSON array of strings contains a specific value.
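In application code, the same `@>` array-containment check is just a membership test. Here is a hedged JavaScript mirror of rules 2 and 3 from the update policy (the function name is illustrative, and this is for intuition and testing, not enforcement):

```javascript
// Mirrors the WITH CHECK rules: 'write' must appear in the project's
// permission array, and clearance must cover the document's sensitivity.
function canWrite(permissions, attributes) {
  // SQL: (permissions -> 'projects' -> (attributes ->> 'project_id')) @> '"write"'::jsonb
  const projectPerms = (permissions.projects || {})[attributes.project_id] || [];
  if (!projectPerms.includes('write')) return false;
  // SQL: COALESCE(clearance_level, 0) >= COALESCE(sensitivity_level, 0)
  return (permissions.clearance_level ?? 0) >= (attributes.sensitivity_level ?? 0);
}

const perms = { projects: { proj_alpha: ['read', 'write'], proj_beta: ['read'] }, clearance_level: 3 };
console.log(canWrite(perms, { project_id: 'proj_alpha', sensitivity_level: 2 })); // true
console.log(canWrite(perms, { project_id: 'proj_beta', sensitivity_level: 1 }));  // false: read-only project
```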
Encapsulating Logic in Functions
As policies grow, they become repetitive and hard to maintain. A senior-level pattern is to encapsulate complex, reusable logic in IMMUTABLE or STABLE functions.
Let's create a function that checks all document permissions at once.
CREATE OR REPLACE FUNCTION check_document_access(doc_attributes JSONB, doc_org_id UUID)
RETURNS BOOLEAN AS $$
DECLARE
claims JSONB := abac_get_claims();
user_permissions JSONB;
project_id TEXT;
project_perms JSONB;
BEGIN
-- Basic tenancy check. IS DISTINCT FROM treats a NULL claim as a mismatch,
-- so a missing org_id fails closed; a plain != would yield NULL and let
-- execution fall through this guard.
IF (claims ->> 'org_id')::UUID IS DISTINCT FROM doc_org_id THEN
RETURN FALSE;
END IF;
user_permissions := claims -> 'permissions';
project_id := doc_attributes ->> 'project_id';
-- Must have access to the project
IF NOT (user_permissions -> 'projects') ? project_id THEN
RETURN FALSE;
END IF;
-- Check region access
IF doc_attributes ? 'region' AND NOT (user_permissions -> 'regions') @> (doc_attributes -> 'region') THEN
RETURN FALSE;
END IF;
-- Check clearance level
IF COALESCE((user_permissions ->> 'clearance_level')::int, 0) < COALESCE((doc_attributes ->> 'sensitivity_level')::int, 0) THEN
RETURN FALSE;
END IF;
RETURN TRUE;
END;
$$ LANGUAGE plpgsql STABLE;
Now, our SELECT policy becomes incredibly clean and readable:
-- Drop the old policy first
DROP POLICY IF EXISTS select_documents ON documents;
-- Create the new, simpler policy
CREATE POLICY select_documents ON documents
FOR SELECT
USING ( check_document_access(attributes, organization_id) );
This is far more maintainable. If our access logic changes, we update the function in one place, and all policies that use it are automatically updated.
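A side benefit of consolidating the rules in one function is testability: you can keep a line-for-line mirror of it in your application's test suite and exercise the permission matrix quickly, without a database. A sketch under that assumption (the JavaScript function is a hypothetical mirror; integration tests against a real Postgres instance remain the source of truth):

```javascript
// DB-free mirror of check_document_access for fast unit tests.
// Keep it in lockstep with the SQL function; drift here is a test bug,
// not a security bug, since Postgres still enforces the real policy.
function checkDocumentAccess(claims, docAttributes, docOrgId) {
  if (claims.org_id !== docOrgId) return false; // tenancy
  const perms = claims.permissions || {};
  const projects = perms.projects || {};
  if (!(docAttributes.project_id in projects)) return false; // project access
  if ('region' in docAttributes &&
      !(perms.regions || []).includes(docAttributes.region)) return false; // region
  return (perms.clearance_level ?? 0) >= (docAttributes.sensitivity_level ?? 0);
}

const claims = {
  org_id: 'org-1',
  permissions: { projects: { apollo: ['read'] }, regions: ['us-east-1'], clearance_level: 2 }
};
console.log(checkDocumentAccess(claims, { project_id: 'apollo', sensitivity_level: 2 }, 'org-1')); // true
console.log(checkDocumentAccess(claims, { project_id: 'apollo', region: 'eu-west-2' }, 'org-1'));  // false
console.log(checkDocumentAccess(claims, { project_id: 'apollo' }, 'org-2'));                       // false
```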
Section 4: Performance Deep Dive - The GIN Index Imperative
RLS policies are executed for every row access attempt. Without proper indexing, policies that query jsonb data will trigger sequential scans, leading to catastrophic performance degradation on large tables. A SELECT that should return 10 rows might have to scan millions.
Standard B-Tree indexes are not effective for most jsonb operators (?, @>, etc.). The solution is the Generalized Inverted Index (GIN).
A GIN index creates an index entry for each key and each value within the jsonb document. This allows PostgreSQL to rapidly find rows where the jsonb column contains a specific key or value, turning a full table scan into a highly efficient index scan.
Let's create GIN indexes on our attribute columns:
-- Index on user permissions to speed up lookups of projects, regions, etc.
CREATE INDEX idx_users_permissions_gin ON users USING GIN (permissions jsonb_path_ops);
-- Index on document attributes to speed up policy checks against project_id, region, etc.
CREATE INDEX idx_documents_attributes_gin ON documents USING GIN (attributes jsonb_path_ops);
Why jsonb_path_ops? The default GIN opclass (jsonb_ops) indexes every key and value, which can create very large indexes. jsonb_path_ops indexes hashes of complete paths instead, supporting only the containment (@>) and jsonpath match (@?, @@) operators, and results in a smaller, faster index. Note the trade-off: the key-existence operators (?, ?|, ?&) used in some of our policies are supported only by the default jsonb_ops opclass, so choose the opclass (or create both indexes) based on the operators your policies actually use.
Benchmarking the Impact
Let's simulate a table with 1 million documents and run a query with RLS enabled, both with and without the GIN index.
Scenario: A user with access to a single project (proj_123) queries the documents table.
Query Plan without GIN Index:
Seq Scan on documents (cost=0.00..55015.00 rows=5000 width=128)
Filter: (check_document_access(attributes, organization_id))
Execution Time: ~1500ms
The planner is forced to perform a sequential scan, executing our check_document_access function for all 1 million rows.
Query Plan with GIN Index on documents.attributes:
Bitmap Heap Scan on documents (cost=124.25..15487.50 rows=5000 width=128)
Recheck Cond: (attributes @> '{"project_id": "proj_123"}')
Filter: (check_document_access(attributes, organization_id))
-> Bitmap Index Scan on idx_documents_attributes_gin (cost=0.00..123.00 rows=5000 width=0)
Index Cond: (attributes @> '{"project_id": "proj_123"}')
Execution Time: ~15ms
The difference is staggering—a 100x improvement. The planner now uses the GIN index to instantly find the candidate rows matching the project_id and then applies the more expensive RLS function only to that small subset. Using GIN indexes is not optional for production RLS on jsonb; it is a fundamental requirement.
Section 5: Edge Cases and Production Considerations
A production system requires handling subtle but critical edge cases.
1. The Superuser/Migration Bypass Problem
By default, RLS does not apply to table owners or superusers. This is necessary for administrative tasks and schema migrations. However, during development or when using a shared role, this can lead to a false sense of security. Always test your policies with a dedicated, non-privileged application role.
To run migrations, grant your migration role the BYPASSRLS attribute (and revoke it afterwards if you want the exemption to be temporary): ALTER USER migration_user WITH BYPASSRLS;
2. Preventing Cross-Tenant Data Leakage in Policies
A poorly written policy is a massive security risk. Consider this seemingly innocent policy:
-- DANGEROUS POLICY --
CREATE POLICY select_user_profile ON users
FOR SELECT
USING ( id = (abac_get_claims() ->> 'user_id')::UUID );
This allows a user to see their own profile. What if abac.user_claims is not set? current_setting returns NULL, ->> yields NULL, and the condition becomes id = NULL, which is never true, so the unset case fails closed. Safe, right?
Mostly, but it hangs everything on a single predicate. If claims are ever mis-populated, say a user_id copied across tenants by a buggy upstream service, this policy would happily match a row in another organization. A robust policy practices defense in depth and checks tenancy first.
-- SAFE POLICY --
CREATE POLICY select_user_profile ON users
FOR SELECT
USING (
organization_id = (abac_get_claims() ->> 'org_id')::UUID
AND
id = (abac_get_claims() ->> 'user_id')::UUID
);
Always anchor your policies to the highest-level tenant identifier first.
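The value of the tenancy anchor is easy to demonstrate with a degenerate-case check: an empty claims object fails every strict comparison, and a forged user_id from another organization is blocked by the org_id term. A small illustrative sketch (hypothetical names, mirroring the SQL predicate):

```javascript
// Mirrors the SAFE policy's predicate: tenancy first, then identity.
// With missing claims, every strict comparison fails and access is denied,
// just as `id = NULL` is never true in SQL.
function canSeeProfile(claims, row) {
  return claims.org_id === row.organization_id && claims.user_id === row.id;
}

const row = { id: 'u-1', organization_id: 'org-1' };
console.log(canSeeProfile({}, row)); // false: unset claims fail closed
console.log(canSeeProfile({ org_id: 'org-1', user_id: 'u-1' }, row)); // true
console.log(canSeeProfile({ org_id: 'org-2', user_id: 'u-1' }, row)); // false: wrong tenant
```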
3. Auditing and Debugging
When a user reports they can't see data they expect, debugging RLS can be tricky. Use EXPLAIN to see how the policy is being applied.
-- As the application user, run:
EXPLAIN (ANALYZE, VERBOSE) SELECT * FROM documents WHERE id = 'some-doc-id';
The output will show the filter condition being added by the RLS policy, allowing you to see exactly what is being evaluated.
For more advanced auditing, you can use a trigger or log errors within your policy functions to a separate audit table, recording the user claims and the resource attributes for failed access attempts.
4. The LEAKPROOF Function Qualifier
By default, PostgreSQL evaluates RLS policy conditions before any user-supplied predicates, and it will not push a function it does not trust below that security barrier. This protects against side-channel attacks where a function's execution time or error messages could leak row data, but it can also prevent the planner from using indexes, hurting performance. Marking a function LEAKPROOF asserts that it reveals nothing about its arguments beyond its return value (no data-dependent errors or messages), allowing the planner to evaluate it earlier. Only superusers can create LEAKPROOF functions; use the qualifier with extreme caution and only after a thorough security review of the function's logic.
CREATE OR REPLACE FUNCTION my_secure_check(val TEXT) RETURNS BOOLEAN AS $$
BEGIN
-- ... logic that does not depend on external state or throw data-dependent errors
RETURN val IS NOT NULL;
END;
$$ LANGUAGE plpgsql STABLE LEAKPROOF;
Section 6: Full, Runnable Example
To put it all together, here is a docker-compose setup to demonstrate the entire system.
docker-compose.yml
version: '3.8'
services:
postgres:
image: postgres:15
restart: always
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=abac_demo
ports:
- '5432:5432'
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
app:
build: .
ports:
- '3000:3000'
depends_on:
- postgres
environment:
- DATABASE_URL=postgres://app_user:app_password@postgres:5432/abac_demo
init.sql
-- Create the application user and grant basic permissions
CREATE USER app_user WITH PASSWORD 'app_password';
GRANT CONNECT ON DATABASE abac_demo TO app_user;
-- NOTE: Do NOT create these objects as app_user. RLS does not apply to a
-- table's owner by default, so if app_user owned the tables, every policy
-- below would be silently bypassed. Keep ownership with the bootstrap
-- superuser (postgres) and grant app_user only the privileges it needs.
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE TABLE organizations (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name TEXT NOT NULL);
CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), organization_id UUID NOT NULL REFERENCES organizations(id), email TEXT NOT NULL UNIQUE, permissions JSONB NOT NULL DEFAULT '{}'::jsonb);
CREATE TABLE documents (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), organization_id UUID NOT NULL REFERENCES organizations(id), title TEXT NOT NULL, attributes JSONB NOT NULL DEFAULT '{}'::jsonb);
GRANT SELECT, INSERT, UPDATE, DELETE ON organizations, users, documents TO app_user;
-- Create GIN Indexes for performance
CREATE INDEX idx_users_permissions_gin ON users USING GIN (permissions jsonb_path_ops);
CREATE INDEX idx_documents_attributes_gin ON documents USING GIN (attributes jsonb_path_ops);
-- Data Setup
INSERT INTO organizations (id, name) VALUES ('11111111-1111-1111-1111-111111111111', 'Org Alpha');
INSERT INTO organizations (id, name) VALUES ('22222222-2222-2222-2222-222222222222', 'Org Beta');
-- User 1: Admin in Org Alpha, access to project 'apollo' and 'gemini'
INSERT INTO users (id, organization_id, email, permissions) VALUES ('aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', '11111111-1111-1111-1111-111111111111', '[email protected]',
'{"projects": {"apollo": ["read", "write"], "gemini": ["read"]}, "clearance_level": 5, "is_org_admin": true}');
-- User 2: Viewer in Org Alpha, access to project 'apollo' only
INSERT INTO users (id, organization_id, email, permissions) VALUES ('bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', '11111111-1111-1111-1111-111111111111', '[email protected]',
'{"projects": {"apollo": ["read"]}, "clearance_level": 2}');
-- Documents for Org Alpha
INSERT INTO documents (organization_id, title, attributes) VALUES
('11111111-1111-1111-1111-111111111111', 'Apollo Mission Plan', '{"project_id": "apollo", "sensitivity_level": 2}'),
('11111111-1111-1111-1111-111111111111', 'Apollo High-Risk Analysis', '{"project_id": "apollo", "sensitivity_level": 4}'),
('11111111-1111-1111-1111-111111111111', 'Gemini Launch Codes', '{"project_id": "gemini", "sensitivity_level": 5}');
-- Document for another org (should never be visible)
INSERT INTO documents (organization_id, title, attributes) VALUES
('22222222-2222-2222-2222-222222222222', 'Beta Org Secret', '{"project_id": "zeta"}');
-- RLS Setup
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
CREATE OR REPLACE FUNCTION abac_get_claims() RETURNS JSONB AS $$
BEGIN
RETURN current_setting('abac.user_claims', true)::jsonb;
EXCEPTION WHEN OTHERS THEN
RETURN '{}'::jsonb;
END;
$$ LANGUAGE plpgsql STABLE;
CREATE POLICY select_documents ON documents FOR SELECT USING (
organization_id = (abac_get_claims() ->> 'org_id')::UUID
AND
(abac_get_claims() -> 'permissions' -> 'projects') ? (attributes ->> 'project_id')
AND
COALESCE((abac_get_claims() -> 'permissions' ->> 'clearance_level')::int, 0) >= COALESCE((attributes ->> 'sensitivity_level')::int, 0)
);
Node.js app.js
const express = require('express');
const { Pool } = require('pg');
const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
// Mocks for user data. In a real app, this comes from a JWT or session.
const USERS_DB = {
'user-admin': { id: 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', organization_id: '11111111-1111-1111-1111-111111111111', permissions: { projects: { apollo: ['read', 'write'], gemini: ['read'] }, clearance_level: 5, is_org_admin: true } },
'user-viewer': { id: 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb', organization_id: '11111111-1111-1111-1111-111111111111', permissions: { projects: { apollo: ['read'] }, clearance_level: 2 } },
};
// Middleware to set DB context
app.use(async (req, res, next) => {
  const userId = req.header('x-user-id');
  if (!userId || !USERS_DB[userId]) {
    return res.status(401).send('Unauthorized');
  }
  const client = await pool.connect();
  req.dbClient = client;
  const claims = {
    user_id: USERS_DB[userId].id,
    org_id: USERS_DB[userId].organization_id,
    permissions: USERS_DB[userId].permissions
  };
  // Open an explicit transaction so the transaction-scoped setting is still
  // in effect when the route handler runs its queries, and use the
  // parameterized set_config() form to avoid SQL injection.
  await client.query('BEGIN');
  await client.query("SELECT set_config('abac.user_claims', $1, true)", [JSON.stringify(claims)]);
  res.on('finish', async () => {
    try { await client.query('COMMIT'); } finally { client.release(); }
  });
  next();
});
app.get('/documents', async (req, res) => {
try {
const { rows } = await req.dbClient.query('SELECT id, title, attributes FROM documents');
res.json(rows);
} catch (err) {
console.error(err);
res.status(500).send('Error');
}
});
app.listen(3000, () => console.log('Server running on port 3000'));
To test:
1. Run docker-compose up --build.
2. curl http://localhost:3000/documents -H "x-user-id: user-admin"
   Result: returns all three documents for Org Alpha, since the admin has access to both projects and clearance level 5.
3. curl http://localhost:3000/documents -H "x-user-id: user-viewer"
   Result: returns only "Apollo Mission Plan". The viewer's clearance (2) filters out the high-sensitivity Apollo document, and the Gemini document is excluded because they lack project access.
The Beta Org Secret document is never returned, demonstrating that the tenancy barrier is holding firm.
Conclusion: A Paradigm Shift in Authorization
By combining PostgreSQL's Row-Level Security and jsonb data type, we have built a powerful, centralized, and dynamic Attribute-Based Access Control system. This pattern moves authorization logic from a scattered, brittle application layer into a secure, performant, and auditable data layer.
While the initial setup is more complex than simple role flags, the long-term benefits for complex, multi-tenant applications are immense. It provides the architectural flexibility to evolve your permission model without constant code changes, eliminates role explosion, and enforces security at the lowest possible level, closest to the data itself. For senior engineers building the next generation of secure and scalable systems, mastering this pattern is not just a novelty—it is a strategic necessity.