PostgreSQL RLS with JWTs for Fine-Grained API Authorization

Goh Ling Yong

The Fallacy of Application-Layer Authorization

For years, the standard playbook for API authorization has been to implement checks within the application layer. We write middleware, service-layer logic, and repository methods riddled with conditionals like if (user.tenantId !== resource.tenantId) { throw new UnauthorizedError(); }. While functional for simple cases, this approach becomes a significant liability in complex, multi-tenant SaaS applications. The authorization logic becomes scattered, duplicated, and brittle. A missed check in a new endpoint or a complex JOIN query can silently lead to catastrophic data leaks across tenants.

The fundamental problem is that we are enforcing data access rules far from the data itself. The database, which holds the data, is treated as a passive store, blindly serving whatever the application asks for. A more robust architecture enforces security at the strongest possible boundary: the database engine.

This is where PostgreSQL's Row-Level Security (RLS) becomes a transformative tool. By combining RLS with stateless JWTs, we can create a powerful, centralized, and performant authorization system where the database itself guarantees that a given user session can never see or modify data it isn't authorized to access, regardless of application-layer bugs.

This post is not an introduction to RLS. It assumes you understand the basics of CREATE POLICY and ALTER TABLE ... ENABLE ROW LEVEL SECURITY. Instead, we will focus on a production-ready pattern for integrating RLS into a modern API backend, covering the nuanced implementation details, performance pitfalls, and operational challenges you'll face in a real-world system.

The Core Pattern: Propagating JWT Claims to RLS Policies

The entire system hinges on securely passing user identity and permissions from a stateless JWT to the database session in a way that RLS policies can consume. The flow is as follows:

  • Authentication: A user authenticates and receives a JWT containing custom claims essential for authorization (e.g., user_id, organization_id, role).
  • API Request: The client sends the JWT in the Authorization header of every subsequent request.
  • Middleware Processing: API middleware intercepts the request, validates the JWT, and extracts the claims.
  • Transaction-Scoped Settings: Before executing any business logic, the middleware initiates a database transaction and sets special, transaction-local configuration parameters (session variables) using the claims from the JWT.
  • Query Execution: The application's data access logic runs its queries as usual, without any explicit authorization checks.
  • RLS Policy Enforcement: PostgreSQL transparently intercepts every query. The RLS policies defined on the target tables read the transaction-local settings to filter the result set, guaranteeing that only permissible rows are returned.

This pattern centralizes all authorization logic within the database schema, making it auditable, consistent, and resilient to application-level mistakes.

    Step 1: Designing the Schema and JWT Claims

    Let's model a simple multi-tenant project management application. Our schema has organizations, users, projects, and tasks.

    sql
    -- A simplified multi-tenant schema
    CREATE TABLE organizations (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        name TEXT NOT NULL
    );
    
    CREATE TABLE users (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        organization_id UUID NOT NULL REFERENCES organizations(id),
        email TEXT NOT NULL UNIQUE,
        role TEXT NOT NULL DEFAULT 'member' -- e.g., 'member', 'admin', 'owner'
    );
    
    CREATE TABLE projects (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        organization_id UUID NOT NULL REFERENCES organizations(id),
        name TEXT NOT NULL
    );
    
    CREATE TABLE tasks (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        project_id UUID NOT NULL REFERENCES projects(id),
        assignee_id UUID REFERENCES users(id),
        title TEXT NOT NULL
    );

    For this schema, our JWT payload needs to contain the necessary identifiers for our RLS policies. A well-structured payload might look like this:

    json
    {
      "sub": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d", // This is our users.id
      "org_id": "f0e9d8c7-b6a5-4f4e-3d2c-1b0a9f8e7d6c",
      "role": "admin",
      "email": "[email protected]",
      "iat": 1678886400,
      "exp": 1678890000
    }

    We will pass these claims to PostgreSQL. Sending the whole payload as a single JSON document, which policies can parse as JSONB, is often the most flexible approach.

    Step 2: API Middleware for Claim Propagation

    This is the critical link between your stateless application and the stateful database session. The middleware's job is to verify the JWT and then safely inject its claims into the database transaction. We'll use Node.js with node-postgres (pg) as an example.

    javascript
    // middleware/auth.js
    const jwt = require('jsonwebtoken');
    const pool = require('../db/pool'); // Your configured pg.Pool instance
    
    const JWT_SECRET = process.env.JWT_SECRET;
    
    async function withAuthenticatedTransaction(req, res, next) {
        const authHeader = req.headers['authorization'];
        const token = authHeader && authHeader.split(' ')[1];
    
        if (!token) {
            return res.sendStatus(401); // Unauthorized
        }
    
        let claims;
        try {
            claims = jwt.verify(token, JWT_SECRET);
        } catch (err) {
            return res.sendStatus(403); // Forbidden
        }
    
        // Get a client from the pool. This is our transactional boundary.
        const client = await pool.connect();
        req.dbClient = client; // Attach client to the request object
    
        try {
            await client.query('BEGIN');
    
        // CRITICAL: Set claims as transaction-local settings.
        // `set_config(name, value, true)` is the parameterizable equivalent
        // of SET LOCAL: the setting is discarded at COMMIT/ROLLBACK. Using a
        // parameterized query prevents SQL injection -- never interpolate
        // claim values into a SET statement as a string.
        await client.query(
            `SELECT set_config('app.jwt.claims', $1, true)`,
            [JSON.stringify(claims)]
        );

        // For convenience, also expose individual claims.
        await client.query(`SELECT set_config('app.current_user_id', $1, true)`, [claims.sub]);
        await client.query(`SELECT set_config('app.current_org_id', $1, true)`, [claims.org_id]);
            
        // NOTE: awaiting next() assumes promise-based middleware (Koa/Fastify
        // style). In plain Express, next() does not return a promise, so the
        // transaction boundary would instead wrap each handler's body.
        await next(); // Proceed to the route handler
    
            await client.query('COMMIT');
        } catch (error) {
            await client.query('ROLLBACK');
            console.error('Transaction failed:', error);
            res.status(500).send('Internal Server Error');
        } finally {
            client.release(); // Return the client to the pool
        }
    }
    
    // Example route handler
    // app.get('/api/projects', withAuthenticatedTransaction, async (req, res) => {
    //     // req.dbClient is the transaction-aware client
    //     const { rows } = await req.dbClient.query('SELECT * FROM projects;');
    //     res.json(rows);
    // });

    Key Implementation Details:

    * SET LOCAL vs. SET SESSION: We use transaction-local settings exclusively (SET LOCAL, or its parameterizable equivalent set_config(name, value, true)). This scopes the setting to the current transaction; when the transaction ends (commit or rollback), the setting is discarded. This is crucial for correctness when using connection poolers like PgBouncer in transaction pooling mode, as a single session can be reused by different users for different transactions. SET SESSION would leak settings across transactions and users, completely breaking the security model.

    * JSONB for Claims: Passing the entire claims object as a single JSONB variable (app.jwt.claims) is highly effective. It allows policies to be self-contained and access any claim they need without requiring middleware changes. You can access nested values in SQL with current_setting('app.jwt.claims')::jsonb ->> 'sub'.

    * Transactional Integrity: The entire operation, from setting the claims to executing the business logic, is wrapped in a single database transaction. This ensures atomicity.
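
    The mapping from verified claims to settings can be factored into a small pure helper (claimsToSettings is a hypothetical name, not part of any library), which makes the claim-propagation logic trivial to unit-test without a database:

```javascript
// Map verified JWT claims to [name, value] pairs for:
//   SELECT set_config($1, $2, true)
function claimsToSettings(claims) {
    if (!claims.sub || !claims.org_id) {
        throw new Error('JWT is missing required claims');
    }
    return [
        ['app.jwt.claims', JSON.stringify(claims)],
        ['app.current_user_id', String(claims.sub)],
        ['app.current_org_id', String(claims.org_id)],
    ];
}

// In the middleware, after BEGIN:
// for (const [name, value] of claimsToSettings(claims)) {
//     await client.query('SELECT set_config($1, $2, true)', [name, value]);
// }
```

    Failing fast on missing claims here means a malformed token can never reach the database with a partially-populated security context.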

    Step 3: Writing the RLS Policies

    Now we can write policies that consume these settings. First, we must enable RLS on each table.

    sql
    ALTER TABLE users ENABLE ROW LEVEL SECURITY;
    ALTER TABLE projects ENABLE ROW LEVEL SECURITY;
    ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;

    Let's define some policies with increasing complexity.

    Policy 1: Simple Tenancy Isolation for projects

    A user can only see projects within their own organization.

    sql
    -- Helper function to get the current org_id from the setting.
    -- The second argument to current_setting (missing_ok = true) returns NULL
    -- instead of raising an error when the setting is absent; nullif() guards
    -- against the empty string, so an unauthenticated session matches no rows.
    CREATE OR REPLACE FUNCTION current_org_id() RETURNS UUID AS $$
        SELECT nullif(current_setting('app.current_org_id', true), '')::UUID;
    $$ LANGUAGE sql STABLE;
    
    -- The policy itself
    CREATE POLICY select_projects_in_own_org ON projects
    FOR SELECT
    USING (organization_id = current_org_id());

    With this policy, SELECT * FROM projects; will be transparently rewritten by PostgreSQL to SELECT * FROM projects WHERE organization_id = '...', where the organization ID is pulled from the transaction-local setting.

    Policy 2: Role-Based Access for projects

    Let's add more granularity. An owner or admin can see all projects in the organization, but a member can't see any projects (a contrived example, but it shows role differentiation).

    sql
    -- Helper function to get a specific claim
    CREATE OR REPLACE FUNCTION get_current_claim(claim TEXT) RETURNS TEXT AS $$
        SELECT current_setting('app.jwt.claims', true)::jsonb ->> claim;
    $$ LANGUAGE sql STABLE;
    
    -- Drop the old policy before creating a new one
    DROP POLICY IF EXISTS select_projects_in_own_org ON projects;
    
    -- A more complex, role-based policy
    CREATE POLICY select_projects_by_role ON projects
    FOR SELECT
    USING (
        -- All users must belong to the project's organization
        organization_id = (get_current_claim('org_id'))::UUID
        AND (
            -- 'owner' and 'admin' roles can see all projects in the org
            get_current_claim('role') IN ('owner', 'admin')
        )
    );
    
    -- Don't forget policies for other operations!
    CREATE POLICY insert_projects_for_admins ON projects
    FOR INSERT
    WITH CHECK (
        organization_id = (get_current_claim('org_id'))::UUID
        AND get_current_claim('role') IN ('owner', 'admin')
    );

    * USING vs. WITH CHECK: USING filters the existing rows a command may see or act on (SELECT, UPDATE, DELETE). WITH CHECK validates new or modified rows being written (INSERT, UPDATE). They can be, and often are, different.

    * Helper Functions: Encapsulating current_setting calls in helper functions is a best practice. It makes policies more readable and allows you to centralize error handling (e.g., what happens if a claim is missing?). The STABLE volatility tells Postgres the function's result is consistent within a scan, allowing for better optimization.

    Policy 3: Relational Access for tasks

    A user can see tasks if they are the assignee OR if they are an admin in the organization that owns the project the task belongs to.

    sql
    CREATE POLICY select_tasks_by_assignment_or_role ON tasks
    FOR SELECT
    USING (
        -- The user is the direct assignee
        assignee_id = (get_current_claim('sub'))::UUID
        OR
        -- The user is an admin/owner in the org that owns the task's project
        (
            get_current_claim('role') IN ('owner', 'admin')
            AND EXISTS (
                SELECT 1 FROM projects
                WHERE projects.id = tasks.project_id
                AND projects.organization_id = (get_current_claim('org_id'))::UUID
            )
        )
    );

    This demonstrates how RLS policies can contain subqueries to enforce complex, relational authorization rules.

    Advanced Scenarios and Performance Considerations

    This is where senior engineering diligence is required. A naive RLS implementation can cripple your database.

    1. Performance Impact of RLS Policies

    Every RLS policy adds predicates to your queries. These predicates can drastically alter query plans. A poorly written policy is equivalent to a poorly written WHERE clause.

    Scenario: Consider our select_tasks_by_assignment_or_role policy. The OR condition and the EXISTS subquery can be expensive.

    Let's analyze the query SELECT * FROM tasks WHERE project_id = 'some-project-uuid';

    Without RLS, the plan might be a simple Index Scan on tasks(project_id). With RLS, the planner must combine the query's WHERE clause with the policy's USING clause. The final predicate becomes:

    WHERE project_id = '...' AND (assignee_id = '...' OR (get_current_claim(...) AND EXISTS(...)))

    This is much more complex. Let's look at a potential EXPLAIN ANALYZE output.

    Bad Policy Example: If the planner cannot use an index for the subquery (imagine projects.id were not indexed), the EXISTS subquery could trigger a full sequential scan on the projects table for every single row in tasks being evaluated.

    sql
    -- EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM tasks WHERE project_id = '...';
    
    -- Potentially bad plan with a slow subquery
    -- Filter: ((assignee_id = '...') OR ((SubPlan 1) AND (alternatives))) 
    --   ->  Index Scan using tasks_project_id_idx on tasks ...
    --         SubPlan 1
    --           ->  Seq Scan on projects ...
    --                 Filter: ((id = tasks.project_id) AND (organization_id = '...'))

    The key is to ensure your policies can be supported by indexes. In our tasks policy, ensure projects.id is a primary key (which it is) and that tasks.assignee_id is indexed.

    Optimization Strategy: Indexing for Policies

    Your indexes must support your RLS predicates. For our multi-tenant system, nearly every RLS policy will filter on organization_id. Therefore, most of your indexes should be compound indexes starting with organization_id.

    sql
    -- BAD index for RLS
    CREATE INDEX ON projects(name);
    
    -- GOOD index for RLS
    CREATE INDEX ON projects(organization_id, name);

    This allows the planner to first narrow down the search space to the correct tenant's data, and then apply further filters.

    2. The Superuser / Internal Tool Problem

    Your support team or internal admin tools might need to view data across all tenants. How do you bypass RLS for these specific use cases?

    * Option 1 (The Big Hammer): Connect with a superuser role or a role with the BYPASSRLS attribute. This is simple but dangerous. A single compromised credential for this role gives an attacker access to all data. Use this with extreme caution and only for trusted internal networks.

    sql
        CREATE ROLE internal_support_role BYPASSRLS;
        GRANT internal_support_role TO my_support_user;

    * Option 2 (The Controlled Bypass): Create a dedicated API that requires a highly privileged JWT (e.g., with a special scope: 'internal_admin'). This API's middleware can then set a specific session variable, like SET LOCAL app.rls.bypass = 'true';. Your policies can then check for this.

    sql
        -- Modified policy
        CREATE POLICY select_projects_by_role ON projects
        FOR SELECT
        USING (
            current_setting('app.rls.bypass', true) = 'true'
            OR
            (
                organization_id = (get_current_claim('org_id'))::UUID
                AND get_current_claim('role') IN ('owner', 'admin')
            )
        );

    This is far more auditable and controlled, as the bypass is explicitly requested and scoped per-transaction.
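
    On the API side, the decision to request the bypass can be isolated in one small predicate (the helper name and 'internal_admin' scope are illustrative) that the internal-tool middleware consults before setting app.rls.bypass:

```javascript
// Decide whether this request may bypass RLS. Only tokens carrying the
// dedicated internal scope qualify; everything else stays tenant-scoped.
function mayBypassRls(claims) {
    const scopes = typeof claims.scope === 'string' ? claims.scope.split(' ') : [];
    return scopes.includes('internal_admin');
}

// In the internal-tool middleware, after verifying the JWT and BEGIN:
// if (mayBypassRls(claims)) {
//     await client.query(`SELECT set_config('app.rls.bypass', 'true', true)`);
// }
```

    Because the flag is transaction-local, a bypass is only ever in effect for the single request that explicitly asked for it, which is what makes this option auditable.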

    3. Managing and Testing RLS Policies

    RLS policies are code. They belong in version control and should be managed via your database migration tool (e.g., Flyway, Liquibase, node-pg-migrate). Never apply them manually to production.
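
    With node-pg-migrate, a policy change might be versioned like this (a sketch: the file name is hypothetical, and the migration simply issues raw SQL through pgm.sql):

```javascript
// migrations/1678886400000_add-projects-rls.js (hypothetical file name)
function up(pgm) {
    pgm.sql(`ALTER TABLE projects ENABLE ROW LEVEL SECURITY;`);
    pgm.sql(`
        CREATE POLICY select_projects_by_role ON projects
        FOR SELECT
        USING (
            organization_id = (get_current_claim('org_id'))::UUID
            AND get_current_claim('role') IN ('owner', 'admin')
        );
    `);
}

function down(pgm) {
    pgm.sql(`DROP POLICY IF EXISTS select_projects_by_role ON projects;`);
    pgm.sql(`ALTER TABLE projects DISABLE ROW LEVEL SECURITY;`);
}

module.exports = { up, down };
```

    A reversible down migration matters here: dropping a policy is a security-relevant change, and you want rollbacks to be as deliberate and reviewed as deployments.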

    Testing RLS is non-trivial but essential. A combination of strategies is most effective:

    A. SQL-level Unit Testing with pgTAP

    pgTAP is a testing framework that lets you write unit tests directly in SQL. You can test your RLS policies by simulating different user contexts.

    sql
    -- tests/rls/test_projects.sql
    BEGIN;
    
    SELECT plan(3); -- Number of tests we plan to run
    
    -- Setup: Create some test data.
    -- NOTE: run these tests as a role that is subject to RLS. The table
    -- owner bypasses RLS by default (unless FORCE ROW LEVEL SECURITY is
    -- set), so a real suite should SET ROLE to a non-owner application role.
    INSERT INTO organizations (id, name) VALUES
        ('00000000-0000-0000-0000-000000000001', 'Org One'),
        ('00000000-0000-0000-0000-000000000002', 'Org Two');
    INSERT INTO users (id, organization_id, email, role) VALUES
        ('00000000-0000-0000-0000-000000000011', '00000000-0000-0000-0000-000000000001', '[email protected]', 'admin'),
        ('00000000-0000-0000-0000-000000000012', '00000000-0000-0000-0000-000000000002', '[email protected]', 'member');
    INSERT INTO projects (id, organization_id, name) VALUES
        ('00000000-0000-0000-0000-000000000021', '00000000-0000-0000-0000-000000000001', 'Project 1'),
        ('00000000-0000-0000-0000-000000000022', '00000000-0000-0000-0000-000000000002', 'Project 2');
    
    -- Test 1: Admin in Org1 can see their project
    SET LOCAL app.jwt.claims = '{"sub": "00000000-0000-0000-0000-000000000011", "org_id": "00000000-0000-0000-0000-000000000001", "role": "admin"}';
    SELECT is(
        (SELECT COUNT(*)::INT FROM projects),
        1,
        'Admin in Org1 should see only 1 project'
    );
    
    -- Test 2: Member in Org2 should see no projects (based on our policy)
    SET LOCAL app.jwt.claims = '{"sub": "00000000-0000-0000-0000-000000000012", "org_id": "00000000-0000-0000-0000-000000000002", "role": "member"}';
    SELECT is(
        (SELECT COUNT(*)::INT FROM projects),
        0,
        'Member in Org2 should see 0 projects'
    );
    
    -- Test 3: Unauthenticated (no settings) should see nothing
    RESET app.jwt.claims;
    SELECT is(
        (SELECT COUNT(*)::INT FROM projects),
        0,
        'Unauthenticated session should see 0 projects'
    );
    
    SELECT * FROM finish();
    ROLLBACK;

    These tests can be run as part of your CI pipeline against a temporary database.

    B. API-level Integration Testing

    Your API tests should also validate the RLS behavior. These tests are higher-level but closer to the user experience.

    javascript
    // tests/api/projects.test.js (using Jest and supertest)
    // `generateJwt` is assumed to be a test helper that signs a payload
    // with the application's JWT_SECRET.
    describe('/api/projects', () => {
        it('should return projects for an admin user', async () => {
            const adminToken = generateJwt({ sub: 'user_admin_org1', org_id: 'org1', role: 'admin' });
            const response = await request(app)
                .get('/api/projects')
                .set('Authorization', `Bearer ${adminToken}`);
            
            expect(response.status).toBe(200);
            expect(response.body).toHaveLength(1);
            expect(response.body[0].name).toBe('Project 1');
        });
    
        it('should return an empty array for a member in another org', async () => {
            const memberToken = generateJwt({ sub: 'user_member_org2', org_id: 'org2', role: 'member' });
            const response = await request(app)
                .get('/api/projects')
                .set('Authorization', `Bearer ${memberToken}`);
    
            expect(response.status).toBe(200);
            expect(response.body).toHaveLength(0);
        });
    });

    Conclusion: A Paradigm Shift in Application Security

    Integrating PostgreSQL RLS with JWT claims represents a fundamental shift from application-centric to data-centric security. While it introduces new complexities in performance tuning, testing, and devops, the benefits are profound.

    By enforcing authorization at the database level, you create a robust, centralized, and auditable security model that is resilient to entire classes of application-level vulnerabilities. Your API code becomes simpler, focusing on business logic rather than being littered with repetitive security checks. You can add new data clients (e.g., a direct reporting tool, a new microservice) without needing to re-implement the same authorization logic. The database becomes the ultimate source of truth not just for data, but for data access policy.

    This pattern is not a silver bullet, and its implementation requires a deep understanding of PostgreSQL's execution model. However, for senior engineers building complex, secure, and scalable systems, it is an exceptionally powerful architecture to have in your toolkit.
