PostgreSQL pg_trgm GIN/GiST Indexes for Multi-Column Fuzzy Search

Goh Ling Yong
Technology enthusiast and software architect specializing in AI-driven development tools and modern software engineering practices. Passionate about the intersection of artificial intelligence and human creativity in building tomorrow's digital solutions.

The Senior Engineer's Dilemma: Scalable, Typo-Tolerant Multi-Column Search

As a senior engineer, you've inevitably faced the requirement: "We need a search bar that finds users by name, email, or company, and it needs to be fast and handle typos." The junior developer's first instinct is often a chain of ILIKE '%query%' conditions. You know this leads to sequential scans and a performance cliff as the table grows beyond a few thousand rows. The next step in sophistication is full-text search (tsvector, tsquery), which is powerful but semantically different. It's designed for document searching, stemming, and stop words, not necessarily for correcting a simple typo like 'Jhon' for 'John'.

This is the precise domain of trigram-based fuzzy matching, and PostgreSQL's pg_trgm extension is the canonical tool. However, simply enabling the extension and adding an index is where most tutorials stop. The real engineering challenge lies in applying it effectively to multiple columns with demanding performance constraints.

This article is not an introduction to pg_trgm. It assumes you know what a trigram is and have used a basic GIN index before. We will dissect the advanced strategies required for production systems, focusing on:

  • The GIN vs. GiST Indexing Trade-off: A deep look at their internal structures and how they impact performance for trigram operations, especially concerning write amplification vs. read speed and KNN search capabilities.
  • Multi-Column Indexing Patterns: We'll analyze the common but flawed concatenated index pattern and demonstrate a superior strategy using individual indexes with query-level orchestration.
  • Performance Tuning and Edge Cases: Leveraging similarity_threshold to prune query results at the index level and combining fuzzy search with standard B-Tree indexes for complex, filtered queries.
  • Production-Grade Implementation: A comprehensive PL/pgSQL function encapsulating these advanced patterns for a reusable, high-performance search API.

    Let's begin by setting up a realistic schema and dataset.

    Schema and Test Data Setup

    We'll use a users table with several text fields we want to search across. To make our performance tests meaningful, we'll populate it with 1 million records.

    sql
    -- Ensure the extension is enabled
    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE EXTENSION IF NOT EXISTS pgcrypto; -- For generating UUIDs
    
    -- The table for our examples
    CREATE TABLE users (
        id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
        first_name TEXT NOT NULL,
        last_name TEXT NOT NULL,
        email TEXT NOT NULL UNIQUE,
        company TEXT,
        department_id INT, -- For filtered search examples
        created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
    );
    
    -- Let's populate it with a significant amount of data
    -- This may take a few minutes to run
    INSERT INTO users (first_name, last_name, email, company, department_id)
    SELECT
        'user' || g.i,
        'lastname' || (g.i % 1000),
        'user' || g.i || '@' || (CASE (g.i % 10) 
                                    WHEN 0 THEN 'example.com' 
                                    WHEN 1 THEN 'workplace.io' 
                                    ELSE 'domain.net' 
                                END),
        'Company ' || (g.i % 200),
        (g.i % 50) + 1
    FROM generate_series(1, 1000000) AS g(i);
    
    -- Add a standard B-Tree index for filtered queries later
    CREATE INDEX idx_users_department_id ON users(department_id);
    
    -- Analyze the table for accurate query planning
    ANALYZE users;

    The Indexing Conundrum: GIN vs. GiST for Trigrams

    Your choice of index type is the most critical architectural decision when implementing pg_trgm. Choosing incorrectly can lead to suboptimal query performance, high write latency, or an inability to perform certain types of searches. Both are powerful, but they solve different problems.

    GIN (Generalized Inverted Index)

    A GIN index on a text column with gin_trgm_ops works by extracting all possible trigrams from the text and creating an inverted index. It maps each unique trigram to a list of rows (a posting list) where that trigram appears.
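    To make the mechanics concrete, here is a small Python sketch of pg_trgm's documented behavior: text is lower-cased and split into alphanumeric words, each word is padded with two leading spaces and one trailing space before 3-character windows are taken, and similarity() is the ratio of shared trigrams to total distinct trigrams. This is an illustrative model, not PostgreSQL's actual C implementation.

```python
import re

def trigrams(text: str) -> set[str]:
    """Approximate pg_trgm's extraction: lowercase, split into alphanumeric
    words, pad each word with '  ' before and ' ' after, take 3-char windows."""
    grams = set()
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        padded = "  " + word + " "
        grams.update(padded[i:i + 3] for i in range(len(padded) - 2))
    return grams

def similarity(a: str, b: str) -> float:
    """pg_trgm's similarity(): shared trigrams / total distinct trigrams."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    common = len(ta & tb)
    return common / (len(ta) + len(tb) - common)

print(sorted(trigrams("word")))         # the 5 trigrams of 'word'
print(similarity("user999", "user99"))  # 0.875 — well above the default 0.3
```

    This also explains the inverted-index layout: every trigram in the column value becomes a key whose posting list must include the row.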

  • Structure: [trigram] -> [row_id_1, row_id_2, ...]
  • Querying (% operator): When you run WHERE email % 'somequery', PostgreSQL extracts trigrams from 'somequery', looks them up in the index, and fetches the posting lists. It then performs a bitmap union or intersection on these lists to find rows that share a sufficient number of trigrams.
  • Performance Characteristics:
    - Read Speed: Extremely fast for lookups. Finding rows that contain a given set of trigrams is the structure's primary design goal.
    - Write Speed: Slower. Inserting or updating a single row requires touching the posting list of every trigram in the new value, causing significant write amplification. The fastupdate mechanism softens this by buffering changes in a pending list, at the cost of occasional latency spikes when that list is merged into the main index.
    - Disk Size: Generally larger than GiST, because it stores each trigram along with a potentially long list of row pointers.

    GiST (Generalized Search Tree)

    A GiST index with gist_trgm_ops creates a balanced tree structure, similar in concept to a B-Tree but generalized for complex data types. For trigrams, parent nodes in the tree store a lossy representation (a signature or union) of all trigrams present in their child nodes.

  • Structure: A hierarchical tree where leaf nodes point to table rows, and parent nodes contain a compressed representation of their children's data.
  • Querying (% operator): The query traverses the tree, pruning branches that cannot possibly match the search term based on the trigram signatures. This can result in false positives that the index returns to the executor, which must then be re-checked against the actual table data.
  • Querying (<-> KNN operator): This is GiST's killer feature. The word <-> query operator calculates the "distance" between two strings. When used in an ORDER BY clause with a LIMIT, GiST can perform an incredibly efficient K-Nearest Neighbor search, traversing the tree to find the closest matches without scanning a large number of rows.
  • Performance Characteristics:
    - Read Speed (%): Often slower than GIN for simple similarity lookups, because its lossy signatures force re-checks against the heap data.
    - Read Speed (<->): Unbeatable for ranked distance/similarity searches.
    - Write Speed: Significantly faster than GIN. An update typically touches only a few tree nodes, so write amplification is much lower.
    - Disk Size: Generally smaller and more compact than GIN.

    Decision Matrix

    | Use Case | Recommended Index | Rationale |
    | --- | --- | --- |
    | Find all rows at least X% similar (e.g., search results) | GIN | Fastest lookup for boolean-style matching. Ideal for WHERE col % 'query'. |
    | Find the top N most similar rows (e.g., "Did you mean?") | GiST | The only efficient way to perform KNN search with the <-> operator in ORDER BY. |
    | Read-heavy workload (e.g., user directory) | GIN | Optimizes for the most common operation: searching. Write latency matters less. |
    | Write-heavy workload (e.g., logging tables, frequently updated profiles) | GiST | Minimizes write amplification and keeps INSERT/UPDATE performance high. |
    | Combining with other GiST-indexable types (e.g., geospatial) | GiST | GiST can span different data types in a single multi-column index (though not directly for this use case). |

    For the remainder of this article, we'll focus primarily on the common "search results" use case, making GIN our primary tool, but we will revisit GiST for a specific KNN example.

    Pattern 1 (Anti-Pattern): The Single Concatenated Functional Index

    A common first attempt at multi-column search is to create a single functional index on the concatenated values of the columns.

    sql
    CREATE INDEX idx_users_search_concatenated_gin ON users
    USING GIN ( (first_name || ' ' || last_name || ' ' || email || ' ' || company) gin_trgm_ops );

    Let's test it with a query for a user with a slight typo in their name, 'user999 lastnam999'.

    sql
    EXPLAIN ANALYZE
    SELECT id, first_name, last_name, email, company
    FROM users
    WHERE (first_name || ' ' || last_name || ' ' || email || ' ' || company) % 'user999 lastnam999';

    Result Analysis (EXPLAIN ANALYZE output):

    text
                                                                  QUERY PLAN                                                              
    --------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=68.27..100.31 rows=10 width=103) (actual time=0.983..0.985 rows=1 loops=1)
       Recheck Cond: ((first_name || ' ' || last_name || ' ' || email || ' ' || company) %> 'user999 lastnam999'::text)
       Heap Blocks: exact=1
       ->  Bitmap Index Scan on idx_users_search_concatenated_gin  (cost=0.00..68.27 rows=10 width=0) (actual time=0.978..0.978 rows=1 loops=1)
             Index Cond: ((first_name || ' ' || last_name || ' ' || email || ' ' || company) %> 'user999 lastnam999'::text)
     Planning Time: 0.153 ms
     Execution Time: 1.012 ms

    It works, and it's fast for this simple case because it uses the index. However, this pattern has several critical production flaws:

  • Inflexible and Inefficient: The expression in the WHERE clause must exactly match the expression in the CREATE INDEX statement. This forces you to perform string concatenation on every query, which adds overhead.
  • No Field Weighting: You can't determine which field matched or assign a higher relevance score to a match in last_name versus a match in company.
  • Update Overhead: An update to any of the indexed columns (first_name, last_name, email, company) forces a complete re-indexing of the entire concatenated string for that row, even if only one character changed in a single field.
  • NULL Propagation and Delimiter Quirks: company is nullable, and 'a' || NULL yields NULL, so any row with a NULL company silently disappears from this index unless every column is wrapped in COALESCE. The space delimiter also becomes part of the indexed data; since pg_trgm extracts trigrams per word, it mostly just keeps adjacent columns from bleeding into each other's trigrams, but it means your delimiter choice quietly shapes what the index contains.

    While simple to implement, the concatenated index is a brittle and inefficient solution that should be avoided in serious production systems.

    Pattern 2 (Superior): Individual GIN Indexes with Query Orchestration

    The robust, scalable, and flexible approach is to create separate GIN indexes on each column you intend to search.

    sql
    -- Drop the old index if you created it
    DROP INDEX IF EXISTS idx_users_search_concatenated_gin;
    
    -- Create individual indexes
    CREATE INDEX idx_users_first_name_gin_trgm ON users USING GIN (first_name gin_trgm_ops);
    CREATE INDEX idx_users_last_name_gin_trgm ON users USING GIN (last_name gin_trgm_ops);
    CREATE INDEX idx_users_email_gin_trgm ON users USING GIN (email gin_trgm_ops);
    CREATE INDEX idx_users_company_gin_trgm ON users USING GIN (company gin_trgm_ops);

    Now, we orchestrate the search at the query level using an OR condition.

    sql
    EXPLAIN ANALYZE
    SELECT id, first_name, last_name, email
    FROM users
    WHERE 
        first_name % 'user999' OR
        last_name % 'lastnam999' OR -- Deliberate typo
        email % 'user999';

    Result Analysis (EXPLAIN ANALYZE output):

    text
                                                                     QUERY PLAN                                                                 
    --------------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=40.08..44.60 rows=3 width=87) (actual time=1.458..1.460 rows=1 loops=1)
       Recheck Cond: ((first_name %> 'user999'::text) OR (last_name %> 'lastnam999'::text) OR (email %> 'user999'::text))
       Heap Blocks: exact=1
       ->  BitmapOr  (cost=40.08..40.08 rows=3 width=0) (actual time=1.452..1.453 rows=0 loops=1)
             ->  Bitmap Index Scan on idx_users_first_name_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.457..0.457 rows=1 loops=1)
                   Index Cond: (first_name %> 'user999'::text)
             ->  Bitmap Index Scan on idx_users_last_name_gin_trgm  (cost=0.00..16.03 rows=1 width=0) (actual time=0.551..0.551 rows=1 loops=1)
                   Index Cond: (last_name %> 'lastnam999'::text)
             ->  Bitmap Index Scan on idx_users_email_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.443..0.443 rows=1 loops=1)
                   Index Cond: (email %> 'user999'::text)
     Planning Time: 0.354 ms
     Execution Time: 1.501 ms

    Observe the query plan. PostgreSQL is using a BitmapOr operation. It performs a Bitmap Index Scan on each of the three relevant indexes, collects the resulting row locations into bitmaps, and then performs a bitwise OR operation to create a final bitmap of all matching rows. This is an incredibly efficient way to combine results from multiple indexes.
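    Conceptually, each Bitmap Index Scan yields a set of matching row locations, and BitmapOr is simply their union (BitmapAnd, which appears later when we add a B-Tree filter, is the intersection). A toy sketch with made-up row IDs:

```python
# Hypothetical results of the three GIN index scans (row IDs are invented):
first_name_hits = {101, 205}   # first_name % 'user999'
last_name_hits = {205, 310}    # last_name % 'lastnam999'
email_hits = {101}             # email % 'user999'

# BitmapOr: union the bitmaps to find rows matching ANY predicate.
candidates = first_name_hits | last_name_hits | email_hits
print(sorted(candidates))  # [101, 205, 310]

# BitmapAnd (seen later with a B-Tree index) is the intersection.
department_hits = {205, 999}   # department_id = 25
print(sorted(candidates & department_hits))  # [205]
```

    The real structures are compressed bitmaps over heap pages rather than Python sets, but the set algebra is the same.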

    This pattern provides enormous benefits:

  • Flexibility: You can easily add or remove fields from the search by changing the WHERE clause, without any schema changes.
  • Field Weighting and Scoring: You can now calculate similarity scores for each field individually and use them to rank results.
  • Efficiency: Updates are localized. An update to first_name only affects idx_users_first_name_gin_trgm, not the indexes for other columns.

    Advanced Ranking with `similarity()`

    Let's implement a ranked search where a match in last_name or first_name is considered more relevant than a match in company.

    sql
    -- Our search term
    -- Let's search for 'Compny 123 user1230 lastnam230'
    -- This query has parts that match a company, a user, and a last name.
    WITH params AS (SELECT 'Compny 123 user1230 lastnam230' AS query)
    SELECT 
        id,
        first_name,
        last_name,
        company,
        -- Calculate individual similarity scores
        similarity(first_name, params.query) AS fname_score,
        similarity(last_name, params.query) AS lname_score,
        similarity(company, params.query) AS company_score,
        -- Create a weighted final score
        (
            similarity(first_name, params.query) * 1.0 + 
            similarity(last_name, params.query) * 1.0 + 
            similarity(company, params.query) * 0.5
        ) AS total_score
    FROM users, params
    WHERE 
        -- The WHERE clause must still use the '%' operator for index usage!
        -- The similarity() function calls alone will not use the index.
        first_name % params.query OR
        last_name % params.query OR
        company % params.query
    ORDER BY total_score DESC
    LIMIT 10;

    This query is the gold standard for production fuzzy search. It uses the % operator to efficiently pre-filter a candidate set of rows using our individual GIN indexes. Then, on that small, filtered set, it performs the more expensive similarity() calculations to produce a relevance score, which is then used for ordering.
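    Because the weighting is plain arithmetic, it is easy to unit-test outside the database. A sketch mirroring the SQL weights above (the component similarity values below are invented inputs):

```python
def weighted_score(fname_sim: float, lname_sim: float, company_sim: float) -> float:
    """Mirror of the SQL scoring expression: name matches count fully,
    company matches at half weight."""
    return fname_sim * 1.0 + lname_sim * 1.0 + company_sim * 0.5

# A strong last-name match should outrank an equally strong company match.
print(weighted_score(0.0, 0.9, 0.0))  # name match
print(weighted_score(0.0, 0.0, 0.9))  # company match scores half as much
```

    Keeping a function like this in your test suite makes it cheap to verify ranking behavior whenever the weights change.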

    Critical Performance Tuning: `pg_trgm.similarity_threshold`

    By default, the % operator is equivalent to similarity(column, query) > 0.3. For a large dataset, even with an index, a low threshold can return a massive number of rows, forcing the database to perform heap fetches and similarity calculations for many irrelevant results.

    We can dramatically improve performance by setting a higher threshold before our query. This is a session-local variable, making it perfect for application-level control.

    Let's run a broad query for 'user' and see the difference.

    Query 1: Default Threshold (0.3)

    sql
    EXPLAIN ANALYZE
    SELECT id FROM users WHERE first_name % 'user';
    text
                                                                  QUERY PLAN                                                              
    --------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=12025.02..26538.50 rows=100000 width=16) (actual time=81.391..288.541 rows=1000000 loops=1)
       Recheck Cond: (first_name %> 'user'::text)
       Rows Removed by Index Recheck: 65535
       Heap Blocks: exact=18335
       ->  Bitmap Index Scan on idx_users_first_name_gin_trgm  (cost=0.00..11999.99 rows=100000 width=0) (actual time=68.956..68.956 rows=1000000 loops=1)
             Index Cond: (first_name %> 'user'::text)
     Planning Time: 0.089 ms
     Execution Time: 299.764 ms

    The planner estimates 100,000 rows but actually finds all 1,000,000. It takes nearly 300ms.

    Query 2: Higher Threshold (0.8)

    sql
    -- Set the threshold for this transaction only. Note that SET LOCAL is a
    -- no-op outside an explicit transaction block, so wrap it in BEGIN/COMMIT
    -- (or use SELECT set_config('pg_trgm.similarity_threshold', '0.8', true)).
    SET LOCAL pg_trgm.similarity_threshold = 0.8;
    
    EXPLAIN ANALYZE
    SELECT id FROM users WHERE first_name % 'user';
    text
                                                                  QUERY PLAN                                                              
    --------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=12.02..16.14 rows=1 width=16) (actual time=0.033..0.033 rows=0 loops=1)
       Recheck Cond: (first_name %> 'user'::text)
       ->  Bitmap Index Scan on idx_users_first_name_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.029..0.029 rows=0 loops=1)
             Index Cond: (first_name %> 'user'::text)
     Planning Time: 0.165 ms
     Execution Time: 0.052 ms

    Execution time dropped from 299ms to 0.05ms. By increasing the threshold, we told the index to be much more selective. The query now returns no rows (as no first_name is 80% similar to just 'user'), but it demonstrates the power of this setting. In a real application, you can dynamically adjust this threshold based on the length of the search query to ensure a good balance between tolerance and performance.
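    One way to operationalize that advice is to derive the threshold from the query length before issuing the search. The cutoffs below are illustrative assumptions, not pg_trgm recommendations — tune them against your own data:

```python
def pick_threshold(query: str) -> float:
    """Shorter queries yield fewer trigrams and match huge swaths of the
    table at the default 0.3 threshold, so demand more similarity for them."""
    n = len(query.strip())
    if n <= 4:
        return 0.8  # very short: near-exact matches only
    if n <= 8:
        return 0.5
    return 0.3      # long queries: default tolerance is fine

print(pick_threshold("user"))                # 0.8
print(pick_threshold("user999"))             # 0.5
print(pick_threshold("user999 lastnam999"))  # 0.3
```

    Your data layer would then pass the result to set_config('pg_trgm.similarity_threshold', str(threshold), true) before running the search query in the same transaction.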

    Combining Fuzzy Search with Structured Filters

    Real-world searches are rarely just text-based. Users often need to combine a fuzzy name search with a structured filter, like a department.

    sql
    EXPLAIN ANALYZE
    SELECT id, first_name, last_name, email
    FROM users
    WHERE 
        department_id = 25
        AND (first_name % 'user42' OR last_name % 'lastname42');

    Result Analysis (EXPLAIN ANALYZE output):

    text
                                                                   QUERY PLAN                                                                
    -----------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=32.29..36.81 rows=1 width=87) (actual time=0.231..0.233 rows=1 loops=1)
       Recheck Cond: ((first_name %> 'user42'::text) OR (last_name %> 'lastname42'::text))
       Filter: (department_id = 25)
       Heap Blocks: exact=1
       ->  BitmapOr  (cost=32.29..32.29 rows=2 width=0) (actual time=0.225..0.226 rows=0 loops=1)
             ->  Bitmap Index Scan on idx_users_first_name_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.098..0.098 rows=1 loops=1)
                   Index Cond: (first_name %> 'user42'::text)
             ->  Bitmap Index Scan on idx_users_last_name_gin_trgm  (cost=0.00..20.27 rows=1 width=0) (actual time=0.126..0.126 rows=1 loops=1)
                   Index Cond: (last_name %> 'lastname42'::text)
     Planning Time: 0.220 ms
     Execution Time: 0.257 ms

    This plan is good, but we can do better. Notice the Filter: (department_id = 25) is applied after the BitmapOr. For low-cardinality filters, this is fine. But if department_id = 25 selects only a small fraction of the table, we want to apply that filter first.

    Let's make the fuzzy predicates more selective, which encourages the planner to intersect the trigram bitmaps with the department index.

    sql
    -- Often the planner is smart enough, but in complex queries, this can be a hint.
    -- The goal is to get a BitmapAnd between the B-Tree index and the GIN indexes.
    
    EXPLAIN ANALYZE
    SELECT id, first_name, last_name, email
    FROM users
    WHERE department_id = 25
      AND (first_name % 'user1234' OR last_name % 'user1234');

    With a more selective query, the planner combines the indexes perfectly:

    text
                                                                        QUERY PLAN                                                                    
    --------------------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on users  (cost=36.56..41.25 rows=1 width=87) (actual time=0.203..0.205 rows=1 loops=1)
       Recheck Cond: ((first_name %> 'user1234'::text) OR (last_name %> 'user1234'::text))
       Filter: (department_id = 25)
       Heap Blocks: exact=1
       ->  BitmapAnd  (cost=36.56..36.56 rows=1 width=0) (actual time=0.198..0.199 rows=0 loops=1)
             ->  BitmapOr  (cost=24.04..24.04 rows=2 width=0) (actual time=0.155..0.155 rows=0 loops=1)
                   ->  Bitmap Index Scan on idx_users_first_name_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.082..0.082 rows=1 loops=1)
                         Index Cond: (first_name %> 'user1234'::text)
                   ->  Bitmap Index Scan on idx_users_last_name_gin_trgm  (cost=0.00..12.02 rows=1 width=0) (actual time=0.072..0.072 rows=0 loops=1)
                         Index Cond: (last_name %> 'user1234'::text)
             ->  Bitmap Index Scan on idx_users_department_id  (cost=0.00..12.50 rows=500 width=0) (actual time=0.038..0.038 rows=40 loops=1)
                   Index Cond: (department_id = 25)
     Planning Time: 0.287 ms
     Execution Time: 0.233 ms

    The BitmapAnd node is key. PostgreSQL is using the B-Tree index on department_id to create one bitmap and the BitmapOr of the two GIN indexes to create another, then finding the intersection. This is the pinnacle of query optimization for this use case, allowing you to combine fast fuzzy search with fast structured filtering.

    When to Use GiST: KNN and "Find Most Similar"

    What if the requirement changes to: "When a user signs up with an email, find the 5 most similar existing emails to check for potential duplicates or fraud"?

    This is a K-Nearest Neighbor (KNN) problem. Using our GIN indexes for this would be disastrously slow. We would have to calculate similarity() for all 1M rows against the new email and then sort. This is where GiST and the distance operator (<->) are essential.

    sql
    -- We only need a GiST index on the column we're comparing.
    CREATE INDEX idx_users_email_gist_trgm ON users USING GIST (email gist_trgm_ops);
    
    -- Let's find emails similar to '[email protected]' (typo in domain)
    EXPLAIN ANALYZE
    SELECT 
        id, 
        email,
        email <-> '[email protected]' AS distance
    FROM users
    ORDER BY email <-> '[email protected]'
    LIMIT 5;

    Result Analysis (EXPLAIN ANALYZE output):

    text
                                                                   QUERY PLAN                                                                
    -----------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=0.29..0.83 rows=5 width=59) (actual time=0.347..0.352 rows=5 loops=1)
       ->  Index Scan using idx_users_email_gist_trgm on users  (cost=0.29..108154.29 rows=1000000 width=59) (actual time=0.346..0.350 rows=5 loops=1)
             Order By: (email <-> '[email protected]'::text)
     Planning Time: 0.127 ms
     Execution Time: 0.380 ms

    Look at the plan: Index Scan using idx_users_email_gist_trgm. PostgreSQL is able to directly use the GiST index to find the 5 closest matches. It traverses the tree, following the paths that minimize the distance function. It doesn't need to scan the whole index or table. The execution time of 0.380ms to find the 5 best matches out of 1 million rows is phenomenal and simply impossible with a GIN index.

    Production-Ready Search Function

    To make this functionality robust and reusable from your application layer, encapsulate the logic in a PL/pgSQL function. This ensures that settings like similarity_threshold are always applied correctly and provides a clean API.

    sql
    CREATE OR REPLACE FUNCTION search_users(p_query TEXT, p_threshold NUMERIC DEFAULT 0.4, p_limit INT DEFAULT 20)
    RETURNS TABLE (
        id UUID,
        first_name TEXT,
        last_name TEXT,
        email TEXT,
        company TEXT,
        score NUMERIC
    )
    LANGUAGE plpgsql
    AS $$
    BEGIN
        -- Set the threshold locally for this transaction to avoid affecting other sessions.
        -- This is a critical step for performance.
        PERFORM set_config('pg_trgm.similarity_threshold', p_threshold::text, true);
    
        RETURN QUERY
        SELECT 
            u.id,
            u.first_name,
            u.last_name,
            u.email,
            u.company,
            (
                -- Weighted score calculation. company is nullable, and a NULL
                -- similarity would make the whole score NULL (and NULLs sort
                -- first under ORDER BY ... DESC), so coalesce it to 0.
                similarity(u.first_name, p_query) * 1.0 + 
                similarity(u.last_name, p_query) * 1.0 + 
                similarity(u.email, p_query) * 0.8 + 
                COALESCE(similarity(u.company, p_query), 0) * 0.5
            )::NUMERIC AS total_score
        FROM users u
        WHERE 
            u.first_name % p_query OR
            u.last_name % p_query OR
            u.email % p_query OR
            u.company % p_query
        ORDER BY total_score DESC
        LIMIT p_limit;
    END;
    $$;

    Now, your application can simply call:

    sql
    SELECT * FROM search_users(p_query => 'Jhon Smit', p_threshold => 0.35, p_limit => 10);

    This function provides a clean separation of concerns, enforces performance best practices, and simplifies application code.

    Final Recommendations

    Building high-performance, multi-column fuzzy search is not a matter of choosing one tool, but of understanding the system's architecture and applying the right patterns.

  • Default to Individual GIN Indexes: For general-purpose, multi-field search interfaces, the pattern of one GIN index per column combined with a BitmapOr query plan is the most flexible and performant solution.
  • Reject Concatenated Indexes: Avoid the single functional index on concatenated columns. It's an anti-pattern that limits functionality and creates performance bottlenecks on writes.
  • Master similarity_threshold: This is your most powerful performance tuning knob. Dynamically adjust it in your application based on query length and desired strictness to prevent the planner from fetching millions of low-quality results.
  • Embrace GiST for KNN: When the requirement is to find the "top N most similar" items, GiST with the <-> operator is not just an option; it's the only scalable solution.
  • Combine with B-Trees: Leverage PostgreSQL's powerful query planner by combining your pg_trgm GIN index lookups with standard B-Tree indexes on structured data for highly selective, complex queries.

    By moving beyond basic pg_trgm usage and implementing these advanced patterns, you can build search features that are not only powerful and user-friendly but also scale gracefully as your data grows.
