
PostgreSQL 18 Skip Scan for Multicolumn Indexes

How PostgreSQL 18 skip scan optimizes queries on multicolumn indexes. Covers when skip scan activates, benchmark patterns, and index design implications.


For years, one of the most common PostgreSQL performance questions has been: "Why does my multicolumn index not help this query?" The answer was usually that the query filtered on a non-leading column, and PostgreSQL could not skip past irrelevant values in the leading column. PostgreSQL 18 changes this with skip scan support for B-tree indexes.

This article explains the skip scan mechanism, identifies the query patterns that benefit most, walks through practical examples, and discusses when you should redesign your indexes versus relying on skip scan. It is written for backend engineers and DBAs who tune query performance in PostgreSQL.

For the broader PostgreSQL 18 context, see the upgrade guide and the UUIDv7 article.

The problem skip scan solves

Consider a multicolumn B-tree index on (status, created_at). Before skip scan, PostgreSQL could efficiently use this index for:

  • Queries that filter on status (the leading column)
  • Queries that filter on both status and created_at

But it could not efficiently use the index for:

  • Queries that filter only on created_at (the non-leading column)

Without skip scan, PostgreSQL had to either use a sequential scan or a bitmap index scan, both of which are slower than a direct index scan for selective queries.
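A minimal sketch of the situation (table and column names here are illustrative, not from the article):

```sql
-- Composite index with status as the leading column
CREATE INDEX idx_events_status_created
    ON events (status, created_at);

-- Uses the index efficiently: the filter is on the leading column
SELECT * FROM events WHERE status = 'open';

-- Before PostgreSQL 18, this filter on the non-leading column could not
-- seek within the index; the planner fell back to a sequential scan
-- or a bitmap scan
SELECT * FROM events WHERE created_at > now() - interval '1 day';
```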

How skip scan works

Skip scan exploits the fact that B-tree indexes store entries in sorted order. If the leading column has a small number of distinct values (low cardinality), the database can:

  1. Start at the first value of the leading column
  2. Seek directly to the target range in the second column
  3. Read the matching entries
  4. Skip to the next distinct value of the leading column
  5. Repeat

Instead of scanning the entire index, the database makes one seek per distinct value of the leading column. If there are 5 distinct status values and you are filtering on created_at, PostgreSQL makes 5 index seeks instead of a full scan.
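Conceptually, skip scan behaves as if the planner enumerated the leading column's distinct values and performed one seek per value. The rewrite below is a hedged illustration of that idea, not actual plan text, and the status values are invented:

```sql
-- Original query: filters only on the non-leading column of (status, created_at)
SELECT * FROM events WHERE created_at > '2026-03-01';

-- What skip scan effectively does: one index seek per distinct status value,
-- as if the query had been written as
SELECT * FROM events
WHERE status = ANY (ARRAY['open', 'closed', 'archived'])  -- 3 seeks
  AND created_at > '2026-03-01';
```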

When skip scan helps most

Low cardinality leading column

Skip scan is most effective when the leading column has few distinct values. A status column with 3–10 values is ideal. A user_id column with millions of values is not—the overhead of skipping between each value exceeds the benefit.

Selective filter on non-leading column

The query must be selective on the non-leading column. If the filter matches most of the table, a sequential scan is still faster regardless of skip scan.

Existing multicolumn indexes

Skip scan is most valuable when you have an existing index on (A, B) and queries that filter on B. Previously, you would need to create a separate index on (B) or (B, A). Skip scan may eliminate the need for that extra index, saving storage and write overhead.
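For instance, the pre-18 workaround was a second index that duplicated much of the composite index's data (names here are illustrative):

```sql
-- Existing composite index on (a, b)
CREATE INDEX idx_t_a_b ON t (a, b);

-- Pre-18 workaround for queries filtering only on b; with skip scan,
-- this extra index, and its storage and write overhead, may no longer
-- be necessary
-- CREATE INDEX idx_t_b ON t (b);
```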

Practical examples

Example 1: order status and date

You have an orders table with an index on (status, order_date). Status has 5 values: pending, confirmed, shipped, delivered, cancelled.

A query for recent orders regardless of status:

SELECT * FROM orders WHERE order_date > '2026-03-01';

Before PostgreSQL 18: sequential scan or bitmap scan. With PostgreSQL 18: skip scan on the existing index, making 5 seeks (one per status value).
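A sketch of this example, with illustrative DDL and the query wrapped in EXPLAIN so the plan can be inspected:

```sql
CREATE TABLE orders (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    status     text NOT NULL,        -- 5 distinct values
    order_date date NOT NULL
);

CREATE INDEX idx_orders_status_date ON orders (status, order_date);

-- On PostgreSQL 18 this can use the composite index via skip scan:
-- roughly one index seek per distinct status value
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_date > '2026-03-01';
```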

Example 2: tenant and timestamp

A multi-tenant application with an index on (tenant_id, created_at). If you have 50 tenants and need to query across all tenants for a time range, skip scan makes 50 seeks—far fewer than scanning the entire index.
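As a sketch, assuming an events table carrying the index described above:

```sql
-- Index on (tenant_id, created_at); the cross-tenant time-range query below
-- can be served with ~50 seeks (one per tenant) under skip scan
SELECT * FROM events
WHERE created_at BETWEEN '2026-03-01' AND '2026-03-31';
```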

Example 3: boolean and sort key

An index on (is_active, sort_order) where is_active is boolean. Skip scan makes exactly 2 seeks (true and false), which is nearly as efficient as a dedicated index on sort_order.
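A sketch of the boolean case (table and index names are illustrative):

```sql
CREATE INDEX idx_items_active_sort ON items (is_active, sort_order);

-- Two seeks (is_active = true and is_active = false), then a range read
-- on sort_order within each; close to a dedicated index on (sort_order)
SELECT * FROM items WHERE sort_order BETWEEN 100 AND 200;
```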

When to still create a separate index

Skip scan does not eliminate the need for index design. Create a separate index when:

  • The leading column has high cardinality. If the leading column has thousands or millions of distinct values, skip scan degrades to near-sequential performance.
  • The query is performance-critical and latency-sensitive. A dedicated index on the filtered column will generally be faster than skip scan on a composite index. Skip scan is an optimization, not a replacement for proper index design.
  • Read latency must stay predictable. Even on a write-heavy table with a very high insert rate, the write cost of maintaining an extra index can be worth paying if it keeps read latency predictable.

How to verify skip scan is being used

Run EXPLAIN (ANALYZE, BUFFERS) on your query and look for:

  • An Index Scan or Index Only Scan on the composite index, even though the query filters only on a non-leading column; in EXPLAIN ANALYZE output, PostgreSQL 18 also reports an Index Searches count, which for a skip scan roughly matches the number of distinct leading-column values
  • Significantly fewer buffer reads compared to a full index scan

Compare the plan and execution time with and without the composite index to confirm the benefit.
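One hedged way to make that comparison in a psql session (table and column names are illustrative) is to capture the plan, then disable index access paths and re-run:

```sql
-- 1. Inspect the plan for the non-leading-column filter
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_date > '2026-03-01';

-- 2. Force the planner away from index scans to quantify the benefit
SET enable_indexscan = off;
SET enable_bitmapscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE order_date > '2026-03-01';
RESET enable_indexscan;
RESET enable_bitmapscan;
```

The enable_indexscan and enable_bitmapscan settings are standard planner GUCs intended for exactly this kind of diagnostic comparison, not for production use.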

Trade-offs

  • Skip scan adds CPU overhead per skip. For very high cardinality leading columns, this overhead makes it slower than a sequential scan.
  • The planner's cost model is new. In early PostgreSQL 18 releases, the planner may not always choose skip scan when it would be beneficial. Check your query plans and, if needed, steer the planner with the pg_hint_plan extension (PostgreSQL has no built-in hint syntax).
  • Skip scan does not help if you need the leading column in the result sort order. If the query needs results sorted by the leading column, a full index scan is already optimal.
