
PostgreSQL 18 Upgrade Guide: Performance and Operations
What changes in PostgreSQL 18 and how to plan your upgrade. Covers new performance features, breaking changes, pg_upgrade steps, and rollback strategies.
PostgreSQL 18 brings meaningful improvements to query planning, indexing, and operational tooling. For teams running PostgreSQL in production, the upgrade is worth pursuing—but it requires careful planning, testing, and validation. This guide walks through the upgrade process with a focus on what actually changes for database administrators and backend engineers.
The databases topic hub on this site provides broader context on database fundamentals, and the data and databases learning path offers a structured progression from SQL basics to advanced operations.
What is new in PostgreSQL 18
Query planner improvements
The query planner in PostgreSQL 18 makes better decisions in several areas:
- Join ordering for queries with more than 8–10 tables has been improved, reducing pathological plan choices in complex analytical queries.
- Incremental sorting is used more aggressively, benefiting queries that can partially reuse an existing sort order.
- Parallel query execution has been expanded to cover additional query patterns, including some subquery types that previously ran single-threaded.
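A quick way to see whether these planner changes affect your workload is to compare plans on a test instance. A minimal sketch, assuming a hypothetical `orders` table with an `order_date` column:

```sql
-- Hypothetical example: table and column names are illustrative.
-- Run the same statement on the old version and on a PostgreSQL 18
-- test instance, then compare the plan shapes.
EXPLAIN (ANALYZE, BUFFERS)
SELECT date_trunc('month', order_date) AS month, count(*)
FROM orders
GROUP BY 1
ORDER BY 1;
-- Look for nodes such as "Incremental Sort" or a higher
-- "Workers Planned" count appearing only in the new plan.
```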
Index improvements
- Skip scan support for multicolumn B-tree indexes. This is a significant improvement for queries that filter on non-leading columns. We cover this in detail in a separate article.
- BRIN index improvements for temporal data, reducing false positives in range scans.
- Improved index-only scan coverage for more data types.
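To illustrate the skip scan case, here is a sketch with a hypothetical `events` table. The query filters only on the second column of a two-column index, the pattern that previously could not use the index efficiently:

```sql
-- Hypothetical schema: the index's leading column (tenant_id)
-- is not referenced by the query below.
CREATE INDEX idx_events_tenant_created
    ON events (tenant_id, created_at);

-- Without skip scan, this predicate typically forces a sequential
-- scan; with skip scan the planner can still use the index by
-- iterating over the distinct tenant_id values.
EXPLAIN SELECT *
FROM events
WHERE created_at >= now() - interval '1 day';
```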
Operational improvements
- Logical replication supports more DDL operations, reducing the need for manual intervention during schema changes.
- pg_stat_io has been expanded with more detailed I/O statistics, making it easier to diagnose storage bottlenecks.
- Improved VACUUM performance for tables with many dead tuples, reducing the impact of maintenance operations on concurrent queries.
Security
- SCRAM-SHA-256 is now the default authentication method for new installations. Existing clusters using MD5 should plan to migrate.
- Row-level security policy improvements make it easier to implement multi-tenant isolation at the database layer.
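Before relying on SCRAM-SHA-256, check which roles still carry MD5 password hashes. Stored MD5 hashes are identifiable by their `md5` prefix in pg_authid:

```sql
-- Roles listed here must have their passwords re-set (e.g. with
-- \password in psql) while password_encryption = 'scram-sha-256',
-- so the server stores a SCRAM verifier instead of an MD5 hash.
SELECT rolname
FROM pg_authid
WHERE rolpassword LIKE 'md5%';
```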
Pre-upgrade checklist
Before upgrading, work through these items:
- Review the release notes for removed features and behaviour changes. Pay special attention to changes in default settings.
- Check extension compatibility. Run SELECT * FROM pg_available_extensions; and verify that every extension you use has a version compatible with PostgreSQL 18.
- Review your postgresql.conf settings. Some default values change between major versions. Compare your current settings with the new defaults.
- Test your most critical queries. Export your top 20 queries by execution time and run EXPLAIN (ANALYZE, BUFFERS) on each one against a PostgreSQL 18 test instance.
- Verify backup and restore procedures. Ensure your backup tooling (pg_dump, pg_basebackup, or third-party) works with the new version.
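If the pg_stat_statements extension is installed, the top-20 export from the checklist above can be pulled directly from its view:

```sql
-- Requires the pg_stat_statements extension. Lists the 20 queries
-- with the highest cumulative execution time, for replay with
-- EXPLAIN (ANALYZE, BUFFERS) against a PostgreSQL 18 test instance.
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
```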
Upgrade procedure
Option 1: pg_upgrade (in-place)
pg_upgrade converts the data directory from the old format to the new format without a full dump and restore. It is faster but requires downtime.
Steps:
- Stop the old PostgreSQL instance
- Install PostgreSQL 18 binaries alongside the old version
- Run pg_upgrade --check to validate compatibility
- Run pg_upgrade to perform the conversion
- Start the new instance
- Run ANALYZE on all databases to update planner statistics
- Remove the old installation once validated
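The steps above can be sketched as shell commands. Paths follow the Debian/Ubuntu layout and are illustrative; adjust them for your packaging:

```shell
# Validate compatibility first; --check makes no changes.
pg_upgrade --check \
  --old-datadir=/var/lib/postgresql/17/main \
  --new-datadir=/var/lib/postgresql/18/main \
  --old-bindir=/usr/lib/postgresql/17/bin \
  --new-bindir=/usr/lib/postgresql/18/bin

# The same command without --check performs the conversion.
# Adding --link uses hard links instead of copying files (much
# faster, but the old cluster cannot be started afterwards).

# After starting the new cluster, rebuild planner statistics:
vacuumdb --all --analyze-in-stages
```

vacuumdb --analyze-in-stages produces usable (coarse) statistics quickly, then refines them, which shortens the window in which the planner runs blind.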
Option 2: Logical replication (minimal downtime)
For systems that cannot tolerate extended downtime:
- Set up a PostgreSQL 18 replica using logical replication
- Let it catch up to the primary
- Switch application connections to the new instance
- Decommission the old primary
This approach requires more setup but limits downtime to the connection switch window.
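A minimal sketch of the replication setup, assuming the schema has already been copied to the new instance with pg_dump --schema-only; the database, user, and host names are illustrative:

```sql
-- On the old primary (publisher):
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the PostgreSQL 18 instance (subscriber):
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=appdb user=replicator'
    PUBLICATION upgrade_pub;

-- Monitor catch-up before switching application connections:
SELECT subname, received_lsn, latest_end_lsn
FROM pg_stat_subscription;
```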
Option 3: pg_dump and pg_restore
The most conservative approach. Dump the entire database, install PostgreSQL 18, and restore. This is appropriate for smaller databases or when you want a completely clean starting point.
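A sketch of the dump-and-restore flow, with an illustrative database name. Custom format (-Fc) is worth using because it allows pg_restore to run with parallel jobs:

```shell
# On the old version: dump in custom format.
pg_dump -Fc -f appdb.dump appdb

# After installing PostgreSQL 18 and creating the target database:
# restore with 4 parallel jobs.
pg_restore -j 4 -d appdb appdb.dump
```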
Post-upgrade validation
After upgrading, validate:
- Query performance: re-run your top 20 queries and compare execution times with the pre-upgrade baseline. Regressions of more than 10% warrant investigation.
- Replication health: if you use streaming or logical replication, confirm that replication lag is within normal bounds.
- Extension functionality: test every extension's core features.
- Backup and restore: run a full backup and test a restore to a separate instance.
- Monitoring: verify that your monitoring tools correctly collect metrics from the new version.
Common upgrade pitfalls
Pitfall: skipping ANALYZE after pg_upgrade. The planner statistics are not automatically rebuilt. Without fresh statistics, the query planner may make poor decisions.
Pitfall: not testing with production-scale data. Query plans can change dramatically based on table size and data distribution. Always test with a copy of production data, not a small sample.
Pitfall: assuming extensions are compatible. Some extensions (especially those with C components) need to be recompiled for the new PostgreSQL version. Test in staging first.
Pitfall: ignoring authentication changes. The switch to SCRAM-SHA-256 as the default may break clients that only support MD5.
Trade-offs
- pg_upgrade is fast but requires downtime. For databases under 100 GB, the downtime is usually measured in minutes. For multi-terabyte databases, it can take hours.
- Logical replication minimises downtime but adds operational complexity and requires careful validation of data consistency.
- Waiting for the first minor release (18.1) reduces the risk of encountering release-day bugs, but delays access to new features and fixes.
Further reading on EBooks-Space
- Databases topic hub — reading lists for SQL, Oracle, and data fundamentals
- Data & Databases learning path — structured path through database engineering
- Oracle Reading List — for teams managing multiple database platforms