Amr Saafan for Nile Bits

How to Optimize PostgreSQL for High Traffic and Concurrent Users

https://www.nilebits.com/blog/2025/06/postgresql-high-connections/

PostgreSQL is a powerful, open-source relational database system known for its reliability, extensibility, and advanced SQL compliance. But when your application scales and thousands of users start making concurrent requests, PostgreSQL can run into performance bottlenecks if not properly configured.

This comprehensive guide covers everything you need to know about optimizing PostgreSQL for high traffic and concurrent users. From tuning parameters to connection pooling, operating system configurations, and architectural recommendations—we’ll walk you through strategies that ensure your PostgreSQL database can handle increased load without compromising performance.

Understanding the Challenge with High Concurrent Connections

PostgreSQL uses a process-per-connection design: the server forks a new backend process for every client connection. Each backend consumes memory and adds context-switching overhead. When the number of concurrent connections rises sharply, this model can lead to:

Increased query latency

Memory exhaustion

Backend process thrashing

Connection timeouts

Excessive system load

These issues often stem not from PostgreSQL limitations, but from insufficient configuration and infrastructure planning.
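You can watch this pressure build directly in pg_stat_activity. As a quick diagnostic (not part of any specific setup), this query groups current backends by state:

SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;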

More on PostgreSQL architecture:
PostgreSQL Architecture Overview – IBM Developer

Step 1: Adjust max_connections Wisely

The max_connections setting defines how many concurrent clients can be connected to the PostgreSQL server.

Check the current value:

SHOW max_connections;

In postgresql.conf, you can set it as:

max_connections = 500

Keep in mind that higher values require more memory. Avoid arbitrarily increasing this number. Instead, combine it with a connection pooler like PgBouncer to efficiently manage client sessions.
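Note that max_connections only takes effect after a server restart. Instead of editing postgresql.conf by hand, you can also set it with ALTER SYSTEM, which writes the value to postgresql.auto.conf:

ALTER SYSTEM SET max_connections = 500;

Then restart the server (for example, sudo systemctl restart postgresql on systemd-based distributions).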

Official documentation:
PostgreSQL - Resource Consumption Settings

Step 2: Tune Memory Settings

As you increase max_connections, memory consumption increases. You’ll need to tune these important parameters:

shared_buffers

The amount of memory PostgreSQL uses for caching data. Recommended: 25% of total RAM.

shared_buffers = 4GB

work_mem

The memory allocated per operation (e.g., sort or join). Be careful—it applies per operation, per connection.

work_mem = 4MB

effective_cache_size

A planner hint estimating how much memory is available for caching data, including the OS page cache; it does not allocate anything itself. Recommended: 50–75% of total RAM.

effective_cache_size = 12GB
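As a sketch, assuming the 16 GB server implied by the values above, the same settings can be applied with ALTER SYSTEM. shared_buffers requires a restart, while the other two only need a configuration reload:

ALTER SYSTEM SET shared_buffers = '4GB';        -- restart required
ALTER SYSTEM SET work_mem = '4MB';              -- reload is enough
ALTER SYSTEM SET effective_cache_size = '12GB'; -- reload is enough
SELECT pg_reload_conf();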

For in-depth guidance:
PostgreSQL Memory Configuration – Cybertec

Step 3: Use a Connection Pooler (e.g., PgBouncer)

One of the most critical pieces of a high-concurrency setup is a connection pooler, because PostgreSQL's backend process model is not designed to scale to thousands of concurrent connections.

PgBouncer is a lightweight connection pooler that sits between your application and PostgreSQL.

Installation on Ubuntu:

sudo apt install pgbouncer

Sample configuration (pgbouncer.ini):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
listen_addr = 127.0.0.1
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50
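A minimal userlist.txt to pair with this configuration might look like the following. The user name and hash are placeholders; the hash is the string "md5" followed by md5 of the password concatenated with the username:

"myuser" "md5<md5-of-password-plus-username>"

Your application then connects to PgBouncer on port 6432 instead of PostgreSQL directly:

psql -h 127.0.0.1 -p 6432 -U myuser mydb

Keep in mind that pool_mode = transaction scales best but breaks session state such as SET variables, session advisory locks, and (before PgBouncer 1.21) protocol-level prepared statements, so verify your driver's behavior before enabling it.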

For details on pooling modes and performance:
PgBouncer Documentation

Step 4: Configure PostgreSQL for High Performance

PostgreSQL performance can be significantly enhanced by tweaking default settings.

WAL and commit settings

wal_level = replica
synchronous_commit = off   # faster commits, but a crash can lose the most recent transactions
commit_delay = 10000       # in microseconds; groups commits to batch WAL flushes

Checkpoint tuning

checkpoint_timeout = 15min
max_wal_size = 2GB
min_wal_size = 1GB

Background writer settings

bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0

Checkpoint tuning helps reduce I/O spikes, while WAL tuning optimizes disk writes under heavy transaction loads.
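To check whether checkpoints are running on schedule or being forced by WAL volume, query the background writer statistics (pg_stat_bgwriter on PostgreSQL 16 and earlier; version 17 moved these counters to pg_stat_checkpointer):

SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;

If checkpoints_req dominates, max_wal_size is likely too small for your write load.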

Step 5: Tune Operating System Settings

PostgreSQL's performance also depends heavily on OS-level tuning.

File Descriptors

Increase file descriptor limits so the server can handle more simultaneous connections. For the current shell session:

ulimit -n 65535

To make the limit persistent for the postgres user, add to /etc/security/limits.conf:

postgres soft nofile 65535
postgres hard nofile 65535

Shared Memory Settings

PostgreSQL 9.3 and later allocate most shared memory via mmap, so these kernel limits rarely need changing on modern systems. If you do hit shared-memory errors at startup, add or modify /etc/sysctl.conf:

kernel.shmmax = 8589934592 # 8GB
kernel.shmall = 2097152

Apply changes:

sudo sysctl -p
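You can confirm the kernel picked up the new values with:

sysctl kernel.shmmax kernel.shmall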

Step 6: Monitor PostgreSQL in Real Time

Monitoring helps detect slow queries, blocking issues, and connection spikes.

Recommended Tools

pg_stat_statements (query performance)

Prometheus + Grafana (metrics and dashboards)

pgAdmin (GUI-based monitoring)

To enable pg_stat_statements, first add it to shared_preload_libraries in postgresql.conf and restart the server:

shared_preload_libraries = 'pg_stat_statements'

Then create the extension in the database you want to track:

CREATE EXTENSION pg_stat_statements;
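With the extension active, you can list the most expensive queries. This assumes PostgreSQL 13 or later, where the timing columns are named total_exec_time and mean_exec_time:

SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;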

Step 7: Indexing and Partitioning

With high traffic, data grows rapidly. You must design for efficient access.

Partitioning

Split large tables into smaller ones:

CREATE TABLE events (
    id serial,
    event_date date
) PARTITION BY RANGE (event_date);
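A partitioned table has no storage of its own, so create at least one partition before inserting rows. For example, a monthly partition:

CREATE TABLE events_2025_06 PARTITION OF events
    FOR VALUES FROM ('2025-06-01') TO ('2025-07-01');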

Indexing

Use EXPLAIN ANALYZE to examine slow queries and create appropriate indexes:

CREATE INDEX idx_event_date ON events(event_date);
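For example, to confirm the planner actually uses the new index for a date-range query:

EXPLAIN ANALYZE
SELECT * FROM events
WHERE event_date >= '2025-06-01' AND event_date < '2025-07-01';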

Step 8: Reduce Idle Connections

Idle connections consume resources unnecessarily. Use timeouts to free them:

idle_in_transaction_session_timeout = 60000  # in milliseconds (60 seconds)

Also monitor stale connections with:

SELECT pid, state, query_start, state_change
FROM pg_stat_activity
WHERE state = 'idle in transaction';
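To actually free stale sessions, pg_terminate_backend can kill them. As a sketch, this terminates anything that has been idle in a transaction for more than five minutes:

SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < now() - interval '5 minutes';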

Step 9: Benchmarking with pgbench

Before deploying any tuning in production, simulate load using pgbench.

Initialize test data:

pgbench -i -s 10 mydb

Simulate high concurrency (-c 100 client connections, -j 10 worker threads, -T 60 seconds of runtime):

pgbench -c 100 -j 10 -T 60 mydb

Monitor metrics like:

Transactions per second (TPS)

Average latency

Failed transactions

Official documentation:
pgbench – PostgreSQL

Step 10: Scale Horizontally if Needed

Once you've optimized everything and you're still facing limits, consider scaling:

Read Replicas using streaming replication (a minimal setup sketch follows this list)

Load Balancers like HAProxy

Logical Replication to decouple systems

Cloud-native options like Amazon RDS for PostgreSQL or Google Cloud SQL
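As a minimal sketch of bootstrapping a streaming read replica, assuming the host name is a placeholder and a replicator role with REPLICATION privileges already exists on the primary:

sudo -u postgres pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/postgresql/data -P -R

The -R flag writes standby.signal and the primary_conninfo setting so the new node starts up as a replica.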

Final Thoughts

Scaling PostgreSQL for high traffic is achievable with the right balance of configuration, monitoring, and infrastructure. You don’t need thousands of connections—what you need is an efficient way to manage them using pooling, optimized queries, and scalable architecture.

Performance tuning is not a one-time task. It’s a continual process based on how your application evolves and grows.

Work With PostgreSQL Experts at Nile Bits

If you're running PostgreSQL in production or preparing to scale your app for high concurrency, Nile Bits can help.

We specialize in performance optimization, infrastructure scaling, and managed DevOps services tailored to PostgreSQL.

Our services include:

PostgreSQL Performance Audits

Connection Pooling & Tuning

High Availability & Replication Design

24/7 DevOps Support for Mission-Critical Systems

Let us help you unlock the full potential of PostgreSQL.
Visit us at https://www.nilebits.com or contact us directly to get started.
