
Mastering PostgreSQL CPU Challenges: A Tale of Success

Written by Veerendra Pulapa

Apr 22, 2024

3 MIN READ

When a seemingly well-tuned PostgreSQL database starts straining under CPU load, standard fixes often aren’t enough. This tale outlines a customer’s journey to overcome such a challenge, uncovering the hidden causes of excessive CPU demands and achieving remarkable performance gains.

The Customer’s Situation

Our customer, a large enterprise with a global business footprint, runs an order processing application for the APAC region. More than 600 outlets and labs use this application, generating a high volume of transactions, all managed by a PostgreSQL database. With a throughput of 300-350 transactions per second (TPS), this mission-critical application serves as the backbone of their order processing system. However, the customer was experiencing significant performance degradation due to high CPU utilization within the PostgreSQL database. This led to delays in critical processes like order fulfillment, causing user frustration. Slow queries, bottlenecks, and unpredictable performance made scaling a major concern.

The Problem Statement

The customer’s PostgreSQL database was experiencing excessive CPU utilization, leading to performance degradation, bottlenecks, and concerns about scalability. Because the application and database run on cloud infrastructure, sustaining the high CPU demand also meant paying for larger, more expensive compute resources.

The Investigation Begins

I had been tuning this database regularly, yet despite that prior work, the recurring CPU bottleneck hinted at issues that went beyond the surface. Realizing that quick configuration changes wouldn’t solve this, I embarked on a methodical investigation:

Zeroing in on CPU Culprits

PostgreSQL’s pg_stat_statements became my go-to tool. I used it to pinpoint the queries eating up the most CPU time. Unsurprisingly, heavy calculations and complex data transformations were frequent offenders.
Example:

-- Top 10 statements by cumulative execution time
-- (on PostgreSQL 13 and later, the column is total_exec_time rather than total_time)
SELECT query, total_time, calls, total_time/calls AS avg_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
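
If the view is not yet available on an instance, pg_stat_statements has to be preloaded and created as an extension first. A minimal setup sketch, assuming superuser access and a restart window:

-- postgresql.conf (requires a server restart)
-- shared_preload_libraries = 'pg_stat_statements'

-- then, in the target database:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;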

Decoding Parameterized Queries

These queries, where values are represented by placeholders ($1, $2, etc.), were a significant hurdle. They obscured the true cause of CPU strain, requiring targeted testing and meticulous analysis.
Example:

SELECT * FROM orders WHERE order_date >= $1 AND order_date <= $2;
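
One way to reproduce the real cost hidden behind such placeholders is to prepare the statement and execute it with representative values, so the plan and timing can be inspected directly. A minimal sketch, assuming the illustrative orders table and a typical date range:

PREPARE recent_range (date, date) AS
SELECT * FROM orders WHERE order_date >= $1 AND order_date <= $2;

EXPLAIN ANALYZE EXECUTE recent_range ('2023-01-01', '2023-12-31');

DEALLOCATE recent_range;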

Dissecting Query Plans

To truly understand query performance, I turned to EXPLAIN and EXPLAIN ANALYZE. My focus was on operations like inefficient sorts or sequential scans that could be draining CPU resources.
Example:

EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date BETWEEN '2023-01-01' AND '2023-12-31';
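
When CPU cost needs to be separated from I/O, the BUFFERS option adds shared-buffer hit and read counts to the same output. Shown here on the same illustrative query:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE order_date BETWEEN '2023-01-01' AND '2023-12-31';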

Strategic Indexing

Guided by the query plan analysis, I carefully added indexes to boost efficiency. This included single-column indexes for frequent filters and composite indexes for queries with multiple conditions.
Example:

CREATE INDEX idx_order_date ON orders(order_date);
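
For queries that filter on more than one column, such as the date range plus an amount threshold, a composite index can serve the combined predicate in a single pass. The column names here mirror the illustrative queries in this post rather than the customer’s actual schema:

CREATE INDEX idx_order_date_amount ON orders(order_date, total_amount);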

Streamlining with Query Rewrites

I experimented with rewriting certain queries using Common Table Expressions (CTEs) or substituting simpler functions. These optimizations often led to noticeable reductions in CPU usage.
Example:

WITH recent_orders AS (
    SELECT * FROM orders WHERE order_date >= '2023-01-01'
)
SELECT * FROM recent_orders WHERE total_amount > 1000;
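
It is worth noting that on PostgreSQL 12 and later a CTE like this is inlined into the outer query by default, so the rewrite mainly improves readability; on older versions, or with the MATERIALIZED keyword, the CTE acts as an optimization fence and is computed separately:

WITH recent_orders AS MATERIALIZED (
    SELECT * FROM orders WHERE order_date >= '2023-01-01'
)
SELECT * FROM recent_orders WHERE total_amount > 1000;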

This step-by-step approach, combined with the power of PostgreSQL’s built-in tools, allowed me to unravel the root causes of the CPU woes. The result was a more performant database and a much more satisfied customer.

Achievements and Statistics

1. CPU utilization was reduced from an initial 100% to below 20%, indicating that the system is now generally performing well.
2. We observed an 80% reduction in the number of queries running for over 10 seconds.
3. Recent analysis shows that disk I/O performance has improved by 40-45% compared to the previous review.
4. Critical report queries have seen an average improvement of 75-80% in response time.
5. The slowest individual query duration has been reduced by 80% after implementing optimization techniques.

Ashnik’s Expertise in the Spotlight

This tale of PostgreSQL CPU optimization reveals Ashnik’s methodical approach and problem-solving prowess. Their insights into query planning, optimization techniques, and PostgreSQL’s internal workings led the charge in resolving performance limitations. Ashnik’s team delivered not just a fix, but a more efficient and scalable database environment. This success demonstrates the value of expertise and collaboration in the database realm, highlighting Ashnik’s commitment to transforming customer operations.

Customer Experience

The remarkable achievements and statistics outlined in our journey of mastering PostgreSQL CPU challenges have had a profound impact on our customer’s day-to-day operations. With CPU utilization plummeting from 100% to below 20%, the customer experienced a significant improvement in system performance. This reduction in CPU load translated into tangible benefits for their business processes, as critical queries that once suffered from delays and user frustration now execute swiftly and seamlessly. Additionally, the drastic reduction in the number of queries running for over 10 seconds alleviated bottlenecks and unpredictability in performance, providing a more stable and predictable user experience. Moreover, the substantial enhancements in disk I/O performance and response times for critical report queries further optimized their operations, enabling faster decision-making and improved productivity across the board. Overall, these improvements have not only enhanced the efficiency and scalability of their database environment but have also positively impacted their bottom line and customer satisfaction.

Conclusion

The journey of mastering PostgreSQL CPU challenges is more than just a success story—it exemplifies the importance of going beyond standard fixes when optimizing complex database systems. Our experience underscores the significance of leveraging expertise and open-source tools to address evolving challenges in database management. By embracing a community-driven spirit and fostering collaboration between experts and clients, we can navigate the ever-changing landscape of database technologies and drive meaningful transformations that propel businesses forward.
