Written by Ashnik Team

May 19, 2025

3 min read

Why Your CRM Search is Too Slow—and How We Engineered Real-Time Customer360 at Scale

In enterprise banking, search latency is not a technical inconvenience—it’s a business risk. Relationship managers (RMs) must access accurate, real-time customer data to offer timely insights, fulfill regulatory requirements, and deliver personalized services. Anything short of that results in missed opportunities, operational inefficiencies, and ultimately, customer dissatisfaction.

One of India’s leading private banks encountered this very challenge. Their internally developed CRM application was responsible for consolidating customer data flowing in from core banking systems. The architecture depended on Hadoop for storage and analytics. While this supported batch-mode use cases like MIS and static analytics, the system faltered under the demands of real-time Customer360 search and operational observability.

Ashnik was brought in to address this bottleneck. The solution: a dual-cluster Elastic Stack architecture that enabled real-time CRM search, robust log ingestion, and unified dashboarding—all while scaling to production workloads exceeding 20,000 events per second.

This post outlines how we achieved that, sticking strictly to the documented implementation facts and outcomes.

The Ground Reality: Hadoop is Powerful, but Not Real-Time

The bank’s CRM system aggregated customer data from core banking systems and stored it in a Hadoop ecosystem. This supported high-volume, structured data ingestion and was effective for batch analytics and reporting use cases like:

  • Customer360 profile generation
  • Marketing analytics
  • MIS reporting

However, as the bank’s operations grew more digital and customer expectations shifted to real-time experiences, this Hadoop-centered architecture revealed significant gaps:

Documented Challenges:

  • Search performance bottlenecks over large Hadoop datasets
  • Lack of real-time log visibility
  • No centralized dashboards for business and application monitoring
  • No High Availability (HA) or Disaster Recovery (DR) in place for critical systems
  • Operational support gaps for managing evolving and unpredictable workloads

These limitations had a cascading impact:

  • Relationship managers couldn’t instantly access customer insights.
  • IT teams had no consolidated view of system behavior.
  • High-value digital engagement scenarios were being delayed due to infrastructure latency.

The Architecture: Purpose-Built Elastic Stack for Scale

To solve these challenges, Ashnik designed and implemented an architecture centered on Elastic Stack, built for real-time ingestion, search, and visualization.

Key Architectural Decision: Two Dedicated Elastic Clusters

We did not deploy a monolithic Elastic environment. Instead, we architected two separate clusters:

  1. CRM Search Cluster
  2. Log Ingestion and Observability Cluster

This was a strategic move.

Why Split the Clusters?

  • Workload Isolation: The CRM search workloads had different indexing and query patterns compared to the log data. Mixing them would create performance contention.
  • Scale and Performance: CRM search was handling upwards of 20,000 events per second, while the log cluster managed around 6,000 events/sec with a significantly higher data volume.
  • Operational Safety: Separate clusters allowed tuning, scaling, and upgrading each environment independently without risking the other.
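The workload-isolation argument can be made concrete with index settings. The case study does not document the bank's actual configuration, so the values below are illustrative only; they sketch how a search-optimized cluster and a log-ingestion cluster would typically be tuned differently using standard Elasticsearch index settings (`number_of_shards`, `number_of_replicas`, `refresh_interval`):

```python
# Illustrative settings only -- not the bank's actual configuration.
# The point: search and log workloads pull index tuning in opposite
# directions, which is exactly why mixing them in one cluster hurts.

crm_search_settings = {
    "index": {
        "number_of_shards": 3,       # fewer, larger shards favor query latency
        "number_of_replicas": 2,     # extra replicas spread read load across nodes
        "refresh_interval": "1s",    # near-real-time visibility for RM lookups
    }
}

log_ingest_settings = {
    "index": {
        "number_of_shards": 6,       # more shards parallelize heavy write traffic
        "number_of_replicas": 1,     # one replica is usually enough for logs
        "refresh_interval": "30s",   # batched refreshes trade freshness for
                                     # indexing throughput
    }
}

def refresh_seconds(settings: dict) -> int:
    """Parse a refresh_interval like '30s' into whole seconds."""
    return int(settings["index"]["refresh_interval"].rstrip("s"))
```

In a single shared cluster, one of these profiles always loses; with two dedicated clusters, each can be tuned, scaled, and upgraded on its own terms.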

Implementation Metrics: Real Numbers, Real Scale

The scale of deployment reflected production-grade capacity planning. Here are the ingestion volumes and event throughput rates as documented:

Ingestion Volumes (Daily):

  • Log Cluster: 750 GB/day
  • CRM Search Cluster: 100 GB/day

Event Rates:

  • CRM Search: 20,000 events per second
  • Transaction Search: 12,000 events per second
  • Log Ingestion: 6,000 events per second

These metrics highlight that this wasn’t a lab-scale experiment. The architecture was designed and tuned to support high-throughput, production workloads under real operational conditions.
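A quick back-of-envelope check on these figures also explains why the two clusters look so different. Dividing daily volume by daily event count (averages only; real traffic is bursty, so peaks must be provisioned well above these numbers) shows the log cluster handling far larger documents than the CRM search cluster:

```python
# Back-of-envelope sizing from the documented figures.
GB = 10**9
SECONDS_PER_DAY = 86_400

def avg_event_size_bytes(bytes_per_day: float, events_per_sec: float) -> float:
    """Average document size implied by daily volume and sustained event rate."""
    return bytes_per_day / (events_per_sec * SECONDS_PER_DAY)

# Log cluster: 750 GB/day at 6,000 events/sec -> roughly 1.4 KB per event
log_event_size = avg_event_size_bytes(750 * GB, 6_000)

# CRM search cluster: 100 GB/day at 20,000 events/sec -> under 60 bytes per event
crm_event_size = avg_event_size_bytes(100 * GB, 20_000)
```

Large, verbose log documents at moderate rates versus small, structured events at very high rates: two ingestion profiles that genuinely warrant separate capacity planning.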

Functional Outcomes Delivered

Each design choice translated into specific, measurable outcomes that addressed the bank’s original challenges:

  1. Real-Time Customer360 Search
    • RMs could now instantly retrieve customer profiles, view transaction summaries, and act on real-time insights.
    • The Elastic-powered search was performant and could scale with increasing data volume.
  2. Application-Level Observability
    • The log cluster ingested 750 GB/day of logs, which were indexed and made searchable in real time.
    • Dashboards allowed IT teams to proactively monitor service health.
  3. Unified Dashboard
    • Elastic visualizations were configured for both business and operational views.
    • MIS, usage trends, error rates, and performance metrics could now be monitored in a single pane.
  4. Improved Uptime and Resilience
    • With a DR-ready setup, the architecture ensured business continuity even under failure conditions.
    • High-availability configurations supported 24×7 uptime.
  5. Operational Flexibility
    • Ashnik provided tuning and operational support to align the infrastructure with evolving workload patterns.
    • This included ingestion management, cluster health monitoring, and query performance tuning.

What This Means for Platform Architects

This implementation is a clear demonstration that:

  1. Elastic Stack can handle real-time CRM workloads when tuned and scaled appropriately.
  2. Splitting search and log clusters isn’t overengineering—it’s a necessity at scale.
  3. DR and HA aren’t future options; they must be built into core customer-facing platforms.
  4. Operational readiness is as important as initial design—daily ingestion of 850+ GB across use cases demands continuous performance management.

Final Thoughts

By re-architecting the bank’s CRM search and observability stack with Elastic, we helped them unlock true real-time capabilities:

  • Relationship managers now operate with speed and confidence.
  • IT teams have visibility into live systems.
  • The business gains from real-time analytics and dashboards.
  • The platform can evolve with future demand without architectural rework.

This was not a replacement of Hadoop, but a purpose-built real-time layer to complement the batch backbone. And it was built entirely on open-source technology, with full transparency, observability, and scale baked in.

If your enterprise CRM or log infrastructure is struggling with latency, blind spots, or reliability gaps—this architecture provides a proven path forward.

Let us know if you want to explore it for your environment.
