April 25, 2025 · 5 minutes

How Plotline Reduced Database Costs by 50% Without Changing a Single Line of Code

Discover how we dynamically scale databases to match predictable load patterns, reducing costs by 50% while maintaining optimal performance.

Adarsh Tadimari

Scaling servers based on application load is standard practice, but most teams miss an equally impactful opportunity: dynamically scaling databases. At Plotline, our usage follows clear daily patterns - intensive during peak hours, moderate during the day, and minimal at night. Keeping databases at peak size 24/7 proved unnecessarily expensive. We leveraged a simple Retool workflow to dynamically adjust database clusters, cutting costs dramatically without affecting performance. Here's how we did it.

The Problem

Static Databases, Dynamic Load

Most databases and stream processors, including MongoDB, Redshift, Redis, Kafka, Postgres and ClickHouse, offer Admin APIs that let you resize clusters.

Yet teams typically configure databases statically, running them at full scale around the clock, even during off-peak hours when 25% of resources would suffice. This static approach leads to significant overspending. Our solution: time-based dynamic scaling of clusters.

Scaling with Retool

We created an automated Retool workflow to adjust cluster sizes based on predictable load patterns:

  • Peak hours (6 hours/day): 100% of configured resources
  • Moderate load (6 hours/day): 50% of resources
  • Low load/night (12 hours/day): 25% of resources

The workflow triggers cluster resizing via API calls, scheduled precisely to match our known usage patterns.
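To make the schedule concrete, here's a minimal sketch of the hour-to-tier mapping such a workflow encodes (in Python, since Retool workflow steps can run Python or JavaScript). The hour boundaries below are illustrative stand-ins for the 6/6/12-hour split above, not our production values:

```python
from datetime import datetime

def scale_factor(now: datetime) -> float:
    """Map the local hour to a resource tier. Hour boundaries are
    illustrative placeholders for the 6/6/12-hour split above."""
    hour = now.hour
    if 18 <= hour < 24:   # peak hours (6 hours/day)
        return 1.00       # 100% of configured resources
    if 9 <= hour < 15:    # moderate load (6 hours/day)
        return 0.50       # 50% of resources
    return 0.25           # low load/night (remaining 12 hours)
```

In practice the factor maps onto a discrete instance tier, since managed providers resize in fixed steps (e.g. Atlas M30/M40/M60).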

How It Works

  1. Retool Scheduling: Retool's scheduler triggers API workflows at predetermined intervals (e.g., midnight, morning, evening).
  2. Cluster Resize via API: Retool invokes the Admin APIs for our MongoDB, Redshift, Redis, Kafka, Postgres and ClickHouse clusters to scale nodes up or down accordingly.
  3. Error Handling: If an API call fails or a resize operation doesn't complete successfully, alerts are immediately sent to our engineering team via Slack or PagerDuty.
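As a sketch of what steps 2 and 3 can look like against one provider, here's a resize call to MongoDB Atlas's cluster-modify endpoint with a Slack webhook alert on failure. The project ID, cluster name, keys, and webhook URL are placeholders, and this is an illustration rather than our exact workflow code; the other stores listed expose analogous admin endpoints:

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: substitute your own Atlas project, cluster, and credentials.
ATLAS_GROUP_ID = "<project-id>"
CLUSTER_NAME = "<cluster-name>"
PUBLIC_KEY, PRIVATE_KEY = "<public-key>", "<private-key>"
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/<your-webhook>"

def resize_cluster(instance_size: str) -> None:
    """Ask Atlas to resize the cluster, e.g. to 'M30' (off-peak) or 'M60' (peak)."""
    url = (
        "https://cloud.mongodb.com/api/atlas/v1.0"
        f"/groups/{ATLAS_GROUP_ID}/clusters/{CLUSTER_NAME}"
    )
    resp = requests.patch(
        url,
        auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
        json={"providerSettings": {"providerName": "AWS",
                                   "instanceSizeName": instance_size}},
        timeout=30,
    )
    if not resp.ok:
        # Step 3: surface failures immediately instead of failing silently.
        requests.post(
            SLACK_WEBHOOK_URL,
            json={"text": f"Resize of {CLUSTER_NAME} to {instance_size} "
                          f"failed: {resp.status_code} {resp.text[:200]}"},
            timeout=10,
        )
        resp.raise_for_status()
```

Managed providers like Atlas apply a resize as a rolling operation across replica set nodes, which is what lets scaling events complete without downtime.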

[Figure: Plotline database cost reduction setup]

Results

  • 50% reduction in database costs.
  • Maintained optimal performance at all times.
  • Zero downtime during scaling events.

Key Learnings

  • Predictable usage = cost savings: Knowing usage patterns can lead to substantial efficiency gains.
  • Simple workflows, big impacts: Leveraging straightforward tools (like Retool) for automations can deliver disproportionate value.
  • Reliability matters: Implementing robust error detection and alerting ensures peace of mind during automation.

The system we built works across multiple data centers, each tailored to its local time zone. Whether a deployment is in North America, Europe, or Asia, the workflow runs on local time in that region, so each data center scales according to its own peak usage and demand. We've been running this for over a year at Plotline and have seen it work reliably across managed DB providers.
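Sketching that per-region idea with the scale_factor function from earlier: each data center is evaluated against its own wall clock, so "peak" always means local peak. The region-to-timezone mapping here is illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Illustrative region-to-timezone mapping; the schedule is evaluated
# against each data center's local clock.
DATA_CENTERS = {
    "us-east": ZoneInfo("America/New_York"),
    "eu-west": ZoneInfo("Europe/Dublin"),
    "ap-south": ZoneInfo("Asia/Kolkata"),
}

def current_factors() -> dict[str, float]:
    """Current resource tier per region, using local time everywhere."""
    return {region: scale_factor(datetime.now(tz))
            for region, tz in DATA_CENTERS.items()}
```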

Conclusion

Dynamic database scaling is simpler than many engineers realize. By taking advantage of Admin APIs and automating with Retool, teams can significantly reduce costs without sacrificing performance. If you're interested in setting up a similar workflow, or have thoughts on database scaling, we'd love to hear from you!
