The Infrastructure Advantage Behind a DSP That’s 10x Faster

September 2, 2025

About The Author

Tushar Patel is SVP of Engineering at Aarki, where he’s helping rewire how a DSP thinks, from metal to model. With three decades of engineering leadership behind him, Tushar has built and scaled platforms across SaaS, enterprise, and ad tech, managing teams of 350+ across four continents. He’s seen it all: turnarounds, takeovers, and tech stacks in need of serious tuning.

In an industry where cloud-native is the default, Aarki is an intentional outlier. Our four data centers span the globe and sit physically close to top exchanges like Google AdX, Unity, and Fyber. They’re wired to handle over 5 million bid requests per second, with response times as fast as 20ms. That kind of throughput isn’t just a technical flex; it’s what lets our DSP stay fast, accurate, and responsive under pressure.

Over the last two years, we’ve turned it into a strategic edge. Model training, inference, and ad delivery all run under one roof. And this infrastructure makes our bid response times 4x to 10x faster than those of cloud-based DSPs.

Eliminating the Cost of Experimentation

One of the immediate benefits of owning our infrastructure: zero marginal cost for experimentation.

Unlike cloud-hosted ML and analytics teams constrained by per-query pricing and egress fees, our teams operate without cost-driven friction:

  • Bidder workloads: No penalty for high QPS; our infra is provisioned for peak throughput.
  • Model training/inference: We utilize dedicated GPUs for training and real-time inference.
  • Data warehouse queries: Analysts and ML engineers can query freely without worrying about the cost per terabyte scanned.

The result is an accelerated experimentation loop: more iterations, deeper insights, and faster learning.

Bid Response Time Is a Profit Lever

Programmatic bidding is fundamentally a race, and latency wins. The faster you can respond to a bid request, the more impressions you can compete for. We’ve architected our infra stack specifically for this:

  • Low-latency backbone between our four data centers, optimized at the hardware level.
  • Placement-aware routing of our critical systems (e.g., Aerospike, Kafka, and bidder services).
  • Purpose-tuned NIC settings to reduce interrupt overhead and network jitter.
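To illustrate the placement-aware routing idea above, here is a minimal Python sketch that sends each exchange’s traffic to the data center with the lowest measured round-trip time. The data center names and RTT figures are hypothetical examples, not Aarki’s actual topology or measurements.

```python
# Hypothetical sketch of placement-aware routing: choose the data center
# with the lowest measured round-trip time (RTT) to a given exchange.
# Names and millisecond values below are illustrative only.

# Measured RTTs in milliseconds from each data center to each exchange.
RTT_MS = {
    "us-east": {"adx": 4.0, "unity": 9.0},
    "eu-west": {"adx": 7.0, "unity": 3.0},
}

def route(exchange: str) -> str:
    """Return the data center with the lowest RTT to the exchange."""
    return min(RTT_MS, key=lambda dc: RTT_MS[dc].get(exchange, float("inf")))

print(route("adx"))    # us-east
print(route("unity"))  # eu-west
```

In production such a table would be refreshed from live network probes rather than hard-coded, but the routing decision itself stays this simple.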

Our average bid response time is 4x to 10x faster than that of cloud-based DSPs. Faster responses yield higher win rates and better ROI for our clients.

Grinding Through Complexity, Gaining Mastery In Infra

Owning the stack means our engineers don’t abstract away complexity; they engage with it.

Provisioning a new service isn’t “click and go.” It involves capacity planning, placement decisions, tuning, and understanding actual IOPS or packet-per-second limits. We believe this results in better engineers, ones who deeply understand performance, cost tradeoffs, and how systems fail.

It Comes With Challenges, Which We Accept

There are downsides:

  • Longer lead times for memory, disk, or compute capacity; no “scale up” buttons here.
  • Talent constraints because hiring skilled infra and network engineers who can operate at the bare-metal level is non-trivial.

But these are known constraints we design around, not blockers.

Cloud vs Colocation: Not Dogma, Just Data

We reevaluate this strategy annually. The cloud is improving, and our next data center may well be virtual. When that happens, we’ll run the two environments side by side and compare:

  • Bid latency: How quickly our system can respond to an ad exchange’s bid request. Lower latency = more auctions at higher win rates.
  • ML throughput: How much model training and inference can we push through the system at once, without slowing down delivery.
  • Query performance: How fast and freely analysts and ML engineers can run data queries to extract insights, without hitting cost or speed bottlenecks.
  • Cost per $ of revenue: The total infrastructure cost required to generate one dollar of revenue. Lower cost per dollar means higher margins.
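To make the comparison concrete, here is a minimal sketch of how one of these metrics, cost per dollar of revenue, could be tabulated side by side for the two environments. Every number below is invented for illustration; none are Aarki’s actual figures.

```python
# Illustrative side-by-side comparison of colocation vs. cloud on one
# metric: total infrastructure cost per dollar of revenue generated.
# All figures are made up for demonstration purposes.

def cost_per_revenue_dollar(infra_cost: float, revenue: float) -> float:
    """Infrastructure cost required to generate one dollar of revenue."""
    return infra_cost / revenue

# Hypothetical monthly figures for each environment.
colo  = {"p50_latency_ms": 20, "infra_cost": 1_000_000, "revenue": 10_000_000}
cloud = {"p50_latency_ms": 80, "infra_cost": 1_500_000, "revenue": 10_000_000}

for name, env in (("colo", colo), ("cloud", cloud)):
    cpr = cost_per_revenue_dollar(env["infra_cost"], env["revenue"])
    print(f"{name}: p50 latency {env['p50_latency_ms']}ms, cost/$ = {cpr:.2f}")
```

The same pattern extends to the other three metrics: run both environments against identical traffic, record each metric per environment, and compare.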

Until then, our current infrastructure continues to outperform.

It’s The Marketers Who Gain From The Metal

Owning your infrastructure isn’t glamorous. It’s hard to scale. It’s harder to hire for. But when performance marketing comes down to milliseconds and margins, we believe in owning the variables that make the difference.

Our infrastructure gives us three core advantages: speed, control, and freedom. It lets us move fast, train fast, and respond fast, without waiting on someone else’s cloud capacity. That shows up in your campaign as better win rates, lower CPIs, and smarter optimization.

The same infra that wins more bids per second also powers our deep learning pipeline: GPU-trained models are refreshed daily, directly inside our stack, so they keep learning, adapting, and outperforming yesterday’s version.
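A daily model refresh of this general shape can be sketched as a train-evaluate-promote loop. The `train`, `evaluate`, and `deploy` hooks below are hypothetical stand-ins for illustration, not Aarki’s actual pipeline interfaces.

```python
# Hypothetical sketch of a daily model refresh: retrain on the newest data,
# score the candidate offline, and promote it only if it beats the model
# currently serving. The hook functions are assumed stand-ins.

def refresh_model(train, evaluate, deploy, current_score: float) -> float:
    """Retrain, and deploy only if the new model outperforms the current one."""
    candidate = train()           # e.g. a GPU training job on today's data
    score = evaluate(candidate)   # e.g. offline AUC on a holdout set
    if score > current_score:
        deploy(candidate)         # promote into the live bidder
        return score
    return current_score          # keep serving the existing model

# Stub run: pretend today's model scores 0.71 vs. yesterday's 0.70.
best = refresh_model(lambda: "model-v2",
                     lambda m: 0.71,
                     lambda m: None,
                     current_score=0.70)
print(best)  # 0.71
```

The promotion gate matters: a model that regresses on the holdout set never reaches the live bidder, so daily retraining can only hold or improve quality.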

What runs under the hood got a full rebuild: a faster, smarter engine driving everything from training to bidding.

We’re not anti-cloud. It’s about using what works best for the job. Right now, bare metal still gives us the edge. And we’ll keep using it until the data says otherwise.

Got thoughts on this blog? Don’t let a good idea go unsaid. Tushar’s just an email away: tusharpatel@aarki.com.
