Latency measurement in a header bidding setup


#1

Hello,

In a header bidding setup, how are you measuring/monitoring the latency of each bidder?

Thank you


#2

Hi Benjamin,

In most cases we use a popular browser extension called Headerbid Expert to measure this sort of thing. As you can see from the screenshot below, the extension records the response time of each bidder in the auction.

One thing I would add is that the default view alone can be deceptive. If you click the ‘Page Analysis’ button in the lower right corner, it will show you not only each bid’s response time but also when the requests occurred relative to each other, which I have included below.

This can be especially useful when you are using multiple header bidding partners and they aren’t all in the same standardized wrapper.

Thanks for taking interest and feel free to reach out anytime!


#3

Hello Ted,

Yes, I knew about Headerbid Expert, but it only gives you measurements for your current session.
I’m looking for a tool to measure latency across “all” the users visiting my website.

Thank you


#4

It’s a tough one. I’ve been thinking about posting ad latency to a GA event, but I haven’t tried it yet.
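
In case anyone wants to try it, here’s a minimal sketch of what that could look like with Prebid.js and the analytics.js ga() tracker. The event category/action names are just placeholders, not anything standard:

```js
// Sketch: report each bidder's response time to Google Analytics as an
// event. Assumes Prebid.js (pbjs) and analytics.js (ga) are on the page.
pbjs.que.push(function () {
  pbjs.onEvent('bidResponse', function (bid) {
    ga('send', 'event', {
      eventCategory: 'header-bidding',           // placeholder category
      eventAction: 'bid-response',               // placeholder action
      eventLabel: bid.bidderCode,                // e.g. 'appnexus'
      eventValue: Math.round(bid.timeToRespond), // latency in ms
      nonInteraction: true                       // don't skew bounce rate
    });
  });
});
```

One caveat: each event counts as a GA hit, so a busy site will burn through the free hit quota quickly, which is exactly the limit the next reply runs into.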


#5

We use a custom system to log all bid data. GA has a limit of 10 million hits per month, so instead we log every bid – bidder, amount, bid response time, ad size, etc. – to a small NodeJS app that we built, which then loads the data into the cloud (BigQuery in our case).
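
For anyone wondering what the browser side of a setup like this can look like, here’s a minimal sketch. It assumes a recent Prebid.js (where the auctionEnd payload includes bidsReceived) and a hypothetical /bids endpoint on the NodeJS app; the field names are illustrative:

```js
// Sketch: once the auction ends, ship every bid to a logging endpoint.
// '/bids' and the row fields are hypothetical, not from the actual setup.
pbjs.que.push(function () {
  pbjs.onEvent('auctionEnd', function (auction) {
    var rows = (auction.bidsReceived || []).map(function (bid) {
      return {
        auctionId: auction.auctionId,
        bidder: bid.bidderCode,
        cpm: bid.cpm,
        size: bid.size,                  // e.g. '300x250'
        timeToRespond: bid.timeToRespond // latency in ms
      };
    });
    var payload = JSON.stringify(rows);
    // sendBeacon survives page unloads; fall back to fetch otherwise.
    if (navigator.sendBeacon) {
      navigator.sendBeacon('/bids', payload);
    } else {
      fetch('/bids', { method: 'POST', body: payload, keepalive: true });
    }
  });
});
```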

This lets us analyze not just latency but other cool stuff like the marginal contribution of each bidder, i.e., what the revenue would have been if we had removed a particular bidder who has high discrepancy, high latency, or doesn’t pay on time.
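
To make the marginal-contribution idea concrete, here’s a toy calculation over one auction’s logged bids: the revenue without a given bidder is approximated by the best remaining bid. This deliberately ignores price granularity, floors, and dynamic allocation:

```js
// Toy sketch: what would this auction have cleared at without one bidder?
function revenueWithoutBidder(bids, excludedBidder) {
  var remaining = bids.filter(function (b) { return b.bidder !== excludedBidder; });
  if (remaining.length === 0) return 0;
  return Math.max.apply(null, remaining.map(function (b) { return b.cpm; }));
}

// Marginal contribution = best CPM with everyone minus best CPM without them.
function marginalContribution(bids, bidder) {
  var withAll = Math.max.apply(null, bids.map(function (b) { return b.cpm; }));
  return withAll - revenueWithoutBidder(bids, bidder);
}

// Example: removing 'slowBidder' here only costs ~0.10 of CPM per auction.
marginalContribution([
  { bidder: 'slowBidder', cpm: 1.50 },
  { bidder: 'fastBidder', cpm: 1.40 }
], 'slowBidder'); // ≈ 0.10
```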

And it doesn’t cost much either: under $100 per month for the infra and storage.


#6

Thank you.
I just found this interesting article: https://www.analyticspros.com/blog/google-analytics/streaming-prebid-data-google-bigquery/

But if I can find a company providing a Prebid analytics adapter + visualization tools, I’m ready to pay for it.


#7

Thanks for the clarification, Benjamin!

While I haven’t had the opportunity to investigate it thoroughly, http://prebidanalytics.com/ has been gaining a lot of traction in the community. Not only is it supposed to calculate the average response time of each bidder, but it also records the instances in which a bidder misses the timeout.

It is also very painless to integrate into your Prebid configuration code!
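
For reference, wiring up a standard Prebid analytics adapter looks roughly like this. The provider string and options below are my assumptions, so check their docs for the exact values:

```js
// Sketch: enabling an analytics adapter in the Prebid setup code.
// 'roxot' and 'publisherIds' are assumed names; see the adapter docs.
pbjs.que.push(function () {
  pbjs.enableAnalytics([{
    provider: 'roxot',
    options: { publisherIds: ['YOUR_PUBLISHER_ID'] }
  }]);
});
```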


#8

Thank you, Ted. I’ll test it tomorrow.


#9

This is a great find! Our setup is conceptually exactly the same, with some implementation differences, e.g.:

  • Instead of using BigQuery’s real-time streaming API to insert data, we do batch inserts, because those are free and async. So our data in BigQuery is delayed by 30 minutes (see the sketch after this list).
  • Instead of just the winning bids, we record all bids.
  • We don’t do any sampling.
  • We also record whether the bid won but didn’t render (i.e., it likely lost out to dynamic allocation).
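
For the curious, here’s a rough sketch of that batch-load step with the @google-cloud/bigquery NodeJS client. The dataset/table names, file path, and schedule are placeholders, not our actual setup:

```js
// Sketch: buffer incoming bid rows to an NDJSON file, then load the file
// into BigQuery every 30 minutes as a batch load job (free, unlike streaming).
const fs = require('fs');
const { BigQuery } = require('@google-cloud/bigquery');

const bigquery = new BigQuery();
const BUFFER_FILE = '/tmp/bids.ndjson';

// Called by the HTTP endpoint for each batch of bid rows it receives.
function bufferRows(rows) {
  fs.appendFileSync(BUFFER_FILE, rows.map((r) => JSON.stringify(r)).join('\n') + '\n');
}

setInterval(async () => {
  if (!fs.existsSync(BUFFER_FILE)) return;
  const pending = BUFFER_FILE + '.loading';
  fs.renameSync(BUFFER_FILE, pending); // rotate so new rows keep buffering
  try {
    await bigquery
      .dataset('header_bidding') // placeholder dataset
      .table('bids')             // placeholder table
      .load(pending, { sourceFormat: 'NEWLINE_DELIMITED_JSON', autodetect: true });
    fs.unlinkSync(pending);
  } catch (err) {
    console.error('BigQuery load failed:', err);
  }
}, 30 * 60 * 1000);
```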

@Ted_Rand The tool you found is from Roxot. I generally don’t like one ad network getting all my data, but it’s free, so I can see how it can be appealing. @jvdc has used this tool IIRC, so perhaps he can chime in with his experience.


#10

Yes, we’re using Roxot Analytics. It’s still in the testing stage, and we’re working with Roxot to improve it further. It’s extremely promising, very detailed, and, above all, free. There are some reporting discrepancies (which is to be expected), and once again, we’re working with Roxot to reduce them.

Definitely a great tool!


#11

@jvdc We are curious! Can you give us the pros and cons of the tool from your testing – UX, discrepancies, reporting on winning bids vs. all bids? We’ve been looking into it since you told us about it. Thanks! It might be a resource we can suggest to our publishers.


#12

Sure!

Pros:

  • Easy to use and to install
  • Reporting on CPM, timeout, win rate, bid rate, and so on
  • Essentially, everything you’d need
  • Helps figure out which networks are worth it, and what timeouts you should use

Cons:

  • Still in development; we’re their first tester
  • Some discrepancies in reporting (but they’re working on it)