Unified Request Flow for Server-side Auctions
suprajasekhar opened this issue · 3 comments
Could the proposed Bidding and Auction Services API potentially pose the following challenges:
- High adoption and maintenance costs for ad tech arising from the new integrations, disjoint from the existing real-time bidding data flows.
- Increased network bandwidth from the additional data flows introduced between the ad techs, the client, and the Bidding and Auction Services API.
- Increased end-user latency due to the sequential network requests for the execution of the contextual and remarketing auction and bidding functions.
We wonder if this idea of using trusted servers to execute remarketing auction and bidding functions can be better integrated with the existing real-time bidding data flows.
For instance, the traditional ad requests from the client to an exchange, and in turn the traditional bid requests from the exchange to bidders, could include a user's private interest group data that is encrypted by the client (such as the Chrome browser or Android). Only the trusted server would be able to decrypt and process that data, along with contextual signals about the current site or app.
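A minimal sketch of what that could look like on the wire, assuming an OpenRTB-style request. The extension field name (`encrypted_ig`) and the helper below are purely illustrative assumptions, not part of any published spec; the exchange and bidders would treat the blob as opaque, and only a TEE-hosted service holding the private keys could decrypt it.

```python
import base64
import json

def attach_encrypted_interest_groups(bid_request, ciphertext):
    """Return a copy of the request carrying the opaque ciphertext.

    The original request is not mutated; the blob is base64-encoded so it
    can travel inside an ordinary JSON bid request.
    """
    req = json.loads(json.dumps(bid_request))  # deep copy via round-trip
    req.setdefault("ext", {})["encrypted_ig"] = (
        base64.b64encode(ciphertext).decode("ascii")
    )
    return req

# An ordinary contextual OpenRTB-style request (illustrative values).
contextual_request = {
    "id": "req-123",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {"page": "https://publisher.example/article"},
}

# Stand-in for ciphertext produced on-device by the browser or OS.
opaque_blob = b"\x01\x02interest-group-ciphertext"

unified_request = attach_encrypted_interest_groups(contextual_request, opaque_blob)
print(unified_request["ext"]["encrypted_ig"])
```

The point of the sketch is that the contextual and remarketing data ride in a single request, rather than in a separate flow to the Bidding and Auction services.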
The same trust model outlined in FLEDGE services overview could be used to verifiably enforce that the encrypted interest group data can only be used as intended in FLEDGE auctions:
- Only the expected binary version of the FLEDGE auction or bidding services, hosted in a secure, hardware-based trusted execution environment (TEE), can decrypt the interest group data, using private keys from two key management systems.
- Custom, ad-tech-provided auction scoring and bidding functions run inside a sandboxed environment within a TEE to prevent any leakage or reuse of information across individual requests and to ensure the desired restrictions on allowed data usage and FLEDGE API semantics.
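To make the "private keys from two key management systems" idea concrete, here is a toy sketch using simple XOR secret sharing: neither key service alone can reconstruct the decryption key, and only code running inside an attested TEE would be handed both shares. This is an illustration of the splitting principle only; the real services use HPKE with coordinator-operated key services, which this does not attempt to model.

```python
import os

def split_key(key: bytes) -> tuple:
    """Split a key into two XOR shares; each share alone reveals nothing."""
    share_a = os.urandom(len(key))
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = os.urandom(16)          # the decryption key
a, b = split_key(key)         # one share per key management system
print(recombine(a, b) == key) # True: both shares together recover the key
```

Because `share_a` is uniformly random, each share on its own is statistically independent of the key, which is what lets the two key systems act as mutual checks on each other.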
Thanks for your feedback. There is a lot to unpack in your post, so I will call out the questions we are attempting to address and then provide a response to each.
Q: "Increased network bandwidth from the additional data flows introduced between the ad techs, client and the Bidding and Auction Services API."
A: Bidding and Auction services will optimize for overall end-to-end latency. Bidding & Auction services reduce the number of network communications compared to on-device FLEDGE to one umbrella request between client and server, with server <-> server communications in the cloud. The overall flow, we believe, is analogous to how sell-side and buy-side servers interact today.
We are aware that the current design requires data to be sent from client to server and we are designing ways to optimize the umbrella payload.
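One plausible direction for payload optimization (an assumption on my part, not the announced design) is to serialize the device's interest-group state compactly and compress it before sending the umbrella request, since repeated field names and owner origins compress well:

```python
import gzip
import json

# Illustrative interest-group state: many groups sharing repetitive
# structure, as a device enrolled with several buyers might hold.
interest_groups = [
    {"owner": "https://dsp-a.example", "name": f"segment-{i}",
     "bidding_signal_keys": ["k1", "k2"]}
    for i in range(50)
]

raw = json.dumps(interest_groups).encode("utf-8")
packed = gzip.compress(raw)
print(f"{len(raw)} bytes raw -> {len(packed)} bytes compressed")
```

Structured binary encodings would likely do better still; the sketch only shows that the umbrella payload has a lot of compressible redundancy.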
Q: "Increased end user latency due to the sequential network requests for the execution of the contextual and remarketing auction and bidding functions."
A: Bidding and Auction services is an optimization for on-device FLEDGE. The current architecture is designed for that use case. At the moment, we are not exploring architecture beyond the performance optimization use case.
Q: "High adoption and maintenance costs of ad tech arising from the new integrations, disjoint from the existing real-time bidding data flows."
A: Privacy Sandbox does introduce some paradigm shifts that will require new integrations to be built. However, these shifts bring privacy benefits to the end user.
I think the design @suprajasekhar suggests is an improvement on the current design in several ways.
- it unifies the contextual and remarketing request flow
- it allows injecting data generated outside the secured path in an efficient way (like an embedding vector for an ML model)
- it can more directly leverage existing bidding infrastructure (OpenRTB)
The current FLEDGE proposal does not work in practice without a contextual auction happening first. The contextual auction provides the channel to communicate which buyers want to participate in the on-device auction and gives them the chance to provide buyer-specific contextual data to the bidding function.
As a buyer does not know whether any of the custom audiences they defined are active on the device, they will need to respond with their intention to participate in the on-device auction, plus the additional buyer data, every single time.
The bidding and auction service does not improve upon this, as it only passes the seller auction config.
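The dependency described above can be sketched as follows. The auction-config field names (`interestGroupBuyers`, `perBuyerSignals`) follow the FLEDGE explainer, but the shape of the contextual bid responses is an assumption for illustration:

```python
def build_auction_config(seller, contextual_responses):
    """Assemble the seller's on-device auction config from contextual responses.

    Every buyer must answer in every contextual response, because only the
    buyer's own response can opt it into the on-device auction and carry its
    per-buyer signals -- the buyer cannot know whether any of its custom
    audiences exist on this particular device.
    """
    config = {"seller": seller, "interestGroupBuyers": [], "perBuyerSignals": {}}
    for resp in contextual_responses:
        if resp.get("joinOnDeviceAuction"):
            buyer = resp["buyerOrigin"]
            config["interestGroupBuyers"].append(buyer)
            config["perBuyerSignals"][buyer] = resp.get("buyerSignals", {})
    return config

# Illustrative contextual responses from two buyers.
responses = [
    {"buyerOrigin": "https://dsp-a.example", "joinOnDeviceAuction": True,
     "buyerSignals": {"budget_hint": 0.8}},
    {"buyerOrigin": "https://dsp-b.example", "joinOnDeviceAuction": False},
]

config = build_auction_config("https://ssp.example", responses)
print(config["interestGroupBuyers"])  # only the buyer that opted in
```

The cost the comment points at is visible here: dsp-a must repeat this opt-in and signal payload on every request, whether or not the device holds any of its audiences.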
> A: Bidding and Auction services is an optimization for on-device FLEDGE. The current architecture is designed for that use case. At the moment, we are not exploring architecture beyond the performance optimization use case.
Given that FLEDGE is still a work in progress, and that the explainer states "While this explainer talks about just the remarketing flow with Bidding & Auction Services, we plan to update and publish an extension of this proposal for unifying the contextual and remarketing flow.", this proposal already seems to go in the right direction: overall latency and performance would benefit, and so would resource efficiency.
It would be great if not just "performance" but the latter were considered too, given that:
> Privacy Sandbox does introduce some paradigm shifts that will require new integrations to be built. However, these shifts bring privacy benefits to the end user.
In the end, remarketing/re-engagement campaigns need to be economically viable under the new paradigm. If they are not, all the effort on the design and implementation side is wasted: FLEDGE will see no adoption, and the impact will be worse than ATT, at least on Android.
Currently, several features directly reduce that economic viability:
- an order-of-magnitude reduction in the performance of predictive models if only aggregated data is available for training (see the results of the Criteo Privacy Preserving ML Competition, WICG/turtledove#272). While there might be ways to improve upon this, it puts a lot of additional effort on the buy side.
- well-performing personalised creatives (i.e. those containing specific items a user has interacted with) are no longer possible
- increased infrastructure cost: the buy side is forced to run a key/value server or the Bidding and Auction services, both of which are unproven and unoptimised. The increase is significant, especially for a buyer that does not currently use the cloud: bandwidth costs are high, confidential computing adds cost per VM, and while V8 is fast, high-performance bidding systems are usually implemented in C++, Rust, or Go, so the same throughput needs more resources. For example, GCP compute is 2x+ and egress is 4x+ compared to non-cloud options (and that is with a multi-million commitment in place).
- teams on the buy side need to reimplement half of the stack, from custom audience management to bidding logic, reporting, and various ML models.
Without some attention to economic viability on the buy and sell side, it is likely that the majority of what works now will be net negative under FLEDGE. 🤷
I am closing this since B&A supports the unified flow. We will discuss further optimizations to the architecture in the future. Thanks!