Shared Storage for Anti-abuse mitigations
kmaini2023 opened this issue · 1 comment
In the documentation, I read that Shared Storage can be used for the following use case:
Anti-abuse, anti-fraud, and web security organizations often use proprietary techniques to detect malicious users, whether automated bots or real humans trying to cause harm. It's possible to test many different strategies here, whether that's using the URL Selection output gate to encode a user trustworthiness rating or using the Private Aggregation output gate to build datasets for anomaly detection.
I would like to know whether we need to enable any flag (such as the Privacy Sandbox Ads APIs flag) to achieve this. Does the flag have to be specifically the Privacy Sandbox Ads APIs one? If a flag does have to be enabled, how would this work when we are trying to identify an attacker who is enumerating the same account across different products (domains)? Say we have a solution that identifies the attacker by injecting a fenced frame worklet that leverages Shared Storage, but it requires the flag to be enabled in the attacker's browser. Please advise if there is a gap in my understanding. Thank you!
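For concreteness, here is a minimal sketch of the kind of worklet I have in mind for encoding a trust rating through the URL Selection output gate. The module name, operation name, storage key, and rating scheme are all hypothetical placeholders, not anything from the documentation:

```javascript
// trust-worklet.js -- a minimal sketch of a Shared Storage worklet that
// encodes a user trustworthiness rating via the URL Selection output gate.
// 'trust-rating' and 'trust-rating-url-selector' are hypothetical names.
class TrustRatingUrlSelector {
  async run(urls, data) {
    // Read a rating previously written (possibly from another site) with
    // sharedStorage.set('trust-rating', ...). Inside the worklet scope,
    // sharedStorage.get() is readable.
    const rating = await sharedStorage.get('trust-rating');
    const index = Number.parseInt(rating, 10);
    // The operation must return an index into `urls`; fall back to index 0
    // (the default URL) when no valid rating has been stored.
    return Number.isInteger(index) && index >= 0 && index < urls.length
      ? index
      : 0;
  }
}

// `register` only exists inside the Shared Storage worklet scope; the guard
// lets this sketch also be loaded in other environments for testing.
if (typeof register === 'function') {
  register('trust-rating-url-selector', TrustRatingUrlSelector);
}
```

The embedding page would then load the module with `await window.sharedStorage.worklet.addModule('trust-worklet.js')` and call `sharedStorage.selectURL('trust-rating-url-selector', urls, ...)` to pick a URL for a fenced frame, so the rating itself never leaves the worklet.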
Shared Storage is on by default, but its output gates can be turned off through the Privacy Sandbox Ads Privacy Settings. It is possible for an attacker to turn off the Privacy Sandbox APIs in their browser, and we understand that this may impact anti-abuse applications. However, since we see Shared Storage as a replacement for third-party cookies and site data, this is similar to how an attacker can clear cookies and other site data today.