facebookexperimental/Robyn

Budget allocator: max_response gives a lower response (negative %) for Bounded than for Initial

NumesSanguis opened this issue · 4 comments

Project Robyn

Describe issue

The Allocator function with the maximize response option sometimes produces a lower response for Bounded than for Initial. My assumption was that the optimized result could never be lower than the Initial budget, so I'd like to understand whether this is expected behavior or a bug.

Note: The model is trained on conversions (not revenue).

Provide reproducible example

Due to the sensitive nature of the data, it would be hard to share, but I hope the Allocator 1-pager below shows enough detail.
R code to create the 1-pager (both the R and Python API give the same results):

# Example 2: maximize response for latest 40 periods with given spend
AllocatorCollect2 <- robyn_allocator(
  InputCollect = InputCollect,
  OutputCollect = OutputCollect,
  select_model = robyn_model,
  date_range = "last_40",
  channel_constr_low = 0.7,
  channel_constr_up = 1.2,
  channel_constr_multiplier = 3,
  scenario = "max_response",
  export = TRUE
)

3_180_19_reallocated_best_cpa_40-weeks_s

As you can see,

  • Initial: Spend 0% | Resp: 0.00%
  • Bounded: Spend +0% | Resp: -2.56%
  • Bounded x3: Spend +0% | Resp: +12.60%

This one is the weirdest, with Bounded giving a lower response, but Bounded x3 giving a higher one.
With different settings (different number of weeks, different constraints), I've also seen:

  • Both bounded and bounded x5 being higher than Initial, but bounded giving a higher +% than x5
  • Both bounded and bounded x5 being lower, with x5 even lower than bounded

Note: There is a period with 0 spend for some channels within the 40-week range. However, even when checking only the last 8 weeks (one channel with no spend at all, and one channel with 0 spend at some point), the weird behavior still shows up (Bounded having a higher +% than Bounded x5).

Environment & Robyn version


  • Robyn version: 3.10.5.9000
  • R version: R version 4.3.3 (2024-02-29) | Executed as Notebook in JupyterLab
    Platform: x86_64-conda-linux-gnu (64-bit)
    Running under: Debian GNU/Linux 11 (bullseye)

Matrix products: default
BLAS/LAPACK: /opt/conda/envs/my_env/lib/libopenblasp-r0.3.27.so; LAPACK version 3.12.0

Robyn version 3.11.0 did change the response, but did not resolve this issue.

I noticed this commit had been reverted, which might have been the reason:

fix: revert initial response calculation for lagged adstock in allocator
331b096

So I upgraded to Robyn 3.11.0, and while the response has changed, the response still looks wrong to me. Bounded is now positive for 40 weeks, but Bounded x3 became negative. See the new one-pager:
3_180_19_reallocated_best_cpa_40-weeks_Robyn-3 11 0_s

The allocator function has been called with the same arguments as the comment above, with the same model (made in v3.10.5).

This is quite an extreme case: your spend is on the level of millions, but your response is below 100, with s_sea_nb even below 1... what are the conversions? I haven't seen something like this before and assume it might be due to the stepping of the optimizer. Not sure though.

Hi @gufengzhou, the spend is not in dollars but in yen (divide the number by 150), which explains why the spend numbers are much higher. It's about high-value products, so few conversions with a large spend is expected (over $100 of media spend for 1 conversion is acceptable).

How would I change the step size of the optimizer? There seems to be no parameter for it:
https://search.r-project.org/CRAN/refmans/Robyn/html/robyn_allocator.html


Update: I'm not good with R, but after copying all the code of the allocator function into my Notebook, plus the relevant imports and the functions it calls, changing the following turned the negative allocation into a positive one:

"xtol_rel" = 1.0e-12  # From the default "xtol_rel" = 1.0e-10

for every occurrence of xtol_rel I could find.
I'm not sure this will solve the negative allocation in all scenarios, but it is at least a step in the right direction.
Any other tips? I found e.g. this in the documentation:

If there is any chance that an optimal parameter is close to zero, you might want to set an absolute tolerance with xtol_abs as well.
https://cran.r-project.org/web/packages/nloptr/vignettes/nloptr.html
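To illustrate why a too-loose relative tolerance can stop the optimizer short of the true optimum, here is a small self-contained sketch. It uses Python with scipy's Powell method (which exposes a relative `xtol` analogous to nloptr's `xtol_rel`) and a toy diminishing-returns budget split, not Robyn's actual allocator objective:

```python
# Toy budget-split problem (NOT Robyn's objective): allocate a fixed
# budget B between two channels with responses sqrt(x) and 2*sqrt(B - x).
# The analytic optimum is x* = B/5. Spends are on a "millions" scale,
# mimicking the yen-scale spend vs. <100 conversions mismatch above.
import numpy as np
from scipy.optimize import minimize

B = 5e6  # total budget, millions-scale units

def neg_response(x):
    x1 = np.clip(x[0], 0.0, B)  # keep spend inside [0, B]
    return -(np.sqrt(x1) + 2.0 * np.sqrt(B - x1))

x0 = [B / 2]
# Loose vs. tight relative tolerance on the parameters, analogous to
# changing nloptr's xtol_rel from its default to something smaller.
loose = minimize(neg_response, x0, method="Powell",
                 options={"xtol": 1e-2, "ftol": 1e-2})
tight = minimize(neg_response, x0, method="Powell",
                 options={"xtol": 1e-12, "ftol": 1e-12})

print("loose:", loose.x[0], "tight:", tight.x[0], "optimum:", B / 5)
```

With tight tolerances the solver lands essentially on x* = 1e6; with loose ones it may stop anywhere in a wide neighborhood, which is the kind of premature stop that can leave "Bounded" below "Initial".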

Future solution

Would it be possible to expose these nloptr parameters in the robyn_allocator() function rather than hardcoding them inside it?

Help

I'm not really familiar with R. Do you have a suggestion for solving the step-size issue that doesn't involve copying all the Robyn code into my Notebook? I will need this for deployment in combination with Python.

Thanks for digging into this. Looking at the documentation, it's indeed the step size that caused the issue.

Quote: "Stop when an optimization step (or an estimate of the optimum) changes every parameter by less than xtol_rel multiplied by the absolute value of the parameter. If there is any chance that an optimal parameter is close to zero, you might want to set an absolute tolerance with xtol_abs as well. Criterion is disabled if xtol_rel is non-positive."

@laresbernardo we can surface the parameters xtol_rel and xtol_abs in the Robyn allocator. Actually, I believe we could also automate this, e.g. adjust the step tolerances when the magnitudes of spend and depvar are too different, if you have time.
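One way the automation could work is to optimize in normalized units (spend divided by initial spend, so every decision variable is near 1) and map the result back, which makes a relative tolerance like xtol_rel meaningful regardless of currency magnitude. A hypothetical sketch of that idea, again in Python/scipy with a toy response function and channel bounds mirroring channel_constr_low/up = 0.7/1.2 (none of these names are Robyn internals):

```python
# Sketch of the scaling idea: optimize z = spend / init_spend (so z ~ 1)
# instead of raw yen-scale spend, then rescale. Toy response, not Robyn's.
import numpy as np
from scipy.optimize import minimize

init_spend = np.array([3e6, 2e6])  # e.g. yen-scale initial spends

def response(spend):
    # toy diminishing-returns response on a ~1e2 conversions scale
    return np.sum(50.0 * np.sqrt(spend / 1e6))

def neg_response_scaled(z):
    return -response(z * init_spend)  # z is ~1 regardless of currency

# Bounds mirror the channel constraints 0.7x..1.2x of initial spend.
res = minimize(neg_response_scaled, np.ones(2), method="Powell",
               bounds=[(0.7, 1.2), (0.7, 1.2)],
               options={"xtol": 1e-10, "ftol": 1e-10})
best_spend = res.x * init_spend  # map back to original units
print(res.x, best_spend)
```

Because the toy response is monotonically increasing, the optimizer should push both multipliers to the 1.2 upper bound; the point is that a relative tolerance now operates on numbers near 1 rather than near 1e6.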