dbt-labs/snowplow

Performance issue with model snowplow_page_views

sphinks opened this issue · 9 comments

I believe v0.7.0 introduced code that dramatically slows down building of the snowplow_page_views model. Merge: 91e42ef
Lines 45-57: https://github.com/fishtown-analytics/snowplow/blob/91e42efaef97a5f52ba9965155b3d50ac6d45f6b/macros/adapters/default/page_views/snowplow_page_views.sql#L45
It was commented as

-- we need to grab all events for any session that has appeared
-- in order to correctly process the session index below

We started collecting Snowplow data about 8 months ago, and snowplow_web_events currently holds about 73M records. Joining to that table as-is looks painful for any mature project. I suppose this part of the code could be optimized (e.g. by only taking events from the last n days). As far as I know, Snowplow sessions last 30 minutes by default, so 99% of the time checking events from the last several days will be enough. Am I following the idea behind introducing that code correctly?
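For illustration, here is a rough sketch of the kind of bounded lookback I have in mind (not the actual macro code; the 30-day window, the CTE names, and the Snowflake/Redshift-style dateadd syntax are just placeholders):

```sql
-- Hypothetical sketch: instead of grabbing the full history of events for
-- every session that has appeared, only consider events collected inside a
-- bounded window, since a 30-minute session will not span more than a few days.
with relevant_sessions as (

    select distinct domain_sessionid
    from {{ ref('snowplow_web_events') }}
    where collector_tstamp > dateadd('day', -30, current_date)

),

bounded_events as (

    select events.*
    from {{ ref('snowplow_web_events') }} as events
    inner join relevant_sessions
        on events.domain_sessionid = relevant_sessions.domain_sessionid
    -- the same bound on the join side avoids scanning all 73M rows
    where events.collector_tstamp > dateadd('day', -30, current_date)

)

select * from bounded_events
```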

Hey @sphinks - I think you're totally right here - the changes we introduced in 0.7.x were optimized for correctness and not for speed. I think this is one case where it's better to be 99% correct and fast than 100% correct and slow. We have plans to revisit these models to make them a lot more performant, and I think a big part of that will be minimizing the "lookback" period as you're describing!

Is it possible to fix this part of the code right now? That would let us upgrade to the new branch and pick up a lot of fixes while waiting for the big refactoring. I can prepare a PR if my idea sounds OK.

Sure, feel free to submit a PR! I'd love to take a look at your approach here.

@drewbanin I have created PR #69. Any idea how to check whether 30 days is enough? I compared the query plans for the original and the new code, and the new one looks much better. But the issue is that every project has its own volume of events per month, so it is better to set as small a session lookback interval as possible in any case.

In my experience on a previous project, where we had 2.5 months of Snowplow data (285mm events):

99.82% arrived within 1 hr of firing (difference between derived_tstamp and collector_tstamp)
99.88% arrived within 24 hr
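(Roughly, those percentages can be computed with something along these lines; this is only a sketch, with Snowflake-style datediff syntax and an assumed table name:)

```sql
-- Rough sketch of how the arrival-lag percentages above could be computed.
select
    count(*) as total_events,
    avg(case when datediff('minute', derived_tstamp, collector_tstamp) <= 60
             then 1.0 else 0.0 end) as pct_within_1_hr,
    avg(case when datediff('minute', derived_tstamp, collector_tstamp) <= 60 * 24
             then 1.0 else 0.0 end) as pct_within_24_hr
from snowplow_web_events
```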

We want to remove the full look-back and rebalance speed and correctness in 0.8.x. I wonder if we should even look back 30 days?

@jtcohen6 @sphinks I'd be inclined to make it less than 30 days for sure. We can either do:

  1. Always reprocess the last 2-3 days of data
  2. Dynamically find the last day that dbt ran, then reprocess data in that interval

I think the second approach is more aligned with how dbt usually works. You guys buy that?
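As a very rough sketch, option 2 could lean on dbt's standard incremental pattern (is_incremental() and {{ this }} are standard dbt; the one-day buffer below is just an example, not a decided value):

```sql
select *
from {{ ref('snowplow_web_events') }}

{% if is_incremental() %}
-- only reprocess events collected since the last dbt run, padded by a small
-- buffer to pick up sessions that straddle the boundary
where collector_tstamp > (
    select dateadd('day', -1, max(collector_tstamp)) from {{ this }}
)
{% endif %}
```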

@drewbanin But as far as I understand, the second approach does not reflect the idea of the initial commit. In the current code we take events starting from the last date dbt ran, but look through the whole history to find matches for each session. If we replace the history match with the interval since the last dbt run, do we gain any new information? I mean, the history interval we reprocess would be equal to the interval of unprocessed events. Am I misunderstanding the second approach?

I have created PR #71 to fix the issue.

Closing this issue, since we merged #71 and released 0.8.0