facebook/react

React Context value propagation performance

alexreardon opened this issue · 35 comments

Hi there!

I have observed a performance issue with context providers and value updates. I have spoken with @gaearon about this on Twitter, so he will have a bit more context.

Let's say you have a provider:

const CounterContext = React.createContext(1);
const Provider = CounterContext.Provider;

And you update the value passed to the provider:

render() {
  return (
    <Provider value={this.state.value}>{this.props.children}</Provider>
  );
}

All good so far.

Let's say you want to pass this value down the tree, but for performance reasons you do not want to re-render the whole tree. The only components you want to render are your consumer components (in our case CounterContext.Consumer).

A naive way would be to do something like this:

class Blocker extends React.Component {
  shouldComponentUpdate() {
    return false;
  }
  render() {
    return this.props.children;
  }
}

// ...

render() {
  return (
    <Provider value={this.state.value}>
      <Blocker>
        {this.props.children}
      </Blocker>
    </Provider>
  );
}
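
For illustration, a consumer rendered anywhere inside the blocked subtree still receives the new value; something like this (the Counter component is just illustrative):

// Rendered anywhere inside the Blocker-wrapped children (illustrative only)
const Counter = () => (
  <CounterContext.Consumer>
    {value => <span>Current value: {value}</span>}
  </CounterContext.Consumer>
);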

Even though no components in the tree are rendered except for the consumers, the update itself is very expensive. I suspect that the tree walking algorithm takes a while to run.

Standalone example

https://codesandbox.io/s/61jnr811kr

This example has about a 20-30ms render time for a simple counter update. In a production app with a list of 500 nodes (~500 * 10 = 5000 components) we were looking at update times similar to that of rendering the tree (150ms+).

A bit more context

I was trying to replace react-redux with a root StateProvider that would create a subscription to a store and pass the latest state into the context. The consumers would then pick up this update, run a selector, and re-render if the result value had changed. I had this all working in react-beautiful-dnd, but I found that updates through the context itself were too slow to be usable (you can see the relevant files here and here).
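
Roughly, the pattern looked like this (a simplified sketch rather than the actual react-beautiful-dnd code; StateContext, StateProvider and ConnectedConsumer are illustrative names):

// Simplified sketch of the approach described above (not the actual
// react-beautiful-dnd code; names are illustrative)
const StateContext = React.createContext(null);

class StateProvider extends React.Component {
  constructor(props) {
    super(props);
    this.state = { storeState: props.store.getState() };
  }

  componentDidMount() {
    // Single subscription at the root: every store update becomes a new context value
    this.unsubscribe = this.props.store.subscribe(() => {
      this.setState({ storeState: this.props.store.getState() });
    });
  }

  componentWillUnmount() {
    this.unsubscribe();
  }

  render() {
    return (
      <StateContext.Provider value={this.state.storeState}>
        {this.props.children}
      </StateContext.Provider>
    );
  }
}

// A consumer runs a selector against the latest state; the memoization that
// avoids re-rendering children when the selected value is unchanged is omitted here
const ConnectedConsumer = ({ selector, children }) => (
  <StateContext.Consumer>
    {storeState => children(selector(storeState))}
  </StateContext.Consumer>
);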

I looked into this and running your CodeSandbox example with production mode flags makes the update time take around ~1ms for me: https://codesandbox.io/s/7kw8qozz9x.

Did you try running with the production flags to see if your performance issue went away? Looking into the development bundle, it seems ~10ms of the time taken is due to performance.measure markers being really bad for performance (something the React team has already found in the past).

Looks like @alexreardon might be assuming

import "react/cjs/react.production.min.js";
import "react-dom/cjs/react-dom.production.min.js";

import React from "react";
import ReactDOM from "react-dom";

would switch on production mode, but I don't think that's correct.

That said, I'd still appreciate it if @trueadm could look into this locally with a prod build (and maybe fewer performance.now calls). In our previous discussion, @alexreardon said these issues occur in prod builds too, even without measurements.

@gaearon I added them in (well Ivan did) after I asked how to enable production mode on CodeSandbox and this was the recommended way for now. I'll keep digging into the issue though and see if something comes up.

The issue still existed, to a lesser extent, with production builds. In the real-world example I listed (a 500 item list), the time to hit all the consumers was still taking 20-30ms with production builds. Compare this against something like react-redux, which uses an event emitter to pass values down the tree, where the cost to hit a consumer is ~0ms.

I made the CodeSandbox afterwards to illustrate the issue, and I did not try to get it working with production builds.

This would kinda be a big deal for React-Redux, actually, because we're trying to switch to using createContext in version 6 instead of having all connected components be separate subscribers.

This also ties into the performance benchmarks we've been trying to run on our WIP branches.

I can try to gather more numbers if that helps. I thought the sandbox showed the issue well enough, but if more clarity is needed I can try to obtain more numbers.

Yes, I'd certainly appreciate any further details we can gather on this.

I'm also curious whether observedBits comes into play with this at all. If consumers are marked with bitmasks that shouldn't require an update, does React still have to traverse the entire tree, or does that speed things up?
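
(For reference, I'm thinking of the unstable changed-bits API, roughly along these lines; this is only a sketch, and the calculateChangedBits argument and unstable_observedBits prop are experimental and may change.)

// Sketch of the unstable changed-bits API: the second argument to createContext
// computes which "bits" changed, and a consumer can declare which bits it observes
const initialState = { countA: 0, countB: 0 };

const StateContext = React.createContext(initialState, (prev, next) => {
  let changedBits = 0;
  if (prev.countA !== next.countA) changedBits |= 0b01;
  if (prev.countB !== next.countB) changedBits |= 0b10;
  return changedBits;
});

// This consumer should only update when countA changes
const CountA = () => (
  <StateContext.Consumer unstable_observedBits={0b01}>
    {state => <span>{state.countA}</span>}
  </StateContext.Consumer>
);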

The sandbox isn't helpful by itself because it doesn't run in production mode and includes sandbox-specific overhead (e.g. they inject code into every loop to check if it's infinite).

To get these issues sorted, we'll definitely need repro cases as plain HTML files working against UMD production builds of React and ReactDOM.
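
Something along these lines would be a good starting point: a plain HTML page that loads react.production.min.js and react-dom.production.min.js from the UMD builds via script tags, plus a script like the sketch below (using React.createElement since there is no JSX compilation in a plain HTML file; the component names and counts are illustrative):

// repro.js: include after the react and react-dom UMD production <script> tags,
// so React and ReactDOM are available as globals
const e = React.createElement;
const CounterContext = React.createContext(0);

class Blocker extends React.Component {
  shouldComponentUpdate() {
    return false;
  }
  render() {
    return this.props.children;
  }
}

// A wide, blocked tree of consumers to make the propagation cost measurable
const consumers = [];
for (let i = 0; i < 5000; i++) {
  consumers.push(
    e(CounterContext.Consumer, { key: i }, value => e('span', null, value))
  );
}

class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = { value: 0 };
  }
  componentDidMount() {
    // Measure a single context value update after mounting
    setTimeout(() => {
      const start = performance.now();
      this.setState({ value: 1 }, () => {
        console.log('context update took', performance.now() - start, 'ms');
      });
    }, 1000);
  }
  render() {
    return e(
      CounterContext.Provider,
      { value: this.state.value },
      e(Blocker, null, consumers)
    );
  }
}

ReactDOM.render(e(App), document.getElementById('root'));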

@markerikson Let's leave observedBits out of this for now, I think it derails the issue a little bit. The issue is specifically about the default update performance, and if there's some extra overhead we'd want to look at it regardless of opt-in optimization APIs. In fact talking about these APIs here distracts us from fixing the bigger issue because it's tempting to say "we just need this opt-in thing to fix it".

Yeah, a self-contained HTML version using UMD builds would be a big help. I noticed also that there's another ReactDOM being used to render the CodeSandbox UI and that probably affected profiling a bit too.

I did a bit more digging. It turns out things are not as bad as I first observed, which is good! I must have been testing against dev builds at some point.

Tip: if you want to run the CodeSandbox examples locally as a React developer, you can just copy-paste them into babel-standalone/dev in the React repo. Then you can run them against your own custom builds of React (cheers @gaearon for the tip).

I have simplified my real world use case for clarity. You can find it here:

branch: state-provider-simplier (it will not allow parent renders through at all so reordering is broken, but it simplifies the use case)

Results (with production build of React 16.5.2):

  • with react-redux and critical render: ~3ms
  • using Context and critical render: ~5ms

(a 40% slow down)

Critical render: when dragging an item we only render that Draggable and not the rest of the tree

Which sort of makes sense at face value. Context updates walk the tree, whereas react-redux has a direct subscription, so its state update propagation is closer to 0ms.

It looks like context updates would be sufficient for most use cases, although it is a significant slowdown compared to an event emitter model.

For react-beautiful-dnd we aim for extremely fast updates, so a 2ms (~40%) slowdown is still significant for us.

This is relevant for react-redux, @markerikson, as leaning on context for propagating state values would be a slowdown compared to the current subscription pattern.

Looks like the zero-subscription-cost model used in the Context API is not absolutely free.

Ironically, one of the reasons we're moving to createContext is that I had hoped that switching to a single subscription and passing the state down via context, instead of N subscriptions, would be faster. But React-Redux v5's ability to do its memoization checks and bail out before even asking React to update does have an advantage over immediately calling setState() at the top of the component tree and letting React walk over everything.

FWIW, our current benchmarks indicate that the "single subscription + createContext" approach winds up being just a bit slower than v5's "multiple subscriptions" approach in artificial stress test scenarios.
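
For comparison, the v5-style "multiple subscriptions" approach is roughly this shape (heavily simplified; mapStateToProps and store here are placeholders):

// Heavily simplified sketch of a v5-style connected component: each instance
// subscribes to the store itself and bails out before React is ever involved
// if its selected slice of state did not change
class ConnectedSubscriber extends React.Component {
  constructor(props) {
    super(props);
    this.state = { selected: props.mapStateToProps(props.store.getState()) };
  }

  componentDidMount() {
    this.unsubscribe = this.props.store.subscribe(() => {
      const next = this.props.mapStateToProps(this.props.store.getState());
      // The memoization check: no setState() means React never walks this subtree
      if (next !== this.state.selected) {
        this.setState({ selected: next });
      }
    });
  }

  componentWillUnmount() {
    this.unsubscribe();
  }

  render() {
    return this.props.children(this.state.selected);
  }
}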

Would be nice if we can quantify exactly what the cost is inside React in this case.

Yep, with "old subscription" mechanics we could use "observedBits" on the sender side, not the receiver.

And I still don't get the real problem with "tearing". Is it actually so bad?

@alexreardon I also dug more into it last night and this morning. The new context will never be as fast as how Redux is updating state, simply because the new context model does more work behind the scenes and thus incurs more overhead (it has to traverse the entire tree, going through each fiber at a time). There might be things we can do to tweak the implementation of this function however, so I'll dig into that today.

@trueadm I think in the beginning we thought we would cache the traversal path between updates but then put it out of scope of the initial implementation. Maybe we can revisit this. The problem is then we'll need to somehow invalidate that cache or keep it consistent.

We could also have a linked list of all context consumers. That's still a traversal to do but it's less than the whole tree.

@trueadm @gaearon: Yeah, that's about what I expected. I knew calling setState() at the top of the tree on each Redux update would cause React to have to re-evaluate things, which is more work than just bailing out early. That's one of the reasons I've been so interested in the potential of observedBits to help skip some of that work. If there's any way the context implementation can be sped up (especially for cases where only context updates need to be propagated to the tree because of shouldComponentUpdate-type blocking), it would be very helpful.

Something like obtaining a ref to the Provider and updating the value outside of a React render would be great. We almost got unstable_read, so why not unstable_write?

@markerikson - that would only work for client-side stuff, where a "default" value could work for everyone, but not for SSR, where different clients share the same context but not the same value.

simply because the new context model does more work behind the scenes and thus incurs more overhead (it has to traverse the entire tree, going through each fiber at a time).

@trueadm why is that? How is the current context model different from consumers subscribing to the provider directly? Rendering a consumer (or using a hook 😉) has to traverse the tree up (which should be fairly lightweight) to find the provider it has subscribed to, but other than that I would imagine that providing a new value should have the same-ish cost as in the redux case. Is there any particular reason why the provider doesn't maintain a consumer registry? (I guess it would have to be non-flat, to trigger consumers top-down rather than in insertion order.)

I think we're seeing a similar issue over at styled-components, but with use of the context consumer components (there's a repro sandbox in that issue): styled-components/styled-components#2215

In a wide tree of the same component rendered many times, it's quite slow and the vast majority of the scripting time is in React-land. In styled-components v4, the stack of each component looks like this:

ForwardRef > ReactClass > StylesheetConsumer > ThemeConsumer* > Element

* for static components with no function interpolations, we skip the ThemeConsumer

I have to say this is still my biggest concern about moving React-Redux to use createContext internally. Still not sure exactly how much it's going to affect things in real-world apps, but it does seem like there's definite overhead involved in traversing the tree to find consumers.

I'm curious why the traversal would affect the initial mount. Maybe I'm missing something, but it seems we should only face performance issues with updates. Can someone clarify this?

One thing I've noticed is that this performance issue only shows up in development mode. If I build a production distribution and do the same performance tests, the issue simply goes away.

Does anyone have an idea why this is slow just in dev mode? I can take a stab at investigating that, but I'm not very familiar with the react code base.

Here is yet another codesandbox: https://codesandbox.io/s/xv1rz10p8q.

It wraps a bunch of consumers in a loop. This is where I first started observing the slow-to-render issue.
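
The shape of it is roughly this (simplified from the sandbox; ThemeContext and the component names are illustrative):

// Simplified shape of the sandbox: many sibling consumers of a single provider
const ThemeContext = React.createContext({ color: 'black' });

const Row = () => (
  <ThemeContext.Consumer>
    {theme => <div style={{ color: theme.color }}>row</div>}
  </ThemeContext.Consumer>
);

const List = ({ count }) => (
  <div>
    {Array.from({ length: count }, (_, i) => (
      <Row key={i} />
    ))}
  </div>
);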

Hey, can anyone on the React core team try to answer this? I could definitely try to help, but I'm afraid it will take me forever to understand what is going on. Please? 🙏 FYI @gaearon

We could also have a linked list of all context consumers. That's still a traversal to do but it's less than the whole tree.

Is this something that is likely to be on the roadmap soon? Given React-Redux has just released its new context subscription model I imagine you'll be hearing more about perf issues related to this...

Can I update the value of the Provider from the Context directly?

Consider I have:

const MyContext = React.createContext({})

and

<MyContext.Provider value={'5'}>
.........
</MyContext.Provider>

Now the value is '5'. Can I update it from MyContext directly, like:

MyContext.Provider.value = 6

I am aware that I can update it via a callback passed down to a Consumer, but is there any alternative?

@Dhirajbhujbal I am pretty sure that is not possible, and even if it were, it's certainly not a public API.

By providing a callback you can also do verification of the new value, etc.; it is much cleaner and there's no magic update detection.

If you're really worried about performance, because your application is being slow, it will likely not be due to context, and there are probably much better ways to optimize your app.

If you don't want to pass an object containing a callback, you can also create two contexts, with the second one just for the callback.
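
That two-context split could look something like this (a common pattern; the names are illustrative):

// One context for the value, one for the update callback, so components that
// only need the callback never re-render when the value changes
const CountContext = React.createContext(0);
const SetCountContext = React.createContext(() => {});

class CountProvider extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.setCount = count => this.setState({ count });
  }

  render() {
    return (
      <SetCountContext.Provider value={this.setCount}>
        <CountContext.Provider value={this.state.count}>
          {this.props.children}
        </CountContext.Provider>
      </SetCountContext.Provider>
    );
  }
}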

stale commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contribution.

stale commented

Closing this issue after a prolonged period of inactivity. If this issue is still present in the latest release, please create a new issue with up-to-date information. Thank you!

@alexreardon
Really loved the library, and it has all the features we need, but we do not use redux or react-redux in our project, so I wanted to have a look at the samples you mentioned trying with React Context in the first comment under "A bit more context". Can you please provide those sample files? Would really appreciate the help!