denvned/isomorphic-relay

Question: how do you flush relay store?

Closed this issue · 23 comments

Hello!
In the example we can see that the Relay instance lives in a persistent scope.

Does that mean the query data will be cached in Node memory and shared between requests?

Or does this line, `const storeData = RelayStoreData.getDefaultInstance();`, create an empty instance for each request?

Very good question! Unfortunately, Relay 0.5 containers support only the global store, so the data is shared among requests. But if we are lucky, that might change in the next Relay release with the introduction of RelayContext. Or, before that, I could probably write another hack that localizes the store for each request. I will think about it.

Denis, thank you for the prompt reply!
As we can see, they publish updates pretty fast. Probably we can just wait and see how things go...
But the guys behind Facebook have their own priorities: they don't need to prefetch all page data, so they can render only component placeholders and fill them with data on the client. That might work for Facebook, but it won't do for spiderable websites.

What are the current problems on the way to production?

  1. The data store should live in request scope to prevent memory overflow and data access collisions.
  2. The IsomorphicRouter.prepareData method should be able to pass the request context to the GraphQL server, to support session-specific queries.
  3. We don't actually need to make network requests to the local GraphQL server, because they add unwanted latency to the response; a new local network layer is probably needed to query the GraphQL models directly in the request handler.
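Point 3 could be sketched as a custom Relay network layer that calls an injected execute function (e.g. graphql-js's `graphql(schema, query, root, context)`) instead of HTTP-ing to the local endpoint. The class below is hypothetical, an illustration of the idea rather than a working isomorphic-relay integration; the exact payload shape Relay expects from `resolve` is assumed:

```javascript
// Hypothetical "local" network layer for Relay classic (a sketch, not the
// actual isomorphic-relay implementation). Instead of sending an HTTP
// request to the local GraphQL server, it calls an injected execute()
// function directly, so each request handler can pass its own context
// and avoid loopback latency entirely.
class LocalNetworkLayer {
  constructor(execute, context) {
    this.execute = execute; // (queryString, variables, context) => Promise
    this.context = context; // per-request context (session, user, ...)
  }

  sendQueries(queryRequests) {
    return Promise.all(queryRequests.map(request =>
      this.execute(request.getQueryString(), request.getVariables(), this.context)
        .then(
          // payload shape assumed here; adjust to what Relay actually expects
          result => request.resolve({ response: result.data }),
          error => request.reject(error)
        )
    ));
  }

  sendMutation(mutationRequest) {
    return this.execute(
      mutationRequest.getQueryString(),
      mutationRequest.getVariables(),
      this.context
    ).then(
      result => mutationRequest.resolve({ response: result.data }),
      error => mutationRequest.reject(error)
    );
  }

  supports() {
    return false;
  }
}
```

A request handler would then construct one layer per request, passing in the session context, so no data has to travel over the loopback interface.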

Are there any other problems I didn't note here?

Looking at the above problems, I have an idea. We could use Relay methods to extract and compile the queries from the router and then execute the GraphQL query this way: yield graphql schema, query, context (CoffeeScript).

Once we have our data, we need to populate the routes. What do you think?

The first two problems are really important and have to be resolved. Unfortunately, that cannot be done cleanly until a local context and a local network layer are supported by Relay. The other way is to add more dirty hacks.

I don't think the third issue poses a real problem, though. It is probably already solved by relay-local-schema.

I guess the first two make it clear that Relay wasn't really designed with isomorphism in mind, whatever they say.

And that means an isomorphic solution is the wrong objective: we don't have to change anything on the client side, because it already works. What we really have to do is create a server module that extracts the queries out of the Relay routes, flattens them, executes them within a request context, and then populates the routes with data.

And that means an isomorphic solution is the wrong objective: we don't have to change anything on the client side, because it already works.

Sorry, I don't follow that logic.

and then populates the routes with data.

What do you mean?

I'm still just getting up to speed, so I may be missing some details.

I wonder why we use Relay to query on the server when we are aware of the blocking problems it has.

I'll try to play with relay and relay router today to see what else we can do...

when we are aware of the blocking problems it has.

What blocking problems?

Problems 1 and 2 prevent you from using Relay on the server in production.

Problem 1 does not necessarily cause memory overflows. Data access collisions are also not a problem, because the Relay store is capable of handling concurrent data access by multiple Relay containers. The only serious problem with the global Relay store that I see is that it is not easy to have different data views of the same business object for different users. But that can be solved by including the user ID in each Relay node ID, so that different users access different Relay nodes (holding user-specific data) for the same business object. Also, many applications do not require user-specific data views at all.
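The user-scoped node ID idea above can be sketched with a pair of hypothetical helpers, modeled on graphql-relay's `toGlobalId`/`fromGlobalId` (which base64-encode a `Type:id` string); the function names here are made up for illustration:

```javascript
// Sketch: global node IDs that include the user ID, so the shared Relay
// store hands different users different nodes for the same business object.
// Modeled on graphql-relay's toGlobalId/fromGlobalId, which base64-encode
// "Type:id"; here we encode "Type:userId:id" instead.
function toUserGlobalId(type, userId, id) {
  return Buffer.from([type, userId, id].join(':')).toString('base64');
}

function fromUserGlobalId(globalId) {
  const [type, userId, id] =
    Buffer.from(globalId, 'base64').toString('utf8').split(':');
  return { type, userId, id };
}
```

With this scheme, the same post fetched by two different users produces two distinct node IDs, so their per-user views never collide in the global store.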

Problem 2 is also just an annoying inconvenience, not a mission-critical problem. It is always possible to send the user ID as a query parameter. And again, many applications do not require session-specific data at all.

1. Imagine you have a 2 GB database and a 1 GB memory limit in Node: how would you flush old objects from the store? And how do you know you have to update the store when objects in the database were updated?

2. The session ID is not the only parameter in the request context, and you just can't handle it on the client; it's insecure.

how would you flush old objects from the store?

Restart the Node script after a certain number of HTTP requests?

How do you know you have to update the store when objects in the database were updated?

Currently, isomorphic-relay updates the data on each request (not all the data, of course, but only the requested data), so the data cannot be stale.

2. The session ID is not the only parameter in the request context, and you just can't handle it on the client; it's insecure.

Can you give more details, e.g. a concrete example? I don't follow yet.

Restart the Node script after a certain number of HTTP requests?

No, sorry

Currently, isomorphic-relay updates the data on each request (not all the data, of course, but only the requested data), so the data cannot be stale.

What about concurrency?

Can you give more details? I don't follow yet.

Normally node-passport is used to handle sessions, and it has a lot of different policies. If you ignore that, I'll have to create my own authentication layer in the model. That's weird.

No, sorry

I also would prefer not to do that. Of course, it has to be fixed.

What about concurrency?

Writing data to the store and reading data from the store are both synchronous operations, and the data in the store is not request specific. So there shouldn't be any concurrency problems.

If you ignore that, I'll have to create my own authentication layer in the model.

Why in the model, and not in the GraphQL resolvers as usual?

Writing data to the store and reading data from the store are both synchronous operations, and the data in the store is not request specific. So there shouldn't be any concurrency problems.

you said:

Currently, isomorphic-relay update data on each request

imagine a read from request 1 and a write from request 2 happening at the same time

imagine a read from request 1 and a write from request 2 happening at the same time

As I said, reads and writes to the store are synchronous operations. And as you probably know, JavaScript has a single-threaded execution model, so there can't be a read and a write at the same moment in time. The next operation is possible only after the previous one has finished.

I know about the single-threaded execution model, but I don't know the Relay store implementation. Are you sure they use a blocking model to perform store operations? Or maybe they use events, which do not guarantee the order of the performed operations.

Yes, all the Relay store data is kept in RAM, so there is no need for asynchronicity.
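The point about synchronous access can be illustrated with a plain in-memory map. This is an illustration of Node's execution model, not Relay's actual store code:

```javascript
// Minimal illustration of why a shared in-memory store needs no locks in
// Node: reads and writes are plain synchronous calls, and the event loop
// runs one handler at a time, so operations can never interleave midway.
const store = new Map();

function write(id, record) { store.set(id, record); } // runs to completion
function read(id) { return store.get(id); }           // before the next op

// Even if two requests arrive "at the same time", their handlers execute
// one after another; a read always observes a fully written record.
write('node:1', { name: 'first' });
write('node:1', { name: 'second' });
```

A read scheduled after both writes will always see the complete second record, never a half-written one.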

How is the work on 1 and 2 going? I see from relay issue #683 that work is progressing on isolating the global data. Is there anything I can help with?

@DylanSale The fix for 1 is waiting on facebook/relay#698, and the fix for 2 was submitted as facebook/relay#704. I can't see yet how you could help here; it all depends on Facebook's promptness. I wish they were quicker in reviewing PRs...

Since both of these have been merged into Relay master, does that mean the next release of Relay will make this possible?

@abhishiv The second one is not merged yet. But the first, and most important, one is merged, so with the next release of Relay it should be possible to use a request-local Relay store.

Since version 0.5 of isomorphic-relay, the Relay store is not global but local to each request. So I'm closing this, as the main problem is solved.

And I'll probably open a separate issue about passing request-specific context (e.g. using cookies) to the GraphQL server.

Thumbs up!