Rolling back cache due to error: InvalidPayloadError: unsupported transition from a non-entity value to an entity
finnigantime opened this issue · 16 comments
TL;DR - make sure all fragments for that entity select the id field (or that none of them do; but I'm pretty sure that's not what you want in this case).
Let's leave this issue open as a reminder that that error could use some more documentation
I'm seeing this error as well, but not for a cache rollback. I'm trying to writeFragment and change an entity that was a sub-entity of a larger query. A 'Game' object (fetched with a getGame query) has multiple Round objects in an array. I'm trying to writeFragment to one of the rounds by _key (the id field), but adding the id field as suggested here does not work. Do I need to have a getRound query for each Round object and do a writeQuery instead? One fetch of a game and its 12 rounds seems more efficient. I just want to update the 'scores' key on the round object via this fragment. A view of the cache is here:
Hmm, @boorad, the cache generally doesn't care how you reach a given entity (via query, fragment, etc.); for this error, I think what is likely is that you've queried an entity once without _key (the game in getGame?), and are attempting to write a fragment that now includes a _key for that same (relative) path in the graph.
If possible, can you paste in the gql for the query that's executed, as well as the fragment passed to writeFragment? You can probably boil this down to just those two calls triggering the error.
So, first, here's my cache config:
import { Hermes } from 'apollo-cache-hermes';

const cache = new Hermes({
  entityIdForNode: object => object._key || undefined
});
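In other words (as I understand it), any object in a payload that has a _key becomes its own entity in the cache, keyed by that value; objects without a _key are stored inline under their parent node. For example:

// Illustration only, not part of the app code above.
const entityIdForNode = object => object._key || undefined;

entityIdForNode({ _key: 'round-42', seq: 1 }); // 'round-42' -> its own entity
entityIdForNode({ hole: 3, values: [] });      // undefined  -> stored inline under its parent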
Then, the getGame query:
import gql from 'graphql-tag';

export default gql`
  query GetGame($game: String!) {
    getGame(_key: $game) {
      rounds {
        _key
        date
        seq
        scores {
          hole
          values {
            k v ts
          }
        }
        player {
          _key
          name
          short
        }
      }
    }
  }
`;
and fragment:
const scoreFragment = gql`
  fragment my_round on Round {
    scores
  }
`;
Perhaps the fragment has to use the key on Game, then on Round to get where we need? i.e. a more nested gql.
And for the writeFragment call, you're doing something like this?

cache.writeFragment({
  id: 'someroundid',
  fragment: scoreFragment,
  data: {…}
});
correct:
let res = cache.writeFragment({
  id: postScore._key,
  fragment: scoreFragment,
  data: {
    scores: newScores
  }
});
postScore._key is the round _key from the postScore mutation, you see, not blacklisted ;)
import gql from 'graphql-tag';

export default gql`
  mutation PostScore($round: String!, $score: HoleInput!) {
    postScore(round: $round, score: $score) {
      _key
    }
  }
`;
Welp, I think I'm going to call this one a bug! Thanks for the details here, I've got a reproduction case that hits this 👍
The cache is getting confused by (really, ignoring) the fact that you've explicitly provided an id for that entity (the Round), but have not included the identity in the payload to merge.
So, as a workaround for now, add _key to your fragment (and pass the correct value for it via writeFragment's data):
fragment my_round on Round {
  _key
  scores
}
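The corresponding writeFragment call would then look roughly like this (same names as above; newScores is whatever you were already writing):

let res = cache.writeFragment({
  id: postScore._key,
  fragment: scoreFragment, // now selects _key as well as scores
  data: {
    _key: postScore._key, // include the identity in the payload being merged
    scores: newScores
  }
});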
(This may be pretty tricky to fix, without adding knowledge of schemas to Hermes, too)
@nevir your workaround gets past the error, but doesn't update my UIs that are supposedly subscribed to the cache changes (for getGame, I thought). Although I do have a convoluted path through the components, which I fear is the issue. I still may not have the hang of this GraphQL and Apollo thinking yet.
@jamesreggio has convinced me that the current behavior is not desirable 👍
The main thing that pushed it over the edge is an example where the cache becomes, effectively, corrupt if you end up writing the wrong type of node to the cache first.
Say you have a top-level viewer field that is a User with an id, but you accidentally kick off a query that fails to select id before anything else. All subsequent (well-formed) queries will fail, and be unable to recover.
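To make that concrete, here's a hypothetical pair of queries (names invented for illustration) that would hit this:

import gql from 'graphql-tag';

// If this one executes first, viewer gets cached as a plain (non-entity) value...
const viewerNameOnly = gql`
  query ViewerNameOnly {
    viewer {
      name
    }
  }
`;

// ...and every later, well-formed query like this one then fails with
// "unsupported transition from a non-entity value to an entity",
// because viewer now carries an id.
const viewerWithId = gql`
  query ViewerWithId {
    viewer {
      id
      name
    }
  }
`;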
So, proposed new behavior is (rough sketch below):
- in both cases (entity -> non-entity, non-entity -> entity), ignore the previous values and write the new ones as if there were nothing cached on that field.
- emit a warning, as this is almost certainly caused by an application bug
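A rough sketch of that rule (not Hermes's actual internals; entityIdOf below just mirrors the entityIdForNode config from earlier in this thread):

// Sketch only: how a field write could tolerate entity <-> non-entity flips.
const entityIdOf = value =>
  value && typeof value === 'object' ? value._key : undefined;

function mergeFieldValue(previous, incoming, fieldPath) {
  const wasEntity = entityIdOf(previous) !== undefined;
  const nowEntity = entityIdOf(incoming) !== undefined;

  if (previous !== undefined && wasEntity !== nowEntity) {
    // Proposed: warn instead of throwing, and act as if nothing were cached,
    // so the incoming value wins and the cache can recover.
    console.warn(
      fieldPath + ': value changed between entity and non-entity; ' +
      'discarding the previously cached value (likely an application bug).'
    );
  }

  return incoming;
}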
Here's a strawman implementation of what @nevir described above: jamesreggio@64eb2d8
I'll report back with how this performs.
Hi @jamesreggio. I'm curious to hear how that change worked. We recently switched to Hermes and this behavior was somewhat surprising, so I'm considering applying it to our local fork.
I'm also looking forward to hopefully contributing back to Hermes in the near future!
Hi @samkline — no problems so far!
@jamesreggio @nevir any update on getting the patch from 8/2/18 into master and released? Is this project dead or just in heavy sleep?