Global fetcher vs transport agnostic
hypeJunction opened this issue · 9 comments
According to the readme, the library is transport agnostic, but the first thing the docs ask of you is to introduce a global fetcher, which goes against the notion of being agnostic to the transport (or lack thereof). It feels to me that it would make more sense to let people decide which fetcher to use for each query rather than introducing a global one. I suppose the idea is that it could be used for testing purposes, but I question the implementation. Could you clarify your intentions with the global fetcher, and what would you recommend for mixed transport setups?
@hypeJunction Hey there!
> Could you clarify your intentions with the global fetcher
According to my "study" (I touched somewhere around 20 projects at Evil Martians; I also asked around a bunch of FE people on Twitter), the majority of projects work in GraphQL- or REST-powered environments, and both benefit tremendously from setting a global fetcher, because the broad majority (if not all—that's debatable) of fetching actions are serialized in the form of a string (and therefore a key)—like path+querystring or a GQL query.
But, as the TS signature suggests, all arguments to the `nanoquery` function are optional. So you're free to skip adding a global fetcher if it's not the right fit for you. In that case you must provide a fetcher on a per-store basis, otherwise you'll get an error. So, in the simplest case, setting a global fetcher will save you a lot of code repetition.
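Here's a minimal sketch of that trade-off, assuming a plain JSON REST API (the endpoints are made up for illustration):

```ts
import { nanoquery } from '@nanostores/query';

// Global fetcher: key parts are concatenated into a URL.
export const [createFetcherStore, createMutatorStore] = nanoquery({
  fetcher: (...keys) => fetch(keys.join('')).then((r) => r.json()),
});

// This store relies on the global fetcher.
export const $user = createFetcherStore(['/api/user']);

// Per-store override: this fetcher takes precedence over the global one.
export const $health = createFetcherStore(['/healthz'], {
  fetcher: () => fetch('/healthz').then((r) => r.text()),
});
```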
> what you would recommend for mixed transport setups
Well, there are a couple of ideas:
- create multiple `nanoquery` contexts! You're not limited to one. Typically I split those into "domain areas". So if I consume 2 APIs in one app, I'll have at least 2 contexts, maybe even more if I find that settings reuse makes a difference. Keep in mind that nanoquery is not actually limited to RPCs; essentially it's a caching proxy for running async functions. You can use it with Web Workers, with Tauri's native `invoke` bridge, or with any promise-based native API. In this case, I'd say, every "domain area" also requires a separate context (see the sketch after this list).
- if you can't justify having multiple contexts in one app, you can always set a fetcher function on a per-store basis. This one always overrides the global one.
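A rough sketch of that multi-context split, assuming one REST API and a Tauri app (the import path is for Tauri v2; endpoints and command names are hypothetical):

```ts
import { nanoquery } from '@nanostores/query';
// Tauri v2; in v1 the path is '@tauri-apps/api/tauri'
import { invoke } from '@tauri-apps/api/core';

// Context 1: a REST API, keyed by URL.
export const [createRestStore] = nanoquery({
  fetcher: (...keys) => fetch(keys.join('')).then((r) => r.json()),
});

// Context 2: Tauri's native bridge, keyed by command name.
export const [createNativeStore] = nanoquery({
  fetcher: (command) => invoke(String(command)),
});

export const $posts = createRestStore(['/api/posts']);
export const $settings = createNativeStore(['load_settings']);
```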
If you have any further questions, feel free to reopen the issue. For now, I'll close it 😄
Thanks for the quick response. Having domains makes sense, but it would then require custom invalidation logic, which would defeat the purpose of using the library for me. I guess my use case is somewhat different, hence I'm having trouble finding the right way to bring this into our project. I will poke around a bit more and see if I should just use bare-bones nanostores.
> it would then require custom invalidation logic, which would then defeat the purpose of using the library for me
What's "custom logic"? We basically have 2 sources of invalidation:
- automatic—stuff like `refetchInterval` that's managed purely inside the library.
- manual—when you call `$fetcherStore.invalidate()` or `invalidateKeys`.
Both are still there. It's true that if you have multiple contexts you'll need to be aware of what you're invalidating and where, but the mental overhead is borderline invisible.
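For illustration, here are both flavors side by side (a sketch; `invalidateKeys` comes from the third element of the tuple returned by `nanoquery`, and the keys are made up):

```ts
import { nanoquery } from '@nanostores/query';

const [createFetcherStore, , { invalidateKeys }] = nanoquery({
  fetcher: (...keys) => fetch(keys.join('')).then((r) => r.json()),
});

const $user = createFetcherStore(['/api/user']);

$user.invalidate();                               // one concrete store
invalidateKeys('/api/user');                      // a single key
invalidateKeys(['/api/user', '/api/posts']);      // several keys
invalidateKeys((key) => key.startsWith('/api/')); // every key matching a predicate
```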
Also, keep in mind you can just write your own factory for fetcher stores instead of using contexts. Something as simple as this should work:
```ts
import { nanoquery } from '@nanostores/query';

const [createFetcherStore] = nanoquery();

// or graphql, or web worker, or whatever
export const createRestFetcherStore: typeof createFetcherStore = (
  keys,
  opts
) =>
  createFetcherStore(keys, {
    fetcher: async (...keyParts) => {
      // implement one type of fetcher (e.g., REST) here
    },
    ...opts,
  });
```
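Usage would then look something like this (the key and options are arbitrary):

```ts
// Callers only deal with keys and options; the transport is baked in.
export const $books = createRestFetcherStore(['/books'], {
  refetchInterval: 60_000,
});
```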
Thanks. I think this might work. As long as all keys are within the same "monolithic" store, there is no reason to wonder which of the data stores the data is coming from at any given time in the application's lifetime. My worry is that our API is in flux, and our motivation to use nanostores is to decouple the UI from the data sources (multiple REST APIs plus Jetstream plus data files forming a graph of related nodes) so we can then focus on rebuilding the data model. Having several fetch implementations spread across multiple nanoquery instances would be problematic for us, as it would require that any change to the queries be followed by an audit of the entire codebase for the required invalidation targets. We have SDKs for our REST clients, so even the key-based logic is not really going to work for us, as API routes are hidden away under layers of abstraction. Besides, I wouldn't want to couple the current server-centric keys, which are going to change, with our front-end code, so keys would not be a reliable pointer during these refactors.
One last question I have is how you would apply this fetcher logic to access values from another store. Presume we have a collection of items fetched together and no endpoint to fetch individual items (it may exist in the future, though). Is it safe to just access other stores and return their values in a promise?
After some more thinking, I really don't like the idea of key-based fetchers, and I would rather have a more atomic approach, where each query corresponds to a specific promise. How the promises get their data shouldn't concern nanoquery.
```ts
// Fetcher and CommonSettings are assumed to be exported by @nanostores/query
import { nanoquery, type Fetcher, type CommonSettings } from '@nanostores/query';
import type { ReadableAtom } from 'nanostores';
import { useStore } from '@nanostores/react';

const [createQueryStore, createMutationStore] = nanoquery();

type CollectionKey = 'books' | 'magazines';
type ItemIdKey = string | ReadableAtom<string | undefined>;
type StoreKey = CollectionKey | [CollectionKey, ItemIdKey];

const buildQueryStore = <T>(key: StoreKey, queryFn: Fetcher<T>, opts: CommonSettings<T> = {}) => {
  const normalizedKey = Array.isArray(key) ? key : [key];
  return createQueryStore(normalizedKey, {
    fetcher: () => queryFn(),
    ...opts,
  });
};

// MyBooksClient, magazines, and searchParams come from elsewhere in the app
const buildBooksQuery = () =>
  buildQueryStore('books', () => MyBooksClient.fetchBooks());

const buildMagazineByIdQuery = (id: string) =>
  buildQueryStore(['magazines', id], () =>
    Promise.resolve(magazines.find((m) => m.id === id))
  );

const useQuery = useStore;

const books = useQuery(buildBooksQuery());
const magazine = useQuery(buildMagazineByIdQuery(searchParams.magazineId));
```
- no need to do key normalization—that's already done in nanoquery
- you really shouldn't introduce any caching into derived values—we have `computed` for exactly that purpose; `fetcherStore` and `mutationStore` are a regular `atom` from nanostores core (see the sketch after this list)
- also, I'm not entirely sure what you're trying to achieve by adding this indirect "build" pattern where every fetcher is hidden behind 2 factories. I understand the reasoning for having 1 factory (different transport layers), but 2 seems to be too much. Besides, it's not entirely clear how you'll approach the caching problem here: if you "build" a store inside the render function of a component (I assume you use React), you'll have to deal with unnecessary rerendering due to the store identity changing on each render.
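To tie this back to your earlier question about items that only exist inside a fetched collection, here's a `computed`-based sketch (the `Magazine` type and keys are made up; `createFetcherStore` comes from a `nanoquery` context as above):

```ts
import { computed } from 'nanostores';

type Magazine = { id: string; title: string };

// One fetcher store for the whole collection…
const $magazines = createFetcherStore<Magazine[]>(['magazines']);

// …and a plain computed per item—no extra caching layer needed.
// In React, hoist or memoize this so the store identity stays stable.
const magazineById = (id: string) =>
  computed($magazines, ({ data }) => data?.find((m) => m.id === id));
```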
Anyways, this is how I would approach this.
@dkzlv Thanks a lot for taking the time. I think I am arriving at a nice solution now and I like how concise and intuitive it looks.
I have ended up using `computed()`, just as you suggested in your second point.
The second factory is mostly syntactic sugar, similar to what you do here: https://github.com/nanostores/query?tab=readme-ov-file#local-state-and-pagination. I would imagine the return of the second factory would be memoized or exported as a constant depending on the use case, just as you suggest, to avoid unnecessary rerenders. For now, it's not really needed, as we access elements by ID from the store.
Oh, and regarding your comments about using types for keys - I think they might come in handy when doing invalidation, as well as prevent issues resulting from key changes.
@hypeJunction Yeah, great stuff, all looks good then 😄
> Oh, and regarding your comments about using types for keys - I think they might come in handy when doing invalidation, as well as prevent issues resulting from key changes.
Yeah, that's understandable. Oftentimes I find that types just don't work in this case. As I said in the snippet, it'll just widen to `string`, so it won't really help you with DX/type safety. Besides, all functions that work with keys (e.g., `invalidate`) expect `string`, so you won't get any type safety there either.
But if you also plan to put those keys in JS somehow (say, put keys and key factories into Services you have there), then it's totally ok, I do it myself all the time.
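For instance, a hypothetical service module that owns keys, key factories, and invalidation in one place (all names are made up; `createFetcherStore` and `invalidateKeys` come from a `nanoquery` context):

```ts
export const BooksService = {
  keys: {
    all: ['books'] as const,
    byId: (id: string) => ['books', id] as const,
  },
  // Key parts are concatenated into one cache key, so a prefix
  // predicate catches both the collection and every byId store.
  invalidateAll: () => invalidateKeys((key) => key.startsWith('books')),
};

const $books = createFetcherStore([...BooksService.keys.all]);
```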