jitbit/FastCache

Nonblocking.ConcurrentDictionary

Opened this issue · 8 comments

johnib commented

Hi,

Just wondering, have you considered using the ConcurrentDictionary implementation from the NonBlocking namespace?
If you did, I wonder why it was not chosen.

Thank you

In .NET 5 and higher the built-in ConcurrentDictionary has even faster lookups than the regular Dictionary, and since we target .NET 6 there's no need.

ref: https://stackoverflow.com/a/53277034/56621
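For what it's worth, here's a rough sketch of how one could verify that claim on their own hardware with BenchmarkDotNet (illustrative only, not code from this repo; the sizes and lookup key are arbitrary):

using System.Collections.Generic;
using System.Collections.Concurrent;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LookupBench
{
    private readonly Dictionary<int, int> _dict = new();
    private readonly ConcurrentDictionary<int, int> _concurrent = new();

    [GlobalSetup]
    public void Setup()
    {
        // Pre-populate both dictionaries with the same data.
        for (int i = 0; i < 100_000; i++)
        {
            _dict[i] = i;
            _concurrent[i] = i;
        }
    }

    [Benchmark(Baseline = true)]
    public int DictionaryLookup() => _dict.TryGetValue(42_000, out var v) ? v : -1;

    [Benchmark]
    public int ConcurrentDictionaryLookup() => _concurrent.TryGetValue(42_000, out var v) ? v : -1;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LookupBench>();
}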

Answering your question: no, we haven't considered it. If you think we should, please provide reasons why.

johnib commented

We have wrapped your cache with an Async interface that uses ValueTask.

We encountered a huge amount of lock contention that, according to the dotTrace profiler, was attributed to the ConcurrentDictionary.TryAddInternal method, which uses locks.

This only happens when we wrap your interface with our own async one; without that wrapper we do not observe this behavior.

I don't have enough knowledge to explain what could cause this, but the profiling dump was clear that the contention originated from the above-mentioned method.

We have run into such high contention levels with ConcurrentDictionary before, and they seem to have gone away ever since we switched to the NonBlocking version of it (an easy change, as the interface is the same).
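Roughly, the swap looks like this (a sketch, not our production code; it assumes the NonBlocking NuGet package, which exposes a ConcurrentDictionary with the same API as the BCL one):

// Before:
// using System.Collections.Concurrent;
// After: only the namespace changes, the call sites stay the same.
using NonBlocking;

public class Store
{
    private readonly ConcurrentDictionary<int, string> _store = new();

    // Same GetOrAdd call as with the BCL dictionary, but lock-free internally.
    public string GetOrLoad(int id) => _store.GetOrAdd(id, key => $"value-{key}");
}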

Thanks. Can you provide a minimal example of the async API you're wrapping FastCache with? I'll see what we can do and consider moving to the NonBlocking dictionary.

johnib commented

I'm not sure I'm allowed to. But think of a minimal class that inherits yours and overrides your TryGet method to:

  • Call base.TryGet
  • Report an OpenTelemetry hit/miss metric depending on the result
  • Return value

That single method looks like this:

public async new ValueTask<TValue> TryGet (...)

(The new keyword is used to forcibly hide your TryGet, as your class's methods are not overridable.)
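A rough sketch of what such a wrapper might look like (the class name and metrics plumbing are made up for illustration; it assumes FastCache lives in the Jitbit.Utils namespace and exposes bool TryGet(TKey key, out TValue value)):

using System.Diagnostics.Metrics;
using System.Threading.Tasks;
using Jitbit.Utils; // FastCache namespace (assumption)

public class InstrumentedCache<TKey, TValue> : FastCache<TKey, TValue>
    where TKey : notnull
{
    // Hypothetical meter and counter names.
    private static readonly Meter CacheMeter = new("MyApp.Cache");
    private static readonly Counter<long> Hits = CacheMeter.CreateCounter<long>("cache_hits");
    private static readonly Counter<long> Misses = CacheMeter.CreateCounter<long>("cache_misses");

    // In our real wrapper the `new` keyword hides the base TryGet, since it is not virtual;
    // shown here with a simplified parameter list.
    public ValueTask<TValue?> TryGet(TKey key)
    {
        // Call the synchronous base lookup.
        bool found = base.TryGet(key, out var value);

        // Report a hit/miss metric depending on the result.
        if (found) Hits.Add(1); else Misses.Add(1);

        // Wrap the result in an already-completed ValueTask.
        return ValueTask.FromResult<TValue?>(found ? value : default);
    }
}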

Sorry for the late response.

The contention is probably not coming from TryGet, since it's a get-only operation that never invokes ConcurrentDictionary.TryAddInternal and should be lock-free.

Most likely you've overridden GetOrAdd, right? In that case I recommend using the lambda overload GetOrAdd(TKey key, Func<TKey, TValue> valueFactory, TimeSpan ttl), which calls the value factory only if the key is not found.

But is your factory async? In that case there's a chance you're doing "sync over async", which is what causes the locking...
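To illustrate the difference (just a sketch; the loader methods, TTL and the default constructor usage are assumptions, while the GetOrAdd signature is the lambda overload mentioned above):

using System;
using System.Threading.Tasks;
using Jitbit.Utils; // FastCache namespace (assumption)

public static class GetOrAddPatterns
{
    private static readonly FastCache<int, string> Cache = new();

    // Hypothetical loaders standing in for a real data source.
    private static string LoadUser(int id) => $"user-{id}";
    private static Task<string> LoadUserAsync(int id) => Task.FromResult($"user-{id}");

    // Recommended: the lambda overload, the factory runs only on a cache miss.
    public static string Good(int id) =>
        Cache.GetOrAdd(id, key => LoadUser(key), TimeSpan.FromMinutes(5));

    // "Sync over async": blocking a thread-pool thread inside the factory.
    // Under load this is the pattern that tends to show up as lock contention.
    public static string Risky(int id) =>
        Cache.GetOrAdd(id, key => LoadUserAsync(key).GetAwaiter().GetResult(), TimeSpan.FromMinutes(5));
}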

We have wrapped your cache with an Async interface that uses ValueTask.

Why on earth would one want to wrap in-memory operations, which don't incur any waiting, in an async interface? You're asking for slowness and bugs. The architect or whoever designed it should be fired immediately.