google/guava

Concurrency issue between get(K key, Callable<? extends V> valueLoader) and invalidate(Object key)

gmaes opened this issue · 17 comments

Hi,

I encountered a concurrency issue between the "get(K key, Callable<? extends V> valueLoader)" and "invalidate(Object key)" methods with a basic (and, I suppose, common) usage of the cache, which leaves a stale value in the cache.

Here is the use case:

  • The cache is initially empty
  • One thread is getting a value for a given key (loading it in the Callable) while another thread is invalidating the same key

Thread 1

Bean myBean = cache.get(ID, new Callable<Bean>() {
    @Override
    public Bean call() throws Exception {
        Bean bean = loadFromDB(ID); // (1)
        return bean; // (4)
    }
});

Thread 2

// Update just one property of the bean in DB
updatePartialDataInDB(ID, "newValue1"); // (2)
// Then, we need to invalidate the cache
cache.invalidate(ID); // (3)

The execution order is marked with the // (number) comments.
After point (4), I have the old object in the myBean variable, which is fine.
However, if I then try to get the bean again with the identifier ID, I expect the cache to have no value and a fresh bean to be reloaded from the DB.
Unfortunately, that is not the case and a stale value is kept in the cache.
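
For reference, here is a self-contained sketch that forces this interleaving deterministically with latches (the class name, the executor, and the hard-coded "old" value are just for illustration; the final comment describes the behaviour reported above):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StaleInvalidateRepro {
    public static void main(String[] args) throws Exception {
        Cache<String, String> cache = CacheBuilder.newBuilder().build();
        CountDownLatch loadStarted = new CountDownLatch(1);
        CountDownLatch invalidated = new CountDownLatch(1);

        // Stands in for loadFromDB(ID): it reads the "old" DB state at (1)
        // and only returns at (4), after the invalidation has happened.
        Callable<String> loader = () -> {
            loadStarted.countDown();   // (1) load has started with the old DB state
            invalidated.await();       // wait until (2) and (3) have run
            return "old";              // (4) load completes after the invalidation
        };

        // Thread 1
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> myBean = pool.submit(() -> cache.get("ID", loader));

        // Thread 2
        loadStarted.await();
        // updatePartialDataInDB(ID, "newValue1") would go here   // (2)
        cache.invalidate("ID");         // (3) invalidate while the load is in flight
        invalidated.countDown();

        System.out.println(myBean.get());   // "old" -- acceptable, as noted above
        // Expected: null (no entry) or a freshly loaded value.
        // Observed on Guava 17.0: the stale "old" value is still cached.
        System.out.println(cache.getIfPresent("ID"));
        pool.shutdown();
    }
}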

Is that a wanted behavior?

In LocalCache$Segment#remove(Object key, int hash), lines 3081 to 3087 (Guava 17.0):

RemovalCause cause;
if (entryValue != null) {
    cause = RemovalCause.EXPLICIT;
} else if (valueReference.isActive()) {
    cause = RemovalCause.COLLECTED;
} else {
    // currently loading
    return null;
}

Shouldn't some special handling be done before returning null?

Thanks

gmaes commented

Hi,

Do you plan to fix this issue for the next release?
If so, do you have an ETA?
This is a blocking issue for us.
Otherwise, should we patch the code ourselves in the meantime?

Thanks

My first reaction is that the current semantics make some sense: an entry still being computed isn't really in the cache; it's not really there to be invalidated?

Hi,

Speaking of semantics, it's not the value that is being invalidated, it's the key. The key is certainly there - and the value is being calculated.

Given that value V(K) is calculated from some other value X(K) (a row in the DB, in this sample case), invalidate(K) should mean that any values based on values of X(K) retrieved up to the moment of invalidation are subject to recalculation.

Otherwise, the cache is pretty much useless in a simple case of caching a database row.

Let's assume that we did what you're saying: we invalidated the pending fetch. What exactly should the ongoing get() call from the other thread return? Restarting the load() call seems awkward, if it's even possible.

gmaes commented

The problem does not come from the current pending fetch.
As I said, it is fine to return the not-up-to-date bean at step (4), but that value must not be kept in the cache entry for subsequent fetches.

Okay. If I understand correctly, that stale value will never be stored, but will be returned from the get?

I'd say this would be the expected behaviour.

I don't think that's 100% obvious -- I suspect there are cases where you don't want the value-in-flight to be invalidated -- and I'd also expect this to require a fairly significant rewrite to the internals of Cache, but we can research this further.

I don't really have anything concrete to say here, but I remember that, when it came to CacheBuilder/MapMaker, fry@ used to talk about "linearizability" (e.g., in internal bug 1774366). That might be the property that we want. But I don't know what that would imply for the behavior here, let alone whether implementing that behavior is feasible (if in fact it differs from the current behavior).

I don't think that's 100% obvious -- I suspect there are cases where you don't want the value-in-flight to be invalidated

@lowasser I'm trying out the implementation in my private fork. I had to revisit some tests, and from what I see, there is no good reason for the current behaviour. Some tests actually assume that the value does not change when the invalidation happens, which contradicts the purpose of invalidation.

All existing tests are currently green, and so are the ones I added; but I need to add more tests to check subtler consequences of my changes. I can't guarantee that I will do it fast, but if you have time, let me have a try at this one.

Please do; I think we'd welcome contributions on this.

This is causing some very serious issues for us. I would argue that anyone using a lazy-loading cache would expect a happens-before relationship between invalidate and a following call to get, but since the get can piggy-back on an existing load operation that fetched its data before invalidate was called, that relationship does not exist.

I started poking around in here to try to fix it, but as @lowasser hinted, it isn't trivial. The main problems are

1. Whether to remove the LoadingValueReference, block on it, or somehow mark it as stale; and
2. How to rework the removal notifications

This is sufficiently painful that at this point I'm inclined to switch to another library, instead.

@txshtkckr Please see https://github.com/baltiyskiy/guava -- I've gone with the "remove LoadingValueReference" approach, firing a removal notification for the value when it is loaded. It's mostly done, but I'd like to add some test cases there. There's one known problem (regarding modCount) which I need to poke at more.

I've put it on pause for now because I don't have much time due to other commitments, but I intend to finish it.

I second @sereda's expectation of causality, in that a loaded value V(K) must be updated when the key is invalidated.
Has there been a consensus on how to work around this particular problem? If so, it should be included in https://github.com/google/guava/wiki/CachesExplained

@cpovirk sorry for refreshing an old issue; it still seems relevant.
Did you come to an agreement on what the expected behavior is?

For me, "get() after invalidate() should return fresh state" is the expected behavior.

Caffeine resolves this for explicit keyed operations, but not for clearing out the whole cache. That is because it decorates, rather than forks, ConcurrentHashMap, which suppresses in-flight loads from appearing in its iterators. A trivial workaround for that is to append a generational id to the key, so that existing entries cannot be retrieved and a stale in-flight load can be detected by the reader, which may then retry.
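
For illustration, a minimal sketch of that generational-id idea could look like the following (the GenerationalCache wrapper is not part of Caffeine; the name and structure are hypothetical):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.AbstractMap.SimpleImmutableEntry;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Function;

// Keys the delegate by (generation, key); bumping the generation makes every
// existing entry -- and any in-flight load started under the old generation --
// unreachable to later reads.
final class GenerationalCache<K, V> {
    private final AtomicLong generation = new AtomicLong();
    private final Cache<Map.Entry<Long, K>, V> delegate = Caffeine.newBuilder().build();

    V get(K key, Function<? super K, ? extends V> loader) {
        long gen = generation.get();
        return delegate.get(new SimpleImmutableEntry<>(gen, key),
                e -> loader.apply(e.getValue()));
    }

    void invalidate(K key) {
        delegate.invalidate(new SimpleImmutableEntry<>(generation.get(), key));
    }

    // "Clear" by advancing the generation; invalidateAll() only frees memory eagerly,
    // correctness comes from the old keys becoming unreachable.
    void clear() {
        generation.incrementAndGet();
        delegate.invalidateAll();
    }
}

A reader that must not return values loaded before a clear() could additionally re-read the generation after get() returns and retry if it changed, which is the "detected by the reader to possibly retry" part of the comment above.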

@ben-manes thanks for the update about Caffeine.

I am still looking for the Guava team's perspective on the problem.
In particular, would a patch be accepted? What solution categories have been discarded already?
What are acceptable drawbacks, if any, for a potential fix?

@lowasser's comment from 2014 (#1881 (comment)) suggests a patch would be welcome, but there was also a concern about value-in-flight invalidation (#1881 (comment)). Of course, both are a couple of years old, so they do not necessarily describe the current state.

Edit: a Guava Cache wrapper that doesn't suffer from this invalidation race was extracted from the Trino project as a standalone library: https://github.com/findepi/evictable-cache
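
For readers who want the general shape of such a wrapper, here is a minimal sketch of the token idea (this is not the evictable-cache implementation; the InvalidationSafeCache class is illustrative only and assumes an unbounded underlying cache):

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutionException;

// Keys the underlying Guava cache by a per-key token object. invalidate() removes
// the token, so a load that started under the old token can never be observed by
// a get() that starts after the invalidation.
final class InvalidationSafeCache<K, V> {
    private static final class Token {}  // identity-based cache key

    private final ConcurrentMap<K, Token> tokens = new ConcurrentHashMap<>();
    // Sketch assumes no size-based eviction; with eviction, abandoned tokens and
    // their entries would also need cleanup (e.g., via a removal listener).
    private final Cache<Token, V> cache = CacheBuilder.newBuilder().build();

    V get(K key, Callable<? extends V> loader) throws ExecutionException {
        while (true) {
            Token token = tokens.computeIfAbsent(key, k -> new Token());
            V value = cache.get(token, loader);
            if (tokens.get(key) == token) {
                return value;  // no invalidation raced with this load
            }
            // The key was invalidated while we were loading: drop the possibly
            // stale entry and load again under the current token.
            cache.invalidate(token);
        }
    }

    void invalidate(K key) {
        Token token = tokens.remove(key);
        if (token != null) {
            cache.invalidate(token);
        }
    }
}

Compared to patching LocalCache itself, this only changes the keys the cache sees, at the cost of an extra map lookup and a retry when an invalidation races with a load.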