Queue not thread/interrupt safe?
noahzarro opened this issue · 1 comment
I am developing on RISC-V, and as far as I understand the Queue is not thread or interrupt safe. Imagine the following scenario where an item is dequeued:
```rust
unsafe fn inner_dequeue(&self) -> Option<T> {
    let current_head = self.head.load(Ordering::Relaxed); // holds head before second dequeue
    // (1) interrupt or thread switch happens here, inner_dequeue is executed in second context
    // -> head is incremented by second context
    // current_head is no longer up to date and still holds the not yet incremented value
    if current_head == self.tail.load(Ordering::Acquire) {
        None
    } else {
        // the same element is returned twice:
        // once here and once in the second context
        let v = (self.buffer.get_unchecked(current_head).get() as *const T).read();
        self.head
            .store(Self::increment(current_head), Ordering::Release);
        Some(v)
    }
}
```
Imagine that `inner_dequeue` is interrupted at position (1), right between the loads of `head` and `tail`, and `inner_dequeue` is executed in another context (either an interrupt handler or a different thread). The second context increments `head`, but the original context still uses the old value, so one element is returned/dequeued twice.
As far as I understand, the atomics do not prevent this behavior, at least not on the single-core riscv32imc target. There, interrupts get disabled and re-enabled around each individual load instruction, so they are enabled in between the two loads:
```rust
// disable interrupts
let current_head = self.head.load(Ordering::Relaxed);
// enable interrupts

// disable interrupts
if current_head == self.tail.load(Ordering::Acquire) {
// enable interrupts
    ...
}
```
So I am not sure whether thread/interrupt safety is guaranteed, but if it is meant to be, I would suggest putting a critical section around the whole dequeue process.
Hi, it's not possible for two `inner_dequeue` calls to race against each other, as dequeuing is guarded by a `Consumer` that requires `&mut self` access.
What can happen is that there is a window of inconsistency in which the queue can be considered empty while it holds one element, or full while it has one free slot - and this is by design.
That way we don't need to use CAS operations, which makes the queue work on more Cortex-M targets.
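To illustrate the guard described above, here is a minimal, hypothetical sketch (this is not the actual heapless implementation; names, capacity, and types are made up): `dequeue` is only reachable through a `Consumer` method taking `&mut self`, so safe code can never have two dequeues in flight on the same queue - the borrow checker rejects a second overlapping `&mut` borrow.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicUsize, Ordering};

// Ring buffer keeps one slot empty to distinguish full from empty.
const N: usize = 4;

struct SpscQueue {
    buffer: UnsafeCell<[i32; N]>,
    head: AtomicUsize,
    tail: AtomicUsize,
}

unsafe impl Sync for SpscQueue {}

struct Producer<'a>(&'a SpscQueue);
struct Consumer<'a>(&'a SpscQueue);

impl SpscQueue {
    const fn new() -> Self {
        SpscQueue {
            buffer: UnsafeCell::new([0; N]),
            head: AtomicUsize::new(0),
            tail: AtomicUsize::new(0),
        }
    }

    // `&mut self` ensures exactly one Producer/Consumer pair exists at a time.
    fn split(&mut self) -> (Producer<'_>, Consumer<'_>) {
        let q: &SpscQueue = self;
        (Producer(q), Consumer(q))
    }
}

impl Producer<'_> {
    // `&mut self`: only one enqueue can be in flight.
    fn enqueue(&mut self, v: i32) -> Result<(), i32> {
        let tail = self.0.tail.load(Ordering::Relaxed);
        let next = (tail + 1) % N;
        if next == self.0.head.load(Ordering::Acquire) {
            return Err(v); // full
        }
        unsafe { (*self.0.buffer.get())[tail] = v };
        self.0.tail.store(next, Ordering::Release);
        Ok(())
    }
}

impl Consumer<'_> {
    // `&mut self`: two overlapping dequeues on the same Consumer, or a
    // second Consumer for the same queue, are compile errors in safe code.
    fn dequeue(&mut self) -> Option<i32> {
        let head = self.0.head.load(Ordering::Relaxed);
        if head == self.0.tail.load(Ordering::Acquire) {
            return None; // empty
        }
        let v = unsafe { (*self.0.buffer.get())[head] };
        self.0.head.store((head + 1) % N, Ordering::Release);
        Some(v)
    }
}

fn main() {
    let mut q = SpscQueue::new();
    let (mut p, mut c) = q.split();
    p.enqueue(7).unwrap();
    assert_eq!(c.dequeue(), Some(7));
    assert_eq!(c.dequeue(), None);
}
```

Because there is exactly one producer touching `tail` and one consumer touching `head`, plain acquire/release loads and stores suffice and no CAS is needed; the race in the question requires two concurrent dequeuers, which this ownership scheme rules out.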