[Bug] Expanding memory balloon causes VM to freeze
Closed this issue · 7 comments
Describe the bug
Even when there should be enough free memory in the VM, expanding the balloon sometimes causes the VM to freeze.
During a sample run (using the scripts linked below), after restoring the VM from a snapshot, free -h returned:
total used free shared buff/cache available
Mem: 108Mi 35Mi 45Mi 2.5Mi 35Mi 72Mi
Swap: 0B 0B 0B
Originally, the balloon was initialized to 5MB. When I inflated it to 20MB, it inflated successfully. When I inflated it to 30MB, the VM froze and there were a bunch of "Failed to update balloon stats, missing descriptor." errors.
To Reproduce
You can use the scripts in this branch: #4989
- Build firecracker with this patch: #4988
- Prepare rootfs and guest kernel (only needs to be run once): get_rootfs_guest_kernel.sh
- Run firecracker: run_firecracker.sh
- Initialize a VM with a balloon and snapshot it: snapshot_vm.sh
- You will probably need to kill the previous firecracker process and restart it: run_firecracker.sh
- Start the UFFD handler with the snapshot: run_uffd_handler.sh
- Expand the balloon: trigger_remove_events.sh
- Expand the balloon even more: if you edit trigger_remove_events.sh to inflate the balloon to 40MB, the VM will freeze and "Failed to update balloon stats, missing descriptor." errors appear
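For reference, per the Firecracker API, resizing the balloon on a running VM is done with a PATCH to the `/balloon` endpoint on the API socket; the trigger_remove_events.sh script presumably issues a request along these lines (the exact request the script sends is an assumption, the endpoint and `amount_mib` field are from the Firecracker balloon docs):

```
PATCH /balloon HTTP/1.1
Content-Type: application/json

{ "amount_mib": 30 }
```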
Expected behavior
I expected the balloon to be able to expand to 30MB because there is 72Mi of memory available.
Environment
- Firecracker version: 1.10.0 with patch #4988
- Host and guest kernel versions: 6.1 for both
- Rootfs used: ubuntu-24.04
- Architecture: Intel x86
Additional context
We are using UFFD to restore snapshots. The memory snapshots are quite large, so we're looking into using memory balloons with the goal of having the UFFD handler process removed memory ranges, so we don't have to save those memory ranges in the snapshot files. We've noticed that the VM will sometimes freeze when expanding the balloon, even when there should be sufficient memory.
Around the same time as the freeze, we always see the "Failed to update balloon stats, missing descriptor." errors as well as vsock connection errors (VIRTIO_VSOCK_OP_RST).
I've tried disabling async page faults, in case the freezing was related to some sort of race condition in the kernel, but the problem persists.
Checks
- Have you searched the Firecracker Issues database for similar problems?
- Have you read the existing relevant Firecracker documentation?
- Are you certain the bug being reported is a Firecracker issue?
No - It could be a Linux bug as well. However, we've seen reports of people successfully using UFFD together with the balloon, so this use case seems like it should be possible now.
Hi @maggie-lou,
Thanks for raising the issue and for the helpful reproducer steps. I'll investigate myself and let you know what I find.
Thanks
In case this is helpful: when testing the balloon in a more production-like environment, I've noticed kernel RCU stalls. I wonder if the balloon has a bad interaction with RCU.
[ 234.560950] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 234.562786] rcu: 1-....: (14819 ticks this GP) idle=bb24/1/0x4000000000000000 softirq=4838/4841 fqs=5790
[ 234.565430] (t=14750 jiffies g=5389 q=4 ncpus=2)
[ 234.566859] CPU: 1 PID: 20 Comm: kworker/1:0 Not tainted 6.2.0 #1
[ 234.568654] Workqueue: virtio_vsock virtio_transport_rx_work
[ 234.570932] RIP: 0010:vring_unmap_one_split+0x41/0x50
Hey! I've had a look into this today, and I think what you're seeing is a combination of two issues. The first is indeed that Firecracker incorrectly handles balloon inflation events on restored VMs, and the fix in your PR is the right one for that. However, the behavior you see after that (the VM freezing) is not actually a bug in Firecracker, but rather a shortcoming of the simplistic UFFD handler used in our integration tests.
Essentially, the problem is with the handling of -EAGAIN when it's returned by UFFDIO_COPY: it's not actually a race condition on one page (e.g. a fault and a free for the same page), but rather a race between any pair of fault and free. UFFD will have any ioctl return -EAGAIN if there's a remove message pending in the queue. So if we get EAGAIN, we cannot ignore the pagefault message it's associated with; we must first process all pending remove messages before we can proceed with the pagefault. And this is where the problem was: if we instead ignore the pagefault, the guest thread that caused it will never be woken up again. I've opened #5021 to fix all that up. Could you please give that a try to see if it resolves your issue?
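To make the ordering requirement concrete, here is a minimal sketch of that retry loop. It simulates the relevant userfaultfd semantics in plain Python rather than issuing real ioctls (the `FakeUffd` class and its methods are illustrative inventions, not the real UFFD API): UFFDIO_COPY fails with EAGAIN while a remove event is queued, so the handler must drain all pending removes and then retry the copy instead of dropping the fault.

```python
from collections import deque

class FakeUffd:
    """Toy model of the userfaultfd behavior described above (not real ioctls):
    a copy fails with EAGAIN while any remove event is still queued."""

    def __init__(self):
        self.events = deque()      # pending ("remove", start, end) events
        self.resolved_faults = []  # fault addresses whose guest thread got woken

    def queue_remove(self, start, end):
        self.events.append(("remove", start, end))

    def copy(self, addr):
        if self.events:            # a remove is pending -> EAGAIN
            return "EAGAIN"
        self.resolved_faults.append(addr)
        return "OK"

    def read_event(self):
        return self.events.popleft() if self.events else None


def serve_fault(uffd, addr, on_remove):
    """Correct handling: on EAGAIN, process *all* pending remove events,
    then retry the copy. Ignoring the fault instead would leave the guest
    thread blocked forever -- the freeze described in this issue."""
    while uffd.copy(addr) == "EAGAIN":
        event = uffd.read_event()
        if event and event[0] == "remove":
            on_remove(event[1], event[2])


removed = []
uffd = FakeUffd()
uffd.queue_remove(0x1000, 0x3000)  # balloon inflation queued a remove
serve_fault(uffd, 0x5000, lambda s, e: removed.append((s, e)))
print(removed)                     # [(4096, 12288)]
print(uffd.resolved_faults)        # [20480]
```

The buggy variant would return from `serve_fault` on the first EAGAIN without retrying, so `resolved_faults` would stay empty and the faulting guest thread would never run again.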
Thanks!
Thanks so much @roypat ! After applying a similar fix to our UFFD handler, it's resolved our issues.
In case anyone else hits this, I had to implement your suggestion ("A production handler will most likely want to ensure that remove events for a specific range are always handled before pagefault events.") to fully resolve the issue.
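For anyone implementing that suggestion, a minimal sketch of the ordering rule might look like the following. This is a simulation of the decision logic only (the function, event tuples, and range set are hypothetical, not Firecracker or kernel APIs): removes in a batch are processed first, so that a fault landing in a removed range is served with a zero page instead of stale snapshot contents.

```python
def handle_batch(events, removed_ranges):
    """Process a batch of uffd events, remove events first.

    events: list of ("remove", start, end) or ("pagefault", addr) tuples.
    removed_ranges: set of (start, end) ranges the balloon has reclaimed.
    Returns (addr, source) pairs describing how each fault was served:
    from the snapshot file, or with a zero page because the balloon
    removed that range.
    """
    # Stable sort: all removes first, faults keep their relative order.
    ordered = sorted(events, key=lambda e: e[0] != "remove")
    served = []
    for event in ordered:
        if event[0] == "remove":
            removed_ranges.add((event[1], event[2]))
        else:
            addr = event[1]
            in_removed = any(s <= addr < e for s, e in removed_ranges)
            served.append((addr, "zero_page" if in_removed else "snapshot"))
    return served


events = [("pagefault", 0x2000), ("remove", 0x1000, 0x3000), ("pagefault", 0x8000)]
print(handle_batch(events, set()))
# [(8192, 'zero_page'), (32768, 'snapshot')]
```

Without the reordering, the fault at 0x2000 would be served from the snapshot even though the balloon had already reclaimed that range.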
@roypat Would you mind sharing how you debugged this? It would be helpful to have some strategies to debug similar issues in the future. Thanks!
Admittedly, there wasn't much finesse involved. I had the uffd handler print out all events it received (which showed that the guest didn't actually completely freeze, since page fault events still came in after inflating the balloon), and then I started looking at the EAGAIN return from uffdio_copy, because that was the only change done in the uffd handler. After reading the kernel code a bit to figure out why EAGAIN was being returned, the connection with pending remove events became clear.
Fair enough - thanks again!