Fixing VMO/VMAR
benpigchu opened this issue · 9 comments
There are still many failing vmar/vmo-related core-tests. Also, an attempt to run rustc in the Linux mode of zCore failed due to an mmap-related issue. I believe these are related, so we should revisit our VMO/VMAR implementation now.
And restore a simple version of VMO (#76), even if it will fail many core-tests.
Let's do some simple fixes first (the argument checks are sketched after this list):
- Allocating a child vmar with a size larger than the parent's should return INVALID_ARGS instead of NO_MEMORY (see Vmar.AllocateUnsatisfiableTest)
- We should not check the permissions of a mapping in zx_vmar_map, since permissions can be added later with zx_vmar_protect (see Vmar.VmarMapRangeOffsetTest)
- The `actual` in zx_process_*_memory should be a pointer to usize
- You cannot use MAP_RANGE with SPECIFIC_OVERWRITE in zx_vmar_map (see Vmar.InvalidArgsTest)
- Length in zx_vmar_map cannot be zero (see Vmar.InvalidArgsTest)
- Invalid bits in zx_vm_option_t should be rejected
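A minimal sketch of these syscall-entry checks. The constant names follow the Zircon headers, but the bit positions, the valid-bits mask, and the `check_map_args` helper are assumptions for illustration:

```rust
// Illustrative flag constants; bit positions are assumed, not taken
// verbatim from zCore.
const ZX_VM_SPECIFIC_OVERWRITE: u32 = 1 << 5;
const ZX_VM_MAP_RANGE: u32 = 1 << 10;
// Mask of all currently defined zx_vm_option_t bits (assumed).
const ZX_VM_VALID_BITS: u32 = (1 << 13) - 1;

#[derive(Debug, PartialEq)]
enum ZxError {
    InvalidArgs,
}

fn check_map_args(options: u32, len: usize) -> Result<(), ZxError> {
    // Invalid bits in zx_vm_option_t should be rejected.
    if (options & !ZX_VM_VALID_BITS) != 0 {
        return Err(ZxError::InvalidArgs);
    }
    // Length in zx_vmar_map cannot be zero.
    if len == 0 {
        return Err(ZxError::InvalidArgs);
    }
    // MAP_RANGE cannot be combined with SPECIFIC_OVERWRITE.
    if (options & ZX_VM_MAP_RANGE) != 0 && (options & ZX_VM_SPECIFIC_OVERWRITE) != 0 {
        return Err(ZxError::InvalidArgs);
    }
    Ok(())
}
```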
We somehow handle the permissions of a VmMapping incorrectly. We should store a permission limit in the VmMapping and allow raising permissions up to that limit with zx_vmar_protect (sketched below).
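A minimal sketch of that idea, assuming a hypothetical `u8` permission bitmask: the mapping keeps both the currently active permissions and the cap fixed at zx_vmar_map time, and zx_vmar_protect may only move within the cap.

```rust
struct VmMapping {
    perms: u8,     // permissions currently installed in the page table
    max_perms: u8, // upper bound fixed when the mapping was created
}

impl VmMapping {
    fn protect(&mut self, new: u8) -> Result<(), &'static str> {
        // Raising permissions is fine as long as we stay within the cap.
        if (new & !self.max_perms) != 0 {
            return Err("ACCESS_DENIED");
        }
        self.perms = new;
        // (Real code would also rewrite the page-table entries here.)
        Ok(())
    }
}
```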
zx_vmar_protect is actually doing nothing. I attempted to fix it, but since it does not support partially protecting a VmMapping, it breaks at startup.
Maybe we should store the flags of each page in the VmMapping?
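A sketch of per-page flags, with hypothetical field names: with one flag byte per page, protecting a sub-range of the mapping only rewrites the affected entries instead of the whole VmMapping.

```rust
const PAGE_SIZE: usize = 4096;

struct VmMapping {
    base: usize,         // page-aligned base virtual address
    page_flags: Vec<u8>, // one flag byte per page in the mapping
}

impl VmMapping {
    /// Protect `[addr, addr + len)`, which may be a strict sub-range
    /// of the mapping.
    fn protect(&mut self, addr: usize, len: usize, flags: u8) {
        let start = (addr - self.base) / PAGE_SIZE;
        let end = (addr + len - self.base) / PAGE_SIZE;
        for f in &mut self.page_flags[start..end] {
            *f = flags;
            // Real code would also update the corresponding PTE here.
        }
    }
}
```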
Currently there is a workaround in VmAddressRegion::new_root to separate the address spaces of different processes. We do not need this on qemu, since we have a real page table there, so we should remove the workaround on qemu.
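One way to gate the workaround, assuming a `libos` cargo feature (the actual zCore cfg name may differ) and an illustrative base constant:

```rust
fn root_vmar_base(proc_index: usize) -> usize {
    const USER_BASE: usize = 0x2_0000_0000; // illustrative constant
    if cfg!(feature = "libos") {
        // No real page table: all "processes" share the host address
        // space, so carve out a disjoint window per process.
        USER_BASE + proc_index * 0x10_0000_0000
    } else {
        // On qemu the real page tables already isolate processes.
        USER_BASE
    }
}
```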
Another easy fix: vmar.*_memory should limit the length to the mapping size.
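A sketch of that clamp, with hypothetical parameter names; the caller is assumed to have already resolved `addr` to a mapping:

```rust
// Never read/write past the end of the mapping the address falls in.
// e.g. clamp_len(0x1000, 0x2000, 0x2800, 0x1000) == 0x800
fn clamp_len(map_base: usize, map_size: usize, addr: usize, len: usize) -> usize {
    let map_end = map_base + map_size;
    len.min(map_end.saturating_sub(addr))
}
```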
Also, we should add support for multiple mappings.
Should we instead use a unified lock in the vmar to avoid race conditions when reading/writing memory?
zx_vmar_protect somehow works now that I have changed how the flags of each page are stored.
The next thing could be implementing the ZX_VM_MAP_RANGE flag.
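A sketch of the ZX_VM_MAP_RANGE semantics: instead of populating pages lazily on page fault, commit and map every page of the range at map time. `commit_page` and `map_page` are hypothetical stand-ins for the VMO commit and page-table map operations.

```rust
const PAGE_SIZE: usize = 4096;

struct PhysFrame(usize);

// Hypothetical stand-ins for zCore's VMO commit and page-table map.
fn commit_page(vmo_offset: usize) -> Result<PhysFrame, ()> {
    Ok(PhysFrame(vmo_offset))
}
fn map_page(_vaddr: usize, _frame: PhysFrame) -> Result<(), ()> {
    Ok(())
}

// Eagerly commit and map every page of the range at map time.
fn map_range(base: usize, vmo_offset: usize, len: usize) -> Result<(), ()> {
    for off in (0..len).step_by(PAGE_SIZE) {
        let frame = commit_page(vmo_offset + off)?;
        map_page(base + off, frame)?;
    }
    Ok(())
}
```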
It seems the content of the vmo is not correct if we do not always map the vmo into the page table(!)
See core-test VmoClone2TestCase.ManyCloneMappingOffset
The first issue: we should not map the page as writable when the page fault was a read, otherwise we break the copy-on-write semantics.
The second issue: after a vmo.write call, the correct page to map may have changed. However, if we need to update the page table while committing the page, we get a deadlock when mapping the pages. Fortunately, if we are already updating the page table, we do not need to update it again from the vmo side, so it should be OK to just use try_lock instead of lock.
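A sketch of the read-fault rule, with illustrative flag constants: on a read fault, install the page without the write bit even if the mapping allows writes, so a later write still faults and the copy-on-write copy gets made.

```rust
const WRITE: u8 = 2; // illustrative permission bit

fn fault_install_flags(mapping_flags: u8, is_write_fault: bool) -> u8 {
    if is_write_fault {
        // The private COW copy has been made by the fault handler:
        // map with the mapping's full permissions.
        mapping_flags
    } else {
        // Keep the shared page write-protected so a later write
        // still traps and triggers the copy.
        mapping_flags & !WRITE
    }
}
```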
It should be possible to commit the page in the vmo when we are copying things to user space! (See core-test VmoTestCase.MapRead etc.)
We need #146 to solve this problem.
Another thing: VmoTestCase.ZeroPage page-faults forever, since the 0x8 bit (the reserved-bit violation flag in the x86 page-fault error code) is set in the error_code.
Another thing: contiguous VMOs may break.