ufrisk/MemProcFS

Rust vmm initialization gets stuck with pmem

Initializing vmm with the following parameters gets stuck right after this output in the console:

LcMemMap_AddRange: 0000000000001000-000000000009ffff -> 0000000000001000
LcMemMap_AddRange: 0000000000100000-0000000009afefff -> 0000000000100000
LcMemMap_AddRange: 000000000a000000-000000000a1fffff -> 000000000a000000
LcMemMap_AddRange: 000000000a210000-000000000affffff -> 000000000a210000
LcMemMap_AddRange: 000000000b021000-000000006e590fff -> 000000000b021000
LcMemMap_AddRange: 0000000079fff000-000000007bff8fff -> 0000000079fff000
LcMemMap_AddRange: 000000007bffe000-000000007bffffff -> 000000007bffe000
LcMemMap_AddRange: 0000000100000000-000000105de7ffff -> 0000000100000000
LeechCore v2.19.2: Open Device: pmem
[CORE]     DTB  located at: 00000000001ae000. MemoryModel: X64
[CORE]     NTOS located at: fffff80047e00000
[CORE]     PsInitialSystemProcess located at fffff80048b1ea60
[CORE]     EPROCESS located at ffffa50d0cfb6040
[PROCESS]  SYSTEM DTB: 00000000001ae000 EPROCESS: ffffa50d0cfb6040
0000    03 00 00 00 00 00 00 00  48 60 fb 0c 0d a5 ff ff   ........H`......
0010    48 60 fb 0c 0d a5 ff ff  58 60 fb 0c 0d a5 ff ff   H`......X`......
0020    58 60 fb 0c 0d a5 ff ff  00 e0 1a 00 00 00 00 00   X`..............
0030    38 83 f3 0c 0d a5 ff ff  78 c3 98 c2 0d a5 ff ff   8.......x.......
...
07c0    80 88 26 19 0d a5 ff ff  40 98 07 0e 0d a5 ff ff   ..&.....@.......
07d0    8c 01 00 00 03 01 00 00  b0 b0 cf 10 0d a5 ff ff   ................
07e0    90 ad cf 10 0d a5 ff ff  15 00 00 00 00 00 00 00   ................
07f0    00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00   ................
[PROCESS]  OK: TRUE
[PROCESS]      PID:  440 PPID: 540 STAT: 004 DTB:  028 DTBU: 000 NAME: 5a8 PEB: 550
[PROCESS]      FLnk: 448 BLnk: 450 oMax: 858 SeAu: 5c0 VadR: 7d8 ObjT: 570 WoW: 580
[PROCESS]  SYSTEM DTB: 00000000001ae000 EPROCESS: ffffa50d0cfb6040
[PROCESS]     # STATE  PID      DTB          EPROCESS         PEB          NAME
    // Assuming `.context` comes from the anyhow crate.
    use anyhow::Context;
    use memprocfs::Vmm;

    fn main() -> anyhow::Result<()> {
        let args = [
            "-printf",
            "-device",
            "pmem://winpmem_x64.sys",
            "-vv",
        ]
        .to_vec();

        let vmm = Vmm::new("vmm.dll", &args)
            .context("VMM initialization failed, try running as admin")?;

        // Never reached: initialization hangs after the output above.
        Ok(())
    }

I'm running from an admin shell btw :D

I'm on memprocfs = "5.12.0" and am using the vmm.dll from that release as well.
The winpmem_x64.sys I'm using is from this release, extracted through the command-line option.

Oh wow, don't mind me. Apparently restarting my PC fixed this? I'm gonna close the issue for now then, but might reopen it later if I find a way to reproduce it.

Nice to see you got it to work.

And no need to keep me updated about it; it's more a winpmem driver issue than a MemProcFS issue. Sometimes you can find out why it failed in the Windows event log.

I'm noticing that initializing vmm with pmem as a device right after a PC restart is almost instant, which is great. However, after having my PC on for a few hours and initializing vmm ~100 times in total, it starts to take longer and longer to initialize (two minutes by now).

When I initially had the issue, I also had my PC running for a few days already, so it checks out.

This might very well be an issue with winpmem; just letting you know in case it might be related to vmm and not winpmem 👀
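
For reference, a minimal sketch of how the slowdown could be measured, assuming the same vmm.dll and driver paths as in my snippet above (the loop count and timing scaffold are illustrative):

    use memprocfs::Vmm;
    use std::time::Instant;

    fn main() {
        // Same device arguments as in the initialization snippet.
        let args = ["-device", "pmem://winpmem_x64.sys"].to_vec();
        for i in 0..100 {
            let t = Instant::now();
            let vmm = Vmm::new("vmm.dll", &args).expect("init failed");
            println!("init #{}: {:?}", i, t.elapsed());
            drop(vmm); // close the handle so each iteration starts fresh
        }
    }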

Unfortunately I won't be able to do much about winpmem issues. I've never heard of these kinds of initialization times when running from memory dumps or FPGAs, so I'd assume it's related to winpmem.

Might it perhaps still be related to vmm? When I use '-printf -vv' it does seem to read memory instantly, as it immediately finds addresses like [CORE] EPROCESS located at ffffe301a39b6040, but then it gets stuck right after the line [PROCESS] # STATE PID DTB EPROCESS PEB NAME, where it's trying to enumerate all processes.

The last line after it loaded is [PROCESS] 22712 (list) 00000004 0000001ae000 ffffe301a39b6040 000000000000 System, so 22712 processes got enumerated?
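
For reference, a rough sketch of how the enumeration could be timed from the Rust API, using the vmm handle from my initialization snippet above (process_list() is the memprocfs crate call; the timing scaffold is illustrative):

    use memprocfs::{ResultEx, Vmm};
    use std::time::Instant;

    // `vmm` is the handle returned by Vmm::new() above.
    fn time_process_list(vmm: &Vmm) -> ResultEx<()> {
        let t = Instant::now();
        let procs = vmm.process_list()?;
        println!("enumerated {} processes in {:?}", procs.len(), t.elapsed());
        Ok(())
    }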

Wow, you have 22712 running (or terminated) processes. Yeah, then it will probably take time to start MemProcFS.

If you're able to share the memory dump I can take a look (zip it and send me the download link).

Like I mentioned earlier, I'd really need a memory dump with which I'm able to replicate this issue.

It's super rare to have this many processes running, and the most likely solution to make it work better would be to increase the number of buffers. This would, however, impact normal use negatively memory-wise, and MemProcFS is already a bit of a memory hog... Maybe some separate mode can be introduced to resolve this.

But I would still need a memory dump file which has this issue to be able to look into it better.

I did some profiling on my own machine, and this call is responsible for 98% of the CPU time.

VmmCachePrefetchPages3(H, pProcess, pObSet_vaAll, cbData, 0);

Just commenting out this prefetch (admittedly I have no idea how bad of an idea that is) makes the list of processes print almost instantly, and it doesn't break anything as far as I was able to test.

The function is there to speed things up. You're free to remove it and compile your own copy, of course, if it helps you.

But I'm not going to remove it on a whim and effectively slow down MemProcFS a bit for everyone without having confirmed the issue first. And even then it would be better to fix it than to slow things down in the common use case.

If you're able to share a memory dump file in which this problem exists I'd be happy to take a look though.

Perhaps add a flag to disable this optimization?
I could try making a PR for it if you were OK with that option.

No. I'm not going to add an obscure flag that only you will use. Then it's better if you patch MemProcFS yourself, remove that line, and keep your own internal version up to date.

Besides, there may be multiple places in MemProcFS that would have to be looked into if the root cause is what you're hinting at.

I really do need a memory image to be able to look into this. Can't you just replicate the issue in some VM and send me the memory image of it (with this issue)? I'll look into it.