This project illustrates the harnessing required to fuzz a Linux kernel module using AFL through the Xen VMI API. The tool utilizes Xen VM forks to perform the fuzzing, allowing multiple parallel AFL instances to fuzz at the same time. Coverage guidance for AFL is achieved by using Capstone to dynamically disassemble the target code and locate the next control-flow instruction. That instruction is breakpointed, and when the breakpoint triggers, MTF is activated to advance the VM; the process is then repeated. The tool allows fine-tuning how many control-flow instructions the fuzzer may encounter before terminating, which provides an alternative to timing out the fuzzing process.
This project is licensed under the terms of the MIT license
- Install dependencies
- Grab the project and all submodules
- Compile & install Xen
- Install & boot Xen from UEFI
- Create VM disk image
- Setup networking
- Create VM
- Grab the kernel's debug symbols & headers
- Configure the VM's console
- Build the kernel's debug JSON profile
- Compile & install Capstone
- Compile & install LibVMI
- Compile kfx
- Patch AFL
- Add harness
- Setup the VM for fuzzing
- Connect to the VM's console
- Insert the target kernel module
- Start fuzzing using AFL
- Debugging
- FAQ
The following instructions have been mainly tested on Debian Bullseye and Ubuntu 20.04. The actual package names may vary on different distros/versions. You may also find https://wiki.xenproject.org/wiki/Compiling_Xen_From_Source helpful if you run into issues.
sudo apt install git build-essential libfdt-dev libpixman-1-dev libssl-dev libsdl1.2-dev autoconf libtool xtightvncviewer tightvncserver x11vnc uuid-runtime uuid-dev bridge-utils python3-dev liblzma-dev libc6-dev wget bcc bin86 gawk iproute2 libcurl4-openssl-dev bzip2 libpci-dev libc6-dev-i386 linux-libc-dev zlib1g-dev libncurses5-dev patch libvncserver-dev libsdl-dev iasl libbz2-dev e2fslibs-dev ocaml libx11-dev bison flex ocaml-findlib xz-utils gettext libyajl-dev libaio-dev cabextract libglib2.0-dev automake libjson-c-dev libfuse-dev autoconf-archive kpartx python3-pip gcc-7 libsystemd-dev cmake snap
git clone https://github.com/intel/kernel-fuzzer-for-xen-project
cd kernel-fuzzer-for-xen-project
git submodule update --init
There have been some compiler issues with newer GCC versions, so set your GCC version to gcc-7:
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-7 7
Make sure the pci include folder exists at /usr/include/pci. In case it doesn't, create a symbolic link to where it's installed:
sudo ln -s /usr/include/x86_64-linux-gnu/pci /usr/include/pci
Before installing Xen from source make sure you don't have any pre-existing Xen packages installed:
sudo apt remove xen-* libxen*
Now we can compile & install Xen:
cd xen
echo CONFIG_EXPERT=y > xen/.config
echo CONFIG_MEM_SHARING=y >> xen/.config
./configure --disable-pvshim --enable-githttp
make -C xen olddefconfig
make -j4 dist-xen
make -j4 dist-tools
sudo su
make -j4 install-xen
make -j4 install-tools
echo "/usr/local/lib" > /etc/ld.so.conf.d/xen.conf
ldconfig
echo "none /proc/xen xenfs defaults,nofail 0 0" >> /etc/fstab
systemctl enable xen-qemu-dom0-disk-backend.service
systemctl enable xen-init-dom0.service
systemctl enable xenconsoled.service
echo "GRUB_CMDLINE_XEN_DEFAULT=\"hap_1gb=false hap_2mb=false dom0_mem=4096M\"" >> /etc/default/grub
update-grub
reboot
Make sure to pick the Xen entry in GRUB when booting. You can verify you booted into Xen correctly by running xen-detect.
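For example, a quick sanity check from dom0 (assuming the Xen toolstack installed correctly):
xen-detect
sudo xl info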
Note that we assign 4GB RAM to dom0 above, which is a safe default, but feel free to increase it if your system has a lot of RAM available.
If Xen doesn't boot from GRUB, you can try booting it from UEFI directly:
mkdir -p /boot/efi/EFI/xen
cp /usr/lib/efi/xen.efi /boot/efi/EFI/xen
cp /boot/vmlinuz /boot/efi/EFI/xen
cp /boot/initrd.img /boot/efi/EFI/xen
Gather your kernel boot command line from /proc/cmdline & paste the following into /boot/efi/EFI/xen/xen.cfg:
[global]
default=xen
[xen]
options=console=vga hap_1gb=false hap_2mb=false
kernel=vmlinuz console=hvc0 earlyprintk=xen <YOUR KERNEL'S BOOT COMMAND LINE>
ramdisk=initrd.img
Create an EFI boot entry for it:
efibootmgr -c -d /dev/sda -p 1 -w -L "Xen" -l "\EFI\xen\xen.efi"
reboot
You may want to use the -C option above if you are on a remote system: it creates the boot entry without making it the default, so you won't lose access if Xen fails to boot and you don't have remote KVM. Use efibootmgr --bootnext <BOOT NUMBER FOR XEN> to try booting Xen only on the next reboot.
20GB is usually sufficient, but if you are planning to compile the kernel from source you will want to increase that.
dd if=/dev/zero of=vmdisk.img bs=1G count=20
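Alternatively, if you want a sparse file that doesn't consume 20GB upfront, you can create the image with truncate (assuming your filesystem supports sparse files):
truncate -s 20G vmdisk.img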
sudo su
brctl addbr xenbr0
ip addr add 10.0.0.1/24 dev xenbr0
ip link set xenbr0 up
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
exit
You might also want to save this as a script or add it to /etc/rc.local
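For example, a minimal sketch, assuming you saved the commands above as /usr/local/bin/fuzz-net.sh and that your distro still executes /etc/rc.local at boot (make sure the call lands before any exit 0 line in that file):
sudo chmod +x /usr/local/bin/fuzz-net.sh
echo '/usr/local/bin/fuzz-net.sh' | sudo tee -a /etc/rc.local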
Create the domain configuration file by pasting the following, for example into debian.cfg, then tune it as you see fit. It's important that the VM has only a single vCPU.
name="debian"
builder="hvm"
vcpus=1
maxvcpus=1
memory=2048
maxmem=2048
hap=1
boot="cd"
serial="pty"
vif=['bridge=xenbr0']
vnc=1
vnclisten="0.0.0.0"
vncpasswd='1234567'
usb=1
usbdevice=['tablet']
# Make sure to update the paths below!
disk=['file:/path/to/vmdisk.img,xvda,w',
'file:/path/to/debian.iso,xvdc:cdrom,r']
Start the VM with:
sudo xl create debian.cfg
You can connect to the VNC session using your favorite VNC viewer or by simply running:
vncviewer localhost
In case it's a remote system, replace localhost with the IP of the system; note however that the VNC connection is not encrypted, so it may be better to set up an SSH tunnel to connect through.
Follow the installation instructions in the VNC session. Configure the network manually to 10.0.0.2 with a default route via 10.0.0.1.
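If you need to adjust this after installation, here is a sketch of the relevant /etc/network/interfaces stanza inside a Debian VM (the enp0s3 interface name is an assumption; check yours with ip addr):
auto enp0s3
iface enp0s3 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    gateway 10.0.0.1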
Inside the VM, if you are using Debian, you can install everything right away:
su -
apt update && apt install linux-image-$(uname -r)-dbg linux-headers-$(uname -r)
On Ubuntu, please follow this tutorial to install the kernel debug symbols: https://wiki.ubuntu.com/Debug%20Symbol%20Packages
From the VM, copy /usr/lib/debug/boot/vmlinux-$(uname -r) and /boot/System.map-$(uname -r) to your dom0, for example using scp.
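For example, from dom0 (assuming the VM runs an SSH server and permits root login; substitute the VM's kernel version):
scp root@10.0.0.2:/usr/lib/debug/boot/vmlinux-<VM KERNEL VERSION> ~/
scp root@10.0.0.2:/boot/System.map-<VM KERNEL VERSION> ~/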
Inside the VM, edit /etc/default/grub and add console=ttyS0 to the GRUB_CMDLINE_LINUX_DEFAULT line. Run update-grub afterwards and reboot.
Back in dom0, we'll convert the DWARF debug information we copied in Step 7 to JSON. We'll need Go 1.13 or newer for this; you can install it using snap as follows:
sudo snap install --classic go
If your distro's repository has Go 1.13 or newer, you can also install it from there (the package name is golang-go).
Now we can build dwarf2json and generate the JSON profile. Change the paths to match your setup and make sure your dom0 has enough RAM as this may take up a lot of it.
cd dwarf2json
go build
./dwarf2json linux --elf /path/to/vmlinux --system-map /path/to/System.map > ~/debian.json
cd ..
We use a more recent Capstone version from the submodule (4.0.2) than what most distros ship by default. If your distro ships a newer version, you could also just install libcapstone-dev.
cd capstone
mkdir build
cd build
cmake ..
make
sudo make install
sudo ldconfig
cd ../..
cd libvmi
autoreconf -vif
./configure --disable-kvm --disable-bareflank --disable-file
make -j4
sudo make install
sudo ldconfig
cd ..
Test that base VMI works with:
sudo vmi-process-list --name debian --json ~/debian.json
Now compile kfx itself:
autoreconf -vif
./configure
make -j4
cd AFL
patch -p1 < ../patches/0001-AFL-Xen-mode.patch
make
cd ..
The target kernel module needs to be harnessed using two CPUID instructions with leaf 0x13371337. See the testmodule folder for an example.
static inline void harness(void)
{
    asm (
        /* Preserve the registers clobbered by CPUID */
        "push %rax\n\t"
        "push %rbx\n\t"
        "push %rcx\n\t"
        "push %rdx\n\t"
        /* The magic leaf 0x13371337 marks the start/end of the fuzzed segment for kfx */
        "movq $0x13371337,%rax\n\t"
        "cpuid\n\t"
        "pop %rdx\n\t"
        "pop %rcx\n\t"
        "pop %rbx\n\t"
        "pop %rax\n\t"
    );
}
You can insert the harness before and after the code segment you want to fuzz:
harness();
x = test((int)test1[0]);
harness();
You can also use software breakpoints (0xCC) as your harness, which can be placed by standard debuggers like GDB. Use --harness breakpoint for this mode; it is particularly useful when you don't have access to the target's source code to compile it with the CPUID-based harness.
Start ./kfx with the --setup option specified. This will wait for the domain to issue the harness CPUID and will leave the domain paused, ensuring that the VM is at the starting location of the code we want to fuzz when we fork it.
sudo ./kfx --domain debian --json ~/debian.json --setup
You may optionally want to do this in a screen session, or you will need a separate shell to continue.
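For example, assuming screen is installed:
screen -S kfx-setup
sudo ./kfx --domain debian --json ~/debian.json --setup
Detach with CTRL+A D and reattach later with screen -r kfx-setup.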
sudo xl console debian
You should see a login prompt when you press enter. Proceed to log in.
There is a testmodule included with the repository; you can copy it into the VM and compile it simply by running make. Afterwards, load it via:
sudo insmod testmodule.ko
The VM's console should now appear frozen. This is normal and expected. You can exit the console with CTRL+]. kfx should now also have exited with the message Parent ready.
Everything is now ready for fuzzing to begin. The kernel fuzzer takes the input via the --input flag, its size via --input-limit, and the target memory address to write it to via --address. With AFL, the input file path needs to be @@. You also have to first seed your fuzzer with an input that doesn't produce a crash in the code segment being fuzzed.
mkdir input
mkdir output
echo -n "not_beef" > input/beef
sudo ./AFL/afl-fuzz -i input/ -o output/ -m 500 -X -- ./kfx --domain debian --json ~/debian.json --input @@ --input-limit 8 --address 0x<KERNEL VIRTUAL ADDRESS TO WRITE INPUT TO>
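To find the kernel virtual address to write the input to, you can look up the target buffer's symbol in /proc/kallsyms inside the VM once the module is loaded; a hypothetical example, assuming the module exposes a global buffer named test_buffer:
sudo grep test_buffer /proc/kallsyms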
You can also specify the --limit option to set how many control-flow instructions you want to encounter before timing out the fuzz iteration. This is an alternative to the AFL built-in time-out model.
The speed of the fuzzer will vary based on how much code you are fuzzing: the more code you exercise, the fewer iterations per second you will see. The testmodule included with the project has been observed to produce 200-600 iterations per second on i5-family CPUs. Don't forget: you can run multiple instances of the fuzzer to speed things up even further by utilizing more CPU cores on your machine.
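For example, a sketch of running two instances using stock AFL's parallel mode, assuming the patched AFL retains the standard -M/-S flags:
sudo ./AFL/afl-fuzz -i input/ -o output/ -M fuzzer01 -m 500 -X -- ./kfx --domain debian --json ~/debian.json --input @@ --input-limit 8 --address 0x<KERNEL VIRTUAL ADDRESS TO WRITE INPUT TO>
sudo ./AFL/afl-fuzz -i input/ -o output/ -S fuzzer02 -m 500 -X -- ./kfx --domain debian --json ~/debian.json --input @@ --input-limit 8 --address 0x<KERNEL VIRTUAL ADDRESS TO WRITE INPUT TO>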
After you are finished with fuzzing, the VM can be unpaused and should resume normally without any side-effects.
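For example:
sudo xl unpause debian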
You can run the kernel fuzzer directly to inject an input into a VM fork without AFL; adding the --debug option will provide verbose output.
sudo ./kfx --domain debian --json ~/debian.json --debug --input /path/to/input/file --input-limit <MAX SIZE TO WRITE> --address 0x<KERNEL VIRTUAL ADDRESS TO WRITE INPUT TO>
Can I run this on ring3 applications?
You will likely get better performance running AFL natively on a ring3 application, but nothing prevents you from running it via this tool. You would need to adjust the sink points in src/sink.h to catch the crash handlers that are called for ring3 apps. For example, do_trap_error in Linux handles segfaults, so you would probably want to catch that.
Can I fuzz Windows?
This tool currently only targets Linux. You can modify the harness to target Windows or any other operating system by adjusting the sink points in src/sink.h that are used to catch a crash condition. You could also manually define the sink points' addresses in case the operating system is not supported by LibVMI. In case you want to fuzz closed-source portions of Windows where you can't inject the cpuid-based harness, you can use --harness breakpoint to switch to using breakpoints as your harness. This allows you to mark the code region to fuzz with a standard debugger like WinDBG.
Can I just pipe /dev/random in as fuzzing input?
Yes! You can use --loopmode to simply read input from whatever source you want and pipe it into the VM forks. In this mode the coverage trace is disabled, so you will see more iterations per second.
How do I shutdown the VM after I'm done?
You can issue xl shutdown <domain name> to initiate shutdown. If there are VM forks active, you need to issue xl destroy <domain id> for each fork before shutdown.
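For example, assuming the parent domain is named debian:
sudo xl list
sudo xl destroy <FORK DOMAIN ID>
sudo xl shutdown debian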
Any tricks to increase performance?
To max out performance you can boot Xen with "dom0_max_vcpus=2 sched=null spec-ctrl=no-xen", which assigns only two vCPUs to dom0, switches to the null scheduler, and disables the speculative-execution hardening features. You can also add "smt=0" to disable hyper-threading. Make sure your system has enough physical cores to run each vCPU, as they get pinned.
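For example, the GRUB line from the install step could become the following (a sketch; merge with your existing options and run update-grub afterwards):
GRUB_CMDLINE_XEN_DEFAULT="hap_1gb=false hap_2mb=false dom0_mem=4096M dom0_max_vcpus=2 sched=null spec-ctrl=no-xen smt=0"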
*Other names and brands may be claimed as the property of others