MicroV

A micro hypervisor for running micro VMs

Description

The MicroV Hypervisor is an open source, micro-hypervisor led by Assured Information Security, Inc., designed specifically to run micro VMs (i.e., tiny virtual machines that require little or no emulation).

Advantages:

Unlike existing hypervisors, MicroV's design has some unique advantages, including:

  • Cross-Platform Support: Open source hypervisors capable of supporting guest virtual machines include Xen, KVM, and VirtualBox. The first two only support Linux and BSD hosts, while VirtualBox extends host support to Windows and macOS at the cost of security. MicroV aims to support as many operating systems as possible, including Windows, Linux, UEFI and others, while maintaining performance and security. This is accomplished by compiling the hypervisor as a self-contained binary, independent of the root operating system. All of the applications provided by MicroV are written in raw C++ to maximize cross-platform support, and any code that is platform specific, including drivers, is broken into platform-specific logic (which is minimal) and common logic that can be used across all platforms (see the sketch after this list).

  • Disaggregation and Deprivilege: None of the above mentioned hypervisors have a true focus on a reduced Trusted Computing Base (TCB). Xen comes the closest and has made amazing progress in this direction, but a fully disaggregated and deprivileged hypervisor has yet to be fully realized or supported. MicroV's design starts with a microkernel architecture inside the hypervisor, running most of its internal logic at lower privilege levels. On Intel, when virtualization is enabled, the CPU is divided into the host and the guest. The hypervisor runs in the host, while the operating system runs in the guest. Both the host and the guest have a Ring 0 and Ring 3 privilege level. Monolithic hypervisors like Xen and KVM run most, if not all, of the hypervisor in Ring 0 of the host, which is typically called Ring -1. (The picture is far more complicated once System Management Mode (SMM) is included, which adds two more Ring 0s, so let's pretend it does not exist for now.) This is not the only way to construct a hypervisor, however. Like the operating system in each guest, the hypervisor can use both Ring 0 and Ring 3 in the host, keeping the Ring 0 component as small as possible while running most of the logic in Ring 3. In some ways, this is how KVM works: the kernel part of KVM is the entire Linux kernel, while VM management and emulation is handled in Ring 3 by QEMU (usually). The obvious problem with this design, besides the lack of cross-platform support, is that the TCB is huge. MicroV's design leverages the Bareflank microkernel, a small kernel capable of executing hypervisor extensions in Ring 3 of the host. By itself, Bareflank's microkernel is not capable of executing virtual machines; instead it relies on an extension to provide the meat and potatoes. MicroV, at its most basic level, is a Bareflank extension that runs in Ring 3 of the host and provides guest virtual machine support.

  • Performance: Another important focus of the project is performance. Existing hypervisors make heavy use of emulation where virtualization could be used instead. Furthermore, Xen and KVM only support Linux and BSD hosts and therefore cannot leverage some of the performance benefits of macOS, Android and even Windows, such as scheduling and power management, which becomes evident when attempting to use these hypervisors on laptops and mobile devices. Other approaches to microkernel hypervisors attempt to provide their own device drivers, schedulers and power management algorithms. This limits their ability to widely support hardware and guarantees a compromised user experience. The operating system that was built for a device should be the operating system that manages the device.

  • Virtual Device Support: Hypervisors like KVM manage all physical devices in Ring 0 of the host (the most privileged code on the system) using the Linux kernel. This is one way in which Xen is more deprivileged than KVM: unlike KVM, which runs device drivers in Ring 0 of the host, Xen runs device drivers in Ring 0 of the guest (specifically in Dom 0). MicroV aims to take a similar approach to Xen, keeping the code in the host as small as possible and instead letting the guest operating system manage the physical devices it is given. All virtual device backend drivers run in Ring 0 or Ring 3 of the guest root VM, which is the main virtual machine on the system.

  • Scheduling: Although closely related to performance, MicroV leverages a hybrid design, incorporating the design goals of Xen to provide disaggregation and deprivilege while leveraging the scheduling benefits of hypervisor designs like KVM and VirtualBox. Specifically, MicroV does not include its own scheduler, relying instead on the root VM's operating system to schedule each VM along with the rest of the critical tasks it must perform. Not only does this reduce the overall complexity and size of MicroV, but it allows MicroV to leverage the advanced schedulers already present in modern operating systems. Since MicroV is designed with cross-platform support in mind, this also means that support for custom schedulers is possible, including RTOS schedulers.

  • AUTOSAR Compliance: Although Xen, KVM and VirtualBox provide various levels of testing, continuous integration and continuous deployment, MicroV aims to take this a step further, providing the highest levels of testing possible to support standards such as ISO 26262. In addition, MicroV was re-engineered from the ground up using the AUTOSAR coding guidelines (MicroV is our third iteration of this hypervisor project). This will not only improve reliability and security, but also enable the use of MicroV in critical-system environments where high levels of testing are required, such as automotive, medical, space and government.

  • Early Boot and Late Launch Support: Xen, KVM and VirtualBox each support either early boot (i.e., the hypervisor starts first during boot) or late launch (i.e., the operating system starts first, and then the hypervisor starts), but none of them support both.

    Early boot is critical to supporting a fully deprivileged host, while late launch is easier to set up, configure and use. Late launch is also a lot easier on developers, removing the need to reboot each time a line of code changes in the hypervisor itself. MicroV aims to support both early boot and late launch from inception, giving both users and developers as many options as possible.

  • Licensing: Most of the hypervisors available today in open source leverage the GPL license, making it difficult to incorporate their technologies into closed source, commercial products. MicroV is licensed under MIT. Feel free to use it however you wish. All we ask is that if you find and fix something wrong with the open source code, you work with us to upstream the fix. We also love pull requests, RFCs, bug reports and feature requests.
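
To make the cross-platform point above more concrete, here is a minimal sketch of the platform-specific vs. common split. The `platform_t` interface, the `posix_platform_t` implementation and the page-allocation operations are hypothetical names used for illustration only; they are not MicroV's or Bareflank's actual APIs.

```cpp
#include <cstdlib>
#include <iostream>

// Hypothetical per-platform interface (illustration only). Each supported
// OS provides one small implementation of this class; everything else is
// common code that never touches OS-specific APIs directly.
class platform_t
{
public:
    virtual ~platform_t() = default;
    virtual void *alloc_page() = 0;          // allocate one page of memory
    virtual void free_page(void *page) = 0;  // release a previously allocated page
};

// Example platform-specific implementation for a POSIX-style host.
class posix_platform_t final : public platform_t
{
public:
    void *alloc_page() override { return std::aligned_alloc(4096, 4096); }
    void free_page(void *page) override { std::free(page); }
};

// Common logic: works unchanged on any platform that implements platform_t.
void run_common_logic(platform_t &platform)
{
    void *page = platform.alloc_page();
    std::cout << "allocated a page at " << page << '\n';
    platform.free_page(page);
}

int main()
{
    posix_platform_t platform{};
    run_common_logic(platform);
    return 0;
}
```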

Disadvantages:

No design is without its disadvantages:

  • Limited Guest VM Support: As stated above, on Intel the CPU is divided into the host and the guest. The hypervisor runs in the host, and the operating system runs in the guest. The root operating system, which Xen would call Dom 0, can be any operating system. The initial version of MicroV will support Windows and Linux, with limited support for UEFI. From there, MicroV will let you create additional, very small guest virtual machines. MicroV will only support enlightened operating systems running in these guest VMs. Because they are enlightened, we can keep their size really small and, in some cases, remove the need for emulation entirely, which is where the term micro VM comes from (i.e., a really small VM that requires little to no emulation). This ultimately means MicroV will support Linux, unikernels and enlightened applications as guests, with no support for more complicated operating systems like Windows and macOS. Support for these types of operating systems is possible, but that is a bridge we will cross in the future.

  • VM DoS Attacks: Since the main operating system is responsible for scheduling micro VMs for execution, it is possible that an attack on this operating system could prevent the micro VMs from executing (i.e., a DoS attack). For most applications, this type of attack is a non-issue, as isolation is more important than resilience against DoS attacks. With that said, there is no reason why a micro VM could not be in charge of scheduling VMs with its own scheduling and power management software (just as it would be possible to run all of the tool stack software in a dedicated micro VM as well). Like Xen, MicroV is designed to ensure these facilities are not dependent on the main operating system; the upstream project simply defaults to this type of configuration as it's the larger, more prevalent use case. Keep in mind that there is always a tradeoff: although the upstream approach is vulnerable to DoS attacks, implementing your own scheduler and power management software is no easy task and should be limited to specific use cases (unless performance and battery life are not important).

Interested In Working For AIS?

Check out our Can You Hack It?® challenge and test your skills! Submit your score to show us what you’ve got. We have offices across the country and offer competitive pay and outstanding benefits. Join a team that is not only committed to the future of cyberspace, but to our employees’ success as well.

Specifications

The following defines the VM Specification (i.e., the CPUID/hypercall interface):
MicroV VM Specification
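
As a rough illustration of what a CPUID-based interface means in practice, the sketch below first checks the hypervisor-present bit and then reads the conventional hypervisor CPUID leaf (0x40000000) to retrieve the hypervisor's vendor signature. This is only a generic example of the mechanism; the exact leaves, signatures and hypercall numbers MicroV exposes are defined by the specification linked above, not by this example.

```cpp
#include <array>
#include <cpuid.h>   // GCC/Clang helper macros for the CPUID instruction
#include <cstring>
#include <iostream>

int main()
{
    unsigned int eax{}, ebx{}, ecx{}, edx{};

    // CPUID leaf 1: ECX bit 31 is the "hypervisor present" bit, set when
    // this code is running as a guest of a hypervisor.
    __cpuid(1, eax, ebx, ecx, edx);
    if ((ecx & (1U << 31)) == 0U) {
        std::cout << "not running under a hypervisor\n";
        return 1;
    }

    // Leaf 0x40000000 is the conventional start of the hypervisor CPUID
    // range; EBX:ECX:EDX return a 12-byte vendor signature.
    __cpuid(0x40000000U, eax, ebx, ecx, edx);

    std::array<char, 13> signature{};
    std::memcpy(&signature[0], &ebx, sizeof(ebx));
    std::memcpy(&signature[4], &ecx, sizeof(ecx));
    std::memcpy(&signature[8], &edx, sizeof(edx));

    std::cout << "hypervisor signature: " << signature.data() << '\n';
    return 0;
}
```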

Roadmap

The code that is currently in this repo is a snapshot of our previous Boxy repo that already provides limited support for Linux virtual machines. If you need something right now, please see Boxy instead as it is more up-to-date and already provides some basic functionality. MicroV is the third iteration of this hypervisor (the original was called the hyperkernel), and will be the main hypervisor project moving forward. So with that said, this is a rough overview of our roadmap for this project (as well as the other Bareflank projects):

  • Before we can work on MicroV, the Bareflank hypervisor itself needs a fair amount of TLC. Specifically, this includes finishing the microkernel implementation upstream, porting Bareflank to use the Bareflank Support Library (BSL) instead of libc++, and stripping away a lot of the APIs that are no longer supported. As we perform this work, we are also adding native support for Windows (no need for Cygwin), AUTOSAR compliance, a new build system, and support for both AMD and Intel (ARM will come in the future). We expect this work will take us at least into Q3, maybe early Q4, of 2020.
  • Once the Bareflank hypervisor is ready, we will begin the port of MicroV to the new architecture, including AMD/Intel support and AUTOSAR compliance. This will take us into early Q1 of 2021.
  • The final step of our roadmap is to remove the remaining forms of emulation and implement the rest of MicroV's PV interface. The initial version of MicroV will have support for Console, Disk and Net, using a design that is similar to Xen, allowing backend and frontend support to be executed in any micro VM (something that KVM's virtio is not capable of supporting); a rough sketch of this split-driver idea follows this list. We believe this will take the better part of 2021, as it will require the implementation of multiple virtual device drivers for both Linux and Windows.
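
For background on the backend/frontend (split-driver) model mentioned above, the sketch below shows the general idea: a frontend driver in one VM and a backend driver in another exchange requests through a simple single-producer/single-consumer ring placed in shared memory. The structure and field names are purely illustrative and are not MicroV's actual PV interface, which will be defined by the MicroV VM Specification.

```cpp
#include <array>
#include <atomic>
#include <cstdint>

// Illustrative shared-memory request ring (single producer, single consumer).
// A frontend driver in one VM pushes requests; a backend driver in another
// VM pops and services them. Notification (event channels, hypercalls, etc.)
// is omitted for brevity.
struct ring_t
{
    static constexpr std::uint32_t size{16U};  // number of slots (power of two)

    struct request_t
    {
        std::uint64_t sector{};  // e.g., which disk sector to read
        std::uint64_t gpa{};     // guest-physical address of the data buffer
    };

    std::atomic<std::uint32_t> producer{};  // advanced by the frontend
    std::atomic<std::uint32_t> consumer{};  // advanced by the backend
    std::array<request_t, size> slots{};
};

// Frontend side: queue a request if there is room in the ring.
inline bool frontend_push(ring_t &ring, ring_t::request_t const &req)
{
    auto const prod = ring.producer.load(std::memory_order_relaxed);
    auto const cons = ring.consumer.load(std::memory_order_acquire);
    if (prod - cons >= ring_t::size) {
        return false;  // ring is full
    }
    ring.slots[prod % ring_t::size] = req;
    ring.producer.store(prod + 1U, std::memory_order_release);
    return true;  // a real frontend would now notify the backend
}

// Backend side: service the next pending request, if any.
inline bool backend_pop(ring_t &ring, ring_t::request_t &req)
{
    auto const cons = ring.consumer.load(std::memory_order_relaxed);
    auto const prod = ring.producer.load(std::memory_order_acquire);
    if (cons == prod) {
        return false;  // ring is empty
    }
    req = ring.slots[cons % ring_t::size];
    ring.consumer.store(cons + 1U, std::memory_order_release);
    return true;
}
```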

Once the above work is complete, we will cut our "first" version of MicroV. Until then, you are likely better off using Boxy, at least until the 2020 work is complete (Boxy currently doesn't have PV driver support, so MicroV will reach feature parity with Boxy near the beginning of 2021). Future versions of MicroV will likely include the following before we would consider the project "feature complete":

  • Nested virtualization support
  • PCI pass-through support
  • Libvirt support
  • Optimizations
  • First class support for LibVMI
  • First class support for some unikernels

If there are additional features that you would like to see on this list, please add a feature request to our issue tracker, or feel free to reach out to us on Slack or via email.