Say Y here to see options for using your Linux host to run other operating systems inside virtual machines (guests). This option alone does not add any kernel code. If you say N, all options in this submenu will be skipped and disabled.
Support running unmodified book3s_32 guest kernels in virtual machines on book3s_32 host processors. This module provides access to the hardware capabilities through a character device node named /dev/kvm. If unsure, say N.
Support running unmodified book3s_64 and book3s_32 guest kernels in virtual machines on book3s_64 host processors. This module provides access to the hardware capabilities through a character device node named /dev/kvm. If unsure, say N.
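The /dev/kvm node mentioned above is the userspace entry point for KVM on every architecture that provides it. A minimal sketch of probing the device follows; the ioctl number is computed from _IO(KVMIO, 0x00) as defined in <linux/kvm.h>:

```python
import fcntl
import os

# KVM_GET_API_VERSION = _IO(0xAE, 0x00); KVMIO is 0xAE in <linux/kvm.h>.
KVM_GET_API_VERSION = (0xAE << 8) | 0x00

try:
    kvm = os.open("/dev/kvm", os.O_RDWR)
except OSError as err:
    # Module not loaded, hardware unsupported, or insufficient permissions.
    print("KVM unavailable:", err)
else:
    # The API version is a stable userspace ABI (12 since Linux 2.6).
    print("KVM API version:", fcntl.ioctl(kvm, KVM_GET_API_VERSION))
    os.close(kvm)
```

Every further KVM operation (creating a VM, creating vcpus) is an ioctl on this file descriptor or on descriptors derived from it.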
Support running unmodified book3s_64 guest kernels in virtual machines on POWER7 and newer processors that have hypervisor mode available to the host. If you say Y here, KVM will use the hardware virtualization facilities of POWER7 (and later) processors, meaning that guest operating systems will run at full hardware speed using supervisor and user modes. However, this also means that KVM is not usable under PowerVM (pHyp), is only usable on POWER7 or later processors, and cannot emulate a different processor from the host processor. If unsure, say N.
Support running guest kernels in virtual machines on processors without using hypervisor mode in the host, by running the guest in user mode (problem state) and emulating all privileged instructions and registers. This is not as fast as using hypervisor mode, but works on machines where hypervisor mode is not available or not usable, and can emulate processors that are different from the host processor, including emulating 32-bit processors on a 64-bit host.
Calculate time taken for each vcpu in the real-mode guest entry, exit, and interrupt handling code, plus time spent in the guest and in nap mode due to idle (cede) while other threads are still in the guest. The total, minimum, and maximum times in nanoseconds, together with the number of executions, are reported in debugfs in kvm/vm#/vcpu#/timings. The overhead is on the order of 30-40 ns per exit on POWER8. If unsure, say N.
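Each line of the timings file carries the statistics described above: a handler name, the number of executions, and the total, minimum, and maximum times in nanoseconds. A hypothetical parse of one such line (the field layout and the sample values are assumed for illustration, not taken from a real POWER8 host):

```python
# One line of a kvm/vm#/vcpu#/timings report: handler name followed by
# execution count, then total, minimum, and maximum times in nanoseconds.
# The sample values below are made up for illustration.
sample = "rm_entry: 4096 163840 30 40"

name, *fields = sample.split()
count, total_ns, min_ns, max_ns = (int(f) for f in fields)
avg_ns = total_ns / count  # derive the mean from total and count

print(f"{name.rstrip(':')}: {count} runs, "
      f"avg {avg_ns:.1f} ns (min {min_ns}, max {max_ns})")
```

Dividing the total by the count gives the average cost per exit, which is the number to compare against the 30-40 ns overhead figure quoted above.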
Calculate elapsed time for every exit/enter cycle. A per-vcpu report is available in debugfs at kvm/vm#_vcpu#_timing. The overhead is relatively small; however, this is not recommended for production environments. If unsure, say N.
Support running unmodified E500 guest kernels in virtual machines on E500v2 host processors. This module provides access to the hardware capabilities through a character device node named /dev/kvm. If unsure, say N.
Support running unmodified E500MC/E5500/E6500 guest kernels in virtual machines on E500MC/E5500/E6500 host processors. This module provides access to the hardware capabilities through a character device node named /dev/kvm. If unsure, say N.
Enable support for emulating MPIC devices inside the host kernel, rather than relying on userspace to emulate them. Currently, support is limited to certain versions of Freescale's MPIC implementation.
Include support for the XICS (eXternal Interrupt Controller Specification) interrupt controller architecture used on IBM POWER (pSeries) servers.