Primary goal is improving utilisation. Physical resources are used as a pool for logical resources.
Logical resources:
Compute resources: CPU, RAM, Caches, Interrupts, Timers, motherboard facilities, IO facilities
Forms of abstracting logical resources from a physical resource pool.
Simulate what a system does, not how
Uses different approaches to provide logical IT resources: Slicing, partitioning, aggregation, emulation, hardware extension
Hypervisor/Virtual Machine Monitor (VMM): Software component that provides and manages an environment offering logical resources. An environment must have these properties (after Popek & Goldberg): equivalence, resource control, efficiency.
Virtual Machine (VM): Instance of logical resource
Virtual Machine Image (VMI): Image with virtual disk and bootable OS installed on it
Virtualization levels:
Baremetal Hypervisors - run directly on the host hardware, full control of the hardware
Hardware -> VMM -> Guest OS (VM1..n)
Pros:
Cons:
Examples: VMware ESX and ESXi, Microsoft Hyper-V
Hosted Hypervisors - run on top of the host OS
Hardware -> Host OS -> VMM -> Guest OS (VM1..n)
Pros:
Cons:
Examples: Oracle VM VirtualBox, Microsoft Virtual PC
The x86 architecture enforces resource management and access rights to memory and hardware via privileged CPU instructions.
Virtualizing privileged instructions is complicated. An OS within a VM must be prevented from directly executing privileged instructions, i.e. the guest OS must not run in Ring 0.
Binary Translation: traps sensitive instructions and translates guest kernel code at run time, replacing non-virtualizable instructions with new instruction sequences that have the intended effect on the virtual hardware.
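The binary-translation idea can be sketched in toy form. This is pure illustration: the "instructions" are strings, and `VMM_EMULATE` is a made-up placeholder for trapping into the hypervisor, not a real mechanism.

```python
# Toy sketch of binary translation: the VMM scans a block of guest kernel
# "code" and rewrites non-virtualizable instructions before letting it run.
# POPF and SGDT are real examples of x86 instructions that touch privileged
# state without trapping; everything else here is simplified.
SENSITIVE = {"POPF", "SGDT"}

def translate(block):
    """Rewrite a basic block, replacing sensitive instructions with
    calls into the VMM that emulate their effect on virtual hardware."""
    out = []
    for insn in block:
        if insn in SENSITIVE:
            out.append(f"VMM_EMULATE({insn})")  # redirect into the hypervisor
        else:
            out.append(insn)                    # safe: run natively
    return out

guest_block = ["MOV", "POPF", "ADD", "SGDT"]
print(translate(guest_block))
# → ['MOV', 'VMM_EMULATE(POPF)', 'ADD', 'VMM_EMULATE(SGDT)']
```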
Hardware-assisted virtualization: privileged instructions run in a new CPU execution mode (e.g. Intel VT-x, AMD-V). Virtualization with hardware extensions is known as Hardware Virtual Machines (HVM).
The VMM maintains a Shadow Page Table. The MMU looks directly into the Shadow Page Table for virtual address accesses from the guest OS.
Two types of memory address translations: guest virtual address to guest physical address (via the guest OS page tables), and guest physical address to host physical address (via the VMM).
Virtual and physical addresses are per virtual machine; host addresses are per physical host.
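The two translations and the shadow page table can be modeled as composed lookup tables. This is a toy sketch: real translation works at page granularity with multi-level page tables, and the addresses below are arbitrary.

```python
# Toy model of the two memory address translations in a virtualized host.
guest_page_table = {0x1000: 0x4000, 0x2000: 0x5000}   # guest virtual -> guest physical (per VM)
vmm_map          = {0x4000: 0x9000, 0x5000: 0xA000}   # guest physical -> host physical (VMM)

# The VMM builds the shadow page table by composing the two mappings,
# so the MMU can translate guest-virtual to host-physical in one lookup.
shadow_page_table = {gv: vmm_map[gp] for gv, gp in guest_page_table.items()}

assert shadow_page_table[0x1000] == 0x9000   # one lookup instead of two
assert shadow_page_table[0x2000] == 0xA000
```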
Hardware Extensions: second-level address translation (Intel EPT, AMD RVI/NPT) lets the MMU walk both guest and host page tables in hardware, removing the need for VMM-maintained shadow page tables.
Emulation: complete simulation of the underlying hardware. Largely dependent on the computer architecture.
Paravirtualization: execute hypercalls (replacing sensitive instructions) via a new hypercall interface (HCI) between the VMM and the guest OS. That way the guest kernel knows that it runs in a VM and can access hardware features directly.
AWS uses PV on HVM drivers (i.e. paravirtualized drivers inside a fully virtualized guest) for the same or better performance than with pure paravirtualisation.
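The hypercall idea can be sketched as follows. The hypercall numbers and handlers are invented for illustration; real interfaces (e.g. Xen's) define their own call tables.

```python
# Toy sketch of a hypercall interface: instead of executing a privileged
# instruction and relying on traps, the paravirtualized guest kernel calls
# into the VMM explicitly through a well-known call number.
HCALL_SET_TIMER = 1
HCALL_FLUSH_TLB = 2

def vmm_hypercall(number, *args):
    """VMM-side dispatcher: emulate the privileged operation on the
    virtual hardware, then return to the guest."""
    handlers = {
        HCALL_SET_TIMER: lambda deadline: f"timer armed for {deadline}",
        HCALL_FLUSH_TLB: lambda: "TLB flushed",
    }
    return handlers[number](*args)

# Guest kernel code, aware it runs in a VM, invokes the VMM directly:
print(vmm_hypercall(HCALL_SET_TIMER, 100))   # → timer armed for 100
```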
| Name | Virt. Type | Installation | Guest Arch | OpenStack |
|---|---|---|---|---|
| KVM | Full | Bare metal | Same as host | Default |
| QEMU | Emulation | Hosted | x86(-64), ARM | Yes |
| Xen | Para/Full (HVM mode) | Bare metal | Same as host | Yes |
| VMware ESXi | Para/Full (HVM mode) | Bare metal | x86(-64) | Yes |
| Microsoft Hyper-V | Full | Bare metal | x86-64 | Yes |
Quick Emulator (QEMU) is a generic and open source machine emulator and virtualizer based on Binary Translation and SoftMMU.
Kernel-based Virtual Machine (KVM) is a Linux-based open source hypervisor built on top of the Intel (VT-x) and AMD (AMD-V) hardware virtualization extensions.
Provides an interface to the Linux kernel via a kernel module. CPU and memory access is exposed via /dev/kvm. The VM is implemented as a regular Linux process.
KVM itself does not perform any device emulation; it relies on a userspace program such as QEMU for that.
Combining KVM and QEMU gives the guest OS:
To start a VM:
qemu-system-x86_64 -enable-kvm [diskimage]
Libvirt is a hypervisor-agnostic, multi-language virtualization library for managing VMs.
Virsh is the virtual shell for managing VMs, based on Libvirt
Libvirtd is a daemon service for managing guests and virtual networks, based on Libvirt
Virtual Machine Manager (virt-manager) is a desktop interface for managing VMs based on Libvirt
Libvirt is used extensively by OpenStack Nova
Creating a VM with Virsh (requires a definition file, e.g. my_vm.xml):
virsh create my_vm.xml   # create and start a transient VM directly from the file
virsh define my_vm.xml   # register the VM persistently without starting it
virsh start my_vm        # start a previously defined VM (takes the domain name, not the XML file)
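A minimal definition file might look like the sketch below. The VM name, memory size, and disk path are placeholder values; the element names follow the libvirt domain XML format.

```xml
<domain type='kvm'>
  <name>my_vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/my_vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```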
Simon Anliker. Someone has to write all this stuff.