Mono-spaced Bold
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
Press Enter to execute the command. Press Ctrl+Alt+F2 to switch to the first virtual terminal. Press Ctrl+Alt+F1 to return to your X-Windows session.
mono-spaced bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Choose → → from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).To insert a special character into a gedit file, choose → → from the main menu bar. Next, choose → from the Character Map menu bar, type the name of the character in the Search field and click . The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the button. Now switch back to your document and choose → from the gedit menu bar.
Mono-spaced Bold Italic or Proportional Bold Italic
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com. The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home. To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release
Publican is a DocBook publishing system.
mono-spaced roman and presented thus:
books Desktop documentation drafts mss photos stuff svn books_tests Desktop1 downloads images notes scripts svgs
mono-spaced roman but add syntax highlighting as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object         ref    = iniCtx.lookup("EchoBean");
      EchoHome       home   = (EchoHome) ref;
      Echo           echo   = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
setenforce command.
# setenforce 1
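To confirm the current SELinux mode after changing it, the standard getenforce utility can be used (a quick check; getenforce ships with the base SELinux tools):
# getenforce
Enforcing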
AutoFS, NFS, FTP, HTTP, NIS, telnetd, sendmail and so on.
/var/lib/libvirt/images/. If you are using a different directory for your virtual machine images make sure you add the directory to your SELinux policy and relabel it before starting the installation. Use of shareable, network storage for a central location is highly recommended.
Do not use disk labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so is a security risk if less privileged users, especially virtualized guests, have write access to whole partitions or LVM volumes.
Do not give guests write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
This example creates a logical volume named NewVolumeName on the volume group named volumegroup.
# lvcreate -n NewVolumeName -L 5G volumegroup
Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3.
# mke2fs -j /dev/volumegroup/NewVolumeName
Create a new directory for the guest images. Do not create the directory in a system directory (/etc, /var, /sys) or in home directories (/home or /root). This example uses a directory called /virtstorage.
# mkdir /virtstorage
# mount /dev/volumegroup/NewVolumeName /virtstorage
# semanage fcontext -a -t virt_image_t "/virtstorage(/.*)?"
This command appends a line to the /etc/selinux/targeted/contexts/files/file_contexts.local file, which makes the change persistent. The appended line may resemble this:
/virtstorage(/.*)? system_u:object_r:virt_image_t:s0
Run the restorecon command to change the type of the top-level directory (/virtstorage) and all files under it to virt_image_t (the restorecon and setfiles commands read the files in /etc/selinux/targeted/contexts/files/).
# restorecon -R -v /virtstorage
Test by creating a new file (with the touch command) on the file system.
# touch /virtstorage/newfile
# sudo ls -Z /virtstorage
-rw-------. root root system_u:object_r:virt_image_t:s0 newfile
Verify the new file is labeled virt_image_t. A block device (for example, /dev/sda2) intended for guest storage can be labeled virt_image_t in the same way:
# semanage fcontext -a -t virt_image_t -f -b /dev/sda2
# restorecon /dev/sda2
| SELinux Boolean | Description |
|---|---|
| virt_use_comm | Allow virt to use serial/parallel communication ports. |
| virt_use_fusefs | Allow virt to read fuse files. |
| virt_use_nfs | Allow virt to manage NFS files. |
| virt_use_samba | Allow virt to manage CIFS files. |
| virt_use_sanlock | Allow sanlock to manage virt lib files. |
| virt_use_sysfs | Allow virt to manage device configuration (PCI). |
| virt_use_xserver | Allow virtual machine to interact with the xserver. |
| virt_use_usb | Allow virt to use USB devices. |
net.ipv4.ip_forward = 1) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable so it will be enabled when the virtualization packages are installed unless it was manually disabled.
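A minimal sketch of checking the value and enabling it persistently by hand, in case it was disabled (standard sysctl usage; /etc/sysctl.conf is the stock configuration file):
# sysctl net.ipv4.ip_forward
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p /etc/sysctl.conf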



# ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520 27950 ?  00:00:17 qemu-kvm
# ls -lZ /var/lib/libvirt/images/*
system_u:object_r:svirt_image_t:s0:c87,c520   image1
| Type/Description | SELinux Context |
|---|---|
| Virtualized guest processes. MCS1 is a random MCS field. Approximately 500,000 labels are supported. | system_u:system_r:svirt_t:MCS1 |
| Virtualized guest images. Only svirt_t processes with the same MCS fields can read/write these images. | system_u:object_r:svirt_image_t:MCS1 |
| Virtualized guest shared read/write content. All svirt_t processes can write to the svirt_image_t:s0 files. | system_u:object_r:svirt_image_t:s0 |
| Virtualized guest shared read only content. All svirt_t processes can read these files/devices. | system_u:object_r:svirt_content_t:s0 |
| Virtualized guest images. Default label for when an image exits. No svirt_t virtual processes can read files/devices with this label. | system_u:object_r:virt_content_t:s0 |
Live migration can be performed with the virsh command. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
The GuestName parameter represents the name of the guest which you want to migrate.
The DestinationURL parameter is the URL or hostname of the destination system. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor, and have libvirt running.
An entry for the destination host in the /etc/hosts file on the source server is required for migration to succeed. Enter the IP address and hostname for the destination host in this file as shown in the following example, substituting your destination host's IP address and hostname:
10.0.0.20 host2.example.com
This example migrates from host1.example.com to host2.example.com. Change the host names for your environment. This example migrates a virtual machine named guest1-rhel6-64.
Verify the guest is running
From host1.example.com, verify guest1-rhel6-64 is running:
[root@host1 ~]# virsh list
 Id Name                 State
----------------------------------
 10 guest1-rhel6-64      running
Migrate the guest
Execute the following command to live migrate the guest to the destination, host2.example.com. Append /system to the end of the destination URL to tell libvirt that you need full access.
# virsh migrate --live guest1-rhel6-64 qemu+ssh://host2.example.com/system
Wait
The migration may take some time depending on load and the size of the guest. virsh only reports errors. The guest continues to run on the source host until fully migrated.
Verify the guest has arrived at the destination host
From host2.example.com, verify guest1-rhel6-64 is running:
[root@host2 ~]# virsh list
 Id Name                 State
----------------------------------
 10 guest1-rhel6-64      running
This section covers migrating guests with virt-manager from one host to another.
Open virt-manager
virt-manager. Choose → → from the main menu bar to launch virt-manager.

Connect to the target host

Add connection


Migrate guest



virt-manager now displays the newly migrated guest.

View the storage details for the host
<pool type='iscsi'>
<name>iscsirhel6guest</name>
<source>
<host name='virtlab22.example.com.'/>
<device path='iqn.2001-05.com.iscsivendor:0-8a0906-fbab74a06-a700000017a4cc89-rhevh'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>

Remote management can be performed using ssh or TLS and SSL.
The libvirt management connection is securely tunneled over an SSH connection to manage the remote machines. All authentication is done using SSH public key cryptography and passwords or passphrases gathered by your local SSH agent. In addition, the VNC console for each guest is tunneled over SSH.
virt-manager must be run by the user who owns the keys used to connect to the remote host. That means, if the remote systems are managed by a non-root user, virt-manager must be run in unprivileged mode. If the remote systems are managed by the local root user, then the SSH keys must be owned and created by root.
virt-manager.
Optional: Changing user
$ su -
Generating the SSH key pair
The SSH key pair must be generated on the machine where virt-manager is used. This example uses the default key location, in the ~/.ssh/ directory.
$ ssh-keygen -t rsa
Copying the keys to the remote hosts
Use the ssh-copy-id command to copy the key to the root user at the system address provided (in the example, root@host2.example.com).
$ ssh-copy-id -i ~/.ssh/id_rsa.pub root@host2.example.com
root@host2.example.com's password:
Afterwards, try logging into the machine with the ssh root@host2.example.com command and check the .ssh/authorized_keys file to make sure unexpected keys have not been added.
Optional: Add the passphrase to the ssh-agent
Add the passphrase to the ssh-agent, if required. On the local host, use the following command to add the passphrase (if there was one) to enable password-less login.
# ssh-add ~/.ssh/id_rsa
The libvirt daemon (libvirtd) provides an interface for managing virtual machines. You must have the libvirtd daemon installed and running on every remote host that needs managing.
$ ssh root@somehost
# chkconfig libvirtd on
# service libvirtd start
After libvirtd and SSH are configured, you should be able to remotely access and manage your virtual machines. You should also be able to access your guests with VNC at this point.
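For example, a remote connection can also be opened directly from the command line with the --connect (-c) option, using the host from the earlier example (the URI is illustrative):
$ virt-manager -c qemu+ssh://root@host2.example.com/system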
The libvirt management connection opens a TCP port for incoming connections, which is securely encrypted and authenticated based on x509 certificates.
The CA certificate must be placed in /etc/pki/CA/cacert.pem.
The client certificate is placed in /etc/pki/libvirt/clientcert.pem for system wide use, or in
$HOME/.pki/libvirt/clientcert.pem for an individual user.
The client private key is placed in /etc/pki/libvirt/private/clientkey.pem for system wide use, or in
$HOME/.pki/libvirt/private/clientkey.pem for an individual user.
libvirt supports the following transport modes:
UNIX domain sockets are accessible only on the local machine. The standard sockets are /var/run/libvirt/libvirt-sock and /var/run/libvirt/libvirt-sock-ro (for read-only connections).
The libvirt daemon (libvirtd) must be running on the remote machine. Port 22 must be open for SSH access. You should use some sort of SSH key management (for example, the ssh-agent utility) or you will be prompted for a password.
The ext parameter is used for any external program which can make a connection to the remote machine by means outside the scope of libvirt. This parameter is unsupported.
A URI is used by virsh and libvirt to connect to a remote host. URIs can also be used with the --connect parameter for the virsh command to execute single commands or migrations on remote hosts.
driver[+transport]://[username@][hostname][:port]/[path][?extraparameters]
Connect to a remote KVM host named host2, using SSH transport and the SSH username virtuser.
qemu+ssh://virtuser@host2/
Connect to a remote KVM hypervisor on the host named host2 using TLS.
qemu://host2/
qemu+unix:///system?socket=/opt/libvirt/run/libvirt/libvirt-sock
test+tcp://10.1.1.10:5000/default
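As a sketch of the --connect usage described above, a single command can be run against the remote host from the earlier SSH example (host name and user are illustrative):
# virsh --connect qemu+ssh://virtuser@host2/system list --all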
| Name | Transport mode | Description | Example usage |
|---|---|---|---|
| name | all modes | The name passed to the remote virConnectOpen function. The name is normally formed by removing transport, hostname, port number, username and extra parameters from the remote URI, but in certain very complex cases it may be better to supply the name explicitly. | name=qemu:///system |
| command | ssh and ext | The external command. For ext transport this is required. For ssh the default is ssh. The PATH is searched for the command. | command=/opt/openssh/bin/ssh |
| socket | unix and ssh | The path to the UNIX domain socket, which overrides the default. For ssh transport, this is passed to the remote netcat command (see netcat). | socket=/opt/libvirt/run/libvirt/libvirt-sock |
| netcat | ssh | The netcat command can be used to connect to remote systems. The default netcat parameter uses the nc command. For SSH transport, libvirt constructs an SSH command using the form: command -p port [-l username] hostname netcat -U socket. The port, username and hostname parameters can be specified as part of the remote URI. The command, netcat and socket come from other extra parameters. | netcat=/opt/netcat/bin/nc |
| no_verify | tls | If set to a non-zero value, this disables client checks of the server's certificate. Note that to disable server checks of the client's certificate or IP address you must change the libvirtd configuration. | no_verify=1 |
| no_tty | ssh | If set to a non-zero value, this stops ssh from asking for a password if it cannot log in to the remote machine automatically (for using ssh-agent or similar). Use this when you do not have access to a terminal - for example in graphical programs which use libvirt. | no_tty=1 |
Guests run as qemu-kvm processes. Once a guest is running, the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM identifies and merges only identical pages, which does not interfere with the guest or impact the security of the host or the guests. KSM allows KVM to request that these identical guest memory regions be shared.
The ksm service starts and stops the KSM kernel thread.
The ksmtuned service controls and tunes ksm, dynamically managing same-page merging. The ksmtuned service starts ksm and stops the ksm service if memory sharing is not necessary. The ksmtuned service must be told with the retune parameter to run when new guests are created or destroyed.
The ksm service is a standard Linux daemon that uses the KSM kernel features.
If the ksm service is not started, KSM shares only 2000 pages. This default is low and provides limited memory-saving benefits.
When the ksm service is started, KSM will share up to half of the host system's main memory. Start the ksm service to enable KSM to share more memory.
# service ksm start
Starting ksm: [ OK ]
The ksm service can be added to the default startup sequence. Make the ksm service persistent with the chkconfig command.
# chkconfig ksm on
The ksmtuned service does not have any options. The ksmtuned service loops and adjusts ksm. The ksmtuned service is notified by libvirt when a guest is created or destroyed.
# service ksmtuned start
Starting ksmtuned: [ OK ]
The ksmtuned service can be tuned with the retune parameter. The retune parameter instructs ksmtuned to run tuning functions manually.
The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file.
# Configuration file for ksmtuned.
# How long ksmtuned should sleep between tuning adjustments
# KSM_MONITOR_INTERVAL=60
# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
# KSM_SLEEP_MSEC=10
# KSM_NPAGES_BOOST=300
# KSM_NPAGES_DECAY=-50
# KSM_NPAGES_MIN=64
# KSM_NPAGES_MAX=1250
# KSM_THRES_COEF=20
# KSM_THRES_CONST=2048
# uncomment the following to enable ksmtuned debug information
# LOGFILE=/var/log/ksmtuned
# DEBUG=1
KSM stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics.
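For example, the current sharing statistics can be read directly from these files (the file names below are the standard KSM sysfs entries):
# cat /sys/kernel/mm/ksm/pages_shared
# cat /sys/kernel/mm/ksm/pages_sharing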
These parameters can also be modified in the /etc/ksmtuned.conf file as noted above.
KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings.
KSM can be deactivated by stopping the ksm service and the ksmtuned service. Stopping the services deactivates KSM, but the change does not persist after restarting.
# service ksm stop
Stopping ksm: [ OK ]
# service ksmtuned stop
Stopping ksmtuned: [ OK ]
Persistently deactivate KSM with the chkconfig command. To turn off the services, run the following commands:
# chkconfig ksm off
# chkconfig ksmtuned off
No special configuration in the qemu.conf file is required. Huge pages are used by default if /sys/kernel/mm/redhat_transparent_hugepage/enabled is set to always.
This example uses virsh to set a guest, TestServer, to automatically start when the host boots.
# virsh autostart TestServer
Domain TestServer marked as autostarted
Use the --disable parameter to disable guest autostart:
# virsh autostart --disable TestServer
Domain TestServer unmarked as autostarted
The qemu-img command line tool is used for formatting, modifying and verifying various file systems used by KVM. qemu-img options and usages are listed below.
Perform a consistency check on the disk image filename.
# qemu-img check [-f format] filename
Only the qcow2, qed and vdi formats support consistency checks.
Commit any changes recorded in the specified file (filename) to the file's base image with the qemu-img commit command. Optionally, specify the file's format type (format).
# qemu-img commit [-f format] filename
The convert option is used to convert one recognized image format to another image format.
# qemu-img convert [-c] [-f format] [-o options] [-O output_format] filename output_filename
Convert the disk image filename to disk image output_filename using format output_format. The disk image can be optionally compressed with the -c option, or encrypted with the -o option by setting -o encryption. Note that the options available with the -o parameter differ with the selected format.
Only the qcow2 format supports encryption or compression. qcow2 encryption uses the AES format with secure 128-bit keys. qcow2 compression is read-only, so if a compressed sector is converted from the qcow2 format, it is written to the new format as uncompressed data.
Image conversion is also useful to get a smaller image when using a format which can grow, such as qcow or cow. The empty sectors are detected and suppressed from the destination image.
Create the new disk image filename of size size and format format.
# qemu-img create [-f format] [-o options] filename [size]
If a base image is specified with -o backing_file=filename, the image will only record differences between itself and the base image. The backing file will not be modified unless you use the commit command. No size needs to be specified in this case.
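A minimal sketch of creating a qcow2 image backed by a base image (both file names are hypothetical):
# qemu-img create -f qcow2 -o backing_file=/var/lib/libvirt/images/rhel6-base.img /var/lib/libvirt/images/guest1-overlay.qcow2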
The info parameter displays information about a disk image (filename). The format for the info option is as follows:
# qemu-img info [-f format] filename
# qemu-img rebase [-f format] [-u] -b backing_file [-F backing_format] filename
The backing file of the image is changed to backing_file and (if the format of filename supports the feature) the backing file format is changed to backing_format.
Only the qcow2 and qed formats support changing the backing file (rebase).
There are two different modes in which rebase can operate: safe and unsafe.
Safe mode is used by default. In this mode, the qemu-img rebase command will take care of keeping the guest-visible content of filename unchanged. In order to achieve this, any clusters that differ between backing_file and the old backing file of filename are merged into filename before making any changes to the backing file.
Unsafe mode is used if the -u option is passed to qemu-img rebase. In this mode, only the backing file name and format of filename are changed, without any checks taking place on the file contents. Make sure the new backing file is specified correctly or the guest-visible content of the image will be corrupted.
Change the disk image filename as if it had been created with size size. Only images in raw format can be resized regardless of version. Red Hat Enterprise Linux 6.1 and later add the ability to grow (but not shrink) images in qcow2 format.
Use the following to set the size of the disk image filename to size bytes:
# qemu-img resize filename size
You can also resize relative to the current size of the disk image: prefix the number of bytes with + to grow, or - to reduce, the size of the disk image by that number of bytes. Adding a unit suffix allows you to set the image size in kilobytes (K), megabytes (M), gigabytes (G) or terabytes (T).
# qemu-img resize filename [+|-]size[K|M|G|T]
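For example, to grow a hypothetical raw image by 10 gigabytes:
# qemu-img resize /var/lib/libvirt/images/guest1.img +10G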
List, apply, create, or delete an existing snapshot (snapshot) of an image (filename).
# qemu-img snapshot [ -l | -a snapshot | -c snapshot | -d snapshot ] filename
-l lists all snapshots associated with the specified disk image. The apply option, -a, reverts the disk image (filename) to the state of a previously saved snapshot. -c creates a snapshot (snapshot) of an image (filename). -d deletes the specified snapshot.
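A short sketch of these snapshot options on a hypothetical qcow2 image (the image path and snapshot name are illustrative):
# qemu-img snapshot -c clean-install /var/lib/libvirt/images/guest1.qcow2
# qemu-img snapshot -l /var/lib/libvirt/images/guest1.qcow2
# qemu-img snapshot -a clean-install /var/lib/libvirt/images/guest1.qcow2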
raw: Raw disk image format (default). Use qemu-img info to obtain the real size used by the image, or ls -ls on Unix/Linux.
qcow2: QEMU image format. Other supported formats can be converted to the raw or qcow2 format. The format of an image is usually detected automatically.
qemu-img also recognizes the following formats: bochs, cloop, cow (the cow format is included only for compatibility with previous versions; it does not work with Windows), dmg, nbd, parallels, qcow, vdi, vmdk, vpc (VHD, or Microsoft virtual hard disk image format), and vvfat.
Verify that the CPU virtualization extensions are available with the following command:
$ grep -E 'svm|vmx' /proc/cpuinfo
The following output contains a vmx entry, indicating an Intel processor with the Intel VT-x extension:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm
The following output contains an svm entry, indicating an AMD processor with the AMD-V extensions:
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8legacy ts fid vid ttp tm stc
flags:" output content may appear multiple times, once for each hyperthread, core or CPU on the system.
Ensure KVM subsystem is loaded
Verify that the kvm modules are loaded in the kernel:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your system meets the requirements.
The virsh command can output a full list of virtualization system capabilities. Run virsh capabilities as root to receive the complete list.
The virsh nodeinfo command provides information about how many sockets, cores and hyperthreads are attached to a host.
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              8
CPU frequency:       1000 MHz
CPU socket(s):       2
Core(s) per socket:  4
Thread(s) per core:  1
NUMA cell(s):        2
Memory size:         8179176 kB
Run the virsh capabilities command to get additional output data about the CPU configuration.
# virsh capabilities
<capabilities>
<host>
<cpu>
<arch>x86_64</arch>
</cpu>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='2'>
<cell id='0'>
<cpus num='4'>
<cpu id='0'/>
<cpu id='1'/>
<cpu id='2'/>
<cpu id='3'/>
</cpus>
</cell>
<cell id='1'>
<cpus num='4'>
<cpu id='4'/>
<cpu id='5'/>
<cpu id='6'/>
<cpu id='7'/>
</cpus>
</cell>
</cells>
</topology>
<secmodel>
<model>selinux</model>
<doi>0</doi>
</secmodel>
</host>
[ Additional XML removed ]
</capabilities>
Run the virsh freecell --all command to display the free memory on all NUMA nodes.
# virsh freecell --all
0: 2203620 kB
1: 3354784 kB
Gather information (from the virsh capabilities command) about the NUMA topology before locking a guest to a NUMA node.
The following is an extract of the virsh capabilities output.
<topology>
<cells num='2'>
<cell id='0'>
<cpus num='4'>
<cpu id='0'/>
<cpu id='1'/>
<cpu id='2'/>
<cpu id='3'/>
</cpus>
</cell>
<cell id='1'>
<cpus num='4'>
<cpu id='4'/>
<cpu id='5'/>
<cpu id='6'/>
<cpu id='7'/>
</cpus>
</cell>
</cells>
</topology>
The second NUMA cell, <cell id='1'>, uses physical CPUs 4 to 7.
Lock a guest to a NUMA node or physical CPU set by adding a cpuset attribute to the configuration file.
Edit the guest's configuration file with virsh edit.
Locate the guest's vcpus element.
<vcpus>4</vcpus>
Add a cpuset attribute with the CPU numbers for the relevant NUMA cell.
<vcpus cpuset='4-7'>4</vcpus>
The virt-install provisioning tool provides a simple way to automatically apply a 'best fit' NUMA policy when guests are created.
The cpuset option for virt-install can use a CPU set of processors or the parameter auto. The auto parameter automatically determines the optimal CPU locking using the available NUMA data.
Use --cpuset=auto with the virt-install command when creating new guests.
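A hedged sketch of a virt-install invocation using --cpuset=auto; all other values (name, sizes, paths) are illustrative only:
# virt-install --name guest1-rhel6-64 --ram 2048 --vcpus 4 --cpuset=auto \
  --disk path=/var/lib/libvirt/images/guest1-rhel6-64.img,size=8 \
  --cdrom /var/lib/libvirt/images/rhel6-install.iso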
The virsh vcpuinfo and virsh vcpupin commands can perform CPU affinity changes on running guests.
The virsh vcpuinfo command gives up-to-date information about where each virtual CPU is running.
In this example, guest1 is a guest with four virtual CPUs running on a KVM host.
# virsh vcpuinfo guest1
VCPU: 0
CPU: 3
State: running
CPU time: 0.5s
CPU Affinity: yyyyyyyy
VCPU: 1
CPU: 1
State: running
CPU Affinity: yyyyyyyy
VCPU: 2
CPU: 1
State: running
CPU Affinity: yyyyyyyy
VCPU: 3
CPU: 2
State: running
CPU Affinity: yyyyyyyy
The virsh vcpuinfo output (the yyyyyyyy value of CPU Affinity) shows that the guest can presently run on any CPU. To lock the virtual CPUs to the second NUMA cell (CPUs 4 to 7), run the following commands:
# virsh vcpupin guest1 0 4
# virsh vcpupin guest1 1 5
# virsh vcpupin guest1 2 6
# virsh vcpupin guest1 3 7
The virsh vcpuinfo command confirms the change in affinity.
# virsh vcpuinfo guest1
VCPU: 0
CPU: 4
State: running
CPU time: 32.2s
CPU Affinity: ----y---
VCPU: 1
CPU: 5
State: running
CPU time: 16.9s
CPU Affinity: -----y--
VCPU: 2
CPU: 6
State: running
CPU time: 11.9s
CPU Affinity: ------y-
VCPU: 3
CPU: 7
State: running
CPU time: 14.6s
CPU Affinity: -------y
Save the script below to a file named macgen.py. Now from that directory you can run the script using ./macgen.py and it will generate a new MAC address. A sample output would look like the following:
$ ./macgen.py
00:16:3e:20:b0:11
#!/usr/bin/python
# macgen.py script to generate a MAC address for guests
#
import random
#
def randomMAC():
    mac = [ 0x00, 0x16, 0x3e,
            random.randint(0x00, 0x7f),
            random.randint(0x00, 0xff),
            random.randint(0x00, 0xff) ]
    return ':'.join(map(lambda x: "%02x" % x, mac))
#
print randomMAC()
You can also use the built-in modules of python-virtinst to generate a new MAC address and UUID for use in a guest configuration file:
# echo 'import virtinst.util ; print\
 virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
# echo 'import virtinst.util ; print virtinst.util.randomMAC()' | python
#!/usr/bin/env python
# -*- mode: python; -*-
print ""
print "New UUID:"
import virtinst.util ; print virtinst.util.uuidToString(virtinst.util.randomUUID())
print "New MAC:"
import virtinst.util ; print virtinst.util.randomMAC()
print ""
Overcommitting memory without sufficient swap space can cause the kernel to kill guest processes (qemu-kvm processes) or other busy or stalled processes on the host.
Swap is written out by the pdflush kernel function; when memory is exhausted, the kernel kills processes to keep the system from crashing and to free up memory. Always ensure the host has sufficient swap space when overcommitting memory.
The swapoff command can disable all swap partitions and swap files on a system.
# swapoff -a
To permanently remove swap, delete the swap lines from the /etc/fstab file and restart the host system.
# service smartd stop
# chkconfig --del smartd
vino-preferences command.
Edit the ~/.vnc/xstartup file to start a GNOME session whenever vncserver is started. The first time you run the vncserver script it will ask you for a password you want to use for your VNC session.
A sample xstartup file:
#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
#xsetroot -solid grey
#vncconfig -iconic &
#xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
#twm &
if test -z "$DBUS_SESSION_BUS_ADDRESS" ; then
   eval `dbus-launch --sh-syntax --exit-with-session`
   echo "D-BUS per-session daemon address is: \
        $DBUS_SESSION_BUS_ADDRESS"
fi
exec gnome-session
The Minimal installation option does not install the acpid package.
ACPI is required for graceful shut down when the virsh shutdown command is executed. The virsh shutdown command is designed to gracefully shut down guests.
Using virsh shutdown is easier and safer for system administration. Without graceful shut down with the virsh shutdown command, a system administrator must log into a guest manually or send the Ctrl+Alt+Del key combination to each guest.
The virsh shutdown command requires that the guest operating system is configured to handle ACPI shut down requests. Many operating systems require additional configuration on the guest operating system to accept ACPI shut down requests.
Install the acpid package
The acpid service listens for and processes ACPI requests.
# yum install acpid
Enable the acpid service
Configure the acpid service to start during the guest boot sequence and start the service:
# chkconfig acpid on
# service acpid start
The guest now shuts down when the virsh shutdown command is used.
Edit the guest's configuration file with the virsh edit command. See Editing a guest's configuration file for details.
| Value | Description |
|---|---|
| utc | The guest clock will be synchronized to UTC when booted. |
| localtime | The guest clock will be synchronized to the host's configured timezone when booted, if any. |
| timezone | The guest clock will be synchronized to a given timezone, specified by the timezone attribute. |
| variable | The guest clock will be synchronized to an arbitrary offset from UTC. The delta relative to UTC is specified in seconds, using the adjustment attribute. The guest is free to adjust the Real Time Clock (RTC) over time and expect that it will be honored following the next reboot. This is in contrast to utc mode, where any RTC adjustments are lost at each reboot. |
<clock offset="utc" />
<clock offset="localtime" />
<clock offset="timezone" timezone="Europe/Paris" />
<clock offset="variable" adjustment="123456" />
The name attribute is required; all other attributes are optional.
| Value | Description |
|---|---|
| platform | The master virtual time source which may be used to drive the policy of other time sources. |
| pit | Programmable Interval Timer - a timer with periodic interrupts. |
| rtc | Real Time Clock - a continuously running timer with periodic interrupts. |
| hpet | High Precision Event Timer - multiple timers with periodic interrupts. |
| tsc | Time Stamp Counter - counts the number of ticks since reset, no interrupts. |
The track attribute is valid only for a timer name of platform or rtc.
| Value | Description |
|---|---|
| boot | Corresponds to old host option, this is an unsupported tracking option. |
| guest | RTC always tracks guest time. |
| wall | RTC always tracks host time. |
| Value | Description |
|---|---|
| none | Continue to deliver at normal rate (i.e. ticks are delayed). |
| catchup | Deliver at a higher rate to catch up. |
| merge | Ticks merged into one single tick. |
| discard | All missed ticks are discarded. |
The frequency attribute is valid only for a timer name of tsc. All other timers operate at a fixed frequency (pit, rtc), or at a frequency fully controlled by the guest (hpet).
The mode attribute is valid only for a timer name of tsc. All other timers are always emulated.
| Value | Description |
|---|---|
| auto | Native if safe, otherwise emulated. |
| native | Always native. |
| emulate | Always emulate. |
| paravirt | Native + para-virtualized. |
| Value | Description |
|---|---|
| yes | Force this timer to be visible to the guest. |
| no | Force this timer to not be visible to the guest. |
<clock offset="localtime"> <timer name="rtc" tickpolicy="catchup" wallclock="guest" /> <timer name="pit" tickpolicy="none" /> <timer name="hpet" present="no" /> </clock>
By default, libvirt uses a directory-based storage pool, the /var/lib/libvirt/images/ directory, as the default storage pool. The default storage pool can be changed to another storage pool.
Commands that take a volume name and its pool use the format --pool storage_pool volume_name.
For example, a volume named firstimage in the guest_images pool:
# virsh vol-info --pool guest_images firstimage
Name:           firstimage
Type:           block
Capacity:       20.00 GB
Allocation:     20.00 GB
virsh #
For example, a volume named secondimage.img is visible to the host system as /images/secondimage.img. The image can be referred to as /images/secondimage.img.
# virsh vol-info /images/secondimage.img
Name:           secondimage.img
Type:           file
Capacity:       20.00 GB
Allocation:     136.00 KB
For example, the key of an LVM-based volume may resemble c3pKz4-qPVc-Xf7M-7WNM-WJc8-qSiz-mtvpGn, while the key of a file-based volume may be a copy of its path, such as /images/secondimage.img.
For example, a volume with the key Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr:
# virsh vol-info Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
Name: firstimage
Type: block
Capacity: 20.00 GB
Allocation: 20.00 GB
virsh provides commands for converting between a volume name, volume path, or volume key:
# virsh vol-name /dev/guest_images/firstimage
firstimage
# virsh vol-name Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
# virsh vol-path Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
/dev/guest_images/firstimage
# virsh vol-path --pool guest_images firstimage
/dev/guest_images/firstimage
# virsh vol-key /dev/guest_images/firstimage
Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
# virsh vol-key --pool guest_images firstimage
Wlvnf7-a4a3-Tlje-lJDa-9eak-PZBv-LoZuUr
Do not give guests write access to whole disks or block devices (for example, /dev/sdb). Use partitions (for example, /dev/sdb1) or LVM volumes.
This procedure creates a new storage pool using a disk device with the virsh command.
Create a GPT disk label on the disk
Relabel the disk with a GPT disk label. GPT disk labels allow for creating many more partitions than the msdos partition table.
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) quit
Information: You may need to update /etc/fstab.
#
Create the storage pool configuration file
The name parameter determines the name of the storage pool. This example uses the name guest_images_disk.
The device parameter with the path attribute (<device path='/dev/sdb'/>) specifies the device path of the storage device. This example uses the device /dev/sdb.
The target parameter with the path sub-parameter (<target><path>/dev</path></target>) determines the location on the host file system to attach volumes created with this storage pool.
Using /dev/, as in the example below, means volumes created from this storage pool can be accessed as /dev/sdb1, /dev/sdb2, /dev/sdb3.
The format parameter (<format type='gpt'/>) specifies the partition table type. This example uses gpt to match the GPT disk label type created in the previous step.
<pool type='disk'>
  <name>guest_images_disk</name>
  <source>
    <device path='/dev/sdb'/>
    <format type='gpt'/>
  </source>
  <target>
    <path>/dev</path>
  </target>
</pool>
Attach the device
Use the virsh pool-define command with the XML configuration file created in the previous step.
# virsh pool-define ~/guest_images_disk.xml
Pool guest_images_disk defined from /root/guest_images_disk.xml
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_disk    inactive   no
Start the storage pool
Start the storage pool with the virsh pool-start command. Verify the pool is started with the virsh pool-list --all command.
# virsh pool-start guest_images_disk
Pool guest_images_disk started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_disk active no
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_disk
Pool guest_images_disk marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_disk active yes
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes reported are as expected, and the state reports as running.
# virsh pool-info guest_images_disk
Name: guest_images_disk
UUID: 551a67c8-5f2a-012c-3844-df29b167431c
State: running
Capacity: 465.76 GB
Allocation: 0.00
Available: 465.76 GB
# ls -la /dev/sdb
brw-rw----. 1 root disk 8, 16 May 30 14:08 /dev/sdb
# virsh vol-list guest_images_disk
Name Path
-----------------------------------------
Optional: Remove the temporary configuration file
# rm ~/guest_images_disk.xml
This example uses a disk (/dev/sdc) partitioned into one 500GB, ext4-formatted partition (/dev/sdc1). We set up a storage pool for it using the procedure below.
Open the storage pool settings
In the virt-manager graphical interface, select the host from the main window.


Create the new storage pool
Add a new pool (part 1)
Name the storage pool guest_images_fs. Change the Type to fs: Pre-Formatted Block Device.

Add a new pool (part 2)

virt-manager will create the directory.
ext4 file system, the default Red Hat Enterprise Linux file system.
Source Path field.
/dev/sdc1 device.
Verify the new storage pool
Verify the new storage pool reports the size correctly, 458.20 GB Free in this example. Verify the State field reports the new storage pool as Active.
Select the Autostart check box so the storage pool starts when the libvirtd service starts.

This procedure creates a storage pool from a pre-formatted block device with the virsh command.
Do not use this procedure to assign entire disks or block devices (for example, /dev/sdb) to storage pools. Guests should not be given write access to whole disks or block devices. Only use this method to assign partitions (for example, /dev/sdb1) to storage pools.
Create the storage pool definition
Use the pool-define-as command to create a new storage pool definition. There are three options that must be provided to define a pre-formatted disk as a storage pool:
The name parameter determines the name of the storage pool. This example uses the name guest_images_fs.
The device parameter with the path attribute specifies the device path of the storage device. This example uses the partition /dev/sdc1.
The mountpoint parameter specifies the location on the local file system where the formatted device will be mounted. If the mount point directory does not exist, the virsh command can create the directory.
The directory /guest_images is used in this example.
# virsh pool-define-as guest_images_fs fs - - /dev/sdc1 - "/guest_images"
Pool guest_images_fs defined
Verify the new pool
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs inactive no
Create the mount point
Use the virsh pool-build command to create a mount point for a pre-formatted file system storage pool.
# virsh pool-build guest_images_fs
Pool guest_images_fs built
# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 31 19:38 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_fs      inactive   no
Start the storage pool
Use the virsh pool-start command to mount the file system onto the mount point and make the pool available for use.
# virsh pool-start guest_images_fs
Pool guest_images_fs started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs active no
Turn on autostart
By default, a storage pool defined with virsh is not set to automatically start each time libvirtd starts. Turn on automatic start with the virsh pool-autostart command. The storage pool is now automatically started each time libvirtd starts.
# virsh pool-autostart guest_images_fs
Pool guest_images_fs marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_fs active yes
Verify the storage pool
Verify the storage pool was created correctly, the sizes reported are as expected, and the state reports as running. Verify there is a lost+found directory in the mount point on the file system, indicating the device is mounted.
# virsh pool-info guest_images_fs
Name: guest_images_fs
UUID: c7466869-e82a-a66c-2187-dc9d6f0877d0
State: running
Capacity: 458.39 GB
Allocation: 197.91 MB
Available: 458.20 GB
# mount | grep /guest_images
/dev/sdc1 on /guest_images type ext4 (rw)
# ls -la /guest_images
total 24
drwxr-xr-x. 3 root root 4096 May 31 19:47 .
dr-xr-xr-x. 25 root root 4096 May 31 19:38 ..
drwx------. 2 root root 16384 May 31 14:18 lost+found
Directory-based storage pools can be created with virt-manager or the virsh command line tools.
Create the local directory
Optional: Create a new directory for the storage pool
/guest_images.
# mkdir /guest_images
Set directory ownership
# chown root:root /guest_images
Set directory permissions
# chmod 700 /guest_images
Verify the changes
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 28 13:57 .
dr-xr-xr-x. 26 root root 4096 May 28 13:57 ..
Configure SELinux file contexts
# semanage fcontext -a -t virt_image_t /guest_images
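Apply the new context to the existing directory, as in the /virtstorage example earlier (assuming the same restorecon workflow):
# restorecon -R -v /guest_images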
Open the storage pool settings
In the virt-manager graphical interface, select the host from the main window.


Create the new storage pool
Add a new pool (part 1)
Name the storage pool guest_images_dir. Change the Type to dir: Filesystem Directory.

Add a new pool (part 2)
/guest_images.

Verify the new storage pool
Verify the new storage pool reports the size correctly, 36.41 GB Free in this example. Verify the State field reports the new storage pool as Active.
Select the Autostart check box so the storage pool starts when the libvirtd service starts.

Create the storage pool definition
Use the virsh pool-define-as command to define a new storage pool. There are two options required for creating directory-based storage pools:
The name parameter sets the name of the storage pool.
This example uses the name guest_images_dir. All further virsh commands used in this example use this name.
The path parameter specifies the path to a file system directory for storing guest image files. If this directory does not exist, virsh will create it.
This example uses the /guest_images directory.
# virsh pool-define-as guest_images_dir dir - - - - "/guest_images"
Pool guest_images_dir defined
Verify the storage pool is listed
Verify the storage pool is listed. The state reports as inactive.
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     inactive   no
Create the local directory
Use the virsh pool-build command to build the directory-based storage pool. virsh pool-build sets the required permissions and SELinux settings for the directory and creates the directory if it does not exist.
# virsh pool-build guest_images_dir
Pool guest_images_dir built
# ls -la /guest_images
total 8
drwx------.  2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_dir     inactive   no
Start the storage pool
Use the virsh pool-start command for this. pool-start enables a directory storage pool, allowing it to be used for volumes and guests.
# virsh pool-start guest_images_dir
Pool guest_images_dir started
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir active no
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart guest_images_dir
Pool guest_images_dir marked as autostarted
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
guest_images_dir active yes
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes reported are as expected, and the state reports as running.
# virsh pool-info guest_images_dir
Name: guest_images_dir
UUID: 779081bf-7a82-107b-2874-a19a9c51d24c
State: running
Capacity: 49.22 GB
Allocation: 12.80 GB
Available: 36.41 GB
# ls -la /guest_images
total 8
drwx------. 2 root root 4096 May 30 02:44 .
dr-xr-xr-x. 26 root root 4096 May 30 02:44 ..
#
Optional: Create new partition for LVM volumes
Create a new partition
Use the fdisk command to create a new disk partition from the command line. The following example creates a new partition that uses the entire disk on the storage device /dev/sdb.
# fdisk /dev/sdb Command (m for help):
Press n for a new partition.
Press p for a primary partition.
Command action
   e   extended
   p   primary partition (1-4)
Choose partition number 1.
Partition number (1-4): 1
Press Enter to accept the default first cylinder.
First cylinder (1-400, default 1):
Press Enter to accept the default last cylinder and use the entire disk.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
Set the partition type by pressing t.
Command (m for help): t
Choose partition 1.
Partition number (1-4): 1
Enter 8e for a Linux LVM partition.
Hex code (type L to list codes): 8e
Write changes to disk and quit.
Command (m for help): w
Command (m for help): q
Create a new LVM volume group
This example creates a volume group named guest_images_lvm.
# vgcreate guest_images_lvm /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
  Volume group "guest_images_lvm" successfully created
The new volume group, guest_images_lvm, can now be used for an LVM-based storage pool.
Open the storage pool settings
virt-manager graphical interface, select the host from the main window.


Create the new storage pool
Start the Wizard
Name the pool guest_images_lvm for this example. Then change the Type to logical: LVM Volume Group.

Add a new pool (part 2)
The Target Path field defines the location of the new storage pool, in the format /dev/storage_pool_name.
This example uses /dev/guest_images_lvm.
The Source Path field is optional if an existing LVM volume group is used in the Target Path.
For new LVM volume groups, input a device location into the Source Path field. This example uses a blank partition, /dev/sdc.
Select the Build Pool check box to instruct virt-manager to create a new LVM volume group. If you are using an existing volume group, you should not select the check box.

Confirm the device to be formatted

Verify the new storage pool
Verify the new storage pool reports the size correctly, 465.76 GB Free in our example. Also verify the State field reports the new storage pool as Active.

This procedure creates an LVM-based storage pool with the virsh command. It uses the example of a pool named guest_images_lvm on a single drive (/dev/sdc). This is only an example and your settings should be substituted as appropriate.
# virsh pool-define-as guest_images_lvm logical - - /dev/sdc libvirt_lvm \
/dev/libvirt_lvm
Pool guest_images_lvm defined
# virsh pool-build guest_images_lvm
Pool guest_images_lvm built
# virsh pool-start guest_images_lvm
Pool guest_images_lvm started
Show the volume group information with the vgs command.
# vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  libvirt_lvm   1   0   0 wz--n- 465.76g 465.76g
# virsh pool-autostart guest_images_lvm
Pool guest_images_lvm marked as autostarted
Verify the new pool is available with the virsh command.
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
guest_images_lvm     active     yes
Now create volumes in the new pool.
# virsh vol-create-as guest_images_lvm volume1 8G
Vol volume1 created
# virsh vol-create-as guest_images_lvm volume2 8G
Vol volume2 created
# virsh vol-create-as guest_images_lvm volume3 8G
Vol volume3 created
List the volumes in this pool with the virsh command.
# virsh vol-list guest_images_lvm
Name Path
-----------------------------------------
volume1 /dev/libvirt_lvm/volume1
volume2 /dev/libvirt_lvm/volume2
volume3 /dev/libvirt_lvm/volume3
The LVM commands (lvscan and lvs) display further information about the newly created volumes.
# lvscan
ACTIVE            '/dev/libvirt_lvm/volume1' [8.00 GiB] inherit
ACTIVE            '/dev/libvirt_lvm/volume2' [8.00 GiB] inherit
ACTIVE            '/dev/libvirt_lvm/volume3' [8.00 GiB] inherit
# lvs
LV       VG          Attr   LSize  Origin Snap%  Move Log Copy%  Convert
volume1  libvirt_lvm -wi-a- 8.00g
volume2  libvirt_lvm -wi-a- 8.00g
volume3  libvirt_lvm -wi-a- 8.00g
Install the required packages
# yum install scsi-target-utils
Start the tgtd service
The tgtd service hosts SCSI targets and uses the iSCSI protocol to serve them. Start the tgtd service and make the service persistent after restarting with the chkconfig command.
# service tgtd start
# chkconfig tgtd on
Optional: Create LVM volumes
This example creates an LVM logical volume named virtimage1 on a new volume group named virtstore on a RAID5 array for hosting guests with iSCSI.
Create the RAID array
Create the LVM volume group
Create a volume group named virtstore with the vgcreate command.
# vgcreate virtstore /dev/md1
Create an LVM logical volume
Create a logical volume named virtimage1 on the virtstore volume group with a size of 20GB using the lvcreate command.
# lvcreate --size 20G -n virtimage1 virtstore
The new logical volume, virtimage1, is ready to use for iSCSI.
Optional: Create file-based images
This example creates a file-based image named virtimage2.img for an iSCSI target.
Create a new directory for the image
# mkdir -p /var/lib/tgtd/virtualization
Create the image file
Create an image named virtimage2.img with a size of 10GB.
# dd if=/dev/zero of=/var/lib/tgtd/virtualization/virtimage2.img bs=1M seek=10000 count=0
Configure SELinux file contexts
# restorecon -R /var/lib/tgtd
The new file-based image, virtimage2.img, is ready to use for iSCSI.
Create targets
Targets are defined by adding a target section to the /etc/tgt/targets.conf file. The target attribute requires an iSCSI Qualified Name (IQN). The IQN is in the format:
iqn.yyyy-mm.reversed domain name:optional identifier text
yyyy-mm represents the year and month the device was started (for example: 2010-05);
reversed domain name is the host's domain name in reverse (for example, server1.example.com in an IQN would be com.example.server1); and
optional identifier text is any text string, without spaces, that assists the administrator in identifying devices or hardware.
This example creates a target for the two images created earlier on server1.example.com, with an optional identifier trial. Add the following to the /etc/tgt/targets.conf file.
<target iqn.2010-05.com.example.server1:trial>
   backing-store /dev/virtstore/virtimage1  #LUN 1
   backing-store /var/lib/tgtd/virtualization/virtimage2.img  #LUN 2
   write-cache off
</target>
Ensure that the /etc/tgt/targets.conf file contains the default-driver iscsi line to set the driver type as iSCSI. The driver uses iSCSI by default.
Restart the tgtd service
Restart the tgtd service to reload the configuration changes.
# service tgtd restart
iptables configuration
Open port 3260 for iSCSI access with iptables.
# iptables -I INPUT -p tcp -m tcp --dport 3260 -j ACCEPT
# service iptables save
# service iptables restart
Verify the new targets
View the new targets to ensure the setup was successful with the tgt-admin --show command.
# tgt-admin --show
Target 1: iqn.2010-05.com.example.server1:trial
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET 00010000
SCSI SN: beaf10
Size: 0 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: None
LUN: 1
Type: disk
SCSI ID: IET 00010001
SCSI SN: beaf11
Size: 20000 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: /dev/virtstore/virtimage1
LUN: 2
Type: disk
SCSI ID: IET 00010002
SCSI SN: beaf12
Size: 10000 MB
Online: Yes
Removable media: No
Backing store type: rdwr
Backing store path: /var/lib/tgtd/virtualization/virtimage2.img
Account information:
ACL information:
ALL
Optional: Test discovery
# iscsiadm --mode discovery --type sendtargets --portal server1.example.com
127.0.0.1:3260,1 iqn.2010-05.com.example.server1:iscsirhel6guest
Optional: Test attaching the device
Log in to the new target (iqn.2010-05.com.example.server1:iscsirhel6guest) to determine whether the device can be attached.
# iscsiadm -d2 -m node --login
scsiadm: Max file limits 1024 1024
Logging in to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260]
Login to [iface: default, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.
# iscsiadm -d2 -m node --logout
scsiadm: Max file limits 1024 1024
Logging out of session [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260]
Logout of [sid: 2, target: iqn.2010-05.com.example.server1:iscsirhel6guest, portal: 10.0.0.1,3260] successful.
This procedure covers creating a storage pool with an iSCSI target in virt-manager.
Open the host storage tab
Open virt-manager.
Select a host from the main virt-manager window.


Add a new pool (part 1)

Add a new pool (part 2)
Enter the target path for the storage pool. Using the default, /dev/disk/by-path/, adds the drive path to that directory. The target path should be the same on all hosts for migration.
Enter the hostname of the iSCSI target. This example uses server1.example.com.
Enter the iSCSI source path (the IQN). This example uses iqn.2010-05.com.example.server1:iscsirhel6guest.

Create the storage pool definition
The name element sets the name for the storage pool. The name is required and must be unique.
The uuid element provides a unique global identifier for the storage pool. The uuid element can contain any valid UUID or an existing UUID for the storage device. If a UUID is not provided, virsh will generate a UUID for the storage pool.
The host element with the name attribute specifies the hostname of the iSCSI server. The host element can contain a port attribute for a non-standard iSCSI protocol port number.
The device element's path attribute (<device path='iqn.2010-05.com.example.server1:iscsirhel6guest'/>) must contain the IQN for the iSCSI server.
The following example defines the storage pool in an XML file named iscsirhel6guest.xml.
<pool type='iscsi'>
<name>iscsirhel6guest</name>
<uuid>afcc5367-6770-e151-bcb3-847bc36c5e28</uuid>
<source>
<host name='server1.example.com.'/>
<device path='iqn.2001-05.com.example.server1:iscsirhel6guest'/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
Use the pool-define command to define the storage pool but not start it.
# virsh pool-define iscsirhel6guest.xml Pool iscsirhel6guest defined
Alternative step: Use pool-define-as to define the pool from the command line
Storage pool definitions can also be created with the virsh command line tool. Creating storage pools with virsh is useful for systems administrators using scripts to create multiple storage pools.
The virsh pool-define-as command has several parameters which are accepted in the following format:
virsh pool-define-as name type source-host source-path source-dev source-name target
The type, iscsi, defines this pool as an iSCSI-based storage pool. The name parameter must be unique and sets the name for the storage pool. The source-host and source-path parameters are the hostname and iSCSI IQN respectively. The source-dev and source-name parameters are not required for iSCSI-based pools; use a - character to leave the field blank. The target parameter defines the location for mounting the iSCSI device on the host.
# virsh pool-define-as iscsirhel6guest iscsi server1.example.com iqn.2010-05.com.example.server1:iscsirhel6guest - - /dev/disk/by-path
Pool iscsirhel6guest defined
Verify the storage pool is listed
Verify the storage pool object is created correctly and the state reports as inactive.
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
iscsirhel6guest      inactive   no
Start the storage pool
Use the virsh pool-start command for this. pool-start enables the storage pool, allowing it to be used for volumes and guests.
# virsh pool-start iscsirhel6guest
Pool iscsirhel6guest started
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
iscsirhel6guest      active     no
Turn on autostart
Turn on autostart for the storage pool. Autostart configures the libvirtd service to start the storage pool when the service starts.
# virsh pool-autostart iscsirhel6guest
Pool iscsirhel6guest marked as autostarted
Verify that the iscsirhel6guest pool has autostart set:
# virsh pool-list --all
Name State Autostart
-----------------------------------------
default active yes
iscsirhel6guest active yes
Verify the storage pool configuration
Verify the storage pool was created correctly, the sizes reported are as expected, and the state reports as running.
# virsh pool-info iscsirhel6guest
Name:           iscsirhel6guest
UUID:           afcc5367-6770-e151-bcb3-847bc36c5e28
State:          running
Persistent:     unknown
Autostart:      yes
Capacity:       100.31 GB
Allocation:     0.00
Available:      100.31 GB
This procedure covers creating a storage pool with an NFS mount point in virt-manager.
Open the host storage tab
Open virt-manager.
Select a host from the main virt-manager window.


Create a new pool (part 1)

Create a new pool (part 2)
Enter the hostname of the NFS server. This example uses server1.example.com.
Enter the NFS path. This example uses /nfstrial.

# virsh vol-create-as guest_images_disk volume1 8G
Vol volume1 created
# virsh vol-create-as guest_images_disk volume2 8G
Vol volume2 created
# virsh vol-create-as guest_images_disk volume3 8G
Vol volume3 created
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
volume1              /dev/sdb1
volume2              /dev/sdb2
volume3              /dev/sdb3
# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
2       17.4kB  8590MB  8590MB               primary
3       8590MB  17.2GB  8590MB               primary
1       21.5GB  30.1GB  8590MB               primary
#
# virsh vol-clone --pool guest_images_disk volume3 clone1
Vol clone1 cloned from volume3
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
clone1               /dev/sdb1
volume2              /dev/sdb2
volume3              /dev/sdb3
# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
2       8590MB  17.2GB  8590MB               primary
3       17.2GB  25.8GB  8590MB               primary
1       25.8GB  34.4GB  8590MB               primary
#
Alternatively, create a new storage file with dd. The first command below creates a non-sparse 4GB file; the second creates a sparse file of the same apparent size:
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M count=4096
# dd if=/dev/zero of=/var/lib/libvirt/images/FileName.img bs=1M seek=4096 count=0
Define the device in a <disk> element in a new file. In this example, this file will be known as NewStorage.xml. Ensure that you specify a device name for the virtual block device attributes. These attributes must be unique for each guest configuration file. The following example is a configuration file section which contains an additional file-based storage container named FileName.img.
<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none' io='threads'/> <source file='/var/lib/libvirt/images/FileName.img'/> <target dev='vdb' bus='virtio'/> </disk>
The <address> sub-element is omitted in order to let libvirt locate and assign the next available PCI slot.
Start the guest (Guest1):
# virsh start Guest1
Attach the device defined in NewStorage.xml to your guest (Guest1):
# virsh attach-device Guest1 ~/NewStorage.xml
# virsh reboot Guest1
The guest now uses the file FileName.img as the device called /dev/vdb. This device requires formatting from the guest. On the guest, partition the device into one primary partition for the entire device, then format the device.
Start fdisk for the new device:
# fdisk /dev/vdb Command (m for help):
Press n for a new partition.
Command action
   e   extended
   p   primary partition (1-4)
Press p for a primary partition.
Choose partition number 1.
Partition number (1-4): 1
Press Enter to accept the default first cylinder.
First cylinder (1-400, default 1):
Press Enter to accept the default last cylinder and use the entire device.
Last cylinder or +size or +sizeM or +sizeK (2-400, default 400):
Press t to configure the partition type.
Command (m for help): t
Choose partition 1.
Partition number (1-4): 1
Enter 83 for a Linux partition.
Hex code (type L to list codes): 83
Press w to write changes to disk, and q to quit.
Command (m for help): w
Command (m for help): q
Format the new partition with the ext3 file system.
# mke2fs -j /dev/vdb1
# mkdir /myfiles
# mount /dev/vdb1 /myfiles
To mount the device persistently, add the following entry to the guest's /etc/fstab file:
/dev/vdb1    /myfiles    ext3    defaults    0 0
Configure multipath and persistence on the host if required.
Attach the device with the virsh attach-disk command as shown below, replacing:
# virsh attach-disk myguest /dev/sdc1 vdc --driver qemu --mode readonly
myguest with the name of the guest.
/dev/sdc1 with the device on the host to add.
vdc with the location on the guest where the device should be added. It must be an unused device name.
You can use the sd* notation for Windows guests as well; the guest will recognize the device correctly.
Add the --mode readonly parameter if the device should be read-only to the guest.
Append the --type hdd parameter to the command for CD-ROM or DVD devices.
Append the --type floppy parameter to the command for floppy devices.
The guest now has a new hard disk device called /dev/sdb on Linux, or D: drive, or similar, on Windows. This device may require formatting.
Do not use disk labels to identify file systems in the fstab file, the initrd file or on the kernel command line. Doing so presents a security risk if less privileged users, such as guests, have write access to whole partitions or LVM volumes, because a guest could potentially write a disk label belonging to the host to its own block device storage. Upon reboot of the host, the host could then mistakenly use the guest's disk as a system disk, which would compromise the host system.
It is preferable to use UUIDs to identify file systems in the fstab file, the initrd file or on the kernel command line. While using UUIDs is still not completely secure on certain file systems, a similar compromise with UUIDs is significantly less feasible.
Guests should not be given write access to whole disks or block devices (for example, /dev/sdb). Guests with access to whole block devices may be able to access other disk partitions that correspond to those block device names on the system, or modify volume labels, which can be used to compromise the host system. Use partitions (for example, /dev/sdb1) or LVM volumes to prevent this issue.
# virsh vol-delete --pool guest_images_disk volume1
Vol volume1 deleted
# virsh vol-list guest_images_disk
Name                 Path
-----------------------------------------
volume2              /dev/sdb2
volume3              /dev/sdb3
# parted -s /dev/sdb print
Model: ATA ST3500418AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system  Name     Flags
2       8590MB  17.2GB  8590MB               primary
3       17.2GB  25.8GB  8590MB               primary
#
admin> portcfgshow 0
......
NPIV capability    ON
......
Usage portCfgNPIVPort <PortNumber> <Mode>
Mode Meaning
0    Disable the NPIV capability on the port
1    Enable the NPIV capability on the port
admin> portCfgNPIVPort 0 1
# ls /proc/scsi
QLogic HBAs are listed as qla2xxx. Emulex HBAs are listed as lpfc.
# ls /proc/scsi/qla2xxx
# ls /proc/scsi/lpfc
# cat /proc/scsi/qla2xxx/7
FC Port Information for Virtual Ports:
Virtual Port index = 1
Virtual Port 1:VP State = <ACTIVE>, Vp Flags = 0x0
scsiqla2port3=500601609020fd54:500601601020fd54:a00000:1000: 1;
scsiqla2port4=500601609020fd54:500601681020fd54:a10000:1000: 1;
Virtual Port 1 SCSI LUN Information:
( 0:10): Total reqs 10, Pending reqs 0, flags 0x0, 2:0:1000,
# cat /proc/scsi/lpfc/3
SLI Rev: 3
NPIV Supported: VPIs max 127 VPIs used 1
RPIs max 512 RPIs used 13
Vports list on this physical port:
Vport DID 0x2f0901, vpi 1, state 0x20
Portname: 48:19:00:0c:29:00:00:0d Nodename: 48:19:00:0c:29:00:00:0b
libvirt, you require a NPIV capable HBA and switch.
The virtual adapters appear under the /sys/class/fc_host/hostN directory, where N is the host number.
1111222233334444:5555666677778888' is WWPN:WWNN and host5 is the physical HBA which the virtual HBA is a client of.
# echo '1111222233334444:5555666677778888' > /sys/class/fc_host/host5/vport_create
# echo '1111222233334444:5555666677778888' > /sys/class/fc_host/host5/vport_delete
A virtual HBA can also be created and destroyed with virsh, as described in the following procedure. This procedure requires a compatible HBA device.
List available HBAs
# virsh nodedev-list --cap=scsi_host pci_10df_fe00_0_scsi_host pci_10df_fe00_0_scsi_host_0 pci_10df_fe00_scsi_host pci_10df_fe00_scsi_host_0 pci_10df_fe00_scsi_host_0_scsi_host pci_10df_fe00_scsi_host_0_scsi_host_0
Gather parent HBA device data
pci_10df_fe00_scsi_host.
# virsh nodedev-dumpxml pci_10df_fe00_scsi_host
<device>
<name>pci_10df_fe00_scsi_host</name>
<parent>pci_10df_fe00</parent>
<capability type='scsi_host'>
<host>5</host>
<capability type='fc_host'>
<wwnn>20000000c9848140</wwnn>
<wwpn>10000000c9848140</wwpn>
</capability>
<capability type='vport_ops' />
</capability>
</device>
HBAs that can create virtual ports report capability type='vport_ops' in the XML definition.
Create the XML definition for the virtual HBA
newHBA.xml.
<device>
<parent>pci_10df_fe00_0_scsi_host</parent>
<capability type='scsi_host'>
<capability type='fc_host'>
<wwpn>1111222233334444</wwpn>
<wwnn>5555666677778888</wwnn>
</capability>
</capability>
</device>
The <parent> element is the name of the parent HBA listed by the virsh nodedev-list command. The <wwpn> and <wwnn> elements are the WWPN and WWNN for the virtual HBA.
Create the virtual HBA
virsh nodedev-create command using the file from the previous step.
# virsh nodedev-create newHBA.xml Node device pci_10df_fe00_0_scsi_host_0_scsi_host created from newHBA.xml
virsh nodedev-destroy:
# virsh nodedev-destroy pci_10df_fe00_0_scsi_host_0_scsi_host Destroyed node device 'pci_10df_fe00_0_scsi_host_0_scsi_host'
/etc/vhostmd/vhostmd.conf.
<update_period>60</update_period> controls how often the metrics are updated (in seconds). Since updating metrics can be an expensive operation, you can reduce the load on the host by increasing this period. Secondly, each <metric>...</metric> section controls what information is exposed by vhostmd. For example:
<metric type="string" context="host"> <name>HostName</name> <action>hostname</action> </metric>
<metric> sections by putting <!-- ... --> around them. Note that disabling metrics may cause problems for guest software such as SAP that may rely on these metrics being available.
/dev/shm/vhostmd0. This file contains a small binary header followed by the selected metrics encoded as XML. In practice you can display this file with a tool like less. The file is updated every 60 seconds (or however often <update_period> was set).
/dev/shm/vhostmd0. To read this, do:
# man vhostmd
less /usr/share/doc/vhostmd-*/README
# /sbin/chkconfig vhostmd on
# /sbin/service vhostmd start
# /sbin/service vhostmd stop
# /sbin/chkconfig vhostmd off
# ls -l /dev/shm # less /dev/shm/vhostmd0
"/dev/shm/vhostmd0" may be a binary file. See it anyway?
<metrics> XML, and after that, many zero bytes (displayed as ^@^@^@...).
/dev/shm/vhostmd0, they are not made available to guests by default. The administrator must choose which guests get to see metrics, and must manually change the configuration of selected guests to see metrics.
# virsh edit GuestName
<devices>:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/shm/vhostmd0'/>
<target dev='vdd' bus='virtio'/>
<readonly/>
</disk>
# virsh edit GuestName
<devices>:
<disk type='file' device='disk'>
<source dev='/dev/shm/vhostmd0'/>
<target dev='hdd' bus='ide'/>
<readonly/>
</disk>
# vm-dump-metrics
<metrics>.
/dev/vd* (for example, /dev/vdb, /dev/vdd).
# virsh dumpxml GuestName
virsh is a command line interface tool for managing guests and the hypervisor. The virsh command-line tool is built on the libvirt management API and operates as an alternative to the qemu-kvm command and the graphical virt-manager application. The virsh command can be used in read-only mode by unprivileged users or, with root access, full administration functionality. The virsh command is ideal for scripting virtualization administration.
| Command | Description |
|---|---|
| help | Prints basic help information. |
| list | Lists all guests. |
| dumpxml | Outputs the XML configuration file for the guest. |
| create | Creates a guest from an XML configuration file and starts the new guest. |
| start | Starts an inactive guest. |
| destroy | Forces a guest to stop. |
| define | Creates a guest from an XML configuration file without starting the new guest. |
| domid | Displays the guest's ID. |
| domuuid | Displays the guest's UUID. |
| dominfo | Displays guest information. |
| domname | Displays the guest's name. |
| domstate | Displays the state of a guest. |
| quit | Quits the interactive terminal. |
| reboot | Reboots a guest. |
| restore | Restores a previously saved guest stored in a file. |
| resume | Resumes a paused guest. |
| save | Saves the present state of a guest to a file. |
| shutdown | Gracefully shuts down a guest. |
| suspend | Pauses a guest. |
| undefine | Deletes all files associated with a guest. |
| migrate | Migrates a guest to another host. |
The following virsh command options manage guest and hypervisor resources:

| Command | Description |
|---|---|
| setmem | Sets the allocated memory for a guest. Refer to the virsh manpage for more details. |
| setmaxmem | Sets the maximum memory limit for a guest. Refer to the virsh manpage for more details. |
| setvcpus | Changes the number of virtual CPUs assigned to a guest. Refer to the virsh manpage for more details. |
| vcpuinfo | Displays virtual CPU information about a guest. |
| vcpupin | Controls the virtual CPU affinity of a guest. |
| domblkstat | Displays block device statistics for a running guest. |
| domifstat | Displays network interface statistics for a running guest. |
| attach-device | Attaches a device to a guest, using a device definition in an XML file. |
| attach-disk | Attaches a new disk device to a guest. |
| attach-interface | Attaches a new network interface to a guest. |
| update-device | Updates a device attached to a guest using a device definition in an XML file, for example to change the disk image in a guest's CD-ROM drive. See Section 15.2, “Attaching and updating a device with virsh” for more details. |
| detach-device | Detaches a device from a guest; takes the same kind of XML descriptions as the attach-device command. |
| detach-disk | Detaches a disk device from a guest. |
| detach-interface | Detaches a network interface from a guest. |
The following virsh commands are for managing and creating storage pools and volumes:

| Command | Description |
|---|---|
| find-storage-pool-sources | Returns the XML definition for all storage pools of a given type that could be found. |
| find-storage-pool-sources port | Returns data on all storage pools of a given type that could be found as XML. If the host and port are provided, this command can be run remotely. |
| pool-autostart | Sets the storage pool to start at boot time. |
| pool-build | The pool-build command builds a defined pool. This command can format disks and create partitions. |
| pool-create | pool-create creates and starts a storage pool from the provided XML storage pool definition file. |
| pool-create-as name | Creates and starts a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool. |
| pool-define | Creates a storage pool from an XML definition file but does not start the new storage pool. |
| pool-define-as name | Creates, but does not start, a storage pool from the provided parameters. If the --print-xml parameter is specified, the command prints the XML definition for the storage pool without creating the storage pool. |
| pool-destroy | Stops a storage pool in libvirt. The raw data contained in the storage pool is not changed and can be recovered with the pool-create command. |
| pool-delete | Destroys the storage resources used by a storage pool. This operation cannot be recovered. The storage pool still exists after this command but all data is deleted. |
| pool-dumpxml | Prints the XML definition for a storage pool. |
| pool-edit | Opens the XML definition file for a storage pool in the user's default text editor. |
| pool-info | Returns information about a storage pool. |
| pool-list | Lists storage pools known to libvirt. By default, pool-list lists pools in use by active guests. The --inactive parameter lists inactive pools and the --all parameter lists all pools. |
| pool-undefine | Deletes the definition for an inactive storage pool. |
| pool-uuid | Returns the UUID of the named pool. |
| pool-name | Prints a storage pool's name when provided the UUID of a storage pool. |
| pool-refresh | Refreshes the list of volumes contained in a storage pool. |
| pool-start | Starts a storage pool that is defined but inactive. |
| Command | Description |
|---|---|
| vol-create | Create a volume from an XML file. |
| vol-create-from | Create a volume using another volume as input. |
| vol-create-as | Create a volume from a set of arguments. |
| vol-clone | Clone a volume. |
| vol-delete | Delete a volume. |
| vol-wipe | Wipe a volume. |
| vol-dumpxml | Show volume information in XML. |
| vol-info | Show storage volume information. |
| vol-list | List volumes. |
| vol-pool | Returns the storage pool for a given volume key or path. |
| vol-path | Returns the volume path for a given volume name or key. |
| vol-name | Returns the volume name for a given volume key or path. |
| vol-key | Returns the volume key for a given volume name or path. |
| Command | Description |
|---|---|
| secret-define | Define or modify a secret from an XML file. |
| secret-dumpxml | Show secret attributes in XML. |
| secret-set-value | Set a secret value. |
| secret-get-value | Output a secret value. |
| secret-undefine | Undefine a secret. |
| secret-list | List secrets. |
| Command | Description |
|---|---|
| nwfilter-define | Define or update a network filter from an XML file. |
| nwfilter-undefine | Undefine a network filter. |
| nwfilter-dumpxml | Show network filter information in XML. |
| nwfilter-list | List network filters. |
| nwfilter-edit | Edit XML configuration for a network filter. |
virsh command options for snapshots:

| Command | Description |
|---|---|
| snapshot-create | Create a snapshot. |
| snapshot-current | Get the current snapshot. |
| snapshot-delete | Delete a domain snapshot. |
| snapshot-dumpxml | Dump XML for a domain snapshot. |
| snapshot-list | List snapshots for a domain. |
| snapshot-revert | Revert a domain to a snapshot. |
virsh commands:

| Command | Description |
|---|---|
| version | Displays the version of virsh. |
| nodeinfo | Outputs information about the hypervisor. |
virsh:
# virsh attach-disk <GuestName> sample.iso hdc --type cdrom --mode readonly
Disk attached successfully
<disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ide0-1-0'/> <address type='drive' controller='0' bus='1' unit='0'/> </disk>
virsh update-device <GuestName> guest-device.xml
Device updated successfully
virsh:
# virsh connect {name}
Where {name} is the machine name (hostname) or URL (the output of the virsh uri command) of the hypervisor. To initiate a read-only connection, append the above command with --readonly.
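For example, the following commands connect to the local KVM hypervisor and then open a read-only connection to a remote host (the host name remote.example.com is hypothetical):
# virsh connect qemu:///system
# virsh connect qemu+ssh://root@remote.example.com/system --readonly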
virsh:
# virsh dumpxml {guest-id, guestname or uuid}
This command outputs the guest's XML configuration file to standard out (stdout). You can save the data by piping the output to a file. An example of piping the output to a file called guest.xml:
# virsh dumpxml GuestID > guest.xml
This file guest.xml can recreate the guest (refer to Editing a guest's configuration file). You can edit this XML configuration file to configure additional devices or to deploy additional guests. Refer to Section 20.1, “Using XML configuration files with virsh” for more information on modifying files created with virsh dumpxml.
virsh dumpxml output:
# virsh dumpxml guest1-rhel6-64
<domain type='kvm'>
<name>guest1-rhel6-64</name>
<uuid>b8d7388a-bbf2-db3a-e962-b97ca6e514bd</uuid>
<memory>2097152</memory>
<currentMemory>2097152</currentMemory>
<vcpu>2</vcpu>
<os>
<type arch='x86_64' machine='rhel6.2.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none' io='threads'/>
<source file='/home/guest-images/guest1-rhel6-64.img'/>
<target dev='vda' bus='virtio'/>
<shareable/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<interface type='bridge'>
<mac address='52:54:00:b9:35:a9'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<sound model='ich6'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</sound>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
</domain>
dumpxml option (refer to Section 15.4, “Creating a virtual machine XML dump (configuration file)”). To create a guest with virsh from an XML file:
# virsh create configuration_file.xml
Guests can have their XML configuration file exported with the dumpxml option (refer to Section 15.4, “Creating a virtual machine XML dump (configuration file)”). Guests can be edited either while they run or while they are offline. The virsh edit command provides this functionality. For example, to edit the guest named softwaretesting:
# virsh edit softwaretesting
This opens the XML configuration file in the editor named by the $EDITOR shell parameter (set to vi by default).
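For example, to use a different editor for a single invocation, override the variable on the command line (nano is used here purely as an illustration and must be installed):
# EDITOR=nano virsh edit softwaretesting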
virsh:
# virsh suspend {domain-id, domain-name or domain-uuid}
A suspended guest can be restarted with the resume (Resuming a guest) option.
virsh using the resume option:
# virsh resume {domain-id, domain-name or domain-uuid}
This operation is immediate and the guest parameters are preserved for suspend and resume operations.
virsh command:
# virsh save {domain-name, domain-id or domain-uuid} filename
The guest can later be restored with the restore (Restore a guest) option. Save is similar to pause; instead of just pausing a guest, the present state of the guest is saved.
virsh save command (Save a guest) using virsh:
# virsh restore filename
To shut down a guest using the virsh command:
# virsh shutdown {domain-id, domain-name or domain-uuid}
You can control the behavior of the shutting down guest by modifying the on_shutdown parameter in the guest's configuration file.
virsh command:
# virsh reboot {domain-id, domain-name or domain-uuid}
You can control the behavior of the rebooting guest by modifying the on_reboot element in the guest's configuration file.
virsh command:
# virsh destroy {domain-id, domain-name or domain-uuid}
virsh destroy can corrupt guest file systems. Use the destroy option only when the guest is unresponsive.
To display a guest's ID:
# virsh domid {domain-name or domain-uuid}
To display a guest's name:
# virsh domname {domain-id or domain-uuid}
To display a guest's UUID:
# virsh domuuid {domain-id or domain-name}
An example of virsh domuuid output:
# virsh domuuid r5b2-mySQL01
4a4c59a7-ee3f-c781-96e4-288f2862f011
virsh with the guest's domain ID, domain name or UUID you can display information on the specified guest:
# virsh dominfo {domain-id, domain-name or domain-uuid}
An example of virsh dominfo output:
# virsh dominfo r5b2-mySQL01 id: 13 name: r5b2-mysql01 uuid: 4a4c59a7-ee3f-c781-96e4-288f2862f011 os type: linux state: blocked cpu(s): 1 cpu time: 11.0s max memory: 512000 kb used memory: 512000 kb
# virsh nodeinfo
virsh nodeinfo output:
# virsh nodeinfo CPU model x86_64 CPU (s) 8 CPU frequency 2895 Mhz CPU socket(s) 2 Core(s) per socket 2 Threads per core: 2 Numa cell(s) 1 Memory size: 1046528 kb
The virsh pool-edit command takes the name or UUID of a storage pool and opens the XML definition file for the storage pool in the user's default text editor.
The virsh pool-edit command is equivalent to running the following commands:
# virsh pool-dumpxml pool > pool.xml
# vim pool.xml
# virsh pool-define pool.xml
The editor used is defined by the $VISUAL or $EDITOR environment variables, and the default is vi.
virsh:
# virsh list
Use the --inactive option to list inactive guests (that is, guests that have been defined but are not currently active), and the --all option to list all guests. For example:
# virsh list --all Id Name State ---------------------------------- 0 Domain-0 running 1 Domain202 paused 2 Domain010 inactive 3 Domain9600 crashed
virsh list is categorized as one of the six states (listed below).
running state refers to guests which are currently active on a CPU.
blocked are blocked, and are not running or runnable. This is caused by a guest waiting on I/O (a traditional wait state) or guests in a sleep mode.
paused state lists domains that are paused. This occurs if an administrator uses the pause button in virt-manager, xm pause or virsh suspend. When a guest is paused it consumes memory and other resources but it is ineligible for scheduling and CPU resources from the hypervisor.
shutdown state is for guests in the process of shutting down. The guest is sent a shutdown signal and should be in the process of stopping its operations gracefully. This may not work with all guest operating systems; some operating systems do not respond to these signals.
Guests in the dying state are in the process of dying, which is a state where the domain has not completely shut down or crashed.
crashed guests have failed while running and are no longer running. This state can only occur if the guest has been configured not to restart on crash.
virsh:
# virsh vcpuinfo {domain-id, domain-name or domain-uuid}
An example of virsh vcpuinfo output:
# virsh vcpuinfo r5b2-mySQL01
VCPU:           0
CPU:            0
State:          blocked
CPU time:       0.0s
CPU Affinity:   yy
# virsh vcpupin domain-id vcpu cpulist
The domain-id parameter is the guest's ID number or name.
The vcpu parameter denotes the virtual CPU number to pin and must be provided.
The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the vCPUs can run on.
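For example, the following hypothetical command pins virtual CPU 0 of the guest guest1 to physical CPUs 0 and 1:
# virsh vcpupin guest1 0 0,1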
virsh:
# virsh setvcpus {domain-name, domain-id or domain-uuid} count
The new count value cannot exceed the number of vCPUs that was specified when the guest was created.
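For example, to assign two virtual CPUs to a hypothetical guest named guest1:
# virsh setvcpus guest1 2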
virsh:
# virsh setmem {domain-id or domain-name} count
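The count is interpreted in kilobytes by this version of virsh (an assumption worth verifying against the virsh manpage). For example, the following hypothetical command sets the guest guest1 to approximately 1 GB of memory:
# virsh setmem guest1 1048576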
Use virsh domblkstat to display block device statistics for a running guest.
# virsh domblkstat GuestName block-device
Use virsh domifstat to display network interface statistics for a running guest.
# virsh domifstat GuestName interface-device
Virtual networks can also be managed with the virsh command. To list virtual networks:
# virsh net-list
# virsh net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
vnet1                active     yes
vnet2                active     yes
To view network information for a specific virtual network:
# virsh net-dumpxml NetworkName
For example:
# virsh net-dumpxml vnet1
<network>
<name>vnet1</name>
<uuid>98361b46-1581-acb7-1643-85a412626e70</uuid>
<forward dev='eth0'/>
<bridge name='vnet0' stp='on' forwardDelay='0' />
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.128' end='192.168.100.254' />
</dhcp>
</ip>
</network>
virsh commands used in managing virtual networks are:
virsh net-autostart network-name — Autostart a network specified as network-name.
virsh net-create XMLfile — generates and starts a new network using an existing XML file.
virsh net-define XMLfile — generates a new network device from an existing XML file without starting it.
virsh net-destroy network-name — destroy a network specified as network-name.
virsh net-name networkUUID — convert a specified networkUUID to a network name.
virsh net-uuid network-name — convert a specified network-name to a network UUID.
virsh net-start nameOfInactiveNetwork — starts an inactive network.
virsh net-undefine nameOfInactiveNetwork — removes the definition of an inactive network.
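As a brief sketch of how these commands fit together (the file and network names below are hypothetical), a network defined in an XML file can be registered, set to start automatically, and then started:
# virsh net-define /tmp/newnetwork.xml
# virsh net-autostart newnetwork
# virsh net-start newnetwork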
virsh. Migrate domain to another host. Add --live for live migration. The migrate command accepts parameters in the following format:
# virsh migrate --live GuestName DestinationURL
The --live parameter is optional. Add the --live parameter for live migrations.
GuestName parameter represents the name of the guest which you want to migrate.
DestinationURL parameter is the URL or hostname of the destination system. The destination system requires:
libvirt service must be started.
The virsh migrate command works with running guests. To migrate a non-running guest, the following script should be used:
virsh dumpxml Guest1 > Guest1.xml
virsh -c qemu+ssh://<target-system-FQDN> define Guest1.xml
virsh undefine Guest1
virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host. The XML schema displayed has been extended to provide information about the host CPU model. One of the big challenges in describing a CPU model is that every architecture has a different approach to exposing their capabilities. On x86, the capabilities of a modern CPU are exposed via the CPUID instruction. Essentially this comes down to a set of 32-bit integers with each bit given a specific meaning. Fortunately AMD and Intel agree on common semantics for these bits. Other hypervisors expose the notion of CPUID masks directly in their guest configuration format. However, QEMU/KVM supports far more than just the x86 architecture, so CPUID is clearly not suitable as the canonical configuration format. QEMU ended up using a scheme which combines a CPU model name string, with a set of named flags. On x86, the CPU model maps to a baseline CPUID mask, and the flags can be used to then toggle bits in the mask on or off. libvirt decided to follow this lead and uses a combination of a model name and flags. Here is an example of what libvirt reports as the capabilities on a development workstation:
# virsh capabilities
<capabilities>
<host>
<uuid>c4a68e53-3f41-6d9e-baaf-d33a181ccfa0</uuid>
<cpu>
<arch>x86_64</arch>
<model>core2duo</model>
<topology sockets='1' cores='4' threads='1'/>
<feature name='lahf_lm'/>
<feature name='sse4.1'/>
<feature name='xtpr'/>
<feature name='cx16'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
... snip ...
</host>
</capabilities>
# virsh capabilities
<capabilities>
<host>
<uuid>8e8e4e67-9df4-9117-bf29-ffc31f6b6abb</uuid>
<cpu>
<arch>x86_64</arch>
<model>Westmere</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='4' threads='2'/>
<feature name='rdtscp'/>
<feature name='pdpe1gb'/>
<feature name='dca'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
</cpu>
... snip ...
</capabilities>
virsh cpu-compare command. To do so, the virsh capabilities > virsh-caps-workstation-full.xml command was executed on the workstation. The file virsh-caps-workstation-full.xml was edited and reduced to just the following content:
<cpu>
<arch>x86_64</arch>
<model>core2duo</model>
<topology sockets='1' cores='4' threads='1'/>
<feature name='lahf_lm'/>
<feature name='sse4.1'/>
<feature name='xtpr'/>
<feature name='cx16'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed using this file:
virsh cpu-compare virsh-caps-workstation-cpu-only.xml
Host CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml
virsh cpu-baseline command:
# virsh cpu-baseline virsh-cap-weybridge-strictly-cpu-only.xml <cpu match='exact'> <model>Penryn</model> <feature policy='require' name='xtpr'/> <feature policy='require' name='tm2'/> <feature policy='require' name='est'/> <feature policy='require' name='vmx'/> <feature policy='require' name='ds_cpl'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pbe'/> <feature policy='require' name='tm'/> <feature policy='require' name='ht'/> <feature policy='require' name='ss'/> <feature policy='require' name='acpi'/> <feature policy='require' name='ds'/> <feature policy='require' name='vme'/> </cpu>
both-cpus.xml, the following command would generate the same result:
# virsh cpu-baseline both-cpus.xml
cpu-baseline virsh command can now be copied directly into the guest XML at the top level under the <domain> element. As the observant reader will have noticed from the previous XML snippet, there are a few extra attributes available when describing a CPU in the guest XML. These can mostly be ignored, but for the curious here is a quick description of what they do. The top level <cpu> element has an attribute called match with possible values of:
virt-manager) windows, dialog boxes, and various GUI controls.
virt-manager provides a graphical view of hypervisors and guests on your host system and on remote host systems. virt-manager can perform virtualization management tasks, including:
virt-manager session open the menu, then the menu and select (virt-manager).
virt-manager main window appears.

virt-manager can be started remotely using ssh as demonstrated in the following command:
ssh -X host's address
[remotehost]# virt-manager
ssh to manage virtual machines and hosts is discussed further in Section 5.1, “Remote management with SSH”.



virt-manager supports VNC and SPICE. If your virtual machine is set to require authentication, the Virtual Machine graphical console prompts you for a password before the display appears.

127.0.0.1). This ensures only those with shell privileges on the host can access virt-manager and the virtual machine through VNC.
virt-manager has a 'sticky key' capability to send these sequences. You must press any modifier key (Ctrl or Alt) 3 times and the key you specify gets treated as active until the next non-modifier key is pressed. You can then send Ctrl-Alt-F11 to the guest by entering the key sequence 'Ctrl Ctrl Ctrl Alt+F11'.
virt-manager.

virt-manager window.







virt-manager's preferences window.








# yum install libguestfs guestfish libguestfs-tools libguestfs-mount libguestfs-winsupport
# yum install '*guestf*'
guestfish --ro -a /path/to/disk/image
guestfish --ro -a /path/to/disk/image
Welcome to guestfish, the libguestfs filesystem interactive shell for editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
><fs>
list-filesystems command will list file systems found by libguestfs. This output shows a Red Hat Enterprise Linux 4 disk image:
><fs> run
><fs> list-filesystems
/dev/vda1: ext3
/dev/VolGroup00/LogVol00: ext3
/dev/VolGroup00/LogVol01: swap
><fs> run
><fs> list-filesystems
/dev/vda1: ntfs
/dev/vda2: ntfs
list-devices, list-partitions, lvs, pvs, vfs-type and file. You can get more information and help on any command by typing help command, as shown in the following output:
><fs> help vfs-type
NAME
vfs-type - get the Linux VFS type corresponding to a mounted device
SYNOPSIS
vfs-type device
DESCRIPTION
This command gets the filesystem type corresponding to the filesystem on
"device".
For most filesystems, the result is the name of the Linux VFS module
which would be used to mount this filesystem if you mounted it without
specifying the filesystem type. For example a string such as "ext3" or
"ntfs".
/dev/vda2), which in this case is known to correspond to the C:\ drive:
><fs> mount-ro /dev/vda2 / ><fs> ll / total 1834753 drwxrwxrwx 1 root root 4096 Nov 1 11:40 . drwxr-xr-x 21 root root 4096 Nov 16 21:45 .. lrwxrwxrwx 2 root root 60 Jul 14 2009 Documents and Settings drwxrwxrwx 1 root root 4096 Nov 15 18:00 Program Files drwxrwxrwx 1 root root 4096 Sep 19 10:34 Users drwxrwxrwx 1 root root 16384 Sep 19 10:34 Windows
ls, ll, cat, more, download and tar-out to view and download files and directories.
There is no concept of a current working directory in this shell, so you cannot use the cd command to change directories. All paths must be fully qualified, starting at the top with a forward slash (/) character. Use the Tab key to complete paths.
exit or enter Ctrl+d.
guestfish --ro -a /path/to/disk/image -i
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
Operating system: Red Hat Enterprise Linux AS release 4 (Nahant Update 8)
/dev/VolGroup00/LogVol00 mounted on /
/dev/vda1 mounted on /boot
><fs> ll /
total 210
drwxr-xr-x. 24 root root 4096 Oct 28 09:09 .
drwxr-xr-x 21 root root 4096 Nov 17 15:10 ..
drwxr-xr-x. 2 root root 4096 Oct 27 22:37 bin
drwxr-xr-x. 4 root root 1024 Oct 27 21:52 boot
drwxr-xr-x. 4 root root 4096 Oct 27 21:21 dev
drwxr-xr-x. 86 root root 12288 Oct 28 09:09 etc
[etc]
run command is not necessary when using the -i option. The -i option works for many common Linux and Windows guests.
virsh list --all). Use the -d option to access a guest by its name, with or without the -i option:
guestfish --ro -d GuestName -i
/boot/grub/grub.conf file. When you are sure the guest is shut down you can omit the --ro flag in order to get write access via a command such as:
guestfish -d RHEL3 -i
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
Operating system: Red Hat Enterprise Linux AS release 3 (Taroon Update 9)
/dev/vda2 mounted on /
/dev/vda1 mounted on /boot
><fs> edit /boot/grub/grub.conf
edit, vi and emacs. Many commands also exist for creating files and directories, such as write, mkdir, upload and tar-in.
mkfs, part-add, lvresize, lvcreate, vgcreate and pvcreate.
#!/bin/bash -
set -e
guestname="$1"
guestfish -d "$guestname" -i <<'EOF'
  write /etc/motd "Welcome to Acme Incorporated."
  chmod 0644 /etc/motd
EOF
#!/bin/bash -
set -e
guestname="$1"
guestfish -d "$1" -i --ro <<'EOF'
  aug-init / 0
  aug-get /files/etc/sysconfig/keyboard/LAYOUT
EOF
#!/bin/bash -
set -e
guestname="$1"
guestfish -d "$1" -i <<'EOF'
  aug-init / 0
  aug-set /files/etc/sysconfig/keyboard/LAYOUT '"gb"'
  aug-save
EOF
The --ro option has been removed in the second script, giving the ability to write to the guest.
The aug-get command has been changed to aug-set to modify the value instead of fetching it. The new value will be "gb" (including the quotes).
The aug-save command is used here so Augeas will write the changes out to disk.
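Assuming the second script is saved as set-kbd-layout.sh (a hypothetical file name) and the target guest is shut down, it could be run as follows:
# chmod +x set-kbd-layout.sh
# ./set-kbd-layout.sh GuestName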
guestfish -N fs
><fs> copy-out /home /tmp/home
virt-cat is similar to the guestfish download command. It downloads and displays a single file from the guest. For example:
$ virt-cat RHEL3 /etc/ntp.conf | grep ^server server 127.127.1.0 # local clock
virt-edit is similar to the guestfish edit command. It can be used to interactively edit a single file within a guest. For example, you may need to edit the grub.conf file in a Linux-based guest that will not boot:
$ virt-edit LinuxGuest /boot/grub/grub.conf
virt-edit has another mode where it can be used to make simple non-interactive changes to a single file. For this, the -e option is used. This command, for example, changes the root password in a Linux guest to having no password:
$ virt-edit LinuxGuest /etc/passwd -e 's/^root:.*?:/root::/'
virt-ls is similar to the guestfish ls, ll and find commands. It is used to list a directory or directories (recursively). For example, the following command would recursively list files and directories under /home in a Linux guest:
$ virt-ls -R LinuxGuest /home/ | less
virt-rescue, which can be considered analogous to a rescue CD for virtual machines. It boots a guest into a rescue shell so that maintenance can be performed to correct errors and the guest can be repaired.
virt-rescue on a guest, make sure the guest is not running, otherwise disk corruption will occur. When you are sure the guest is not live, enter:
virt-rescue GuestName
virt-rescue /path/to/disk/image
Welcome to virt-rescue, the libguestfs rescue shell. Note: The contents of / are the rescue appliance. You have to mount the guest's partitions under /sysroot before you can examine them. bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell ><rescue>
><rescue> fdisk -l /dev/vda
/sysroot, which is an empty directory in the rescue machine for the user to mount anything you like. Note that the files under / are files from the rescue VM itself:
><rescue> mount /dev/vda1 /sysroot/ EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null) ><rescue> ls -l /sysroot/grub/ total 324 -rw-r--r--. 1 root root 63 Sep 16 18:14 device.map -rw-r--r--. 1 root root 13200 Sep 16 18:14 e2fs_stage1_5 -rw-r--r--. 1 root root 12512 Sep 16 18:14 fat_stage1_5 -rw-r--r--. 1 root root 11744 Sep 16 18:14 ffs_stage1_5 -rw-------. 1 root root 1503 Oct 15 11:19 grub.conf [...]
exit or Ctrl+d.
virt-rescue has many command line options. The options most often used are:
virt-df, which displays file system usage from a disk image or a guest. It is similar to the Linux df command, but for virtual machines.
# virt-df /dev/vg_guests/RHEL6 Filesystem 1K-blocks Used Available Use% RHEL6:/dev/sda1 101086 10233 85634 11% RHEL6:/dev/VolGroup00/LogVol00 7127864 2272744 4493036 32%
(/dev/vg_guests/RHEL6 is a Red Hat Enterprise Linux 6 guest disk image. The path in this case is the host logical volume where this disk image is located.)
virt-df on its own to list information about all of your guests (ie. those known to libvirt). The virt-df command recognizes some of the same options as the standard df such as -h (human-readable) and -i (show inodes instead of blocks).
virt-df also works on Windows guests:
# virt-df -h
Filesystem Size Used Available Use%
F14x64:/dev/sda1 484.2M 66.3M 392.9M 14%
F14x64:/dev/vg_f14x64/lv_root 7.4G 3.0G 4.4G 41%
RHEL6brewx64:/dev/sda1 484.2M 52.6M 406.6M 11%
RHEL6brewx64:/dev/vg_rhel6brewx64/lv_root
13.3G 3.4G 9.2G 26%
Win7x32:/dev/sda1 100.0M 24.1M 75.9M 25%
Win7x32:/dev/sda2 19.9G 7.4G 12.5G 38%
virt-df safely on live guests, since it only needs read-only access. However, you should not expect the numbers to be precisely the same as those from a df command running inside the guest. This is because what is on disk will be slightly out of synch with the state of the live guest. Nevertheless it should be a good enough approximation for analysis and monitoring purposes.
--csv option to generate machine-readable Comma-Separated-Values (CSV) output. CSV output is readable by most databases, spreadsheet software and a variety of other tools and programming languages. The raw CSV looks like the following:
# virt-df --csv WindowsGuest Virtual Machine,Filesystem,1K-blocks,Used,Available,Use% Win7x32,/dev/sda1,102396,24712,77684,24.1% Win7x32,/dev/sda2,20866940,7786652,13080288,37.3%
virt-resize, a tool for expanding or shrinking guests. It only works for guests which are offline (shut down). It works by copying the guest image and leaving the original disk image untouched. This is ideal because you can use the original image as a backup, however there is a trade-off as you need twice the amount of disk space.
virsh dumpxml GuestName for a libvirt guest.
virt-df -h and virt-list-partitions -lh on the guest disk, as shown in the following output:
# virt-df -h /dev/vg_guests/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 98.7M 10.0M 83.6M 11% RHEL6:/dev/VolGroup00/LogVol00 6.8G 2.2G 4.3G 32% # virt-list-partitions -lh /dev/vg_guests/RHEL6 /dev/sda1 ext3 101.9M /dev/sda2 pv 7.9G
/dev/VolGroup00/LogVol00 to fill the new space in the second partition.
mv command. For logical volumes (as demonstrated in this example), use lvrename:
# lvrename /dev/vg_guests/RHEL6 /dev/vg_guests/RHEL6.backup
# lvcreate -L 16G -n RHEL6 /dev/vg_guests
Logical volume "RHEL6" created
# virt-resize \
/dev/vg_guests/RHEL6.backup /dev/vg_guests/RHEL6 \
--resize /dev/sda1=500M \
--expand /dev/sda2 \
--LV-expand /dev/VolGroup00/LogVol00
--resize /dev/sda1=500M resizes the first partition up to 500MB. --expand /dev/sda2 expands the second partition to fill all remaining space. --LV-expand /dev/VolGroup00/LogVol00 expands the guest logical volume to fill the extra space in the second partition.
virt-resize describes what it is doing in the output:
Summary of changes: /dev/sda1: partition will be resized from 101.9M to 500.0M /dev/sda1: content will be expanded using the 'resize2fs' method /dev/sda2: partition will be resized from 7.9G to 15.5G /dev/sda2: content will be expanded using the 'pvresize' method /dev/VolGroup00/LogVol00: LV will be expanded to maximum size /dev/VolGroup00/LogVol00: content will be expanded using the 'resize2fs' method Copying /dev/sda1 ... [#####################################################] Copying /dev/sda2 ... [#####################################################] Expanding /dev/sda1 using the 'resize2fs' method Expanding /dev/sda2 using the 'pvresize' method Expanding /dev/VolGroup00/LogVol00 using the 'resize2fs' method
virt-df and/or virt-list-partitions to show the new size:
# virt-df -h /dev/vg_pin/RHEL6 Filesystem Size Used Available Use% RHEL6:/dev/sda1 484.4M 10.8M 448.6M 3% RHEL6:/dev/VolGroup00/LogVol00 14.3G 2.2G 11.4G 16%
virt-resize fails, there are a number of tips that you can review and attempt in the virt-resize(1) man page. For some older Red Hat Enterprise Linux guests, you may need to pay particular attention to the tip regarding GRUB.
virt-inspector is a tool for inspecting a disk image to find out what operating system it contains.
virt-inspector is the original program as found in Red Hat Enterprise Linux 6.0 and is now deprecated upstream. virt-inspector2 is the same as the new upstream virt-inspector program.
# yum install libguestfs-tools libguestfs-devel
/usr/share/doc/libguestfs-devel-*/ where "*" is replaced by the version number of libguestfs.
virt-inspector against any disk image or libvirt guest as shown in the following example:
virt-inspector --xml disk.img > report.xml
virt-inspector --xml GuestName > report.xml
The result will be an XML report (report.xml). The main components of the XML file are a top-level <operatingsystems> element containing usually a single <operatingsystem> element, similar to the following:
<operatingsystems>
<operatingsystem>
<!-- the type of operating system and Linux distribution -->
<name>linux</name>
<distro>fedora</distro>
<!-- the name, version and architecture -->
<product_name>Fedora release 12 (Constantine)</product_name>
<major_version>12</major_version>
<arch>x86_64</arch>
<!-- how the filesystems would be mounted when live -->
<mountpoints>
<mountpoint dev="/dev/vg_f12x64/lv_root">/</mountpoint>
<mountpoint dev="/dev/sda1">/boot</mountpoint>
</mountpoints>
<!-- the filesystems -->
<filesystems>
<filesystem dev="/dev/sda1">
<type>ext4</type>
</filesystem>
<filesystem dev="/dev/vg_f12x64/lv_root">
<type>ext4</type>
</filesystem>
<filesystem dev="/dev/vg_f12x64/lv_swap">
<type>swap</type>
</filesystem>
</filesystems>
<!-- packages installed -->
<applications>
<application>
<name>firefox</name>
<version>3.5.5</version>
<release>1.fc12</release>
</application>
</applications>
</operatingsystem>
</operatingsystems>
xpath) which can be used for simple instances; however, for long-term and advanced usage, you should consider using an XPath library along with your favorite programming language.
virt-inspector --xml GuestName | xpath //filesystem/@dev
Found 3 nodes:
-- NODE --
dev="/dev/sda1"
-- NODE --
dev="/dev/vg_f12x64/lv_root"
-- NODE --
dev="/dev/vg_f12x64/lv_swap"
virt-inspector --xml GuestName | xpath //application/name
[...long list...]
virt-win-reg is a tool that manipulates the Registry in Windows guests. It can be used to read out registry keys. You can also use it to make changes to the Registry, but you must never try to do this for live/running guests, as it will result in disk corruption.
virt-win-reg you must run the following:
# yum install libguestfs-tools libguestfs-winsupport
# virt-win-reg WindowsGuest \
'HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall' \
| less
.REG files on Windows.
.REG files from one machine to another.
virt-win-reg through this simple Perl script:
perl -MEncode -pe's?hex\((\d+)\):(\S+)?$t=$1;$_=$2;s,\,,,g;"str($t):\"".decode(utf16le=>pack("H*",$_))."\""?eg'
.REG file. There is a great deal of documentation about doing this available from MSDN, and there is a good summary in the following Wikipedia page: https://secure.wikimedia.org/wikipedia/en/wiki/Windows_Registry#.REG_files. When you have prepared a .REG file, enter the following:
# virt-win-reg --merge WindowsGuest input.reg
# yum install libguestfs-devel
# yum install 'perl(Sys::Guestfs)'
# yum install python-libguestfs
# yum install libguestfs-java libguestfs-java-devel libguestfs-javadoc
# yum install ruby-libguestfs
# yum install ocaml-libguestfs ocaml-libguestfs-devel
guestfs_launch (g);
$g->launch ()
g#launch ()
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>
int
main (int argc, char *argv[])
{
guestfs_h *g;
g = guestfs_create ();
if (g == NULL) {
perror ("failed to create libguestfs handle");
exit (EXIT_FAILURE);
}
/* ... */
guestfs_close (g);
exit (EXIT_SUCCESS);
}
test.c). Compile this program and run it with the following two commands:
gcc -Wall test.c -o test -lguestfs
./test
disk.img and be created in the current directory.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <guestfs.h>
int
main (int argc, char *argv[])
{
guestfs_h *g;
size_t i;
g = guestfs_create ();
if (g == NULL) {
perror ("failed to create libguestfs handle");
exit (EXIT_FAILURE);
}
/* Create a raw-format sparse disk image, 512 MB in size. */
int fd = open ("disk.img", O_CREAT|O_WRONLY|O_TRUNC|O_NOCTTY, 0666);
if (fd == -1) {
perror ("disk.img");
exit (EXIT_FAILURE);
}
if (ftruncate (fd, 512 * 1024 * 1024) == -1) {
perror ("disk.img: truncate");
exit (EXIT_FAILURE);
}
if (close (fd) == -1) {
perror ("disk.img: close");
exit (EXIT_FAILURE);
}
/* Set the trace flag so that we can see each libguestfs call. */
guestfs_set_trace (g, 1);
/* Set the autosync flag so that the disk will be synchronized
* automatically when the libguestfs handle is closed.
*/
guestfs_set_autosync (g, 1);
/* Add the disk image to libguestfs. */
if (guestfs_add_drive_opts (g, "disk.img",
GUESTFS_ADD_DRIVE_OPTS_FORMAT, "raw", /* raw format */
GUESTFS_ADD_DRIVE_OPTS_READONLY, 0, /* for write */
-1 /* this marks end of optional arguments */ )
== -1)
exit (EXIT_FAILURE);
/* Run the libguestfs back-end. */
if (guestfs_launch (g) == -1)
exit (EXIT_FAILURE);
/* Get the list of devices. Because we only added one drive
* above, we expect that this list should contain a single
* element.
*/
char **devices = guestfs_list_devices (g);
if (devices == NULL)
exit (EXIT_FAILURE);
if (devices[0] == NULL || devices[1] != NULL) {
fprintf (stderr,
"error: expected a single device from list-devices\n");
exit (EXIT_FAILURE);
}
/* Partition the disk as one single MBR partition. */
if (guestfs_part_disk (g, devices[0], "mbr") == -1)
exit (EXIT_FAILURE);
/* Get the list of partitions. We expect a single element, which
* is the partition we have just created.
*/
char **partitions = guestfs_list_partitions (g);
if (partitions == NULL)
exit (EXIT_FAILURE);
if (partitions[0] == NULL || partitions[1] != NULL) {
fprintf (stderr,
"error: expected a single partition from list-partitions\n");
exit (EXIT_FAILURE);
}
/* Create an ext4 filesystem on the partition. */
if (guestfs_mkfs (g, "ext4", partitions[0]) == -1)
exit (EXIT_FAILURE);
/* Now mount the filesystem so that we can add files. */
if (guestfs_mount_options (g, "", partitions[0], "/") == -1)
exit (EXIT_FAILURE);
/* Create some files and directories. */
if (guestfs_touch (g, "/empty") == -1)
exit (EXIT_FAILURE);
const char *message = "Hello, world\n";
if (guestfs_write (g, "/hello", message, strlen (message)) == -1)
exit (EXIT_FAILURE);
if (guestfs_mkdir (g, "/foo") == -1)
exit (EXIT_FAILURE);
/* This uploads the local file /etc/resolv.conf into the disk image. */
if (guestfs_upload (g, "/etc/resolv.conf", "/foo/resolv.conf") == -1)
exit (EXIT_FAILURE);
/* Because 'autosync' was set (above) we can just close the handle
* and the disk contents will be synchronized. You can also do
* this manually by calling guestfs_umount_all and guestfs_sync.
*/
guestfs_close (g);
/* Free up the lists. */
for (i = 0; devices[i] != NULL; ++i)
free (devices[i]);
free (devices);
for (i = 0; partitions[i] != NULL; ++i)
free (partitions[i]);
free (partitions);
exit (EXIT_SUCCESS);
}
gcc -Wall test.c -o test -lguestfs
./test
disk.img, which you can examine with guestfish:
guestfish --ro -a disk.img -m /dev/sda1
><fs> ll /
><fs> cat /foo/resolv.conf
$ libguestfs-test-tool
===== TEST FINISHED OK =====

libvirt daemon is first installed and started, the default network interface representing the virtual network switch is virbr0.

virbr0 interface can be viewed with the ifconfig and ip commands like any other interface:
$ ifconfig virbr0
virbr0 Link encap:Ethernet HWaddr 1B:C4:94:CF:FD:17
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:3097 (3.0 KiB)
$ ip addr show virbr0
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 1b:c4:94:cf:fd:17 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

dnsmasq program for this. An instance of dnsmasq is automatically configured and started by libvirt for each virtual network switch that needs it.



libvirtd daemon is first installed, it contains an initial virtual network switch configuration in NAT mode. This configuration is used so that installed guests can communicate to the external network, through the host. The following image demonstrates this default configuration for libvirtd:

eth0, eth1 and eth2). This is only useful in routed and NAT modes, and can be defined in the dev=<interface> option, or in virt-manager when creating a new virtual network.


















| Item | Description |
|---|---|
| pae | Specifies the physical address extension configuration data. |
| apic | Specifies the advanced programmable interrupt controller configuration data. |
| memory | Specifies the memory size in megabytes. |
| vcpus | Specifies the numbers of virtual CPUs. |
| console | Specifies the port numbers to export the domain consoles to. |
| nic | Specifies the number of virtual network interfaces. |
| vif | Lists the randomly-assigned MAC addresses and bridges assigned to use for the domain's network addresses. |
| disk | Lists the block devices to export to the domain and exports physical devices to domain with read only access. |
| dhcp | Enables networking using DHCP. |
| netmask | Specifies the configured IP netmasks. |
| gateway | Specifies the configured IP gateways. |
| acpi | Specifies the advanced configuration power interface configuration data. |
libvirt.
libvirt.
virsh can handle XML configuration files. You may want to use this to your advantage for scripting large deployments with special options. You can add devices defined in an XML file to a running guest. For example, to add an ISO file as hdc to a running guest create an XML file:
# cat satelliteiso.xml <disk type="file" device="disk"> <driver name="file"/> <source file="/var/lib/libvirt/images/rhn-satellite-5.0.1-11-redhat-linux-as-i386-4-embedded-oracle.iso"/> <target dev="hdc"/> <readonly/> </disk>
virsh attach-device to attach the ISO as hdc to a guest called "satellite":
# virsh attach-device satellite satelliteiso.xml
qemu-kvm utility used as an emulator and a virtualizer in Red Hat Enterprise Linux 6. This is a comprehensive summary of the supported options.
kvm_stat
trace-cmd
vmstat
iostat
lsof
systemtap
crash
sysrq
sysrq t
sysrq w
ifconfig
tcpdump
tcpdump command 'sniffs' network packets. tcpdump is useful for finding network abnormalities and problems with network authentication. There is a graphical version of tcpdump named wireshark.
brctl
brctl is a networking tool that inspects and configures the Ethernet bridge configuration in the Linux kernel. You must have root access before performing these example commands:
# brctl show bridge-name bridge-id STP enabled interfaces ----------------------------------------------------------------------------- virtbr0 8000.feffffff yes eth0 # brctl showmacs virtbr0 port-no mac-addr local? aging timer 1 fe:ff:ff:ff:ff: yes 0.00 2 fe:ff:ff:fe:ff: yes 0.00 # brctl showstp virtbr0 virtbr0 bridge-id 8000.fefffffffff designated-root 8000.fefffffffff root-port 0 path-cost 0 max-age 20.00 bridge-max-age 20.00 hello-time 2.00 bridge-hello-time 2.00 forward-delay 0.00 bridge-forward-delay 0.00 aging-time 300.01 hello-timer 1.43 tcn-timer 0.00 topology-change-timer 0.00 gc-timer 0.02
yum install vnc command.
yum install vnc-server command.
The kvm_stat command is a python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm, in particular performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported.
kvm_stat command requires that the kvm kernel module is loaded and debugfs is mounted. If either of these features are not enabled, the command will output the required steps to enable debugfs or the kvm module. For example:
# kvm_stat
Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug')
and ensure the kvm modules are loaded
Mount debugfs if required:
# mount -t debugfs debugfs /sys/kernel/debug
kvm_stat command outputs statistics for all guests and the host. The output is updated until the command is terminated (using Ctrl+c or the q key).
# kvm_stat kvm statistics efer_reload 94 0 exits 4003074 31272 fpu_reload 1313881 10796 halt_exits 14050 259 halt_wakeup 4496 203 host_state_reload 1638354 24893 hypercalls 0 0 insn_emulation 1093850 1909 insn_emulation_fail 0 0 invlpg 75569 0 io_exits 1596984 24509 irq_exits 21013 363 irq_injections 48039 1222 irq_window 24656 870 largepages 0 0 mmio_exits 11873 0 mmu_cache_miss 42565 8 mmu_flooded 14752 0 mmu_pde_zapped 58730 0 mmu_pte_updated 6 0 mmu_pte_write 138795 0 mmu_recycled 0 0 mmu_shadow_zapped 40358 0 mmu_unsync 793 0 nmi_injections 0 0 nmi_window 0 0 pf_fixed 697731 3150 pf_guest 279349 0 remote_tlb_flush 5 0 request_irq 0 0 signal_exits 1 0 tlb_flush 200190 0
VMEXIT calls.
VMENTRY reloaded the FPU state. The fpu_reload is incremented when a guest is using the Floating Point Unit (FPU).
halt calls. This type of exit is usually seen when a guest is idle.
halt.
insn_emulation attempts.
tlb_flush operations performed by the hypervisor.
The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files located in the /sys/kernel/debug/kvm/ directory.
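As a minimal illustration, the individual counters can be listed and read directly from debugfs once it is mounted (the exact counter names can vary between kernel versions):
# ls /sys/kernel/debug/kvm/
# cat /sys/kernel/debug/kvm/exits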
virsh console command.
ttyS0 on Linux or COM1 on Windows.
/boot/grub/grub.conf file. Append the following to the kernel line: console=tty0 console=ttyS0,115200.
title Red Hat Enterprise Linux Server (2.6.32-36.x86-64)
root (hd0,0)
kernel /vmlinuz-2.6.32-36.x86-64 ro root=/dev/volgroup00/logvol00 \
console=tty0 console=ttyS0,115200
initrd /initrd-2.6.32-36.x86-64.img
# virsh console
virt-manager to display the virtual text console. In the guest console window, select Serial 1 in Text Consoles from the View menu.
Each guest has a log in the /var/log/libvirt/qemu/ directory. Each guest log is named GuestName.log and will be periodically compressed once a size limit is reached.
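For example, to follow the log of a hypothetical guest named guest1 while reproducing a problem:
# tail -f /var/log/libvirt/qemu/guest1.log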
virt-manager.log file that resides in the $HOME/.virt-manager directory.
/etc/modprobe.d/directory. Add the following line:
options loop max_loop=64
phy: device or file: file commands.
Enabling the virtualization extensions in BIOS
Verify that the virtualization extensions are enabled with the grep -E "vmx|svm" /proc/cpuinfo command. If the command produces output, the virtualization extensions are now enabled. If there is no output, your system may not have the virtualization extensions or the correct BIOS setting enabled.
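As a quick check, the following command counts the processor entries that advertise either flag; a result of 0 means the extensions are not visible to the operating system:
# grep -E -c '(vmx|svm)' /proc/cpuinfo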
e1000) driver is also supported as an emulated driver choice. To use the e1000 driver, replace virtio in the procedure below with e1000. For the best performance it is recommended to use the virtio driver.
virsh command (where GUEST is the guest's name):
# virsh edit GUEST
The virsh edit command uses the $EDITOR shell variable to determine which editor to use.
<interface type='network'>
[output truncated]
<model type='rtl8139' />
</interface>
Change the model type from 'rtl8139' to 'virtio'. This will change the driver from the rtl8139 driver to the virtio driver.
<interface type='network'>
[output truncated]
<model type='virtio' />
</interface>
Guest1):
# virsh dumpxml Guest1 > /tmp/guest-template.xml
# cp /tmp/guest-template.xml /tmp/new-guest.xml
# vi /tmp/new-guest.xml
<interface type='network'>
[output truncated]
<model type='virtio' />
</interface>
# virsh define /tmp/new-guest.xml
# virsh start new-guest
libvirt virtualization API.
man virsh and /usr/share/doc/libvirt-<version-number> — Contains sub commands and options for the virsh virtual machine management utility as well as comprehensive information about the libvirt virtualization library API.
/usr/share/doc/gnome-applet-vm-<version-number> — Documentation for the GNOME graphical panel applet that monitors and manages locally-running virtual machines.
/usr/share/doc/libvirt-python-<version-number> — Provides details on the Python bindings for the libvirt library. The libvirt-python package allows python developers to create programs that interface with the libvirt virtualization management library.
/usr/share/doc/python-virtinst-<version-number> — Provides documentation on the virt-install command that helps in starting installations of Fedora and Red Hat Enterprise Linux related distributions inside of virtual machines.
/usr/share/doc/virt-manager-<version-number> — Provides documentation on the Virtual Machine Manager, which provides a graphical tool for administering virtual machines.
Revision History

| Revision | Date |
|---|---|
| 0.1-91 | Fri Dec 02 2011 |
| 0.1-87 | Mon Nov 28 2011 |
| 0.1-86 | Mon Nov 28 2011 |
| 0.1-85 | Mon Nov 21 2011 |
| 0.1-84 | Tue Nov 15 2011 |
| 0.1-83 | Tue Nov 15 2011 |
| 0.1-82 | Tue Nov 15 2011 |
| 0.1-81 | Tue Nov 15 2011 |
| 0.1-80 | Tue Nov 15 2011 |
| 0.1-79 | Thu Nov 03 2011 |
| 0.1-78 | Thu Nov 03 2011 |
| 0.1-76 | Tue Oct 25 2011 |
| 0.1-75 | Tue Oct 25 2011 |
| 0.1-73 | Tue Oct 18 2011 |
| 0.1-69 | Mon Sep 19 2011 |
| 0.1-68 | Fri Sep 16 2011 |
| 0.1-67 | Thu Sep 15 2011 |
| 0.1-66 | Tue Sep 13 2011 |
| 0.1-65 | Mon Sep 12 2011 |
| 0.1-64 | Tue Sep 06 2011 |
| 0.1-03 | Thu Jun 2 2011 |
| 0.1-02 | Thu Jun 2 2011 |