Using virt-install to Install a Domain

July 18, 2011 | Tags: linux, xen, virtual hosting

Note: this page describes the latest OpenSolaris xVM bits. Older versions may work differently.

The command-line utility virt-install is used for installing new guest domains. In general, you specify an installation source, some disk storage and networking, and a few other parameters, then go through the guest OS installation process. After virt-install is done, the OS will be installed and its configuration will be available via virsh. virt-manager is an alternative if you want a graphical guest installation tool. virt-install must be run as the root user.
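
Once an installation finishes, the new guest can be managed with virsh. For example (a minimal sketch; dom1 is the guest name used throughout the examples below):

virsh list --all       # show defined domains and their state
virsh start dom1       # boot the guest
virsh console dom1     # attach to the guest console
virsh shutdown dom1    # request a clean shutdown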

Below we'll show some typical invocations. For more detailed information, see the virt-install man page and the System Administration Guide: Virtualization Using the Solaris Operating System.

Configuring disk storage

The --disk option allows you to specify storage for a guest. xVM supports a number of different storage types, with various trade-offs. Some common choices are given below.

Configuring a ZFS volume

--disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol

This example creates a 10 GB ZFS volume. A ZFS volume enables all the features of ZFS (checksumming, snapshots, cloning, etc.) to be used for guest storage. However, it means that a guest cannot easily be moved onto a different system without transferring the storage separately (via zfs send/receive, for example), so this is not a good choice for guests you want to live migrate.

Note that tank is a ZFS filesystem mounted on the dom0 at /tank, and we’re specifying the dataset path. Unlike the zfs utilities, though, we currently have to prefix the path with a leading slash. Any intermediate datasets (here, guests/dom1) will be created as needed.
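
For reference, roughly equivalent manual steps look like this (a sketch only; virt-install performs the creation for you, and the snapshot name is purely illustrative):

zfs create -p -V 10g tank/guests/dom1/disk0           # 10 GB volume, creating parent datasets
zfs snapshot tank/guests/dom1/disk0@clean-install     # e.g. snapshot once installation is done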

Configuring a vdisk on a local filesystem

--disk path=/guests/dom1/disk0,size=10,driver=tap,subdriver=vdisk,format=vmdk

A VMDK-format vdisk is virt-install's default if the given path is a directory or doesn't yet exist. This creates (or reuses) a directory containing configuration information along with the files holding the disk image. Snapshotting and other features are provided by vdiskadm.

As well as VMware's VMDK format, vdisks also support VHD (Microsoft's format) and VDI (VirtualBox's format). In general, VMDK is the best-performing and most widely supported. Use format=vhd or format=vdi to choose those formats instead.
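
For example, vdiskadm can snapshot and clone a vdisk with something like the following (a sketch; the snapshot name is illustrative, and the exact subcommand syntax should be checked against vdiskadm(1M)):

vdiskadm snapshot /guests/dom1/disk0@pristine
vdiskadm clone /guests/dom1/disk0@pristine /guests/dom2/disk0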

Configuring a vdisk over NFS

--disk path=/net/nfshost/guests/dom1/disk0,size=10,driver=tap,subdriver=vdisk,format=vmdk

As vdisks can be stored on an NFS server, they are a good choice if you want to live migrate the guest, although remember the path to the vdisk must be exactly the same on each dom0 host.

For successful creation and use of a vdisk hosted on NFS, the user xvm on the dom0 host must have write permission for the given path. This may involve setting ACLs on the NFS host, or simply a chown of the containing directory to the xvm user or user ID 60.
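
For example, on the NFS server you might prepare and export the directory like this (a sketch; the export path and share options are assumptions for illustration):

chown -R 60:60 /export/guests/dom1      # 60 is the xvm user ID
share -F nfs -o rw /export/guests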

Configuring an iSCSI volume

--disk path=/alias/0/iscsi/dom1/disk0,driver=phy,subdriver=iscsi

iSCSI provides block devices over the network. The example above uses a pre-existing iSCSI target with the alias iscsi/dom1/disk0, and LUN 0. This is useful if you have already configured iSCSI on your dom0 and have volumes visible in iscsiadm list target.
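
For example, you can confirm from dom0 that the target and its alias are visible before running virt-install:

iscsiadm list target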

In addition to an alias, a volume can be specified by its LUN and target ID: path=/discover/0/iqn.1986-03.com.sun:02:d5ab1c26-0a7a-c6b4-98f8-d6d267eb2561. In general, it's simpler to use the alias given for the target.

Finally, you can create a static config, in which case the volume is discovered, configured, and de-configured for you automatically. For example, given an iSCSI target at 192.168.0.70, LUN 0, and a target ID, you might specify path=/static/192.168.0.70/0/iqn.1986-03.com.sun:02:d5ab1c26-0a7a-c6b4-98f8-d6d267eb2561.

Configuring a raw physical disk

--disk path=/dev/dsk/c0t1d0s4,driver=phy

You can use a block device accessible to dom0 for guest storage. This can be the best-performing method, but doesn’t allow any kind of higher-level facilities such as snapshotting. It is also difficult to live-migrate if the device refers to a local disk drive.
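
Before dedicating a slice to a guest, it's worth checking that dom0 isn't already using it; for example (a rough sketch, using the device from the example above):

grep c0t1d0s4 /etc/vfstab       # not mounted via vfstab?
swap -l | grep c0t1d0s4         # not in use as swap?
zpool status | grep c0t1d0      # not part of a ZFS pool?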

Configuring a raw file

--disk path=/guests/dom1/disk0.img,size=10,driver=file

Raw files are not recommended: it’s almost always better to use one of the above methods instead. In particular, attempting to use raw files on a ZFS filesystem will be woefully slow.

Configuring networking

You can specify networking for the guest via the --network option. Almost always, you will want to share a bridge amongst guest domains, but you can also dedicate an unused NIC, or perform host-only networking via an etherstub.

Most likely, you will want your guest OS to use DHCP to configure its networking. For this to work, a DHCP server must be available on the virtual network. By default, virt-install will automatically create a random MAC address for the guest with the Xen OUI. If, however, your DHCP server doesn't hand out addresses to unknown clients, you will need to specify a MAC address configured as a client on the DHCP server. In addition, most PXE and JumpStart setups require a specified MAC address.

The mac= setting for the --network option can be used to do this; be careful to specify a valid MAC address, as not much validation is done. In the examples, we presume that you don't need this option.
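
If you do need a fixed address, one way to generate a random MAC in the Xen OUI range is a one-liner like this (a bash/ksh sketch; any unused address with the 00:16:3e prefix will do):

printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM % 128)) $((RANDOM % 256)) $((RANDOM % 256))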

If you are running Xen on a laptop with a wireless connection as your primary NIC, you cannot use bridged mode, as the wireless chipsets typically reject any unknown MAC addresses. In this case, you can use an etherstub.

Configuring a bridge

--network bridge=e1000g0

Bridged networking is used for ethernet-based sharing of a physical network resource. In the example above, a VNIC is created on top of the dom0 interface e1000g0. Alternatively, you can just specify --network bridge, in which case a host NIC will be automatically chosen (typically, the first UP interface on which a VNIC can be created). In fact, this is preferred, as it allows a guest domain's config to be migrated to host machines using a different NIC as the bridge.

Any traffic for the guest’s MAC address is sent to the guest via the VNIC. VNICs configured on the system can be shown via dladm show-vnic. This VNIC is automatically removed once the domain is no longer running.

Configuring a dedicated NIC

Specifying a network-setup script of vif-dedicated allows a guest to directly use a NIC on the host, without going through a VNIC. The NIC should not be configured on the host dom0. As of June 2009, it’s not possible to configure this via virt-install. To add such a NIC after installation:

virsh attach-interface dom1 bridge e1000g0 --script vif-dedicated

Configuring an etherstub

dladm create-etherstub stub0
dladm set-linkprop -p mtu=1500 stub0
...
... --network bridge=stub0

If you specify an etherstub as your bridge NIC, then you have a form of host-only networking: no packets can leave the virtual network created on top of stub0, as it is not connected to any physical interface. An MTU of 1500 (not the default 9000) is required.

You would typically then install from an ISO. For a network installation to work, you’ll have to provide network services on top of the etherstub (for example, another guest might be connected to the etherstub to provide an NFS server).
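
For instance, to let dom0 itself provide services on the host-only network, one approach is to give dom0 a VNIC on the etherstub (a sketch; the VNIC name and address are assumptions):

dladm create-vnic -l stub0 vnic0
ifconfig vnic0 plumb 192.168.100.1/24 up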

Configuring a VLAN

--network bridge,vlanid=2

To configure a guest to use a particular VLAN ID, use the vlanid setting.

Configuring bandwidth control

--network bridge,capped-bandwidth=200M

The capped-bandwidth setting configures Crossbow bandwidth control for the guest's VNIC.

Configuring a network mask

--extra-args "-B subnet-mask=255.255.254.0"

If you have a non-standard network mask, the guest OS's default may not be sufficient. This example tells a Solaris guest to use a mask of 255.255.254.0. Any such options are guest-specific. Note that the --extra-args option only applies to --paravirt installations.

Accessing the guest console

By default, virt-install will attempt to connect to the guest's console when it starts, to allow an interactive installation. For a text-console install, the --nographics option can be used. In that case, the guest's console is routed to the terminal from which you invoke virt-install.

Alternatively, specifying --vnc will configure a virtual frame buffer. In general, this works for both HVM and para-virtual guests, but there are some variations, as shown below.

If you're running virt-install in a graphical environment (that is, $DISPLAY is set), it will start a VNC viewer connected to the guest console for you. Otherwise, you will see a message like this:

Unable to connect to graphical console; DISPLAY is not set.  Please connect to localhost:5900
Domain installation still in progress. Waiting for domain to complete installation.

Keep the virt-install process running, and connect a VNC viewer to the address given. Note that if you are connecting from a remote host (not dom0), you will have to configure dom0 VNC access before starting virt-install.
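
If the VNC port on dom0 isn't directly reachable, one common approach is to tunnel it over ssh from the remote host (a sketch; dom0host is a placeholder, and port 5900 follows the message above):

ssh -L 5900:localhost:5900 root@dom0host
vncviewer localhost:5900        # then, in another terminal on the remote host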

Full examples

Installing an OpenSolaris guest from an ISO

virt-install --paravirt --name dom1 --ram 1024 --nographics \
 --os-type=solaris --os-variant=opensolaris \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location /isos/osol-200906.iso

Installs OpenSolaris onto a ZFS volume from the ISO /isos/osol-200906.iso. 1024 MB is the recommended minimum for Solaris. As Solaris doesn't yet support a para-virtualized frame buffer, we start a text install; after the guest starts, a few additional, separately documented steps are needed to reach the graphical installer.

Note that OpenSolaris currently requires networking configuration to happen via DHCP.

Installing an OpenSolaris automated-install guest

virt-install --paravirt --name dom1 --ram 1024 --nographics \
 --os-type=solaris --os-variant=opensolaris \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location http://10.0.0.1:5555/space/images/osol_111b \
 --autocf install_service=myservice

Installs an OpenSolaris guest using the 'myservice' AI install service from the given install media. Note that you must use the IP address of the install server, not its hostname.

virt-install --paravirt --name dom1 --ram 1024 --nographics \
 --os-type=solaris --os-variant=opensolaris \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location http://10.0.0.1:5555/space/images/osol_111b \
 --autocf ""

Installs an OpenSolaris guest where the guest should self-discover its install service (note: in 2009.06, AI does not yet have this capability, but it is one of the features being investigated by the development team).

Installing a Windows guest

virt-install --hvm --name dom1 --ram 512 --vnc \
 --os-type=windows --os-variant=win2k \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --cdrom /isos/win2k.iso

Since Windows doesn't have a full-PV mode, we specify --hvm for the virtualization type, and point to the Windows installation ISO with --cdrom.

With Windows installs, the guest will reboot during the installation. virt-install will keep running during this guest reboot and ensure the installation media is presented to the guest as needed.

Installing a Linux guest over HTTP

virt-install --paravirt --name dom1 --ram 1024 --vnc \
 --os-type=linux --os-variant=fedora8 \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location http://fedora.mirror.facebook.net/linux/releases/10/Fedora/x86_64/os/

Some Linux distributions, such as CentOS and Fedora, can install directly over HTTP. Choose a suitable mirror near your dom0 host.

Note that some Linux versions such as Fedora 10 will not successfully install with --nographics, and also have problems starting X; a text install via VNC works, however.

Some distributions allow you to change the RTC setting (for example, Fedora has a "System clock uses UTC" checkbox). If there is such an option, leave it as UTC: even though Solaris dom0 uses a localtime RTC, the system ensures the RTC is virtualized correctly for Linux, as long as you specify the --os-type flag.

Installing a Linux guest from an ISO image

mount -F hsfs /isos/opensuse11.iso /mnt
share -o ro /mnt

virt-install --paravirt --name dom1 --ram 1024 --vnc \
 --os-type=linux --os-variant=sles10 \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location nfs:nfshost.domain.com:/mnt

If you cannot do an HTTP install, your other option is to serve the ISO over NFS. Since Linux cannot do --paravirt installs directly from an ISO image, you have to share the ISO image over NFS and install that way.

Make sure to specify the FQDN of your NFS host. Typical Linux installations insist on using DNS to resolve names, so they won't be able to find partial names like nfshost.domain.

Installing a Solaris 10 guest from an ISO

virt-install --hvm --name dom1 --ram 1024 --vnc \
 --os-type=solaris --os-variant=solaris10 \
 --network bridge \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --cdrom /isos/s10u7.iso

Solaris 10 only supports HVM mode. It’s recommended to use at least Solaris 10 Update 7, though earlier versions also work.

Installing a Solaris guest with PXE

virt-install --hvm --name dom1 --ram 1024 --vnc \
 --os-type=solaris --os-variant=solaris10 \
 --network bridge,mac=00:16:3e:1b:e8:18 \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --pxe

Presuming you have a PXE server set up waiting for the given MAC address, this example will start up a Solaris guest via PXE boot. Note that only HVM guests currently support PXE.

Installing a Solaris Nevada guest over NFS

virt-install --paravirt --name dom1 --ram 1024 --nographics \
 --os-type=solaris --os-variant=opensolaris \
 --network bridge,mac=00:16:3e:1b:e8:18 \
 --disk path=/tank/guests/dom1/disk0,size=10,driver=phy,subdriver=zvol \
 --location nfs:netinstall.sfbay.sun.com:/export/nv/x/latest/ \
 --extra-args "-B subnet-mask=255.255.254.0"

This is similar to the OpenSolaris ISO install, except that we’re installing over NFS. In this case, we’ve chosen to specify a particular MAC address. We also have a non-standard subnet mask, so need to add that. Unlike OpenSolaris, Nevada has a text installer, so you can install directly from the console (this is recommended).


Related posts

  1. July 18, 2011, 11:05

    Example: install a paravirtualized guest with 512 MB of memory and a 5 GB disk from an HTTP-served installation tree, in text-only mode.

    # virt-install --paravirt --name rhel5u4 --ram 512 --file /var/lib/xen/images/rhel5u4.img \
    --file-size 6 --nographics --location http://192.168.0.254/rhel5u4

    Here are some more example installation commands:
    # virt-install --name rhel5u4 --ram 512 --file=/var/lib/libvirt/images/rhel5u4.img \
    --file-size=3 --vnc --cdrom=/var/lib/libvirt/images/rhel5u4.iso -w network=default

    # virt-install -p -n rhel5u4 -r 512 -f /var/lib/libvirt/images/rhel5u4.img -s 3 \
    --vnc --cdrom=/var/lib/libvirt/images/rhel5u4.iso -w network=default

    # virt-install -p -n rhel5u4 -r 512 -f /var/lib/libvirt/images/rhel5u4.img -s 3 \
    --vnc -l http://192.168.0.254/rhel5u4 -w network=default

    # virt-install -p -n rhel5u4 -r 512 -f /var/lib/libvirt/images/rhel5u4.img -s 3 \
    --vnc --location=http://192.168.0.2/rhel5u4 -x ks=http://192.168.122.1/ks.cfg \
    -w network=default

    # virt-install -p -n rhel5u4 -r 512 -f /var/lib/libvirt/images/rhel5u4.img -s 3 \
    --vnc -l http://192.168.0.254/rhel5u4 --extra-args='ks=http://192.168.122.1/ks.cfg' \
    -w network=default

    Commonly used options:
    -n NAME, --name=NAME: the guest's name

    -r MEMORY, --ram=MEMORY: the amount of guest memory

    -u UUID, --uuid=UUID: the guest's UUID
    You can generate a UUID with the uuidgen command:
    # uuidgen
    a89a3751-3555-4be5-8157-5e205ddba5bb
    or with the following command:
    # echo 'import virtinst.util ; print \
    virtinst.util.uuidToString(virtinst.util.randomUUID())' | python
    4217ef56-b0d9-071d-6157-c98d0e6d240a

    --vcpus=VCPUS: the number of virtual CPUs for the guest

    -p, --paravirt: install a paravirtualized guest

    -f DISKFILE, --file=DISKFILE: the guest's virtual disk, which can be a file, a disk partition, or an LVM volume; this option specifies the path to the virtual disk

    -s DISKSIZE, --file-size=DISKSIZE: the size of the virtual disk in GB; if the path given to --file doesn't exist and --nonsparse isn't specified, storage space is not preallocated for the file

    -w NETWORK, --network=NETWORK: NETWORK takes one of three forms: bridge:BRIDGE, network:NAME, or user

    -c CDROM, --cdrom=CDROM: the virtual CD for a fully virtualized guest. It can be an ISO image file, a CD-ROM device, or a boot.iso image reachable via a URL. If it is omitted, --location must specify the location of a kernel and initrd, or --pxe can be used to install over the network.

    --pxe: use the PXE boot protocol to load the initial ramdisk and kernel and start the guest installation. If --pxe isn't specified, one of --cdrom or --location must be.

    -l LOCATION, --location=LOCATION: the installation source for the kernel and initrd, which is required for paravirtualization. For full virtualization, use either --location or --cdrom to point at an ISO or CD-ROM image. LOCATION must take one of the following four forms:
    DIRECTORY
    nfs:host:/path
    http://host/path
    ftp://host/path

    -x EXTRA, --extra-args=EXTRA: extra kernel command-line arguments to pass to the loaded kernel and initrd.

    (See man virt-install for more details on these and other options.)
