Subsections of Topics

API for configuring the udhcp server in Dom0

This API allows you to configure the DHCP service running on the Host Internal Management Network (HIMN). The API configures a udhcp daemon residing in Dom0 and alters the service configuration for any VM using the network.

For this reason, callers who modify the default configuration should be aware that their changes may have an adverse effect on other consumers of the HIMN.

Version history

Date        State
----        ----
2013-3-15   Stable

Stable: this API is considered stable and unlikely to change between software versions and between hotfixes.

API description

The API for configuring the network is based on a series of other_config keys that can be set by the caller on the HIMN XAPI network object. Once any of the keys below have been set, the caller must ensure that any VIFs attached to the HIMN are removed, destroyed, created and plugged.

ip_begin

The first IP address in the desired subnet that the caller wishes the DHCP service to use.

ip_end

The last IP address in the desired subnet that the caller wishes the DHCP service to use.

netmask

The subnet mask for each of the issued IP addresses.

ip_disable_gw

A boolean key for disabling the DHCP server from returning a default gateway for VMs on the network. To disable returning the gateway address, set the key to true.

Note: By default, the DHCP server will issue a default gateway for those requesting an address. Setting this key may disrupt applications that require the default gateway for communicating with Dom0 and so should be used with care.

Example code

An example Python extract of setting the configuration for the network:

def get_himn_ref():
    # Find the network object flagged as the host internal management network
    networks = session.xenapi.network.get_all_records()
    for ref, rec in networks.items():
        if 'is_host_internal_management_network' in rec['other_config']:
            return ref

    raise Exception("Error: unable to find HIMN.")


himn_ref = get_himn_ref()
other_config = session.xenapi.network.get_other_config(himn_ref)

other_config['ip_begin'] = "169.254.0.1"
other_config['ip_end'] = "169.254.255.254"
other_config['netmask'] = "255.255.0.0"

session.xenapi.network.set_other_config(himn_ref, other_config)

An example for how to disable the server returning a default gateway:

himn_ref = get_himn_ref()
other_config = session.xenapi.network.get_other_config(himn_ref)

other_config['ip_disable_gw'] = 'true'  # other_config values are strings

session.xenapi.network.set_other_config(himn_ref, other_config)

Guest agents

“Guest agents” are special programs which run inside VMs and can be controlled via the XenAPI.

One communication method between XenAPI clients and guest agents is via Xenstore.

Adding Xenstore entries to VMs

Developers may wish to install guest agents into VMs which take special action based on the type of the VM. In order to communicate this information into the guest, a special Xenstore name-space known as vm-data is available which is populated at VM creation time. It is populated from the xenstore-data map in the VM record.

Set the xenstore-data parameter in the VM record:

xe vm-param-set uuid=<vm_uuid> xenstore-data:vm-data/foo=bar

Start the VM.

If it is a Linux-based VM, install the COMPANY_TOOLS and use the xenstore-read command to verify that the node exists in Xenstore.

Note

Only prefixes beginning with vm-data are permitted, and anything not in this name-space will be silently ignored when starting the VM.
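
The same key can also be set through the API. A minimal Python sketch, assuming an authenticated session as in the other examples in this document and a hypothetical placeholder for the VM uuid:

vm_ref = session.xenapi.VM.get_by_uuid("<vm_uuid>")  # hypothetical placeholder

# Read, modify and write back the xenstore-data map; it is copied into
# the vm-data namespace in Xenstore when the VM is started.
xenstore_data = session.xenapi.VM.get_xenstore_data(vm_ref)
xenstore_data["vm-data/foo"] = "bar"
session.xenapi.VM.set_xenstore_data(vm_ref, xenstore_data)

Inside the guest, xenstore-read vm-data/foo should then print bar once the VM has started.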

Memory

Memory is used for many things:

  • the hypervisor code: this is the Xen executable itself
  • the hypervisor heap: this is needed for per-domain structures and per-vCPU structures
  • the crash kernel: this is needed to collect information after a host crash
  • domain RAM: this is the memory the VM believes it has
  • shadow memory: for HVM guests running on hosts without hardware-assisted paging (HAP), Xen uses shadow page tables to optimise page table updates. For all guests, shadow memory is used during live migration to track the memory transfer.
  • video RAM for the virtual graphics card

Some of these are constants (e.g. hypervisor code) while some depend on the VM configuration (e.g. domain RAM). Xapi calls the constant amounts “host overhead” and the amounts that depend on the VM configuration “VM overhead”. These overheads are subtracted from the free memory on the host when starting, resuming and migrating VMs.
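
Both kinds of overhead are exposed through the API. For example, a sketch using the Python bindings (the authenticated session is assumed, and vm_ref is a VM reference obtained as in the other examples in this document):

# "Host overhead": memory the host itself consumes, beyond what VMs use
host_ref = session.xenapi.host.get_all()[0]
host_overhead = int(session.xenapi.host.get_memory_overhead(host_ref))

# "VM overhead": memory needed on top of this VM's configured RAM
vm_overhead = int(session.xenapi.VM.get_memory_overhead(vm_ref))

print(host_overhead, vm_overhead)  # values in bytes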

Metrics

xcp-rrdd records statistics about the host and the VMs running on top. The metrics are stored persistently for long-term access and analysis of historical trends. Statistics are stored in RRDs (Round Robin Databases). RRDs are fixed-size structures that store time series with decreasing time resolution: the older the data point is, the longer the timespan it represents. ‘Data sources’ are sampled every few seconds and points are added to the highest resolution RRD. Periodically each high-frequency RRD is ‘consolidated’ (e.g. averaged) to produce a data point for a lower-frequency RRD.

RRDs are resident on the host on which the VM is running, or the pool coordinator when the VM is not running. The RRDs are backed up every day.

Granularity

Statistics are persisted for a maximum of one year, and are stored at different granularities. The average and most recent values are stored at intervals of:

  • five seconds for the past ten minutes
  • one minute for the past two hours
  • one hour for the past week
  • one day for the past year

RRDs are saved to disk as uncompressed XML. The size of each RRD when written to disk ranges from 200KiB to approximately 1.2MiB when the RRD stores the full year of statistics.

By default each RRD contains only averaged data to save storage space. To record minimum and maximum values in future RRDs, set the pool-wide flag:

xe pool-param-set uuid=<pool_uuid> other-config:create_min_max_in_new_VM_RRDs=true

Downloading

Statistics can be downloaded over HTTP in XML or JSON format, for example using wget. See rrddump and rrdxport for information about the XML format. The JSON format has the same structure as the XML. Parameters are appended to the URL following a question mark (?) and separated by ampersands (&). HTTP authentication can take the form of a username and password or a session token in a URL parameter.

Statistics may be downloaded all at once, including all history, or as deltas suitable for interactive graphing.

Downloading statistics all at once

To obtain a full dump of RRD data for a host use:

wget "http://hostname/host_rrd?session_id=OpaqueRef:43df3204-9360-c6ab-923e-41a8d19389ba"

where the session token has been fetched from the server using the API.

For example, using Python’s XenAPI library:

import XenAPI
username = "root"
password = "actual_password"
url = "http://hostname"
session = XenAPI.Session(url)
session.xenapi.login_with_password(username, password, "1.0", "session_getter")
token = session._session  # the opaque session reference to use as session_id

A URL parameter is used to decide which format to return: XML is returned by default, and adding the parameter json makes the server return JSON. Starting from xapi version 23.17.0, the server uses the HTTP Accept header to decide which format to return. When both formats are accepted (for example, with */*), JSON is returned. Note that the clients wget and curl send this accept header value, so with these versions their default behaviour changes and the Accept header must be overridden to make the server return XML. In these newer versions the content type is provided in the response's headers.
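
For example, a sketch in Python using only the standard library to fetch the JSON form explicitly via the json URL parameter (reusing url and token from the snippet above):

import urllib.request

# The presence of the json parameter selects JSON output regardless of version
rrd_url = url + "/host_rrd?session_id=" + token + "&json=true"
with urllib.request.urlopen(rrd_url) as response:
    print(response.read().decode())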

The XML RRD data is in the format used by rrdtool and looks like this:

<?xml version="1.0"?>
<rrd>
  <version>0003</version>
  <step>5</step>
  <lastupdate>1213616574</lastupdate>
  <ds>
    <name>memory_total_kib</name>
    <type>GAUGE</type>
    <minimal_heartbeat>300.0000</minimal_heartbeat>
    <min>0.0</min>
    <max>Infinity</max>
    <last_ds>2070172</last_ds>
    <value>9631315.6300</value>
    <unknown_sec>0</unknown_sec>
  </ds>
  <ds>
   <!-- other dss - the order of the data sources is important
        and defines the ordering of the columns in the archives below -->
  </ds>
  <rra>
    <cf>AVERAGE</cf>
    <pdp_per_row>1</pdp_per_row>
     <params>
      <xff>0.5000</xff>
    </params>
    <cdp_prep> <!-- This is for internal use -->
      <ds>
        <primary_value>0.0</primary_value>
        <secondary_value>0.0</secondary_value>
        <value>0.0</value>
        <unknown_datapoints>0</unknown_datapoints>
      </ds>
      ...other dss - internal use only...
    </cdp_prep>
    <database>
     <row>
        <v>2070172.0000</v>  <!-- columns correspond to the DSs defined above -->
        <v>1756408.0000</v>
        <v>0.0</v>
        <v>0.0</v>
        <v>732.2130</v>
        <v>0.0</v>
        <v>782.9186</v>
        <v>0.0</v>
        <v>647.0431</v>
        <v>0.0</v>
        <v>0.0001</v>
        <v>0.0268</v>
        <v>0.0100</v>
        <v>0.0</v>
        <v>615.1072</v>
     </row>
     ...
  </rra>
  ... other archives ...
</rrd>

To obtain a full dump of RRD data of a VM with uuid x:

wget "http://hostname/vm_rrd?session_id=<token>&uuid=x"

Note that it is quite expensive to download full RRDs as they contain lots of historical information. For interactive displays clients should download deltas instead.

Downloading deltas

To obtain an update of all VM statistics on a host, the URL would be of the form:

wget "https://hostname/rrd_updates?session_id=<token>&start=<secondsinceepoch>"

This request returns data in an rrdtool xport style XML format, for every VM resident on the particular host that is being queried. To differentiate which column in the export is associated with which VM, the legend field is prefixed with the UUID of the VM.

An example rrd_updates output:

<xport>
  <meta>
    <start>1213578000</start>
    <step>3600</step>
    <end>1213617600</end>
    <rows>12</rows>
    <columns>12</columns>
    <legend>
      <entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry> <!-- nb - each data source might have multiple entries for different consolidation functions -->
      <entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
      <entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
      <entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
      <entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
      <entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
      <entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
      <entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
      <entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
      <entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
      <entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
      <entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
    </legend>
  </meta>
  <data>
    <row>
      <t>1213617600</t>
      <v>0.0</v> <!-- once again, the order of the columns is defined by the legend above -->
      <v>0.0282</v>
      <v>209715200.0000</v>
      <v>0.0</v>
      <v>0.0201</v>
      <v>209715200.0000</v>
      <v>0.0</v>
      <v>0.0445</v>
      <v>209715200.0000</v>
      <v>0.0</v>
      <v>0.0243</v>
      <v>209715200.0000</v>
    </row>
   ...
  </data>
</xport>

To obtain host updates too, use the query parameter host=true:

wget "http://hostname/rrd_updates?session_id=<token>&start=<secondssinceepoch>&host=true"

The step will decrease as the period decreases, which means that if you request statistics for a shorter time period you will get more detailed statistics.

To download updates containing only the averages, or minimums or maximums, add the parameter cf=AVERAGE|MIN|MAX (note: case is important), e.g.

wget "http://hostname/rrd_updates?session_id=<token>&start=0&cf=MAX"

To request a different update interval, add the parameter interval=seconds e.g.

wget "http://hostname/rrd_updates?session_id=<token>&start=0&interval=5"

Snapshots

Snapshots represent the state of a VM or a disk (VDI) at a point in time. They can be used for:

  • backups (hourly, daily, weekly etc)
  • experiments (take snapshot, try something, revert back again)
  • golden images (install OS, get it just right, clone it 1000s of times)

Read more about Snapshots: the High-Level Feature.

Taking a VDI snapshot

To take a snapshot of a single disk (VDI):

snapshot_vdi <- VDI.snapshot(session_id, vdi, driver_params)

where vdi is the reference to the disk to be snapshotted, and driver_params is a list of string pairs providing optional backend implementation-specific hints. The snapshot operation should be quick (i.e. it should never be implemented as a slow disk copy) and the resulting VDI will have

Field name      Description
----------      -----------
is_a_snapshot   a flag, set to true, indicating the disk is a snapshot
snapshot_of     a reference to the disk the snapshot was created from
snapshot_time   the time the snapshot was taken

The resulting snapshot should be considered read-only. Depending on the backend implementation it may be technically possible to write to the snapshot, but clients must not do this. To create a writable disk from a snapshot, see “restoring from a snapshot” below.

Note that the storage backend is free to implement this in different ways. We do not assume the presence of a .vhd-formatted storage repository. Clients must never assume anything about the backend implementation without checking first with the maintainers of the backend implementation.
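
Using the Python bindings the call looks like this (vdi_ref is a reference to the disk to snapshot; the bindings pass the session_id implicitly):

# driver_params is empty here; backends may accept implementation-specific hints
snapshot_vdi = session.xenapi.VDI.snapshot(vdi_ref, {})

print(session.xenapi.VDI.get_is_a_snapshot(snapshot_vdi))  # True
print(session.xenapi.VDI.get_snapshot_time(snapshot_vdi))  # when it was taken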

Restoring to a VDI snapshot

To restore from a VDI snapshot, first call

new_vdi <- VDI.clone(session_id, snapshot_vdi, driver_params)

where snapshot_vdi is a reference to the snapshot VDI, and driver_params is a list of string pairs providing optional backend implementation-specific hints. The clone operation should be quick (i.e. it should never be implemented as a slow disk copy) and the resulting VDI will have

Field name      Description
----------      -----------
is_a_snapshot   a flag, set to false, indicating the disk is not a snapshot
snapshot_of     an invalid reference
snapshot_time   an invalid time

The resulting disk is writable and can be used by the client as normal.

Note that the “restored” VDI will have a different VDI.uuid and reference to the original VDI.

Taking a VM snapshot

A VM snapshot is a copy of the VM metadata and a snapshot of all the associated VDIs at around the same point in time. To take a VM snapshot:

snapshot_vm <- VM.snapshot(session_id, vm, new_name)

where vm is a reference to the existing VM and new_name will be the name_label of the resulting VM (snapshot) object. The resulting VM will have

Field name      Description
----------      -----------
is_a_snapshot   a flag, set to true, indicating the VM is a snapshot
snapshot_of     a reference to the VM the snapshot was created from
snapshot_time   the time the snapshot was taken

Note that each disk is snapshotted one-by-one and not at the same time.

Restoring to a VM snapshot

A VM can be reverted to a snapshot using

VM.revert(session_id, snapshot_ref)

where snapshot_ref is a reference to the snapshot VM. Each VDI currently associated with the VM will be destroyed, and each VDI associated with the snapshot will be cloned (see “Restoring to a VDI snapshot” above) and associated with the VM. The resulting VM will have

Field name      Description
----------      -----------
is_a_snapshot   a flag, set to false, indicating the VM is not a snapshot
snapshot_of     an invalid reference
snapshot_time   an invalid time

Note that the VM.uuid and reference are preserved, but the VDI.uuid and VDI references are not.
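
In the Python bindings, a snapshot-and-revert cycle looks like this (vm_ref as in the other examples; the snapshot name is arbitrary):

# Snapshot the VM metadata plus all of its disks
snapshot_vm = session.xenapi.VM.snapshot(vm_ref, "before-upgrade")

# ... make changes to the VM ...

# Roll back: the VM's VDIs are replaced by clones of the snapshot's VDIs
session.xenapi.VM.revert(snapshot_vm)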

Downloading a disk or snapshot

Disks can be downloaded in either raw or vhd format using an HTTP 1.0 GET request as follows:

GET /export_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=%s[&base=%s] HTTP/1.0\r\n
Connection: close\r\n
\r\n
\r\n

where

  • session_id is a currently logged-in session
  • task_id is a Task reference which will be used to monitor the progress of this task and receive errors from it
  • vdi is the reference of the VDI to be exported
  • format is either vhd or raw
  • (optional) base is the reference of a VDI which has already been exported; this export will then contain only the blocks which have changed since then.

Note that the vhd format allows the disk to be sparse i.e. only contain allocated blocks. This helps reduce the size of the download.

The xapi-project/xen-api repo has a Python download example.
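
For illustration, a minimal sketch of such a download using only the Python standard library (task polling and error handling omitted; hostname, token and vdi_ref are placeholders standing in for your own values):

import http.client

task_ref = session.xenapi.task.create("export", "exporting a VDI")

conn = http.client.HTTPConnection("hostname")
conn.request("GET", "/export_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=vhd"
             % (token, task_ref, vdi_ref))
response = conn.getresponse()
with open("disk.vhd", "wb") as out:
    while True:
        chunk = response.read(1 << 20)  # stream in 1 MiB pieces
        if not chunk:
            break
        out.write(chunk)
conn.close()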

Uploading a disk or snapshot

Disks can be uploaded in either raw or vhd format using an HTTP 1.0 PUT request as follows:

PUT /import_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=%s HTTP/1.0\r\n
Connection: close\r\n
\r\n
\r\n

where

  • session_id is a currently logged-in session
  • task_id is a Task reference which will be used to monitor the progress of this task and receive errors from it
  • vdi is the reference of the VDI into which the data will be imported
  • format is either vhd or raw

Note that you must create the disk (with the correct size) before importing data to it. The disk doesn't have to be empty; in fact, if restoring from a series of incremental downloads, it makes sense to upload them all to the same disk in order.
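
A corresponding upload sketch, again standard library only (token, task_ref and vdi_ref as in the download sketch above; the target VDI must already exist and be large enough):

import http.client

conn = http.client.HTTPConnection("hostname")
with open("disk.vhd", "rb") as disk:
    conn.request("PUT", "/import_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=vhd"
                 % (token, task_ref, vdi_ref), body=disk)
response = conn.getresponse()
print(response.status)
conn.close()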

Example: incremental backup with xe

This section will show how easy it is to build an incremental backup tool using these APIs. For simplicity we will use the xe commands rather than raw XMLRPC and HTTP.

For a VDI with uuid $VDI, take a snapshot:

FULL=$(xe vdi-snapshot uuid=$VDI)

Next perform a full backup into a file “full.vhd”, in vhd format:

xe vdi-export uuid=$FULL filename=full.vhd format=vhd  --progress

If the SR was using the vhd format internally (this is the default), then the full backup will be sparse and will only contain blocks that have been written to.

After some time has passed and the VDI has been written to, take another snapshot:

DELTA=$(xe vdi-snapshot uuid=$VDI)

Now we can backup only the disk blocks which have changed between the original snapshot $FULL and the next snapshot $DELTA into a file called “delta.vhd”:

xe vdi-export uuid=$DELTA filename=delta.vhd format=vhd base=$FULL --progress

We now have 2 files on the local system:

  • “full.vhd”: a complete backup of the first snapshot
  • “delta.vhd”: an incremental backup of the second snapshot, relative to the first

For example:

test $ ls -lh *.vhd
-rw------- 1 dscott xendev 213M Aug 15 10:39 delta.vhd
-rw------- 1 dscott xendev 8.0G Aug 15 10:39 full.vhd

To restore the original snapshot you must create an empty disk with the correct size. To find the size of a .vhd file use qemu-img as follows:

test $ qemu-img info delta.vhd
image: delta.vhd
file format: vpc
virtual size: 24G (25769705472 bytes)
disk size: 212M

Here the size is 25769705472 bytes. Create a fresh VDI in SR $SR to restore the backup as follows:

SIZE=25769705472
RESTORE=$(xe vdi-create name-label=restored virtual-size=$SIZE sr-uuid=$SR type=user)

then import “full.vhd” into it:

xe vdi-import uuid=$RESTORE filename=full.vhd format=vhd --progress

Once “full.vhd” has been imported, the incremental backup can be restored on top:

xe vdi-import uuid=$RESTORE filename=delta.vhd format=vhd --progress

Note there is no need to supply a “base” parameter when importing; Xapi will treat the “vhd differencing disk” as a set of blocks and import them. It is up to you to check you are importing them to the right place.

Now the VDI $RESTORE should have the same contents as $DELTA.

VM consoles

Most XenAPI graphical interfaces will want to gain access to the VM consoles, in order to render them to the user as if they were physical machines. There are several types of consoles available, depending on the type of guest and whether the physical host console is being accessed:

Types of consoles

Operating System   Text                               Graphical                Optimized graphical
----------------   ----                               ---------                -------------------
Windows            No                                 VNC, using an API call   RDP, directly from guest
Linux              Yes, through VNC and an API call   No                       VNC, directly from guest
Physical Host      Yes, through VNC and an API call   No                       No

Hardware-assisted VMs, such as Windows, directly provide a graphical console over VNC. There is no text-based console, and guest networking is not necessary to use the graphical console. Once guest networking has been established, it is more efficient to set up Remote Desktop Access and use an RDP client to connect directly (this must be done outside of the XenAPI).

Paravirtual VMs, such as Linux guests, provide a native text console directly. XenServer provides a utility (called vncterm) to convert this text-based console into a graphical VNC representation. Guest networking is not necessary for this console to function. As with Windows above, Linux distributions often configure VNC within the guest, in which case clients can connect to it directly over a guest network interface.

The physical host console is only available as a vt100 console, which is exposed through the XenAPI as a VNC console by using vncterm in the control domain.

RFB (Remote Framebuffer) is the protocol which underlies VNC, specified in The RFB Protocol. Third-party developers are expected to provide their own VNC viewers, and many freely available implementations can be adapted for this purpose. RFB 3.3 is the minimum version which viewers must support.

Retrieving VNC consoles using the API

VNC consoles are retrieved using a special URL passed through to the host agent. The sequence of API calls is as follows:

  1. Client to Master/443: XML-RPC: Session.login_with_password().

  2. Master/443 to Client: Returns a session reference to be used with subsequent calls.

  3. Client to Master/443: XML-RPC: VM.get_by_name_label().

  4. Master/443 to Client: Returns a reference to a particular VM (or the “control domain” if you want to retrieve the physical host console).

  5. Client to Master/443: XML-RPC: VM.get_consoles().

  6. Master/443 to Client: Returns a list of console objects associated with the VM.

  7. Client to Master/443: XML-RPC: console.get_location().

  8. Master/443 to Client: Returns a URI describing where the requested console is located. The URIs are of the form: https://192.168.0.1/console?ref=OpaqueRef:c038533a-af99-a0ff-9095-c1159f2dc6a0.

  9. Client to 192.168.0.1: HTTP CONNECT “/console?ref=(…)”

The final HTTP CONNECT is slightly non-standard, since the HTTP/1.1 RFC specifies that it should take only a host and a port rather than a URL. Once the HTTP connect is complete, the connection can be used directly as a VNC connection without any further HTTP protocol action.
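
The same sequence using the Python bindings (the VM name label is a hypothetical example):

vm_ref = session.xenapi.VM.get_by_name_label("myvm")[0]
for console_ref in session.xenapi.VM.get_consoles(vm_ref):
    # Pick the VNC console; the protocol field is "rfb" for VNC consoles
    if session.xenapi.console.get_protocol(console_ref) == "rfb":
        print(session.xenapi.console.get_location(console_ref))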

This scheme requires direct access from the client to the control domain’s IP, and will not work correctly if there are Network Address Translation (NAT) devices blocking such connectivity. You can use the CLI to retrieve the console URI from the client and perform a connectivity check.

Retrieve the VM UUID by running:

$ VM=$(xe vm-list params=uuid --minimal name-label=<name>)

Retrieve the console information:

$ xe console-list vm-uuid=$VM
uuid ( RO)             : 8013b937-ff7e-60d1-ecd8-e52d66c5879e
          vm-uuid ( RO): 2d7c558a-8f03-b1d0-e813-cbe7adfa534c
    vm-name-label ( RO): 6
         protocol ( RO): RFB
         location ( RO): https://10.80.228.30/console?uuid=8013b937-ff7e-60d1-ecd8-e52d66c5879e

Use command-line utilities like ping to test connectivity to the IP address provided in the location field.

Disabling VNC forwarding for Linux VM

When creating and destroying Linux VMs, the host agent automatically manages the vncterm processes which convert the text console into VNC. Advanced users who wish to directly access the text console can disable VNC forwarding for that VM. The text console can then only be accessed directly from the control domain, and graphical interfaces such as XenCenter will not be able to render a console for that VM.

Before starting the guest, set the following parameter on the VM record:

$ xe vm-param-set uuid=$VM other-config:disable_pv_vnc=1

Start the VM.

Use the CLI to retrieve the underlying domain ID of the VM with:

$ DOMID=$(xe vm-list params=dom-id uuid=$VM --minimal)

On the host console, connect to the text console directly by:

$ /usr/lib/xen/bin/xenconsole $DOMID

This is an advanced procedure, and we do not recommend using the text console directly for heavy I/O operations. Instead, connect to the guest over SSH or some other network-based connection mechanism.

VM import/export

VMs can be exported to a file and later imported to any Xapi host. The export protocol is a simple HTTP(S) GET, which should be sent to the Pool master. Authorization is either via a pre-created session_id or by HTTP basic authentication (particularly useful on the command-line). The VM to export is specified either by UUID or by reference. To keep track of the export, a task can be created and passed in using its reference. Note that Xapi may send an HTTP redirect if a different host has better access to the disk data.

The following arguments are passed as URI query parameters or HTTP cookies:

Argument          Description
--------          -----------
session_id        the reference of the session being used to authenticate; required only when not using HTTP basic authentication
task_id           the reference of the task object with which to keep track of the operation; optional, required only if you have created a task object to keep track of the export
ref               the reference of the VM; required only if not using the UUID
uuid              the UUID of the VM; required only if not using the reference
use_compression   an optional boolean "true" or "false" (defaulting to "false"); if "true", the output will be gzip-compressed before transmission

For example, using the Linux command line tool cURL:

$ curl http://root:foo@myxenserver1/export?uuid=<vm_uuid> -o <exportfile>

will export the specified VM to the file exportfile.

To export just the metadata, use the URI http://server/export_metadata.

The import protocol is similar, using HTTP(S) PUT. The session_id and task_id arguments are as for the export. The ref and uuid are not used; a new reference and uuid will be generated for the VM. There are some additional parameters:

Argument   Description
--------   -----------
restore    if true, the import is treated as replacing the original VM: the MAC addresses on the VIFs are kept exactly as they were in the export, which will lead to conflicts if the original VM is still running
force      if true, any checksum failures will be ignored (the default is to destroy the VM if a checksum error is detected)
sr_id      the reference of an SR into which the VM should be imported; the default behaviour is to import into the Pool.default_SR

Note there is no need to specify whether the export is compressed, as Xapi will automatically detect and decompress gzip-encoded streams.

For example, again using cURL:

curl -T <exportfile> http://root:foo@myxenserver2/import

will import the VM to the default SR on the server.

Note

Note that if no default SR has been set and no sr_id is specified, the error message DEFAULT_SR_NOT_FOUND is returned.

Another example:

curl -T <exportfile> http://root:foo@myxenserver2/import?sr_id=<ref_of_sr>

will import the VM to the specified SR on the server.

To import just the metadata, use the URI http://server/import_metadata

Legacy VM Import Format

This section describes the legacy VM import/export format and is kept for historical interest only. It should be updated to describe the current format; see issue 64.

Xapi supports a human-readable legacy VM input format called XVA. This section describes the syntax and structure of XVA.

An XVA consists of a directory containing XML metadata and a set of disk images. A VM represented by an XVA is not intended to be directly executable. Data within an XVA package is compressed and intended for either archiving on permanent storage or for being transmitted to a VM server - such as a XenServer host - where it can be decompressed and executed.

XVA is a hypervisor-neutral packaging format; it should be possible to create simple tools to instantiate an XVA VM on any other platform. XVA does not specify any particular runtime format; for example disks may be instantiated as file images, LVM volumes, QCoW images, VMDK or VHD images. An XVA VM may be instantiated any number of times, each instantiation may have a different runtime format.

XVA does not:

  • specify any particular serialization or transport format

  • provide any mechanism for customizing VMs (or templates) on install

  • address how a VM may be upgraded post-install

  • define how multiple VMs, acting as an appliance, may communicate

These issues are all addressed by the related Open Virtual Appliance specification.

An XVA is a directory containing, at a minimum, a file called ova.xml. This file describes the VM contained within the XVA and is described in Section 3.2. Disks are stored within sub-directories and are referenced from the ova.xml. The format of disk data is described later in Section 3.3.

The following terms will be used in the rest of the chapter:

  • HVM: a mode in which unmodified OS kernels run with the help of virtualization support in the hardware.

  • PV: a mode in which specially modified “paravirtualized” kernels run explicitly on top of a hypervisor without requiring hardware support for virtualization.

The “ova.xml” file contains the following elements:

<appliance version="0.1">

The number in the attribute “version” indicates the version of this specification to which the XVA is constructed; in this case version 0.1. Inside the <appliance> there is exactly one <vm>: (in the OVA specification, multiple <vm>s are permitted)

<vm name="name">

Each <vm> element describes one VM. The “name” attribute is for future internal use only and must be unique within the ova.xml file. The “name” attribute is permitted to be any valid UTF-8 string. Inside each <vm> tag are the following compulsory elements:

<label>... text ... </label>

A short name for the VM to be displayed in a UI.

<shortdesc> ... description ... </shortdesc>

A description for the VM to be displayed in the UI. Note that for both <label> and <shortdesc> contents, leading and trailing whitespace will be ignored.

<config mem_set="268435456" vcpus="1"/>

The <config> element has attributes which describe the amount of memory in bytes (mem_set) and the number of vCPUs (vcpus) the VM should have.

Each <vm> has zero or more <vbd> elements representing block devices which look like the following:

<vbd device="sda" function="root" mode="w" vdi="vdi_sda"/>

The attributes have the following meanings:

  • device: name of the physical device to expose to the VM. For Linux guests we use “sd[a-z]” and for Windows guests we use “hd[a-d]”.
  • function: if marked as “root”, this disk will be used to boot the guest. (NB this does not imply the existence of the Linux root i.e. / filesystem) Only one device should be marked as “root”. See Section 3.4 describing VM booting. Any other string is ignored.
  • mode: either “w” or “ro” if the device is to be read/write or read-only
  • vdi: the name of the disk image (represented by a <vdi> element) to which this block device is connected

Each <vm> may have an optional <hacks> section like the following:

<hacks is_hvm="false" kernel_boot_cmdline="root=/dev/sda1 ro"/>

The <hacks> element will be removed in future. The attribute is_hvm is either true or false, depending on whether the VM should be booted in HVM or not. The kernel_boot_cmdline contains additional kernel commandline arguments when booting a guest using pygrub.

In addition to a <vm> element, the <appliance> will contain zero or more <vdi> elements like the following:

<vdi name="vdi_sda" size="5368709120" source="file://sda" type="dir-gzipped-chunks">

Each <vdi> corresponds to a disk image. The attributes have the following meanings:

  • name: name of the VDI, referenced by the vdi attribute of <vbd> elements. Any valid UTF-8 string is permitted.
  • size: size of the required image in bytes
  • source: a URI describing where to find the data for the image, only file:// URIs are currently permitted and must describe paths relative to the directory containing the ova.xml
  • type: describes the format of the disk data

A single disk image encoding is specified, which has type “dir-gzipped-chunks”: each image is represented by a directory containing a sequence of files as follows:

-rw-r--r-- 1 dscott xendev 458286013    Sep 18 09:51 chunk000000000.gz
-rw-r--r-- 1 dscott xendev 422271283    Sep 18 09:52 chunk000000001.gz
-rw-r--r-- 1 dscott xendev 395914244    Sep 18 09:53 chunk000000002.gz
-rw-r--r-- 1 dscott xendev 9452401      Sep 18 09:53 chunk000000003.gz
-rw-r--r-- 1 dscott xendev 1096066      Sep 18 09:53 chunk000000004.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000005.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000006.gz
-rw-r--r-- 1 dscott xendev 971976       Sep 18 09:53 chunk000000007.gz
-rw-r--r-- 1 dscott xendev 573930       Sep 18 09:53 chunk000000008.gz

Each file (named “chunkXXXXXXXXX.gz”) is a gzipped file containing exactly 1e9 bytes (1GB, not 1GiB) of raw block data. The small size was chosen to be safely under the maximum file size limits of several filesystems. If the files are gunzipped and then concatenated together, the original image is recovered.
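
A sketch of recovering the image with Python's standard library, assuming the chunk files are in the current directory:

import glob
import gzip
import shutil

# Decompress each chunk in order and concatenate into a single raw image
with open("disk.img", "wb") as out:
    for name in sorted(glob.glob("chunk*.gz")):
        with gzip.open(name, "rb") as chunk:
            shutil.copyfileobj(chunk, out)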

Because the import and export of VMs can take some time to complete, an asynchronous HTTP interface to the import and export operations is provided. To perform an export using the XenServer API, construct an HTTP GET call providing a valid session ID, task ID and VM UUID, as shown in the following pseudo code:

task = Task.create()
result = HTTP.get(
  server, 80, "/export?session_id=&task_id=&ref=");

For the import operation, use an HTTP PUT call as demonstrated in the following pseudo code:

task = Task.create()
result = HTTP.put(
  server, 80, "/import?session_id=&task_id=&ref=");

VM Lifecycle

The following figure shows the states that a VM can be in and the API calls that can be used to move the VM between these states.

graph
    halted -- start(paused) --> paused
    halted -- start(not paused) --> running
    running -- suspend --> suspended
    suspended -- resume(not paused) --> running
    suspended -- resume(paused) --> paused
    suspended -- hard shutdown --> halted
    paused -- unpause --> running
    paused -- hard shutdown --> halted
    running -- clean shutdown / hard shutdown --> halted
    running -- pause --> paused
    halted -- destroy --> destroyed

VM boot parameters

The VM class contains a number of fields that control the way in which the VM is booted. With reference to the fields defined in the VM class (see later in this document), this section outlines the boot options available and the mechanisms provided for controlling them.

VM booting is controlled by setting one of the two mutually exclusive groups: “PV” and “HVM”. If HVM_boot_policy is an empty string, then paravirtual domain building and booting will be used; otherwise the VM will be loaded as a HVM domain, and booted using an emulated BIOS.

When paravirtual booting is in use, the PV_bootloader field indicates the bootloader to use. It may be “pygrub”, in which case the platform’s default installation of pygrub will be used, or a full path within the control domain to some other bootloader. The other fields, PV_kernel, PV_ramdisk, PV_args, and PV_bootloader_args will be passed to the bootloader unmodified, and interpretation of those fields is then specific to the bootloader itself, including the possibility that the bootloader will ignore some or all of those given values. Finally the paths of all bootable disks are added to the bootloader commandline (a disk is bootable if its VBD has the bootable flag set). There may be zero, one, or many bootable disks; the bootloader decides which disk (if any) to boot from.

If the bootloader is pygrub, then menu.lst, if present in the guest's filesystem, is parsed; otherwise the specified kernel and ramdisk are used, or an autodetected kernel is used if nothing is specified and autodetection is possible. PV_args is appended to the kernel command line, no matter which mechanism is used for finding the kernel.

If PV_bootloader is empty but PV_kernel is specified, then the kernel and ramdisk values will be treated as paths within the control domain. If both PV_bootloader and PV_kernel are empty, then the behaviour is as if PV_bootloader were specified as “pygrub”.

When using HVM booting, HVM_boot_policy and HVM_boot_params specify the boot handling. Only one policy is currently defined, “BIOS order”. In this case, HVM_boot_params should contain one key-value pair “order” = “N” where N is the string that will be passed to QEMU. Optionally HVM_boot_params can contain another key-value pair “firmware” with values “bios” or “uefi” (the default is “bios” if absent). By default Secure Boot is not enabled; it can be enabled, when “uefi” firmware is selected, by setting VM.platform["secureboot"] to true.
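
For example, configuring a VM for HVM boot with UEFI firmware and Secure Boot via the Python bindings (setter names follow the field names; vm_ref as in the other examples):

session.xenapi.VM.set_HVM_boot_policy(vm_ref, "BIOS order")
session.xenapi.VM.set_HVM_boot_params(vm_ref, {"order": "cd", "firmware": "uefi"})

# Secure Boot is requested through the platform map
platform = session.xenapi.VM.get_platform(vm_ref)
platform["secureboot"] = "true"
session.xenapi.VM.set_platform(vm_ref, platform)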

XenCenter

XenCenter uses some conventions on top of the XenAPI:

Internationalization for SR names

The SRs created at install time now have an other_config key indicating how their names may be internationalized.

other_config["i18n-key"] may be one of

  • local-hotplug-cd

  • local-hotplug-disk

  • local-storage

  • xenserver-tools

Additionally, other_config["i18n-original-value-<field name>"] gives the value of that field when the SR was created. If XenCenter sees a record where SR.name_label equals other_config["i18n-original-value-name_label"] (that is, the record has not changed since it was created during XenServer installation), then internationalization will be applied. In other words, XenCenter will disregard the current contents of that field, and instead use a value appropriate to the user's own language.

If you change SR.name_label for your own purposes, then it will no longer equal other_config["i18n-original-value-name_label"]. Therefore, XenCenter does not apply internationalization, and instead preserves your given name.

Hiding objects from XenCenter

Networks, PIFs, and VMs can be hidden from XenCenter by adding the key HideFromXenCenter=true to the other_config parameter for the object. This capability is intended for ISVs who know what they are doing, not for general use by everyday users. For example, you might want to hide certain VMs because they are cloned VMs that shouldn't be used directly by general users in your environment.

In XenCenter, hidden Networks, PIFs, and VMs can be made visible using the View menu.
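
For example, hiding a VM using the Python bindings (vm_ref as in the other examples; note that the value is the string "true"):

other_config = session.xenapi.VM.get_other_config(vm_ref)
other_config["HideFromXenCenter"] = "true"
session.xenapi.VM.set_other_config(vm_ref, other_config)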