XenAPI Basics
This document contains a description of the Xen Management API - an interface for remotely configuring and controlling virtualised guests running on a Xen-enabled host.
The API is presented here as a set of Remote Procedure Calls (RPCs). There are two supported wire formats, one based upon XML-RPC and one based upon JSON-RPC (v1.0 and v2.0 are both recognized). No specific language bindings are prescribed, although examples are given in the Python programming language.
Although we adopt some terminology from object-oriented programming, future client language bindings may or may not be object oriented. The API reference uses the terminology classes and objects. For our purposes a class is simply a hierarchical namespace; an object is an instance of a class with its fields set to specific values. Objects are persistent and exist on the server-side. Clients may obtain opaque references to these server-side objects and then access their fields via get/set RPCs.
For each class we specify a list of fields along with their types and qualifiers. A qualifier is one of:
- RO/runtime: the field is Read Only. Furthermore, its value is automatically computed at runtime. For example, current CPU load and disk IO throughput.
- RO/constructor: the field must be manually set when a new object is created, but is then Read Only for the duration of the object’s life. For example, the maximum memory addressable by a guest is set before the guest boots.
- RW: the field is Read/Write. For example, the name of a VM.
Types
The following types are used to specify methods and fields in the API Reference:
`string`
: Text strings.

`int`
: 64-bit integers.

`float`
: IEEE double-precision floating-point numbers.

`bool`
: Boolean.

`datetime`
: Date and timestamp.

`c ref`
: Reference to an object of class `c`.

`t set`
: Arbitrary-length set of values of type `t`.

`(k -> v) map`
: Mapping from values of type `k` to values of type `v`.

`e enum`
: Enumeration type with name `e`. Enums are defined in the API reference together with the classes that use them.
Note that there are a number of cases where `ref`s are doubly linked. For example, a `VM` has a field called `VIFs` of type `VIF ref set`; this field lists the network interfaces attached to a particular VM. Similarly, the `VIF` class has a field called `VM` of type `VM ref` which references the VM to which the interface is connected. These two fields are bound together, in the sense that creating a new VIF causes the `VIFs` field of the corresponding VM object to be updated automatically.
The API reference lists explicitly the fields that are bound together in this way. It also contains a diagram that shows relationships between classes. In this diagram an edge signifies the existence of a pair of fields that are bound together, using standard crows-foot notation to signify the type of relationship (e.g. one-many, many-many).
RPCs associated with fields
Each field, `f`, has an RPC accessor associated with it that returns `f`’s value:

`get_f(r)`
: takes a `ref`, `r`, that refers to an object and returns the value of `f`.
Each field, `f`, with qualifier RW and whose outermost type is `set` has the following additional RPCs associated with it:

`add_f(r, v)`
: adds a new element `v` to the set. Note that sets cannot contain duplicate values, so this operation has no effect if `v` is already in the set.

`remove_f(r, v)`
: removes element `v` from the set.
Each field, `f`, with qualifier RW and whose outermost type is `map` has the following additional RPCs associated with it:

`add_to_f(r, k, v)`
: adds a new pair `k -> v` to the mapping stored in `f` in object `r`. Attempting to add a pair for a duplicate key, `k`, fails with a `MAP_DUPLICATE_KEY` error.

`remove_from_f(r, k)`
: removes the pair with key `k` from the mapping stored in `f` in object `r`.
Each field whose outermost type is neither `set` nor `map`, but whose qualifier is RW, has an RPC accessor associated with it that sets its value:

`set_f(r, v)`
: sets the field `f` on object `r` to value `v`.
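The set and map semantics above can be sketched with a minimal in-memory model. This is illustrative only: the function and class names here are hypothetical stand-ins for the server-side behaviour, not part of the API.

```python
class MapDuplicateKeyError(Exception):
    """Mirrors the MAP_DUPLICATE_KEY error returned by the server."""

def add_to_field(obj, field, key, value):
    """add_to_f(r, k, v): add a new pair; a duplicate key is an error."""
    m = obj.setdefault(field, {})
    if key in m:
        raise MapDuplicateKeyError(field, key)
    m[key] = value

def add_field(obj, field, value):
    """add_f(r, v): sets cannot contain duplicates, so re-adding is a no-op."""
    obj.setdefault(field, set()).add(value)

# Hypothetical VM object with a set-valued and a map-valued field.
vm = {}
add_field(vm, "tags", "production")
add_field(vm, "tags", "production")            # no effect: already present
add_to_field(vm, "other_config", "disks", "<provision/>")
```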
RPCs associated with classes
- Most classes have a constructor RPC named `create` that takes as parameters all fields marked RW and RO/constructor. The result of this RPC is that a new persistent object is created on the server-side with the specified field values.
- Each class has a `get_by_uuid(uuid)` RPC that returns the object of that class that has the specified `uuid`.
- Each class that has a `name_label` field has a `get_by_name_label(name_label)` RPC that returns a set of objects of that class that have the specified `name_label`.
- Most classes have a `destroy(r)` RPC that explicitly deletes the persistent object specified by `r` from the system. This is a non-cascading delete: if the object being removed is referenced by another object then the `destroy` call will fail.
Apart from the RPCs enumerated above, most classes have additional RPCs
associated with them. For example, the VM
class has RPCs for cloning,
suspending, starting etc. Such additional RPCs are described explicitly
in the API reference.
Wire Protocol
API calls are sent over a network to a Xen-enabled host using an RPC protocol. Here we describe how the higher-level types used in our API Reference are mapped to primitive RPC types, covering the two supported wire formats XML-RPC and JSON-RPC.
XML-RPC Protocol
We specify the signatures of API functions in the following style:
(VM ref set) VM.get_all()
This specifies that the function with name `VM.get_all` takes no parameters and returns a `set` of `VM ref`.
These types are mapped onto XML-RPC types in a straightforward manner:

- the types `float`, `bool`, `datetime`, and `string` map directly to the XML-RPC `<double>`, `<boolean>`, `<dateTime.iso8601>`, and `<string>` elements.
- all `ref` types are opaque references, encoded as the XML-RPC `<string>` type. Users of the API should not make assumptions about the concrete form of these strings and should not expect them to remain valid after the client’s session with the server has terminated.
- fields named `uuid` of type `string` are mapped to the XML-RPC `<string>` type. The string itself is the OSF DCE UUID presentation format (as output by `uuidgen`).
- `int` is assumed to be 64-bit in our API and is encoded as a string of decimal digits (rather than using XML-RPC’s built-in 32-bit `<i4>` type).
- values of `enum` types are encoded as strings. For example, the value `destroy` of `enum on_normal_exit` would be conveyed as:
<value><string>destroy</string></value>
- for all our types, `t`, our type `t set` simply maps to XML-RPC’s `<array>` type, so, for example, a value of type `string set` would be transmitted like this:
<array>
<data>
<value><string>CX8</string></value>
<value><string>PSE36</string></value>
<value><string>FPU</string></value>
</data>
</array>
- for types `k` and `v`, our type `(k -> v) map` maps onto an XML-RPC `<struct>`, with the key as the name of each struct member. Note that the `(k -> v) map` type is only valid when `k` is a `string`, `ref`, or `int`, and in each case the keys of the maps are stringified as above. For example, the `(string -> float) map` containing the mappings Mike -> 2.3 and John -> 1.2 would be represented as:
<value>
<struct>
<member>
<name>Mike</name>
<value><double>2.3</double></value>
</member>
<member>
<name>John</name>
<value><double>1.2</double></value>
</member>
</struct>
</value>
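As a quick sanity check, Python’s standard `xmlrpc.client` library produces this same encoding when a `(string -> float)` map is passed as a call parameter (the library wraps it in a `<params>` envelope):

```python
import xmlrpc.client

# Serialize a single map parameter; the map itself becomes the <struct>
# shown above, with each key as a member <name>.
wire = xmlrpc.client.dumps(({"Mike": 2.3, "John": 1.2},))
```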
- our `void` type is transmitted as an empty string.
XML-RPC Return Values and Status Codes
The return value of an RPC call is an XML-RPC `<struct>`.
- The first element of the struct is named `Status`; it contains a string value indicating whether the result of the call was a `Success` or a `Failure`.

If the `Status` is `Success` then the struct contains a second element named `Value`:

- The element of the struct named `Value` contains the function’s return value.

If the `Status` is `Failure` then the struct contains a second element named `ErrorDescription`:

- The element of the struct named `ErrorDescription` contains an array of string values. The first element of the array is an error code; the rest of the elements are strings representing error parameters relating to that code.
For example, an XML-RPC return value from the `host.get_resident_VMs` function may look like this:
<struct>
<member>
<name>Status</name>
<value>Success</value>
</member>
<member>
<name>Value</name>
<value>
<array>
<data>
<value>81547a35-205c-a551-c577-00b982c5fe00</value>
<value>61c85a22-05da-b8a2-2e55-06b0847da503</value>
<value>1d401ec4-3c17-35a6-fc79-cee6bd9811fe</value>
</data>
</array>
</value>
</member>
</struct>
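Once the XML-RPC struct is deserialized into a dictionary, the `Status`/`Value`/`ErrorDescription` convention lends itself to a small unwrapping helper. This is a sketch; `XenApiFailure` and `unwrap` are illustrative names, not part of the API:

```python
class XenApiFailure(Exception):
    """Raised when Status is Failure; args hold the error code and parameters."""

def unwrap(response):
    """Return the Value of a successful call, or raise on Failure."""
    if response.get("Status") == "Success":
        return response["Value"]
    raise XenApiFailure(*response.get("ErrorDescription", ["UNKNOWN"]))

resident_vms = unwrap({
    "Status": "Success",
    "Value": ["81547a35-205c-a551-c577-00b982c5fe00"],
})
```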
JSON-RPC Protocol
We specify the signatures of API functions in the following style:
(VM ref set) VM.get_all()
This specifies that the function with name `VM.get_all` takes no parameters and returns a `set` of `VM ref`. These types are mapped onto JSON-RPC types in the following manner:
- the types `float` and `bool` map directly to the JSON types `number` and `boolean`, while `datetime` and `string` are represented as the JSON `string` type.
- all `ref` types are opaque references, encoded as the JSON `string` type. Users of the API should not make assumptions about the concrete form of these strings and should not expect them to remain valid after the client’s session with the server has terminated.
- fields named `uuid` of type `string` are mapped to the JSON `string` type. The string itself is the OSF DCE UUID presentation format (as output by `uuidgen`).
- `int` is assumed to be 64-bit in our API and is encoded as a JSON `number` without decimal point or exponent, preserved as a string.
- values of `enum` types are encoded as the JSON `string` type. For example, the value `destroy` of `enum on_normal_exit` would be conveyed as:
"destroy"
- for all our types, `t`, our type `t set` simply maps to the JSON `array` type, so, for example, a value of type `string set` would be transmitted like this:
[ "CX8", "PSE36", "FPU" ]
- for types `k` and `v`, our type `(k -> v) map` maps onto a JSON object containing members with name `k` and value `v`. Note that the `(k -> v) map` type is only valid when `k` is a `string`, `ref`, or `int`, and in each case the keys of the maps are stringified as above. For example, the `(string -> float) map` containing the mappings Mike -> 2.3 and John -> 1.2 would be represented as:
{
"Mike": 2.3,
"John": 1.2
}
- our `void` type is transmitted as an empty string.
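The JSON side of this mapping can be checked with Python’s standard `json` module, which also illustrates how non-string keys (valid for `int` and `ref` keys) are stringified on the wire:

```python
import json

# A (string -> float) map becomes a plain JSON object.
wire = json.dumps({"Mike": 2.3, "John": 1.2}, sort_keys=True)

# Non-string keys are stringified, matching the map-key rule above.
int_keyed = json.dumps({64: "bit"})
```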
Both versions 1.0 and 2.0 of the JSON-RPC wire format are recognised and, depending on your client library, you can use either of them.
JSON-RPC v1.0
JSON-RPC v1.0 Requests
An API call is represented by sending a single JSON object to the server, which contains the members `method`, `params`, and `id`.
`method`
: A JSON `string` containing the name of the function to be invoked.

`params`
: A JSON `array` of values, which represents the parameters of the function to be invoked.

`id`
: A JSON `string` or `integer` representing the call id. Note that, diverging from the JSON-RPC v1.0 specification, the API does not accept notification requests (requests without responses), i.e. the id cannot be `null`.
For example, the body of a JSON-RPC v1.0 request to retrieve the resident VMs of a host may look like this:
{
"method": "host.get_resident_VMs",
"params": [
"OpaqueRef:74f1a19cd-b660-41e3-a163-10f03e0eae67",
"OpaqueRef:08c34fc9-f418-4f09-8274-b9cb25cd8550"
],
"id": "xyz"
}
In the above example, the first element of the params
array is the reference
of the open session to the host, while the second is the host reference.
JSON-RPC v1.0 Return Values
The return value of a JSON-RPC v1.0 call is a single JSON object containing the members `result`, `error`, and `id`.
`result`
: If the call is successful, it is a JSON value (`string`, `array`, etc.) representing the return value of the invoked function. If an error has occurred, it is `null`.

`error`
: If the call is successful, it is `null`. If the call has failed, it is a JSON `array` of `string` values. The first element of the array is an error code; the remainder of the array are strings representing error parameters relating to that code.

`id`
: The call id. It is a JSON `string` or `integer` and it is the same id as in the request it is responding to.
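A v1.0 response can therefore be unpacked by checking which of `result` and `error` is non-null. The helper below is a sketch; `XenApiError` is an illustrative name, not part of the API:

```python
class XenApiError(Exception):
    """args[0] is the API error code; the rest are its parameters."""

def parse_v1_response(body):
    """Return the result of a successful v1.0 call, or raise on error."""
    if body["error"] is not None:
        raise XenApiError(*body["error"])
    return body["result"]

refs = parse_v1_response({
    "result": ["OpaqueRef:604f51e7-630f-4412-83fa-b11c6cf008ab"],
    "error": None,
    "id": "xyz",
})
```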
For example, a JSON-RPC v1.0 return value from the `host.get_resident_VMs` function may look like this:
{
"result": [
"OpaqueRef:604f51e7-630f-4412-83fa-b11c6cf008ab",
"OpaqueRef:670d08f5-cbeb-4336-8420-ccd56390a65f"
],
"error": null,
"id": "xyz"
}
while the return value of the same call made on a logged out session may look like this:
{
"result": null,
"error": [
"SESSION_INVALID",
"OpaqueRef:93f1a23cd-a640-41e3-b163-10f86e0eae67"
],
"id": "xyz"
}
JSON-RPC v2.0
JSON-RPC v2.0 Requests
An API call is represented by sending a single JSON object to the server, which contains the members `jsonrpc`, `method`, `params`, and `id`.
`jsonrpc`
: A JSON `string` specifying the version of the JSON-RPC protocol. It is exactly “2.0”.

`method`
: A JSON `string` containing the name of the function to be invoked.

`params`
: A JSON `array` of values, which represents the parameters of the function to be invoked. Although the JSON-RPC v2.0 specification allows this member to be omitted, in practice all API calls accept at least one parameter.

`id`
: A JSON `string` or `integer` representing the call id. Note that, diverging from the JSON-RPC v2.0 specification, it cannot be `null`. Neither can it be omitted, because the API does not accept notification requests (requests without responses).
For example, the body of a JSON-RPC v2.0 request to retrieve the VMs resident on a host may look like this:
{
"jsonrpc": "2.0",
"method": "host.get_resident_VMs",
"params": [
"OpaqueRef:c90cd28f-37ec-4dbf-88e6-f697ccb28b39",
"OpaqueRef:08c34fc9-f418-4f09-8274-b9cb25cd8550"
],
"id": 3
}
As before, the first element of the `params` array is the reference of the open session to the host, while the second is the host reference.
JSON-RPC v2.0 Return Values
The return value of a JSON-RPC v2.0 call is a single JSON object containing the members `jsonrpc`, either `result` or `error` depending on the outcome of the call, and `id`.
`jsonrpc`
: A JSON `string` specifying the version of the JSON-RPC protocol. It is exactly “2.0”.

`result`
: If the call is successful, it is a JSON value (`string`, `array`, etc.) representing the return value of the invoked function. If an error has occurred, it does not exist.

`error`
: If the call is successful, it does not exist. If the call has failed, it is a single structured JSON object (see below).

`id`
: The call id. It is a JSON `string` or `integer` and it is the same id as in the request it is responding to.
The `error` object contains the members `code`, `message`, and `data`.
`code`
: The API does not make use of this member and only retains it for compliance with the JSON-RPC v2.0 specification. It is a JSON `integer` with a non-zero value.

`message`
: A JSON `string` representing an API error code.

`data`
: A JSON `array` of `string` values representing error parameters relating to the aforementioned API error code.
For example, a JSON-RPC v2.0 return value from the host.get_resident_VMs
function may look like this:
{
"jsonrpc": "2.0",
"result": [
"OpaqueRef:604f51e7-630f-4412-83fa-b11c6cf008ab",
"OpaqueRef:670d08f5-cbeb-4336-8420-ccd56390a65f"
],
"id": 3
}
while the return value of the same call made on a logged out session may look like this:
{
"jsonrpc": "2.0",
"error": {
"code": 1,
"message": "SESSION_INVALID",
"data": [
"OpaqueRef:c90cd28f-37ec-4dbf-88e6-f697ccb28b39"
]
},
"id": 3
}
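Since the API error code lives in `message` and its parameters in `data`, a failed v2.0 response can be reduced to a `(code, params)` pair. This is a sketch; the helper name is illustrative:

```python
def api_error(response):
    """Return (error_code, params) from a failed v2.0 response's error object."""
    err = response["error"]
    return err["message"], err.get("data", [])

code, params = api_error({
    "jsonrpc": "2.0",
    "error": {
        "code": 1,
        "message": "SESSION_INVALID",
        "data": ["OpaqueRef:c90cd28f-37ec-4dbf-88e6-f697ccb28b39"],
    },
    "id": 3,
})
```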
Errors
When a low-level transport error occurs, or a request is malformed at the HTTP or RPC level, the server may send an HTTP 500 error response, or the client may simulate the same. The client must be prepared to handle these errors, though they may be treated as fatal.
For example, the following malformed request when using the XML-RPC protocol:
$ curl -D - -X POST https://server -H 'Content-Type: application/xml' \
-d '<?xml version="1.0"?>
<methodCall>
<methodName>session.logout</methodName>
</methodCall>'
results in the following response:
HTTP/1.1 500 Internal Error
content-length: 297
content-type:text/html
connection:close
cache-control:no-cache, no-store
<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred;
please wait a while and try again. If the problem persists, please contact your
support representative.<h1> Additional information </h1>Xmlrpc.Parse_error("close_tag", "open_tag", _)</body></html>
When using the JSON-RPC protocol:
$ curl -D - -X POST https://server/jsonrpc -H 'Content-Type: application/json' \
-d '{
"jsonrpc": "2.0",
"method": "session.login_with_password",
"id": 0
}'
the response is:
HTTP/1.1 500 Internal Error
content-length: 308
content-type:text/html
connection:close
cache-control:no-cache, no-store
<html><body><h1>HTTP 500 internal server error</h1>An unexpected error occurred;
please wait a while and try again. If the problem persists, please contact your
support representative.<h1> Additional information </h1>Jsonrpc.Malformed_method_request("{jsonrpc=...,method=...,id=...}")</body></html>
All other failures are reported with a more structured error response, to allow better automatic response to failures, proper internationalization of any error message, and easier debugging.
On the wire, these are transmitted like this when using the XML-RPC protocol:
<struct>
<member>
<name>Status</name>
<value>Failure</value>
</member>
<member>
<name>ErrorDescription</name>
<value>
<array>
<data>
<value>MAP_DUPLICATE_KEY</value>
<value>Customer</value>
<value>eSpiel Inc.</value>
<value>eSpiel Incorporated</value>
</data>
</array>
</value>
</member>
</struct>
Note that the `ErrorDescription` value is an array of string values. The
first element of the array is an error code; the remainder of the array are
strings representing error parameters relating to that code. In this case,
the client has attempted to add the mapping Customer ->
eSpiel Incorporated to a Map, but it already contains the mapping
Customer -> eSpiel Inc., hence the request has failed.
When using the JSON-RPC protocol v2.0, the above error is transmitted as:
{
"jsonrpc": "2.0",
"error": {
"code": 1,
"message": "MAP_DUPLICATE_KEY",
"data": [
"Customer",
"eSpiel Inc.",
"eSpiel Incorporated"
]
},
"id": 3
}
Finally, when using the JSON-RPC protocol v1.0:
{
"result": null,
"error": [
"MAP_DUPLICATE_KEY",
"Customer",
"eSpiel Inc.",
"eSpiel Incorporated"
],
"id": "xyz"
}
Each possible error code is documented in the last section of the API reference.
Note on References vs UUIDs
References are opaque types - encoded as XML-RPC and JSON-RPC strings on the wire - understood only by the particular server which generated them. Servers are free to choose any concrete representation they find convenient; clients should not make any assumptions or attempt to parse the string contents. References are not guaranteed to be permanent identifiers for objects; clients should not assume that references generated during one session are valid for any future session. References do not allow objects to be compared for equality. Two references to the same object are not guaranteed to be textually identical.
UUIDs are intended to be permanent identifiers for objects. They are guaranteed to be in the OSF DCE UUID presentation format (as output by `uuidgen`).
Clients may store UUIDs on disk and use them to look up objects in subsequent sessions
with the server. Clients may also test equality on objects by comparing UUID strings.
The API provides mechanisms for translating between UUIDs and opaque references. Each class that contains a UUID field provides:

- A `get_by_uuid` method that takes a UUID and returns an opaque reference to the server-side object that has that UUID;
- A `get_uuid` function (a regular “field getter” RPC) that takes an opaque reference and returns the UUID of the server-side object that is referenced by it.
Making RPC Calls
Transport Layer
The following transport layers are currently supported:
- HTTP/HTTPS for remote administration
- HTTP over Unix domain sockets for local administration
Session Layer
The RPC interface is session-based; before you can make arbitrary RPC calls you must log in and initiate a session. For example:
(session ref) session.login_with_password(string uname, string pwd,
string version, string originator)
where `uname` and `pwd` refer to your username and password, as defined by the Xen administrator, while `version` and `originator` are optional. The `session ref` returned by `session.login_with_password` is passed to subsequent RPC calls as an authentication token. Note that a session reference obtained by a login request to the XML-RPC backend can be used in subsequent requests to the JSON-RPC backend, and vice-versa.
A session can be terminated with the `session.logout` function:
void session.logout(session ref session_id)
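The login/logout pairing above maps naturally onto a context manager, so the session is cleaned up even if an intermediate call raises. This is a sketch; the stub proxy, and the version and originator values passed to the login call, are illustrative:

```python
from contextlib import contextmanager

@contextmanager
def xenapi_session(proxy, uname, pwd):
    # login_with_password and logout follow the signatures shown above;
    # "1.0" and "example" are illustrative version/originator values.
    session_id = proxy.session.login_with_password(uname, pwd, "1.0", "example")
    try:
        yield session_id
    finally:
        proxy.session.logout(session_id)

# A stub stands in for a real RPC proxy so the flow can be shown offline.
class _StubSession:
    def __init__(self):
        self.calls = []
    def login_with_password(self, uname, pwd, version, originator):
        self.calls.append("login")
        return "OpaqueRef:fake-session"
    def logout(self, session_id):
        self.calls.append("logout")

class _StubProxy:
    def __init__(self):
        self.session = _StubSession()

proxy = _StubProxy()
with xenapi_session(proxy, "user", "passwd") as session_id:
    pass  # make authenticated RPC calls here, passing session_id
```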
Synchronous and Asynchronous Invocation
Each method call (apart from methods on the Session
and Task
objects and
“getters” and “setters” derived from fields) can be made either synchronously or
asynchronously. A synchronous RPC call blocks until the
return value is received; the return value of a synchronous RPC call is
exactly as specified above.
Only synchronous API calls are listed explicitly in this document.
All their asynchronous counterparts are in the special Async
namespace.
For example, the synchronous call VM.clone(...)
has an asynchronous
counterpart, Async.VM.clone(...)
, that is non-blocking.
Instead of returning its result directly, an asynchronous RPC call
returns an identifier of type task ref
which is subsequently used
to track the status of a running asynchronous RPC.
Note that an asynchronous call may fail immediately, before a task has even been created. When using the XML-RPC wire protocol, this eventuality is represented by wrapping the returned `task ref` in an XML-RPC struct with `Status`, `ErrorDescription`, and `Value` fields, exactly as specified above; the `task ref` is provided in the `Value` field if `Status` is set to `Success`.
When using the JSON-RPC protocol, the task ref
is wrapped in a response JSON
object as specified above and it is provided by the value of the result
member
of a successful call.
The RPC call
(task ref set) Task.get_all(session ref session_id)
returns a set of all task identifiers known to the system. The status (including any
returned result and error codes) of these can then be queried by accessing the
fields of the Task
object in the usual way. Note that, in order to get a
consistent snapshot of a task’s state, it is advisable to call the get_record
function.
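Polling a task to completion via `get_record` can be sketched as below. The `get_record` callable stands in for the real RPC, and the `status`/`result` field names follow the `Task` class in the API reference; the stub values driving the example are illustrative:

```python
import time

def wait_for_task(get_record, session, task_ref, interval=0.0):
    """Poll the task's record until it leaves the 'pending' state."""
    while True:
        record = get_record(session, task_ref)
        if record["status"] != "pending":
            return record
        time.sleep(interval)

# Stub task that reports 'pending' twice before succeeding.
_states = iter(["pending", "pending", "success"])
record = wait_for_task(
    lambda session, task: {"status": next(_states), "result": "OpaqueRef:new-vm"},
    "OpaqueRef:session", "OpaqueRef:task")
```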
Example interactive session
This section describes how an interactive session might look, using Python XML-RPC and JSON-RPC client libraries.
First, initialise Python:
$ python3
>>>
Using the XML-RPC Protocol
Import the library `xmlrpc.client` and create a Python object referencing the remote server as shown below:
>>> import xmlrpc.client
>>> xen = xmlrpc.client.ServerProxy("https://localhost:443")
Note that you may need to disable SSL certificate validation to establish the connection; this can be done as follows:
>>> import ssl
>>> ctx = ssl._create_unverified_context()
>>> xen = xmlrpc.client.ServerProxy("https://localhost:443", context=ctx)
Acquire a session reference by logging in with a username and password; the session reference is returned under the key `Value` in the resulting dictionary (error handling omitted for brevity):
>>> session = xen.session.login_with_password("user", "passwd",
... "version", "originator")['Value']
This is what the call looks like when serialized:
<?xml version='1.0'?>
<methodCall>
<methodName>session.login_with_password</methodName>
<params>
<param><value><string>user</string></value></param>
<param><value><string>passwd</string></value></param>
<param><value><string>version</string></value></param>
<param><value><string>originator</string></value></param>
</params>
</methodCall>
Next, the user may acquire a list of all the VMs known to the system (note the call takes the session reference as the only parameter):
>>> all_vms = xen.VM.get_all(session)['Value']
>>> all_vms
['OpaqueRef:1', 'OpaqueRef:2', 'OpaqueRef:3', 'OpaqueRef:4' ]
The VM references here have the form OpaqueRef:X
(though they may not be
that simple in reality) and you should treat them as opaque strings.
Templates are VMs with the is_a_template
field set to true
. We can
find the subset of template VMs using a command like the following:
>>> all_templates = list(filter(
...     lambda x: xen.VM.get_is_a_template(session, x)['Value'],
...     all_vms))
Once a reference to a VM has been acquired, a lifecycle operation may be invoked:
>>> xen.VM.start(session, all_templates[0], False, False)
{'Status': 'Failure', 'ErrorDescription': ['VM_IS_TEMPLATE', 'OpaqueRef:X']}
In this case the start
message has been rejected, because the VM is
a template, and so an error response has been returned. These high-level
errors are returned as structured data (rather than as XML-RPC faults),
allowing them to be internationalized.
Rather than querying fields individually, whole records may be returned at once. To retrieve the record of a single object as a python dictionary:
>>> record = xen.VM.get_record(session, all_templates[0])['Value']
>>> record['power_state']
'Halted'
>>> record['name_label']
'Windows 10 (64-bit)'
To retrieve all the VM records in a single call:
>>> records = xen.VM.get_all_records(session)['Value']
>>> list(records.keys())
['OpaqueRef:1', 'OpaqueRef:2', 'OpaqueRef:3', 'OpaqueRef:4' ]
>>> records['OpaqueRef:1']['name_label']
'Red Hat Enterprise Linux 7'
Using the JSON-RPC Protocol
For this example we are making use of the package jsonrpcclient
and the
requests
library due to their simplicity, although other packages can also be
used.
First, import the requests
and jsonrpcclient
libraries:
>>> import requests
>>> import jsonrpcclient
Now we construct a utility method to make using these libraries easier:
>>> def jsonrpccall(method, params):
... r = requests.post("https://localhost:443/jsonrpc",
... json=jsonrpcclient.request(method, params=params),
... verify=False)
... p = jsonrpcclient.parse(r.json())
... if isinstance(p, jsonrpcclient.Ok):
... return p.result
... raise Exception(p.message, p.data)
Acquire a session reference by logging in with a username and password:
>>> session = jsonrpccall("session.login_with_password",
...                           ("user", "passwd", "version", "originator"))
jsonrpcclient
uses the JSON-RPC protocol v2.0, so this is what the serialized
request looks like:
{
"jsonrpc": "2.0",
"method": "session.login_with_password",
"params": ["user", "passwd", "version", "originator"],
"id": 0
}
Next, the user may acquire a list of all the VMs known to the system (note the call takes the session reference as the only parameter):
>>> all_vms = jsonrpccall("VM.get_all", (session,))
>>> all_vms
['OpaqueRef:1', 'OpaqueRef:2', 'OpaqueRef:3', 'OpaqueRef:4' ]
The VM references here have the form OpaqueRef:X
(though they may not be
that simple in reality) and you should treat them as opaque strings.
Templates are VMs with the is_a_template
field set to true
. We can
find the subset of template VMs using a command like the following:
>>> all_templates = filter(
... lambda x: jsonrpccall("VM.get_is_a_template", (session, x)),
... all_vms)
Once a reference to a VM has been acquired, a lifecycle operation may be invoked:
>>> try:
... jsonrpccall("VM.start", (session, next(all_templates), False, False))
... except Exception as e:
... e
...
Exception('VM_IS_TEMPLATE', ['OpaqueRef:1', 'start'])
In this case the start
message has been rejected because the VM is
a template, hence an error response has been returned. These high-level
errors are returned as structured data, allowing them to be internationalized.
Rather than querying fields individually, whole records may be returned at once. To retrieve the record of a single object as a python dictionary:
>>> record = jsonrpccall("VM.get_record", (session, next(all_templates)))
>>> record['power_state']
'Halted'
>>> record['name_label']
'Windows 10 (64-bit)'
To retrieve all the VM records in a single call:
>>> records = jsonrpccall("VM.get_all_records", (session,))
>>> list(records.keys())
['OpaqueRef:1', 'OpaqueRef:2', 'OpaqueRef:3', 'OpaqueRef:4' ]
>>> records['OpaqueRef:1']['name_label']
'Red Hat Enterprise Linux 7'
Overview of the XenAPI
This chapter introduces the XenAPI and its associated object model. The API has the following key features:
Management of all aspects of the XenServer Host. The API allows you to manage VMs, storage, networking, host configuration and pools. Performance and status metrics can also be queried from the API.
Persistent Object Model. The results of all side-effecting operations (e.g. object creation, deletion and parameter modifications) are persisted in a server-side database that is managed by the XenServer installation.
An event mechanism. Through the API, clients can register to be notified when persistent (server-side) objects are modified. This enables applications to keep track of datamodel modifications performed by concurrently executing clients.
Synchronous and asynchronous invocation. All API calls can be invoked synchronously (that is, block until completion); any API call that may be long-running can also be invoked asynchronously. Asynchronous calls return immediately with a reference to a task object. This task object can be queried (through the API) for progress and status information. When an asynchronously invoked operation completes, the result (or error code) is available from the task object.
Remotable and Cross-Platform. The client issuing the API calls does not have to be resident on the host being managed; nor does it have to be connected to the host over ssh in order to execute the API. API calls make use of the XML-RPC protocol to transmit requests and responses over the network.
Secure and Authenticated Access. The XML-RPC API server executing on the host accepts secure socket connections. This allows a client to execute the APIs over the https protocol. Further, all the API calls execute in the context of a login session generated through username and password validation at the server. This provides secure and authenticated access to the XenServer installation.
Getting Started with the API
We will start our tour of the API by describing the calls required to create a new VM on a XenServer installation, and take it through a start/suspend/resume/stop cycle. This is done without reference to code in any specific language; at this stage we just describe the informal sequence of RPC invocations that accomplish our “install and start” task.
Authentication: acquiring a session reference
The first step is to call `Session.login_with_password(<username>, <password>, <client API version>, <originator>)`. The API is session based, so before you can make other calls you will need to authenticate with the server. Assuming the username and password are authenticated correctly, the result of this call is a session reference. Subsequent API calls take the session reference as a parameter. In this way we ensure that only API users who are suitably authorized can perform operations on a XenServer installation. You can continue to use the same session for any number of API calls. When you have finished the session, Citrix recommends that you call `Session.logout(session)` to clean up: see later.
Acquiring a list of templates to base a new VM installation on
The next step is to query the list of “templates” on the host. Templates are specially-marked VM objects that specify suitable default parameters for a variety of supported guest types. (If you want to see a quick enumeration of the templates on a XenServer installation for yourself then you can execute the `xe template-list` CLI command.) To get a list of templates from the API, we need to find the VM objects on the server that have their `is_a_template` field set to true. One way to do this is by calling `VM.get_all_records(session)` where the session parameter is the reference we acquired from our `Session.login_with_password` call earlier. This call queries the server, returning a snapshot (taken at the time of the call) containing all the VM object references and their field values.
(Remember that at this stage we are not concerned about the particular mechanisms by which the returned object references and field values can be manipulated in any particular client language: that detail is dealt with by our language-specific API bindings and described concretely in the following chapter. For now it suffices just to assume the existence of an abstract mechanism for reading and manipulating objects and field values returned by API calls.)
Now that we have a snapshot of all the VM objects’ field values in the memory of our client application we can simply iterate through them and find the ones that have their is_a_template field set to true. At this stage let’s assume that our example application further iterates through the template objects and remembers the reference corresponding to the one that has its name_label set to “Debian Etch 4.0” (one of the default Linux templates supplied with XenServer).
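The filtering step above can be sketched in Python. The helper below operates on any snapshot shaped like the result of VM.get_all_records; the object references and field values in the demonstration dictionary are invented for illustration, so the fragment runs without a server.

```python
def find_template(records, label):
    """Return the reference of the first template whose name_label matches,
    or None if no such template exists in the snapshot."""
    for ref, fields in records.items():
        if fields["is_a_template"] and fields["name_label"] == label:
            return ref
    return None

# Illustrative snapshot; with a live session this would come from
# session.xenapi.VM.get_all_records().
snapshot = {
    "OpaqueRef:1": {"is_a_template": True, "name_label": "Debian Etch 4.0"},
    "OpaqueRef:2": {"is_a_template": False, "name_label": "my first VM"},
}
t_ref = find_template(snapshot, "Debian Etch 4.0")
```

Because the snapshot is taken at call time, re-query the server if you need up-to-date field values later.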
Installing the VM based on a template
Continuing through our example, we must now install a new VM based on the template we selected. The installation process requires 4 API calls:
- First we must invoke the API call VM.clone(session, t_ref, "my first VM"). This tells the server to clone the VM object referenced by t_ref in order to make a new VM object. The return value of this call is the VM reference corresponding to the newly-created VM. Let’s call this new_vm_ref.
- Next, we need to specify the UUID of the Storage Repository where the VM’s disks will be instantiated. We have to put this in the sr attribute in the disk provisioning XML stored under the “disks” key in the other_config map of the newly-created VM. This field can be updated by calling its getter (other_config <- VM.get_other_config(session, new_vm_ref)) and then its setter (VM.set_other_config(session, new_vm_ref, other_config)) with the modified other_config map.
- At this stage the object referred to by new_vm_ref is still a template (just like the VM object referred to by t_ref, from which it was cloned). To make new_vm_ref into a VM object we need to call VM.provision(session, new_vm_ref). When this call returns, the new_vm_ref object will have had its is_a_template field set to false, indicating that new_vm_ref now refers to a regular VM ready for starting.
Note
The provision operation may take a few minutes, as it is during this call that the template’s disk images are created. In the case of the Debian template, the newly created disks are also at this stage populated with a Debian root filesystem.
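The clone/reconfigure/provision sequence can be sketched as one helper. The exact layout of the provisioning XML under the “disks” key, and the FakeVMApi class standing in for session.xenapi.VM, are both illustrative assumptions so the fragment runs without a server; with real bindings you would pass session.xenapi.VM instead.

```python
from xml.dom import minidom

def install_from_template(vm_api, t_ref, sr_uuid, name):
    """Clone a template, point its disk provisioning XML at the given SR,
    and provision it. vm_api stands in for session.xenapi.VM."""
    new_vm_ref = vm_api.clone(t_ref, name)
    other_config = vm_api.get_other_config(new_vm_ref)
    # Rewrite the sr attribute of every <disk> element under the "disks" key.
    doc = minidom.parseString(other_config["disks"])
    for disk in doc.getElementsByTagName("disk"):
        disk.setAttribute("sr", sr_uuid)
    other_config["disks"] = doc.documentElement.toxml()
    vm_api.set_other_config(new_vm_ref, other_config)
    vm_api.provision(new_vm_ref)
    return new_vm_ref

class FakeVMApi:
    """In-memory stand-in for session.xenapi.VM, for illustration only."""
    def __init__(self):
        # Hypothetical provisioning XML with an empty sr attribute.
        self.config = {"disks": '<provision><disk device="0" size="8589934592" sr="" type="system"/></provision>'}
        self.provisioned = False
    def clone(self, t_ref, name):
        return "OpaqueRef:new-vm"
    def get_other_config(self, ref):
        return dict(self.config)
    def set_other_config(self, ref, other_config):
        self.config = other_config
    def provision(self, ref):
        self.provisioned = True

api = FakeVMApi()
new_vm_ref = install_from_template(api, "OpaqueRef:template",
                                   "sr-uuid-1234", "my first VM")
```

Note how the four API calls of the walk-through (clone, get_other_config, set_other_config, provision) appear in order inside the helper.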
Taking the VM through a start/suspend/resume/stop cycle
Now we have an object reference representing our newly-installed VM, it is trivial to take it through a few lifecycle operations:
- To start our VM we can just call VM.start(session, new_vm_ref).
- After it’s running, we can suspend it by calling VM.suspend(session, new_vm_ref), and then resume it by calling VM.resume(session, new_vm_ref).
- We can call VM.shutdown(session, new_vm_ref) to shut down the VM cleanly.
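The whole cycle can be wrapped in a small driver function. The CallLog class below is a hypothetical stand-in for session.xenapi.VM that only records which calls were made, so the sketch runs standalone; the extra False arguments to start and resume (start_paused, force) follow the call form shown later in this document.

```python
def cycle_vm(vm_api, vm_ref):
    """Take a VM through one start/suspend/resume/shutdown cycle."""
    vm_api.start(vm_ref, False, False)   # start_paused=False, force=False
    vm_api.suspend(vm_ref)
    vm_api.resume(vm_ref, False, False)
    vm_api.shutdown(vm_ref)

class CallLog:
    """Records method names in call order; stands in for session.xenapi.VM."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        return lambda *args: self.calls.append(name)

log = CallLog()
cycle_vm(log, "OpaqueRef:vm")
```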
Logging out
Once an application is finished interacting with a XenServer Host it is good practice to call Session.logout(session). This invalidates the session reference (so it cannot be used in subsequent API calls) and simultaneously deallocates server-side memory used to store the session object.
Although inactive sessions will eventually time out, the server has a hardcoded limit of 500 concurrent sessions for each username or originator. Once this limit has been reached, fresh logins will evict the session objects that have been used least recently, causing their associated session references to become invalid. For successful interoperability with other applications concurrently accessing the server, the best policy is:
- Choose a string that identifies your application and its version.
- Create a single session at start-of-day, using that identifying string for the originator parameter to Session.login_with_password.
- Use this session throughout the application (note that sessions can be used across multiple separate client-server network connections) and then explicitly log out when possible.
If a poorly written client leaks sessions or otherwise exceeds the limit, then as long as the client uses an appropriate originator argument, it will be easily identifiable from the XenServer logs, and XenServer will destroy the longest-idle sessions of the rogue client only; this may cause problems for that client but not for other clients. If the misbehaving client did not specify an originator, it would be harder to identify and would cause the premature destruction of sessions of any other clients that also did not specify an originator.
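A defensive client can pair the originator convention with automatic re-login. The sketch below assumes an exception shaped like the Python bindings' XenAPI.Failure, whose details list begins with the error code; both the Failure class and the relogin hook here are hypothetical stand-ins so the fragment runs on its own.

```python
class Failure(Exception):
    """Stand-in for XenAPI.Failure: details[0] holds the error code."""
    def __init__(self, details):
        super().__init__(details)
        self.details = details

def call_with_relogin(relogin, fn, *args):
    """Invoke an API call, re-authenticating once on SESSION_INVALID."""
    try:
        return fn(*args)
    except Failure as e:
        if e.details[0] != "SESSION_INVALID":
            raise
        relogin()          # e.g. Session.login_with_password(...)
        return fn(*args)

# Demonstration: the first call fails with SESSION_INVALID, the retry works.
state = {"valid": False}
def relogin():
    state["valid"] = True
def get_hosts():
    if not state["valid"]:
        raise Failure(["SESSION_INVALID"])
    return ["OpaqueRef:host"]

hosts = call_with_relogin(relogin, get_hosts)
```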
Install and start example: summary
We have seen how the API can be used to install a VM from a XenServer template and perform a number of lifecycle operations on it. You will note that the number of calls we had to make in order to effect these operations was small:
- One call to acquire a session: Session.login_with_password().
- One call to query the VM (and template) objects present on the XenServer installation: VM.get_all_records(). Recall that we used the information returned from this call to select a suitable template to install from.
- Four calls to install a VM from our chosen template: VM.clone(), followed by the getter and setter of the other_config field to specify where to create the disk images of the template, and then VM.provision().
- One call to start the resultant VM: VM.start() (and similarly other single calls to suspend, resume and shutdown accordingly).
- And then one call to log out: Session.logout().
The take-home message here is that, although the API as a whole is complex and fully featured, common tasks (such as creating and performing lifecycle operations on VMs) are very straightforward to perform, requiring only a small number of simple API calls. Keep this in mind while you study the next section which may, on first reading, appear a little daunting!
Object Model Overview
This section gives a high-level overview of the object model of the API. A more detailed description of the parameters and methods of each class outlined here can be found in the XenServer API Reference document.
We start by giving a brief outline of some of the core classes that make up the API. (Don’t worry if these definitions seem somewhat abstract in their initial presentation; the textual description in subsequent sections, and the code-sample walk through in the next Chapter will help make these concepts concrete.)
Class | Description |
---|---|
VM | A VM object represents a particular virtual machine instance on a XenServer Host or Resource Pool. Example methods include start , suspend , pool_migrate ; example parameters include power_state , memory_static_max , and name_label . (In the previous section we saw how the VM class is used to represent both templates and regular VMs) |
Host | A host object represents a physical host in a XenServer pool. Example methods include reboot and shutdown . Example parameters include software_version , hostname , and [IP] address . |
VDI | A VDI object represents a Virtual Disk Image. Virtual Disk Images can be attached to VMs, in which case a block device appears inside the VM through which the bits encapsulated by the Virtual Disk Image can be read and written. Example methods of the VDI class include “resize” and “clone”. Example fields include “virtual_size” and “sharable”. (When we called VM.provision on the VM template in our previous example, some VDI objects were automatically created to represent the newly created disks, and attached to the VM object.) |
SR | An SR (Storage Repository) aggregates a collection of VDIs and encapsulates the properties of physical storage on which the VDIs’ bits reside. Example parameters include type (which determines the storage-specific driver a XenServer installation uses to read/write the SR’s VDIs) and physical_utilisation ; example methods include scan (which invokes the storage-specific driver to acquire a list of the VDIs contained with the SR and the properties of these VDIs) and create (which initializes a block of physical storage so it is ready to store VDIs). |
Network | A network object represents a layer-2 network that exists in the environment in which the XenServer Host instance lives. Since XenServer does not manage networks directly this is a lightweight class that serves merely to model physical and virtual network topology. VM and Host objects that are attached to a particular Network object (by virtue of VIF and PIF instances – see below) can send network packets to each other. |
At this point, readers who are finding this enumeration of classes rather terse may wish to skip to the code walk-throughs of the next chapter: there are plenty of useful applications that can be written using only a subset of the classes already described! For those who wish to continue this description of classes in the abstract, read on.
On top of the classes listed above, there are 4 more that act as connectors, specifying relationships between VMs and Hosts, and Storage and Networks. The first 2 of these classes that we will consider, VBD and VIF, determine how VMs are attached to virtual disks and network objects respectively:
Class | Description |
---|---|
VBD | A VBD (Virtual Block Device) object represents an attachment between a VM and a VDI. When a VM is booted its VBD objects are queried to determine which disk images (VDIs) should be attached. Example methods of the VBD class include “plug” (which hot plugs a disk device into a running VM, making the specified VDI accessible therein) and “unplug” (which hot unplugs a disk device from a running guest); example fields include “device” (which determines the device name inside the guest under which the specified VDI will be made accessible). |
VIF | A VIF (Virtual network InterFace) object represents an attachment between a VM and a Network object. When a VM is booted its VIF objects are queried to determine which network devices should be created. Example methods of the VIF class include “plug” (which hot plugs a network device into a running VM) and “unplug” (which hot unplugs a network device from a running guest). |
The second set of “connector classes” that we will consider determine how Hosts are attached to Networks and Storage.
Class | Description |
---|---|
PIF | A PIF (Physical InterFace) object represents an attachment between a Host and a Network object. If a host is connected to a Network (over a PIF) then packets from the specified host can be transmitted/received by the corresponding host. Example fields of the PIF class include “device” (which specifies the device name to which the PIF corresponds – e.g. eth0) and “MAC” (which specifies the MAC address of the underlying NIC that a PIF represents). Note that PIFs abstract both physical interfaces and VLANs (the latter distinguished by the existence of a positive integer in the “VLAN” field). |
PBD | A PBD (Physical Block Device) object represents an attachment between a Host and a SR (Storage Repository) object. Fields include “currently-attached” (which specifies whether the chunk of storage represented by the specified SR object is currently available to the host) and “device_config” (which specifies storage-driver specific parameters that determine how the low-level storage devices are configured on the specified host – e.g. in the case of an SR rendered on an NFS filer, device_config may specify the host-name of the filer and the path on the filer in which the SR files live). |
The figure above presents a graphical overview of the API classes involved in managing VMs, Hosts, Storage and Networking. From this diagram, the symmetry between storage and network configuration, and also the symmetry between virtual machine and host configuration is plain to see.
Working with VIFs and VBDs
In this section we walk through a few more complex scenarios, describing informally how various tasks involving virtual storage and network devices can be accomplished using the API.
Creating disks and attaching them to VMs
Let’s start by considering how to make a new blank disk image and attach it to a running VM. We will assume that we already have ourselves a running VM, and we know its corresponding API object reference (e.g. we may have created this VM using the procedure described in the previous section, and had the server return its reference to us). We will also assume that we have authenticated with the XenServer installation and have a corresponding session reference. Indeed in the rest of this chapter, for the sake of brevity, we will stop mentioning sessions altogether.
Creating a new blank disk image
The first step is to instantiate the disk image on physical storage. We do this by calling VDI.create(). The VDI.create call takes a number of parameters, including:
- name_label and name_description: a human-readable name/description for the disk (e.g. for convenient display in the UI etc.). These fields can be left blank if desired.
- SR: the object reference of the Storage Repository representing the physical storage in which the VDI’s bits will be placed.
- read_only: setting this field to true indicates that the VDI can only be attached to VMs in a read-only fashion. (Attempting to attach a VDI with its read_only field set to true in a read/write fashion results in an error.)
Invoking the VDI.create call causes the XenServer installation to create a blank disk image on physical storage, create an associated VDI object (the datamodel instance that refers to the disk image on physical storage) and return a reference to this newly created VDI object.
The way in which the disk image is represented on physical storage depends on the type of the SR in which the created VDI resides. For example, if the SR is of type “lvm” then the new disk image will be rendered as an LVM volume; if the SR is of type “nfs” then the new disk image will be a sparse VHD file created on an NFS filer. (You can query the SR type through the API using the SR.get_type() call.)
Note
Some SR types might round up the virtual-size value to make it divisible by a configured block size.
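In the Python bindings, VDI.create takes a single record of field values. The record below is illustrative: it includes the fields discussed above plus a size and type, but the exact set of required fields depends on the API version, and the FakeVDIApi class is a stand-in for session.xenapi.VDI so the sketch runs without a server.

```python
def create_blank_vdi(vdi_api, sr_ref, size_bytes, label):
    """Create a blank, writable disk image of the given size in an SR."""
    return vdi_api.create({
        "name_label": label,
        "name_description": "",      # may be left blank
        "SR": sr_ref,
        "virtual_size": str(size_bytes),
        "type": "user",
        "sharable": False,
        "read_only": False,
        "other_config": {},
    })

class FakeVDIApi:
    """Stand-in for session.xenapi.VDI; records the last created record."""
    def create(self, record):
        self.last_record = record
        return "OpaqueRef:new-vdi"

api = FakeVDIApi()
vdi_ref = create_blank_vdi(api, "OpaqueRef:sr", 8 * 1024 ** 3, "my new disk")
```

Remember that the requested virtual size may be rounded up by some SR types, as noted above.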
Attaching the disk image to a VM
So far we have a running VM (that we assumed the existence of at the start of this example) and a fresh VDI that we just created. Right now, these are both independent objects that exist on the XenServer Host, but there is nothing linking them together. So our next step is to create such a link, associating the VDI with our VM.
The attachment is formed by creating a new “connector” object called a VBD (Virtual Block Device). To create our VBD we invoke the VBD.create() call. The VBD.create() call takes a number of parameters including:
- VM: the object reference of the VM to which the VDI is to be attached.
- VDI: the object reference of the VDI that is to be attached.
- mode: specifies whether the VDI is to be attached in a read-only or a read-write fashion.
- userdevice: specifies the block device inside the guest through which applications running inside the VM will be able to read/write the VDI’s bits.
- type: specifies whether the VDI should be presented inside the VM as a regular disk or as a CD. (Note that this particular field has more meaning for Windows VMs than it does for Linux VMs, but we will not explore this level of detail in this chapter.)
Invoking VBD.create makes a VBD object on the XenServer installation and returns its object reference. However, this call in itself does not have any side-effects on the running VM (that is, if you go and look inside the running VM you will see that the block device has not been created). The fact that the VBD object exists while the block device in the guest is not active is reflected in the VBD object’s currently_attached field, which is set to false.
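As with VDI.create, the Python bindings pass a record of field values. The record below is illustrative (the complete field set varies by API version), and FakeVBDApi is a hypothetical stand-in for session.xenapi.VBD so the fragment runs standalone.

```python
def attach_vdi(vbd_api, vm_ref, vdi_ref, userdevice):
    """Create a VBD linking a VM to a VDI. The new VBD starts with
    currently_attached == false; the guest sees no device until VBD.plug."""
    return vbd_api.create({
        "VM": vm_ref,
        "VDI": vdi_ref,
        "userdevice": userdevice,   # position of the device inside the guest
        "mode": "RW",
        "type": "Disk",
        "bootable": False,
        "empty": False,
        "other_config": {},
        "qos_algorithm_type": "",
        "qos_algorithm_params": {},
    })

class FakeVBDApi:
    """Stand-in for session.xenapi.VBD; records the last created record."""
    def create(self, record):
        self.last_record = record
        return "OpaqueRef:new-vbd"

api = FakeVBDApi()
vbd_ref = attach_vdi(api, "OpaqueRef:vm", "OpaqueRef:new-vdi", "1")
```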
For expository purposes, the figure above presents a graphical example that shows the relationship between VMs, VBDs, VDIs and SRs. In this instance a VM object has 2 attached VDIs: there are 2 VBD objects that form the connections between the VM object and its VDIs; and the VDIs reside within the same SR.
Hotplugging the VBD
If we rebooted the VM at this stage then, after rebooting, the block device corresponding to the VBD would appear: on boot, XenServer queries all VBDs of a VM and actively attaches each of the corresponding VDIs.
Rebooting the VM is all very well, but recall that we wanted to attach a newly created blank disk to a running VM. This can be achieved by invoking the plug method on the newly created VBD object. When the plug call returns successfully, the block device to which the VBD relates will have appeared inside the running VM – i.e. from the perspective of the running VM, the guest operating system is led to believe that a new disk device has just been hot plugged. Mirroring this fact in the managed world of the API, the currently_attached field of the VBD is set to true.
Unsurprisingly, the VBD plug method has a dual called unplug. Invoking the unplug method on a VBD object causes the associated block device to be hot unplugged from a running VM, setting the currently_attached field of the VBD object to false accordingly.
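The effect of plug and unplug on the currently_attached field can be illustrated with a toy model of session.xenapi.VBD (the class below only tracks the flag; a real call would of course act on the guest as described above):

```python
class VBDStub:
    """Toy model of session.xenapi.VBD tracking only currently_attached."""
    def __init__(self):
        self.attached = {}
    def plug(self, ref):
        self.attached[ref] = True    # block device appears in the guest
    def unplug(self, ref):
        self.attached[ref] = False   # block device is hot unplugged
    def get_currently_attached(self, ref):
        return self.attached.get(ref, False)

vbd_api = VBDStub()
before = vbd_api.get_currently_attached("OpaqueRef:vbd")   # not yet plugged
vbd_api.plug("OpaqueRef:vbd")
after_plug = vbd_api.get_currently_attached("OpaqueRef:vbd")
vbd_api.unplug("OpaqueRef:vbd")
after_unplug = vbd_api.get_currently_attached("OpaqueRef:vbd")
```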
Creating and attaching Network Devices to VMs
The API calls involved in configuring virtual network interfaces in VMs are similar in many respects to the calls involved in configuring virtual disk devices. For this reason we will not run through a full example of how one can create network interfaces using the API object-model; instead we will use this section just to outline briefly the symmetry between virtual networking device and virtual storage device configuration.
The networking analogue of the VBD class is the VIF class. Just as a VBD is the API representation of a block device inside a VM, a VIF (Virtual network InterFace) is the API representation of a network device inside a VM. Whereas VBDs associate VM objects with VDI objects, VIFs associate VM objects with Network objects. Just like VBDs, VIFs have a currently_attached field that determines whether or not the network device (inside the guest) associated with the VIF is currently active. And as we saw with VBDs, at VM boot-time the VIFs of the VM are queried and a corresponding network device for each is created inside the booting VM. Similarly, VIFs also have plug and unplug methods for hot plugging/unplugging network devices in/out of running VMs.
Host configuration for networking and storage
We have seen that the VBD and VIF classes are used to manage configuration of block devices and network devices (respectively) inside VMs. To manage host configuration of storage and networking there are two analogous classes: PBD (Physical Block Device) and PIF (Physical [network] InterFace).
Host storage configuration: PBDs
Let us start by considering the PBD class. A PBD.create() call takes a number of parameters including:
Parameter | Description |
---|---|
host | physical machine on which the PBD is available |
SR | the Storage Repository that the PBD connects to |
device_config | a string-to-string map that is provided to the host’s SR-backend-driver, containing the low-level parameters required to configure the physical storage device(s) on which the SR is to be realized. The specific contents of the device_config field depend on the type of the SR to which the PBD is connected. (Executing xe sm-list will show a list of possible SR types; the configuration field in this enumeration specifies the device_config parameters that each SR type expects.) |
For example, imagine we have an SR object s of type “nfs” (representing a directory on an NFS filer within which VDIs are stored as VHD files); and let’s say that we want a host, h, to be able to access s. In this case we invoke PBD.create() specifying host h, SR s, and a value for the device_config parameter that is the following map:
("server", "my_nfs_server.example.com"), ("serverpath", "/scratch/mysrs/sr1")
This tells the XenServer Host that SR s is accessible on host h, and further that to access SR s, the host needs to mount the directory /scratch/mysrs/sr1 on the NFS server named my_nfs_server.example.com.
Like VBD objects, PBD objects also have a field called currently_attached. Storage repositories can be attached to and detached from a given host by invoking the PBD.plug and PBD.unplug methods respectively.
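Putting the NFS example together, a create-then-plug helper might look like the sketch below. The record fields are illustrative, and FakePBDApi is a hypothetical stand-in for session.xenapi.PBD so the fragment runs without a server.

```python
def attach_nfs_sr(pbd_api, host_ref, sr_ref, server, serverpath):
    """Make an NFS SR available on a host: create the PBD, then plug it."""
    pbd_ref = pbd_api.create({
        "host": host_ref,
        "SR": sr_ref,
        "device_config": {"server": server, "serverpath": serverpath},
        "other_config": {},
    })
    pbd_api.plug(pbd_ref)
    return pbd_ref

class FakePBDApi:
    """Stand-in for session.xenapi.PBD; records the create record and plug."""
    def create(self, record):
        self.last_record = record
        return "OpaqueRef:new-pbd"
    def plug(self, ref):
        self.plugged = ref

api = FakePBDApi()
pbd_ref = attach_nfs_sr(api, "OpaqueRef:host", "OpaqueRef:sr",
                        "my_nfs_server.example.com", "/scratch/mysrs/sr1")
```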
Host networking configuration: PIFs
Host network configuration is specified by virtue of PIF objects. If a PIF object connects a network object, n, to a host object h, then the network corresponding to n is bridged onto a physical interface (or a physical interface plus a VLAN tag) specified by the fields of the PIF object.
For example, imagine a PIF object exists connecting host h to a network n, and that the device field of the PIF object is set to eth0. This means that all packets on network n are bridged to the NIC in the host corresponding to host network device eth0.
XML-RPC notes
Datetimes
The API deviates from the XML-RPC specification in handling of datetimes. The API appends a “Z” to the end of datetime strings, which is meant to indicate that the time is expressed in UTC.
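A client can normalise such a value into a timezone-aware Python datetime. The sample string and the compact yyyymmddThh:mm:ss layout follow XML-RPC's dateTime.iso8601 format; treat the exact layout as an assumption to verify against your server's output.

```python
from datetime import datetime, timezone

def parse_xapi_datetime(value):
    """Parse an XML-RPC datetime string with the trailing 'Z' as UTC."""
    return datetime.strptime(value, "%Y%m%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)

ts = parse_xapi_datetime("20240101T12:30:00Z")
```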
API evolution
All APIs evolve as bugs are fixed, new features are added and old features are removed - the XenAPI is no exception. This document lists policies describing how the XenAPI evolves over time.
The goals of XenAPI evolution are:
- to allow bugs to be fixed efficiently;
- to allow new, innovative features to be added easily;
- to keep old, unmodified clients working as much as possible; and
- where backwards-incompatible changes are to be made, publish this information early to enable affected parties to give timely feedback.
Background
In this document, the term XenAPI refers to the XMLRPC-derived wire protocol used by xapi. The XenAPI has objects which each have fields and messages. The XenAPI is described in detail elsewhere.
XenAPI Lifecycle
Each element of the XenAPI (objects, messages and fields) follows the lifecycle diagram above. When an element is newly created and still in development, it is in the Prototype state. Elements in this state may be stubs: the interface is there and can be used by clients for prototyping their new features, but the actual implementation is not yet ready.
When the element subsequently becomes ready for use (the stub is replaced by a real implementation), it transitions to the Published state. This is the only state in which the object, message or field should be used. From this point onwards, the element needs to have clearly defined semantics that are available for reference in the XenAPI documentation.
If the XenAPI element becomes Deprecated, it will still function as it did before, but its use is discouraged. The final stage of the lifecycle is the Removed state, in which the element is not available anymore.
The numbered state changes in the diagram have the following meaning:
- Publish: declare that the XenAPI element is ready for people to use.
- Extend: a backwards-compatible extension of the XenAPI, for example an additional parameter in a message with an appropriate default value. If the API is used as before, it still has the same effect.
- Change: a backwards-incompatible change. That is, the message now behaves differently, or the field has different semantics. Such changes are discouraged and should only be considered in special cases (always consider whether deprecation is a better solution). The use of a message can for example be restricted for security or efficiency reasons, or the behaviour can be changed simply to fix a bug.
- Deprecate: declare that the use of this XenAPI element should be avoided from now on. Reasons for doing this include: the element is redundant (it duplicates functionality elsewhere), it is inconsistent with other parts of the XenAPI, or it is insecure or inefficient (for examples of deprecation policies of other projects, see those of Symbian, Eclipse, and OVAL).
- Remove: the element is taken out of the public API and can no longer be used.
Each lifecycle transition must be accompanied by an explanation describing the change and the reason for the change. This message should be enough to understand the semantics of the XenAPI element after the change, and in the case of backwards-incompatible changes or deprecation, it should give directions about how to modify a client to deal with the change (for example, how to avoid using the deprecated field or message).
Releases
Every release must be accompanied by release notes listing all objects, fields and messages that are newly prototyped, published, extended, changed, deprecated or removed in the release. Each item should have an explanation as implied above, documenting the new or changed XenAPI element. The release notes for every release shall be prominently displayed in the XenAPI HTML documentation.
Documentation
The XenAPI documentation will contain its complete lifecycle history for each XenAPI element. Only the elements described in the documentation are “official” and supported.
Each object, message and field in datamodel.ml will have lifecycle metadata attached to it, which is a list of transitions (transition type * release * explanation string) as described above. Release notes are automatically generated from this data.
Using the API
This chapter describes how to use the XenServer Management API from real programs to manage XenServer Hosts and VMs. The chapter begins with a walk-through of a typical client application and demonstrates how the API can be used to perform common tasks. Example code fragments are given in python syntax but equivalent code in the other programming languages would look very similar. The language bindings themselves are discussed afterwards and the chapter finishes with walk-throughs of two complete examples.
Anatomy of a typical application
This section describes the structure of a typical application using the XenServer Management API. Most client applications begin by connecting to a XenServer Host and authenticating (e.g. with a username and password). Assuming the authentication succeeds, the server will create a “session” object and return a reference to the client. This reference will be passed as an argument to all future API calls. Once authenticated, the client may search for references to other useful objects (e.g. XenServer Hosts, VMs, etc.) and invoke operations on them. Operations may be invoked either synchronously or asynchronously; special task objects represent the state and progress of asynchronous operations. These application elements are all described in detail in the following sections.
Choosing a low-level transport
API calls can be issued over two transports:
- SSL-encrypted TCP on port 443 (https) over an IP network
- plaintext over a local Unix domain socket: /var/xapi/xapi
The SSL-encrypted TCP transport is used for all off-host traffic while the Unix domain socket can be used from services running directly on the XenServer Host itself. In the SSL-encrypted TCP transport, all API calls should be directed at the Resource Pool master; failure to do so will result in the error HOST_IS_SLAVE, which includes the IP address of the master as an error parameter.
Because the master host of a pool can change, especially if HA is enabled on a pool, clients must implement the following steps to detect a master host change and connect to the new master as required:
Subscribe to updates to the list of host servers, and maintain a current list of hosts in the pool
If the connection to the pool master fails to respond, attempt to connect to all hosts in the list until one responds
The first host to respond will return the HOST_IS_SLAVE error message, which contains the identity of the new pool master (unless of course the host is the new master)
Connect to the new master
Note
As a special-case, all messages sent through the Unix domain socket are transparently forwarded to the correct node.
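The reconnection steps above can be sketched as follows. The connect callable, the Failure class (modelled on the Python bindings' XenAPI.Failure, whose details list carries the error code and its parameters), and the host addresses are all hypothetical so the fragment runs standalone.

```python
class Failure(Exception):
    """Stand-in for XenAPI.Failure: details is [error_code, param, ...]."""
    def __init__(self, details):
        super().__init__(details)
        self.details = details

def connect_to_master(candidates, connect):
    """Try candidate hosts until one accepts the connection or names the
    master via a HOST_IS_SLAVE error (whose parameter is the master)."""
    for address in candidates:
        try:
            return connect(address)
        except Failure as e:
            if e.details[0] == "HOST_IS_SLAVE":
                return connect(e.details[1])   # redirect to the real master
            raise
        except OSError:
            continue                           # unreachable; try the next host

# Demonstration: h1 is down, h2 is a slave pointing at master h3.
def connect(address):
    if address == "h1":
        raise OSError("connection refused")
    if address == "h2":
        raise Failure(["HOST_IS_SLAVE", "h3"])
    return "session-on-" + address

session_ref = connect_to_master(["h1", "h2"], connect)
```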
Authentication and session handling
The vast majority of API calls take a session reference as their first parameter; failure to supply a valid reference will result in a SESSION_INVALID error being returned. Acquire a session reference by supplying a username and password to the login_with_password function.
Note
As a special-case, if this call is executed over the local Unix domain socket then the username and password are ignored and the call always succeeds.
Every session has an associated “last active” timestamp which is updated on every API call. The server software currently has a built-in limit of 500 active sessions and will remove those with the oldest “last active” field if this limit is exceeded for a given username or originator. In addition all sessions whose “last active” field is older than 24 hours are also removed. Therefore it is important to:
- Specify an appropriate originator when logging in;
- Remember to log out of active sessions to avoid leaking them; and
- Be prepared to log in again to the server if a SESSION_INVALID error is caught.
In the following Python fragment a connection is established over the Unix domain socket and a session is created:
import XenAPI
session = XenAPI.xapi_local()
try:
    session.xenapi.login_with_password("root", "", "2.3", "My Widget v0.1")
    ...
finally:
    session.xenapi.session.logout()
Finding references to useful objects
Once an application has authenticated the next step is to acquire references to objects in order to query their state or invoke operations on them. All objects have a set of “implicit” messages which include the following:
- get_by_name_label: return a list of all objects of a particular class with a particular label;
- get_by_uuid: return a single object named by its UUID;
- get_all: return a set of references to all objects of a particular class; and
- get_all_records: return a map of reference to records for each object of a particular class.
For example, to list all hosts:
hosts = session.xenapi.host.get_all()
To find all VMs with the name “my first VM”:
vms = session.xenapi.VM.get_by_name_label('my first VM')
Note
Object name_label fields are not guaranteed to be unique and so the get_by_name_label API call returns a set of references rather than a single reference.
In addition to the methods of finding objects described above, most objects also contain references to other objects within fields. For example it is possible to find the set of VMs running on a particular host by calling:
vms = session.xenapi.host.get_resident_VMs(host)
Invoking synchronous operations on objects
Once object references have been acquired, operations may be invoked on them. For example to start a VM:
session.xenapi.VM.start(vm, False, False)
All API calls are by default synchronous and will not return until the operation has completed or failed. For example, in the case of VM.start the call does not return until the VM has started booting.
Note
When the VM.start call returns the VM will be booting. To determine when the booting has finished, wait for the in-guest agent to report internal statistics through the VM_guest_metrics object.
Using Tasks to manage asynchronous operations
To simplify managing operations which take quite a long time (e.g. VM.clone and VM.copy), functions are available in two forms: synchronous (the default) and asynchronous. Each asynchronous function returns a reference to a task object which contains information about the in-progress operation including:
- whether it is pending;
- whether it has succeeded or failed;
- progress (in the range 0-1); and
- the result or error code returned by the operation.
An application which wanted to track the progress of a VM.clone operation and display a progress bar would have code like the following:
import time

vm = session.xenapi.VM.get_by_name_label('my vm')[0]
task = session.xenapi.Async.VM.clone(vm, 'my vm copy')
while session.xenapi.task.get_status(task) == "pending":
    progress = session.xenapi.task.get_progress(task)
    update_progress_bar(progress)
    time.sleep(1)
session.xenapi.task.destroy(task)
Note
A well-behaved client should remember to delete tasks created by asynchronous operations when it has finished reading the result or error. If the number of tasks exceeds a built-in threshold then the server will delete the oldest of the completed tasks.
Subscribing to and listening for events
With the exception of the task and metrics classes, whenever an object is modified the server generates an event. Clients can subscribe to this event stream on a per-class basis and receive updates rather than resorting to frequent polling. Events come in three types:
- add - generated when an object has been created;
- del - generated immediately before an object is destroyed; and
- mod - generated when an object’s field has changed.
Events also contain a monotonically increasing ID, the name of the class of object and a snapshot of the object state equivalent to the result of a get_record()
.
Clients register for events by calling event.register()
with a list of class names or the special string “*”. Clients receive events by executing event.next()
which blocks until events are available and returns the new events.
Note
Since the queue of generated events on the server is of finite length a very slow client might fail to read the events fast enough; if this happens an
EVENTS_LOST
error is returned. Clients should be prepared to handle this by re-registering for events and checking that the condition they are waiting for hasn’t become true while they were unregistered.
The following Python code fragment demonstrates how to print a summary of every event generated by a system (similar code exists in XenServer-SDK/XenServerPython/samples/watch-all-events.py).
fmt = "%8s %20s %5s %s"
session.xenapi.event.register(["*"])
while True:
    try:
        for event in session.xenapi.event.next():
            name = "(unknown)"
            if "snapshot" in event:
                snapshot = event["snapshot"]
                if "name_label" in snapshot:
                    name = snapshot["name_label"]
            print(fmt % (event['id'], event['class'], event['operation'], name))
    except XenAPI.Failure as e:
        if e.details == ["EVENTS_LOST"]:
            print("Caught EVENTS_LOST; should reregister")
Language bindings
C
The SDK includes the source to the C language binding in the directory XenServer-SDK/libxenserver/src
together with a Makefile which compiles the binding into a library. Every API object is associated with a header file which contains declarations for all that object’s API functions; for example the type definitions and functions required to invoke VM operations are all contained in xen_vm.h
.
C binding dependencies
The following simple examples are included with the C bindings:
- test_vm_async_migrate: demonstrates how to use asynchronous API calls to migrate running VMs from a slave host to the pool master;
- test_vm_ops: demonstrates how to query the capabilities of a host, create a VM, attach a fresh blank disk image to the VM and then perform various powercycle operations;
- test_failures: demonstrates how to translate error strings into enum_xen_api_failure, and vice versa;
- test_event_handling: demonstrates how to listen for events on a connection;
- test_enumerate: demonstrates how to enumerate the various API objects.
C#
The C# bindings are contained within the directory XenServer-SDK/XenServer.NET
and include project files suitable for building under Microsoft Visual Studio. Every API object is associated with one C# file; for example the functions implementing the VM operations are contained within the file VM.cs
.
C# binding dependencies
Three examples are included with the C# bindings in the directory XenServer-SDK/XenServer.NET/samples
as separate projects of the XenSdkSample.sln
solution:
- GetVariousRecords: logs into a XenServer Host and displays information about hosts, storage and virtual machines;
- GetVmRecords: logs into a XenServer Host and lists all the VM records;
- VmPowerStates: logs into a XenServer Host, finds a VM and takes it through the various power states. Requires a shut-down VM to be already installed.
Java
The Java bindings are contained within the directory XenServer-SDK/XenServerJava. Every API object is associated with one Java file; for example the functions implementing the VM operations are contained within the file VM.java
.
Java binding dependencies
Running the main file XenServer-SDK/XenServerJava/samples/RunTests.java
will run a series of examples included in the same directory:
- AddNetwork: Adds a new internal network not attached to any NICs;
- SessionReuse: Demonstrates how a Session object can be shared between multiple Connections;
- AsyncVMCreate: Asynchronously makes a new VM from a built-in template, starts and stops it;
- VdiAndSrOps: Performs various SR and VDI tests, including creating a dummy SR;
- CreateVM: Creates a VM on the default SR with a network and DVD drive;
- DeprecatedMethod: Tests that a warning is displayed when a deprecated API method is called;
- GetAllRecordsOfAllTypes: Retrieves all the records for all types of objects;
- SharedStorage: Creates a shared NFS SR;
- StartAllVMs: Connects to a host and tries to start each VM on it.
PowerShell
The PowerShell bindings are contained within the directory XenServer-SDK/XenServerPowerShell
. We provide the PowerShell module XenServerPSModule
and source code exposing the XenServer API as Windows PowerShell cmdlets.
PowerShell binding dependencies
These example scripts are included with the PowerShell bindings in the directory XenServer-SDK/XenServerPowerShell/samples
:
- AutomatedTestCore.ps1: demonstrates how to log into a XenServer host, create a storage repository and a VM, and then perform various powercycle operations;
- HttpTest.ps1: demonstrates how to log into a XenServer host, create a VM, and then perform operations such as VM importing and exporting, patch upload, and retrieval of performance statistics.
Python
The Python bindings are contained within a single file: XenServer-SDK/XenServerPython/XenAPI.py
.
Python binding dependencies
| | |
|:--|:--|
| Platform supported: | Linux |
| Library: | XenAPI.py |
| Dependencies: | None |
The SDK includes eight Python examples:
- fixpbds.py - reconfigures the settings used to access shared storage;
- install.py - installs a Debian VM, connects it to a network, starts it up and waits for it to report its IP address;
- license.py - uploads a fresh license to a XenServer Host;
- permute.py - selects a set of VMs and uses XenMotion to move them simultaneously between hosts;
- powercycle.py - selects a set of VMs and powercycles them;
- shell.py - a simple interactive shell for testing;
- vm_start_async.py - demonstrates how to invoke operations asynchronously;
- watch-all-events.py - registers for all events and prints details when they occur.
Command Line Interface (CLI)
Besides using raw XML-RPC or one of the supplied language bindings, third-party software developers may integrate with XenServer Hosts by using the XE command line interface xe
. The xe CLI is installed by default on XenServer hosts; a stand-alone remote CLI is also available for Linux. On Windows, the xe.exe
CLI executable is installed along with XenCenter.
CLI dependencies
| | |
|:--|:--|
| Platform supported: | Linux and Windows |
| Library: | None |
| Binary: | xe (xe.exe on Windows) |
| Dependencies: | None |
The CLI allows almost every API call to be directly invoked from a script or other program, silently taking care of the required session management. The XE CLI syntax and capabilities are described in detail in the XenServer Administrator’s Guide. For additional resources and examples, visit the Citrix Knowledge Center.
Note
When running the CLI from a XenServer Host console, tab-completion of both command names and arguments is available.
Complete application examples
This section describes two complete examples of real programs using the API.
Simultaneously migrating VMs using XenMotion
This python example (contained in XenServer-SDK/XenServerPython/samples/permute.py
) demonstrates how to use XenMotion to move VMs simultaneously between hosts in a Resource Pool. The example makes use of asynchronous API calls and shows how to wait for a set of tasks to complete.
The program begins with some standard boilerplate and imports the API bindings module:
import sys, time
import XenAPI
Next the commandline arguments containing a server URL, username, password and a number of iterations are parsed. The username and password are used to establish a session which is passed to the function main
, which is called multiple times in a loop. Note the use of try: finally:
to make sure the program logs out of its session at the end.
if __name__ == "__main__":
    if len(sys.argv) != 5:
        print("Usage:")
        print(sys.argv[0], " <url> <username> <password> <iterations>")
        sys.exit(1)
    url = sys.argv[1]
    username = sys.argv[2]
    password = sys.argv[3]
    iterations = int(sys.argv[4])
    # First acquire a valid session by logging in:
    session = XenAPI.Session(url)
    session.xenapi.login_with_password(username, password, "2.3",
                                       "Example migration-demo v0.1")
    try:
        for i in range(iterations):
            main(session, i)
    finally:
        session.xenapi.session.logout()
The main
function examines each running VM in the system, taking care to filter out control domains (which are part of the system and not controllable by the user). A list of running VMs and their current hosts is constructed.
def main(session, iteration):
    # Find the non-template, non-control-domain VMs which are running
    all_vms = session.xenapi.VM.get_all()
    vms = []
    hosts = []
    for vm in all_vms:
        record = session.xenapi.VM.get_record(vm)
        if not record["is_a_template"] and \
           not record["is_control_domain"] and \
           record["power_state"] == "Running":
            vms.append(vm)
            hosts.append(record["resident_on"])
    print("%d: Found %d suitable running VMs" % (iteration, len(vms)))
Next the list of hosts is rotated:
# use a rotation as a permutation
hosts = [hosts[-1]] + hosts[:(len(hosts)-1)]
Each VM is then moved using XenMotion to the new host under this rotation (i.e. a VM running on the host at position 2 in the list will be moved to the host at position 1 in the list, etc.). In order to execute each of the movements in parallel, the asynchronous version of VM.pool_migrate is used and a list of task references is constructed. Note the live flag passed to VM.pool_migrate; this causes the VMs to be moved while they are still running.
tasks = []
for i in range(0, len(vms)):
    vm = vms[i]
    host = hosts[i]
    task = session.xenapi.Async.VM.pool_migrate(vm, host, { "live": "true" })
    tasks.append(task)
The list of tasks is then polled for completion:
finished = False
records = {}
while not finished:
    finished = True
    for task in tasks:
        record = session.xenapi.task.get_record(task)
        records[task] = record
        if record["status"] == "pending":
            finished = False
    time.sleep(1)
Once all tasks have left the pending state (i.e. they have successfully completed, failed or been cancelled) the tasks are polled once more to see if they all succeeded:
allok = True
for task in tasks:
    record = records[task]
    if record["status"] != "success":
        allok = False
If any one of the tasks failed then details are printed, an exception is raised and the task objects are left around for further inspection. If all tasks succeeded then the task objects are destroyed and the function returns.
if not allok:
    print("One of the tasks didn't succeed at",
          time.strftime("%F:%HT%M:%SZ", time.gmtime()))
    idx = 0
    for task in tasks:
        record = records[task]
        vm_name = session.xenapi.VM.get_name_label(vms[idx])
        host_name = session.xenapi.host.get_name_label(hosts[idx])
        print("%s : %12s %s -> %s [ status: %s; result = %s; error = %s ]" %
              (record["uuid"], record["name_label"], vm_name, host_name,
               record["status"], record["result"], repr(record["error_info"])))
        idx = idx + 1
    raise Exception("Task failed")
else:
    for task in tasks:
        session.xenapi.task.destroy(task)
Cloning a VM using the XE CLI
This example is a bash
script which uses the XE CLI to clone a VM, taking care to shut it down first if it is powered on.
The example begins with some boilerplate which first checks if the environment variable XE
has been set: if it has, it is assumed to point to the full path of the CLI; otherwise the XE CLI is assumed to be on the current path. Next the script prompts the user for a server name, username and password:
# Allow the path to the 'xe' binary to be overridden by the XE environment variable
if [ -z "${XE}" ]; then
  XE=xe
fi

if [ ! -e "${HOME}/.xe" ]; then
  read -p "Server name: " SERVER
  read -p "Username: " USERNAME
  read -p "Password: " PASSWORD
  XE="${XE} -s ${SERVER} -u ${USERNAME} -pw ${PASSWORD}"
fi
Next the script checks its commandline arguments. It requires exactly one: the UUID of the VM which is to be cloned:
# Check if there's a VM by the uuid specified
${XE} vm-list params=uuid | grep -q " ${vmuuid}$"
if [ $? -ne 0 ]; then
  echo "error: no vm uuid \"${vmuuid}\" found"
  exit 2
fi
The script then checks the power state of the VM and if it is running, it attempts a clean shutdown. The event system is used to wait for the VM to enter state “Halted”.
Note
The XE CLI supports a command-line argument
--minimal
which causes it to print its output without excess whitespace or formatting, ideal for use from scripts. If multiple values are returned they are comma-separated.
# Check the power state of the vm
name=$(${XE} vm-list uuid=${vmuuid} params=name-label --minimal)
state=$(${XE} vm-list uuid=${vmuuid} params=power-state --minimal)
wasrunning=0

# If the VM state is running, we shutdown the vm first
if [ "${state}" = "running" ]; then
  ${XE} vm-shutdown uuid=${vmuuid}
  ${XE} event-wait class=vm power-state=halted uuid=${vmuuid}
  wasrunning=1
fi
The VM is then cloned and the new VM has its name_label
set to cloned_vm
.
# Clone the VM
newuuid=$(${XE} vm-clone uuid=${vmuuid} new-name-label=cloned_vm)
Finally, if the original VM had been running and was shutdown, both it and the new VM are started.
# If the VM state was running before cloning, we start it again
# along with the new VM.
if [ "$wasrunning" -eq 1 ]; then
  ${XE} vm-start uuid=${vmuuid}
  ${XE} vm-start uuid=${newuuid}
fi
XenAPI Reference
XenAPI Classes
Click on a class to view the associated fields and messages.
Classes, Fields and Messages
Classes have both fields and messages. Messages are either implicit or explicit, where an implicit message is one of:
- a constructor (usually called "create");
- a destructor (usually called "destroy");
- "get_by_name_label";
- "get_by_uuid";
- "get_record";
- "get_all"; and
- "get_all_records".
Explicit messages comprise all the rest: the more class-specific messages (e.g. "VM.start", "VM.clone").
Every field has at least one accessor depending both on its type and whether it is read-only or read-write. Accessors for a field named "X" would be a proper subset of:
- set_X: change the value of field X (only if it is read-write);
- get_X: retrieve the value of field X;
- add_X: add a value (for fields of type set);
- remove_X: remove a value (for fields of type set);
- add_to_X: add a key/value pair (for fields of type map); and
- remove_from_X: remove a key (for fields of type map).
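For example, using the Python bindings (a sketch assuming a logged-in `session` and a VM reference `vm`; the helper name is ours): VM.name_label is a RW string field and VM.other_config a RW (string -> string) map.

```python
def tag_vm(session, vm, key, value):
    """Illustrate the accessor naming scheme on two VM fields."""
    session.xenapi.VM.set_name_label(vm, "tagged: " + key)  # set_X (RW field)
    session.xenapi.VM.add_to_other_config(vm, key, value)   # add_to_X (map field)
    return session.xenapi.VM.get_other_config(vm)           # get_X
```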
Subsections of XenAPI Reference
auth
blob
Bond
Certificate
Cluster
Cluster_host
console
crashdump
data_source
DR_task
event
Feature
GPU_group
host
host_cpu
host_crashdump
host_metrics
host_patch
LVHD
message
network
network_sriov
Observer
PBD
PCI
PGPU
PIF
PIF_metrics
pool
pool_patch
pool_update
probe_result
PUSB
PVS_cache_storage
PVS_proxy
PVS_server
PVS_site
Repository
role
SDN_controller
secret
session
SM
SR
sr_stat
subject
task
tunnel
USB_group
user
VBD
VBD_metrics
VDI
vdi_nbd_server_info
VGPU
VGPU_type
VIF
VIF_metrics
VLAN
VM
VM_appliance
VM_guest_metrics
VM_metrics
VMPP
VMSS
VTPM
VUSB
XenAPI Releases
- XAPI 24.16.0
- XAPI 24.14.0
- XAPI 24.10.0
- XAPI 24.3.0
- XAPI 24.0.0
- XAPI 23.30.0
- XAPI 23.27.0
- XAPI 23.25.0
- XAPI 23.18.0
- XAPI 23.14.0
- XAPI 23.9.0
- XAPI 23.1.0
- XAPI 22.37.0
- XAPI 22.33.0
- XAPI 22.27.0
- XAPI 22.26.0
- XAPI 22.20.0
- XAPI 22.19.0
- XAPI 22.16.0
- XAPI 22.12.0
- XAPI 22.5.0
- XAPI 21.4.0
- XAPI 21.3.0
- XAPI 21.2.0
- XAPI 1.329.0
- XAPI 1.318.0
- XAPI 1.313.0
- XAPI 1.307.0
- XAPI 1.304.0
- XAPI 1.303.0
- XAPI 1.301.0
- XAPI 1.298.0
- XAPI 1.297.0
- XAPI 1.294.0
- XAPI 1.290.0
- XAPI 1.271.0
- XAPI 1.257.0
- XAPI 1.250.0
- XenServer 8 Preview
- Citrix Hypervisor 8.2 Hotfix 2
- Citrix Hypervisor 8.2
- Citrix Hypervisor 8.1
- Citrix Hypervisor 8.0
- XenServer 7.6
- XenServer 7.5
- XenServer 7.4
- XenServer 7.3
- XenServer 7.2
- XenServer 7.1
- XenServer 7.0
- XenServer 6.5 SP1 Hotfix 31
- XenServer 6.5 SP1
- XenServer 6.5
- XenServer 6.2 SP1 Hotfix 11
- XenServer 6.2 SP1 Hotfix 4
- XenServer 6.2 SP1
- XenServer 6.2 SP1 Tech-Preview
- XenServer 6.2
- XenServer 6.1
- XenServer 6.0
- XenServer 5.6 FP1
- XenServer 5.6
- XenServer 5.5
- XenServer 5.0 Update 1
- XenServer 5.0
- XenServer 4.1.1
- XenServer 4.1
- XenServer 4.0
Topics
Subsections of Topics
API for configuring the udhcp server in Dom0
This API allows you to configure the DHCP service running on the Host Internal Management Network (HIMN). The API configures a udhcp daemon residing in Dom0 and alters the service configuration for any VM using the network.
Callers who modify the default configuration should therefore be aware that their changes may have an adverse effect on other consumers of the HIMN.
Version history
| Date | State |
|------|-------|
| 2013-03-15 | Stable |
Stable: this API is considered stable and unlikely to change between software versions and between hotfixes.
API description
The API for configuring the network is based on a series of other_config keys that can be set by the caller on the HIMN XAPI network object. Once any of the keys below have been set, the caller must ensure that any VIFs attached to the HIMN are removed, destroyed, created and plugged.
ip_begin
The first IP address in the desired subnet that the caller wishes the DHCP service to use.
ip_end
The last IP address in the desired subnet that the caller wishes the DHCP service to use.
netmask
The subnet mask for each of the issued IP addresses.
ip_disable_gw
A boolean key for disabling the DHCP server from returning a default gateway for VMs on the network. To disable returning the gateway address, set the key to "true".
Note: By default, the DHCP server will issue a default gateway for those requesting an address. Setting this key may disrupt applications that require the default gateway for communicating with Dom0 and so should be used with care.
Example code
An example python extract of setting the config for the network:
def get_himn_ref():
    networks = session.xenapi.network.get_all_records()
    for ref, rec in networks.items():
        if 'is_host_internal_management_network' in rec['other_config']:
            return ref
    raise Exception("Error: unable to find HIMN.")
himn_ref = get_himn_ref()
other_config = session.xenapi.network.get_other_config(himn_ref)
other_config['ip_begin'] = "169.254.0.1"
other_config['ip_end'] = "169.254.255.254"
other_config['netmask'] = "255.255.0.0"
session.xenapi.network.set_other_config(himn_ref, other_config)
An example for how to disable the server returning a default gateway:
himn_ref = get_himn_ref()
other_config = session.xenapi.network.get_other_config(himn_ref)
other_config['ip_disable_gw'] = "true"
session.xenapi.network.set_other_config(himn_ref, other_config)
Guest agents
“Guest agents” are special programs which run inside VMs and which can be controlled via the XenAPI.
One communication method between XenAPI clients and guest agents is via Xenstore.
Adding Xenstore entries to VMs
Developers may wish to install guest agents into VMs which take special action based on the type of the VM. In order to communicate this information into the guest, a special Xenstore name-space known as vm-data
is available which is populated at VM creation time. It is populated from the xenstore-data
map in the VM record.
Set the xenstore-data
parameter in the VM record:
xe vm-param-set uuid=<vm-uuid> xenstore-data:vm-data/foo=bar
Start the VM.
If it is a Linux-based VM, install the COMPANY_TOOLS and use the xenstore-read command to verify that the node exists in Xenstore.
Note
Only prefixes beginning with
vm-data
are permitted, and anything not in this name-space will be silently ignored when starting the VM.
Memory
Memory is used for many things:
- the hypervisor code: this is the Xen executable itself
- the hypervisor heap: this is needed for per-domain structures and per-vCPU structures
- the crash kernel: this is needed to collect information after a host crash
- domain RAM: this is the memory the VM believes it has
- shadow memory: for HVM guests running on hosts without hardware assisted paging (HAP) Xen uses shadow to optimise page table updates. For all guests shadow is used during live migration for tracking the memory transfer.
- video RAM for the virtual graphics card
Some of these are constants (e.g. hypervisor code) while some depend on the VM configuration (e.g. domain RAM). Xapi calls the constant part the “host overhead” and the part that varies with the VM configuration the “VM overhead”. These overheads are subtracted from the free memory on the host when starting, resuming and migrating VMs.
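As a toy illustration of this bookkeeping (the figures below are made up; the real overhead formulas live inside xapi):

```python
# All values in MiB and purely illustrative.
host_free = 8192
host_overhead = 512   # hypervisor code and heap, crash kernel, ...
vm_ram = 4096         # domain RAM the VM believes it has
vm_overhead = 64      # shadow memory, video RAM, ...

# The VM can start only if its RAM plus its VM overhead fits in the free
# memory that remains once the host overhead is subtracted:
can_start = vm_ram + vm_overhead <= host_free - host_overhead
```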
Metrics
xcp-rrdd records statistics about the host and the VMs running on top. The metrics are stored persistently for long-term access and analysis of historical trends. Statistics are stored in RRDs (Round Robin Databases). RRDs are fixed-size structures that store time series with decreasing time resolution: the older the data point is, the longer the timespan it represents. ‘Data sources’ are sampled every few seconds and points are added to the highest resolution RRD. Periodically each high-frequency RRD is ‘consolidated’ (e.g. averaged) to produce a data point for a lower-frequency RRD.
RRDs are resident on the host on which the VM is running, or the pool coordinator when the VM is not running. The RRDs are backed up every day.
Granularity
Statistics are persisted for a maximum of one year, and are stored at different granularities. The average and most recent values are stored at intervals of:
- five seconds for the past ten minutes
- one minute for the past two hours
- one hour for the past week
- one day for the past year
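The consolidation step can be sketched as follows (the function is illustrative, not xcp-rrdd’s actual code):

```python
def consolidate(samples, points_per_row):
    """Average consecutive groups of high-resolution samples into one
    lower-resolution data point, RRD-style."""
    return [sum(samples[i:i + points_per_row]) / points_per_row
            for i in range(0, len(samples) - points_per_row + 1, points_per_row)]

# Twelve 5-second samples cover one minute:
minute_avg = consolidate([2.0, 4.0] * 6, 12)   # [3.0]
```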
RRDs are saved to disk as uncompressed XML. The size of each RRD when written to disk ranges from 200KiB to approximately 1.2MiB when the RRD stores the full year of statistics.
By default each RRD contains only averaged data to save storage space. To record minimum and maximum values in future RRDs, set the pool-wide flag:
xe pool-param-set uuid=<pool-uuid> other-config:create_min_max_in_new_VM_RRDs=true
Downloading
Statistics can be downloaded over HTTP in XML or JSON format, for example
using wget
.
See rrddump and
rrdxport for information
about the XML format.
The JSON format has the same structure as the XML.
Parameters are appended to the URL following a question mark (?) and separated
by ampersands (&).
HTTP authentication can take the form of a username and password or a session
token in a URL parameter.
Statistics may be downloaded all at once, including all history, or as deltas suitable for interactive graphing.
Downloading statistics all at once
To obtain a full dump of RRD data for a host use:
wget "http://hostname/host_rrd?session_id=OpaqueRef:43df3204-9360-c6ab-923e-41a8d19389ba"
where the session token has been fetched from the server using the API.
For example, using Python’s XenAPI library:
import XenAPI
username = "root"
password = "actual_password"
url = "http://hostname"
session = XenAPI.Session(url)
session.xenapi.login_with_password(username, password, "1.0", "session_getter")
print(session._session)  # the session token, used as the session_id URL parameter
A URL parameter is used to decide which format to return: XML is returned by
default, adding the parameter json
makes the server return JSON.
Starting from xapi version 23.17.0, the server uses the HTTP header Accept
to decide which format to return.
When both formats are accepted, for example using */*, JSON is returned.
Note that wget and curl send this Accept header value by default; when using them the default behaviour will therefore change, and the Accept header needs to be overridden to make the server return XML.
The content type is provided in the response’s headers in these newer versions.
The XML RRD data is in the format used by rrdtool and looks like this:
<?xml version="1.0"?>
<rrd>
<version>0003</version>
<step>5</step>
<lastupdate>1213616574</lastupdate>
<ds>
<name>memory_total_kib</name>
<type>GAUGE</type>
<minimal_heartbeat>300.0000</minimal_heartbeat>
<min>0.0</min>
<max>Infinity</max>
<last_ds>2070172</last_ds>
<value>9631315.6300</value>
<unknown_sec>0</unknown_sec>
</ds>
<ds>
<!-- other dss - the order of the data sources is important
and defines the ordering of the columns in the archives below -->
</ds>
<rra>
<cf>AVERAGE</cf>
<pdp_per_row>1</pdp_per_row>
<params>
<xff>0.5000</xff>
</params>
<cdp_prep> <!-- This is for internal use -->
<ds>
<primary_value>0.0</primary_value>
<secondary_value>0.0</secondary_value>
<value>0.0</value>
<unknown_datapoints>0</unknown_datapoints>
</ds>
...other dss - internal use only...
</cdp_prep>
<database>
<row>
<v>2070172.0000</v> <!-- columns correspond to the DSs defined above -->
<v>1756408.0000</v>
<v>0.0</v>
<v>0.0</v>
<v>732.2130</v>
<v>0.0</v>
<v>782.9186</v>
<v>0.0</v>
<v>647.0431</v>
<v>0.0</v>
<v>0.0001</v>
<v>0.0268</v>
<v>0.0100</v>
<v>0.0</v>
<v>615.1072</v>
</row>
...
</rra>
... other archives ...
</rrd>
To obtain a full dump of RRD data of a VM with uuid x
:
wget "http://hostname/vm_rrd?session_id=<token>&uuid=x"
Note that it is quite expensive to download full RRDs as they contain lots of historical information. For interactive displays clients should download deltas instead.
Downloading deltas
To obtain an update of all VM statistics on a host, the URL would be of the form:
wget "https://hostname/rrd_updates?session_id=<token>&start=<secondssinceepoch>"
This request returns data in an rrdtool xport
style XML format, for every VM
resident on the particular host that is being queried.
To differentiate which column in the export is associated with which VM, the
legend
field is prefixed with the UUID of the VM.
An example rrd_updates
output:
<xport>
<meta>
<start>1213578000</start>
<step>3600</step>
<end>1213617600</end>
<rows>12</rows>
<columns>12</columns>
<legend>
<entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry> <!-- nb - each data source might have multiple entries for different consolidation functions -->
<entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
<entry>AVERAGE:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
<entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
<entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
<entry>MIN:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
<entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
<entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
<entry>MAX:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
<entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu1</entry>
<entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:cpu0</entry>
<entry>LAST:vm:ecd8d7a0-1be3-4d91-bd0e-4888c0e30ab3:memory</entry>
</legend>
</meta>
<data>
<row>
<t>1213617600</t>
<v>0.0</v> <!-- once again, the order of the columns is defined by the legend above -->
<v>0.0282</v>
<v>209715200.0000</v>
<v>0.0</v>
<v>0.0201</v>
<v>209715200.0000</v>
<v>0.0</v>
<v>0.0445</v>
<v>209715200.0000</v>
<v>0.0</v>
<v>0.0243</v>
<v>209715200.0000</v>
</row>
...
</data>
</xport>
To obtain host updates too, use the query parameter host=true
:
wget "http://hostname/rrd_updates?session_id=<token>&start=<secondssinceepoch>&host=true"
The step will decrease as the period decreases, which means that if you request statistics for a shorter time period you will get more detailed statistics.
To download updates containing only the averages, or minimums or maximums,
add the parameter cf=AVERAGE|MIN|MAX
(note case is important) e.g.
wget "http://hostname/rrd_updates?session_id=<token>&start=0&cf=MAX"
To request a different update interval, add the parameter interval=seconds
e.g.
wget "http://hostname/rrd_updates?session_id=<token>&start=0&interval=5"
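These parameters can be combined; a small helper for building the URL might look like this (our own sketch, with placeholder host and token values):

```python
from urllib.parse import urlencode

def rrd_updates_url(host, token, start, host_stats=False, cf=None, interval=None):
    """Build an rrd_updates URL from the query parameters described above."""
    params = {"session_id": token, "start": start}
    if host_stats:
        params["host"] = "true"      # include host datasources too
    if cf is not None:
        params["cf"] = cf            # AVERAGE, MIN or MAX (case matters)
    if interval is not None:
        params["interval"] = interval
    return "https://%s/rrd_updates?%s" % (host, urlencode(params))
```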
Snapshots
Snapshots represent the state of a VM, or a disk (VDI) at a point in time. They can be used for:
- backups (hourly, daily, weekly etc)
- experiments (take snapshot, try something, revert back again)
- golden images (install OS, get it just right, clone it 1000s of times)
Read more about Snapshots: the High-Level Feature.
Taking a VDI snapshot
To take a snapshot of a single disk (VDI):
snapshot_vdi <- VDI.snapshot(session_id, vdi, driver_params)
where vdi
is the reference to the disk to be snapshotted, and driver_params
is a list of string pairs providing optional backend implementation-specific hints.
The snapshot operation should be quick (i.e. it should never be implemented as
a slow disk copy) and the resulting VDI will have
Field name | Description |
---|---|
is_a_snapshot | a flag, set to true, indicating the disk is a snapshot |
snapshot_of | a reference to the disk the snapshot was created from |
snapshot_time | the time the snapshot was taken |
The resulting snapshot should be considered read-only. Depending on the backend implementation it may be technically possible to write to the snapshot, but clients must not do this. To create a writable disk from a snapshot, see “restoring from a snapshot” below.
Note that the storage backend is free to implement this in different ways. We do not assume the presence of a .vhd-formatted storage repository. Clients must never assume anything about the backend implementation without checking first with the maintainers of the backend implementation.
Restoring to a VDI snapshot
To restore from a VDI snapshot first
new_vdi <- VDI.clone(session_id, snapshot_vdi, driver_params)
where snapshot_vdi
is a reference to the snapshot VDI, and driver_params
is a list of string pairs providing optional backend implementation-specific hints.
The clone operation should be quick (i.e. it should never be implemented as
a slow disk copy) and the resulting VDI will have
Field name | Description |
---|---|
is_a_snapshot | a flag, set to false, indicating the disk is not a snapshot |
snapshot_of | an invalid reference |
snapshot_time | an invalid time |
The resulting disk is writable and can be used by the client as normal.
Note that the “restored” VDI will have a different VDI.uuid and reference from the original VDI.
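Putting the two calls together in Python (a sketch; `session` is a logged-in session, `vdi` an existing VDI reference, and the function name is ours):

```python
def restore_point(session, vdi):
    # Take a read-only snapshot of the disk (fast, never a full copy)...
    snapshot_vdi = session.xenapi.VDI.snapshot(vdi, {})
    # ...and later clone it to obtain a writable disk again.
    writable_vdi = session.xenapi.VDI.clone(snapshot_vdi, {})
    return snapshot_vdi, writable_vdi
```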
Taking a VM snapshot
A VM snapshot is a copy of the VM metadata and a snapshot of all the associated VDIs at around the same point in time. To take a VM snapshot:
snapshot_vm <- VM.snapshot(session_id, vm, new_name)
where vm
is a reference to the existing VM and new_name
will be the name_label
of the resulting VM (snapshot) object. The resulting VM will have
Field name | Description |
---|---|
is_a_snapshot | a flag, set to true, indicating the VM is a snapshot |
snapshot_of | a reference to the VM the snapshot was created from |
snapshot_time | the time the snapshot was taken |
Note that each disk is snapshotted one-by-one and not at the same time.
Restoring to a VM snapshot
A VM can be reverted to a snapshot using
VM.revert(session_id, snapshot_ref)
where snapshot_ref
is a reference to the snapshot VM. Each VDI associated with
the VM before the snapshot will be destroyed and each VDI associated with the
snapshot will be cloned (see “Restoring to a VDI snapshot” above) and associated
with the VM. The resulting VM will have
Field name | Description |
---|---|
is_a_snapshot | a flag, set to false, indicating the VM is not a snapshot |
snapshot_of | an invalid reference |
snapshot_time | an invalid time |
Note that the VM.uuid
and reference are preserved, but the VDI.uuid
and
VDI references are not.
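The snapshot and revert calls above can be sketched with the Python bindings. As before, `session` is assumed to be an authenticated `XenAPI.Session`; the wrapper names are hypothetical.

```python
def take_vm_snapshot(session, vm, new_name):
    """Snapshot a VM's metadata and all its disks.

    Returns a reference to the new snapshot VM object, whose
    name_label will be new_name.
    """
    return session.xenapi.VM.snapshot(vm, new_name)

def revert_vm(session, snapshot_ref):
    """Revert the original VM to the state captured in snapshot_ref.

    The VM.uuid and reference are preserved; the VDIs are replaced by
    clones of the snapshot's VDIs.
    """
    session.xenapi.VM.revert(snapshot_ref)
```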
Downloading a disk or snapshot
Disks can be downloaded in either raw or vhd format using an HTTP 1.0 GET request as follows:
GET /export_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=%s[&base=%s] HTTP/1.0\r\n
Connection: close\r\n
\r\n
\r\n
where
- session_id is a currently logged-in session
- task_id is a Task reference which will be used to monitor the progress of this task and to receive errors from it
- vdi is the reference of the VDI to be exported
- format is either vhd or raw
- (optional) base is the reference of a VDI which has already been exported; the export will then contain only the blocks which have changed since then
Note that the vhd format allows the disk to be sparse i.e. only contain allocated blocks. This helps reduce the size of the download.
The xapi-project/xen-api repo has a Python download example.
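A minimal sketch of issuing the GET request follows (the repo example linked above is more complete). The helper names are hypothetical, and the opaque references are assumed to come from prior API calls.

```python
import shutil
import urllib.parse
import urllib.request

def export_vdi_uri(host, session_id, task_id, vdi, fmt, base=None):
    """Build the export_raw_vdi URI described above."""
    params = {"session_id": session_id, "task_id": task_id,
              "vdi": vdi, "format": fmt}
    if base is not None:
        # Incremental export: only blocks changed since `base`.
        params["base"] = base
    return "https://%s/export_raw_vdi?%s" % (host, urllib.parse.urlencode(params))

def download_vdi(uri, filename):
    """Stream the exported disk image into a local file."""
    with urllib.request.urlopen(uri) as resp, open(filename, "wb") as out:
        shutil.copyfileobj(resp, out)
```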
Uploading a disk or snapshot
Disks can be uploaded in either raw or vhd format using an HTTP 1.0 PUT request as follows:
PUT /import_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=%s HTTP/1.0\r\n
Connection: close\r\n
\r\n
\r\n
where
- session_id is a currently logged-in session
- task_id is a Task reference which will be used to monitor the progress of this task and to receive errors from it
- vdi is the reference of the VDI into which the data will be imported
- format is either vhd or raw
Note that you must create the disk (with the correct size) before importing data to it. The disk doesn’t have to be empty; in fact, if restoring from a series of incremental downloads it makes sense to upload them all to the same disk in order.
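The upload side can be sketched symmetrically to the download. This is a hedged sketch, not a complete client: the helper names are hypothetical and TLS certificate handling is omitted.

```python
import http.client

def import_vdi_path(session_id, task_id, vdi, fmt):
    """Build the import_raw_vdi path described above."""
    return "/import_raw_vdi?session_id=%s&task_id=%s&vdi=%s&format=%s" % (
        session_id, task_id, vdi, fmt)

def upload_vdi(host, session_id, task_id, vdi, fmt, filename):
    """PUT a local image file into an existing, correctly sized VDI."""
    conn = http.client.HTTPSConnection(host)
    with open(filename, "rb") as f:
        conn.request("PUT", import_vdi_path(session_id, task_id, vdi, fmt),
                     body=f, headers={"Connection": "close"})
    # Progress and errors should be monitored via the Task object.
    return conn.getresponse().status
```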
Example: incremental backup with xe
This section will show how easy it is to build an incremental backup
tool using these APIs. For simplicity we will use the xe
commands
rather than raw XMLRPC and HTTP.
For a VDI with uuid $VDI, take a snapshot:
FULL=$(xe vdi-snapshot uuid=$VDI)
Next perform a full backup into a file “full.vhd”, in vhd format:
xe vdi-export uuid=$FULL filename=full.vhd format=vhd --progress
If the SR was using the vhd format internally (this is the default) then the full backup will be sparse and will only contain blocks that have been written to.
After some time has passed and the VDI has been written to, take another snapshot:
DELTA=$(xe vdi-snapshot uuid=$VDI)
Now we can backup only the disk blocks which have changed between the original snapshot $FULL and the next snapshot $DELTA into a file called “delta.vhd”:
xe vdi-export uuid=$DELTA filename=delta.vhd format=vhd base=$FULL --progress
We now have 2 files on the local system:
- “full.vhd”: a complete backup of the first snapshot
- “delta.vhd”: an incremental backup of the second snapshot, relative to the first
For example:
test $ ls -lh *.vhd
-rw------- 1 dscott xendev 213M Aug 15 10:39 delta.vhd
-rw------- 1 dscott xendev 8.0G Aug 15 10:39 full.vhd
To restore the original snapshot you must create an empty disk with the
correct size. To find the size of a .vhd file use qemu-img
as follows:
test $ qemu-img info delta.vhd
image: delta.vhd
file format: vpc
virtual size: 24G (25769705472 bytes)
disk size: 212M
Here the size is 25769705472 bytes. Create a fresh VDI in SR $SR to restore the backup as follows:
SIZE=25769705472
RESTORE=$(xe vdi-create name-label=restored virtual-size=$SIZE sr-uuid=$SR type=user)
then import “full.vhd” into it:
xe vdi-import uuid=$RESTORE filename=full.vhd format=vhd --progress
Once “full.vhd” has been imported, the incremental backup can be restored on top:
xe vdi-import uuid=$RESTORE filename=delta.vhd format=vhd --progress
Note there is no need to supply a “base” parameter when importing; Xapi will treat the “vhd differencing disk” as a set of blocks and import them. It is up to you to check you are importing them to the right place.
Now the VDI $RESTORE should have the same contents as $DELTA.
VM consoles
Most XenAPI graphical interfaces will want to gain access to the VM consoles, in order to render them to the user as if they were physical machines. There are several types of consoles available, depending on the type of guest or if the physical host console is being accessed:
Types of consoles
Operating System | Text | Graphical | Optimized graphical |
---|---|---|---|
Windows | No | VNC, using an API call | RDP, directly from guest |
Linux | Yes, through VNC and an API call | No | VNC, directly from guest |
Physical Host | Yes, through VNC and an API call | No | No |
Hardware-assisted VMs, such as Windows, directly provide a graphical console over VNC. There is no text-based console, and guest networking is not necessary to use the graphical console. Once guest networking has been established, it is more efficient to set up Remote Desktop Access and use an RDP client to connect directly (this must be done outside of the XenAPI).
Paravirtual VMs, such as Linux guests, provide a native text console directly. XenServer provides a utility (called vncterm
) to convert this text-based console into a graphical VNC representation. Guest networking is not necessary for this console to function. As with Windows above, Linux distributions often configure VNC within the guest, and directly connect to it over a guest network interface.
The physical host console is only available as a vt100
console, which is exposed through the XenAPI as a VNC console by using vncterm
in the control domain.
RFB (Remote Framebuffer) is the protocol which underlies VNC, specified in The RFB Protocol. Third-party developers are expected to provide their own VNC viewers, and many freely available implementations can be adapted for this purpose. RFB 3.3 is the minimum version which viewers must support.
Retrieving VNC consoles using the API
VNC consoles are retrieved using a special URL passed through to the host agent. The sequence of API calls is as follows:
1. Client to Master/443: XML-RPC: Session.login_with_password().
2. Master/443 to Client: Returns a session reference to be used with subsequent calls.
3. Client to Master/443: XML-RPC: VM.get_by_name_label().
4. Master/443 to Client: Returns a reference to a particular VM (or the “control domain” if you want to retrieve the physical host console).
5. Client to Master/443: XML-RPC: VM.get_consoles().
6. Master/443 to Client: Returns a list of console objects associated with the VM.
7. Client to Master/443: XML-RPC: console.get_location().
8. Master/443 to Client: Returns a URI describing where the requested console is located. The URIs are of the form: https://192.168.0.1/console?ref=OpaqueRef:c038533a-af99-a0ff-9095-c1159f2dc6a0
9. Client to 192.168.0.1: HTTP CONNECT “/console?ref=(…)”
The final HTTP CONNECT is slightly non-standard since the HTTP/1.1 RFC specifies that it should only be a host and a port, rather than a URL. Once the HTTP connect is complete, the connection can subsequently directly be used as a VNC server without any further HTTP protocol action.
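The RPC portion of this sequence can be sketched with the Python bindings. This assumes `session` is already logged in, takes the first VM matching the name and its first console, and uses the console class's location field; the function name is hypothetical.

```python
def console_location(session, vm_name):
    """Return the location URI of the first console of the named VM."""
    # get_by_name_label returns a list; names need not be unique.
    vm = session.xenapi.VM.get_by_name_label(vm_name)[0]
    consoles = session.xenapi.VM.get_consoles(vm)
    # The location field lives on the console object.
    return session.xenapi.console.get_location(consoles[0])
```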
This scheme requires direct access from the client to the control domain’s IP, and will not work correctly if there are Network Address Translation (NAT) devices blocking such connectivity. You can use the CLI to retrieve the console URI from the client and perform a connectivity check.
Retrieve the VM UUID by running:
$ VM=$(xe vm-list params=uuid --minimal name-label=<name>)
Retrieve the console information:
$ xe console-list vm-uuid=$VM
uuid ( RO) : 8013b937-ff7e-60d1-ecd8-e52d66c5879e
vm-uuid ( RO): 2d7c558a-8f03-b1d0-e813-cbe7adfa534c
vm-name-label ( RO): 6
protocol ( RO): RFB
location ( RO): https://10.80.228.30/console?uuid=8013b937-ff7e-60d1-ecd8-e52d66c5879e
Use command-line utilities like ping
to test connectivity to the IP address provided in the location
field.
Disabling VNC forwarding for Linux VM
When creating and destroying Linux VMs, the host agent automatically manages the vncterm
processes which convert the text console into VNC. Advanced users who wish to access the text console directly can disable VNC forwarding for that VM. The text console can then only be accessed directly from the control domain, and graphical interfaces such as XenCenter will not be able to render a console for that VM.
Before starting the guest, set the following parameter on the VM record:
$ xe vm-param-set uuid=$VM other-config:disable_pv_vnc=1
Start the VM.
Use the CLI to retrieve the underlying domain ID of the VM with:
$ DOMID=$(xe vm-list params=dom-id uuid=$VM --minimal)
On the host console, connect to the text console directly by:
$ /usr/lib/xen/bin/xenconsole $DOMID
This is an advanced procedure, and we do not recommend using the text console directly for heavy I/O operations. Instead, connect to the guest over SSH or some other network-based connection mechanism.
VM import/export
VMs can be exported to a file and later imported to any Xapi host. The export
protocol is a simple HTTP(S) GET, which should be sent to the Pool master.
Authorization is either via a pre-created session_id
or by HTTP basic
authentication (particularly useful on the command-line).
The VM to export is specified either by UUID or by reference. To keep track of
the export, a task can be created and passed in using its reference. Note that
Xapi may send an HTTP redirect if a different host has better access to the
disk data.
The following arguments are passed as URI query parameters or HTTP cookies:
Argument | Description |
---|---|
session_id | the reference of the session being used to authenticate; required only when not using HTTP basic authentication |
task_id | the reference of the task object with which to keep track of the operation; optional, required only if you have created a task object to keep track of the export |
ref | the reference of the VM; required only if not using the UUID |
uuid | the UUID of the VM; required only if not using the reference |
use_compression | an optional boolean “true” or “false” (defaulting to “false”). If “true” then the output will be gzip-compressed before transmission. |
For example, using the Linux command line tool cURL:
$ curl http://root:foo@myxenserver1/export?uuid=<vm_uuid> -o <exportfile>
will export the specified VM to the file exportfile
.
To export just the metadata, use the URI http://server/export_metadata
.
The import protocol is similar, using HTTP(S) PUT. The session_id
and task_id
arguments are as for the export. The ref
and uuid
are not used; a new reference and uuid will be generated for the VM. There are some additional parameters:
Argument | Description |
---|---|
restore | if true , the import is treated as replacing the original VM; currently this means the MAC addresses on the VIFs are kept exactly as they were in the export, which will lead to conflicts if the original VM is still running |
force | if true , any checksum failures will be ignored (the default is to destroy the VM if a checksum error is detected) |
sr_id | the reference of an SR into which the VM should be imported. The default behavior is to import into the Pool.default_SR |
Note there is no need to specify whether the export is compressed, as Xapi will automatically detect and decompress gzip-encoded streams.
For example, again using cURL:
curl -T <exportfile> http://root:foo@myxenserver2/import
will import the VM to the default SR on the server.
Note
If no default SR has been set, and no sr_uuid
is specified, the error message DEFAULT_SR_NOT_FOUND
is returned.
Another example:
curl -T <exportfile> http://root:foo@myxenserver2/import?sr_id=<ref_of_sr>
will import the VM to the specified SR on the server.
To import just the metadata, use the URI http://server/import_metadata.
Legacy VM Import Format
This section describes the legacy VM import/export format and is for historical interest only. It should be updated to describe the current format, see issue 64
Xapi supports a human-readable legacy VM input format called XVA. This section describes the syntax and structure of XVA.
An XVA consists of a directory containing XML metadata and a set of disk images. A VM represented by an XVA is not intended to be directly executable. Data within an XVA package is compressed and intended for either archiving on permanent storage or for being transmitted to a VM server - such as a XenServer host - where it can be decompressed and executed.
XVA is a hypervisor-neutral packaging format; it should be possible to create simple tools to instantiate an XVA VM on any other platform. XVA does not specify any particular runtime format; for example disks may be instantiated as file images, LVM volumes, QCoW images, VMDK or VHD images. An XVA VM may be instantiated any number of times, each instantiation may have a different runtime format.
XVA does not:
specify any particular serialization or transport format
provide any mechanism for customizing VMs (or templates) on install
address how a VM may be upgraded post-install
define how multiple VMs, acting as an appliance, may communicate
These issues are all addressed by the related Open Virtual Appliance specification.
An XVA is a directory containing, at a minimum, a file called ova.xml
. This file describes the VM contained within the XVA and is described in Section 3.2. Disks are stored within sub-directories and are referenced from the ova.xml. The format of disk data is described later in Section 3.3.
The following terms will be used in the rest of the chapter:
HVM: a mode in which unmodified OS kernels run with the help of virtualization support in the hardware.
PV: a mode in which specially modified “paravirtualized” kernels run explicitly on top of a hypervisor without requiring hardware support for virtualization.
The “ova.xml” file contains the following elements:
<appliance version="0.1">
The number in the attribute “version” indicates the version of this specification to which the XVA is constructed; in this case version 0.1. Inside the <appliance> there is exactly one <vm>: (in the OVA specification, multiple <vm>s are permitted)
<vm name="name">
Each <vm>
element describes one VM. The “name” attribute is for future internal use only and must be unique within the ova.xml file. The “name” attribute is permitted to be any valid UTF-8 string. Inside each <vm> tag are the following compulsory elements:
<label>... text ... </label>
A short name for the VM to be displayed in a UI.
<shortdesc> ... description ... </shortdesc>
A description for the VM to be displayed in the UI. Note that for both <label>
and <shortdesc>
contents, leading and trailing whitespace will be ignored.
<config mem_set="268435456" vcpus="1"/>
The <config>
element has attributes which describe the amount of memory in bytes (mem_set
) and number of CPUs (VCPUs) the VM should have.
Each <vm>
has zero or more <vbd>
elements representing block devices which look like the following:
<vbd device="sda" function="root" mode="w" vdi="vdi_sda"/>
The attributes have the following meanings:
- device: name of the physical device to expose to the VM. For Linux guests we use “sd[a-z]” and for Windows guests we use “hd[a-d]”.
- function: if marked as “root”, this disk will be used to boot the guest. (NB this does not imply the existence of the Linux root i.e. / filesystem.) Only one device should be marked as “root”. See Section 3.4 describing VM booting. Any other string is ignored.
- mode: either “w” or “ro” if the device is to be read/write or read-only
- vdi: the name of the disk image (represented by a <vdi> element) to which this block device is connected
Each <vm>
may have an optional <hacks>
section like the following:
<hacks is_hvm="false" kernel_boot_cmdline="root=/dev/sda1 ro"/>
The <hacks>
element will be removed in future. The attribute is_hvm
is
either true
or false
, depending on whether the VM should be booted in HVM or not.
The kernel_boot_cmdline
contains additional kernel commandline arguments when
booting a guest using pygrub.
In addition to a <vm>
element, the <appliance>
will contain zero or more
<vdi>
elements like the following:
<vdi name="vdi_sda" size="5368709120" source="file://sda" type="dir-gzipped-chunks">
Each <vdi>
corresponds to a disk image. The attributes have the following meanings:
- name: name of the VDI, referenced by the vdi attribute of <vbd> elements. Any valid UTF-8 string is permitted.
- size: size of the required image in bytes
- source: a URI describing where to find the data for the image; only file:// URIs are currently permitted and they must describe paths relative to the directory containing the ova.xml
- type: describes the format of the disk data
A single disk image encoding is currently specified, with type “dir-gzipped-chunks”: each image is represented by a directory containing a sequence of files as follows:
-rw-r--r-- 1 dscott xendev 458286013 Sep 18 09:51 chunk000000000.gz
-rw-r--r-- 1 dscott xendev 422271283 Sep 18 09:52 chunk000000001.gz
-rw-r--r-- 1 dscott xendev 395914244 Sep 18 09:53 chunk000000002.gz
-rw-r--r-- 1 dscott xendev 9452401 Sep 18 09:53 chunk000000003.gz
-rw-r--r-- 1 dscott xendev 1096066 Sep 18 09:53 chunk000000004.gz
-rw-r--r-- 1 dscott xendev 971976 Sep 18 09:53 chunk000000005.gz
-rw-r--r-- 1 dscott xendev 971976 Sep 18 09:53 chunk000000006.gz
-rw-r--r-- 1 dscott xendev 971976 Sep 18 09:53 chunk000000007.gz
-rw-r--r-- 1 dscott xendev 573930 Sep 18 09:53 chunk000000008.gz
Each file (named “chunkXXXXXXXXX.gz”) is a gzipped file containing exactly 1e9 bytes (1GB, not 1GiB) of raw block data. The small size was chosen to be safely under the maximum file size limits of several filesystems. If the files are gunzipped and then concatenated together, the original image is recovered.
Because the import and export of VMs can take some time to complete, an asynchronous HTTP interface to the import and export operations is provided. To perform an export using the XenServer API, construct an HTTP GET call providing a valid session ID, task ID and VM UUID, as shown in the following pseudo code:
task = Task.create()
result = HTTP.get(
server, 80, "/export?session_id=&task_id=&ref=");
For the import operation, use an HTTP PUT call as demonstrated in the following pseudo code:
task = Task.create()
result = HTTP.put(
server, 80, "/import?session_id=&task_id=&ref=");
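The pseudo code above elides the actual parameter values; a more concrete Python sketch of the export side follows. `session` is assumed to be an authenticated `XenAPI.Session`, `session_id` its opaque reference, and the helper names are hypothetical.

```python
import shutil
import urllib.request

def start_export(session, session_id, host, vm_ref):
    """Create a Task and build the export URI for it (sketch)."""
    # task.create(label, description) returns a Task reference which
    # can be polled for progress and errors.
    task = session.xenapi.task.create("export", "VM export")
    uri = "https://%s/export?session_id=%s&task_id=%s&ref=%s" % (
        host, session_id, task, vm_ref)
    return task, uri

def download_export(uri, filename):
    """Stream the export to a local file."""
    with urllib.request.urlopen(uri) as resp, open(filename, "wb") as out:
        shutil.copyfileobj(resp, out)
```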
VM Lifecycle
The following figure shows the states that a VM can be in and the API calls that can be used to move the VM between these states.
VM boot parameters
The VM
class contains a number of fields that control the way in which the VM
is booted. With reference to the fields defined in the VM class (see later in
this document), this section outlines the boot options available and the
mechanisms provided for controlling them.
VM booting is controlled by setting one of the two mutually exclusive groups:
“PV” and “HVM”. If HVM_boot_policy
is an empty string, then paravirtual
domain building and booting will be used; otherwise the VM will be loaded as an
HVM domain and booted using an emulated BIOS.
When paravirtual booting is in use, the PV_bootloader
field indicates the
bootloader to use. It may be “pygrub”, in which case the platform’s default
installation of pygrub will be used, or a full path within the control domain to
some other bootloader. The other fields, PV_kernel
, PV_ramdisk
, PV_args
,
and PV_bootloader_args
will be passed to the bootloader unmodified, and
interpretation of those fields is then specific to the bootloader itself,
including the possibility that the bootloader will ignore some or all of
those given values. Finally the paths of all bootable disks are added to the
bootloader commandline (a disk is bootable if its VBD has the bootable flag set).
There may be zero, one, or many bootable disks; the bootloader decides which
disk (if any) to boot from.
If the bootloader is pygrub, then menu.lst is parsed if present in the
guest’s filesystem; otherwise the specified kernel and ramdisk are used, or an
autodetected kernel is used if nothing is specified and autodetection is
possible. PV_args
is appended to the kernel command line, no matter which
mechanism is used for finding the kernel.
If PV_bootloader
is empty but PV_kernel
is specified, then the kernel and
ramdisk values will be treated as paths within the control domain. If both
PV_bootloader
and PV_kernel
are empty, then the behaviour is as if
PV_bootloader
were specified as “pygrub”.
When using HVM booting, HVM_boot_policy
and HVM_boot_params
specify the boot
handling. Only one policy is currently defined, “BIOS order”. In this case,
HVM_boot_params
should contain one key-value pair “order” = “N” where N is the
string that will be passed to QEMU.
Optionally HVM_boot_params
can contain another key-value pair “firmware”
with values “bios” or “uefi” (default is “bios” if absent).
By default Secure Boot is not enabled; when “uefi” firmware is selected, it can
be enabled by setting VM.platform["secureboot"]
to true.
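Putting the HVM settings together, a hedged sketch using the Python bindings might look like this. The function name and the chosen boot order are illustrative assumptions, and `session` is assumed to be an authenticated `XenAPI.Session`.

```python
def configure_uefi_secureboot(session, vm):
    """Configure a VM for HVM boot with UEFI firmware and Secure Boot.

    The boot order "cd" (disk, then CD) is just an example value for N.
    """
    session.xenapi.VM.set_HVM_boot_policy(vm, "BIOS order")
    session.xenapi.VM.set_HVM_boot_params(vm, {"order": "cd",
                                               "firmware": "uefi"})
    # Secure Boot is controlled via the platform map.
    platform = session.xenapi.VM.get_platform(vm)
    platform["secureboot"] = "true"
    session.xenapi.VM.set_platform(vm, platform)
```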
XenCenter
XenCenter uses some conventions on top of the XenAPI:
Internationalization for SR names
The SRs created at install time now have an other_config
key indicating how their names may be internationalized.
other_config["i18n-key"]
may be one of:

- local-hotplug-cd
- local-hotplug-disk
- local-storage
- xenserver-tools
Additionally, other_config["i18n-original-value-<field name>"]
gives the value of that field when the SR was created. If XenCenter sees a record where SR.name_label
equals other_config["i18n-original-value-name_label"]
(that is, the record has not changed since it was created during XenServer installation), then internationalization will be applied. In other words, XenCenter will disregard the current contents of that field, and instead use a value appropriate to the user’s own language.
If you change SR.name_label
for your own purposes, it will no longer match other_config["i18n-original-value-name_label"]
. XenCenter therefore does not apply internationalization, and instead preserves your given name.
Hiding objects from XenCenter
Networks, PIFs, and VMs can be hidden from XenCenter by adding the key HideFromXenCenter=true
to the other_config
parameter for the object. This capability is intended for ISVs who know what they are doing, not general use by everyday users. For example, you might want to hide certain VMs because they are cloned VMs that shouldn’t be used directly by general users in your environment.
In XenCenter, hidden Networks, PIFs, and VMs can be made visible, using the View menu.