Introduction
The SMAPIv3 storage interface provides an easy way to connect xapi to any storage type.
Who Is This For?
This is for anyone who has a storage system which is not supported by xapi out-of-the-box.
This is also for anyone who wants to manage their storage in a customized way.
If you can make your volumes appear as Linux block devices, or you can refer to the volumes via URIs of the form iscsi://, nfs://, or rbd://, then this documentation is for you.
No Xapi or Xen specific knowledge is required.
Status of This Documentation
This documentation is a draft intended for discussion, which happens through the issues on GitHub.
Learn
Features
The xapi storage interface supports the following features:
- Storage Repositories: collections of Volumes on the same storage substrate
- Volumes may be created, cloned, snapshotted, queried, attached to and detached from VMs
- Storage Repositories can be incrementally probed; for example given only an iSCSI target portal, the IQNs and then the LUNs can be discovered.
- Everything is named by URIs, where the scheme describes the protocol used to access the data (e.g. file, iscsi)
- Implementations expose capabilities and these are reported via the XenAPI
- Dynamic properties (such as space utilisation, I/O bandwidth and latency) can be exposed as "datasources" which are compatible with the xapi toolstack RRD framework.
- The storage implementation can be used to store the xapi HA statefile and database redo logs
- If the storage implementation supports fast clone, then it will work with the xapi "clone on boot" feature
Concepts
When a virtual machine looks at its disk, it sees something which looks like a real physical disk. This is an illusion. In reality the bytes of data written to the "virtual disk" probably reside in a file in a filesystem or in a logical volume on some physical storage system that the VM cannot see.
We call the real physical storage system a Storage Repository (SR).
We call the virtual disks within the SR volumes.
Manipulating volumes
When a VM is installed, a volume will be created. Typically this volume will be deleted when the VM is uninstalled. The Xapi toolstack doesn't know how to manipulate volumes on your storage system directly; instead it delegates to "Volume plugins": implementation-specific plugins which know how to speak the storage-specific APIs. These volume plugins can be anything from simple scripts in domain 0 to sophisticated services running somewhere on the network.
Consider for example a system using Linux LVM, where individual LVs are mapped to VMs as virtual disks. The volume plugin for LVM could implement the Volume.create API by simply calling:
lvcreate -n <name> -L 64GiB -Z n <volume group>
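For illustration, a minimal Python sketch of such a plugin method, shelling out to lvcreate. The lvm:// URI scheme, the volume group handling and the returned uri field are assumptions for this example, not part of the SMAPIv3 specification:

# Sketch only: a hypothetical Volume.create for an LVM-backed SR.
import subprocess
import uuid

def create(dbg, sr, name, description, size):
    vg = sr.rsplit("/", 1)[-1]          # e.g. "lvm:///vg0" -> "vg0" (assumed scheme)
    key = str(uuid.uuid4())             # primary key, unique within the SR
    subprocess.check_call(
        ["lvcreate", "-n", key, "-L", "%dB" % size, "-Z", "n", vg])
    return {
        "key": key, "uuid": key, "name": name, "description": description,
        "read_write": True, "sharable": False,
        "virtual_size": size, "physical_utilisation": size,
        "uri": ["file:///dev/%s/%s" % (vg, key)], "keys": {},
    }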
Consider another example where volumes are simple sparse files stored on an NFS share. The volume plugin could implement the Volume.create API by simply calling:
dd if=/dev/zero of=disk.name bs=1 count=0 seek=64G
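The same effect can be achieved without dd; a minimal sketch using a sparse file (the path and size are placeholders):

# Sketch only: the Python equivalent of the dd command above, creating a
# sparse file whose blocks are allocated lazily by the filesystem.
def create_sparse(path, size_bytes):
    with open(path, "wb") as f:
        f.truncate(size_bytes)          # extends the file without writing data

create_sparse("disk.name", 64 * 1024 ** 3)   # 64 GiB, as in the example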
Connecting volumes to VMs
VMs running on the Xen hypervisor use special shared-memory protocols to access their disks and network interfaces. There are several implementations of these protocols including:
- the Linux xen-blkback kernel module
- the FreeBSD blkback driver
- the QEMU qdisk userspace implementation
- the XenServer tapdisk3 userspace implementation, part of the blktap project, described here
- the MirageOS userspace and kernelspace implementation
With so many implementations to choose from, which one should we use for a given volume? This decision - and how to configure the implementation for maximum performance - is the job of the Datapath plugin.
Disks as URIs
Every volume has one or more URIs, which describe how to access the data within the volume. Examples include:
- file:///local/block/device - a local file or block device
- nfs://server/export/path/file.qcow2 - a remote NFS server exporting a .qcow2 format file
- smb://server/export/path/file.vhd - a remote CIFS server exporting a .vhd format file
- rbd://pool/volume - a volume in a Ceph storage pool
- iscsi://target/lun - a LUN on an iSCSI array
The Xapi toolstack takes the list of URIs provided by the Volume plugin and creates a connection between the VM and the disk. Xapi chooses a "Datapath plugin" based on the URI scheme. The Datapath plugin returns Xen-specific connection details, including choice of backend (kernel blkback or userspace qemu) and caching options.
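As a rough illustration of that dispatch step (not xapi's actual code; the plugin names below are invented for the example), the URI scheme alone is enough to select a Datapath plugin:

# Illustration only: selecting a datapath plugin from a volume URI's scheme.
from urllib.parse import urlparse

DATAPATH_PLUGINS = {
    "file": "blkback",        # local file or block device
    "nfs": "qdisk",           # .qcow2 file on an NFS share
    "rbd": "qdisk-rbd",       # Ceph volume via librados
}

def choose_datapath_plugin(uri):
    return DATAPATH_PLUGINS[urlparse(uri).scheme]

print(choose_datapath_plugin("rbd://pool/volume"))   # -> qdisk-rbd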
Architecture
The system is divided into two parts, intended for people with different expertise:
- Volume Plugin: this is the storage control-plane. Xapi delegates Volume manipulation requests to these plugins, which know how to operate on the physical storage, which could be anything from an NFS/CIFS server to an iSCSI target or a Ceph deployment. No virtualisation knowledge is required to write a Volume Plugin. Volume plugins do not perform I/O directly; instead they associate volumes with URIs which encode a particular method for accessing the disk data. For example an NFS plugin could expose URIs of the form nfs://server/path/file
- Datapath Plugin: this is the storage data-plane. This is Xen-specific code which chooses the best way to connect VMs to individual disks. Xen expertise is needed to write a datapath plugin. Note that this code doesn't know about volumes; it only knows about specific volume access protocols such as NFS, iSCSI and RBD.
Volume Plugins
Clients such as OpenStack, CloudStack, XenCenter and Xen Orchestra create VMs and virtual disks by sending XenAPI requests to Xapi. Xapi doesn't know how to manipulate storage directly, so it delegates storage operations to a Volume Plugin. Each "Storage Repository" is associated with exactly one Volume Plugin. These plugins know how to
- create volumes;
- destroy volumes;
- snapshot volumes;
- clone volumes; and
- list the ways in which a volume may be accessed.
The following diagram shows a XenAPI client sending a VDI.create request causing Xapi to send a Volume.create request to a specific plugin:
Datapath Plugins
The following diagram shows a XenAPI client sending a VM.start request. Xapi calls Volume.stat to list the available access methods for the VM's disks. The Ceph Volume Plugin returns a URL of the form rbd://server/pool, so Xapi consults its "rbd" Datapath plugin and asks "how should I connect this datapath to the VM?". The datapath plugin tells Xapi to use QEMU qdisk's built-in support for RBD via librados, so Xapi tells libxl to set this up.
Frequently Asked Questions
How Do I...
Test my code?
The OCaml and Python generated code includes a convenient command-line parser, so if you write:
module Cmds = Xapi_storage.Control.Sr(Cmdlinergen.Gen ())
Cmdliner.Term.eval_choice default_cmd (List.map (fun t -> t rpc) (Cmds.implementation ()))
or, in Python:
class Implementation(xapi.volume.SR_skeleton):
    pass
curl "http://example.com/api/kittens/2"
if __name__ == "__main__":
    cmd = xapi.volume.SR_commandline(Implementation())
    cmd.attach()
You'll be able to run the command like this:
$ ./SR.attach
usage: SR.attach [-h] [-j] dbg uri
SR.attach: error: too few arguments
$ ./SR.attach -h
usage: SR.attach [-h] [-j] dbg uri
[attach uri]: attaches the SR to the local host. Once an SR is attached then
volumes may be manipulated.
positional arguments:
dbg Debug context from the caller
uri The Storage Repository URI
optional arguments:
-h, --help show this help message and exit
-j, --json Read json from stdin, print json to stdout
Although it's not enforced by the interface, plugin implementations should avoid interacting with the toolstack so that they can be easily tested in isolation.
Report dynamic properties like space consumption?
Dynamic properties like space consumption, bandwidth or latency should be exposed as "datasources". The SR.stat function should return a list of URIs pointing at these in "xenostats" format. The toolstack will hook up these datasources to the xcp-rrdd daemon which will record history. XenAPI clients can then use the RRD API to fetch the data, draw graphs etc.
Expose backend-specific functions?
curl "http://example.com/api/kittens/2" -X DELETE
The SMAPIv3 is intended to be a generic API. Before extending the SMAPIv3 itself, first ask the question: would this make sense for 3 completely different storage stacks (e.g. consider Ceph, LVM over iSCSI and gfs2)? If the concept is actually general then propose an SMAPIv3 update via a pull request. If the concept is actually backend-specific then consider adding a new XenAPI extension for it and name the API appropriately (e.g. "LVHD.foo").
Call xapi?
Nothing in the interface prevents you from making RPC calls to xapi or other toolstack components, however doing so will make it more difficult to test your component in isolation.
In the past, a common reason to call xapi was to store data in the xapi database, for example the "sm-config" fields. This was unreliable because
- The "sm-config" key,value pairs can disappear, for example if the SR is forgotten and then re-attached.
- The "sm-config" key,value pairs may be duplicated over VDI.clone (or not): the exact behaviour was never defined.
- If the storage is itself reverted to a previous point in time (imagine an SR-level snapshot), then the "sm-config" keys, value pairs will not be reverted.
It is strongly recommended to store all storage-related state on the storage medium. This ensures that the metadata has a "shared fate" with the data: if data is restored from backup or reverted to a snapshot, then so is the metadata.
Tie my cluster to the xapi Pool?
Ideally a storage cluster would be managed separately from a xapi pool, with its own configuration and monitoring interfaces. The storage cluster could be very large (consider Ceph-style scale-out) while the xapi pool is designed to remain within a rack.
In the past, a common reason to tie a storage cluster to the xapi pool was to piggyback on the xapi notions of a single pool master, HA and inter-host authenticated RPC mechanisms to co-ordinate activities such as vhd coalescing. If it is still necessary to tie a storage cluster to a xapi pool then the storage implementation should launch its own "pool monitor" service which could use the xapi pool APIs to track host membership and master status. Note: this might require adding new capabilities to xapi's pool APIs, but they should not be part of the storage API itself.
Note: in the case where a particular storage implementation requires a particular HA cluster stack to be running, this can be declared in the Plugin.query call.
Develop
Below you can find the SMAPIv3 API Reference:
plugin
The xapi toolstack expects all plugins to support a basic query interface. This means that if you plan to implement both a volume plugin and a datapath plugin, make sure that both implement the query interface.
Type definitions
query_result
{
"required_cluster_stack": [ "required_cluster_stack" ],
"configuration": { "configuration": "configuration" },
"features": [ "features" ],
"required_api_version": "required_api_version",
"version": "version",
"copyright": "copyright",
"vendor": "vendor",
"description": "description",
"name": "name",
"plugin": "plugin"
}
type query_result
= struct { ... }
Properties of this implementation.
Members
Name | Type | Description |
---|---|---|
plugin | string | Plugin name, used in the XenAPI as SR.type. |
name | string | Short name. |
description | string | Description. |
vendor | string | Entity (e.g. company, project, group) which produced this implementation. |
copyright | string | Copyright. |
version | string | Version. |
required_api_version | string | Minimum required API version. |
features | string list | Features supported by this plugin. |
configuration | (string * string) list | Key/description pairs describing required device_config parameters. |
required_cluster_stack | string list | The plugin requires one of these cluster stacks to be active. |
srs
[ "srs" ]
[]
type srs
= string list
Interface: Plugin
Discover properties of this implementation. Every implementation must support the query interface or it will not be recognised as a storage plugin by xapi.
Method: query
Query this implementation and return its properties. This is called by xapi to determine whether it is compatible with xapi and to discover the supported features.
Client
{ "method": "Plugin.query", "params": [ { "dbg": "dbg" } ], "id": 1 }
try
let query_result = Client.query dbg in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Plugin.query({"dbg": "string"})
print(repr(results))
Server
{
"required_cluster_stack": [
"required_cluster_stack_1", "required_cluster_stack_2"
],
"configuration": { "field_1": "value_1", "field_2": "value_2" },
"features": [ "features_1", "features_2" ],
"required_api_version": "required_api_version",
"version": "version",
"copyright": "copyright",
"vendor": "vendor",
"description": "description",
"name": "name",
"plugin": "plugin"
}
try
let query_result = Client.query dbg in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Plugin_myimplementation(Plugin_skeleton):
# by default each method will return a Not_implemented error
# ...
def query(self, dbg):
"""
Query this implementation and return its properties. This is
called by xapi to determine whether it is compatible with xapi
and to discover the supported features.
"""
return {"plugin": "string", "name": "string", "description": "string", "vendor": "string", "copyright": "string", "version": "string", "required_api_version": "string", "features": ["string"], "configuration": {"string": "string"}, "required_cluster_stack": ["string"]}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
unnamed | out | query_result | Properties of this implementation. |
Method: ls
[ls dbg]: returns a list of attached SRs
Client
{ "method": "Plugin.ls", "params": [ { "dbg": "dbg" } ], "id": 2 }
try
let srs = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Plugin.ls({"dbg": "string"})
print(repr(results))
Server
[ "srs_1", "srs_2" ]
try
let srs = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Plugin_myimplementation(Plugin_skeleton):
# by default each method will return a Not_implemented error
# ...
def ls(self, dbg):
"""
[ls dbg]: returns a list of attached SRs
"""
return ["string"]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
srs | out | srs | The attached SRs |
Method: diagnostics
Returns a printable set of backend diagnostic information. Implementations are encouraged to include any data which will be useful to diagnose problems. Note this data should not include personally-identifiable data as it is intended to be automatically included in bug reports.
Client
{ "method": "Plugin.diagnostics", "params": [ { "dbg": "dbg" } ], "id": 3 }
try
let diagnostics = Client.diagnostics dbg in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Plugin.diagnostics({"dbg": "string"})
print(repr(results))
Server
"diagnostics"
try
let diagnostics = Client.diagnostics dbg in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Plugin_myimplementation(Plugin_skeleton):
# by default each method will return a Not_implemented error
# ...
def diagnostics(self, dbg):
"""
Returns a printable set of backend diagnostic information.
Implementations are encouraged to include any data which will
be useful to diagnose problems. Note this data should not
include personally-identifiable data as it is intended to be
automatically included in bug reports.
"""
return "string"
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
diagnostics | out | string | A string containing loggable human-readable diagnostics information |
Errors
exnt
[ "Unimplemented", "exnt" ]
type exnt
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Unimplemented | string |
datapath
The Datapath interfaces are provided to access the data stored in the volumes. The Datapath interface is used to open and close the disks for read/write operations from VMs, and the Data interface is used for operations such as copy and mirror.
Type definitions
persistent
true
false
type persistent
= bool
True means the disk data is persistent and should be preserved when the datapath is closed i.e. when a VM is shutdown or rebooted. False means the data should be thrown away when the VM is shutdown or rebooted.
xendisk
{
"backend_type": "backend_type",
"extra": { "extra": "extra" },
"params": "params"
}
type xendisk
= struct { ... }
Members
Name | Type | Description |
---|---|---|
params | string | Put into the "params" key in xenstore |
extra | (string * string) list | Key-value pairs to be put into the "sm-data" subdirectory underneath the xenstore backend |
backend_type | string | The name of the xenstore directory corresponding to the backend. For example "qdisk". |
block_device
{ "path": "path" }
type block_device
= struct { ... }
Members
Name | Type | Description |
---|---|---|
path | string | Path to the system local block device. This is equivalent to the SMAPIv1 params. |
file
{ "path": "path" }
type file
= struct { ... }
Members
Name | Type | Description |
---|---|---|
path | string | Path to the raw file |
nbd
{ "uri": "uri" }
type nbd
= struct { ... }
Members
Name | Type | Description |
---|---|---|
uri | string | NBD URI of the form nbd:unix:<domain-socket>:exportname=<NAME> (this format is used by qemu-system: https://manpages.debian.org/stretch/qemu-system-x86/qemu-system-x86_64.1.en.html) |
implementation
[
"XenDisk",
{
"backend_type": "backend_type",
"extra": { "extra": "extra" },
"params": "params"
}
]
[ "BlockDevice", { "path": "path" } ]
[ "File", { "path": "path" } ]
[ "Nbd", { "uri": "uri" } ]
type implementation
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
XenDisk | xendisk | This value can be used for ring connection. |
BlockDevice | block_device | This value can be used for Domain0 block device access. |
File | file | |
Nbd | nbd |
backend
{
"implementations": [
[
"XenDisk",
{
"backend_type": "backend_type",
"extra": { "extra": "extra" },
"params": "params"
}
]
]
}
type backend
= struct { ... }
A description of which Xen block backend to use. The toolstack needs this to setup the shared memory connection to blkfront in the VM.
Members
Name | Type | Description |
---|---|---|
implementations | implementation list | Choice of implementation technologies. |
sock_path
"sock_path"
type sock_path
= string
Path to a UNIX domain socket
uri
"uri"
type uri
= string
A URI representing the means for accessing the volume data. The interpretation of the URI is specific to the implementation. Xapi will choose which implementation to use based on the URI scheme.
domain
"domain"
type domain
= string
A string representing a Xen domain on the local host. The string is guaranteed to be unique per-domain but it is not guaranteed to take any particular form. It may (for example) be a Xen domain id, a Xen VM uuid or a Xenstore path or anything else chosen by the toolstack. Implementations should not assume the string has any meaning.
blocklist
{ "ranges": [ [ 0, 0 ] ], "blocksize": 0 }
type blocklist
= struct { ... }
List of blocks for copying.
Members
Name | Type | Description |
---|---|---|
blocksize | int | Size of the individual blocks. |
ranges | (int64 * int64) list | List of block ranges, where a range is a (start,length) pair, measured in units of [blocksize] |
operation
[ "CopyV1", "operation" ]
[ "MirrorV1", "operation" ]
type operation
= variant { ... }
The primary key for referring to a long-running operation.
Constructors
Name | Type | Description |
---|---|---|
CopyV1 | string | CopyV1 (key) represents an on-going copy operation with the unique [key]. |
MirrorV1 | string | MirrorV1 (key) represents an on-going mirror operation with the unique [key]. |
status
{ "progress": 0.0, "complete": true, "failed": true }
type status
= struct { ... }
Status information for on-going tasks.
Members
Name | Type | Description |
---|---|---|
failed | bool | [failed] will be set to true if the operation has failed for some reason. |
complete | bool | [complete] will be set true if the operation is complete, whether successfully or not, see [failed]. |
progress | float option | [progress] will be returned for a copy operation, and ranges between 0 and 1. |
operations
[ [ "CopyV1", "operations" ] ]
[]
type operations
= operation list
A list of operations.
Interface: Datapath
Xapi will call the functions here on VM start / shutdown / suspend / resume / migrate. Every function is idempotent. Every function takes a domain parameter which allows the implementation to track how many domains are currently using the volume.
Volumes must be attached via the following sequence of calls:
1. [open uri persistent] must be called first and is used to declare whether writes to the disk must be persisted or not. [open] is not an exclusive operation: a disk may be opened on more than one host at once. The call returns unit or an error.
2. [attach uri domain] is then called. The domain parameter is advisory. Note that this call is currently only ever called once. In the future the call may be made multiple times with different [domain] parameters if the disk is attached to multiple domains. The return value from this call is the information required to attach the disk to a VM. This call is, again, not exclusive: the volume may be attached to more than one host concurrently.
3. [activate uri domain] is called to activate the datapath. This must be called before the volume is used by the VM, and it is acceptable for this to be an exclusive operation, such that it is an error for a volume to be activated on more than one host simultaneously.
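As a rough illustration, the sequence above expressed as client calls against the hypothetical myclient helper used in the examples below (the URI and domain values are placeholders):

# Sketch only: the open/attach/activate sequence.
import myclient

c = myclient.connect()
dbg, uri, domain = "attach-example", "rbd://pool/volume", "0"

c.Datapath.open({"dbg": dbg, "uri": uri, "persistent": True})
backend = c.Datapath.attach({"dbg": dbg, "uri": uri, "domain": domain})
c.Datapath.activate({"dbg": dbg, "uri": uri, "domain": domain})
# ... the VM now uses the disk; teardown is deactivate, detach, then close.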
Method: open
[open uri persistent] is called before a disk is attached to a VM. If persistent is true then care should be taken to persist all writes to the disk. If persistent is false then the implementation should configure a temporary location for writes so they can be thrown away on [close].
Client
{
"method": "Datapath.open",
"params": [ { "persistent": true, "uri": "uri", "dbg": "dbg" } ],
"id": 31
}
try
let () = Client.open dbg uri persistent in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.open({"dbg": "string", "uri": "string", "persistent": True})
print(repr(results))
Server
null
try
let () = Client.open dbg uri persistent in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def open(self, dbg, uri, persistent):
"""
[open uri persistent] is called before a disk is attached to a VM.
If persistent is true then care should be taken to persist all writes
to the disk. If persistent is false then the implementation should
configure a temporary location for writes so they can be thrown away
on [close].
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
persistent | in | persistent | True means the disk data is persistent and should be preserved when the datapath is closed i.e. when a VM is shutdown or rebooted. False means the data should be thrown away when the VM is shutdown or rebooted. |
Method: attach
[attach uri domain] prepares a connection between the storage named by [uri] and the Xen domain with id [domain]. The return value is the information needed by the Xen toolstack to setup the shared-memory blkfront protocol. Note that the same volume may be simultaneously attached to multiple hosts for example over a migrate. If an implementation needs to perform an explicit handover, then it should implement [activate] and [deactivate]. This function is idempotent.
Client
{
"method": "Datapath.attach",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 32
}
try
let backend = Client.attach dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.attach({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
{
"implementations": [
[
"XenDisk",
{
"backend_type": "backend_type",
"extra": { "field_1": "value_1", "field_2": "value_2" },
"params": "params"
}
],
[
"XenDisk",
{
"backend_type": "backend_type",
"extra": { "field_1": "value_1", "field_2": "value_2" },
"params": "params"
}
]
]
}
try
let backend = Client.attach dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def attach(self, dbg, uri, domain):
"""
[attach uri domain] prepares a connection between the storage named by
[uri] and the Xen domain with id [domain]. The return value is the
information needed by the Xen toolstack to setup the shared-memory
blkfront protocol. Note that the same volume may be simultaneously
attached to multiple hosts for example over a migrate. If an
implementation needs to perform an explicit handover, then it should
implement [activate] and [deactivate]. This function is idempotent.
"""
return {"implementations": [None]}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
backend | out | backend | A description of which Xen block backend to use. The toolstack needs this to setup the shared memory connection to blkfront in the VM. |
Method: activate
[activate uri domain] is called just before a VM needs to read or write its disk. This is an opportunity for an implementation which needs to perform an explicit volume handover to do it. This function is called in the migration downtime window so delays here will be noticeable to users and should be minimised. This function is idempotent.
Client
{
"method": "Datapath.activate",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 33
}
try
let () = Client.activate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.activate({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
null
try
let () = Client.activate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def activate(self, dbg, uri, domain):
"""
[activate uri domain] is called just before a VM needs to read or write
its disk. This is an opportunity for an implementation which needs to
perform an explicit volume handover to do it. This function is called
in the migration downtime window so delays here will be noticeable to
users and should be minimised. This function is idempotent.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
Method: activate_readonly
[activate_readonly uri domain] is called just before a VM, or the control domain, needs to read a volume. A single volume may be activated readonly multiple times, including on multiple independent hosts. It is not permitted for a volume to be activated both readonly and read-write concurrently. Implementations shall declare the VDI_ACTIVATE_READONLY feature for this method to be supported. Once a volume is activated readonly it is required that all readonly activations are deactivated before any read-write activation is attempted. This function is idempotent and in all other respects is interchangeable with activate.
Client
{
"method": "Datapath.activate_readonly",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 34
}
try
let () = Client.activate_readonly dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.activate_readonly({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
null
try
let () = Client.activate_readonly dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def activate_readonly(self, dbg, uri, domain):
"""
[activate_readonly uri domain] is called just before a VM, or the
control domain, needs to read a volume. A single volume may be
activated readonly multiple times, including on multiple independent
hosts. It is not permitted for a volume to be activated both readonly
and read-write concurrently. Implementations shall declare the
VDI_ACTIVATE_READONLY feature for this method to be supported. Once a
volume is activated readonly it is required that all readonly
activations are deactivated before any read-write activation is
attempted. This function is idempotent and in all other respects
is interchangeable with activate.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
Method: import_activate
[import_activate uri domain] prepares a connection to the storage named by [uri] for use by inbound import mirroring; the [domain] parameter identifies which domain to connect to, most likely 0 or a custom storage domain. The return value is a path to a UNIX domain socket to which an open file descriptor may be passed, by SCM_RIGHTS. This, in turn, will become the server end of a Network Block Device (NBD) connection using the "new-fixed" protocol. Implementations shall declare the VDI_MIRROR_IN feature for this method to be supported. It is expected that activate will have been previously called so that there is an active datapath.
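For illustration, the file-descriptor handover on the caller's side is ordinary SCM_RIGHTS passing over the returned socket; a minimal sketch follows (the single framing byte and the lack of error handling are assumptions, not part of the specification):

# Sketch only: pass an already-open file descriptor to the socket returned
# by import_activate, using SCM_RIGHTS ancillary data.
import array
import socket

def hand_over_fd(sock_path, fd):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendmsg([b"\0"],
              [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd]))])
    s.close()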
Client
{
"method": "Datapath.import_activate",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 35
}
try
let sock_path = Client.import_activate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.import_activate({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
"sock_path"
try
let sock_path = Client.import_activate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def import_activate(self, dbg, uri, domain):
"""
[import_activate uri domain] prepares a connection to the
storage named by [uri] for use by inbound import mirroring,
the [domain] parameter identifies which domain to connect to,
most likely 0 or a custom storage domain. The return value is a
path to a UNIX domain socket to which an open file descriptor
may be passed, by SCM_RIGHTS. This, in turn, will become
the server end of a Network Block Device (NBD) connection
using, new-fixed protocol. Implementations shall declare the
VDI_MIRROR_IN feature for this method to be supported. It is
expected that activate will have been previously called so that
there is an active datapath.
"""
return "string"
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
sock_path | out | sock_path | A path to a UNIX domain socket in the filesystem. |
Method: deactivate
[deactivate uri domain] is called as soon as a VM has finished reading or writing its disk. This is an opportunity for an implementation which needs to perform an explicit volume handover to do it. This function is called in the migration downtime window so delays here will be noticeable to users and should be minimised. This function is idempotent.
Client
{
"method": "Datapath.deactivate",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 36
}
try
let () = Client.deactivate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.deactivate({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
null
try
let () = Client.deactivate dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def deactivate(self, dbg, uri, domain):
"""
[deactivate uri domain] is called as soon as a VM has finished reading
or writing its disk. This is an opportunity for an implementation which
needs to perform an explicit volume handover to do it. This function is
called in the migration downtime window so delays here will be
noticeable to users and should be minimised. This function is idempotent.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
Method: detach
[detach uri domain] is called sometime after a VM has finished reading or writing its disk. This is an opportunity to clean up any resources associated with the disk. This function is called outside the migration downtime window so can be slow without affecting users. This function is idempotent. This function should never fail. If an implementation is unable to perform some cleanup right away then it should queue the action internally. Any error result represents a bug in the implementation.
Client
{
"method": "Datapath.detach",
"params": [ { "domain": "domain", "uri": "uri", "dbg": "dbg" } ],
"id": 37
}
try
let () = Client.detach dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.detach({"dbg": "string", "uri": "string", "domain": "string"})
print(repr(results))
Server
null
try
let () = Client.detach dbg uri domain in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def detach(self, dbg, uri, domain):
"""
[detach uri domain] is called sometime after a VM has finished reading
or writing its disk. This is an opportunity to clean up any resources
associated with the disk. This function is called outside the migration
downtime window so can be slow without affecting users. This function is
idempotent. This function should never fail. If an implementation is
unable to perform some cleanup right away then it should queue the
action internally. Any error result represents a bug in the
implementation.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
Method: close
[close uri] is called after a disk is detached and a VM shutdown. This is an opportunity to throw away writes if the disk is not persistent.
Client
{
"method": "Datapath.close",
"params": [ { "uri": "uri", "dbg": "dbg" } ],
"id": 38
}
try
let () = Client.close dbg uri in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Datapath.close({"dbg": "string", "uri": "string"})
print(repr(results))
Server
null
try
let () = Client.close dbg uri in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Datapath_myimplementation(Datapath_skeleton):
# by default each method will return a Not_implemented error
# ...
def close(self, dbg, uri):
"""
[close uri] is called after a disk is detached and a VM shutdown. This
is an opportunity to throw away writes if the disk is not persistent.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
Interface: Data
This interface is used for long-running data operations such as copying the contents of volumes or mirroring volumes to remote destinations.
These operations are asynchronous and rely on the Tasks API to report results and errors.
To mirror a VDI a sequence of these API calls is required:
1. Create a destination VDI using the Volume API on the destination SR. This must be the same size as the source. To minimize copying, the destination VDI may be cloned from one that has been previously copied, as long as a disk from which the copy was made is still present on the source (even as a metadata-only disk).
2. Arrange for the destination disk to be accessible on the source host by a suitable URL. This may be an nbd, iscsi, nfs or other URL.
3. Start mirroring all new writes to the destination disk via the Data.mirror API call.
4. Find the list of blocks to copy via the CBT API call. Note that if the destination volume has not been 'prezeroed' then all of the blocks must be copied to the destination.
5. Start the background copy of the disk via a call to Data.copy, passing in the list of blocks to copy. The plugin must ensure that the copy does not conflict with the mirror operation, i.e. writes from the mirror operation must not be overwritten by writes of old data from the copy operation.
6. The progress of the copy operation may be queried via the Data.stat call.
7. Once the copy operation has successfully completed, the destination disk will be a perfect mirror of the source.
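As a rough illustration, steps 3-6 expressed against the hypothetical myclient helper used elsewhere in this reference (the URIs, block list and polling interval are placeholders):

# Sketch only: mirror new writes, background-copy existing blocks, poll status.
import time
import myclient

c = myclient.connect()
dbg, src, domain = "mirror-example", "file:///srv/sr1/disk1", "0"
dst = "nbd://dest-host/disk1-copy"           # destination created in step 1

c.Data.mirror({"dbg": dbg, "uri": src, "domain": domain, "remote": dst})
blocks = {"blocksize": 4096, "ranges": [[0, 16384]]}   # e.g. from the CBT API
op = c.Data.copy({"dbg": dbg, "uri": src, "domain": domain,
                  "remote": dst, "blocklist": blocks})

while not c.Data.stat({"dbg": dbg, "operation": op})["complete"]:
    time.sleep(1)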
Method: copy
[copy uri domain remotes blocks] copies [blocks] from the local disk to a remote URI. This may be called as part of a Volume Mirroring operation, and hence may need to cooperate with whatever process is currently mirroring writes to ensure data integrity is maintained. The [remote] parameter is a remotely accessible URI, for example nbd://root:pass@foo.com/path/to/disk, which must contain all necessary authentication tokens.
Client
{
"method": "Data.copy",
"params": [
{
"blocklist": { "ranges": [ [ 0, 0 ], [ 0, 0 ] ], "blocksize": 0 },
"remote": "remote",
"domain": "domain",
"uri": "uri",
"dbg": "dbg"
}
],
"id": 39
}
try
let operation = Client.copy dbg uri domain remote blocklist in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.copy({"dbg": "string", "uri": "string", "domain": "string", "remote": "string", "blocklist": {"blocksize": 0, "ranges": [[0, 0]]}})
print(repr(results))
Server
[ "CopyV1", "CopyV1" ]
try
let operation = Client.copy dbg uri domain remote blocklist in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def copy(self, dbg, uri, domain, remote, blocklist):
"""
[copy uri domain remotes blocks] copies [blocks] from the local disk
to a remote URI. This may be called as part of a Volume Mirroring
operation, and hence may need to cooperate with whatever process is
currently mirroring writes to ensure data integrity is maintained.
The [remote] parameter is a remotely accessible URI, for example,
`nbd://root:pass@foo.com/path/to/disk` that must contain all necessary
authentication tokens
"""
return None
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
remote | in | uri | A URI which represents how to access a remote volume disk data. |
blocklist | in | blocklist | List of blocks for copying. |
operation | out | operation | The primary key for referring to a long-running operation. |
Method: mirror
[mirror uri domain remote] starts mirroring new writes to the volume to a remote URI (usually NBD). This is called as part of a volume mirroring process
Client
{
"method": "Data.mirror",
"params": [
{ "remote": "remote", "domain": "domain", "uri": "uri", "dbg": "dbg" }
],
"id": 40
}
try
let operation = Client.mirror dbg uri domain remote in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.mirror({"dbg": "string", "uri": "string", "domain": "string", "remote": "string"})
print(repr(results))
Server
[ "CopyV1", "CopyV1" ]
try
let operation = Client.mirror dbg uri domain remote in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def mirror(self, dbg, uri, domain, remote):
"""
[mirror uri domain remote] starts mirroring new writes to the volume
to a remote URI (usually NBD). This is called as part of a volume
mirroring process
"""
return None
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uri | in | uri | A URI which represents how to access the volume disk data. |
domain | in | domain | An opaque string which represents the Xen domain. |
remote | in | uri | A URI which represents how to access a remote volume disk data. |
operation | out | operation | The primary key for referring to a long-running operation. |
Method: stat
[stat operation] returns the current status of [operation]. For a copy operation, this will contain progress information.
Client
{
"method": "Data.stat",
"params": [ { "operation": [ "CopyV1", "CopyV1" ], "dbg": "dbg" } ],
"id": 41
}
try
let status = Client.stat dbg operation in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.stat({"dbg": "string", "operation": ["CopyV1", "CopyV1"]})
print(repr(results))
Server
{ "progress": 0.0, "complete": true, "failed": true }
try
let status = Client.stat dbg operation in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def stat(self, dbg, operation):
"""
[stat operation] returns the current status of [operation]. For a
copy operation, this will contain progress information.
"""
return {"failed": True, "complete": True, "progress": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
operation | in | operation | The primary key for referring to a long-running operation. |
unnamed | out | status | Status information for on-going tasks. |
Method: cancel
[cancel operation] cancels a long-running operation. Note that the call may return before the operation has finished.
Client
{
"method": "Data.cancel",
"params": [ { "operation": [ "CopyV1", "CopyV1" ], "dbg": "dbg" } ],
"id": 42
}
try
let () = Client.cancel dbg operation in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.cancel({"dbg": "string", "operation": ["CopyV1", "CopyV1"]})
print(repr(results))
Server
null
try
let () = Client.cancel dbg operation in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def cancel(self, dbg, operation):
"""
[cancel operation] cancels a long-running operation. Note that the
call may return before the operation has finished.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
operation | in | operation | The primary key for referring to a long-running operation. |
Method: destroy
[destroy operation] destroys the information about a long-running operation. This should fail when run against an operation that is still in progress.
Client
{
"method": "Data.destroy",
"params": [ { "operation": [ "CopyV1", "CopyV1" ], "dbg": "dbg" } ],
"id": 43
}
try
let () = Client.destroy dbg operation in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.destroy({"dbg": "string", "operation": ["CopyV1", "CopyV1"]})
print(repr(results))
Server
null
try
let () = Client.destroy dbg operation in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def destroy(self, dbg, operation):
"""
[destroy operation] destroys the information about a long-running
operation. This should fail when run against an operation that is
still in progress.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
operation | in | operation | The primary key for referring to a long-running operation. |
Method: ls
[ls] returns a list of all current operations
Client
{ "method": "Data.ls", "params": [ { "dbg": "dbg" } ], "id": 44 }
try
let operations = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Data.ls({"dbg": "string"})
print(repr(results))
Server
[ [ "CopyV1", "CopyV1" ], [ "CopyV1", "CopyV1" ] ]
try
let operations = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Data_myimplementation(Data_skeleton):
# by default each method will return a Not_implemented error
# ...
def ls(self, dbg):
"""
[ls] returns a list of all current operations
"""
return [None]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
unnamed | out | operations | A list of operations. |
Errors
exnt
[ "Unimplemented", "exnt" ]
type exnt
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Unimplemented | string |
volume
The xapi toolstack delegates all storage control-plane functions to "Volume plugins". These plugins allow the toolstack to create/destroy/snapshot/clone volumes which are organised into groups called Storage Repositories (SR). Volumes have a set of URIs which can be used by the "Datapath plugins" to connect the disk data to VMs.
Type definitions
configuration
{ "configuration": "configuration" }
type configuration
= (string * string) list
Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool.
health
[ "Healthy", "health" ]
[ "Recovering", "health" ]
[ "Unreachable", "health" ]
[ "Unavailable", "health" ]
type health
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Healthy | string | Storage is fully available |
Recovering | string | Storage is busy recovering, e.g. rebuilding mirrors |
Unreachable | string | Storage is unreachable but may be recoverable with admin intervention |
Unavailable | string | Storage is unavailable, a host reboot will be required |
sr_stat
{
"health": [ "Healthy", "health" ],
"clustered": true,
"datasources": [ "datasources" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "uuid",
"name": "name",
"sr": "sr"
}
type sr_stat
= struct { ... }
Members
Name | Type | Description |
---|---|---|
sr | string | The URI identifying this SR. A typical value would be a file:// URI pointing to a directory or block device |
name | string | Short, human-readable label for the SR. |
uuid | string option | Uuid that uniquely identifies this SR, if one is available. For SRs that are created by SR.create, this should be the value passed into that call, if it is possible to persist it. |
description | string | Longer, human-readable description of the SR. Descriptions are generally only displayed by clients when the user is examining SRs in detail. |
free_space | int64 | Number of bytes free on the backing storage (in bytes) |
total_space | int64 | Total physical size of the backing storage (in bytes) |
datasources | string list | URIs naming datasources: time-varying quantities representing anything from disk access latency to free space. The entities named by these URIs are self-describing. |
clustered | bool | Indicates whether the SR uses clustered local storage. |
health | health | The health status of the SR. |
probe_result
{
"extra_info": { "extra_info": "extra_info" },
"sr": {
"health": [ "Healthy", "health" ],
"clustered": true,
"datasources": [ "datasources" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "uuid",
"name": "name",
"sr": "sr"
},
"complete": true,
"configuration": { "configuration": "configuration" }
}
type probe_result
= struct { ... }
Members
Name | Type | Description |
---|---|---|
configuration | (string * string) list | Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool. |
complete | bool | True if this configuration is complete and can be used to call SR.create or SR.attach. False if it requires further iterative calls to SR.probe, to potentially narrow down on a configuration that can be used. |
sr | sr_stat option | Existing SR found for this configuration |
extra_info | (string * string) list | Additional plugin-specific information about this configuration, that might be of use for an API user. This can for example include the LUN or the WWPN. |
probe_results
[
{
"extra_info": { "extra_info": "extra_info" },
"sr": {
"health": [ "Healthy", "health" ],
"clustered": true,
"datasources": [ "datasources" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "uuid",
"name": "name",
"sr": "sr"
},
"complete": true,
"configuration": { "configuration": "configuration" }
}
]
[]
type probe_results
= probe_result list
volume_type
"Data"
"CBT_Metadata"
"Data_and_CBT_Metadata"
type volume_type
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Data | unit | Normal data volume |
CBT_Metadata | unit | CBT Metadata only, data destroyed |
Data_and_CBT_Metadata | unit | Both Data and CBT Metadata |
volume
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "keys": "keys" },
"uri": [ "uri" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "uuid",
"key": "key"
}
type volume
= struct { ... }
Members
Name | Type | Description |
---|---|---|
key | string | A primary key for this volume. The key must be unique within the enclosing Storage Repository (SR). A typical value would be a filename or an LVM volume name. |
uuid | string option | A uuid (or guid) for the volume, if one is available. If a storage system has a built-in notion of a guid, then it will be returned here. |
name | string | Short, human-readable label for the volume. Names are commonly used when displaying short lists of volumes. |
description | string | Longer, human-readable description of the volume. Descriptions are generally only displayed by clients when the user is examining volumes individually. |
read_write | bool | True means the VDI may be written to, false means the volume is read-only. Some storage media is read-only so all volumes are read-only; for example .iso disk images on an NFS share. Some volumes are created read-only; for example because they are snapshots of some other VDI. |
sharable | bool | Indicates whether the VDI can be attached by multiple hosts at once. This is used for example by the HA statefile and XAPI redo log. |
virtual_size | int64 | Size of the volume from the perspective of a VM (in bytes) |
physical_utilisation | int64 | Amount of space currently used on the backing storage (in bytes) |
uri | string list | A list of URIs which can be opened and used by a datapath plugin for I/O. A URI could reference a local block device, a remote NFS share, iSCSI LUN or RBD volume. In cases where the data may be accessed over several protocols, the list should be sorted into descending order of desirability. Xapi will open the most desirable URI for which it has an available datapath plugin. |
keys | (string * string) list | A list of key=value pairs which have been stored in the Volume metadata. These should not be interpreted by the Volume plugin. |
volume_type | volume_type option | The content type of this volume |
cbt_enabled | bool option | True means that the storage datapath will track changed dirty blocks while writing and will be able to provide CBT Metadata when requested |
volumes
[
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "keys": "keys" },
"uri": [ "uri" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "uuid",
"key": "key"
}
]
[]
type volumes
= volume list
A list of volumes
key
"key"
type key
= string
Primary key for a volume. This can be any string which is meaningful to the implementation. For example this could be an NFS filename, an LVM LV name or even a URI. This string is abstract.
blocklist
{ "ranges": [ [ 0, 0 ] ], "blocksize": 0 }
type blocklist
= struct { ... }
List of blocks for copying.
Members
Name | Type | Description |
---|---|---|
blocksize | int | Size of the individual blocks. |
ranges | int64 * int64 list | List of block ranges, where a range is a (start,length) pair, measured in units of [blocksize] |
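For illustration, here is a minimal sketch (not part of the API) of how a caller might total the number of bytes described by a blocklist before starting a copy, assuming the dictionary form shown in the example above:

def blocklist_bytes(blocklist):
    # each range is a (start, length) pair measured in units of [blocksize]
    blocksize = blocklist["blocksize"]
    return sum(length * blocksize for (_start, length) in blocklist["ranges"])

# e.g. two ranges of 16 and 8 blocks of 4096 bytes -> 98304 bytes to copy
print(blocklist_bytes({"blocksize": 4096, "ranges": [[0, 16], [128, 8]]}))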
key_list
[ "key_list" ]
[]
type key_list
= string list
changed_blocks
{ "bitmap": "bitmap", "granularity": 0 }
type changed_blocks
= struct { ... }
Members
Name | Type | Description |
---|---|---|
granularity | int | One bit in the changed block bitmap indicates the status of an area of this size, in bytes. |
bitmap | string | The changed blocks between two volumes as a base64-encoded string. The bits in the bitmap indicate the status of consecutive blocks of size [granularity] bytes. Each bit is set if the corresponding area has changed. |
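As a worked example, the sketch below decodes a changed_blocks result into (offset, length) extents. The most-significant-bit-first ordering within each byte is an assumption, not something the interface specifies; confirm the convention used by your plugin.

import base64

def changed_extents(changed_blocks):
    granularity = changed_blocks["granularity"]
    bitmap = base64.b64decode(changed_blocks["bitmap"])
    extents = []
    for byte_index, byte in enumerate(bitmap):
        for bit in range(8):
            if byte & (0x80 >> bit):
                # this granularity-sized area differs between the two volumes
                extents.append(((byte_index * 8 + bit) * granularity, granularity))
    return extents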
Interface: SR
Operations which act on Storage Repositories
Method: probe
[probe configuration]: can be used iteratively to narrow down configurations to use with SR.create, or to find existing SRs on the backing storage
Client
{
"method": "SR.probe",
"params": [
{
"configuration": { "field_1": "value_1", "field_2": "value_2" },
"dbg": "dbg"
}
],
"id": 4
}
try
let probe_result = Client.probe dbg configuration in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.probe({ dbg: "string", configuration: {"string": "string"} })
print(repr(results))
Server
[
{
"extra_info": { "field_1": "value_1", "field_2": "value_2" },
"sr": {
"health": [ "Healthy", "Healthy" ],
"clustered": true,
"datasources": [ "datasources_1", "datasources_2" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "optional_uuid",
"name": "name",
"sr": "sr"
},
"complete": true,
"configuration": { "field_1": "value_1", "field_2": "value_2" }
},
{
"extra_info": { "field_1": "value_1", "field_2": "value_2" },
"sr": {
"health": [ "Healthy", "Healthy" ],
"clustered": true,
"datasources": [ "datasources_1", "datasources_2" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "optional_uuid",
"name": "name",
"sr": "sr"
},
"complete": true,
"configuration": { "field_1": "value_1", "field_2": "value_2" }
}
]
try
let probe_result = Client.probe dbg configuration in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def probe(self, dbg, configuration):
"""
[probe configuration]: can be used iteratively to narrow down configurations
to use with SR.create, or to find existing SRs on the backing storage
"""
return [{"configuration": {"string": "string"}, "complete": True, "sr": None, "extra_info": {"string": "string"}}]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
configuration | in | configuration | Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool. |
probe_result | out | probe_results | Contents of the storage device |
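As a sketch only, a probe implementation for a hypothetical directory-backed SR might look like the following. The "path" configuration key, and the use of an incomplete result to report the missing field, are assumptions rather than requirements of the interface; SR_skeleton is the generated Python skeleton shown above.

import os

class SR_myimplementation(SR_skeleton):
    def probe(self, dbg, configuration):
        if "path" not in configuration:
            # configuration is incomplete: echo it back and name the missing field
            return [{"configuration": configuration, "complete": False,
                     "sr": None, "extra_info": {"missing": "path"}}]
        complete = os.path.isdir(configuration["path"])
        return [{"configuration": configuration, "complete": complete,
                 "sr": None, "extra_info": {}}]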
Method: create
[create uuid configuration name description]: creates a fresh SR
Client
{
"method": "SR.create",
"params": [
{
"description": "description",
"name": "name",
"configuration": { "field_1": "value_1", "field_2": "value_2" },
"uuid": "uuid",
"dbg": "dbg"
}
],
"id": 5
}
try
let configuration = Client.create dbg uuid configuration name description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.create({ dbg: "string", uuid: "string", configuration: {"string": "string"}, name: "string", description: "string" })
print(repr(results))
Server
{ "field_1": "value_1", "field_2": "value_2" }
try
let configuration = Client.create dbg uuid configuration name description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def create(self, dbg, uuid, configuration, name, description):
"""
[create uuid configuration name description]: creates a fresh SR
"""
return {"string": "string"}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
uuid | in | string | A uuid to associate with the SR. |
configuration | in | configuration | Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool. |
name | in | string | Human-readable name for the SR |
description | in | string | Human-readable description for the SR |
configuration | out | configuration | Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool. |
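A hedged sketch of create for the same hypothetical directory-backed SR: the "path" key and the idea of persisting the uuid, name and description in a small metadata file are example policies, not part of the interface.

import json
import os

class SR_myimplementation(SR_skeleton):
    def create(self, dbg, uuid, configuration, name, description):
        path = configuration["path"]
        os.makedirs(path, exist_ok=True)
        # remember the identity of the SR so that stat can report it later
        with open(os.path.join(path, "sr-metadata.json"), "w") as f:
            json.dump({"uuid": uuid, "name": name, "description": description}, f)
        # the (possibly updated) configuration is returned to the caller
        return configuration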
Method: attach
[attach configuration]: attaches the SR to the local host. Once an SR is attached then volumes may be manipulated.
Client
{
"method": "SR.attach",
"params": [
{
"configuration": { "field_1": "value_1", "field_2": "value_2" },
"dbg": "dbg"
}
],
"id": 6
}
try
let sr = Client.attach dbg configuration in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.attach({ dbg: "string", configuration: {"string": "string"} })
print(repr(results))
Server
"sr"
try
let sr = Client.attach dbg configuration in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def attach(self, dbg, configuration):
"""
[attach configuration]: attaches the SR to the local host. Once an SR is
attached then volumes may be manipulated.
"""
return "string"
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
configuration | in | configuration | Plugin-specific configuration which describes where and how to locate the storage repository. This may include the physical block device name, a remote NFS server and path or an RBD storage pool. |
sr | out | string | The Storage Repository |
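A minimal sketch of attach for the directory-backed example, assuming the directory path itself is used as the opaque SR handle; the SR_does_not_exist error constructor is assumed to be exposed by the generated bindings.

import os

class SR_myimplementation(SR_skeleton):
    def attach(self, dbg, configuration):
        path = configuration["path"]
        if not os.path.isdir(path):
            # error class assumed to come from the generated bindings
            raise SR_does_not_exist(path)
        # the returned string is the [sr] handle passed to all later SR/Volume calls
        return path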
Method: detach
[detach sr]: detaches the SR, clearing up any associated resources. Once the SR is detached then volumes may not be manipulated.
Client
{
"method": "SR.detach",
"params": [ { "sr": "sr", "dbg": "dbg" } ],
"id": 7
}
try
let () = Client.detach dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.detach({ dbg: "string", sr: "string" })
print(repr(results))
Server
null
try
let () = Client.detach dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def detach(self, dbg, sr):
"""
[detach sr]: detaches the SR, clearing up any associated resources.
Once the SR is detached then volumes may not be manipulated.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
Method: destroy
[destroy sr]: destroys the [sr] and deletes any volumes associated with it. Note that an SR must be attached to be destroyed; otherwise Sr_not_attached is thrown.
Client
{
"method": "SR.destroy",
"params": [ { "sr": "sr", "dbg": "dbg" } ],
"id": 8
}
try
let () = Client.destroy dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.destroy({ dbg: "string", sr: "string" })
print(repr(results))
Server
null
try
let () = Client.destroy dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def destroy(self, dbg, sr):
"""
[destroy sr]: destroys the [sr] and deletes any volumes associated
with it. Note that an SR must be attached to be destroyed; otherwise
Sr_not_attached is thrown.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
Method: stat
[stat sr] returns summary metadata associated with [sr]. Note that this call does not return details of sub-volumes; see SR.ls.
Client
{ "method": "SR.stat", "params": [ { "sr": "sr", "dbg": "dbg" } ], "id": 9 }
try
let sr = Client.stat dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.stat({ dbg: "string", sr: "string" })
print(repr(results))
Server
{
"health": [ "Healthy", "Healthy" ],
"clustered": true,
"datasources": [ "datasources_1", "datasources_2" ],
"total_space": 0,
"free_space": 0,
"description": "description",
"uuid": "optional_uuid",
"name": "name",
"sr": "sr"
}
try
let sr = Client.stat dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def stat(self, dbg, sr):
"""
[stat sr] returns summary metadata associated with [sr]. Note this
call does not return details of sub-volumes, see SR.ls.
"""
return {"sr": "string", "name": "string", "uuid": None, "description": "string", "free_space": long(0), "total_space": long(0), "datasources": ["string"], "clustered": True, "health": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
sr | out | sr_stat | SR metadata |
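For illustration, stat for the directory-backed example could derive free and total space from os.statvfs; the health encoding and the empty datasources list shown here are assumptions.

import os

class SR_myimplementation(SR_skeleton):
    def stat(self, dbg, sr):
        st = os.statvfs(sr)
        return {"sr": sr, "name": os.path.basename(sr), "uuid": None,
                "description": "directory-backed SR",
                "free_space": st.f_bavail * st.f_frsize,
                "total_space": st.f_blocks * st.f_frsize,
                "datasources": [], "clustered": False,
                "health": ["Healthy", ""]}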
Method: set_name
[set_name sr new_name] changes the name of [sr]
Client
{
"method": "SR.set_name",
"params": [ { "new_name": "new_name", "sr": "sr", "dbg": "dbg" } ],
"id": 10
}
try
let () = Client.set_name dbg sr new_name in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.set_name({ dbg: "string", sr: "string", new_name: "string" })
print(repr(results))
Server
null
try
let () = Client.set_name dbg sr new_name in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def set_name(self, dbg, sr, new_name):
"""
[set_name sr new_name] changes the name of [sr]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
new_name | in | string | The new name of the SR |
Method: set_description
[set_description sr new_description] changes the description of [sr]
Client
{
"method": "SR.set_description",
"params": [
{ "new_description": "new_description", "sr": "sr", "dbg": "dbg" }
],
"id": 11
}
try
let () = Client.set_description dbg sr new_description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.set_description({ dbg: "string", sr: "string", new_description: "string" })
print(repr(results))
Server
null
try
let () = Client.set_description dbg sr new_description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def set_description(self, dbg, sr, new_description):
"""
[set_description sr new_description] changes the description of [sr]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
new_description | in | string | The new description for the SR |
Method: ls
[ls sr] returns a list of volumes contained within an attached SR.
Client
{ "method": "SR.ls", "params": [ { "sr": "sr", "dbg": "dbg" } ], "id": 12 }
try
let volumes = Client.ls dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.SR.ls({ dbg: "string", sr: "string" })
print(repr(results))
Server
[
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
},
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
]
try
let volumes = Client.ls dbg sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class SR_myimplementation(SR_skeleton):
# by default each method will return a Not_implemented error
# ...
def ls(self, dbg, sr):
"""
[ls sr] returns a list of volumes contained within an attached SR.
"""
return [{"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
volumes | out | volumes | A list of volumes |
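A sketch of ls that treats every regular file in the SR directory as a raw-format volume; using the on-disk file size for both virtual_size and physical_utilisation is an approximation suitable only for this example.

import os

class SR_myimplementation(SR_skeleton):
    def ls(self, dbg, sr):
        volumes = []
        for name in sorted(os.listdir(sr)):
            path = os.path.join(sr, name)
            if not os.path.isfile(path) or name == "sr-metadata.json":
                continue
            size = os.path.getsize(path)
            volumes.append({"key": name, "uuid": None, "name": name,
                            "description": "", "read_write": True,
                            "sharable": False, "virtual_size": size,
                            "physical_utilisation": size,
                            "uri": ["file://" + path], "keys": {},
                            "volume_type": "Data", "cbt_enabled": False})
        return volumes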
Interface: Volume
Operations which operate on volumes (also known as Virtual Disk Images)
Method: create
[create sr name description size] creates a new volume in [sr] with [name] and [description]. The volume will have size >= [size], i.e. it is always permissible for an implementation to round up the volume to the nearest convenient block size.
Client
{
"method": "Volume.create",
"params": [
{
"sharable": true,
"size": 0,
"description": "description",
"name": "name",
"sr": "sr",
"dbg": "dbg"
}
],
"id": 13
}
try
let volume = Client.create dbg sr name description size sharable in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.create({ dbg: "string", sr: "string", name: "string", description: "string", size: long(0), sharable: True })
print(repr(results))
Server
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
try
let volume = Client.create dbg sr name description size sharable in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def create(self, dbg, sr, name, description, size, sharable):
"""
[create sr name description size] creates a new volume in [sr] with
[name] and [description]. The volume will have size >= [size], i.e. it
is always permissible for an implementation to round up the volume to
the nearest convenient block size.
"""
return {"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
name | in | string | A human-readable name to associate with the new disk. This name is intended to be short, to be a good summary of the disk. |
description | in | string | A human-readable description to associate with the new disk. This can be arbitrarily long, up to the general string size limit. |
size | in | int64 | A minimum size (in bytes) for the disk. Depending on the characteristics of the implementation this may be rounded up to (for example) the nearest convenient block size. The created disk will not be smaller than this size. |
sharable | in | bool | Indicates whether the VDI can be attached by multiple hosts at once. This is used for example by the HA statefile and XAPI redo log. |
volume | out | volume | Properties of the volume |
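A hedged sketch of create for a file-backed volume. Rounding the requested size up to a 4 MiB boundary and allocating a sparse file with truncate are example policies; the interface only requires that the volume is not smaller than [size].

import os
import uuid as uuid_mod

class Volume_myimplementation(Volume_skeleton):
    BLOCK = 4 * 1024 * 1024

    def create(self, dbg, sr, name, description, size, sharable):
        rounded = ((size + self.BLOCK - 1) // self.BLOCK) * self.BLOCK
        key = str(uuid_mod.uuid4())
        path = os.path.join(sr, key)
        with open(path, "wb") as f:
            f.truncate(rounded)        # sparse: no physical space used yet
        return {"key": key, "uuid": key, "name": name,
                "description": description, "read_write": True,
                "sharable": sharable, "virtual_size": rounded,
                "physical_utilisation": 0, "uri": ["file://" + path],
                "keys": {}, "volume_type": "Data", "cbt_enabled": False}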
Method: snapshot
[snapshot sr volume] creates a new volume which is a snapshot of [volume] in [sr]. Snapshots should never be written to; they are intended for backup/restore only. Note that the name and description are copied, but any extra metadata associated by [set] is not copied. This can raise Activated_on_another_host(host_installation_uuid) if the VDI is already active on another host and snapshots can only be taken on the host that has the VDI active (if any); XAPI will take care of redirecting the request to the proper host.
Client
{
"method": "Volume.snapshot",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 14
}
try
let volume = Client.snapshot dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.snapshot({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
try
let volume = Client.snapshot dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def snapshot(self, dbg, sr, key):
"""
[snapshot sr volume] creates a new volume which is a snapshot of
[volume] in [sr]. Snapshots should never be written to; they are
intended for backup/restore only. Note the name and description are
copied but any extra metadata associated by [set] is not copied.
This can raise Activated_on_another_host(host_installation_uuid)
if the VDI is already active on another host and snapshots
can only be taken on the host that has the VDI active (if any).
XAPI will take care of redirecting the request to the proper host
"""
return {"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
volume | out | volume | Properties of the volume |
Method: clone
[clone sr volume] creates a new volume which is a writable clone of [volume] in [sr]. Note that the name and description are copied, but any extra metadata associated by [set] is not copied.
Client
{
"method": "Volume.clone",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 15
}
try
let volume = Client.clone dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.clone({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
try
let volume = Client.clone dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def clone(self, dbg, sr, key):
"""
[clone sr volume] creates a new volume which is a writable clone of
[volume] in [sr]. Note the name and description are copied but any
extra metadata associated by [set] is not copied.
"""
return {"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
volume | out | volume | Properties of the volume |
Method: copy
[copy sr volume dest_sr] creates a new volume as a writable copy of [volume] in [dest_sr]. [dest_sr] may be the same as [sr], and the operation may be rejected if the volume management plugin cannot copy a volume between different SRs. It is expected that this operation is accelerated by the implementation and is more efficient than a blockwise replication through the local host. This operation should only be called if the plugin declares the VDI_COPY feature in the query response. If the operation is rejected then the caller will be expected to fall back to performing a blockwise copy.
Client
{
"method": "Volume.copy",
"params": [
{ "dest_sr": "dest_sr", "key": "key", "sr": "sr", "dbg": "dbg" }
],
"id": 16
}
try
let volume = Client.copy dbg sr key dest_sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.copy({ dbg: "string", sr: "string", key: "string", dest_sr: "string" })
print(repr(results))
Server
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
try
let volume = Client.copy dbg sr key dest_sr in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def copy(self, dbg, sr, key, dest_sr):
"""
[copy sr volume dest_sr] creates a new volume as a writable copy of
[volume] in [dest_sr]. [dest_sr] may be the same as [sr] and the operation
may be rejected if the volume management plugin cannot copy a volume
between different SRs. It is expected that this operation is accelerated
by the implementation and is more efficient than a blockwise replication
through the local host. This operation should only be called if the plugin
declares the VDI_COPY feature in the query response. If the operation is
rejected then the caller will be expected to fall back to performing a
blockwise copy.
"""
return {"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
dest_sr | in | string | The Destination Storage Repository |
volume | out | volume | Properties of the volume |
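As a sketch, a plugin that cannot accelerate cross-SR copies might only handle the same-SR case and reject everything else, leaving the caller to fall back to a blockwise copy. The Unimplemented error constructor is assumed to come from the generated bindings, and the stat call reuses the plugin's own implementation (see Method: stat below).

import os
import shutil
import uuid as uuid_mod

class Volume_myimplementation(Volume_skeleton):
    def copy(self, dbg, sr, key, dest_sr):
        if dest_sr != sr:
            # rejected: the caller is expected to fall back to a blockwise copy
            raise Unimplemented("cross-SR copy")
        new_key = str(uuid_mod.uuid4())
        shutil.copyfile(os.path.join(sr, key), os.path.join(sr, new_key))
        # reuse stat to build the volume record for the new copy
        return self.stat(dbg, sr, new_key)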
Method: destroy
[destroy sr volume] removes [volume] from [sr]
Client
{
"method": "Volume.destroy",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 17
}
try
let () = Client.destroy dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.destroy({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
null
try
let () = Client.destroy dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def destroy(self, dbg, sr, key):
"""
[destroy sr volume] removes [volume] from [sr]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
Method: set_name
[set_name sr volume new_name] changes the name of [volume]
Client
{
"method": "Volume.set_name",
"params": [
{ "new_name": "new_name", "key": "key", "sr": "sr", "dbg": "dbg" }
],
"id": 18
}
try
let () = Client.set_name dbg sr key new_name in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.set_name({ dbg: "string", sr: "string", key: "string", new_name: "string" })
print(repr(results))
Server
null
try
let () = Client.set_name dbg sr key new_name in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def set_name(self, dbg, sr, key, new_name):
"""
[set_name sr volume new_name] changes the name of [volume]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
new_name | in | string | New name |
Method: set_description
[set_description sr volume new_description] changes the description of [volume]
Client
{
"method": "Volume.set_description",
"params": [
{
"new_description": "new_description",
"key": "key",
"sr": "sr",
"dbg": "dbg"
}
],
"id": 19
}
try
let () = Client.set_description dbg sr key new_description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.set_description({ dbg: "string", sr: "string", key: "string", new_description: "string" })
print(repr(results))
Server
null
try
let () = Client.set_description dbg sr key new_description in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def set_description(self, dbg, sr, key, new_description):
"""
[set_description sr volume new_description] changes the description
of [volume]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
new_description | in | string | New description |
Method: set
[set sr volume key value] associates [key] with [value] in the metadata of [volume]. Note that these keys and values are not interpreted by the plugin; they are intended for the higher-level software only.
Client
{
"method": "Volume.set",
"params": [
{ "v": "v", "k": "k", "key": "key", "sr": "sr", "dbg": "dbg" }
],
"id": 20
}
try
let () = Client.set dbg sr key k v in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.set({ dbg: "string", sr: "string", key: "string", k: "string", v: "string" })
print(repr(results))
Server
null
try
let () = Client.set dbg sr key k v in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def set(self, dbg, sr, key, k, v):
"""
[set sr volume key value] associates [key] with [value] in the
metadata of [volume]. Note that these keys and values are not interpreted
by the plugin; they are intended for the higher-level software only.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
k | in | string | Key |
v | in | string | Value |
Method: unset
[unset sr volume key] removes [key] and any value associated with it from the metadata of [volume]. Note that these keys and values are not interpreted by the plugin; they are intended for the higher-level software only.
Client
{
"method": "Volume.unset",
"params": [ { "k": "k", "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 21
}
try
let () = Client.unset dbg sr key k in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.unset({ dbg: "string", sr: "string", key: "string", k: "string" })
print(repr(results))
Server
null
try
let () = Client.unset dbg sr key k in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def unset(self, dbg, sr, key, k):
"""
[unset sr volume key] removes [key] and any value associated with it
from the metadata of [volume]. Note that these keys and values are not
interpreted by the plugin; they are intended for the higher-level
software only.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
k | in | string | Key |
Method: resize
[resize sr volume new_size] enlarges [volume] to be at least [new_size].
Client
{
"method": "Volume.resize",
"params": [ { "new_size": 0, "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 22
}
try
let () = Client.resize dbg sr key new_size in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.resize({ dbg: "string", sr: "string", key: "string", new_size: long(0) })
print(repr(results))
Server
null
try
let () = Client.resize dbg sr key new_size in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def resize(self, dbg, sr, key, new_size):
"""
[resize sr volume new_size] enlarges [volume] to be at least
[new_size].
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
new_size | in | int64 | New disk size |
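Continuing the file-backed sketch, resize only ever grows the sparse file, which satisfies the requirement that the volume ends up at least [new_size]:

import os

class Volume_myimplementation(Volume_skeleton):
    def resize(self, dbg, sr, key, new_size):
        path = os.path.join(sr, key)       # layout assumed from the create sketch
        if os.path.getsize(path) < new_size:
            with open(path, "r+b") as f:
                f.truncate(new_size)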
Method: stat
[stat sr volume] returns metadata associated with [volume].
Client
{
"method": "Volume.stat",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 23
}
try
let volume = Client.stat dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.stat({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "field_1": "value_1", "field_2": "value_2" },
"uri": [ "uri_1", "uri_2" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "optional_uuid",
"key": "key"
}
try
let volume = Client.stat dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def stat(self, dbg, sr, key):
"""
[stat sr volume] returns metadata associated with [volume].
"""
return {"key": "string", "uuid": None, "name": "string", "description": "string", "read_write": True, "sharable": True, "virtual_size": long(0), "physical_utilisation": long(0), "uri": ["string"], "keys": {"string": "string"}, "volume_type": None, "cbt_enabled": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
volume | out | volume | Properties of the volume |
Method: compare
[compare sr volume1 volume2] compares the two volumes and returns a result of type blocklist that describes the differences between the two volumes. If the two volumes are unrelated, or the second volume does not exist, the result will be a list of the blocks that are non-empty in volume1. If this information is not available to the plugin, it should return a result indicating that all blocks are in use.
Client
{
"method": "Volume.compare",
"params": [ { "key2": "key2", "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 24
}
try
let blocklist = Client.compare dbg sr key key2 in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.compare({ dbg: "string", sr: "string", key: "string", key2: "string" })
print(repr(results))
Server
{ "ranges": [ [ 0, 0 ], [ 0, 0 ] ], "blocksize": 0 }
try
let blocklist = Client.compare dbg sr key key2 in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def compare(self, dbg, sr, key, key2):
"""
[compare sr volume1 volume2] compares the two volumes and returns a
result of type blocklist that describes the differences between the
two volumes. If the two volumes are unrelated, or the second volume
does not exist, the result will be a list of the blocks that are
non-empty in volume1. If this information is not available to the
plugin, it should return a result indicating that all blocks are in
use.
"""
return {"blocksize": long(0), "ranges": [[]]}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
key2 | in | key | The volume key |
unnamed | out | blocklist | List of blocks for copying. |
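A conservative sketch of compare for a plugin with no block-level information: as the description above allows, it reports every block of the first volume as in use. The 64 KiB reporting unit is an arbitrary choice.

import os

class Volume_myimplementation(Volume_skeleton):
    def compare(self, dbg, sr, key, key2):
        blocksize = 64 * 1024              # an arbitrary reporting unit
        size = os.path.getsize(os.path.join(sr, key))
        blocks = (size + blocksize - 1) // blocksize
        # a single range covering the whole of volume1, in units of [blocksize]
        return {"blocksize": blocksize, "ranges": [[0, blocks]]}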
Method: similar_content
[similar_content sr volume] returns a list of VDIs which have similar content to [volume]
Client
{
"method": "Volume.similar_content",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 25
}
try
let key_list = Client.similar_content dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.similar_content({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
[ "key list_1", "key list_2" ]
try
let key_list = Client.similar_content dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def similar_content(self, dbg, sr, key):
"""
[similar_content sr volume] returns a list of VDIs which have similar
content to [volume]
"""
return ["string"]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
key list | out | key_list | List of volume keys |
Method: enable_cbt
[enable_cbt sr volume] enables Changed Block Tracking for [volume]
Client
{
"method": "Volume.enable_cbt",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 26
}
try
let () = Client.enable_cbt dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.enable_cbt({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
null
try
let () = Client.enable_cbt dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def enable_cbt(self, dbg, sr, key):
"""
[enable_cbt sr volume] enables Changed Block Tracking for [volume]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
Method: disable_cbt
[disable_cbt sr volume] disables Changed Block Tracking for [volume]
Client
{
"method": "Volume.disable_cbt",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 27
}
try
let () = Client.disable_cbt dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.disable_cbt({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
null
try
let () = Client.disable_cbt dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def disable_cbt(self, dbg, sr, key):
"""
[disable_cbt sr volume] disables Changed Block Tracking for [volume]
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
Method: data_destroy
[data_destroy sr volume] deletes the data of the snapshot [volume] without deleting its changed block tracking metadata
Client
{
"method": "Volume.data_destroy",
"params": [ { "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 28
}
try
let () = Client.data_destroy dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.data_destroy({ dbg: "string", sr: "string", key: "string" })
print(repr(results))
Server
null
try
let () = Client.data_destroy dbg sr key in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def data_destroy(self, dbg, sr, key):
"""
[data_destroy sr volume] deletes the data of the snapshot [volume]
without deleting its changed block tracking metadata
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
Method: list_changed_blocks
[list_changed_blocks sr volume1 volume2 offset length] returns the blocks that have changed between [volume1] and [volume2] in the extent specified by the given [offset] and [length] as a base64-encoded bitmap string. If this extent is not aligned to the granularity of the returned bitmap, then the bitmap will cover the area extended to the nearest block boundaries.
Client
{
"method": "Volume.list_changed_blocks",
"params": [
{
"length": 0,
"offset": 0,
"key2": "key2",
"key": "key",
"sr": "sr",
"dbg": "dbg"
}
],
"id": 29
}
try
let changed_blocks = Client.list_changed_blocks dbg sr key key2 offset length in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.list_changed_blocks({ dbg: "string", sr: "string", key: "string", key2: "string", offset: long(0), length: long(0) })
print(repr(results))
Server
{ "bitmap": "bitmap", "granularity": 0 }
try
let changed_blocks = Client.list_changed_blocks dbg sr key key2 offset length in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def list_changed_blocks(self, dbg, sr, key, key2, offset, length):
"""
[list_changed_blocks sr volume1 volume2 offset length] returns the
blocks that have changed between [volume1] and [volume2] in the extent
specified by the given [offset] and [length] as a base64-encoded
bitmap string. If this extent is not aligned to the granularity of the
returned bitmap, then the bitmap will cover the area extended to the
nearest block boundaries.
"""
return {"granularity": long(0), "bitmap": "string"}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
key2 | in | key | The volume key |
offset | in | int64 | The offset of the extent for which changed blocks should be computed |
length | in | int | The length of the extent for which changed blocks should be computed |
changed_blocks | out | changed_blocks | The changed blocks between two volumes in the specified extent |
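A hedged sketch showing how the requested extent is extended to block boundaries and encoded as a base64 bitmap. Marking every block as changed is the safe answer for a plugin without real CBT data; the 64 KiB granularity and the MSB-first bit order are assumptions.

import base64

class Volume_myimplementation(Volume_skeleton):
    def list_changed_blocks(self, dbg, sr, key, key2, offset, length):
        granularity = 64 * 1024
        # extend the extent to the nearest block boundaries
        first = offset // granularity
        last = (offset + length + granularity - 1) // granularity
        nbits = last - first
        bitmap = bytearray((nbits + 7) // 8)
        for i in range(nbits):
            bitmap[i // 8] |= 0x80 >> (i % 8)   # every block reported as changed
        return {"granularity": granularity,
                "bitmap": base64.b64encode(bytes(bitmap)).decode("ascii")}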
Method: compose
[compose sr child_volume parent_volume] layers the updates from [child_volume] onto [parent_volume], modifying [child_volume]. In the case of a delta file format this means updating [child_volume] to have a parent or backing object defined by [parent_volume]. Implementations must declare the VDI_COMPOSE feature for this method to be supported. After a successful return it should be assumed that [parent_volume] is no longer valid; calling SR.ls will return the list of currently known, valid volumes.
Client
{
"method": "Volume.compose",
"params": [ { "key2": "key2", "key": "key", "sr": "sr", "dbg": "dbg" } ],
"id": 30
}
try
let () = Client.compose dbg sr key key2 in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Volume.compose({ dbg: "string", sr: "string", key: "string", key2: "string" })
print(repr(results))
Server
null
try
let () = Client.compose dbg sr key key2 in
...
with Exn (Sr_not_attached str) -> ...
| Exn (SR_does_not_exist str) -> ...
| Exn (Volume_does_not_exist str) -> ...
| Exn (Unimplemented str) -> ...
| Exn (Cancelled str) -> ...
| Exn (Activated_on_another_host str) -> ...
# import additional libraries if needed
class Volume_myimplementation(Volume_skeleton):
# by default each method will return a Not_implemented error
# ...
def compose(self, dbg, sr, key, key2):
"""
[compose sr child_volume parent_volume] layers the updates from
[child_volume] onto [parent_volume], modifying [child_volume].
In the case of a delta file format this means updating the
[child_volume] to have a parent or backing object defined by
[parent_volume]. Implementations must declare the VDI_COMPOSE
feature for this method to be supported. After a successful
return it should be assumed that [parent_volume] is no
longer valid; calling SR.ls will return the list of currently
known, valid volumes.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
sr | in | string | The Storage Repository |
key | in | key | The volume key |
key2 | in | key | The volume key |
Errors
exns
[ "Sr_not_attached", "exns" ]
[ "SR_does_not_exist", "exns" ]
[ "Volume_does_not_exist", "exns" ]
[ "Unimplemented", "exns" ]
[ "Cancelled", "exns" ]
[ "Activated_on_another_host", "exns" ]
type exns
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Sr_not_attached | string | An SR must be attached in order to access volumes |
SR_does_not_exist | string | The specified SR could not be found |
Volume_does_not_exist | string | The specified volume could not be found in the SR |
Unimplemented | string | The operation has not been implemented |
Cancelled | string | The operation has been cancelled |
Activated_on_another_host | string | The Volume is already active on another host |
task
The Task interface is required if the backend supports long-running tasks.
Type definitions
id
"id"
type id
= string
Unique identifier for a task.
volume_type
"Data"
"CBT_Metadata"
"Data_and_CBT_Metadata"
type volume_type
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Data | unit | Normal data volume |
CBT_Metadata | unit | CBT Metadata only, data destroyed |
Data_and_CBT_Metadata | unit | Both Data and CBT Metadata |
volume
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "keys": "keys" },
"uri": [ "uri" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "uuid",
"key": "key"
}
type volume
= struct { ... }
Members
Name | Type | Description |
---|---|---|
key | string | A primary key for this volume. The key must be unique within the enclosing Storage Repository (SR). A typical value would be a filename or an LVM volume name. |
uuid | string option | A uuid (or guid) for the volume, if one is available. If a storage system has a built-in notion of a guid, then it will be returned here. |
name | string | Short, human-readable label for the volume. Names are commonly used when displaying short lists of volumes. |
description | string | Longer, human-readable description of the volume. Descriptions are generally only displayed by clients when the user is examining volumes individually. |
read_write | bool | True means the VDI may be written to, false means the volume is read-only. Some storage media are read-only, so all their volumes are read-only; for example, .iso disk images on an NFS share. Some volumes are created read-only, for example because they are snapshots of some other VDI. |
sharable | bool | Indicates whether the VDI can be attached by multiple hosts at once. This is used for example by the HA statefile and XAPI redo log. |
virtual_size | int64 | Size of the volume from the perspective of a VM (in bytes) |
physical_utilisation | int64 | Amount of space currently used on the backing storage (in bytes) |
uri | string list | A list of URIs which can be opened by a datapath plugin for I/O. A URI could reference a local block device, a remote NFS share, an iSCSI LUN or an RBD volume. In cases where the data may be accessed over several protocols, the list should be sorted into descending order of desirability. Xapi will open the most desirable URI for which it has an available datapath plugin. |
keys | (string * string) list | A list of key=value pairs which have been stored in the Volume metadata. These should not be interpreted by the Volume plugin. |
volume_type | volume_type option | The content type of this volume |
cbt_enabled | bool option | True means that the storage datapath will track changed dirty blocks while writing and will be able to provide CBT Metadata when requested |
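As an illustration of the struct above, a plugin backed by sparse files on an NFS share might build a volume record as follows; the mount point and file-naming scheme are assumptions for the sketch.
import os
import uuid

def make_volume_record(sr_mountpoint, name, description, size_bytes):
    # Build a dict matching the volume struct above for a freshly
    # created sparse file under the (assumed) SR mount point.
    key = "%s.img" % uuid.uuid4()
    return {
        "key": key,
        "uuid": None,                   # no storage-native guid in this backend
        "name": name,
        "description": description,
        "read_write": True,
        "sharable": False,
        "virtual_size": size_bytes,
        "physical_utilisation": 0,      # sparse file: nothing allocated yet
        "uri": ["file://" + os.path.join(sr_mountpoint, key)],  # most desirable first
        "keys": {},
        "volume_type": "Data",
        "cbt_enabled": False,
    }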
async_result_t
"UnitResult"
[
"Volume",
{
"cbt_enabled": true,
"volume_type": "Data",
"keys": { "keys": "keys" },
"uri": [ "uri" ],
"physical_utilisation": 0,
"virtual_size": 0,
"sharable": true,
"read_write": true,
"description": "description",
"name": "name",
"uuid": "uuid",
"key": "key"
}
]
type async_result_t
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
UnitResult | unit | |
Volume | volume |
completion_t
{ "result": "UnitResult", "duration": 0.0 }
type completion_t
= struct { ... }
Members
Name | Type | Description |
---|---|---|
duration | float | |
result | async_result_t option |
state
[ "Pending", 0.0 ]
[ "Completed", { "result": "UnitResult", "duration": 0.0 } ]
[ "Failed", "state" ]
type state
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Pending | float | The task is in progress, with progress info from 0..1. |
Completed | completion_t | |
Failed | string |
task
{
"state": [ "Pending", 0.0 ],
"ctime": 0.0,
"debug_info": "debug_info",
"id": "id"
}
type task
= struct { ... }
Members
Name | Type | Description |
---|---|---|
id | string | |
debug_info | string | |
ctime | float | |
state | state |
task_list
[ "task_list" ]
[]
type task_list
= string list
Interface: Task
The task interface is for querying the status of asynchronous tasks. All long-running operations are associated with tasks, including copying and mirroring of data.
Method: stat
[stat task_id] returns the status of the task
Client
{
"method": "Task.stat",
"params": [ { "id": "id", "dbg": "dbg" } ],
"id": 45
}
try
let result = Client.stat dbg id in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Task.stat({"dbg": "string", "id": "string"})
print(repr(results))
Server
{
"state": [ "Pending", 0.0 ],
"ctime": 0.0,
"debug_info": "debug_info",
"id": "id"
}
try
let result = Client.stat dbg id in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Task_myimplementation(Task_skeleton):
# by default each method will return a Not_implemented error
# ...
def stat(self, dbg, id):
"""
[stat task_id] returns the status of the task
"""
return {"id": "string", "debug_info": "string", "ctime": 1.1, "state": None}
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
id | in | id | Unique identifier for a task. |
result | out | task |
Method: cancel
[cancel task_id] performs a best-effort cancellation of an ongoing task. The effect of this should leave the system in one of two states: either the task has completed successfully, or it is as if the task had never been started at all. The call should return immediately and the status of the task can then be queried via the [stat] call.
Client
{
"method": "Task.cancel",
"params": [ { "id": "id", "dbg": "dbg" } ],
"id": 46
}
try
let () = Client.cancel dbg id in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Task.cancel({"dbg": "string", "id": "string"})
print(repr(results))
Server
null
try
let () = Client.cancel dbg id in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Task_myimplementation(Task_skeleton):
# by default each method will return a Not_implemented error
# ...
def cancel(self, dbg, id):
"""
[cancel task_id] performs a best-effort cancellation of an ongoing
task. The effect of this should leave the system in one of two
states: either the task has completed successfully, or it is as if
the task had never been started at all. The call should return
immediately and the status of the task can then be queried via the
[stat] call.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
id | in | id | Unique identifier for a task. |
Method: destroy
[destroy task_id] should remove all traces of the task_id. This call should fail if the task is currently in progress.
Client
{
"method": "Task.destroy",
"params": [ { "id": "id", "dbg": "dbg" } ],
"id": 47
}
try
let () = Client.destroy dbg id in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Task.destroy({"dbg": "string", "id": "string"})
print(repr(results))
Server
null
try
let () = Client.destroy dbg id in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Task_myimplementation(Task_skeleton):
# by default each method will return a Not_implemented error
# ...
def destroy(self, dbg, id):
"""
[destroy task_id] should remove all traces of the task_id. This call
should fail if the task is currently in progress.
"""
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
id | in | id | Unique identifier for a task. |
Method: ls
[ls] should return a list of all of the tasks the plugin is aware of.
Client
{ "method": "Task.ls", "params": [ { "dbg": "dbg" } ], "id": 48 }
try
let task_list = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import necessary libraries if needed
# we assume that your library providing the client is called myclient and it provides a connect method
import myclient
if __name__ == "__main__":
c = myclient.connect()
results = c.Task.ls({"dbg": "string"})
print(repr(results))
Server
[ "task_list_1", "task_list_2" ]
try
let task_list = Client.ls dbg in
...
with Exn (Unimplemented str) -> ...
# import additional libraries if needed
class Task_myimplementation(Task_skeleton):
# by default each method will return a Not_implemented error
# ...
def ls(self, dbg):
"""
[ls] should return a list of all of the tasks the plugin is aware of.
"""
return ["string"]
# ...
Name | Direction | Type | Description |
---|---|---|---|
dbg | in | string | Debug context from the caller |
unnamed | out | task_list |
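One simple way to satisfy all four Task methods is an in-memory registry of records shaped like the task struct above. This is a sketch only: the Task_skeleton base class comes from the generated bindings, while the register helper, locking, and persistence across restarts are hypothetical or left out.
import time
import uuid

class Task_myimplementation(Task_skeleton):
    # task id -> task record, shared by all instances
    tasks = {}

    @classmethod
    def register(cls, debug_info):
        # Hypothetical helper the plugin calls when starting a long-running operation.
        task_id = str(uuid.uuid4())
        cls.tasks[task_id] = {
            "id": task_id,
            "debug_info": debug_info,
            "ctime": time.time(),
            "state": ["Pending", 0.0],
        }
        return task_id

    def stat(self, dbg, id):
        return self.tasks[id]

    def cancel(self, dbg, id):
        # Best-effort: mark the task as failed so callers observe via stat
        # that it will not complete. A real plugin would also interrupt
        # the underlying operation and roll back its effects.
        task = self.tasks[id]
        if task["state"][0] == "Pending":
            task["state"] = ["Failed", "cancelled"]

    def destroy(self, dbg, id):
        # The interface requires destroy to fail while the task is in progress.
        if self.tasks[id]["state"][0] == "Pending":
            raise Exception("task %s is still pending" % id)
        del self.tasks[id]

    def ls(self, dbg):
        return list(self.tasks.keys())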
Errors
exnt
[ "Unimplemented", "exnt" ]
type exnt
= variant { ... }
Constructors
Name | Type | Description |
---|---|---|
Unimplemented | string |