Process events from xenopsd in a timely manner

Design document
Revision: v1
Status: proposed
Review: create new issue

Background

During a bootstorm there is a significant delay between a VM being unpaused and XAPI reporting it as started. The VM may already be able to send UDP packets while XAPI still reports it as not started for minutes.

XAPI currently processes all events from xenopsd in a single thread, so unpause events get queued up behind the many other events generated by the already running VMs.

We need to ensure that unpause events from xenopsd get processed in a timely manner, even if XAPI is busy processing other events.

Timely processing of events

If we process the events in a round-robin fashion across VMs, then unpause events are reported in a timely manner. We also need to ensure that events operating on the same VM are not processed in parallel.

Xenopsd already has code that does exactly this. The purpose of the xapi-work-queues refactoring PR is to reuse this code in XAPI by creating a shared package between xenopsd and xapi: xapi-work-queues.

xapi-work-queues

From the documentation of the new Worker Pool interface:

A worker pool has a limited number of worker threads. Each worker pops one tagged item from the queue in a round-robin fashion. While the item is executed the tag temporarily doesn’t participate in round-robin scheduling. If during execution more items get queued with the same tag they get redirected to a private queue. Once the item finishes execution the tag will participate in RR scheduling again.

This ensures that items with the same tag do not get executed in parallel, and that a tag with a lot of items does not starve the execution of other tags.
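As a rough illustration of these semantics, here is a minimal, single-threaded OCaml sketch. The Redirector name, its push/pop/finished functions, and the internal bookkeeping are assumptions made for this example and not the actual xapi-work-queues interface:

```ocaml
module Redirector = struct
  type 'a t = {
    queues : (string, 'a Queue.t) Hashtbl.t;  (* one queue of items per tag *)
    mutable order : string list;              (* round-robin order of tags *)
    mutable active : string list;             (* tags currently being executed *)
  }

  let create () = { queues = Hashtbl.create 16; order = []; active = [] }

  (* Queue an item under a tag. If the tag is currently executing, the item
     still accumulates in that tag's queue (the "private queue" in the text)
     and only becomes eligible again once the tag finishes. *)
  let push t tag item =
    let q =
      match Hashtbl.find_opt t.queues tag with
      | Some q -> q
      | None ->
          let q = Queue.create () in
          Hashtbl.add t.queues tag q;
          t.order <- t.order @ [ tag ];
          q
    in
    Queue.push item q

  (* Pop the next item in round-robin order, skipping tags that are empty or
     currently executing, so items with the same tag never run in parallel. *)
  let pop t =
    let rec find = function
      | [] -> None
      | tag :: rest -> (
          match Hashtbl.find_opt t.queues tag with
          | Some q when (not (List.mem tag t.active)) && not (Queue.is_empty q) ->
              let item = Queue.pop q in
              (* move the tag to the back so other tags get their turn *)
              t.order <- List.filter (fun x -> x <> tag) t.order @ [ tag ];
              t.active <- tag :: t.active;
              Some (tag, item)
          | _ -> find rest)
    in
    find t.order

  (* Mark a tag as no longer executing so it rejoins round-robin scheduling. *)
  let finished t tag = t.active <- List.filter (fun x -> x <> tag) t.active
end
```

Items pushed while their tag is executing simply accumulate in that tag's queue, which plays the role of the private queue described above, and become eligible again once finished is called for the tag.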

The XAPI side of the changes will look like this:
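As a sketch of that wiring, building on the hypothetical Redirector above: each xenopsd event is tagged with the VM it concerns, so events for different VMs are handled round-robin while events for the same VM stay serialized. The xenops_event type, tag_of_event and handle_event are illustrative placeholders, not XAPI's actual code:

```ocaml
(* Illustrative event type: real xenopsd updates carry more information. *)
type xenops_event =
  | Vm_state_changed of string  (* id of a VM whose state needs re-reading *)
  | Vm_unpaused of string       (* id of a VM that has just been unpaused *)

(* Tag every event with the VM it concerns. *)
let tag_of_event = function
  | Vm_state_changed vm_id | Vm_unpaused vm_id -> vm_id

(* Placeholder for the work XAPI does per event (database updates etc.). *)
let handle_event = function
  | Vm_unpaused vm_id -> Printf.printf "marking VM %s as started\n" vm_id
  | Vm_state_changed vm_id -> Printf.printf "refreshing state of VM %s\n" vm_id

(* Producer side: push each incoming xenopsd event under its VM tag. *)
let enqueue pool event = Redirector.push pool (tag_of_event event) event

(* Worker side: drain the pool in round-robin order. A real worker pool runs
   several such loops on dedicated threads and blocks when there is no work. *)
let rec drain pool =
  match Redirector.pop pool with
  | Some (tag, event) ->
      handle_event event;
      Redirector.finished pool tag;
      drain pool
  | None -> ()

let () =
  let pool = Redirector.create () in
  List.iter (enqueue pool)
    [ Vm_state_changed "vm1"; Vm_state_changed "vm1"; Vm_unpaused "vm2" ];
  (* vm2's unpause is handled after at most one vm1 event, not after all of them *)
  drain pool
```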

Known limitations: the number of active per-VM events should be small; this is already ensured by the push_with_coalesce / should_keep code on the xenopsd side. Events from xenopsd to XAPI should already arrive coalesced.
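For illustration only, a rough sketch of the coalescing idea using the same hypothetical event type as above; the real push_with_coalesce / should_keep logic lives on the xenopsd side and is more involved:

```ocaml
(* Keep an already-queued event only if the incoming one does not supersede it.
   The predicate is illustrative: two pending state-change notifications for
   the same VM collapse into one because the handler re-reads the latest
   state anyway. *)
let should_keep ~incoming queued =
  match (incoming, queued) with
  | Vm_state_changed a, Vm_state_changed b when a = b -> false
  | _ -> true

let push_with_coalesce q incoming =
  (* Rebuild the queue, keeping only events not superseded by the new one. *)
  let kept =
    Queue.fold (fun acc e -> if should_keep ~incoming e then e :: acc else acc) [] q
  in
  Queue.clear q;
  List.iter (fun e -> Queue.push e q) (List.rev kept);
  Queue.push incoming q
```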