vhost-device

Design

This repository hosts various 'vhost-user' device backends in their own crates. See their individual README.md files for specific information about those crates.

To be included here, device backends must:

Here is the list of device backends that we support:

  • vhost-device-can
  • vhost-device-console
  • vhost-device-gpio
  • vhost-device-gpu
  • vhost-device-i2c
  • vhost-device-input
  • vhost-device-rng
  • vhost-device-scmi
  • vhost-device-scsi
  • vhost-device-sound
  • vhost-device-spi
  • vhost-device-vsock

The vhost-device workspace also provides a template to help new developers understand how to write their own vhost-user backend.
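
For example, one possible way to bootstrap a new backend from the template (a sketch only; the crate name vhost-device-foo is purely illustrative):

# Copy the template into a new crate (the name is illustrative)
cp -r vhost-device-template vhost-device-foo
# Rename the package in vhost-device-foo/Cargo.toml and add the new crate to the
# workspace members in the top-level Cargo.toml, then check that it builds and its tests pass:
cargo build -p vhost-device-foo
cargo test -p vhost-device-foo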

Staging Devices

Implementing a proper VirtIO device requires co-ordination between the specification, drivers and backend implementations. As these can all be in flux during development, it was decided to introduce a staging workspace, which allows developers to work within the main rust-vmm project while clearly marking the backends as not production ready.

To be included in the staging workspace there must at least be:

  • A public proposal to extend the VIRTIO specification
  • A public implementation of a device driver
  • Documentation pointing to the above

More information may be found in its README file.

Here is the list of device backends in staging:

Testing and Code Coverage

Like the wider rust-vmm project, we expect new features to come with comprehensive code coverage. However, as a multi-binary repository, there are cases where avoiding a drop in coverage can be hard, and an exception to the approach is allowable. These are:

  • adding a new binary target (aim for at least 60% overall coverage)
  • expanding the main function (a small drop is acceptable)

However, any new feature added to an existing binary should not cause a drop in coverage. The general aim should always be to improve coverage.
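
One way to check where a change leaves coverage before sending it out (a sketch, assuming cargo-llvm-cov is installed; the CI itself drives coverage through rust-vmm-ci and coverage_config_x86_64.json, which may use a different tool):

# Install the coverage tool once (assumption: cargo-llvm-cov)
cargo install cargo-llvm-cov
# Print a per-file and overall coverage summary for the whole workspace
cargo llvm-cov --workspace --summary-only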

Separation of Concerns

The binaries built by this repository can be run with any VMM that can act as a vhost-user frontend. They have typically been tested with QEMU, although the rust-vmm project also provides a vhost-user frontend crate for Rust-based VMMs.
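
As an illustration, a backend is usually started first, listening on a UNIX socket, and the VMM then connects to that socket as the vhost-user frontend. The sketch below uses vhost-device-rng with QEMU; the backend flag and the QEMU device name are assumptions here and should be checked against the crate's README and your QEMU version:

# Start the backend, listening on a vhost-user socket
# (--socket-path is assumed; check the crate's --help output)
vhost-device-rng --socket-path /tmp/rng.sock &

# Launch QEMU as the vhost-user frontend; guest memory must be shared with the
# backend, hence the memfd memory backend. Add your usual disk/kernel options.
qemu-system-x86_64 \
    -m 2G -machine q35,memory-backend=mem \
    -object memory-backend-memfd,id=mem,size=2G,share=on \
    -chardev socket,id=rng0,path=/tmp/rng.sock \
    -device vhost-user-rng-pci,chardev=rng0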

While it's possible to implement all parts of the backend inside the vhost-device workspace, consideration should be given to separating the VirtQueue handling and response logic into a device crate in the vm-virtio repository. This way, a monolithic rust-vmm VMM implementation can reuse the core logic to service the virtio requests directly in the application.

Build dependency

The GPIO crate needs a local installation of the libgpiod library to be available. If your distro ships libgpiod >= v2.0, then you should be fine.

Otherwise, you will need to build libgpiod yourself:

git clone --depth 1 --branch v2.0.x https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git/
cd libgpiod
./autogen.sh --prefix="$PWD/install/"
make install

In order to inform tools about the build location, you can now set:

export PKG_CONFIG_PATH="<PATH-TO-LIBGPIOD>/install/lib/pkgconfig/"

To avoid setting this in every terminal session, you can also configure Cargo to set it automatically.
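
One way to do that (a sketch, assuming a project-local .cargo/config.toml and Cargo's [env] table) is:

# Append an [env] table to the project-local Cargo config so builds pick up
# the local libgpiod installation automatically
cat >> .cargo/config.toml <<'EOF'
[env]
PKG_CONFIG_PATH = "<PATH-TO-LIBGPIOD>/install/lib/pkgconfig/"
EOF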

Xen support

Supporting Xen requires special handling while mapping the guest memory. The vm-memory crate implements Xen memory mapping support via a separate feature, xen, and this crate uses the same feature name to enable Xen support.

The rust-vmm maintainers decided to keep the interface simple and build the crate for either standard Unix memory mapping or Xen, but not both.
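
In practice, this means each binary is built either with the default Unix memory mapping or with the xen feature, never both at once. A sketch, using the I2C crate as the example (assuming it exposes the xen feature as described above):

# Default build: standard Unix memory mapping
cargo build --release -p vhost-device-i2c
# Xen build: enable the xen feature instead
cargo build --release -p vhost-device-i2c --features xen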