Making the device configuration polymorphic requires the device struct
to exist before the device parameters are checked and assigned to the
struct fields. This means either wrapping the struct fields in Option
unnecessarily or introducing other data contortions.
Let's extract the device configuration from traits to plain functions
in order to keep the device structs unencumbered.
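A minimal sketch of the two approaches (the type and function names here are illustrative, not the actual code):

```rust
// With polymorphic (trait-based) configuration, the struct must exist
// before its parameters are validated, forcing fields into Option:
struct FakeSensorWithOptions {
    name: Option<String>, // only Some after configuration has run
}

// With a plain configuration function, parameters are checked first and
// the struct is built fully initialized:
struct FakeSensor {
    name: String, // no Option needed
}

fn configure_fake_sensor(name: &str) -> Result<FakeSensor, String> {
    if name.is_empty() {
        return Err("sensor name must not be empty".to_string());
    }
    Ok(FakeSensor {
        name: name.to_string(),
    })
}
```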
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Writing `--device help` on the command line will list all the
available devices and their parameters.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Followup patches will allow connecting vhost-user-scmi to industrial
I/O devices. On hosts without IIO devices, it’s possible to use
emulated devices for testing. This patch documents how to use them
and also provides a slightly customized IIO dummy kernel module.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Different sensors will have similar handling. Let’s extract generic
parts from the FakeSensor implementation into reusable code, within the
limits of Rust.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
The code already contains support for creating devices that can serve
as SCMI-accessible sensors, and a sample fake device. But to actually
use the devices, the code must be modified.
This patch adds a command line option to define the devices on start.
The format of the option value is in the QEMU style:
DEVICE,PROPERTY=VALUE,…
For example:
--device fake,name=fake1 fake,name=fake2
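The QEMU-style format above can be parsed along these lines (a hypothetical helper, not the actual implementation):

```rust
/// Parse a device specification of the form "DEVICE,PROPERTY=VALUE,...",
/// returning the device name and its property/value pairs.
fn parse_device_spec(spec: &str) -> Option<(String, Vec<(String, String)>)> {
    let mut parts = spec.split(',');
    // The first comma-separated item is the device name.
    let device = parts.next()?.to_string();
    // The remaining items must each be a PROPERTY=VALUE pair.
    let mut properties = Vec::new();
    for part in parts {
        let (key, value) = part.split_once('=')?;
        properties.push((key.to_string(), value.to_string()));
    }
    Some((device, properties))
}
```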
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
This patch implements the necessary parts of the SCMI sensor
management protocol, required either by the SCMI standard or by Linux
VIRTIO SCMI drivers to function correctly. A part of this
implementation is a fake sensor device, which is useful for both unit
testing here and for testing with a real guest OS.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
This patch implements the mandatory parts of the SCMI base protocol.
This allows the daemon to communicate with the guest SCMI VIRTIO
device, although it doesn't provide any useful functionality yet.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
This patch adds support for a SCMI vhost-user device. It implements
the basic skeleton of the vhost-user daemon and of SCMI processing.
It doesn’t provide any real functionality yet; adding it will be the
subject of followup patches.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
cargo complains with the following otherwise:
some crates are on edition 2021 which defaults to resolver = 2,
but virtual workspaces default to resolver = 1
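The fix amounts to setting the resolver explicitly in the workspace manifest, roughly like this (the member list is illustrative):

```toml
# Workspace Cargo.toml fragment: declaring resolver = "2" explicitly
# silences the cargo warning about the edition-2021 default.
[workspace]
members = ["crates/rng", "crates/vsock"]
resolver = "2"
```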
Signed-off-by: Bilal Elmoussaoui <belmouss@redhat.com>
vhost-user-backend v0.10.0 introduced an issue that affects
all vhost-user backends. I easily reproduced the problem with
vhost-device-vsock: just restart the guest kernel and the
device no longer works.
vhost-user-backend v0.10.1 includes the fix [1] for that issue.
[1] https://github.com/rust-vmm/vhost/pull/180
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
The main target of this update is moving vm-memory to a newer stable
version, but let's update everything anyway.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Restrict the VMs a given VM can communicate with by introducing VM groups.
A group is simply a list of names assigned to the device in the
configuration. A VM can communicate with another VM only if the lists of
group names assigned to their devices have at least one group name in
common.
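The group check described above boils down to a list intersection, sketched here with a hypothetical helper:

```rust
/// Two VMs may communicate only if their devices share at least one
/// group name.
fn can_communicate(groups_a: &[&str], groups_b: &[&str]) -> bool {
    groups_a.iter().any(|g| groups_b.contains(g))
}
```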
Signed-off-by: Priyansh Rathi <techiepriyansh@gmail.com>
Currently, all the fields must be provided in the yaml config file,
otherwise the application panics. Modify this behaviour to allow
omitting the optional fields, making it consistent with specifying the
configuration using only CLI arguments.
Signed-off-by: Priyansh Rathi <techiepriyansh@gmail.com>
In the virtio standard, vsock uses 3 vqs. crosvm expects 3 vqs from a
vhost-user-vsock implementation, but this vhost-user-vsock device sets
up only 2 vqs because the event vq isn't handled, and that causes a
crash in crosvm. To avoid the crash, increase NUM_QUEUES to 3.
Signed-off-by: Jeongik Cha <jeongik@google.com>
The BACKEND_EVENT value depends on NUM_QUEUES, because it is the next
value after NUM_QUEUES, so set it based on NUM_QUEUES.
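The relationship can be sketched as follows (the constant names follow the commit text; the types and values are illustrative):

```rust
/// rx, tx and event queues, per the virtio vsock device specification.
const NUM_QUEUES: u16 = 3;
/// The backend event id comes right after the per-queue events, so
/// deriving it from NUM_QUEUES keeps the two in sync automatically.
const BACKEND_EVENT: u16 = NUM_QUEUES;
```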
Signed-off-by: Jeongik Cha <jeongik@google.com>
A clone of VsockConnection::stream is always used for epoll_register;
only in add_new_guest_conn is the original stream used.
Because a stream's raw fd is used as the key of listener_map, the
proper listener cannot be found after the first packet.
Signed-off-by: Jeongik Cha <jeongik@google.com>
There were some references left in the documentation and sources to
"vhost-user-scsi", the name we had changed during the rebase.
Let's change them to "vhost-device-scsi".
Everything should be safe.
We leave "vhost-user-scsi" in
crates/scsi/src/scsi/emulation/response_data.rs because it looks like
an identifier with some constant size. We will fix it in the future.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
All other devices follow the "vhost-device-*" pattern, while for
vsock we used "vhost-user-vsock". Let's rename this as well to be
consistent.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
`timer` variable does not need to be mutable as clippy reported:
warning: variable does not need to be mutable
--> crates/rng/src/vhu_rng.rs:127:17
|
127 | let mut timer = &mut self.timer;
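The fix is what clippy suggests: a binding that holds a mutable reference does not itself need to be mutable unless it is reassigned. A minimal illustration (simplified types, not the actual rng code):

```rust
struct Rng {
    timer: u64,
}

impl Rng {
    fn tick(&mut self) {
        // No `mut` needed on the binding: we mutate *through* the
        // reference, we never rebind `timer` itself.
        let timer = &mut self.timer;
        *timer += 1;
    }
}
```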
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
When we merged the SCSI device, we forgot to add it to the workspace
README.md with a link to the device README.md.
Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Currently, the `raw_pkts_queue` is processed only when a
`SIBLING_VM_EVENT` is received. But it may happen that the
`raw_pkts_queue` could not be processed completely due to insufficient
space in the RX virtqueue at that time. So, try to process raw packets
on other events too, similar to what happens in the RX of standard
packets.
Signed-off-by: Priyansh Rathi <techiepriyansh@gmail.com>
The deadlock occurs when two sibling VMs simultaneously try to send each
other packets. The `VhostUserVsockThread`s corresponding to both the VMs
hold their own locks while executing `thread_backend.send_pkt` and then
try to lock each other to access their counterpart's `raw_pkts_queue`.
This ultimately results in a deadlock.
Resolved by separating the mutex over `raw_pkts_queue` from the mutex over
`VhostUserVsockThread`.
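The shape of the fix can be sketched like this (hypothetical simplified types, not the actual vsock code): each raw-packet queue lives behind its own lock, so sending to a sibling never takes the sibling's thread-wide lock and the lock order cannot cycle.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// The raw packet queue gets its own mutex, shared via Arc, instead of
/// living under the mutex that guards the whole vsock thread.
struct RawPktsQueue {
    queue: Mutex<VecDeque<Vec<u8>>>,
}

struct VsockThread {
    // The rest of the thread state stays behind its own separate lock.
    raw_pkts_queue: Arc<RawPktsQueue>,
}

impl VsockThread {
    /// Sending to a sibling only locks the sibling's queue, never the
    /// sibling's whole thread, so two threads sending to each other
    /// simultaneously can no longer deadlock.
    fn send_raw_to(&self, sibling: &Arc<RawPktsQueue>, pkt: Vec<u8>) {
        sibling.queue.lock().unwrap().push_back(pkt);
    }
}
```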
Signed-off-by: Priyansh Rathi <techiepriyansh@gmail.com>
The vhost_user::Error::Disconnected error code is returned by the
daemon if the VM is shutting down. Don't warn the user in this case,
but just point out that the VM may be shutting down.
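The logging change amounts to special-casing the disconnect error, roughly like this (simplified, hypothetical types standing in for the vhost_user error type):

```rust
/// Stand-in for the relevant vhost-user error cases.
enum VhostUserError {
    Disconnected,
    Other(String),
}

/// Treat a disconnect as an expected event rather than a warning: the
/// VM may simply be shutting down.
fn log_handler_error(err: &VhostUserError) -> String {
    match err {
        VhostUserError::Disconnected => {
            "vhost-user connection closed, the VM may be shutting down".to_string()
        }
        VhostUserError::Other(msg) => format!("warning: {}", msg),
    }
}
```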
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>