vhost-device-gpu: Refactor vhost-device-gpu

This commit refactors vhost-device-gpu by separating
virglrenderer from rutabaga and using gfxstream via
rutabaga, simplifying future backend development.

This commit introduces a significant refactor of the
virtio-gpu backend architecture:
- Transition `gfxstream` support to use the `Rutabaga`
  abstraction.
- Decouple `virglrenderer` from `Rutabaga`, allowing
  it to be used standalone.
- Unify backend handling using thread-local storage
  and macro-based runtime dispatch.

Key Changes:
VirglRenderer Backend:
   - `virgl.rs` is now a standalone backend that
     directly calls `libvirglrenderer` functions.
   - Removed reliance on `rutabaga` for the virgl path.

Gfxstream Backend via Rutabaga:
   - Introduced `gfxstream.rs` backend using `rutabaga`.
   - Thread-local `GfxstreamAdapter` manages its own
     `Rutabaga` instance, initialized lazily.
   - Preserved internal `GfxstreamResource` tracking
     with scanout support and memory handling.

Renderer Selection Logic:
   - In `device.rs`, `lazy_init_and_handle_event()` now:
     - Dispatches `VirglRendererAdapter` and
       `GfxstreamAdapter` using thread-local storage (TLS);
       see the sketch below.
   - Introduced `extract_backend_and_vring()` helper for
     reusing backend setup logic.

Code Deduplication:
   - Abstracted common logic for both backends into `common.rs`.
   - Shared helpers reused between gfxstream and virgl.
   - Improved modularity with fewer duplicated error
     handling branches.

Testing and Validation:
   - Replaced `virtio_gpu.rs` testing paths with new unit
     tests for `gfxstream.rs` and `virgl.rs`.
   - Added code coverage for the newly refactored crate.
   - Updated the coverage file to reflect the drop in
     coverage caused by excluding some gfxstream tests
     from CI, since they cannot run without GPU drivers.
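
The thread-local dispatch above can be pictured with a short
sketch. This is illustrative only, not the actual `device.rs`
code: the minimal `Renderer` trait stands in for the one in
`renderer.rs`, and `with_backend` is a hypothetical helper.

```rust
use std::cell::RefCell;

// Minimal stand-in for the crate's `Renderer` trait (renderer.rs).
trait Renderer {
    fn event_poll(&self);
}

thread_local! {
    // One lazily created backend adapter per worker thread.
    static BACKEND: RefCell<Option<Box<dyn Renderer>>> = RefCell::new(None);
}

// Hypothetical helper: initialize the thread-local adapter on
// first use, then run `f` against it.
fn with_backend<R>(
    init: impl FnOnce() -> Box<dyn Renderer>,
    f: impl FnOnce(&mut dyn Renderer) -> R,
) -> R {
    BACKEND.with(|slot| {
        let mut slot = slot.borrow_mut();
        let backend = slot.get_or_insert_with(init);
        f(backend.as_mut())
    })
}
```

A macro can then stamp out this pattern per adapter type behind
the `backend-virgl`/`backend-gfxstream` features, which is the
macro-based runtime dispatch referred to above.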

Signed-off-by: Dorinda Bassey <dbassey@redhat.com>
Dorinda Bassey 2025-11-04 13:41:04 +01:00 committed by Stefano Garzarella
parent 907ec70922
commit 9f72a8187f
16 changed files with 3411 additions and 1759 deletions


@@ -1,5 +1,5 @@
{
"coverage_score": 85.94,
"coverage_score": 84.55,
"exclude_path": "xtask",
"crate_features": ""
}


@@ -5,6 +5,8 @@
### Changed
- [[#852]](https://github.com/rust-vmm/vhost-device/pull/890) vhost-device-gpu: Refactor vhost-device-gpu
### Fixed
### Deprecated


@@ -26,7 +26,7 @@ libc = "0.2"
log = "0.4"
[target.'cfg(not(target_env = "musl"))'.dependencies]
rutabaga_gfx = { version = "0.1.75", features = ["virgl_renderer"] }
rutabaga_gfx = "0.1.75"
thiserror = "2.0.17"
virglrenderer = {version = "0.1.2", optional = true }
vhost = { version = "0.14.0", features = ["vhost-user-backend"] }


@@ -87,39 +87,34 @@ Because blob resources are not yet supported, some capsets are limited:
- gfxstream-vulkan and gfxstream-gles support are exposed, but can practically only be used for display output; there is no hardware acceleration yet.
## Features
The device leverages the [rutabaga_gfx](https://crates.io/crates/rutabaga_gfx)
crate to provide rendering with virglrenderer and gfxstream.
This crate supports two GPU backends: gfxstream (default) and virglrenderer.
Both require the system-provided virglrenderer and minigbm libraries due to the dependence on rutabaga_gfx.
The **virglrenderer** backend uses the [virglrenderer-rs](https://crates.io/crates/virglrenderer-rs)
crate, which provides Rust bindings to the native virglrenderer library. It translates
OpenGL API and Vulkan calls to an intermediate representation and allows for OpenGL
acceleration on the host.
The **gfxstream** backend leverages the [rutabaga_gfx](https://crates.io/crates/rutabaga_gfx)
crate. With gfxstream rendering mode, GLES and Vulkan calls are forwarded to the host
with minimal modification.
Install the development packages for your distro, then build with:
```session
CROSVM_USE_SYSTEM_VIRGLRENDERER=1 \
CROSVM_USE_SYSTEM_MINIGBM=1 \
cargo build
$ cargo build
```
gfxstream support is compiled in by default; it can be disabled by building without the `backend-gfxstream` feature, for example:
```session
CROSVM_USE_SYSTEM_VIRGLRENDERER=1 \
CROSVM_USE_SYSTEM_MINIGBM=1 \
cargo build --no-default-features
$ cargo build --no-default-features
```
With Virglrenderer, Rutabaga translates OpenGL API and Vulkan calls to an
intermediate representation and allows for OpenGL acceleration on the host.
With the gfxstream rendering mode, GLES and Vulkan calls are forwarded to the
host with minimal modification.
## Examples
First start the daemon on the host machine using either of the two GPU modes:
1) `virglrenderer`
1) `virglrenderer` (if the crate has been compiled with the feature `backend-virgl`)
2) `gfxstream` (if the crate has been compiled with the feature `backend-gfxstream`)
```shell


@@ -0,0 +1,466 @@
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
use std::sync::{Arc, Mutex};
use log::{debug, error};
use vhost::vhost_user::{
gpu_message::{VhostUserGpuCursorPos, VhostUserGpuCursorUpdate, VhostUserGpuEdidRequest},
GpuBackend,
};
use vm_memory::VolatileSlice;
use crate::{
gpu_types::{FenceDescriptor, FenceState, Transfer3DDesc, VirtioGpuRing},
protocol::{
GpuResponse,
GpuResponse::{ErrUnspec, OkDisplayInfo, OkEdid, OkNoData},
VirtioGpuResult, VIRTIO_GPU_MAX_SCANOUTS,
},
renderer::Renderer,
};
#[derive(Debug, Clone)]
pub struct VirtioGpuScanout {
pub resource_id: u32,
}
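/// Bitmask of scanouts currently using a resource: bit N set means scanout N
/// is enabled for that resource.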
#[derive(Copy, Clone, Debug, Default)]
pub struct AssociatedScanouts(u32);
impl AssociatedScanouts {
#[allow(clippy::missing_const_for_fn)]
pub fn enable(&mut self, scanout_id: u32) {
self.0 |= 1 << scanout_id;
}
#[allow(clippy::missing_const_for_fn)]
pub fn disable(&mut self, scanout_id: u32) {
self.0 &= !(1 << scanout_id);
}
pub const fn has_any_enabled(self) -> bool {
self.0 != 0
}
pub fn iter_enabled(self) -> impl Iterator<Item = u32> {
(0..VIRTIO_GPU_MAX_SCANOUTS).filter(move |i| ((self.0 >> i) & 1) == 1)
}
}
pub const VHOST_USER_GPU_MAX_CURSOR_DATA_SIZE: usize = 16384; // 4*4*1024
pub const READ_RESOURCE_BYTES_PER_PIXEL: usize = 4;
#[derive(Copy, Clone, Debug, Default)]
pub struct CursorConfig {
pub width: u32,
pub height: u32,
}
impl CursorConfig {
pub const fn expected_buffer_len(self) -> usize {
self.width as usize * self.height as usize * READ_RESOURCE_BYTES_PER_PIXEL
}
}
pub fn common_display_info(gpu_backend: &GpuBackend) -> VirtioGpuResult {
let backend_display_info = gpu_backend.get_display_info().map_err(|e| {
error!("Failed to get display info: {e:?}");
ErrUnspec
})?;
let display_info = backend_display_info
.pmodes
.iter()
.map(|display| (display.r.width, display.r.height, display.enabled == 1))
.collect::<Vec<_>>();
debug!("Displays: {display_info:?}");
Ok(OkDisplayInfo(display_info))
}
pub fn common_get_edid(
gpu_backend: &GpuBackend,
edid_req: VhostUserGpuEdidRequest,
) -> VirtioGpuResult {
debug!("edid request: {edid_req:?}");
let edid = gpu_backend.get_edid(&edid_req).map_err(|e| {
error!("Failed to get edid from frontend: {e}");
ErrUnspec
})?;
Ok(OkEdid {
blob: Box::from(&edid.edid[..edid.size as usize]),
})
}
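/// Records a pending fence on `ring` unless it has already completed.
/// Returns `true` if the fence was already signaled (the caller can complete
/// the descriptor immediately), or `false` if a `FenceDescriptor` was queued
/// for later completion.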
pub fn common_process_fence(
fence_state: &Arc<Mutex<FenceState>>,
ring: VirtioGpuRing,
fence_id: u64,
desc_index: u16,
len: u32,
) -> bool {
// In case the fence is signaled immediately after creation, don't add a return
// FenceDescriptor.
let mut fence_state = fence_state.lock().unwrap();
if fence_id > *fence_state.completed_fences.get(&ring).unwrap_or(&0) {
fence_state.descs.push(FenceDescriptor {
ring,
fence_id,
desc_index,
len,
});
false
} else {
true
}
}
pub fn common_move_cursor(
gpu_backend: &GpuBackend,
resource_id: u32,
cursor: VhostUserGpuCursorPos,
) -> VirtioGpuResult {
if resource_id == 0 {
gpu_backend.cursor_pos_hide(&cursor).map_err(|e| {
error!("Failed to set cursor pos from frontend: {e}");
ErrUnspec
})?;
} else {
gpu_backend.cursor_pos(&cursor).map_err(|e| {
error!("Failed to set cursor pos from frontend: {e}");
ErrUnspec
})?;
}
Ok(GpuResponse::OkNoData)
}
/// Reads cursor resource data into a buffer using transfer_read.
/// Returns a boxed slice containing the cursor pixel data.
pub fn common_read_cursor_resource(
renderer: &mut dyn Renderer,
resource_id: u32,
config: CursorConfig,
) -> Result<Box<[u8]>, GpuResponse> {
let mut data = vec![0u8; config.expected_buffer_len()].into_boxed_slice();
let transfer = Transfer3DDesc {
x: 0,
y: 0,
z: 0,
w: config.width,
h: config.height,
d: 1,
level: 0,
stride: config.width * READ_RESOURCE_BYTES_PER_PIXEL as u32,
layer_stride: 0,
offset: 0,
};
// Create VolatileSlice from the buffer
// SAFETY: The buffer is valid for the entire duration of the transfer_read call
let volatile_slice = unsafe { VolatileSlice::new(data.as_mut_ptr(), data.len()) };
// ctx_id 0 is used for direct resource operations
renderer
.transfer_read(0, resource_id, transfer, Some(volatile_slice))
.map_err(|e| {
error!("Failed to read cursor resource: {e:?}");
ErrUnspec
})?;
Ok(data)
}
pub fn common_update_cursor(
gpu_backend: &GpuBackend,
cursor_pos: VhostUserGpuCursorPos,
hot_x: u32,
hot_y: u32,
data: &[u8],
config: CursorConfig,
) -> VirtioGpuResult {
let expected_len = config.expected_buffer_len();
if data.len() != expected_len {
error!(
"Mismatched cursor data size: expected {}, got {}",
expected_len,
data.len()
);
return Err(ErrUnspec);
}
let data_ref: &[u8] = data;
let cursor_update = VhostUserGpuCursorUpdate {
pos: cursor_pos,
hot_x,
hot_y,
};
let mut padded_data = [0u8; VHOST_USER_GPU_MAX_CURSOR_DATA_SIZE];
padded_data[..data_ref.len()].copy_from_slice(data_ref);
gpu_backend
.cursor_update(&cursor_update, &padded_data)
.map_err(|e| {
error!("Failed to update cursor: {e}");
ErrUnspec
})?;
Ok(OkNoData)
}
pub fn common_set_scanout_disable(scanouts: &mut [Option<VirtioGpuScanout>], scanout_idx: usize) {
scanouts[scanout_idx] = None;
debug!("Disabling scanout scanout_id={scanout_idx}");
}
#[cfg(test)]
mod tests {
use std::{
os::unix::net::UnixStream,
sync::{Arc, Mutex},
};
use assert_matches::assert_matches;
use super::*;
use crate::{
gpu_types::VirtioGpuRing,
protocol::{GpuResponse::ErrUnspec, VIRTIO_GPU_MAX_SCANOUTS},
};
const CURSOR_POS: VhostUserGpuCursorPos = VhostUserGpuCursorPos {
scanout_id: 0,
x: 0,
y: 0,
};
const CURSOR_CONFIG: CursorConfig = CursorConfig {
width: 4,
height: 4,
};
const BYTES_PER_PIXEL: usize = 4;
const EXPECTED_LEN: usize =
(CURSOR_CONFIG.width as usize) * (CURSOR_CONFIG.height as usize) * BYTES_PER_PIXEL;
fn dummy_gpu_backend() -> GpuBackend {
let (_, backend) = UnixStream::pair().unwrap();
GpuBackend::from_stream(backend)
}
// AssociatedScanouts
// Test that enabling, disabling, iterating, and checking any enabled works as
// expected.
#[test]
fn associated_scanouts_enable_disable_iter_and_any() {
let mut assoc = AssociatedScanouts::default();
// No scanouts initially
assert!(!assoc.has_any_enabled());
assert_eq!(assoc.iter_enabled().count(), 0);
// Enable a couple
assoc.enable(0);
assoc.enable(3);
assert!(assoc.has_any_enabled());
assert_eq!(assoc.iter_enabled().collect::<Vec<u32>>(), vec![0u32, 3u32]);
// Disable one
assoc.disable(3);
assert!(assoc.has_any_enabled());
assert_eq!(assoc.iter_enabled().collect::<Vec<u32>>(), vec![0u32]);
// Disable last
assoc.disable(0);
assert!(!assoc.has_any_enabled());
assert_eq!(assoc.iter_enabled().count(), 0);
}
// CursorConfig
// Test that expected_buffer_len computes the correct size.
#[test]
fn cursor_config_expected_len() {
let cfg = CursorConfig {
width: 64,
height: 64,
};
assert_eq!(
cfg.expected_buffer_len(),
64 * 64 * READ_RESOURCE_BYTES_PER_PIXEL
);
}
// Update cursor
// Test that updating the cursor with mismatched data size fails.
#[test]
fn update_cursor_mismatched_data_size_fails() {
let gpu_backend = dummy_gpu_backend();
// Data has length 1 (expected is 64)
let bad_data = [0u8];
let result = common_update_cursor(&gpu_backend, CURSOR_POS, 0, 0, &bad_data, CURSOR_CONFIG);
assert_matches!(result, Err(ErrUnspec), "Should fail due to mismatched size");
}
// Test that updating the cursor with correct data size but backend failure
// returns ErrUnspec.
#[test]
fn update_cursor_backend_failure() {
let gpu_backend = dummy_gpu_backend();
// Data has the correct length (64 bytes)
let correct_data = vec![0u8; EXPECTED_LEN];
let result =
common_update_cursor(&gpu_backend, CURSOR_POS, 0, 0, &correct_data, CURSOR_CONFIG);
assert_matches!(
result,
Err(ErrUnspec),
"Should fail due to failure to update cursor"
);
}
// Fence handling
// Test that processing a fence pushes a descriptor when the fence is new.
#[test]
fn process_fence_pushes_descriptor_when_new() {
let fence_state = Arc::new(Mutex::new(FenceState::default()));
let ring = VirtioGpuRing::Global;
// Clone because common_process_fence takes ownership of ring
let ret = common_process_fence(&fence_state, ring.clone(), 42, 7, 512);
assert!(!ret, "New fence should not complete immediately");
let st = fence_state.lock().unwrap();
assert_eq!(st.descs.len(), 1);
assert_eq!(st.descs[0].ring, ring);
assert_eq!(st.descs[0].fence_id, 42);
assert_eq!(st.descs[0].desc_index, 7);
assert_eq!(st.descs[0].len, 512);
drop(st);
}
// Test that processing a fence that is already completed returns true
// immediately.
#[test]
fn process_fence_immediately_completes_when_already_done() {
let ring = VirtioGpuRing::Global;
// Seed state so that fence 100 on this ring is already completed.
let mut seeded = FenceState::default();
seeded.completed_fences.insert(ring.clone(), 100);
let fence_state = Arc::new(Mutex::new(seeded));
let ret = common_process_fence(&fence_state, ring, 100, 1, 4);
assert!(ret, "already-completed fence should return true");
let st = fence_state.lock().unwrap();
assert!(st.descs.is_empty());
drop(st);
}
// Test that disabling a scanout clears the corresponding slot.
#[test]
fn set_scanout_disable_clears_slot() {
const N: usize = VIRTIO_GPU_MAX_SCANOUTS as usize;
let mut scanouts: [Option<VirtioGpuScanout>; N] = Default::default();
scanouts[5] = Some(VirtioGpuScanout { resource_id: 123 });
common_set_scanout_disable(&mut scanouts, 5);
assert!(scanouts[5].is_none());
}
// Test backend operations with dummy backend (all should fail with ErrUnspec)
#[test]
fn backend_operations_without_frontend() {
let gpu_backend = dummy_gpu_backend();
// Test display_info
assert_matches!(common_display_info(&gpu_backend), Err(ErrUnspec));
// Test get_edid
let edid_req = VhostUserGpuEdidRequest { scanout_id: 0 };
assert_matches!(common_get_edid(&gpu_backend, edid_req), Err(ErrUnspec));
}
// Test common_move_cursor for both hide (resource_id=0) and show
// (resource_id!=0) paths
#[test]
fn move_cursor_operations() {
let gpu_backend = dummy_gpu_backend();
let cursor_pos = VhostUserGpuCursorPos {
scanout_id: 0,
x: 50,
y: 50,
};
// Test hide cursor (resource_id = 0 calls cursor_pos_hide)
assert_matches!(
common_move_cursor(&gpu_backend, 0, cursor_pos),
Err(ErrUnspec)
);
// Test show cursor (non-zero resource_id calls cursor_pos)
assert_matches!(
common_move_cursor(&gpu_backend, 42, cursor_pos),
Err(ErrUnspec)
);
}
// Test AssociatedScanouts::disable
#[test]
fn associated_scanouts_disable_functionality() {
let mut scanouts = AssociatedScanouts::default();
scanouts.enable(0);
scanouts.enable(2);
assert!(scanouts.has_any_enabled());
scanouts.disable(0);
assert!(scanouts.has_any_enabled()); // Still has 2
assert_eq!(scanouts.iter_enabled().collect::<Vec<_>>(), vec![2u32]);
scanouts.disable(2);
assert!(!scanouts.has_any_enabled());
}
// Test CursorConfig expected_buffer_len calculation
#[test]
fn cursor_config_buffer_calculations() {
// Test various sizes: (width, height)
for (width, height) in [(16, 16), (64, 64), (128, 128)] {
let config = CursorConfig { width, height };
let expected = width as usize * height as usize * READ_RESOURCE_BYTES_PER_PIXEL;
assert_eq!(config.expected_buffer_len(), expected);
}
}
// Test VirtioGpuScanout structure (creation and field access)
#[test]
fn virtio_gpu_scanout_operations() {
let scanout = VirtioGpuScanout { resource_id: 456 };
assert_eq!(scanout.resource_id, 456);
}
// Test fence processing with context-specific ring
#[test]
fn process_fence_context_specific_ring() {
let ring = VirtioGpuRing::ContextSpecific {
ctx_id: 5,
ring_idx: 2,
};
let fence_state = Arc::new(Mutex::new(FenceState::default()));
let ret = common_process_fence(&fence_state, ring.clone(), 100, 10, 256);
assert!(!ret, "New fence should not complete immediately");
let st = fence_state.lock().unwrap();
assert_eq!(st.descs.len(), 1);
assert_eq!(st.descs[0].ring, ring);
assert_eq!(st.descs[0].fence_id, 100);
drop(st);
}
}

File diff suppressed because it is too large.


@@ -0,0 +1,9 @@
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
mod common;
#[cfg(feature = "backend-gfxstream")]
pub mod gfxstream;
#[cfg(feature = "backend-virgl")]
pub mod virgl;


@@ -0,0 +1,928 @@
// Virglrenderer backend device
// Copyright 2019 The ChromiumOS Authors
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
use std::{
collections::BTreeMap,
io::IoSliceMut,
os::fd::{AsFd, FromRawFd, IntoRawFd, RawFd},
sync::{Arc, Mutex},
};
use libc::c_void;
use log::{debug, error, trace, warn};
use rutabaga_gfx::RutabagaFence;
use vhost::vhost_user::{
gpu_message::{
VhostUserGpuCursorPos, VhostUserGpuDMABUFScanout, VhostUserGpuDMABUFScanout2,
VhostUserGpuEdidRequest, VhostUserGpuUpdate,
},
GpuBackend,
};
use vhost_user_backend::{VringRwLock, VringT};
use virglrenderer::{
FenceHandler, Iovec, VirglContext, VirglRenderer, VirglRendererFlags, VirglResource,
VIRGL_HANDLE_TYPE_MEM_DMABUF,
};
use vm_memory::{GuestAddress, GuestMemory, GuestMemoryMmap, VolatileSlice};
use vmm_sys_util::eventfd::EventFd;
use crate::{
backend::{
common,
common::{common_set_scanout_disable, AssociatedScanouts, CursorConfig, VirtioGpuScanout},
},
gpu_types::{FenceState, ResourceCreate3d, Transfer3DDesc, VirtioGpuRing},
protocol::{
virtio_gpu_rect, GpuResponse,
GpuResponse::{
ErrInvalidContextId, ErrInvalidParameter, ErrInvalidResourceId, ErrInvalidScanoutId,
ErrUnspec, OkCapset, OkCapsetInfo, OkNoData,
},
VirtioGpuResult, VIRTIO_GPU_MAX_SCANOUTS,
},
renderer::Renderer,
GpuConfig,
};
const CAPSET_ID_VIRGL: u32 = 1;
const CAPSET_ID_VIRGL2: u32 = 2;
const CAPSET_ID_VENUS: u32 = 4;
#[derive(Clone)]
pub struct GpuResource {
pub virgl_resource: VirglResource,
// Stores information about which scanouts are associated with the given
// resource. Resource could be used for multiple scanouts.
pub scanouts: AssociatedScanouts,
pub backing_iovecs: Arc<Mutex<Option<Vec<Iovec>>>>,
}
fn sglist_to_iovecs(
vecs: &[(GuestAddress, usize)],
mem: &GuestMemoryMmap,
) -> Result<Vec<Iovec>, ()> {
if vecs
.iter()
.any(|&(addr, len)| mem.get_slice(addr, len).is_err())
{
return Err(());
}
let mut virgl_iovecs: Vec<Iovec> = Vec::new();
for &(addr, len) in vecs {
let slice = mem.get_slice(addr, len).unwrap();
virgl_iovecs.push(Iovec {
base: slice.ptr_guard_mut().as_ptr().cast::<c_void>(),
len,
});
}
Ok(virgl_iovecs)
}
impl From<virglrenderer::VirglError> for GpuResponse {
fn from(_: virglrenderer::VirglError) -> Self {
ErrUnspec
}
}
pub struct VirglFenceHandler {
queue_ctl: VringRwLock,
fence_state: Arc<Mutex<FenceState>>,
}
impl VirglFenceHandler {
pub const fn new(queue_ctl: VringRwLock, fence_state: Arc<Mutex<FenceState>>) -> Self {
Self {
queue_ctl,
fence_state,
}
}
}
impl FenceHandler for VirglFenceHandler {
fn call(&self, fence_id: u64, ctx_id: u32, ring_idx: u8) {
let mut fence_state = self.fence_state.lock().unwrap();
let mut i = 0;
let ring = match ring_idx {
0 => VirtioGpuRing::Global,
_ => VirtioGpuRing::ContextSpecific { ctx_id, ring_idx },
};
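// Retire every pending descriptor on this ring whose fence has now been
// reached: return its buffer to the guest and signal the used queue.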
while i < fence_state.descs.len() {
if fence_state.descs[i].ring == ring && fence_state.descs[i].fence_id <= fence_id {
let completed_desc = fence_state.descs.remove(i);
self.queue_ctl
.add_used(completed_desc.desc_index, completed_desc.len)
.unwrap();
self.queue_ctl
.signal_used_queue()
.map_err(|e| log::error!("Failed to signal queue: {e:?}"))
.unwrap();
} else {
i += 1;
}
}
fence_state.completed_fences.insert(ring, fence_id);
}
}
pub struct VirglRendererAdapter {
renderer: VirglRenderer,
gpu_backend: GpuBackend,
fence_state: Arc<Mutex<FenceState>>,
resources: BTreeMap<u32, GpuResource>,
contexts: BTreeMap<u32, VirglContext>,
scanouts: [Option<VirtioGpuScanout>; VIRTIO_GPU_MAX_SCANOUTS as usize],
}
impl VirglRendererAdapter {
pub fn new(queue_ctl: &VringRwLock, config: &GpuConfig, gpu_backend: GpuBackend) -> Self {
let virglrenderer_flags = VirglRendererFlags::new()
.use_virgl(true)
.use_venus(true)
.use_egl(config.flags().use_egl)
.use_gles(config.flags().use_gles)
.use_glx(config.flags().use_glx)
.use_surfaceless(config.flags().use_surfaceless)
.use_external_blob(true)
.use_async_fence_cb(true)
.use_thread_sync(true);
let fence_state = Arc::new(Mutex::new(FenceState::default()));
let fence_handler = Box::new(VirglFenceHandler::new(
queue_ctl.clone(),
fence_state.clone(),
));
let renderer = VirglRenderer::init(virglrenderer_flags, fence_handler, None)
.expect("Failed to initialize virglrenderer");
Self {
renderer,
gpu_backend,
fence_state,
resources: BTreeMap::new(),
contexts: BTreeMap::new(),
scanouts: Default::default(),
}
}
}
impl Renderer for VirglRendererAdapter {
fn resource_create_3d(&mut self, resource_id: u32, args: ResourceCreate3d) -> VirtioGpuResult {
let virgl_args: virglrenderer::ResourceCreate3D = args.into();
let virgl_resource = self
.renderer
.create_3d(resource_id, virgl_args)
.map_err(|_| ErrUnspec)?;
let local_resource = GpuResource {
virgl_resource,
scanouts: AssociatedScanouts::default(),
backing_iovecs: Arc::new(Mutex::new(None)),
};
self.resources.insert(resource_id, local_resource);
Ok(OkNoData)
}
fn unref_resource(&mut self, resource_id: u32) -> VirtioGpuResult {
let resource = self.resources.remove(&resource_id);
match resource {
None => return Err(ErrInvalidResourceId),
// The spec doesn't say anything about this situation and it doesn't actually
// seem to happen in practice, but let's be careful and refuse to delete the
// resource. This keeps the internal state of the gpu device and the frontend
// consistent.
Some(resource) if resource.scanouts.has_any_enabled() => {
warn!(
"The driver requested unref_resource, but resource {resource_id} has \
associated scanouts, refusing to delete the resource."
);
return Err(ErrUnspec);
}
_ => (),
}
self.renderer.unref_resource(resource_id);
Ok(OkNoData)
}
fn transfer_write(
&mut self,
ctx_id: u32,
resource_id: u32,
transfer: Transfer3DDesc,
) -> VirtioGpuResult {
trace!("transfer_write ctx_id {ctx_id}, resource_id {resource_id}, {transfer:?}");
self.renderer
.transfer_write(resource_id, ctx_id, transfer.into(), None)?;
Ok(OkNoData)
}
fn transfer_write_2d(
&mut self,
ctx_id: u32,
resource_id: u32,
transfer: Transfer3DDesc,
) -> VirtioGpuResult {
trace!("transfer_write ctx_id {ctx_id}, resource_id {resource_id}, {transfer:?}");
self.renderer
.transfer_write(resource_id, ctx_id, transfer.into(), None)?;
Ok(OkNoData)
}
fn transfer_read(
&mut self,
ctx_id: u32,
resource_id: u32,
transfer: Transfer3DDesc,
buf: Option<VolatileSlice>,
) -> VirtioGpuResult {
let buf = buf.map(|vs| {
IoSliceMut::new(
// SAFETY: the VolatileSlice guarantees a valid pointer and length for
// the duration of this call
unsafe { std::slice::from_raw_parts_mut(vs.ptr_guard_mut().as_ptr(), vs.len()) },
)
});
self.renderer
.transfer_read(resource_id, ctx_id, transfer.into(), buf)?;
Ok(OkNoData)
}
fn attach_backing(
&mut self,
resource_id: u32,
mem: &GuestMemoryMmap,
vecs: Vec<(GuestAddress, usize)>,
) -> VirtioGpuResult {
let mut iovs: Vec<Iovec> = sglist_to_iovecs(&vecs, mem).map_err(|()| ErrUnspec)?;
// Tell virgl to use our iovec array (pointer must stay valid afterwards)
self.renderer.attach_backing(resource_id, &mut iovs)?;
// Keep the Vec alive so the buffer pointers stay valid
let res = self
.resources
.get_mut(&resource_id)
.ok_or(ErrInvalidResourceId)?;
res.backing_iovecs.lock().unwrap().replace(iovs);
Ok(OkNoData)
}
fn detach_backing(&mut self, resource_id: u32) -> VirtioGpuResult {
self.renderer.detach_backing(resource_id);
if let Some(r) = self.resources.get_mut(&resource_id) {
r.backing_iovecs.lock().unwrap().take(); // drop our stored iovecs
}
Ok(OkNoData)
}
fn update_cursor(
&mut self,
resource_id: u32,
cursor_pos: VhostUserGpuCursorPos,
hot_x: u32,
hot_y: u32,
) -> VirtioGpuResult {
let config = CursorConfig {
width: 64,
height: 64,
};
let cursor_resource = self
.resources
.get(&resource_id)
.ok_or(ErrInvalidResourceId)?;
if cursor_resource.virgl_resource.width != config.width
|| cursor_resource.virgl_resource.height != config.height
{
error!("Cursor resource has invalid dimensions");
return Err(ErrInvalidParameter);
}
let data = common::common_read_cursor_resource(self, resource_id, config)?;
common::common_update_cursor(&self.gpu_backend, cursor_pos, hot_x, hot_y, &data, config)
}
fn move_cursor(&mut self, resource_id: u32, cursor: VhostUserGpuCursorPos) -> VirtioGpuResult {
common::common_move_cursor(&self.gpu_backend, resource_id, cursor)
}
fn resource_assign_uuid(&self, _resource_id: u32) -> VirtioGpuResult {
error!("Not implemented: resource_assign_uuid");
Err(ErrUnspec)
}
fn get_capset_info(&self, index: u32) -> VirtioGpuResult {
debug!("the capset index is {index}");
let capset_id = match index {
0 => CAPSET_ID_VIRGL,
1 => CAPSET_ID_VIRGL2,
3 => CAPSET_ID_VENUS,
_ => return Err(ErrInvalidParameter),
};
let (version, size) = self.renderer.get_capset_info(index);
Ok(OkCapsetInfo {
capset_id,
version,
size,
})
}
fn get_capset(&self, capset_id: u32, version: u32) -> VirtioGpuResult {
let capset = self.renderer.get_capset(capset_id, version);
Ok(OkCapset(capset))
}
fn create_context(
&mut self,
ctx_id: u32,
context_init: u32,
context_name: Option<&str>,
) -> VirtioGpuResult {
if self.contexts.contains_key(&ctx_id) {
return Err(ErrUnspec);
}
// Create the VirglContext using virglrenderer
let ctx = virglrenderer::VirglContext::create_context(ctx_id, context_init, context_name)
.map_err(|_| ErrInvalidContextId)?;
// Insert the newly created context into our local BTreeMap.
self.contexts.insert(ctx_id, ctx);
Ok(OkNoData)
}
fn destroy_context(&mut self, ctx_id: u32) -> VirtioGpuResult {
self.contexts.remove(&ctx_id).ok_or(ErrInvalidContextId)?;
Ok(OkNoData)
}
fn context_attach_resource(&mut self, ctx_id: u32, resource_id: u32) -> VirtioGpuResult {
let ctx = self.contexts.get_mut(&ctx_id).ok_or(ErrInvalidContextId)?;
let resource = self
.resources
.get_mut(&resource_id)
.ok_or(ErrInvalidResourceId)?;
ctx.attach(&mut resource.virgl_resource);
Ok(OkNoData)
}
fn context_detach_resource(&mut self, ctx_id: u32, resource_id: u32) -> VirtioGpuResult {
let ctx = self.contexts.get_mut(&ctx_id).ok_or(ErrInvalidContextId)?;
let resource = self
.resources
.get_mut(&resource_id)
.ok_or(ErrInvalidResourceId)?;
ctx.detach(&resource.virgl_resource);
Ok(OkNoData)
}
fn submit_command(
&mut self,
ctx_id: u32,
commands: &mut [u8],
fence_ids: &[u64],
) -> VirtioGpuResult {
let ctx = self.contexts.get_mut(&ctx_id).ok_or(ErrInvalidContextId)?;
ctx.submit_cmd(commands, fence_ids)
.map(|()| OkNoData)
.map_err(|_| ErrUnspec)
}
fn create_fence(&mut self, fence: RutabagaFence) -> VirtioGpuResult {
// Convert the fence ID to u32
let fence_id_u32 = u32::try_from(fence.fence_id).map_err(|_| GpuResponse::ErrUnspec)?;
self.renderer
.create_fence(fence_id_u32, fence.ctx_id)
.map_err(|_| ErrUnspec)?;
Ok(OkNoData)
}
fn process_fence(
&mut self,
ring: VirtioGpuRing,
fence_id: u64,
desc_index: u16,
len: u32,
) -> bool {
common::common_process_fence(&self.fence_state, ring, fence_id, desc_index, len)
}
fn get_event_poll_fd(&self) -> Option<EventFd> {
// SAFETY: The fd is guaranteed to be a valid owned descriptor.
self.renderer
.poll_descriptor()
.map(|fd| unsafe { EventFd::from_raw_fd(fd.into_raw_fd()) })
}
fn event_poll(&self) {
self.renderer.event_poll();
}
fn force_ctx_0(&self) {
self.renderer.force_ctx_0();
}
fn display_info(&self) -> VirtioGpuResult {
common::common_display_info(&self.gpu_backend)
}
fn get_edid(&self, edid_req: VhostUserGpuEdidRequest) -> VirtioGpuResult {
common::common_get_edid(&self.gpu_backend, edid_req)
}
fn set_scanout(
&mut self,
scanout_id: u32,
resource_id: u32,
rect: virtio_gpu_rect,
) -> VirtioGpuResult {
let scanout_idx = scanout_id as usize;
// Basic validation of scanout_id
if scanout_idx >= VIRTIO_GPU_MAX_SCANOUTS as usize {
return Err(ErrInvalidScanoutId);
}
// Handle existing scanout to disable it if necessary (like QEMU)
let current_scanout_resource_id =
self.scanouts[scanout_idx].as_ref().map(|s| s.resource_id);
if let Some(old_resource_id) = current_scanout_resource_id {
if old_resource_id != resource_id {
// Only disable if resource_id changes
if let Some(old_resource) = self.resources.get_mut(&old_resource_id) {
old_resource.scanouts.disable(scanout_id);
}
}
}
// Handle Resource ID 0 (Disable Scanout)
if resource_id == 0 {
common_set_scanout_disable(&mut self.scanouts, scanout_idx);
// Send VHOST_USER_GPU_DMABUF_SCANOUT message with FD = -1
self.gpu_backend
.set_dmabuf_scanout(
&VhostUserGpuDMABUFScanout {
scanout_id,
x: 0,
y: 0,
width: 0,
height: 0,
fd_width: 0,
fd_height: 0,
fd_stride: 0,
fd_flags: 0,
fd_drm_fourcc: 0,
},
None::<&RawFd>, // Send None for the FD, which translates to -1 in the backend
)
.map_err(|e| {
error!("Failed to send DMABUF scanout disable message: {e:?}");
ErrUnspec
})?;
return Ok(OkNoData);
}
// Handling non-zero resource_id (Enable/Update Scanout)
let resource = self
.resources
.get_mut(&resource_id)
.ok_or(ErrInvalidResourceId)?;
// Extract the DMABUF information (handle and info_3d)
let handle = resource.virgl_resource.handle.as_ref().ok_or_else(|| {
error!("resource {resource_id} has no handle");
ErrUnspec
})?;
if handle.handle_type != VIRGL_HANDLE_TYPE_MEM_DMABUF {
error!(
"resource {} handle is not a DMABUF (got type = {})",
resource_id, handle.handle_type
);
return Err(ErrUnspec);
}
// Borrow the 3D info directly; no DmabufTextureInfo wrapper.
let info_3d = resource.virgl_resource.info_3d.as_ref().ok_or_else(|| {
error!("resource {resource_id} has handle but no info_3d");
ErrUnspec
})?;
// Clone the fd we'll pass to the backend.
let fd = handle.os_handle.try_clone().map_err(|e| {
error!("Failed to clone DMABUF FD for resource {resource_id}: {e:?}");
ErrUnspec
})?;
debug!(
"Using stored DMABUF texture info for resource {}: width={}, height={}, strides={}, fourcc={}, modifier={}",
resource_id, info_3d.width, info_3d.height, info_3d.strides[0], info_3d.drm_fourcc, info_3d.modifier
);
// Construct VhostUserGpuDMABUFScanout Message
let dmabuf_scanout_payload = VhostUserGpuDMABUFScanout {
scanout_id,
x: rect.x.into(),
y: rect.y.into(),
width: rect.width.into(),
height: rect.height.into(),
fd_width: info_3d.width,
fd_height: info_3d.height,
fd_stride: info_3d.strides[0],
fd_flags: 0,
fd_drm_fourcc: info_3d.drm_fourcc,
};
// Determine which message type to send based on modifier support
let frontend_supports_dmabuf2 = info_3d.modifier != 0;
if frontend_supports_dmabuf2 {
let dmabuf_scanout2_msg = VhostUserGpuDMABUFScanout2 {
dmabuf_scanout: dmabuf_scanout_payload,
modifier: info_3d.modifier,
};
self.gpu_backend
.set_dmabuf_scanout2(&dmabuf_scanout2_msg, Some(&fd.as_fd()))
.map_err(|e| {
error!(
"Failed to send VHOST_USER_GPU_DMABUF_SCANOUT2 for resource {resource_id}: {e:?}"
);
ErrUnspec
})?;
} else {
self.gpu_backend
.set_dmabuf_scanout(&dmabuf_scanout_payload, Some(&fd.as_fd()))
.map_err(|e| {
error!(
"Failed to send VHOST_USER_GPU_DMABUF_SCANOUT for resource {resource_id}: {e:?}"
);
ErrUnspec
})?;
}
debug!(
"Sent DMABUF scanout for resource {} using fd {:?}",
resource_id,
fd.as_fd()
);
// Update internal state to associate resource with scanout
resource.scanouts.enable(scanout_id);
self.scanouts[scanout_idx] = Some(VirtioGpuScanout { resource_id });
Ok(OkNoData)
}
fn flush_resource(&mut self, resource_id: u32, _rect: virtio_gpu_rect) -> VirtioGpuResult {
if resource_id == 0 {
return Ok(OkNoData);
}
let resource = self
.resources
.get(&resource_id)
.ok_or(ErrInvalidResourceId)?
.clone();
for scanout_id in resource.scanouts.iter_enabled() {
// For VirglRenderer, use update_dmabuf_scanout (no image copy)
self.gpu_backend
.update_dmabuf_scanout(&VhostUserGpuUpdate {
scanout_id,
x: 0,
y: 0,
width: resource.virgl_resource.width,
height: resource.virgl_resource.height,
})
.map_err(|e| {
error!("Failed to update_dmabuf_scanout: {e:?}");
ErrUnspec
})?;
}
Ok(OkNoData)
}
fn resource_create_blob(
&mut self,
_ctx_id: u32,
_resource_id: u32,
_blob_id: u64,
_size: u64,
_blob_mem: u32,
_blob_flags: u32,
) -> VirtioGpuResult {
error!("Not implemented: resource_create_blob");
Err(ErrUnspec)
}
fn resource_map_blob(&mut self, _resource_id: u32, _offset: u64) -> VirtioGpuResult {
error!("Not implemented: resource_map_blob");
Err(ErrUnspec)
}
fn resource_unmap_blob(&mut self, _resource_id: u32) -> VirtioGpuResult {
error!("Not implemented: resource_unmap_blob");
Err(ErrUnspec)
}
}
#[cfg(test)]
mod virgl_cov_tests {
use std::{
os::unix::net::UnixStream,
sync::{Arc, Mutex},
};
use assert_matches::assert_matches;
use rusty_fork::rusty_fork_test;
use rutabaga_gfx::{RUTABAGA_PIPE_BIND_RENDER_TARGET, RUTABAGA_PIPE_TEXTURE_2D};
use vm_memory::{Bytes, GuestAddress, GuestMemoryAtomic, GuestMemoryMmap};
use super::*;
use crate::{
gpu_types::{FenceDescriptor, FenceState, ResourceCreate3d, Transfer3DDesc, VirtioGpuRing},
protocol::{virtio_gpu_rect, GpuResponse, VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM},
renderer::Renderer,
testutils::{
create_vring, test_capset_operations, test_fence_operations, test_move_cursor,
TestingDescChainArgs,
},
GpuCapset, GpuConfig, GpuFlags, GpuMode,
};
fn fence_desc(r: VirtioGpuRing, id: u64, idx: u16, len: u32) -> FenceDescriptor {
FenceDescriptor {
ring: r,
fence_id: id,
desc_index: idx,
len,
}
}
fn dummy_gpu_backend() -> GpuBackend {
let (_, backend) = UnixStream::pair().unwrap();
GpuBackend::from_stream(backend)
}
#[test]
fn sglist_to_iovecs_err_on_invalid_slice() {
// Single region: 0x1000..0x2000 (4 KiB)
let mem = GuestMemoryMmap::from_ranges(&[(GuestAddress(0x1000), 0x1000)]).unwrap();
// Segment starts outside of mapped memory -> expect Err(()).
let bad = vec![(GuestAddress(0x3000), 16usize)];
assert!(sglist_to_iovecs(&bad, &mem).is_err());
}
rusty_fork::rusty_fork_test! {
#[test]
fn virgl_end_to_end_once() {
// Fence handler coverage (no virgl init needed)
let mem_a = GuestMemoryAtomic::new(
GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x20_000)]).unwrap()
);
let (vr_a, _outs_a, call_a) =
create_vring(&mem_a, &[] as &[TestingDescChainArgs], GuestAddress(0x3000), GuestAddress(0x5000), 64);
let fs_a = Arc::new(Mutex::new(FenceState {
descs: vec![
fence_desc(VirtioGpuRing::Global, 5, 3, 64),
fence_desc(VirtioGpuRing::Global, 9, 4, 64),
],
completed_fences: BTreeMap::default(),
}));
let handler_a = VirglFenceHandler {
queue_ctl: vr_a,
fence_state: fs_a.clone(),
};
let _ = call_a.read(); // drain stale
handler_a.call(/*fence_id*/ 7, /*ctx_id*/ 0, /*ring_idx*/ 0);
{
let st = fs_a.lock().unwrap();
assert_eq!(st.descs.len(), 1);
assert_eq!(st.descs[0].fence_id, 9);
assert_eq!(st.completed_fences.get(&VirtioGpuRing::Global), Some(&7u64));
drop(st);
}
assert_eq!(call_a.read().unwrap(), 1);
// Context ring path: no match → completed_fences updated, no notify
let mem_b = GuestMemoryAtomic::new(
GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x20_000)]).unwrap()
);
let (vr_b, _outs_b, call_b) =
create_vring(&mem_b, &[] as &[TestingDescChainArgs], GuestAddress(0x6000), GuestAddress(0x8000), 32);
let ring_b = VirtioGpuRing::ContextSpecific { ctx_id: 42, ring_idx: 3 };
let fs_b = Arc::new(Mutex::new(FenceState {
descs: vec![fence_desc(VirtioGpuRing::Global, 7, 1, 1)],
completed_fences: BTreeMap::default(),
}));
let handler_b = VirglFenceHandler {
queue_ctl: vr_b,
fence_state: fs_b.clone(),
};
handler_b.call(/*fence_id*/ 6, /*ctx_id*/ 42, /*ring_idx*/ 3);
{
let st = fs_b.lock().unwrap();
assert_eq!(st.descs.len(), 1);
assert_eq!(st.completed_fences.get(&ring_b), Some(&6u64));
drop(st);
}
assert!(call_b.read().is_err(), "no signal when no match");
// Initialize virgl ONCE in this forked process; exercise adapter paths
let cfg = GpuConfig::new(
GpuMode::VirglRenderer,
Some(GpuCapset::VIRGL | GpuCapset::VIRGL2),
GpuFlags::default(),
).expect("GpuConfig");
let mem = GuestMemoryAtomic::new(
GuestMemoryMmap::from_ranges(&[(GuestAddress(0), 0x20_000)]).unwrap()
);
let (vring, _outs, _call_evt) =
create_vring(&mem, &[] as &[TestingDescChainArgs], GuestAddress(0x2000), GuestAddress(0x4000), 64);
let backend = dummy_gpu_backend();
let mut gpu = VirglRendererAdapter::new(&vring, &cfg, backend);
gpu.event_poll();
let edid_req = VhostUserGpuEdidRequest {
scanout_id: 0,
};
gpu.get_edid(edid_req).unwrap_err();
assert!(gpu.unref_resource(99_999).is_err(), "unref on missing must error");
// Resource creation + attach backing
let res_id = 1;
let req = ResourceCreate3d {
target: RUTABAGA_PIPE_TEXTURE_2D,
format: VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM,
bind: RUTABAGA_PIPE_BIND_RENDER_TARGET,
width: 1, height: 1, depth: 1,
array_size: 1, last_level: 0, nr_samples: 0, flags: 0,
};
gpu.resource_create_3d(res_id, req).unwrap();
let gm_back = GuestMemoryMmap::from_ranges(&[(GuestAddress(0xA0000), 0x1000)]).unwrap();
let pattern = [0xAA, 0xBB, 0xCC, 0xDD];
gm_back.write(&pattern, GuestAddress(0xA0000)).unwrap();
gpu.attach_backing(res_id, &gm_back, vec![(GuestAddress(0xA0000), 4usize)]).unwrap();
// move_cursor: expected to Err with invalid resource id
test_move_cursor(&mut gpu);
// update_cursor: expected to Err with invalid resource id
let cursor_pos = VhostUserGpuCursorPos {
scanout_id: 0,
x: 10,
y: 10,
};
gpu.update_cursor(9_999, cursor_pos, 0, 0).unwrap_err();
// update_cursor: create cursor resource and test reading path
let cursor_res_id = 2;
let cursor_req = ResourceCreate3d {
target: RUTABAGA_PIPE_TEXTURE_2D,
format: VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM,
bind: RUTABAGA_PIPE_BIND_RENDER_TARGET,
width: 64, height: 64, depth: 1,
array_size: 1, last_level: 0, nr_samples: 0, flags: 0,
};
gpu.resource_create_3d(cursor_res_id, cursor_req).unwrap();
// Attach backing for cursor resource
let cursor_backing = GuestMemoryMmap::from_ranges(&[(GuestAddress(0xB0000), 0x10000)]).unwrap();
gpu.attach_backing(cursor_res_id, &cursor_backing, vec![(GuestAddress(0xB0000), 16384usize)]).unwrap();
// This should exercise common_read_cursor_resource and then fail at cursor_update (no frontend)
let result = gpu.update_cursor(cursor_res_id, cursor_pos, 5, 5);
assert_matches!(result, Err(GpuResponse::ErrUnspec), "Should fail at cursor_update to frontend");
// submit_command: expected to Err with dummy buffer
let mut cmd = [0u8; 8];
let fence_id: Vec<u64> = vec![];
gpu.submit_command(1, &mut cmd[..], &fence_id).unwrap_err();
let t = Transfer3DDesc::new_2d(0, 0, 1, 1, 0);
gpu.transfer_write(0, res_id, t).unwrap();
gpu.transfer_read(0, res_id, t, None).unwrap();
// create_fence + process_fence
test_fence_operations(&mut gpu);
gpu.detach_backing(res_id).unwrap();
// create_context / destroy_context and use ctx in transfers
let ctx_id = 1;
assert_matches!(gpu.create_context(ctx_id, 0, None), Ok(_));
gpu.context_attach_resource(1, 1).unwrap();
gpu.context_detach_resource(1, 1).unwrap();
let _ = gpu.destroy_context(ctx_id);
// use invalid ctx_id, should fail after destroy
let _ = gpu.transfer_write(ctx_id, res_id, t).unwrap_err();
let _ = gpu.transfer_read(0, res_id, t, None).unwrap_err();
// scanout + flush paths
let dirty = virtio_gpu_rect { x: 0.into(), y: 0.into(), width: 32.into(), height: 32.into() };
gpu.flush_resource(9_999, dirty).unwrap_err();
let res2 = 404u32;
let req2 = ResourceCreate3d {
target: RUTABAGA_PIPE_TEXTURE_2D,
format: VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM,
bind: RUTABAGA_PIPE_BIND_RENDER_TARGET,
width: 64, height: 64, depth: 1,
array_size: 1, last_level: 0, nr_samples: 0, flags: 0,
};
gpu.resource_create_3d(res2, req2).unwrap();
assert_matches!(gpu.flush_resource(res2, dirty), Ok(GpuResponse::OkNoData));
gpu.set_scanout(1, 1, dirty).unwrap_err();
gpu.set_scanout(1, 0, dirty).unwrap_err();
// resource_id = 0 disables scanout
assert_matches!(gpu.flush_resource(0, dirty), Ok(GpuResponse::OkNoData));
// Test capset queries
for index in [0, 1, 3] {
test_capset_operations(&gpu, index);
}
// Test blob resource functions (all should return ErrUnspec - not implemented)
assert_matches!(
gpu.resource_create_blob(1, 100, 0, 4096, 0, 0),
Err(GpuResponse::ErrUnspec)
);
assert_matches!(
gpu.resource_map_blob(100, 0),
Err(GpuResponse::ErrUnspec)
);
assert_matches!(
gpu.resource_unmap_blob(100),
Err(GpuResponse::ErrUnspec)
);
// Test resource_assign_uuid (not implemented)
assert_matches!(
gpu.resource_assign_uuid(1),
Err(GpuResponse::ErrUnspec)
);
// Test display_info (should fail without frontend)
assert_matches!(
gpu.display_info(),
Err(GpuResponse::ErrUnspec)
);
// Test force_ctx_0
gpu.force_ctx_0();
// Test get_event_poll_fd
let _poll_fd = gpu.get_event_poll_fd();
// Test transfer_write_2d
let t2d = Transfer3DDesc::new_2d(0, 0, 1, 1, 0);
gpu.transfer_write_2d(0, res_id, t2d).unwrap_err();
// Test unref with resource that has scanouts (should fail)
let res3 = 500u32;
let req3 = ResourceCreate3d {
target: RUTABAGA_PIPE_TEXTURE_2D,
format: VIRTIO_GPU_FORMAT_R8G8B8A8_UNORM,
bind: RUTABAGA_PIPE_BIND_RENDER_TARGET,
width: 32, height: 32, depth: 1,
array_size: 1, last_level: 0, nr_samples: 0, flags: 0,
};
gpu.resource_create_3d(res3, req3).unwrap();
// Manually enable scanout on the resource to test unref protection
if let Some(resource) = gpu.resources.get_mut(&res3) {
resource.scanouts.enable(0);
}
// Now unref should fail because resource has active scanouts
assert_matches!(
gpu.unref_resource(res3),
Err(GpuResponse::ErrUnspec)
);
}
}
}

File diff suppressed because it is too large.


@@ -0,0 +1,148 @@
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
/// Generates an implementation of `From<Transfer3DDesc>` for any compatible
/// target struct.
macro_rules! impl_transfer3d_from_desc {
($target:path) => {
impl From<Transfer3DDesc> for $target {
fn from(desc: Transfer3DDesc) -> Self {
Self {
x: desc.x,
y: desc.y,
z: desc.z,
w: desc.w,
h: desc.h,
d: desc.d,
level: desc.level,
stride: desc.stride,
layer_stride: desc.layer_stride,
offset: desc.offset,
}
}
}
};
}
macro_rules! impl_from_resource_create3d {
($target:ty) => {
impl From<ResourceCreate3d> for $target {
fn from(r: ResourceCreate3d) -> Self {
Self {
target: r.target,
format: r.format,
bind: r.bind,
width: r.width,
height: r.height,
depth: r.depth,
array_size: r.array_size,
last_level: r.last_level,
nr_samples: r.nr_samples,
flags: r.flags,
}
}
}
};
}
use std::{collections::BTreeMap, os::raw::c_void};
use rutabaga_gfx::Transfer3D;
use virglrenderer::Transfer3D as VirglTransfer3D;
use crate::protocol::virtio_gpu_rect;
#[derive(Debug, Clone, Copy)]
pub struct Transfer3DDesc {
pub x: u32,
pub y: u32,
pub z: u32,
pub w: u32,
pub h: u32,
pub d: u32,
pub level: u32,
pub stride: u32,
pub layer_stride: u32,
pub offset: u64,
}
impl Transfer3DDesc {
/// Constructs a 2-dimensional XY box in 3-dimensional space with unit depth
/// and zero displacement on the Z axis.
pub const fn new_2d(x: u32, y: u32, w: u32, h: u32, offset: u64) -> Self {
Self {
x,
y,
z: 0,
w,
h,
d: 1,
level: 0,
stride: 0,
layer_stride: 0,
offset,
}
}
}
// Invoke the macro for both targets
// rutabaga_gfx::Transfer3D
impl_transfer3d_from_desc!(Transfer3D);
// virglrenderer::Transfer3D
impl_transfer3d_from_desc!(VirglTransfer3D);
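// Illustrative only: a hypothetical extra backend type with the same field
// names would pick up the conversion by invoking the macro once more, e.g.
//
//     impl_transfer3d_from_desc!(MyTransfer3D);
//     let t: MyTransfer3D = Transfer3DDesc::new_2d(0, 0, 64, 64, 0).into();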
// These are neutral types that can be used by all backends
pub type Rect = virtio_gpu_rect;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub enum VirtioGpuRing {
Global,
ContextSpecific { ctx_id: u32, ring_idx: u8 },
}
pub struct FenceDescriptor {
pub ring: VirtioGpuRing,
pub fence_id: u64,
pub desc_index: u16,
pub len: u32,
}
#[derive(Default)]
pub struct FenceState {
pub descs: Vec<FenceDescriptor>,
pub completed_fences: BTreeMap<VirtioGpuRing, u64>,
}
#[derive(Debug, Clone, Copy)]
#[repr(C)]
pub struct Iovec {
pub iov_base: *mut c_void,
pub iov_len: usize,
}
// The neutral `ResourceCreate3d` struct that all adapters will convert from.
#[derive(Debug, Clone, Copy)]
pub struct ResourceCreate3d {
pub target: u32,
pub format: u32,
pub bind: u32,
pub width: u32,
pub height: u32,
pub depth: u32,
pub array_size: u32,
pub last_level: u32,
pub nr_samples: u32,
pub flags: u32,
}
// Invoke the macro for both targets
impl_from_resource_create3d!(rutabaga_gfx::ResourceCreate3D);
impl_from_resource_create3d!(virglrenderer::ResourceCreate3D);
#[derive(Debug, Clone, Copy)]
pub struct ResourceCreate2d {
pub resource_id: u32,
pub format: u32,
pub width: u32,
pub height: u32,
}


@@ -11,7 +11,13 @@
pub mod device;
pub mod protocol;
pub mod virtio_gpu;
// Module for backends
pub mod backend;
// Module for the common renderer trait
pub mod gpu_types;
pub mod renderer;
#[cfg(test)]
pub(crate) mod testutils;
use std::{
fmt::{Display, Formatter},
@@ -23,6 +29,7 @@ use clap::ValueEnum;
use log::info;
#[cfg(feature = "backend-gfxstream")]
use rutabaga_gfx::{RUTABAGA_CAPSET_GFXSTREAM_GLES, RUTABAGA_CAPSET_GFXSTREAM_VULKAN};
#[cfg(feature = "backend-virgl")]
use rutabaga_gfx::{RUTABAGA_CAPSET_VIRGL, RUTABAGA_CAPSET_VIRGL2};
use thiserror::Error as ThisError;
use vhost_user_backend::VhostUserDaemon;
@@ -33,6 +40,7 @@ use crate::device::VhostUserGpuBackend;
#[derive(Clone, Copy, Debug, PartialEq, Eq, ValueEnum)]
pub enum GpuMode {
#[value(name = "virglrenderer", alias("virgl-renderer"))]
#[cfg(feature = "backend-virgl")]
VirglRenderer,
#[cfg(feature = "backend-gfxstream")]
Gfxstream,
@@ -41,6 +49,7 @@ pub enum GpuMode {
impl Display for GpuMode {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
#[cfg(feature = "backend-virgl")]
Self::VirglRenderer => write!(f, "virglrenderer"),
#[cfg(feature = "backend-gfxstream")]
Self::Gfxstream => write!(f, "gfxstream"),
@@ -52,8 +61,11 @@ bitflags! {
/// A bitmask for representing supported gpu capability sets.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct GpuCapset: u64 {
#[cfg(feature = "backend-virgl")]
const VIRGL = 1 << RUTABAGA_CAPSET_VIRGL as u64;
#[cfg(feature = "backend-virgl")]
const VIRGL2 = 1 << RUTABAGA_CAPSET_VIRGL2 as u64;
#[cfg(feature = "backend-virgl")]
const ALL_VIRGLRENDERER_CAPSETS = Self::VIRGL.bits() | Self::VIRGL2.bits();
#[cfg(feature = "backend-gfxstream")]
@@ -75,7 +87,9 @@ impl Display for GpuCapset {
first = false;
match capset {
#[cfg(feature = "backend-virgl")]
Self::VIRGL => write!(f, "virgl"),
#[cfg(feature = "backend-virgl")]
Self::VIRGL2 => write!(f, "virgl2"),
#[cfg(feature = "backend-gfxstream")]
Self::GFXSTREAM_VULKAN => write!(f, "gfxstream-vulkan"),
@@ -139,6 +153,7 @@ pub enum GpuConfigError {
}
impl GpuConfig {
#[cfg(feature = "backend-virgl")]
pub const DEFAULT_VIRGLRENDER_CAPSET_MASK: GpuCapset = GpuCapset::ALL_VIRGLRENDERER_CAPSETS;
#[cfg(feature = "backend-gfxstream")]
@@ -146,6 +161,7 @@
pub const fn get_default_capset_for_mode(gpu_mode: GpuMode) -> GpuCapset {
match gpu_mode {
#[cfg(feature = "backend-virgl")]
GpuMode::VirglRenderer => Self::DEFAULT_VIRGLRENDER_CAPSET_MASK,
#[cfg(feature = "backend-gfxstream")]
GpuMode::Gfxstream => Self::DEFAULT_GFXSTREAM_CAPSET_MASK,
@@ -154,6 +170,7 @@
fn validate_capset(gpu_mode: GpuMode, capset: GpuCapset) -> Result<(), GpuConfigError> {
let supported_capset_mask = match gpu_mode {
#[cfg(feature = "backend-virgl")]
GpuMode::VirglRenderer => GpuCapset::ALL_VIRGLRENDERER_CAPSETS,
#[cfg(feature = "backend-gfxstream")]
GpuMode::Gfxstream => GpuCapset::ALL_GFXSTREAM_CAPSETS,
@@ -237,6 +254,7 @@ mod tests {
use super::*;
#[test]
#[cfg(feature = "backend-virgl")]
fn test_gpu_config_create_default_virglrenderer() {
let config = GpuConfig::new(GpuMode::VirglRenderer, None, GpuFlags::new_default()).unwrap();
assert_eq!(config.gpu_mode(), GpuMode::VirglRenderer);
@@ -264,6 +282,7 @@
}
#[test]
#[cfg(feature = "backend-virgl")]
fn test_gpu_config_valid_combination() {
let config = GpuConfig::new(
GpuMode::VirglRenderer,
@@ -304,12 +323,14 @@
#[test]
fn test_default_num_capsets() {
#[cfg(feature = "backend-virgl")]
assert_eq!(GpuConfig::DEFAULT_VIRGLRENDER_CAPSET_MASK.num_capsets(), 2);
#[cfg(feature = "backend-gfxstream")]
assert_eq!(GpuConfig::DEFAULT_GFXSTREAM_CAPSET_MASK.num_capsets(), 2);
}
#[test]
#[cfg(feature = "backend-virgl")]
fn test_capset_display_multiple() {
let capset = GpuCapset::VIRGL | GpuCapset::VIRGL2;
let output = capset.to_string();
@@ -327,6 +348,7 @@
}
#[test]
#[cfg(feature = "backend-virgl")]
fn test_fail_listener() {
// This will fail the listeners and thread will panic.
let socket_name = Path::new("/proc/-1/nonexistent");


@@ -14,9 +14,11 @@ use vhost_device_gpu::{start_backend, GpuCapset, GpuConfig, GpuConfigError, GpuF
#[repr(u64)]
pub enum CapsetName {
/// [virglrenderer] OpenGL implementation, superseded by Virgl2
#[cfg(feature = "backend-virgl")]
Virgl = GpuCapset::VIRGL.bits(),
/// [virglrenderer] OpenGL implementation
#[cfg(feature = "backend-virgl")]
Virgl2 = GpuCapset::VIRGL2.bits(),
/// [gfxstream] Vulkan implementation (partial support only){n}


@@ -71,6 +71,14 @@ pub const CONTROL_QUEUE: u16 = 0;
pub const CURSOR_QUEUE: u16 = 1;
pub const POLL_EVENT: u16 = 3;
/// 3D resource creation parameters. Also used to create 2D resources.
///
/// Constants based on Mesa's (internal) Gallium interface. Not in the
/// virtio-gpu spec, but should be, since dumb resources can't work with
/// gfxstream/virglrenderer without them.
pub const VIRTIO_GPU_TEXTURE_2D: u32 = 2;
pub const VIRTIO_GPU_BIND_RENDER_TARGET: u32 = 2;
pub const VIRTIO_GPU_MAX_SCANOUTS: u32 = 16;
/// `CHROMIUM(b/277982577)` success responses
@@ -385,6 +393,25 @@ pub struct virtio_gpu_resource_create_3d {
pub padding: Le32,
}
impl From<virtio_gpu_resource_create_2d> for virtio_gpu_resource_create_3d {
fn from(args: virtio_gpu_resource_create_2d) -> Self {
Self {
resource_id: args.resource_id,
target: VIRTIO_GPU_TEXTURE_2D.into(),
format: args.format,
bind: VIRTIO_GPU_BIND_RENDER_TARGET.into(),
width: args.width,
height: args.height,
depth: 1.into(), // default for 2D
array_size: 1.into(), // default for 2D
last_level: 0.into(), // default mipmap
nr_samples: 0.into(), // default sample count
flags: 0.into(),
padding: 0.into(),
}
}
}
// SAFETY: The layout of the structure is fixed and can be initialized by
// reading its content from byte array.
unsafe impl ByteValued for virtio_gpu_resource_create_3d {}


@@ -0,0 +1,102 @@
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
use rutabaga_gfx::RutabagaFence;
use vhost::vhost_user::gpu_message::{VhostUserGpuCursorPos, VhostUserGpuEdidRequest};
use vm_memory::{GuestAddress, GuestMemoryMmap, VolatileSlice};
use vmm_sys_util::eventfd::EventFd;
use crate::{
gpu_types::{ResourceCreate3d, Transfer3DDesc, VirtioGpuRing},
protocol::{virtio_gpu_rect, VirtioGpuResult},
};
/// Trait defining the interface for GPU renderers.
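/// Each backend adapter implements this trait, e.g. `VirglRendererAdapter`
/// in virgl.rs and the gfxstream adapter in gfxstream.rs.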
pub trait Renderer: Send + Sync {
fn resource_create_3d(&mut self, resource_id: u32, req: ResourceCreate3d) -> VirtioGpuResult;
fn unref_resource(&mut self, resource_id: u32) -> VirtioGpuResult;
fn transfer_write(
&mut self,
ctx_id: u32,
resource_id: u32,
req: Transfer3DDesc,
) -> VirtioGpuResult;
fn transfer_write_2d(
&mut self,
ctx_id: u32,
resource_id: u32,
req: Transfer3DDesc,
) -> VirtioGpuResult;
fn transfer_read(
&mut self,
ctx_id: u32,
resource_id: u32,
req: Transfer3DDesc,
buf: Option<VolatileSlice>,
) -> VirtioGpuResult;
fn attach_backing(
&mut self,
resource_id: u32,
mem: &GuestMemoryMmap,
vecs: Vec<(GuestAddress, usize)>,
) -> VirtioGpuResult;
fn detach_backing(&mut self, resource_id: u32) -> VirtioGpuResult;
fn update_cursor(
&mut self,
resource_id: u32,
cursor_pos: VhostUserGpuCursorPos,
hot_x: u32,
hot_y: u32,
) -> VirtioGpuResult;
fn move_cursor(&mut self, resource_id: u32, cursor: VhostUserGpuCursorPos) -> VirtioGpuResult;
fn resource_assign_uuid(&self, resource_id: u32) -> VirtioGpuResult;
fn get_capset_info(&self, index: u32) -> VirtioGpuResult;
fn get_capset(&self, capset_id: u32, version: u32) -> VirtioGpuResult;
fn create_context(
&mut self,
ctx_id: u32,
context_init: u32,
context_name: Option<&str>,
) -> VirtioGpuResult;
fn destroy_context(&mut self, ctx_id: u32) -> VirtioGpuResult;
fn context_attach_resource(&mut self, ctx_id: u32, resource_id: u32) -> VirtioGpuResult;
fn context_detach_resource(&mut self, ctx_id: u32, resource_id: u32) -> VirtioGpuResult;
fn submit_command(
&mut self,
ctx_id: u32,
commands: &mut [u8],
fence_ids: &[u64],
) -> VirtioGpuResult;
fn create_fence(&mut self, rutabaga_fence: RutabagaFence) -> VirtioGpuResult;
fn process_fence(
&mut self,
ring: VirtioGpuRing,
fence_id: u64,
desc_index: u16,
len: u32,
) -> bool;
fn get_event_poll_fd(&self) -> Option<EventFd>;
fn event_poll(&self);
fn force_ctx_0(&self);
fn display_info(&self) -> VirtioGpuResult;
fn get_edid(&self, edid_req: VhostUserGpuEdidRequest) -> VirtioGpuResult;
fn set_scanout(
&mut self,
scanout_id: u32,
resource_id: u32,
rect: virtio_gpu_rect,
) -> VirtioGpuResult;
fn flush_resource(&mut self, resource_id: u32, rect: virtio_gpu_rect) -> VirtioGpuResult;
fn resource_create_blob(
&mut self,
ctx_id: u32,
resource_id: u32,
blob_id: u64,
size: u64,
blob_mem: u32,
blob_flags: u32,
) -> VirtioGpuResult;
fn resource_map_blob(&mut self, resource_id: u32, offset: u64) -> VirtioGpuResult;
fn resource_unmap_blob(&mut self, resource_id: u32) -> VirtioGpuResult;
}


@@ -0,0 +1,227 @@
// Copyright 2025 Red Hat Inc
//
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
use std::{
fs::File,
iter::zip,
mem,
os::fd::{AsRawFd, FromRawFd},
};
use assert_matches::assert_matches;
use libc::EFD_NONBLOCK;
use rutabaga_gfx::RutabagaFence;
use vhost::vhost_user::gpu_message::VhostUserGpuCursorPos;
use vhost_user_backend::{VringRwLock, VringT};
use virtio_bindings::virtio_ring::{VRING_DESC_F_NEXT, VRING_DESC_F_WRITE};
use virtio_queue::{
desc::{split::Descriptor as SplitDescriptor, RawDescriptor},
mock::MockSplitQueue,
Queue, QueueT,
};
use vm_memory::{
Bytes, GuestAddress, GuestAddressSpace, GuestMemory, GuestMemoryAtomic, GuestMemoryMmap,
};
use vmm_sys_util::eventfd::EventFd;
use crate::{
gpu_types::VirtioGpuRing,
protocol::GpuResponse::{ErrUnspec, OkCapset, OkCapsetInfo, OkNoData},
renderer::Renderer,
};
pub struct TestingDescChainArgs<'a> {
/// Each readable buffer becomes a descriptor (no WRITE flag)
pub readable_desc_bufs: &'a [&'a [u8]],
/// Each length becomes a writable descriptor (WRITE flag set)
pub writable_desc_lengths: &'a [u32],
}
// Common function to test fence creation and processing logic on any
// Renderer implementation.
pub fn test_fence_operations<T: Renderer>(gpu_device: &mut T) {
let fence = RutabagaFence {
flags: 0,
fence_id: 0,
ctx_id: 1,
ring_idx: 0,
};
// Test creating a fence with the `RutabagaFence`.
// create_fence returns a VirtioGpuResult, i.e. Ok(OkNoData) on success.
let result = gpu_device.create_fence(fence);
assert_matches!(result, Ok(OkNoData));
// Test processing a gpu fence: if the fence has already been signaled, return
// true. Fence ID 0 compares as already completed against the default
// completed-fence counter, so it is reported as signaled.
let ring = VirtioGpuRing::Global;
let result = gpu_device.process_fence(ring.clone(), 0, 0, 0); // (ring, fence_id, desc_index, len)
assert_matches!(result, true, "Fence ID 0 should be signaled");
// Test processing gpu fence: If the fence has not yet been signaled return
// false
let result = gpu_device.process_fence(ring, 1, 0, 0);
assert_matches!(result, false, "Fence ID 1 should not be signaled");
}
/// Common function to validate capset discovery & fetch on any Renderer.
/// - Queries capset info at `index` (default 0 via the wrapper below)
/// - Uses the returned (`capset_id`, version) to fetch the actual capset blob.
pub fn test_capset_operations<T: Renderer>(gpu: &T, index: u32) {
let info = gpu.get_capset_info(index);
// Expect Ok(OkCapsetInfo { .. })
assert_matches!(info, Ok(OkCapsetInfo { .. }));
// Pull out id/version and fetch the capset
let Ok(OkCapsetInfo {
capset_id, version, ..
}) = info
else {
unreachable!("assert_matches above guarantees this arm");
};
let caps = gpu.get_capset(capset_id, version);
// Expect Ok(OkCapset(_))
assert_matches!(caps, Ok(OkCapset(_)));
}
/// Test the cursor movement logic of any `Renderer` implementation.
/// - Resource ID 0 should hide the cursor (or fail if no resource is bound)
/// - Any other Resource ID should attempt to move the cursor (or fail if no
/// resource)
pub fn test_move_cursor<T: Renderer>(gpu_device: &mut T) {
let cursor_pos = VhostUserGpuCursorPos {
scanout_id: 1,
x: 123,
y: 123,
};
// Test case 1: Resource ID 0 (invalid/no resource)
let result = gpu_device.move_cursor(0, cursor_pos);
assert_matches!(result, Err(ErrUnspec));
// Test case 2: Resource ID 1 (resource might exist)
let result = gpu_device.move_cursor(1, cursor_pos);
assert_matches!(result, Err(ErrUnspec));
}
/// Create a vring with the specified descriptor chains, queue size, and memory
/// regions. Returns the created `VringRwLock`, a vector of output buffer
/// address vectors, and the `EventFd` used for call notifications.
pub fn create_vring(
mem: &GuestMemoryAtomic<GuestMemoryMmap>,
chains: &[TestingDescChainArgs],
queue_addr_start: GuestAddress,
data_addr_start: GuestAddress,
queue_size: u16,
) -> (VringRwLock, Vec<Vec<GuestAddress>>, EventFd) {
let mem_handle = mem.memory();
mem_handle
.check_address(queue_addr_start)
.expect("Invalid start address");
let mut output_bufs = Vec::new();
let vq = MockSplitQueue::create(&*mem_handle, queue_addr_start, queue_size);
// Address of the buffer associated with the next descriptor we place
let mut next_addr = data_addr_start.0;
let mut chain_index_start = 0usize;
let mut descriptors: Vec<SplitDescriptor> = Vec::new();
for chain in chains {
// Readable descriptors (no WRITE flag)
for buf in chain.readable_desc_bufs.iter().copied() {
mem_handle
.check_address(GuestAddress(next_addr))
.expect("Readable descriptor's buffer address is not valid!");
let desc = SplitDescriptor::new(
next_addr,
u32::try_from(buf.len()).expect("Buffer too large to fit into descriptor"),
0,
0,
);
mem_handle.write(buf, desc.addr()).unwrap();
descriptors.push(desc);
next_addr += buf.len() as u64;
}
// Writable descriptors (WRITE flag)
let mut writable_descriptor_addresses = Vec::new();
for &desc_len in chain.writable_desc_lengths {
mem_handle
.check_address(GuestAddress(next_addr))
.expect("Writable descriptor's buffer address is not valid!");
let desc = SplitDescriptor::new(
next_addr,
desc_len,
u16::try_from(VRING_DESC_F_WRITE).unwrap(),
0,
);
writable_descriptor_addresses.push(desc.addr());
descriptors.push(desc);
next_addr += u64::from(desc_len);
}
output_bufs.push(writable_descriptor_addresses);
// Link the descriptors we just appended into a single chain
make_descriptors_into_a_chain(
u16::try_from(chain_index_start).unwrap(),
&mut descriptors[chain_index_start..],
);
chain_index_start = descriptors.len();
}
assert!(descriptors.len() < queue_size as usize);
if !descriptors.is_empty() {
let descs_raw: Vec<RawDescriptor> =
descriptors.into_iter().map(RawDescriptor::from).collect();
vq.build_multiple_desc_chains(&descs_raw)
.expect("Failed to build descriptor chain");
}
// Create the vring and point it at the queue tables
let queue: Queue = vq.create_queue().unwrap();
let vring = VringRwLock::new(mem.clone(), queue_size).unwrap();
// Install call eventfd
let call_evt = EventFd::new(EFD_NONBLOCK).unwrap();
let call_evt_clone = call_evt.try_clone().unwrap();
vring
.set_queue_info(queue.desc_table(), queue.avail_ring(), queue.used_ring())
.unwrap();
vring.set_call(Some(event_fd_into_file(call_evt_clone)));
vring.set_enabled(true);
vring.set_queue_ready(true);
(vring, output_bufs, call_evt)
}
/// Link a slice of descriptors into a single chain starting at `start_idx`.
/// The last descriptor in the slice will have its NEXT flag cleared.
fn make_descriptors_into_a_chain(start_idx: u16, descriptors: &mut [SplitDescriptor]) {
let last_idx = start_idx + u16::try_from(descriptors.len()).unwrap() - 1;
for (idx, desc) in zip(start_idx.., descriptors.iter_mut()) {
if idx == last_idx {
desc.set_flags(desc.flags() & !VRING_DESC_F_NEXT as u16);
} else {
desc.set_flags(desc.flags() | VRING_DESC_F_NEXT as u16);
desc.set_next(idx + 1);
}
}
}
/// Convert an `EventFd` into a File, transferring ownership of the underlying
/// FD.
fn event_fd_into_file(event_fd: EventFd) -> File {
// SAFETY: transfer FD ownership into File; prevent Drop on EventFd.
unsafe {
let raw = event_fd.as_raw_fd();
mem::forget(event_fd);
File::from_raw_fd(raw)
}
}

File diff suppressed because it is too large.