This patch adds a new file handle watch interface to libspice, featuring
three callbacks:
(1) watch_add() -- create a new file watch.
(2) watch_update_mask() -- change the event mask. spice frequently
    enables/disables write notification.
(3) watch_remove() -- remove a file watch.
libspice users must implement these functions to allow libspice to
monitor file handles.
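For illustration, this is roughly what a user-side implementation could
look like on top of qemu's existing qemu_set_fd_handler() helper. The
SpiceWatch / SpiceWatchFunc types and the SPICE_WATCH_EVENT_* bits are
assumed to come from the new libspice header; treat this as a sketch,
not the actual qemu patch:

    #include <stdlib.h>

    /* Sketch only: assumes the new libspice header declares
     *   typedef struct SpiceWatch SpiceWatch;
     *   typedef void (*SpiceWatchFunc)(int fd, int event, void *opaque);
     * plus SPICE_WATCH_EVENT_READ / SPICE_WATCH_EVENT_WRITE. */
    struct SpiceWatch {
        int fd;
        SpiceWatchFunc func;
        void *opaque;
    };

    static void watch_read_cb(void *opaque)
    {
        SpiceWatch *watch = opaque;
        watch->func(watch->fd, SPICE_WATCH_EVENT_READ, watch->opaque);
    }

    static void watch_write_cb(void *opaque)
    {
        SpiceWatch *watch = opaque;
        watch->func(watch->fd, SPICE_WATCH_EVENT_WRITE, watch->opaque);
    }

    static void watch_update_mask(SpiceWatch *watch, int event_mask)
    {
        /* (re)register only the handlers the mask asks for */
        qemu_set_fd_handler(watch->fd,
                            (event_mask & SPICE_WATCH_EVENT_READ)  ? watch_read_cb  : NULL,
                            (event_mask & SPICE_WATCH_EVENT_WRITE) ? watch_write_cb : NULL,
                            watch);
    }

    static SpiceWatch *watch_add(int fd, int event_mask,
                                 SpiceWatchFunc func, void *opaque)
    {
        SpiceWatch *watch = calloc(1, sizeof(*watch));

        watch->fd     = fd;
        watch->func   = func;
        watch->opaque = opaque;
        watch_update_mask(watch, event_mask);
        return watch;
    }

    static void watch_remove(SpiceWatch *watch)
    {
        qemu_set_fd_handler(watch->fd, NULL, NULL, NULL);
        free(watch);
    }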
The old interface (set_file_handlers) doesn't explicitly express the
lifecycle of a watch. Also it maps 1:1 to a qemu-internal function.
If the way qemu implements file watches changes (someone said
QemuIONotifier?) this will break horribly. Besides that it is very
bad style.
Following patches will switch users over to the new interface one by
one and finally zap the old one.
Interfaces must be registered after spice_server_init().
The "next" callback is used to discover interfaces registered before
spice_server_init(), which is an empty list and thus pretty
pointless. Remove it.
- drop spice_channel_name_t enum, use spice-protocol defines instead.
- switch spice_server_set_channel_security() channel parameter from
enum to string.
- drop spice_server_set_default_channel_security(), use
spice_server_set_channel_security() with channel == NULL instead.
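A hedged usage sketch of the changed call; the exact prototype and the
SPICE_CHANNEL_SECURITY_* flags are assumed from spice.h:

    /* default policy for all channels: allow plaintext and TLS */
    spice_server_set_channel_security(server, NULL,
                                      SPICE_CHANNEL_SECURITY_NONE |
                                      SPICE_CHANNEL_SECURITY_SSL);

    /* override for one channel, selected by its spice-protocol name */
    spice_server_set_channel_security(server, "main",
                                      SPICE_CHANNEL_SECURITY_SSL);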
Adds a sanity check to the iovec setup. In theory this should never ever
trigger. In practice guest driver bugs can make it trigger. This
patch avoids qemu burning cpu in an endless loop; instead we'll print a
message and abort. Not sure whether there is a more graceful way to
handle the situation ...
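The check is roughly of this shape (a sketch with invented names such as
GuestChunk and setup_iovec, not the actual code): walk the guest-supplied
chunk list and, rather than spinning forever on bogus data, report what
went wrong and abort():

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/uio.h>

    /* hypothetical stand-in for the guest-provided chunk list */
    typedef struct GuestChunk {
        struct GuestChunk *next;
        void     *data;
        uint32_t  len;
    } GuestChunk;

    static int setup_iovec(struct iovec *iov, int max_iov,
                           const GuestChunk *chunk)
    {
        int n = 0;

        for (; chunk != NULL; chunk = chunk->next) {
            /* should never trigger; guest driver bugs can make it trigger */
            if (n == max_iov || chunk->len == 0) {
                fprintf(stderr, "%s: broken chunk list (n=%d, max=%d, len=%u)\n",
                        __func__, n, max_iov, chunk->len);
                abort();
            }
            iov[n].iov_base = chunk->data;
            iov[n].iov_len  = chunk->len;
            n++;
        }
        return n;
    }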
Pixman sometimes sets the ignored high byte to 0xff during alpha
blending. This is correct according to the pixman specs, as the high
byte is ignored. However it's not what windows expects, and it leaves
regions with an unnecessarily non-zero high byte, causing us to
send rgba data instead of rgb, which compresses worse.
So, we detect this and clear the high byte.
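The clearing step boils down to something like this sketch over a whole
x8r8g8b8 pixman image; the real code only needs to touch the affected
area:

    #include <stdint.h>
    #include <pixman.h>

    /* Force the ignored high byte of an x8r8g8b8 image back to zero so
     * it is not mistaken for real alpha data later on. */
    static void clear_high_byte(pixman_image_t *image)
    {
        int width  = pixman_image_get_width(image);
        int height = pixman_image_get_height(image);
        int stride = pixman_image_get_stride(image) / (int)sizeof(uint32_t);
        uint32_t *line = pixman_image_get_data(image);
        int x, y;

        for (y = 0; y < height; y++, line += stride) {
            for (x = 0; x < width; x++) {
                line[x] &= 0x00ffffff;
            }
        }
    }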
Surface creation now specifies the exact format of each surface used
for rendering, not only its bit depth.
Additionally we now actually store the surfaces in that format, instead
of converting everything to 32bpp when drawing or e.g. handling palettes.
In order to support 16bit canvases on 32bit screens and 32bit canvases
on 16bit screens, we need to handle format conversion when drawing
RedPixmaps.
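The conversion itself can be left to pixman; roughly like this sketch,
which renders into a freshly allocated temporary image and does no
caching:

    #include <pixman.h>

    /* Render a 16bpp source into a new 32bpp image with OP_SRC;
     * the caller owns the result. */
    static pixman_image_t *convert_to_32bpp(pixman_image_t *src)
    {
        int width  = pixman_image_get_width(src);
        int height = pixman_image_get_height(src);
        pixman_image_t *dest;

        dest = pixman_image_create_bits(PIXMAN_x8r8g8b8, width, height,
                                        NULL /* let pixman allocate */, 0);
        pixman_image_composite(PIXMAN_OP_SRC, src, NULL, dest,
                               0, 0, 0, 0, 0, 0, width, height);
        return dest;
    }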
The way this works now for X11 is that we only have one PIXELS_SOURCE_TYPE
for pixmaps, which always has a pixman_image_t for the data, but additionally
it has an XImage (shared mem or not) if the screen the pixmap was created
for (i.e. an explicit one or the default screen) has the same format as
the pixmap.
When we draw a pixmap on a drawable we have two variants. If the pixmap
has an XImage and it matches the format of the target drawable then we
just X(Shm)PutImage it to the drawable. If the formats differ, then we
create a temporary XImage and convert into that before drawing it to
the screen.
Right now this is a bit inefficient, because we always allocate a new
temporary image when converting. We want to add some caching here, but
at least this lets things work again.
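In pseudo-code the draw path looks about like this; PixmapData and the
helper functions are invented names for illustration, not the actual
client code:

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    static void draw_pixmap(Display *dpy, Drawable dst, GC gc,
                            PixmapData *pixmap, int x, int y,
                            unsigned int w, unsigned int h)
    {
        if (pixmap->x_image != NULL &&
            formats_match(pixmap->format, drawable_format(dpy, dst))) {
            /* fast path: formats match, push the existing XImage as-is
             * (XShmPutImage instead when it is a shared-memory image) */
            XPutImage(dpy, dst, gc, pixmap->x_image, x, y, x, y, w, h);
        } else {
            /* slow path: convert into a temporary XImage first (no caching yet) */
            XImage *tmp = create_converted_ximage(dpy, dst, pixmap, x, y, w, h);
            XPutImage(dpy, dst, gc, tmp, 0, 0, x, y, w, h);
            XDestroyImage(tmp);
        }
    }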
We need to know the format for other drawables too (like for instance
the native format of a window), so we're pushing this down.
This changes a bunch of references to be RedDrawable::, but not all.
The old RedPixmap:: references still work, but will be phased out.
We now support 16bit format pixmaps as well as the old ones, including
both 555 and 565 modes.
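In pixman terms the two 16-bit layouts correspond to the following
formats (assumed mapping, keyed off the width of the green channel):

    #include <pixman.h>

    static pixman_format_code_t format_for_16bpp(int green_bits)
    {
        return green_bits == 6 ? PIXMAN_r5g6b5     /* 565 */
                               : PIXMAN_x1r5g5b5;  /* 555 */
    }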
We drop the palette argument for pixmap construction as it was only
used for black/white anyway.
Canvas creation is simplified so that there is no separate set_mode
state. Canvases are already created in the right mode and never change.