This patch and the previous ones solve the problem of not having a
context in SpiceCoreInterface. SpiceCoreInterface defines a set of
callbacks to handle events in spice-server: they allow handling
timers, watching file descriptors and sending channel events.
None of these callbacks accepts a context (usually passed in C as a
void* parameter), so it is hard for them to tell which interface
instance they were registered on.
Unfortunately this structure is also used internally from different
contexts: for instance, every RedWorker thread has its own context.
Some workarounds are currently in place to cope with this. For timers,
a variable depending on the current thread is used, while for watches
the opaque parameter passed to the event callback is used, as it
currently always points to a RedChannelClient structure. However, this
creates implicit maintenance problems for the future. What happens,
for instance, if for some reason a timer is registered during worker
initialization but runs in another thread? What if we decide to
register a file descriptor callback for something that is not a
RedChannelClient? The program could run without any issue until some
bytes change, and then weird things could happen.
The solution is implemented with an internal "core" interface that is
context specific, and that context is used to differentiate callers
instead of relying on some other, hard to maintain, detail. An adapter
structure (the name is inspired by the adapter pattern) then provides
the internal core interface on top of the external, public definition
(in the future this technique can also be used to extend the external
interface without breaking the ABI).
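As a rough sketch (illustrative only, the names and members below are
not necessarily the ones used by the actual patches), the internal
interface and the adapter could look like:

    /* internal interface: every callback receives the interface
     * itself, so the implementation can recover its own context */
    typedef struct SpiceCoreInterfaceInternal SpiceCoreInterfaceInternal;
    struct SpiceCoreInterfaceInternal {
        SpiceTimer *(*timer_add)(SpiceCoreInterfaceInternal *iface,
                                 SpiceTimerFunc func, void *opaque);
        SpiceWatch *(*watch_add)(SpiceCoreInterfaceInternal *iface,
                                 int fd, int event_mask,
                                 SpiceWatchFunc func, void *opaque);
        /* etc. */
    };

    /* adapter: provides the internal interface on top of the
     * public SpiceCoreInterface */
    typedef struct {
        SpiceCoreInterfaceInternal base;
        SpiceCoreInterface *public_interface;
    } CoreInterfaceAdapter;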
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Acked-by: Christophe Fergeau <cfergeau@redhat.com>
Add wrapper functions for SpiceCoreInterface in order to present it as
a SpiceCoreInterfaceInternal. These functions simply accept the
SpiceCoreInterfaceInternal API and forward the requests to
SpiceCoreInterface.
This makes it possible to change ABI details of the internal interface.
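For example, the timer wrapper could look roughly like this (a sketch
assuming the CoreInterfaceAdapter structure sketched earlier;
SPICE_CONTAINEROF is the usual container_of macro from spice-common):

    static SpiceTimer *core_timer_add(SpiceCoreInterfaceInternal *iface,
                                      SpiceTimerFunc func, void *opaque)
    {
        CoreInterfaceAdapter *adapter =
            SPICE_CONTAINEROF(iface, CoreInterfaceAdapter, base);
        /* just forward the request to the public interface */
        return adapter->public_interface->timer_add(func, opaque);
    }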
See comments in "channel: add interface parameters to
SpiceCoreInterfaceInternal" patch.
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Acked-by: Christophe Fergeau <cfergeau@redhat.com>
Define an internal structure whose ABI matches the public one 100%.
The structure will be changed by following patches.
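Conceptually, at this point the internal structure is just a
field-by-field copy of the public one (a sketch, abbreviated):

    struct SpiceCoreInterfaceInternal {
        SpiceBaseInterface base;
        /* exactly the same members, in the same order, as
         * SpiceCoreInterface, so the two ABIs are identical */
        SpiceTimer *(*timer_add)(SpiceTimerFunc func, void *opaque);
        void (*timer_start)(SpiceTimer *timer, uint32_t ms);
        void (*timer_cancel)(SpiceTimer *timer);
        void (*timer_remove)(SpiceTimer *timer);
        /* watch and channel_event members follow unchanged */
    };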
See comments in "channel: add interface parameters to
SpiceCoreInterfaceInternal" patch.
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Acked-by: Christophe Fergeau <cfergeau@redhat.com>
The rest of the code uses the spice_malloc* functions; use them here
too for consistency.
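For example (illustrative; the spice_malloc* helpers abort on
allocation failure, so no NULL check is needed at the call site):

    /* before */
    buf = malloc(len);
    if (buf == NULL) {
        return NULL;
    }

    /* after: aborts on failure instead of returning NULL */
    buf = spice_malloc(len);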
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Acked-by: Jonathon Jongsma <jjongsma@redhat.com>
Make it clear that these functions just check a condition.
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
Acked-by: Jonathon Jongsma <jjongsma@redhat.com>
This is really not supported and requires X11, so it is better to
remove it for now. Some day it might be revived, using DRM, ...
Note for later, this could be removed too (not used by the client):
- spice-common/common/ogl_ctx
Acked-by: Fabiano Fidêncio <fidencio@redhat.com>
spice_server_destroy calls reds_exit, which is also called at exit time
(it is registered with atexit), so avoid keeping dangling pointers
around. Currently this does not happen, as spice_server_destroy is not
called by Qemu.
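The issue being avoided is that reds_exit can run twice, schematically
(a sketch with a hypothetical resource field):

    void reds_exit(void)
    {
        /* reds_exit is registered with atexit() and can also be
         * reached through spice_server_destroy, so it may run
         * twice; clear pointers after freeing so a second call
         * does not touch freed memory */
        free(reds->stat_resource);   /* hypothetical field */
        reds->stat_resource = NULL;
    }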
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
This has been renamed to SpiceImageCompression in order to avoid
clashes with older spice-server in the SPICE_IMAGE_COMPRESS_ namespace.
This commit is a straight rename of SpiceImageCompress to
SpiceImageCompression and of SPICE_IMAGE_COMPRESS_ to
SPICE_IMAGE_COMPRESSION_.
If a previous Qemu executable was not able to delete its statistics
file, the next creation with the same name fails (statistics file names
are based on pid numbers, so if a pid gets reused for another Qemu
process you get the same name), as you can't open a file with 0444
permissions (these are the permissions used to create these files).
This patch ensures there is no stale file by trying to remove it before
creating the new one. As the file name is based on the pid and on the
name used for spice, you are not deleting a file belonging to another
process.
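The fix boils down to something like the following (a sketch only; the
real code derives the file name from the spice name and the pid, and
may use the shared-memory variants of these calls):

    /* remove any stale file left behind by a previous process
     * that had the same pid; errors are ignored, the file may
     * simply not exist */
    unlink(file_name);
    fd = open(file_name, O_CREAT | O_RDWR | O_EXCL, 0444);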
Fixes: rhbz#1177326
Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
With a TCP socket, the backlog doesn't seem to matter much, perhaps
because of latency or underlying protocol behaviour. However, on a UNIX
socket, it is fairly easy to reach the backlog limit and the client
will get an EAGAIN error (but not ECONNREFUSED as stated in listen(7))
that is not easy to deal with: attempting to reconnect in a loop might
busy-loop forever, as there is no guarantee the server will accept new
connections, so it is inherently racy.
Typically, Spice server can easily have up to 15 concurrent incoming
connections that are established during initialization of the session.
To improve the situation, raise the backlog limit to the default maximum
system value, which is 128 on Linux.
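Concretely, the change amounts to (sketch):

    /* before: a small hardcoded backlog could be exceeded by the
     * ~15 connections opened during session initialization */
    listen(slisten, 1);

    /* after: use the default maximum system value (SOMAXCONN,
     * which is 128 on Linux) */
    listen(slisten, SOMAXCONN);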
There is already an enumeration in a public header that defines the
different streaming options, so there's no need to duplicate that
enumeration internally. Just use the public enum values.
This commit fixes enough endianness issues that it's possible to
connect to a spice-server/qemu running on a big-endian box with a client
running on a little-endian machine.
I haven't tested more than getting to the bios/bootloader and typing a
bit on the keyboard as I did not manage to boot a distro afterwards :(
This is based on patches sent by Erlon R. Cruz
<erlon.cruz@br.flextronics.com>
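The endianness fixes are generally of this form (illustrative; shown
here with the GLib byte-order macros, the actual patches may use
different helpers):

    /* SPICE protocol data is little-endian on the wire, so on a
     * big-endian host every multi-byte field must be converted
     * after reading and before writing */
    msg_size = GUINT32_FROM_LE(header->size);
    msg_type = GUINT16_FROM_LE(header->type);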
The multimedia time is defined by the server-side monotonic time [1],
but the drawing timestamp is taken on the guest side, so it requires
synchronization between host and guest. This is expensive: when no
audio is playing, there is a ~30x/sec wakeup to update the qxl device
mmtime, and it requires marking the ROM region dirty.
Instead, the video timestamping can be done more efficiently on the
server side, without visible drawbacks.
[1] a better timestamp could be the audio time, since audio players
are usually synced with audio time
Related to:
https://bugzilla.redhat.com/show_bug.cgi?id=912763
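With this change the server stamps the drawable itself when processing
the command, conceptually (a sketch; reds_get_mm_time() returns the
server's monotonic multimedia time):

    /* timestamp on the server side instead of using a value
     * synchronized from the guest */
    drawable->mm_time = reds_get_mm_time();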
If the client advertises the SASL cap, it guarantees it will be able
to use SASL if the server supports it, and that it does not need a
valid SpiceLinkReply::pub_key field when using SASL.
When the client cap is set, we thus don't need to create an RSA public
key if SASL is enabled server-side.
The reason for needing client guarantees about not looking at the
pub_key field is that its presence and size are hardcoded in the
protocol, but in some hardened setups (using FIPS mode), generating an
RSA 1024-bit key as expected is forbidden and fails. With this new
capability, the server knows the client will be able to handle SASL if
needed, and can skip the generation of the key altogether. This means
that on the setups described above, SASL authentication has to be used.
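Schematically, the link handling becomes (a sketch; the cap-check flag
and the key helper below are hypothetical names):

    /* only generate the RSA key for SpiceLinkReply::pub_key when
     * the client did not promise it can use SASL */
    if (!(sasl_enabled && client_has_sasl_cap)) {  /* hypothetical flags */
        if (!reds_init_link_rsa(link)) {           /* hypothetical helper */
            goto error;
        }
    }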
During seamless migration, after switching host, if a client was connected
during the migration, it will have data to send back to the new
qemu/spice-server instance. This is handled through MIGRATE_DATA messages.
SPICE char devices use such MIGRATE_DATA messages to restore their state.
However, the MIGRATE_DATA message can arrive any time after the new
qemu instance has started; this can happen before or after the SPICE
char devices have been created. In order to handle this, if the migrate
data
arrives early, it's stored in reds->agent_state.mig_data, and
attach_to_red_agent() will restore the agent state as appropriate.
Unfortunately this does not work as expected for the main channel
(agent messages).
If attach_to_red_agent() is called before the MIGRATE_DATA
message reaches the server, all goes well,
but if MIGRATE_DATA reaches the server before
attach_to_red_agent() gets called, then some assert() gets
triggered in spice_char_device_state_restore():
((null):32507): Spice-ERROR **: char_device.c:937:spice_char_device_state_restore: assertion `dev->num_clients == 1 && dev->wait_for_migrate_data' failed
When restoring the state, a client must already have been added to the
spice-char-device.
What happens is that no client is added to the char-device when
MIGRATE_DATA arrives first, which leaves both dev->num_clients and
dev->wait_for_migrate_data at 0.
This commit changes the logic in
spice_server_char_device_add_interface() such that, if there is
migrate data pending in reds->agent_state.mig_data but no client has
been added to the spice-char-device yet, the client is first added to
the device by calling spice_char_device_client_add(), and only then is
the state restored, as sketched below.
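The new logic is roughly (a sketch; the actual call to
spice_char_device_client_add() takes more arguments, abbreviated here
behind a hypothetical helper):

    if (reds->agent_state.mig_data != NULL && dev->num_clients == 0) {
        /* MIGRATE_DATA arrived before the interface was added:
         * add the client first, so that num_clients and
         * wait_for_migrate_data are consistent, then restore the
         * migrated state */
        agent_state_client_add(dev);  /* hypothetical wrapper around
                                         spice_char_device_client_add() */
        spice_char_device_state_restore(dev, reds->agent_state.mig_data);
    }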
=== How to Reproduce
To reproduce, add delays to the migration connection between
qemu-kvm on the source host (SRC) and on the destination (DST).
Specifically, I added a man-in-the-middle host, DLY, on the migration
path from SRC to DST.
+-----+ +-----+ +-----+
| SRC |--> | DLY | --> | DST |
+-----+ +-----+ +-----+
DLY listens on port P1 (e.g. 4444) and DST listens on port
PINCOMING (e.g. 4444, from the qemu-kvm '-incoming' command line option).
Precondition: make sure port P1 on DLY is accessible in iptables.
Option 1: use ssh TCP port forwarding.
On the DLY host run ssh:
ssh -L DLY:P1:DST:PINCOMING DST
Then use the following migration command (on qemu-kvm monitor):
client_migrate_info spice DST PSPICE
migrate -d tcp:DLY:P1
Option 2: use a simple proxy program that forwards
packets from SRC to DST while adding some delays.
The program runs on DLY, listens on port P1 and, upon
accept, connects to DST:PINCOMING and forwards all
packets from DLY:P1 to DST:PINCOMING.
Then use the same migrate command as in option 1:
client_migrate_info spice DST PSPICE
migrate -d tcp:DLY:P1
=== How to Reproduce Ends
This fixes https://bugzilla.redhat.com/show_bug.cgi?id=1035184
Based-on-a-patch-by: Christophe Fergeau <cfergeau@redhat.com>
In reds_send_link_ack(), look up the channel with the same id as the
link message.
The bug was found during code review a while ago.
A reproducer bug was later reported:
https://bugzilla.redhat.com/show_bug.cgi?id=1058625
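The fix is essentially (sketch):

    /* look the channel up by type AND id from the link message,
     * instead of returning the first channel of the right type */
    channel = reds_find_channel(link_mess->channel_type,
                                link_mess->channel_id);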
9feed69 moved the async reader code to RedsStream so that it can be used
for the SASL authentication code. In particular, it introduced a
RedsStream::async_read member which is used by the SASL authentication code
for its async operations.
However, the now-redundant RedLinkInfo::async_read field was not
removed. This causes failures when using SASL authentication: the
async read error callback gets set on the RedLinkInfo::async_read
structure, but the SASL code then uses the RedsStream::async_read
structure for its async IOs, which does not have the needed error
callback set.
This commit uses the newly introduced reds_stream_async_read() helper
so that RedsStream::async_read is used.
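So the link code now goes through the stream helper, along these lines
(a sketch; the callback name is illustrative and the exact signature
may differ):

    reds_stream_async_read(link->stream,
                           (uint8_t *)&link->link_header,
                           sizeof(link->link_header),
                           reds_handle_read_header_done,
                           link);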
This can fail in FIPS mode, for example. If we ignore the failure,
we'll get a crash:
#0 0x00007f38d63728a0 in BN_num_bits () from /lib64/libcrypto.so.10
#1 0x00007f38d639661d in RSA_size () from /lib64/libcrypto.so.10
#2 0x00007f38d7991762 in reds_handle_read_link_done () from /lib64/libspice-server.so.1
#3 0x00007f38d7990c06 in spice_server_add_client () from /lib64/libspice-server.so.1
#4 0x00007f38d7990c6a in reds_accept () from /lib64/libspice-server.so.1
#5 0x00007f38dc0d2946 in qemu_iohandler_poll (pollfds=0x7f38dedce200, ret=755449965, ret@entry=1) at iohandler.c:143
#6 0x00007f38dc0d6ea8 in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:465
#7 0x00007f38dbffd7c0 in main_loop () at vl.c:1988
#8 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4357
This commit will cause the client connection to fail, but qemu won't
segfault.
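The fix is simply to check the result of the key generation and abort
the link setup on failure, roughly (a sketch using the classic
RSA_generate_key() API; SPICE_TICKET_KEY_PAIR_LENGTH is the 1024-bit
key size used by the server):

    link->tiTicketing.rsa = RSA_generate_key(SPICE_TICKET_KEY_PAIR_LENGTH,
                                             RSA_F4, NULL, NULL);
    if (link->tiTicketing.rsa == NULL) {
        spice_warning("Failed to generate %d bits RSA key",
                      SPICE_TICKET_KEY_PAIR_LENGTH);
        goto error;
    }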
For example, with qemu, a webdav channel can be created this way:
-chardev spiceport,name=org.spice-space.webdav.0,...
And redirected to a virtio port:
-device virtserialport,...,name=org.spice-space.webdav.0
SASL authentication mostly uses members from RedsStream to do its
work, so it makes sense to have its code in reds_stream.c. This should
make it possible to make RedsStream::sasl private in the future.
The AsyncRead structure in reds.h wraps an async read + callback to
be done on a stream. Moving it to reds_stream.h is needed in order
to move SASL authentication there.
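For reference, the wrapper is roughly of this shape (a sketch, not the
exact definition):

    typedef struct AsyncRead {
        RedsStream *stream;
        void *opaque;                  /* passed back to the callbacks */
        uint8_t *now;                  /* next byte to fill */
        uint8_t *end;                  /* end of the destination buffer */
        void (*done)(void *opaque);    /* all requested bytes were read */
        void (*error)(void *opaque, int err);
    } AsyncRead;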
Now that stream creation and SSL enabling are done by helpers
in reds_stream.c, we can move the initialization of the vfunc
read/write pointers there too.
Initializing a new stream means initializing quite a few fields. This
commit factors this initialization into a dedicated reds_stream_new
helper. This also helps move more code from reds.c to reds_stream.c.
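A sketch of the resulting helper (illustrative; the real function
initializes more fields and the callback names may differ):

    RedsStream *reds_stream_new(int socket)
    {
        RedsStream *stream;

        stream = spice_new0(RedsStream, 1);
        stream->socket = socket;
        /* plain (non-SSL) implementations; replaced when SSL
         * is enabled on the stream */
        stream->read = stream_read_cb;
        stream->write = stream_write_cb;
        stream->writev = stream_writev_cb;
        return stream;
    }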