SSL 2 and 3 are already disabled by us by default, and TLS 1.1 and below
are disabled by default on Debian systems.
requires a corresponding patch in pve-manager to have an effect.
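a minimal sketch of what enforcing this looks like at the OpenSSL level
(hypothetical helper, assuming a raw Net::SSLeay CTX as used underneath
AnyEvent::TLS - not the actual pve-http-server code):

    use Net::SSLeay;

    # refuse everything below TLS 1.2 on the given context
    sub disable_old_tls_versions {
        my ($ctx) = @_; # e.g. from Net::SSLeay::CTX_new()
        Net::SSLeay::CTX_set_options($ctx,
            Net::SSLeay::OP_NO_SSLv2() |
            Net::SSLeay::OP_NO_SSLv3() |
            Net::SSLeay::OP_NO_TLSv1() |
            Net::SSLeay::OP_NO_TLSv1_1());
    }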
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
when using a custom pveproxy certificate. The actual handling is done in
pve-manager.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
like the TLS <= 1.2 cipher list, but needs a different option since the
format and values are incompatible. AnyEvent doesn't yet handle this
like it does the cipher list, so set it directly on the context.
requires a corresponding patch in pve-manager (which reads the config
and passes the relevant parts back to the API server).
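roughly, the split looks like this (hypothetical helper; requires
OpenSSL >= 1.1.1 and a Net::SSLeay that exposes CTX_set_ciphersuites):

    use Net::SSLeay;

    sub apply_cipher_config {
        my ($ctx, $ciphers, $ciphersuites) = @_;
        # TLS <= 1.2 list, e.g. 'HIGH:!aNULL:!MD5'
        Net::SSLeay::CTX_set_cipher_list($ctx, $ciphers)
            if defined $ciphers;
        # TLS 1.3 suites, e.g. 'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256'
        Net::SSLeay::CTX_set_ciphersuites($ctx, $ciphersuites)
            if defined $ciphersuites;
    }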
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
if a client closes the connection while the API server is
waiting/stalling here, the handle will disappear, and sending a response
is no longer possible.
(this issue is only cosmetic, but if such clients are a regular
occurrence it might get quite noisy in the logs)
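a hypothetical sketch of the guard (names made up, not the actual code):

    sub send_delayed_response {
        my ($self, $resp) = @_;
        # the client may have disconnected while we were stalling
        return if !defined $self->{handle};
        $self->{handle}->push_write($resp);
    }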
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the WS gets disconnected without any data having been sent first,
wbuf (and thus `length $wbuf`) is undef. the actual length of the buffer
is not relevant here anyway, just the fact that it's non-empty - so
avoid the undef warning by dropping the unnecessary comparison.
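roughly (assuming Perl >= 5.12, where length(undef) quietly returns
undef):

    my $wbuf; # still undef if nothing was sent yet

    # before: warns 'Use of uninitialized value in numeric gt'
    warn "buffer not empty\n" if length($wbuf) > 0;
    # after: undef and '' are both false, which is all we care about
    warn "buffer not empty\n" if length($wbuf);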
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this is useful if we want to pipe the output of a program through e.g. gzip.
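for example (hypothetical; the shell pipeline is just for illustration):

    # read handle whose contents are the compressed program output
    open(my $fh, '-|', 'dmesg | gzip -c')
        or die "failed to spawn pipeline: $!\n";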
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The issue is probably not critical, and is best addressed by not
running the perl API servers in an exposed environment, or, where that
is necessary, by installing a reverse proxy in front of them.
The DoS potential of the perl daemons is constrained more by the
limited number of parallel workers (and the memory cost of starting
more of them) than by the CPU cycles wasted on TLS renegotiation.
Still, disabling TLS renegotiation should have very little downside:
* it was removed in TLS 1.3 for security reasons
* it is how nginx addressed this issue [1]
* we do not use client certificate authentication
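With OpenSSL >= 1.1.1 this boils down to a single context option
(sketch; assumes a Net::SSLeay build that exposes the constant):

    use Net::SSLeay;

    my $ctx = Net::SSLeay::CTX_new(); # the server's TLS context
    Net::SSLeay::CTX_set_options($ctx, Net::SSLeay::OP_NO_RENEGOTIATION());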
Tested by running `openssl s_client -no_tls1_3 -connect 192.0.2.1:8006`
and issuing a `HEAD / HTTP/1.1\nR\n`
with and without the patch.
[1] 70bd187c4c386d82d6e4d180e0db84f361d1be02 at
https://github.com/nginx/nginx (although that code was adapted to
the various changes in the OpenSSL API over the years)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
for proxied requests, we usually tear down the proxy connection
immediately when closing the source connection. this is not the correct
course of action for bulk one-way data streams that are proxied, where
the source connection might be closed, but the proxy connection might
still have data in the write buffer that needs to be written out.
push_shutdown already handles this case (closing the socket/FH after it
has been fully drained).
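sketch of the difference (with $proxy_handle being the AnyEvent::Handle
of the proxy side):

    # before: discards anything still sitting in the write buffer
    # $proxy_handle->destroy();

    # after: queues a shutdown, performed once the buffer is drained
    $proxy_handle->push_shutdown();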
one example for such a proxied data stream is the 'migrate' data for a
remote migration, which gets proxied over a websocket connection.
terminating the proxied connection early makes the target VM crash for
obvious reasons.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
any uploaded file has to be deleted by the corresponding
endpoint. the file upload was only used by the 'upload to
storage' feature in pve.
this change allows the endpoint to delete the file itself,
making the old and racy `sleep 1` (waiting until the worker
has opened the file) obsolete.
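a sketch of the new contract on the endpoint side (hypothetical, not
the actual pve-manager code):

    use IO::File;

    my $tmpfile = '/var/tmp/pveupload-example';
    my $fh = IO::File->new($tmpfile, '<')
        or die "unable to open temp file: $!\n";
    # the open fh keeps the data readable, so unlink right away
    unlink($tmpfile) or warn "unable to unlink temp file: $!\n";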
this change breaks all pve-manager versions in which the worker does
not unlink the temp file itself.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
this major release still needs to have an incompatible client, the next
one can drop setting a protocol client-side, and the one after that can
remove the protocol handling on the server side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We do not support any, and we only ever send binary frames, so drop
trying to parse the header.
For compatibility with current clients (novnc, pve-xtermjs), we have
to reply with the protocol(s) the client sent.
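roughly (hypothetical handling, with $headers being the parsed request
headers and $res the response header string):

    # echo back whatever the client requested instead of parsing it
    my $wsproto = $headers->{'sec-websocket-protocol'} // '';
    $res .= "Sec-WebSocket-Protocol: $wsproto\r\n" if $wsproto ne '';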
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
novnc has not supported this since 2015, and neither does
our xtermjs client. it is also not listed in IANA's registry of
websocket subprotocols [0].
so simply drop it, only send out binary frames, and don't decode text
frames.
0: https://www.iana.org/assignments/websocket/websocket.xml#subprotocol-name
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
has actually not been required for quite a while, i.e., since commit
88628fd141 from my last bootstrapping
effort in 2019.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Net::IP objects are bound to a version - 0/0 is treated as IPv4 only.
If 'all' is present in the allow_from/deny_from list, we should also
add ::/0 to match all IPv6 addresses.
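for illustration:

    use Net::IP;

    # 'all' needs to expand to both address families
    my @all = map { Net::IP->new($_) } ('0/0', '::/0');
    print $_->version(), "\n" for @all; # prints 4, then 6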
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
With recent changes to the listening socket code in pve-manager,
the proxy daemons now usually bind to '::', and IPv4 clients show up
as v4-mapped IPv6 addresses [0] from ::ffff:0:0/96.
This broke the allow_from/deny_from matching.
This patch addresses the issue by normalizing addresses from
::ffff:0:0/96 using Net::IP::ip_get_embedded_ipv4
(which roughly splits on ':' and checks if the last part looks like an
IPv4 address).
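the normalization itself, roughly:

    use Net::IP;

    my $client_ip = '::ffff:192.0.2.10'; # as read from the v6 socket
    if (my $embedded = Net::IP::ip_get_embedded_ipv4($client_ip)) {
        $client_ip = $embedded; # '192.0.2.10'
    }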
The issue was originally reported in our community forum [1].
[0] https://en.wikipedia.org/wiki/IPv6_address
[1] https://forum.proxmox.com/threads/my-pveproxy-file-doesnt-work.83228/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Allow specifying a file path for stream=1 instead of a fh.
With this in place, we can also just return the path to the proxy in
case we want to stream a response back, and let it read from the file
itself. This way, the pvedaemon is cut out of the transfer pipe.
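an API handler could then, for instance, return something along these
lines (key names are illustrative, not the verified schema):

    return {
        download => {
            path => '/var/log/syslog', # the proxy reads the file itself
            stream => 1,
            'content-type' => 'text/plain',
        },
    };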
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Use an explicit AnyEvent::Handle, similar to websocket proxying.
Needs some special care to make sure we apply backpressure correctly to
avoid buffering too much data. Note that because of AnyEvent
restrictions, specifying a "fh" that points to a file or a packet-based
socket may result in unwanted behaviour [0].
[0]: https://metacpan.org/pod/AnyEvent::Handle#DESCRIPTION
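a condensed version of the pattern (simplified; $source and $client are
AnyEvent::Handle instances):

    my $read_source; # pre-declared so the callback can re-arm itself
    $read_source = sub {
        my ($h) = @_;
        my $data = $h->{rbuf};
        $h->{rbuf} = '';
        $client->push_write($data);
        # pause the source until the client's write buffer is drained
        $h->on_read(undef);
        $client->on_drain(sub {
            my ($c) = @_;
            $c->on_drain(undef);       # one-shot
            $h->on_read($read_source); # resume reading
        });
    };
    $source->on_read($read_source);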
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
to allow setting an arbitrary IP address to listen on
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Dylan Whyte <d.whyte@proxmox.com>
Reviewed-by: Dylan Whyte <d.whyte@proxmox.com>
PVE::HTTPServer in pve-manager wraps the API return value in a 'data'
element, so look for a 'download' element there too, allowing an API
call to instruct the HTTP server to return a file via path or file
handle.
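roughly (with $res being the raw API return value):

    my $download = $res->{download};
    $download //= $res->{data}->{download}
        if ref($res->{data}) eq 'HASH';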
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
if an error happens before AnyEvent::Handle registers the cleanup
callback, we should shutdown/close the socket when handling it.
Use close instead of shutdown($sock, SHUT_WR) here, since we are in
an error state and would not read from the socket anyway.
(Additionally, close sends just one packet (RST,ACK), vs shutdown's
(FIN,ACK + RST,ACK) in its use here.)
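that is, in the error path:

    use Socket qw(SHUT_WR);

    # $sock: the just-accepted client socket
    # regular teardown would half-close and keep draining:
    # shutdown($sock, SHUT_WR);

    # error path: we won't read anything anymore, so just close
    close($sock);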
Co-Authored-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
When handling new connections in 'accept_connections', the number of
active connections (conn_count) was incremented before the callback
that would eventually decrement it was registered in
AnyEvent::Handle->new.
Any error/die before registering the callback would skip the decrement
and leave the process in an endless loop upon exiting in wait_end_loop.
This can happen e.g. when the call to getpeername fails, or if the
connection is denied by the ALLOW_FROM/DENY_FROM settings in
'/etc/default/pveproxy' (which is also a simple reproducer for that).
Additionally, it can cause a denial of service: an attacker can keep
connecting from a denied IP until the connection count exceeds the
maximum connections of all child processes.
This patch addresses the issue by incrementing the connection count
before attempting to create the handle, and decrementing it again, if
handle creation fails.
A warning is logged if 'conn_count' turns negative when decrementing
during cleanup on error/eof. In case creating a new handle during
initial accept_connection fails, a warning is logged as well, but
'conn_count' is not decremented.
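a condensed sketch of the resulting accounting (simplified, not the
literal accept_connections code):

    use AnyEvent::Handle;

    $self->{conn_count}++;

    my $handle = eval {
        AnyEvent::Handle->new(
            fh => $clientfh,
            on_error => sub {
                my ($hdl, $fatal, $msg) = @_;
                $hdl->destroy();
                $self->{conn_count}--;
                warn "connection count is negative!\n"
                    if $self->{conn_count} < 0;
            },
        );
    };
    if (!$handle) {
        warn "failed to accept connection: $@\n";
        $self->{conn_count}--; # the cleanup callback never registered
        close($clientfh);
    }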
Reported via our community-forum:
https://forum.proxmox.com/threads/pveproxy-eats-available-ram.79617/
Co-Authored-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
This is mostly a "do not allow unlimited headers" limit in the sense
of "it's good to have limits". With modern browsers and users behind
proxies we may actually get over 30 headers, so increase it to 64 for
now - hopefully enough for another decade ;)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported-by: Victor Hooi <victorhooi@yahoo.com>
needed to keep tunnel connections alive.
> The Ping frame contains an opcode of 0x9.
> [...]
> The Pong frame contains an opcode of 0xA.
-- Sections 5.5.2 and 5.5.3, cf. https://tools.ietf.org/html/rfc6455#section-5.5.2
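for a server, a minimal unmasked ping frame is just two header bytes
plus payload (sketch, with $handle being the client's AnyEvent::Handle):

    # FIN=1 | opcode 0x9 (ping); server-to-client frames are unmasked,
    # and short payloads encode their length in the second byte
    my $payload = 'keepalive';
    my $frame = pack('CC', 0x89, length($payload)) . $payload;
    $handle->push_write($frame);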
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>