Currently, if a file starts with a newline, it gets removed and the
upload succeeds (provided no hash is given).
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Reviewed-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Tested-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Some fields (e.g. filename) can contain spaces, but our 'trim'
function would only return the value up to the first whitespace
character instead of removing leading/trailing whitespace. This led
to passing the wrong filename to the API call (e.g. 'foo' instead of
'foo (1).iso'), which would then reject it because of the 'wrong'
extension.
Fix this by just using the battle-proven trim from pve-common.
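A minimal sketch of the difference (Python used for illustration only; the real code is Perl, and the fix swaps in the trim helper from pve-common):

```python
import re

def broken_trim(s):
    # Buggy behaviour: capture only up to the first whitespace,
    # so 'foo (1).iso' collapses to 'foo'.
    m = re.match(r'\s*(\S+)', s)
    return m.group(1) if m else ''

def trim(s):
    # Correct behaviour: strip leading/trailing whitespace only,
    # keeping interior spaces intact.
    return re.sub(r'\s+$', '', re.sub(r'^\s+', '', s))

filename = '  foo (1).iso  '
print(broken_trim(filename))  # 'foo' -> wrong name passed to the API
print(trim(filename))         # 'foo (1).iso'
```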
Fixes: 0fbcbc2 ("fix #3990: multipart upload: rework to fix uploading small files")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Separate the flow into first getting the length and a data reference,
and only then handling writing/digest in a common way
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
and improve the boundary variable helper name by adding a _re postfix
and using snake case everywhere.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
== The problem
Upload of files smaller than ~16kb failed.
This was because the code assumed that it would be called
several times, each call performing a certain action.
When the whole file was already in the buffer, this failed because
the function would try parsing only the first part in the payload and
then return, instead of parsing the rest of the available data.
== Why not just modifying the current code a bit?
The code had a lot of nested control statements and a
non-intuitive control flow (phase 0->2->1->1->1 and so on).
The way the phases and buffer content were checked made it
rather difficult to just fix a few lines.
== What was changed
* Part headers are parsed with a single regex line each,
which improves code readability.
* Parsing the content is done in order, so even if the whole data is in the buffer,
it can be read in one go. Files of arbitrary sizes can be uploaded.
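As an illustration of the "single regex per part header" approach (Python, with a hypothetical pattern and field names; the actual Perl regexes differ):

```python
import re

# One regex per part header keeps the parser readable. The pattern
# below is illustrative, not the exact expression from the Perl code.
disposition_re = re.compile(
    r'^Content-Disposition: form-data; name="(?P<name>[^"]*)"'
    r'(?:; filename="(?P<filename>[^"]*)")?$'
)

header = 'Content-Disposition: form-data; name="filecontent"; filename="foo (1).iso"'
m = disposition_re.match(header)
print(m.group('name'))      # filecontent
print(m.group('filename'))  # foo (1).iso
```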
== Tested with
* Uploaded 0B, 1B, 14KB, 16KB, 1GB, 10GB, 20GB files
* Tested with all checksums and without
* Tested on firefox, chromium, and pvesh
I didn't do any fuzzing or automated upload testing.
== Drawbacks & Potential issues
* Part headers are hardcoded; adding new ones requires modifying this file
== Does not fix
* upload can still time out
Signed-off-by: Matthias Heiserer <m.heiserer@proxmox.com>
Tested-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Acknowledging the Content-Disposition header makes it possible for
the backend to tell the browser whether a file should be downloaded
rather than displayed inline, and what its default name should be.
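For illustration, a hypothetical helper building such a header value (not the actual backend code; the value format follows the Content-Disposition header as used in HTTP responses):

```python
def content_disposition(inline, filename=None):
    # 'inline' asks the browser to render the file, 'attachment' to
    # download it; the optional filename parameter sets the default
    # save name. Helper name and signature are made up for this sketch.
    value = 'inline' if inline else 'attachment'
    if filename is not None:
        value += f'; filename="{filename}"'
    return value

print(content_disposition(False, 'backup.tar.gz'))
# attachment; filename="backup.tar.gz"
print(content_disposition(True))
# inline
```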
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
While $self->error will immediately send out a 4xx or 5xx response
anyhow, it's still good to guard against possible side effects (e.g.,
from future code in that branch) on the server and return directly.
Note that this is mostly for completeness sake, we already have
another check that covers this one for relevant cases in commit
580d540ea9.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We don't expect any userinfo in the authority, and to avoid that this
allows some leverage for doing weird things later, it's better to
error out early on such requests.
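An illustrative analogue of the check (Python's urllib used for the sketch; the actual check runs in Perl on the raw request authority):

```python
from urllib.parse import urlsplit

def has_userinfo(url):
    # A URL whose authority carries userinfo ('user:pass@host') would
    # be rejected early instead of being processed further.
    return urlsplit(url).username is not None

print(has_userinfo('https://alice:secret@pve.example:8006/'))  # True
print(has_userinfo('https://pve.example:8006/'))               # False
```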
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Ensures that no external request can control streaming on proxying
requests as safety net for when we'd have another issue in the
request handling part.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
We implicitly assume that to be the case when assembling the target
URL, so assert it explicitly as it's user controlled input.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Originally-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
basically only possible to trigger with chromium-based browsers
(chrome, edge, opera), but besides those having the biggest usage
share currently, it's not nice in any way.
Users could inject headers into their response, which isn't really
that bad in itself, as they won't really do anything, at least for
sane browsers that don't allow setting third-party cookies by
default (unlike, again, chrome). In that case one can create huge
cookies that then trigger the max header size check on requests,
DoS'ing a user's access to a PVE interface if they can get them to
visit a malicious site (a clear-cookie action would allow visiting
it again)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Reported-by: STAR Labs <info@starlabs.sg>
SSL 2 and 3 are already disabled by default by us, and TLS 1.1 and below
are disabled by default on Debian systems.
requires corresponding patch in pve-manager to have an effect.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
like the TLS <= 1.2 cipher list, but needs a different option since the
format and values are incompatible. AnyEvent doesn't yet handle this
directly like the cipher list, so set it directly on the context.
requires corresponding patch in pve-manager (which reads the config, and
passes relevant parts back to the API server).
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Tested-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Stoiko Ivanov <s.ivanov@proxmox.com>
if a client closes the connection while the API server is
waiting/stalling here, the handle will disappear, and sending a response
is no longer possible.
(this issue is only cosmetic, but if such clients are a regular
occurrence it might get quite noisy in the logs)
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
if the WS gets disconnected without any data having been sent first,
wbuf (and thus `length $wbuf`) is undef. the actual length of the buffer
is not relevant here anyway, just the fact that it's non-empty - so
avoid the undef warning by dropping the unnecessary comparison.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this is useful if we want to pipe the output of a program e.g. through gzip
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The issue is probably not critical and best addressed by not running
the perl API servers in an exposed environment or when this needs to
be done by installing a reverse proxy in front of them.
The DOS potential of the perl daemons is limited more by the limited
number of parallel workers (and the memory constraints of starting
more of them), than by the CPU cycles wasted on TLS renegotiation.
Still, disabling TLS renegotiation should have very little downside:
* it was removed in TLS 1.3 for security reasons
* it was the way nginx addressed this issue [1].
* we do not use client certificate authentication
Tested by running `openssl s_client -no_tls1_3 -connect 192.0.2.1:8006`
and issuing a `HEAD / HTTP/1.1\nR\n`
with and without the patch.
[1] 70bd187c4c386d82d6e4d180e0db84f361d1be02 at
https://github.com/nginx/nginx (although that code adapted to
the various changes in openssl API over the years)
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
for proxied requests, we usually tear down the proxy connection
immediately when closing the source connection. this is not the correct
course of action for bulk one-way data streams that are proxied, where
the source connection might be closed, but the proxy connection might
still have data in the write buffer that needs to be written out.
push_shutdown already handles this case (closing the socket/FH after it
has been fully drained).
one example for such a proxied data stream is the 'migrate' data for a
remote migration, which gets proxied over a websocket connection.
terminating the proxied connection early makes the target VM crash for
obvious reasons.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
any uploaded file has to be deleted by the corresponding
endpoint. the file upload was only used by the 'upload to
storage' feature in pve.
this change allows the endpoint to delete the file itself,
making the old and racy `sleep 1` (waiting until the worker
has opened the file) obsolete.
this change breaks all pve-manager versions, in which the
worker does not unlink the temp file itself.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
this major release still needs to have an incompatible client, the next
one can drop setting a protocol client-side, and the one after that can
remove the protocol handling on the server side.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We do not support any, and we only ever send binary frames, so drop
trying to parse the header.
For compatibility with current clients (novnc, pve-xtermjs), we have
to reply with the protocols it sent.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
novnc has not supported this since 2015, and neither does
our xtermjs client. it is also not listed in IANA's list of websocket
subprotocols [0].
so simply drop it: only send out binary frames and don't decode text frames
0: https://www.iana.org/assignments/websocket/websocket.xml#subprotocol-name
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>