Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
[ Thomas: amended the following changes:
- factor out openid_login_param to widget-toolkit as
getOpenIDRedirectionAuthorization and use it
- use camel case to match our JS style guide and our framework (and
basically the rest of the JS world)
- minor cleanups like moving variable definitions into the single if
branch they're used in
]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Also add the 'backup-info' endpoint to the index.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
these were mostly relevant for upgrading from Corosync 2.x to 3.x - so
keep the warnings/errors, but reduce the noise a bit by skipping lots of
PASS output.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This had a myriad of issues:
* marked as protected, thus forwarded to the privileged daemon even
if it just returned static information
* did not return a directory index, but a "stub" string, which does
not make sense.
* not named index
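A proper, unprivileged directory index could instead look roughly like
this (a sketch only; the child entry and description are illustrative,
not the actual implementation):

    __PACKAGE__->register_method({
        name => 'index',
        path => '',
        method => 'GET',
        # no 'protected => 1', so the unprivileged daemon can serve it
        permissions => { user => 'all' },
        description => "Directory index.",
        parameters => {
            additionalProperties => 0,
            properties => {},
        },
        returns => {
            type => 'array',
            items => {
                type => 'object',
                properties => {},
            },
            links => [ { rel => 'child', href => "{name}" } ],
        },
        code => sub {
            return [
                { name => 'backup-info' }, # illustrative child entry
            ];
        },
    });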
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If neither 'rootdir' nor 'images' are configured on a storage, but
there are guest images, just log the number of volumes found. If they
are relevant for migration, the check for unreferenced volumes will
catch them later.
Also detect content type mismatch for all volumes of existing guests,
which also covers the case of a VM image on a storage with only
'rootdir' and vice versa. To catch all such unreferenced volumes too,
it is necessary to scan all storages that do not have both content
types configured.
Change the message from 'will not work' to 'might not work'. If a
volume only referenced by a snapshot is misconfigured, it doesn't mean
that the guest doesn't work at all. Or it might be an ISO on a
misconfigured storage.
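The scan described above could be sketched like this (helper names such
as get_volume_list() and volume_is_vm_image() are hypothetical
stand-ins, not the actual pve6to7 code):

    use PVE::Storage;

    my $cfg = PVE::Storage::config();
    for my $storeid (sort keys %{$cfg->{ids}}) {
        my $content = $cfg->{ids}->{$storeid}->{content} // {};
        next if $content->{images} && $content->{rootdir}; # nothing to catch here

        my $volumes = get_volume_list($storeid); # hypothetical helper
        if (!$content->{images} && !$content->{rootdir}) {
            # neither type configured - just log the count, the check for
            # unreferenced volumes will catch the relevant ones later
            my $count = scalar(@$volumes);
            log_info("found $count volume(s) on storage '$storeid'") if $count;
            next;
        }

        for my $volid (@$volumes) {
            # VM images need 'images', container volumes need 'rootdir'
            my $needed = volume_is_vm_image($volid) ? 'images' : 'rootdir'; # hypothetical
            log_warn("'$volid' needs content type '$needed' on storage '$storeid'"
                . " - guest might not work properly") if !$content->{$needed};
        }
    }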
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
If there is a log_fail because of a misconfigured 'none' content type,
the final log_pass should not be printed.
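That is, roughly (the flag name is invented for illustration):

    my $any_fail = 0;
    # each misconfigured 'none' content type does: log_fail(...); $any_fail = 1;
    log_pass("no problems found") if !$any_fail;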
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
VM names are returned by the endpoint anyway, so it makes sense to add
them to the endpoint specification; that way they also appear in the
API docs and are visible when using pvesh with text output.
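The addition to the returns schema would look something like this (the
description text is illustrative):

    returns => {
        type => 'array',
        items => {
            type => 'object',
            properties => {
                vmid => get_standard_option('pve-vmid'),
                name => {
                    description => "The guest's name, if set.",
                    type => 'string',
                    optional => 1,
                },
            },
        },
    },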
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to PBS. The 'errors' filter parameter still takes precedence
(overrides this).
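The precedence handling then amounts to something like this (a
simplified sketch, not the exact code):

    my $statusfilter = $param->{statusfilter};
    # the older boolean 'errors' parameter overrides the new filter
    $statusfilter = 'error' if $param->{errors};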
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to renamed PVE::Tools helper method ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
While one can re-add a user quite easily, the ACLs and the old
password may still be lost, so it's not really just a simple "undo".
That's why it should show the warning triangle and have 'No' selected
as the default.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
now that we no longer ship our own LVM packages, set the relevant
filtering options here if they are missing.
for an upgrade from PVE 6.x, the following two scenarios are likely:
A: a user-edited config provided by our old lvm2 package. it likely
contains our (or a modified) global_filter, but the old scan_lvs
default. in this case we leave global_filter alone as long as it still
contains our 'don't scan zvols' entry, and set scan_lvs to false.
B: the config provided by our old lvm2 package was replaced by the
default config from the stock lvm2 package. scan_lvs defaults to false
already, but global_filter is unset (scan everything), so we need to
set our own global_filter excluding zvols.
other combinations should be handled fine as well.
for new installs (installer, install on top of Debian Bullseye) we are
always in scenario B.
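In both scenarios the end result should be equivalent to these
/etc/lvm/lvm.conf settings (the exact regex we ship may differ
slightly):

    devices {
        # never scan ZFS zvols, they may contain guest-owned LVM setups
        global_filter = [ "r|/dev/zd.*|" ]
        # do not scan LVs for nested PVs (e.g. guest disks on LVM storage)
        scan_lvs = 0
    }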
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
we already have lots of information parsed and cached, so use that and
give the frontend more to work with and display.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
any system upgrading to 7.x was either installed with >= 6.4 in the
first place, or upgraded to >= 6.4 and thus fixed those issues already.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
We could also just check the mtime of the machine-id as a heuristic,
but extracting the machine-ids from our ISO archive was pretty
straightforward and avoids special handling for systems installed from
Debian, so use that.
The full map:
pve 4.0-62414ad6-11 -> "2ec24eda629a4c8d8c1f8dac50a9ee5f"
pve 4.1-a64d2990-21 -> "bd94244c0da6419a82a383e62dc03b51"
pve 4.2-95d93422-28 -> "45d4e7046c3d4c26af8acd589f358ac6"
pve 4.3-29d03d47-2 -> "8c445f96b3064ff79f825ea78a3eefde"
pve 4.4-f4006904-1 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 4.4-f4006904-2 -> "6f9fae0f0a794fd4b89b3abecfd7f182"
pve 5.0-786da0da-1 -> "285de85759894b3f9ad9844a89045af6"
pve 5.0-786da0da-2 -> "89971dede7b04c98b2b0bc8845f53320"
pve 5.0-20170505-test -> "4e3b6e9550f24d638bc26211a7b37df5"
pve 5.0-ad98a36-5 -> "bc2f684e31ee4daf95e45c62410a95b1"
pve 5.0-d136f4ad-3 -> "8cc7bc883fd048b78a4af7433c48e341"
pve 5.0-9795f744-4 -> "9b46d99712854566bb02a656a3ff9191"
pve 5.0-22d7548f-1 -> "e7fc055af47048ee884dcb88a7474336"
pve 5.0-273a9671-1 -> "13d879f75e6447a69ed85179bd93759a"
pve 5.1-2 -> "5b59e448c3e74029af2ac91f572d68a7"
pve 5.1-3 -> "5a2bd0d11a6c41f9a33fd527751224ea"
pve 5.1-cfaf62cd-1 -> "516afc72013c4b9da85b309aad987df2"
pve 5.1-test-20171019-1 -> "b0ce8d24684845e8ac337c588a7715cb"
pve 5.1-test-20171218 -> "e0af064c16e9463e9fa980eac66427c1"
pve 5.2-1 -> "6e925d11b497446e8e7f2ff38e7cf891"
pve 5.3-1 -> "eec280213051474d8bfe7e089a86744a"
pve 5.3-2 -> "708ded6ee82a46c08b77fecda2284c6c"
pve 5.3-preview-20181123-1 -> "615cb2b78b2240289fef74da610c146f"
pve 5.4-1 -> "b965b329a7e246d5be66a8d367f5760d"
pve 6.0-1 -> "5472a49c6436426fbebd7881f7b7f13b"
The 6.0 one should never trigger, as the fix was already out by then,
but it may be that some internal installation missed it, and it
doesn't hurt to check, so include it.
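The resulting check is then basically (a sketch; the actual variable
names in pve6to7 differ):

    use PVE::Tools;

    my $duplicate_ids = {
        map { $_ => 1 } qw(
            2ec24eda629a4c8d8c1f8dac50a9ee5f
            bd94244c0da6419a82a383e62dc03b51
        ) # ... plus the remaining IDs from the map above
    };
    my $machine_id = PVE::Tools::file_read_firstline('/etc/machine-id');
    if (defined($machine_id) && $duplicate_ids->{$machine_id}) {
        log_fail("machine-id '$machine_id' stems from one of our ISOs and is"
            . " most likely not unique - consider regenerating it");
    }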
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>