This removes vmbr* from the bridge selector if the user has access to vnets.
If the user also needs access to vmbr interfaces, we can add a permission
at path "/sdn/vnets/vmbrX".
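Granting such a permission could look roughly like this (role and user
names are placeholders, not taken from this change):

  pvesh set /access/acl --path /sdn/vnets/vmbr0 --roles PVESDNUser --users user@pve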
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Using
pvesh create /nodes/pve701/apt/repositories --path
"/etc/apt/sources.list" --index 0 --enabled 1
reliably leads to
error: invalid type: string "0", expected usize
Coerce to int to avoid this. I was not able to trigger the issue with
the "enabled" option being a string here (in PMG I was), but be on the
safe side and coerce there too. Otherwise it might get triggered by a
future, completely unrelated change further up in the API call
handling.
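The coercion itself is trivial; a sketch (parameter names taken from the
command above, the exact location in the call handling is not shown):

  $param->{index} = int($param->{index}) if defined($param->{index});
  $param->{enabled} = int($param->{enabled}) if defined($param->{enabled});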
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To avoid being blacklisted because of the default, quite popular,
libwww-perl user-agent, as reported in the community forum [0].
[0]: https://forum.proxmox.com/threads/104081/
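Overriding the agent on an LWP::UserAgent instance is a one-liner; the
agent string here is just a placeholder:

  use LWP::UserAgent;
  my $ua = LWP::UserAgent->new(agent => 'pve-manager/1.0');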
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A virtual package does not have SelectedState Install, but the
dependency will still be satisfied if a package providing it has it.
Fixes a bug that wrongly showed postfix as to-be-installed when a
different mail-transport-agent is installed and a pve-manager update
is available:
https://forum.proxmox.com/threads/103413/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and calculate it by getting the next event after 'now', since
we currently have no way to get the last run time for jobs only running
on other cluster nodes.
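Roughly, and assuming the PVE::CalendarEvent helpers behave as their
names suggest:

  my $calspec = PVE::CalendarEvent::parse_calendar_event($job->{schedule});
  my $next = PVE::CalendarEvent::compute_next_event($calspec, time());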
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of accumulating the whole output of 'mini-journalreader' in
the api call (this can be quite big), use the download mechanism of the
http-server to stream the output to the client.
we lose some error handling possibilities, but we do not have
to allocate anything here, and since perl does not free memory after
allocating[0] this is our desired behaviour.
to keep api compatibility, we need to give the journalreader the '-j'
flag to let it output json.
also tell the http server that the encoding is gzip and pipe
the output through it.
0: https://perldoc.perl.org/perlfaq3#How-can-I-free-an-array-or-hash-so-my-program-shrinks?
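The returned structure is roughly of the following shape (field names as
assumed here, not verified against the http-server):

  return {
      download => {
          fh => $fh,  # handle reading from mini-journalreader piped through gzip
          stream => 1,
          'content-type' => 'application/json',
          'content-encoding' => 'gzip',
      },
  };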
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
had it lying around and it did not feel condensed/code-golfed to me,
rather a bit more expressive (surely biased though)..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the old web ui sends the days as separate parameters, which will
be concatenated by a null-byte in the api, causing it to land this
way in the jobs.cfg
to fix this, split+join the list to get a well-formed dow list
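A sketch of that normalization, assuming PVE::Tools::split_list also
splits on null-bytes:

  $param->{dow} = join(',', PVE::Tools::split_list($param->{dow}));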
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
else this happens:
"Use of inherited AUTOLOAD for non-method PVE::API2::Backup::uuid() is
no longer allowed at /usr/share/perl5/PVE/API2/Backup.pm line 198.
(500)"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
not only the given parameters; e.g., at the moment, the gui will
never send a 'verify-certificate' parameter, even if set in the config.
by using the complete resulting config, we test the actual settings.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Note that this not only allows partitions to be used, but, for DB
and WAL disks, also one more type of disk that wasn't allowed before:
namely, GPT-partitioned disks with any partitions detected as used.
The reason is get_disks' behavior:
* Without $include_partitions=1, the disk will have the same usage
as its first used partition, and thus wasn't allowed. (Except in
the case that usage was LVM, where the check was bypassed, but
luckily OSD creation just failed later because no Ceph volume
group would be detected).
* With $include_partitions=1, the disk will have usage 'partitions'
and thus be allowed.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
a simple api call to simulate calendar event triggers
takes a schedule, an optional number (default 10), an optional starttime
(default 'now') and returns a list with unix timestamps, as well as
human-readable utc timestamps.
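Hypothetical usage, with the endpoint path and parameter names assumed
for illustration:

  pvesh create /cluster/jobs/schedule-analyze --schedule 'mon..fri 21:00' --iterations 3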
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
namely if the fs already exists, and if there is currently a
standby mds that can be used for the new fs.
previously, only one cephfs was possible, so these checks were not
necessary. now with pacific, it is possible to have multiple cephfs'
and we should check for those.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in addition to listing the vzdump.cron jobs, also list from the
jobs.cfg file.
updates/creations now go only into the new jobs.cfg
and on update, starttime+dow get converted to a schedule
this transformation is straightforward, since 'dow'
is already in a compatible format (e.g. 'mon,tue') and we simply
append the starttime (if any)
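In essence (a sketch, variable names assumed):

  # e.g. dow='mon,tue', starttime='21:00'  ->  schedule='mon,tue 21:00'
  my $schedule = $dow;
  $schedule .= " $starttime" if defined($starttime);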
id on creation is optional for now (for api compat), but will
be autogenerated (uuid). on update, we simply take the id from before
(the ids of the other entries in vzdump.cron will change, but they would
anyway)
as long as we have the vzdump.cron file, we must lock both
vzdump.cron and jobs.cfg, since we often update both
we also change the backupinfo api call to read the jobs.cfg too
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since commit 8a3a300b ("ceph services: drop broadcasting legacy
version pmxcfs KV"), the 'ceph-version' kv is not broadcast anymore,
so we should not query it; instead use get_ceph_versions.
Also drop the other legacy keys for the versions.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. The Diskmanage package
relies upon lsblk for certain info, and lsblk queries the udev
database. Ensure the information is updated by manually calling
'udevadm trigger' for the changed devices.
Without the fix, and a bit of bad luck, a cleaned up disk could still
show up as an 'LVM2_member' for example.
[0]: https://github.com/systemd/systemd/issues/18525
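The manual trigger boils down to something like the following sketch
(the device path field is an assumption):

  PVE::Tools::run_command(['udevadm', 'trigger', $dev->{devpath}]);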
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is mostly a copy of the wipe_disks helper with the difference
that it also uses wipefs on the device and its partitions.
Remove the wipe_disks helper as no users remain.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Adds some missing descriptions to the api/man page documentation for
certain options of the `pvenode` command. Some minor language fix-ups
are also included.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
so that they are documented and get displayed by pvesh/pvenode.
all those fields must exist (since they come from the upid),
aside from the exitstatus, so mark that as optional.
a forum user reported that they are missing:
https://forum.proxmox.com/threads/ergebnis-eines-tasks-per-api-abfragen.92267/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we filtered out devices which belong to the 'Generic System Peripheral'
category, but this can contain actually useful pci devices that
users want to pass through, so simply do not filter it by default.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
metadata is gathered using a HEAD request.
Due to the ability of this api endpoint to request files on internal
networks (which would not be visible/accessible from outside) it is
restricted to users with permissions `Sys.Audit` and `Sys.Modify` on
`/`. Users with these permissions are able to alter node (network)
config anyway, so this should not create any further security risk.
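Hypothetical usage, endpoint name assumed for illustration:

  pvesh get /nodes/<node>/query-url-metadata --url 'https://example.com/image.iso'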
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
And also add the 'backup-info' endpoint to the index.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This had a myriad of issues:
* marked as protected, thus forwarded to the privileged daemon even
if it just returned static information
* did not return a directory index but a "stub" string, which does not
make sense.
* not named index
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
VM names are returned by the endpoint anyway, therefore it makes sense
to add it to the endpoint specification so it also appears in the API
docs and is visible when using pvesh with text output.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to PBS. The 'errors' filter parameter still takes precedence
(overrides this)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to renamed PVE::Tools helper method ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we have lots of information already parsed and cached, use that and
give the frontend more to work with/display.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
a common function to download arbitrary files from urls has been
defined as PVE::Tools::download_file_from_url and is now used.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Multiple public networks can be defined in the ceph.conf. The networks need to
be routed to each other.
Support handling multiple IPs for a single monitor. By default, one address from
each public network is selected for monitor creation, but, as before, it can be
overwritten with the mon-address parameter, which now takes a list of addresses.
On removal, make sure that all addresses are removed from the mon_host entry in
the ceph configuration.
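For example (addresses are placeholders):

  pveceph mon create --mon-address 10.10.10.11,fc00::11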
Originally-by: Alwin Antreich <a.antreich@proxmox.com>
[handling of multiple addresses]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by also comparing the canonical form to decide when to remove an address. When
getting the IP from the rados information, also drop any brackets, so our
existing function can handle it. Add the brackets back within the
remove_addr_from_mon_host function.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Partially based on pve-storage's CephConfig.pm get_monaddr_list, but the
interface is not the best for the use case here.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation for supporting multiple addresses. The config section does not
allow more than one public_addr.
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
mostly relevant to prepare support for IPv4/IPv6 dual stack mode as a special
case of the planned support for multiple public networks.
As before, only set the false value when we are dealing with the first address,
but also be explicit about the IPv4 case as the defaults might change in the
future.
Then, when an address of a different type comes along later, set the relevant
bind option to true.
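For the dual stack case the monitor config would then end up with both
bind options enabled, along the lines of:

  ms_bind_ipv4 = true
  ms_bind_ipv6 = true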
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The change not to pass the 'upgrade' parameter in the frontend was made in
953f6e9bb3 (the commit doesn't talk about it; it's
likely an accidental squash of two changes).
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The switch to 'cmd' was made by commit af39a6f09651e15d1c83536e25493a2212efd7d3
in the pve-xtermjs repo and is included in 4.7.0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
everywhere where Pool.Allocate was unnecessarily used it was replaced
with Pool.Audit.
`/cluster/resources` now returns pool information for guests only if
the requesting user has the Pool.Audit permission on the pool.
`/pool/` now returns only pools where the requesting user has the
Pool.Audit permission.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
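Hypothetical usage, endpoint path assumed for illustration:

  pvesh get /nodes/<node>/vzdump/defaults --storage <storage>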
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
done behind a switch, with a HEALTH warning if it was not active
(i.e., if it still allowed connections from older clients).
New installations have this switch also at the insecure level, for
compat reasons, so let's deactivate it ourselves after monitor creation
to avoid the health warning and slightly insecure setup (in default
PVE ceph the whole issue was of rather low impact/risk). But, only do
so when creating the first monitor of a ceph cluster, to avoid
breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to setup fresh monitors but have still old
clients.
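Switching back would be something like (assuming the usual option name
for this switch):

  ceph config set mon auth_allow_insecure_global_id_reclaim true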
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently the check for used ports for bonds and bridges happens
while rendering '/etc/network/interfaces.new' in PVE::Inotify
(pve-common).
However, at that stage the new/updated interface is already merged
with the old settings, making it impossible to indicate where a NIC
is currently used.
The code is adapted from the renderer in
PVE::Inotify::__write_etc_network_interfaces.
Tested on a virtual PVE instance.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this was from the time when we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that forgot to clean up
the error handling..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.
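These correspond to plain Ceph pool properties and could also be set
directly, e.g.:

  ceph osd pool set <pool> target_size_ratio 0.3
  ceph osd pool set <pool> pg_num_min 32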
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We want to check explicitly for type host, so filter for that first
and create a hash map for easier usage afterwards.
Drop the error when there's no tree, as either RADOS error'd on bad
command already, or there really is no tree (but RADOS worked OK), in
which case we simply return that the OSD did not belong to this node.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allow destroying only OSDs that belong to the node that has been specified in
the API path.
So if
- OSD 1 belongs to node A and
- OSD 2 belongs to node B
then
- pvesh delete nodes/A/ceph/osd/1 is allowed but
- pvesh delete nodes/A/ceph/osd/2 is not
Destroying an OSD via GUI automatically inserts the correct node
into the API path.
pveceph automatically inserts the local node into the API call, too.
Consequently, it can now only destroy local OSDs (fix #2053).
- pveceph osd destroy 1 is allowed on node A but
- pveceph osd destroy 2 is not
Signed-off-by: Dominic Jäger <d.jaeger@proxmox.com>
The guest iteration is slightly confusing as we also handle the
accumulated pool settings there, so we only check the VM.Audit privs.
for a specific VM and skip to the next one if the permission is not
there after that pool handling.
So, move operations which are only required when VM privs. are there
below this check.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
makes it easier to compare in API responses, and those lists are not
huge, seldom over a few thousand entries, which is peanut crumbs compared
to all the other things in this perl stack.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
so it doesn't get out of scope too early.
Regression introduced by 5620e5761e as pointed
out by Fabian Grünbichler.
Reported in the community forum:
https://forum.proxmox.com/threads/limit-simultaneous-backup-jobs.87489
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
avoid further crowding the top-level node API path with such
"what can some part of the node currently do" stuff, rather move it
down.
The QEMU cpu stuff should also move down there.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as 'machine-types', so it is clear this refers to QEMU machines, not the
local machine (as one might think, this being a 'node' API call).
Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This API call is the predecessor of /nodes/{node}/disks/list, which has seen a
few more improvements. The latter API call should be used instead, and the web
UI already does so.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
like we do for the storage section configs.
we will need this to store the token for InfluxDB's http api.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
We get the device list from ceph-volume lvm list, and decode the json
output, which at that point is tainted (perlsec (1)).
Untaint it here before calling, because it is currently the only
call-site using the information in a problematic way (run_command).
(the only other call-site being in pve5to6)
Alternatively we could untaint while reading the information, but then
should only return a small subset of the ceph-volume output.
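Untainting in perl is usually done via a regex capture; roughly:

  # sketch: a stricter pattern than .* should be used for real input
  my ($untainted) = $tainted =~ m/^(.*)$/s;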
The issue is most likely due to
cb9db10c1a9855cf40ff13e81f9dd97d6a9b2698 in pve-common ('run_command:
improve performance for logging and long lines'),
Tested on a virtual testsetup by creating OSDs with a second DB disk,
and destroying it via GUI (did not manage to get the error without the
DB disk)
Reported via our community forum:
https://forum.proxmox.com/threads/insecure-dependency-in-exec-during-osd-destroy.79574/
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
by deleting it from $ceph_param we also deleted it from
$param, since $ceph_param was only a reference.
fix it by extracting it beforehand.
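A minimal illustration of the aliasing issue (the hash key is a
placeholder):

  my $ceph_param = $param;              # same underlying hash, not a copy
  delete $ceph_param->{'add-storage'};  # now also gone from $param
  # => extract the needed value from $param *before* the delete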
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
If the command itself allows it, which normally means it has good
verification of passed arguments.
We may want to re-evaluate security here if we allow execution for a
group of non-root users.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the information comes only from the key-value store in the pmxcfs, so
we do not actually require ceph to be installed.
So only check if ceph is locally initialized and create the rados
connection after the early return when only versions scope is set.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
We actually wanted to use that scheme for more new API paths, let's
see if it is really fitting, starting with this.
Use the new widget-toolkit submitUrl helper to add the ID on create.
And unify the edit/create window creation, which may fit better in a
separate commit; it's quite small and was too cumbersome to untangle,
so just go against my own rules here... :(
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
after creation, so that users don't need to go the ceph tooling route.
Separate common pool options to reuse them in other places.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
to reduce code duplication and make it easier to add more options for
pool commands.
Use a new rados object for each 'osd pool set', as each command can set
an option independently of the previous command's success/failure. On
failure, a new rados object would need to be created, and that would
confuse task tracking of the REST environment.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Like this, there will be a backup task (within the big worker task)
for such IDs, which will then visibly (i.e. also visible in the
notification mail) fail with, e.g.:
unable to find VM '123'
In get_included_guests, the key '' was chosen for the orphaned IDs,
because it cannot possibly denote a nodename.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and make the two options mutually exclusive as long
as they are specified on the same level (e.g. both
from the storage configuration). Otherwise prefer
option > storage config > default (only maxfiles has a default currently).
Defines the backup limit for prune-backups as the sum of all
keep-values.
There is no perfect way to determine whether a
new backup would trigger a removal with prune later:
1. we would need a way to include the not yet existing backup
in a 'prune --dry-run' check.
2. even if we had that check, if it's executed right before
a full hour, and the actual backup happens after the full
hour, the information from the check is not correct.
So in some cases, we allow backup jobs with remove=0, that
will lead to a removal when the next prune is executed.
Still, the job with remove=0 does not execute a prune, so:
1. There is a well-defined limit.
2. A job with remove=0 never removes an old backup.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and regular users to read all their own tasks as well as those of their
associated tokens.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>