With Proxmox VE 8, we'll have support for an enterprise Ceph repo,
accessed through Proxmox VE subscriptions, to provide more broadly
tested Ceph updates for production setups.
Replace the test-repository parameter with an actual enum of
selectable repo types for:
- test (same as previously selected through setting test-repository)
- no-subscription (the previous default, then named "main")
- enterprise (new and the default now, recommended for production)
Note that writing the auth part is a bit hacky and should be
improved.
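A rough sketch of how the new enum could look in the API schema
(property name and placement assumed, not taken from the actual patch):

    'repository' => {
        description => "Ceph repository to use.",
        type => 'string',
        enum => ['enterprise', 'no-subscription', 'test'],
        default => 'enterprise',
        optional => 1,
    },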
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The pve_verify_cidr{,v4,v6} functions were originally intended for
the /etc/network/interfaces API endpoints and thus are a bit
restrictive. For example, as reported in the community forum[0],
pve_verify_cidr() does not consider '0::/0' and '0::/1' to be valid.
The error message in this scenario being
> value does not look like a valid CIDR network
is also confusing, as the first thought of users will be that it comes
from the passed-in monitor address.
The public networks are not written here and read from the Ceph config
and via a RADOS mon command, so no need to try and verify them. If
something really would go wrong during parsing, the
get_local_ip_from_cidr() call would complain afterwards.
[0]: https://forum.proxmox.com/threads/125226/
Suggested-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
if a job has no schedule and is executed via "Schedule now" but fails, the
following will be printed to journal/syslog:
Mar 21 13:05:01 host02 pvescheduler[203343]: send/receive failed, cleaning up snapshot(s)..
Mar 21 13:05:01 host02 pvescheduler[203343]: 100-0: got unexpected replication job error - command 'set -o pipefail && pvesm export local-zfs:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1679400300__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=host03' root@10.0.74.3 -- pvesm import local-zfs:vm-100-disk-0 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1679400300__ -allow-rename 0' failed: exit code 255
Mar 21 13:05:01 host02 pvescheduler[203343]: Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/PVE/API2/Replication.pm line 107.
defaulting to the fallback schedule '*/15' makes the spurious warning go away.
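A minimal sketch of the fix, with the config variable name assumed:

    # jobs triggered via "Schedule now" may have no schedule configured,
    # so fall back to the default instead of passing undef along
    my $schedule = $jobcfg->{schedule} // '*/15';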
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Consolidating the different config paths lets us add more as needed
without polluting our API with too many 'configxxx' endpoints.
The config and configdb paths are renamed under the ceph/cfg path:
* config -> raw (returns the ceph.conf file as is)
* configdb -> db (returns the ceph config db contents)
The old paths are still available and need to be dropped at some point.
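For example:

    # old: pvesh get /nodes/{node}/ceph/config
    pvesh get /nodes/{node}/ceph/cfg/raw   # ceph.conf as-is
    # old: pvesh get /nodes/{node}/ceph/configdb
    pvesh get /nodes/{node}/ceph/cfg/db    # ceph config db contents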
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
/nodes/{node}/ceph/pools/{pool} returns the pool details right away on a
GET. This makes it bad practice to add additional sub API endpoints.
By deprecating it and replacing it with /nodes/{node}/ceph/pool/{pool}
(singular instead of plural) we can turn that into an index GET
response, making it possible to expand it more in the future.
The GET call returning the pool details is moved into
/nodes/{node}/ceph/pool/{pool}/status
The code in the new Pool.pm is basically a copy of Pools.pm, to avoid
a close coupling with the old code, as it is possible that the two
will diverge until we can entirely remove the old code.
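For example:

    # deprecated: pvesh get /nodes/{node}/ceph/pools/{pool}
    pvesh get /nodes/{node}/ceph/pool/{pool}          # index GET
    pvesh get /nodes/{node}/ceph/pool/{pool}/status   # pool details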
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
To get more details for a single OSD, we add two new endpoints:
* nodes/{node}/ceph/osd/{osdid}/metadata
* nodes/{node}/ceph/osd/{osdid}/lv-info
The {osdid} endpoint itself gets a new GET handler to return the index.
The metadata one provides various metadata regarding the OSD.
Such as
* process id
* memory usage
* info about devices used (bdev/block, db, wal)
* size
* disks used (sdX)
...
* network addresses and ports used
...
Memory usage and PID are retrieved from systemd while the rest can be
retrieved from the metadata provided by Ceph.
The second one (lv-info) returns the following info for a logical
volume:
* creation time
* lv name
* lv path
* lv size
* lv uuid
* vg name
Possible volumes are:
* block (default value if not provided)
* db
* wal
'ceph-volume' is used to gather the info, except for the creation time
of the LV, which is retrieved via 'lvs'.
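Example calls; note that the parameter name for selecting the volume
type is an assumption here:

    pvesh get /nodes/{node}/ceph/osd/{osdid}/metadata
    pvesh get /nodes/{node}/ceph/osd/{osdid}/lv-info --type db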
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The 'hardware' entry was missing there. While interfacing with it
works, it will not show up during CLI auto completion and in the HTML
debug view (/api2/html/) if not listed here in the API directory
index.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
By switching from 'ceph osd tree' to the 'ceph osd df tree' mon API
equivalent, we get the same data structure with more information per
OSD. One of them is the number of PGs stored on that OSD.
The number of PGs per OSD is an important number, for example when
trying to figure out why the performance is not as good as expected.
Therefore, adding it to the OSD overview visible by default should
reduce the number of times one needs to access the CLI.
Comparing runtime cost on a 3 node ceph cluster with 4 OSDs each doing 50k
iterations gives:
               Rate  osd-df-tree  osd-tree
osd-df-tree  9141/s           --      -25%
osd-tree    12136/s          33%        --
So, while definitely a bit slower, it's still in the µs range, and as
such below the HTTP-in-TLS-in-TCP connection setup overhead for most
users, so the extra useful information is worth it.
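A sketch of the switched call, assuming the usual PVE::RADOS
mon_command interface:

    # before: my $res = $rados->mon_command({ prefix => 'osd tree' });
    my $res = $rados->mon_command({ prefix => 'osd df tree' });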
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ TL: slight rewording of subject and add benchmark data ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
previously the upid would just be used without a file extension when
downloading a task log. this led to rather strange filenames that
appeared unfamiliar to users, as the upid is not very prevalent in the
gui. set a proper file name based on the node name, worker type and a
time stamp instead. also add the ".log" file extension to indicate
that these files contain logs.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
This patch fixes the issue that when the user supplied any non-standard
repositories, the changelogs often wouldn't load. For example, providing
both pve-no-subscription and pbs-no-subscription broke the changelog
API, since the URL built for pbs-no-subscription was invalid.
Signed-off-by: Leo Nunner <l.nunner@proxmox.com>
This API endpoint returns a big nested schema. This patch adds a mostly
complete description.
For the actual service instance return schema, we include commonly used
and important properties. It will usually return more. What exactly
depends on the Ceph service type.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
to include a more complete description of the returned data.
Sort properties in alphabetical order when the list is long.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
So that one can really decide if this is a shutdown or an actual
stop.
partially related to #4194
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Note that this also changes the lower timeout of 60s for CTs to 180s,
which VMs always used; besides that, there's not much gained by making
that distinction, and there never was a really good argument for it.
partially related to #4194
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we might add other parameters that may be used together with the
`download` one, so rather be explicit about what we check.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The read_tasklog API call now streams the whole log file if the query
parameter 'download' is set to true.
This is done in preparation for the task log download button to be
added in the TaskViewer.
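For example (boolean syntax illustrative):

    pvesh get /nodes/{node}/tasks/{upid}/log --download 1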
Signed-off-by: Daniel Tschlatscher <d.tschlatscher@proxmox.com>
Tested-by: Stefan Sterz <s.sterz@proxmox.com>
Reviewed-by: Stefan Sterz <s.sterz@proxmox.com>
The 'cmd-safety', 'configdb' and 'mgr' items were missing, and while
directly calling the API endpoints worked, the api-viewer and pvesh
were partially broken here.
Sorting the whole list alphabetically will make it easier to track in
the future.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
[ T: note which items were missing and reword slightly ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Allows sufficiently privileged users to pass in retention and
performance parameters for manual backup, but keeps tmpdir, dumpdir
and script root-only. Such users could already edit the job
accordingly, so essentially not granting anything new.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
keep Sys.Modify only for backward compat, as it does not really make
much sense to ask for that on an informative GET call.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Ceph provides us with several safety checks to verify that an action is
safe to perform. This endpoint provides the means to access them.
The actual mon commands are not exposed directly. Instead the two
actions "stop" and "destroy" are offered.
In case it is not okay to perform an action, Ceph provides a status
message explaining why. This message is part of the returned values.
For now there are the following checks for these services:
MON:
- ok-to-stop
- ok-to-rm
OSD:
- ok-to-stop
- safe-to-destroy
MDS:
- ok-to-stop
Even though OSDs have a check for whether it is okay to destroy them,
it is for now not really usable in our workflow, because the OSD needs
to be up and running to return useful information. Our current
workflow in the GUI is that the OSD needs to be stopped in order to
destroy it.
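An example call; the parameter names are assumptions here, not taken
from the actual patch:

    pvesh get /nodes/{node}/ceph/cmd-safety --service osd --id 0 --action stop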
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
To avoid returning unrelated information in the /version api call in
the future, use the API endpoint that'd return the relevant
information anyway.
It contains most UI-relevant options, like the console preference and
tag-style, so allow these for users without 'Sys.Audit' on '/'; for
others it's unchanged, they still get the whole datacenter config.
We also add the list of allowed tags; while not strictly a datacenter
config, it's derived from the current user's privileges and the
datacenter config.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
since ceph luminous (ceph 12), pools need to be associated with at
least one application. expose this information here too so that clients
of this endpoint can use it.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
Tested-By: Aaron Lauterer <a.lauterer@proxmox.com>
so the frontend has the information readily available.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
for backwards compatibility. Otherwise, e.g. listing backup jobs with
pvesh get /cluster/backup is broken. And suddenly not having the
property anymore would be a breaking API change.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
This avoids errors about the use of uninitialized values if the 'pool'
parameter is not present in the storage configuration.
The 'pool' property for an RBD storage config is not mandatory.
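A minimal sketch of the fix, assuming 'rbd' as the fallback pool name
(the usual Ceph default):

    # 'pool' is optional in the storage config, avoid concatenating undef
    my $pool = $scfg->{pool} // 'rbd';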
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Also generalizes the way vzdump property strings are handled for jobs.
Something similar could be done in VZDump.pm, but there the maxfiles
and prune-backups settings are currently coupled, so a dedicated
parse_performance() is used instead. Can be changed once maxfiles is
dropped.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
since the jobs are configured cluster-wide in pmxcfs, a user can use any
node to update their config. for some configs (schedule/enabled)
we need to update the last runtime in the state file, but this
is sadly only node-local.
to also update the state file on the other nodes, we introduce
a new 'detect_changed_runtime_props' function that checks and saves relevant
properties from the config to the statefile each round of the scheduler if they
changed.
this way, we can detect changes in those and update the last runtime too.
the only situation where we don't detect a config change is when the
user changes back to the previous configuration in between iterations.
This can be ignored though, since it would not be scheduled then
anyway.
in 'synchronize_job_states_with_config' we switch from reading the
jobstate unconditionally to checking the existence of the statefile
(which is the only condition that can return undef anyway),
so that we don't read the file multiple times each round.
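A rough sketch of the idea; helper and field names are assumed, not
taken from the actual patch:

    # each scheduler round: compare relevant props from the cluster-wide
    # config with the node-local statefile and save them on change, so
    # the last runtime is updated on every node
    sub detect_changed_runtime_props {
        my ($jobid, $type, $cfg) = @_;
        my $state = read_state($jobid, $type); # node-local statefile
        my $changed = 0;
        for my $prop (qw(schedule enabled)) {
            next if ($state->{$prop} // '') eq ($cfg->{$prop} // '');
            $state->{$prop} = $cfg->{$prop};
            $changed = 1;
        }
        save_state($jobid, $type, $state) if $changed;
    }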
Fixes: 0c8d7468 ("fix #4053: don't run vzdump jobs when they change from
disabled->enabled")
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fiona Ebner <f.ebner@proxmox.com>
by extracting the JSON-encoded-string schema and dumping it into the
verbose description, so it at least shows up in the API viewer.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
since this was missing a proper return type definition the api viewer
couldn't display the endpoint (`retinfs.items` was undefined). also
the `pvesh` command would complain that it cannot properly format the
return type because the variable `$item_type` in `CLIFormatter.pm` was
not defined.
Signed-off-by: Stefan Sterz <s.sterz@proxmox.com>
by updating the timestamp in the job state when enabled changes from
0 -> 1. We do it this way in PBS too, for example, and it is the more
sensible behaviour.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
like systemd-timers' 'persistent' option, so that the user can configure
it to not be run after powering up when it was previously missed.
this reverses the default behaviour to not run missed jobs after pvescheduler
was started, since most of the time that's not the desired behaviour
since we don't use it for updated schedules anymore, rename
'updated_job_schedule' to 'update_last_runtime'
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Reviewed-by: Fabian Ebner <f.ebner@proxmox.com>
which can happen when failing to obtain the guest's migration lock.
This led to a lot of mails being sent during migration (timeout for
obtaining lock is only 2 seconds and we run it in a loop).
One could argue that obtaining the lock should increase the fail
count, but without the lock, the job state should not be touched and
even the first three mails upon migration could be considered spam.
Fixes: fa4bb659 ("replication: sent always mail for first three tries and move helper")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Because $mon->{addr} might come with a port attached (affects monitors
created with PVE 5.4 as reported in the community forum [0]), or even
be a hostname (according to the code in Ceph/Services.pm). Although
the latter shouldn't happen for configurations created by PVE.
[0]: https://forum.proxmox.com/threads/105904/
Fixes: 9e989449 ("api: ceph: mon: fix handling of IPv6 addresses in assert_mon_prerequisites")
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Mention which optional parameters will be used for the replicated
metadata pool but won't have an effect on the erasure coded data pool.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
The crush rule is an optional parameter which can be used for the
metadata pool, but the erasure coded data pool will always get its own
crush rule. Therefore, this parameter cannot be adapted.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When a schedule only has a limited number of runs (e.g.
2022-10-01 8:00/30), it can happen that $next is undef after the last run.
Exit early in that case.
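A minimal sketch, assuming the usual PVE::CalendarEvent helper:

    my $next = PVE::CalendarEvent::compute_next_event($calspec, $last_run);
    return if !defined($next); # limited-run schedules eventually yield no further event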
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
The osd dump already contains the pool type in numerical format.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
The behavior of always adding the storage config was lost in commit
23c407e. But it is more sensible to make it a default that can be
changed if needed.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
moved to a format string 'erasure-coded', which also allows dropping
most of the param existence checking, as we can set the correct
optional'ness in there. Also avoids bloating the API too much for
just this.
Reuse the $rados connection more often to avoid too much
overhead/lingering sockets (the rados connection stays around in the
background to allow efficient reuse).
really should be three separate commits, but too intertwined and too
late for me to care tbh.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
To use erasure coded (EC) pools for RBD storages, we need two pools. One
regular replicated pool that will hold the RBD omap and other metadata
and the EC pool which will hold the image data.
The coupling happens when an RBD image is created by adding the
--data-pool parameter. This is why we have the 'data-pool' parameter in
the storage configuration.
To follow already established semantics, we will create an 'X-metadata'
and an 'X-data' pool. The storage configuration is always added, as it
is the only thing that links the two together (besides naming schemes).
Different pg_num defaults are chosen for the replicated metadata pool as
it will not hold a lot of data.
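An illustrative storage.cfg entry tying the two pools together (names
are examples):

    rbd: ectest
        pool ectest-metadata
        data-pool ectest-data
        content images,rootdir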
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
When removing a pool, we check against any storage that might have that
pool configured.
We need to check if that pool is used as data-pool too.
Signed-off-by: Aaron Lauterer <a.lauterer@proxmox.com>
commit 89d146f207 introduced permission
checks here that caused all regular bridges to be removed from the
returned list as soon as the SDN package is installed, unless the user
is root@pam or there exists a VNET with the same ID.
this is arguably a breaking change, so limit the priv check to actually
defined VNETs for the time being, and add ALL regular bridges
unconditionally like before.
get_local_vnets already filters by the same privs, so we need to get the
full config to find out which IDs are VNETs and which are not.
once/iff we introduce ACL paths for *all* bridges in the future, we can
limit accordingly here.
CC: Alexandre Derumier <aderumier@odiso.com>
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
but rather multiple times becoming exponentially less frequent.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
on create we require either starttime (+dow) or a schedule, but when
updating an existing job, this is not necessary
before we changed to schedules, the starttime was not optional either on
update, but i think there is no reason to require the user to send the
schedule/starttime along every time.
the gui will send all values every time, so that was never a problem there
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In preparation to have check_volume_access() always allow access for
users with the Datastore.Allocate privilege, so as not to automatically
give all such users permission to extract the config too.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
This removes vmbr* from the bridge selector if the user has access to
vnets. If the user also needs access to vmbr*, we can add a permission
on the path "/sdn/vnets/vmbrX".
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Using
pvesh create /nodes/pve701/apt/repositories --path
"/etc/apt/sources.list" --index 0 --enabled 1
reliably leads to
error: invalid type: string "0", expected usize
Coerce to int to avoid this. I was not able to trigger the issue with
the "enabled" option being a string here (in PMG I was), but be on the
safe side and coerce there too. Otherwise it might get triggered by a
future, completely unrelated change further up in the API call
handling.
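A minimal sketch of the coercion, with the hash name assumed:

    # the backend expects usize/bool, not the strings the API may hand over
    $options->{index} = int($options->{index}) if defined($options->{index});
    $options->{enabled} = int($options->{enabled}) if defined($options->{enabled});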
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
To avoid being blacklisted because of the default, quite popular,
libwww-perl user-agent, as reported in the community forum [0].
[0]: https://forum.proxmox.com/threads/104081/
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
Tested-by: Matthias Heiserer <m.heiserer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A virtual package does not have SelectedState Install, but the
dependency will still be satisfied if a package providing it has.
Fixes a bug wrongly showing that postfix will be installed when a
different mail-transport-agent is installed and a pve-manager update
is available:
https://forum.proxmox.com/threads/103413/
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
and calculate it by getting the next event after 'now', since
we currently have no way to get the last run time for jobs only running
on different cluster nodes.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
instead of accumulating the whole output of 'mini-journalreader' in
the api call (this can be quite big), use the download mechanic of the
http-server to stream the output to the client.
we lose some error handling possibilities, but we do not have
to allocate anything here, and since perl does not free memory after
allocating[0], this is our desired behaviour.
to keep api compatibility, we need to give the journalreader the '-j'
flag to let it output json.
also tell the http server that the encoding is gzip and pipe
the output through it.
0: https://perldoc.perl.org/perlfaq3#How-can-I-free-an-array-or-hash-so-my-program-shrinks?
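A sketch of the download mechanic; the field names follow what
pve-http-server supports, but their exact use here is assumed:

    return {
        download => {
            fh => $fh, # handle reading from 'mini-journalreader -j ... | gzip'
            stream => 1,
            'content-type' => 'application/json',
            'content-encoding' => 'gzip',
        },
    };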
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
had it lying around and it did not feel condensed/code-golfed to me,
rather a bit more expressive (surely biased though)..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
the old web ui sends the days as separate parameters, which will
be concatenated by a null-byte in the api, causing them to land this
way in the jobs.cfg.
to fix this, split+join the list to get a well-formed dow list
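A minimal sketch using the PVE::Tools list helper:

    # split on the null-bytes (among other separators) and re-join with
    # commas to get a well-formed dow list like 'mon,tue'
    $param->{dow} = join(',', PVE::Tools::split_list($param->{dow}));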
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
else this happens:
"Use of inherited AUTOLOAD for non-method PVE::API2::Backup::uuid() is
no longer allowed at /usr/share/perl5/PVE/API2/Backup.pm line 198.
(500)"
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
not only the given parameters; e.g., at the moment, the gui will
never send a 'verify-certificate' parameter, even if it is set in the
config. by using the complete resulting config, we test the actual
settings.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Note that this does not only allow partitions to be used; for DB and
WAL disks, it also allows one more type of disk that wasn't allowed
before, namely GPT-partitioned disks with any partitions detected as
used.
The reason is get_disks' behavior:
* Without $include_partitions=1, the disk will have the same usage
  as its first used partition, and thus wasn't allowed. (Except in
the case that usage was LVM, where the check was bypassed, but
luckily OSD creation just failed later because no Ceph volume
group would be detected).
* With $include_partitions=1, the disk will have usage 'partitions'
and thus be allowed.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
a simple api call to simulate calendar event triggers. it takes a
schedule, an optional number of iterations (default 10) and an optional
starttime (default 'now'), and returns a list with unix timestamps as
well as human-readable utc timestamps.
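An example call; the endpoint path and parameter names are assumptions
here:

    pvesh get /cluster/jobs/schedule-analyze --schedule 'mon..fri 21:00' --iterations 3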
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
namely whether the fs already exists, and whether there is currently a
standby mds that can be used for the new fs.
previously, only one cephfs was possible, so these checks were not
necessary. now, with pacific, it is possible to have multiple cephfs'
and we should check for those.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
in addition to listing the vzdump.cron jobs, also list from the
jobs.cfg file.
updates/creations now only go into the new jobs.cfg,
and on update, starttime+dow get converted to a schedule
this transformation is straightforward, since 'dow'
is already in a compatible format (e.g. 'mon,tue') and we simply
append the starttime (if any)
id on creation is optional for now (for api compat), but will
be autogenerated (uuid). on update, we simply take the id from before
(the ids of the other entries in vzdump.cron will change but they would
anyway)
as long as we have the vzdump.cron file, we must lock both
vzdump.cron and jobs.cfg, since we often update both
we also change the backupinfo api call to read the jobs.cfg too
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since commit 8a3a300b ("ceph services: drop broadcasting legacy
version pmxcfs KV")
The 'ceph-version' kv is not broadcasted anymore, so we should not
query it, instead use get_ceph_versions
Also drop the other legacy keys for the versions
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
There is a udev bug [0] which can ultimately lead to the udev database
for certain devices not being actively updated. The Diskmanage package
relies upon lsblk for certain info, and lsblk queries the udev
database. Ensure the information is updated by manually calling
'udevadm trigger' for the changed devices.
Without the fix, and a bit of bad luck, a cleaned up disk could still
show up as an 'LVM2_member' for example.
[0]: https://github.com/systemd/systemd/issues/18525
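For example (device path illustrative):

    udevadm trigger /dev/sdX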
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
which is mostly a copy of the wipe_disks helper with the difference
that it also uses wipefs on the device and its partitions.
Remove the wipe_disks helper as no users remain.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Adds some missing descriptions to the api/man page documentation for
certain options of the `pvenode` command. Some minor language fix-ups
are also included.
Signed-off-by: Dylan Whyte <d.whyte@proxmox.com>
so that they are documented and get displayed by pvesh/pvenode.
all those fields must exist (since they come from the upid), aside
from the exitstatus, so mark that as optional.
a forum user reported that they are missing:
https://forum.proxmox.com/threads/ergebnis-eines-tasks-per-api-abfragen.92267/
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we filtered out devices which belong to the 'Generic System Peripheral'
category, but this can contain actually useful pci devices
users want to pass through, so simply do not filter it by default.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
metadata is gained using a HEAD request.
Due to the ability of this api endpoint to request files on internal
networks (which would not be visible/accessible from outside) it is
restricted to users with permissions `Sys.Audit` and `Sys.Modify` on
`/`. Users with these permissions are able to alter node (network)
config anyway, so this should not create any further security risk.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Reviewed-By: Dominik Csapak <d.csapak@proxmox.com>
And also add the 'backup-info' endpoint to the index.
Suggested-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This had a myriad of issues:
* marked as protected, thus forwarded to the privileged daemon even
if it just returned static information
* did not return a directory index but a "stub" string, which does not
  make sense.
* not named index
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
VM names are returned by the endpoint anyway, therefore it makes sense
to add it to the endpoint specification so it also appears in the API
docs and is visible when using pvesh with text output.
Signed-off-by: Hannes Laimer <h.laimer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Similar to PBS. The 'errors' filter parameter still takes precedence
(overrides this)
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
[ Thomas: adapt to renamed PVE::Tools helper method ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we have lots of information already parsed and cached, use that and
give the frontend more to work with/display.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
a common function to download arbitrary files from urls has been
defined as PVE::Tools::download_file_from_url and is now used.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
Multiple public networks can be defined in the ceph.conf. The networks need to
be routed to each other.
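An illustrative ceph.conf excerpt with example networks:

    [global]
        public_network = 192.0.2.0/24, 2001:db8::/64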
Support handling multiple IPs for a single monitor. By default, one address from
each public network is selected for monitor creation, but, as before, it can be
overwritten with the mon-address parameter, now taking a list of addresses.
On removal, make sure all addresses are removed from the mon_host entry
in the ceph configuration.
Originally-by: Alwin Antreich <a.antreich@proxmox.com>
[handling of multiple addresses]
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
by also comparing the canonical form to decide when to remove an address.
When getting the IP from the rados information, also drop any brackets,
so our existing function can handle it. Add the brackets back within the
remove_addr_from_mon_host function.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
Partially based on pve-storage's CephConfig.pm get_monaddr_list, but the
interface is not the best for the use case here.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
in preparation for supporting multiple addresses. The config section does not
allow more than one public_addr.
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
mostly relevant to prepare support for IPv4/IPv6 dual stack mode as a special
case of the planned support for multiple public networks.
As before, only set the false value when we are dealing with the first address,
but also be explicit about the IPv4 case as the defaults might change in the
future.
Then, when an address of a different type comes along later, set the relevant
bind option to true.
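Presumably these are the ms_bind_ipv4/ms_bind_ipv6 options, e.g. for
dual stack:

    [global]
        ms_bind_ipv4 = true
        ms_bind_ipv6 = true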
Reviewed-by: Dominik Csapak <d.csapak@proxmox.com>
Tested-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The change not to pass the 'upgrade' parameter in the frontend was made in
953f6e9bb3 (the commit doesn't talk about it, it's
likely an accidental squash of two changes)
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
The switch to 'cmd' was made by commit af39a6f09651e15d1c83536e25493a2212efd7d3
in the pve-xtermjs repo and is included in 4.7.0
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
everywhere Pool.Allocate was unnecessarily used, it was replaced
with Pool.Audit.
`/cluster/resources` now returns pool information for guests only if
the requesting user has the Pool.Audit permission on the pool.
`/pool/` now returns only pools where the requesting user has the
Pool.Audit permission.
Signed-off-by: Lorenz Stechauner <l.stechauner@proxmox.com>
on a given node (and storage).
There is no datacenter/storage fallback for the bandwidth limit, so the default
can just be returned as is. While the bandwidth limit is a root-only option when
executing the backup, it still makes sense to return it for all users, so they
can see what's going to be used.
Signed-off-by: Fabian Ebner <f.ebner@proxmox.com>
nautilus 14.2.20 and octopus 15.2.11 fixed a security issue with
reclaiming the global ID auth (CVE-2021-20288). As fixing this issue
means that older clients won't be able to connect anymore, the fix was
done behind a switch, with a HEALTH warning if it was not active
(i.e., if connections from older clients were still allowed).
New installations also have this switch at the insecure level, for
compat reasons, so let's deactivate it ourselves after monitor creation
to avoid the health warning and slightly insecure setup (in default
PVE ceph the whole issue was of rather low impact/risk). But, only do
so when creating the first monitor of a ceph cluster, to avoid
breaking existing setups by accident.
An admin can always switch it back again, e.g., if they're recovering
from some failure and need to set up fresh monitors but still have old
clients.
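The switch can always be flipped manually, e.g.:

    # re-allow with 'true' if old clients still need to connect
    ceph config set mon auth_allow_insecure_global_id_reclaim false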
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Currently the check for used ports for bonds and bridges happens
while rendering '/etc/network/interfaces.new' in PVE::Inotify
(pve-common).
However, at that stage the new/updated interface is already merged
with the old settings, making it impossible to indicate where a NIC
is currently used.
The code is adapted from the renderer in
PVE::Inotify::__write_etc_network_interfaces.
Tested on a virtual PVE instance.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this was from the time when we had a loop here to add two storages,
one for KRBD-only and one for KRBD-never. Nowadays we can handle the
mixed case just fine, but the patch dropping that forgot to clean up
the error handling..
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
In Ceph Octopus the device_health_metrics pool is auto-created with 1
PG. Since Ceph has the ability to split/merge PGs, hitting the wrong PG
count is now less of an issue anyhow.
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
the properties target_size_ratio, target_size_bytes and pg_num_min are
used to fine-tune the pg_autoscaler and are set on a pool. The updated
pool list now shows the autoscale settings & status, including the new
(optimal) target PGs, to make it easier for new users to get/set the
correct amount of PGs.
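For example, the fine-tuning happens per pool (pool name and values
illustrative):

    ceph osd pool set testpool pg_num_min 16
    ceph osd pool set testpool target_size_ratio 0.5
    # alternatively, give an absolute size estimate in bytes:
    ceph osd pool set testpool target_size_bytes 1099511627776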
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>