This was for vxlan interfaces and fixed in ifupdown2 with my last patches.
simply reload the network, and if we still have errors, we can use ifquery to check them later
Signed-off-by: Alexandre Derumier <aderumier@odiso.com>
Add classes for suspended, suspending and migration.
They use the same symbol as in the buttons for consistency; the size is a
bit smaller to fit the tree better.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This was already proposed by Dominik[0], but a faster backend backing
for it was wished for[1], and as with most wishes one needs to either
be content with what's there or (try to) improve it oneself. So with
the IPCC approach proposed as backing for this, I'd like to add it
again. It differs from [0] a bit: first, it's rebased, as parts of the
tooltip work were already applied[2].
I use "Config locked (<LOCK>)" as text for this, as it
1. Clarifies what the lock symbol means, which is always a good thing
for tooltips
2. repeating the lock symbol here again would show the users three
lock symbols at the same time if the VM was selected in the tree
(the tree one, the VM config panel one, and this tool tip one)
this is a bit much, so don't do it.
[0]: https://pve.proxmox.com/pipermail/pve-devel/2019-February/035829.html
[1]: https://pve.proxmox.com/pipermail/pve-devel/2019-March/035930.html
[2]: https://pve.proxmox.com/pipermail/pve-devel/2019-March/036165.html
Co-developed-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
using the new get_guest_config_property helper from pve-cluster,
which allows us to get this info with relatively low overhead.
With a somewhat realistic setup of 303 guest configurations here, my
API call timing goes from ~24-26 ms without this patch to 26-28 ms
with it applied, which seems reasonable.
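A minimal usage sketch, assuming the helper takes the property name and
returns a per-vmid map (the exact signature and return shape are
assumptions, not taken from the pve-cluster patch):

    use PVE::Cluster;

    # fetch one property (e.g. 'lock') for all guest configs in one go
    my $locks = PVE::Cluster::get_guest_config_property('lock');
    foreach my $vmid (sort keys %$locks) {
        # e.g. mark the guest as locked in the resource list
        print "guest $vmid is locked\n";
    }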
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
make the message a bit more informative (with help from Thomas), namely
mentioning the ability to change/increase the limit.
Signed-off-by: Oguz Bektas <o.bektas@proxmox.com>
On some occasions, e.g. license checking, the manufacturer string in the
SMBIOS settings edit has to allow characters such as whitespace.
https://forum.proxmox.com/threads/proxmox-and-windows-rok-license-for-dell.53236/
In principle, SMBIOS allows passing any zero-terminated string to the
corresponding fields in structure type 1 (System Information).
By base64 encoding the values, clashes with the config format are avoided.
This relies on the corresponding patch to qemu-server for parameter
verification and correct parsing.
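As a rough illustration of the encoding (the exact config key and how
the base64 flag is recorded are assumptions, not the final format):

    use MIME::Base64 qw(encode_base64 decode_base64);

    my $manufacturer = 'Dell Inc.';              # raw user input with whitespace
    my $enc = encode_base64($manufacturer,etc '');  # '' -> no trailing newline
    # stored in the config as e.g. "manufacturer=RGVsbCBJbmMu" plus a base64
    # marker, so whitespace, ',' and '=' cannot break the key=value syntax
    my $raw = decode_base64($enc);               # decoded again when building the QEMU command line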
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
this parameter changed sometime between luminous and nautilus
note that with this change, it is not possible to delete pools in
luminous anymore
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
ceph luminous does not use the 'name' property in the metadata
everywhere, so fall back to 'id'
this makes the ceph dashboard usable while still running luminous
(relevant for upgrading)
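The fallback itself is just a defined-or on the metadata entry; an
illustrative sketch, not the actual code:

    # prefer the 'name' property, fall back to 'id' on luminous
    my $svcname = $metadata->{name} // $metadata->{id};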
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This allows to select the tri-state (enforce on, enforce off, default
from QEMU+CPU Model) for each CPU flag independently.
For this, a grid with a widgetcolumn is used, hosting three radio
buttons for each state. They're marked '+' for enforce on, '-' for
enforce off, and the default has no label, as it isn't easy to add one
in such a way that it does not confuse people and does not look
completely ugly. But, to help people who have a hard time figuring
out what the states mean, a fake column was added showing the currently
selected state's outcome in words.
To showcase the nice new interface, add all currently supported
flags from our API.
It could be worthwhile to add some awareness of the selected CPU model,
so that flags are only enabled if they make sense with the selected
model. But one should be able to add this relatively easily with this
as a base.
The hardcoded flag list is not ideal; we should try to generate it
in the future, but qemu-server is already lacking here, and this is
rather independent of that and can be done later just fine
too.
Note that this /is/ an *advanced* feature, so it is not directly
visible to all. While I try to briefly document what a flag does, it
surely isn't perfect and too short to explain all nuances, but it
should give enough pointers to know if a flag is relevant at all
(AMD / Intel CPU) and what one should research.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
(cherry picked from commit 92572ead6d)
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The aim of this patch is to reorder/rework the code of the api call
so that it becomes more readable.
It adds comments on what/why something is done and removes
code duplication between the db/wal checks/creation.
There are two changes in behaviour:
* when a device is given more than once via the api,
the user gets a parameter exception for the db or wal
with the information that the explicitly defined devices must be
different (see the sketch after this list)
* we check the usage for db/wal before the worker, so that the user
gets instant feedback if a device is already in use
(this is more for api users than for gui users, since we also do
those checks there)
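A rough sketch of the first change, assuming the usual raise_param_exc
helper and illustrative parameter names (the actual wording and
variables differ):

    use PVE::Exception qw(raise_param_exc);

    # the data, db and wal devices must all be distinct when given explicitly
    if (defined($param->{db_dev}) && $param->{db_dev} eq $param->{dev}) {
        raise_param_exc({ db_dev => "explicitly defined devices must be different" });
    }
    if (defined($param->{wal_dev}) && $param->{wal_dev} eq $param->{dev}) {
        raise_param_exc({ wal_dev => "explicitly defined devices must be different" });
    }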
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
In this component, reload is defined as a local variable in
initComponent, not on the component itself.
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
since the size of an LV can only be a multiple of 512b, we round
down to the next kib
we then have to multiply it by 1024 for the partition, since
append_partition expects bytes and not kib
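In other words, a minimal sketch of the arithmetic (variable names are
illustrative, not the actual code):

    my $size_kib  = int($size_bytes / 1024);   # round down to a full KiB for the LV
    my $part_size = $size_kib * 1024;          # append_partition() expects bytes again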
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this fixes the dashboard when one views it on a node
alternatively, we can add a node specific metadata api call
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
similar to the MDS api, so that DELETE and POST calls can operate on
the same path. This does not change the CLI pveceph interface.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this is an abstraction for listing Ceph Services with a few improvements:
* start/stop/restart buttons for all services (incl. icons)
* a syslog button to view the syslog of that service
* correct reloading behaviour when creating/destroying
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
As in a situation where we /had/ a manager but destroyed it, this
key's value is an empty string, and if we pass that to the WebUI we
get strange results in the form of a ghost MGR entry with an ExtJS
auto-generated ID as its name -> pretty confusing.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
no point in first building a list if we can just remove it directly
afterwards, it's eval-ed anyway and $osd_list did not get touched
in-between.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
with this, osd destruction is left to ceph-volume if the osd was created
with ceph-volume, else our old code remains mostly the same since
we want to be able to destroy upgraded osds
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
this completely rewrites the ceph osd creation api call using ceph-volume
since ceph-disk is not available anymore
breaking changes:
no filestore anymore, journal_dev -> db_dev
it is now possible to give a specific size for db/wal; the default
is to read it from the ceph config/db, and the fallback is
10% of the osd size for block.db and 1% for block.wal (see the
sketch below). The reason is that ceph-volume does not create those
automatically (like ceph-disk did), so you have to create them yourself
if the db/wal device has an lvm with the naming scheme 'ceph-UUID' on it,
we use that and create a new lv
if we detect partitions, we create a new partition at the end
if the disk is not used at all, we create a pv/vg/lv for it
it is not possible to create osds on luminous with this api call anymore;
anyone needing this has to use ceph-disk directly
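A sketch of the db/wal size fallback mentioned above (variable names
are illustrative, not the actual code):

    # order: explicit API parameter -> value from the ceph config/db -> percentage of the data disk
    my $osd_size = $disk->{size};                # data device size in bytes
    my $db_size  = $param->{db_size}  // $ceph_db_size  // int($osd_size * 0.10);  # 10% for block.db
    my $wal_size = $param->{wal_size} // $ceph_wal_size // int($osd_size * 0.01);  # 1% for block.wal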
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
This way they can cleanly exit (with a SIGTERM or systemctl stop), but
will get restarted if they are killed or abort, though only for the
StartLimitIntervalSec= time period and with a maximal total of
StartLimitBurst= retries.
See `man systemd.unit` for details; the defaults are a 10s total
restart retry period with at most 5 retries in total, after which the
unit will be placed in the failure state. So this is a best-effort
approach with no real downside.
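The relevant unit directives look roughly like this (the exact restart
condition and whether the limits are set explicitly are assumptions;
the values shown are just the systemd defaults mentioned above):

    [Unit]
    StartLimitIntervalSec=10s
    StartLimitBurst=5

    [Service]
    Restart=on-abnormal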
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
as already announced over two months ago[0], remove the unofficial
SheepDog plugin completely now. Besides the fact that it was never fully
supported in Proxmox VE, one of its main developers and ex-maintainer
declared it abandoned[1], so let's just remove it; git
allows resurrecting it any time should a miracle happen anyway.
[0]: https://pve.proxmox.com/pipermail/pve-user/2019-March/170497.html
[1]: http://lists.wpkg.org/pipermail/sheepdog/2019-March/068449.html
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>