check_connection is done by querying the exports of the NFS server
in question. With NFSv4 those exports are not listed anymore, since
NFSv4 employs a pseudo-filesystem starting from root (/).
rpcinfo allows querying for the existence of an NFSv4 service.
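A minimal sketch of such a probe, assuming PVE::Tools::run_command and
the rpcinfo binary shipped by rpcbind ($server stands for the configured
server address):

    use PVE::Tools qw(run_command);

    # ask the server's RPC registry whether an NFS version 4 service is
    # registered; rpcinfo exits non-zero (and run_command dies) if not
    my $cmd = ['/usr/sbin/rpcinfo', '-T', 'tcp', $server, 'nfs', '4'];
    eval { run_command($cmd, timeout => 3, quiet => 1) };
    my $nfs4_available = $@ ? 0 : 1;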
Signed-off-by: Alwin Antreich <a.antreich@proxmox.com>
reuse the implementation from DirPlugin by directing the call to it,
but with the actual $class. This should stay stable, as we provide an
ABI and try to always call helpers through $class.
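As a sketch, the delegation looks roughly like this (update_volume_notes
is just an illustrative example of such a method):

    # delegate to the DirPlugin implementation, but pass the actual
    # $class along, so any helpers a subclass overrides are still used
    sub update_volume_notes {
        my $class = shift;
        return PVE::Storage::DirPlugin::update_volume_notes($class, @_);
    }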
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we already have the ZFS pool plugin as precedent for using 10s, and
for a network with remote off-site storage one can get 200 - 300 ms of
RTT latency, which means that for a protocol needing multiple rounds of
communication one can easily get over 2s even though the network is not
broken (e.g., ten round trips at 250 ms RTT already amount to 2.5 s).
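As a sketch, assuming the check runs its command through
PVE::Tools::run_command, the bump is just a bigger timeout value:

    # allow up to 10 seconds, following the ZFS pool plugin precedent
    run_command($cmd, timeout => 10, quiet => 1);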
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Takes an operation, an optional requested bandwidth
limit override, and a list of storages involved in the
operation, and clamps the requested bandwidth against the
global and storage-specific limits unless the user has the
permissions to change those (a usage sketch follows the
list below).
This means:
* Global limits apply to all users without Sys.Modify on /
  (users who have Sys.Modify can change the datacenter.cfg
  options via the API anyway).
* Storage-specific limits apply to users without
  Datastore.Allocate access on /storage/X for any involved
  storage X.
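A hedged usage sketch, assuming the helper ends up exposed as
PVE::Storage::get_bandwidth_limit with roughly this shape (names and
argument order are illustrative):

    # clamp a user-requested limit (in KiB/s) against the configured
    # limits of every storage involved in a migration
    my $kbps = PVE::Storage::get_bandwidth_limit(
        'migration',                # the operation category
        [$source_sid, $target_sid], # storages involved in the operation
        $requested_kbps,            # optional user-requested override
    );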
Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com>
So far this only prevented the creation of the top-level
directory. That does not cover all problem cases,
particularly when said directory is supposed to be a mount
point, which includes NFS and glusterfs besides ZFS.
The directory-based storages we have already use mkpath
whenever they need to create files, and for actions on files
which are supposed to exist it is fine if they error out.
So it should also be safe to skip the creation of the
standard subdirectories in activate_storage().
Additionally, NFS and glusterfs storages should also accept
the mkdir option, as they may otherwise exhibit similar
issues, e.g. when an NFS storage is mounted onto a directory
inside a ZFS subvolume.
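A minimal sketch of honoring the mkdir option during activation,
assuming the usual ($class, $storeid, $scfg, $cache) plugin signature:

    use File::Path qw(mkpath);

    sub activate_storage {
        my ($class, $storeid, $scfg, $cache) = @_;

        # only create the top-level directory if 'mkdir' was not
        # explicitly disabled; a mount point should already exist
        my $path = $scfg->{path};
        mkpath($path) if !(defined($scfg->{mkdir}) && !$scfg->{mkdir});

        # standard subdirectories are no longer pre-created here; the
        # directory-based code paths mkpath them on demand anyway
    }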
The parse_proc_mounts change made the glusterfs is_mounted
check fail (causing the storage to be shown as inactive in
the GUI). The NFS check also became stricter (no longer
allowing a trailing / in the source).
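For illustration only, a tolerant source comparison that ignores a
trailing slash (assuming parse_proc_mounts yields entries of the form
[$source, $mountpoint, $fstype, ...]):

    my $expected = "$server:$export";
    my $is_mounted = grep {
        my ($src, $dir) = @$_;
        $src =~ s!/+$!!;  # tolerate "server:/export/" in /proc/mounts
        $src eq $expected && $dir eq $mountpoint;
    } @$mountdata;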
rpcinfo from rpcbind-0.2.1 in Debian doesn't support IPv6 addresses.
At the same time, the command used only actually tests for
portmapper/rpcbind availability, not for NFS directly.
Storage::scan_nfs uses /sbin/showmount to get the list of NFS exports
from a server and happily accepts IPv6 addresses. It is also more
specific to NFS.
Replacing the rpcinfo call with showmount here means checking
explicitly for NFS and supporting IPv6 without the need for an updated
rpcbind package.
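A sketch of the showmount-based probe (--no-headers and --exports are
real showmount options; the surrounding code is illustrative):

    # list the server's exports; this only succeeds when an actual NFS
    # server answers, and it works with IPv6 addresses as well
    my $cmd = ['/sbin/showmount', '--no-headers', '--exports', $server];
    eval { run_command($cmd, timeout => 10, quiet => 1) };
    my $ok = $@ ? 0 : 1;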
NFS needs brackets around IPv6 addresses.
Also: nfs_is_mounted needs to quote the interpolated variables. This
becomes apparent with IPv6 addresses, as the bracketed address would
otherwise be treated as a regex character class, causing the check to
always fail.
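A sketch of both fixes (the match pattern is illustrative of the
quoting, not the exact plugin regex):

    # the NFS mount source needs brackets around raw IPv6 addresses;
    # hostnames cannot contain a colon, so a colon implies IPv6
    $server = "[$server]" if $server =~ /:/ && $server !~ /^\[.*\]$/;
    my $source = "$server:$export";

    # \Q...\E quotes regex metacharacters, so the brackets of an IPv6
    # source match literally instead of forming a character class
    my $is_mounted = $mountdata =~ m|^\Q$source\E/?\s\Q$mountpoint\E\snfs|m;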