Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Co-authored-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
If it is set and 0, don't warn.
Signed-off-by: Mira Limbeck <m.limbeck@proxmox.com>
[ TL: adapt subject ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The auto-installer will place an executable file named
`proxmox-first-boot` in the installer runtime-directory if the user set
one up.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
If a subset of disks associated with a pre-existing ZFS pool are
selected for installation, the pool might still be importable
(required for the rename) but will be in a `degraded` state.
Currently, only pools in `online` state are considered for
renaming, leaving a possibly clashing pool named `rpool` behind.
Therefore, a reboot after installation will fail because of the
duplicate names.
To partially fix this behaviour, also rename an `rpool` in `degraded`
state.
Note:
This however does not cover the case where a pool can no longer be
imported because the required number of replicas is not available.
Renaming via `zpool import` is not possible in that case.
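For pools that can still be imported, the rename essentially boils
down to an import under a new name plus export - roughly like this
(flags and the temporary pool name purely illustrative):
  zpool import                        # scan: lists importable pools and their state
  zpool import -f -N rpool rpool-old  # works for ONLINE as well as DEGRADED pools
  zpool export rpool-old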
Partially-fixes: 43591049 ("low-level: install: check for already-existing `rpool` on install")
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
Making the system bootable can take some time if many disks are used
for installation, which could be misinterpreted as a hanging
installer. Add a "please be patient" hint to the output when more than
3 disks are used.
Output changes from `make system bootable` to
`make system bootable (please be patient)`
Signed-off-by: Christian Ebner <c.ebner@proxmox.com>
[ TL: include hint for why user needs to be patient ]
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
`compress` instead of `compress-force` is used, as the latter can have
unintended (performance) implications, as the name implies. That would
neither be expected by users, nor should such a decision be made
without the user explicitly opting for it.
Others do the same, e.g. the installer for RedHat/Fedora systems (aka.
Anaconda) opts for `compress` too.
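For illustration only - with e.g. zstd selected, the root filesystem
entry in /etc/fstab would then carry `compress=zstd` rather than
`compress-force=zstd`, along the lines of (UUID and algorithm purely
illustrative):
  UUID=<fs-uuid> / btrfs defaults,compress=zstd 0 1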
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
A hashed password can be created e.g. using `mkpasswd(1)`.
This allows the auto-installer to pass along an already-hashed
password from the user, instead of simple plaintext.
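For example, a suitable hash can be generated interactively with
(hash method chosen purely for illustration):
  mkpasswd --method=sha-512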
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Tested-by: Theodor Fumics <theodor.fumics@gmx.net>
.. much in the same manner as the detection for LVM works.
zpools can only be renamed by importing them with a new name, so
unfortunately the import-export dance is needed.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
As this is an internal option for the low-level installer anyway, no
real functional changes here.
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
Reviewed-by: Aaron Lauterer <a.lauterer@proxmox.com>
Tested-by: Aaron Lauterer <a.lauterer@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This would effectively pull in grub-efi-amd64, which we skip a line
above this if not in EFI mode.
The builder now always adds this to the packages due to the
proxmox-secure-boot-support meta package being present there, at least
that's my current educated guess (confirmed in practice).
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The $rootdev variable is not set in the ZFS branch, and ZFS is not
mounted here, so just move the progress update inside the non-ZFS
branch.
Reported-by: Fiona Ebner <f.ebner@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we already log that, and printing it to stderr does not provide that
much extra value; it is also not done for similar actions like the
configuration of packages.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
.. thereby also fixing an accidental shell injection.
Since run_cmd{,s}() is not used anywhere else anymore, they can be
removed too.
This also mostly reverts commit
5878dc4ae "auto-installer: handle auto-reboot info messages directly"
in the process.
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
while it doesn't hurt to be installed, it also doesn't help in any fashion on
such systems.
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
The default for the `compression` property in ZFS got changed ~2 years
ago by
56fa4aa96 ("Default to ON for compression") [0]
Support for setting this option originally got introduced into the
installer in 2016 by
c7779156 ("refactor disk setup, add advanced ZFS options") [1]
where the default of 'off' was still correct.
As the installer only set the property if it was *not* explicitly set
to 'on', this actually regressed in the meantime.
Thus just remove the conditional altogether, as the definedness-check
did not have any impact anyway (since $value gets set to 'on'
regardless) and the latter check just causes regressions like this one.
Tested by installing once w/o the patch to confirm the report and once
with the patch applied, checking `zfs get compression` on the freshly
installed system.
[0] 56fa4aa96e
[1] https://git.proxmox.com/?p=pve-installer.git;a=commit;h=c7779156db5c38cf184e143de0cab534bd0a9cb1
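On the installed system the verification boils down to:
  zfs get compression rpool   # should report the value selected in the installer, e.g. 'on'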
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
To avoid a misinterpretation of the auto-return value:
> In the absence of an explicit return, a subroutine, eval, or do FILE
> automatically returns the value of the last expression evaluated.
-- https://perldoc.perl.org/functions/return
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Creating rpool/var/lib/vz and all intermediate datasets causes a
service-failure of `var.mount` upon shutdown.
Creating the dataset for /var/lib/vz directly at the rpool and setting
its mountpoint property seems the most robust way to address this.
The alternative approach of setting `canmount=off` on the `var`
dataset seems a bit dangerous (users setting a zfs property and
suddenly hiding their /var contents).
The only small downside to this approach is that the setting of the
mountpoint happens quite a bit after extracting the data - but this
would probably be better addressed with a refactoring of the
lowlevel-installer code (setting up the zfs-pool under /target and
getting rid of a few special cases).
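In other words, a sketch of what gets created (dataset name
illustrative, the explicitly set mountpoint is the relevant part):
  zfs create -o mountpoint=/var/lib/vz rpool/var-lib-vz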
Fixes: dd19d40cea
Suggested-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Whether a serial console will be needed for grub is determined from
the target commandline - the speed is also read from there. The unit
is derived from the ttyS device - although I'd assume that this might
not always match up.
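For illustration, a `console=ttyS0,115200` parameter on the target
commandline would map to grub settings roughly like (values purely
illustrative):
  GRUB_TERMINAL="console serial"
  GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"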
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
the recent patch to create /var/lib/vz as a dedicated dataset did so
for all our products - but this is only needed/wanted for PVE.
Moved the creation of the root-dataset above the creation of
rpool/data, so that the PVE-specifics can remain in one if block.
Fixes: dd19d40cea
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
journald as a core component tries setting an ACL on the journal files
for (non-root) users and fails on our ZFS installs, resulting in dmesg
being spammed with messages from journald upon each journal rotation
for each user upon their first login.
This is also suggested by OpenZFS in their Debian guide for root on
ZFS:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html
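Concretely, this amounts to setting the property on the root dataset,
roughly (dataset name illustrative):
  zfs set acltype=posixacl rpool/ROOT/<root-dataset>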
Tested by setting this on a machine of mine, where this has been
bugging me for quite a while.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
this enables users to set reservations on / separately from
/var/lib/vz - where backups, ISOs, and other data might fill the
complete pool.
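E.g. a user could then protect the root filesystem with something
like (dataset name and size purely illustrative):
  zfs set reservation=10G rpool/ROOT/<root-dataset>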
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Parameters needed for booting during installation are best preserved
in the target cmdline as well - e.g. if you need a particular cmdline
switch for your system to boot at all, not having to add it manually
to the bootloader config for the first boot of the installed system is
an improvement.
This additionally enables us to drop the dedicated console parameter
handling for serial consoles (it is just one of the parameters to pass
along).
Finally, it fixes the regular expressions for the installer settings
we read from the cmdline (swapsize, maxroot, ...), which were broken
if added as the last entry.
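To illustrate the end-of-cmdline breakage (grep is only used here to
demonstrate the matching behaviour, the installer uses Perl regexes on
the cmdline string; the pattern is simplified):
  echo 'quiet maxroot=8' | grep -Eo 'maxroot=[0-9]+\s'      # no match, option is last
  echo 'quiet maxroot=8' | grep -Eo 'maxroot=[0-9]+(\s|$)'  # matches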
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
The regex matching in Proxmox::Install::Config was blindly copied from
above - so the other parameters are also likely to not get recognized
if they are the last entry on the cmdline.
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
If an installation needs to provide a dedicated console parameter
(e.g. because it runs on the serial console), the target system most
likely will need the parameter too.
This patch adds the parameter to the kernel commandline (in case ZFS
is used, for both grub and systemd-boot).
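For illustration, on a ZFS install the systemd-boot side would then
end up with the parameter in /etc/kernel/cmdline, roughly like (root
dataset and console values purely illustrative):
  root=ZFS=rpool/ROOT/pve-1 boot=zfs console=ttyS0,115200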
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
That's what happens when you do some last-minute variable renaming and
trust that nothing broke ..
Fixes: 42aa2fa ("fix #4829: install: add new ZFS `arc_max` setup option")
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
grub packages in Debian are split between:
* meta-packages, which handle (among other things) reinstalling grub
to the actual device/ESP in case of a version upgrade (grub-pc,
grub-efi-amd64)
* bin-packages, which contain the actual boot-loaders
The bin-packages can coexist on a system, but the meta-packages
conflict with each other (didn't check why, but I don't see a hard
conflict on a quick glance).
Currently our ISO installs grub-pc unconditionally (and both bin
packages, since we install the legacy bootloader also on UEFI-booted
systems). This results in UEFI systems not getting a new grub
installed automatically upon upgrade.
Reported in our community forum by users who upgraded to PVE 8.0 and
still ran into an issue already fixed in grub for bookworm:
https://forum.proxmox.com/threads/.123512/
Reproduced and analyzed by Friedrich.
This patch changes the installer to install the meta-package fitting
the boot mode.
We do not set the debconf variable install_devices, because the
'install_devices' variable is only defined for 'grub-pc', and thus
(still) only set for that package/namespace.
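A minimal sketch of the selection logic (shell used only for
illustration, the installer implements this in Perl):
  if [ -d /sys/firmware/efi ]; then
      grub_meta='grub-efi-amd64'   # UEFI boot
  else
      grub_meta='grub-pc'          # legacy BIOS boot
  fi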
Reported-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
Reviewed-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
This is already checked for LVM and ZFS setups, but not for Btrfs. Add
it there too, as it doesn't work anyway.
Tested by creating a block device with a 4K sector size using
the following QEMU args:
-device virtio-blk,drive=testdrive4k,logical_block_size=4096,physical_block_size=4096
-drive file=/path/to/4k-testdisk.img,if=none,id=testdrive4k
The 4k-testdisk.img was created with:
qemu-img create -f qcow2 /path/to/4k-testdisk.img 16G
Signed-off-by: Christoph Heiss <c.heiss@proxmox.com>
> On-Access [...] leverages a kernel api called fanotify to block
> processes from attempting to access malicious files. This
> prevention occurs in kernel-space, and thus offers stronger
> protection than a purely user-space solution.
This is not really useful for the PMG use case and requires user
configuration, as otherwise it refuses to start. In fact, it is the
sole unit marked as failed after a fresh installation:
> ERROR: Clamonacc: at least one of OnAccessExcludeUID,
> OnAccessExcludeUname, or OnAccessExcludeRootUID must be specified
> it is recommended you exclude the clamd instance UID or uname to
> prevent infinite event scanning loops.
So disable it by default; if a user really wants this, whyever that
would be, they can just configure it and enable it again via
systemctl.
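Re-enabling it later then boils down to something like (assuming the
on-access unit keeps its Debian name, clamav-clamonacc.service):
  systemctl enable --now clamav-clamonacc.service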
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The convoluted calculation logic in case the disk is 8GB leads to
datasize becoming 16EiB further down:
* after calculating and removing the rootsize from $rest, $rest becomes
smaller than $space (which should be the minimal non-used space in the
volume-group) - this leads to a negative value, which overflows in
the `& ~0xFFF` operation.
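To illustrate the wrap-around (shell arithmetic used purely for
illustration, the installer does this calculation in Perl):
  printf '%x\n' $(( -1 & ~0xFFF ))   # fffffffffffff000, i.e. just shy of 16 EiB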
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
While this was already done in the $rest < 48 GiB cases, it wasn't yet
done for the else branch, and also not if $maxroot_mb was assigned
because it was smaller.
Second and last step towards fixing an issue reported in the community
forum [0] where using 250.00 hdsize, 250 maxroot and 0 minfree would
fail.
Turns out two extents would be missing because of lvcreate implicitly
rounding up, one of them for the root LV (the one for metadata was
already handled in the previous commit).
[0]: https://forum.proxmox.com/threads/129320/post-566375
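For illustration of the implicit rounding (volume group name and size
purely illustrative, assuming the default 4 MiB extent size):
  lvcreate -n test -L 10M vg0   # gets rounded up to 12 MiB, i.e. 3 extents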
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
First step towards fixing an issue reported in the community forum [0]
where using 250.00 hdsize, 250 maxroot and 0 minfree would fail.
Turns out two extents would be missing because of lvcreate implicitly
rounding up, one of them for the metadata.
[0]: https://forum.proxmox.com/threads/129320/post-566375
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
When the prompt abstraction got added in bc05a8f ("add basic UI
plugin infrastructure") it also inlined the check for the answer, as
that can be structured differently for each user interface, and
returns a bool. But when switching over to this new infra, two sites
weren't updated to the simpler bool check and still checked with the
previous "equals 'ok'" comparison, which now was always false.
Fixes: 72bea99 ("switch prompt, error and message calls to new UI infra")
Reported-by: Alexander Zeidler <a.zeidler@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>