Commit Graph

292 Commits

Linus Torvalds
44a8c96edd This update includes the following changes:

Merge tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "API:
   - Allow hash drivers without fallbacks (e.g., hardware key)

  Algorithms:
   - Add hmac hardware key support (phmac) on s390
   - Re-enable sha384 in FIPS mode
   - Disable sha1 in FIPS mode
   - Convert zstd to acomp

  Drivers:
   - Lower priority of qat skcipher and aead
   - Convert aspeed to partial block API
   - Add iMX8QXP support in caam
   - Add rate limiting support for GEN6 devices in qat
   - Enable telemetry for GEN6 devices in qat
   - Implement full backlog mode for hisilicon/sec2"

* tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
  crypto: keembay - Use min() to simplify ocs_create_linked_list_from_sg()
  crypto: hisilicon/hpre - fix dma unmap sequence
  crypto: qat - make adf_dev_autoreset() static
  crypto: ccp - reduce stack usage in ccp_run_aes_gcm_cmd
  crypto: qat - refactor ring-related debug functions
  crypto: qat - fix seq_file position update in adf_ring_next()
  crypto: qat - fix DMA direction for compression on GEN2 devices
  crypto: jitter - replace ARRAY_SIZE definition with header include
  crypto: engine - remove {prepare,unprepare}_crypt_hardware callbacks
  crypto: engine - remove request batching support
  crypto: qat - flush misc workqueue during device shutdown
  crypto: qat - enable rate limiting feature for GEN6 devices
  crypto: qat - add compression slice count for rate limiting
  crypto: qat - add get_svc_slice_cnt() in device data structure
  crypto: qat - add adf_rl_get_num_svc_aes() in rate limiting
  crypto: qat - relocate service related functions
  crypto: qat - consolidate service enums
  crypto: qat - add decompression service for rate limiting
  crypto: qat - validate service in rate limiting sysfs api
  crypto: hisilicon/sec2 - implement full backlog mode for sec
  ...
2025-07-31 09:45:28 -07:00
Linus Torvalds
bc46b7cbc5 s390 updates for 6.17 merge window

Merge tag 's390-6.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Alexander Gordeev:

 - Standardize on the __ASSEMBLER__ macro that is provided by GCC and
   Clang compilers and replace __ASSEMBLY__ with __ASSEMBLER__ in both
   uapi and non-uapi headers

 - Explicitly include <linux/export.h> in architecture and driver files
   which contain an EXPORT_SYMBOL() and remove the include from the
   files which do not contain the EXPORT_SYMBOL()

 - Use the full title of "z/Architecture Principles of Operation" manual
   and the name of a section where facility bits are listed

 - Use -D__DISABLE_EXPORTS for files in arch/s390/boot to avoid
   unnecessary slowing down of the build and confusing external kABI
   tools that process symtypes data

 - Print additional unrecoverable machine check information to make the
   root cause analysis easier

 - Move cmpxchg_user_key() handling to uaccess library code, since the
   generated code is large anyway and there is no benefit if it is
   inlined

 - Fix a problem when cmpxchg_user_key() is executing code with a
   non-default key: if a system is IPL-ed with "LOAD NORMAL", and the
   previous system used storage keys where the fetch-protection bit was
   set for some pages, and the cmpxchg_user_key() is located within such
   a page, a protection exception happens

 - Either the external call or emergency signal order is used to send an
   IPI to a remote CPU. Use the external call order only, since it is at
   least as good as, and sometimes even better than, the emergency signal

 - In case of an early crash the early program check handler prints a
   more or less random value for the last breaking event address, since
   it is not initialized properly. Copy the last breaking event address
   from the lowcore to pt_regs to address this

 - During the STP synchronization check udelay() cannot be used, since
   the first CPU modifies tod_clock_base and get_tod_clock_monotonic()
   might return a non-monotonic time. Instead, busy-loop on other CPUs
   while the first CPU actually handles the synchronization operation

 - When debugging the early kernel boot using QEMU with the -S flag and
   GDB attached, skip the decompressor and start directly in the kernel

 - Rename PAI Crypto event 4210 according to z16 and z17 "z/Architecture
   Principles of Operation" manual

 - Remove the in-kernel time steering support in favour of the new s390
   PTP driver, which allows the kernel clock to be steered more precisely

 - Remove a possible false-positive warning in pte_free_defer(), which
   could be triggered in a valid case while a KVM guest process is
   initializing

* tag 's390-6.17-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (29 commits)
  s390/mm: Remove possible false-positive warning in pte_free_defer()
  s390/stp: Default to enabled
  s390/stp: Remove leap second support
  s390/time: Remove in-kernel time steering
  s390/sclp: Use monotonic clock in sclp_sync_wait()
  s390/smp: Use monotonic clock in smp_emergency_stop()
  s390/time: Use monotonic clock in get_cycles()
  s390/pai_crypto: Rename PAI Crypto event 4210
  scripts/gdb/symbols: make lx-symbols skip the s390 decompressor
  s390/boot: Introduce jump_to_kernel() function
  s390/stp: Remove udelay from stp_sync_clock()
  s390/early: Copy last breaking event address to pt_regs
  s390/smp: Remove conditional emergency signal order code usage
  s390/uaccess: Merge cmpxchg_user_key() inline assemblies
  s390/uaccess: Prevent kprobes on cmpxchg_user_key() functions
  s390/uaccess: Initialize code pages executed with non-default access key
  s390/skey: Provide infrastructure for executing with non-default access key
  s390/uaccess: Make cmpxchg_user_key() library code
  s390/page: Add memory clobber to page_set_storage_key()
  s390/page: Cleanup page_set_storage_key() inline assemblies
  ...
2025-07-29 20:17:08 -07:00
Ovidiu Panait
c470ffa6f4 crypto: engine - remove request batching support
Remove request batching support from crypto_engine, as there are no
drivers using this feature and it doesn't really work that well.

Instead of doing batching based on backlog, a more optimal approach
would be for the user to handle the batching (similar to how IPsec
can hook into GSO to get 64K of data each time or how block encryption
can use unit sizes much greater than 4K).

Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ovidiu Panait <ovidiu.panait.oss@gmail.com>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-07-18 20:52:00 +10:00
Eric Biggers
377982d561 lib/crypto: s390/sha1: Migrate optimized code into library
Instead of exposing the s390-optimized SHA-1 code via s390-specific
crypto_shash algorithms, just implement the sha1_blocks() library
function.  This is much simpler, it makes the SHA-1 library
functions be s390-optimized, and it fixes the longstanding issue where
the s390-optimized SHA-1 code was disabled by default.  SHA-1 still
remains available through crypto_shash, but individual architectures no
longer need to handle it.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-07-14 11:11:49 -07:00
Eric Biggers
b7b366087e lib/crypto: s390/sha512: Migrate optimized SHA-512 code to library
Instead of exposing the s390-optimized SHA-512 code via s390-specific
crypto_shash algorithms, just implement the sha512_blocks() library
function.  This is much simpler, it makes the SHA-512 (and
SHA-384) library functions be s390-optimized, and it fixes the
longstanding issue where the s390-optimized SHA-512 code was disabled by
default.  SHA-512 still remains available through crypto_shash, but
individual architectures no longer need to handle it.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30 09:26:20 -07:00
Eric Biggers
e0fca17755 crypto: sha512 - Rename conflicting symbols
Rename existing functions and structs in architecture-optimized SHA-512
code that had names conflicting with the upcoming library interface
which will be added to <crypto/sha2.h>: sha384_init, sha512_init,
sha512_update, sha384, and sha512.

Note: all affected code will be superseded by later commits that migrate
the arch-optimized SHA-512 code into the library.  This commit simply
keeps the kernel building for the initial introduction of the library.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
2025-06-30 09:26:19 -07:00
Harald Freudenberger
d48b2f5e82 crypto: s390 - Add selftest support for phmac
Add key preparation code to the phmac setkey function for the case
that the selftests are running:

As long as crypto_ahash_tested() returns false, all setkey()
invocations are assumed to carry plain hmac clear key values and thus
need some preparation to work with the phmac implementation. This
makes it possible to use the hmac test vectors already available in
the testmanager to test the phmac code.

When crypto_ahash_tested() returns true (that is after larval state)
the phmac code assumes the key material is a blob digestible by the
pkey kernel module which converts the blob into a working key for the
phmac code.
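
A minimal sketch of this gating, with hypothetical helper names and
assuming crypto_ahash_tested() takes the tfm (the real setkey code
differs):

  static int phmac_setkey(struct crypto_ahash *tfm,
                          const u8 *key, unsigned int keylen)
  {
          /* Self-test phase: the key is a plain hmac clear key and must
           * first be wrapped into a protected-key blob. */
          if (!crypto_ahash_tested(tfm))
                  return phmac_setkey_from_clearkey(tfm, key, keylen);

          /* After the larval state: the key is already a blob that the
           * pkey module converts into a working protected key. */
          return phmac_setkey_from_blob(tfm, key, keylen);
  }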

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-26 18:52:33 +08:00
Harald Freudenberger
cbbc675506 crypto: s390 - New s390 specific protected key hash phmac
Add support for protected key hmac ("phmac") for s390 arch.

With the latest machine generation, the s390-specific CPACF
instruction kmac now supports hmac with a protected key (that is,
a key wrapped by a master key stored in firmware) for sha2 (sha224,
sha256, sha384 and sha512).

This patch adds support via 4 new ahashes registered as
phmac(sha224), phmac(sha256), phmac(sha384) and phmac(sha512).

Co-developed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-26 18:52:33 +08:00
Heiko Carstens
65c9a9f925 s390: Explicitly include <linux/export.h>
Explicitly include <linux/export.h> in files which contain an
EXPORT_SYMBOL().

See commit a934a57a42 ("scripts/misc-check: check missing #include
<linux/export.h> when W=1") for more details.

Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
2025-06-17 18:18:02 +02:00
Herbert Xu
73c2437109 crypto: s390/sha3 - Use cpu byte-order when exporting
The sha3 partial hash on s390 is in little-endian just like the
final hash.  However the generic implementation produces native
or big-endian partial hashes.

Make s390 sha3 conform to that by doing the byte-swap on export
and import.
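
A minimal sketch of the export-side conversion (illustrative only, not
the actual s390 code):

  /* Swap each 64-bit state word so the exported partial hash matches
   * the generic CPU byte-order layout. */
  static void sha3_state_to_cpu(u64 *dst, const __le64 *src,
                                unsigned int words)
  {
          unsigned int i;

          for (i = 0; i < words; i++)
                  dst[i] = le64_to_cpu(src[i]);
  }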

Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Fixes: 6f90ba7065 ("crypto: s390/sha3 - Use API partial block handling")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-13 17:26:16 +08:00
Herbert Xu
1b39bc4a70 crypto: s390/hmac - Fix counter in export state
The hmac export state needs to be one block-size bigger to account
for the ipad.

Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Fixes: 08811169ac ("crypto: s390/hmac - Use API partial block handling")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-06-13 17:26:16 +08:00
Linus Torvalds
d8cb068359 s390 updates for 6.16 merge window

Merge tag 's390-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:

 - Large rework of the protected key crypto code to allow for
   asynchronous handling without memory allocation

 - Speed up system call entry/exit path by re-implementing lazy ASCE
   handling

 - Add module autoload support for the diag288_wdt watchdog device
   driver

 - Get rid of s390 specific strcpy() and strncpy() implementations, and
   switch all remaining users to strscpy() when possible

 - Various other small fixes and improvements

* tag 's390-6.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (51 commits)
  s390/pci: Serialize device addition and removal
  s390/pci: Allow re-add of a reserved but not yet removed device
  s390/pci: Prevent self deletion in disable_slot()
  s390/pci: Remove redundant bus removal and disable from zpci_release_device()
  s390/crypto: Extend protected key conversion retry loop
  s390/pci: Fix __pcilg_mio_inuser() inline assembly
  s390/ptrace: Always inline regs_get_kernel_stack_nth() and regs_get_register()
  s390/thread_info: Cleanup header includes
  s390/extmem: Add workaround for DCSS unload diag
  s390/crypto: Rework protected key AES for true asynch support
  s390/cpacf: Rework cpacf_pcc() to return condition code
  s390/mm: Fix potential use-after-free in __crst_table_upgrade()
  s390/mm: Add mmap_assert_write_locked() check to crst_table_upgrade()
  s390/string: Remove strcpy() implementation
  s390/con3270: Use strscpy() instead of strcpy()
  s390/boot: Use strspcy() instead of strcpy()
  s390: Simple strcpy() to strscpy() conversions
  s390/pkey/crypto: Introduce xflags param for pkey in-kernel API
  s390/pkey: Provide and pass xflags within pkey and zcrypt layers
  s390/uv: Remove uv_get_secret_metadata function
  ...
2025-05-26 14:36:05 -07:00
Harald Freudenberger
b5185ea1a6 s390/crypto: Extend protected key conversion retry loop
CI runs show that the protected key conversion retry loop
runs into a timeout if a master key change was initiated on
the addressed crypto resource shortly before the conversion
request.

This patch extends the retry logic to run in total 5 attempts
with increasing delay (200, 400, 800 and 1600 ms) in case of
a busy card.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2025-05-21 12:02:27 +02:00
Harald Freudenberger
6cd87cb5ef s390/crypto: Rework protected key AES for true asynch support
This is a complete rework of the protected key AES (PAES) implementation.
The goal of this rework is to implement the 4 modes (ecb, cbc, ctr, xts)
in a real asynchronous fashion:
- init(), exit() and setkey() are synchronous and don't allocate any
  memory.
- the encrypt/decrypt functions first try to do the job in a synchronous
  manner. If this fails, for example because the protected key has become
  invalid due to a guest suspend/resume or guest migration action, the
  encrypt/decrypt is transferred to an instance of the crypto engine (see
  below) for asynchronous processing; a sketch of this pattern follows
  the list.
  These postponed requests are then handled by the crypto engine by
  invoking the do_one_request() callback, but may of course again run
  into a key that is still not converted or has become invalid. If the
  key is still not converted, the first thread does the conversion and
  updates the key status in the transformation context. The conversion
  is invoked via the pkey API with a new flag PKEY_XFLAG_NOMEMALLOC.
  Note that once an active request is enqueued for asynchronous
  processing via the crypto engine, further requests also need to go
  via the crypto engine to preserve the request order.
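
A minimal sketch of this synchronous-first pattern; the context, helper
names and error convention are hypothetical, only the crypto engine
transfer call is the real engine API:

  static int paes_encrypt(struct skcipher_request *req)
  {
          struct paes_ctx *ctx =
                  crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));
          int rc;

          rc = paes_do_crypt_sync(req);   /* hypothetical synchronous attempt */
          if (rc != -EKEYEXPIRED)
                  return rc;              /* success, or a real error */

          /* Protected key became invalid (e.g. after guest migration):
           * hand the request to the crypto engine for async retry. */
          return crypto_transfer_skcipher_request_to_engine(ctx->engine, req);
  }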

This patch, together with the pkey/zcrypt/AP extensions to support
the new PKEY_XFLAG_NOMEMALLOC flag, should toughen the paes crypto algorithms
to truly meet the requirements for in-kernel skcipher implementations
and the usage patterns for the dm-crypt and dm-integrity layers.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/20250514090955.72370-3-freude@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2025-05-17 10:49:21 +02:00
Herbert Xu
64745a9ca8 crypto: s390/sha512 - Initialise upper counter to zero for sha384
Initialise the high bit counter to zero in sha384_init.

Also change the state initialisation to use ctx->sha512.state
instead of ctx->state for consistency.

Fixes: 572b5c4682 ("crypto: s390/sha512 - Use API partial block handling")
Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Reported-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 20:58:14 +08:00
Herbert Xu
08811169ac crypto: s390/hmac - Use API partial block handling
Use the Crypto API partial block handling.

Also switch to the generic export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 18:20:46 +08:00
Herbert Xu
89490e6b80 crypto: s390/hmac - Extend hash length counters to 128 bits
As sha512 requires 128-bit counters, extend the hash length counters
to that length.  Previously they were just 32 bits, which meant that
a sha256 hash over more than 4 GiB of data would be incorrect.
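
A minimal sketch of such a split counter (generic C, not the actual
s390 code):

  struct hash_count128 {
          u64 lo;         /* low 64 bits of the message byte count */
          u64 hi;         /* incremented once lo wraps around */
  };

  static inline void count128_add(struct hash_count128 *c, u64 bytes)
  {
          c->lo += bytes;
          if (c->lo < bytes)      /* unsigned wrap-around detected */
                  c->hi++;
  }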

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 18:20:44 +08:00
Eric Biggers
b9eac03edc crypto: s390/sha256 - implement library instead of shash
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library.  This is much
simpler, it makes the SHA-256 library functions be arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default.  SHA-256 still remains available through
crypto_shash, but individual architectures no longer need to handle it.
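
A minimal usage sketch of the resulting one-shot library helper,
assuming the sha256() prototype from <crypto/sha2.h>:

  #include <crypto/sha2.h>

  static void digest_buf(const u8 *data, unsigned int len,
                         u8 out[SHA256_DIGEST_SIZE])
  {
          /* Dispatches to the arch-optimized code where available. */
          sha256(data, len, out);
  }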

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-05-05 18:20:43 +08:00
Harald Freudenberger
f688429549 s390/pkey/crypto: Introduce xflags param for pkey in-kernel API
Add a new parameter xflags to the in-kernel API function
pkey_key2protkey(). Currently there is only one flag supported:

* PKEY_XFLAG_NOMEMALLOC:
  If this flag is given in the xflags parameter, the pkey
  implementation is not allowed to allocate memory but instead should
  fall back to preallocated memory or simply fail with -ENOMEM.
  This flag is for protected key derivation within a cipher or similar
  context which must not allocate memory that could cause I/O
  operations - see also the CRYPTO_ALG_ALLOCATES_MEMORY flag in crypto.h.

The one and only user of this in-kernel API - the paes skcipher
implementations in paes_s390.c - sets this flag when requesting
derivation of a protected key from the given raw key material.
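
A minimal sketch of how such a flag could be honoured on the
allocation side; the helper names are hypothetical, not the real pkey
internals:

  static void *pkey_work_buffer(size_t len, u32 xflags)
  {
          if (xflags & PKEY_XFLAG_NOMEMALLOC)
                  return prealloc_buf_get(len);   /* hypothetical preallocated pool */
          return kmalloc(len, GFP_KERNEL);
  }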

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Link: https://lore.kernel.org/r/20250424133619.16495-26-freude@linux.ibm.com
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2025-04-30 11:34:03 +02:00
Herbert Xu
5b39aa368b crypto: s390/sha512 - Fix sha512 state size
The sha512 state size in s390_sha_ctx is out by a factor of 8;
fix it so that it stays below HASH_MAX_DESCSIZE.  Also fix the
state init function which was writing garbage to the state.  Luckily
this is not actually used for anything other than export.

Reported-by: Harald Freudenberger <freude@linux.ibm.com>
Fixes: 572b5c4682 ("crypto: s390/sha512 - Use API partial block handling")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-30 10:49:20 +08:00
Eric Biggers
3ea91323fe crypto: s390 - move library functions to arch/s390/lib/crypto/
Continue disentangling the crypto library functions from the generic
crypto infrastructure by moving the s390 ChaCha library functions into a
new directory arch/s390/lib/crypto/ that does not depend on CRYPTO.
This mirrors the distinction between crypto/ and lib/crypto/.

Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28 19:40:53 +08:00
Eric Biggers
4cf0e75916 crypto: s390 - drop redundant dependencies on S390
arch/s390/crypto/Kconfig is sourced only when CONFIG_S390=y, so there is
no need for the symbols defined inside it to depend on S390.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-28 19:40:53 +08:00
Herbert Xu
572b5c4682 crypto: s390/sha512 - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:47 +08:00
Herbert Xu
6f90ba7065 crypto: s390/sha3 - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
b333c273ab crypto: arm64/sha3-ce - Use API partial block handling
Use the Crypto API partial block handling.

Also remove the unnecessary SIMD fallback path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
1340113bdb crypto: s390/sha256 - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
7b83638f96 crypto: s390/sha1 - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:47 +08:00
Herbert Xu
dab2d7b66f crypto: s390/ghash - Use API partial block handling
Use the Crypto API partial block handling.

Also switch to the generic export format.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:47 +08:00
Eric Biggers
efe8ddfaa3 crypto: s390/chacha - remove the skcipher algorithms
Since crypto/chacha.c now registers chacha20-$(ARCH), xchacha20-$(ARCH),
and xchacha12-$(ARCH) skcipher algorithms that use the architecture's
ChaCha and HChaCha library functions, individual architectures no longer
need to do the same.  Therefore, remove the redundant skcipher
algorithms and leave just the library functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:28 +08:00
Eric Biggers
4aa6dc909e crypto: chacha - centralize the skcipher wrappers for arch code
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions.  This eliminates the need for every
architecture to implement the same skcipher glue code.

To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it.  Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c.  In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.
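
A minimal arch-side sketch of this hook; the names and the feature
probe are illustrative, not the actual s390 code:

  static bool have_vx __ro_after_init;

  bool chacha_is_arch_optimized(void)
  {
          return have_vx;
  }
  EXPORT_SYMBOL(chacha_is_arch_optimized);

  static int __init chacha_mod_init(void)
  {
          have_vx = cpu_has_vx();   /* hypothetical CPU feature probe */
          return 0;
  }
  arch_initcall(chacha_mod_init);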

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:28 +08:00
Eric Biggers
ca17aa6640 crypto: lib/chacha - remove unused arch-specific init support
All implementations of chacha_init_arch() just call
chacha_init_generic(), so it is pointless.  Just delete it, and replace
chacha_init() with what was previously chacha_init_generic().
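
A minimal caller-side sketch, assuming the resulting prototype is
chacha_init(state, key, iv) with a 16-word state:

  #include <crypto/chacha.h>

  static void setup_chacha_state(u32 state[16], const u32 key[8],
                                 const u8 iv[16])
  {
          /* Formerly chacha_init_generic(); now the only initializer. */
          chacha_init(state, key, iv);
  }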

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Eric Biggers
7450ebd29c crypto: scatterwalk - simplify map and unmap calling convention
Now that the address returned by scatterwalk_map() is always being
stored into the same struct scatter_walk that is passed in, make
scatterwalk_map() do so itself and return void.

Similarly, now that scatterwalk_unmap() is always being passed the
address field within a struct scatter_walk, make scatterwalk_unmap()
take a pointer to struct scatter_walk instead of the address directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:38 +08:00
Herbert Xu
65775cf313 crypto: scatterwalk - Change scatterwalk_next calling convention
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address.  The length is returned directly.

Change the done functions to use this stored address instead of
getting them from the caller.

Split the address into two using a union.  The user should only
access the const version so that it is never changed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
17ec3e71ba crypto: lib/Kconfig - Hide arch options from user
The ARCH_MAY_HAVE patch missed arm64, mips and s390.  But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.

As the primary user of all these options wireguard is selecting
the arch options anyway, make the same selections at the lib/crypto
option level and hide the arch options from the user.

Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.

Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers).  Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.

Fixes: 1047e21aec ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:21:47 +08:00
Eric Biggers
e7d5d8a86d crypto: s390/aes-gcm - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map().  Use scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done().

Besides the new functions being a bit easier to use, this is necessary
because scatterwalk_done() is planned to be removed.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Tested-by: Harald Freudenberger <freude@linux.ibm.com>
Cc: Holger Dengler <dengler@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:44 +08:00
Linus Torvalds
37b33c68b0 CRC updates for 6.14
 These patches have been in linux-next since -rc1.

Merge tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux

Pull CRC updates from Eric Biggers:

 - Reorganize the architecture-optimized CRC32 and CRC-T10DIF code to be
   directly accessible via the library API, instead of requiring the
   crypto API. This is much simpler and more efficient.

 - Convert some users such as ext4 to use the CRC32 library API instead
   of the crypto API. More conversions like this will come later.

 - Add a KUnit test that tests and benchmarks multiple CRC variants.
   Remove older, less-comprehensive tests that are made redundant by
   this.

 - Add an entry to MAINTAINERS for the kernel's CRC library code. I'm
   volunteering to maintain it. I have additional cleanups and
   optimizations planned for future cycles.

* tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (31 commits)
  MAINTAINERS: add entry for CRC library
  powerpc/crc: delete obsolete crc-vpmsum_test.c
  lib/crc32test: delete obsolete crc32test.c
  lib/crc16_kunit: delete obsolete crc16_kunit.c
  lib/crc_kunit.c: add KUnit test suite for CRC library functions
  powerpc/crc-t10dif: expose CRC-T10DIF function through lib
  arm64/crc-t10dif: expose CRC-T10DIF function through lib
  arm/crc-t10dif: expose CRC-T10DIF function through lib
  x86/crc-t10dif: expose CRC-T10DIF function through lib
  crypto: crct10dif - expose arch-optimized lib function
  lib/crc-t10dif: add support for arch overrides
  lib/crc-t10dif: stop wrapping the crypto API
  scsi: target: iscsi: switch to using the crc32c library
  f2fs: switch to using the crc32 library
  jbd2: switch to using the crc32c library
  ext4: switch to using the crc32c library
  lib/crc32: make crc32c() go directly to lib
  bcachefs: Explicitly select CRYPTO from BCACHEFS_FS
  x86/crc32: expose CRC32 functions through lib
  x86/crc32: update prototype for crc32_pclmul_le_16()
  ...
2025-01-22 19:55:08 -08:00
Peter Zijlstra
cdd30ebb1b module: Convert symbol namespace to string literal
Clean up the existing export namespace code along the same lines as
commit 33def8498f ("treewide: Convert macro and uses of __section(foo)
to __section("foo")") and for the same reason, it is not desired for the
namespace argument to be a macro expansion itself.

Scripted using

  git grep -l -e MODULE_IMPORT_NS -e EXPORT_SYMBOL_NS | while read file;
  do
    awk -i inplace '
      /^#define EXPORT_SYMBOL_NS/ {
        gsub(/__stringify\(ns\)/, "ns");
        print;
        next;
      }
      /^#define MODULE_IMPORT_NS/ {
        gsub(/__stringify\(ns\)/, "ns");
        print;
        next;
      }
      /MODULE_IMPORT_NS/ {
        $0 = gensub(/MODULE_IMPORT_NS\(([^)]*)\)/, "MODULE_IMPORT_NS(\"\\1\")", "g");
      }
      /EXPORT_SYMBOL_NS/ {
        if ($0 ~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+),/) {
  	if ($0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/ &&
  	    $0 !~ /(EXPORT_SYMBOL_NS[^(]*)\(\)/ &&
  	    $0 !~ /^my/) {
  	  getline line;
  	  gsub(/[[:space:]]*\\$/, "");
  	  gsub(/[[:space:]]/, "", line);
  	  $0 = $0 " " line;
  	}

  	$0 = gensub(/(EXPORT_SYMBOL_NS[^(]*)\(([^,]+), ([^)]+)\)/,
  		    "\\1(\\2, \"\\3\")", "g");
        }
      }
      { print }' $file;
  done
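
The resulting change, illustrated on a hypothetical symbol:

  /* Before: namespace passed as a plain macro argument. */
  EXPORT_SYMBOL_NS(my_driver_func, MY_NAMESPACE);
  MODULE_IMPORT_NS(MY_NAMESPACE);

  /* After: namespace passed as a string literal. */
  EXPORT_SYMBOL_NS(my_driver_func, "MY_NAMESPACE");
  MODULE_IMPORT_NS("MY_NAMESPACE");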

Requested-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://mail.google.com/mail/u/2/#inbox/FMfcgzQXKWgMmjdFwwdsfgxzKpVHWPlc
Acked-by: Greg KH <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2024-12-02 11:34:44 -08:00
Eric Biggers
008071917d s390/crc32: expose CRC32 functions through lib
Move the s390 CRC32 assembly code into the lib directory and wire it up
to the library interface.  This allows it to be used without going
through the crypto API.  It remains usable via the crypto API too via
the shash algorithms that use the library interface.  Thus all the
arch-specific "shash" code becomes unnecessary and is removed.
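
A minimal usage sketch of the library interface, assuming crc32_le()
from <linux/crc32.h> (the s390 backend is selected transparently):

  #include <linux/crc32.h>

  static u32 buf_crc32(const void *buf, size_t len)
  {
          /* Conventional ~0 seed and final inversion. */
          return crc32_le(~0U, buf, len) ^ ~0U;
  }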

Note: to see the diff from arch/s390/crypto/crc32-vx.c to
arch/s390/lib/crc32-glue.c, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20241202010844.144356-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2024-12-01 17:23:01 -08:00
Holger Dengler
666300dae8 s390/crypto: Add hardware acceleration for full AES-XTS mode
Extend the existing paes cipher to exploit the full AES-XTS hardware
acceleration introduced with message-security assist extension 10.

The full AES-XTS mode requires a protected key of type
PKEY_KEYTYPE_AES_XTS_128 or PKEY_KEYTYPE_AES_XTS_256.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Holger Dengler
f4d3cf6b8b s390/crypto: Postpone the key split to key conversion
Store the input key material of paes-xts in a single key_blob
structure. The split of the input key material is postponed to the key
conversion. Split the key material only if the returned protected
keytype requires a second protected key.

For clear key pairs, prepare a clearkey token for each key and convert
them separately to protected keys. Store the concatenated conversion
results as input key in the context. All other input keys are stored
as is.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Holger Dengler
20a5f640ca s390/crypto: Introduce function for tokenize clearkeys
Move the conversion of a clearkey blob to token into a separate
function.

The functionality of the paes module is not affected by this commit.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Holger Dengler
6e98c81063 s390/crypto: Generalize parameters for key conversion
As a preparation for multiple key tokens in a key_blob structure, use
separate pointer and length parameters for __paes_keyblob2pkey()
instead of a pointer to the struct key_blob.

The functionality of the paes module is not affected by this commit.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Holger Dengler
e665b96939 s390/crypto: Use module-local structures for protected keys
The paes module uses only AES related structures and constants of the
pkey module. As pkey also supports protected keys other than AES keys,
the structures and size constants of the pkey module may be
changed. Use module-local structures and size constants for paes to
prevent any unwanted side effects from such a change.

The struct pkey_protkey is used to store the protected key blob
together with its length and type. The structure is only used locally,
it is not required for any pkey API call. So define the module-local
structure struct paes_protkey instead.

While at it, unify the names of struct paes_protkey variables on
stack.

The functionality of the paes module is not affected by this commit.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Holger Dengler
ed61f86e61 s390/crypto: Convert to reverse x-mas tree, rename ret to rc
Reverse x-mas tree order for stack variables in paes module. While at
it, rename stack variables ret to rc.

The functionality of the paes module is not affected by this commit.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Signed-off-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-29 11:17:19 +01:00
Mete Durlu
897a42dd01 s390/crypto: Switch over to sysfs_emit()
Per Documentation/filesystems/sysfs.rst, sysfs_emit() is preferred for
presenting attributes to user space in sysfs. Convert the left-over uses
in the s390/crypto code.

Signed-off-by: Mete Durlu <meted@linux.ibm.com>
Reviewed-by: Gerd Bayer <gbayer@linux.ibm.com>
Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-10-25 16:03:24 +02:00
Linus Torvalds
1cfb46051d This push fixes the following issues:

Merge tag 'v6.12-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

 - Disable buggy p10 aes-gcm code on powerpc

 - Fix module aliases in paes_s390

 - Fix buffer overread in caam

* tag 'v6.12-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: powerpc/p10-aes-gcm - Disable CRYPTO_AES_GCM_P10
  crypto: s390/paes - Fix module aliases
  crypto: caam - Pad SG length when allocating hash edesc
2024-09-24 10:46:54 -07:00
Herbert Xu
4330869a2d crypto: s390/paes - Fix module aliases
The paes_s390 module didn't declare the correct aliases for the
algorithms that it registered.  Instead it declared an alias for
the non-existent paes algorithm.

The Crypto API will eventually try to load the paes algorithm, to
construct the cbc(paes) instance.  But because the module does not
actually contain a "paes" algorithm, this will fail.

Previously this failure was hidden and the cbc(paes) lookup would
be retried.  This was fixed recently, thus exposing the buggy alias
in paes_s390.

Replace the bogus paes alias with aliases for the actual algorithms.

Reported-by: Ingo Franzki <ifranzki@linux.ibm.com>
Fixes: e7a4142b35 ("crypto: api - Fix generic algorithm self-test races")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ingo Franzki <ifranzki@linux.ibm.com>
Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2024-09-21 17:14:59 +08:00
Ingo Franzki
992b706680 s390/sha3: Fix SHA3 selftests failures
Since commit "s390/sha3: Support sha3 performance enhancements"
the selftests of the sha3_256_s390 and sha3_512_s390 kernel digests
sometimes fail with:

  alg: shash: sha3-256-s390 test failed (wrong result) on test vector 3,
              cfg="import/export"
  alg: self-tests for sha3-256 using sha3-256-s390 failed (rc=-22)

or with

  alg: ahash: sha3-256-s390 test failed (wrong result) on test vector 3,
              cfg="digest misaligned splits crossing pages"
  alg: self-tests for sha3-256 using sha3-256-s390 failed (rc=-22)

The first failure is because the newly introduced context field
'first_message_part' is not copied during export and import operations.
Because of that the value of 'first_message_part' is more or less random
after an import into a newly allocated context and may or may not match
the state of the imported SHA3 operation, causing an invalid hash when it
does not.

Save the 'first_message_part' field in the currently unused field 'partial'
of struct sha3_state, even though the meaning of 'partial' is not exactly
the same as 'first_message_part'. For the caller the returned state blob
is opaque and it must only be ensured that the state can be imported later
on by the module that exported it.

The second failure occurs when, on entry to s390_sha_update(), the flag
'first_message_part' is on, and kimd is called in the first 'if (index)'
block as well as in the second 'if (len >= bsize)' block. In this case,
the 'first_message_part' is turned off after the first kimd, but the
function code incorrectly retains the NIP flag. Reset the NIP flag after
the first kimd unconditionally besides turning 'first_message_part' off.

Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Fixes: 88c02b3f79 ("s390/sha3: Support sha3 performance enhancements")
Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Reviewed-by: Joerg Schmidbauer <jschmidb@de.ibm.com>
Signed-off-by: Ingo Franzki <ifranzki@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
2024-09-05 15:17:23 +02:00
Joerg Schmidbauer
88c02b3f79 s390/sha3: Support sha3 performance enhancements
On newer machines the SHA3 performance of CPACF instructions KIMD and
KLMD can be enhanced by using additional modifier bits. This allows the
application to omit initializing the ICV, but also affects the internal
processing of the instructions. Performance is mostly gained when
processing short messages.

The new CPACF feature is backwards compatible with older machines, i.e.
the new modifier bits are ignored on older machines. However, to save the
ICV initialization, the application must detect the MSA level and omit
the ICV initialization only if this feature is supported.

Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Joerg Schmidbauer <jschmidb@de.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-08-29 22:56:34 +02:00
Harald Freudenberger
86fbf5e2a0 s390/pkey: Rework and split PKEY kernel module code
This is a huge rework of all the pkey kernel module code.
The goal is to split the code into individual parts with
a dedicated calling interface:
- move all the sysfs related code into pkey_sysfs.c
- all the CCA related code goes to pkey_cca.c
- the EP11 stuff has been moved to pkey_ep11.c
- the PCKMO related code is now in pkey_pckmo.c

The CCA, EP11 and PCKMO code may be seen as "handlers" with
a similar calling interface. The new header file pkey_base.h
declares this calling interface. The remaining code in
pkey_api.c handles the ioctl, the pkey module things and the
"handler" independent code on top of the calling interface
invoking the handlers.

This regrouping of the code will be the basis for a real split of
the pkey kernel module into a pkey base module, which acts as a
dispatcher, and handler modules providing their services.

Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Reviewed-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
2024-08-29 22:56:33 +02:00