Commit Graph

3951 Commits

Author SHA1 Message Date
Herbert Xu
3715cb9863 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge crypto tree to pick up the scompress scratch refcount fix.  The
merge resolution is slightly non-trivial as the context has shifted.
2025-04-25 10:37:30 +08:00
Sabrina Dubroca
a32f1923c6 crypto: scompress - increment scomp_scratch_users when already allocated
Commit ddd0a42671 only increments scomp_scratch_users when it was 0,
causing a panic when using ipcomp:

    Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
    KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
    CPU: 1 UID: 0 PID: 619 Comm: ping Tainted: G                 N  6.15.0-rc3-net-00032-ga79be02bba5c #41 PREEMPT(full)
    Tainted: [N]=TEST
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
    RIP: 0010:inflate_fast+0x5a2/0x1b90
    [...]
    Call Trace:
     <IRQ>
     zlib_inflate+0x2d60/0x6620
     deflate_sdecompress+0x166/0x350
     scomp_acomp_comp_decomp+0x45f/0xa10
     scomp_acomp_decompress+0x21/0x120
     acomp_do_req_chain+0x3e5/0x4e0
     ipcomp_input+0x212/0x550
     xfrm_input+0x2de2/0x72f0
    [...]
    Kernel panic - not syncing: Fatal exception in interrupt
    Kernel Offset: disabled
    ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---

Instead, let's keep the old increment, and decrement back to 0 if the
scratch allocation fails.
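
As a rough illustration of the pattern being restored ("always take a
reference, roll it back if the first-user allocation fails"), here is a
minimal kernel-style sketch; scratch_get() and SCRATCH_SIZE are
illustrative names only, not the actual scomp code:

	static DEFINE_MUTEX(scratch_lock);
	static void *scratch;
	static unsigned int scratch_users;

	static int scratch_get(void)
	{
		int ret = 0;

		mutex_lock(&scratch_lock);
		if (!scratch_users++) {			/* first user allocates */
			scratch = kzalloc(SCRATCH_SIZE, GFP_KERNEL);
			if (!scratch) {
				scratch_users--;	/* roll back on failure */
				ret = -ENOMEM;
			}
		}
		mutex_unlock(&scratch_lock);
		return ret;
	}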

Fixes: ddd0a42671 ("crypto: scompress - Fix scratch allocation failure handling")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-25 10:33:30 +08:00
Herbert Xu
566ec9adfe crypto: xcbc - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:47 +08:00
Herbert Xu
f4bb31367e crypto: cmac - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:47 +08:00
Herbert Xu
ca5d7d5f7a crypto: cbcmac - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:47 +08:00
Herbert Xu
8266393e9b crypto: sm3-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:47 +08:00
Herbert Xu
216623af53 crypto: sha512-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
561aab1104 crypto: riscv/sha512 - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
0d474be267 crypto: sha3-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:46 +08:00
Herbert Xu
9adeea13ed crypto: sha256-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 15:52:45 +08:00
Herbert Xu
a2d910b846 crypto: sha1-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:47 +08:00
Herbert Xu
efd62c8552 crypto: md5-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:47 +08:00
Herbert Xu
ef11957b0a crypto: ghash-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:46 +08:00
Herbert Xu
aa54e17020 crypto: blake2b-generic - Use API partial block handling
Use the Crypto API partial block handling.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:46 +08:00
Herbert Xu
7650f826f7 crypto: shash - Handle partial blocks in API
Provide an option to handle the partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.

Rather than duplicating the partial block handling many times,
add this functionality to the shash API.

It is optional (e.g., hmac would never need this by relying on
the partial block handling of the underlying hash), and to enable
it set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.

The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.

Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block.  This will come in handy when this
is extended to ahash where hardware often can't deal with a
zero-length final.

It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).

As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.

The descriptor will be zeroed after finalisation.
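
As an illustration of the kind of buffering this centralises, here is a
sketch of generic partial-block handling; struct pblock, BLOCK_SIZE and
block_fn are made-up names and the real shash code differs:

	struct pblock {
		u8 buf[BLOCK_SIZE];
		unsigned int used;
	};

	static void pblock_update(struct pblock *p, const u8 *data,
				  unsigned int len,
				  void (*block_fn)(const u8 *, unsigned int))
	{
		if (p->used) {
			/* top up the buffered partial block first */
			unsigned int n = min_t(unsigned int, len,
					       BLOCK_SIZE - p->used);

			memcpy(p->buf + p->used, data, n);
			p->used += n;
			data += n;
			len -= n;
			if (p->used == BLOCK_SIZE) {
				block_fn(p->buf, BLOCK_SIZE);
				p->used = 0;
			}
		}
		if (len >= BLOCK_SIZE) {
			/* hash the bulk of the data in full blocks */
			unsigned int n = len - len % BLOCK_SIZE;

			block_fn(data, n);
			data += n;
			len -= n;
		}
		/* stash the remainder for the next update or finalisation */
		memcpy(p->buf + p->used, data, len);
		p->used += len;
	}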

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 11:33:46 +08:00
Herbert Xu
e3f08b2625 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge crypto tree to pick up scompress off-by-one patch.  The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
2025-04-23 09:36:39 +08:00
Herbert Xu
002ba346e3 crypto: scomp - Fix off-by-one bug when calculating last page
Fix off-by-one bug in the last page calculation for src and dst.

Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-23 09:32:57 +08:00
Herbert Xu
31b20bc22f crypto: acomp - Add missing return statements in compress/decompress
The return statements were missing which causes REQ_CHAIN algorithms
to execute twice for every request.

Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 64929fe8c0 ("crypto: acomp - Remove request chaining")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-19 11:18:28 +08:00
Herbert Xu
aece1cf146 Revert "crypto: testmgr - Add multibuffer acomp testing"
This reverts commit 99585c2192.

Remove the acomp multibuffer tests as they are buggy.

Reported-by: Dmitry Antipov <dmantipov@yandex.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-19 11:07:58 +08:00
Herbert Xu
02db42856e crypto: public_key - Make sig/tfm local to if clause in software_key_query
The recent code changes in this function triggered a false-positive
maybe-uninitialized warning in software_key_query.  Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-17 10:41:47 +08:00
Herbert Xu
ddd0855fa3 crypto: deflate - Make the acomp walk atomic
Add an atomic flag to the acomp walk and use that in deflate.
Due to the use of a per-cpu context, it is impossible to sleep
during the walk in deflate.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-17 10:41:47 +08:00
Eric Biggers
ecaa4be128 crypto: poly1305 - centralize the shash wrappers for arch code
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions.  This eliminates the need for every architecture to
implement the same shash glue code.

Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init().  Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.

Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it.  Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c.  (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
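
For contrast with the shash quirk described above (key prepended to the
data), the library-side calling sequence looks roughly like this,
assuming the helpers declared in <crypto/poly1305.h>; mac_one() is just
an illustrative wrapper:

	#include <crypto/poly1305.h>

	static void mac_one(const u8 key[POLY1305_KEY_SIZE],
			    const u8 *data, unsigned int len,
			    u8 digest[POLY1305_DIGEST_SIZE])
	{
		struct poly1305_desc_ctx desc;

		poly1305_init(&desc, key);	/* key passed to init, not hashed */
		poly1305_update(&desc, data, len);
		poly1305_final(&desc, digest);
	}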

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Dr. David Alan Gilbert
b03892c2f8 crypto: deadcode structs from 'comp' removal
Ard's recent series of patches removing 'comp' implementations
left behind a bunch of trivial structs, remove them.

These are:
  crypto842_ctx - commit 2d985ff007 ("crypto: 842 - drop obsolete 'comp'
implementation")
  lz4_ctx       - commit 33335afe33 ("crypto: lz4 - drop obsolete 'comp'
implementation")
  lz4hc_ctx     - commit dbae96559e ("crypto: lz4hc - drop obsolete
'comp' implementation")
  lzo_ctx       - commit a3e43a25ba ("crypto: lzo - drop obsolete
'comp' implementation")
  lzorle_ctx    - commit d32da55c5b ("crypto: lzo-rle - drop obsolete
'comp' implementation")

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
0a1376744c crypto: cbcmac - Set block size properly
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle.  For cbcmac that should be
the block size of the underlying block cipher instead of one.

Set the block size of all cbcmac implementations accordingly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
f4065b2f63 crypto: lib/sm3 - Move sm3 library into lib/crypto
Move the sm3 library code into lib/crypto.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
04bfa4c7d5 crypto: hash - Add HASH_REQUEST_ON_STACK
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed.  The intended usage is:

	HASH_REQUEST_ON_STACK(req, tfm);

	...
	err = crypto_ahash_digest(req);
	/* The request cannot complete synchronously. */
	if (err == -EAGAIN) {
		/* This will not fail. */
		req = HASH_REQUEST_CLONE(req, gfp);

		/* Redo operation. */
		err = crypto_ahash_digest(req);
	}

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
90916934fd crypto: shash - Remove dynamic descsize
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
aeffd90938 crypto: hmac - Make descsize an algorithm attribute
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
f1440a9046 crypto: api - Add support for duplicating algorithms before registration
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration.  This is intended for
hardware-based algorithms that may be unplugged at will.

Do not use this if the algorithm data structure is embedded in a
bigger data structure.  Perform the duplication in the driver
instead.
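
A hedged sketch of what registration-time duplication amounts to
(kmemdup is the real helper; the surrounding error handling here is
illustrative only):

	if (alg->cra_flags & CRYPTO_ALG_DUP_FIRST) {
		alg = kmemdup(alg, sizeof(*alg), GFP_KERNEL);
		if (!alg)
			return -ENOMEM;
	}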

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:36:24 +08:00
Herbert Xu
d701722aa3 crypto: api - Allow delayed algorithm destruction
The current algorithm unregistration mechanism originated from
software crypto.  The code relies on module reference counts to
stop in-use algorithms from being unregistered.  Therefore if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.

This is completely broken for hardware devices, which can be
unplugged at random.

Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.

Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:22 +08:00
Thorsten Blum
c80d6598ff crypto: essiv - Remove unnecessary strscpy() size argument
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the argument is omitted. This
makes the explicit size argument unnecessary - remove it.
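
Assuming the two-argument form described above, the change is of this
shape (name and src are illustrative):

	char name[CRYPTO_MAX_ALG_NAME];

	strscpy(name, src, sizeof(name));	/* before: explicit size */
	strscpy(name, src);			/* after: size from sizeof(name) */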

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:22 +08:00
Lukas Wunner
6b7f9397c9 crypto: ecdsa - Fix NIST P521 key size reported by KEYCTL_PKEY_QUERY
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.

That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
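
The arithmetic for P521 illustrates the problem (DIV_ROUND_UP is the
standard kernel macro):

	unsigned int bits     = 521;
	unsigned int bytes    = DIV_ROUND_UP(bits, 8);	/* 66 bytes */
	unsigned int key_size = bytes * 8;		/* 528, not 521 */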

Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().

The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes.  This matches with the units in struct keyctl_pkey_query.

Fixes: a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Lukas Wunner
3828485e1c crypto: ecdsa - Fix enc/dec size reported by KEYCTL_PKEY_QUERY
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption.  They're exclusively for signature generation or
verification.

Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.

Return 0 instead for ecdsa keys (as well as ecrdsa keys).

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Herbert Xu
c360df01c6 crypto: ahash - Use cra_reqsize
Use the common reqsize field and remove reqsize from ahash_alg.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Herbert Xu
300e6d6e9e crypto: acomp - Remove reqsize field
Remove the type-specific reqsize field in favour of the common one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Herbert Xu
dbad301d9f crypto: acomp - Use cra_reqsize
Use the common reqsize if present.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Herbert Xu
5f3437e9c8 crypto: acomp - Simplify folio handling
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:21 +08:00
Herbert Xu
097c432caa crypto: acomp - Add ACOMP_REQUEST_CLONE
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack
request into a dynamically allocated one if possible, and otherwise
switch it over to the synchronous fallback transform.  The intended
usage is:

	ACOMP_REQUEST_ON_STACK(req, tfm);

	...
	err = crypto_acomp_compress(req);
	/* The request cannot complete synchronously. */
	if (err == -EAGAIN) {
		/* This will not fail. */
		req = ACOMP_REQUEST_CLONE(req, gfp);

		/* Redo operation. */
		err = crypto_acomp_compress(req);
	}

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
05fa2c6e87 crypto: acomp - Add ACOMP_FBREQ_ON_STACK
Add a helper to create an on-stack fallback request from a given
request.  Use this helper in acomp_do_nondma.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Thorsten Blum
b93336cd76 crypto: x509 - Replace kmalloc() + NUL-termination with kzalloc()
Use kzalloc() to zero out the one-element array instead of using
kmalloc() followed by a manual NUL-termination.
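
Schematically (buf and len are illustrative; the actual change zeroes a
one-element array in a struct):

	buf = kmalloc(len + 1, GFP_KERNEL);	/* before: manual termination */
	if (buf)
		buf[len] = '\0';

	buf = kzalloc(len + 1, GFP_KERNEL);	/* after: already zeroed */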

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
5bb61dc76d crypto: ahash - Remove request chaining
Request chaining requires the user to do too much bookkeeping.
Remove it from ahash.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
69e5a1228d Revert "crypto: tcrypt - Restore multibuffer ahash tests"
This reverts commit c664f03417.

Remove the multibuffer ahash speed tests again.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
64929fe8c0 crypto: acomp - Remove request chaining
Request chaining requires the user to do too much bookkeeping.
Remove it from acomp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
78e2846aa4 crypto: deflate - Remove request chaining
Remove request chaining support from deflate.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
5976fe19e2 Revert "crypto: testmgr - Add multibuffer acomp testing"
This reverts commit 99585c2192.

Remove the acomp multibuffer tests so that the interface can be
redesigned.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-16 15:16:20 +08:00
Herbert Xu
51a7c741f7 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge crypto tree to pick up scompress and ahash fixes.  The
scompress fix becomes mostly unnecessary as the bugs no longer
exist with the new acompress code.  However, keep the NULL assignment
in crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.
2025-04-12 09:48:09 +08:00
Herbert Xu
b2e689baf2 crypto: ahash - Disable request chaining
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.

Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-12 09:33:09 +08:00
Herbert Xu
9ae0c92fec crypto: scomp - Fix wild memory accesses in scomp_free_streams
In order to use scomp_free_streams to free the partially allocated
streams in the allocation error path, move the alg->stream assignment
to the beginning.  Also check for error pointers in scomp_free_streams
before freeing the ctx.

Finally set alg->stream to NULL to not break subsequent attempts
to allocate the streams.

Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Co-developed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Co-developed-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-12 09:33:09 +08:00
Herbert Xu
5322584385 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Merge crypto tree to pick up scompress and caam fixes.  The scompress
fix has a non-trivial resolution as the code in question has moved
over to acompress.
2025-04-09 21:33:40 +08:00
Herbert Xu
cfb32c656e crypto: scomp - Fix null-pointer deref when freeing streams
As the scomp streams are freed when an algorithm is unregistered,
it is possible that the algorithm has never been used at all (e.g.,
an algorithm that does not have a self-test).  So test whether the
streams exist before freeing them.

Reported-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-09 21:28:31 +08:00
Eric Biggers
d23fce15ab crypto: chacha - remove <crypto/internal/chacha.h>
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:28 +08:00
Eric Biggers
4aa6dc909e crypto: chacha - centralize the skcipher wrappers for arch code
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions.  This eliminates the need for every
architecture to implement the same skcipher glue code.

To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it.  Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c.  In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:28 +08:00
Herbert Xu
a7b1d0c5f1 crypto: scomp - Drop the dst scratch buffer
As deflate has been converted over to acomp, and cavium zip has been
removed, there are no longer any scomp algorithms that can be used
by IPsec.

Since IPsec was the only user of the dst scratch buffer, remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
08cabc7d3c crypto: deflate - Convert to acomp
This is based on work by Ard Biesheuvel <ardb@kernel.org>.

Convert deflate from scomp to acomp.  This removes the need for
the caller to linearise the source and destination.

Link: https://lore.kernel.org/all/20230718125847.3869700-21-ardb@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
9c8cf58262 crypto: acomp - Add acomp_walk
Add acomp_walk which is similar to skcipher_walk but tailored for
acomp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
42d9f6c774 crypto: acomp - Move scomp stream allocation code into acomp
Move the dynamic stream allocation code into acomp and make it
available as a helper for acomp algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
c47e1f4142 crypto: scomp - Allocate per-cpu buffer on first use of each CPU
Per-cpu buffers can be wasteful when the number of CPUs is large,
especially if the buffer itself is likely to never be used.  Reduce
such wastage by only allocating them on first use of a particular
CPU.

On start-up allocate a single buffer on the first possible CPU.
For every other CPU a work struct will be scheduled on first use
to allocate the buffer for that CPU.  Until the allocation succeeds,
simply use the first CPU's buffer, which is protected under a spin
lock.
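
A schematic of the lookup path only (names, struct layout and locking
details are illustrative, not the actual scomp code):

	struct percpu_buf {
		void *mem;
		spinlock_t lock;		/* only used on the fallback buffer */
		struct work_struct alloc_work;
	};

	static void *get_buf(struct percpu_buf __percpu *bufs, bool *fallback)
	{
		struct percpu_buf *b = this_cpu_ptr(bufs);

		if (likely(b->mem)) {
			*fallback = false;
			return b->mem;
		}
		/* allocate this CPU's buffer for next time */
		schedule_work(&b->alloc_work);
		/* meanwhile, borrow the first CPU's buffer under its lock */
		*fallback = true;
		b = per_cpu_ptr(bufs, cpumask_first(cpu_possible_mask));
		spin_lock(&b->lock);
		return b->mem;
	}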

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
138804c2c1 crypto: api - Ensure cra_type->destroy is done in process context
Move the cra_type->destroy call out of crypto_alg_put and into
crypto_unregister_alg and crypto_free_instance.  This ensures
that it's always done in process context so calls such as flush_work
can be done.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Herbert Xu
3860642e0a crypto: api - Move alg destroy work from instance to template
Commit 9ae4577bc0 ("crypto: api - Use work queue in
crypto_destroy_instance") introduced a work struct to free an
instance after the last user goes away.

Move the delayed work from the instance into its template so that
when the template is unregistered it can ensure that all its
instances have been freed before returning.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-04-07 13:22:25 +08:00
Linus Torvalds
5de0afb422 This push reverts the multibuffer hash testing as it is buggy.
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEn51F/lCuNhUwmDeSxycdCkmxi6cFAmfotyEACgkQxycdCkmx
 i6cexw/+P89k6YKSHv9tYQ7hndFBf8ivDKvhb2bmB1tWSC+n7p8AW0oC9rCLkgih
 fya2mhPLgH3M+qOcKURFks9f5SVZm1IXuG+Hu/9hBkETW6RZZt+m5FzxAnbjVUqA
 dETVYvPDyw1pv6G1oUpi68fKhID8miZS2XLGmwCHrE+I+c/wskq8+oTjqZrn3jA3
 iE29V6ZC4jV/aLk4EzIxBKJiGFHA9S5kd6sj7vMHVx9j3pvcBq0t3NuqBU5dmTbD
 sCpYF8PCWv4TgbiMy8EgIgZPlrbUAtctFUS1O1X//C8YvvcRuJARBXBLCHcFUqEV
 A8Ub1gvO9J+xzS9TTV3cKOifAPeNX87748gpSk/5FmTdIANSBKVuKLkAdF5mm7UM
 sodUr95jHkqpAbzNKJlN4cSYw0ZjVHvd/fAL3e8jNGPo97rqQYJM3OZzStgArHAz
 PCKqb4agjHpN9T8MSsKAv8b/5+ZoukpBwkiM+57zxyG3Qrfpg1c369Ld/P8Bg0rY
 PKxsgH5AAP6kpPw5ZLQYuTnq4FvvrkJdTnfVd5G6qQZ/NCabnw/dIFgAM/qhJWz0
 Ib8p5y1bdRhbECQOHu678vAJIiKkD5Ml2ltx3FZnEmmQR3ueAMqcj0tx3S9Lq5pC
 Tk0znvQRRbwBSCeQQa7BHyEXulvOrNxV6ZZFwn1YWM5cDbuq/oc=
 =fRer
 -----END PGP SIGNATURE-----

Merge tag 'v6.15-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto fix from Herbert Xu:

 - revert the multibuffer hash testing as it is buggy

* tag 'v6.15-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  Revert "crypto: testmgr - Add multibuffer hash testing"
2025-04-02 09:14:59 -07:00
Herbert Xu
9764d5b0cd Revert "crypto: testmgr - Add multibuffer hash testing"
This reverts commit 8b54e6a8f4.

The multibuffer tests have a number of bugs.  For example, the SG
lists for the filler requests weren't initialised properly, and
it fails to take data-keyed algorithms such as poly1305 into account.

More importantly, the chaining interface itself is under review.
Revert this until the interface is fully settled.

Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503281658.7a078821-lkp@intel.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-30 09:39:57 +08:00
Linus Torvalds
e5e0e6bebe This update includes the following changes:
API:
 
 - Remove legacy compression interface.
 - Improve scatterwalk API.
 - Add request chaining to ahash and acomp.
 - Add virtual address support to ahash and acomp.
 - Add folio support to acomp.
 - Remove NULL dst support from acomp.
 
 Algorithms:
 
 - Library options are fully hidden (selected by kernel users only).
 - Add Kerberos5 algorithms.
 - Add VAES-based ctr(aes) on x86.
 - Ensure LZO respects output buffer length on compression.
 - Remove obsolete SIMD fallback code path from arm/ghash-ce.
 
 Drivers:
 
 - Add support for PCI device 0x1134 in ccp.
 - Add support for rk3588's standalone TRNG in rockchip.
 - Add Inside Secure SafeXcel EIP-93 crypto engine support in eip93.
 - Fix bugs in tegra uncovered by multi-threaded self-test.
 - Fix corner cases in hisilicon/sec2.
 
 Others:
 
 - Add SG_MITER_LOCAL to sg miter.
 - Convert ubifs, hibernate and xfrm_ipcomp from legacy API to acomp.
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEn51F/lCuNhUwmDeSxycdCkmxi6cFAmfiQ9kACgkQxycdCkmx
 i6fFZg/9GWjC1FLEV66vNlYAIzFGwzwWdFGyQzXyP235Cphhm4qt9gx7P91N6Lvc
 pplVjNEeZHoP8lMw+AIeGc2cRhIwsvn8C+HA3tCBOoC1qSe8T9t7KHAgiRGd/0iz
 UrzVBFLYlR9i4tc0T5peyQwSctv8DfjWzduTmI3Ts8i7OQcfeVVgj3sGfWam7kjF
 1GJWIQH7aPzT8cwFtk8gAK1insuPPZelT1Ppl9kUeZe0XUibrP7Gb5G9simxXAyi
 B+nLCaJYS6Hc1f47cfR/qyZSeYQN35KTVrEoKb1pTYXfEtMv6W9fIvQVLJRYsqpH
 RUBdDJUseE+WckR6glX9USrh+Fv9d+HfsTXh1fhpApKU5sQJ7pDbUm4ge8p6htNG
 MIszbJPdqajYveRLuPUjFlUXaqomos8eT6BZA+RLHm1cogzEOm+5bjspbfRNAVPj
 x9KiDu5lXNiFj02v/MkLKUe3bnGIyVQnZNi7Rn0Rpxjv95tIjVpksZWMPJarxUC6
 5zdyM2I5X0Z9+teBpbfWyqfzSbAs/KpzV8S/xNvWDUT6NlpYGBeNXrCDTXcwJLAh
 PRW0w1EJUwsZbPi8GEh5jNzo/YK1cGsUKrihKv7YgqSSopMLI8e/WVr8nKZMVDFA
 O+6F6ec5lR7KsOIMGUqrBGFU1ccAeaLLvLK3H5J8//gMMg82Uik=
 =aQNt
 -----END PGP SIGNATURE-----

Merge tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Remove legacy compression interface
   - Improve scatterwalk API
   - Add request chaining to ahash and acomp
   - Add virtual address support to ahash and acomp
   - Add folio support to acomp
   - Remove NULL dst support from acomp

  Algorithms:
   - Library options are fully hidden (selected by kernel users only)
   - Add Kerberos5 algorithms
   - Add VAES-based ctr(aes) on x86
   - Ensure LZO respects output buffer length on compression
   - Remove obsolete SIMD fallback code path from arm/ghash-ce

  Drivers:
   - Add support for PCI device 0x1134 in ccp
   - Add support for rk3588's standalone TRNG in rockchip
   - Add Inside Secure SafeXcel EIP-93 crypto engine support in eip93
   - Fix bugs in tegra uncovered by multi-threaded self-test
   - Fix corner cases in hisilicon/sec2

  Others:
   - Add SG_MITER_LOCAL to sg miter
   - Convert ubifs, hibernate and xfrm_ipcomp from legacy API to acomp"

* tag 'v6.15-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (187 commits)
  crypto: testmgr - Add multibuffer acomp testing
  crypto: acomp - Fix synchronous acomp chaining fallback
  crypto: testmgr - Add multibuffer hash testing
  crypto: hash - Fix synchronous ahash chaining fallback
  crypto: arm/ghash-ce - Remove SIMD fallback code path
  crypto: essiv - Replace memcpy() + NUL-termination with strscpy()
  crypto: api - Call crypto_alg_put in crypto_unregister_alg
  crypto: scompress - Fix incorrect stream freeing
  crypto: lib/chacha - remove unused arch-specific init support
  crypto: remove obsolete 'comp' compression API
  crypto: compress_null - drop obsolete 'comp' implementation
  crypto: cavium/zip - drop obsolete 'comp' implementation
  crypto: zstd - drop obsolete 'comp' implementation
  crypto: lzo - drop obsolete 'comp' implementation
  crypto: lzo-rle - drop obsolete 'comp' implementation
  crypto: lz4hc - drop obsolete 'comp' implementation
  crypto: lz4 - drop obsolete 'comp' implementation
  crypto: deflate - drop obsolete 'comp' implementation
  crypto: 842 - drop obsolete 'comp' implementation
  crypto: nx - Migrate to scomp API
  ...
2025-03-29 10:01:55 -07:00
Linus Torvalds
9b960d8cd6 for-6.15/block-20250322
-----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCAAuFiEEwPw5LcreJtl1+l5K99NY+ylx4KYFAmfe8BkQHGF4Ym9lQGtl
 cm5lbC5kawAKCRD301j7KXHgpvTqD/4pOeGi/QfLyocn4TcJcidRGZAvBxecTVuM
 upeyr+dCyCi9Wk+EJKeAFooGe15upzxDxKj06HhCixaLx4etDK78uGV4FMM1Z4oa
 2dtchz1Zd0HyBPgQIUY8OuOgbS7tstMS/KdvL+gr5IjfapeTF+54WVLCD8eVyvO/
 vUIppgJBhrqy2qui4xF2lw4t2COt+/PqinGQuYALn4V4Po9NWA7lSh3ZI4F/byj1
 v68jXyt2fqCAyxwkzRDv4GxhN8c6W+TPJpzivrEAuSkLacovESKztinOrafrBnLR
 zdyO4n0V0yGOXbAcxRbADVA4HUsqhLl4JRnvE5P5zIaD7rkE0UqggF7vrSeCvVA1
 hsi1BhkAMNimKX7CZMnT3dJpxRQj1eDJxpwUAusLHWjMyQbNFhV7WAtthMtVJon8
 lAS4e5+xzjqKhF15GpVg5Lzy8SAwdqgNXwwq2zbM8OaPKG0FpajG8DXAqqcj4fpy
 WXnwg72KZDmRcSNJhVZK6B9xSAwIMXPgH4ClCMP9/xlw8EDpM38MDmzrs35TAVtI
 HGE3Qv9CjFjVj/OG3el+bTGIQJFVgYEVPV5TYfNCpKoxpj5cLn5OQY5u6MJawtgK
 HeDgKv3jw3lHatDALMVfwJqqVlUht0R6SIxtP9WHV+CcFrqN1LJKmdhDQbm7b4XK
 EbbawIsdxw==
 =Ci5m
 -----END PGP SIGNATURE-----

Merge tag 'for-6.15/block-20250322' of git://git.kernel.dk/linux

Pull block updates from Jens Axboe:

 - Fixes for integrity handling

 - NVMe pull request via Keith:
      - Secure concatenation for TCP transport (Hannes)
      - Multipath sysfs visibility (Nilay)
      - Various cleanups (Qasim, Baruch, Wang, Chen, Mike, Damien, Li)
      - Correct use of 64-bit BARs for pci-epf target (Niklas)
      - Socket fix for selinux when used in containers (Peijie)

 - MD pull request via Yu:
      - fix recovery can preempt resync (Li Nan)
      - fix md-bitmap IO limit (Su Yue)
      - fix raid10 discard with REQ_NOWAIT (Xiao Ni)
      - fix raid1 memory leak (Zheng Qixing)
      - fix mddev uaf (Yu Kuai)
      - fix raid1,raid10 IO flags (Yu Kuai)
      - some refactor and cleanup (Yu Kuai)

 - Series cleaning up and fixing bugs in the bad block handling code

 - Improve support for write failure simulation in null_blk

 - Various lock ordering fixes

 - Fixes for locking for debugfs attributes

 - Various ublk related fixes and improvements

 - Cleanups for blk-rq-qos wait handling

 - blk-throttle fixes

 - Fixes for loop dio and sync handling

 - Fixes and cleanups for the auto-PI code

 - Block side support for hardware encryption keys in blk-crypto

 - Various cleanups and fixes

* tag 'for-6.15/block-20250322' of git://git.kernel.dk/linux: (105 commits)
  nvmet: replace max(a, min(b, c)) by clamp(val, lo, hi)
  nvme-tcp: fix selinux denied when calling sock_sendmsg
  nvmet: pci-epf: Always configure BAR0 as 64-bit
  nvmet: Remove duplicate uuid_copy
  nvme: zns: Simplify nvme_zone_parse_entry()
  nvmet: pci-epf: Remove redundant 'flush_workqueue()' calls
  nvmet-fc: Remove unused functions
  nvme-pci: remove stale comment
  nvme-fc: Utilise min3() to simplify queue count calculation
  nvme-multipath: Add visibility for queue-depth io-policy
  nvme-multipath: Add visibility for numa io-policy
  nvme-multipath: Add visibility for round-robin io-policy
  nvmet: add tls_concat and tls_key debugfs entries
  nvmet-tcp: support secure channel concatenation
  nvmet: Add 'sq' argument to alloc_ctrl_args
  nvme-fabrics: reset admin connection for secure concatenation
  nvme-tcp: request secure channel concatenation
  nvme-keyring: add nvme_tls_psk_refresh()
  nvme: add nvme_auth_derive_tls_psk()
  nvme: add nvme_auth_generate_digest()
  ...
2025-03-26 18:08:55 -07:00
Herbert Xu
99585c2192 crypto: testmgr - Add multibuffer acomp testing
Add rudimentary multibuffer acomp testing.  Testing coverage is
extended to compression vectors only.  However, as the compression
vectors are compressed and then decompressed, this covers both
compression and decompression.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-22 07:25:19 +08:00
Herbert Xu
39fc22a8e5 crypto: acomp - Fix synchronous acomp chaining fallback
The synchronous acomp fallback code path is broken because the
completion code path assumes that the state object is always set
but this is only done for asynchronous algorithms.

First of all remove the assumption on the completion code path
by passing in req0 instead of the state.  However, also remove
the conditional setting of the state since it's always in the
request object anyway.

Fixes: b67a026003 ("crypto: acomp - Add request chaining and virtual addresses")
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-22 07:25:19 +08:00
Herbert Xu
8b54e6a8f4 crypto: testmgr - Add multibuffer hash testing
This is based on a patch by Eric Biggers <ebiggers@google.com>.

Add limited self-test for multibuffer hash code path.  This tests
only a single request in chain of a random length.  The other
requests are either all of the same length as the one being tested,
or random lengths between 0 and PAGE_SIZE * 2 * XBUFSIZE.

Potential extensions include testing all requests rather than just
the single one.

Link: https://lore.kernel.org/all/20241001153718.111665-3-ebiggers@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-22 07:25:19 +08:00
Herbert Xu
108ce629cf crypto: hash - Fix synchronous ahash chaining fallback
The synchronous ahash fallback code paths are broken because the
ahash_restore_req assumes there is always a state object.  Fix this
by removing the state from ahash_restore_req and localising it to
the asynchronous completion callback.

Also add a missing synchronous finish call in ahash_def_digest_finish.

Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Fixes: 439963cdc3 ("crypto: ahash - Add virtual address support")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-22 07:25:19 +08:00
Thorsten Blum
fdd305803b crypto: essiv - Replace memcpy() + NUL-termination with strscpy()
Use strscpy() to copy the NUL-terminated string 'p' to the destination
buffer instead of using memcpy() followed by a manual NUL-termination.

No functional changes intended.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Herbert Xu
27b1342534 crypto: api - Call crypto_alg_put in crypto_unregister_alg
Instead of calling cra_destroy by hand, call it through
crypto_alg_put so that the correct unwinding functions are called
through crypto_destroy_alg.

Fixes: 3d6979bf3b ("crypto: api - Add cra_type->destroy hook")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Herbert Xu
5a06ef1f8d crypto: scompress - Fix incorrect stream freeing
Fix stream freeing crash by passing the correct pointer.

Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Eric Biggers
ca17aa6640 crypto: lib/chacha - remove unused arch-specific init support
All implementations of chacha_init_arch() just call
chacha_init_generic(), so it is pointless.  Just delete it, and replace
chacha_init() with what was previously chacha_init_generic().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
fce8b8d598 crypto: remove obsolete 'comp' compression API
The 'comp' compression API has been superseded by the acomp API, which
is a bit more cumbersome to use, but ultimately more flexible when it
comes to hardware implementations.

Now that all the users and implementations have been removed, let's
remove the core plumbing of the 'comp' API as well.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
be457e4e8d crypto: compress_null - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
8beb40458c crypto: zstd - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
a3e43a25ba crypto: lzo - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
d32da55c5b crypto: lzo-rle - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
dbae96559e crypto: lz4hc - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
33335afe33 crypto: lz4 - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
0fd486363c crypto: deflate - drop obsolete 'comp' implementation
No users of the obsolete 'comp' crypto compression API remain, so let's
drop the software deflate version of it.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Ard Biesheuvel
2d985ff007 crypto: 842 - drop obsolete 'comp' implementation
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:06 +08:00
Herbert Xu
ddd0a42671 crypto: scompress - Fix scratch allocation failure handling
If the scratch allocation fails, all subsequent allocations will
silently succeed without actually allocating anything.  Fix this
by only incrementing users when the allocation succeeds.

Fixes: 6a8487a1f2 ("crypto: scompress - defer allocation of scratch buffer to first use")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:39:05 +08:00
Herbert Xu
8a6771cda3 crypto: acomp - Add support for folios
For many users, it's easier to supply a folio rather than an SG
list since they already have them.  Add support for folios to the
acomp interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:35:26 +08:00
Herbert Xu
dfd3bc6977 crypto: acomp - Add async nondma fallback
Add support for passing non-DMA virtual addresses to async drivers
by passing them along to the fallback software algorithm.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:39 +08:00
Herbert Xu
5416b8a741 crypto: acomp - Add ACOMP_REQUEST_ALLOC and acomp_request_alloc_extra
Add ACOMP_REQUEST_ALLOC which is a wrapper around acomp_request_alloc
that falls back to a synchronous stack request if the allocation
fails.

Also add ACOMP_REQUEST_ON_STACK which stores the request on the stack
only.

The request should be freed with acomp_request_free.

Finally add acomp_request_alloc_extra which gives the user extra
memory to use in conjunction with the request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:39 +08:00
Herbert Xu
2c1808e5fe crypto: scomp - Add chaining and virtual address support
Add chaining and virtual address support to all scomp algorithms.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:39 +08:00
Herbert Xu
2d3553ecb4 crypto: scomp - Remove support for some non-trivial SG lists
As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.

However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.

Limit the source SG linearisation buffer to a single page as
that user never goes over that.  The only other potential user
is also unlikely to exceed that (IPComp) and it can easily do
its own linearisation if necessary.

Also keep the destination SG linearisation for IPComp.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:39 +08:00
Herbert Xu
ce3313560c crypto: hash - Use nth_page instead of doing it by hand
Use nth_page instead of adding n to the page pointer.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:39 +08:00
Herbert Xu
480db50095 crypto: hash - Fix test underflow in shash_ahash_digest
The test on PAGE_SIZE - offset in shash_ahash_digest can underflow,
leading to execution of the fast path even if the data cannot be
mapped into a single page.

Fix this by splitting the test into four cases:

1) nbytes > sg->length: More than one SG entry, slow path.
2) !IS_ENABLED(CONFIG_HIGHMEM): fast path.
3) nbytes > (unsigned int)PAGE_SIZE - offset: Two highmem pages, slow path.
4) Highmem fast path.

Fixes: 5f7082ed4f ("crypto: hash - Export shash through hash")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:38 +08:00
Herbert Xu
da6f9bf40a crypto: krb5 - Use SG miter instead of doing it by hand
The function crypto_shash_update_sg iterates through an SG by
hand.  It fails to handle corner cases such as SG entries longer
than a page.  Fix this by using the SG iterator.

Fixes: 348f5669d1 ("crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:38 +08:00
Eric Biggers
7450ebd29c crypto: scatterwalk - simplify map and unmap calling convention
Now that the address returned by scatterwalk_map() is always being
stored into the same struct scatter_walk that is passed in, make
scatterwalk_map() do so itself and return void.

Similarly, now that scatterwalk_unmap() is always being passed the
address field within a struct scatter_walk, make scatterwalk_unmap()
take a pointer to struct scatter_walk instead of the address directly.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-21 17:33:38 +08:00
Hannes Reinecke
3241cd0c6c crypto,fs: Separate out hkdf_extract() and hkdf_expand()
Separate out the HKDF functions into a separate module
to make them available to other callers.
Also add a testsuite to the module with test vectors
from RFC 5869 (and additional vectors for SHA384 and SHA512)
to ensure the integrity of the algorithm.
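
For reference, RFC 5869 defines the two steps as follows (HashLen being
the output size of the underlying hash, "|" denoting concatenation):

	HKDF-Extract(salt, IKM) -> PRK
		PRK = HMAC-Hash(salt, IKM)

	HKDF-Expand(PRK, info, L) -> OKM
		N    = ceil(L / HashLen)
		T(0) = empty string
		T(i) = HMAC-Hash(PRK, T(i-1) | info | octet(i)),  i = 1..N
		OKM  = first L octets of T(1) | T(2) | ... | T(N)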

Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>
2025-03-20 16:53:53 -07:00
Herbert Xu
d2d072a313 crypto: testmgr - Remove NULL dst acomp tests
In preparation for the partial removal of NULL dst acomp support,
remove the tests for them.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:23 +08:00
Herbert Xu
b67a026003 crypto: acomp - Add request chaining and virtual addresses
This adds request chaining and virtual address support to the
acomp interface.

It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
virtual addresses are not suitable for DMA.  This is because
all existing and potential acomp users can provide memory that
is suitable for DMA so there is no need for a fall-back copy
path.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:23 +08:00
Herbert Xu
cff12830e2 crypto: scomp - Disable BH when taking per-cpu spin lock
Disable BH when taking per-cpu spin locks.  This isn't an issue
right now because the only user, zswap, calls scomp from process
context.  However, if scomp is called from softirq context the
spin lock may deadlock.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:23 +08:00
Herbert Xu
3d72ad46a2 crypto: acomp - Move stream management into scomp layer
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp.  This takes the
stress off the user from having to manage large request objects
and setting up their own per-cpu buffers in order to do so.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
0af7304c06 crypto: scomp - Remove tfm argument from alloc/free_ctx
The tfm argument is completely unused and meaningless as the
same stream object is identical over all transforms of a given
algorithm.  Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
3d6979bf3b crypto: api - Add cra_type->destroy hook
Add a cra_type->destroy hook so that resources can be freed after
the last user of a registered algorithm is gone.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
37d451809f crypto: skcipher - Make skcipher_walk src.virt.addr const
Mark the src.virt.addr field in struct skcipher_walk as a pointer
to const data.  This guarantees that the user won't modify the data;
any modification should go through dst.virt.addr to ensure that
flushing is done when necessary.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
db873be6f0 crypto: skcipher - Eliminate duplicate virt.addr field
Reuse the addr field from struct scatter_walk for skcipher_walk.

Keep the existing virt.addr fields but make them const for the
user to access the mapped address.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
131bdceca1 crypto: scatterwalk - Add memcpy_sglist
Add memcpy_sglist which copies one SG list to another.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Herbert Xu
65775cf313 crypto: scatterwalk - Change scatterwalk_next calling convention
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address.  The length is returned directly.

Change the done functions to use this stored address instead of
getting them from the caller.

Split the address into two using a union.  The user should only
access the const version so that it is never changed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 16:21:22 +08:00
Dr. David Alan Gilbert
20238d4944 async_xor: Remove unused 'async_xor_val'
async_xor_val has been unused since commit
a7c224a820 ("md/raid5: convert to new xor compution interface")

Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-15 15:08:53 +08:00
Eric Biggers
eca6828403 crypto: skcipher - fix mismatch between mapping and unmapping order
Local kunmaps have to be unmapped in the opposite order from which they
were mapped.  My recent change flipped the unmap order in the
SKCIPHER_WALK_DIFF case.  Adjust the mapping side to match.

This fixes a WARN_ON_ONCE that was triggered when running the
crypto-self tests on a 32-bit kernel with
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y.

Fixes: 95dbd711b1 ("crypto: skcipher - use the new scatterwalk functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-08 16:24:36 +08:00
Herbert Xu
98330b9a61 crypto: Kconfig - Select LIB generic option
Select the generic LIB options if the Crypto API algorithm is
enabled.  Otherwise this may lead to a build failure as the Crypto
API algorithm always uses the generic implementation.

Fixes: 17ec3e71ba ("crypto: lib/Kconfig - Hide arch options from user")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202503022113.79uEtUuy-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202503022115.9OOyDR5A-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-08 16:24:36 +08:00
Herbert Xu
8f3332eecd crypto: acomp - Remove acomp request flags
The acomp request flags field duplicates the base request flags
and is confusing.  Remove it.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-08 16:23:22 +08:00
Herbert Xu
cc47f07234 crypto: lzo - Fix compression buffer overrun
Unlike the decompression code, the compression code in LZO never
checked for output overruns.  It instead assumes that the caller
always provides enough buffer space, disregarding the buffer length
provided by the caller.

Add a safe compression interface that checks for the end of buffer
before each write.  Use the safe interface in crypto/lzo.
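
The check each output write has to make is essentially this (op and
out_end are illustrative names, not the actual LZO code):

	static inline bool out_fits(const u8 *op, const u8 *out_end, size_t n)
	{
		/* true if n more output bytes still fit before out_end */
		return n <= (size_t)(out_end - op);
	}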

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-08 16:23:22 +08:00
Herbert Xu
c3e054dbdb crypto: api - Move struct crypto_type into internal.h
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-08 16:23:22 +08:00
David Howells
fc0cf10c04 crypto/krb5: Implement crypto self-testing
Implement self-testing infrastructure to test the pseudo-random function,
key derivation, encryption and checksumming.

Add the testing data from rfc8009 to test AES + HMAC-SHA2.

Add the testing data from rfc6803 to test Camellia.  Note some encryption
test vectors here are incomplete, lacking the key usage number needed to
derive Ke and Ki, and there are errata for this:

	https://www.rfc-editor.org/errata_search.php?rfc=6803

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:56:47 +00:00
David Howells
742e38d4d4 crypto/krb5: Implement the Camellia enctypes from rfc6803
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from
rfc6803.

Note that the test vectors in rfc6803 for encryption are incomplete,
lacking the key usage number needed to derive Ke and Ki, and there are
errata for this:

	https://www.rfc-editor.org/errata_search.php?rfc=6803

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:55:23 +00:00
David Howells
6c3c0e86c2 crypto/krb5: Implement the AES enctypes from rfc8009
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192
enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto
scheme.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:53:55 +00:00
David Howells
7c164b66b2 crypto/krb5: Implement the AES enctypes from rfc3962
Implement the aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96 enctypes
from rfc3962, using the rfc3961 kerberos 5 simplified crypto scheme.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:52:48 +00:00
David Howells
348f5669d1 crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic
Add functions that sign and verify a message according to rfc3961 sec 5.4,
using Kc to generate a checksum and insert it into the MIC field in the
skbuff in the sign phase then checksum the data and compare it to the MIC
in the verify phase.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:51:47 +00:00
David Howells
00244da40f crypto/krb5: Implement the Kerberos5 rfc3961 encrypt and decrypt functions
Add functions that encrypt and decrypt a message according to rfc3961 sec
5.3, using Ki to checksum the data to be secured and Ke to encrypt it
during the encryption phase, then decrypting with Ke and verifying the
checksum with Ki in the decryption phase.
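
Glossing over padding and cipher state, the rfc3961 simplified profile
describes this roughly as follows (a paraphrase, not the in-tree code;
"conf" is a random confounder and "|" is concatenation):

    encrypt: ciphertext = E(Ke, conf | plaintext) | HMAC(Ki, conf | plaintext)
    decrypt: split off the trailing checksum, recover conf | plaintext
             with D(Ke, ...), recompute HMAC(Ki, conf | plaintext) and
             compare it with the received checksum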

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:50:43 +00:00
David Howells
8bcdbfa89f crypto/krb5: Provide RFC3961 setkey packaging functions
Provide functions to derive keys according to RFC3961 (or load the derived
keys for the selftester where only derived keys are available) and to
package them up appropriately for passing to a krb5enc AEAD setkey or a
hash setkey function.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:49:40 +00:00
David Howells
c8d8f6af66 crypto/krb5: Implement the Kerberos5 rfc3961 key derivation
Implement the simplified crypto profile for Kerberos 5 rfc3961 with the
pseudo-random function, PRF(), from section 5.3 and the key derivation
function, DK() from section 5.1.
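
For reference, rfc3961 defines these roughly as follows (a loose
paraphrase of sections 5.1 and 5.3, not the in-tree code; E is the
enctype's base cipher and H a hash function):

    DK(Key, Constant) = random-to-key(DR(Key, Constant))
    DR(Key, Constant) = k-truncate(E(Key, n-fold(Constant), ...))
    PRF(Key, octets)  = E(DK(Key, "prf"), truncate(H(octets)), ...)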

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:48:34 +00:00
David Howells
41cf1d1e8a crypto/krb5: Provide infrastructure and key derivation
Provide key derivation interface functions and a helper to implement the
PRF+ function from rfc4402.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:47:07 +00:00
David Howells
0392b110cc crypto/krb5: Add an API to perform requests
Add an API by which users of the krb5 crypto library can perform crypto
requests, such as encrypt, decrypt, get_mic and verify_mic.  These
functions take the previously prepared crypto objects to work on.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:45:52 +00:00
David Howells
a9c27d2d87 crypto/krb5: Add an API to alloc and prepare a crypto object
Add an API by which users of the krb5 crypto library can get an allocated
and keyed crypto object.

For encryption-mode operation, an AEAD object is returned; for
checksum-mode operation, a synchronous hash object is returned.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:44:27 +00:00
David Howells
025ac491f4 crypto/krb5: Add an API to query the layout of the crypto section
Provide some functions to allow the caller to find out about the layout of
the crypto section:

 (1) Calculate, for a given size of data, how big a buffer will be
     required to hold it and where the data will be within it.

 (2) Calculate, for a given amount of buffer space, the maximum size of
     data that will fit therein, and where it will start.

 (3) Determine where the data will be in a received message.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:43:14 +00:00
David Howells
3936f02bf2 crypto/krb5: Implement Kerberos crypto core
Provide core structures, an encoding-type registry and basic module and
config bits for a generic Kerberos crypto library.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:41:54 +00:00
David Howells
1b80b6f446 crypto/krb5: Test manager data
Add Kerberos crypto tests to the test manager database.  This covers:

	camellia128-cts-cmac		samples from RFC6803
	camellia256-cts-cmac		samples from RFC6803
	aes128-cts-hmac-sha256-128	samples from RFC8009
	aes256-cts-hmac-sha384-192	samples from RFC8009

but not:

	aes128-cts-hmac-sha1-96
	aes256-cts-hmac-sha1-96

as the test samples in RFC3962 don't seem to be suitable.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:40:56 +00:00
David Howells
d1775a177f crypto: Add 'krb5enc' hash and cipher AEAD algorithm
Add an AEAD template that does hash-then-cipher (unlike authenc, which does
cipher-then-hash).  This is required for a number of Kerberos 5 encoding
types.
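
Presumably the template is instantiated much like authenc, with a hash
and a cipher; a hypothetical allocation might look like this (the
instantiation string below is illustrative, not necessarily what the
krb5 code requests):

    struct crypto_aead *aead;

    aead = crypto_alloc_aead("krb5enc(hmac(sha256),cts(cbc(aes)))", 0, 0);
    if (IS_ERR(aead))
            return PTR_ERR(aead);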

[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned.  It may be necessary to do this here too for performance reasons -
but there are considerations both ways:

 (1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
     x86_64 that should be used instead of having two ciphers;

 (2) secondly, none of the hardware offload drivers seem to offer CTS
     support (Intel QAT does not, for instance).

However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
2025-03-02 21:39:34 +00:00
Herbert Xu
17ec3e71ba crypto: lib/Kconfig - Hide arch options from user
The ARCH_MAY_HAVE patch missed arm64, mips and s390.  But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.

As wireguard, the primary user of all these options, selects the arch
options anyway, make the same selections at the lib/crypto option level
and hide the arch options from the user.

Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.

Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers).  Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.

Fixes: 1047e21aec ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:21:47 +08:00
Herbert Xu
f79d2d2852 crypto: skcipher - Use restrict rather than hand-rolling accesses
Rather than hand-rolling accesses to 'alg' to avoid the aliasing issue
which leads to unnecessary reloads, use the __restrict keyword to tell
the compiler explicitly that there is no aliasing.

This generates equivalent if not superior code on x86 with gcc 12.

Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
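
A generic illustration of the effect (not the kernel diff itself): with
restrict-qualified parameters the compiler is allowed to keep fields of
*alg cached across stores made through the other pointer:

    struct alg  { unsigned int blocksize, ivsize; };
    struct walk { unsigned int blocksize, ivsize; };

    /* Without __restrict, the store through 'w' could alias '*alg',
     * forcing alg->ivsize to be reloaded after the first assignment. */
    static void walk_init(struct walk *__restrict w,
                          const struct alg *__restrict alg)
    {
            w->blocksize = alg->blocksize;
            w->ivsize    = alg->ivsize;     /* alg->* may stay in registers */
    }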

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:21:47 +08:00
Eric Biggers
641938d3bb crypto: scatterwalk - don't split at page boundaries when !HIGHMEM
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous.  To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.

For now, still limit segments to PAGE_SIZE.  This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a
page boundary.  For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary.  These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer.  That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code.  This change solves the problem; all these
messages now get passed to the assembly code in one part.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:44 +08:00
Eric Biggers
fa94e45436 crypto: scatterwalk - remove obsolete functions
Remove various functions that are no longer used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:44 +08:00
Eric Biggers
95dbd711b1 crypto: skcipher - use the new scatterwalk functions
Convert skcipher_walk to use the new scatterwalk functions.

This includes a few changes to exactly where the different parts of the
iteration happen.  For example the dcache flush that previously happened
in scatterwalk_done() now happens in scatterwalk_dst_done() or in
memcpy_to_scatterwalk().  Advancing to the next sg entry now happens
just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:44 +08:00
Eric Biggers
c89edd931a crypto: aegis - use the new scatterwalk functions
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:43 +08:00
Eric Biggers
cb25dbb605 crypto: skcipher - use scatterwalk_start_at_pos()
In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done().  This is simpler and faster.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:43 +08:00
Eric Biggers
bb699e724f crypto: scatterwalk - add new functions for copying data
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively.  They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>.  Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.

The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist.  E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist().  Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.
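
The wrapper relationship described above comes out roughly as below
(parameter names assumed; the argument order mirrors memcpy_from_page()
and memcpy_to_page() as noted):

    static inline void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
                                                unsigned int start,
                                                unsigned int nbytes, int out)
    {
            if (out)
                    memcpy_to_sglist(sg, start, buf, nbytes);
            else
                    memcpy_from_sglist(buf, sg, start, nbytes);
    }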

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist.  These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively.  Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.

scatterwalk_copychunks() itself is left unchanged for now.  It will be
removed after its callers are updated to use other functions instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:43 +08:00
Eric Biggers
e21d01a2a3 crypto: scatterwalk - add new functions for skipping data
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk.  Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.
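
Conceptually the new helper is just the composition stated above (a
sketch of the equivalence, not necessarily the literal implementation;
parameter names assumed):

    static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
                                                struct scatterlist *sg,
                                                unsigned int pos)
    {
            scatterwalk_start(walk, sg);
            scatterwalk_skip(walk, pos);
    }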

Later patches will convert various users to use these functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:43 +08:00
Arnd Bergmann
f307c87ea0 crypto: bpf - Add MODULE_DESCRIPTION for skcipher
All modules should have a description; building with extra warnings
enabled prints this out for the bpf_crypto_skcipher module:

WARNING: modpost: missing MODULE_DESCRIPTION() in crypto/bpf_crypto_skcipher.o

Add a description line.
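
The fix is a single line of this shape (the exact description text in
the applied patch may differ):

    MODULE_DESCRIPTION("Symmetric key cipher support for BPF crypto framework");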

Fixes: fda4f71282 ("bpf: crypto: add skcipher to bpf crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-03-02 15:19:43 +08:00
Herbert Xu
9e01aaa103 crypto: ahash - Set default reqsize from ahash_alg
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 16:09:05 +08:00
Herbert Xu
439963cdc3 crypto: ahash - Add virtual address support
This patch adds virtual address support to ahash.  Virtual addresses
were previously only supported through shash.  The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.

The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining.  Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
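
A hedged usage sketch, assuming the argument order mirrors
ahash_request_set_crypt() with a virtual source buffer and a result
buffer in place of the scatterlist:

    /* req was allocated with ahash_request_alloc(tfm, GFP_KERNEL) */
    ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
    ahash_request_set_virt(req, src, digest, len);  /* instead of _set_crypt() */
    err = crypto_ahash_digest(req);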

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 16:09:02 +08:00
Herbert Xu
c664f03417 crypto: tcrypt - Restore multibuffer ahash tests
This patch is a revert of commit 388ac25efc.

As multibuffer ahash is coming back in the form of request chaining,
restore the multibuffer ahash tests using the new interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 16:01:55 +08:00
Herbert Xu
f2ffe5a918 crypto: hash - Add request chaining API
This adds request chaining to the ahash interface.  Request chaining
allows multiple requests to be submitted in one shot.  An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN.  If this bit is not set, the API will break
up chained requests and submit them one-by-one.

A new err field is added to struct crypto_async_request to record
the return value for each individual request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 16:01:53 +08:00
Herbert Xu
075db21426 crypto: ahash - Only save callback and data in ahash_save_req
As unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:59:59 +08:00
Herbert Xu
ee509efc74 crypto: skcipher - Zap type in crypto_alloc_sync_skcipher
The type needs to be zeroed as otherwise the user could use it to
allocate an asynchronous sync skcipher.
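
Caller sketch ("cbc(aes)" is only an example algorithm name): after this
change, whatever is passed as the type argument is zeroed internally, so
the allocation below can never hand back an asynchronous transform:

    struct crypto_sync_skcipher *tfm;

    tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
    if (IS_ERR(tfm))
            return PTR_ERR(tfm);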

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:56:03 +08:00
Herbert Xu
7505436e29 crypto: api - Fix larval relookup type and mask
When the lookup is retried after instance construction, it uses
the type and mask from the larval, which may not match the values
used by the caller.  For example, if the caller is requesting for
a !NEEDS_FALLBACK algorithm, it may end up getting an algorithm
that needs fallbacks.

Fix this by making the caller supply the type/mask and using that
for the lookup.

Reported-by: Coiby Xu <coxu@redhat.com>
Fixes: 96ad595520 ("crypto: api - Remove instance larval fulfilment")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:56:03 +08:00
Herbert Xu
dcc47a028c crypto: null - Use spin lock instead of mutex
As the null algorithm may be freed in softirq context through
af_alg, use spin locks instead of mutexes to protect the default
null algorithm.

Reported-by: syzbot+b3e02953598f447d4d2a@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:56:03 +08:00
Thorsten Blum
844c683d1f crypto: aead - use str_yes_no() helper in crypto_aead_show()
Remove hard-coded strings by using the str_yes_no() helper function.
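
The pattern is the usual one (an illustrative line; the exact
seq_printf() calls in crypto_aead_show() are not reproduced here):

    seq_printf(m, "async        : %s\n",
               str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));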

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:56:03 +08:00
Thorsten Blum
77cb2f63ad crypto: ahash - use str_yes_no() helper in crypto_ahash_show()
Remove hard-coded strings by using the str_yes_no() helper function.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-22 15:56:03 +08:00
Lukas Wunner
b16510a530 crypto: ecdsa - Harden against integer overflows in DIV_ROUND_UP()
Herbert notes that DIV_ROUND_UP() may overflow unnecessarily if an ecdsa
implementation's ->key_size() callback returns an unusually large value.
Herbert instead suggests (for a division by 8):

  X / 8 + !!(X & 7)

Based on this formula, introduce a generic DIV_ROUND_UP_POW2() macro and
use it in lieu of DIV_ROUND_UP() for ->key_size() return values.

Additionally, use the macro in ecc_digits_from_bytes(), whose "nbytes"
parameter is a ->key_size() return value in some instances, or a
user-specified ASN.1 length in the case of ecdsa_get_signature_rs().
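
A plausible shape for such a macro, generalizing the formula above to
any power-of-two divisor (the in-tree definition may differ in detail):

    #define DIV_ROUND_UP_POW2(n, d)  ((n) / (d) + !!((n) & ((d) - 1)))

For d == 8 this reduces to X / 8 + !!(X & 7) and, unlike DIV_ROUND_UP(),
it avoids the addition that can overflow when n is near the top of its
type.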

Link: https://lore.kernel.org/r/Z3iElsILmoSu6FuC@gondor.apana.org.au/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-09 18:08:12 +08:00
Lukas Wunner
f4144b6bb7 crypto: sig - Prepare for algorithms with variable signature size
The callers of crypto_sig_sign() assume that the signature size is
always equivalent to the key size.

This happens to be true for RSA, which is currently the only algorithm
implementing the ->sign() callback.  But it is false e.g. for X9.62
encoded ECDSA signatures because they have variable length.

Prepare for addition of a ->sign() callback to such algorithms by
letting the callback return the signature size (or a negative integer
on error).  When testing the ->sign() callback in test_sig_one(),
use crypto_sig_maxsize() instead of crypto_sig_keysize() to verify that
the test vector's signature does not exceed an algorithm's maximum
signature size.

There has been a relatively recent effort to upstream ECDSA signature
generation support which may benefit from this change:

https://lore.kernel.org/linux-crypto/20220908200036.2034-1-ignat@cloudflare.com/

However the main motivation for this commit is to reduce the number of
crypto_sig_keysize() callers:  This function is about to be changed to
return the size in bits instead of bytes and that will require amending
most callers to divide the return value by 8.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-09 18:08:12 +08:00
Thorsten Blum
1fe244c591 crypto: skcipher - use str_yes_no() helper in crypto_skcipher_show()
Remove hard-coded strings by using the str_yes_no() helper function.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2025-02-09 18:08:11 +08:00
Eric Biggers
8522104f75 crypto: crct10dif - remove from crypto API
Remove the "crct10dif" shash algorithm from the crypto API.  It has no
known user now that the lib is no longer built on top of it.  It has no
remaining references in kernel code.  The only other potential users
would be the usual components that allow specifying arbitrary hash
algorithms by name, namely AF_ALG and dm-integrity.  However there are
no indications that "crct10dif" is being used with these components.
Debian Code Search and web searches don't find anything relevant, and
explicitly grepping the source code of the usual suspects (cryptsetup,
libell, iwd) finds no matches either.  "crc32" and "crc32c" are used in
a few more places, but that doesn't seem to be the case for "crct10dif".

crc_t10dif_update() is also tested by crc_kunit now, so the test
coverage provided via the crypto self-tests is no longer needed.

Also note that the "crct10dif" shash algorithm was inconsistent with the
rest of the shash API in that it wrote the digest in CPU endianness,
making the resulting byte array differ on little endian vs. big endian
platforms.  This means it was effectively just built for use by the lib
functions, and it was not actually correct to treat it as "just another
hash function" that could be dropped in via the shash API.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20250206173857.39794-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08 20:06:30 -08:00
Eric Biggers
68ea3c2ae0 lib/crc32: remove "_le" from crc32c base and arch functions
Following the standardization on crc32c() as the lib entry point for the
Castagnoli CRC32 instead of the previous mix of crc32c(), crc32c_le(),
and __crc32c_le(), make the same change to the underlying base and arch
functions that implement it.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08 20:06:30 -08:00
Eric Biggers
8df3682904 lib/crc32: standardize on crc32c() name for Castagnoli CRC32
For historical reasons, the Castagnoli CRC32 is available under 3 names:
crc32c(), crc32c_le(), and __crc32c_le().  Most callers use crc32c().
The more verbose versions are not really warranted; there is no "_be"
version that the "_le" version needs to be differentiated from, and the
leading underscores are pointless.

Therefore, let's standardize on just crc32c().  Remove the other two
names, and update callers accordingly.

Specifically, the new crc32c() comes from what was previously
__crc32c_le(), so compared to the old crc32c() it now takes a size_t
length rather than unsigned int, and it's now in linux/crc32.h instead
of just linux/crc32c.h (which includes linux/crc32.h).
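
Callers therefore end up looking like this (a sketch; the seed value is
whatever the caller was already using):

    #include <linux/crc32.h>

    u32 csum = crc32c(~0, buf, len);    /* len is now a size_t */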

Later patches will also rename __crc32c_le_combine(), crc32c_le_base(),
and crc32c_le_arch().

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08 20:06:30 -08:00
Eric Biggers
0fcec0b73a crypto: crc64-rocksoft - remove from crypto API
Remove crc64-rocksoft from the crypto API.  It has no known user now
that the lib is no longer built on top of it.  It was also added much
more recently than the longstanding crc32 and crc32c.  Unlike crc32 and
crc32c, crc64-rocksoft is also not mentioned in the dm-integrity
documentation and there are no references to it anywhere in the
cryptsetup git repo, so it is unlikely to have any user there either.

Also, this CRC variant is named incorrectly; it has nothing to do with
Rocksoft and should be called crc64-nvme.  That is yet another reason to
remove it from the crypto API; we would not want anyone to start
depending on the current incorrect algorithm name of crc64-rocksoft.

Note that this change temporarily makes this CRC variant not be covered
by any tests, as previously it was relying on the crypto self-tests.
This will be fixed by adding this CRC variant to crc_kunit.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Acked-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250130035130.180676-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
2025-02-08 20:06:22 -08:00
Joel Granados
1751f872cc treewide: const qualify ctl_tables where applicable
Add the const qualifier to all the ctl_tables in the tree except for
watchdog_hardlockup_sysctl, memory_allocation_profiling_sysctls,
loadpin_sysctl_table and the ones calling register_net_sysctl (./net,
drivers/infiniband dirs). These are special cases as they use a
registration function with a non-const qualified ctl_table argument or
modify the arrays before passing them on to the registration function.

Constifying ctl_table structs will prevent the modification of
proc_handler function pointers as the arrays would reside in .rodata.
This is made possible after commit 78eb4ea25c ("sysctl: treewide:
constify the ctl_table argument of proc_handlers") constified all the
proc_handlers.
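
The resulting shape of a converted table (illustrative names):

    static int my_knob;

    static const struct ctl_table my_sysctls[] = {
            {
                    .procname     = "my_knob",
                    .data         = &my_knob,
                    .maxlen       = sizeof(int),
                    .mode         = 0644,
                    .proc_handler = proc_dointvec,
            },
    };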

Created this by running an spatch followed by a sed command:
Spatch:
    virtual patch

    @
    depends on !(file in "net")
    disable optional_qualifier
    @

    identifier table_name != {
      watchdog_hardlockup_sysctl,
      iwcm_ctl_table,
      ucma_ctl_table,
      memory_allocation_profiling_sysctls,
      loadpin_sysctl_table
    };
    @@

    + const
    struct ctl_table table_name [] = { ... };

sed:
    sed --in-place \
      -e "s/struct ctl_table .table = &uts_kern/const struct ctl_table *table = \&uts_kern/" \
      kernel/utsname_sysctl.c

Reviewed-by: Song Liu <song@kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> # for kernel/trace/
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> # SCSI
Reviewed-by: Darrick J. Wong <djwong@kernel.org> # xfs
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Bill O'Donnell <bodonnel@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Acked-by: Anna Schumaker <anna.schumaker@oracle.com>
Signed-off-by: Joel Granados <joel.granados@kernel.org>
2025-01-28 13:48:37 +01:00