Commit ddd0a42671 only increments scomp_scratch_users when it was 0,
causing a panic when using ipcomp:
Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN NOPTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 1 UID: 0 PID: 619 Comm: ping Tainted: G N 6.15.0-rc3-net-00032-ga79be02bba5c #41 PREEMPT(full)
Tainted: [N]=TEST
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
RIP: 0010:inflate_fast+0x5a2/0x1b90
[...]
Call Trace:
<IRQ>
zlib_inflate+0x2d60/0x6620
deflate_sdecompress+0x166/0x350
scomp_acomp_comp_decomp+0x45f/0xa10
scomp_acomp_decompress+0x21/0x120
acomp_do_req_chain+0x3e5/0x4e0
ipcomp_input+0x212/0x550
xfrm_input+0x2de2/0x72f0
[...]
Kernel panic - not syncing: Fatal exception in interrupt
Kernel Offset: disabled
---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
Instead, let's keep the old increment, and decrement back to 0 if the
scratch allocation fails.
Fixes: ddd0a42671 ("crypto: scompress - Fix scratch allocation failure handling")
Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Provide an option to handle the partial blocks in the shash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
Rather than duplicating the partial block handling many times,
add this functionality to the shash API.
It is optional (e.g., hmac would never need this by relying on
the partial block handling of the underlying hash), and to enable
it set the bit CRYPTO_AHASH_ALG_BLOCK_ONLY.
The export format is always that of the underlying hash export,
plus the partial block buffer, followed by a single byte for the
partial block length.
Set the bit CRYPTO_AHASH_ALG_FINAL_NONZERO to withhold an extra
byte in the partial block. This will come in handy when this
is extended to ahash where hardware often can't deal with a
zero-length final.
It will also be used for algorithms requiring an extra block for
finalisation (e.g., cmac).
As an optimisation, set the bit CRYPTO_AHASH_ALG_FINUP_MAX if
the algorithm wishes to get as much data as possible instead of
just the last partial block.
The descriptor will be zeroed after finalisation.
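As a rough picture only (the struct, the 64-byte block size and
EXAMPLE_STATE_SIZE below are invented for illustration, not taken from
the patch), the export blob therefore has this shape:

#include <linux/types.h>

#define EXAMPLE_STATE_SIZE 104 /* made-up size of the underlying export */

struct example_block_only_export {
    u8 hash_state[EXAMPLE_STATE_SIZE]; /* underlying hash export */
    u8 partial_block[64];              /* buffered partial block */
    u8 partial_len;                    /* single byte: bytes buffered */
};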
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress off-by-one patch. The
merge resolution is non-trivial as the dst handling code has been
moved in front of the src.
Fix off-by-one bug in the last page calculation for src and dst.
Reported-by: Nhat Pham <nphamcs@gmail.com>
Fixes: 2d3553ecb4 ("crypto: scomp - Remove support for some non-trivial SG lists")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The return statements were missing, which caused REQ_CHAIN algorithms
to execute twice for every request.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 64929fe8c0 ("crypto: acomp - Remove request chaining")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 99585c2192.
Remove the acomp multibuffer tests as they are buggy.
Reported-by: Dmitry Antipov <dmantipov@yandex.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The recent code changes in this function triggered a false-positive
maybe-uninitialized warning in software_key_query. Rearrange the
code by moving the sig/tfm variables into the if clause where they
are actually used.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add an atomic flag to the acomp walk and use that in deflate.
Due to the use of a per-cpu context, it is impossible to sleep
during the walk in deflate.
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202504151654.4c3b6393-lkp@intel.com
Fixes: 08cabc7d3c ("crypto: deflate - Convert to acomp")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the example of the crc32, crc32c, and chacha code, make the
crypto subsystem register both generic and architecture-optimized
poly1305 shash algorithms, both implemented on top of the appropriate
library functions. This eliminates the need for every architecture to
implement the same shash glue code.
Note that the poly1305 shash requires that the key be prepended to the
data, which differs from the library functions where the key is simply a
parameter to poly1305_init(). Previously this was handled at a fairly
low level, polluting the library code with shash-specific code.
Reorganize things so that the shash code handles this quirk itself.
Also, to register the architecture-optimized shashes only when
architecture-optimized code is actually being used, add a function
poly1305_is_arch_optimized() and make each arch implement it. Change
each architecture's Poly1305 module_init function to arch_initcall so
that the CPU feature detection is guaranteed to run before
poly1305_is_arch_optimized() gets called by crypto/poly1305.c. (In
cases where poly1305_is_arch_optimized() just returns true
unconditionally, using arch_initcall is not strictly needed, but it's
still good to be consistent across architectures.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The block size of a hash algorithm is meant to be the number of
bytes its block function can handle. For cbcmac that should be
the block size of the underlying block cipher instead of one.
Set the block size of all cbcmac implementations accordingly.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
HASH_REQUEST_ON_STACK(req, tfm);
...
err = crypto_ahash_digest(req);
/* The request cannot complete synchronously. */
if (err == -EAGAIN) {
/* This will not fail. */
req = HASH_REQUEST_CLONE(req, gfp);
/* Redo operation. */
err = crypto_ahash_digest(req);
}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As all users of the dynamic descsize have been converted to use
a static one instead, remove support for dynamic descsize.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than setting descsize in init_tfm, make it an algorithm
attribute and set it during instance construction.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The current algorithm unregistration mechanism originated from
software crypto. The code relies on module reference counts to
stop in-use algorithms from being unregistered. Therefore if
the unregistration function is reached, it is assumed that the
module reference count has hit zero and thus the algorithm reference
count should be exactly 1.
This is completely broken for hardware devices, which can be
unplugged at random.
Fix this by allowing algorithms to be destroyed later if a destroy
callback is provided.
Reported-by: Sean Anderson <sean.anderson@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the destination buffer has a fixed length, strscpy() automatically
determines its size using sizeof() when the argument is omitted. This
makes the explicit size argument unnecessary - remove it.
No functional changes intended.
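For illustration (the buffer and source names are hypothetical):

/* Before: explicit size argument */
strscpy(name, src, sizeof(name));

/* After: the size is inferred from the fixed-length destination */
strscpy(name, src);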
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When user space issues a KEYCTL_PKEY_QUERY system call for a NIST P521
key, the key_size is incorrectly reported as 528 bits instead of 521.
That's because the key size obtained through crypto_sig_keysize() is in
bytes and software_key_query() multiplies by 8 to yield the size in bits.
The underlying assumption is that the key size is always a multiple of 8.
With the recent addition of NIST P521, that's no longer the case.
Fix by returning the key_size in bits from crypto_sig_keysize() and
adjusting the calculations in software_key_query().
The ->key_size() callbacks of sig_alg algorithms now return the size in
bits, whereas the ->digest_size() and ->max_size() callbacks return the
size in bytes. This matches with the units in struct keyctl_pkey_query.
Fixes: a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
KEYCTL_PKEY_QUERY system calls for ecdsa keys return the key size as
max_enc_size and max_dec_size, even though such keys cannot be used for
encryption/decryption. They're exclusively for signature generation or
verification.
Only rsa keys with pkcs1 encoding can also be used for encryption or
decryption.
Return 0 instead for ecdsa keys (as well as ecrdsa keys).
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than storing the folio as is and handling it later, convert
it to a scatterlist right away.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a new helper ACOMP_REQUEST_CLONE that will transform a stack
request into a dynamically allocated one if possible, and otherwise
switch it over to the synchronous fallback transform. The intended
usage is:
ACOMP_REQUEST_ON_STACK(req, tfm);
...
err = crypto_acomp_compress(req);
/* The request cannot complete synchronously. */
if (err == -EAGAIN) {
/* This will not fail. */
req = ACOMP_REQUEST_CLONE(req, gfp);
/* Redo operation. */
err = crypto_acomp_compress(req);
}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a helper to create an on-stack fallback request from a given
request. Use this helper in acomp_do_nondma.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use kzalloc() to zero out the one-element array instead of using
kmalloc() followed by a manual NUL-termination.
No functional changes intended.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 99585c2192.
Remove the acomp multibuffer tests so that the interface can be
redesigned.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress and ahash fixes. The
scompress fix becomes mostly unnecessary as the bugs no longer
exist with the new acompress code. However, keep the NULL assignment
in crypto_acomp_free_streams so that if the user decides to call
crypto_acomp_alloc_streams again it will work.
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to use scomp_free_streams to free the partially allocated
streams in the allocation error path, move the alg->stream assignment
to the beginning. Also check for error pointers in scomp_free_streams
before freeing the ctx.
Finally set alg->stream to NULL to not break subsequent attempts
to allocate the streams.
Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Reported-by: syzkaller <syzkaller@googlegroups.com>
Co-developed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Co-developed-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge crypto tree to pick up scompress and caam fixes. The scompress
fix has a non-trivial resolution as the code in question has moved
over to acompress.
As the scomp streams are freed when an algorithm is unregistered,
it is possible that the algorithm has never been used at all (e.g.,
an algorithm that does not have a self-test). So test whether the
streams exist before freeing them.
Reported-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Fixes: 3d72ad46a2 ("crypto: acomp - Move stream management into scomp layer")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
<crypto/internal/chacha.h> is now included only by crypto/chacha.c, so
fold it into there.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Following the example of the crc32 and crc32c code, make the crypto
subsystem register both generic and architecture-optimized chacha20,
xchacha20, and xchacha12 skcipher algorithms, all implemented on top of
the appropriate library functions. This eliminates the need for every
architecture to implement the same skcipher glue code.
To register the architecture-optimized skciphers only when
architecture-optimized code is actually being used, add a function
chacha_is_arch_optimized() and make each arch implement it. Change each
architecture's ChaCha module_init function to arch_initcall so that the
CPU feature detection is guaranteed to run before
chacha_is_arch_optimized() gets called by crypto/chacha.c. In the case
of s390, remove the CPU feature based module autoloading, which is no
longer needed since the module just gets pulled in via function linkage.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As deflate has been converted over to acomp, and cavium zip has been
removed, there are no longer any scomp algorithms that can be used
by IPsec.
Since IPsec was the only user of the dst scratch buffer, remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is based on work by Ard Biesheuvel <ardb@kernel.org>.
Convert deflate from scomp to acomp. This removes the need for
the caller to linearise the source and destination.
Link: https://lore.kernel.org/all/20230718125847.3869700-21-ardb@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the dynamic stream allocation code into acomp and make it
available as a helper for acomp algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per-cpu buffers can be wasteful when the number of CPUs is large,
especially if the buffer itself is likely to never be used. Reduce
such wastage by only allocating them on first use of a particular
CPU.
On start-up allocate a single buffer on the first possible CPU.
For every other CPU a work struct will be scheduled on first use
to allocate the buffer for that CPU. Until the allocation succeeds,
simply use the first CPU's buffer, which is protected by a spin lock.
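A deliberately simplified sketch of this pattern follows; all names
are invented for illustration and this is not the actual scomp code:

#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(void *, scratch_buf);
static DEFINE_SPINLOCK(first_cpu_lock);

/*
 * Return this CPU's buffer if it has already been allocated.  Otherwise
 * schedule the (hypothetical) allocation work for this CPU and fall back
 * to the first possible CPU's buffer, taken under first_cpu_lock, which
 * the caller releases when done.  Assumes the caller runs on a stable
 * CPU, e.g. with BH disabled.
 */
static void *get_scratch(struct work_struct *alloc_work)
{
    void *buf = *this_cpu_ptr(&scratch_buf);

    if (buf)
        return buf;

    schedule_work(alloc_work);
    spin_lock(&first_cpu_lock);
    return *per_cpu_ptr(&scratch_buf, cpumask_first(cpu_possible_mask));
}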
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the cra_type->destroy call out of crypto_alg_put and into
crypto_unregister_alg and crypto_free_instance. This ensures
that it's always done in process context so calls such as flush_work
can be done.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 9ae4577bc0 ("crypto: api - Use work queue in
crypto_destroy_instance") introduced a work struct to free an
instance after the last user goes away.
Move the delayed work from the instance into its template so that
when the template is unregistered it can ensure that all its
instances have been freed before returning.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This reverts commit 8b54e6a8f4.
The multibuffer tests have a number of bugs. For example, the SG
lists for the filler requests weren't initialised properly, and
they fail to take data-keyed algorithms such as poly1305 into account.
More importantly, the chaining interface itself is under review.
Revert this until the interface is fully settled.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202503281658.7a078821-lkp@intel.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add rudimentary multibuffer acomp testing. Testing coverage is
extended to compression vectors only. However, as the compression
vectors are compressed and then decompressed, this covers both
compression and decompression.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The synchronous acomp fallback code path is broken because the
completion code path assumes that the state object is always set
but this is only done for asynchronous algorithms.
First of all remove the assumption on the completion code path
by passing in req0 instead of the state. However, also remove
the conditional setting of the state since it's always in the
request object anyway.
Fixes: b67a026003 ("crypto: acomp - Add request chaining and virtual addresses")
Reported-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This is based on a patch by Eric Biggers <ebiggers@google.com>.
Add a limited self-test for the multibuffer hash code path. This
tests only a single request of random length within a chain. The other
requests are either all of the same length as the one being tested,
or random lengths between 0 and PAGE_SIZE * 2 * XBUFSIZE.
Potential extensions include testing all requests rather than just
the single one.
Link: https://lore.kernel.org/all/20241001153718.111665-3-ebiggers@kernel.org/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The synchronous ahash fallback code paths are broken because
ahash_restore_req assumes there is always a state object. Fix this
by removing the state from ahash_restore_req and localising it to
the asynchronous completion callback.
Also add a missing synchronous finish call in ahash_def_digest_finish.
Fixes: f2ffe5a918 ("crypto: hash - Add request chaining API")
Fixes: 439963cdc3 ("crypto: ahash - Add virtual address support")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use strscpy() to copy the NUL-terminated string 'p' to the destination
buffer instead of using memcpy() followed by a manual NUL-termination.
No functional changes intended.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of calling cra_destroy by hand, call it through
crypto_alg_put so that the correct unwinding functions are called
through crypto_destroy_alg.
Fixes: 3d6979bf3b ("crypto: api - Add cra_type->destroy hook")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All implementations of chacha_init_arch() just call
chacha_init_generic(), so it is pointless. Just delete it, and replace
chacha_init() with what was previously chacha_init_generic().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' compression API has been superseded by the acomp API, which
is a bit more cumbersome to use, but ultimately more flexible when it
comes to hardware implementations.
Now that all the users and implementations have been removed, let's
remove the core plumbing of the 'comp' API as well.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
No users of the obsolete 'comp' crypto compression API remain, so let's
drop the software deflate version of it.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The 'comp' API is obsolete and will be removed, so remove this comp
implementation.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If the scratch allocation fails, all subsequent allocations will
silently succeed without actually allocating anything. Fix this
by only incrementing users when the allocation succeeds.
Fixes: 6a8487a1f2 ("crypto: scompress - defer allocation of scratch buffer to first use")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For many users, it's easier to supply a folio rather than an SG
list since they already have them. Add support for folios to the
acomp interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support for passing non-DMA virtual addresses to async drivers
by passing them along to the fallback software algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add ACOMP_REQUEST_ALLOC which is a wrapper around acomp_request_alloc
that falls back to a synchronous stack request if the allocation
fails.
Also add ACOMP_REQUEST_ON_STACK which stores the request on the stack
only.
The request should be freed with acomp_request_free.
Finally add acomp_request_alloc_extra which gives the user extra
memory to use in conjunction with the request.
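A hedged usage sketch, assuming the macro takes the request name, the
tfm and a gfp mask like its on-stack counterpart (the surrounding flow
is illustrative):

ACOMP_REQUEST_ALLOC(req, tfm, GFP_KERNEL);

acomp_request_set_params(req, src, dst, slen, dlen);
err = crypto_acomp_compress(req);

acomp_request_free(req);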
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.
However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.
Limit the source SG linearisation buffer to a single page as
that user never goes over that. The only other potential user
is also unlikely to exceed that (IPComp) and it can easily do
its own linearisation if necessary.
Also keep the destination SG linearisation for IPComp.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The test on PAGE_SIZE - offset in shash_ahash_digest can underflow,
leading to execution of the fast path even if the data cannot be
mapped into a single page.
Fix this by splitting the test into four cases:
1) nbytes > sg->length: More than one SG entry, slow path.
2) !IS_ENABLED(CONFIG_HIGHMEM): fast path.
3) nbytes > (unsigned int)PAGE_SIZE - offset: Two highmem pages, slow path.
4) Highmem fast path.
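A sketch of the combined check implied by these four cases (variable
names follow the description above; this is not necessarily the exact
in-tree code):

if (nbytes <= sg->length &&
    (!IS_ENABLED(CONFIG_HIGHMEM) ||
     nbytes <= (unsigned int)PAGE_SIZE - offset)) {
    /* Fast path: the data fits in a single mapping. */
} else {
    /* Slow path: walk the SG list. */
}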
Fixes: 5f7082ed4f ("crypto: hash - Export shash through hash")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function crypto_shash_update_sg iterates through an SG by
hand. It fails to handle corner cases such as SG entries longer
than a page. Fix this by using the SG iterator.
Fixes: 348f5669d1 ("crypto/krb5: Implement the Kerberos5 rfc3961 get_mic and verify_mic")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the address returned by scatterwalk_map() is always being
stored into the same struct scatter_walk that is passed in, make
scatterwalk_map() do so itself and return void.
Similarly, now that scatterwalk_unmap() is always being passed the
address field within a struct scatter_walk, make scatterwalk_unmap()
take a pointer to struct scatter_walk instead of the address directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Split the HKDF functions out into a separate module to make them
available to other callers.
Also add a testsuite to the module with test vectors
from RFC 5869 (and additional vectors for SHA384 and SHA512)
to ensure the integrity of the algorithm.
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>
This adds request chaining and virtual address support to the
acomp interface.
It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
virtual addresses are not suitable for DMA. This is because
all existing and potential acomp users can provide memory that
is suitable for DMA so there is no need for a fall-back copy
path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Disable BH when taking per-cpu spin locks. This isn't an issue
right now because the only user, zswap, calls scomp from process
context. However, if scomp is called from softirq context, the
spin lock may deadlock.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp. This relieves the
user of having to manage large request objects and set up their own
per-cpu buffers in order to do so.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tfm argument is completely unused and meaningless as the
same stream object is identical over all transforms of a given
algorithm. Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a cra_type->destroy hook so that resources can be freed after
the last user of a registered algorithm is gone.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mark the src.virt.addr field in struct skcipher_walk as a pointer
to const data. This guarantees that the user won't modify the data
which should be done through dst.virt.addr to ensure that flushing
is done when necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reuse the addr field from struct scatter_walk for skcipher_walk.
Keep the existing virt.addr fields but make them const for the
user to access the mapped address.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address. The length is returned directly.
Change the done functions to use this stored address instead of
getting them from the caller.
Split the address into two using a union. The user should only
access the const version so that it is never changed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
async_xor_val has been unused since commit
a7c224a820 ("md/raid5: convert to new xor compution interface")
Remove it.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Local kunmaps have to be unmapped in the opposite order from which they
were mapped. My recent change flipped the unmap order in the
SKCIPHER_WALK_DIFF case. Adjust the mapping side to match.
This fixes a WARN_ON_ONCE that was triggered when running the
crypto-self tests on a 32-bit kernel with
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y.
Fixes: 95dbd711b1 ("crypto: skcipher - use the new scatterwalk functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike the decompression code, the compression code in LZO never
checked for output overruns. It instead assumes that the caller
always provides enough buffer space, disregarding the buffer length
provided by the caller.
Add a safe compression interface that checks for the end of buffer
before each write. Use the safe interface in crypto/lzo.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement self-testing infrastructure to test the pseudo-random function,
key derivation, encryption and checksumming.
Add the testing data from rfc8009 to test AES + HMAC-SHA2.
Add the testing data from rfc6803 to test Camellia. Note some encryption
test vectors here are incomplete, lacking the key usage number needed to
derive Ke and Ki, and there are errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from
rfc6803.
Note that the test vectors in rfc6803 for encryption are incomplete,
lacking the key usage number needed to derive Ke and Ki, and there are
errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192
enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto
scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96 enctypes
from rfc3962, using the rfc3961 kerberos 5 simplified crypto scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add functions that sign and verify a message according to rfc3961 sec 5.4,
using Kc to generate a checksum and insert it into the MIC field in the
skbuff in the sign phase then checksum the data and compare it to the MIC
in the verify phase.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add functions that encrypt and decrypt a message according to rfc3961 sec
5.3, using Ki to checksum the data to be secured and Ke to encrypt it
during the encryption phase, then decrypting with Ke and verifying the
checksum with Ki in the decryption phase.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide functions to derive keys according to RFC3961 (or load the derived
keys for the selftester where only derived keys are available) and to
package them up appropriately for passing to a krb5enc AEAD setkey or a
hash setkey function.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the simplified crypto profile for Kerberos 5 rfc3961 with the
pseudo-random function, PRF(), from section 5.3 and the key derivation
function, DK() from section 5.1.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide key derivation interface functions and a helper to implement the
PRF+ function from rfc4402.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can perform crypto
requests, such as encrypt, decrypt, get_mic and verify_mic. These
functions take the previously prepared crypto objects to work on.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can get an allocated
and keyed crypto object.
For encryption-mode operation, an AEAD object is returned; for
checksum-mode operation, a synchronous hash object is returned.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide some functions to allow the caller to find out about the layout of
the crypto section:
(1) Calculate, for a given size of data, how big a buffer will be
required to hold it and where the data will be within it.
(2) Calculate, for an amount of buffer, what's the maximum size of data
that will fit therein, and where it will start.
(3) Determine where the data will be in a received message.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide core structures, an encoding-type registry and basic module and
config bits for a generic Kerberos crypto library.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add Kerberos crypto tests to the test manager database. This covers:
camellia128-cts-cmac samples from RFC6803
camellia256-cts-cmac samples from RFC6803
aes128-cts-hmac-sha256-128 samples from RFC8009
aes256-cts-hmac-sha384-192 samples from RFC8009
but not:
aes128-cts-hmac-sha1-96
aes256-cts-hmac-sha1-96
as the test samples in RFC3962 don't seem to be suitable.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash). This is required for a number of Kerberos 5 encoding
types.
[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned. It may be necessary to do this here too for performance reasons -
but there are considerations both ways:
(1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
x86_64 that should be used instead of having two ciphers;
(2) secondly, none of the hardware offload drivers seem to offer CTS
support (Intel QAT does not, for instance).
However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
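For illustration only, an instantiation of the template might look as
follows; the exact spelling of the inner algorithms is an assumption
here, not something this patch mandates:

struct crypto_aead *aead;

/* Hash-then-cipher: the hash runs over the plaintext before the
 * cipher, the reverse of authenc(). */
aead = crypto_alloc_aead("krb5enc(cts(cbc(aes)),hmac(sha256))", 0, 0);
if (IS_ERR(aead))
    return PTR_ERR(aead);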
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.
As the primary user of all these options wireguard is selecting
the arch options anyway, make the same selections at the lib/crypto
option level and hide the arch options from the user.
Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.
Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers). Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.
Fixes: 1047e21aec ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than accessing 'alg' directly to avoid the aliasing issue
which leads to unnecessary reloads, use the __restrict keyword
to explicitly tell the compiler that there is no aliasing.
This generates equivalent if not superior code on x86 with gcc 12.
Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous. To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.
For now, still limit segments to PAGE_SIZE. This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.
Even so, this change still helps a lot in cases where messages cross a
page boundary. For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary. These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer. That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code. This change solves the problem; all these
messages now get passed to the assembly code in one part.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove various functions that are no longer used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert skcipher_walk to use the new scatterwalk functions.
This includes a few changes to exactly where the different parts of the
iteration happen. For example the dcache flush that previously happened
in scatterwalk_done() now happens in scatterwalk_dst_done() or in
memcpy_to_scatterwalk(). Advancing to the next sg entry now happens
just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done(). This is simpler and faster.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively. They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>. Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.
The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist. E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.
Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.
Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist. These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively. Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.
scatterwalk_copychunks() itself is left unchanged for now. It will be
removed after its callers are updated to use other functions instead.
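For example, a caller copying an authentication tag might be converted
like this (a sketch; the request fields and tag buffer are
illustrative):

/* Old: direction selected by the trailing 'out' argument. */
scatterwalk_map_and_copy(tag, req->dst, req->cryptlen, authsize, 1);

/* New: direction is explicit in the function name. */
memcpy_to_sglist(req->dst, req->cryptlen, tag, authsize);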
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk. Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.
Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.
Later patches will convert various users to use these functions.
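In other words (a sketch with a hypothetical walk, sg and position):

/* These two forms are equivalent per the description above: */
scatterwalk_start(&walk, sg);
scatterwalk_skip(&walk, pos);

scatterwalk_start_at_pos(&walk, sg, pos);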
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All modules should have a description; building with extra warnings
enabled prints this output for the bpf_crypto_skcipher module:
WARNING: modpost: missing MODULE_DESCRIPTION() in crypto/bpf_crypto_skcipher.o
Add a description line.
Fixes: fda4f71282 ("bpf: crypto: add skcipher to bpf crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.
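A minimal sketch of what this looks like for a driver with a fixed
request context; the names are illustrative and unrelated fields are
omitted:

static struct ahash_alg example_alg = {
    .init    = example_init,
    .update  = example_update,
    .final   = example_final,
    /* Static request context size; no init_tfm needed just for this. */
    .reqsize = sizeof(struct example_req_ctx),
};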
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds virtual address support to ahash. Virtual addresses
were previously only supported through shash. The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.
The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining. Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
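A hedged usage sketch (the buffer names are hypothetical and the
signature is assumed to mirror ahash_request_set_crypt):

ahash_request_set_callback(req, 0, NULL, NULL);
/* Virtual addresses instead of an SG list. */
ahash_request_set_virt(req, data, digest, len);
err = crypto_ahash_digest(req);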
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is a revert of commit 388ac25efc.
As multibuffer ahash is coming back in the form of request chaining,
restore the multibuffer ahash tests using the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds request chaining to the ahash interface. Request chaining
allows multiple requests to be submitted in one shot. An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break
up chained requests and submit them one-by-one.
A new err field is added to struct crypto_async_request to record
the return value for each individual request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type needs to be zeroed as otherwise the user could use it to
allocate an asynchronous sync skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the lookup is retried after instance construction, it uses
the type and mask from the larval, which may not match the values
used by the caller. For example, if the caller is requesting for
a !NEEDS_FALLBACK algorithm, it may end up getting an algorithm
that needs fallbacks.
Fix this by making the caller supply the type/mask and using that
for the lookup.
Reported-by: Coiby Xu <coxu@redhat.com>
Fixes: 96ad595520 ("crypto: api - Remove instance larval fulfilment")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the null algorithm may be freed in softirq context through
af_alg, use spin locks instead of mutexes to protect the default
null algorithm.
Reported-by: syzbot+b3e02953598f447d4d2a@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
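Illustrative before/after (the seq_printf context is hypothetical):

/* Before */
seq_printf(m, "passed : %s\n", passed ? "yes" : "no");

/* After */
seq_printf(m, "passed : %s\n", str_yes_no(passed));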
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert notes that DIV_ROUND_UP() may overflow unnecessarily if an ecdsa
implementation's ->key_size() callback returns an unusually large value.
Herbert instead suggests (for a division by 8):
X / 8 + !!(X & 7)
Based on this formula, introduce a generic DIV_ROUND_UP_POW2() macro and
use it in lieu of DIV_ROUND_UP() for ->key_size() return values.
Additionally, use the macro in ecc_digits_from_bytes(), whose "nbytes"
parameter is a ->key_size() return value in some instances, or a
user-specified ASN.1 length in the case of ecdsa_get_signature_rs().
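For illustration, such a macro could look like this for power-of-two
divisors (a sketch based on the formula above; the in-tree definition
may differ in detail):

/* Round up n/d without the overflow risk of DIV_ROUND_UP();
 * valid only when d is a power of two. */
#define DIV_ROUND_UP_POW2(n, d)    ((n) / (d) + !!((n) & ((d) - 1)))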
Link: https://lore.kernel.org/r/Z3iElsILmoSu6FuC@gondor.apana.org.au/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The callers of crypto_sig_sign() assume that the signature size is
always equivalent to the key size.
This happens to be true for RSA, which is currently the only algorithm
implementing the ->sign() callback. But it is false e.g. for X9.62
encoded ECDSA signatures because they have variable length.
Prepare for addition of a ->sign() callback to such algorithms by
letting the callback return the signature size (or a negative integer
on error). When testing the ->sign() callback in test_sig_one(),
use crypto_sig_maxsize() instead of crypto_sig_keysize() to verify that
the test vector's signature does not exceed an algorithm's maximum
signature size.
There has been a relatively recent effort to upstream ECDSA signature
generation support which may benefit from this change:
https://lore.kernel.org/linux-crypto/20220908200036.2034-1-ignat@cloudflare.com/
However the main motivation for this commit is to reduce the number of
crypto_sig_keysize() callers: This function is about to be changed to
return the size in bits instead of bytes and that will require amending
most callers to divide the return value by 8.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the "crct10dif" shash algorithm from the crypto API. It has no
known user now that the lib is no longer built on top of it. It has no
remaining references in kernel code. The only other potential users
would be the usual components that allow specifying arbitrary hash
algorithms by name, namely AF_ALG and dm-integrity. However there are
no indications that "crct10dif" is being used with these components.
Debian Code Search and web searches don't find anything relevant, and
explicitly grepping the source code of the usual suspects (cryptsetup,
libell, iwd) finds no matches either. "crc32" and "crc32c" are used in
a few more places, but that doesn't seem to be the case for "crct10dif".
crc_t10dif_update() is also tested by crc_kunit now, so the test
coverage provided via the crypto self-tests is no longer needed.
Also note that the "crct10dif" shash algorithm was inconsistent with the
rest of the shash API in that it wrote the digest in CPU endianness,
making the resulting byte array differ on little endian vs. big endian
platforms. This means it was effectively just built for use by the lib
functions, and it was not actually correct to treat it as "just another
hash function" that could be dropped in via the shash API.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20250206173857.39794-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Following the standardization on crc32c() as the lib entry point for the
Castagnoli CRC32 instead of the previous mix of crc32c(), crc32c_le(),
and __crc32c_le(), make the same change to the underlying base and arch
functions that implement it.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
For historical reasons, the Castagnoli CRC32 is available under 3 names:
crc32c(), crc32c_le(), and __crc32c_le(). Most callers use crc32c().
The more verbose versions are not really warranted; there is no "_be"
version that the "_le" version needs to be differentiated from, and the
leading underscores are pointless.
Therefore, let's standardize on just crc32c(). Remove the other two
names, and update callers accordingly.
Specifically, the new crc32c() comes from what was previously
__crc32c_le(), so compared to the old crc32c() it now takes a size_t
length rather than unsigned int, and it's now in linux/crc32.h instead
of just linux/crc32c.h (which includes linux/crc32.h).
Later patches will also rename __crc32c_le_combine(), crc32c_le_base(),
and crc32c_le_arch().
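After this change a caller simply does (an illustrative call; the seed
and buffer are hypothetical):

#include <linux/crc32.h>

u32 crc = crc32c(~0, buf, len);    /* len is a size_t */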
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Remove crc64-rocksoft from the crypto API. It has no known user now
that the lib is no longer built on top of it. It was also added much
more recently than the longstanding crc32 and crc32c. Unlike crc32 and
crc32c, crc64-rocksoft is also not mentioned in the dm-integrity
documentation and there are no references to it in anywhere in the
cryptsetup git repo, so it is unlikely to have any user there either.
Also, this CRC variant is named incorrectly; it has nothing to do with
Rocksoft and should be called crc64-nvme. That is yet another reason to
remove it from the crypto API; we would not want anyone to start
depending on the current incorrect algorithm name of crc64-rocksoft.
Note that this change temporarily makes this CRC variant not be covered
by any tests, as previously it was relying on the crypto self-tests.
This will be fixed by adding this CRC variant to crc_kunit.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Acked-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250130035130.180676-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add the const qualifier to all the ctl_tables in the tree except for
watchdog_hardlockup_sysctl, memory_allocation_profiling_sysctls,
loadpin_sysctl_table and the ones calling register_net_sysctl (./net,
drivers/inifiniband dirs). These are special cases as they use a
registration function with a non-const qualified ctl_table argument or
modify the arrays before passing them on to the registration function.
Constifying ctl_table structs will prevent the modification of
proc_handler function pointers as the arrays would reside in .rodata.
This is made possible after commit 78eb4ea25c ("sysctl: treewide:
constify the ctl_table argument of proc_handlers") constified all the
proc_handlers.
Created this by running an spatch followed by a sed command:
Spatch:
virtual patch
@
depends on !(file in "net")
disable optional_qualifier
@
identifier table_name != {
watchdog_hardlockup_sysctl,
iwcm_ctl_table,
ucma_ctl_table,
memory_allocation_profiling_sysctls,
loadpin_sysctl_table
};
@@
+ const
struct ctl_table table_name [] = { ... };
sed:
sed --in-place \
-e "s/struct ctl_table .table = &uts_kern/const struct ctl_table *table = \&uts_kern/" \
kernel/utsname_sysctl.c
Reviewed-by: Song Liu <song@kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> # for kernel/trace/
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> # SCSI
Reviewed-by: Darrick J. Wong <djwong@kernel.org> # xfs
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Bill O'Donnell <bodonnel@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Acked-by: Anna Schumaker <anna.schumaker@oracle.com>
Signed-off-by: Joel Granados <joel.granados@kernel.org>