Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address. The length is returned directly.
Change the done functions to use this stored address instead of
getting it from the caller.
Split the address into two using a union. The user should only
access the const version so that it is never changed.
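A minimal sketch of the resulting calling convention (variable names
illustrative):

    unsigned int n = scatterwalk_next(&walk, total); /* length returned */
    const u8 *ptr = walk.addr;      /* address stored in the walk struct */
    /* ... process n bytes at ptr ... */
    scatterwalk_done_src(&walk, n); /* uses the stored address */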
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
async_xor_val has been unused since commit
a7c224a820 ("md/raid5: convert to new xor compution interface")
Remove it.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Local kunmaps have to be unmapped in the opposite order from which they
were mapped. My recent change flipped the unmap order in the
SKCIPHER_WALK_DIFF case. Adjust the mapping side to match.
This fixes a WARN_ON_ONCE that was triggered when running the
crypto-self tests on a 32-bit kernel with
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP=y.
Fixes: 95dbd711b1 ("crypto: skcipher - use the new scatterwalk functions")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike the decompression code, the compression code in LZO never
checked for output overruns. It instead assumed that the caller
always provides enough buffer space, disregarding the buffer length
provided by the caller.
Add a safe compression interface that checks for the end of buffer
before each write. Use the safe interface in crypto/lzo.
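A hedged sketch of such a bounds-checked entry point (the exact name
and signature here are assumptions, not verbatim):

    /* Fails instead of overrunning dst when *dst_len is too small. */
    int lzo1x_1_compress_safe(const unsigned char *src, size_t src_len,
                              unsigned char *dst, size_t *dst_len,
                              void *wrkmem);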
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement self-testing infrastructure to test the pseudo-random function,
key derivation, encryption and checksumming.
Add the testing data from rfc8009 to test AES + HMAC-SHA2.
Add the testing data from rfc6803 to test Camellia. Note some encryption
test vectors here are incomplete, lacking the key usage number needed to
derive Ke and Ki, and there are errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from
rfc6803.
Note that the test vectors in rfc6803 for encryption are incomplete,
lacking the key usage number needed to derive Ke and Ki, and there are
errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192
enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto
scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the aes128-cts-hmac-sha1-96 and aes256-cts-hmac-sha1-96 enctypes
from rfc3962, using the rfc3961 kerberos 5 simplified crypto scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add functions that sign and verify a message according to rfc3961 sec 5.4,
using Kc to generate a checksum and insert it into the MIC field in the
skbuff in the sign phase, then checksumming the data and comparing it to
the MIC in the verify phase.
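Schematically, with the checksum keyed by Kc:

    sign:   MIC = checksum(Kc, message); store the MIC in the skbuff
    verify: recompute checksum(Kc, message) and compare it to the MIC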
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add functions that encrypt and decrypt a message according to rfc3961 sec
5.3, using Ki to checksum the data to be secured and Ke to encrypt it
during the encryption phase, then decrypting with Ke and verifying the
checksum with Ki in the decryption phase.
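Schematically, per the simplified profile:

    encrypt: output = E(Ke, confounder | plaintext)
                      | checksum(Ki, confounder | plaintext)
    decrypt: split off the checksum, decrypt with Ke, then verify the
             checksum with Ki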
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide functions to derive keys according to RFC3961 (or load the derived
keys for the selftester where only derived keys are available) and to
package them up appropriately for passing to a krb5enc AEAD setkey or a
hash setkey function.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the simplified crypto profile for Kerberos 5 rfc3961 with the
pseudo-random function, PRF(), from section 5.3 and the key derivation
function, DK() from section 5.1.
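Schematically, from rfc3961:

    DK(Key, Constant) = random-to-key(DR(Key, Constant))
    DR(Key, Constant) = k-truncate(E(Key, n-fold(Constant)))
    PRF(Key, Data)    = E(DK(Key, prfconstant), truncate(H(Data)))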
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide key derivation interface functions and a helper to implement the
PRF+ function from rfc4402.
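Schematically, the PRF+ construction is:

    PRF+(K, L, S) = truncate(L, T1 | T2 | .. | Tn)
    Tn = PRF(K, n | S)

where n is a 32-bit big-endian block counter.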
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can perform crypto
requests, such as encrypt, decrypt, get_mic and verify_mic. These
functions take the previously prepared crypto objects to work on.
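A hedged usage sketch (signatures illustrative, not verbatim):

    /* Encrypt in place within an sglist-described buffer. */
    ret = crypto_krb5_encrypt(krb5, aead, sg, nr_sg, sg_len,
                              data_offset, data_len, preconfounded);

    /* Decrypt in place; offset/len are updated to bound the plaintext. */
    ret = crypto_krb5_decrypt(krb5, aead, sg, nr_sg, &offset, &len);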
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can get an allocated
and keyed crypto object.
For encryption-mode operation, an AEAD object is returned; for
checksum-mode operation, a synchronous hash object is returned.
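A hedged sketch of the allocation calls (treat the names as
illustrative):

    struct crypto_aead  *aead  =
        crypto_krb5_prepare_encryption(krb5, &TK, usage, gfp);
    struct crypto_shash *shash =
        crypto_krb5_prepare_checksum(krb5, &TK, usage, gfp);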
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide some functions to allow the caller to find out about the layout of
the crypto section:
(1) Calculate, for a given size of data, how big a buffer will be
required to hold it and where the data will be within it.
(2) Calculate, for a given amount of buffer space, the maximum size of
data that will fit therein, and where it will start.
(3) Determine where the data will be in a received message.
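Hedged prototypes for the three helpers (names and signatures are
assumptions):

    size_t crypto_krb5_how_much_buffer(const struct krb5_enctype *krb5,
                                       enum krb5_crypto_mode mode,
                                       size_t data_size, size_t *_offset);
    size_t crypto_krb5_how_much_data(const struct krb5_enctype *krb5,
                                     enum krb5_crypto_mode mode,
                                     size_t *_buffer_size, size_t *_offset);
    void crypto_krb5_where_is_the_data(const struct krb5_enctype *krb5,
                                       enum krb5_crypto_mode mode,
                                       size_t *_offset, size_t *_len);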
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide core structures, an encoding-type registry and basic module and
config bits for a generic Kerberos crypto library.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add Kerberos crypto tests to the test manager database. This covers:
camellia128-cts-cmac samples from RFC6803
camellia256-cts-cmac samples from RFC6803
aes128-cts-hmac-sha256-128 samples from RFC8009
aes256-cts-hmac-sha384-192 samples from RFC8009
but not:
aes128-cts-hmac-sha1-96
aes256-cts-hmac-sha1-96
as the test samples in RFC3962 don't seem to be suitable.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash). This is required for a number of Kerberos 5 encoding
types.
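Roughly (details vary per enctype):

    authenc (cipher-then-hash): output = E(P) | MAC(AD | E(P))
    krb5enc (hash-then-cipher): output = E(conf | P) | MAC(conf | P)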
[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned. It may be necessary to do this here too for performance reasons -
but there are considerations both ways:
(1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
x86_64 that should be used instead of having two ciphers;
(2) secondly, none of the hardware offload drivers seem to offer CTS
support (Intel QAT does not, for instance).
However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.
As the primary user of all these options wireguard is selecting
the arch options anyway, make the same selections at the lib/crypto
option level and hide the arch options from the user.
Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.
Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers). Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.
Fixes: 1047e21aec ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than accessing 'alg' directly to work around the aliasing issue
that leads to unnecessary reloads, use the __restrict keyword to
explicitly tell the compiler that there is no aliasing.
This generates equivalent if not superior code on x86 with gcc 12.
Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
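For illustration, the annotated signature looks like:

    int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
                           struct skcipher_request *__restrict req,
                           bool atomic);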
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous. To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.
For now, still limit segments to PAGE_SIZE. This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.
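A rough sketch of the idea (not the actual diff):

    /* With !HIGHMEM the sg data is virtually contiguous in the direct
     * map, so only the PAGE_SIZE cap is needed; with HIGHMEM each page
     * is mapped separately and segments must stop at page boundaries.
     */
    if (IS_ENABLED(CONFIG_HIGHMEM))
            n = min(n, PAGE_SIZE - offset_in_page(offset));
    else
            n = min(n, (unsigned int)PAGE_SIZE);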
Even so, this change still helps a lot in cases where messages cross a
page boundary. For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary. These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer. That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code. This change solves the problem; all these
messages now get passed to the assembly code in one part.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove various functions that are no longer used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Convert skcipher_walk to use the new scatterwalk functions.
This includes a few changes to exactly where the different parts of the
iteration happen. For example the dcache flush that previously happened
in scatterwalk_done() now happens in scatterwalk_dst_done() or in
memcpy_to_scatterwalk(). Advancing to the next sg entry now happens
just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done(). This is simpler and faster.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively. They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>. Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.
The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist. E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.
Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.
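The wrapper relationship is then (sketch, argument order as described
above):

    static inline void scatterwalk_map_and_copy(void *buf,
                                                struct scatterlist *sg,
                                                unsigned int start,
                                                unsigned int nbytes, int out)
    {
            if (out)
                    memcpy_to_sglist(sg, start, buf, nbytes);
            else
                    memcpy_from_sglist(buf, sg, start, nbytes);
    }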
Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist. These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively. Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.
scatterwalk_copychunks() itself is left unchanged for now. It will be
removed after its callers are updated to use other functions instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk. Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.
Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.
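A minimal sketch of the equivalence:

    static inline void scatterwalk_start_at_pos(struct scatter_walk *walk,
                                                struct scatterlist *sg,
                                                unsigned int pos)
    {
            scatterwalk_start(walk, sg);
            scatterwalk_skip(walk, pos);
    }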
Later patches will convert various users to use these functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
All modules should have a description. Building with extra warnings
enabled prints this output for the bpf_crypto_skcipher module:
WARNING: modpost: missing MODULE_DESCRIPTION() in crypto/bpf_crypto_skcipher.o
Add a description line.
Fixes: fda4f71282 ("bpf: crypto: add skcipher to bpf crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.
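With this, an algorithm with a fixed request context can simply
declare (sketch):

    static struct ahash_alg example_alg = {
            /* ... */
            .reqsize = sizeof(struct example_reqctx), /* no init_tfm */
    };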
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds virtual address support to ahash. Virtual addresses
were previously only supported through shash. The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.
The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining. Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
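Usage sketch (buffer names illustrative):

    ahash_request_set_virt(req, data, digest, len); /* virtual address */
    /* instead of ahash_request_set_crypt(req, sg, digest, len) */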
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch is a revert of commit 388ac25efc.
As multibuffer ahash is coming back in the form of request chaining,
restore the multibuffer ahash tests using the new interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds request chaining to the ahash interface. Request chaining
allows multiple requests to be submitted in one shot. An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break
up chained requests and submit them one-by-one.
A new err field is added to struct crypto_async_request to record
the return value for each individual request.
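Sketch of the addition:

    struct crypto_async_request {
            /* ... */
            int err; /* result of this request within a chain */
    };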
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The type needs to be zeroed as otherwise the user could use it to
allocate an asynchronous sync skcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the lookup is retried after instance construction, it uses
the type and mask from the larval, which may not match the values
used by the caller. For example, if the caller is requesting
a !NEEDS_FALLBACK algorithm, it may end up getting an algorithm
that needs fallbacks.
Fix this by making the caller supply the type/mask and using that
for the lookup.
Reported-by: Coiby Xu <coxu@redhat.com>
Fixes: 96ad595520 ("crypto: api - Remove instance larval fulfilment")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the null algorithm may be freed in softirq context through
af_alg, use spin locks instead of mutexes to protect the default
null algorithm.
Reported-by: syzbot+b3e02953598f447d4d2a@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
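The conversion has the form (the line shown is illustrative, not the
exact one changed):

    - seq_printf(m, "internal     : %s\n", internal ? "yes" : "no");
    + seq_printf(m, "internal     : %s\n", str_yes_no(internal));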
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Herbert notes that DIV_ROUND_UP() may overflow unnecessarily if an ecdsa
implementation's ->key_size() callback returns an unusually large value.
Herbert instead suggests (for a division by 8):
X / 8 + !!(X & 7)
Based on this formula, introduce a generic DIV_ROUND_UP_POW2() macro and
use it in lieu of DIV_ROUND_UP() for ->key_size() return values.
Additionally, use the macro in ecc_digits_from_bytes(), whose "nbytes"
parameter is a ->key_size() return value in some instances, or a
user-specified ASN.1 length in the case of ecdsa_get_signature_rs().
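A minimal sketch of such a macro, assuming a power-of-two divisor d:

    #define DIV_ROUND_UP_POW2(n, d)  ((n) / (d) + !!((n) & ((d) - 1)))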
Link: https://lore.kernel.org/r/Z3iElsILmoSu6FuC@gondor.apana.org.au/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The callers of crypto_sig_sign() assume that the signature size is
always equivalent to the key size.
This happens to be true for RSA, which is currently the only algorithm
implementing the ->sign() callback. But it is false e.g. for X9.62
encoded ECDSA signatures because they have variable length.
Prepare for addition of a ->sign() callback to such algorithms by
letting the callback return the signature size (or a negative integer
on error). When testing the ->sign() callback in test_sig_one(),
use crypto_sig_maxsize() instead of crypto_sig_keysize() to verify that
the test vector's signature does not exceed an algorithm's maximum
signature size.
There has been a relatively recent effort to upstream ECDSA signature
generation support which may benefit from this change:
https://lore.kernel.org/linux-crypto/20220908200036.2034-1-ignat@cloudflare.com/
However the main motivation for this commit is to reduce the number of
crypto_sig_keysize() callers: This function is about to be changed to
return the size in bits instead of bytes and that will require amending
most callers to divide the return value by 8.
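After the change, the calling convention is (sketch):

    /* Returns the number of signature bytes written to dst, or -errno. */
    int crypto_sig_sign(struct crypto_sig *tfm,
                        const void *src, unsigned int slen,
                        void *dst, unsigned int dlen);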
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() helper function.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the "crct10dif" shash algorithm from the crypto API. It has no
known user now that the lib is no longer built on top of it. It has no
remaining references in kernel code. The only other potential users
would be the usual components that allow specifying arbitrary hash
algorithms by name, namely AF_ALG and dm-integrity. However there are
no indications that "crct10dif" is being used with these components.
Debian Code Search and web searches don't find anything relevant, and
explicitly grepping the source code of the usual suspects (cryptsetup,
libell, iwd) finds no matches either. "crc32" and "crc32c" are used in
a few more places, but that doesn't seem to be the case for "crct10dif".
crc_t10dif_update() is also tested by crc_kunit now, so the test
coverage provided via the crypto self-tests is no longer needed.
Also note that the "crct10dif" shash algorithm was inconsistent with the
rest of the shash API in that it wrote the digest in CPU endianness,
making the resulting byte array differ on little endian vs. big endian
platforms. This means it was effectively just built for use by the lib
functions, and it was not actually correct to treat it as "just another
hash function" that could be dropped in via the shash API.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20250206173857.39794-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Following the standardization on crc32c() as the lib entry point for the
Castagnoli CRC32 instead of the previous mix of crc32c(), crc32c_le(),
and __crc32c_le(), make the same change to the underlying base and arch
functions that implement it.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
For historical reasons, the Castagnoli CRC32 is available under 3 names:
crc32c(), crc32c_le(), and __crc32c_le(). Most callers use crc32c().
The more verbose versions are not really warranted; there is no "_be"
version that the "_le" version needs to be differentiated from, and the
leading underscores are pointless.
Therefore, let's standardize on just crc32c(). Remove the other two
names, and update callers accordingly.
Specifically, the new crc32c() comes from what was previously
__crc32c_le(), so compared to the old crc32c() it now takes a size_t
length rather than unsigned int, and it's now in linux/crc32.h instead
of just linux/crc32c.h (which includes linux/crc32.h).
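The resulting unified prototype is:

    u32 crc32c(u32 crc, const void *p, size_t len);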
Later patches will also rename __crc32c_le_combine(), crc32c_le_base(),
and crc32c_le_arch().
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250208024911.14936-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Remove crc64-rocksoft from the crypto API. It has no known user now
that the lib is no longer built on top of it. It was also added much
more recently than the longstanding crc32 and crc32c. Unlike crc32 and
crc32c, crc64-rocksoft is also not mentioned in the dm-integrity
documentation and there are no references to it in anywhere in the
cryptsetup git repo, so it is unlikely to have any user there either.
Also, this CRC variant is named incorrectly; it has nothing to do with
Rocksoft and should be called crc64-nvme. That is yet another reason to
remove it from the crypto API; we would not want anyone to start
depending on the current incorrect algorithm name of crc64-rocksoft.
Note that this change temporarily makes this CRC variant not be covered
by any tests, as previously it was relying on the crypto self-tests.
This will be fixed by adding this CRC variant to crc_kunit.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: "Martin K. Petersen" <martin.petersen@oracle.com>
Acked-by: Keith Busch <kbusch@kernel.org>
Link: https://lore.kernel.org/r/20250130035130.180676-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Add the const qualifier to all the ctl_tables in the tree except for
watchdog_hardlockup_sysctl, memory_allocation_profiling_sysctls,
loadpin_sysctl_table and the ones calling register_net_sysctl (./net,
drivers/inifiniband dirs). These are special cases as they use a
registration function with a non-const qualified ctl_table argument or
modify the arrays before passing them on to the registration function.
Constifying ctl_table structs will prevent the modification of
proc_handler function pointers as the arrays would reside in .rodata.
This is made possible after commit 78eb4ea25c ("sysctl: treewide:
constify the ctl_table argument of proc_handlers") constified all the
proc_handlers.
Created this by running an spatch followed by a sed command:
Spatch:
virtual patch
@
depends on !(file in "net")
disable optional_qualifier
@
identifier table_name != {
watchdog_hardlockup_sysctl,
iwcm_ctl_table,
ucma_ctl_table,
memory_allocation_profiling_sysctls,
loadpin_sysctl_table
};
@@
+ const
struct ctl_table table_name [] = { ... };
sed:
sed --in-place \
-e "s/struct ctl_table .table = &uts_kern/const struct ctl_table *table = \&uts_kern/" \
kernel/utsname_sysctl.c
Reviewed-by: Song Liu <song@kernel.org>
Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org> # for kernel/trace/
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> # SCSI
Reviewed-by: Darrick J. Wong <djwong@kernel.org> # xfs
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Bill O'Donnell <bodonnel@redhat.com>
Acked-by: Baoquan He <bhe@redhat.com>
Acked-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Acked-by: Anna Schumaker <anna.schumaker@oracle.com>
Signed-off-by: Joel Granados <joel.granados@kernel.org>
Merge tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull CRC updates from Eric Biggers:
- Reorganize the architecture-optimized CRC32 and CRC-T10DIF code to be
directly accessible via the library API, instead of requiring the
crypto API. This is much simpler and more efficient.
- Convert some users such as ext4 to use the CRC32 library API instead
of the crypto API. More conversions like this will come later.
- Add a KUnit test that tests and benchmarks multiple CRC variants.
Remove older, less-comprehensive tests that are made redundant by
this.
- Add an entry to MAINTAINERS for the kernel's CRC library code. I'm
volunteering to maintain it. I have additional cleanups and
optimizations planned for future cycles.
* tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (31 commits)
MAINTAINERS: add entry for CRC library
powerpc/crc: delete obsolete crc-vpmsum_test.c
lib/crc32test: delete obsolete crc32test.c
lib/crc16_kunit: delete obsolete crc16_kunit.c
lib/crc_kunit.c: add KUnit test suite for CRC library functions
powerpc/crc-t10dif: expose CRC-T10DIF function through lib
arm64/crc-t10dif: expose CRC-T10DIF function through lib
arm/crc-t10dif: expose CRC-T10DIF function through lib
x86/crc-t10dif: expose CRC-T10DIF function through lib
crypto: crct10dif - expose arch-optimized lib function
lib/crc-t10dif: add support for arch overrides
lib/crc-t10dif: stop wrapping the crypto API
scsi: target: iscsi: switch to using the crc32c library
f2fs: switch to using the crc32 library
jbd2: switch to using the crc32c library
ext4: switch to using the crc32c library
lib/crc32: make crc32c() go directly to lib
bcachefs: Explicitly select CRYPTO from BCACHEFS_FS
x86/crc32: expose CRC32 functions through lib
x86/crc32: update prototype for crc32_pclmul_le_16()
...
Merge tag 'kthread-for-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks
Pull kthread updates from Frederic Weisbecker:
"Kthreads affinity follow either of 4 existing different patterns:
1) Per-CPU kthreads must stay affine to a single CPU and never
execute relevant code on any other CPU. This is currently handled
by smpboot code which takes care of CPU-hotplug operations.
Affinity here is a correctness constraint.
2) Some kthreads _have_ to be affine to a specific set of CPUs and
can't run anywhere else. The affinity is set through
kthread_bind_mask() and the subsystem takes care by itself to
handle CPU-hotplug operations. Affinity here is assumed to be a
correctness constraint.
3) Per-node kthreads _prefer_ to be affine to a specific NUMA node.
This is not a correctness constraint but merely a preference in
terms of memory locality. kswapd and kcompactd both fall into this
category. The affinity is set manually like for any other task and
CPU-hotplug is supposed to be handled by the relevant subsystem so
that the task is properly reaffined whenever a given CPU from the
node comes up. Also care should be taken so that the node affinity
doesn't cross isolated (nohz_full) cpumask boundaries.
4) Similar to the previous point except kthreads have a _preferred_
affinity different than a node. Both RCU boost kthreads and RCU
exp kworkers fall into this category as they refer to "RCU nodes"
from a distinctly distributed tree.
Currently the preferred affinity patterns (3 and 4) have at least 4
identified users, with varying success in handling CPU-hotplug
operations and CPU isolation; each does it in its own ad-hoc way.
This is an infrastructure proposal to handle this with the following
API changes:
- kthread_create_on_node() automatically affines the created kthread
to its target node unless it has been set as per-cpu or bound with
kthread_bind[_mask]() before the first wake-up.
- kthread_affine_preferred() is a new function that can be called
right after kthread_create_on_node() to specify a preferred
affinity different than the specified node.
When the preferred affinity can't be applied because the possible
targets are offline or isolated (nohz_full), the kthread is affine to
the housekeeping CPUs (which means to all online CPUs most of the time
or only the non-nohz_full CPUs when nohz_full= is set).
kswapd, kcompactd, RCU boost kthreads and RCU exp kworkers have been
converted, along with a few old drivers.
Summary of the changes:
- Consolidate a bunch of ad-hoc implementations of
kthread_run_on_cpu()
- Introduce task_cpu_fallback_mask() that defines the default last
resort affinity of a task to become nohz_full aware
- Add some correctness check to ensure kthread_bind() is always
called before the first kthread wake up.
- Default affine kthread to its preferred node.
- Convert kswapd / kcompactd and remove their halfway working ad-hoc
affinity implementation
- Implement kthreads preferred affinity
- Unify kthread worker and kthread API's style
- Convert RCU kthreads to the new API and remove the ad-hoc affinity
implementation"
* tag 'kthread-for-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks:
kthread: modify kernel-doc function name to match code
rcu: Use kthread preferred affinity for RCU exp kworkers
treewide: Introduce kthread_run_worker[_on_cpu]()
kthread: Unify kthread_create_on_cpu() and kthread_create_worker_on_cpu() automatic format
rcu: Use kthread preferred affinity for RCU boost
kthread: Implement preferred affinity
mm: Create/affine kswapd to its preferred node
mm: Create/affine kcompactd to its preferred node
kthread: Default affine kthread to its preferred NUMA node
kthread: Make sure kthread hasn't started while binding it
sched,arm64: Handle CPU isolation on last resort fallback rq selection
arm64: Exclude nohz_full CPUs from 32bits el0 support
lib: test_objpool: Use kthread_run_on_cpu()
kallsyms: Use kthread_run_on_cpu()
soc/qman: test: Use kthread_run_on_cpu()
arm/bL_switcher: Use kthread_run_on_cpu()
key_being_used_for[] is an unused array of textual names for
the elements of the enum key_being_used_for. It was added in 2015 by
commit 99db443506 ("PKCS#7: Appropriately restrict authenticated
attributes and content type")
Remove it.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In skcipher_walk_done(), instead of calling crypto_yield() which
requires a translation between flags, just call cond_resched() directly.
This has the same effect.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The helper functions like crypto_skcipher_blocksize() take in a pointer
to a tfm object, but they actually return properties of the algorithm.
As the Linux kernel is compiled with -fno-strict-aliasing, the compiler
has to assume that the writes to struct skcipher_walk could clobber the
tfm's pointer to its algorithm. Thus it gets repeatedly reloaded in the
generated code. Therefore, replace the use of these helper functions
with straightforward accesses to the struct fields.
Note that while *users* of the skcipher and aead APIs are supposed to
use the helper functions, this particular code is part of the API
*implementation* in crypto/skcipher.c, which already accesses the
algorithm struct directly in many cases. So there is no reason to
prefer the helper functions here.
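For example (sketch):

    bs = crypto_skcipher_blocksize(tfm); /* may be reloaded after writes */
    bs = alg->base.cra_blocksize;        /* direct access; stays cached */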
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
- Initialize SKCIPHER_WALK_SLEEP in a consistent way, and check for
atomic=true at the same time as CRYPTO_TFM_REQ_MAY_SLEEP. Technically
atomic=true only needs to apply after the first step, but it is very
rarely used. We should optimize for the common case. So, check
'atomic' alongside CRYPTO_TFM_REQ_MAY_SLEEP. This is more efficient.
- Initialize flags other than SKCIPHER_WALK_SLEEP to 0 rather than
preserving them. No caller actually initializes the flags, which
makes it impossible to use their original values for anything.
Indeed, that does not happen and all meaningful flags get overridden
anyway. It may have been thought that just clearing one flag would be
faster than clearing all flags, but that's not the case as the former
is a read-write operation whereas the latter is just a write.
- Move the explicit clearing of SKCIPHER_WALK_SLOW, SKCIPHER_WALK_COPY,
and SKCIPHER_WALK_DIFF into skcipher_walk_done(), since it is now
only needed on non-first steps.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fold skcipher_walk_skcipher() into skcipher_walk_virt() which is its
only remaining caller. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In skcipher_walk_done(), remove the check for SKCIPHER_WALK_SLOW because
it is always true. All other flags (and lack thereof) were checked
earlier in the function, leaving SKCIPHER_WALK_SLOW as the only
remaining possibility.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the case where skcipher_walk_next() allocates a bounce page, that
page by definition has size PAGE_SIZE. The number of bytes to copy 'n'
is guaranteed to fit in it, since earlier in the function it was clamped
to be at most a page. Therefore remove the unnecessary logic that tried
to clamp 'n' again to fit in the bounce page.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In the slow path of skcipher_walk where it uses a slab bounce buffer for
the data and/or IV, do not bother to avoid crossing a page boundary in
the part(s) of this buffer that are used, and do not bother to allocate
extra space in the buffer for that purpose. The buffer is accessed only
by virtual address, so pages are irrelevant for it.
This logic may have been present due to the physical address support in
skcipher_walk, but that has now been removed. Or it may have been
present to be consistent with the fast path that currently does not hand
back addresses that span pages, but that behavior is a side effect of
the pages being "mapped" one by one and is not actually a requirement.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
skcipher_walk_done() has an unusual calling convention, and some of its
local variables have unclear names. Document it and rename variables to
make it a bit clearer what is going on. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_yes_no() and str_no_yes()
helpers. Remove unnecessary curly braces.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
kthread_create() creates a kthread without running it yet. kthread_run()
creates a kthread and runs it.
On the other hand, kthread_create_worker() creates a kthread worker and
runs it.
This difference in behaviours is confusing. Also there is no way to
create a kthread worker and affine it using kthread_bind_mask() or
kthread_affine_preferred() before starting it.
Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]()
that behaves just like kthread_run(). kthread_create_worker[_on_cpu]()
will now only create a kthread worker without starting it.
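Usage sketch after the change (helper calls illustrative):

    /* Create without starting, so affinity can be set first. */
    worker = kthread_create_worker(0, "example");
    kthread_affine_preferred(worker->task, mask);
    wake_up_process(worker->task);

    /* Or create and run in one go, like kthread_run(). */
    worker = kthread_run_worker(0, "example");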
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Due to the removal of the Niagara2 SPU driver, crypto_hash_walk_first(),
crypto_hash_walk_done(), crypto_hash_walk_last(), and struct
crypto_hash_walk are now only used in crypto/ahash.c. Therefore, make
them all private to crypto/ahash.c. I.e. un-export the two functions
that were exported, make the functions static, and move the struct
definition to the .c file. As part of this, move the functions to
earlier in the file to avoid needing to add forward declarations.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The keywrap (kw) algorithm has no in-tree user. It has never had an
in-tree user, and the patch that added it provided no justification for
its inclusion. Even use of it via AF_ALG is impossible, as it uses a
weird calling convention where part of the ciphertext is returned via
the IV buffer, which is not returned to userspace in AF_ALG.
It's also unclear whether any new code in the kernel that does key
wrapping would actually use this algorithm. It is controversial in the
cryptographic community due to having no clearly stated security goal,
no security proof, poor performance, and only a 64-bit auth tag. Later
work (https://eprint.iacr.org/2006/221) suggested that the goal is
deterministic authenticated encryption. But there are now more modern
algorithms for this, and this is not the same as key wrapping, for which
a regular AEAD such as AES-GCM usually can be (and is) used instead.
Therefore, remove this unused code.
There were several special cases for this algorithm in the self-tests,
due to its weird calling convention. Remove those too.
Cc: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the vmac64 template, as it has no known users. It also continues
to have longstanding bugs such as alignment violations (see
https://lore.kernel.org/r/20241226134847.6690-1-evepolonium@gmail.com/).
This code was added in 2009 by commit f1939f7c56 ("crypto: vmac - New
hash algorithm for intel_txt support"). Based on the mention of
intel_txt support in the commit title, it seems it was added as a
prerequisite for the contemporaneous patch
"intel_txt: add s3 userspace memory integrity verification"
(https://lore.kernel.org/r/4ABF2B50.6070106@intel.com/). In the design
proposed by that patch, when an Intel Trusted Execution Technology (TXT)
enabled system resumed from suspend, the "tboot" trusted executable
launched the Linux kernel without verifying userspace memory, and then
the Linux kernel used VMAC to verify userspace memory.
However, that patch was never merged, as reviewers had objected to the
design. It was later reworked into commit 4bd96a7a81 ("x86, tboot:
Add support for S3 memory integrity protection") which made tboot verify
the memory instead. Thus the VMAC support in Linux was never used.
No in-tree user has appeared since then, other than potentially the
usual components that allow specifying arbitrary hash algorithms by
name, namely AF_ALG and dm-integrity. However there are no indications
that VMAC is being used with these components. Debian Code Search and
web searches for "vmac64" (the actual algorithm name) do not return any
results other than the kernel itself, suggesting that it does not appear
in any other code or documentation. Explicitly grepping the source code
of the usual suspects (libell, iwd, cryptsetup) finds no matches either.
Before 2018, the vmac code was also completely broken due to using a
hardcoded nonce and the wrong endianness for the MAC. It was then fixed
by commit ed331adab3 ("crypto: vmac - add nonced version with big
endian digest") and commit 0917b87312 ("crypto: vmac - remove insecure
version with hardcoded nonce"). These were intentionally breaking
changes that changed all the computed MAC values as well as the
algorithm name ("vmac" to "vmac64"). No complaints were ever received
about these breaking changes, strongly suggesting the absence of users.
The reason I had put some effort into fixing this code in 2018 is
because it was used by an out-of-tree driver. But if it is still needed
in that particular out-of-tree driver, the code can be carried in that
driver instead. There is no need to carry it upstream.
Cc: Atharva Tiwari <evepolonium@gmail.com>
Cc: Shane Wang <shane.wang@intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_enabled_disabled() helper
function.
Use pr_info() instead of printk(KERN_INFO) to silence a checkpatch
warning.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since this code is zero-initializing the algorithm struct, the
assignment of 0 to cra_alignmask is redundant. Remove it to reduce the
number of matches that are found when grepping for cra_alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found when
grepping for cra_alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
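The conversion has the form (illustrative):

    - x = be32_to_cpu(*(const __be32 *)p); /* needs cra_alignmask */
    + x = get_unaligned_be32(p);           /* no alignment requirement */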
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the physical address support in skcipher_walk is not used anymore,
remove all the code associated with it. This includes:
- The skcipher_walk_async() and skcipher_walk_complete() functions;
- The SKCIPHER_WALK_PHYS flag and everything conditional on it;
- The buffers, phys, and virt.page fields in struct skcipher_walk;
- struct skcipher_walk_buffer.
As a result, skcipher_walk now just supports virtual addresses.
Physical address support in skcipher_walk is unneeded because drivers
that need physical addresses just use the scatterlists directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As sig is now a standalone type, it no longer needs to have a wide
mask that includes akcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There is no need to hold the crypto mutex when scheduling a self-
test. In fact prior to the patch introducing asynchronous testing,
this was done outside of the locked area.
Move the crypto_schedule_test call back out of the locked area.
Also move crypto_remove_final to the else branch under the schedule-
test call as the list of algorithms to be removed is non-empty only
when the test larval is NULL (i.e., testing is disabled).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
During the boot process self-tests are postponed so that all
algorithms are registered when the test starts. In the event
that algorithms are still being registered during these tests,
which can occur either because the algorithm is registered at
late_initcall, or because a self-test itself triggers the creation
of an instance, some self-tests may never start at all.
Fix this by setting the flag at the start of crypto_start_tests.
Note that this race is theoretical and has never been observed
in practice.
Fixes: adad556efc ("crypto: api - Fix built-in testing dependency failures")
Signed-off-by: Herbert Xu <herbert.xu@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As virtual addresses in general may not be suitable for DMA, always
perform a copy before using them in an SG list.
Fixes: 1e562deace ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
Reported-by: Zorro Lang <zlang@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
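A rough sketch of the resulting pattern (variable names hypothetical):
copy the caller's buffer into freshly allocated, DMA-safe kernel memory
before wiring it into the SG list.

/* 'src' may be a stack or vmalloc address; make a linear copy first */
u8 *tmp = kmemdup(src, len, GFP_KERNEL);
if (!tmp)
        return -ENOMEM;
sg_init_one(&sg, tmp, len);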
Now that crc_t10dif_update() may be directly optimized for each
architecture, make the shash driver for crct10dif register a
crct10dif-$arch algorithm that uses it, instead of only
crct10dif-generic which uses crc_t10dif_generic().
The result is that architecture-optimized crct10dif will remain
available through the shash API once the architectures implement
crc_t10dif_arch() instead of wiring their optimized code into the
shash API directly.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20241202012056.209768-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
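Such a crct10dif-$arch shash driver reduces to a thin wrapper around the
library; a sketch (callback name hypothetical):

static int crct10dif_arch_update(struct shash_desc *desc, const u8 *data,
                                 unsigned int len)
{
        u16 *crcp = shash_desc_ctx(desc);

        /* delegates to the (potentially arch-optimized) library call */
        *crcp = crc_t10dif_update(*crcp, data, len);
        return 0;
}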
In preparation for making the CRC-T10DIF library directly optimized for
each architecture, like what has been done for CRC32, get rid of the
weird layering where crc_t10dif_update() calls into the crypto API.
Instead, move crc_t10dif_generic() into the crc-t10dif library module,
and make crc_t10dif_update() just call crc_t10dif_generic().
Acceleration will be reintroduced via crc_t10dif_arch() in the following
patches.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20241202012056.209768-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
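In effect, the library entry point becomes a one-line wrapper until the
arch hooks land; a sketch:

u16 crc_t10dif_update(u16 crc, const u8 *p, size_t len)
{
        /* generic table-based implementation, now local to the library */
        return crc_t10dif_generic(crc, p, len);
}
EXPORT_SYMBOL(crc_t10dif_update);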
Instead of registering the crc32-$arch and crc32c-$arch algorithms if
the arch-specific code was built, only register them when that code was
built *and* is not falling back to the base implementation at runtime.
This avoids confusing users like btrfs which checks the shash driver
name to determine whether it is crc32c-generic.
(It would also make sense to change btrfs to test the crc32_optimization
flags itself, so that it doesn't have to use the weird hack of parsing
the driver name. This change still makes sense either way though.)
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20241202010844.144356-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
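A hedged sketch of the registration logic described above (the
crc32_optimizations() helper, flag name and alg name are assumptions
based on this series):

static int __init crc32_mod_init(void)
{
        /* register crc32-$arch only when the arch code is actually
         * used at runtime rather than falling back to the base code */
        if (crc32_optimizations() & CRC32_LE_OPTIMIZATION)
                return crypto_register_shash(&crc32_arch_alg);
        return 0;
}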
Currently the CRC32 library functions are defined as weak symbols, and
the arm64 and riscv architectures override them.
This method of arch-specific overrides has the limitation that it only
works when both the base and arch code are built-in. Also, it makes
the arch-specific code silently go unused if it is accidentally built
with lib-y instead of obj-y; unfortunately the RISC-V code does this.
This commit reorganizes the code to have explicit *_arch() functions
that are called when they are enabled, similar to how some of the crypto
library code works (e.g. chacha_crypt() calls chacha_crypt_arch()).
Make the existing kconfig choice for the CRC32 implementation also
control whether the arch-optimized implementation (if one is available)
is enabled or not. Make it enabled by default if CRC32 is also enabled.
The result is that arch-optimized CRC32 library functions will be
included automatically when appropriate, but it is now possible to
disable them. They can also now be built as a loadable module if the
CRC32 library functions happen to be used only by loadable modules, in
which case the arch and base CRC32 modules will be automatically loaded
via direct symbol dependency when appropriate.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20241202010844.144356-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
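The resulting call pattern is roughly the following sketch, modeled on
the chacha_crypt() example above:

static inline u32 crc32_le(u32 crc, const u8 *p, size_t len)
{
        if (IS_ENABLED(CONFIG_CRC32_ARCH))
                return crc32_le_arch(crc, p, len);      /* arch-optimized */
        return crc32_le_base(crc, p, len);              /* generic C */
}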
Remove the leading underscores from __crc32c_le_base().
This is in preparation for adding crc32c_le_arch() and eventually
renaming __crc32c_le() to crc32c_le().
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20241202010844.144356-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Merge tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random
Pull random number generator updates from Jason Donenfeld:
"This contains a single series from Uros to replace uses of
<linux/random.h> with prandom.h or other more specific headers
as needed, in order to avoid a circular header issue.
Uros' goal is to be able to use percpu.h from prandom.h, which
will then allow him to define __percpu in percpu.h rather than
in compiler_types.h"
* tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
prandom: Include <linux/percpu.h> in <linux/prandom.h>
random: Do not include <linux/prandom.h> in <linux/random.h>
netem: Include <linux/prandom.h> in sch_netem.c
lib/test_scanf: Include <linux/prandom.h> instead of <linux/random.h>
lib/test_parman: Include <linux/prandom.h> instead of <linux/random.h>
bpf/tests: Include <linux/prandom.h> instead of <linux/random.h>
lib/rbtree-test: Include <linux/prandom.h> instead of <linux/random.h>
random32: Include <linux/prandom.h> instead of <linux/random.h>
kunit: string-stream-test: Include <linux/prandom.h>
lib/interval_tree_test.c: Include <linux/prandom.h> instead of <linux/random.h>
bpf: Include <linux/prandom.h> instead of <linux/random.h>
scsi: libfcoe: Include <linux/prandom.h> instead of <linux/random.h>
fscrypt: Include <linux/once.h> in fs/crypto/keyring.c
mtd: tests: Include <linux/prandom.h> instead of <linux/random.h>
media: vivid: Include <linux/prandom.h> in vivid-vid-cap.c
drm/lib: Include <linux/prandom.h> instead of <linux/random.h>
drm/i915/selftests: Include <linux/prandom.h> instead of <linux/random.h>
crypto: testmgr: Include <linux/prandom.h> instead of <linux/random.h>
x86/kaslr: Include <linux/prandom.h> instead of <linux/random.h>
Merge tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Add sig driver API
- Remove signing/verification from akcipher API
- Move crypto_simd_disabled_for_test to lib/crypto
- Add WARN_ON for return values from drivers that indicate memory
corruption
Algorithms:
- Provide crc32-arch and crc32c-arch through Crypto API
- Optimise crc32c code size on x86
- Optimise crct10dif on arm/arm64
- Optimise p10-aes-gcm on powerpc
- Optimise aegis128 on x86
- Output full sample from test interface in jitter RNG
- Retry without padata when it fails in pcrypt
Drivers:
- Add support for Airoha EN7581 TRNG
- Add support for STM32MP25x platforms in stm32
- Enable iproc-r200 RNG driver on BCMBCA
- Add Broadcom BCM74110 RNG driver"
* tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
crypto: marvell/cesa - fix uninit value for struct mv_cesa_op_ctx
crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
crypto: aesni - Move back to module_init
crypto: lib/mpi - Export mpi_set_bit
crypto: aes-gcm-p10 - Use the correct bit to test for P10
hwrng: amd - remove reference to removed PPC_MAPLE config
crypto: arm/crct10dif - Implement plain NEON variant
crypto: arm/crct10dif - Macroify PMULL asm code
crypto: arm/crct10dif - Use existing mov_l macro instead of __adrl
crypto: arm64/crct10dif - Remove remaining 64x64 PMULL fallback code
crypto: arm64/crct10dif - Use faster 16x64 bit polynomial multiply
crypto: arm64/crct10dif - Remove obsolete chunking logic
crypto: bcm - add error check in the ahash_hmac_init function
crypto: caam - add error check to caam_rsa_set_priv_key_form
hwrng: bcm74110 - Add Broadcom BCM74110 RNG driver
dt-bindings: rng: add binding for BCM74110 RNG
padata: Clean up in padata_do_multithreaded()
crypto: inside-secure - Fix the return value of safexcel_xcbcmac_cra_init()
crypto: qat - Fix missing destroy_workqueue in adf_init_aer()
crypto: rsassa-pkcs1 - Reinstate support for legacy protocols
...
Commit 1e562deace ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
enforced that rsassa-pkcs1 sign/verify operations specify a hash
algorithm. That is necessary because per RFC 8017 sec 8.2, a hash
algorithm identifier must be prepended to the hash before generating or
verifying the signature ("Full Hash Prefix").
However the commit went too far in that it changed user space behavior:
KEYCTL_PKEY_QUERY system calls now return -EINVAL unless they specify a
hash algorithm. Intel Wireless Daemon (iwd) is one application issuing
such system calls (for EAP-TLS).
Closer analysis of the Embedded Linux Library (ell) used by iwd reveals
that the problem runs even deeper: When iwd uses TLS 1.1 or earlier, it
not only queries for keys, but performs sign/verify operations without
specifying a hash algorithm. These legacy TLS versions concatenate an
MD5 to a SHA-1 hash and omit the Full Hash Prefix:
https://git.kernel.org/pub/scm/libs/ell/ell.git/tree/ell/tls-suites.c#n97
TLS 1.1 was deprecated in 2021 by RFC 8996, but removal of support was
inadvertent in this case. It probably should be coordinated with iwd
maintainers first.
So reinstate support for such legacy protocols by defaulting to hash
algorithm "none" which uses an empty Full Hash Prefix.
If it is later on decided to remove TLS 1.1 support but still allow
KEYCTL_PKEY_QUERY without a hash algorithm, that can be achieved by
reverting the present commit and replacing it with the following patch:
https://lore.kernel.org/r/ZxalYZwH5UiGX5uj@wunner.de/
It's worth noting that Python's cryptography library gained support for
such legacy use cases very recently, so they do seem to still be a thing.
The Python developers identified IKE version 1 as another protocol
omitting the Full Hash Prefix:
https://github.com/pyca/cryptography/issues/10226
https://github.com/pyca/cryptography/issues/5495
The author of those issues, Zoltan Kelemen, spent considerable effort
searching for test vectors but only found one in a 2019 blog post by
Kevin Jones. Add it to testmgr.h to verify correctness of this feature.
Examination of wpa_supplicant as well as various IKE daemons (libreswan,
strongswan, isakmpd, raccoon) has determined that none of them seems to
use the kernel's Key Retention Service, so iwd is the only affected user
space application known so far.
Fixes: 1e562deace ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
Reported-by: Klara Modin <klarasmodin@gmail.com>
Tested-by: Klara Modin <klarasmodin@gmail.com>
Closes: https://lore.kernel.org/r/2ed09a22-86c0-4cf0-8bda-ef804ccb3413@gmail.com/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
encrypt_blob(), decrypt_blob() and create_signature() were some of the
functions added in 2018 by
commit 5a30771832 ("KEYS: Provide missing asymmetric key subops for new
key type ops [ver #2]")
however, they've not been used.
Remove them.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move crypto_simd_disabled_for_test to lib/ so that crypto_simd_usable()
can be used by library code.
This was discussed previously
(https://lore.kernel.org/linux-crypto/20220716062920.210381-4-ebiggers@kernel.org/)
but was not done because there was no use case yet. However, this is
now needed for the arm64 CRC32 library code.
Tested with:
export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
echo CONFIG_CRC32=y > .config
echo CONFIG_MODULES=y >> .config
echo CONFIG_CRYPTO=m >> .config
echo CONFIG_DEBUG_KERNEL=y >> .config
echo CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=n >> .config
echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
make olddefconfig
make -j$(nproc)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32c-generic is currently backed by the architecture's CRC-32c library
code, which may offer a variety of implementations depending on the
capabilities of the platform. These are not covered by the crypto
subsystem's fuzz testing capabilities because crc32c-generic is the
reference driver that the fuzzing logic uses as a source of truth.
Fix this by providing a crc32c-arch implementation which is based on the
arch library code if available, and modify crc32c-generic so it is
always based on the generic C implementation. If the arch has no CRC-32c
library code, this change does nothing.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crc32-generic is currently backed by the architecture's CRC-32 library
code, which may offer a variety of implementations depending on the
capabilities of the platform. These are not covered by the crypto
subsystem's fuzz testing capabilities because crc32-generic is the
reference driver that the fuzzing logic uses as a source of truth.
Fix this by providing a crc32-arch implementation which is based on the
arch library code if available, and modify crc32-generic so it is
always based on the generic C implementation. If the arch has no CRC-32
library code, this change does nothing.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the helper functions str_true_false()
and str_enabled_disabled().
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since commit 8f4f68e788 ("crypto: pcrypt - Fix hungtask for
PADATA_RESET"), the pcrypt encryption and decryption operations return
-EAGAIN when the CPU goes online or offline. In alg_test(), a WARN is
generated when pcrypt_aead_decrypt() or pcrypt_aead_encrypt() returns
-EAGAIN, and an unnecessary panic will occur when panic_on_warn is set to 1.
Fix this issue by calling crypto layer directly without parallelization
in that case.
Fixes: 8f4f68e788 ("crypto: pcrypt - Fix hungtask for PADATA_RESET")
Signed-off-by: Yi Yang <yiyang13@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
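A rough sketch of the fallback (request and context names hypothetical):
when padata reports -EAGAIN, bypass parallelization and call the
underlying transform directly.

err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
if (err == -EAGAIN)
        return crypto_aead_encrypt(creq);       /* serial fallback */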
Commit a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test
suite") added support for ECDSA signature verification using NIST P521,
but forgot to amend the Kconfig help text. Fix it.
Fixes: a7d45ba77d ("crypto: ecdsa - Register NIST P521 and extend test suite")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit a2471684da ("crypto: ecdsa - Move X9.62 signature size
calculation into template") introduced ->max_size() and ->digest_size()
callbacks to struct sig_alg. They return an algorithm's maximum
signature size and digest size, respectively.
For algorithms which lack these callbacks, crypto_register_sig() was
amended to use the ->key_size() callback instead.
However the commit neglected to also amend sig_register_instance().
As a result, the ->max_size() and ->digest_size() callbacks remain NULL
pointers if instances do not define them. A KEYCTL_PKEY_QUERY system
call results in an oops for such instances:
BUG: kernel NULL pointer dereference, address: 0000000000000000
Call Trace:
software_key_query+0x169/0x370
query_asymmetric_key+0x67/0x90
keyctl_pkey_query+0x86/0x120
__do_sys_keyctl+0x428/0x480
do_syscall_64+0x4b/0x110
The only instances affected by this are "pkcs1(rsa, ...)".
Fix by moving the callback checks from crypto_register_sig() to
sig_prepare_alg(), which is also invoked by sig_register_instance().
Change the return type of sig_prepare_alg() from void to int to be able
to return errors. This matches other algorithm types, see e.g.
aead_prepare_alg() or ahash_prepare_alg().
Fixes: a2471684da ("crypto: ecdsa - Move X9.62 signature size calculation into template")
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
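A simplified sketch of the consolidated fallback logic (details assumed):

static int sig_prepare_alg(struct sig_alg *alg)
{
        /* fall back to ->key_size() when an instance omits the
         * callbacks, so ->max_size()/->digest_size() are never NULL */
        if (!alg->max_size)
                alg->max_size = alg->key_size;
        if (!alg->digest_size)
                alg->digest_size = alg->key_size;
        return 0;
}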
The Jitter RNG time delta is computed based on the difference of two
high-resolution, 64-bit time stamps. However, the test interface added
in 69f1c387ba only outputs the lower 32 bits of those time stamps. To
ensure all information is available during the evaluation process of
the Jitter RNG, output the full 64-bit time stamps.
Any clients collecting data from the test interface will need to be
updated to take this change into account.
Additionally, the size of the temporary buffer that holds the data for
user space has been clarified. Previously, this buffer was
JENT_TEST_RINGBUFFER_SIZE (= 1000) bytes in size, however that value
represents the number of samples held in the kernel space ring buffer,
with each sample taking 8 (previously 4) bytes.
Rather than increasing the size to allow for all 1000 samples to be
output, we keep it at 1000 bytes, but clarify that this means at most
125 64-bit samples will be output every time this interface is called.
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The previous patch removed the ENOENT warning at the point of
allocation, but the overall self-test warning is still there.
Fix all of them by returning zero as the test result. This is
safe because if the algorithm has gone away, then it cannot be
marked as tested.
Fixes: 4eded6d14f ("crypto: testmgr - Hide ENOENT errors")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As algorithm testing is carried out without holding the main crypto
lock, it is always possible for the algorithm to go away during the
test.
So before crypto_alg_tested updates the status of the tested alg,
it checks whether it's still on the list of all algorithms. This
is inaccurate because it may be off the main list but still on the
list of algorithms to be removed.
Updating the algorithm status is safe per se as the larval still
holds a reference to it. However, killing spawns of other algorithms
that are of lower priority is clearly a deficiency as it adds
unnecessary churn.
Fix the test by checking whether the algorithm is dead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
software_key_query() returns the curve size as maximum signature size
for ecrdsa. However it should return twice as much.
It's only the maximum signature size that seems to be off. The maximum
digest size is likewise set to the curve size, but that's correct as it
matches the checks in ecrdsa_set_pub_key() and ecrdsa_verify().
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Alternatively to the X9.62 encoding of ecdsa signatures, which uses
ASN.1 and is already supported by the kernel, there's another common
encoding called P1363. It stores r and s as the concatenation of two
big endian, unsigned integers. The name originates from IEEE P1363.
Add a P1363 template in support of the forthcoming SPDM library
(Security Protocol and Data Model) for PCI device authentication.
P1363 is prescribed by SPDM 1.2.1 margin no 44:
"For ECDSA signatures, excluding SM2, in SPDM, the signature shall be
the concatenation of r and s. The size of r shall be the size of
the selected curve. Likewise, the size of s shall be the size of
the selected curve. See BaseAsymAlgo in NEGOTIATE_ALGORITHMS for
the size of r and s. The byte order for r and s shall be in big
endian order. When placing ECDSA signatures into an SPDM signature
field, r shall come first followed by s."
Link: https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
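Decoding P1363 is correspondingly trivial; a sketch (helper name
hypothetical):

/* a P1363 signature is r || s, each exactly curve_size bytes, big endian */
static int p1363_split(const u8 *sig, unsigned int sig_len,
                       unsigned int curve_size,
                       const u8 **r, const u8 **s)
{
        if (sig_len != 2 * curve_size)
                return -EINVAL;
        *r = sig;
        *s = sig + curve_size;
        return 0;
}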
software_key_query() returns the maximum signature and digest size for a
given key to user space. When it only supported RSA keys, calculating
those sizes was trivial as they were always equivalent to the key size.
However when ECDSA was added, the function grew somewhat complicated
calculations which take the ASN.1 encoding and curve into account.
This doesn't scale well and adjusting the calculations is easily
forgotten when adding support for new encodings or curves. In fact,
when NIST P521 support was recently added, the function was initially
not amended:
https://lore.kernel.org/all/b749d5ee-c3b8-4cbd-b252-7773e4536e07@linux.ibm.com/
Introduce a ->max_size() callback to struct sig_alg and take advantage
of it to move the signature size calculations to ecdsa-x962.c.
Introduce a ->digest_size() callback to struct sig_alg and move the
maximum ECDSA digest size to ecdsa.c. It is common across ecdsa-x962.c
and the upcoming ecdsa-p1363.c and thus inherited by both of them.
For all other algorithms, continue using the key size as maximum
signature and digest size.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_sig_maxsize() is a bit of a misnomer as it doesn't return the
maximum signature size, but rather the key size.
Rename it to crypto_sig_keysize(), along with all implementations of the
->max_size callback.
A subsequent commit introduces a crypto_sig_maxsize() function which
returns the actual maximum signature size.
While at it, change the return type of crypto_sig_keysize() from int to
unsigned int for consistency with crypto_akcipher_maxsize(). None of
the callers checks for a negative return value and an error condition
can always be indicated by returning zero.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike the rsa driver, which separates signature decoding and
signature verification into two steps, the ecdsa driver does both in one.
This restricts users to the one signature format currently supported
(X9.62) and prevents addition of others such as P1363, which is needed
by the forthcoming SPDM library (Security Protocol and Data Model) for
PCI device authentication.
Per Herbert's suggestion, change ecdsa to use a "raw" signature encoding
and then implement X9.62 and P1363 as templates which convert their
respective encodings to the raw one. One may then specify
"x962(ecdsa-nist-XXX)" or "p1363(ecdsa-nist-XXX)" to pick the encoding.
The present commit moves X9.62 decoding to a template. A separate
commit is going to introduce another template for P1363 decoding.
The ecdsa driver internally represents a signature as two u64 arrays of
size ECC_MAX_BYTES. This appears to be the most natural choice for the
raw format as it can directly be used for verification without having to
further decode signature data or copy it around.
Repurpose all the existing test vectors for "x962(ecdsa-nist-XXX)" and
create a duplicate of them to test the raw encoding.
Link: https://lore.kernel.org/all/ZoHXyGwRzVvYkcTP@gondor.apana.org.au/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When extracting a signature component r or s from an ASN.1-encoded
integer, ecdsa_get_signature_rs() subtracts the expected length
"bufsize" from the ASN.1 length "vlen" (both of unsigned type size_t)
and stores the result in "diff" (of signed type ssize_t).
This results in a signed integer overflow if vlen > SSIZE_MAX + bufsize.
The kernel is compiled with -fno-strict-overflow, which implies -fwrapv,
meaning signed integer overflow is not undefined behavior. And the
function does check for overflow:
if (-diff >= bufsize)
return -EINVAL;
So the code is fine in principle but not very obvious. In the future it
might trigger a false-positive with CONFIG_UBSAN_SIGNED_WRAP=y.
Avoid by comparing the two unsigned variables directly and erroring out
if "vlen" is too large.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
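The change boils down to the following pattern (a sketch; the exact bound
must allow for a possible leading zero byte in the ASN.1 INTEGER, hence
the assumed +1):

/* before: well-defined under -fwrapv, but obscure */
ssize_t diff = vlen - bufsize;
if (-diff >= bufsize)
        return -EINVAL;

/* after: compare the unsigned values directly */
if (vlen > bufsize + 1)
        return -EINVAL;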
The crypto_sig_*() API calls lived in sig.c so far because they needed
access to struct crypto_sig_type: This was necessary to differentiate
between signature algorithms that had already been migrated from
crypto_akcipher to crypto_sig and those that hadn't yet.
Now that all algorithms have been migrated, the API calls can become
static inlines in <crypto/sig.h> to mimic what <crypto/akcipher.h> is
doing.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A sig_alg backend has just been introduced and all asymmetric
sign/verify algorithms have been migrated to it.
The sign/verify operations can thus be dropped from akcipher_alg.
It is now purely for asymmetric encrypt/decrypt.
Move struct crypto_akcipher_sync_data from internal.h to akcipher.c and
unexport crypto_akcipher_sync_{prep,post}(): They're no longer used by
sig.c but only locally in akcipher.c.
In crypto_akcipher_sync_{prep,post}(), drop various NULL pointer checks
for data->dst as they were only necessary for the verify operation.
In the crypto_sig_*() API calls, remove the forks that were necessary
while algorithms were converted from crypto_akcipher to crypto_sig
one by one.
In struct akcipher_testvec, remove the "params", "param_len" and "algo"
elements as they were only needed for the ecrdsa verify operation.
Remove corresponding dead code from test_akcipher_one() as well.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When constructing the EMSA-PKCS1-v1_5 padding for the sign operation,
a buffer for the padding is allocated and the Full Hash Prefix is copied
into it. The padding is then passed to the RSA decrypt operation as an
sglist entry which is succeeded by a second sglist entry for the hash.
Actually copying the hash prefix around is completely unnecessary.
It can simply be referenced from a third sglist entry which sits
in-between the padding and the digest.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
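A sketch of the resulting three-entry sglist (variable names hypothetical):

sg_init_table(sg, 3);
sg_set_buf(&sg[0], padding, pad_len);           /* allocated framing bytes */
sg_set_buf(&sg[1], hash_prefix->data,           /* referenced, not copied */
           hash_prefix->size);
sg_set_buf(&sg[2], digest, dlen);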
The RSASSA-PKCS1-v1_5 sign operation currently only checks that the
digest length is less than "key_size - hash_prefix->size - 11".
The verify operation merely checks that it's more than zero.
Actually the precise digest length is known because the hash algorithm
is specified upon instance creation and the digest length is encoded
into the final byte of the hash algorithm's Full Hash Prefix.
So check for the exact digest length rather than solely relying on
imprecise maximum/minimum checks.
Keep the maximum length check for the sign operation as a safety net,
but drop the now unnecessary minimum check for the verify operation.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
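Since the trailing byte of the Full Hash Prefix encodes the digest length,
the check reduces to a comparison like this sketch:

/* the prefix's final byte is the expected digest length */
if (dlen != hash_prefix->data[hash_prefix->size - 1])
        return -EINVAL;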
A sig_alg backend has just been introduced with the intent of moving all
asymmetric sign/verify algorithms to it one by one.
Migrate the sign/verify operations from rsa-pkcs1pad.c to a separate
rsassa-pkcs1.c which uses the new backend.
Consequently there are now two templates which build on the "rsa"
akcipher_alg:
* The existing "pkcs1pad" template, which is instantiated as an
akcipher_instance and retains the encrypt/decrypt operations of
RSAES-PKCS1-v1_5 (RFC 8017 sec 7.2).
* The new "pkcs1" template, which is instantiated as a sig_instance
and contains the sign/verify operations of RSASSA-PKCS1-v1_5
(RFC 8017 sec 8.2).
In a separate step, rsa-pkcs1pad.c could optionally be renamed to
rsaes-pkcs1.c for clarity. Additional "oaep" and "pss" templates
could be added for RSAES-OAEP and RSASSA-PSS.
Note that it's currently allowed to allocate a "pkcs1pad(rsa)" transform
without specifying a hash algorithm. That makes sense if the transform
is only used for encrypt/decrypt and continues to be supported. But for
sign/verify, such transforms previously did not insert the Full Hash
Prefix into the padding. The resulting message encoding was incompliant
with EMSA-PKCS1-v1_5 (RFC 8017 sec 9.2) and therefore nonsensical.
From here on in, it is no longer allowed to allocate a transform without
specifying a hash algorithm if the transform is used for sign/verify
operations. This simplifies the code because the insertion of the Full
Hash Prefix is no longer optional, so various "if (digest_info)" clauses
can be removed.
There has been a previous attempt to forbid transform allocation without
specifying a hash algorithm, namely by commit c0d20d22e0 ("crypto:
rsa-pkcs1pad - Require hash to be present"). It had to be rolled back
with commit b3a8c8a5eb ("crypto: rsa-pkcs1pad: Allow hash to be
optional [ver #2]"), presumably because it broke allocation of a
transform which was solely used for encrypt/decrypt, not sign/verify.
Avoid such breakage by allowing transform allocation for encrypt/decrypt
with and without specifying a hash algorithm (and simply ignoring the
hash algorithm in the former case).
So again, specifying a hash algorithm is now mandatory for sign/verify,
but optional and ignored for encrypt/decrypt.
The new sig_alg API uses kernel buffers instead of sglists, which
avoids the overhead of copying signature and digest from sglists back
into kernel buffers. rsassa-pkcs1.c is thus simplified quite a bit.
sig_alg is always synchronous, whereas the underlying "rsa" akcipher_alg
may be asynchronous. So await the result of the akcipher_alg, similar
to crypto_akcipher_sync_{en,de}crypt().
As part of the migration, rename "rsa_digest_info" to "hash_prefix" to
adhere to the spec language in RFC 9580. Otherwise keep the code
unmodified wherever possible to ease reviewing and bisecting. Leave
several simplification and hardening opportunities to separate commits.
rsassa-pkcs1.c uses modern __free() syntax for allocation of buffers
which need to be freed by kfree_sensitive(), hence a DEFINE_FREE()
clause for kfree_sensitive() is introduced herein as a byproduct.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
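The DEFINE_FREE() clause pairs kfree_sensitive() with the cleanup
attribute, roughly as follows; the usage lines are a hypothetical
illustration:

DEFINE_FREE(kfree_sensitive, void *, if (_T) kfree_sensitive(_T))

/* usage: buffer is zeroed and freed automatically on scope exit */
u8 *in_buf __free(kfree_sensitive) = kmalloc(len, GFP_KERNEL);
if (!in_buf)
        return -ENOMEM;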
pkcs1pad_set_pub_key() and pkcs1pad_set_priv_key() are almost identical.
The upcoming migration of sign/verify operations from rsa-pkcs1pad.c
into a separate crypto_template will require another copy of the exact
same functions. When RSASSA-PSS and RSAES-OAEP are introduced, each
will need yet another copy.
Deduplicate the functions into a single one which lives in a common
header file for reuse by RSASSA-PKCS1-v1_5, RSASSA-PSS and RSAES-OAEP.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A sig_alg backend has just been introduced with the intent of moving all
asymmetric sign/verify algorithms to it one by one.
Migrate ecrdsa.c to the new backend.
One benefit of the new API is the use of kernel buffers instead of
sglists, which avoids the overhead of copying signature and digest
sglists back into kernel buffers. ecrdsa.c is thus simplified quite
a bit.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
A sig_alg backend has just been introduced with the intent of moving all
asymmetric sign/verify algorithms to it one by one.
Migrate ecdsa.c to the new backend.
One benefit of the new API is the use of kernel buffers instead of
sglists, which avoids the overhead of copying signature and digest
sglists back into kernel buffers. ecdsa.c is thus simplified quite
a bit.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 6cb8815f41 ("crypto: sig - Add interface for sign/verify")
began a transition of asymmetric sign/verify operations from
crypto_akcipher to a new crypto_sig frontend.
Internally, the crypto_sig frontend still uses akcipher_alg as backend,
however:
"The link between sig and akcipher is meant to be temporary. The
plan is to create a new low-level API for sig and then migrate
the signature code over to that from akcipher."
https://lore.kernel.org/r/ZrG6w9wsb-iiLZIF@gondor.apana.org.au/
"having a separate alg for sig is definitely where we want to
be since there is very little that the two types actually share."
https://lore.kernel.org/r/ZrHlpz4qnre0zWJO@gondor.apana.org.au/
Take the next step of that migration and augment the crypto_sig frontend
with a sig_alg backend to which all algorithms can be moved.
During the migration, there will briefly be signature algorithms that
are still based on crypto_akcipher, whilst others are already based on
crypto_sig. Allow for that by building a fork into crypto_sig_*() API
calls (i.e. crypto_sig_maxsize() and friends) such that one of the two
backends is selected based on the transform's cra_type.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The ECDSA test vectors contain "params", "param_len" and "algo" elements
even though ecdsa.c doesn't make any use of them. The only algorithm
implementation using those elements is ecrdsa.c.
Drop the unused test vector elements.
For the curious, "params" is an ASN.1 SEQUENCE of OID_id_ecPublicKey
and a second OID identifying the curve. For example:
"\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
"\xce\x3d\x03\x01\x01"
... decodes to:
SEQUENCE (OID_id_ecPublicKey, OID_id_prime192v1)
The curve OIDs used in those "params" elements are unsurprisingly:
OID_id_prime192v1 (2a8648ce3d030101)
OID_id_prime256v1 (2a8648ce3d030107)
OID_id_ansip384r1 (2b81040022)
OID_id_ansip521r1 (2b81040023)
Those are just different names for secp192r1, secp256r1, secp384r1 and
secp521r1, respectively, per RFC 8422 appendix A:
https://www.rfc-editor.org/rfc/rfc8422#appendix-A
The entries for secp384r1 and secp521r1 curves contain a useful code
comment calling out the curve and hash. Add analogous code comments
to secp192r1 and secp256r1 curve entries.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Substitute the inclusion of <linux/random.h> header with
<linux/prandom.h> to allow the removal of legacy inclusion
of <linux/prandom.h> from <linux/random.h>.
Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
asm/unaligned.h is always an include of asm-generic/unaligned.h;
might as well move that thing to linux/unaligned.h and include
that - there's nothing arch-specific in that header.
auto-generated by the following:
for i in `git grep -l -w asm/unaligned.h`; do
sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
done
for i in `git grep -l -w asm-generic/unaligned.h`; do
sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
done
git mv include/asm-generic/unaligned.h include/linux/unaligned.h
git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
In find_asymmetric_key(), if all NULLs are passed in the id_{0,1,2}
arguments, the kernel will first emit WARN but then have an oops
because id_2 gets dereferenced anyway.
Add the missing id_2 check and move WARN_ON() to the final else branch
to avoid duplicate NULL checks.
Found by Linux Verification Center (linuxtesting.org) with Svace static
analysis tool.
Cc: stable@vger.kernel.org # v5.17+
Fixes: 7d30198ee2 ("keys: X.509 public key issuer lookup without AKID")
Suggested-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: Roman Smirnov <r.smirnov@omp.ru>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
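A sketch of the corrected control flow:

if (id_0)
        lookup = id_0;
else if (id_1)
        lookup = id_1;
else if (id_2)
        lookup = id_2;          /* previously dereferenced without a check */
else {
        WARN_ON(1);             /* moved here: all three ids are NULL */
        return ERR_PTR(-EINVAL);
}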
The code in crypto_aegis128_process_crypt() had an indentation
issue where spaces were used instead of tabs. This commit
corrects the indentation to use tabs, adhering to the
Linux kernel coding style guidelines.
Issue reported by checkpatch:
- ERROR: code indent should use tabs where possible
No functional changes are intended.
Signed-off-by: Riyan Dhiman <riyandhiman14@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When a crypto algorithm with a higher priority is registered, it
kills the spawns of all lower-priority algorithms. Thus it is to
be expected for an algorithm to go away at any time, even during
a self-test. This is now much more common with asynchronous testing.
Remove the printk when an ENOENT is encountered during a self-test.
This is not really an error since the algorithm being tested is no
longer there (i.e., it didn't fail the test which is what we care
about).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On Fri, Aug 30, 2024 at 10:51:54AM -0700, Eric Biggers wrote:
>
> Given below in defconfig form, use 'make olddefconfig' to apply. The failures
> are nondeterministic and sometimes there are different ones, for example:
>
> [ 0.358017] alg: skcipher: failed to allocate transform for cbc(twofish-generic): -2
> [ 0.358365] alg: self-tests for cbc(twofish) using cbc(twofish-generic) failed (rc=-2)
> [ 0.358535] alg: skcipher: failed to allocate transform for cbc(camellia-generic): -2
> [ 0.358918] alg: self-tests for cbc(camellia) using cbc(camellia-generic) failed (rc=-2)
> [ 0.371533] alg: skcipher: failed to allocate transform for xts(ecb(aes-generic)): -2
> [ 0.371922] alg: self-tests for xts(aes) using xts(ecb(aes-generic)) failed (rc=-2)
>
> Modules are not enabled, maybe that matters (I haven't checked yet).
Yes I think that was the key. This triggers a massive self-test
run which executes in parallel and reveals a few race conditions
in the system. I think it boils down to the following scenario:
Base algorithm X-generic, X-optimised
Template Y
Optimised algorithm Y-X-optimised
Everything gets registered, and then the self-tests are started.
When Y-X-optimised gets tested, it requests the creation of the
generic Y(X-generic). Which then itself undergoes testing.
The race is that after Y(X-generic) gets registered, but just
before it gets tested, X-optimised finally finishes self-testing
which then causes all spawns of X-generic to be destroyed. So
by the time the self-test for Y(X-generic) comes along, it can
no longer find the algorithm. This error then bubbles up all
the way up to the self-test of Y-X-optimised which then fails.
Note that there is some complexity that I've omitted here because
when the generic self-test fails to find Y(X-generic) it actually
triggers the construction of it again which then fails for various
other reasons (these are not important because the construction
should *not* be triggered at this point).
So in a way the error is expected, and we should probably remove
the pr_err for the case where ENOENT is returned for the algorithm
that we're currently testing.
The solution is two-fold. First when an algorithm undergoes
self-testing it should not trigger its construction. Secondly
if an instance larval fails to materialise due to it being destroyed
by a more optimised algorithm coming along, it should obviously
retry the construction.
Remove the check in __crypto_alg_lookup that stops a larval from
matching new requests based on differences in the mask. It is better
to block new requests even if it is wrong and then simply retry the
lookup. If this ends up being the wrong larval it will sort itself
out during the retry.
Reduce the CRYPTO_ALG_TYPE_MASK bits in type during larval creation
as otherwise LSKCIPHER algorithms may not match SKCIPHER larvals.
Also block the instance creation during self-testing in the function
crypto_larval_lookup by checking for CRYPTO_ALG_TESTED in the mask
field.
Finally change the return value when crypto_alg_lookup fails in
crypto_larval_wait to EAGAIN to redo the lookup.
Fixes: 37da5d0ffa ("crypto: api - Do not wait for tests during registration")
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use the min() macro to simplify the jent_read_entropy() function and
improve its readability.
Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Algorithm registration is usually carried out during module init,
where as little work as possible should be carried out. The SIMD
code violated this rule by allocating a tfm, this then triggers a
full test of the algorithm which may dead-lock in certain cases.
SIMD is only allocating the tfm to get at the alg object, which is
in fact already available as it is what we are registering. Use
that directly and remove the crypto_alloc_tfm call.
Also remove some obsolete and unused SIMD API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As registration is usually carried out during module init, this
is a context where as little work as possible should be carried
out. Testing may trigger module loads of underlying components,
which could even lead back to the module that is registering at
the moment. This may lead to dead-locks outside of the Crypto API.
Avoid this by not waiting for the tests to complete. They will
be scheduled but completion will be asynchronous. Any users will
still wait for completion.
Reported-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In order to allow testing to complete asynchronously after the
registration process, instance larvals need to complete prior
to having a test result. Support this by redoing the lookup for
instance larvals after completion. This should locate the pending
test larval and then repeat the wait on that (if it is still pending).
As the lookup is now repeated there is no longer any need to compute
the fulfilment status and all that code can be removed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The user space Jitter RNG library uses the oversampling rate of 3 which
implies that each time stamp is credited with 1/3 bit of entropy. To
obtain 256 bits of entropy, 768 time stamps need to be sampled. The
increase in OSR is applied based on a report where the Jitter RNG is
used on a system exhibiting a challenging environment to collect
entropy.
This OSR default value is now applied to the Linux kernel version of
the Jitter RNG as well.
The increase in the OSR from 1 to 3 also implies that the Jitter RNG is
now slower by default.
Reported-by: Jeff Barnes <jeffbarnes@microsoft.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fixes: 6637e11e4a ("crypto: rsa - allow only odd e and restrict value in FIPS mode")
Fixes: f145d411a6 ("crypto: rsa - implement Chinese Remainder Theorem for faster private key operation")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that mpi_rshift can return errors, check them.
Fixes: 35d2bf2068 ("crypto: dh - calculate Q from P for the full public key verification")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the __counted_by compiler attribute to the flexible array member
salt to improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
CONFIG_FORTIFY_SOURCE.
Reviewed-by: Kees Cook <kees@kernel.org>
Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit c055e3eae0 ("crypto: xor - use ktime for template benchmarking")
switched from using jiffies to ktime-based performance benchmarking.
This works nicely on machines which have a fine-grained ktime()
clocksource as e.g. x86 machines with TSC.
But other machines, e.g. my 4-way HP PARISC server, don't have such
fine-grained clocksources, which is why it seems that 800 xor loops
take zero seconds, which then shows up in the logs as:
xor: measuring software checksum speed
8regs : -1018167296 MB/sec
8regs_prefetch : -1018167296 MB/sec
32regs : -1018167296 MB/sec
32regs_prefetch : -1018167296 MB/sec
Fix this with some small modifications to the existing code to improve
the algorithm to always produce correct results without introducing
major delays for architectures with a fine-grained ktime()
clocksource:
a) Delay start of the timing until ktime() just advanced. On machines
with a fast ktime() this should be just one additional ktime() call.
b) Count the number of loops. Run at minimum 800 loops and finish
earliest when the ktime() counter has progressed.
With that the throughput can now be calculated more accurately under all
conditions.
Fixes: c055e3eae0 ("crypto: xor - use ktime for template benchmarking")
Signed-off-by: Helge Deller <deller@gmx.de>
Tested-by: John David Anglin <dave.anglin@bell.net>
v2:
- clean up coding style (noticed & suggested by Herbert Xu)
- rephrased & fixed typo in commit message
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
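A sketch of the measurement loop after the fix (the benchmark body is a
hypothetical placeholder):

ktime_t start, now;
unsigned long loops = 0;

/* a) synchronize to a clock edge so coarse clocksources start cleanly */
start = ktime_get();
while (ktime_compare(now = ktime_get(), start) == 0)
        cpu_relax();
start = now;

/* b) run at least 800 loops, and until the clock has advanced */
do {
        do_one_xor_pass();      /* hypothetical work item */
        loops++;
} while (loops < 800 || ktime_compare(ktime_get(), start) == 0);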
Implementations of hash functions often have special cases when lengths
are a multiple of the hash function's internal block size (e.g. 64 for
SHA-256, 128 for SHA-512). Currently, when the fuzz testing code
generates lengths, it doesn't prefer any length mod 64 over any other.
This limits the coverage of these special cases.
Therefore, this patch updates the fuzz testing code to generate
power-of-2 lengths and divide messages exactly in half a bit more often.
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
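One way to express such a bias over generated lengths (a sketch only; the
actual probabilities and RNG plumbing in testmgr differ):

static unsigned int fuzz_len(unsigned int max_len)
{
        switch (get_random_u32_below(4)) {
        case 0: /* prefer a power of two */
                return min(1U << get_random_u32_below(16), max_len);
        case 1: /* prefer an exact half */
                return max_len / 2 ? max_len / 2 : 1;
        default:
                return get_random_u32_below(max_len) + 1;
        }
}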
iaa_crypto depends on the deflate compression algorithm that's provided
by deflate.
If the algorithm is not available because CRYPTO_DEFLATE=m and deflate
is not inserted, iaa_crypto will request "crypto-deflate-generic".
Deflate will not be inserted because "crypto-deflate-generic" is not a
valid alias.
Add deflate-generic and crypto-deflate-generic aliases to deflate.
Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Allow to run skcipher speed for given algorithm.
Case 600 is modified to cover ENCRYPT and DECRYPT
directions.
Example:
modprobe tcrypt mode=600 alg="qat_aes_xts" klen=32
If succeed, the performance numbers will be printed in dmesg:
testing speed of multibuffer qat_aes_xts (qat_aes_xts) encryption
test 0 (256 bit key, 16 byte blocks): 1 operation in 14596 cycles (16 bytes)
...
test 6 (256 bit key, 4096 byte blocks): 1 operation in 8053 cycles (4096 bytes)
Signed-off-by: Sergey Portnoy <sergey.portnoy@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Fix an off-by-one error where the most significant digit was not
initialized leading to signature verification failures by the testmgr.
Example: If a curve requires ndigits (=9) and diff (=2) indicates that
2 digits need to be set to zero then start with digit 'ndigits - diff' (=7)
and clear 'diff' digits starting from there, so 7 and 8.
Reported-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com>
Closes: https://lore.kernel.org/linux-crypto/619bc2de-b18a-4939-a652-9ca886bf6349@linux.ibm.com/T/#m045d8812409ce233c17fcdb8b88b6629c671f9f4
Fixes: 2fd2a82ccb ("crypto: ecdsa - Use ecc_digits_from_bytes to create hash digits array")
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
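In other words (sketch):

/* clear the 'diff' most significant digits, i.e. indices
 * ndigits - diff .. ndigits - 1 (7 and 8 in the example above) */
memset(&out[ndigits - diff], 0, diff * sizeof(out[0]));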
The SM2 algorithm has a single user in the kernel. However, it's
never been integrated properly with that user: asymmetric_keys.
The crux of the issue is that the way it computes its digest with
sm3 does not fit into the architecture of asymmetric_keys. As no
solution has been proposed, remove this algorithm.
It can be resubmitted when it is integrated properly into the
asymmetric_keys subsystem.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since ecc_digits_from_bytes will provide zeros when an insufficient number
of bytes are passed in the input byte array, use it to convert the r and s
components of the signature to digits directly from the input byte
array. This avoids going through an intermediate byte array that has the
first few bytes filled with zeros.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since ecc_digits_from_bytes will provide zeros when an insufficient number
of bytes are passed in the input byte array, use it to create the hash
digits directly from the input byte array. This avoids going through an
intermediate byte array (rawhash) that has the first few bytes filled with
zeros.
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Public key blob is not just x and y concatenated. It follows RFC5480
section 2.2. Address this by re-documenting the function with the
correct description of the format.
Link: https://datatracker.ietf.org/doc/html/rfc5480
Fixes: 4e6602916b ("crypto: ecdsa - Add support for ECDSA signature verification")
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since crypto_shash_setkey(), crypto_ahash_setkey(),
crypto_skcipher_setkey(), and crypto_aead_setkey() apparently need to
work in no-SIMD context on some architectures, make the self-tests cover
this scenario. Specifically, sometimes do the setkey while under
crypto_disable_simd_for_test(), and do this independently from disabling
SIMD for the other parts of the crypto operation since there is no
guarantee that all parts happen in the same context. (I.e., drivers
mustn't store the key in different formats for SIMD vs. no-SIMD.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The boot-test-finished toggle is only necessary if algapi
is built into the kernel. Do not include this code if it is a module.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.10-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"Fix a bug in the new ecc P521 code as well as a buggy fix in qat"
* tag 'v6.10-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ecc - Prevent ecc_digits_from_bytes from reading too many bytes
crypto: qat - Fix ADF_DEV_RESET_SYNC memory leak
Merge tag 'net-accept-more-20240515' of git://git.kernel.dk/linux
Pull more io_uring updates from Jens Axboe:
"This adds support for IORING_CQE_F_SOCK_NONEMPTY for io_uring accept
requests.
This is very similar to previous work that enabled the same hint for
doing receives on sockets. By far the majority of the work here is
refactoring to enable the networking side to pass back whether or not
the socket had more pending requests after accepting the current one,
the last patch just wires it up for io_uring.
Not only does this enable applications to know whether there are more
connections to accept right now, it also enables smarter logic for
io_uring multishot accept on whether to retry immediately or wait for
a poll trigger"
* tag 'net-accept-more-20240515' of git://git.kernel.dk/linux:
io_uring/net: wire up IORING_CQE_F_SOCK_NONEMPTY for accept
net: pass back whether socket was empty post accept
net: have do_accept() take a struct proto_accept_arg argument
net: change proto and proto_ops accept type
Prevent ecc_digits_from_bytes from reading too many bytes from the input
byte array in case an insufficient number of bytes is provided to fill the
output digit array of ndigits. Therefore, initialize the most significant
digits with 0 to avoid trying to read too many bytes later on. Convert the
function into a regular function since it is getting too big for an inline
function.
If too many bytes are provided on the input byte array the extra bytes
are ignored since the input variable 'ndigits' limits the number of digits
that will be filled.
Fixes: d67c96fb97 ("crypto: ecdsa - Convert byte arrays with key coordinates to digits")
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
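A sketch of the guard described above (conversion of the remaining bytes
elided):

void ecc_digits_from_bytes(const u8 *in, unsigned int nbytes,
                           u64 *out, unsigned int ndigits)
{
        int diff = ndigits - DIV_ROUND_UP(nbytes, sizeof(u64));

        /* not enough input bytes: zero the most significant digits */
        while (diff > 0) {
                out[--ndigits] = 0;
                diff--;
        }
        /* ... convert the remaining nbytes into out[0..ndigits-1] ... */
}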
Merge tag 'asymmetric-keys-next-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd
Pull asymmetric keys update from Jarkko Sakkinen:
"Add a self-test testing PCKS#7 signed data against ECDSA key and
couple of bug fixes for missing deps"
* tag 'asymmetric-keys-next-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jarkko/linux-tpmdd:
certs: Add ECDSA signature verification self-test
certs: Move RSA self-test data to separate file
KEYS: asymmetric: Add missing dependencies of FIPS_SIGNATURE_SELFTEST
KEYS: asymmetric: Add missing dependency on CRYPTO_SIG
Merge tag 'net-next-6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"Core & protocols:
- Complete rework of garbage collection of AF_UNIX sockets.
AF_UNIX is prone to forming reference count cycles due to fd
passing functionality. New method based on Tarjan's Strongly
Connected Components algorithm should be both faster and remove a
lot of workarounds we accumulated over the years.
- Add TCP fraglist GRO support, allowing chaining multiple TCP
packets and forwarding them together. Useful for small switches /
routers which lack basic checksum offload in some scenarios (e.g.
PPPoE).
- Support using SMP threads for handling packet backlog i.e. packet
processing from software interfaces and old drivers which don't use
NAPI. This helps move the processing out of the softirq jumble.
- Continue work of converting from rtnl lock to RCU protection.
Don't require rtnl lock when reading: IPv6 routing FIB, IPv6
address labels, netdev threaded NAPI sysfs files, bonding driver's
sysfs files, MPLS devconf, IPv4 FIB rules, netns IDs, tcp metrics,
TC Qdiscs, neighbor entries, ARP entries via ioctl(SIOCGARP), a lot
of the link information available via rtnetlink.
- Small optimizations from Eric to UDP wake up handling, memory
accounting, RPS/RFS implementation, TCP packet sizing etc.
- Allow direct page recycling in the bulk API used by XDP, for +2%
PPS.
- Support peek with an offset on TCP sockets.
- Add MPTCP APIs for querying last time packets were received/sent/acked
and whether MPTCP "upgrade" succeeded on a TCP socket.
- Add intra-node communication shortcut to improve SMC performance.
- Add IPv6 (and IPv{4,6}-over-IPv{4,6}) support to the GTP protocol
driver.
- Add HSR-SAN (RedBOX) mode of operation to the HSR protocol driver.
- Add reset reasons for tracing what caused a TCP reset to be sent.
- Introduce direction attribute for xfrm (IPSec) states. State can be
used either for input or output packet processing.
Things we sprinkled into general kernel code:
- Add bitmap_{read,write}(), bitmap_size(), expose BYTES_TO_BITS().
This required touch-ups and renaming of a few existing users.
- Add Endian-dependent __counted_by_{le,be} annotations.
- Make building selftests "quieter" by printing summaries like
"CC object.o" rather than full commands with all the arguments.
Netfilter:
- Use GFP_KERNEL to clone elements, to deal better with OOM
situations and avoid failures in the .commit step.
BPF:
- Add eBPF JIT for ARCv2 CPUs.
- Support attaching kprobe BPF programs through kprobe_multi link in
a session mode, meaning, a BPF program is attached to both function
entry and return, the entry program can decide if the return
program gets executed and the entry program can share u64 cookie
value with return program. "Session mode" is a common use-case for
tetragon and bpftrace.
- Add the ability to specify and retrieve BPF cookie for raw
tracepoint programs in order to ease migration from classic to raw
tracepoints.
- Add an internal-only BPF per-CPU instruction for resolving per-CPU
memory addresses and implement support in x86, ARM64 and RISC-V
JITs. This allows inlining functions which need to access per-CPU
state.
- Optimize x86 BPF JIT's emit_mov_imm64, and add support for various
atomics in bpf_arena which can be JITed as a single x86
instruction. Support BPF arena on ARM64.
- Add a new bpf_wq API for deferring events and refactor
process-context bpf_timer code to keep common code where possible.
- Harden the BPF verifier's and/or/xor value tracking.
- Introduce crypto kfuncs to let BPF programs call kernel crypto
APIs.
- Support bpf_tail_call_static() helper for BPF programs with GCC 13.
- Add bpf_preempt_{disable,enable}() kfuncs in order to allow a BPF
program to have code sections where preemption is disabled.
Driver API:
- Skip software TC processing completely if all installed rules are
marked as HW-only, instead of checking the HW-only flag rule by
rule.
- Add support for configuring PoE (Power over Ethernet), similar to
the already existing support for PoDL (Power over Data Line)
config.
- Initial bits of a queue control API, for now allowing a single
queue to be reset without disturbing packet flow to other queues.
- Common (ethtool) statistics for hardware timestamping.
Tests and tooling:
- Remove the need to create a config file to run the net forwarding
tests so that a naive "make run_tests" can exercise them.
- Define a method of writing tests which require an external endpoint
to communicate with (to send/receive data towards the test
machine). Add a few such tests.
- Create a shared code library for writing Python tests. Expose the
YAML Netlink library from tools/ to the tests for easy Netlink
access.
- Move netfilter tests under net/, extend them, separate performance
tests from correctness tests, and iron out issues found by running
them "on every commit".
- Refactor BPF selftests to use common network helpers.
- Further work filling in YAML definitions of Netlink messages for:
nftables, team driver, bonding interfaces, vlan interfaces, VF
info, TC u32 mark, TC police action.
- Teach Python YAML Netlink to decode attribute policies.
- Extend the definition of the "indexed array" construct in the specs
to cover arrays of scalars rather than just nests.
- Add hyperlinks between definitions in generated Netlink docs.
Drivers:
- Make sure unsupported flower control flags are rejected by drivers,
and make more drivers report errors directly to the application
rather than dmesg (large number of driver changes from Asbjørn
Sloth Tønnesen).
- Ethernet high-speed NICs:
- Broadcom (bnxt):
- support multiple RSS contexts and steering traffic to them
- support XDP metadata
- make page pool allocations more NUMA aware
- Intel (100G, ice, idpf):
- extract datapath code common among Intel drivers into a library
- use fewer resources in switchdev by sharing queues with the PF
- add PFCP filter support
- add Ethernet filter support
- use a spinlock instead of HW lock in PTP clock ops
- support 5 layer Tx scheduler topology
- nVidia/Mellanox:
- 800G link modes and 100G SerDes speeds
- per-queue IRQ coalescing configuration
- Marvell Octeon:
- support offloading TC packet mark action
- Ethernet NICs consumer, embedded and virtual:
- stop lying about skb->truesize in USB Ethernet drivers, it
messes up TCP memory calculations
- Google cloud vNIC:
- support changing ring size via ethtool
- support ring reset using the queue control API
- VirtIO net:
- expose flow hash from RSS to XDP
- per-queue statistics
- add selftests
- Synopsys (stmmac):
- support controllers which require an RX clock signal from the
MII bus to perform their hardware initialization
- TI:
- icssg_prueth: support ICSSG-based Ethernet on AM65x SR1.0 devices
- icssg_prueth: add SW TX / RX Coalescing based on hrtimers
- cpsw: minimal XDP support
- Renesas (ravb):
- support describing the MDIO bus
- Realtek (r8169):
- add support for RTL8168M
- Microchip Sparx5:
- matchall and flower actions mirred and redirect
- Ethernet switches:
- nVidia/Mellanox:
- improve events processing performance
- Marvell:
- add support for MV88E6250 family internal PHYs
- Microchip:
- add DCB and DSCP mapping support for KSZ switches
- vsc73xx: convert to PHYLINK
- Realtek:
- rtl8226b/rtl8221b: add C45 instances and SerDes switching
- Many driver changes related to PHYLIB and PHYLINK deprecated API
cleanup
- Ethernet PHYs:
- Add a new driver for Airoha EN8811H 2.5 Gigabit PHY.
- micrel: lan8814: add support for PPS out and external timestamp trigger
- WiFi:
- Disable Wireless Extensions (WEXT) in all Wi-Fi 7 device
drivers. Modern devices can only be configured using nl80211.
- mac80211/cfg80211
- handle color change per link for WiFi 7 Multi-Link Operation
- Intel (iwlwifi):
- don't support puncturing in 5 GHz
- support monitor mode on passive channels
- BZ-W device support
- P2P with HE/EHT support
- re-add support for firmware API 90
- provide channel survey information for Automatic Channel Selection
- MediaTek (mt76):
- mt7921 LED control
- mt7925 EHT radiotap support
- mt7920e PCI support
- Qualcomm (ath11k):
- P2P support for QCA6390, WCN6855 and QCA2066
- support hibernation
- ieee80211-freq-limit Device Tree property support
- Qualcomm (ath12k):
- refactoring in preparation of multi-link support
- suspend and hibernation support
- ACPI support
- debugfs support, including dfs_simulate_radar support
- RealTek:
- rtw88: RTL8723CS SDIO device support
- rtw89: RTL8922AE Wi-Fi 7 PCI device support
- rtw89: complete features of new WiFi 7 chip 8922AE including
BT-coexistence and Wake-on-WLAN
- rtw89: use BIOS ACPI settings to set TX power and channels
- rtl8xxxu: enable Management Frame Protection (MFP) support
- Bluetooth:
- support for Intel BlazarI and Filmore Peak2 (BE201)
- support for MediaTek MT7921S SDIO
- initial support for Intel PCIe BT driver
- remove HCI_AMP support"
* tag 'net-next-6.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1827 commits)
selftests: netfilter: fix packetdrill conntrack testcase
net: gro: fix napi_gro_cb zeroed alignment
Bluetooth: btintel_pcie: Refactor and code cleanup
Bluetooth: btintel_pcie: Fix warning reported by sparse
Bluetooth: hci_core: Fix not handling hdev->le_num_of_adv_sets=1
Bluetooth: btintel: Fix compiler warning for multi_v7_defconfig config
Bluetooth: btintel_pcie: Fix compiler warnings
Bluetooth: btintel_pcie: Add *setup* function to download firmware
Bluetooth: btintel_pcie: Add support for PCIe transport
Bluetooth: btintel: Export few static functions
Bluetooth: HCI: Remove HCI_AMP support
Bluetooth: L2CAP: Fix div-by-zero in l2cap_le_flowctl_init()
Bluetooth: qca: Fix error code in qca_read_fw_build_info()
Bluetooth: hci_conn: Use __counted_by() and avoid -Wfamnae warning
Bluetooth: btintel: Add support for Filmore Peak2 (BE201)
Bluetooth: btintel: Add support for BlazarI
LE Create Connection command timeout increased to 20 secs
dt-bindings: net: bluetooth: Add MediaTek MT7921S SDIO Bluetooth
Bluetooth: compute LE flow credits based on recvbuf space
Bluetooth: hci_sync: Use cmd->num_cis instead of magic number
...
Commit c27b2d2012 ("crypto: testmgr - allow ecdsa-nist-p256 and -p384
in FIPS mode") enabled support for ECDSA in crypto/testmgr.c. The
PKCS#7 signature verification API builds upon the KCAPI primitives to
perform its high-level operations. Therefore, this change in testmgr.c
also allows ECDSA to be used by the PKCS#7 signature verification API
(in FIPS mode).
However, from a FIPS perspective, the PKCS#7 signature verification API
is a distinct "service" from the KCAPI primitives. This is because the
PKCS#7 API performs a "full" signature verification, which consists of
both hashing the data to be verified, and the public key operation.
On the other hand, the KCAPI primitive does not perform this hashing
step - it accepts pre-hashed data from the caller and only performs the
public key operation.
For this reason, the ECDSA self-tests in crypto/testmgr.c are not
sufficient to cover ECDSA signature verification offered by the PKCS#7
API. This is reflected by the self-test already present in this file
for RSA PKCS#1 v1.5 signature verification.
The solution is simply to add a second self-test here for ECDSA. P-256
with SHA-256 hashing was chosen as those parameters should remain
FIPS-approved for the foreseeable future, while keeping the performance
impact to a minimum. The ECDSA certificate and PKCS#7 signed data was
generated using OpenSSL. The input data is identical to the input data
for the existing RSA self-test.
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
In preparation of adding new ECDSA self-tests, the existing data for
the RSA self-tests is moved to a separate file. This file is only
compiled if the new CONFIG_FIPS_SIGNATURE_SELFTEST_RSA configuration
option is set, which ensures that the required dependencies (RSA,
SHA-256) are present. Otherwise, the kernel would panic when trying to
execute the self-test.
The introduction of this new option, rather than adding the
dependencies to the existing CONFIG_FIPS_SIGNATURE_SELFTEST option,
allows for additional self-tests to be added for different algorithms.
The kernel can then be configured to only execute the self-tests for
those algorithms that are included.
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Rather than pass in flags, error pointer, and whether this is a kernel
invocation or not, add a struct proto_accept_arg struct as the argument.
This then holds all of these arguments, and prepares accept for being
able to pass back more information.
No functional changes in this patch.
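A minimal sketch of the argument bundle (field names illustrative, not
necessarily the exact upstream layout):
  struct proto_accept_arg {
          int  flags;     /* O_NONBLOCK etc., formerly a plain argument */
          int  err;       /* error passed back instead of an int pointer */
          int  is_empty;  /* room to report more info after accepting */
          bool kern;      /* kernel-internal invocation? */
  };

  /* callees take one pointer instead of several scalars, e.g.: */
  struct sock *(*accept)(struct sock *sk, struct proto_accept_arg *arg);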
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Merge tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Remove crypto stats interface
Algorithms:
- Add faster AES-XTS on modern x86_64 CPUs
- Forbid curves with order less than 224 bits in ecc (FIPS 186-5)
- Add ECDSA NIST P521
Drivers:
- Expose otp zone in atmel
- Add dh fallback for primes > 4K in qat
- Add interface for live migration in qat
- Use dma for aes requests in starfive
- Add full DMA support for stm32mpx in stm32
- Add Tegra Security Engine driver
Others:
- Introduce scope-based x509_certificate allocation"
* tag 'v6.10-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (123 commits)
crypto: atmel-sha204a - provide the otp content
crypto: atmel-sha204a - add reading from otp zone
crypto: atmel-i2c - rename read function
crypto: atmel-i2c - add missing arg description
crypto: iaa - Use kmemdup() instead of kzalloc() and memcpy()
crypto: sahara - use 'time_left' variable with wait_for_completion_timeout()
crypto: api - use 'time_left' variable with wait_for_completion_killable_timeout()
crypto: caam - i.MX8ULP donot have CAAM page0 access
crypto: caam - init-clk based on caam-page0-access
crypto: starfive - Use fallback for unaligned dma access
crypto: starfive - Do not free stack buffer
crypto: starfive - Skip unneeded fallback allocation
crypto: starfive - Skip dma setup for zeroed message
crypto: hisilicon/sec2 - fix for register offset
crypto: hisilicon/debugfs - mask the unnecessary info from the dump
crypto: qat - specify firmware files for 402xx
crypto: x86/aes-gcm - simplify GCM hash subkey derivation
crypto: x86/aes-gcm - delete unused GCM assembly code
crypto: x86/aes-xts - simplify loop in xts_crypt_slowpath()
hwrng: stm32 - repair clock handling
...
Since the signature self-test uses RSA and SHA-256, it must only be
enabled when those algorithms are enabled. Otherwise it fails and
panics the kernel on boot-up.
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202404221528.51d75177-lkp@intel.com
Fixes: 3cde3174eb ("certs: Add FIPS selftests")
Cc: stable@vger.kernel.org
Cc: Simo Sorce <simo@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
Make ASYMMETRIC_PUBLIC_KEY_SUBTYPE select CRYPTO_SIG to avoid build
errors like the following, which were possible with
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y && CONFIG_CRYPTO_SIG=n:
ld: vmlinux.o: in function `public_key_verify_signature':
(.text+0x306280): undefined reference to `crypto_alloc_sig'
ld: (.text+0x306300): undefined reference to `crypto_sig_set_pubkey'
ld: (.text+0x306324): undefined reference to `crypto_sig_verify'
ld: (.text+0x30636c): undefined reference to `crypto_sig_set_privkey'
Fixes: 63ba4d6759 ("KEYS: asymmetric: Use new crypto interface without scatterlists")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
There is a confusing pattern in the kernel of using a variable named 'timeout'
to store the result of wait_for_completion_killable_timeout(), causing patterns
like:
  timeout = wait_for_completion_killable_timeout(...)
  if (!timeout) return -ETIMEDOUT;
with all kinds of permutations. Use a variable named 'time_left' to make the
code self-explanatory.
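A hedged illustration of the rename, in a generic context rather than any
specific call site:
  long time_left;

  time_left = wait_for_completion_killable_timeout(&done, HZ);
  if (!time_left)
          return -ETIMEDOUT;      /* 0 means the wait timed out */
  if (time_left < 0)
          return time_left;       /* interrupted: -ERESTARTSYS */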
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Daniel Borkmann says:
====================
pull-request: bpf-next 2024-04-29
We've added 147 non-merge commits during the last 32 day(s) which contain
a total of 158 files changed, 9400 insertions(+), 2213 deletions(-).
The main changes are:
1) Add an internal-only BPF per-CPU instruction for resolving per-CPU
memory addresses and implement support in x86 BPF JIT. This allows
inlining per-CPU array and hashmap lookups
and the bpf_get_smp_processor_id() helper, from Andrii Nakryiko.
2) Add BPF link support for sk_msg and sk_skb programs, from Yonghong Song.
3) Optimize x86 BPF JIT's emit_mov_imm64, and add support for various
atomics in bpf_arena which can be JITed as a single x86 instruction,
from Alexei Starovoitov.
4) Add support for passing mark with bpf_fib_lookup helper,
from Anton Protopopov.
5) Add a new bpf_wq API for deferring events and refactor sleepable
bpf_timer code to keep common code where possible,
from Benjamin Tissoires.
6) Fix BPF_PROG_TEST_RUN infra with regards to bpf_dummy_struct_ops programs
to check when NULL is passed for non-NULLable parameters,
from Eduard Zingerman.
7) Harden the BPF verifier's and/or/xor value tracking,
from Harishankar Vishwanathan.
8) Introduce crypto kfuncs to make BPF programs able to utilize the kernel
crypto subsystem, from Vadim Fedorenko.
9) Various improvements to the BPF instruction set standardization doc,
from Dave Thaler.
10) Extend libbpf APIs to partially consume items from the BPF ringbuffer,
from Andrea Righi.
11) Bigger batch of BPF selftests refactoring to use common network helpers
and to drop duplicate code, from Geliang Tang.
12) Support bpf_tail_call_static() helper for BPF programs with GCC 13,
from Jose E. Marchesi.
13) Add bpf_preempt_{disable,enable}() kfuncs in order to allow a BPF
program to have code sections where preemption is disabled,
from Kumar Kartikeya Dwivedi.
14) Allow invoking BPF kfuncs from BPF_PROG_TYPE_SYSCALL programs,
from David Vernet.
15) Extend the BPF verifier to allow different input maps for a given
bpf_for_each_map_elem() helper call in a BPF program, from Philo Lu.
16) Add support for PROBE_MEM32 and bpf_addr_space_cast instructions
for riscv64 and arm64 JITs to enable BPF Arena, from Puranjay Mohan.
17) Shut up a false-positive KMSAN splat in interpreter mode by unpoisoning
the stack memory, from Martin KaFai Lau.
18) Improve xsk selftest coverage with new tests on maximum and minimum
hardware ring size configurations, from Tushar Vyavahare.
19) Various ReST man pages fixes as well as documentation and bash completion
improvements for bpftool, from Rameez Rehman & Quentin Monnet.
20) Fix libbpf with regards to dumping subsequent char arrays,
from Quentin Deslandes.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (147 commits)
bpf, docs: Clarify PC use in instruction-set.rst
bpf_helpers.h: Define bpf_tail_call_static when building with GCC
bpf, docs: Add introduction for use in the ISA Internet Draft
selftests/bpf: extend BPF_SOCK_OPS_RTT_CB test for srtt and mrtt_us
bpf: add mrtt and srtt as BPF_SOCK_OPS_RTT_CB args
selftests/bpf: dummy_st_ops should reject 0 for non-nullable params
bpf: check bpf_dummy_struct_ops program params for test runs
selftests/bpf: do not pass NULL for non-nullable params in dummy_st_ops
selftests/bpf: adjust dummy_st_ops_success to detect additional error
bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable
selftests/bpf: Add ring_buffer__consume_n test.
bpf: Add bpf_guard_preempt() convenience macro
selftests: bpf: crypto: add benchmark for crypto functions
selftests: bpf: crypto skcipher algo selftests
bpf: crypto: add skcipher to bpf crypto
bpf: make common crypto API for TC/XDP programs
bpf: update the comment for BTF_FIELDS_MAX
selftests/bpf: Fix wq test.
selftests/bpf: Use make_sockaddr in test_sock_addr
selftests/bpf: Use connect_to_addr in test_sock_addr
...
====================
Link: https://lore.kernel.org/r/20240429131657.19423-1-daniel@iogearbox.net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
The private key in ctx->private_key is currently initialized in reverse
byte order in ecdh_set_secret and whenever the key is needed in proper
byte order the variable priv is introduced and the bytes from
ctx->private_key are copied into priv while being byte-swapped
(ecc_swap_digits). To get rid of the unnecessary byte swapping initialize
ctx->private_key in proper byte order and clean up all functions that were
previously using priv or were called with ctx->private_key:
- ecc_gen_privkey: Directly initialize the passed ctx->private_key with
random bytes filling all the digits of the private key. Get rid of the
priv variable. This function only has ecdh_set_secret as a caller to
create NIST P192/256/384 private keys.
- crypto_ecdh_shared_secret: Called only from ecdh_compute_value with
ctx->private_key. Get rid of the priv variable and work with the passed
private_key directly.
- ecc_make_pub_key: Called only from ecdh_compute_value with
ctx->private_key. Get rid of the priv variable and work with the passed
private_key directly.
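The pattern being removed, as a hedged sketch built from the names in the
text above:
  /* old: key stored reversed, swapped into 'priv' at every use */
  u64 priv[ECC_MAX_DIGITS];

  ecc_swap_digits(ctx->private_key, priv, ctx->ndigits);
  ecc_make_pub_key(ctx->curve_id, ctx->ndigits, priv, public_key);

  /* new: ctx->private_key already holds the proper byte order */
  ecc_make_pub_key(ctx->curve_id, ctx->ndigits, ctx->private_key,
                   public_key);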
Cc: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
ecc_is_key_valid expects a key with the most significant digit in the last
entry of the digit array. Currently ecdh_set_secret passes a reversed key
to ecc_is_key_valid, which then merely performs the rather simple test of
checking whether the private key is in the range [2, n-3]. For all currently
ecdh-supported curves (NIST P192/256/384) the 'n' parameter is a rather
large number, so the reversed key easily passes this test.
Throughout the ecdh and ecc codebase the variable 'priv' is used for a
private_key holding the bytes in proper byte order. Therefore, introduce
priv in ecdh_set_secret and copy the bytes from ctx->private_key into
priv in proper byte order by using ecc_swap_digits. Pass priv to
ecc_is_key_valid.
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Salvatore Benedetto <salvatore.benedetto@intel.com>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
I.G 9.7.B for FIPS 140-3 specifies that variables temporarily holding
cryptographic information should be zeroized once they are no longer
needed. Accomplish this by using kfree_sensitive for buffers that
previously held the private key.
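The change amounts to swapping the free call (sketch):
  /* before: frees the buffer but leaves the key bytes in memory */
  kfree(key);

  /* after: zeroizes first, then frees */
  kfree_sensitive(key);   /* memzero_explicit() + kfree() in one call */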
Signed-off-by: Hailey Mothershead <hailmo@amazon.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a DEFINE_FREE() clause for x509_certificate structs and use it in
x509_cert_parse() and x509_key_preparse(). These are the only functions
where scope-based x509_certificate allocation currently makes sense.
A third user will be introduced with the forthcoming SPDM library
(Security Protocol and Data Model) for PCI device authentication.
Unlike most other DEFINE_FREE() clauses, this one checks for IS_ERR()
instead of NULL before calling x509_free_certificate() at end of scope.
That's because the "constructor" of x509_certificate structs,
x509_cert_parse(), returns a valid pointer or an ERR_PTR(), but never
NULL.
Comparing the assembler output before/after has shown they are identical,
save for the fact that gcc-12 always generates two return paths when
__cleanup() is used, one for the success case and one for the error case.
In x509_cert_parse(), add a hint for the compiler that kzalloc() never
returns an ERR_PTR(). Otherwise the compiler adds a gratuitous IS_ERR()
check on return. Introduce an assume() macro for this which can be
re-used elsewhere in the kernel to provide hints for the compiler.
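A condensed sketch of the two pieces described above (close to, but not
guaranteed to match, the final patch):
  DEFINE_FREE(x509_free_certificate, struct x509_certificate *,
              if (!IS_ERR(_T)) x509_free_certificate(_T))

  #define assume(x) do { if (!(x)) __builtin_unreachable(); } while (0)

  /* usage: cert is freed automatically at end of scope ... */
  struct x509_certificate *cert __free(x509_free_certificate) =
          x509_cert_parse(data, datalen);
  if (IS_ERR(cert))
          return PTR_ERR(cert);   /* ... unless it is an ERR_PTR */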
Suggested-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
Link: https://lore.kernel.org/all/20231003153937.000034ca@Huawei.com/
Link: https://lwn.net/Articles/934679/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Enable the x509 parser to accept NIST P521 certificates and add the
OID for ansip521r1, which is the identifier for NIST P521.
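Plausibly the parser change reduces to one more switch case (sketch; the
exact identifiers are assumed by analogy with the existing P384 support):
  case OID_id_ansip521r1:
          ctx->cert->pub->pkey_algo = "ecdsa-nist-p521";
          break;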
Cc: David Howells <dhowells@redhat.com>
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Adjust the calculation of the maximum signature size for support of
NIST P521. While existing curves may prepend a 0 byte to their coordinates
(to make the number positive), NIST P521 will not do this since only the
first bit in the most significant byte is used.
If the encoding of the x & y coordinates requires at least 128 bytes then
an additional byte is needed for the encoding of the length. Take this into
account when calculating the maximum signature size.
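A hedged sketch of the sizing rule; the helper is invented here for
illustration:
  /*
   * An ECDSA signature is a DER SEQUENCE of two INTEGERs. Each INTEGER
   * holds one coordinate, plus a leading 0 byte only if its top bit may
   * be set (never the case for NIST P521). A long-form length byte is
   * needed once the SEQUENCE payload reaches 128 bytes.
   */
  static unsigned int ecdsa_max_sig_size(unsigned int coord_bytes,
                                         bool top_bit_may_be_set)
  {
          unsigned int ilen = coord_bytes + (top_bit_may_be_set ? 1 : 0);
          unsigned int payload = 2 * (2 + ilen);  /* tag + len + body, x2 */

          return 2 + (payload >= 128 ? 1 : 0) + payload;
  }
This yields the familiar 72 bytes for NIST P256 (33-byte INTEGER bodies)
and 139 bytes for NIST P521 (66-byte bodies, long-form length).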
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register NIST P521 as an akcipher and extend the testmgr with
NIST P521-specific test vectors.
Add a module alias so the module gets automatically loaded by the crypto
subsystem when the curve is needed.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In cases where 'keylen' referred to the size of the buffer used by a
curve's digits, it no longer reflects the purpose of the variable once
NIST P521 is used. What it refers to then is the size of the buffer, which
may be a few bytes larger than the size of a key coordinate. Therefore,
rename keylen to bufsize where appropriate.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace the usage of ndigits with nbits where precise space calculations
are needed, such as in ecdsa_max_size where the length of a coordinate is
determined.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the parameters for the NIST P521 curve and define a new curve ID
for it. Make the curve available in ecc_get_curve.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In ecc_point_mult use the number of bits of the NIST P521 curve + 2. The
change is required specifically for NIST P521 to pass mathematical tests
on the public key.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement vli_mmod_fast_521 following the description for how to calculate
the modulus for NIST P521 in the NIST publication "Recommendations for
Discrete Logarithm-Based Cryptography: Elliptic Curve Domain Parameters"
section G.1.4.
NIST P521 requires 9 64-bit digits, so increase ECC_MAX_DIGITS so that
the vli digit array provides enough elements to fit the larger integers
required by this curve.
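Since p = 2^521 - 1, the reduction amounts to "low 521 bits plus the bits
above them, mod p". A hedged sketch using vli helpers from the kernel's
ecc code (signatures assumed):
  static void mmod_fast_521(u64 *result, const u64 *product,
                            const u64 *mod, u64 *tmp)
  {
          const unsigned int nd = 9;      /* ECC_CURVE_NIST_P521_DIGITS */
          unsigned int i;

          vli_set(result, product, nd);   /* low 521 bits */
          result[8] &= 0x1ff;

          /* high bits of the 18-digit product, shifted down by 521 */
          for (i = 0; i < nd; i++)
                  tmp[i] = (product[8 + i] >> 9) | (product[9 + i] << 55);
          tmp[8] &= 0x1ff;

          vli_mod_add(result, result, tmp, mod, nd);
  }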
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the number of bits a curve has to the ecc_curve definition to be able
to derive the number of bytes a curve requires for its coordinates from it.
It also allows one to identify a curve by its particular size. Set the
number of bits on all curve definitions.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
res.x has been calculated by ecc_point_mult_shamir, which uses
'mod curve_prime' on res.x and therefore p > res.x with 'p' being the
curve_prime. Further, it is true that for the NIST curves p > n with 'n'
being the 'curve_order' and therefore the following may be true as well:
p > res.x >= n.
If res.x >= n then res.x mod n can be calculated by iteratively
subtracting n from res.x until res.x < n. For NIST P192/256/384 this can
be done in a single subtraction, and the same holds for NIST P521.
The mathematical reason why a single subtraction is sufficient is due to
the values of 'p' and 'n' of the NIST curves where the following holds
true:
note: max(res.x) = p - 1
      max(res.x) - n < n
  <=> p - 1 - n < n
  <=> p - 1 < 2n, which holds true for the NIST curves
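In code form, a hedged sketch using the vli helpers:
  /* res.x < p and p - 1 < 2n, so at most one subtraction is needed */
  if (vli_cmp(res_x, curve->n, ndigits) >= 0)
          vli_sub(res_x, res_x, curve->n, ndigits);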
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
In preparation for support of NIST P521, adjust the basic tests on the
length of the provided key parameters to only ensure that the length of the
x plus y coordinates parameter array is not an odd number and that each
coordinate fits into an array of 'ndigits' digits. Mathematical tests on
the key's parameters are then done in ecc_is_pubkey_valid_full rejecting
invalid keys.
The change is necessary since NIST P521 keys do not have coordinates that
each require 'full' digits (= all bits in the u64 used). NIST P521 requires
only 2 bytes (9 bits) in the most significant digit, unlike NIST
P192/256/384, whose coordinates each require multiple 'full' digits.
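The relaxed check is roughly (sketch; exact rounding assumed):
  /* x||y must split evenly, and each half must fit in ndigits u64s */
  if (keylen & 1)
          return -EINVAL;
  if (DIV_ROUND_UP(keylen / 2, sizeof(u64)) > ndigits)
          return -EINVAL;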
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Tested-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For NIST P192/256/384 the public key's x and y parameters could be copied
directly from a given array since both parameters filled all 'ndigits'
digits (a 'digit' is a u64). To support NIST P521 the key parameters need
leading zeros prepended to the most significant digit, since only 2 bytes
of that digit are provided.
Therefore, implement ecc_digits_from_bytes to convert a byte array into an
array of digits and use this function in ecdsa_set_pub_key where an input
byte array needs to be converted into digits.
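Hedged usage in the set_pub_key path (variable names assumed):
  /* split the concatenated x||y byte string into digit arrays */
  ecc_digits_from_bytes(d, nbytes, pk.x, ndigits);
  ecc_digits_from_bytes(&d[nbytes], nbytes, pk.y, ndigits);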
Suggested-by: Lukas Wunner <lukas@wunner.de>
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace hard-coded numbers with ECC_CURVE_NIST_P192/256/384_DIGITS where
possible.
Tested-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The for loop will be executed at least once, so i > 0. If the loop
is interrupted before i is incremented (e.g., when checking len for NULL),
i will not be checked.
Found by Linux Verification Center (linuxtesting.org) with Svace.
Signed-off-by: Roman Smirnov <r.smirnov@omp.ru>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the current state of the code, ecc_get_curve() cannot return
NULL in crypto_ecdh_shared_secret() and ecc_make_pub_key(). This is
conditioned by the fact that they are only called from ecdh_compute_value(),
which implements the kpp_alg::{generate_public_key,compute_shared_secret}()
methods. Also ecdh implements the kpp_alg::init() method, which is called
before the other methods and sets ecdh_ctx::curve_id to a valid value.
Signed-off-by: Roman Smirnov <r.smirnov@omp.ru>
Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
private_key is overwritten with the key parameter passed in by the
caller (if present), or alternatively a newly generated private key.
However, it is possible that the caller provides a key (or the newly
generated key) which is shorter than the previous key. In that
scenario, some key material from the previous key would not be
overwritten. The easiest solution is to explicitly zeroize the entire
private_key array first.
Note that this patch slightly changes the behavior of this function:
previously, if the ecc_gen_privkey failed, the old private_key would
remain. Now, the private_key is always zeroized. This behavior is
consistent with the case where params.key is set and ecc_is_key_valid
fails.
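The fix is essentially one statement ahead of the copy (sketch):
  /* wipe old key material first; a shorter new key must not leave
   * tail bytes of the previous one behind */
  memset(ctx->private_key, 0, sizeof(ctx->private_key));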
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This commit comes at the tail end of a greater effort to remove the
empty elements at the end of the ctl_table arrays (sentinels) which will
reduce the overall build-time size of the kernel and run-time memory
bloat by ~64 bytes per sentinel (further information:
https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/)
Remove sentinel from crypto_sysctl_table
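Sketch of the shape of the change (entry contents abbreviated, not
verbatim from the tree):
  static struct ctl_table crypto_sysctl_table[] = {
          {
                  .procname     = "fips_enabled",
                  .data         = &fips_enabled,
                  .maxlen       = sizeof(int),
                  .mode         = 0444,
                  .proc_handler = proc_dointvec,
          },
          /* trailing { } sentinel removed: the registration code now
           * determines the table size without it */
  };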
Signed-off-by: Joel Granados <j.granados@samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Replace memzero_explicit() and kvfree() with kvfree_sensitive() to fix
the following Coccinelle/coccicheck warning reported by
kfree_sensitive.cocci:
WARNING opportunity for kfree_sensitive/kvfree_sensitive
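The transformation suggested by the script, as a hedged before/after:
  /* before */
  memzero_explicit(buf, len);
  kvfree(buf);

  /* after */
  kvfree_sensitive(buf, len);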
Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add module alias with the algorithm cra_name similar to what we have for
RSA-related and other algorithms.
The kernel attempts to modprobe asymmetric algorithms using the names
"crypto-$cra_name" and "crypto-$cra_name-all." However, since these
aliases are currently missing, the modules are not loaded. For instance,
when using the `add_key` function, the hash algorithm is typically
loaded automatically, but the asymmetric algorithm is not.
Steps to test:
1. Create certificate
openssl req -x509 -sha256 -newkey ec \
-pkeyopt "ec_paramgen_curve:secp384r1" -keyout key.pem -days 365 \
-subj '/CN=test' -nodes -outform der -out nist-p384.der
2. Optionally, trace module requests with: trace-cmd stream -e module &
3. Trigger add_key call for the cert:
# keyctl padd asymmetric "" @u < nist-p384.der
641069229
# lsmod | head -2
Module Size Used by
ecdsa_generic 16384 0
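The fix is plausibly a one-line alias per curve module (sketch):
  /* lets "crypto-ecdsa-nist-p384[-all]" modprobe requests succeed */
  MODULE_ALIAS_CRYPTO("ecdsa-nist-p384");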
Fixes: c12d448ba9 ("crypto: ecdsa - Register NIST P384 and extend test suite")
Cc: stable@vger.kernel.org
Signed-off-by: Stefan Berger <stefanb@linux.ibm.com>
Reviewed-by: Vitaly Chikunov <vt@altlinux.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
FIPS 186-5 [1] was released approximately 1 year ago. The most
interesting change for ecc_gen_privkey is the removal of curves with
order < 224 bits. This minimum is now checked in step 1. It is
unlikely that there is still any benefit in generating private keys for
curves with n < 224, as those curves provide less than 112 bits of
security strength and are therefore unsafe for any modern usage.
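In code, the new step-1 check is plausibly no more than (sketch):
  /* FIPS 186-5 A.2.1 step 1: reject orders below 224 bits */
  if (nbits < 224)
          return -EINVAL;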
This patch also updates the documentation for __ecc_is_key_valid and
ecc_gen_privkey to clarify which FIPS 186-5 method is being used to
generate private keys. Previous documentation mentioned that "extra
random bits" was used. However, this did not match the code. Instead,
the code currently uses (and always has used) the "rejection sampling"
("testing candidates" in FIPS 186-4) method.
[1]: https://doi.org/10.6028/NIST.FIPS.186-5
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add module alias with the algorithm cra_name similar to what we have for
RSA-related and other algorithms.
The kernel attempts to modprobe asymmetric algorithms using the names
"crypto-$cra_name" and "crypto-$cra_name-all." However, since these
aliases are currently missing, the modules are not loaded. For instance,
when using the `add_key` function, the hash algorithm is typically
loaded automatically, but the asymmetric algorithm is not.
Steps to test:
1. Cert is generated using the ima-evm-utils test suite with
`gen-keys.sh`; an example cert is provided below:
$ base64 -d >test-gost2012_512-A.cer <<EOF
MIIB/DCCAWagAwIBAgIUK8+whWevr3FFkSdU9GLDAM7ure8wDAYIKoUDBwEBAwMFADARMQ8wDQYD
VQQDDAZDQSBLZXkwIBcNMjIwMjAxMjIwOTQxWhgPMjA4MjEyMDUyMjA5NDFaMBExDzANBgNVBAMM
BkNBIEtleTCBoDAXBggqhQMHAQEBAjALBgkqhQMHAQIBAgEDgYQABIGALXNrTJGgeErBUOov3Cfo
IrHF9fcj8UjzwGeKCkbCcINzVUbdPmCopeJRHDJEvQBX1CQUPtlwDv6ANjTTRoq5nCk9L5PPFP1H
z73JIXHT0eRBDVoWy0cWDRz1mmQlCnN2HThMtEloaQI81nTlKZOcEYDtDpi5WODmjEeRNQJMdqCj
UDBOMAwGA1UdEwQFMAMBAf8wHQYDVR0OBBYEFCwfOITMbE9VisW1i2TYeu1tAo5QMB8GA1UdIwQY
MBaAFCwfOITMbE9VisW1i2TYeu1tAo5QMAwGCCqFAwcBAQMDBQADgYEAmBfJCMTdC0/NSjz4BBiQ
qDIEjomO7FEHYlkX5NGulcF8FaJW2jeyyXXtbpnub1IQ8af1KFIpwoS2e93LaaofxpWlpQLlju6m
KYLOcO4xK3Whwa2hBAz9YbpUSFjvxnkS2/jpH2MsOSXuUEeCruG/RkHHB3ACef9umG6HCNQuAPY=
EOF
2. Optionally, trace module requests with: trace-cmd stream -e module &
3. Trigger add_key call for the cert:
# keyctl padd asymmetric "" @u <test-gost2012_512-A.cer
939910969
# lsmod | head -3
Module Size Used by
ecrdsa_generic 16384 0
streebog_generic 28672 0
Reported-by: Paul Wolneykien <manowar@altlinux.org>
Cc: stable@vger.kernel.org
Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Tested-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
While looking at UDP receive performance, I saw sk_wake_async()
was no longer inlined.
This matters at least on AMD Zen1-4 platforms (see SRSO)
This might be because rcu_read_lock() and rcu_read_unlock()
are no longer nops in recent kernels?
Add sk_wake_async_rcu() variant, which must be called from
contexts already holding rcu lock.
As SOCK_FASYNC is deprecated these days, use unlikely()
to give a hint to the compiler.
sk_wake_async_rcu() is properly inlined from
__udp_enqueue_schedule_skb() and sock_def_readable().
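The variant reads roughly as follows (sketch matching the description
above; the non-RCU sk_wake_async() takes the rcu read lock itself):
  static inline void sk_wake_async_rcu(const struct sock *sk,
                                       int how, int band)
  {
          /* SOCK_FASYNC is rare in modern programs, hence unlikely() */
          if (unlikely(sock_flag(sk, SOCK_FASYNC)))
                  sock_wake_async(rcu_dereference(sk->sk_wq), how, band);
  }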
Signed-off-by: Eric Dumazet <edumazet@google.com>
Link: https://lore.kernel.org/r/20240328144032.1864988-5-edumazet@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge tag 'v6.9-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes a regression that broke iwd as well as a divide by zero in
iaa"
* tag 'v6.9-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: iaa - Fix nr_cpus < nr_iaa case
Revert "crypto: pkcs7 - remove sha1 support"
Merge tag 'riscv-for-linus-6.9-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V updates from Palmer Dabbelt:
- Support for various vector-accelerated crypto routines
- Hibernation is now enabled for portable kernel builds
- mmap_rnd_bits_max is larger on systems with larger VAs
- Support for fast GUP
- Support for membarrier-based instruction cache synchronization
- Support for the Andes hart-level interrupt controller and PMU
- Some cleanups around unaligned access speed probing and Kconfig
settings
- Support for ACPI LPI and CPPC
- Various cleanups related to barriers
- A handful of fixes
* tag 'riscv-for-linus-6.9-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (66 commits)
riscv: Fix syscall wrapper for >word-size arguments
crypto: riscv - add vector crypto accelerated AES-CBC-CTS
crypto: riscv - parallelize AES-CBC decryption
riscv: Only flush the mm icache when setting an exec pte
riscv: Use kcalloc() instead of kzalloc()
riscv/barrier: Add missing space after ','
riscv/barrier: Consolidate fence definitions
riscv/barrier: Define RISCV_FULL_BARRIER
riscv/barrier: Define __{mb,rmb,wmb}
RISC-V: defconfig: Enable CONFIG_ACPI_CPPC_CPUFREQ
cpufreq: Move CPPC configs to common Kconfig and add RISC-V
ACPI: RISC-V: Add CPPC driver
ACPI: Enable ACPI_PROCESSOR for RISC-V
ACPI: RISC-V: Add LPI driver
cpuidle: RISC-V: Move few functions to arch/riscv
riscv: Introduce set_compat_task() in asm/compat.h
riscv: Introduce is_compat_thread() into compat.h
riscv: add compile-time test into is_compat_task()
riscv: Replace direct thread flag check with is_compat_task()
riscv: Improve arch_get_mmap_end() macro
...
This reverts commit 16ab7cb582 because it
broke iwd. iwd uses the KEYCTL_PKEY_* UAPIs via its dependency libell,
and apparently it is relying on SHA-1 signature support. These UAPIs
are fairly obscure, and their documentation does not mention which
algorithms they support. iwd really should be using a properly
supported userspace crypto library instead. Regardless, since something
broke we have to revert the change.
It may be possible to reinstate some parts of this commit without
breaking iwd (e.g. probably the removal of MODULE_SIG_SHA1), but
for now this just does a full revert to get things working again.
Reported-by: Karel Balej <balejk@matfyz.cz>
Closes: https://lore.kernel.org/r/CZSHRUIJ4RKL.34T4EASV5DNJM@matfyz.cz
Cc: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Karel Balej <balejk@matfyz.cz>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Avoid unnecessary copying in scomp for trivial SG lists
Algorithms:
- Optimise NEON CCM implementation on ARM64
Drivers:
- Add queue stop/query debugfs support in hisilicon/qm
- Intel qat updates and cleanups"
* tag 'v6.9-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (79 commits)
Revert "crypto: remove CONFIG_CRYPTO_STATS"
crypto: scomp - remove memcpy if sg_nents is 1 and pages are lowmem
crypto: tcrypt - add ffdhe2048(dh) test
crypto: iaa - fix the missing CRYPTO_ALG_ASYNC in cra_flags
crypto: hisilicon/zip - fix the missing CRYPTO_ALG_ASYNC in cra_flags
hwrng: hisi - use dev_err_probe
MAINTAINERS: Remove T Ambarus from few mchp entries
crypto: iaa - Fix comp/decomp delay statistics
crypto: iaa - Fix async_disable descriptor leak
dt-bindings: rng: atmel,at91-trng: add sam9x7 TRNG
dt-bindings: crypto: add sam9x7 in Atmel TDES
dt-bindings: crypto: add sam9x7 in Atmel SHA
dt-bindings: crypto: add sam9x7 in Atmel AES
crypto: remove CONFIG_CRYPTO_STATS
crypto: dh - Make public key test FIPS-only
crypto: rockchip - fix to check return value
crypto: jitter - fix CRYPTO_JITTERENTROPY help text
crypto: qat - make ring to service map common for QAT GEN4
crypto: qat - fix ring to service map for dcc in 420xx
crypto: qat - fix ring to service map for dcc in 4xxx
...
Merge tag 'mm-nonmm-stable-2024-03-14-09-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull non-MM updates from Andrew Morton:
- Kuan-Wei Chiu has developed the well-named series "lib min_heap: Min
heap optimizations".
- Kuan-Wei Chiu has also sped up the library sorting code in the series
"lib/sort: Optimize the number of swaps and comparisons".
- Alexey Gladkov has added the ability for code running within an IPC
namespace to alter its IPC and MQ limits. The series is "Allow to
change ipc/mq sysctls inside ipc namespace".
- Geert Uytterhoeven has contributed some dhrystone maintenance work in
the series "lib: dhry: miscellaneous cleanups".
- Ryusuke Konishi continues nilfs2 maintenance work in the series
"nilfs2: eliminate kmap and kmap_atomic calls"
"nilfs2: fix kernel bug at submit_bh_wbc()"
- Nathan Chancellor has updated our build tools requirements in the
series "Bump the minimum supported version of LLVM to 13.0.1".
- Muhammad Usama Anjum continues with the selftests maintenance work in
the series "selftests/mm: Improve run_vmtests.sh".
- Oleg Nesterov has done some maintenance work against the signal code
in the series "get_signal: minor cleanups and fix".
Plus the usual shower of singleton patches in various parts of the tree.
Please see the individual changelogs for details.
* tag 'mm-nonmm-stable-2024-03-14-09-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (77 commits)
nilfs2: prevent kernel bug at submit_bh_wbc()
nilfs2: fix failure to detect DAT corruption in btree and direct mappings
ocfs2: enable ocfs2_listxattr for special files
ocfs2: remove SLAB_MEM_SPREAD flag usage
assoc_array: fix the return value in assoc_array_insert_mid_shortcut()
buildid: use kmap_local_page()
watchdog/core: remove sysctl handlers from public header
nilfs2: use div64_ul() instead of do_div()
mul_u64_u64_div_u64: increase precision by conditionally swapping a and b
kexec: copy only happens before uchunk goes to zero
get_signal: don't initialize ksig->info if SIGNAL_GROUP_EXIT/group_exec_task
get_signal: hide_si_addr_tag_bits: fix the usage of uninitialized ksig
get_signal: don't abuse ksig->info.si_signo and ksig->sig
const_structs.checkpatch: add device_type
Normalise "name (ad@dr)" MODULE_AUTHORs to "name <ad@dr>"
dyndbg: replace kstrdup() + strchr() with kstrdup_and_replace()
list: leverage list_is_head() for list_entry_is_head()
nilfs2: MAINTAINERS: drop unreachable project mirror site
smp: make __smp_processor_id() 0-argument macro
fat: fix uninitialized field in nostale filehandles
...
This reverts commit 2beb81fbf0.
While removing CONFIG_CRYPTO_STATS is a worthy goal, this also
removed unrelated infrastructure such as crypto_comp_alg_common.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When sg_nents is 1, which is always true for the only current user
(zswap), we have a chance to remove the memcpy and thus improve
performance.
Though sg_nents is 1, the buffer might still cross two pages. If
those pages are in highmem, we have no cheap way to map them to a
contiguous virtual address, because kmap doesn't support more than
one page (even kmapping a single highmem page can be expensive for
the TLB) and vmap is expensive.
So we also test and ensure the page is not highmem, in order to
safely use page_to_virt before removing the memcpy. The good news
is that in the vast majority of cases we are in lowmem, and on
modern, popular hardware we always are.
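A sketch of the resulting fast path (names such as scratch and src
are assumptions, not lifted from the patch):

    if (sg_nents(req->src) == 1 && !PageHighMem(sg_page(req->src))) {
            /* single lowmem entry: use the kernel mapping directly */
            src = page_to_virt(sg_page(req->src)) + req->src->offset;
    } else {
            /* otherwise keep copying into the scratch buffer */
            scatterwalk_map_and_copy(scratch->src, req->src, 0, req->slen, 0);
            src = scratch->src;
    }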
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Tested-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit 7dce598197 ("crypto: dh - implement ffdheXYZ(dh) templates")
implemented the said templates. Add ffdhe2048(dh) test as it is the
fastest one. This is a requirement for the FIPS certification.
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed the effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The function dh_is_pubkey_valid was added for FIPS, but it was
only partially conditional on fips_enabled.
In particular, the first test in the function relies on the last
test to work properly, but the last test is only run in FIPS mode.
Fix this inconsistency by making the whole function conditional
on fips_enabled.
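The shape of the fix is simply to hoist the fips_enabled test to the
top of the function (a sketch; the remaining validation steps are
elided):

    static int dh_is_pubkey_valid(struct dh_ctx *ctx, MPI y)
    {
            if (!fips_enabled)
                    return 0;       /* all checks below are FIPS-only */

            if (unlikely(!ctx->p))
                    return -EINVAL;

            /* ... 1 < y < p - 1 and, where q is known, y^q mod p == 1 ... */
    }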
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Correct various small problems in the help text:
a. change 2 spaces to ", "
b. finish an incomplete sentence
c. change non-working URL to working URL
Fixes: a9a98d49da ("crypto: Kconfig - simplify compression/RNG entries")
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218458
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Bagas Sanjaya <bagasdotme@gmail.com>
Cc: Robert Elliott <elliott@hpe.com>
Cc: Christoph Biedl <bugzilla.kernel.bpeb@manchmal.in-ulm.de>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-crypto@vger.kernel.org
Acked-by: Bagas Sanjaya <bagasdotme@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The lskcipher glue code for skcipher needs to copy the IV every
time rather than only on the first and last request. Otherwise
those algorithms that use the IV to perform chaining may break, e.g.,
CBC.
This is because crypto_skcipher_import/export do not include the
IV as part of the saved state.
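Roughly, the glue code must now bracket every fragment with an IV
copy (a sketch; the buffer names are assumptions):

    /* import/export do not carry the IV, so copy it on every request */
    memcpy(ivs, req->iv, ivsize);
    err = crypto_lskcipher_encrypt(tfm, src, dst, len, ivs);
    memcpy(req->iv, ivs, ivsize);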
Reported-by: syzbot+b90b904ef6bdfdafec1d@syzkaller.appspotmail.com
Fixes: 662ea18d08 ("crypto: skcipher - Make use of internal state")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
LLVM moved their issue tracker from their own Bugzilla instance to GitHub
issues. While all of the links are still valid, they may not necessarily
show the most up to date information around the issues, as all updates
will occur on GitHub, not Bugzilla.
Another complication is that the Bugzilla issue number is not always the
same as the GitHub issue number. Thankfully, LLVM maintains this mapping
through two shortlinks:
https://llvm.org/bz<num> -> https://bugs.llvm.org/show_bug.cgi?id=<num>
https://llvm.org/pr<num> -> https://github.com/llvm/llvm-project/issues/<mapped_num>
Switch all "https://bugs.llvm.org/show_bug.cgi?id=<num>" links to the
"https://llvm.org/pr<num>" shortlink so that the links show the most up to
date information. Each migrated issue links back to the Bugzilla entry,
so there should be no loss of fidelity of information here.
Link: https://lkml.kernel.org/r/20240109-update-llvm-links-v1-3-eb09b59db071@kernel.org
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Fangrui Song <maskray@google.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Mykola Lysenko <mykolal@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
SP 800-56Br2, Section 7.1.1 [1] specifies that:
1. If m does not satisfy 1 < m < (n - 1), output an indication that m is
out of range, and exit without further processing.
Similarly, Section 7.1.2 of the same standard specifies that:
1. If the ciphertext c does not satisfy 1 < c < (n - 1), output an
indication that the ciphertext is out of range, and exit without further
processing.
This range is slightly more conservative than RFC3447, as it also
excludes RSA fixed points 0, 1, and n - 1.
[1] https://doi.org/10.6028/NIST.SP.800-56Br2
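A sketch of such a range check on top of the kernel MPI library (the
helper name rsa_check_payload and the availability of mpi_sub_ui are
assumptions):

    /* Enforce 1 < x < (n - 1) per SP 800-56Br2 sections 7.1.1/7.1.2 */
    static int rsa_check_payload(MPI x, MPI n)
    {
            MPI n1;
            int ret = 0;

            if (mpi_cmp_ui(x, 1) <= 0)
                    return -EINVAL;

            n1 = mpi_alloc(0);
            if (!n1)
                    return -ENOMEM;

            if (mpi_sub_ui(n1, n, 1) || mpi_cmp(x, n1) >= 0)
                    ret = -EINVAL;

            mpi_free(n1);
            return ret;
    }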
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since crypto_hash_alg_has_setkey() is only called from ahash.c itself,
make it a static function.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When a zero-length message is hashed by algif_hash, and an error
is triggered, it tries to free an SG list that was never allocated
in the first place. Fix this by not freeing the SG list on the
zero-length error path.
Reported-by: Shigeru Yoshida <syoshida@redhat.com>
Reported-by: xingwei lee <xrivendell7@gmail.com>
Fixes: b6d972f689 ("crypto: af_alg/hash: Fix recvmsg() after sendmsg(MSG_MORE)")
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reported-by: syzbot+3266db0c26d1fbbe3abb@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The cbc template should not be applied to stream ciphers, especially
ones that have internal state. Enforce this by checking the state
size when the instance is created.
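Conceptually the template's create callback gains a check like this
(a sketch; the field spelling alg->co.statesize is an assumption):

    /* stream ciphers with internal state cannot be chained by cbc */
    if (alg->co.statesize)
            return -EINVAL;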
Reported-by: syzbot+050eeedd6c285d8c42f2@syzkaller.appspotmail.com
Fixes: 47309ea135 ("crypto: arc4 - Add internal state")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit a93492cae3 ("crypto: ccree - remove data unit size support")
removed support for the xts512 and xts4096 algorithms, but left them
defined in testmgr.c. This patch removes those definitions.
Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The pointer secs is being assigned a value; however, secs is never
read afterwards. The pointer is redundant and can be removed.
Cleans up clang scan build warning:
warning: Although the value stored to 'secs' is used in the enclosing
expression, the value is never actually read from 'secs'
[deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The assignment to nbytes is redundant: the while loop only needs
to refer to the value in walk.nbytes, and nbytes is re-assigned
inside the loop on both paths of the following if-statement.
Remove the redundant assignment.
Cleans up clang scan build warning:
warning: Although the value stored to 'nbytes' is used in
the enclosing expression, the value is never actually read
from 'nbytes' [deadcode.DeadStores]
Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Create a crypto subdirectory for added accelerated cryptography routines
and hook it into the riscv Kbuild and the main crypto Kconfig.
Signed-off-by: Heiko Stuebner <heiko.stuebner@vrull.eu>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jerry Shih <jerry.shih@sifive.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20240122002024.27477-4-ebiggers@kernel.org
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
When ecb is used to wrap an lskcipher, the statesize isn't set
correctly. Fix this by making the simple instance creator set
the statesize.
Reported-by: syzbot+8ffb0839a24e9c6bfa76@syzkaller.appspotmail.com
Reported-by: Edward Adam Davis <eadavis@qq.com>
Fixes: 662ea18d08 ("crypto: skcipher - Make use of internal state")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Disallow registration of two algorithms with identical driver names.
Cc: <stable@vger.kernel.org>
Reported-by: Ovidiu Panait <ovidiu.panait@windriver.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch registers the deflate-iaa deflate compression algorithm and
hooks it up to the IAA hardware using the 'fixed' compression mode
introduced in the previous patch.
Because the IAA hardware has a 4k history-window limitation, only
buffers <= 4k, or that have been compressed using a <= 4k history
window, are technically compliant with the deflate spec, which allows
for a window of up to 32k. Because of this limitation, the IAA fixed
mode deflate algorithm is given its own algorithm name, 'deflate-iaa'.
With this change, the deflate-iaa crypto algorithm is registered and
operational, and compression and decompression operations are fully
enabled following the successful binding of the first IAA workqueue
to the iaa_crypto sub-driver.
When there are no IAA workqueues bound to the driver, the IAA crypto
algorithm can be unregistered by removing the module.
A new iaa_crypto 'verify_compress' driver attribute is also added,
allowing the user to toggle compression verification. If set, each
compress will be internally decompressed and the contents verified,
returning error codes if unsuccessful. This can be toggled with 0/1:
echo 0 > /sys/bus/dsa/drivers/crypto/verify_compress
The default setting is '1' - verify all compresses.
The verify_compress value setting at the time the algorithm is
registered is captured in the algorithm's crypto_ctx and used for all
compresses when using the algorithm.
[ Based on work originally by George Powley, Jing Lin and Kyung Min
Park ]
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike algif_aead which is always issued in one go (thus limiting
the maximum size of the request), algif_skcipher has always allowed
unlimited input data by cutting them up as necessary and feeding
the fragments to the underlying algorithm one at a time.
However, because of deficiencies in the API, this has been broken
for most stream ciphers such as arc4 or chacha. This is because
they have an internal state in addition to the IV that must be
preserved in order to continue processing.
Fix this by using the new skcipher state API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The arc4 algorithm has always had internal state. It's been buggy
from day one in that the state has been stored in the shared tfm
object. That means two users sharing the same tfm will end up
affecting each other's output, or worse, they may end up with the
same output.
Fix this by declaring an internal state and storing the state there
instead of within the tfm context.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds code to the skcipher/lskcipher API to make use
of the internal state if present. In particular, the skcipher
lskcipher wrapper will allocate a buffer for the IV/state and
feed that to the underlying lskcipher algorithm.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Unlike chaining modes such as CBC, stream ciphers other than CTR
usually hold an internal state that must be preserved if the
operation is to be done piecemeal. This has not been represented
in the API, resulting in the inability to split up stream cipher
operations.
This patch adds the basic representation of an internal state to
skcipher and lskcipher. In the interest of backwards compatibility,
the default has been set such that existing users are assumed to
be operating in one go as opposed to piecemeal.
With the new API, each lskcipher/skcipher algorithm has a new
attribute called statesize. For skcipher, this is the size of
the buffer that can be exported or imported similar to ahash.
For lskcipher, instead of providing a buffer of ivsize, the user
now has to provide a buffer of ivsize + statesize.
Each skcipher operation is assumed to be final as they are now,
but this may be overridden with a request flag. When the override
occurs, the user may then export the partial state and reimport
it later.
For lskcipher operations this is reversed: by default no operation
is final, and the state will be exported unless the FINAL bit is
set. However, the CONT bit still has to be set for the state
to be used.
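A rough usage sketch for the skcipher side (the NOTFINAL flag name
and the request-flag plumbing are assumptions based on the
description above):

    /* process one chunk, overriding the default "final" semantics */
    skcipher_request_set_callback(req, CRYPTO_SKCIPHER_REQ_NOTFINAL,
                                  NULL, NULL);
    err = crypto_skcipher_encrypt(req);

    /* save the partial state, then re-import it for the next chunk */
    err = crypto_skcipher_export(req, state);
    /* ... later ... */
    err = crypto_skcipher_import(req, state);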
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having multiple in-flight AIO requests results in unpredictable
output because they all share the same IV. Fix this by only allowing
one request at a time.
Fixes: 83094e5e9e ("crypto: af_alg - add async support to algif_aead")
Fixes: a596999b7d ("crypto: algif - change algif_skcipher to be asynchronous")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
SP800-90C 3rd draft states that SHA-1 will be removed from all
specifications, including drbg by end of 2030. Given kernels built
today will be operating past that date, start complying with upcoming
requirements.
No functional change, as SHA-256 / SHA-512 based DRBG have always been
the preferred ones.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Update the code comment, self test & healthcheck to use HMAC SHA512
instead of HMAC SHA256. These changes are in dead code or FIPS
enabled code-paths only and have no effect on usual kernel builds.
On systems booting in FIPS mode, this has the effect of switching
the sanity selftest to being HMAC SHA512 based (which has been the
default DRBG).
This patch updates code from 9b7b94683a ("crypto: DRBG - switch to
HMAC SHA512 DRBG as default DRBG"), but is not interesting to
cherry-pick for stable updates, because it doesn't affect regular
builds, nor has any tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When drbg was originally introduced, the FIPS self-checks for all
types but CTR used the most preferred parameters for each type of
DRBG. Update the CTR self-check to use aes256.
This patch updates code from 541af946fe ("crypto: drbg - SP800-90A
Deterministic Random Bit Generator"), but is not interesting to
cherry-pick for stable updates, because it doesn't affect regular
builds, nor has any tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
drbg supports multiple types of drbg, and multiple parameterisations
of each; all three types can be enabled at once. The health check
sanity test only checks one drbg of a single type, and instead of
checking the most preferred algorithm (last one wins), it currently
checks the first one.
Update the ifdefs to ensure that the health check prefers HMAC over
HASH over CTR, last one wins, like all other code and functions.
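The preference chain can be sketched as follows (the macro and
algorithm names here are illustrative only, not taken from the
patch):

    #if IS_ENABLED(CONFIG_CRYPTO_DRBG_HMAC)
    # define DRBG_SANITY_ALG "drbg_nopr_hmac_sha512"   /* most preferred */
    #elif IS_ENABLED(CONFIG_CRYPTO_DRBG_HASH)
    # define DRBG_SANITY_ALG "drbg_nopr_sha512"
    #elif IS_ENABLED(CONFIG_CRYPTO_DRBG_CTR)
    # define DRBG_SANITY_ALG "drbg_nopr_ctr_aes256"    /* least preferred */
    #endif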
This patch updates code from 541af946fe ("crypto: drbg - SP800-90A
Deterministic Random Bit Generator"), but is not interesting to
cherry-pick for stable updates, because it doesn't affect regular
builds, nor has any tangible effect on FIPS certification.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Static checkers insist that the mpi_alloc() allocation can fail so add
a check to prevent a NULL dereference. Small allocations like this
can't actually fail in current kernels, but adding a check is very
simple and makes the static checkers happy.
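The fix amounts to a standard allocation check (a sketch; the
variable name is assumed):

    tmp = mpi_alloc(0);
    if (!tmp)       /* cannot happen today, but keep checkers happy */
            return -ENOMEM;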
Fixes: 6637e11e4a ("crypto: rsa - allow only odd e and restrict value in FIPS mode")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
EINPROGRESS and EBUSY have special meaning for async operations.
However, shash is always synchronous, so these statuses have no special
meaning for shash and don't need to be excluded when handling errors.
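In practice this lets shash error handling collapse from the
async-aware pattern to a plain check, roughly:

    /* before: written as if the operation could complete asynchronously */
    if (err && err != -EINPROGRESS && err != -EBUSY)
            goto out;

    /* after: shash is always synchronous */
    if (err)
            goto out;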
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.7-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
"This fixes a regression in ahash and hides the Kconfig sub-options for
the jitter RNG"
* tag 'v6.7-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: ahash - Set using_shash for cloned ahash wrapper over shash
crypto: jitterentropy - Hide esoteric Kconfig options under FIPS and EXPERT
As JITTERENTROPY is selected by default if you enable the CRYPTO
API, any Kconfig options added there will show up for every single
user. Hide the esoteric options under EXPERT as well as FIPS so
that only distro makers will see them.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Add virtual-address based lskcipher interface
- Optimise ahash/shash performance in light of costly indirect calls
- Remove ahash alignmask attribute
Algorithms:
- Improve AES/XTS performance of 6-way unrolling for ppc
- Remove some uses of obsolete algorithms (md4, md5, sha1)
- Add FIPS 202 SHA-3 support in pkcs1pad
- Add fast path for single-page messages in adiantum
- Remove zlib-deflate
Drivers:
- Add support for S4 in meson RNG driver
- Add STM32MP13x support in stm32
- Add hwrng interface support in qcom-rng
- Add support for deflate algorithm in hisilicon/zip"
* tag 'v6.7-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (283 commits)
crypto: adiantum - flush destination page before unmapping
crypto: testmgr - move pkcs1pad(rsa,sha3-*) to correct place
Documentation/module-signing.txt: bring up to date
module: enable automatic module signing with FIPS 202 SHA-3
crypto: asymmetric_keys - allow FIPS 202 SHA-3 signatures
crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support
crypto: FIPS 202 SHA-3 register in hash info for IMA
x509: Add OIDs for FIPS 202 SHA-3 hash and signatures
crypto: ahash - optimize performance when wrapping shash
crypto: ahash - check for shash type instead of not ahash type
crypto: hash - move "ahash wrapping shash" functions to ahash.c
crypto: talitos - stop using crypto_ahash::init
crypto: chelsio - stop using crypto_ahash::init
crypto: ahash - improve file comment
crypto: ahash - remove struct ahash_request_priv
crypto: ahash - remove crypto_ahash_alignmask
crypto: gcm - stop using alignmask of ahash
crypto: chacha20poly1305 - stop using alignmask of ahash
crypto: ccm - stop using alignmask of ahash
net: ipv6: stop checking crypto_ahash_alignmask
...
Merge tag 'integrity-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity
Pull integrity updates from Mimi Zohar:
"Four integrity changes: two IMA-overlay updates, an integrity Kconfig
cleanup, and a secondary keyring update"
* tag 'integrity-v6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
ima: detect changes to the backing overlay file
certs: Only allow certs signed by keys on the builtin keyring
integrity: fix indentation of config attributes
ima: annotate iint mutex to avoid lockdep false positive warnings
Upon additional review, the new fast path in adiantum_finish() is
missing the call to flush_dcache_page() that scatterwalk_map_and_copy()
was doing. It's apparently debatable whether flush_dcache_page() is
actually needed, as per the discussion at
https://lore.kernel.org/lkml/YYP1lAq46NWzhOf0@casper.infradead.org/T/#u.
However, it appears that currently all the helper functions that write
to a page, such as scatterwalk_map_and_copy(), memcpy_to_page(), and
memzero_page(), do the dcache flush. So do it to be consistent.
Fixes: dadf5e56c9 ("crypto: adiantum - add fast path for single-page messages")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
alg_test_descs[] needs to be in sorted order, since it is used for
binary search. This fixes the following boot-time warning:
testmgr: alg_test_descs entries in wrong order: 'pkcs1pad(rsa,sha512)' before 'pkcs1pad(rsa,sha3-256)'
Fixes: ee62afb9d0 ("crypto: rsa-pkcs1pad - Add FIPS 202 SHA-3 support")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Originally the secondary trusted keyring provided a keyring to which extra
keys may be added, provided those keys were not blacklisted and were
vouched for by a key built into the kernel or already in the secondary
trusted keyring.
On systems with the machine keyring configured, additional keys may also
be vouched for by a key on the machine keyring.
Prevent loading additional certificates directly onto the secondary
keyring, vouched for by keys on the machine keyring, yet allow these
certificates to be loaded onto other trusted keyrings.
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
Add FIPS 202 SHA-3 hash signature support in x509 certificates, pkcs7
signatures, and authenticode signatures. Supports hashes of size 256
and up, as 224 is too weak for any practical purposes.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add support in rsa-pkcs1pad for FIPS 202 SHA-3 hashes, sizes 256
and up, as 224 is too weak for any practical purposes.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Register FIPS 202 SHA-3 hashes in hash info for IMA and other
users. Sizes 256 and up, as 224 is too weak for any practical
purposes.
Signed-off-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The "ahash" API provides access to both CPU-based and hardware offload-
based implementations of hash algorithms. Typically the former are
implemented as "shash" algorithms under the hood, while the latter are
implemented as "ahash" algorithms. The "ahash" API provides access to
both. Various kernel subsystems use the ahash API because they want to
support hashing hardware offload without using a separate API for it.
Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.
This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.
It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.
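Simplified, each ahash entry point now dispatches directly to the
shash layer when possible (a condensed sketch of the idea, not the
literal patch):

    int crypto_ahash_update(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

            /* common case: tfm wraps a shash, call it without the
             * extra indirection */
            if (likely(tfm->using_shash))
                    return shash_ahash_update(req, ahash_request_ctx(req));

            return crypto_ahash_alg(tfm)->update(req);
    }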
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the previous patch made crypto_shash_type visible to ahash.c,
change checks for '->cra_type != &crypto_ahash_type' to '->cra_type ==
&crypto_shash_type'. This makes more sense and avoids having to
forward-declare crypto_ahash_type. The result is still the same, since
the type is either shash or ahash here.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The functions that are involved in implementing the ahash API on top of
an shash algorithm belong better in ahash.c, not in shash.c where they
currently are. Move them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
struct ahash_request_priv is unused, so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_gcm_create_common() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify chachapoly_create() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify crypto_ccm_create_common() accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
crypto_ahash_alignmask() always returns 0 and will be removed. In
preparation for this, stop checking crypto_ahash_alignmask() in testmgr.
As a result of this change,
test_sg_division::offset_relative_to_alignmask and
testvec_config::key_offset_relative_to_alignmask no longer have any
effect on ahash (or shash) algorithms. Therefore, also stop setting
these flags in default_hash_testvec_configs[].
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the alignmask for ahash and shash algorithms is always 0,
simplify the code in authenc accordingly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than e.g. is the case for
the alignmask for "skcipher". Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.
The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.
In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.
This follows the same change that was made to shash.
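For example, instead of relying on an alignmask, a driver can store
its digest with the unaligned accessors (an illustrative fragment;
names like ctx, digest_words and result are placeholders):

    #include <asm/unaligned.h>

    /* write the hash state to a possibly unaligned result buffer */
    for (i = 0; i < digest_words; i++)
            put_unaligned_le32(ctx->state[i], result + 4 * i);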
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Per section 4.c. of the IETF Trust Legal Provisions, "Code Components"
in IETF Documents are licensed on the terms of the BSD-3-Clause license:
https://trustee.ietf.org/documents/trust-legal-provisions/tlp-5/
The term "Code Components" specifically includes ASN.1 modules:
https://trustee.ietf.org/documents/trust-legal-provisions/code-components-list-3/
Add an SPDX identifier as well as a copyright notice pursuant to section
6.d. of the Trust Legal Provisions to all ASN.1 modules in the tree
which are derived from IETF Documents.
Section 4.d. of the Trust Legal Provisions requests that each Code
Component identify the RFC from which it is taken, so link that RFC
in every ASN.1 module.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>