mirror_ubuntu-kernels/tools/testing/selftests/bpf/verifier
Eduard Zingerman 904e6ddf41 bpf: Use scalar ids in mark_chain_precision()
Change mark_chain_precision() to track precision in situations
like below:

    r2 = unknown value
    ...
  --- state #0 ---
    ...
    r1 = r2                 // r1 and r2 now share the same ID
    ...
  --- state #1 {r1.id = A, r2.id = A} ---
    ...
    if (r2 > 10) goto exit; // find_equal_scalars() assigns range to r1
    ...
  --- state #2 {r1.id = A, r2.id = A} ---
    r3 = r10
    r3 += r1                // need to mark both r1 and r2

At the beginning of the processing of each state, ensure that if a
register with a scalar ID is marked as precise, all registers sharing
this ID are also marked as precise.

This property would be used by a follow-up change in regsafe().

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20230613153824.3324830-2-eddyz87@gmail.com
2023-06-13 15:14:27 -07:00
.gitignore
atomic_and.c bpf, x86: Fix BPF_FETCH atomic and/or/xor with r0 as src 2021-02-22 18:03:11 +01:00
atomic_bounds.c bpf: Propagate stack bounds to registers in atomics w/ BPF_FETCH 2021-02-02 18:23:29 -08:00
atomic_cmpxchg.c bpf, selftests: Update test case for atomic cmpxchg on r0 with pointer 2021-12-14 19:33:06 -08:00
atomic_fetch_add.c bpf: Add tests for new BPF atomic operations 2021-01-14 18:34:29 -08:00
atomic_fetch.c bpf, selftests: Add test case for atomic fetch on spilled pointer 2021-12-14 19:33:06 -08:00
atomic_invalid.c bpf: Small BPF verifier log improvements 2022-03-03 16:54:10 +01:00
atomic_or.c bpf: Explicitly zero-extend R0 after 32-bit cmpxchg 2021-03-04 19:06:03 -08:00
atomic_xchg.c bpf: Add tests for new BPF atomic operations 2021-01-14 18:34:29 -08:00
atomic_xor.c selftests/bpf: Fix endianness issues in atomic tests 2021-02-10 11:55:22 -08:00
basic_call.c
basic_instr.c
basic_stx_ldx.c
basic.c selftests/bpf: Fix test_verifier after introducing resolve_pseudo_ldimm64 2020-10-06 20:16:57 -07:00
bpf_loop_inline.c selftests/bpf: Fix test_verifier failed test in unprivileged mode 2022-07-21 21:03:25 -07:00
bpf_st_mem.c selftests/bpf: check if BPF_ST with variable offset preserves STACK_ZERO 2023-02-15 11:48:48 -08:00
calls.c bpf: Treat KF_RELEASE kfuncs as KF_TRUSTED_ARGS 2023-03-25 16:56:22 -07:00
ctx_sk_lookup.c selftests/bpf: Add tests for accessing ingress_ifindex in bpf_sk_lookup 2021-11-10 16:29:59 -08:00
ctx_skb.c selftests/bpf: Use __BYTE_ORDER__ 2021-10-25 20:39:42 -07:00
dead_code.c selftests, bpf: Test that dead ldx_w insns are accepted 2021-08-13 17:46:26 +02:00
direct_value_access.c selftests/bpf: Mark tests that require unaligned memory access 2020-11-18 17:45:35 -08:00
event_output.c selftests/bpf: Fix cgroup sockopt verifier test 2020-07-11 01:32:15 +02:00
jit.c bpf: add selftests for lsh, rsh, arsh with reg operand 2022-10-19 16:53:51 -07:00
jmp32.c bpf, selftests: Add verifier test case for jmp32's jeq/jne 2022-07-01 12:56:27 -07:00
jset.c bpf, selftests: Adjust few selftest outcomes wrt unreachable code 2021-06-14 23:06:38 +02:00
jump.c bpf, selftests: Add verifier test case for imm=0,umin=0,umax=1 scalar 2022-07-01 12:56:27 -07:00
junk_insn.c
ld_abs.c
ld_dw.c
ld_imm64.c selftests/bpf: Fix test_verifier after introducing resolve_pseudo_ldimm64 2020-10-06 20:16:57 -07:00
map_kptr.c bpf: Remove bpf_kfunc_call_test_kptr_get() test kfunc 2023-04-16 08:51:24 -07:00
perf_event_sample_period.c selftests/bpf: Use __BYTE_ORDER__ 2021-10-25 20:39:42 -07:00
precise.c bpf: Use scalar ids in mark_chain_precision() 2023-06-13 15:14:27 -07:00
scale.c
sleepable.c bpf: Allow BPF_PROG_TYPE_STRUCT_OPS programs to be sleepable 2023-01-25 10:25:57 -08:00
wide_access.c selftests/bpf: Mark tests that require unaligned memory access 2020-11-18 17:45:35 -08:00