Martin KaFai Lau 08a7ce384e bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem
This patch uses bpf_mem_alloc for the task and cgroup local storage
because the bpf prog can easily get hold of the storage owner's
PTR_TO_BTF_ID. eg. a prog using bpf_get_current_task_btf() can run in
some of the kmalloc code paths, where allocating the selem with kmalloc
would cause deadlock/recursion. bpf_mem_cache_alloc() is deadlock free
and solves a legit use case in [1].
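
To make the deadlock concern concrete, below is a minimal sketch (not
part of this patch) of a tracing prog that creates task local storage
while already inside an allocator code path. The attach point and map
layout are illustrative assumptions only.

  /* Hypothetical example: task local storage created from a prog that
   * traces an allocator function.  With kmalloc-backed selems the
   * storage allocation could recurse into the allocator being traced;
   * with bpf_mem_cache_alloc() it cannot.
   */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char LICENSE[] SEC("license") = "GPL";

  struct {
          __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);
          __type(key, int);
          __type(value, __u64);
  } alloc_cnt SEC(".maps");

  SEC("fentry/kmem_cache_alloc")        /* illustrative attach point */
  int BPF_PROG(count_allocs)
  {
          struct task_struct *task = bpf_get_current_task_btf();
          __u64 *cnt;

          /* May allocate a new selem while inside the allocator. */
          cnt = bpf_task_storage_get(&alloc_cnt, task, 0,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (cnt)
                  __sync_fetch_and_add(cnt, 1);
          return 0;
  }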

For sk storage, its batch creation benchmark shows a few percent
regression when the sk create/destroy batch size is larger than 32.
sk creation/destruction happens much more often and depends on
external traffic. Considering that deadlocking with sk storage is
only hypothetical, sk storage can cross the bridge to bpf_mem_alloc
later, when a legit (ie. useful) use case comes up.

For inode storage, bpf_local_storage_destroy() is called before an
rcu gp has been waited for, so its memory cannot be reused
immediately. inode storage therefore stays with kmalloc, with the
kfree happening after the rcu [or tasks_trace] gp.

A 'bool bpf_ma' argument is added to bpf_local_storage_map_alloc().
Only task and cgroup storage pass 'bpf_ma == true', which means
bpf_mem_cache_alloc/free() is used. This patch only changes the selem
to use bpf_mem_alloc for task and cgroup. The next patch will change
the local_storage to also use bpf_mem_alloc for task and cgroup.
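
For reference, a sketch of how the flag is threaded through the
map_alloc callbacks (paraphrased; the cache names and exact signatures
here are illustrative, not the literal diff):

  static struct bpf_map *task_storage_map_alloc(union bpf_attr *attr)
  {
          /* task (and cgroup) storage: selem from bpf_mem_cache_alloc() */
          return bpf_local_storage_map_alloc(attr, &task_cache, true);
  }

  static struct bpf_map *bpf_sk_storage_map_alloc(union bpf_attr *attr)
  {
          /* sk (and inode) storage: stays with bpf_map_kzalloc()/kfree() */
          return bpf_local_storage_map_alloc(attr, &sk_cache, false);
  }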

Here are some more details on the changes:

* memory allocation:
After bpf_mem_cache_alloc(), SDATA(selem)->data is zeroed because
bpf_mem_cache_alloc() could return a reused selem. This keeps the
existing bpf_map_kzalloc() behavior. Only SDATA(selem)->data is
zeroed; it is the part visible to the bpf prog. There is no need to
use zero_map_value() for this because bpf_selem_free(..., reuse_now
= true) ensures no bpf prog is using the selem before returning it
through bpf_mem_cache_free(). The internal fields of the selem will
be initialized when it is linked to the new smap and the new
local_storage.

When 'bpf_ma == false', nothing changes in this patch; allocation
stays with bpf_map_kzalloc() (see the sketch below).
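
Putting the above together, the allocation path roughly works as
follows. The 'bpf_ma' and 'selem_ma' fields and the omitted gfp flag
plumbing are assumptions based on this description:

  static struct bpf_local_storage_elem *
  selem_alloc_sketch(struct bpf_local_storage_map *smap, gfp_t gfp_flags)
  {
          struct bpf_local_storage_elem *selem;

          if (smap->bpf_ma) {
                  selem = bpf_mem_cache_alloc(&smap->selem_ma);
                  if (selem)
                          /* A reused selem may hold stale data.  Zero
                           * only the part visible to the bpf prog to
                           * keep the old bpf_map_kzalloc() behavior.
                           */
                          memset(SDATA(selem)->data, 0,
                                 smap->map.value_size);
          } else {
                  /* bpf_ma == false: unchanged, kzalloc zeroes the mem. */
                  selem = bpf_map_kzalloc(&smap->map, smap->elem_size,
                                          gfp_flags | __GFP_NOWARN);
          }
          return selem;
  }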

* memory free:
bpf_selem_free() and bpf_selem_free_rcu() are modified to handle the
bpf_ma == true case (a sketch follows at the end of this section).

In the common selem free path, the owner is also being destroyed and
the memory is freed in bpf_local_storage_destroy(); by then the owner
(task or cgroup) has already gone through a rcu gp. The memory can be
reused immediately, so bpf_local_storage_destroy() calls
bpf_selem_free(..., reuse_now = true), which uses bpf_mem_cache_free()
to make the selem available for immediate reuse.

An exception is the delete elem code path, which is reached from the
bpf_*_storage_delete() helpers and the bpf_map_delete_elem() syscall.
This path is an unusual case for local storage because the common
usage is to let the local storage live for the owner's lifetime, so
neither the bpf prog nor user space has to monitor the owner's
destruction. In the delete elem path, the selem cannot be reused
immediately because a bpf prog could still be using it. The delete
path calls bpf_selem_free(..., reuse_now = false), which waits for a
rcu tasks trace gp before freeing the elem. The rcu callback is
changed to do bpf_mem_cache_raw_free() instead of kfree().

When 'bpf_ma == false', the behavior should be the same as before.
__bpf_selem_free() is added to do the kfree_rcu() and
call_rcu_tasks_trace().

A few words on 'reuse_now == true': the free is still racing with
bpf_local_storage_map_free(), which runs under rcu protection, so it
still needs to wait for a rcu gp instead of doing a plain kfree().
Otherwise, the selem may be reused by the slab for a totally
different struct while bpf_local_storage_map_free() is still using it
(as a rcu reader). For the inode case, there may be other rcu readers
as well. In short, bpf_ma == false and reuse_now == true => vanilla
rcu.
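
The free paths described above roughly reduce to the following sketch
(the *_sketch names and the field layout are assumptions;
__bpf_selem_free() is the helper named above):

  static void selem_free_rcu_sketch(struct rcu_head *rcu)
  {
          struct bpf_local_storage_elem *selem;

          selem = container_of(rcu, struct bpf_local_storage_elem, rcu);
          /* selem came from bpf_mem_cache_alloc(); free the raw memory. */
          bpf_mem_cache_raw_free(selem);
  }

  static void selem_free_sketch(struct bpf_local_storage_elem *selem,
                                struct bpf_local_storage_map *smap,
                                bool reuse_now)
  {
          if (!smap->bpf_ma) {
                  /* Unchanged: vanilla rcu when reuse_now == true,
                   * otherwise wait for a tasks_trace gp as well.
                   */
                  __bpf_selem_free(selem, reuse_now);
                  return;
          }

          if (reuse_now)
                  /* Owner (task/cgroup) already went through a rcu gp:
                   * hand the selem back for immediate reuse.
                   */
                  bpf_mem_cache_free(&smap->selem_ma, selem);
          else
                  /* Delete-elem path: a bpf prog may still be reading
                   * the selem, so wait for a rcu tasks trace gp first.
                   */
                  call_rcu_tasks_trace(&selem->rcu, selem_free_rcu_sketch);
  }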

[1]: https://lore.kernel.org/bpf/20221118190109.1512674-1-namhyung@kernel.org/

Cc: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://lore.kernel.org/r/20230322215246.1675516-3-martin.lau@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2023-03-25 19:52:52 -07:00
preload bpf: iterators: Split iterators.lskel.h into little- and big- endian versions 2023-01-28 12:45:15 -08:00
arraymap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
bloom_filter.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
bpf_cgrp_storage.c bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem 2023-03-25 19:52:52 -07:00
bpf_inode_storage.c bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem 2023-03-25 19:52:52 -07:00
bpf_iter.c bpf: implement numbers iterator 2023-03-08 16:19:51 -08:00
bpf_local_storage.c bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem 2023-03-25 19:52:52 -07:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-02-10 15:54:26 -08:00
bpf_lru_list.h printk: stop including cache.h from printk.h 2022-05-13 07:20:07 -07:00
bpf_lsm.c bpf: Fix the kernel crash caused by bpf_setsockopt(). 2023-01-26 23:26:40 -08:00
bpf_struct_ops_types.h bpf: Add dummy BPF STRUCT_OPS for test purpose 2021-11-01 14:10:00 -07:00
bpf_struct_ops.c bpf: Check IS_ERR for the bpf_map_get() return value 2023-03-24 12:40:47 -07:00
bpf_task_storage.c bpf: Use bpf_mem_cache_alloc/free in bpf_local_storage_elem 2023-03-25 19:52:52 -07:00
btf.c bpf: Disable migration when freeing stashed local kptr using obj drop 2023-03-13 16:55:04 -07:00
cgroup_iter.c bpf: Pin the start cgroup in cgroup_iter_seq_init() 2022-11-21 17:40:42 +01:00
cgroup.c bpf: allow ctx writes using BPF_ST_MEM instruction 2023-03-03 21:41:46 -08:00
core.c bpf: add missing header file include 2023-02-22 09:52:32 -08:00
cpumap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
cpumask.c bpf: Treat KF_RELEASE kfuncs as KF_TRUSTED_ARGS 2023-03-25 16:56:22 -07:00
devmap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
disasm.c bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
disasm.h bpf: Relicense disassembler as GPL-2.0-only OR BSD-2-Clause 2021-09-02 14:49:23 +02:00
dispatcher.c bpf: Synchronize dispatcher update with bpf_dispatcher_xdp_func 2022-12-14 12:02:14 -08:00
hashtab.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
helpers.c bpf: Remove now-unnecessary NULL checks for KF_RELEASE kfuncs 2023-03-25 16:56:22 -07:00
inode.c fs: port inode_init_owner() to mnt_idmap 2023-01-19 09:24:28 +01:00
Kconfig rcu: Make the TASKS_RCU Kconfig option be selected 2022-04-20 16:52:58 -07:00
link_iter.c bpf: Add bpf_link iterator 2022-05-10 11:20:45 -07:00
local_storage.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
lpm_trie.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
Makefile bpf: Enable cpumasks to be queried and used as kptrs 2023-01-25 07:57:49 -08:00
map_in_map.c bpf: Add comments for map BTF matching requirement for bpf_list_head 2022-11-17 19:22:14 -08:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Introduce MEM_RDONLY flag 2021-12-18 13:27:41 -08:00
memalloc.c bpf: Add a few bpf mem allocator functions 2023-03-25 19:52:51 -07:00
mmap_unlock_work.h bpf: Introduce helper bpf_find_vma 2021-11-07 11:54:51 -08:00
net_namespace.c net: Add includes masked by netdevice.h including uapi/bpf.h 2021-12-29 20:03:05 -08:00
offload.c bpf: offload map memory usage 2023-03-07 09:33:43 -08:00
percpu_freelist.c bpf: Initialize same number of free nodes for each pcpu_freelist 2022-11-11 12:05:14 -08:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c
queue_stack_maps.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
reuseport_array.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
ringbuf.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
stackmap.c bpf: return long from bpf_map_ops funcs 2023-03-22 15:11:30 -07:00
syscall.c bpf: Only invoke kptr dtor following non-NULL xchg 2023-03-25 16:56:22 -07:00
sysfs_btf.c bpf: Load and verify kernel module BTFs 2020-11-10 15:25:53 -08:00
task_iter.c bpf: keep a reference to the mm, in case the task is dead. 2022-12-28 14:11:48 -08:00
tnum.c bpf, tnums: Provably sound, faster, and more precise algorithm for tnum_mul 2021-06-01 13:34:15 +02:00
trampoline.c bpf: Fix attaching fentry/fexit/fmod_ret/lsm to modules 2023-03-15 18:38:21 -07:00
verifier.c bpf: Treat KF_RELEASE kfuncs as KF_TRUSTED_ARGS 2023-03-25 16:56:22 -07:00