The term "HQD" is CP-specific and doesn't
accurately describe the queue resources for other IP blocks like SDMA,
VCN, or VPE. This change:
1. Renames `num_hqds` to `num_slots` in amdgpu_kms.c to better reflect
the generic nature of the resource counting
2. Updates the UAPI struct member from `userq_num_hqds` to `userq_num_slots`
3. Maintains the same functionality while using more appropriate terminology
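For illustration, the user-visible side of the rename would look roughly like
this (only the affected member of struct drm_amdgpu_info_hw_ip is shown;
surrounding fields are unchanged):

	/* before: name tied to the CP-specific HQD concept */
	__u32 userq_num_hqds;

	/* after: generic slot count, valid for GFX/COMPUTE/SDMA/VCN/VPE */
	__u32 userq_num_slots;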
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This change exposes the number of available user queue instances
for each hardware IP type (GFX, COMPUTE, SDMA) through the
drm_amdgpu_info_hw_ip interface.
Key changes:
1. Added userq_num_instance field to drm_amdgpu_info_hw_ip structure
2. Implemented counting of available HQD slots (see the sketch below) using:
- mes.gfx_hqd_mask for GFX queues
- mes.compute_hqd_mask for COMPUTE queues
- mes.sdma_hqd_mask for SDMA queues
3. Only counts available instances when user queues are enabled
(!disable_uq)
v2: using the adev->mes.gfx_hqd_mask[]/compute_hqd_mask[]/sdma_hqd_mask[] masks
to determine the number of queue slots available for each engine type (Alex)
v3: rename userq_num_instance to userq_num_hqds (Alex)
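A rough sketch of the slot counting based on the MES masks follows; the helper
name is made up for illustration, and the !disable_uq gate is represented here
by the adev->userq_funcs[] test, which is an assumption about where that
information lives:

	static u32 amdgpu_userq_count_slots(struct amdgpu_device *adev, u32 ip_type)
	{
		u32 num_slots = 0;
		int i;

		if (!adev->userq_funcs[ip_type])	/* user queues disabled for this IP */
			return 0;

		switch (ip_type) {
		case AMDGPU_HW_IP_GFX:
			for (i = 0; i < AMDGPU_MES_MAX_GFX_PIPES; i++)
				num_slots += hweight32(adev->mes.gfx_hqd_mask[i]);
			break;
		case AMDGPU_HW_IP_COMPUTE:
			for (i = 0; i < AMDGPU_MES_MAX_COMPUTE_PIPES; i++)
				num_slots += hweight32(adev->mes.compute_hqd_mask[i]);
			break;
		case AMDGPU_HW_IP_DMA:
			for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++)
				num_slots += hweight32(adev->mes.sdma_hqd_mask[i]);
			break;
		default:
			break;
		}

		return num_slots;
	}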
Suggested-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This reverts commit 5fb90421fa.
The original patch moved `amdgpu_userq_mgr_fini()` to the driver's
`postclose` callback, which is called after `drm_gem_release()` in
the DRM file cleanup sequence. If a user application crashes or aborts
without cleaning up its user queues, `drm_gem_release()` may free
GEM objects that are still referenced by active user queues, leading
to use-after-free. By reverting, we ensure that user queues are
disabled and cleaned up before any GEM objects are released,
preventing this class of bug. However, this reintroduces a race
during PCI hot-unplug, where device removal can race with per-file
cleanup, leading to use-after-free in suspend/unplug paths.
This will be fixed in the next patch.
Fixes: 5fb90421fa ("drm/amdgpu: fix slab-use-after-free in amdgpu_userq_mgr_fini+0x70c")
Signed-off-by: Vitaly Prosyak <vitaly.prosyak@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Pass amdgpu device context instead of drm device context to some
amdgpu_device_* functions. DRM device context is not required in those
functions. No functional change.
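The shape of the change, using a made-up function name
(amdgpu_device_example_init is not a real symbol):

	/* before: took a drm_device only to convert it immediately */
	void amdgpu_device_example_init(struct drm_device *dev)
	{
		struct amdgpu_device *adev = drm_to_adev(dev);

		/* only adev is used from here on */
	}

	/* after: take the amdgpu device directly */
	void amdgpu_device_example_init(struct amdgpu_device *adev)
	{
		/* same body, no conversion needed */
	}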
Signed-off-by: Lijo Lazar <lijo.lazar@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a debugfs file under the client directory which shares
the root page table base address of the VM.
This address could be used to dump the page table when debugging
memory issues.
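A minimal sketch of what such a per-client debugfs entry could look like (the
show function name and exact wiring are assumptions, not the literal patch):

	static int amdgpu_debugfs_vm_root_show(struct seq_file *m, void *unused)
	{
		struct amdgpu_vm *vm = m->private;

		/* GPU address of the root page directory BO */
		seq_printf(m, "root page table base: 0x%llx\n",
			   amdgpu_bo_gpu_offset(vm->root.bo));
		return 0;
	}
	DEFINE_SHOW_ATTRIBUTE(amdgpu_debugfs_vm_root);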
Signed-off-by: Sunil Khatri <sunil.khatri@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20250704075548.1549849-4-sunil.khatri@amd.com
Signed-off-by: Christian König <christian.koenig@amd.com>
The current cleanup order during file descriptor close can lead to
a race condition where the eviction fence worker attempts to access
a destroyed mutex from the user queue manager:
[ 517.294055] DEBUG_LOCKS_WARN_ON(lock->magic != lock)
[ 517.294060] WARNING: CPU: 8 PID: 2030 at kernel/locking/mutex.c:564
[ 517.294094] Workqueue: events amdgpu_eviction_fence_suspend_worker [amdgpu]
The issue occurs because:
1. We destroy the user queue manager (including its mutex) first
2. Then try to destroy eviction fences which may have pending work
3. The eviction fence worker may try to access the already-destroyed mutex
Fix this by reordering the cleanup to:
1. First mark the fd as closing and destroy eviction fences,
which flushes any pending work
2. Then safely destroy the user queue manager after we're certain
no more fence work will be executed
The copy in amdgpu_driver_postclose_kms() needs to be removed (Christian)
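Roughly, the close path becomes (simplified; the helper names follow the
existing driver naming, but the exact call sites may differ):

	/* 1) mark the fd as closing and flush/destroy the eviction fences,
	 *    which guarantees no more fence work will run
	 */
	fpriv->evf_mgr.fd_closing = true;
	amdgpu_eviction_fence_destroy(&fpriv->evf_mgr);

	/* 2) only now tear down the user queue manager and its mutex */
	amdgpu_userq_mgr_fini(&fpriv->userq_mgr);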
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Prike Liang <Prike.Liang@amd.com>
Reviewed-by: Arvind Yadav <Arvind.Yadav@amd.com>
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Since the userq resources were already freed in the early drm_release
phase, avoid freeing them again in the later KMS postclose callback.
Signed-off-by: Prike Liang <Prike.Liang@amd.com>
Reviewed-by: Jesse Zhang <Jesse.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm_file will be used in the usermode queue code to provide better
process information in logging, so make drm_file part of the userq_mgr
struct.
Update the drm_file pointer in userq_mgr on each amdgpu_driver_open_kms
call.
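The gist, as a sketch (member placement and the open_kms snippet are
illustrative):

	struct amdgpu_userq_mgr {
		/* existing members omitted */
		struct drm_file	*file;	/* for better process information in logging */
	};

	/* in amdgpu_driver_open_kms(), after the userq manager is initialized */
	fpriv->userq_mgr.file = file_priv;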
Signed-off-by: Sunil Khatri <sunil.khatri@amd.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
s/userqueue/userq/
1. remove the mix of amdgpu_userqueue and amdgpu_userq
2. be consistent with amdgpu_userq_fence.c, which already uses userq
3. it's shorter
Reviewed-by: Prike Liang <Prike.Liang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add a helper to get a mask of IPs which support user queues.
Use this in the INFO IOCTL to get the IP mask to replace
the current code.
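One possible shape for such a helper (the function name and the userq_funcs[]
test are assumptions based on this description):

	static u32 amdgpu_userq_get_supported_ip_mask(struct amdgpu_device *adev)
	{
		u32 userq_ip_mask = 0;
		int i;

		for (i = 0; i < AMDGPU_HW_IP_NUM; i++) {
			if (adev->userq_funcs[i])
				userq_ip_mask |= (1 << i);
		}

		return userq_ip_mask;
	}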
Reviewed-by: Prike Liang <Prike.Liang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fix an array index out of bounds warning in the DMA IP case of
amdgpu_hw_ip_info() where it was incorrectly checking
adev->gfx.gfx_ring[i].no_user_submission instead of
adev->sdma.instance[i].ring.no_user_submission.
The mismatch caused UBSAN to report an array bounds violation since
it was accessing the GFX ring array with SDMA instance indices.
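The corrected check in the DMA case of amdgpu_hw_ip_info(), simplified:

	case AMDGPU_HW_IP_DMA:
		for (i = 0; i < adev->sdma.num_instances; i++)
			if (!adev->sdma.instance[i].ring.no_user_submission)
				++num_rings;	/* previously indexed adev->gfx.gfx_ring[i] */
		break;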
Fixes: 4310acd446 ("drm/amdgpu: add ring flag for no user submissions")
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Jesse Zhang <jesse.zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This would be set by IPs which only accept submissions
from the kernel, not userspace, such as when kernel
queues are disabled. Don't expose the rings to userspace
and reject any submissions in the CS IOCTL.
v2: fix error code (Alex)
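Conceptually, the CS IOCTL side amounts to (placement within the CS parsing
path is approximate):

	/* rings that do not accept user submissions are rejected outright */
	if (ring->no_user_submission)
		return -EINVAL;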
Reviewed-by: Sunil Khatri<sunil.khatri@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add an INFO query to check if user queues are supported.
v2: switch to a mask of IPs (Marek)
v3: move to drm_amdgpu_info_device (Marek)
Cc: marek.olsak@amd.com
Cc: prike.liang@amd.com
Cc: sunil.khatri@amd.com
Cc: yogesh.mohanmarimuthu@amd.com
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Reviewed-by: Sunil Khatri <sunil.khatri@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The eviction process can get into a race condition between the eviction
fence suspend work (which replaces the old fence with new) and kms_close
(which destroys the fence and doesn't expect a new one).
This patch:
- adds a flag to indicate that fd is closing, so fence replacement is
not required (evf_mgr->fd_closing)
- adds a flush_work() during the ev_fence_destroy routine
V2: Addressed review comments from Christian:
- Do not use mutex to sync
- Use flush_work and wait for suspend_work to be done
V3: Fixed the state machine for queue->active, which was adding to the
race between suspend/resume and queue ops
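A sketch of the two pieces (field and work item names follow the description
above; the exact types are assumptions):

	/* in the eviction fence suspend worker: skip replacement on fd close */
	if (evf_mgr->fd_closing)
		return;

	/* in ev_fence_destroy(): make sure no suspend work is still running */
	evf_mgr->fd_closing = true;
	flush_work(&evf_mgr->suspend_work);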
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian König <christian.koenig@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds basic eviction fence framework for the gfx buffers.
The idea is:
- One eviction fence is created per gfx process, at kms_open.
- This fence is attached to all the gem buffers created
by this process.
- This fence is detached from all the gem buffers at postclose_kms.
This framework will be further used for usermode queues.
V2: Addressed review comments from Christian
- keep fence_ctx and fence_seq directly in fpriv
- eviction_fence should be dynamically allocated
- do not save eviction fence instance in BO, there could be many
such fences attached to one BO
- use dma_resv_replace_fence() in detach
V3: Addressed review comments from Christian
- eviction fence create and destroy functions should be called
only once from fpriv create/destroy
- use dma_fence_put() in eviction_fence_destroy
V4: Addressed review comments from Christian:
- create a separate ev_fence_mgr structure
- cleanup fence init part
- do not add a domain for fence owner KGD
V5: Addressed review comments from Christian:
- drop the dma_fence_is_signaled check
- use a local variable to access evf_mgr->ev_fence under the
spin_lock() multiple places
- remove the vm->is_compute_ctx check to attach gfx eviction fence,
in gem_object_open
V6: Addressed review comments from Christian:
- drop the return value from eviction_fence_signal
- reserve_fence should be the first thing inside the
attach_eviction_fence function, also keep the resv_add_fence inside
the lock
- remove the unwanted ev_fence check inside detach function
- fix wrong variable check in eviction_fence_init function
- return the error value of eviction_fence_init to the caller, don't
keep it void.
- fail gem_object_open if attaching of eviction_fence fails
- detach the eviction fence only when amdgpu_vm_is_bo_always_valid
is not true.
V7: Addressed review comments from Christian:
- Do not add a uq_mgr ptr in ev_fence, rather add evf_mgr
V8: Move eviction fence enabling into separate patch for CI
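The core attach/detach flow, sketched against the dma_resv API (the function
names carry a _sketch suffix to make clear they are illustrative; the kernel
API for the detach side is dma_resv_replace_fences()):

	/* attach: called with the BO's reservation object locked */
	static int eviction_fence_attach_sketch(struct dma_resv *resv,
						struct dma_fence *ev_fence)
	{
		int ret;

		/* reserve a fence slot first, then add under the same lock */
		ret = dma_resv_reserve_fences(resv, 1);
		if (ret)
			return ret;

		dma_resv_add_fence(resv, ev_fence, DMA_RESV_USAGE_BOOKKEEP);
		return 0;
	}

	/* detach: replace all fences of the eviction fence context with a stub */
	static void eviction_fence_detach_sketch(struct dma_resv *resv,
						 u64 ev_fence_context)
	{
		struct dma_fence *stub = dma_fence_get_stub();

		dma_resv_replace_fences(resv, ev_fence_context, stub,
					DMA_RESV_USAGE_BOOKKEEP);
		dma_fence_put(stub);
	}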
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian Koenig <christian.koenig@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Arvind Yadav <arvind.yadav@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds a new subquery (AMDGPU_INFO_UQ_FW_AREAS) in
AMDGPU_INFO_IOCTL to get the size and alignment of shadow
and csa objects from the FW setup. This information is
required for the userqueue consumers.
V2: Added Alex's suggestions and addressed review comments:
- make this query IP specific (GFX/SDMA etc)
- give a better title (AMDGPU_INFO_UQ_METADATA)
- restructured the code as per sample code shared by Alex
V3: Split the UAPI patch from shadow_size_fn modifications
V4: Addressed review comments from UAPI review (Marek/Pierre-Eric)
- Change the query name to AMDGPU_INFO_UQ_FW_AREAS
- remove unused input parameter for AMDGPU_HW_IP*
UAPI link: https://gitlab.freedesktop.org/mesa/drm/-/merge_requests/400/
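From the userspace side, the query could be issued roughly like this (the
returned metadata struct name and layout are assumptions; only struct
drm_amdgpu_info itself is standard UAPI):

	struct drm_amdgpu_info_uq_fw_areas meta = {0};	/* assumed struct name */
	struct drm_amdgpu_info request = {0};

	request.return_pointer = (uintptr_t)&meta;
	request.return_size = sizeof(meta);
	request.query = AMDGPU_INFO_UQ_FW_AREAS;
	request.query_hw_ip.type = AMDGPU_HW_IP_GFX;	/* query is IP specific */

	drmCommandWrite(fd, DRM_AMDGPU_INFO, &request, sizeof(request));
	/* meta now holds the shadow/csa sizes and alignments for GFX user queues */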
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Cc: Arvind Yadav <arvind.yadav@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds IP independent skeleton code for amdgpu
usermode queue. It contains:
- New files with the init functions of usermode queues.
- A queue context manager in driver private data (sketched below).
V1: Worked on design review comments from RFC patch series:
(https://patchwork.freedesktop.org/series/112214/)
- Alex: Keep a list of queues, instead of single queue per process.
- Christian: Use the queue manager instead of global ptrs,
Don't keep the queue structure in amdgpu_ctx
V2:
- Reformatted code, split the big patch into two
V3:
- Integration with doorbell manager
V4:
- Align the structure member names to the largest member's column
(Luben)
- Added SPDX license (Luben)
V5:
- Do not add amdgpu.h in amdgpu_userqueue.h (Christian).
- Move struct amdgpu_userq_mgr into amdgpu_userqueue.h (Christian).
V6: Rebase
V9: Rebase
V10: Rebase + Alex's R-B
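The skeleton roughly amounts to a per-process manager like the sketch below
(member list is indicative, not exhaustive):

	struct amdgpu_userq_mgr {
		struct idr		userq_idr;	/* queues created by this process */
		struct mutex		userq_mutex;
		struct amdgpu_device	*adev;
	};

	int amdgpu_userq_mgr_init(struct amdgpu_userq_mgr *userq_mgr,
				  struct amdgpu_device *adev)
	{
		mutex_init(&userq_mgr->userq_mutex);
		idr_init_base(&userq_mgr->userq_idr, 1);
		userq_mgr->adev = adev;

		return 0;
	}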
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Christian Koenig <christian.koenig@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Shashank Sharma <shashank.sharma@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Move more per instance data into the per instance structure.
v2: index instances directly on vcn1.0 and 2.0 to make
it clear that they only support a single instance (Lijo)
v3: fix typo on vcn 2.5
Reviewed-by: Boyuan Zhang <Boyuan.Zhang@amd.com> (v2)
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add an extra flag definition for the ids_flag field to distinguish
between VF/PF/PT modes.
v2: Updated kms driver minor version & removed pf check as default is 0
v3: Fix up version (Alex)
v4: rebase (Alex)
Proposed userspace:
e663bed7d6
Signed-off-by: Asad Kamal <asad.kamal@amd.com>
Reviewed-by: Lijo Lazar <lijo.lazar@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Combine the platform and GPU caps like we do for PCIe Gen.
This aligns properly with expectations and documentation
for the interface.
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/3820
Reviewed-by: Yang Wang <kevinyang.wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Register access from userspace should be blocked until
reset is complete.
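Conceptually (the exact entry point is not shown here; amdgpu_in_reset() is
the existing helper):

	/* bail out of the userspace register access path while a reset runs */
	if (amdgpu_in_reset(adev))
		return -EPERM;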
Signed-off-by: Victor Skvortsov <victor.skvortsov@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Remove the implementation of struct drm_driver.lastclose. The hook
was only necessary before in-kernel DRM clients existed, but is now
obsolete. The code in amdgpu_driver_lastclose_kms() is performed by
drm_lastclose().
v2:
- update commit message
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240812083000.337744-3-tzimmermann@suse.de
This greater-than-or-equal-to-zero comparison of an unsigned value is always true: fpriv->xcp_id >= 0U
Signed-off-by: Jesse Zhang <Jesse.Zhang@amd.com>
Reviewed-by: Tim Huang <Tim.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Check the return value of amdgpu_xcp_get_inst_details(), otherwise we
may use an uninitialized inst_mask variable.
Signed-off-by: Ma Jun <Jun.Ma2@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Refactor the runtime PM mode detection code to support both the
amdgpu_runtime_pm=2 and amdgpu_runtime_pm=1 cases.
Signed-off-by: Ma Jun <Jun.Ma2@amd.com>
Reviewed-by: Yang Wang <kevinyang.wang@amd.com>
Reviewed-by: Lijo Lazar <lijo.lazar@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Optimize the code to add support for BAMACO mode checking
Signed-off-by: Ma Jun <Jun.Ma2@amd.com>
Reviewed-by: Lijo Lazar <lijo.lazar@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
- Enable the seq64 mapping sequence.
- Fix wflinfo va conflict and other bugs.
v1:
- The seq64 area needs to be included in the AMDGPU_VA_RESERVED_SIZE
otherwise the areas will conflict with user space allocations (Alex)
- It needs to be mapped read only in the user VM (Alex)
v2:
- Instead of just one define for TOP/BOTTOM
reserved space separate them into two (Christian)
- Fix the CPU and VA calculations and while at it
also cleanup error handling and kerneldoc (Christian)
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Some chips provide both average and input power. Previously
we only exposed the average power; add a new query for the input
power.
Example userspace:
https://github.com/Umio-Yasuno/libdrm-amdgpu-sys-rs/tree/input_power
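In the kernel the new value is read the same way as the average power, just
with the new sensor id (sketch):

	u32 value;
	int size = sizeof(value);
	int r;

	r = amdgpu_dpm_read_sensor(adev, AMDGPU_PP_SENSOR_GPU_INPUT_POWER,
				   (void *)&value, &size);
	if (r)
		return r;
	/* value now holds the instantaneous (input) power, as opposed to
	 * AMDGPU_PP_SENSOR_GPU_AVG_POWER
	 */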
Reviewed-by: Yang Wang <kevinyang.wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
For backwards compatibility with userspace.
Fixes: 47f1724db4 ("drm/amd: Introduce `AMDGPU_PP_SENSOR_GPU_INPUT_POWER`")
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2897
Reviewed-by: Yang Wang <kevinyang.wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Developed a new driver which allocates 64-bit memory slots in sequence
order on each request. At the moment, user queue fence memory is the
main consumer of this seq64 driver.
v2: Worked on review comments from Christian for the following
modifications
- Move driver name from "semaphore" to "seq64"
- Remove unnecessary PT/PD mapping
- Move enable_mes check into init/fini functions.
v3: Worked on review comments from Christian
- drop enable_mes check
- use DECLARE_BITMAP for bit array
- added kerneldoc for seq64
v4: Worked on review comments from Christian
- Rename amdgpu_seq64_get name with amdgpu_seq64_alloc
v5: Worked on review comments from Christian
- Fix seq64 lockdep warning
- move fpriv->seq64_va check into amdgpu_seq64_unmap()
- make the function amdgpu_seq64_unmap() return as void.
- reserve the buffers as not interruptible.
v6: port to drm_exec (Alex)
v7: disable for now (Arun)
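The allocation side boils down to a bitmap of 64-bit slots in a shared BO,
something like the sketch below (field and helper names approximate the
description above and are not the literal code):

	int amdgpu_seq64_alloc(struct amdgpu_device *adev, u64 *va, u64 **cpu_addr)
	{
		unsigned long bit_pos;

		bit_pos = find_first_zero_bit(adev->seq64.used, adev->seq64.num_sem);
		if (bit_pos >= adev->seq64.num_sem)
			return -ENOSPC;

		__set_bit(bit_pos, adev->seq64.used);
		*va = bit_pos * sizeof(u64) + amdgpu_seq64_get_va_base(adev);
		*cpu_addr = bit_pos + adev->seq64.cpu_base_addr;

		return 0;
	}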
Signed-off-by: Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fix wrong IP count in the INFO query on spatial partitions. Update the
query to return the instance count corresponding to the partition id.
v2:
initialize variables only when required to be (Christian)
move variable declarations to the beginning of function (Christian)
Signed-off-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Include the subrevision and variant fields in the IP version as well.
Signed-off-by: Lijo Lazar <lijo.lazar@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Update the query to return the number of functional instances when
there is more than one instance of the requested type, and continue to
return one for the others.
v2: count must reflect the actual number of engines (Alex)
v3: fix wrong number of engines for vcn (Alex)
Signed-off-by: Sathishkumar S <sathishkumar.sundararaju@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Add missing IP discovery info.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Lang Yu <lang.yu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use an inline function for the version check. This gives more
flexibility to handle any format changes.
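One plausible shape for the helper, assuming the low bits of the packed IP
version carry variant/subrevision (this bit layout is an assumption, tied to
the subrevision/variant change above):

	/* compare only major.minor.rev; variant and subrevision live in the low byte */
	static inline uint32_t amdgpu_ip_version(const struct amdgpu_device *adev,
						 uint8_t hwip, uint8_t inst)
	{
		return adev->ip_versions[hwip][inst] & ~0xFFU;
	}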
Signed-off-by: Lijo Lazar <lijo.lazar@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
On some APU systems, there is no atom context and so the
atom_context struct is null.
Add a check to the VBIOS_INFO branch of amdgpu_info_ioctl
to handle this case, returning all zeroes.
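Sketch of the added guard (placement inside amdgpu_info_ioctl() is
simplified):

	struct drm_amdgpu_info_vbios vbios_info = {};
	struct atom_context *atom_context = adev->mode_info.atom_context;

	if (!atom_context) {
		/* no ATOM context on some APUs: report all zeroes */
		return copy_to_user(out, &vbios_info,
				    min_t(size_t, size, sizeof(vbios_info))) ? -EFAULT : 0;
	}
	/* otherwise fill vbios_info from atom_context as before */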
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: David Francis <David.Francis@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Currently, we store CU info only for a single XCC assuming
that it is the same for all XCCs. However, that may not be
true. As a result, store CU info for all XCCs. This info is
later used for CU masking.
Signed-off-by: Mukul Joshi <mukul.joshi@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Use the clearer name `AMDGPU_PP_SENSOR_GPU_AVG_POWER` instead.
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Fixes the following:
WARNING: min() should probably be min_t(size_t, size, sizeof(ip))
+ ret = copy_to_user(out, &ip, min((size_t)size, sizeof(ip)));
And other style fixes:
WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
WARNING: Missing a blank line after declarations
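The min() warning resolves to the following corrected line (for
illustration):

	ret = copy_to_user(out, &ip, min_t(size_t, size, sizeof(ip)));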
Cc: Christian König <christian.koenig@amd.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Srinivasan Shanmugam <srinivasan.shanmugam@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The VBIOS part number is read both in amdgpu_atom_parse() as well
as in atom_get_vbios_pn() and stored twice in the `struct atom_context`
structure. Remove the first unnecessary read and move the `pr_info`
line from that read into the second.
v2: squash in unused variable removal
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Recent code sets the xcp_id stored in file private data (taken when
opening the device) on amdgpu BOs for memory usage accounting etc., but
not all VMs are attached to such an fpriv structure, e.g. the VMs used
in amdgpu_mes_self_test; in that case KASAN complains about the
out-of-bounds access below. More importantly, VM code should not touch
the fpriv structure, so drop the fpriv handling from amdgpu_vm_pt.
[ 77.292314] BUG: KASAN: slab-out-of-bounds in amdgpu_vm_pt_create+0x17e/0x4b0 [amdgpu]
[ 77.293845] Read of size 4 at addr ffff888102c48a48 by task modprobe/1069
[ 77.294146] Call Trace:
[ 77.294178] <TASK>
[ 77.294208] dump_stack_lvl+0x49/0x63
[ 77.294260] print_report+0x16f/0x4a6
[ 77.294307] ? amdgpu_vm_pt_create+0x17e/0x4b0 [amdgpu]
[ 77.295979] ? kasan_complete_mode_report_info+0x3c/0x200
[ 77.296057] ? amdgpu_vm_pt_create+0x17e/0x4b0 [amdgpu]
[ 77.297556] kasan_report+0xb4/0x130
[ 77.297609] ? amdgpu_vm_pt_create+0x17e/0x4b0 [amdgpu]
[ 77.299202] __asan_load4+0x6f/0x90
[ 77.299272] amdgpu_vm_pt_create+0x17e/0x4b0 [amdgpu]
[ 77.300796] ? amdgpu_init+0x6e/0x1000 [amdgpu]
[ 77.302222] ? amdgpu_vm_pt_clear+0x750/0x750 [amdgpu]
[ 77.303721] ? preempt_count_sub+0x18/0xc0
[ 77.303786] amdgpu_vm_init+0x39e/0x870 [amdgpu]
[ 77.305186] ? amdgpu_vm_wait_idle+0x90/0x90 [amdgpu]
[ 77.306683] ? kasan_set_track+0x25/0x30
[ 77.306737] ? kasan_save_alloc_info+0x1b/0x30
[ 77.306795] ? __kasan_kmalloc+0x87/0xa0
[ 77.306852] amdgpu_mes_self_test+0x169/0x620 [amdgpu]
v2: without specifying xcp partition for PD/PT bo, the xcp id is -1.
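With the fpriv lookup dropped, PD/PT BO creation simply does not specify a
partition; the field name below follows the bo-param convention and is an
assumption:

	/* in amdgpu_vm_pt_create(): PD/PT BOs get no XCP partition */
	bp.xcp_id_plus1 = 0;	/* 0 - 1 == -1, i.e. no partition */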
Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2686
Fixes: 3ebfd221c1 ("drm/amdkfd: Store xcp partition id to amdgpu bo")
Signed-off-by: Guchun Chen <guchun.chen@amd.com>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>