Compare commits

...

20 Commits

Author SHA1 Message Date
Huacai Chen
4d89432937 Merge branch 'loongarch-kvm' into loongarch-next 2025-05-20 20:21:03 +08:00
Bibo Mao
a867688c8c KVM: selftests: Add supported test cases for LoongArch
Some common KVM test cases are now supported on LoongArch, as follows:
  coalesced_io_test
  demand_paging_test
  dirty_log_perf_test
  dirty_log_test
  guest_print_test
  hardware_disable_test
  kvm_binary_stats_test
  kvm_create_max_vcpus
  kvm_page_table_test
  memslot_modification_stress_test
  memslot_perf_test
  set_memory_region_test

Other test cases, such as rseq_test, are not supported, since rseq is not
supported on LoongArch physical machines either.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:26 +08:00
Bibo Mao
304b93b1a0 KVM: selftests: Add ucall test support for LoongArch
Add ucall test support for LoongArch. The ucall method on LoongArch uses an
undefined MMIO area; accessing it causes the vCPU to exit to the hypervisor,
so that the hypervisor can communicate with the vCPU.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:26 +08:00
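A minimal sketch of the mechanism described in the commit above, for orientation only (the real guest/host code is in the selftests diffs further down; ucall_exit_mmio_addr and KVM_EXIT_MMIO are taken from there, the rest is simplified):

  /* Guest side: ucall_exit_mmio_addr is assumed to be mapped to a guest
   * physical address with no device behind it, so the store below traps
   * out to the host as a KVM_EXIT_MMIO exit. The written value carries
   * the address of the ucall descriptor for the host to read. */
  static volatile unsigned long *ucall_exit_mmio_addr;

  static void do_ucall(unsigned long ucall_desc_gva)
  {
          *ucall_exit_mmio_addr = ucall_desc_gva;
  }
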
Bibo Mao
2ebf31d59f KVM: selftests: Add core KVM selftests support for LoongArch
Add core KVM selftests support for LoongArch, including the exception
handler, MMU page table setup and vCPU startup entry support.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:26 +08:00
Bibo Mao
21872c74b0 KVM: selftests: Add KVM selftests header files for LoongArch
Add KVM selftests header files for LoongArch, including processor.h
and kvm_util_arch.h. They mainly contain the LoongArch CSR register and
page table entry definitions.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:23 +08:00
Bibo Mao
a5460d1919 KVM: selftests: Add VM_MODE_P47V47_16K VM mode
On LoongArch systems 16K pages are generally used, and both the GVA width
and the GPA width are 47 bits, so add the new VM mode VM_MODE_P47V47_16K.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:23 +08:00
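As a side note, a 47-bit virtual address range with 16K pages works out to a three-level page table, which is what the selftests later set for this mode (vm->pgtable_levels = 3). A small stand-alone check of that arithmetic, assuming the usual radix layout with 8-byte PTEs:

  #include <stdio.h>

  int main(void)
  {
          int page_shift = 14;                 /* 16K pages */
          int va_bits = 47;                    /* VM_MODE_P47V47_16K */
          int bits_per_level = page_shift - 3; /* 8-byte PTEs -> 11 bits/level */
          int levels = 0, covered = page_shift;

          while (covered < va_bits) {          /* add levels until the VA space is covered */
                  covered += bits_per_level;
                  levels++;
          }
          printf("page table levels = %d\n", levels); /* prints 3 */
          return 0;
  }
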
Bibo Mao
05d70ebf74 LoongArch: KVM: Do not flush tlb if HW PTW supported
With HW PTW supported, no invalid TLB entry is added when a page fault
happens. But for the EXCCODE_TLBM exception, a stale TLB entry may exist
because of the last read access. Thus a TLB flush operation is necessary for
the EXCCODE_TLBM exception, but not for other types of page fault exceptions.

With SW PTW supported, an invalid TLB entry is added in the TLB refill
exception, so a TLB flush operation is necessary for all types of page fault
exceptions.

Here remove the unnecessary TLB flush operation when HW PTW is supported.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:18 +08:00
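A stand-alone model of the flush decision described above (need_gpa_tlb_flush() is a made-up name and the EXCCODE_TLBM value is assumed here; the real logic lands in kvm_handle_mm_fault() in the mmu.c hunk further down):

  #include <stdbool.h>
  #include <stdio.h>

  #define EXCCODE_TLBM 4   /* assumed value, for illustration only */

  /* Flush the guest TLB entry only when it may be stale: always with SW PTW,
   * but with HW PTW only for modify (EXCCODE_TLBM) faults. */
  static bool need_gpa_tlb_flush(bool cpu_has_ptw, int ecode)
  {
          return !cpu_has_ptw || ecode == EXCCODE_TLBM;
  }

  int main(void)
  {
          printf("%d %d %d\n",
                 need_gpa_tlb_flush(false, 1),           /* 1: SW PTW, any fault */
                 need_gpa_tlb_flush(true, EXCCODE_TLBM), /* 1: HW PTW, modify fault */
                 need_gpa_tlb_flush(true, 1));           /* 0: HW PTW, other fault */
          return 0;
  }
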
Bibo Mao
fecd903c3c LoongArch: KVM: Add ecode parameter for exception handlers
Some KVM exception types share the same exception handler. To distinguish
them, add ecode (the exception code) as a new parameter to the exception
handlers.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:20:18 +08:00
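Roughly, the change extends the exit-handler signature so the shared handlers can tell exception types apart, along the lines of the simplified sketch below (struct vcpu and the table indices are stand-ins; the real typedef and dispatch are in the kvm_vcpu.h and exit.c hunks further down):

  #include <stdio.h>

  struct vcpu;                                        /* opaque stand-in */
  typedef int (*exit_handle_fn)(struct vcpu *, int);  /* handler now also gets ecode */

  static int handle_read_fault(struct vcpu *v, int ecode)
  {
          return printf("read fault, ecode=%d\n", ecode);
  }

  static int handle_write_fault(struct vcpu *v, int ecode)
  {
          return printf("write fault, ecode=%d\n", ecode);
  }

  static exit_handle_fn fault_table[8] = {
          [1] = handle_read_fault,   /* indices are illustrative, not real EXCCODEs */
          [2] = handle_write_fault,
  };

  static int handle_fault(struct vcpu *v, int fault)
  {
          return fault_table[fault](v, fault); /* pass the code through to the handler */
  }

  int main(void)
  {
          handle_fault(NULL, 1);
          handle_fault(NULL, 2);
          return 0;
  }
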
Binbin Zhou
7d8521c746 LoongArch: dts: Add PWM support to Loongson-2K2000
The PWM module is supported on this SoC, so enable it.

Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:16 +08:00
Binbin Zhou
26f59fcc81 LoongArch: dts: Add PWM support to Loongson-2K1000
The PWM module is supported on this SoC, so enable it.

Also, add the associated pwm-fan and cooling-maps nodes.

Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:15 +08:00
Binbin Zhou
eac796dee2 LoongArch: dts: Add PWM support to Loongson-2K0500
The PWM module is supported on this SoC, so enable it.

Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:10 +08:00
Huacai Chen
5b5fc8c48e LoongArch: Preserve firmware configuration when desired
If we must preserve the firmware resource assignments, claim the existing
resources rather than reassigning everything.

According to the PCI Firmware Specification, if the ACPI DSM#5 function
returns 0, the OS must retain the PCI resource allocation from the firmware;
if the ACPI DSM#5 function returns 1, the OS may ignore the firmware's PCI
resource allocation and reallocate it.

Signed-off-by: Qihang Gao <gaoqihang@loongson.cn>
Signed-off-by: Juxin Gao <gaojuxin@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:06 +08:00
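A stand-alone model of that policy (the comments refer to the real PCI core helpers used in the pci_acpi_scan_root() hunk further down; everything else here is simplified for illustration):

  #include <stdbool.h>
  #include <stdio.h>

  /* DSM#5 semantics quoted in the commit message:
   * 0 -> the OS must retain the firmware's PCI resource assignments,
   * 1 -> the OS may ignore them and reassign. */
  static bool preserve_firmware_config(int dsm5_return)
  {
          return dsm5_return == 0;
  }

  static void scan_root_bus(int dsm5_return)
  {
          if (preserve_firmware_config(dsm5_return))
                  printf("claim existing resources\n");      /* ~ pci_bus_claim_resources() */
          printf("assign remaining unassigned resources\n"); /* ~ pci_assign_unassigned_root_bus_resources() */
  }

  int main(void)
  {
          scan_root_bus(0);   /* preserve: claim, then fill in the gaps */
          scan_root_bus(1);   /* reassign: no claim, everything is reassigned */
          return 0;
  }
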
Huacai Chen
76026c44e3 LoongArch: Introduce the numa_memblks conversion
Commit 8748270821 ("mm: introduce numa_memblks") has moved
numa_memblks from x86 to the generic code, but LoongArch was left out
of this conversion.

This patch introduces the generic numa_memblks for LoongArch.

In detail:
1. Enable NUMA_MEMBLKS (but disable NUMA_EMU) in Kconfig;
2. Use generic definition for numa_memblk and numa_meminfo;
3. Use generic implementation for numa_add_memblk() and its friends;
4. Use generic implementation for numa_set_distance() and its friends;
5. Use generic implementation for memory_add_physaddr_to_nid() and its
   friends.

Note: NUMA_EMU is disabled because it needs more effort and there is no
obvious demand for it now.

Tested-by: Binbin Zhou <zhoubinbin@loongson.cn>
Signed-off-by: Yuquan Wang <wangyuquan1236@phytium.com.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
Huacai Chen
d08e3456ab LoongArch: Increase max supported CPUs up to 2048
Increase max supported CPUs up to 2048, including:
1. Increase CSR.CPUID register's effective width;
2. Define MAX_CORE_PIC (a.k.a. max physical ID) to 2048;
3. Allow NR_CPUS (a.k.a. max logical ID) to be as large as 2048;
4. Introduce acpi_numa_x2apic_affinity_init() to handle ACPI SRAT
   for CPUID >= 256.

Note: The reason for increasing to 2048 rather than 4096/8192 is that the
      IPI hardware can only support a maximum of 2048.

Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
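The arithmetic behind the new limit, as a stand-alone check (CSR_CPUID_COREID_WIDTH and CSR_CPUID_COREID follow the loongarch.h hunk further down; the print is only illustrative):

  #include <stdio.h>

  #define CSR_CPUID_COREID_WIDTH 11
  #define CSR_CPUID_COREID ((1UL << CSR_CPUID_COREID_WIDTH) - 1) /* 0x7ff */

  int main(void)
  {
          /* An 11-bit core-ID field addresses 2048 physical IDs (up from 512
           * with the old 9-bit width), matching MAX_CORE_PIC and NR_CPUS. */
          printf("max physical CPU IDs = %lu\n", CSR_CPUID_COREID + 1);
          return 0;
  }
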
Youling Tang
6495d3477a LoongArch: Enable HAVE_ARCH_STACKLEAK
Add support for the stackleak feature. It initializes the stack with the
poison value before returning from system calls, which improves kernel
security.

At the same time, disable the plugin in the EFI stub code because the EFI
stub is out of scope for this protection.

Tested on Loongson-3A5000 (with GCC_PLUGIN_STACKLEAK and LKDTM enabled):
 # echo STACKLEAK_ERASING > /sys/kernel/debug/provoke-crash/DIRECT
 # dmesg
   lkdtm: Performing direct entry STACKLEAK_ERASING
   lkdtm: stackleak stack usage:
      high offset: 320 bytes
      current:     448 bytes
      lowest:      1264 bytes
      tracked:     1264 bytes
      untracked:   208 bytes
      poisoned:    14528 bytes
      low offset:  64 bytes
   lkdtm: OK: the rest of the thread stack is properly erased

Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
Yuli Wang
90aa8ae10e LoongArch: Enable ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
Provide support for CONFIG_MSEAL_SYSTEM_MAPPINGS on LoongArch, covering
the vdso.

Link: https://lore.kernel.org/all/25bad37f-273e-4626-999c-e1890be96182@lucifer.local/
Acked-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Acked-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Jeff Xu <jeffxu@chromium.org>
Tested-by: Yuli Wang <wangyuli@uniontech.com>
Signed-off-by: Yuli Wang <wangyuli@uniontech.com>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
Tianyang Zhang
630bfe425e LoongArch: Add SCHED_MC (Multi-core scheduler) support
In order to achieve more reasonable load balancing behavior, add SCHED_MC
(multi-core scheduler) support.

The LLC distribution on LoongArch is currently consistent with the NUMA
node, so the SCHED_MC balancing domain can effectively reduce the cases
where processes are woken up on an SMT sibling.

Co-developed-by: Hongliang Wang <wanghongliang@loongson.cn>
Signed-off-by: Hongliang Wang <wanghongliang@loongson.cn>
Signed-off-by: Tianyang Zhang <zhangtianyang@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
Youling Tang
9534482b45 LoongArch: Add some annotations in archhelp
- Add annotations for the kernel images.
- Modify the annotations of make install.

Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:18:00 +08:00
Youling Tang
e8034ac570 LoongArch: Using generic scripts/install.sh in make install
Use the generic scripts/install.sh to perform the make install operation.
This will automatically generate the initrd file and modify grub.cfg without
manual intervention (as before, the kernel image, config file and System.map
will also be installed), similar to other architectures.

Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:17:42 +08:00
Youling Tang
fd9521c3eb LoongArch: Add a default install.sh
As specified in scripts/install.sh, the priority order is as follows
(from highest to lowest):
  ~/bin/installkernel
  /sbin/installkernel
  arch/loongarch/boot/install.sh

Fall back to the default install.sh if no installkernel is found.

Signed-off-by: Youling Tang <tangyouling@kylinos.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
2025-05-20 20:17:42 +08:00
43 changed files with 1170 additions and 194 deletions

View File

@ -12,7 +12,7 @@
| arm64: | ok |
| csky: | N/A |
| hexagon: | N/A |
| loongarch: | TODO |
| loongarch: | ok |
| m68k: | N/A |
| microblaze: | N/A |
| mips: | TODO |

View File

@ -144,7 +144,7 @@ Use cases
architecture.
The following architectures currently support this feature: x86-64, arm64,
and s390.
loongarch and s390.
WARNING: This feature breaks programs which rely on relocating
or unmapping system mappings. Known broken software at the time

View File

@ -13051,6 +13051,8 @@ F: Documentation/virt/kvm/loongarch/
F: arch/loongarch/include/asm/kvm*
F: arch/loongarch/include/uapi/asm/kvm*
F: arch/loongarch/kvm/
F: tools/testing/selftests/kvm/*/loongarch/
F: tools/testing/selftests/kvm/lib/loongarch/
KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
M: Huacai Chen <chenhuacai@kernel.org>

View File

@ -69,6 +69,7 @@ config LOONGARCH
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
select ARCH_SUPPORTS_LTO_CLANG
select ARCH_SUPPORTS_LTO_CLANG_THIN
select ARCH_SUPPORTS_MSEAL_SYSTEM_MAPPINGS
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_RT
select ARCH_USE_BUILTIN_BSWAP
@ -123,6 +124,7 @@ config LOONGARCH
select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
select HAVE_ARCH_SECCOMP
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_STACKLEAK
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
@ -187,6 +189,7 @@ config LOONGARCH
select MODULES_USE_ELF_RELA if MODULES
select NEED_PER_CPU_EMBED_FIRST_CHUNK
select NEED_PER_CPU_PAGE_FIRST_CHUNK
select NUMA_MEMBLKS if NUMA
select OF
select OF_EARLY_FLATTREE
select PCI
@ -456,6 +459,15 @@ config SCHED_SMT
Improves scheduler's performance when there are multiple
threads in one physical core.
config SCHED_MC
bool "Multi-core scheduler support"
depends on SMP
default y
help
Multi-core scheduler support improves the CPU scheduler's decision
making when dealing with multi-core CPU chips at a cost of slightly
increased overhead in some places.
config SMP
bool "Multi-Processing support"
help
@ -485,10 +497,10 @@ config HOTPLUG_CPU
Say N if you want to disable CPU hotplug.
config NR_CPUS
int "Maximum number of CPUs (2-256)"
range 2 256
int "Maximum number of CPUs (2-2048)"
range 2 2048
default "2048"
depends on SMP
default "64"
help
This allows you to specify the maximum number of CPUs which this
kernel will support.

View File

@ -181,11 +181,14 @@ vmlinux.elf vmlinux.efi vmlinuz.efi: vmlinux
$(Q)$(MAKE) $(build)=$(boot) $(bootvars-y) $(boot)/$@
install:
$(Q)install -D -m 755 $(KBUILD_IMAGE) $(INSTALL_PATH)/$(image-name-y)-$(KERNELRELEASE)
$(Q)install -D -m 644 .config $(INSTALL_PATH)/config-$(KERNELRELEASE)
$(Q)install -D -m 644 System.map $(INSTALL_PATH)/System.map-$(KERNELRELEASE)
$(call cmd,install)
define archhelp
echo ' install - install kernel into $(INSTALL_PATH)'
echo ' vmlinux.elf - Uncompressed ELF kernel image (arch/loongarch/boot/vmlinux.elf)'
echo ' vmlinux.efi - Uncompressed EFI kernel image (arch/loongarch/boot/vmlinux.efi)'
echo ' vmlinuz.efi - GZIP/ZSTD-compressed EFI kernel image (arch/loongarch/boot/vmlinuz.efi)'
echo ' Default when CONFIG_EFI_ZBOOT=y'
echo ' install - Install kernel using (your) ~/bin/$(INSTALLKERNEL) or'
echo ' (distribution) /sbin/$(INSTALLKERNEL) or install.sh to $$(INSTALL_PATH)'
echo
endef

View File

@ -169,6 +169,166 @@ eiointc: interrupt-controller@1fe11600 {
interrupts = <3>;
};
pwm@1ff5c000 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c000 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c010 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c010 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c020 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c020 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c030 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c030 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c040 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c040 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c050 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c050 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c060 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c060 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c070 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c070 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c080 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c080 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c090 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c090 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0a0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0a0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0b0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0b0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0c0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0c0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0d0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0d0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0e0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0e0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1ff5c0f0 {
compatible = "loongson,ls2k0500-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1ff5c0f0 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
gmac0: ethernet@1f020000 {
compatible = "snps,dwmac-3.70a";
reg = <0x0 0x1f020000 0x0 0x10000>;

View File

@ -5,6 +5,7 @@
/dts-v1/;
#include "dt-bindings/thermal/thermal.h"
#include "loongson-2k1000.dtsi"
/ {
@ -38,6 +39,13 @@ linux,cma {
linux,cma-default;
};
};
fan0: pwm-fan {
compatible = "pwm-fan";
cooling-levels = <255 153 85 25>;
pwms = <&pwm1 0 100000 0>;
#cooling-cells = <2>;
};
};
&gmac0 {
@ -92,6 +100,22 @@ &spi0 {
#size-cells = <0>;
};
&pwm1 {
status = "okay";
pinctrl-0 = <&pwm1_pins_default>;
pinctrl-names = "default";
};
&cpu_thermal {
cooling-maps {
map0 {
trip = <&cpu_alert>;
cooling-device = <&fan0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
};
};
};
&ehci0 {
status = "okay";
};

View File

@ -68,7 +68,7 @@ i2c-gpio-1 {
};
thermal-zones {
cpu-thermal {
cpu_thermal: cpu-thermal {
polling-delay-passive = <1000>;
polling-delay = <5000>;
thermal-sensors = <&tsensor 0>;
@ -322,6 +322,46 @@ i2c3: i2c@1fe21800 {
status = "disabled";
};
pwm@1fe22000 {
compatible = "loongson,ls2k1000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1fe22000 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm1: pwm@1fe22010 {
compatible = "loongson,ls2k1000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1fe22010 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1fe22020 {
compatible = "loongson,ls2k1000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1fe22020 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@1fe22030 {
compatible = "loongson,ls2k1000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x1fe22030 0x0 0x10>;
interrupt-parent = <&liointc0>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_APB_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pmc: power-management@1fe27000 {
compatible = "loongson,ls2k1000-pmc", "loongson,ls2k0500-pmc", "syscon";
reg = <0x0 0x1fe27000 0x0 0x58>;

View File

@ -165,6 +165,66 @@ msi: msi-controller@1fe01140 {
interrupt-parent = <&eiointc>;
};
pwm@100a0000 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0000 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <24 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@100a0100 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0100 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <25 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@100a0200 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0200 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <26 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@100a0300 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0300 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@100a0400 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0400 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <38 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
pwm@100a0500 {
compatible = "loongson,ls2k2000-pwm", "loongson,ls7a-pwm";
reg = <0x0 0x100a0500 0x0 0x10>;
interrupt-parent = <&pic>;
interrupts = <39 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LOONGSON2_MISC_CLK>;
#pwm-cells = <3>;
status = "disabled";
};
rtc0: rtc@100d0100 {
compatible = "loongson,ls2k2000-rtc", "loongson,ls7a-rtc";
reg = <0x0 0x100d0100 0x0 0x100>;

arch/loongarch/boot/install.sh (new executable file, 56 lines)
View File

@ -0,0 +1,56 @@
#!/bin/sh
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1995 by Linus Torvalds
#
# Adapted from code in arch/i386/boot/Makefile by H. Peter Anvin
# Adapted from code in arch/i386/boot/install.sh by Russell King
#
# "make install" script for the LoongArch Linux port
#
# Arguments:
# $1 - kernel version
# $2 - kernel image file
# $3 - kernel map file
# $4 - default install path (blank if root directory)
set -e
case "${2##*/}" in
vmlinux.elf)
echo "Installing uncompressed vmlinux.elf kernel"
base=vmlinux
;;
vmlinux.efi)
echo "Installing uncompressed vmlinux.efi kernel"
base=vmlinux
;;
vmlinuz.efi)
echo "Installing gzip/zstd compressed vmlinuz.efi kernel"
base=vmlinuz
;;
*)
echo "Warning: Unexpected kernel type"
exit 1
;;
esac
if [ -f $4/$base-$1 ]; then
mv $4/$base-$1 $4/$base-$1.old
fi
cat $2 > $4/$base-$1
# Install system map file
if [ -f $4/System.map-$1 ]; then
mv $4/System.map-$1 $4/System.map-$1.old
fi
cp $3 $4/System.map-$1
# Install kernel config file
if [ -f $4/config-$1 ]; then
mv $4/config-$1 $4/config-$1.old
fi
cp .config $4/config-$1

View File

@ -33,7 +33,7 @@ static inline bool acpi_has_cpu_in_madt(void)
return true;
}
#define MAX_CORE_PIC 256
#define MAX_CORE_PIC 2048
extern struct list_head acpi_wakeup_device_list;
extern struct acpi_madt_core_pic acpi_core_pic[MAX_CORE_PIC];

View File

@ -2,12 +2,6 @@
#ifndef ARCH_LOONGARCH_ENTRY_COMMON_H
#define ARCH_LOONGARCH_ENTRY_COMMON_H
#include <linux/sched.h>
#include <linux/processor.h>
static inline bool on_thread_stack(void)
{
return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1));
}
#include <asm/stacktrace.h> /* For on_thread_stack() */
#endif

View File

@ -301,7 +301,7 @@ int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu);
/* MMU handling */
void kvm_flush_tlb_all(void);
void kvm_flush_tlb_gpa(struct kvm_vcpu *vcpu, unsigned long gpa);
int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write);
int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long badv, bool write, int ecode);
int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end, bool blockable);
int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);

View File

@ -37,7 +37,7 @@
#define KVM_LOONGSON_IRQ_NUM_MASK 0xffff
typedef union loongarch_instruction larch_inst;
typedef int (*exit_handle_fn)(struct kvm_vcpu *);
typedef int (*exit_handle_fn)(struct kvm_vcpu *, int);
int kvm_emu_mmio_read(struct kvm_vcpu *vcpu, larch_inst inst);
int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst);

View File

@ -411,8 +411,8 @@
/* Config CSR registers */
#define LOONGARCH_CSR_CPUID 0x20 /* CPU core id */
#define CSR_CPUID_COREID_WIDTH 9
#define CSR_CPUID_COREID _ULCAST_(0x1ff)
#define CSR_CPUID_COREID_WIDTH 11
#define CSR_CPUID_COREID _ULCAST_(0x7ff)
#define LOONGARCH_CSR_PRCFG1 0x21 /* Config1 */
#define CSR_CONF1_VSMAX_SHIFT 12

View File

@ -22,20 +22,6 @@ extern int numa_off;
extern s16 __cpuid_to_node[CONFIG_NR_CPUS];
extern nodemask_t numa_nodes_parsed __initdata;
struct numa_memblk {
u64 start;
u64 end;
int nid;
};
#define NR_NODE_MEMBLKS (MAX_NUMNODES*2)
struct numa_meminfo {
int nr_blks;
struct numa_memblk blk[NR_NODE_MEMBLKS];
};
extern int __init numa_add_memblk(int nodeid, u64 start, u64 end);
extern void __init early_numa_add_cpu(int cpuid, s16 node);
extern void numa_add_cpu(unsigned int cpu);
extern void numa_remove_cpu(unsigned int cpu);

View File

@ -25,6 +25,7 @@ extern int smp_num_siblings;
extern int num_processors;
extern int disabled_cpus;
extern cpumask_t cpu_sibling_map[];
extern cpumask_t cpu_llc_shared_map[];
extern cpumask_t cpu_core_map[];
extern cpumask_t cpu_foreign_map[];

View File

@ -21,11 +21,6 @@
#define VMEMMAP_SIZE 0 /* 1, For FLATMEM; 2, For SPARSEMEM without VMEMMAP. */
#endif
#ifdef CONFIG_MEMORY_HOTPLUG
int memory_add_physaddr_to_nid(u64 addr);
#define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
#endif
#define INIT_MEMBLOCK_RESERVED_REGIONS (INIT_MEMBLOCK_REGIONS + NR_CPUS)
#endif /* _LOONGARCH_SPARSEMEM_H */

View File

@ -57,6 +57,12 @@
jirl zero, \temp1, 0xc
.endm
.macro STACKLEAK_ERASE
#ifdef CONFIG_GCC_PLUGIN_STACKLEAK
bl stackleak_erase_on_task_stack
#endif
.endm
.macro BACKUP_T0T1
csrwr t0, EXCEPTION_KS0
csrwr t1, EXCEPTION_KS1

View File

@ -31,6 +31,11 @@ bool in_irq_stack(unsigned long stack, struct stack_info *info);
bool in_task_stack(unsigned long stack, struct task_struct *task, struct stack_info *info);
int get_stack_info(unsigned long stack, struct task_struct *task, struct stack_info *info);
static __always_inline bool on_thread_stack(void)
{
return !(((unsigned long)(current->stack) ^ current_stack_pointer) & ~(THREAD_SIZE - 1));
}
#define STR_LONG_L __stringify(LONG_L)
#define STR_LONG_S __stringify(LONG_S)
#define STR_LONGSIZE __stringify(LONGSIZE)

View File

@ -19,17 +19,22 @@ extern int pcibus_to_node(struct pci_bus *);
#define cpumask_of_pcibus(bus) (cpu_online_mask)
extern unsigned char node_distances[MAX_NUMNODES][MAX_NUMNODES];
void numa_set_distance(int from, int to, int distance);
#define node_distance(from, to) (node_distances[(from)][(to)])
int __node_distance(int from, int to);
#define node_distance(from, to) __node_distance(from, to)
#else
#define pcibus_to_node(bus) 0
#endif
#ifdef CONFIG_SMP
/*
* Return cpus that shares the last level cache.
*/
static inline const struct cpumask *cpu_coregroup_mask(int cpu)
{
return &cpu_llc_shared_map[cpu];
}
#define topology_physical_package_id(cpu) (cpu_data[cpu].package)
#define topology_core_id(cpu) (cpu_data[cpu].core)
#define topology_core_cpumask(cpu) (&cpu_core_map[cpu])

View File

@ -244,22 +244,6 @@ void __init acpi_boot_table_init(void)
#ifdef CONFIG_ACPI_NUMA
static __init int setup_node(int pxm)
{
return acpi_map_pxm_to_node(pxm);
}
void __init numa_set_distance(int from, int to, int distance)
{
if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) {
pr_warn_once("Warning: invalid distance parameter, from=%d to=%d distance=%d\n",
from, to, distance);
return;
}
node_distances[from][to] = distance;
}
/* Callback for Proximity Domain -> CPUID mapping */
void __init
acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
@ -280,7 +264,41 @@ acpi_numa_processor_affinity_init(struct acpi_srat_cpu_affinity *pa)
pxm |= (pa->proximity_domain_hi[1] << 16);
pxm |= (pa->proximity_domain_hi[2] << 24);
}
node = setup_node(pxm);
node = acpi_map_pxm_to_node(pxm);
if (node < 0) {
pr_err("SRAT: Too many proximity domains %x\n", pxm);
bad_srat();
return;
}
if (pa->apic_id >= CONFIG_NR_CPUS) {
pr_info("SRAT: PXM %u -> CPU 0x%02x -> Node %u skipped apicid that is too big\n",
pxm, pa->apic_id, node);
return;
}
early_numa_add_cpu(pa->apic_id, node);
set_cpuid_to_node(pa->apic_id, node);
node_set(node, numa_nodes_parsed);
pr_info("SRAT: PXM %u -> CPU 0x%02x -> Node %u\n", pxm, pa->apic_id, node);
}
void __init
acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa)
{
int pxm, node;
if (srat_disabled())
return;
if (pa->header.length < sizeof(struct acpi_srat_x2apic_cpu_affinity)) {
bad_srat();
return;
}
if ((pa->flags & ACPI_SRAT_CPU_ENABLED) == 0)
return;
pxm = pa->proximity_domain;
node = acpi_map_pxm_to_node(pxm);
if (node < 0) {
pr_err("SRAT: Too many proximity domains %x\n", pxm);
bad_srat();

View File

@ -73,6 +73,7 @@ SYM_CODE_START(handle_syscall)
move a0, sp
bl do_syscall
STACKLEAK_ERASE
RESTORE_ALL_AND_RET
SYM_CODE_END(handle_syscall)
_ASM_NOKPROBE(handle_syscall)
@ -82,6 +83,7 @@ SYM_CODE_START(ret_from_fork)
bl schedule_tail # a0 = struct task_struct *prev
move a0, sp
bl syscall_exit_to_user_mode
STACKLEAK_ERASE
RESTORE_STATIC
RESTORE_SOME
RESTORE_SP_AND_RET
@ -94,6 +96,7 @@ SYM_CODE_START(ret_from_kernel_thread)
jirl ra, s0, 0
move a0, sp
bl syscall_exit_to_user_mode
STACKLEAK_ERASE
RESTORE_STATIC
RESTORE_SOME
RESTORE_SP_AND_RET

View File

@ -11,6 +11,7 @@
#include <linux/mmzone.h>
#include <linux/export.h>
#include <linux/nodemask.h>
#include <linux/numa_memblks.h>
#include <linux/swap.h>
#include <linux/memblock.h>
#include <linux/pfn.h>
@ -27,10 +28,6 @@
#include <asm/time.h>
int numa_off;
unsigned char node_distances[MAX_NUMNODES][MAX_NUMNODES];
EXPORT_SYMBOL(node_distances);
static struct numa_meminfo numa_meminfo;
cpumask_t cpus_on_node[MAX_NUMNODES];
cpumask_t phys_cpus_on_node[MAX_NUMNODES];
EXPORT_SYMBOL(cpus_on_node);
@ -43,8 +40,6 @@ s16 __cpuid_to_node[CONFIG_NR_CPUS] = {
};
EXPORT_SYMBOL(__cpuid_to_node);
nodemask_t numa_nodes_parsed __initdata;
#ifdef CONFIG_HAVE_SETUP_PER_CPU_AREA
unsigned long __per_cpu_offset[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(__per_cpu_offset);
@ -145,48 +140,6 @@ void numa_remove_cpu(unsigned int cpu)
cpumask_clear_cpu(cpu, &cpus_on_node[nid]);
}
static int __init numa_add_memblk_to(int nid, u64 start, u64 end,
struct numa_meminfo *mi)
{
/* ignore zero length blks */
if (start == end)
return 0;
/* whine about and ignore invalid blks */
if (start > end || nid < 0 || nid >= MAX_NUMNODES) {
pr_warn("NUMA: Warning: invalid memblk node %d [mem %#010Lx-%#010Lx]\n",
nid, start, end - 1);
return 0;
}
if (mi->nr_blks >= NR_NODE_MEMBLKS) {
pr_err("NUMA: too many memblk ranges\n");
return -EINVAL;
}
mi->blk[mi->nr_blks].start = PFN_ALIGN(start);
mi->blk[mi->nr_blks].end = PFN_ALIGN(end - PAGE_SIZE + 1);
mi->blk[mi->nr_blks].nid = nid;
mi->nr_blks++;
return 0;
}
/**
* numa_add_memblk - Add one numa_memblk to numa_meminfo
* @nid: NUMA node ID of the new memblk
* @start: Start address of the new memblk
* @end: End address of the new memblk
*
* Add a new memblk to the default numa_meminfo.
*
* RETURNS:
* 0 on success, -errno on failure.
*/
int __init numa_add_memblk(int nid, u64 start, u64 end)
{
return numa_add_memblk_to(nid, start, end, &numa_meminfo);
}
static void __init node_mem_init(unsigned int node)
{
unsigned long start_pfn, end_pfn;
@ -205,18 +158,6 @@ static void __init node_mem_init(unsigned int node)
#ifdef CONFIG_ACPI_NUMA
static void __init add_node_intersection(u32 node, u64 start, u64 size, u32 type)
{
static unsigned long num_physpages;
num_physpages += (size >> PAGE_SHIFT);
pr_info("Node%d: mem_type:%d, mem_start:0x%llx, mem_size:0x%llx Bytes\n",
node, type, start, size);
pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n",
start >> PAGE_SHIFT, (start + size) >> PAGE_SHIFT, num_physpages);
memblock_set_node(start, size, &memblock.memory, node);
}
/*
* add_numamem_region
*
@ -228,28 +169,21 @@ static void __init add_node_intersection(u32 node, u64 start, u64 size, u32 type
*/
static void __init add_numamem_region(u64 start, u64 end, u32 type)
{
u32 i;
u64 ofs = start;
u32 node = pa_to_nid(start);
u64 size = end - start;
static unsigned long num_physpages;
if (start >= end) {
pr_debug("Invalid region: %016llx-%016llx\n", start, end);
return;
}
for (i = 0; i < numa_meminfo.nr_blks; i++) {
struct numa_memblk *mb = &numa_meminfo.blk[i];
if (ofs > mb->end)
continue;
if (end > mb->end) {
add_node_intersection(mb->nid, ofs, mb->end - ofs, type);
ofs = mb->end;
} else {
add_node_intersection(mb->nid, ofs, end - ofs, type);
break;
}
}
num_physpages += (size >> PAGE_SHIFT);
pr_info("Node%d: mem_type:%d, mem_start:0x%llx, mem_size:0x%llx Bytes\n",
node, type, start, size);
pr_info(" start_pfn:0x%llx, end_pfn:0x%llx, num_physpages:0x%lx\n",
start >> PAGE_SHIFT, end >> PAGE_SHIFT, num_physpages);
memblock_set_node(start, size, &memblock.memory, node);
}
static void __init init_node_memblock(void)
@ -291,24 +225,6 @@ static void __init init_node_memblock(void)
}
}
static void __init numa_default_distance(void)
{
int row, col;
for (row = 0; row < MAX_NUMNODES; row++)
for (col = 0; col < MAX_NUMNODES; col++) {
if (col == row)
node_distances[row][col] = LOCAL_DISTANCE;
else
/* We assume that one node per package here!
*
* A SLIT should be used for multiple nodes
* per package to override default setting.
*/
node_distances[row][col] = REMOTE_DISTANCE;
}
}
/*
* fake_numa_init() - For Non-ACPI systems
* Return: 0 on success, -errno on failure.
@ -333,11 +249,11 @@ int __init init_numa_memory(void)
for (i = 0; i < NR_CPUS; i++)
set_cpuid_to_node(i, NUMA_NO_NODE);
numa_default_distance();
numa_reset_distance();
nodes_clear(numa_nodes_parsed);
nodes_clear(node_possible_map);
nodes_clear(node_online_map);
memset(&numa_meminfo, 0, sizeof(numa_meminfo));
WARN_ON(memblock_clear_hotplug(0, PHYS_ADDR_MAX));
/* Parse SRAT and SLIT if provided by firmware. */
ret = acpi_disabled ? fake_numa_init() : acpi_numa_init();

View File

@ -46,6 +46,10 @@ EXPORT_SYMBOL(__cpu_logical_map);
cpumask_t cpu_sibling_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_sibling_map);
/* Representing the last level cache shared map of each logical CPU */
cpumask_t cpu_llc_shared_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_llc_shared_map);
/* Representing the core map of multi-core chips of each logical CPU */
cpumask_t cpu_core_map[NR_CPUS] __read_mostly;
EXPORT_SYMBOL(cpu_core_map);
@ -63,6 +67,9 @@ EXPORT_SYMBOL(cpu_foreign_map);
/* representing cpus for which sibling maps can be computed */
static cpumask_t cpu_sibling_setup_map;
/* representing cpus for which llc shared maps can be computed */
static cpumask_t cpu_llc_shared_setup_map;
/* representing cpus for which core maps can be computed */
static cpumask_t cpu_core_setup_map;
@ -102,6 +109,34 @@ static inline void set_cpu_core_map(int cpu)
}
}
static inline void set_cpu_llc_shared_map(int cpu)
{
int i;
cpumask_set_cpu(cpu, &cpu_llc_shared_setup_map);
for_each_cpu(i, &cpu_llc_shared_setup_map) {
if (cpu_to_node(cpu) == cpu_to_node(i)) {
cpumask_set_cpu(i, &cpu_llc_shared_map[cpu]);
cpumask_set_cpu(cpu, &cpu_llc_shared_map[i]);
}
}
}
static inline void clear_cpu_llc_shared_map(int cpu)
{
int i;
for_each_cpu(i, &cpu_llc_shared_setup_map) {
if (cpu_to_node(cpu) == cpu_to_node(i)) {
cpumask_clear_cpu(i, &cpu_llc_shared_map[cpu]);
cpumask_clear_cpu(cpu, &cpu_llc_shared_map[i]);
}
}
cpumask_clear_cpu(cpu, &cpu_llc_shared_setup_map);
}
static inline void set_cpu_sibling_map(int cpu)
{
int i;
@ -406,6 +441,7 @@ int loongson_cpu_disable(void)
#endif
set_cpu_online(cpu, false);
clear_cpu_sibling_map(cpu);
clear_cpu_llc_shared_map(cpu);
calculate_cpu_foreign_map();
local_irq_save(flags);
irq_migrate_all_off_this_cpu();
@ -572,6 +608,7 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
current_thread_info()->cpu = 0;
loongson_prepare_cpus(max_cpus);
set_cpu_sibling_map(0);
set_cpu_llc_shared_map(0);
set_cpu_core_map(0);
calculate_cpu_foreign_map();
#ifndef CONFIG_HOTPLUG_CPU
@ -613,6 +650,7 @@ asmlinkage void start_secondary(void)
loongson_init_secondary();
set_cpu_sibling_map(cpu);
set_cpu_llc_shared_map(cpu);
set_cpu_core_map(cpu);
notify_cpu_starting(cpu);

View File

@ -105,7 +105,9 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
vdso_addr = data_addr + VVAR_SIZE;
vma = _install_special_mapping(mm, vdso_addr, info->size,
VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
VM_READ | VM_EXEC |
VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC |
VM_SEALED_SYSMAP,
&info->code_mapping);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);

View File

@ -341,7 +341,7 @@ static int kvm_trap_handle_gspr(struct kvm_vcpu *vcpu)
* 2) Execute CACOP/IDLE instructions;
* 3) Access to unimplemented CSRs/IOCSRs.
*/
static int kvm_handle_gspr(struct kvm_vcpu *vcpu)
static int kvm_handle_gspr(struct kvm_vcpu *vcpu, int ecode)
{
int ret = RESUME_GUEST;
enum emulation_result er = EMULATE_DONE;
@ -661,7 +661,7 @@ int kvm_emu_mmio_write(struct kvm_vcpu *vcpu, larch_inst inst)
return ret;
}
static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write)
static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write, int ecode)
{
int ret;
larch_inst inst;
@ -675,7 +675,7 @@ static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write)
return RESUME_GUEST;
}
ret = kvm_handle_mm_fault(vcpu, badv, write);
ret = kvm_handle_mm_fault(vcpu, badv, write, ecode);
if (ret) {
/* Treat as MMIO */
inst.word = vcpu->arch.badi;
@ -705,14 +705,14 @@ static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write)
return ret;
}
static int kvm_handle_read_fault(struct kvm_vcpu *vcpu)
static int kvm_handle_read_fault(struct kvm_vcpu *vcpu, int ecode)
{
return kvm_handle_rdwr_fault(vcpu, false);
return kvm_handle_rdwr_fault(vcpu, false, ecode);
}
static int kvm_handle_write_fault(struct kvm_vcpu *vcpu)
static int kvm_handle_write_fault(struct kvm_vcpu *vcpu, int ecode)
{
return kvm_handle_rdwr_fault(vcpu, true);
return kvm_handle_rdwr_fault(vcpu, true, ecode);
}
int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run)
@ -726,11 +726,12 @@ int kvm_complete_user_service(struct kvm_vcpu *vcpu, struct kvm_run *run)
/**
* kvm_handle_fpu_disabled() - Guest used fpu however it is disabled at host
* @vcpu: Virtual CPU context.
* @ecode: Exception code.
*
* Handle when the guest attempts to use fpu which hasn't been allowed
* by the root context.
*/
static int kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu)
static int kvm_handle_fpu_disabled(struct kvm_vcpu *vcpu, int ecode)
{
struct kvm_run *run = vcpu->run;
@ -783,11 +784,12 @@ static long kvm_save_notify(struct kvm_vcpu *vcpu)
/*
* kvm_handle_lsx_disabled() - Guest used LSX while disabled in root.
* @vcpu: Virtual CPU context.
* @ecode: Exception code.
*
* Handle when the guest attempts to use LSX when it is disabled in the root
* context.
*/
static int kvm_handle_lsx_disabled(struct kvm_vcpu *vcpu)
static int kvm_handle_lsx_disabled(struct kvm_vcpu *vcpu, int ecode)
{
if (kvm_own_lsx(vcpu))
kvm_queue_exception(vcpu, EXCCODE_INE, 0);
@ -798,11 +800,12 @@ static int kvm_handle_lsx_disabled(struct kvm_vcpu *vcpu)
/*
* kvm_handle_lasx_disabled() - Guest used LASX while disabled in root.
* @vcpu: Virtual CPU context.
* @ecode: Exception code.
*
* Handle when the guest attempts to use LASX when it is disabled in the root
* context.
*/
static int kvm_handle_lasx_disabled(struct kvm_vcpu *vcpu)
static int kvm_handle_lasx_disabled(struct kvm_vcpu *vcpu, int ecode)
{
if (kvm_own_lasx(vcpu))
kvm_queue_exception(vcpu, EXCCODE_INE, 0);
@ -810,7 +813,7 @@ static int kvm_handle_lasx_disabled(struct kvm_vcpu *vcpu)
return RESUME_GUEST;
}
static int kvm_handle_lbt_disabled(struct kvm_vcpu *vcpu)
static int kvm_handle_lbt_disabled(struct kvm_vcpu *vcpu, int ecode)
{
if (kvm_own_lbt(vcpu))
kvm_queue_exception(vcpu, EXCCODE_INE, 0);
@ -872,7 +875,7 @@ static void kvm_handle_service(struct kvm_vcpu *vcpu)
kvm_write_reg(vcpu, LOONGARCH_GPR_A0, ret);
}
static int kvm_handle_hypercall(struct kvm_vcpu *vcpu)
static int kvm_handle_hypercall(struct kvm_vcpu *vcpu, int ecode)
{
int ret;
larch_inst inst;
@ -932,16 +935,14 @@ static int kvm_handle_hypercall(struct kvm_vcpu *vcpu)
/*
* LoongArch KVM callback handling for unimplemented guest exiting
*/
static int kvm_fault_ni(struct kvm_vcpu *vcpu)
static int kvm_fault_ni(struct kvm_vcpu *vcpu, int ecode)
{
unsigned int ecode, inst;
unsigned long estat, badv;
unsigned int inst;
unsigned long badv;
/* Fetch the instruction */
inst = vcpu->arch.badi;
badv = vcpu->arch.badv;
estat = vcpu->arch.host_estat;
ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
kvm_err("ECode: %d PC=%#lx Inst=0x%08x BadVaddr=%#lx ESTAT=%#lx\n",
ecode, vcpu->arch.pc, inst, badv, read_gcsr_estat());
kvm_arch_vcpu_dump_regs(vcpu);
@ -966,5 +967,5 @@ static exit_handle_fn kvm_fault_tables[EXCCODE_INT_START] = {
int kvm_handle_fault(struct kvm_vcpu *vcpu, int fault)
{
return kvm_fault_tables[fault](vcpu);
return kvm_fault_tables[fault](vcpu, fault);
}

View File

@ -912,7 +912,7 @@ static int kvm_map_page(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
return err;
}
int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write, int ecode)
{
int ret;
@ -921,8 +921,17 @@ int kvm_handle_mm_fault(struct kvm_vcpu *vcpu, unsigned long gpa, bool write)
return ret;
/* Invalidate this entry in the TLB */
vcpu->arch.flush_gpa = gpa;
kvm_make_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);
if (!cpu_has_ptw || (ecode == EXCCODE_TLBM)) {
/*
* With HW PTW, invalid TLB is not added when page fault. But
* for EXCCODE_TLBM exception, stale TLB may exist because of
* the last read access.
*
* With SW PTW, invalid TLB is added in TLB refill exception.
*/
vcpu->arch.flush_gpa = gpa;
kvm_make_request(KVM_REQ_TLB_FLUSH_GPA, vcpu);
}
return 0;
}

View File

@ -106,14 +106,6 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
page += vmem_altmap_offset(altmap);
__remove_pages(start_pfn, nr_pages, altmap);
}
#ifdef CONFIG_NUMA
int memory_add_physaddr_to_nid(u64 start)
{
return pa_to_nid(start);
}
EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif
#endif
#ifdef CONFIG_SPARSEMEM_VMEMMAP

View File

@ -194,6 +194,7 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
{
struct pci_bus *bus;
struct pci_root_info *info;
struct pci_host_bridge *host;
struct acpi_pci_root_ops *root_ops;
int domain = root->segment;
int busnum = root->secondary.start;
@ -237,8 +238,17 @@ struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
return NULL;
}
pci_bus_size_bridges(bus);
pci_bus_assign_resources(bus);
/* If we must preserve the resource configuration, claim now */
host = pci_find_host_bridge(bus);
if (host->preserve_config)
pci_bus_claim_resources(bus);
/*
* Assign whatever was left unassigned. If we didn't claim above,
* this will reassign everything.
*/
pci_assign_unassigned_root_bus_resources(bus);
list_for_each_entry(child, &bus->children, node)
pcie_bus_configure_settings(child);
}

View File

@ -31,7 +31,7 @@ cflags-$(CONFIG_ARM) += -DEFI_HAVE_STRLEN -DEFI_HAVE_STRNLEN \
$(DISABLE_STACKLEAK_PLUGIN)
cflags-$(CONFIG_RISCV) += -fpic -DNO_ALTERNATIVE -mno-relax \
$(DISABLE_STACKLEAK_PLUGIN)
cflags-$(CONFIG_LOONGARCH) += -fpie
cflags-$(CONFIG_LOONGARCH) += -fpie $(DISABLE_STACKLEAK_PLUGIN)
cflags-$(CONFIG_EFI_PARAMS_FROM_FDT) += -I$(srctree)/scripts/dtc/libfdt

View File

@ -1317,6 +1317,7 @@ config NUMA_MEMBLKS
config NUMA_EMU
bool "NUMA emulation"
depends on NUMA_MEMBLKS
depends on X86 || GENERIC_ARCH_NUMA
help
Enable NUMA emulation. A flat machine will be split
into virtual nodes when booted with "numa=fake=N", where N is the

View File

@ -3,7 +3,7 @@ top_srcdir = ../../../..
include $(top_srcdir)/scripts/subarch.include
ARCH ?= $(SUBARCH)
ifeq ($(ARCH),$(filter $(ARCH),arm64 s390 riscv x86 x86_64))
ifeq ($(ARCH),$(filter $(ARCH),arm64 s390 riscv x86 x86_64 loongarch))
# Top-level selftests allows ARCH=x86_64 :-(
ifeq ($(ARCH),x86_64)
ARCH := x86

View File

@ -47,6 +47,10 @@ LIBKVM_riscv += lib/riscv/handlers.S
LIBKVM_riscv += lib/riscv/processor.c
LIBKVM_riscv += lib/riscv/ucall.c
LIBKVM_loongarch += lib/loongarch/processor.c
LIBKVM_loongarch += lib/loongarch/ucall.c
LIBKVM_loongarch += lib/loongarch/exception.S
# Non-compiled test targets
TEST_PROGS_x86 += x86/nx_huge_pages_test.sh
@ -190,6 +194,19 @@ TEST_GEN_PROGS_riscv += coalesced_io_test
TEST_GEN_PROGS_riscv += get-reg-list
TEST_GEN_PROGS_riscv += steal_time
TEST_GEN_PROGS_loongarch += coalesced_io_test
TEST_GEN_PROGS_loongarch += demand_paging_test
TEST_GEN_PROGS_loongarch += dirty_log_perf_test
TEST_GEN_PROGS_loongarch += dirty_log_test
TEST_GEN_PROGS_loongarch += guest_print_test
TEST_GEN_PROGS_loongarch += hardware_disable_test
TEST_GEN_PROGS_loongarch += kvm_binary_stats_test
TEST_GEN_PROGS_loongarch += kvm_create_max_vcpus
TEST_GEN_PROGS_loongarch += kvm_page_table_test
TEST_GEN_PROGS_loongarch += memslot_modification_stress_test
TEST_GEN_PROGS_loongarch += memslot_perf_test
TEST_GEN_PROGS_loongarch += set_memory_region_test
SPLIT_TESTS += arch_timer
SPLIT_TESTS += get-reg-list

View File

@ -177,6 +177,7 @@ enum vm_guest_mode {
VM_MODE_P36V48_4K,
VM_MODE_P36V48_16K,
VM_MODE_P36V48_64K,
VM_MODE_P47V47_16K,
VM_MODE_P36V47_16K,
NUM_VM_MODES,
};
@ -232,6 +233,11 @@ extern enum vm_guest_mode vm_mode_default;
#define MIN_PAGE_SHIFT 12U
#define ptes_per_page(page_size) ((page_size) / 8)
#elif defined(__loongarch__)
#define VM_MODE_DEFAULT VM_MODE_P47V47_16K
#define MIN_PAGE_SHIFT 12U
#define ptes_per_page(page_size) ((page_size) / 8)
#endif
#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)

View File

@ -0,0 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef SELFTEST_KVM_UTIL_ARCH_H
#define SELFTEST_KVM_UTIL_ARCH_H
struct kvm_vm_arch {};
#endif // SELFTEST_KVM_UTIL_ARCH_H

View File

@ -0,0 +1,141 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef SELFTEST_KVM_PROCESSOR_H
#define SELFTEST_KVM_PROCESSOR_H
#ifndef __ASSEMBLER__
#include "ucall_common.h"
#else
/* general registers */
#define zero $r0
#define ra $r1
#define tp $r2
#define sp $r3
#define a0 $r4
#define a1 $r5
#define a2 $r6
#define a3 $r7
#define a4 $r8
#define a5 $r9
#define a6 $r10
#define a7 $r11
#define t0 $r12
#define t1 $r13
#define t2 $r14
#define t3 $r15
#define t4 $r16
#define t5 $r17
#define t6 $r18
#define t7 $r19
#define t8 $r20
#define u0 $r21
#define fp $r22
#define s0 $r23
#define s1 $r24
#define s2 $r25
#define s3 $r26
#define s4 $r27
#define s5 $r28
#define s6 $r29
#define s7 $r30
#define s8 $r31
#endif
/*
* LoongArch page table entry definition
* Original header file arch/loongarch/include/asm/loongarch.h
*/
#define _PAGE_VALID_SHIFT 0
#define _PAGE_DIRTY_SHIFT 1
#define _PAGE_PLV_SHIFT 2 /* 2~3, two bits */
#define PLV_KERN 0
#define PLV_USER 3
#define PLV_MASK 0x3
#define _CACHE_SHIFT 4 /* 4~5, two bits */
#define _PAGE_PRESENT_SHIFT 7
#define _PAGE_WRITE_SHIFT 8
#define _PAGE_VALID BIT_ULL(_PAGE_VALID_SHIFT)
#define _PAGE_PRESENT BIT_ULL(_PAGE_PRESENT_SHIFT)
#define _PAGE_WRITE BIT_ULL(_PAGE_WRITE_SHIFT)
#define _PAGE_DIRTY BIT_ULL(_PAGE_DIRTY_SHIFT)
#define _PAGE_USER (PLV_USER << _PAGE_PLV_SHIFT)
#define __READABLE (_PAGE_VALID)
#define __WRITEABLE (_PAGE_DIRTY | _PAGE_WRITE)
/* Coherent Cached */
#define _CACHE_CC BIT_ULL(_CACHE_SHIFT)
#define PS_4K 0x0000000c
#define PS_16K 0x0000000e
#define PS_64K 0x00000010
#define PS_DEFAULT_SIZE PS_16K
/* LoongArch Basic CSR registers */
#define LOONGARCH_CSR_CRMD 0x0 /* Current mode info */
#define CSR_CRMD_PG_SHIFT 4
#define CSR_CRMD_PG BIT_ULL(CSR_CRMD_PG_SHIFT)
#define CSR_CRMD_IE_SHIFT 2
#define CSR_CRMD_IE BIT_ULL(CSR_CRMD_IE_SHIFT)
#define CSR_CRMD_PLV_SHIFT 0
#define CSR_CRMD_PLV_WIDTH 2
#define CSR_CRMD_PLV (0x3UL << CSR_CRMD_PLV_SHIFT)
#define PLV_MASK 0x3
#define LOONGARCH_CSR_PRMD 0x1
#define LOONGARCH_CSR_EUEN 0x2
#define LOONGARCH_CSR_ECFG 0x4
#define LOONGARCH_CSR_ESTAT 0x5 /* Exception status */
#define LOONGARCH_CSR_ERA 0x6 /* ERA */
#define LOONGARCH_CSR_BADV 0x7 /* Bad virtual address */
#define LOONGARCH_CSR_EENTRY 0xc
#define LOONGARCH_CSR_TLBIDX 0x10 /* TLB Index, EHINV, PageSize */
#define CSR_TLBIDX_PS_SHIFT 24
#define CSR_TLBIDX_PS_WIDTH 6
#define CSR_TLBIDX_PS (0x3fUL << CSR_TLBIDX_PS_SHIFT)
#define CSR_TLBIDX_SIZEM 0x3f000000
#define CSR_TLBIDX_SIZE CSR_TLBIDX_PS_SHIFT
#define LOONGARCH_CSR_ASID 0x18 /* ASID */
#define LOONGARCH_CSR_PGDL 0x19
#define LOONGARCH_CSR_PGDH 0x1a
/* Page table base */
#define LOONGARCH_CSR_PGD 0x1b
#define LOONGARCH_CSR_PWCTL0 0x1c
#define LOONGARCH_CSR_PWCTL1 0x1d
#define LOONGARCH_CSR_STLBPGSIZE 0x1e
#define LOONGARCH_CSR_CPUID 0x20
#define LOONGARCH_CSR_KS0 0x30
#define LOONGARCH_CSR_KS1 0x31
#define LOONGARCH_CSR_TMID 0x40
#define LOONGARCH_CSR_TCFG 0x41
/* TLB refill exception entry */
#define LOONGARCH_CSR_TLBRENTRY 0x88
#define LOONGARCH_CSR_TLBRSAVE 0x8b
#define LOONGARCH_CSR_TLBREHI 0x8e
#define CSR_TLBREHI_PS_SHIFT 0
#define CSR_TLBREHI_PS (0x3fUL << CSR_TLBREHI_PS_SHIFT)
#define EXREGS_GPRS (32)
#ifndef __ASSEMBLER__
void handle_tlb_refill(void);
void handle_exception(void);
struct ex_regs {
unsigned long regs[EXREGS_GPRS];
unsigned long pc;
unsigned long estat;
unsigned long badv;
};
#define PC_OFFSET_EXREGS offsetof(struct ex_regs, pc)
#define ESTAT_OFFSET_EXREGS offsetof(struct ex_regs, estat)
#define BADV_OFFSET_EXREGS offsetof(struct ex_regs, badv)
#define EXREGS_SIZE sizeof(struct ex_regs)
#else
#define PC_OFFSET_EXREGS ((EXREGS_GPRS + 0) * 8)
#define ESTAT_OFFSET_EXREGS ((EXREGS_GPRS + 1) * 8)
#define BADV_OFFSET_EXREGS ((EXREGS_GPRS + 2) * 8)
#define EXREGS_SIZE ((EXREGS_GPRS + 3) * 8)
#endif
#endif /* SELFTEST_KVM_PROCESSOR_H */

View File

@ -0,0 +1,20 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef SELFTEST_KVM_UCALL_H
#define SELFTEST_KVM_UCALL_H
#include "kvm_util.h"
#define UCALL_EXIT_REASON KVM_EXIT_MMIO
/*
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
extern vm_vaddr_t *ucall_exit_mmio_addr;
static inline void ucall_arch_do_ucall(vm_vaddr_t uc)
{
WRITE_ONCE(*ucall_exit_mmio_addr, uc);
}
#endif

View File

@ -222,6 +222,7 @@ const char *vm_guest_mode_string(uint32_t i)
[VM_MODE_P36V48_4K] = "PA-bits:36, VA-bits:48, 4K pages",
[VM_MODE_P36V48_16K] = "PA-bits:36, VA-bits:48, 16K pages",
[VM_MODE_P36V48_64K] = "PA-bits:36, VA-bits:48, 64K pages",
[VM_MODE_P47V47_16K] = "PA-bits:47, VA-bits:47, 16K pages",
[VM_MODE_P36V47_16K] = "PA-bits:36, VA-bits:47, 16K pages",
};
_Static_assert(sizeof(strings)/sizeof(char *) == NUM_VM_MODES,
@ -248,6 +249,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
[VM_MODE_P36V48_4K] = { 36, 48, 0x1000, 12 },
[VM_MODE_P36V48_16K] = { 36, 48, 0x4000, 14 },
[VM_MODE_P36V48_64K] = { 36, 48, 0x10000, 16 },
[VM_MODE_P47V47_16K] = { 47, 47, 0x4000, 14 },
[VM_MODE_P36V47_16K] = { 36, 47, 0x4000, 14 },
};
_Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
@ -319,6 +321,7 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
case VM_MODE_P36V48_16K:
vm->pgtable_levels = 4;
break;
case VM_MODE_P47V47_16K:
case VM_MODE_P36V47_16K:
vm->pgtable_levels = 3;
break;

View File

@ -0,0 +1,59 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include "processor.h"
/* address of refill exception should be 4K aligned */
.balign 4096
.global handle_tlb_refill
handle_tlb_refill:
csrwr t0, LOONGARCH_CSR_TLBRSAVE
csrrd t0, LOONGARCH_CSR_PGD
lddir t0, t0, 3
lddir t0, t0, 1
ldpte t0, 0
ldpte t0, 1
tlbfill
csrrd t0, LOONGARCH_CSR_TLBRSAVE
ertn
/*
* save and restore all gprs except base register,
* and default value of base register is sp ($r3).
*/
.macro save_gprs base
.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
st.d $r\n, \base, 8 * \n
.endr
.endm
.macro restore_gprs base
.irp n,1,2,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31
ld.d $r\n, \base, 8 * \n
.endr
.endm
/* address of general exception should be 4K aligned */
.balign 4096
.global handle_exception
handle_exception:
csrwr sp, LOONGARCH_CSR_KS0
csrrd sp, LOONGARCH_CSR_KS1
addi.d sp, sp, -EXREGS_SIZE
save_gprs sp
/* save sp register to stack */
csrrd t0, LOONGARCH_CSR_KS0
st.d t0, sp, 3 * 8
csrrd t0, LOONGARCH_CSR_ERA
st.d t0, sp, PC_OFFSET_EXREGS
csrrd t0, LOONGARCH_CSR_ESTAT
st.d t0, sp, ESTAT_OFFSET_EXREGS
csrrd t0, LOONGARCH_CSR_BADV
st.d t0, sp, BADV_OFFSET_EXREGS
or a0, sp, zero
bl route_exception
restore_gprs sp
csrrd sp, LOONGARCH_CSR_KS0
ertn

View File

@ -0,0 +1,346 @@
// SPDX-License-Identifier: GPL-2.0
#include <assert.h>
#include <linux/compiler.h>
#include "kvm_util.h"
#include "processor.h"
#include "ucall_common.h"
#define LOONGARCH_PAGE_TABLE_PHYS_MIN 0x200000
#define LOONGARCH_GUEST_STACK_VADDR_MIN 0x200000
static vm_paddr_t invalid_pgtable[4];
static uint64_t virt_pte_index(struct kvm_vm *vm, vm_vaddr_t gva, int level)
{
unsigned int shift;
uint64_t mask;
shift = level * (vm->page_shift - 3) + vm->page_shift;
mask = (1UL << (vm->page_shift - 3)) - 1;
return (gva >> shift) & mask;
}
static uint64_t pte_addr(struct kvm_vm *vm, uint64_t entry)
{
return entry & ~((0x1UL << vm->page_shift) - 1);
}
static uint64_t ptrs_per_pte(struct kvm_vm *vm)
{
return 1 << (vm->page_shift - 3);
}
static void virt_set_pgtable(struct kvm_vm *vm, vm_paddr_t table, vm_paddr_t child)
{
uint64_t *ptep;
int i, ptrs_per_pte;
ptep = addr_gpa2hva(vm, table);
ptrs_per_pte = 1 << (vm->page_shift - 3);
for (i = 0; i < ptrs_per_pte; i++)
WRITE_ONCE(*(ptep + i), child);
}
void virt_arch_pgd_alloc(struct kvm_vm *vm)
{
int i;
vm_paddr_t child, table;
if (vm->pgd_created)
return;
child = table = 0;
for (i = 0; i < vm->pgtable_levels; i++) {
invalid_pgtable[i] = child;
table = vm_phy_page_alloc(vm, LOONGARCH_PAGE_TABLE_PHYS_MIN,
vm->memslots[MEM_REGION_PT]);
TEST_ASSERT(table, "Fail to allocate page table at level %d\n", i);
virt_set_pgtable(vm, table, child);
child = table;
}
vm->pgd = table;
vm->pgd_created = true;
}
static int virt_pte_none(uint64_t *ptep, int level)
{
return *ptep == invalid_pgtable[level];
}
static uint64_t *virt_populate_pte(struct kvm_vm *vm, vm_vaddr_t gva, int alloc)
{
int level;
uint64_t *ptep;
vm_paddr_t child;
if (!vm->pgd_created)
goto unmapped_gva;
child = vm->pgd;
level = vm->pgtable_levels - 1;
while (level > 0) {
ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
if (virt_pte_none(ptep, level)) {
if (alloc) {
child = vm_alloc_page_table(vm);
virt_set_pgtable(vm, child, invalid_pgtable[level - 1]);
WRITE_ONCE(*ptep, child);
} else
goto unmapped_gva;
} else
child = pte_addr(vm, *ptep);
level--;
}
ptep = addr_gpa2hva(vm, child) + virt_pte_index(vm, gva, level) * 8;
return ptep;
unmapped_gva:
TEST_FAIL("No mapping for vm virtual address, gva: 0x%lx", gva);
exit(EXIT_FAILURE);
}
vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
{
uint64_t *ptep;
ptep = virt_populate_pte(vm, gva, 0);
TEST_ASSERT(*ptep != 0, "Virtual address vaddr: 0x%lx not mapped\n", gva);
return pte_addr(vm, *ptep) + (gva & (vm->page_size - 1));
}
void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
{
uint32_t prot_bits;
uint64_t *ptep;
TEST_ASSERT((vaddr % vm->page_size) == 0,
"Virtual address not on page boundary,\n"
"vaddr: 0x%lx vm->page_size: 0x%x", vaddr, vm->page_size);
TEST_ASSERT(sparsebit_is_set(vm->vpages_valid,
(vaddr >> vm->page_shift)),
"Invalid virtual address, vaddr: 0x%lx", vaddr);
TEST_ASSERT((paddr % vm->page_size) == 0,
"Physical address not on page boundary,\n"
"paddr: 0x%lx vm->page_size: 0x%x", paddr, vm->page_size);
TEST_ASSERT((paddr >> vm->page_shift) <= vm->max_gfn,
"Physical address beyond maximum supported,\n"
"paddr: 0x%lx vm->max_gfn: 0x%lx vm->page_size: 0x%x",
paddr, vm->max_gfn, vm->page_size);
ptep = virt_populate_pte(vm, vaddr, 1);
prot_bits = _PAGE_PRESENT | __READABLE | __WRITEABLE | _CACHE_CC | _PAGE_USER;
WRITE_ONCE(*ptep, paddr | prot_bits);
}
static void pte_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent, uint64_t page, int level)
{
uint64_t pte, *ptep;
static const char * const type[] = { "pte", "pmd", "pud", "pgd"};
if (level < 0)
return;
for (pte = page; pte < page + ptrs_per_pte(vm) * 8; pte += 8) {
ptep = addr_gpa2hva(vm, pte);
if (virt_pte_none(ptep, level))
continue;
fprintf(stream, "%*s%s: %lx: %lx at %p\n",
indent, "", type[level], pte, *ptep, ptep);
pte_dump(stream, vm, indent + 1, pte_addr(vm, *ptep), level--);
}
}
void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
{
int level;
if (!vm->pgd_created)
return;
level = vm->pgtable_levels - 1;
pte_dump(stream, vm, indent, vm->pgd, level);
}
void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu, uint8_t indent)
{
}
void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
{
struct ucall uc;
if (get_ucall(vcpu, &uc) != UCALL_UNHANDLED)
return;
TEST_FAIL("Unexpected exception (pc:0x%lx, estat:0x%lx, badv:0x%lx)",
uc.args[0], uc.args[1], uc.args[2]);
}
void route_exception(struct ex_regs *regs)
{
unsigned long pc, estat, badv;
pc = regs->pc;
badv = regs->badv;
estat = regs->estat;
ucall(UCALL_UNHANDLED, 3, pc, estat, badv);
while (1) ;
}
void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...)
{
int i;
va_list ap;
struct kvm_regs regs;
TEST_ASSERT(num >= 1 && num <= 8, "Unsupported number of args,\n"
"num: %u\n", num);
vcpu_regs_get(vcpu, &regs);
va_start(ap, num);
for (i = 0; i < num; i++)
regs.gpr[i + 4] = va_arg(ap, uint64_t);
va_end(ap);
vcpu_regs_set(vcpu, &regs);
}
static void loongarch_get_csr(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
{
uint64_t csrid;
csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
__vcpu_get_reg(vcpu, csrid, addr);
}
static void loongarch_set_csr(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
{
uint64_t csrid;
csrid = KVM_REG_LOONGARCH_CSR | KVM_REG_SIZE_U64 | 8 * id;
__vcpu_set_reg(vcpu, csrid, val);
}
static void loongarch_vcpu_setup(struct kvm_vcpu *vcpu)
{
int width;
unsigned long val;
struct kvm_vm *vm = vcpu->vm;
switch (vm->mode) {
case VM_MODE_P36V47_16K:
case VM_MODE_P47V47_16K:
break;
default:
TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
}
/* user mode and page enable mode */
val = PLV_USER | CSR_CRMD_PG;
loongarch_set_csr(vcpu, LOONGARCH_CSR_CRMD, val);
loongarch_set_csr(vcpu, LOONGARCH_CSR_PRMD, val);
loongarch_set_csr(vcpu, LOONGARCH_CSR_EUEN, 1);
loongarch_set_csr(vcpu, LOONGARCH_CSR_ECFG, 0);
loongarch_set_csr(vcpu, LOONGARCH_CSR_TCFG, 0);
loongarch_set_csr(vcpu, LOONGARCH_CSR_ASID, 1);
val = 0;
width = vm->page_shift - 3;
switch (vm->pgtable_levels) {
case 4:
/* pud page shift and width */
val = (vm->page_shift + width * 2) << 20 | (width << 25);
/* fall through */
case 3:
/* pmd page shift and width */
val |= (vm->page_shift + width) << 10 | (width << 15);
/* pte page shift and width */
val |= vm->page_shift | width << 5;
break;
default:
TEST_FAIL("Got %u page table levels, expected 3 or 4", vm->pgtable_levels);
}
loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL0, val);
/* PGD page shift and width */
val = (vm->page_shift + width * (vm->pgtable_levels - 1)) | width << 6;
loongarch_set_csr(vcpu, LOONGARCH_CSR_PWCTL1, val);
loongarch_set_csr(vcpu, LOONGARCH_CSR_PGDL, vm->pgd);
/*
* Refill exception runs on real mode
* Entry address should be physical address
*/
val = addr_gva2gpa(vm, (unsigned long)handle_tlb_refill);
loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBRENTRY, val);
/*
* General exception runs on page-enabled mode
* Entry address should be virtual address
*/
val = (unsigned long)handle_exception;
loongarch_set_csr(vcpu, LOONGARCH_CSR_EENTRY, val);
loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBIDX, &val);
val &= ~CSR_TLBIDX_SIZEM;
val |= PS_DEFAULT_SIZE << CSR_TLBIDX_SIZE;
loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBIDX, val);
loongarch_set_csr(vcpu, LOONGARCH_CSR_STLBPGSIZE, PS_DEFAULT_SIZE);
/* LOONGARCH_CSR_KS1 is used for exception stack */
val = __vm_vaddr_alloc(vm, vm->page_size,
LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
TEST_ASSERT(val != 0, "No memory for exception stack");
val = val + vm->page_size;
loongarch_set_csr(vcpu, LOONGARCH_CSR_KS1, val);
loongarch_get_csr(vcpu, LOONGARCH_CSR_TLBREHI, &val);
val &= ~CSR_TLBREHI_PS;
val |= PS_DEFAULT_SIZE << CSR_TLBREHI_PS_SHIFT;
loongarch_set_csr(vcpu, LOONGARCH_CSR_TLBREHI, val);
loongarch_set_csr(vcpu, LOONGARCH_CSR_CPUID, vcpu->id);
loongarch_set_csr(vcpu, LOONGARCH_CSR_TMID, vcpu->id);
}
struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
{
size_t stack_size;
uint64_t stack_vaddr;
struct kvm_regs regs;
struct kvm_vcpu *vcpu;
vcpu = __vm_vcpu_add(vm, vcpu_id);
stack_size = vm->page_size;
stack_vaddr = __vm_vaddr_alloc(vm, stack_size,
LOONGARCH_GUEST_STACK_VADDR_MIN, MEM_REGION_DATA);
TEST_ASSERT(stack_vaddr != 0, "No memory for vm stack");
loongarch_vcpu_setup(vcpu);
/* Setup guest general purpose registers */
vcpu_regs_get(vcpu, &regs);
regs.gpr[3] = stack_vaddr + stack_size;
vcpu_regs_set(vcpu, &regs);
return vcpu;
}
void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code)
{
struct kvm_regs regs;
/* Setup guest PC register */
vcpu_regs_get(vcpu, &regs);
regs.pc = (uint64_t)guest_code;
vcpu_regs_set(vcpu, &regs);
}

View File

@ -0,0 +1,38 @@
// SPDX-License-Identifier: GPL-2.0
/*
* ucall support. A ucall is a "hypercall to userspace".
*
*/
#include "kvm_util.h"
/*
* ucall_exit_mmio_addr holds per-VM values (global data is duplicated by each
* VM), it must not be accessed from host code.
*/
vm_vaddr_t *ucall_exit_mmio_addr;
void ucall_arch_init(struct kvm_vm *vm, vm_paddr_t mmio_gpa)
{
vm_vaddr_t mmio_gva = vm_vaddr_unused_gap(vm, vm->page_size, KVM_UTIL_MIN_VADDR);
virt_map(vm, mmio_gva, mmio_gpa, 1);
vm->ucall_mmio_addr = mmio_gpa;
write_guest_global(vm, ucall_exit_mmio_addr, (vm_vaddr_t *)mmio_gva);
}
void *ucall_arch_get_ucall(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
if (run->exit_reason == KVM_EXIT_MMIO &&
run->mmio.phys_addr == vcpu->vm->ucall_mmio_addr) {
TEST_ASSERT(run->mmio.is_write && run->mmio.len == sizeof(uint64_t),
"Unexpected ucall exit mmio address access");
return (void *)(*((uint64_t *)run->mmio.data));
}
return NULL;
}

View File

@ -350,7 +350,7 @@ static void test_invalid_memory_region_flags(void)
struct kvm_vm *vm;
int r, i;
#if defined __aarch64__ || defined __riscv || defined __x86_64__
#if defined __aarch64__ || defined __riscv || defined __x86_64__ || defined __loongarch__
supported_flags |= KVM_MEM_READONLY;
#endif