Fix a commit 8a8158c85e ("MIPS: memset.S: EVA & fault support for
small_memset") regression and remove assembly warnings:
arch/mips/lib/memset.S: Assembler messages:
arch/mips/lib/memset.S:243: Warning: Macro instruction expanded into multiple instructions in a branch delay slot
triggering with the CPU_DADDI_WORKAROUNDS option set and this code:
	PTR_SUBU	a2, t1, a0
	jr		ra
	 PTR_ADDIU	a2, 1
This is because with that option in place the DADDIU instruction, which
the PTR_ADDIU CPP macro expands to, becomes a GAS macro, which in turn
expands to an LI/DADDU (or actually ADDIU/DADDU) sequence:
13c:	01a4302f	dsubu	a2,t1,a0
140:	03e00008	jr	ra
144:	24010001	li	at,1
148:	00c1302d	daddu	a2,a2,at
	...
Correct this by switching off the `noreorder' assembly mode and letting
GAS schedule this jump's delay slot, as there is nothing special about
it that would require manual scheduling. With this change in place
correct code is produced:
13c:	01a4302f	dsubu	a2,t1,a0
140:	24010001	li	at,1
144:	03e00008	jr	ra
148:	00c1302d	daddu	a2,a2,at
	...
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 8a8158c85e ("MIPS: memset.S: EVA & fault support for small_memset")
Patchwork: https://patchwork.linux-mips.org/patch/20833/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: stable@vger.kernel.org # 4.17+
MIPSR6 CPUs do not support unaligned load/store instructions
(LWL, LWR, SWL, SWR and LDL, LDR, SDL, SDR for 64bit).
Currently the MIPS tree has some special cases to avoid these
instructions, and the code is testing for !CONFIG_CPU_MIPSR6.
This patch declares a new Kconfig variable:
CONFIG_CPU_HAS_LOAD_STORE_LR.
This variable indicates that the CPU supports these instructions.
Then, the patch does the following:
- Carefully selects this option on all CPUs except MIPSR6.
- Switches all the special cases to test for the new variable,
and inverts the logic:
'#ifndef CONFIG_CPU_MIPSR6' turns into
'#ifdef CONFIG_CPU_HAS_LOAD_STORE_LR'
and vice-versa.
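A hedged before/after sketch of the inversion just described (context
illustrative, not a verbatim hunk):

	/* Before: keyed off the architecture revision. */
	#ifndef CONFIG_CPU_MIPSR6
		/* code using LWL/LWR, SWL/SWR */
	#endif

	/* After: keyed off the feature actually required. */
	#ifdef CONFIG_CPU_HAS_LOAD_STORE_LR
		/* code using LWL/LWR, SWL/SWR */
	#endif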
Also, when this variable is NOT selected (e.g. MIPSR6),
CONFIG_GENERIC_CSUM will default to 'y', to compile generic
C checksum code (instead of special assembly code that uses the
unsupported instructions).
This commit should not affect any existing CPU, and is required
for future Lexra CPU support, which lacks these instructions too.
Signed-off-by: Yasha Cherikovsky <yasha.che3@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20808/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
MIPS has a copy of lib/iomap.c with minor alterations, none of which are
necessary given appropriate definitions of PIO_OFFSET, PIO_MASK &
PIO_RESERVED. Provide such definitions, select GENERIC_IOMAP & remove
arch/mips/lib/iomap.c to cut back on the needless duplication.
The one change this does make is to our mmio_{in,out}s[bwl] functions,
which began to deviate from their generic counterparts with commit
0845bb721e ("MIPS: iomap: Use __mem_{read,write}{b,w,l} for MMIO"). I
suspect that this commit was incorrect, and that the SEAD-3 platform
should have instead selected CONFIG_SWAP_IO_SPACE. Since the SEAD-3
platform code is now gone & the board is instead supported by the
generic platform (CONFIG_MIPS_GENERIC) which selects
CONFIG_SWAP_IO_SPACE anyway, this shouldn't be a problem any more.
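For reference, these are the fallback definitions in lib/iomap.c that an
architecture may override; MIPS now supplies its own values suited to
its memory-mapped port I/O (not reproduced here):

	#ifndef PIO_OFFSET
	#define PIO_OFFSET	0x10000UL	/* offset added to port numbers to form cookies */
	#endif
	#ifndef PIO_MASK
	#define PIO_MASK	0x0ffffUL	/* largest valid port number */
	#endif
	#ifndef PIO_RESERVED
	#define PIO_RESERVED	0x40000UL	/* cookies >= this are treated as MMIO addresses */
	#endif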
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20342/
Cc: linux-mips@linux-mips.org
Some versions of GCC suboptimally generate calls to the __multi3()
intrinsic for MIPS64r6 builds, resulting in link failures due to the
missing function:
  LD      vmlinux.o
  MODPOST vmlinux.o
kernel/bpf/verifier.o: In function `kmalloc_array':
include/linux/slab.h:631: undefined reference to `__multi3'
fs/select.o: In function `kmalloc_array':
include/linux/slab.h:631: undefined reference to `__multi3'
...
We already have a workaround for this in which we provide the
intrinsic, but we do so selectively for GCC 7 only. Unfortunately the
issue occurs with older GCC versions too - it has been observed with
both GCC 5.4.0 & GCC 6.4.0.
MIPSr6 support was introduced in GCC 5, so all major GCC versions prior
to GCC 8 are affected. Extend the workaround accordingly, to all
MIPS64r6 builds using GCC versions older than GCC 8.
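A sketch of the widened guard (the exact spelling in the source may
differ):

	/* Cover all MIPS64r6 builds with GCC older than 8, not just GCC 7. */
	#if defined(__mips_isa_rev) && (__mips_isa_rev == 6) && \
	    defined(__GNUC__) && (__GNUC__ < 8)
	/* ... definition of __multi3 ... */
	#endif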
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Fixes: ebabcf17bc ("MIPS: Implement __multi3 for GCC7 MIPS64r6 builds")
Patchwork: https://patchwork.linux-mips.org/patch/20297/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 4.15+
It is not immediately obvious what the expected inputs to these fault
handlers are and how they calculate the number of unset bytes. Having
stared deeply at this in order to fix some corner cases, add some
comments to assist those who follow.
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19339/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: <linux-mips@linux-mips.org>
Cc: <linux-kernel@vger.kernel.org>
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the MIPSr6 version of setting of initial
unaligned bytes, the value loaded into a2 on return is meaningless.
During the MIPSr6 version of the initial unaligned bytes block, register
a2 contains the number of bytes to be set beyond the initial unaligned
bytes. The t0 register is initially set to the number of unaligned bytes
- STORSIZE, effectively a negative version of the number of unaligned
bytes. This is then incremented before each byte is saved.
The label .Lbyte_fixup\@ is jumped to on page fault. Currently the value
in a2 is incorrectly replaced by 0 - t0 + 1, effectively the number of
unaligned bytes remaining. This leads to the failures being reported by
the following test code:
static int __init test_clear_user(void)
{
	int j, k;

	pr_info("\n\n\nTesting clear_user\n");
	for (j = 0; j < 512; j++) {
		if ((k = clear_user(NULL+3, j)) != j) {
			pr_err("clear_user (NULL %d) returned %d\n", j, k);
		}
	}
	return 0;
}
late_initcall(test_clear_user);
Which reports:
[    3.965439] Testing clear_user
[    3.973169] clear_user (NULL 8) returned 6
[    3.976782] clear_user (NULL 9) returned 6
[    3.980390] clear_user (NULL 10) returned 6
[    3.984052] clear_user (NULL 11) returned 6
[    3.987524] clear_user (NULL 12) returned 6
Fix this by subtracting t0 from a2 (rather than $0), effectively giving:

unset_bytes = (#bytes - (#unaligned bytes)) - (-#unaligned bytes remaining + 1) + 1
     a2     =      a2                       -               t0                  + 1
This fixes the value returned from __clear_user when the number of bytes
to set is > LONGSIZE and the address is invalid and unaligned.
Unfortunately, this breaks the fixup handling for unaligned bytes after
the final long, where register a2 still contains the number of bytes
remaining to be set and the t0 register is set to 0 - the number of
unaligned bytes remaining.
Because t0 is now subtracted from a2 rather than 0, the number of
bytes unset is reported incorrectly:
static int __init test_clear_user(void)
{
	char *test;
	int j, k;

	pr_info("\n\n\nTesting clear_user\n");
	test = vmalloc(PAGE_SIZE);
	for (j = 256; j < 512; j++) {
		if ((k = clear_user(test + PAGE_SIZE - 254, j)) != j - 254) {
			pr_err("clear_user (%px %d) returned %d\n",
			       test + PAGE_SIZE - 254, j, k);
		}
	}
	return 0;
}
late_initcall(test_clear_user);
[    3.976775] clear_user (c00000000000df02 256) returned 4
[    3.981957] clear_user (c00000000000df02 257) returned 6
[    3.986425] clear_user (c00000000000df02 258) returned 8
[    3.990850] clear_user (c00000000000df02 259) returned 10
[    3.995332] clear_user (c00000000000df02 260) returned 12
[    3.999815] clear_user (c00000000000df02 261) returned 14
Fix this by ensuring that a2 is set to 0 during the set of final
unaligned bytes.
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 8c56208aff ("MIPS: lib: memset: Add MIPS R6 support")
Patchwork: https://patchwork.linux-mips.org/patch/19338/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # v4.0+
Assembly language within the MIPS kernel conventionally indents
instructions which are in a branch delay slot to make them easier to
see. Commit 8483b14aaa ("MIPS: lib: memset: Whitespace fixes") rather
inexplicably removed all of these indentations from memset.S. Reinstate
the convention for all instructions in a branch delay slot. This
effectively reverts the above commit, plus other locations introduced
with MIPSR6 support.
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19111/
Signed-off-by: James Hogan <jhogan@kernel.org>
Commit b35cd9884f ("lib: Add shared copies of some GCC library
routines") made it possible to share generic GCC library routines
across several architectures.
This commit removes several generic GCC library routines from
arch/mips/lib/ in favour of similar routines from lib/.
Signed-off-by: Antony Pavlov <antonynpavlov@gmail.com>
[Matt Redfearn] Use GENERIC_LIB_* named Kconfig entries
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/19051/
Signed-off-by: James Hogan <jhogan@kernel.org>
The label .Llast_fixup\@ is jumped to on page fault within the final
byte set loop of memset (on < MIPSR6 architectures). For some reason, in
this fault handler, the v1 register is randomly set to a2 & STORMASK.
This clobbers v1 for the calling function. This can be observed with the
following test code:
static int __init __attribute__((optimize("O0"))) test_clear_user(void)
{
	register int t asm("v1");
	char *test;
	int j, k;

	pr_info("\n\n\nTesting clear_user\n");
	test = vmalloc(PAGE_SIZE);
	for (j = 256; j < 512; j++) {
		t = 0xa5a5a5a5;
		if ((k = clear_user(test + PAGE_SIZE - 256, j)) != j - 256) {
			pr_err("clear_user (%px %d) returned %d\n",
			       test + PAGE_SIZE - 256, j, k);
		}
		if (t != 0xa5a5a5a5) {
			pr_err("v1 was clobbered to 0x%x!\n", t);
		}
	}
	return 0;
}
late_initcall(test_clear_user);
Which demonstrates that v1 is indeed clobbered (MIPS64):
Testing clear_user
v1 was clobbered to 0x1!
v1 was clobbered to 0x2!
v1 was clobbered to 0x3!
v1 was clobbered to 0x4!
v1 was clobbered to 0x5!
v1 was clobbered to 0x6!
v1 was clobbered to 0x7!
Since the number of bytes that could not be set is already contained in
a2, the andi placing a value in v1 is not necessary and actively
harmful in clobbering v1.
Reported-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/19109/
Signed-off-by: James Hogan <jhogan@kernel.org>
The __clear_user function is defined to return the number of bytes that
could not be cleared. From the underlying memset / bzero implementation
this means setting register a2 to that number on return. Currently if a
page fault is triggered within the memset_partial block, the value
loaded into a2 on return is meaningless.
The label .Lpartial_fixup\@ is jumped to on page fault. In order to work
out how many bytes failed to copy, the exception handler should find how
many bytes are left in the partial block (andi a2, STORMASK), add that to
the partial block end address (a2), and subtract the faulting address to
get the remainder. Currently it incorrectly subtracts the partial block
start address (t1), which has additionally been clobbered to generate a
jump target in memset_partial. Fix this by adding the block end address
instead.
This issue was found with the following test code:
int j, k;

for (j = 0; j < 512; j++) {
	if ((k = clear_user(NULL, j)) != j) {
		pr_err("clear_user (NULL %d) returned %d\n", j, k);
	}
}
Which now passes on Creator Ci40 (MIPS32) and Cavium Octeon II (MIPS64).
Suggested-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/19108/
Signed-off-by: James Hogan <jhogan@kernel.org>
The MIPS kernel memset / bzero implementation includes a small_memset
branch which is used when the region to be set is smaller than a long (4
bytes on 32bit, 8 bytes on 64bit). The current small_memset
implementation uses a simple store byte loop to write the destination.
There are 2 issues with this implementation:
1. When EVA mode is active, user and kernel address spaces may overlap.
Currently the use of the sb instruction means kernel mode addressing is
always used and an intended write to userspace may actually overwrite
some critical kernel data.
2. If the write triggers a page fault, for example by calling
__clear_user(NULL, 2), instead of gracefully handling the fault, an OOPS
is triggered.
Fix these issues by replacing the sb instruction with the EX() macro,
which will emit EVA-compatible instructions as required. Additionally
implement a fault fixup for small_memset which sets a2 to the number of
bytes that could not be cleared (as defined by __clear_user).
Reported-by: Chuanhua Lei <chuanhua.lei@intel.com>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/18975/
Signed-off-by: James Hogan <jhogan@kernel.org>
GCC7 is a bit too eager to generate suboptimal __multi3 calls (128bit
multiply with 128bit result) for MIPS64r6 builds, even in code which
doesn't explicitly use 128bit types, such as the following:
unsigned long func(unsigned long a, unsigned long b)
{
	return a > (~0UL) / b;
}
Which GCC rearranges to:

	return (unsigned __int128)a * (unsigned __int128)b > 0xffffffffffffffff;
Therefore implement __multi3, but only for MIPS64r6 with GCC7 as under
normal circumstances we wouldn't expect any calls to __multi3 to be
generated from kernel code.
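For reference, the arithmetic __multi3 must perform is small; a portable
C sketch (the kernel's version uses dmulu/dmuhu inline assembly instead,
so this is purely illustrative):

	typedef unsigned long long u64;
	typedef unsigned __int128 u128;

	/* Low 128 bits of a 128x128 multiply: three partial products suffice. */
	static u128 multi3_sketch(u128 a, u128 b)
	{
		u64 alo = (u64)a, ahi = (u64)(a >> 64);
		u64 blo = (u64)b, bhi = (u64)(b >> 64);
		u128 res = (u128)alo * blo;	/* 64x64 -> 128 widening multiply */

		/* Only the low halves of the cross products matter here. */
		res += (u128)(alo * bhi + ahi * blo) << 64;
		return res;
	}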
Reported-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: James Hogan <jhogan@kernel.org>
Tested-by: Waldemar Brodkorb <wbx@openadk.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Maciej W. Rozycki <macro@mips.com>
Cc: Matthew Fortune <matthew.fortune@mips.com>
Cc: Florian Fainelli <florian@openwrt.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17890/
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boiler plate text.
This patch is based on work done by Thomas Gleixner and Kate Stewart and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it.
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information,
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier was to be
applied to a file was done in a spreadsheet of side by side results from
the output of two independent scanners (ScanCode & Windriver) producing
SPDX tag:value files created by Philippe Ombredanne. Philippe prepared
the base worksheet, and did an initial spot review of a few 1000 files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
to be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
Criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source
- File already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0                                               11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:
SPDX license identifier                             # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note                         930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier                              # files
----------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note                          270
GPL-2.0+ WITH Linux-syscall-note                         169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
LGPL-2.1+ WITH Linux-syscall-note                         15
GPL-1.0+ WITH Linux-syscall-note                          14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
LGPL-2.0+ WITH Linux-syscall-note                          4
LGPL-2.1 WITH Linux-syscall-note                           3
((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, 3 files were found to
have copy/paste license identifier errors, and have been fixed to
reflect the correct identifier.
Additionally Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 patched files from the initial patch
version early this week with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types.) Finally Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
We currently have __ioread32_copy, __iowrite32_copy & __iowrite64_copy
helpers in lib/iomap_copy.c. This patch adds __ioread64_copy to round
out the set, allowing copies from I/O memory using 32 or 64 bit reads.
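A sketch of the helper's shape, modelled on the existing lib/iomap_copy.c
routines (a 32-bit kernel would additionally need a fallback, e.g. two
32-bit reads per element):

	/* Copy @count quadwords from I/O memory using 64-bit reads. */
	void __ioread64_copy(void *to, const void __iomem *from, size_t count)
	{
		u64 *dst = to;
		const u64 __iomem *src = from;
		const u64 __iomem *end = src + count;

		while (src < end)
			*dst++ = __raw_readq(src++);
	}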
[ralf@linux-mips.org: Changed to move all the code of this patch to be
applied to arch/mips temporarily.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/17025/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
arch/mips/lib/delay.c provides our implementations of the __delay(),
__ndelay() & __udelay() functions, but doesn't include the asm/delay.h
header which declares them. This leads to warnings from sparse:
arch/mips/lib/delay.c:26:6: warning: symbol '__delay' was not declared. Should it be static?
arch/mips/lib/delay.c:50:6: warning: symbol '__udelay' was not declared. Should it be static?
arch/mips/lib/delay.c:58:6: warning: symbol '__ndelay' was not declared. Should it be static?
To keep checkpatch happy as well, include <linux/delay.h> rather than
<asm/delay.h> directly to get the declarations of __delay(), __ndelay() &
__udelay().
[ralf@linux-mips.org: Fixed to include <linux/delay.h>.]
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/17170/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Pull MIPS updates from Ralf Baechle:
"Boston platform support:
- Document DT bindings
- Add CLK driver for board clocks
CM:
- Avoid per-core locking with CM3 & higher
- WARN on attempt to lock invalid VP, not BUG
CPS:
- Select CONFIG_SYS_SUPPORTS_SCHED_SMT for MIPSr6
- Prevent multi-core with dcache aliasing
- Handle cores not powering down more gracefully
- Handle spurious VP starts more gracefully
DSP:
- Add lwx & lhx misaligned access support
eBPF:
- Add MIPS support along with many supporting changes to add the
required infrastructure
Generic arch code:
- Misc sysmips MIPS_ATOMIC_SET fixes
- Drop duplicate HAVE_SYSCALL_TRACEPOINTS
- Negate error syscall return in trace
- Correct forced syscall errors
- Traced negative syscalls should return -ENOSYS
- Allow samples/bpf/tracex5 to access syscall arguments for sane
traces
- Cleanup from old Kconfig options in defconfigs
- Fix PREF instruction usage by memcpy for MIPS R6
- Fix various special cases in the FPU emulation
- Fix some special cases in MIPS16e2 support
- Fix MIPS I ISA /proc/cpuinfo reporting
- Sort MIPS Kconfig alphabetically
- Fix minimum alignment requirement of IRQ stack as required by
ABI / GCC
- Fix special cases in the module loader
- Perform post-DMA cache flushes on systems with MAARs
- Probe the I6500 CPU
- Cleanup cmpxchg and add support for 1 and 2 byte operations
- Use queued read/write locks (qrwlock)
- Use queued spinlocks (qspinlock)
- Add CPU shared FTLB feature detection
- Handle tlbex-tlbp race condition
- Allow storing pgd in C0_CONTEXT for MIPSr6
- Use current_cpu_type() in m4kc_tlbp_war()
- Support Boston in the generic kernel
Generic platform:
- yamon-dt: Pull YAMON DT shim code out of SEAD-3 board
- yamon-dt: Support > 256MB of RAM
- yamon-dt: Use serial* rather than uart* aliases
- Abstract FDT fixup application
- Set RTC_ALWAYS_BCD to 0
- Add a MAINTAINERS entry
core kernel:
- qspinlock.c: include linux/prefetch.h
Loongson 3:
- Add support
Perf:
- Add I6500 support
SEAD-3:
- Remove GIC timer from DT
- Set interrupt-parent per-device, not at root node
- Fix GIC interrupt specifiers
SMP:
- Skip IPI setup if we only have a single CPU
VDSO:
- Make comment match reality
- Improvements to time code in VDSO"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (86 commits)
locking/qspinlock: Include linux/prefetch.h
MIPS: Fix MIPS I ISA /proc/cpuinfo reporting
MIPS: Fix minimum alignment requirement of IRQ stack
MIPS: generic: Support MIPS Boston development boards
MIPS: DTS: img: Don't attempt to build-in all .dtb files
clk: boston: Add a driver for MIPS Boston board clocks
dt-bindings: Document img,boston-clock binding
MIPS: Traced negative syscalls should return -ENOSYS
MIPS: Correct forced syscall errors
MIPS: Negate error syscall return in trace
MIPS: Drop duplicate HAVE_SYSCALL_TRACEPOINTS select
MIPS16e2: Provide feature overrides for non-MIPS16 systems
MIPS: MIPS16e2: Report ASE presence in /proc/cpuinfo
MIPS: MIPS16e2: Subdecode extended LWSP/SWSP instructions
MIPS: MIPS16e2: Identify ASE presence
MIPS: VDSO: Fix a mismatch between comment and preprocessor constant
MIPS: VDSO: Add implementation of gettimeofday() fallback
MIPS: VDSO: Add implementation of clock_gettime() fallback
MIPS: VDSO: Fix conversions in do_monotonic()/do_monotonic_coarse()
MIPS: Use current_cpu_type() in m4kc_tlbp_war()
...
Disable usage of the PREF instruction by memcpy for MIPS R6.
MIPS R6 redefines the PREF instruction with a smaller offset than
ordinary MIPS. However, the memcpy code uses PREF instructions
with offsets bigger than +-256 bytes.
Malta kernels already disable usage of PREF for memcpy.
This was found during adaptation of MIPS R6 for the virtual board
used by the Android emulator.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtech.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16510/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
kernelci.org reports tons of build warnings for linux-next:
35 WARNING: "memcpy" [fs/fat/msdos.ko] has no CRC!
35 WARNING: "__copy_user" [fs/fat/fat.ko] has no CRC!
32 WARNING: EXPORT symbol "memset" [vmlinux] version generation failed, symbol will not be versioned.
32 WARNING: EXPORT symbol "copy_page" [vmlinux] version generation failed, symbol will not be versioned.
32 WARNING: EXPORT symbol "clear_page" [vmlinux] version generation failed, symbol will not be versioned.
32 WARNING: EXPORT symbol "__strncpy_from_user_nocheck_asm" [vmlinux] version generation failed, symbol will not be versioned.
The problem here is mainly the missing asm/asm-prototypes.h header file
that is supposed to include the prototypes for each symbol that is exported
from an assembler file.
A second problem is that the asm/uaccess.h header contains some but not
all the necessary declarations for the user access helpers.
Finally, the vdso build is broken once we add asm/asm-prototypes.h, so
we have to fix this at the same time by changing the vdso header. My
approach here is to just not look for exported symbols in the VDSO
assembler files, as the symbols cannot be exported anyway.
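A sketch of the sort of header this adds (the exact includes are
assumptions, not the verbatim file):

	/* arch/mips/include/asm/asm-prototypes.h: C prototypes for symbols
	 * defined in assembler, so genksyms can compute CRCs for them. */
	#include <asm/checksum.h>
	#include <asm/page.h>
	#include <asm-generic/asm-prototypes.h>	/* memcpy, memset, ... */
	#include <linux/uaccess.h>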
Fixes: 576a2f0c5c ("MIPS: Export memcpy & memset functions alongside their definitions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15038/
Patchwork: https://patchwork.linux-mips.org/patch/15069/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Now that EXPORT_SYMBOL can be used from assembly source, move the
EXPORT_SYMBOL invocations for the memcpy & memset functions & variants
thereof to be alongside their definitions.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14514/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Now that EXPORT_SYMBOL can be used from assembly source, move the
EXPORT_SYMBOL invocations for the strlen*, strnlen* & strncpy* functions
to be alongside their definitions.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14513/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Now that EXPORT_SYMBOL can be used from assembly source, move the
EXPORT_SYMBOL invocations for the csum_partial_* functions to be
alongside their definitions.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14512/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Since commit 4bcc595ccd ("printk: reinstate KERN_CONT for printing
continuation lines") the output from TLB dumps on MIPS has been
pretty unreadable due to the lack of KERN_CONT markers. Use pr_cont to
provide the appropriate markers & restore the expected output.
Continuation is also used for the second line of each TLB entry printed
in dump_tlb.c even though it has a newline, since it is a continuation
of the interpretation of the same TLB entry. For example:
[   46.371884] Index:  0 pgmask=16kb va=77654000 asid=73 gid=00
	[ri=0 xi=0 pa=ffc18000 c=5 d=0 v=1 g=0] [ri=0 xi=0 pa=ffc1c000 c=5 d=0 v=1 g=0]
[   46.385380] Index: 12 pgmask=16kb va=004b4000 asid=73 gid=00
	[ri=0 xi=0 pa=00000000 c=0 d=0 v=0 g=0] [ri=0 xi=0 pa=ffb00000 c=5 d=1 v=1 g=0]
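The mechanics, for reference: since 4bcc595ccd a printk() without
KERN_CONT always starts a new line, so continuation fragments must say
so explicitly. An illustrative (not verbatim) call shape:

	pr_info("Index: %2d pgmask=%s va=%08lx asid=%02lx", i, msk, va, asid);
	pr_cont(" gid=%02lx\n", gid);	/* continuation of the same line */
	/* The entry's second line also uses pr_cont() despite the preceding
	 * newline, as it continues the interpretation of the same entry. */
	pr_cont("\t[ri=%d xi=%d pa=%09llx c=%d d=%d v=%d g=%d]\n",
		ri, xi, pa, c, d, v, g);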
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14444/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Introduce 2 Kconfig symbols, CONFIG_PCI_DRIVERS_GENERIC &
CONFIG_PCI_DRIVERS_LEGACY, which indicate whether the system should be
built for PCI drivers using the MIPS-specific struct pci_controller
API (hereafter "legacy" drivers) or more generic drivers using only
functionality provided by the PCI core (hereafter "generic" drivers).
The Kconfig entries are created such that platforms have to select
CONFIG_PCI_DRIVERS_GENERIC if they wish to use it - that is, the default
is CONFIG_PCI_DRIVERS_LEGACY so that existing platforms need no
modification.
The functions declared in pci.h are rearranged with those provided only
by pci-legacy.c being guarded by an #ifdef CONFIG_PCI_DRIVERS_LEGACY to
ensure they are only used in configurations where they are implemented.
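A hedged sketch of the guard in asm/pci.h (declarations shown are
representative of the legacy-only API, not an exact hunk):

	#ifdef CONFIG_PCI_DRIVERS_LEGACY
	/* Provided only by pci-legacy.c. */
	extern void register_pci_controller(struct pci_controller *hose);
	extern int pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin);
	#endif /* CONFIG_PCI_DRIVERS_LEGACY */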
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14345/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h
in code that is obj-y Makefile or bool Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers,
adding significantly to what we feed cpp, and it can obscure what
headers we are effectively using.
Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
for the presence of either and replace as needed.
The compiler.h additions are for an implicit presence of the
"notrace" which module.h brought in but export.h does not.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14034/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
On certain MIPS32 devices, the ftrace tracer "function_graph" uses
__lshrdi3() during the capturing of trace data. ftrace then attempts to
trace __lshrdi3() which leads to infinite recursion and a stack overflow.
Fix this by marking __lshrdi3() as notrace. Mark the other compiler
intrinsics as notrace in case the compiler decides to use them in the
ftrace path.
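A minimal sketch of the idea (not the kernel's exact lib code): the
helper is written so that no 64-bit shift which the compiler would turn
back into a libgcc call remains, and notrace keeps ftrace away from it.

	#include <linux/compiler.h>	/* notrace */

	long long notrace __lshrdi3(long long u, unsigned int b)
	{
		unsigned int hi = (unsigned long long)u >> 32; /* constant shift: inlined */
		unsigned int lo = (unsigned int)u;

		if (b >= 32) {
			lo = hi >> (b - 32);
			hi = 0;
		} else if (b) {
			lo = (lo >> b) | (hi << (32 - b));
			hi >>= b;
		}
		return ((unsigned long long)hi << 32) | lo;
	}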
Signed-off-by: Harvey Hunt <harvey.hunt@imgtec.com>
Cc: <linux-mips@linux-mips.org>
Cc: <linux-kernel@vger.kernel.org>
Cc: <stable@vger.kernel.org> # 4.2.x-
Patchwork: https://patchwork.linux-mips.org/patch/13354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The GuestCtl1 CP0 register can contain the GuestID used for root TLB
operations, which affects TLB matching. The other TLB registers are
already dumped out to the log on a machine check exception due to
multiple matching TLB entries, so also dump the value of the GuestCtl1
register if GuestIDs are supported.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13232/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The GuestID for root TLB operations (GuestCtl1.RID) is modified by TLB
reads, so needs preserving by dump_tlb() like the ASID field of EntryHi.
Also dump the GuestID of each entry if it exists alongside the ASID, as
it forms an important part of the TLB entry when VZ guests are used.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13230/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for supporting variable ASID masks, retrieve ASID masks
using functions in asm/cpu-info.h which accept struct cpuinfo_mips. This
will allow those functions to determine the ASID mask based upon the CPU
in a later patch. This also allows for the r3k & r8k cases to be handled
in Kconfig, which is arguably cleaner than the previous #ifdefs.
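A hedged sketch of the accessor shape described (naming assumed):

	/* asm/cpu-info.h: the mask can later be derived per-CPU. */
	static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
	{
		return cpuinfo->asid_mask;
	}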
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/13210/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Historically, __arch_local_irq_restore() was only used by SMTC. However,
SMTC support has been removed since 3.16, so this patch removes the now
unused function.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/12159/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
__clear_user() (and clear_user() which uses it), always access the user
mode address space, which results in EVA store instructions when EVA is
enabled even if the current user address limit is KERNEL_DS.
Fix this by adding a new symbol __bzero_kernel for the normal kernel
address space bzero in EVA mode, and call that from __clear_user() if
eva_kernel_access().
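In outline (the real __clear_user() marshals the arguments and branches
via inline asm; the C call form here is purely illustrative):

	long res;

	if (eva_kernel_access())
		res = __bzero_kernel(addr, size);	/* normal kernel-mode stores */
	else
		res = __bzero(addr, size);		/* EVA user-mode stores */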
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Leonid Yegoshin <leonid.yegoshin@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10844/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
ARCH_USE_BUILTIN_BSWAP will use __builtin_bswap16(), __builtin_bswap32()
and __builtin_bswap64() where available. This allows better instruction
scheduling. On pre-R2 processors it will result in 32-bit and 64-bit
swapping being performed by calls to the __bswapsi2() and __bswapdi2()
functions respectively, so we add these, too.
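The added helpers are plain C; a sketch of what the 32-bit one plausibly
looks like:

	#include <linux/export.h>

	unsigned int __bswapsi2(unsigned int u)
	{
		return (((u) & 0xff000000) >> 24) |
		       (((u) & 0x00ff0000) >>  8) |
		       (((u) & 0x0000ff00) <<  8) |
		       (((u) & 0x000000ff) << 24);
	}
	EXPORT_SYMBOL(__bswapsi2);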
For a 4.2 kernel with GCC 4.9 this yields the following kernel sizes:
   text     data     bss      dec     hex  filename
3996071   155804   88992  4240867  40b5e3  vmlinux ip22 baseline
3985687   159900   88992  4234579  409d53  vmlinux ip22 + bswap patch
6913157   378552  251024  7542733  7317cd  vmlinux ip27 baseline
6878581   378552  251024  7508157  7290bd  vmlinux ip27 + bswap patch
5773777   268752  187424  6229953  5f0fc1  vmlinux malta baseline
5773401   268752  187424  6229577  5f0e49  vmlinux malta + bswap patch
Presumably the code size improvements yield a better cache hit rate and
thus better performance, compensating for the extra function call, but
this will still need to be benchmarked.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The generic field definitions (i.e. present before MIPS32/MIPS64) in
mipsregs.h are conventionally not prefixed with MIPS_, so rename the
recently added MIPS_ENTRYLO_* definitions for the G, V, D, and C fields
to ENTRYLO_*. Also rearrange to put the EntryLo and EntryHi definitions
in the right place in the file.
Fixes: 8ab6abcb6a ("MIPS: mipsregs.h: Add EntryLo bit definitions")
Reported-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10725/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The FrameMask register is relevant to the TLB so it should be dumped by
dump_tlb_regs(), however it is only present in certain cores (r10000,
r12000, r14000, r16000). Add dumping of it, conditional upon
current_cpu_type().
Suggested-by: Joshua Kinard <kumba@gentoo.org>
Suggested-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10724/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The PageGrain register may not exist if certain architectural features
aren't present, therefore only print out its value when dumping the TLB
registers if it is expected to contain fields relevant to the TLB.
Fixes: d1e9a4f547 ("MIPS: Add SysRq operation to dump TLBs on all CPUs")
Reported-by: Joshua Kinard <kumba@gentoo.org>
Reported-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10723/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The TLB registers are dumped in a couple of places:
- sysrq_tlbdump_single() - when dumping TLB state.
- do_mcheck() - in response to a machine check error.
The main TLB registers also differ between r3k and r4k, but r4k appears
to be assumed.
Refactor this code into a dump_tlb_regs() function, implemented for both
r3k and r4k, and used by both of the above functions.
Fixes: d1e9a4f547 ("MIPS: Add SysRq operation to dump TLBs on all CPUs")
Suggested-by: Maciej W. Rozycki <macro@linux-mips.org>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10721/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Move the initialisation of the CP0.Wired register implemented by Toshiba
TX3922 and TX3927 processors from `tx39_cache_init' to `tlb_init' where
it belongs, correcting code structure and making sure initialisation
does not rely on `tx39_cache_init' being called before `tlb_init' to
work correctly.
Make `r3k_have_wired_reg' static as it's no longer externally referred
to; remove a stale declaration too.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10195/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
XPA extends the physical addresses on MIPS32, including the EntryLo
registers. Update dump_tlb() to concatenate the PFNX field from the high
end of the EntryLo registers (as read by mfhc0).
The width of physical and virtual addresses are also separated to show
only 8 nibbles of virtual but 11 nibbles of physical with XPA.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Steven J. Hill <Steven.Hill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10077/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The RI/XI bits when present are above the PFN field in the EntryLo
registers, at bits 63,62 when read with dmfc0, and bits 31,30 when read
with mfc0. This makes them appear as part of the physical address, since
the other bits are masked with PAGE_MASK, for example:
Index: 253 pgmask=16kb va=77b18000 asid=75
	[pa=1000744000 c=5 d=1 v=1 g=0] [pa=100134c000 c=5 d=1 v=1 g=0]
The physical addresses have bit 36 set, which corresponds to bit 30 of
EntryLo1, the XI bit.
Explicitly mask off the RI and XI bits from the printed physical
address, and print the RI and XI bits separately if they exist, giving
output more like this:
Index: 226 pgmask=16kb va=77be0000 asid=79
	[ri=0 xi=1 pa=01288000 c=5 d=1 v=1 g=0] [ri=0 xi=0 pa=010e4000 c=5 d=0 v=1 g=0]
Cc: linux-mips@linux-mips.org
Cc: James Hogan <james.hogan@imgtec.com>
Cc: David Daney <ddaney@caviumnetworks.com>
Patchwork: https://patchwork.linux-mips.org/patch/10080/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The EHINV bit in EntryHi allows a TLB entry to be properly marked
invalid so that EntryHi doesn't have to be set to a unique value to
avoid machine check exceptions due to multiple matching entries.
Unfortunately dump_tlb() doesn't take this into account so it will print
all the uninteresting invalid TLB entries if the current ASID happens to
be 00. Therefore add a condition to skip entries which are marked
invalid with the EHINV bit.
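The added condition amounts to a one-liner of this shape (a sketch;
identifiers per mipsregs.h):

	/* Entry explicitly invalidated via EntryHi.EHINV: nothing to print. */
	if (cpu_has_tlbinv && (entryhi & MIPS_ENTRYHI_EHINV))
		continue;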
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10076/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The TLB only matches the ASID when the global bit isn't set, so
dump_tlb() shouldn't really be skipping global entries just because the
ASID doesn't match. Fix the condition to read the TLB entry's global bit
from EntryLo0. Note that after a TLB read the global bits in both
EntryLo registers reflect the same global bit in the TLB entry.
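The corrected match condition is thus of this shape (sketch):

	/* Skip only entries that are non-global AND belong to another ASID. */
	if (!(entrylo0 & ENTRYLO_G) && (entryhi & asid_mask) != asid)
		continue;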
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10079/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Make use of recently added EntryLo bit definitions in mipsregs.h when
dumping TLB contents.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10075/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Refactor the TLB matching code in dump_tlb() slightly so that the
conditions which can cause a TLB entry to be skipped can be more easily
extended. This should prevent the match condition getting unwieldy once
it is updated to take further conditions into account.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10081/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Use the new tlb read hazard macros from <asm/hazards.h> rather than the
local BARRIER() macro which uses 7 ops regardless of the kernel
configuration.
We use mtc0_tlbr_hazard for the hazard between mtc0 to the index
register and the tlbr, and tlb_read_hazard for the hazard between the
tlbr and the mfc0 of the TLB registers written by tlbr.
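In context, the read sequence in the dump loop becomes (sketch):

	unsigned long pagemask, entryhi, entrylo0, entrylo1;

	write_c0_index(i);
	mtc0_tlbr_hazard();	/* mtc0 to Index -> tlbr */
	tlb_read();
	tlb_read_hazard();	/* tlbr -> mfc0 of the TLB registers */
	pagemask = read_c0_pagemask();
	entryhi  = read_c0_entryhi();
	entrylo0 = read_c0_entrylo0();
	entrylo1 = read_c0_entrylo1();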
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10074/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Correct a regression introduced with 8453eebd [MIPS: Fix strnlen_user()
return value in case of overlong strings.] causing assembler warnings
and broken code generated in __strnlen_kernel_nocheck_asm:
arch/mips/lib/strnlen_user.S: Assembler messages:
arch/mips/lib/strnlen_user.S:64: Warning: Macro instruction expanded into multiple instructions in a branch delay slot
with the CPU_DADDI_WORKAROUNDS option set, resulting in the function
looping indefinitely upon mounting NFS root.
Use conditional assembly to avoid a microMIPS code size regression.
Using $at unconditionally would cause such a regression as there are no
16-bit instruction encodings available for ALU operations using this
register. Using $v1 unconditionally would produce short microMIPS
encodings, but would prevent this register from being used across calls
to this function.
The extra LI operation introduced is free, replacing a NOP originally
scheduled into the delay slot of the branch that follows.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/10205/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Computing the sum introduces a true data dependency. This patch removes
some true data dependencies, hence increasing instruction-level
parallelism. It brings up to a 50% csum performance gain on Loongson 3a.
One example about how this patch works is in CSUM_BIGCHUNK1:
// ** original **      vs      ** patch applied **
   ADDC(sum, t0)                  ADDC(t0, t1)
   ADDC(sum, t1)                  ADDC(t2, t3)
   ADDC(sum, t2)                  ADDC(sum, t0)
   ADDC(sum, t3)                  ADDC(sum, t2)
In the original implementation, each ADDC(sum, ...) depends on the sum
value updated by the previous ADDC (as a source operand).
With this patch applied, the first two ADDC operations are independent,
hence can be executed simultaneously if possible.
Another example is in the "copy and sum calculating chunk":
// **    original    **      vs      **  patch applied  **
   STORE(t0, UNIT(0) ...               STORE(t0, UNIT(0) ...
   ADDC(sum, t0)                       ADDC(t0, t1)
   STORE(t1, UNIT(1) ...               STORE(t1, UNIT(1) ...
   ADDC(sum, t1)                       ADDC(sum, t0)
   STORE(t2, UNIT(2) ...               STORE(t2, UNIT(2) ...
   ADDC(sum, t2)                       ADDC(t2, t3)
   STORE(t3, UNIT(3) ...               STORE(t3, UNIT(3) ...
   ADDC(sum, t3)                       ADDC(sum, t2)
With this patch applied, ADDC and the **next next** ADDC are independent.
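The same idea expressed in C, for illustration only (the real code is
assembly and also folds carries):

	/* Two independent accumulators halve the longest dependency chain,
	 * letting a superscalar core issue both additions in parallel. */
	static unsigned int sum_words(const unsigned int *p, int n)
	{
		unsigned int s0 = 0, s1 = 0;
		int i;

		for (i = 0; i < n; i += 2) {
			s0 += p[i];	/* chain 0 */
			s1 += p[i + 1];	/* chain 1, independent of chain 0 */
		}
		return s0 + s1;
	}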
Signed-off-by: chenj <chenj@lemote.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/9608/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
MIPS R6 dropped the unaligned load and store instructions so
we need to re-write this part of the code for R6 to store
one byte at a time.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
MIPS R6 does not support the unaligned load and store instructions
so we add a special MIPS R6 case to copy one byte at a time if we
need to read/write to unaligned memory addresses.
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The following instructions have been removed from MIPS R6:
ulw, ulh, swl, lwr, lwl, swr.
However, all of them are used in the MIPS-specific checksum implementation.
As a result, we will use the generic checksum on MIPS R6.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The toolchain defines exactly one of __MIPSEB__ and
__MIPSEL__. As a result, simplify the ifdefery a little bit.
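I.e. guards of this shape (illustrative):

	/* Before: */
	#if defined(__MIPSEB__)
		/* big-endian variant */
	#elif defined(__MIPSEL__)
		/* little-endian variant */
	#endif

	/* After: exactly one of the two is defined, so #else suffices. */
	#ifdef __MIPSEB__
		/* big-endian variant */
	#else
		/* little-endian variant */
	#endif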
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8522/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Having #ifdefs just to guard comments is not really helpful
so drop them. Moreover, the code wasn't really reached anyway
since there is an #ifndef CONFIG_CPU_MIPSR2 at the top of the file.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8513/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Commit cf62a8b813 ("MIPS: lib: memcpy: Use macro to build the
copy_user code") switched to a macro in order to build the memcpy
symbols in preparation for the EVA support. However, this commit
also removed the NOP instruction after the 'jr ra' when returning
back to the caller. This had no visible side-effects since the next
instruction was a load to the t0 register which was already in the
clobbered list, but it may have undesired effects in the future
if some other code is introduced in between the .Ldone and
the .Ll_exc_copy labels.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: <stable@vger.kernel.org> # v3.15+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/8512/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The virtual page number of the R3000 in EntryHi is the 20 bits counted
from the MSB, but in dump_tlb() the bit mask used to read it from
EntryHi covers only 19 bits (0xffffe000). The patch fixes that to
0xfffff000.
Signed-off-by: Isamu Mogi <isamu@leafytree.jp>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/8290/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
We were returning maxlen like the userland strnlen if no '\0' character
was encountered while the kernel version is expected to return a value
larger than maxlen. Fixed to return maxlen + 1.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This small update to the previous fix to __delay removes a conditional
around the ABI-dependent subtraction operation within an inline asm in
favor of the standard <asm/asm.h> LONG_SUBU macro. No change in the code
produced.
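A sketch of the resulting function, modelled on arch/mips/lib/delay.c
(GCC_DADDI_IMM_ASM() from <asm/compiler.h> yields an "r" constraint when
CPU_DADDI_WORKAROUNDS is enabled, keeping the 1 in a register, and an
immediate-capable constraint otherwise):

	void __delay(unsigned long loops)
	{
		__asm__ __volatile__ (
		"	.set	noreorder			\n"
		"	.align	3				\n"
		"1:	bnez	%0, 1b				\n"
		"	 " __stringify(LONG_SUBU) "	%0, %1	\n"
		"	.set	reorder				\n"
		: "=r" (loops)
		: GCC_DADDI_IMM_ASM() (1), "0" (loops));
	}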
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6703/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Nobody is maintaining SMTC anymore and there also seems to be no userbase.
Which is a pity - the SMTC technology primarily developed by Kevin D.
Kissell <kevink@paralogos.com> is an ingenious demonstration for the MT
ASE's power and elegance.
Based on Markos Chandras <Markos.Chandras@imgtec.com> patch
https://patchwork.linux-mips.org/patch/6719/ which, while very similar,
no longer applied cleanly when I tried to merge it, plus some additional
post-SMTC cleanup - SMTC was a feature as tricky to remove as it was to
merge once upon a time.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This change reverts most of commit
60724ca59e [MIPS: IP checksums: Remove
unncessary .set pseudos] that introduced warnings with the
CPU_DADDI_WORKAROUNDS option set:
arch/mips/lib/csum_partial.S: Assembler messages:
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:467: Warning: used $3 with ".set at=$3"
[...]
arch/mips/lib/csum_partial.S:577: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:577: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:577: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:601: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:601: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:601: Warning: used $3 with ".set at=$3"
arch/mips/lib/csum_partial.S:601: Warning: used $3 with ".set at=$3"
[and so on, and so on...]
The warnings are benign and good code is produced regardless because no
macros that'd use the assembler's temporary register are involved;
however, the `.set noat' directives removed by the commit referred to
are crucial to guarantee this is still going to be the case after any
changes in the future. Therefore they need to be brought back into
place, which this change does.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6686/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This corrects assembler warnings and broken code generated in
__strncpy_from_user_asm:
arch/mips/lib/strncpy_user.S: Assembler messages:
arch/mips/lib/strncpy_user.S:52: Warning: Macro instruction expanded into multiple instructions in a branch delay slot
with the CPU_DADDI_WORKAROUNDS option set. The function schedules delay
slots manually where there is really no need to as GAS is happy to do it
all itself, so undo it all and remove `.set noreorder'.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6685/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
With CPU_DADDI_WORKAROUNDS enabled __delay assembles with a macro in a
branch delay slot:
{standard input}: Assembler messages:
{standard input}:18: Warning: Macro instruction expanded into multiple
instructions in a branch delay slot
and broken code results:
0000000000000000 <__delay>:
   0:	1480ffff	bnez	a0,0 <__delay>
   4:	24010001	li	at,1
   8:	0081202f	dsubu	a0,a0,at
   c:	03e00008	jr	ra
  10:	00000000	nop
  14:	00000000	nop
Consequently the function loops indefinitely, showing up prominently as a
hang in the delay loop calibration at bootstrap.
This change corrects the problem by forcing the immediate 1 into a
register while keeping code produced identical where CPU_DADDI_WORKAROUNDS
is disabled.
Signed-off-by: Maciej W. Rozycki <macro@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/6669/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
In preparation for EVA support, we use a macro to build the
__csum_partial_copy_user main code so it can be shared across
multiple implementations. EVA uses the same code but it replaces
the load/store/prefetch instructions with the EVA specific ones
therefore using a macro avoids unnecessary code duplications.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. There are cases where a load instruction may
never fail such as when we are sure the load happens in the kernel
address space. Therefore, we merge the EXC and LOADX/STOREX
macros into a single one. We also expand the argument list in the EXC
macro to make the macro more flexible. The extra 'type' argument is not
used by this commit, but it will be used when EVA support is added to
memcpy.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The 'copy_user' symbol can be used to copy from or to
userland so we will use two different symbols for these
operations. This makes no difference in the existing code,
but when the core is operating in EVA mode, different instructions
need to be used to read and write to userland address space.
The old function has also been renamed to 'copy_kernel' to denote
that it is suitable for copying data to and from kernel space.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Build the __bzero function using the EVA load/store instructions
when operating in the EVA mode. This function is only used when
accessing user code so there is no need to build two distinct symbols
for user and kernel operations respectively.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Build the __bzero symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro to avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Add copy_{to,from,in}_user when the CPU operates in EVA mode.
This is necessary so the EVA specific instructions can be used
to perform the virtual to physical translation for user space
addresses. We will use the non-EVA functions to read from kernel
space if needed.
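The resulting dispatch can be pictured as follows (a sketch; the
symbol names are illustrative rather than the kernel's exact ones):

    /* EVA kernels route user copies through an EVA-flavoured routine;
     * kernel-to-kernel copies keep using the normal one. */
    #ifdef CONFIG_EVA
    # define __invoke_copy_to_user(to, from, n)	__copy_to_user_eva(to, from, n)
    #else
    # define __invoke_copy_to_user(to, from, n)	__copy_user(to, from, n)
    #endif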
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
The code can be shared between EVA and non-EVA configurations,
therefore use a macro to build it and avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In preparation for EVA support, the PREF macro is split into two
separate macros, PREFS and PREFD, for source and destination data
prefetching respectively.
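In a non-EVA configuration both variants simply collapse to the
ordinary prefetch, along these lines (sketch; bodies simplified):

    /* Separate macros for source and destination prefetches, so an
     * EVA build can later map each one to a different instruction. */
    #define PREFS(hint, addr)	PREF(hint, addr)	/* source */
    #define PREFD(hint, addr)	PREF(hint, addr)	/* destination */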
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Each load/store macro always adds an entry to the __ex_table
using the EXC macro. Therefore, these load/store macros are now merged
with the EXC one. The argument list is also expanded in order to make
the macro more flexible. The extra 'type' argument is not used by this
commit, but it will be used when EVA support is added to memcpy.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In non-EVA mode, strncpy_from_user* aliases are used for the
strncpy_from_kernel* symbols since the code is identical. In EVA
mode, new strncpy_from_user* symbols are used which use the EVA
specific instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Build the __strncpy_from_user symbol using a macro. In EVA mode we
will need to use similar code to do the userspace load operations, so
it is better to use a macro to avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In non-EVA mode, strlen_user* aliases are used for the
strlen_kernel* symbols since the code is identical. In EVA
mode, new strlen_user* symbols are used which use the EVA
specific instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Build the __strlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro to avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
In non-EVA mode, a strnlen_user* alias is used for the
strnlen_kernel* symbols since the code is identical. In EVA
mode, a new strnlen_user* symbol is used which uses the EVA
specific instructions to load values from userspace.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Build the __strnlen_user symbol using a macro. In EVA mode we will
need to use similar code to do the userspace load operations, so
it is better to use a macro to avoid code duplication.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
None of these files are actually using any __init type directives
and hence don't need to include <linux/init.h>. Most are just
left over from the __devinit and __cpuinit removal, or simply due to
code getting copied from one driver to the next.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: John Crispin <blogic@openwrt.org>
Patchwork: http://patchwork.linux-mips.org/patch/6320/
commit 3747069b25e419f6b51395f48127e9812abc3596 upstream.
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
the arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
related content into no-ops as early as possible, since that will get
rid of these warnings. In any case, they are temporary and harmless.
Here, we remove all the MIPS __cpuinit from C code and __CPUINIT
from asm files. MIPS is interesting in this respect, because there
are also uasm users hiding behind their own renamed versions of the
__cpuinit macros.
[1] https://lkml.org/lkml/2013/5/20/589
[ralf@linux-mips.org: Folded in Paul's followup fix.]
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/5494/
Patchwork: https://patchwork.linux-mips.org/patch/5495/
Patchwork: https://patchwork.linux-mips.org/patch/5509/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
This reverts commit d532f3d267.
The original commit has several problems:
1) Doesn't work with 64-bit kernels.
2) Calls TLBMISS_HANDLER_SETUP() before the code is generated.
3) Calls TLBMISS_HANDLER_SETUP() twice in per_cpu_trap_init() when
only one call is needed.
[ralf@linux-mips.org: Also revert the bits of the ASID patch which were
hidden in the KVM merge.]
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: "Steven J. Hill" <Steven.Hill@imgtec.com>
Cc: David Daney <david.daney@cavium.com>
Patchwork: https://patchwork.linux-mips.org/patch/5242/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Optimise 'strnlen' to use microMIPS instructions and/or optimisations
for binary size reduction. When the microMIPS ISA is not being used,
the library function compiles to the original binary code.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Optimise 'strlen' to use microMIPS instructions and/or optimisations
for binary size reduction. When the microMIPS ISA is not being used,
the library function compiles to the original binary code.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Optimise 'strncpy' to use microMIPS instructions and/or optimisations
for binary size reduction. When the microMIPS ISA is not being used,
the library function compiles to the original binary code.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Optimise 'memset' to use microMIPS instructions and/or optimisations
for binary size reduction. When the microMIPS ISA is not being used,
the library function compiles to the original binary code.
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
This is based on an original patch by Ralf Baechle that was removed
by Harold Koerfgen in commit f67e4ffc79905482c3b9b8c8dd65197bac7eb508.
It allows for more generic kernels, since the size of the ASID
and corresponding masks can be determined at run-time. This
patch is also required for the new Aptiv cores and has been
tested on Malta and Malta Aptiv platforms.
[ralf@linux-mips.org: Added relevant part of fix
https://patchwork.linux-mips.org/patch/5213/]
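A hypothetical sketch of what run-time determination can look like,
assuming the usual <asm/mipsregs.h> EntryHi accessors (the patch
itself plumbs a variable mask through the TLB/ASID code in place of a
compile-time ASID_MASK constant; the probe below is illustrative):

    static unsigned long cpu_asid_mask;	/* replaces a fixed ASID_MASK */

    static void probe_asid_mask(void)
    {
            unsigned long entryhi = read_c0_entryhi();

            /* Set all candidate ASID bits, then read back the ones
             * this CPU implements; unimplemented bits read as zero. */
            write_c0_entryhi(entryhi | 0x3ff);
            cpu_asid_mask = read_c0_entryhi() & 0x3ff;
            write_c0_entryhi(entryhi);
    }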
Signed-off-by: Steven J. Hill <Steven.Hill@imgtec.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
The operations on the bitmap pointers are protected by "memory"
clobbering raw_local_irq_{save,restore}(), so there is no need for
volatile here. By removing the volatile we get better code generation
out of the compiler.
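A minimal sketch of why dropping the volatile is safe, in the shape of
the fallback bit operations (simplified):

    int test_and_set_bit_sketch(unsigned long nr,
                                volatile unsigned long *addr)
    {
            /* The cast drops volatile: the barriers below already
             * force the compiler to reload and write back *a. */
            unsigned long *a = (unsigned long *)addr + nr / BITS_PER_LONG;
            unsigned long mask = 1UL << (nr % BITS_PER_LONG);
            unsigned long flags;
            int res;

            raw_local_irq_save(flags);	/* asm with "memory" clobber */
            res = (*a & mask) != 0;
            *a |= mask;
            raw_local_irq_restore(flags);	/* compiler barrier again */

            return res;
    }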
Signed-off-by: David Daney <david.daney@cavium.com>
Patchwork: http://patchwork.linux-mips.org/patch/4966/
Acked-by: John Crispin <blogic@openwrt.org>
commit 92d11594f6 (MIPS: Remove irqflags.h dependency from bitops.h)
factored some of the bitops code out into a separate file
(arch/mips/lib/bitops.c). Unfortunately the logic converting a bit
mask into a boolean result was lost in some of the functions. We had:
int res;
unsigned long shifted_result_bit;
.
.
.
res = shifted_result_bit;
return res;
This truncates off the high 32 bits (thus yielding an incorrect
value) on 64-bit systems.
The manifestation of this is that a non-SMP 64-bit kernel will not
boot as the bitmap operations in bootmem.c are all screwed up.
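The fix is to collapse the mask to a boolean before it is narrowed to
'int'. A standalone demonstration of the truncation (not the kernel
code itself):

    #include <stdio.h>

    int main(void)
    {
            unsigned long word = 1UL << 40;		/* a bit in the high half */
            unsigned long mask = 1UL << 40;
            int broken = word & mask;		/* truncated on LP64: 0 */
            int fixed  = (word & mask) != 0;	/* correct: 1 */

            printf("broken=%d fixed=%d\n", broken, fixed);
            return 0;
    }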
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: Jim Quinlan <jim2101024@gmail.com>
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/4965/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>