1. 28 April 2020, 2 commits
    • arm64: set TEXT_OFFSET to 0x0 in preparation for removing it entirely · cfa7ede2
      Authored by Ard Biesheuvel
      TEXT_OFFSET on arm64 is a historical artifact from the early days of
      the arm64 port where the boot protocol was basically 'copy this image
      to the base of memory + 512k', giving us 512 KB of guaranteed BSS space
      to put the swapper page tables. When the arm64 Image header was added in
      v3.10, it already carried the actual value of TEXT_OFFSET, to allow the
      bootloader to discover it dynamically rather than hardcode it to 512 KB.
      
      Today, this memory window is not used for any particular purpose, and
      it is simply handed to the page allocator at boot. The only reason it
      still exists is because of the 512k misalignment it causes with respect
      to the 2 MB aligned virtual base address of the kernel, which affects
      the virtual addresses of all statically allocated objects in the kernel
      image.
      
      However, with the introduction of KASLR in v4.6, we added the concept of
      relocatable kernels, which rewrite all absolute symbol references at
      boot anyway, and so the placement of such kernels in the physical address
      space is irrelevant, provided that the minimum segment alignment is
      honoured (64 KB in most cases, 128 KB for 64k pages kernels with vmap'ed
      stacks enabled). This makes 0x0 and 512 KB equally suitable values for
      TEXT_OFFSET on the off chance that we are dealing with boot loaders that
      ignore the value passed via the header entirely.
      
      Considering that the distros as well as Android ship KASLR-capable
      kernels today, and the fact that TEXT_OFFSET was discoverable from the
      Image header from the very beginning, let's change this value to 0x0, in
      preparation for removing it entirely at a later date.
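
      As a hedged illustration (reconstructed from the commit description,
      not a verbatim quote of the patch), the effect on arch/arm64/Makefile
      is a one-line change of the default:

          # Image load offset from start of RAM, discoverable via the Image header
          TEXT_OFFSET := 0x0          # previously 0x00080000 (512 KB)
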
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/20200415082922.32709-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: drop GZFLAGS definition and export · 4cf23494
      Authored by Ard Biesheuvel
      Drop the definition and export of GZFLAGS, which was never referenced
      on arm64, and whose last recorded use in the ARM port (on which arm64
      was originally based) was removed by the patch quoted below (a sketch
      of the dropped lines follows this entry):
      
        commit 5e89d379edb5ae08b57f39dd8d91697275245cbf [*]
        Author: Russell King <rmk@flint.arm.linux.org.uk>
        Date:   Wed Oct 16 14:32:17 2002 +0100
      
          [ARM] Convert ARM makefiles to new kbuild (Sam Ravnborg, Kai, rmk)
      
      [*] git commit ID based on Thomas Gleixner's historical GIT repository at
          git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/20200415123049.25504-1-ardb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
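
      For reference, a hedged sketch of the dropped GZFLAGS lines (the '-9'
      compression level and the exact spelling in the old arch/arm64/Makefile
      are assumptions):

          GZFLAGS := -9       # gzip compression level; nothing on arm64 consumed it
          export GZFLAGS
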
  2. 02 April 2020, 1 commit
    • arm64: Always force a branch protection mode when the compiler has one · b8fdef31
      Authored by Mark Brown
      Compilers with branch protection support can be configured to enable it
      by default, and it is likely that distributions will do so as part of
      deploying branch protection system-wide. As well as the slight overhead
      of some extra NOPs for unused branch protection features, this can cause
      more serious problems when the kernel is providing pointer authentication
      to userspace but is not built for pointer authentication itself. In that
      case our switching of keys for userspace can affect the kernel
      unexpectedly, causing pointer authentication instructions in the kernel
      to corrupt addresses.
      
      To ensure that we get consistent and reliable behaviour, always
      explicitly initialise the branch protection mode, ensuring that the
      kernel is built the same way regardless of the compiler defaults.
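
      A minimal sketch of the idea (flag spellings are taken from the commit
      text; the exact Makefile wording is an assumption):

          ifeq ($(CONFIG_ARM64_PTR_AUTH), y)
          branch-prot-flags-y := $(call cc-option, -mbranch-protection=pac-ret+leaf)
          else
          # pin "none" even when the toolchain was configured to default to "standard"
          branch-prot-flags-y := $(call cc-option, -mbranch-protection=none)
          endif
          KBUILD_CFLAGS += $(branch-prot-flags-y)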
      
      Fixes: 75031975 (arm64: add basic pointer authentication support)
      Reported-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      [catalin.marinas@arm.com: remove Kconfig option in favour of Makefile check]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 18 March 2020, 1 commit
    • arm64: compile the kernel with ptrauth return address signing · 74afda40
      Authored by Kristina Martsenko
      Compile all functions with two ptrauth instructions: PACIASP in the
      prologue to sign the return address, and AUTIASP in the epilogue to
      authenticate the return address (from the stack). If authentication
      fails, the return will cause an instruction abort to be taken, followed
      by an oops and killing the task.
      
      This should help protect the kernel against attacks using
      return-oriented programming. As ptrauth protects the return address, it
      can also serve as a replacement for CONFIG_STACKPROTECTOR, although note
      that it does not protect other parts of the stack.
      
      The new instructions are in the HINT encoding space, so on a system
      without ptrauth they execute as NOPs.
      
      CONFIG_ARM64_PTR_AUTH now not only enables ptrauth for userspace and KVM
      guests, but also automatically builds the kernel with ptrauth
      instructions if the compiler supports it. If there is no compiler
      support, we do not warn that the kernel was built without ptrauth
      instructions.
      
      GCC 7 and 8 support the -msign-return-address option, while GCC 9
      deprecates that option and replaces it with -mbranch-protection. Support
      both options.
      
      Clang uses an external assembler, hence this patch makes sure that the
      correct parameter (-march=armv8.3-a) is passed down to help it recognize
      the ptrauth instructions.
      
      The ftrace function tracer works properly with ptrauth only when the
      patchable-function-entry feature is present; this is ensured by a
      Kconfig dependency.
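
      A hedged sketch of the resulting flag selection (the CONFIG_CC_HAS_*
      helper symbols and the ordering are assumptions consistent with the
      description above; when both options are available, the later
      assignment wins):

          ifeq ($(CONFIG_ARM64_PTR_AUTH), y)
          # GCC 7/8: the older, now-deprecated spelling
          pac-flags-$(CONFIG_CC_HAS_SIGN_RETURN_ADDRESS) := -msign-return-address=all
          # GCC 9+ / Clang: the consolidated option
          pac-flags-$(CONFIG_CC_HAS_BRANCH_PROT_PAC_RET) := -mbranch-protection=pac-ret+leaf
          ifeq ($(CONFIG_CC_IS_CLANG), y)
          # Clang's external assembler must also accept PACIASP/AUTIASP
          pac-flags-y += -Wa,-march=armv8.3-a
          endif
          KBUILD_CFLAGS += $(pac-flags-y)
          endif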
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Reviewed-by: Vincenzo Frascino <Vincenzo.Frascino@arm.com> # not co-dev parts
      Co-developed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
      [Amit: Cover leaf function, comments, Ftrace Kconfig]
      Signed-off-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 16 January 2020, 1 commit
  5. 15 January 2020, 1 commit
  6. 06 November 2019, 1 commit
    • arm64: implement ftrace with regs · 3b23e499
      Authored by Torsten Duwe
      This patch implements FTRACE_WITH_REGS for arm64, which allows a traced
      function's arguments (and some other registers) to be captured into a
      struct pt_regs, allowing these to be inspected and/or modified. This is
      a building block for live-patching, where a function's arguments may be
      forwarded to another function. This is also necessary to enable ftrace
      and in-kernel pointer authentication at the same time, as it allows the
      LR value to be captured and adjusted prior to signing.
      
      Using GCC's -fpatchable-function-entry=N option, we can have the
      compiler insert a configurable number of NOPs between the function entry
      point and the usual prologue. This also ensures functions are AAPCS
      compliant (e.g. disabling inter-procedural register allocation).
      
      For example, with -fpatchable-function-entry=2, GCC 8.1.0 compiles the
      following:
      
      | unsigned long bar(void);
      |
      | unsigned long foo(void)
      | {
      |         return bar() + 1;
      | }
      
      ... to:
      
      | <foo>:
      |         nop
      |         nop
      |         stp     x29, x30, [sp, #-16]!
      |         mov     x29, sp
      |         bl      0 <bar>
      |         add     x0, x0, #0x1
      |         ldp     x29, x30, [sp], #16
      |         ret
      
      This patch builds the kernel with -fpatchable-function-entry=2,
      prefixing each function with two NOPs. To trace a function, we replace
      these NOPs with a sequence that saves the LR into a GPR, then calls an
      ftrace entry assembly function which saves this and other relevant
      registers:
      
      | mov	x9, x30
      | bl	<ftrace-entry>
      
      Since patchable functions are AAPCS compliant (and the kernel does not
      use x18 as a platform register), x9-x18 can be safely clobbered in the
      patched sequence and the ftrace entry code.
      
      There are now two ftrace entry functions, ftrace_regs_entry (which saves
      all GPRs), and ftrace_entry (which saves the bare minimum). A PLT is
      allocated for each within modules.
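
      In kbuild terms, the wiring is roughly the following sketch (the
      CPPFLAGS define is an assumption; CC_FLAGS_FTRACE is the standard
      kbuild hook for ftrace compiler flags):

          ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS), y)
            KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
            CC_FLAGS_FTRACE := -fpatchable-function-entry=2
          endif
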
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      [Mark: rework asm, comments, PLTs, initialization, commit message]
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
      Tested-by: Torsten Duwe <duwe@suse.de>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Julien Thierry <jthierry@redhat.com>
      Cc: Will Deacon <will@kernel.org>
  7. 07 October 2019, 4 commits
  8. 30 August 2019, 1 commit
    • arm64: atomics: Use K constraint when toolchain appears to support it · 03adcbd9
      Authored by Will Deacon
      The 'K' constraint is a documented AArch64 machine constraint supported
      by GCC for matching integer constants that can be used with a 32-bit
      logical instruction. Unfortunately, some released compilers erroneously
      accept the immediate '4294967295' for this constraint, which is later
      refused by GAS at assembly time. This led us to avoid the use of
      the 'K' constraint altogether.
      
      Instead, detect whether the compiler is up to the job when building the
      kernel and pass the 'K' constraint to our 32-bit atomic macros when it
      appears to be supported.
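
      A sketch of the probe, built on kbuild's try-run helper (the exact
      incantation is an assumption): feed the compiler an immediate that is
      invalid for a 32-bit logical instruction, and only trust 'K' if the
      build fails as it should.

          cc_has_k_constraint := $(call try-run,echo                             \
                  'int main(void) {                                              \
                          asm volatile("and w0, w0, %w0" :: "K" (4294967295));   \
                          return 0;                                              \
                  }' | $(CC) -S -x c -o "$$TMP" -,,-DCONFIG_CC_HAS_K_CONSTRAINT=1)
          KBUILD_CFLAGS += $(cc_has_k_constraint)
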
      Signed-off-by: Will Deacon <will@kernel.org>
  9. 22 August 2019, 1 commit
  10. 21 August 2019, 1 commit
  11. 09 August 2019, 2 commits
    • arm64: kasan: Switch to using KASAN_SHADOW_OFFSET · 6bd1d0be
      Authored by Steve Capper
      KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
      line argument and affects the codegen of the inline address sanitiser.
      
      Essentially, for an example memory access:
          *ptr1 = val;
      The compiler will insert logic similar to the below:
          shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
          if (somethingWrong(shadowValue))
              flagAnError();
      
      This code sequence is inserted into many places, thus
      KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
      text.
      
      If we want to run a single kernel binary with multiple address spaces,
      then we need to do this with KASAN_SHADOW_OFFSET fixed.
      
      Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
      shadow addresses, we know that the end of the shadow region is constant
      w.r.t. VA space size:
          KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
      
      This means that if we increase the size of the VA space, the start of
      the KASAN region expands into lower addresses whilst the end of the
      KASAN region is fixed.
      
      Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
      build scripts, with the VA size used as a parameter. (There are build
      time checks in the C code too to ensure that expected values are being
      derived.) It is sufficient, and indeed a simplification, to remove the
      build scripts (and build time checks) entirely and instead provide
      fixed KASAN_SHADOW_OFFSET values.
      
      This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
      arm64 Makefile, and instead we adopt the approach used by x86 to supply
      offset values in Kconfig. To help debug/develop future VA space changes,
      the Makefile logic has been preserved in a script file in the arm64
      Documentation folder.
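
      For orientation, a hedged reconstruction of the kind of build-time
      computation being removed (simplified and from memory; treat the exact
      arithmetic as illustrative only):

          KASAN_SHADOW_SCALE_SHIFT := 3
          # derive the offset so that KASAN_SHADOW_END stays constant per VA size
          KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
                  (0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
                  + (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
                  - (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
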
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: mm: Flip kernel VA space · 14c127c9
      Authored by Steve Capper
      In order to allow for a KASAN shadow that changes size at boot time, one
      must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
      start address. Also, it is highly desirable to maintain the same
      function addresses in the kernel .text between VA sizes. Both of these
      requirements necessitate flipping the kernel address space halves such
      that the direct linear map occupies the lower addresses.
      
      This patch puts the direct linear map in the lower addresses of the
      kernel VA range and everything else in the higher ranges.
      
      We need to adjust:
       *) KASAN shadow region placement logic,
       *) KASAN_SHADOW_OFFSET computation logic,
       *) virt_to_phys, phys_to_virt checks,
       *) page table dumper.
      
      These are all small changes, that need to take place atomically, so they
      are bundled into this commit.
      
      As part of the re-arrangement, a guard region of 2MB (to preserve
      alignment for fixed map) is added after the vmemmap. Otherwise the
      vmemmap could intersect with IS_ERR pointers.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will@kernel.org>
  12. 01 August 2019, 1 commit
  13. 23 June 2019, 1 commit
  14. 12 June 2019, 1 commit
    • arm64: Don't unconditionally add -Wno-psabi to KBUILD_CFLAGS · fa63da2a
      Authored by Nathan Chancellor
      This is a GCC-only option, which warns about ABI changes within GCC, so
      unconditionally adding it breaks Clang with tons of:
      
      warning: unknown warning option '-Wno-psabi' [-Wunknown-warning-option]
      
      and link time failures:
      
      ld.lld: error: undefined symbol: __efistub___stack_chk_guard
      >>> referenced by arm-stub.c:73
      (/home/nathan/cbl/linux/drivers/firmware/efi/libstub/arm-stub.c:73)
      >>>               arm-stub.stub.o:(__efistub_install_memreserve_table)
      in archive ./drivers/firmware/efi/libstub/lib.a
      
      These failures come from the lack of -fno-stack-protector, which is
      added via cc-option in drivers/firmware/efi/libstub/Makefile. When an
      unknown flag is added to KBUILD_CFLAGS, clang will noisily warn that it
      is ignoring the option, as above, unlike gcc, which will just error.
      
      $ echo "int main() { return 0; }" > tmp.c
      
      $ clang -Wno-psabi tmp.c; echo $?
      warning: unknown warning option '-Wno-psabi' [-Wunknown-warning-option]
      1 warning generated.
      0
      
      $ gcc -Wsometimes-uninitialized tmp.c; echo $?
      gcc: error: unrecognized command line option
      ‘-Wsometimes-uninitialized’; did you mean ‘-Wmaybe-uninitialized’?
      1
      
      For cc-option to work properly with clang and behave like gcc, -Werror
      is needed, which was done in commit c3f0d0bc ("kbuild, LLVMLinux:
      Add -Werror to cc-option to support clang").
      
      $ clang -Werror -Wno-psabi tmp.c; echo $?
      error: unknown warning option '-Wno-psabi'
      [-Werror,-Wunknown-warning-option]
      1
      
      As a consequence of this, when an unknown flag is unconditionally added
      to KBUILD_CFLAGS, it will cause cc-option to always fail and those flags
      will never get added:
      
      $ clang -Werror -Wno-psabi -fno-stack-protector tmp.c; echo $?
      error: unknown warning option '-Wno-psabi'
      [-Werror,-Wunknown-warning-option]
      1
      
      This can be seen when compiling the whole kernel as some warnings that
      are normally disabled (see below) show up. The full list of flags
      missing from drivers/firmware/efi/libstub are the following (gathered
      from diffing .arm64-stub.o.cmd):
      
      -fno-delete-null-pointer-checks
      -Wno-address-of-packed-member
      -Wframe-larger-than=2048
      -Wno-unused-const-variable
      -fno-strict-overflow
      -fno-merge-all-constants
      -fno-stack-check
      -Werror=date-time
      -Werror=incompatible-pointer-types
      -ffreestanding
      -fno-stack-protector
      
      Use cc-disable-warning so that it gets disabled for GCC and does nothing
      for Clang.
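
      In kbuild terms the fix is one line (consistent with the description
      above; cc-disable-warning probes the positive form of the warning under
      -Werror, so Clang, which rejects -Wpsabi, simply gets nothing added):

          KBUILD_CFLAGS += $(call cc-disable-warning, psabi)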
      
      Fixes: ebcc5928 ("arm64: Silence gcc warnings about arch ABI drift")
      Link: https://github.com/ClangBuiltLinux/linux/issues/511
      Reported-by: Qian Cai <cai@lca.pw>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  15. 09 June 2019, 1 commit
  16. 06 June 2019, 1 commit
    • arm64: Silence gcc warnings about arch ABI drift · ebcc5928
      Authored by Dave Martin
      Since GCC 9, the compiler warns about evolution of the
      platform-specific ABI, in particular relating to the marshalling of
      certain structures involving bitfields.
      
      The kernel is a standalone binary, and of course nobody would be
      so stupid as to expose structs containing bitfields as function
      arguments in ABI.  (Passing a pointer to such a struct, however
      inadvisable, should be unaffected by this change.  perf and various
      drivers rely on that.)
      
      So these warnings do more harm than good: turn them off.
      
      We may miss warnings about future ABI drift, but that's too bad.
      Future ABI breaks of this class will have to be debugged and fixed
      the traditional way unless the compiler evolves finer-grained
      diagnostics.
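
      Per the description above, the change amounts to one unconditional
      line (a sketch; the entry for commit fa63da2a further up shows why
      this later had to become conditional):

          KBUILD_CFLAGS += -Wno-psabi
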
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  17. 29 December 2018, 1 commit
  18. 13 December 2018, 1 commit
  19. 04 December 2018, 1 commit
    • arm64: relocatable: fix inconsistencies in linker script and options · 3bbd3db8
      Authored by Ard Biesheuvel
      readelf complains about the section layout of vmlinux when building
      with CONFIG_RELOCATABLE=y (for KASLR):
      
        readelf: Warning: [21]: Link field (0) should index a symtab section.
        readelf: Warning: [21]: Info field (0) should index a relocatable section.
      
      Also, it seems that our use of '-pie -shared' is contradictory, and
      thus ambiguous. In general, the way KASLR is wired up at the moment
      is highly tailored to how ld.bfd happens to implement (and conflate)
      PIE executables and shared libraries, so given the current effort to
      support other toolchains, let's fix some of these issues as well.
      
      - Drop the -pie linker argument and just leave -shared. In ld.bfd,
        the differences between them are unclear (except for the ELF type
        of the produced image [0]) but lld chokes on seeing both at the
        same time.
      
      - Rename the .rela output section to .rela.dyn, as is customary for
        shared libraries and PIE executables, so that it is not misidentified
        by readelf as a static relocation section (producing the warnings
        above).
      
      - Pass the -z notext and -z norelro options to explicitly instruct the
        linker to permit text relocations, and to omit the RELRO program
        header (which requires a certain section layout that we don't adhere
        to in the kernel). These are the defaults for current versions of
        ld.bfd.
      
      - Discard the .eh_frame and .gnu.hash sections to prevent them from
        being emitted between .head.text and .text, screwing up the section
        layout.
      
      These changes affect only the ELF vmlinux image; the flat binary Image
      produced from it is identical.
      
      [0] b9dce7f1 ("arm64: kernel: force ET_DYN ELF type for ...")
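
      A hedged sketch of the resulting link options for relocatable kernels
      (flags taken from the list above; their exact placement in
      arch/arm64/Makefile is an assumption):

          ifeq ($(CONFIG_RELOCATABLE), y)
          # -pie dropped: lld chokes on '-pie -shared' in combination
          LDFLAGS_vmlinux += -shared -z notext -z norelro
          endif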
      
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Peter Smith <peter.smith@linaro.org>
      Tested-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  20. 03 November 2018, 1 commit
    • arm64: makefile fix build of .i file in external module case · 98356eb0
      Authored by Victor Kamensky
      After 'a66649da arm64: fix vdso-offsets.h dependency', if
      one tries to build a .i file for an external kernel module,
      the build fails complaining that the prepare0 target is missing.
      This issue came up with SystemTap, which builds a variety of .i
      files for its own generated kernel modules to figure out the
      features/capabilities of a given kernel.
      
      The issue is that prepare0 is defined in the top-level Makefile
      only if KBUILD_EXTMOD is not defined. The .i file rule depends
      on prepare, and with KBUILD_EXTMOD defined the top-level Makefile
      contains an empty rule for prepare. But after the mentioned commit,
      arch/arm64/Makefile introduces a dependency on prepare0 through
      its own prepare target.
      
      Fix it by putting a proper ifeq on KBUILD_EXTMOD around the code
      introduced by the mentioned commit, matching what the top-level
      Makefile does (see the sketch below).
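
      A hedged sketch of the fix (the vdso_prepare rule name follows the
      commit being fixed; treat the details as an assumption):

          ifeq ($(KBUILD_EXTMOD),)
          prepare: vdso_prepare
          vdso_prepare: prepare0
                  $(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso include/generated/vdso-offsets.h
          endif
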
      Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>
      Signed-off-by: Victor Kamensky <kamensky@cisco.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  21. 02 October 2018, 1 commit
    • kbuild: consolidate Devicetree dtb build rules · 37c8a5fa
      Authored by Rob Herring
      There is nothing arch specific about building dtb files other than their
      location under /arch/*/boot/dts/. Keeping each arch aligned is a pain.
      The dependencies and supported targets are all slightly different.
      Also, a cross-compiler for each arch is needed, but really the host
      compiler preprocessor is perfectly fine for building dtbs. Move the
      build rules to a common location and remove the arch specific ones. This
      is done in a single step to avoid warnings about overriding rules.
      
      The build dependencies had been a mixture of 'scripts' and/or 'prepare'.
      These pull in several dependencies some of which need a target compiler
      (specifically devicetable-offsets.h) and aren't needed to build dtbs.
      All that is really needed is dtc, so adjust the dependencies to only be
      dtc.
      
      This change enables support for 'dtbs_install' on some arches which
      were missing the target (a sketch of the consolidated rule follows).
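
      A hedged sketch of the consolidated pattern rule, now living in the
      common kbuild makefiles rather than in each arch (details assumed):

          # each dtb depends only on its dts source and the dtc tool itself
          $(obj)/%.dtb: $(src)/%.dts $(DTC) FORCE
                  $(call if_changed_dep,dtc)
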
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Paul Burton <paul.burton@mips.com>
      Acked-by: Ley Foon Tan <ley.foon.tan@intel.com>
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Michal Marek <michal.lkml@markovi.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: linux-kbuild@vger.kernel.org
      Cc: linux-snps-arc@lists.infradead.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      Cc: linux-mips@linux-mips.org
      Cc: nios2-dev@lists.rocketboards.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-xtensa@linux-xtensa.org
      Signed-off-by: Rob Herring <robh@kernel.org>
  22. 24 August 2018, 1 commit
  23. 23 July 2018, 1 commit
  24. 10 July 2018, 1 commit
    • Revert "arm64: Use aarch64elf and aarch64elfb emulation mode variants" · 96f95a17
      Authored by Laura Abbott
      This reverts commit 38fc4248.
      
      Distributions such as Fedora and Debian do not package the ELF linker
      scripts with their toolchains, resulting in kernel build failures such
      as:
      
        |   CHK     include/generated/compile.h
        |   LD [M]  arch/arm64/crypto/sha512-ce.o
        | aarch64-linux-gnu-ld: cannot open linker script file ldscripts/aarch64elf.xr: No such file or directory
        | make[1]: *** [scripts/Makefile.build:530: arch/arm64/crypto/sha512-ce.o] Error 1
        | make: *** [Makefile:1029: arch/arm64/crypto] Error 2
      
      Revert back to the linux targets for now, adding a comment to the Makefile
      so we don't accidentally break this in the future.
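
      A hedged sketch of the restored selection (reconstructed; the
      surrounding big-endian handling is an assumption):

          ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
          LDFLAGS += -EB -maarch64linuxb  # not -maarch64elfb: distro ld lacks the elf scripts
          else
          LDFLAGS += -EL -maarch64linux   # not -maarch64elf
          endif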
      
      Cc: Paul Kocialkowski <contact@paulk.fr>
      Cc: <stable@vger.kernel.org>
      Fixes: 38fc4248 ("arm64: Use aarch64elf and aarch64elfb emulation mode variants")
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  25. 06 July 2018, 1 commit
    • arm64: remove no-op -p linker flag · 1a381d4a
      Authored by Greg Hackmann
      Linking the ARM64 defconfig kernel with LLVM lld fails with the error:
      
        ld.lld: error: unknown argument: -p
        Makefile:1015: recipe for target 'vmlinux' failed
      
      Without this flag, the ARM64 defconfig kernel successfully links with
      lld and boots on Dragonboard 410c.
      
      After digging through binutils source and changelogs, it turns out that
      -p is only relevant to ancient binutils installations targeting 32-bit
      ARM.  binutils accepts -p for AArch64 too, but it's always been
      undocumented and silently ignored.  A comment in
      ld/emultempl/aarch64elf.em explains that it's "Only here for backwards
      compatibility".
      
      Since this flag is a no-op on ARM64, we can safely drop it.
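
      Sketch of the change (reconstructed, not a verbatim diff):

          LDFLAGS_vmlinux := --no-undefined -X    # was: -p --no-undefined -X
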
      Acked-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Greg Hackmann <ghackmann@google.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  26. 05 July 2018, 2 commits
  27. 08 June 2018, 1 commit
  28. 01 June 2018, 1 commit
  29. 25 April 2018, 1 commit
  30. 09 March 2018, 1 commit
    • arm64/kernel: don't ban ADRP to work around Cortex-A53 erratum #843419 · a257e025
      Authored by Ard Biesheuvel
      Working around Cortex-A53 erratum #843419 involves special handling of
      ADRP instructions that end up in the last two instruction slots of a
      4k page, or whose output register gets overwritten without having been
      read. (Note that the latter instruction sequence is never emitted by
      a properly functioning compiler, which is why it is disregarded by the
      handling of the same erratum in the ld.bfd linker, which we rely on for
      the core kernel.)
      
      Normally, this gets taken care of by the linker, which can spot such
      sequences at final link time, and insert a veneer if the ADRP ends up
      at a vulnerable offset. However, linux kernel modules are partially
      linked ELF objects, and so there is no 'final link time' other than the
      runtime loading of the module, at which time all the static relocations
      are resolved.
      
      For this reason, we have implemented the #843419 workaround for modules
      by avoiding ADRP instructions altogether, by using the large C model,
      and by passing -mpc-relative-literal-loads to recent versions of GCC
      that may emit adrp/ldr pairs to perform literal loads. However, this
      workaround forces us to keep literal data mixed with the instructions
      in the executable .text segment, and literal data may inadvertently
      turn into an exploitable speculative gadget depending on the relative
      offsets of arbitrary symbols.
      
      So let's reimplement this workaround in a way that allows us to switch
      back to the small C model, and to drop the -mpc-relative-literal-loads
      GCC switch, by patching affected ADRP instructions at runtime:
      - ADRP instructions that do not appear at 4k relative offset 0xff8 or
        0xffc are ignored
      - ADRP instructions that are within 1 MB of their target symbol are
        converted into ADR instructions
      - remaining ADRP instructions are redirected via a veneer that performs
        the load using an unaffected movn/movk sequence.
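
      On the Makefile side, this lets the module build drop the flags the
      old workaround needed (a hedged reconstruction of the removed lines):

          ifeq ($(CONFIG_ARM64_ERRATUM_843419), y)
          KBUILD_CFLAGS_MODULE += -mcmodel=large                                  # removed
          KBUILD_CFLAGS_MODULE += $(call cc-option, -mpc-relative-literal-loads)  # removed
          endif
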
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: tidied up ADRP -> ADR instruction patching.]
      [will: use ULL suffix for 64-bit immediate]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  31. 07 March 2018, 1 commit
  32. 01 December 2017, 1 commit
    • arm64: ftrace: emit ftrace-mod.o contents through code · be0f272b
      Authored by Ard Biesheuvel
      When building the arm64 kernel with both CONFIG_ARM64_MODULE_PLTS and
      CONFIG_DYNAMIC_FTRACE enabled, the ftrace-mod.o object file is built
      with the kernel and contains a trampoline that is linked into each
      module, so that modules can be loaded far away from the kernel and
      still reach the ftrace entry point in the core kernel with an ordinary
      relative branch, as is emitted by the compiler instrumentation code
      dynamic ftrace relies on.
      
      In order to be able to build out of tree modules, this object file
      needs to be included into the linux-headers or linux-devel packages,
      which is undesirable, as it makes arm64 a special case (although a
      precedent does exist for 32-bit PPC).
      
      Given that the trampoline essentially consists of a PLT entry, let's
      not bother with a source or object file for it, and simply patch it
      in whenever the trampoline is being populated, using the existing
      PLT support routines.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  33. 03 November 2017, 1 commit
  34. 30 October 2017, 1 commit
    • arm64: prevent regressions in compressed kernel image size when upgrading to binutils 2.27 · fd9dde6a
      Authored by Nick Desaulniers
      Upon upgrading to binutils 2.27, we found that our lz4 and gzip
      compressed kernel images were significantly larger, resulting in 10 ms
      boot time regressions.
      
      As noted by Rahul:
      "aarch64 binaries uses RELA relocations, where each relocation entry
      includes an addend value. This is similar to x86_64.  On x86_64, the
      addend values are also stored at the relocation offset for relative
      relocations. This is an optimization: in the case where code does not
      need to be relocated, the loader can simply skip processing relative
      relocations.  In binutils-2.25, both bfd and gold linkers did this for
      x86_64, but only the gold linker did this for aarch64.  The kernel build
      here is using the bfd linker, which stored zeroes at the relocation
      offsets for relative relocations.  Since a set of zeroes compresses
      better than a set of non-zero addend values, this behavior was resulting
      in much better lz4 compression.
      
      The bfd linker in binutils-2.27 is now storing the actual addend values
      at the relocation offsets. The behavior is now consistent with what it
      does for x86_64 and what gold linker does for both architectures.  The
      change happened in this upstream commit:
      https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=1f56df9d0d5ad89806c24e71f296576d82344613
      Since a bunch of zeroes got replaced by non-zero addend values, we see
      the side effect of lz4 compressed image being a bit bigger.
      
      To get the old behavior from the bfd linker, "--no-apply-dynamic-relocs"
      flag can be used:
      $ LDFLAGS="--no-apply-dynamic-relocs" make
      With this flag, the compressed image size is back to what it was with
      binutils-2.25.
      
      If the kernel is using ASLR, there aren't additional runtime costs to
      --no-apply-dynamic-relocs, as the relocations will need to be applied
      again anyway after the kernel is relocated to a random address.
      
      If the kernel is not using ASLR, then presumably the current default
      behavior of the linker is better. Since the static linker performed the
      dynamic relocs, and the kernel is not moved to a different address at
      load time, it can skip applying the relocations all over again."
      
      Some measurements:
      
      $ ld -v
      GNU ld (binutils-2.25-f3d35cf6) 2.25.51.20141117
                          ^
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300652760 Oct 26 11:57 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 16932627 Oct 26 11:57 Image.lz4-dtb
      
      $ ld -v
      GNU ld (binutils-2.27-53dd00a1) 2.27.0.20170315
                          ^
      pre patch:
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 11:43 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 18159474 Oct 26 11:43 Image.lz4-dtb
      
      post patch:
      $ ls -l vmlinux
      -rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 12:06 vmlinux
      $ ls -l Image.lz4-dtb
      -rw-r----- 1 ndesaulniers eng 16932466 Oct 26 12:06 Image.lz4-dtb
      
      By Siqi's measurement w/ gzip:
      binutils 2.27 with this patch (with --no-apply-dynamic-relocs):
      Image 41535488
      Image.gz 13404067
      
      binutils 2.27 without this patch (without --no-apply-dynamic-relocs):
      Image 41535488
      Image.gz 14125516
      
      Any compression scheme should be able to get better results from the
      longer runs of zeros, not just GZIP and LZ4.
      
      10ms boot time savings isn't anything to get excited about, but users of
      arm64+compression+bfd-2.27 should not have to pay a penalty for no
      runtime improvement.
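
      The fix, as described above, guarded with kbuild's ld-option so that
      older linkers lacking the flag are unaffected:

          LDFLAGS_vmlinux += $(call ld-option, --no-apply-dynamic-relocs)
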
      Reported-by: Gopinath Elanchezhian <gelanchezhian@google.com>
      Reported-by: Sindhuri Pentyala <spentyala@google.com>
      Reported-by: Wei Wang <wvw@google.com>
      Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Suggested-by: Rahul Chaudhry <rahulchaudhry@google.com>
      Suggested-by: Siqi Lin <siqilin@google.com>
      Suggested-by: Stephen Hines <srhines@google.com>
      Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: added comment to Makefile]
      Signed-off-by: Will Deacon <will.deacon@arm.com>