Commit Graph

25 Commits

Author SHA1 Message Date
Travis Geiselbrecht
50864eda02 [arch][arm64] add routine to read the boot EL 2025-10-14 01:38:22 -07:00
Travis Geiselbrecht
e4d65228b5 [mp] restructure the sequence of how cpus are brought up
- Move a bit of the shared logic of secondary bootstrapping into a new
  function, lk_secondary_cpu_entry_early(), which sets the current cpu
  pointer before calling the first half of the secondary LK_INIT
  routines.
- Create the per cpu idle threads on the main cpu instead of on the
  secondaries as they come up.
- Tweak all of the SMP capable architectures to use this new path.
- Move the top level mp routines into a separate file top/mp.c
- More correctly ifdef out additional SMP code.
2025-10-12 19:47:33 -07:00
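
For reference, a minimal sketch of the shared secondary-entry helper described in the commit above, assuming hypothetical names for everything except lk_secondary_cpu_entry_early() itself:

    /* Sketch only: the helpers called here are assumptions, not lk's actual APIs. */
    void arch_set_current_cpu_num(unsigned int cpu_num);      /* assumed helper */
    void lk_init_run_secondary_early(unsigned int cpu_num);   /* assumed helper */

    void lk_secondary_cpu_entry_early(unsigned int cpu_num) {
        /* establish the current-cpu pointer before anything else runs on this cpu */
        arch_set_current_cpu_num(cpu_num);

        /* run the first half of the secondary LK_INIT routines; the idle
         * thread for this cpu was already created by the main cpu */
        lk_init_run_secondary_early(cpu_num);
    }
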
Travis Geiselbrecht
e47183725d [arch][arm64] move secondary cpu entry point to separate function
- Make the secondary entry point a logically separate function, though
  declared in the same file.
- Add a trick where the kernel base + 4 is the secondary entry point.
  Not really useful except that it makes it easy to compute the offset
  elsewhere.
- Change the entry point to arm64_reset and move _start to the linker
  script, which is what most other arches do.
- While in the linker script, make sure the text segment is aligned
  on MAXPAGESIZE, though it doesn't make any real difference currently.
- Generally clean up the assembly in start.S with newer macros from
  Fuchsia, and avoid using ldr X, =value as much as possible.
- Fix and make sure arm64 can build and run with WITH_SMP set to false.
  Add a new no-smp project to test this.

Note this will likely break systems where all of the cpus enter the
kernel simultaneously, which we can fix if that becomes an issue.
Secondary code now completely assumes the cpu number is passed in x0.
If needed, this can be emulated with platform-specific trampoline code
that then jumps into the secondary entry point, instead of making the
arch code deal with all cases.
2025-10-12 19:47:33 -07:00
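
As a rough illustration of the kernel base + 4 convention and the x0 expectation described above, a platform-side sketch; kernel_base_phys and platform_start_cpu() are assumed names, not code from the commit:

    #include <stdint.h>

    extern uintptr_t kernel_base_phys;   /* assumed: physical load address of the kernel */
    int platform_start_cpu(unsigned int cpu_num, uintptr_t entry, uintptr_t arg); /* assumed */

    static int start_secondary(unsigned int cpu_num) {
        /* the secondary entry point sits at kernel base + 4 */
        uintptr_t secondary_entry = kernel_base_phys + 4;

        /* pass the cpu number as the argument so it arrives in x0,
         * which the secondary entry code expects */
        return platform_start_cpu(cpu_num, secondary_entry, cpu_num);
    }
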
Travis Geiselbrecht
91128ad729 [arch][arm64] clean up how secondary cpus are initialized and tracked
- Add a percpu structure for each cpu, akin to x86-64 and riscv. Pointed
  to by x18, which is now reserved for this in the kernel. Tweaked
  exception and context switch routines to leave x18 alone.
- Remove the cpu-trapping spinlock logic that is probably unused in
  mainline. (A new version of it can be added back if necessary.)
- Switch the fdtwalk helper to the newer, cleaner way of initializing
  secondaries via the PSCI CPU_ON argument, which should be pretty
  standard on modern implementations. (Possibly an issue with old
  firmware.)
- Remove the notion of computing the cpu ID from the Affinity levels,
  which doesn't really work properly on modern ARM CPUs, which have
  more or less abandoned the logical meaning of AFFn.
2025-10-12 19:47:33 -07:00
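
A hedged sketch of the x18-based percpu scheme from the commit above; the structure layout and accessor names are assumptions, only the x18 convention comes from the commit message:

    #include <stdint.h>

    /* Sketch: per-cpu state reachable through x18, which is reserved in the kernel. */
    struct arm64_percpu {
        uint32_t cpu_num;
        /* ... other per-cpu state ... */
    };

    static inline struct arm64_percpu *arm64_read_percpu_ptr(void) {
        struct arm64_percpu *p;
        __asm__("mov %0, x18" : "=r"(p));
        return p;
    }

    static inline uint32_t curr_cpu_num_sketch(void) {
        return arm64_read_percpu_ptr()->cpu_num;
    }
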
Travis Geiselbrecht
86f85453b1 [arch][arm64] start to clean up cpu initialization
More definitively set up each cpu's SCTLR_EL1 instead of relying on any
default values being present. Also set all RES1 bits to 1 and configure
the rest according to what is useful at the moment, generally giving the
maximum amount of privileges to EL1 and EL0.
2025-10-12 19:47:33 -07:00
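
To illustrate the kind of explicit SCTLR_EL1 programming described above, a sketch that composes the register from RES1 bits plus whatever feature bits are wanted and installs it; the constants are placeholders, not the values the commit actually uses:

    #include <stdint.h>

    #define SCTLR_EL1_RES1_BITS     0ULL  /* placeholder: bits architecturally required to be 1 */
    #define SCTLR_EL1_FEATURE_BITS  0ULL  /* placeholder: MMU/cache/alignment bits as desired */

    static void arm64_setup_sctlr_el1(void) {
        uint64_t sctlr = SCTLR_EL1_RES1_BITS | SCTLR_EL1_FEATURE_BITS;

        /* write the register explicitly instead of trusting the reset value */
        __asm__ volatile("msr sctlr_el1, %0\n"
                         "isb\n" :: "r"(sctlr));
    }
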
Travis Geiselbrecht
e739abc490 [kernel] tweak a few thread apis to take a const pointer
A bit of reformatting on some ARM code while I was touching it.
2025-10-01 20:56:06 -07:00
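
The const-pointer tweak has roughly this shape; thread_name() here is a hypothetical example, not necessarily one of the APIs the commit touched:

    typedef struct thread thread_t;

    /* before: a read-only accessor took a mutable pointer */
    const char *thread_name_old(thread_t *t);

    /* after: the accessor takes a const pointer since it does not modify the thread */
    const char *thread_name(const thread_t *t);
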
Travis Geiselbrecht
fff0f2a740 [arch][arm64] add support for 16k pages 2025-08-31 21:47:32 -07:00
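
For context on the 16k granule: with 16KB pages the page offset is 14 bits and each table level holds 2048 eight-byte descriptors, so each level resolves 11 bits of the virtual address. A quick back-of-the-envelope check, not code from the commit:

    #include <stdio.h>

    int main(void) {
        const unsigned page_size = 16384;                 /* 16KB granule */
        const unsigned page_shift = 14;                   /* log2(16384) */
        const unsigned entries_per_table = page_size / 8; /* 8-byte descriptors -> 2048 */
        const unsigned bits_per_level = 11;               /* log2(2048) */

        printf("page offset: %u bits, %u entries/table, %u bits/level\n",
               page_shift, entries_per_table, bits_per_level);
        return 0;
    }
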
Travis Geiselbrecht
52fa818e21 [arch][arm64] remove an unnecessary call to arm64_el3_to_el1
The existing arm64_elx_to_el1 already handles dropping the primary and
any secondary cpu down to el1 by the time this code path is reached.
2024-11-10 03:39:34 +00:00
Travis Geiselbrecht
a6ddffd80b [arch][warnings] fix -Wmissing-declarations warnings 2021-10-21 23:08:38 -07:00
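
The typical shape of a -Wmissing-declarations fix, shown for illustration only (not the specific functions this commit touched):

    /* -Wmissing-declarations fires when a non-static function is defined
     * without a prior declaration. Fix by declaring it in a header that the
     * definition includes, or by marking a file-local function static. */

    void foo_do_work(void);     /* declaration, normally in a header */

    void foo_do_work(void) {    /* definition no longer triggers the warning */
        /* ... */
    }
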
vannapurve
945cd5ecdb [ARCH][ARM64] Dump more information during aborts
1) Decode the FSC and dump a more human-readable status.
2) Add support for stack unwinding based on the arm64 procedure call
standard and its frame pointer usage.
3) Enable compiler options that keep the frame pointer, so frame
pointers are available even with higher optimization levels enabled.

Signed-off-by: vannapurve <vannapurve@google.com>
2020-10-13 16:16:15 -07:00
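
A minimal sketch of frame-pointer based unwinding per the AAPCS64 convention the commit refers to, where each frame record is the pair {previous fp, saved lr}; the validity check below is a placeholder (a real kernel would check the frame against the thread's stack bounds), and the walk assumes the code was built so the frame pointer is not omitted:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* placeholder sanity check; a real implementation validates against stack bounds */
    static bool is_plausible_frame(uintptr_t fp) {
        return fp != 0 && (fp & 0xf) == 0;
    }

    static void dump_backtrace(uintptr_t fp, unsigned int max_frames) {
        for (unsigned int i = 0; i < max_frames && is_plausible_frame(fp); i++) {
            const uintptr_t *frame = (const uintptr_t *)fp;
            uintptr_t lr = frame[1];        /* saved return address */
            if (lr == 0)
                break;
            printf("frame %u: lr 0x%lx\n", i, (unsigned long)lr);
            fp = frame[0];                  /* follow the chain to the caller's record */
        }
    }
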
Travis Geiselbrecht
f371fa246b [arch] move the atomic ops into a separate header
Now you need to include arch/atomic.h to get to the atomic routines.
This simplifies a recursion issue in the way arch/ops.h included
arch_ops. It also generally makes things cleaner.
2020-05-16 15:05:34 -07:00
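
A hedged usage sketch of the new include; the atomic_add() signature shown is an assumption about lk's API rather than something taken from the header:

    #include <arch/atomic.h>   /* atomic routines now live here, not arch/ops.h */

    static volatile int counter;

    static void bump_counter(void) {
        /* assumed signature: int atomic_add(volatile int *ptr, int val) */
        atomic_add(&counter, 1);
    }
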
Travis Geiselbrecht
cba9e47987 [license] replace the longer full MIT license with a shorter one
Used scripts/replacelic. Everything seems to build fine.
2019-07-05 17:22:23 -07:00
Travis Geiselbrecht
d8fa82cb91 [formatting] run everything through codestyle
Almost nothing changes here except moving braces to the same line as the
function declaration. Everything else is largely whitespace changes and
a few dangling files with tab indents.

See scripts/codestyle
2019-06-19 21:02:24 -07:00
Travis Geiselbrecht
1b7a28efb8 [include][lk] fixup lk/ include path move 2019-06-19 19:46:11 -07:00
Travis Geiselbrecht
c705e4d0ff [arch][arm64] add multi-aspace and general bugfixes to arm64 2016-02-23 21:07:17 -08:00
Travis Geiselbrecht
e7e894900f [arch][arm64] get compiling again 2015-09-20 12:13:06 -07:00
Arve Hjønnevåg
bd052a3507 [arch][arm64] Fix fiq support
Enable fiqs at boot, and during exceptions that can trigger a
context switch.

This fixes two problems. It avoids deadlock in code that uses spinlocks
with fiqs enabled as one cpu could be holding that spinlock and get
interrupted by an fiq, while another cpu is blocked trying to lock that
spinlock with fiqs disabled. This deadlocks if the fiq is delivered to
both these cpus and the second cpu is responsible to clearing the
interrupt.

Also, since thread_preempt can return with fiqs enabled,
regrestore_short could get interrupted by an fiq which would then
corrupt elr_el1 and spsr_el1.

Change-Id: I427f39ff94514866bf87f48393d145b7f1723502
2015-06-03 15:58:55 -07:00
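
As a small illustration of the fix side of the commit above: unmasking FIQs together with IRQs, so a cpu spinning on a lock can still service the fiq it is responsible for clearing. The helper name is an assumption; the DAIF immediate encoding (bit 1 = IRQ mask, bit 0 = FIQ mask) is architectural:

    /* Sketch: enable IRQs and FIQs together rather than IRQs alone. */
    static inline void arch_enable_irqs_and_fiqs(void) {
        __asm__ volatile("msr daifclr, #3" ::: "memory");  /* clear the I and F masks */
    }
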
Arve Hjønnevåg
645efc77a8 [arch][arm/arm64] Validate that start.S used the cpu number that arch_curr_cpu_num returns.
Change-Id: I39e33214e20668c7c5e739c5bc84fa73d777b764
2015-05-13 20:21:05 -07:00
Arve Hjønnevåg
3302ea6002 [arch][arm/arm64] Don't wait for secondary cpus to boot
There are several use cases for this:
- Generic kernel on a system with fewer cpus.
- Systems with two clusters that cannot run concurrently.
- LK as the secure os on a system where secondary cpus do not
  boot until the non-secure os boots.

Change-Id: I17917944c485ff4ac581c159b4abba05471ee5b8
2015-05-13 20:20:39 -07:00
Arve Hjønnevåg
47e5b7101f [arch][arm64] SMP support
Change-Id: Ieb9dec2ad64b9b04d51da15a17b9e7c4df50460b
2015-05-13 20:20:35 -07:00
Arve Hjønnevåg
b22c0c8d5b [arch][arm64] Add mmu support
Change-Id: I52aa2071bf3b2d1a03b01ab6b1f32a809a3ebebe
2015-03-19 18:01:12 -07:00
Travis Geiselbrecht
ee9d2927ad [arm] add ability to pass and generically read up to 4 boot args from whoever loaded lk
- Extend arch_chain_load() to pass 4 args
2014-11-21 15:50:18 -08:00
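
A hedged sketch of consuming the generically passed boot args; the lk_boot_args array name is an assumption about where the boot code stores them:

    #include <stdio.h>

    extern unsigned long lk_boot_args[4];   /* assumed: filled in early in boot */

    static void print_boot_args(void) {
        for (int i = 0; i < 4; i++)
            printf("boot arg %d: 0x%lx\n", i, lk_boot_args[i]);
    }
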
Travis Geiselbrecht
ec69757a59 [arch] relocate binary to proper physical location at boot, add arch_chain_load
- In arm start.S, calculate and move the current binary to the proper
  physical location before enabling the mmu.
- Add arch_chain_load, which does the necessary translations from virtual
  to physical, tries to gracefully shut the system down, and branches into
  the loaded binary.
2014-08-12 16:21:27 -07:00
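
For reference, a usage sketch of arch_chain_load() in its four-argument form; the prototype below is inferred from the commit messages above rather than copied from the header:

    /* assumed prototype: entry point plus four args handed to the loaded image */
    void arch_chain_load(void *entry, unsigned long arg0, unsigned long arg1,
                         unsigned long arg2, unsigned long arg3);

    static void boot_next_image(void *entry_virt) {
        /* arch_chain_load translates the entry point from virtual to physical,
         * gracefully shuts the system down, and branches into the image */
        arch_chain_load(entry_virt, 0, 0, 0, 0);
    }
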
Travis Geiselbrecht
25a78c5225 [lib][heap] have the heap pull pages out of the vm, if present 2014-07-11 18:11:58 -07:00
Travis Geiselbrecht
4ec1bac774 [arch][arm64] initial port to armv8-a (aarch64) 2014-01-26 22:52:16 -08:00