3290 Commits

Peter Mitsis
3944b0cfc7 kernel: Extend thread user_options to 16 bits
Widens the thread user_options field from 8 bits to 16 bits to
provide more space for future values.

Also, as the size of this field has changed, the values for the
existing architecture-specific thread options have shifted from
the upper end of the old 8-bit field to the upper end of the
new 16-bit field.

Fixes #101034

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-22 08:40:17 +00:00
Peter Mitsis
924874baef kernel: Fix two k_condvar_wait() issues
1. When the timeout is K_NO_WAIT, the thread should not be added
   to the wait queue as that would otherwise cause the thread
   to wait until the next tick (which is not a no-wait situation).
2. Threads that were added to the wait queue AND did not receive
   a signal before timing out should not lock the supplied mutex.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-22 08:39:55 +00:00
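A sketch of the two guards described above, assuming a simplified wait path; pend_on_waitq() is a hypothetical helper, not the actual k_condvar_wait() source:

```c
#include <errno.h>
#include <zephyr/kernel.h>

/* Hedged sketch, not the actual kernel implementation. */
int condvar_wait_sketch(struct k_condvar *condvar, struct k_mutex *mutex,
			k_timeout_t timeout)
{
	k_mutex_unlock(mutex);

	/* Fix 1: with K_NO_WAIT, never join the wait queue; pending
	 * would make the thread wait until the next tick.
	 */
	if (K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
		return -EAGAIN;
	}

	int ret = pend_on_waitq(condvar, timeout); /* hypothetical helper */

	/* Fix 2: a thread that timed out without being signaled must
	 * not re-lock the supplied mutex.
	 */
	if (ret == -EAGAIN) {
		return ret;
	}

	k_mutex_lock(mutex, K_FOREVER);
	return ret;
}
```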
Tim Lin
2edece9d58 kernel: Add Kconfig option to disable LTO for kernel sources
Some SoCs require kernel code to be placed in RAM, which makes
link-time optimization (LTO) unsuitable for these files.
Disabling LTO allows the affected code to be linked as separate
objects and placed in specific memory regions.

Running kernel code from RAM can improve execution performance,
especially for timing-critical routines or context switch paths.

Signed-off-by: Tim Lin <tim2.lin@ite.corp-partner.google.com>
2026-01-21 17:04:50 +01:00
Sylvio Alves
f3deb7bed7 kernel: nothread: fix build when CONFIG_SYS_CLOCK_EXISTS=n
The k_timer API requires CONFIG_SYS_CLOCK_EXISTS to be enabled,
as timer.c is only compiled when this config is set. Guard the
timer-based k_sleep() implementation and fall back to the previous
busy-wait approach when no system clock exists.

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
2026-01-13 17:27:58 +01:00
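The guard pattern described might look like this (illustrative, not the verbatim nothread.c change; both helpers are hypothetical):

```c
#include <zephyr/kernel.h>

int32_t k_sleep_sketch(k_timeout_t timeout)
{
#ifdef CONFIG_SYS_CLOCK_EXISTS
	/* timer.c is compiled, so the k_timer-based path is usable */
	return sleep_with_timer(timeout);   /* hypothetical helper */
#else
	/* no system clock: fall back to the previous busy-wait approach */
	busy_wait_for(timeout);             /* hypothetical helper */
	return 0;
#endif
}
```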
Lauren Murphy
d01a2bc5a5 tests: subsys: llext: intel_adsp build fixes
Adds board overlays for Intel ADSP platforms to use
CONFIG_LLEXT_TYPE_ELF_RELOCATABLE instead of SHAREDLIB
(as xt-clang cannot link shared libs for Xtensa), exports
symbols used by Intel ADSP with the Xtensa toolchain, and
adds Xtensa MPU / MMU to the "no memory protection" config file.

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2026-01-09 17:08:24 -06:00
Vijay Sharma
83f1b8ba49 kernel: timer: Add timer observer hooks for extensibility
Introduce lifecycle observer callbacks (init, start, stop, expiry)
for k_timer using Zephyr's iterable sections pattern. This enables
external modules to extend timer functionality without modifying
kernel internals.

Signed-off-by: Vijay Sharma <vijshar@qti.qualcomm.com>
2026-01-09 14:25:31 -06:00
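The iterable-sections pattern referenced above could be used along these lines; the observer struct and hook names are assumptions for illustration, not the merged API:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>
#include <zephyr/sys/iterable_sections.h>

/* Hypothetical observer type; names are assumptions. */
struct k_timer_observer {
	void (*on_start)(struct k_timer *timer);
	void (*on_expiry)(struct k_timer *timer);
};

static void my_expiry_hook(struct k_timer *timer)
{
	printk("timer %p expired\n", timer);
}

/* External modules register observers without touching kernel code. */
STRUCT_SECTION_ITERABLE(k_timer_observer, my_observer) = {
	.on_start = NULL,
	.on_expiry = my_expiry_hook,
};

/* The kernel side would walk every registered observer at each
 * lifecycle point, e.g. on expiry:
 */
static void notify_expiry(struct k_timer *timer)
{
	STRUCT_SECTION_FOREACH(k_timer_observer, obs) {
		if (obs->on_expiry != NULL) {
			obs->on_expiry(timer);
		}
	}
}
```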
Peter Mitsis
669a8d0704 kernel: O(1) search for threads among CPUs
Instead of performing a linear search to determine if a given
thread is running on another CPU, or if it is marked as being
preempted by a metaIRQ on any CPU, do this in O(1) time.

On SMP systems, Zephyr already tracks the CPU on which a thread
executes (or last executed). This information is leveraged to
do the search in O(1) time.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-08 17:34:14 -06:00
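Conceptually, the change replaces a per-CPU scan with a direct lookup of the CPU id already stored in the thread; a hedged sketch using kernel-internal names:

```c
/* Hedged sketch: O(n) scan replaced by an O(1) lookup via the CPU id
 * that Zephyr already records in the thread on SMP builds.
 */
static bool thread_active_elsewhere_sketch(struct k_thread *thread)
{
	unsigned int cpu = thread->base.cpu;  /* CPU it runs (or last ran) on */

	return (_kernel.cpus[cpu].current == thread) &&
	       (cpu != _current_cpu->id);
}
```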
Emanuele Di Santo
a23cfc3f68 kernel: nothread: use k_timer to sleep instead of busy waiting
The current implementation of k_sleep(), when multi-threading
is disabled, busy waits using k_busy_wait() until the sleep timeout
has expired.

This patch aims to improve power efficiency of k_sleep() for
single-threaded applications by starting a timer (k_timer) and idling
the CPU until the timer interrupt wakes it up, thus avoiding
busy-looping.

Signed-off-by: Emanuele Di Santo <emdi@nordicsemi.no>
2026-01-08 17:33:28 -06:00
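A sketch of the timer-plus-idle approach for MULTITHREADING=n (simplified; the real patch handles timeout conversion and edge cases):

```c
#include <zephyr/kernel.h>

static volatile bool sleep_expired;

static void sleep_expiry(struct k_timer *timer)
{
	ARG_UNUSED(timer);
	sleep_expired = true;
}

K_TIMER_DEFINE(sleep_timer, sleep_expiry, NULL);

static void sleep_sketch(k_timeout_t timeout)
{
	sleep_expired = false;
	k_timer_start(&sleep_timer, timeout, K_NO_WAIT);

	/* Idle the CPU instead of busy-looping; the timer interrupt
	 * wakes it up. (The check/idle race here is what the
	 * k_timer_status_sync() entry below addresses with
	 * k_cpu_atomic_idle().)
	 */
	while (!sleep_expired) {
		k_cpu_idle();
	}
}
```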
Bjarki Arge Andreasen
6e4ef44847 kernel: poll: patch recursive lock in z_vrfy_k_poll
In z_vrfy_k_poll, there is a memory access check,
K_SYSCALL_MEMORY_WRITE, which is wrapped in a spinlock. It is the
same spinlock used in z_handle_obj_poll_events, which is called from
k_sem_give(), for example.

The K_SYSCALL_MEMORY_WRITE() macro conditionally calls LOG_ERR(),
which may call the UART console, which may call an API like
k_sem_give(). This causes a deadlock, since the already-locked
spinlock is relocked. With SPINLOCK_VALIDATE and ASSERTS enabled it
becomes a recursive lock instead: the validation fails, causing a
LOG_ERR(), causing a k_sem_give(), causing a relock... until the
stack overflows.

To solve the issue, only protect the copy of events to events_copy
with the spinlock. The content of events is not actually checked, and
bound is not shared, so there is no need to do the validation in a
critical section. The contents of events are shared, however, so they
must be copied in atomically.

Signed-off-by: Bjarki Arge Andreasen <bjarki.andreasen@nordicsemi.no>
2026-01-08 17:32:35 -06:00
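The shape of the fix, per the description above (a fragment, not the verbatim patch):

```c
	/* Validation may LOG_ERR() -> console -> k_sem_give(), so it
	 * must run OUTSIDE the spinlock to avoid re-locking it.
	 */
	K_OOPS(K_SYSCALL_MEMORY_WRITE(events,
				      num_events * sizeof(struct k_poll_event)));

	/* Only the snapshot needs the lock: the shared events array is
	 * copied into events_copy atomically; bound is not shared.
	 */
	key = k_spin_lock(&lock);
	(void)memcpy(events_copy, events,
		     num_events * sizeof(struct k_poll_event));
	k_spin_unlock(&lock, key);
```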
Thinh Le Cong
15cdf90bee include: zephyr: toolchain: suppress Go004 warning for inline functions
The IAR compiler may emit "Error[Go004]: Could not inline function"
when handling functions marked as always_inline or inline=forced,
especially in complex kernel code.

Signed-off-by: Thinh Le Cong <thinh.le.xr@bp.renesas.com>
2026-01-08 12:00:29 +00:00
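IAR diagnostics can typically be silenced with its diag_suppress pragma; the commit message does not show how the toolchain header applies it, so treat this snippet as an assumption:

```c
/* Assumption: guard the suppression to IAR builds only. */
#if defined(__IAR_SYSTEMS_ICC__)
#pragma diag_suppress=Go004
#endif
```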
Emanuele Di Santo
bc97315e0b kernel: timer: let k_timer_status_sync() idle the CPU when MT is off
This patch modifies k_timer_status_sync() to idle the CPU when MT
is disabled, instead of busy-looping. For this purpose, the spinlock
in the MULTITHREADING=n case has been reduced to an irq_lock(),
which works in tandem with k_cpu_atomic_idle() to ensure the atomicity
of enabling the IRQs and idling the CPU.

Signed-off-by: Emanuele Di Santo <emdi@nordicsemi.no>
2026-01-07 14:07:33 +01:00
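The irq_lock()/k_cpu_atomic_idle() pairing works like this sketch; the status flag is illustrative:

```c
/* Sketch: wait for a condition without busy-looping and without a
 * race between the check and the idle. k_cpu_atomic_idle() re-enables
 * interrupts and idles the CPU as one atomic operation.
 */
unsigned int key = irq_lock();

while (!timer_done) {           /* hypothetical status flag */
	k_cpu_atomic_idle(key); /* atomically unmask IRQs and idle */
	key = irq_lock();       /* re-lock before re-checking */
}
irq_unlock(key);
```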
Peter Mitsis
3affd0385e kernel: Fix race condition in z_time_slice()
Instead of directly calling the current thread-specific time slice
handler in z_time_slice(), we must call a saved copy of the handler
that was made when _sched_spinlock was still held. Otherwise there
is a small window of time where another CPU could change the handler
to NULL just before we call it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-03 10:18:53 +01:00
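The race-free shape: snapshot the handler while _sched_spinlock is still held, then invoke the snapshot after the lock is dropped (the field names here are assumptions):

```c
/* Sketch of the fix, not the verbatim patch. */
k_thread_timeslice_fn_t expired;
void *data;

k_spinlock_key_t key = k_spin_lock(&_sched_spinlock);
expired = _current->base.slice_expired;	/* field names are assumptions */
data = _current->base.slice_data;
k_spin_unlock(&_sched_spinlock, key);

/* Another CPU may NULL the live field now, but the snapshot stays valid. */
if (expired != NULL) {
	expired(_current, data);
}
```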
Peter Mitsis
c4e2db088f kernel: Add check to k_sched_time_slice_set()
When k_sched_time_slice_set() is called, the current time slice
should not be reset if the current thread is using thread-grained
time slicing. This is to maintain consistency with the already
established idea that thread-grained time slicing takes precedence
over the system-wide time slice size `slice_ticks`.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-03 10:18:53 +01:00
Peter Mitsis
3a0784fe22 kernel: Update thread_is_sliceable()
This fixes several minor items related to the priority or importance
of checks in determining whether the thread can be time sliced.

A thread that is prevented from running cannot be time sliced,
regardless of whether it was configured for thread-grained
time slicing or not. Nor can the idle thread be time sliced.

If the thread is configured for thread-grained time slicing, then
do not bother with the preemptible or priority threshold checks.
This maintains the same behavior, and just optimizes the checks.

If the thread is sliceable, we may as well return the size of the
tick slice since we are checking that information anyway. Thus, a
return value of zero (0) means that the thread is not sliceable,
and a value greater than zero (0) means that it is sliceable.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-03 10:18:53 +01:00
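Putting those rules together, the reworked check might look like this sketch; the helper and field names are assumptions:

```c
/* Sketch of the reordered checks; 0 = not sliceable, >0 = slice ticks. */
static int32_t thread_is_sliceable_sketch(struct k_thread *thread)
{
	/* A thread prevented from running, or the idle thread, is never
	 * sliceable, regardless of configuration.
	 */
	if (!z_is_thread_ready(thread) || z_is_idle_thread_object(thread)) {
		return 0;
	}

	/* Thread-grained slicing takes precedence: skip the preemptible
	 * and priority-threshold checks entirely.
	 */
	if (thread->base.slice_ticks != 0) {	/* field name is an assumption */
		return thread->base.slice_ticks;
	}

	/* Otherwise fall back to the system-wide slice, subject to the
	 * usual preemptibility/priority checks.
	 */
	if (thread_is_preemptible(thread) &&	/* hypothetical helpers */
	    prio_within_threshold(thread)) {
		return slice_ticks;		/* system-wide slice size */
	}

	return 0;
}
```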
Peter Mitsis
d05d9454bf kernel: Remove superfluous thread_is_sliceable() call
Within z_sched_ipi() there is no need for the thread_is_sliceable()
test, as z_time_slice() performs that check. Since, as a result,
thread_is_sliceable() is now only used within timeslicing.c, the
'static' keyword is applied to it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2026-01-03 10:18:53 +01:00
Peter Mitsis
3a8c9797ca kernel: Re-instate metaIRQ z_is_thread_ready() check
Re-instate a z_is_thread_ready() check on the preempted metaIRQ
thread before selecting it as the preferred next thread to
schedule. This code exists because of a corner case where it is
possible for the thread that was recorded as being pre-empted
by a meta-IRQ thread to be marked as not 'ready to run' when
the meta-IRQ thread(s) complete.

Such a scenario may occur if an interrupt ...
  1. suspends the interrupted thread, then
  2. readies a meta-IRQ thread, then
  3. exits
The resulting reschedule can leave the suspended interrupted
thread recorded as having been interrupted by a meta-IRQ thread.
There may be other scenarios too.

Fixes #101296

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-22 22:33:18 +01:00
Daniel Leung
51adafd3f3 kernel: mem_domain: remove extra newline character for logging
There is no need for the newline characters when using logging
macros. So remove them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-12-20 15:04:39 +01:00
Peter Mitsis
5dd36854fd kernel: Update clearing metairq_preempted record
If the thread being aborted or suspended was preempted by a metaIRQ
thread then clear the metairq_preempted record. In the case of
aborting a thread, this prevents a re-used thread from being
mistaken for a preempted thread. Furthermore, it removes the need
to test the recorded thread for readiness in next_up().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Peter Mitsis
36d195b717 kernel: MetaIRQ on SMP fix
When a cooperative thread (temporary or otherwise) is preempted by a
metaIRQ thread on SMP, it is no longer re-inserted into the readyQ.
This prevents it from being scheduled by another CPU while the
preempting metaIRQ thread runs.

Fixes #95081

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Peter Mitsis
653731d2e1 kernel: Adjust metairq preemption tracking bounds
Adjust the bounds for tracking metairq preemption to include the
case where the number of metairq threads matches the number of
cooperative threads. This is needed as a thread that is schedule
locked through k_sched_lock() is documented to be treated as a
cooperative thread. This implies that if such a thread is preempted
by a metairq thread that execution control must return to that
thread after the metairq thread finishes its work.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-10 10:32:50 +00:00
Vijay Sharma
432d9d2a6b tracing: trace timer calls
Add tracing support for timer expiry and stop function callbacks,
enabling measurement of callback execution duration and facilitating
debugging of cases where callbacks take longer than expected.

Signed-off-by: Vijay Sharma <vijshar@qti.qualcomm.com>
2025-12-09 22:40:13 -05:00
Daniel Leung
169304813a cache: move arch_mem_coherent() into cache subsys
arch_mem_coherent() is cache related so it is better to move it
under cache subsys. It is renamed to sys_cache_is_mem_coherent()
to reflect this change.

The only user of arch_mem_coherent() is Xtensa. However, it is
not an architecture feature. That's why it is moved to the cache
subsys.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-12-09 09:25:33 +01:00
Lingao Meng
409e1277b6 kernel: work: Use Z_WORK_DELAYABLE_INITIALIZER to replace dup code
Use `Z_WORK_DELAYABLE_INITIALIZER` to replace duplicated init code.

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
2025-12-06 11:37:21 -05:00
Peter Mitsis
864e648e68 kernel: Add ifdef guard around ipi_lock definition
The global variable ipi_lock is both local to the file ipi.c and
only used when CONFIG_SCHED_IPI_SUPPORTED is enabled. As such, its
definition should be wrapped with an ifdef.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-05 10:55:32 +02:00
Peter Mitsis
c08905ecc9 kernel: Add thread runtime stack safety
Adds support for thread runtime stack safety. This kernel feature
allows a developer to run enhanced stack usage checks on threads
such that if the amount of unused stack space drops below a thread's
configured threshold, it will invoke a custom handler/callback.

This can be used by monitoring software to log warnings, suspend
or abort threads, or even reboot the system.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 19:25:44 +00:00
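Monitoring code might use the feature along these lines; the handler signature shown is an assumption for illustration, not the merged interface:

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

/* Hypothetical callback invoked when a thread's unused stack space
 * drops below its configured threshold.
 */
static void stack_low_handler(struct k_thread *thread, size_t unused_bytes)
{
	printk("thread %p low on stack: %zu bytes unused\n",
	       thread, unused_bytes);

	/* Monitoring policy: log a warning, suspend, abort, or reboot. */
	k_thread_suspend(thread);
}
```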
Peter Mitsis
ce6c26a927 kernel: Simplify move_current_to_end_of_prio_q()
It is now more obvious that the move_current_to_end_of_prio_q() logic
is supposed to match that of k_yield() (without the schedule point).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
77ad7111e1 kernel: Rename move_thread_to_end_of_prio_q()
All instances of the internal routine move_thread_to_end_of_prio_q()
use the current thread. Renaming it to move_current_to_end_of_prio_q()
to reflect that.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
ffc6c8839b kernel: Rename z_move_thread_to_end_of_prio_q()
The routine z_move_thread_to_end_of_prio_q() has been renamed to
z_yield_testing_only() as it was both only used for test code
and always operated on the current thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Liu Qian
a2ca8b9b0a device: remove duplicate code
The API z_device_state_init() is already defined in init.c.

Signed-off-by: Liu Qian <liuqian.andy@picoheart.com>
2025-11-24 17:33:13 +01:00
Jamie McCrae
b128e51994 kernel: kconfig: Disable DEVICE_DEINIT_SUPPORT by default
This Kconfig option, which by its own admission covers a "very
specific case", defaulted to yes. That default pulls extra code into
drivers with this functionality and increases the driver struct size
for cases where the function isn't needed (i.e. everywhere, because
it's enabled by default). Therefore, change it to be opt-in rather
than opt-out.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2025-11-20 17:14:50 +00:00
Yong Cong Sin
3c5807f6ec arch: riscv: stacktrace: support stacktrace in early system init
Add support for stacktrace in dummy thread which is used to run
the early system initialization code before the kernel switches
to the main thread.

On RISC-V, the dummy thread will be running temporarily on the
interrupt stack, but currently we do not initialize the stack
info for the dummy thread, hence check the address against the
interrupt stack.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2025-11-18 17:38:22 -05:00
Yong Cong Sin
4f5f42fa69 kernel: thread: constify thread arg of read-only functions
Since these helper functions are read-only, mark the `thread`
arg as `const` so that we can pass const thread to it without
triggering warnings.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2025-11-18 17:38:22 -05:00
Ederson de Souza
d6071319b5 kernel/userspace: Dynamically allocate privileged stack after user stack
When ARM's CONFIG_BUILTIN_STACK_GUARD=y, the privileged stack is
expected to have a higher memory address than that of the normal user
stack. However, dynamically allocated stacks had it the other way
round: the privileged stack had a lower memory address.

This was probably not caught before because relevant tests, such as
`kernel.threads.dynamic_thread.stack.pool.alloc.user` run with no
hardware stack protection. If one were to test it on HW that has stack
protection, such as frdm_mcxn947 with CONFIG_HW_STACK_PROTECTION=y, they
would see it failing.

This patch naively assumes that ARC and RISC-V PMP will be happy with
the shuffling of user and privileged stack positions.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2025-11-13 23:20:45 +02:00
Nicolas Pitre
af7ae5d61f kernel: sched: plug assertion race in z_get_next_switch_handle()
Commit d4d51dc062 ("kernel:  Replace redundant switch_handle assignment
with assertion") introduced an assertion check that may be triggered
as follows by tests/kernel/smp_abort:

CPU0              CPU1              CPU2
----              ----              ----
* [thread A]      * [thread B]      * [thread C]
* irq_offload()   * irq_offload()   * irq_offload()
* k_thread_abort(thread B)
                  * k_thread_abort(thread C)
                                    * k_thread_abort(thread A)
* thread_halt_spin()
* z_is_thread_halting(_current) is false
* while (z_is_thread_halting(thread B));
                  * thread_halt_spin()
                  * z_is_thread_halting(_current) is true
                  * halt_thread(_current...);
                  * z_dummy_thread_init()
                    - dummy_thread->switch_handle = NULL;
                    - _current = dummy_thread;
                  * while (z_is_thread_halting(thread C));
* z_get_next_switch_handle()
* z_arm64_context_switch()
* [thread A is dead]
                                    * thread_halt_spin()
                                    * z_is_thread_halting(_current) is true
                                    * halt_thread(_current...);
                                    * z_dummy_thread_init()
                                      - dummy_thread->switch_handle = NULL;
                                      - _current = dummy_thread;
                                    * while(z_is_thread_halting(thread A));
                  * z_get_next_switch_handle()
                    - old_thread == dummy_thread
                    - __ASSERT(old_thread->switch_handle == NULL) OK
                  * z_arm64_context_switch()
                    - str x1, [x1, #___thread_t_switch_handle_OFFSET]
                  * [thread B is dead]
                  * %%% dummy_thread->switch_handle no longer NULL %%%
                                    * z_get_next_switch_handle()
                                      - old_thread == dummy_thread
                                      - __ASSERT(old_thread->
                                             switch_handle == NULL) FAIL

This needs at least 3 CPUs and the perfect timing for the race to work as
sometimes CPUs 1 and 2 may be close enough in their execution paths for
the assertion to pass. For example, QEMU is OK while FVP is not.
Also adding sufficient debug traces can make the issue go away.

This happens because the dummy thread is shared among concurrent CPUs.
It could be argued that a per-CPU dummy thread structure would be the
proper solution to this problem. However the purpose of a dummy thread
structure is to provide a dumping ground for the scheduler code to work
while the original thread structure might already be reused and
therefore can't be clobbered as demonstrated above. But the dummy
structure _can_ be clobbered to some extent and it is not worth the
additional memory footprint implied by per-CPU instances. We just have
to ignore some validity tests where the dummy thread is concerned.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
Nicolas Pitre
1c8f1c8647 kernel: sched: use clearly invalid value for halting thread switch_handle
When a thread halts and dummifies, set its switch_handle to (void *)1
instead of the thread pointer itself. This maintains the non-NULL value
required to prevent deadlock in k_thread_join() while making it obvious
that this value is not meant to be dereferenced or used.

The switch_handle should be an opaque architecture-specific value and
not be assumed to be a thread pointer in generic code. Using 1 makes
the intent clearer.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
Carles Cufi
cd8e773b32 kernel: events: Depend on multithreading
Kernel events depend on multithreading being enabled, and mixing them
with a non-multithreaded build gives linker failures internal to
events.c. To avoid this, make events depend on multithreading.

```
libkernel.a(events.c.obj): in function `k_event_post_internal':
175: undefined reference to `z_sched_waitq_walk'
events.c:183: undefined reference to `z_sched_wake_thread'
events.c:191: undefined reference to `z_reschedule'
libkernel.a(events.c.obj): in function `k_sched_current_thread_query':
kernel.h:216: undefined reference to `z_impl_k_sched_current_thread_query'
libkernel.a(events.c.obj): in function `k_event_wait_internal':
events.c:312: undefined reference to `z_pend_curr'
```

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2025-10-30 15:13:38 +02:00
Anas Nashif
303af992e5 style: fix 'if (' usage in cmake files
Replace with 'if(' and 'else(' per the cmake style guidelines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-29 11:44:13 +02:00
TaiJu Wu
d4d51dc062 kernel: Replace redundant switch_handle assignment with assertion
The switch_handle for the outgoing thread is expected to be NULL
at the start of a context switch.
The previous code performed a redundant assignment to NULL.

This change replaces the assignment with an __ASSERT(). This makes the
code more robust by explicitly enforcing this precondition, helping to
catch potential scheduler bugs earlier.

Also, the switch_handle pointer is used to check a thread's state during a
context switch. For dummy threads, this pointer was left uninitialized,
potentially holding an unexpected value.

Set the handle to NULL during initialization to ensure these threads are
handled safely and predictably.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-25 15:59:29 +03:00
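In outline, the change swaps a defensive store for a precondition check (sketch):

```c
/* Before: redundant defensive store at context-switch entry. */
old_thread->switch_handle = NULL;

/* After: enforce the precondition, catching scheduler bugs early. */
__ASSERT(old_thread->switch_handle == NULL,
	 "outgoing thread's switch_handle must be NULL");
```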
Anas Nashif
e23d663b85 tracing: ctf: add condition variables
Add hooks for condition variables.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-25 15:59:19 +03:00
Anas Nashif
a5728add11 kernel: msgq: return once to simplify tracing
Return once, simplifying the tracing macros.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-25 15:59:19 +03:00
TaiJu Wu
91f1acbb85 kernel: Add more debug info and thread checking in run queue
1. There is debug info within k_sched_unlock so we should add the
   same debug info to k_sched_lock.

2. A thread in the run queue should be a normal or metairq thread; we
   should check that it is not the dummy thread.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-24 13:26:15 -04:00
Fabio Baltieri
700a1a5a28 lib, kernel: use single evaluation min/max/clamp
Replace all in-function instances of MIN/MAX/CLAMP with the single
evaluation version min/max/clamp.

There are probably no race conditions in these files, but the single
evaluation ones save a couple of instructions each, so they should save
a few code bytes and potentially perform better; they should therefore
be preferred in general.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2025-10-24 01:10:40 +03:00
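The distinction in brief: the uppercase macros expand each argument twice, while single-evaluation helpers evaluate each argument exactly once. A standalone illustration of the principle (not Zephyr's actual util.h definitions):

```c
#include <stdint.h>

/* Classic double-evaluation macro: each argument expands twice. */
#define MIN_MACRO(a, b) (((a) < (b)) ? (a) : (b))

/* Single-evaluation equivalent. */
static inline uint32_t min_u32(uint32_t a, uint32_t b)
{
	return (a < b) ? a : b;
}

static uint32_t counter;

static uint32_t next(void)
{
	return counter++;	/* side effect */
}

void demo(void)
{
	/* next() may run twice here (condition plus selected branch)... */
	uint32_t a = MIN_MACRO(next(), 10U);

	/* ...but exactly once here, usually in fewer instructions. */
	uint32_t b = min_u32(next(), 10U);

	(void)a;
	(void)b;
}
```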
Nicolas Pitre
b5363d5fff kernel: usage: Fix CPU stats retrieval in z_sched_cpu_usage()
The z_sched_cpu_usage() function was incorrectly using _current_cpu
instead of the requested cpu_id parameter when retrieving CPU usage
statistics. This caused it to always return stats from the current CPU
rather than the specified CPU.

This bug manifested in SMP systems when k_thread_runtime_stats_all_get()
looped through all CPUs - it would get stats from the wrong CPU for
each iteration, leading to inconsistent time values. For example, in
the times() POSIX function, this caused time to appear to move backwards:

  t0: utime: 59908
  t1: utime: 824

The fix ensures that:
1. The cpu pointer is set to &_kernel.cpus[cpu_id] (the requested CPU)
2. The check for "is this the current CPU" is correctly written as
   (cpu == _current_cpu)

This fixes the portability.posix.muti_process.newlib test failure
on FVP SMP platforms where times() was reporting backwards time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-10-22 09:04:13 +02:00
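The corrected logic, in outline (a sketch; locking and the actual stats gathering are elided):

```c
/* Sketch of the fix in z_sched_cpu_usage(). */
struct _cpu *cpu = &_kernel.cpus[cpu_id];	/* the REQUESTED CPU */

/* ... gather cpu->usage stats under the appropriate lock ... */

/* Only add the in-flight delta when the requested CPU happens to be
 * the one executing this code.
 */
if (cpu == _current_cpu) {
	/* account for usage accumulated since the last switch-in */
}
```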
Daniel Leung
38d49efdac kernel: mem_domain: keep track of threads only if needed
Adds a new kconfig CONFIG_MEM_DOMAIN_HAS_THREAD_LIST so that
only the architectures that need to keep track of threads in
memory domains will have the necessary list struct inside
the memory domain structs. This saves a few bytes for those
archs not needing it.

Also rename the struct fields to be more descriptive of what
they are.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-21 22:54:44 +03:00
Nicolas Pitre
8d1da57d57 kernel: mmu: k_mem_page_frame_evict() fix locking typo
... when CONFIG_DEMAND_PAGING_ALLOW_IRQ is set.

Found during code inspection. k_mem_page_frame_evict() is otherwise
rarely used.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-10-21 22:53:04 +03:00
Anas Nashif
6240b0ddb9 kernel: set DYNAMIC_THREAD_STACK_SIZE to 4096 for coverage
Increase stack sizes to allow coverage to complete.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-14 17:32:46 -04:00
Anas Nashif
f22a0afc74 testsuite: coverage: Support semihosting
Use semihosting to collect coverage data instead of dumping data to
serial console.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-14 17:32:46 -04:00
Łukasz Stępnicki
6571f4e1bc kernel: work: work timeout handler uninitialized variables fix
The work and handler pointers are local and not initialized.
Initialize them to NULL to avoid the maybe-uninitialized compiler error.

Signed-off-by: Łukasz Stępnicki <lukasz.stepnicki@nordicsemi.no>
2025-10-10 12:55:06 -04:00
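The fix amounts to initializing the locals (sketch):

```c
/* NULL-initialize so -Werror=maybe-uninitialized cannot trigger. */
struct k_work *work = NULL;
k_work_handler_t handler = NULL;
```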
Andrzej Puzdrowski
eb931d425f kernel/Kconfig.init: update description of SOC_RESET_HOOK
Updated the description of the conditions and assumptions under which
the soc_reset_hook is executed.

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2025-10-07 12:50:10 +02:00
Andrzej Puzdrowski
418eed0f90 arch/arm: introduce the pre-stack/RAM init hook
Introduce a hook for customizing reset.S code even before the stack
is initialized or RAM is accessed. The hook can be enabled using
CONFIG_SOC_EARLY_RESET_HOOK=y.
The hook is implemented by a soc_early_reset_hook() function which
should be provided by custom code.

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2025-10-07 12:50:10 +02:00