Compare commits


118 Commits

Author SHA1 Message Date
Vinayak Kariappa Chettimada
4ebecae78a Bluetooth: Controller: Fix connection update interval_us variables
Fix the connection update microsecond interval variable data
type to use 32 bits, so that a value of up to 2000 seconds, i.e.
a 4 second interval with 499 peripheral latency, can be stored.

Regression in commit abfe5f17a9 ("Bluetooth: Controller:
1 ms connection").

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 0a4480ce61)
2025-07-11 11:24:10 -10:00
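As a quick sanity check of the figure quoted above, here is a small, hypothetical C sketch (not the controller's actual code) showing that the widest value mentioned, a 4 second interval with a peripheral latency of 499, amounts to 2000 seconds, which overflows a 16-bit microsecond count but fits in a 32-bit one.

```c
#include <assert.h>
#include <stdint.h>

int main(void)
{
	/* Values quoted in the commit message above. */
	uint32_t interval_us = 4U * 1000U * 1000U; /* 4 s interval, in microseconds */
	uint32_t latency = 499U;                   /* peripheral latency */

	/* Longest stored value: interval * (latency + 1) = 2000 s = 2 * 10^9 us */
	uint64_t value_us = (uint64_t)interval_us * (latency + 1U);

	assert(value_us == 2000000000ULL);
	assert(value_us > UINT16_MAX);  /* does not fit in 16 bits */
	assert(value_us <= UINT32_MAX); /* fits in a 32-bit variable */

	return 0;
}
```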
Glenn Andrews
5e24e48342 boards: disco_l475_iot1: fix arduino_i2c assignment
0f05f58bf5 assigned `arduino_i2c` to the incorrect pins for the Arduino UNO connector. This assigns them back to the correct pins.

Signed-off-by: Glenn Andrews <glenn.andrews.42@gmail.com>
(cherry picked from commit a2da571172)
2025-07-10 06:27:54 -10:00
Benjamin Cabé
6753a86f27 doc: _templates: show correct version in sidebar in case of releases
Ensure that the version indicator in the sidebar is correct in the case of releases.

Fixes zephyrproject-rtos/zephyr#91799.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit bbfaf8adec)
2025-06-23 13:04:19 -07:00
Fabian Blatz
aabd3a1826 modules: lvgl: Register print callback after lv_init
Move initialization of the print callback handler to after calling lv_init,
as the latter zeroes the global callback structure.

Signed-off-by: Fabian Blatz <fabianblatz@gmail.com>
(cherry picked from commit 5cffb8e5a6)
2025-06-05 13:07:45 -07:00
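A minimal sketch of the ordering described above, assuming LVGL's logging API with LV_USE_LOG enabled; the callback signature shown matches LVGL v9 (older versions take only the buffer argument), and the handler body is a placeholder rather than the actual Zephyr glue code.

```c
#include <lvgl.h>

/* Placeholder log callback; forward to printk()/console in a real port. */
static void my_log_cb(lv_log_level_t level, const char *buf)
{
	(void)level;
	(void)buf;
}

void display_setup(void)
{
	/* lv_init() zeroes LVGL's globals, including the registered log
	 * callback, so register the callback only after calling it. */
	lv_init();
	lv_log_register_print_cb(my_log_cb);
}
```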
Henrik Brix Andersen
5bef9327b0 tests: drivers: virtualization: qemu_kvm_arm64.overlay: fix SPDX license
"UNLICENSED" is not a valid SPDX License Identifier. Remove it from the
list and just keep "Apache-2.0".

Fixes: #89413

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit b240f11592)
2025-06-03 21:40:31 -07:00
Jukka Rissanen
7c48241e5e net: http: server: Select POSIX_C_LIB_EXT instead of FNMATCH
Selecting CONFIG_POSIX_C_LIB_EXT will bring in support for the fnmatch() function.
The old CONFIG_FNMATCH is deprecated.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 0f90affcdf)
2025-06-03 21:39:52 -07:00
Jukka Rissanen
63d789ac0b net: http: server: The detail length of wildcard detail was wrong
The path length of the detail resource was not set properly.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 33cf7dc78a)
2025-06-03 21:39:52 -07:00
Krzysztof Chruściński
c08390df71 drivers: serial: nrfx_uarte: Fix use of PM_DEVICE_ISR_SAFE
PM_DEVICE_ISR_SAFE shall not be used when the non-asynchronous API is used,
because RX is disabled in the suspend action and that takes a relatively
long time. In the case of PM_DEVICE_ISR_SAFE it is done with interrupts
disabled. RX is not used at all if the disable-rx property is set, and in
that case PM_DEVICE_ISR_SAFE can be used.

Added a macro which determines whether ISR-safe mode can be used.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit e0f5241b28)
2025-05-23 11:44:33 -07:00
Luis Ubieda
d4374ed2cf sensor: adxl345: Disable Sensor Streaming by default
Have the application enable this feature explicitly, so that
simple applications do not need to disable it to get the
expected behavior.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
0f9a0f4d00 sensor: adxl345: Only enable FIFO Stream when Sensor Stream is enabled
Otherwise, with its default configuration (25 Hz, 32-level FIFO),
individual samples could be up to 1 second old.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
80b607dd8f sensor: adxl345: Fix decoder for non-streaming mode
The following fixes have been applied to this decoder:
- The Q-scale factor was fixed, both for full-scale and non
full-scale modes.
- The decoded data type is struct sensor_three_axis_data, as
it should be for the read/decode API.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
e465e6168e sensor: adxl345: Add get_size_info API
Used by the sensor shell in order to retrieve values; otherwise
it crashes.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Dong Wang
0fb81dec80 logging: Assign IDs to log backends early during log_core_init
This commit moves the assignment of backend IDs from the 'z_log_init'
function to the earlier 'log_core_init' function. This ensures that
backend IDs are assigned before they are used in the 'log_backend_enable'
function, preventing incorrect settings of log dynamic filters.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
(cherry picked from commit 3fac83151d)
2025-05-09 11:59:59 -07:00
Benjamin Cabé
1cdd63e5ba doc: Update Graphviz font configuration
Customize Graphviz dot rendering to use the same font stack as our Sphinx
theme.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit c0f76d9363)
2025-05-09 11:57:23 -07:00
Dong Wang
64451449ac logging: Ensure atomic update of log filter slot in LOG_FILTER_SLOT_SET
Replaced the read-modify-write sequence with a single read and write
operation, preventing the intermediate value from being wrongly used to
filter out logs of another, higher-priority thread that preempts the
current thread.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
(cherry picked from commit 872f363696)
2025-05-09 10:45:48 -07:00
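As a rough illustration of the pattern described above (hypothetical names, not Zephyr's actual LOG_FILTER_SLOT_* macros): the fix amounts to computing the updated word in a local copy and writing it back once, rather than clearing and then OR-ing the shared word in place, where a preempting thread could observe the cleared intermediate value.

```c
#include <stdint.h>

#define SLOT_BITS 3U
#define SLOT_MASK 0x7U

/* Compute the new packed-filter word in a local copy. */
static inline uint32_t slot_set(uint32_t filters, unsigned int slot,
				uint32_t level)
{
	uint32_t shift = slot * SLOT_BITS;

	filters &= ~(SLOT_MASK << shift);
	filters |= (level & SLOT_MASK) << shift;

	return filters;
}

/* Single read of the shared word and a single write back: there is no
 * window in which another thread can observe a half-updated value. */
void filter_slot_update(volatile uint32_t *shared, unsigned int slot,
			uint32_t level)
{
	uint32_t snapshot = *shared;

	*shared = slot_set(snapshot, slot, level);
}
```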
Chekhov Ma
79696d420e drivers: gpio: adp5585: fix wrong reg during pin configure
ADP5585_GPO_OUT_MODE_A is used when configuring the initial output
during pin configuration, causing pins configured HIGH to be incorrectly
configured as open-drain. Replacing the register with ADP5585_GPO_DATA_OUT
fixes the issue.

Signed-off-by: Chekhov Ma <chekhov.ma@nxp.com>
(cherry picked from commit c87900aa40)
2025-05-09 10:36:07 -07:00
Benjamin Cabé
d5d5a7c7db doc: make graphviz diagrams look good in dark theme
Add filter to invert colors in dark mode.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit fd919b5160)
2025-05-09 10:34:47 -07:00
Jukka Rissanen
215737c229 net: if: Release the interface lock early when starting IPv4 ACD
In order to avoid any mutex deadlocks between iface->lock and the
TX lock, release the interface lock before calling a function
that will acquire the TX lock. See the previous commit for a similar
issue in RS timer handling. Here we create a separate list of ACD
addresses that are to be started, without iface->lock held, when the
network interface comes up.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
60190a93f4 net: if: Release the interface lock early when starting IPv6 DAD
In order to avoid any mutex deadlocks between iface->lock and the
TX lock, release the interface lock before calling a function
that will acquire the TX lock. See the previous commit for a similar
issue in RS timer handling.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
f749a05200 net: if: Release the interface lock early in IPv6 RS timeout handler
net_if.c:rs_timeout() sends a new IPv6 router solicitation
message to the network by calling net_if_start_rs(). That function
then acquires iface->lock and calls net_ipv6_start_rs(), which tries
to send the RS message and acquires the TX send lock.
During this RS send, we might receive TCP data that could try to
send an ack to the peer. This in turn also causes the TX lock
to be acquired. Depending on timing, the lock ordering between the
RX thread and the system workqueue might mix, which could lead to a
deadlock.
Fix this issue by releasing iface->lock before starting the
RS sending process. net_if_start_rs() does not really need to
keep the interface lock for long, as it is the only one sending
the RS message.

Fixes #86499

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
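The shape of the fix in these three commits can be sketched as follows; the names (struct iface, send_rs()) are hypothetical stand-ins, not the actual net_if.c code. The point is simply that the interface lock is dropped before calling into a path that takes the TX lock, so the two locks are never nested in the opposite order to the RX thread.

```c
#include <zephyr/kernel.h>

/* Hypothetical stand-in for struct net_if. */
struct iface {
	struct k_mutex lock;
};

/* Stand-in for the send path that acquires the TX lock internally. */
static void send_rs(struct iface *iface)
{
	(void)iface;
}

void start_rs(struct iface *iface)
{
	k_mutex_lock(&iface->lock, K_FOREVER);
	/* ...prepare whatever actually needs the interface lock... */
	k_mutex_unlock(&iface->lock); /* drop it before the TX lock is taken */

	send_rs(iface); /* takes the TX lock internally; no nesting now */
}
```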
Jukka Rissanen
3af878f68e net: if: Documentation missing for IPv4 ACD timeout variable
Add documentation for ACD timeout variable as it was missing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
05dd94d1dd net: if: Documentation missing for IPv6 DAD start time variable
Add documentation for DAD start time variable as it was missing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Sudan Landge
0a8f9bb61c modules: tf-m: update to 2.1.2
Update TF-M to 2.1.2 from version 2.1.1.
This is required to use MbedTLS 3.6.3.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2025-05-09 10:27:22 -07:00
Sudan Landge
bbe7ab00de modules: mbedtls: update to 3.6.3
Update Mbed TLS to 3.6.3 as it has CVE fixes.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2025-05-09 10:27:22 -07:00
Fabio Baltieri
7ff8f6696c ci: update few workflows to use Python 3.12
The HTML doc build workflow and PR assigner recently started failing to
install Python packages, breaking all backports. They seem to work when
using Python 3.12; this is already done in main in bacb99da6d.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2025-05-09 11:19:59 +01:00
Alex An
83e99be7d1 drivers: retained_mem: Fix using multiple nRF retained memory regions
Fixes an error when using multiple nRF retained memory regions caused by
a missing separating comma.

Signed-off-by: Alex An <aza0337@auburn.edu>
(cherry picked from commit fbb75862a1)
2025-05-07 11:02:27 -07:00
Luis Ubieda
94c7905e01 spi_rtio: fix transactions for default handler
This patch fixes transaction op items not being performed within a single
SPI transfer. This is common for Write + Read commands, which depend on
CS being kept asserted until the end; otherwise the context will be lost.
A similar fix was applied to i2c_rtio_default in #79890.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit b5ca5b7b77)
2025-05-07 08:14:55 -07:00
Mathieu Choplain
43c79ba72c dts: st: stm32u5: restore correct clocks on multi-bit devices
During the transition to the STM32_CLOCK macro (in 57723cf), the `clocks`
properties of peripherals requiring more than one bit to be set were
mistakenly modified. Commit 2c3294b079
partially fixed these errors, but some nodes for the U5 series are still
wrong.

Restore `clocks` on affected devices in corresponding STM32U5 DTSI.

Fixes: 57723cf405
Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
(cherry picked from commit 37bdc38ec6)
2025-03-20 14:25:06 -07:00
Luis Ubieda
7412674a3c rtio: workq: Select Early P4WQ threads init
Otherwise the RTIO Workqueue is not functional for devices during init.

An example of this issue is devices using SPI transfers with default
spi_rtio.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 2ce2794987)
2025-02-26 09:38:00 -06:00
Luis Ubieda
0c413db7eb p4wq: Add Kconfig to perform early init on threads
This makes them functional for devices during init. The default
behavior is to keep late initialization, as before.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit ec45b29ea3)
2025-02-26 09:38:00 -06:00
Simon Guinot
af81a8ba32 drivers: led: lp50xx: check the number of LED colors
The current code assumes (especially in the lp50xx_set_color function)
that the number of LED colors defined in DT is not greater than 3. But
since this is not checked, it is not necessarily the case.

This patch consolidates the initialization of the lp50xx LED driver by
checking the number of colors for each LED found in DT.

Signed-off-by: Simon Guinot <simon.guinot@sequanux.org>
(cherry picked from commit ffbb8f981a)
2025-02-26 09:31:58 -06:00
Robert Lubos
546386cd1f net: if: Setup DAD timer regardless of DAD query result
On rare occasions when sending the DAD NS packet fails, we should still
set up the DAD timer, unless we implement some kind of more advanced
retry mechanism. If we don't do that, the IPv6 address added to the
interface will never be usable in such cases.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 008a7ca202)
2025-02-26 09:30:50 -06:00
Robert Lubos
aa48023434 net: if: Clear neighbor cache when removing IPv6 addr with active DAD
DAD creates an entry in the neighbor cache for the queried (own)
address. In case the address is removed from the interface while DAD is
still incomplete, we need to remove the corresponding cache entry (just
like in case of DAD timeout) to avoid stale entries in the cache.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit a09fd8e97f)
2025-02-26 09:30:50 -06:00
Robert Lubos
3410679046 tests: net: conn_mgr: Use valid LL address in tests
Using an L2 address of length 1 (an invalid/unsupported one) confused the
IPv6 layer during LL address generation - since that length was not valid,
the address was not initialized properly and part of it was set
semi-randomly. This could result, for example, in filling up the neighbor
tables.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 3089a5d116)
2025-02-26 09:30:50 -06:00
Derek Snell
7eab725dfb soc: nxp: rw: Update system core clock frequency
After updating the main_clk, we need to update the frequency tracked in
the HAL MCUXpresso SDK framework for other drivers.

Signed-off-by: Derek Snell <derek.snell@nxp.com>
(cherry picked from commit 793e44afdd)
2025-02-26 09:28:45 -06:00
Yong Cong Sin
a50476ca00 logging: log_cmds: init uninitialized backend on log_go()
For backends that do not autostart themselves, initialize
& enable them on `log backend <log_backend_*> go`, so
that they function properly.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit f840a35be3)
2025-02-20 13:03:28 -06:00
Yong Cong Sin
ae73df5621 logging: init backend id regardless of autostart
The `id` is basically a compile-time constant. Setting it every
time the backend is enabled is unnecessary. Instead, set it in
`z_log_init()` regardless of whether or not it needs to be
`autostart`ed.

Fixes an issue where `filter_get`/`filter_set`
accessed the wrong index and displayed the wrong log level when the
user accesses the status of an uninitialized backend via:
`log backend <uninitialized_backend> status`.

Also fixes an issue when the user tries to list the backends via
`log list_backends`, where all uninitialized backends would have
ID = 0.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 8dd9d924fe)
2025-02-20 13:03:28 -06:00
Jordan Yates
f61d53d5b5 bluetooth: increment BT_BUF_CMD_TX_COUNT
The extended advertising start procedure can consume both command
buffers in a single API call, resulting in `bt_le_create_conn_cancel`
being unable to claim a buffer to terminate the connection request.

Increase the command count if both extended advertising and Bluetooth
central are enabled in an application.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2025-02-20 10:56:17 -08:00
Jordan Yates
790bd7c44b bluetooth: host: hci_core: add missing NULL check
Add check that the command buffer claimed in `bt_le_create_conn_cancel`
is not `NULL`. Fixes a fault caused by providing the `NULL` buffer to
`bt_hci_cmd_state_set_init`.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2025-02-20 10:56:17 -08:00
Florian Weber
c2ad663314 rtio: workq: bugfix of memory allocation
This commit fixes a bug where the memory of the work request was
freed in the work handler while it was still needed by the p4wq.
This is now fixed by using the new done_handler in the p4wq
to free that memory.

Signed-off-by: Florian Weber <Florian.Weber@live.de>
(cherry picked from commit b55b9ae72b)
2025-02-20 10:55:30 -08:00
Florian Weber
f0c05671da lib: os: p4wq: add done handler
Add an optional handler to the p4wq to give the submitting code
(e.g. rtio workq) the possibility to execute code after the work has
been successfully executed.

Signed-off-by: Florian Weber <Florian.Weber@live.de>
(cherry picked from commit 093b29fdb5)
2025-02-20 10:55:30 -08:00
Jukka Rissanen
b40cacb67d net: Update IP address refcount properly when address already exists
If an IP address already exists when an attempt is made to add it to the
network interface, then just return it, but update the ref count if it was
not updated. This could happen if the address was added and then removed,
but for example an active connection was still using it and keeping the
ref count > 0. In this case we must update the ref count so that the IP
address is not removed when the connection is closed.

Fixes #85380

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit ae05221762)
2025-02-20 10:52:59 -08:00
Erwan Gouriou
c1f3c446bc dts: stm32h7: Fix ltdc reset lines
LTDC reset lines were off by 1. Fix it.

Signed-off-by: Erwan Gouriou <erwan.gouriou@st.com>
(cherry picked from commit 6f55a32da8)
2025-02-20 10:46:32 -08:00
Robert Lubos
cd85e0ef82 net: ipv6: Fix Neighbor Advertisement processing w/o TLLA option
According to RFC 4861, ch. 7.2.5:

 "If the Override flag is set, or the supplied link-layer address
  is the same as that in the cache, or no Target Link-Layer Address
  option was supplied, the received advertisement MUST update the
  Neighbor Cache entry as follows

  ...

  If the Solicited flag is set, the state of the entry MUST be
  set to REACHABLE"

This indicates that the Target Link-Layer Address option does not need to
be present in the received solicited Neighbor Advertisement to confirm
reachability. Therefore remove the `tllao_offset` variable check from the
if condition responsible for updating the cache entry. No further changes
in the logic are required because if the TLLA option is missing,
`lladdr_changed` will be set to false, so no LL address will be updated.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 02c153c8b1)
2025-02-20 10:44:32 -08:00
Robert Lubos
e47c933b29 net: ipv6: Send Neighbor Solicitations in PROBE state as unicast
According to RFC 4861, ch. 7.3.3:

 "Upon entering the PROBE state, a node sends a unicast Neighbor
  Solicitation message to the neighbor using the cached link-layer
  address."

Zephyr's implementation was not compliant with this behavior: instead of
sending a unicast probe for reachability confirmation, it was sending a
multicast packet. This commit fixes it.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 8cd213e846)
2025-02-20 10:44:32 -08:00
Robert Lubos
c4cb9bd1af net: ipv6: Fix neighbor registration based on received RA message
When a Router Advertisement with the Source Link-Layer Address option is
received, the host should register a new neighbor marked as STALE
(RFC 4861, ch. 6.3.4). This behavior was broken, however, because
we always added a new neighbor in the INCOMPLETE state before processing
the SLLA option. As a result, the entry was not updated to the STALE
state, and a redundant Neighbor Solicitation was sent.

Fix this by moving the code responsible for adding a neighbor in the
INCOMPLETE state to after options processing, and only as a fallback
behavior if the SLLA option was not present.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit fce53922ef)
2025-02-20 10:44:32 -08:00
Lingao Meng
f001cf6de9 Bluetooth: Mesh: Fix Assert in bt_mesh_adv_unref when messages to a proxy
Fixes: https://github.com/zephyrproject-rtos/zephyr/issues/83904

The fix is to define a separate variable for each proxy FIFO.

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
(cherry picked from commit 6371080406)
2025-02-13 09:44:08 -08:00
Nazar Palamar
da7ec06d12 test: arm: irq: Add overlays files for Infineon boards
Changed the interrupt priority for GPIO; the default of 6 is not suitable
for the ZERO_LATENCY_IRQS function used in this test.

Fixes a problem with arm_irq_zero_latency_levels;
refer to pull/81377.

Signed-off-by: Nazar Palamar <nazar.palamar@infineon.com>
(cherry picked from commit 6172092730)
2025-02-11 10:21:13 -08:00
Nazar Palamar
b0e5803c06 Infineon: board: Add CONFIG_GPIO to defconfigs
Add CONFIG_GPIO to the defconfigs for Infineon boards.

Reverts pull/81377, which affected some BLE samples that
used GPIO.

Signed-off-by: Nazar Palamar <nazar.palamar@infineon.com>
(cherry picked from commit 697efe8b50)
2025-02-11 10:21:13 -08:00
Michal Myczkowski
e104bf9a28 dts: atmel sam4s: fix sram address
Changed the Atmel SAM4S series SRAM address from 0x20100000 to 0x20000000.
Now it matches what is in the Atmel SAM4S Series Datasheet, chapter 8,
section 1.1 Internal SRAM:

https://ww1.microchip.com/downloads/aemDocuments/documents/OTH/ProductDocuments/DataSheets/Atmel-11100-32-bitCortex-M4-Microcontroller-SAM4S_Datasheet.pdf#G1.1069257

Fixes #85211.

Signed-off-by: Michal Myczkowski <mmyczkowski@antmicro.com>
(cherry picked from commit 3a53845fad)
2025-02-11 10:19:42 -08:00
Jamie McCrae
41bafc5dbf mgmt: mcumgr: grp: img_mgmt: Fix calling confirm
Fixes calling the registered callbacks for an image being confirmed
if the confirmation was not successful.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 98d5aa3792)
2025-02-11 10:18:49 -08:00
Mahesh Mahadevan
37321f0702 boards: frdm_mcxn947: Delete enable of GPIO5 clock
There is no bit to enable GPIO5 clock in the clock control
register.

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
(cherry picked from commit bda04093fb)
2025-02-11 10:13:45 -08:00
Dario Binacchi
8c516c6d86 dts: arm: st: re-enable master can gating clock for can2
Commit 57723cf405 ("dts: arm: st: Refactor DTSI files to use macro"),
which replaced raw hex codes with the STM32_CLOCK macro, caused a
regression in the case of the CAN device, where the previous raw value
contained more than one bit set to 1. The macro is in fact correct only
for values with a single bit set. In all other cases, raw values must
continue to be used.

Tested on an STM32F429I-DISC1 board.

Fixes: 57723cf405
Co-authored-by: Michael Trimarchi <michael@amarulasolutions.com>
Signed-off-by: Dario Binacchi <dario.binacchi@amarulasolutions.com>
(cherry picked from commit 2c3294b079)
2025-02-11 09:50:03 -08:00
Erwan Gouriou
e397b1b657 drivers: ethernet: stm32: Use MDIO API only if enabled by DTS
Not all STM32 series support the Zephyr MDIO API yet, while the API is
enabled by default.
To preserve compatibility, put MDIO API related code under the condition
that the "st,stm32-mdio" compatible is enabled.

Signed-off-by: Erwan Gouriou <erwan.gouriou@st.com>
(cherry picked from commit 2d81351517)
2025-02-04 16:30:36 -06:00
Luis Ubieda
b1631a2913 samples: sensor: shell: test all channels
Loop through all of the channels and make sure that each
channel has an output.

Changes in patch originally present in #72972.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 1e64d7bea1)
2025-02-04 16:29:39 -06:00
Luis Ubieda
c4210d749a tests: sensor: generic_test: Remove special-handling for axis values
Since now they're treated as q31 values when individually queried, as
opposed to lumping them into a triplet.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 19a4261517)
2025-02-04 16:29:39 -06:00
Luis Ubieda
212d0c50f1 sensor: default_rtio_sensor: fix: Treat individual axis data as q31_t
Instead of assuming that an individual axis must contain all
data values. Additionally, to determine that there is XYZ data, all
three channels must be present in the header.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 5a1dfc82c3)
2025-02-04 16:29:39 -06:00
Luis Ubieda
2b884a703e sensor: shell: Allow individual axis data to be processed
For the supported channels, instead of only allowing 3-axis data to be
printed out, allow individual channels to be processed (e.g. accel_x,
gyro_y, etc.).

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 36dfea121d)
2025-02-04 16:29:39 -06:00
Jukka Rissanen
50c415d3ee net: socket: Release packets in accepted socket in close
If we have received data on the accepted socket, then release
it before removing the accepted socket. This is a rare event,
as it requires that we get multiple simultaneous connections
and there is a failure before socket accept is called by
the application.
For example, one such scenario is when an HTTP server receives multiple
connection attempts at the same time and the server poll fails
before socket accept is called. This leads to a buffer leak, as
socket close is not called for the accepted socket because the
accepted socket is not yet created from the application's point of view.
The solution is to flush the receive queue of the accepted socket
before removing the actual accepted socket.

Fixes #84538

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 535e70a298)
2025-02-04 16:27:07 -06:00
Luis Ubieda
0429f17b76 sensor: adxl3xx: Move run-time modification of ODR from cfg to data
The config struct is constant and attempting to modify it triggers a fault.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 5a9ff03c21)
2025-02-04 16:22:11 -06:00
Chen Shu
5080ba8b0c fs: ext2: Fix nbytes_to_read calculation in ext2_inode_read()
Fix the incorrect nbytes_to_read calculation in the ext2_inode_read()
function. Previously, nbytes_to_read was decremented by the read value,
which caused an incorrect calculation of the bytes to read in subsequent
iterations. Now nbytes_to_read is decremented by the to_read value, which
represents the actual number of bytes read in the current iteration.

This fixes potential data corruption issues when reading files from
an ext2 filesystem.

Signed-off-by: Chen Shu <751541594@qq.com>
(cherry picked from commit 5a5f05ba4e)
2025-02-04 16:21:32 -06:00
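A hedged sketch of the loop shape the commit describes, with hypothetical names rather than the actual ext2_inode_read() code: the remaining count must shrink by the bytes consumed in the current iteration (to_read), not by the running total (read).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

size_t inode_read(const uint8_t *blocks, size_t block_size,
		  uint8_t *buf, size_t offset, size_t nbytes_to_read)
{
	size_t read = 0;

	while (nbytes_to_read > 0) {
		size_t in_block = (offset + read) % block_size;
		size_t to_read = block_size - in_block;

		if (to_read > nbytes_to_read) {
			to_read = nbytes_to_read;
		}

		memcpy(buf + read, blocks + offset + read, to_read);

		read += to_read;
		nbytes_to_read -= to_read; /* the fix: subtract to_read, not read */
	}

	return read;
}
```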
Vinayak Kariappa Chettimada
13e7d6501f Bluetooth: Controller: Fix uninitialized is_aborted in conn done event
Fix uninitialized is_aborted in connection done event.

Relates to commit cadef5a64f ("Bluetooth: Controller:
Introduce BT_CTLR_PERIPHERAL_RESERVE_MAX").

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit f3e398d64c)
2025-02-04 16:20:40 -06:00
Jakub Rzeszutko
4a1ffd8519 shell: fix unsafe API calls and add configurable autoflush behavior
Fixes an issue where the shell API could block indefinitely when called
from threads other than the shell's processing thread, especially when
the transport (e.g. USB CDC ACM) was unavailable or inactive.

Replaced `k_mutex_lock` calls that used an indefinite timeout (`K_FOREVER`)
with a fixed timeout (`K_MSEC(SHELL_TX_MTX_TIMEOUT_MS)`) in shell
API functions to prevent indefinite blocking.

Link: https://github.com/zephyrproject-rtos/zephyr/issues/84274

Signed-off-by: Jakub Rzeszutko <jakub.rzeszutko@verkada.com>
(cherry picked from commit b0a0febe58)
2025-01-30 17:54:30 -08:00
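A minimal sketch of the locking change, with an illustrative timeout value; SHELL_TX_MTX_TIMEOUT_MS here is a made-up number, not the value used by the Zephyr shell.

```c
#include <zephyr/kernel.h>

#define SHELL_TX_MTX_TIMEOUT_MS 50 /* illustrative value only */

K_MUTEX_DEFINE(tx_mtx);

/* Bounded wait instead of K_FOREVER: returns 0 on success, or -EAGAIN
 * if the transport never became available within the timeout. */
int shell_tx_lock(void)
{
	return k_mutex_lock(&tx_mtx, K_MSEC(SHELL_TX_MTX_TIMEOUT_MS));
}
```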
Jakub Rzeszutko
98a1a1510c shell: add Kconfig option for configurable autoflush behavior
Introduced a new Kconfig option `SHELL_PRINTF_AUTOFLUSH` to allow
configuring the autoflush behavior of shell printing functions.

Updated `Z_SHELL_FPRINTF_DEFINE` to use the
`CONFIG_SHELL_PRINTF_AUTOFLUSH` setting instead of hardcoding
the autoflush behavior to `true`.

Signed-off-by: Jakub Rzeszutko <jakub.rzeszutko@verkada.com>
(cherry picked from commit 8991b954bc)
2025-01-30 17:54:30 -08:00
Tomislav Milkovic
5fe9a7e6f9 drivers: can: can_tcan4x5x: fix compiler build warning/error
Fix a compiler warning when the optional property reset-gpios
is not supplied in the ti,tcan4x5x-compatible device tree
node.

Signed-off-by: Tomislav Milkovic <tomislav.milkovic95@gmail.com>
(cherry picked from commit fb98387f4d)
2025-01-30 17:52:29 -08:00
Mayank Narang
a1eb85002e drivers: sensor: lis2de12: add length check in spi write incr routine
Added a length check in the spi write incr routine to handle both
single and multi byte write operations properly.

Signed-off-by: Mayank Narang <narang.may77@gmail.com>
(cherry picked from commit cb608812cf)
2025-01-30 17:51:43 -08:00
Mayank Narang
58491c40c0 drivers: sensor: lis2de12: fix read accel via spi
The lis2de12 sensor driver SPI interface was calling the SPI read API.
This leads to a single-byte operation when reading acceleration data,
which is a multi-byte operation. Fix it by adding a call to the SPI read
incr API instead. Added a length check to handle both single- and
multi-byte reads properly.

Signed-off-by: Mayank Narang <narang.may77@gmail.com>
(cherry picked from commit 7b062f5b09)
2025-01-30 17:51:43 -08:00
Luca Burelli
adfc5b3863 tests: cmake: run zephyr_get() tests in script mode
Re-run the zephyr_get() testsuite in script mode after the project
mode testsuite has been executed. This is to ensure that the
zephyr_get() function works correctly in script mode as well.

Signed-off-by: Luca Burelli <l.burelli@arduino.cc>
(cherry picked from commit d492c849a3)
2025-01-30 17:49:53 -08:00
Torsten Rasmussen
58b2b82b44 cmake: use GLOBAL property instead TARGET properties for scoping
Targets are not available in script mode.
To support the Zephyr scoping feature used by snippets and the yaml
module, this commit moves from using custom targets to using GLOBAL
properties for scopes.

A scope property is prefixed with `<scope>:<property>` to avoid naming
collisions.
A `scope:<scope-name>` global property is used to track created scopes.
Tracking valid scopes ensures that properties are only set on known
scopes and thus catches typos / naming errors.

Add zephyr_scope_exists() and zephyr_get_scoped() to abstract the
implementation details of the scoped property retrieval and refactor
current code to use them.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Signed-off-by: Luca Burelli <l.burelli@arduino.cc>
(cherry picked from commit 4e29a35b22)
2025-01-30 17:49:53 -08:00
Pieter De Gendt
2f0e67fee8 drivers: spi: spi_mcux_ecspi: Fix data size when using 16/32 bit transfers
The data size is set using a burst length; the data size for 8/16/32 is
always 1 in those cases.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
(cherry picked from commit 0e8aed7393)
2025-01-22 14:46:39 -08:00
Pieter De Gendt
7d59e9f11e drivers: spi: spi_mcux_ecspi: Ignore chip select channel with cs-gpios
The internal chip selects are limited to 4; however, using GPIOs
does not pose this limitation.
Also set the internal channel to 0 if GPIOs are used.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
(cherry picked from commit 7250e987e5)
2025-01-22 14:46:39 -08:00
Alberto Escolar Piedras
de825415d6 tests/subsys/lorawan/frag_decoder: Change random seed
These tests are quite sensitive to the exact random sequence.
After a change to the native_sim entropy generation, two of them
started failing. Let's set a random seed which avoids the failure.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit 64d4fcd8cb)
2025-01-22 14:43:50 -08:00
Alberto Escolar Piedras
fff6ad389e drivers: entropy: fix native_posix driver for more than 3 byte requests
Fix the native_posix fake entropy driver for requests of more than 3
bytes, and especially for native_sim//64 builds.

The host random() provides a number between 0 and 2**31-1 (INT_MAX),
so bit 32 was always 0.
So when filling a buffer with more than 3 bytes, we would fill
every 4th byte with a byte which always had its MSbit as 0.

For LP64 native_sim//64 builds, this was even worse, as the driver had
another bug where we assumed random() returned the whole long filled,
and therefore all 4 upper bytes would be 0.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit b3407f04e7)
2025-01-22 14:43:50 -08:00
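To make the bias concrete, here is a small, hypothetical sketch (not the actual driver) of filling a buffer from the host's random(): since random() returns at most 31 random bits, consuming only 3 bytes per call avoids leaving every 4th byte with its top bit always clear.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

static void fill_entropy(uint8_t *buf, size_t len)
{
	while (len > 0) {
		long r = random();                /* 31 random bits from libc */
		size_t chunk = len < 3 ? len : 3; /* never use a 4th, biased byte */

		for (size_t i = 0; i < chunk; i++) {
			buf[i] = (uint8_t)(r >> (8 * i));
		}

		buf += chunk;
		len -= chunk;
	}
}
```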
Derek Snell
7d2ac02254 drivers: flash: soc_flash_mcux: remove CMD_MARGIN_CHECK
The CMD_BLANK_CHECK can return errors when the flash is readable, and
should only be used after programming, not in is_area_readable().  From
the LPC55S69 datasheet: "As cells age and lose charge, a correctly
programmed address will fail this check, while still being able to be
read successfully for the remaining duration of the data retention time."

Signed-off-by: Derek Snell <derek.snell@nxp.com>
(cherry picked from commit 88b9cb6efc)
2025-01-18 09:00:18 -06:00
Thomas Schranz
72a1c809df soc: sam0: samd5x: xosc32 configurable startup time
Adds Kconfig option to configure the startup time of the external
32KHz crystal oscillator.

Signed-off-by: Thomas Schranz <electronics@wandfluh.com>
(cherry picked from commit cd20154bc7)
2025-01-15 16:32:01 -08:00
Daniel DeGrasse
c861286bc0 drivers: flash: flash_mcux_flexspi_nor: check all 3 bytes of JEDEC ID
The FlexSPI NOR driver should verify all 3 bytes of the JEDEC ID match
the expected value before attempting to use a custom LUT table with a
flash chip. This reduces the odds that an incompatible LUT will be used
with a flash chip, as some flash chips may share the same first byte of
their device ID but not be compatible with the custom LUT table.

Signed-off-by: Daniel DeGrasse <daniel.degrasse@nxp.com>
(cherry picked from commit c12030acb9)
2025-01-06 08:37:43 -08:00
Erik Tamlin
3f4037b951 manifest: update percepio
Update the percepio module to use TraceRecorder v4.10.2

Signed-off-by: Erik Tamlin <erik.tamlin@percepio.com>
(cherry picked from commit 91b2156398)
2024-12-27 12:06:38 -08:00
Jukka Rissanen
763665dcda tests: net: dns_dispatcher: Add tests for dispatcher
Make sure that the socket service is properly unregistered when
dispatcher is unregistered.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit d98fe73684)
2024-12-27 11:32:22 -08:00
Jukka Rissanen
d58e1ea145 net: dns: Close socket service properly from dispatcher
We need to close the socket service context when dispatcher is
unregistered.

Fixes #82652

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit da148ab3bc)
2024-12-27 11:32:22 -08:00
Jukka Rissanen
fc639b6b27 net: dns: Avoid errors when DNS dispatcher is already registered
Skip error prints and extra DNS events if DNS dispatcher was already
registered.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit c110331507)
2024-12-27 11:32:22 -08:00
Jukka Rissanen
1d445685bf net: dns: Fix the debug print when dispatcher fails
Depending on DNS type, print the output (mDNS vs DNS) correctly.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit efcfcfe292)
2024-12-27 11:32:22 -08:00
Marcio Ribeiro
6dc5129040 drivers: uart: esp32: reset tx fifo during driver init
Resets the UART TX FIFO during driver initialization to have a well-defined
initial condition, mainly preventing unwanted characters from being sent.

Signed-off-by: Marcio Ribeiro <marcio.ribeiro@espressif.com>
(cherry picked from commit f0516ead27)
2024-12-27 11:31:09 -08:00
DEVER Emiel
d8ab599ecf fs: ext2: Fix ext2 read buffer overflow
Keep track of the number of bytes read so no buffer overflow occurs.

Signed-off-by: DEVER Emiel <emiel.dever@psicontrol.com>
(cherry picked from commit 336c650646)
2024-12-27 11:29:38 -08:00
Djordje Nedic
8abda1d12f fs: littlefs: Fix cache and lookahead size checks
This fixes an issue where wrong values were checked against block device
defaults. Kconfig values are used when no filesystem config values are
provided, and the buffers are sized according to the Kconfig values,
but the checks were only performed against filesystem config variables.

Signed-off-by: Djordje Nedic <nedic.djordje2@gmail.com>
(cherry picked from commit 2750492e86)
2024-12-27 11:27:37 -08:00
Gerson Fernando Budke
578251beac dts: atmel: samr21: Use samd21 as base
The samr21 is a samd21 with a built-in at86rf233 radio. Use the samd21 as
the base for these SoCs and drop all duplicated nodes.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 01fc0a750a)
2024-12-27 11:21:47 -08:00
Gerson Fernando Budke
ca7eba2f11 dts: atmel: saml2x: Define USB node
The saml2x series provide USB to all SoC series. This moves the USB node
from saml21.dtsi to the base file saml2x.dtsi.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 34fc01e008)
2024-12-27 11:21:47 -08:00
Gerson Fernando Budke
3acaf7a069 dts: atmel: saml21: Exclude DMA node
The DMA is already defined in the base header. This excludes the
duplicated node.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 2e7599eac6)
2024-12-27 11:21:47 -08:00
Gerson Fernando Budke
ba39f63957 dts: atmel: sam0: Fix aliases and chosen nodes
Reorder and add missing aliases and chosen nodes.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 748bbd012b)
2024-12-27 11:21:47 -08:00
Gerson Fernando Budke
503d458893 dts: atmel: sam0: Explicity disable nodes
When running tests on the sam0 platform, it was detected that pinctrl for
the ADC was not defined for some boards. To force an error at build time,
nodes should be explicitly disabled. This explicitly disables nodes in the
devicetree that require some user configuration.

In addition, the ADC feature was excluded on some boards and the
samr21-xpro was correctly updated.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 162f728787)
2024-12-27 11:21:47 -08:00
Gerson Fernando Budke
8e2f36cfcf dts: atmel: sam0: Normalize dtsi nodes
Keep a consistent order in the node definitions to make them easy to read
across all the SoC series.

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit 6f1f598a72)
2024-12-27 11:21:47 -08:00
Jamie McCrae
d70bbe0eb6 samples: mgmt: mcumgr: smp_svr: Fix re-advertise issue on connection
Fixes an issue introduced with commit
c6ad4a7927 which wrongly restarts
advertising after a device connects, when it should only restart
advertising if a device fails to connect.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 655be99fa7)
2024-12-27 11:19:58 -08:00
Dominik Ermel
fe8dc82b16 drivers/flash: Correct flash_erase userspace handler
As the erase callback is optional, the handler should not check
that it is not NULL.

Fixes #81777

Signed-off-by: Dominik Ermel <dominik.ermel@nordicsemi.no>
(cherry picked from commit 486428951b)
2024-12-27 11:18:30 -08:00
Jilay Pandya
17df0f1748 drivers: auxdisplay: jhd1313: fix Out-of-bounds read
Fix an out-of-bounds read by doing the comparison with ARRAY_SIZE correctly.

Signed-off-by: Jilay Pandya <jilay.pandya@outlook.com>
(cherry picked from commit 3202773b11)
2024-12-27 11:17:37 -08:00
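The general shape of such a bounds check, as a hedged sketch with made-up names (not the jhd1313 driver's actual tables): valid indexes run from 0 to ARRAY_SIZE() - 1, so the rejection test must use >= rather than >.

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <zephyr/sys/util.h>

static const uint8_t color_table[3][4] = {{0}};

int set_color(size_t index)
{
	/* index == ARRAY_SIZE(color_table) is already out of bounds,
	 * so reject it too: use >=, not >. */
	if (index >= ARRAY_SIZE(color_table)) {
		return -EINVAL;
	}

	/* ...write color_table[index] to the device... */
	return 0;
}
```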
Jukka Rissanen
d6ec225075 net: wifi_cred: Decrease flash usage for error print strings
As the error print strings are very similar, construct the final
output at runtime to save some flash space.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 3c59bd4fb5)
2024-12-09 13:40:54 -08:00
Jukka Rissanen
e60954d54e net: wifi_cred: Remove extra empty lines
Follow the net coding style and remove extra newlines between
setting a variable and checking its value.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 8deebcec21)
2024-12-09 13:40:54 -08:00
Jukka Rissanen
18845d2bb5 net: wifi_cred: Introduce variables at the start of function
Follow the net coding style and declare the variables at the start
of the function.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit b2056e0966)
2024-12-09 13:40:54 -08:00
Jukka Rissanen
8fdf221640 net: wifi_cred: Check null before access
We must do a null check before trying to access the fields.

Fixes #81980
Coverity-CID: 434549

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 3aa0c8670e)
2024-12-09 13:40:54 -08:00
Marek Matej
b306b24ec4 dts: espressif: Add flash size options to partition tables
Update the partition table list with 16MB and 32MB options.

Signed-off-by: Marek Matej <marek.matej@espressif.com>
(cherry picked from commit 2dc2cdea75)
2024-12-09 13:40:32 -08:00
Jukka Rissanen
2fe01e9400 tests: net: dns: Add test for invalid DNS answer parsing
Make sure we catch invalid answer during parsing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 16669ec4d5)
2024-12-09 13:37:22 -08:00
Jukka Rissanen
aec5129706 net: dns: Check DNS answer properly
dns_unpack_answer() did not check the length of the message
properly, which could cause an out-of-bounds read.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 6e7fcff579)
2024-12-09 13:37:22 -08:00
Jukka Rissanen
b912e674cf net: dns: Validate source buffer length properly
Make sure that when copying the qname, the source buffer is large
enough for the data.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 43c2b9cfe8)
2024-12-09 13:37:22 -08:00
Jukka Rissanen
1805b1579f tests: net: dns: Add checking of malformed packet
Make sure we test malformed packet parsing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 6f96915a14)
2024-12-09 13:37:22 -08:00
Jukka Rissanen
ce695706c0 net: dns: Check parsing error properly for response
If the packet parsing fails in dns_unpack_response_query(), then
do not continue further but bail out early.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit eb2550a441)
2024-12-09 13:37:22 -08:00
Jukka Rissanen
1e88982fe3 net: ethernet: bridge: Drop the cloned packet in error
We need to drop the cloned packet that was fed to the bridge instead of
returning directly from the function. Without this change we have a
buffer leak.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit c6a1af5c17)
2024-12-09 13:36:03 -08:00
Jukka Rissanen
d4e329483f net: ethernet: bridge: Avoid null pointer access
If the packet cloning failed, bail out in order to avoid
null pointer access.

Fixes #81992
Coverity-CID: 434493

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit ed0dcca2fb)
2024-12-09 13:36:03 -08:00
Jamie McCrae
4d3a4eafd6 mgmt: mcumgr: grp: img_mgmt: Fix misplaced #endif
Fixes a misplaced #endif which wrongly excluded functions.

Signed-off-by: Jamie McCrae <spam@helper3000.net>
(cherry picked from commit 5437ded36c)
2024-12-09 13:35:23 -08:00
Steven Poon
9c63873db2 net: lib: lwm2m: Fix missing mutex unlock
lwm2m_engine_set() and lwm2m_engine_get() lock
the registry_lock mutex, but it is not unlocked
when setting or getting a time resource where the buffer
lengths are invalid, resulting in an early return without
unlocking the mutex. This results in a deadlock when
attempting to lock the registry in another thread.

Signed-off-by: Steven Poon <steven-github@outlook.com>
(cherry picked from commit 30b30c29a3)
2024-12-09 13:35:07 -08:00
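A hedged sketch of the pattern being fixed, with hypothetical names and an illustrative length check rather than the actual lwm2m_engine code: every early-return path taken after the registry mutex is locked must also release it.

```c
#include <errno.h>
#include <stdint.h>
#include <zephyr/kernel.h>

K_MUTEX_DEFINE(registry_lock);

int engine_set_time(const void *value, uint16_t len)
{
	k_mutex_lock(&registry_lock, K_FOREVER);

	if (len != sizeof(uint32_t) && len != sizeof(int64_t)) {
		k_mutex_unlock(&registry_lock); /* the unlock that was missing */
		return -EINVAL;
	}

	/* ...store the time resource from value... */
	(void)value;

	k_mutex_unlock(&registry_lock);
	return 0;
}
```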
Jakub Rzeszutko
bd1db48d25 lib: shell: replace strtol with strtoul in cmd_load for address parsing
Addresses in cmd_load() should always be unsigned. Previously, strtol()
was used, which is limited to signed long values, causing issues with
addresses >= 0x80000000. This commit replaces strtol() with strtoul(),
ensuring proper handling of the full 32-bit address space.

Fixes #81343

Signed-off-by: Aaron Fontaine <aaron.fontaine@dojofive.com>
Signed-off-by: Jakub Rzeszutko <jakub.rzeszutko@verkada.com>
(cherry picked from commit 1aaf08f7f1)
2024-12-06 13:47:19 -08:00
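A minimal sketch of the difference, using a hypothetical helper rather than the shell's actual cmd_load() code: on targets where long is 32 bits, strtol() saturates at LONG_MAX (0x7FFFFFFF) for inputs at or above 0x80000000, while strtoul() covers the full 32-bit range.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper; assumes hexadecimal input. */
uintptr_t parse_addr(const char *arg)
{
	/* strtoul() handles addresses >= 0x80000000 that would overflow a
	 * signed 32-bit long when parsed with strtol(). */
	return (uintptr_t)strtoul(arg, NULL, 16);
}
```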
Pavel Vasilyev
8ac7bde2b1 bluetooth: mesh: proxy_msg: Fix extracting role from k_work
Fix extracting role from k_work.
Hot fix for #78914

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit f5409bd3de)
2024-12-06 13:40:02 -08:00
Jerzy Kasenberg
98d9f3e551 i2c: target: eeprom_target: Fix buffer write
When the larger buffer index was introduced, only the
eeprom_target_write_received() function was updated to handle
address-width = 16.

This adds the same functionality when the buffered API is used,
enabled by CONFIG_I2C_TARGET_BUFFER_MODE.

Signed-off-by: Jerzy Kasenberg <jerzy.kasenberg@codecoup.pl>
(cherry picked from commit e6c9e9a2dd)
2024-12-06 09:55:36 -06:00
Yong Cong Sin
2c5fb2d916 arch: riscv: reg: include required header
Include `zephyr/sys/util.h` for the `STRINGIFY()` macro.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit 45ebd390cf)
2024-12-01 14:49:38 -08:00
Gerson Fernando Budke
3b14973425 drivers: rtc: sam: Add platform on API test coverage
#81456 fixed the driver issue related to issue #81454. This adds
the RTC configuration on the sam_v71_xult board to enable test coverage.

Fixes #81454

Signed-off-by: Gerson Fernando Budke <nandojve@gmail.com>
(cherry picked from commit cffb66f5c6)
2024-12-01 14:44:05 -08:00
Alberto Escolar Piedras
aae851fdb5 bluetooth: CTS: Fix includes to avoid build error with some libCs
Remove unnecessary include in header and source file.

gmtime_r() is an extension to the C library, and therefore one
needs to explicitly ask for its prototype to have it exposed.
This is done by defining _POSIX_C_SOURCE so let's do so.

These two changes fix build errors with some libCs.
Tested with pico, newlib, minimal and the host glibc.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit a785548df6)
2024-12-01 14:43:44 -08:00
Matthias Alleman
f7edec7da6 posix: fpu: Fix compiler error when enabling fpu on posix boards
Enabling CONFIG_FPU and CONFIG_FPU_SHARING requires the
definition of `arch_float_disable` and `arch_float_enable`.

Signed-off-by: Matthias Alleman <matthias.alleman@basalte.be>
(cherry picked from commit e7c353bf14)
2024-12-01 14:43:31 -08:00
Andrej Butok
371e3a6769 boards: nxp: fix s26ks512s0 flash write-block-size
- Sets s26ks512s0 flash write-block-size to correct 256KB.
- Optimizes MCUboot partitions to fit the correct write-block-size.

Fixes #80284

Signed-off-by: Andrej Butok <andrey.butok@nxp.com>
(cherry picked from commit a730d9abad)
2024-11-22 13:03:12 -06:00
Chaitanya Tata
07a0ae1a99 drivers: nrfwifi: Remove passing unused flag
This flag is now unused in OSAL as it takes the input via the API.

Signed-off-by: Chaitanya Tata <Chaitanya.Tata@nordicsemi.no>
(cherry picked from commit f537cf311d)
2024-11-22 13:02:54 -06:00
Kapil Bhatt
966da750e1 drivers: wifi: Fix offloaded raw TX feature flags
Pass passive scan and offloaded raw tx feature flags to OSAL.

Signed-off-by: Kapil Bhatt <kapil.bhatt@nordicsemi.no>
(cherry picked from commit 62e06a5072)
2024-11-22 13:02:54 -06:00
Benjamin Cabé
01a3232eeb doc: doxygen: improve formatting for kconfig alias
\verbatim is not giving the right output as we need an inline literal.
Switch to \c instead.

Fixes #81595.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit 83356e924a)
2024-11-22 13:01:56 -06:00
35690 changed files with 479985 additions and 1767164 deletions


@@ -5,6 +5,7 @@
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
--ignore BRACES
--ignore PRINTK_WITHOUT_KERN_LEVEL
--ignore SPLIT_STRING
--ignore VOLATILE
@@ -29,4 +30,3 @@
--ignore ENOSYS
--ignore IS_ENABLED_CONFIG
--ignore EXPORT_SYMBOL
--ignore COMPARISON_TO_NULL


@@ -82,7 +82,6 @@ ForEachMacros:
- 'HTTP_SERVICE_FOREACH_RESOURCE'
- 'I3C_BUS_FOR_EACH_I3CDEV'
- 'I3C_BUS_FOR_EACH_I2CDEV'
- 'MIN_HEAP_FOREACH'
IfMacros:
- 'CHECKIF'
# Disabled for now, see bug https://github.com/zephyrproject-rtos/zephyr/issues/48520
@@ -100,7 +99,6 @@ IndentCaseLabels: false
IndentGotoLabels: false
IndentWidth: 8
InsertBraces: true
InsertNewlineAtEOF: true
SpaceBeforeInheritanceColon: False
SpaceBeforeParens: ControlStatementsExceptControlMacros
SortIncludes: Never
@@ -113,4 +111,3 @@ WhitespaceSensitiveMacros:
- LISTIFY
- STRINGIFY
- Z_STRINGIFY
- DT_FOREACH_PROP_ELEM_SEP


@@ -86,10 +86,6 @@ indent_size = 8
[COMMIT_EDITMSG]
max_line_length = 75
# Patches
[{*.patch,*.diff}]
trim_trailing_whitespace = false
# Kconfig
[Kconfig*]
indent_style = tab


@@ -0,0 +1,71 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
<!--
**Notes**
Github Discussions (https://github.com/zephyrproject-rtos/zephyr/discussions)
are available to first verify that the issue is a genuine Zephyr bug and not a
consequence of Zephyr services misuse.
This issue list is only for bugs in the main Zephyr code base
(https://github.com/zephyrproject-rtos/zephyr/). If the bug is for a project
fork (such as NCS) specific feature, please open an issue in the fork project
instead.
-->
**Describe the bug**
<!--
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
- ...
-->
**To Reproduce**
<!--
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Impact**
<!--
What impact does this issue have on your progress (e.g., annoyance, showstopper)
-->
**Logs and console output**
<!--
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
-->
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
<!--
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...
-->


@@ -1,85 +0,0 @@
name: Bug Report
description: File a bug report.
labels: ["bug"]
type: "Bug"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: textarea
id: what-happened
attributes:
label: Describe the bug
description: |
A clear and concise description of what the bug is.
placeholder: |
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
validations:
required: true
- type: checkboxes
id: regression
attributes:
label: Regression
description: |
Check this box if this is a regression and provide a SHA if you were able to "git bisect" to a specific commit.
options:
- label: This is a regression.
required: false
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Steps to reproduce the behavior.
placeholder: |
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
validations:
required: false
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
attributes:
label: Impact
description: Impact of this bug
multiple: false
options:
- Showstopper: Prevents release or major functionality; system unusable.
- Major: Severely degrades functionality; workaround is difficult or unavailable.
- Functional Limitation: Some features not working as expected, but system usable.
- Annoyance: Minor irritation; no significant impact on usability or functionality.
- Intermittent: Occurs occasionally; hard to reproduce.
- Not sure
default: 3
validations:
required: true
- type: textarea
id: env
attributes:
label: Environment
description: please complete the following information
placeholder: |
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
- type: textarea
id: context
attributes:
label: Additional Context
description: Provide other context that could be relevant to the bug, such as pin setting, target configuration, etc.


@@ -0,0 +1,28 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is.
-->
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or graphics (drag-and-drop an image) about the feature request here.
-->


@@ -1,42 +0,0 @@
name: Enhancement
description: Submit an Enhancement
labels: ["Enhancement"]
type: "Enhancement"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement proposal.
- type: textarea
id: description
attributes:
label: Summary
description: |
Is your enhancement proposal related to a problem? Please describe.
placeholder: |
A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: |
Describe the solution you'd like
placeholder: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: |
A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: context
attributes:
label: Additional Context
description: Add any other context or graphics (drag-and-drop an image) about the enhancement here.


@@ -0,0 +1,60 @@
---
name: RFC / Proposal
about: Submit an RFC / Proposal
title: ''
labels: RFC
assignees: ''
---
## Introduction
<!--
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
-->
### Problem description
<!--
Why do we want this change and what problem are we trying to address?
-->
### Proposed change
<!--
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
-->
## Detailed RFC
<!--
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
-->
### Proposed change (Detailed)
<!--
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
-->
### Dependencies
<!--
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
-->
### Concerns and Unresolved Questions
<!--
List any concerns, unknowns, and generally unresolved questions etc.
-->
## Alternatives
<!--
List any alternatives considered, and the reasons for choosing this option
over them.
-->


@@ -1,75 +0,0 @@
name: RFC / Proposal
description: Submit a Proposal (RFC)
labels: ["RFC"]
type: RFC
assignees: []
body:
- type: markdown
attributes:
value: |
## Introduction
This section targets end users, TSC members, maintainers and anyone else
that might need a quick explanation of your proposed change.
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Why do we want this change and what problem are we trying to address?
placeholder: Explain the problem or limitation this RFC is meant to resolve.
validations:
required: true
- type: textarea
id: proposed-change-summary
attributes:
label: Proposed Change (Summary)
description: A high-level summary of the proposed change.
placeholder: Brief summary of what will change if this RFC is implemented.
validations:
required: true
- type: markdown
attributes:
value: |
## Detailed RFC
This section targets the development team. Upon reading it, each engineer
should understand what must be done to implement the proposed feature.
- type: textarea
id: detailed-change
attributes:
label: Proposed Change (Detailed)
description: Describe the change in as much detail as possible. Include context or background info, and reuse of existing components if applicable.
placeholder: Explain exactly what you're planning to change and how.
validations:
required: true
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: Highlight how this change may affect the rest of the project or other teams/components.
placeholder: List components, modules, or teams affected.
validations:
required: false
- type: textarea
id: concerns
attributes:
label: Concerns and Unresolved Questions
description: List any concerns, unknowns, or unresolved questions related to this proposal.
placeholder: Any areas of uncertainty?
validations:
required: false
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: What alternative solutions were considered? Why was this proposal chosen?
placeholder: List alternatives and explain the rationale behind your choice.
validations:
required: false


@@ -0,0 +1,28 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is.
-->
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or graphics (drag-and-drop an image) about the feature request here.
-->


@@ -1,29 +0,0 @@
name: Feature Request
description: Suggest a new feature or enhancement
labels: ["Feature Request"]
type: Feature
assignees: []
body:
- type: textarea
id: problem
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
placeholder: e.g., I'm frustrated when I need to do X manually because Y is missing.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
placeholder: e.g., It would be great if the system could automatically handle X by doing Y.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: Include any alternative solutions or features


@@ -0,0 +1,42 @@
---
name: Contributor Nomination
about: Nominate a GitHub user for additional rights on the Zephyr Project
title: ''
labels: Role Nomination
assignees: ''
---
# Background
The [TSC Project Roles] defines the main roles for the Zephyr Project, including
Maintainer, Collaborator, and Contributor.
By default anyone that contributes code or documentation is a Contributor, but
with the lowest [GitHub Permission Level] of Read. For example, Contributors
with Read permission do not have the permission to add reviewers to a pull
request.
Use this template to nominate a GitHub user for the Contributor role with
Triage permission level, which allows the user to add reviewers to a pull
request and be added as a reviewer by other users.
# Nomination
## GitHub User
Provide the following information about the GitHub user:
1. Full Name
1. GitHub username
1. Organization (optional)
## Supporting Documents
Add links to 3-5 GitHub pull requests, in the Zephyr project, authored or
reviewed by the GitHub user that demonstrate the user's dedication to the
Zephyr project.
[TSC Project Roles]: <https://docs.zephyrproject.org/latest/project/project_roles.html>
[GitHub Permission Level]: <https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization>


@@ -1,57 +0,0 @@
name: Contributor Nomination
description: Nominate a GitHub user for the Contributor role with triage permissions
labels: [Role Nomination]
assignees: ['nashif']
body:
- type: markdown
attributes:
value: |
## Background
The [TSC Project Roles](https://docs.zephyrproject.org/latest/project/project_roles.html) defines the main roles for the Zephyr Project, including Maintainer, Collaborator, and Contributor.
By default, anyone who contributes code or documentation is a Contributor, but with the lowest [GitHub Permission Level](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization) of **Read**.
Use this form to nominate a user for the **Contributor** role with **Triage** permission, which allows the user to:
- Add reviewers to pull requests
- Be added as a reviewer by others
- type: input
id: full-name
attributes:
label: Full Name
description: Full name of the nominated contributor.
placeholder: e.g., Jane Doe
validations:
required: true
- type: input
id: github-username
attributes:
label: GitHub Username
description: GitHub handle of the nominated contributor.
placeholder: e.g., @janedoe
validations:
required: true
- type: input
id: organization
attributes:
label: Organization
description: Organization the nominee is affiliated with (optional).
placeholder: e.g., Acme Corp
validations:
required: false
- type: textarea
id: supporting-documents
attributes:
label: Supporting Documents
description: Provide links to 3-5 pull requests authored or reviewed by the nominee that demonstrate their dedication to the Zephyr project.
placeholder: |
e.g.,
- https://github.com/zephyrproject-rtos/zephyr/pull/12345
- https://github.com/zephyrproject-rtos/zephyr/pull/23456
- https://github.com/zephyrproject-rtos/zephyr/pull/34567
validations:
required: true


@@ -0,0 +1,68 @@
---
name: External Source Code
about: Submit a proposal to integrate external source code
title: ''
labels: TSC
assignees: ''
---
## Origin
Name of project hosting the original open source code
Provide a link to the source
## Purpose
Brief description of what this software does
## Mode of integration
Describe whether you'd like to integrate this external component in the main tree
or as a module, and why. If the mode of integration is a module, suggest a
repository name for the module
## Maintainership
List the person(s) that will be maintaining the integration of this external code
for the foreseeable future. Please use GitHub IDs to identify them. You can
choose to identify a single maintainer only or add collaborators as well
## Pull Request
Pull request (if any) with the actual implementation of the integration, be it
in the main tree or as a module (pointing to your own fork for now). Make sure
the PR is correctly labeled as "DNM"
## Description
Long description that will help reviewers discuss suitability of the
component to solve the problem at hand (there may be better options
available).
What is its primary functionality (e.g., SQLite is a lightweight
database)?
What problem are you trying to solve? (e.g., a state store is
required to maintain ...)
Why is this the right component to solve it? (e.g., SQLite is small,
easy to use, and has a very liberal license.)
## Dependencies
What other components does this package depend on?
Will the Zephyr project have a direct dependency on the component, or
will it be included via an abstraction layer with this component as a
replaceable implementation?
## Revision
Version or SHA you would like to integrate initially
## License
Please use an SPDX identifier (https://spdx.org/licenses/), such as
``BSD-3-Clause``


@@ -1,106 +0,0 @@
name: External Component Integration
description: Propose integration of an external open source component
labels: ["TSC"]
assignees: []
body:
- type: textarea
id: origin
attributes:
label: Origin
description: Name of project hosting the original open source code. Provide a link to the source.
placeholder: e.g., SQLite - https://sqlite.org
validations:
required: true
- type: textarea
id: purpose
attributes:
label: Purpose
description: Brief description of what this software does.
placeholder: |
e.g., A small, fast, self-contained SQL database engine.
validations:
required: true
- type: textarea
id: integration-mode
attributes:
label: Mode of Integration
description: Should this be integrated in the main tree or as a module? Explain your choice and suggest a module repo name if applicable.
placeholder: |
e.g., As a module - proposed repo name: zephyr-sqlite
validations:
required: true
- type: textarea
id: maintainership
attributes:
label: Maintainership
description: List maintainers (GitHub IDs) for this integration. Include at least one primary maintainer.
placeholder: |
e.g., @username1 (primary), @username2 (collaborator)
validations:
required: true
- type: input
id: pull-request
attributes:
label: Pull Request
description: Link to the pull request (if any) for this integration. Must be labeled "DNM" (Do Not Merge).
placeholder: |
e.g., https://github.com/zephyrproject-rtos/zephyr/pull/12345
validations:
required: false
- type: textarea
id: description
attributes:
label: Description
description: Long-form description to justify suitability of this component.
placeholder: |
- What is its primary functionality?
- What problem does it solve?
- Why is this the right component?
validations:
required: true
- type: textarea
id: security
attributes:
label: Security
description: Security-related aspects of this component, including cryptographic functions and known vulnerabilities.
placeholder: |
- Does it use cryptography?
- How are vulnerabilities handled?
- Any known CVEs?
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: What does this component depend on, and how will it be integrated (directly or via abstraction)?
placeholder: |
- Other external packages?
- Direct or abstracted use in Zephyr?
validations:
required: false
- type: input
id: revision
attributes:
label: Version or SHA
description: Which version or specific commit should be initially integrated?
placeholder: e.g., v3.45.0 or 79cc94d
validations:
required: true
- type: input
id: license
attributes:
label: License (SPDX)
description: Provide the license using a valid SPDX identifier (e.g., BSD-3-Clause).
placeholder: e.g., MIT or BSD-3-Clause
validations:
required: true

.github/ISSUE_TEMPLATE/008_bin-blobs.md

@@ -0,0 +1,41 @@
---
name: Binary blobs
about: Submit a proposal to integrate binary blob(s)
title: ''
labels: TSC
assignees: ''
---
## Origin
Describe where the binary blob(s) originate from
## Type
- [ ] Precompiled library
- [ ] Firmware image
## Module
The Zephyr module that this blob(s) will be referenced from
## Purpose
Brief description of what the blob(s) do. It is especially important to describe
the functionality that the blob(s) provide, to the largest extent possible
## Pull Request
Link to the Pull request with the actual implementation of the integration. If
you are submitting this as part of a new module you may link to the same Pull
Request that introduced the new module.
## Dependencies
What other components do the blob(s) depend on, if any?
## License
Document the license the blob(s) are distributed under


@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Zephyr Community Support
url: https://github.com/zephyrproject-rtos/zephyr/discussions
about: Please ask and answer questions here.

.github/SECURITY.md

@@ -8,12 +8,12 @@ updates:
- The most recent release, and the release prior to that.
- Active LTS releases.
At this time, with the latest release of v4.3, the supported
At this time, with the latest release of v3.6, the supported
versions are:
- v4.3: Current release
- v4.2: Prior release
- v3.7: Current LTS
- v3.6: Prior release
- v2.7: Prior LTS
## Reporting process


@@ -1,2 +0,0 @@
paths:
- .github


@@ -1,2 +0,0 @@
paths:
- doc


@@ -1,26 +0,0 @@
version: 2
enable-beta-ecosystems: true
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
commit-message:
prefix: "ci: github: "
labels: []
groups:
actions-deps:
patterns:
- "*"
- package-ecosystem: "uv"
directory: "/doc"
schedule:
interval: "weekly"
commit-message:
prefix: "ci: doc: "
labels: []
groups:
doc-deps:
patterns:
- "*"


@@ -1,4 +1,4 @@
name: Pull Request/Issue Assigner
name: Pull Request Assigner
on:
pull_request_target:
@@ -15,74 +15,43 @@ on:
types:
- labeled
permissions:
contents: read
jobs:
assignment:
name: Pull Request/Issue Assignment
name: Pull Request Assignment
if: github.event.pull_request.draft == false
runs-on: ubuntu-24.04
permissions:
issues: write # to add assignees to issues
runs-on: ubuntu-22.04
steps:
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: zephyrproject-rtos/action-python-env@32e53bef090c33d53aa94f1d9a9d29c93cfdc5f7 # main
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Fetch west.yml/Maintainer.yml from pull request
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
- name: Install Python dependencies
run: |
git fetch origin pull/${{ github.event.pull_request.number }}/merge
git show FETCH_HEAD:west.yml > pr_west.yml
git show FETCH_HEAD:MAINTAINERS.yml > pr_MAINTAINERS.yml
sudo pip3 install -U setuptools wheel pip
pip3 install -U PyGithub>=1.55 west
- name: west setup
if: >
github.event_name == 'pull_request_target'
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Check out source code
uses: actions/checkout@v4
- name: Run assignment script
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
FLAGS="-v"
FLAGS+=" -o ${{ github.event.repository.owner.login }}"
FLAGS+=" -r ${{ github.event.repository.name }}"
FLAGS+=" -M MAINTAINERS.yml"
if [ "${{ github.event_name }}" = "pull_request_target" ]; then
if [ "${{ github.base_ref }}" != "main" ]; then
FLAGS+=" -P ${{ github.event.pull_request.number }} --updated-manifest pr_west.yml --updated-maintainer-file pr_MAINTAINERS.yml"
else
FLAGS+=" -P ${{ github.event.pull_request.number }}"
fi
elif [ "${{ github.event_name }}" = "issues" ]; then
FLAGS+=" -I ${{ github.event.issue.number }}"
FLAGS+=" -I ${{ github.event.issue.number }}"
elif [ "${{ github.event_name }}" = "schedule" ]; then
FLAGS+=" --modules"
FLAGS+=" --modules"
else
echo "Unknown event: ${{ github.event_name }}"
exit 1
fi
python3 scripts/ci/set_assignees.py $FLAGS
- name: Check maintainer file changes
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
run: |
python ./scripts/ci/check_maintainer_changes.py \
--repo zephyrproject-rtos/zephyr MAINTAINERS.yml pr_MAINTAINERS.yml
python3 scripts/set_assignees.py $FLAGS


@@ -7,17 +7,10 @@ on:
branches:
- main
permissions:
contents: read
jobs:
backport:
name: Backport
runs-on: ubuntu-24.04
permissions:
contents: write # to create/push backport branches
pull-requests: write # to create backport PRs
issues: write # to add labels to issue created if backport fails
runs-on: ubuntu-22.04
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
@@ -31,8 +24,8 @@ jobs:
)
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@7e74f601d11eaca577742445e87775b5651a965f # v2.0.3-3
uses: zephyrproject-rtos/action-backport@v2.0.3-3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_token: ${{ secrets.ZB_GITHUB_TOKEN }}
issue_labels: Backport
labels_template: '["Backport"]'


@@ -10,41 +10,30 @@ on:
branches:
- v*-branch
permissions:
contents: read
jobs:
backport:
name: Backport Issue Check
concurrency:
group: backport-issue-check-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
issues: read # to check if associated issue exists for backport
steps:
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}


@@ -5,26 +5,20 @@ on:
workflows: ["BabbleSim Tests"]
types:
- completed
permissions:
contents: read
jobs:
bsim-test-results:
name: "Publish BabbleSim Test Results"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.event.workflow_run.conclusion != 'skipped'
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v6
with:
run_id: ${{ github.event.workflow_run.id }}
- name: Publish BabbleSim Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: BabbleSim Test Results
comment_mode: off


@@ -10,15 +10,14 @@ on:
- "tests/bsim/**"
- "boards/nordic/nrf5*/*dt*"
- "dts/*/nordic/**"
- "tests/bluetooth/**"
- "tests/bluetooth/common/testlib/**"
- "samples/bluetooth/**"
- "boards/native/**"
- "soc/native/**"
- "boards/posix/**"
- "soc/posix/**"
- "arch/posix/**"
- "include/zephyr/arch/posix/**"
- "scripts/native_simulator/**"
- "samples/net/sockets/echo_*/**"
- "modules/hal_nordic/**"
- "modules/mbedtls/**"
- "modules/openthread/**"
- "subsys/net/l2/openthread/**"
@@ -27,10 +26,6 @@ on:
- "include/zephyr/net/ieee802154*"
- "drivers/serial/*nrfx*"
- "tests/drivers/uart/**"
- '!**.rst'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -42,16 +37,13 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -74,7 +66,7 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -85,7 +77,6 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
@@ -97,20 +88,16 @@ jobs:
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Check common triggering files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v45
id: check-common-files
with:
files: |
.github/workflows/bsim-tests.yaml
.github/workflows/bsim-tests-publish.yaml
west.yml
boards/native/
soc/native/
boards/posix/
soc/posix/
arch/posix/
include/zephyr/arch/posix/
scripts/native_simulator/
@@ -118,20 +105,18 @@ jobs:
boards/nordic/nrf5*/*dt*
dts/*/nordic/
modules/mbedtls/**
modules/hal_nordic/**
- name: Check if Bluetooth files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v45
id: check-bluetooth-files
with:
files: |
tests/bsim/bluetooth/
samples/bluetooth/
subsys/bluetooth/
tests/bluetooth/
tests/bsim/bluetooth/
- name: Check if Networking files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v45
id: check-networking-files
with:
files: |
@@ -144,7 +129,7 @@ jobs:
include/zephyr/net/ieee802154*
- name: Check if UART files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v45
id: check-uart-files
with:
files: |
@@ -184,12 +169,13 @@ jobs:
- name: Merge Test Results
run: |
pip3 install junitparser junit2html
junitparser merge --glob "./bsim_*/*bsim_results.*.xml" "./twister-out/twister.xml" junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: HTML Unit Test Results
if-no-files-found: ignore
@@ -197,7 +183,7 @@ jobs:
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Bsim Test Results
files: "junit.xml"
@@ -205,7 +191,7 @@ jobs:
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: event
path: |


@@ -8,34 +8,25 @@ name: Bug Snapshot
on:
workflow_dispatch:
branches: [main]
schedule:
# Run daily at 00:05
- cron: '5 00 * * *'
permissions:
contents: read
# Run daily at 14:05
- cron: '5 14 * * *'
jobs:
make_bugs_pickle:
name: Make bugs pickle
runs-on: ubuntu-24.04
if: github.repository_owner == 'zephyrproject-rtos' && github.ref == 'refs/heads/main'
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Snapshot bugs
env:
@@ -51,7 +42,7 @@ jobs:
echo "BUGS_PICKLE_PATH=${BUGS_PICKLE_PATH}" >> ${GITHUB_ENV}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_SECRET_ACCESS_KEY }}


@@ -1,12 +1,6 @@
name: Build with Clang/LLVM
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -18,19 +12,21 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
matrix:
subset: [1, 2]
platform: ["native_sim"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-16
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -53,8 +49,9 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -64,7 +61,7 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
@@ -86,10 +83,6 @@ jobs:
gcc --version
ls -la
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
@@ -113,8 +106,18 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
./scripts/twister -p native_sim --force-color --inline-logs -M -N -v --retry-failed 2 \
-T tests --subset ${{matrix.subset}}/2 -j 16
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: Print ccache stats
if: always()
@@ -122,53 +125,31 @@ jobs:
ccache -s -vv
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v4
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
path: |
twister-out/twister.xml
twister-out/twister.json
if-no-files-found: ignore
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
clang-build-results:
name: "Publish Unit Tests Results"
needs: clang-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create GitHub annotations
if: (success() || failure())
runs-on: ubuntu-22.04
if: (success() || failure() ) && needs.clang-build.outputs.report_needed != 0
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v4
with:
path: artifacts
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/twister.xml junit.xml
junit2html junit.xml junit-clang.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: HTML Unit Test Results
if-no-files-found: ignore
@@ -176,7 +157,7 @@ jobs:
junit-clang.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
check_name: Unit Test Results


@@ -1,14 +1,8 @@
name: Code Coverage with codecov
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
schedule:
- cron: '25 06,18 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -20,7 +14,7 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
@@ -67,21 +61,10 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
@@ -102,37 +85,30 @@ jobs:
ccache -p
ccache -z -s -vv
- name: Update BabbleSim to manifest revision
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Tests with Twister (Push)
continue-on-error: true
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
./scripts/twister --save-tests ${{matrix.normalized}}-testplan.json
pip3 install gcovr==6.0
./scripts/twister -E ${{matrix.normalized}}-testplan.json
ls -la
./scripts/twister \
-i --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage \
-T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano \
--timeout-multiplier 2
- name: Build Doxygen Coverage
if: matrix.platform == 'unit_testing'
run: |
pip install -r doc/requirements.txt --require-hashes
sudo apt-get update
sudo apt-get install -y graphviz # dot is needed but currently missing from the Docker image
cmake -B doc/_build -S doc
cmake --build doc/_build --target doxygen-coverage
- name: Upload Doxygen Coverage Results
if: matrix.platform == 'unit_testing'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: doxygen-coverage-results
path: |
doc/_build/new.info
doc/_build/coverage-report
- name: Print ccache stats
if: always()
run: |
@@ -145,7 +121,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: Coverage Data (Subset ${{ matrix.normalized }})
path: |
@@ -155,29 +131,18 @@ jobs:
codecov-results:
name: "Publish Coverage Results"
needs: codecov
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
# the codecov job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v4
with:
path: coverage/reports
@@ -217,6 +182,7 @@ jobs:
- name: Merge coverage files
run: |
pushd ./coverage/reports
pip3 install gcovr==6.0
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --json merged.json
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --cobertura merged.xml
popd
@@ -232,6 +198,7 @@ jobs:
- name: Generate Coverage Report
if: always()
run: |
pip install xlsxwriter ijson
python3 ./scripts/ci/coverage/coverage_analysis.py \
-t native_sim-testplan.json \
-m MAINTAINERS.yml \
@@ -242,7 +209,7 @@ jobs:
- name: Upload Merged Coverage Results and Report
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: Coverage Data and report
path: |
@@ -251,25 +218,12 @@ jobs:
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.json
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.xlsx
- name: Upload test coverage to Codecov
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@v4
with:
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/merged.xml
flags: unittests-coverage
- name: Upload Doxygen coverage to Codecov
if: always()
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
with:
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/doxygen-coverage-results/new.info
disable_search: true
flags: doxygen-coverage


@@ -1,58 +0,0 @@
name: "CodeQL"
on:
push:
branches:
- main
- v*-branch
- collab-*
schedule:
- cron: '34 16 * * 6'
pull_request:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
jobs:
analyze:
name: Analyze (${{ matrix.language }})
runs-on: ubuntu-24.04
permissions:
security-events: write
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
- language: actions
build-mode: none
config: ./.github/codeql/codeql-actions-config.yml
- language: javascript-typescript
build-mode: none
config: ./.github/codeql/codeql-js-config.yml
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
queries: security-extended
config-file: ${{ matrix.config }}
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo "nothing yet"
exit 0
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
category: "/language:${{matrix.language}}"


@@ -2,37 +2,35 @@ name: Coding Guidelines
on: pull_request
permissions:
contents: read
jobs:
compliance_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: cache-pip
uses: actions/cache@v4
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('.github/workflows/coding_guidelines.yml') }}
- name: Install Python packages
- name: Install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install unidiff
pip3 install wheel
pip3 install sh
- name: Install Packages
run: |
sudo apt-get update
sudo apt-get install coccinelle
- name: Run Coding Guidelines Checks
- name: Run Coding Guildeines Checks
continue-on-error: true
id: coding_guidelines
env:
@@ -42,8 +40,6 @@ jobs:
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
source zephyr-env.sh


@@ -8,12 +8,9 @@ on:
- reopened
- synchronize
permissions:
contents: read
jobs:
check_compliance:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -21,12 +18,30 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase onto the target branch
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.11
- name: cache-pip
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('.github/workflows/compliance.yml') }}
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic lxml junitparser gitlint pylint pykwalify yamllint clang-format unidiff sphinx-lint
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
@@ -36,40 +51,21 @@ jobs:
# Ensure there's no merge commits in the PR
[[ "$(git rev-list --merges --count origin/${BASE_REF}..)" == "0" ]] || \
(echo "::error ::Merge commits not allowed, rebase instead";false)
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
# debug
git log --pretty=oneline | head -n 10
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west config manifest.group-filter -- +ci,-optional
west update -o=--depth=1 -n 2>&1 1> west.update.log || west update -o=--depth=1 -n 2>&1 1> west.update2.log
- name: Setup Node.js
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: "lts/*"
cache: npm
check-latest: true
cache-dependency-path: ./scripts/ci/package-lock.json
- name: Install Node dependencies
run: npm --prefix ./scripts/ci ci
- name: Check for PR description
if: ${{ github.event.pull_request.body == '' }}
continue-on-error: true
id: pr_description
run: |
echo "Pull request description cannot be empty."
exit 1
- name: Run Compliance Tests
continue-on-error: true
@@ -83,35 +79,23 @@ jobs:
git log --pretty=oneline | head -n 10
# Increase rename limit to allow for large PRs
git config diff.renameLimit 10000
excludes="-e KconfigBasic -e SysbuildKconfigBasic -e ClangFormat"
# The signed-off-by check for dependabot should be skipped
if [ "${{ github.actor }}" == "dependabot[bot]" ]; then
excludes="$excludes -e Identity"
fi
./scripts/ci/check_compliance.py --annotate $excludes -c origin/${BASE_REF}..
./scripts/ci/check_compliance.py --annotate -e KconfigBasic \
-c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: compliance.xml
path: compliance.xml
- name: Upload dts linter patch
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
continue-on-error: true
if: hashFiles('dts_linter.patch') != ''
with:
name: dts_linter.patch
path: dts_linter.patch
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
warns=("ClangFormat" "LicenseAndCopyrightCheck")
warns=("ClangFormat")
files=($(./scripts/ci/check_compliance.py -l))
for file in "${files[@]}"; do
@@ -136,3 +120,8 @@ jobs:
echo "You can run this step locally with the ./scripts/ci/check_compliance.py script."
exit 1;
fi
if [ "${{ steps.pr_description.outcome }}" == "failure" ]; then
echo "PR description cannot be empty"
exit 1;
fi


@@ -10,38 +10,28 @@ on:
branches:
- refs/tags/*
permissions:
contents: read
jobs:
get_version:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .


@@ -20,9 +20,6 @@ on:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
permissions:
contents: read
jobs:
devicetree-checks:
name: Devicetree script tests
@@ -33,19 +30,40 @@ jobs:
os: [ubuntu-22.04, macos-14, windows-2022]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v4
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v4
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest pyyaml tox
- name: run tox
working-directory: scripts/dts/python-devicetree
run: |

.github/workflows/do_not_merge.yml

@@ -0,0 +1,20 @@
name: Do Not Merge
on:
pull_request:
types: [synchronize, opened, reopened, labeled, unlabeled]
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') ||
contains(github.event.*.labels.*.name, 'TSC') ||
contains(github.event.*.labels.*.name, 'Architecture Review') ||
contains(github.event.*.labels.*.name, 'dev-review') }}
name: Prevent Merging
runs-on: ubuntu-22.04
steps:
- name: Check for label
run: |
echo "Pull request is labeled as 'DNM', 'TSC', 'Architecture Review' or 'dev-review'."
echo "This workflow fails so that the pull request cannot be merged."
exit 1


@@ -4,35 +4,40 @@
name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
push:
branches:
- main
tags:
- v*
pull_request:
permissions:
contents: read
env:
DOXYGEN_VERSION: 1.15.0
DOXYGEN_SHA256SUM: 0ec2e5b2c3cd82b7106d19cb42d8466450730b8cb7a9e85af712be38bf4523a1
JOB_COUNT: 8
# NOTE: west docstrings will be extracted from the version listed here
WEST_VERSION: 1.2.0
# The latest CMake available directly with apt is 3.18, but we need >=3.20
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.12.0
# Job count is set to 2 less than the vCPU count of 16 because the total available RAM is 32GiB
# and each sphinx-build process may use more than 2GiB of RAM.
JOB_COUNT: 14
jobs:
doc-file-check:
name: Check for doc changes
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: >
github.repository_owner == 'zephyrproject-rtos'
outputs:
file_check: ${{ steps.check-doc-files.outputs.any_modified }}
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Check if Documentation related files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v45
id: check-doc-files
with:
files: |
@@ -43,6 +48,7 @@ jobs:
kernel/include/kernel_arch_interface.h
lib/libc/**
subsys/testsuite/ztest/include/**
tests/
**/Kconfig*
west.yml
scripts/dts/
@@ -55,75 +61,67 @@ jobs:
name: "Documentation Build (HTML)"
needs: [doc-file-check]
if: >
needs.doc-file-check.outputs.file_check == 'true' || github.event_name != 'pull_request'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
options: '--entrypoint /bin/bash'
timeout-minutes: 60
github.repository_owner == 'zephyrproject-rtos' &&
( needs.doc-file-check.outputs.file_check == 'true' || github.event_name != 'pull_request' )
runs-on: ubuntu-22.04
timeout-minutes: 90
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
env:
BASE_REF: ${{ github.base_ref }}
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Print cloud service information
- name: install-pkgs
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
sudo apt-get update
sudo apt-get install -y wget python3-pip git ninja-build graphviz lcov
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
echo "${HOME}/.local/bin" >> $GITHUB_PATH
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: checkout
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Environment Setup
- name: Rebase
if: github.event_name == 'pull_request'
continue-on-error: true
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
west init -l . || true
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: cache-pip
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages required for documentation build
- name: install-pip
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip install -r doc/requirements.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
pip3 install coverxygen
- name: Build HTML documentation
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -149,26 +147,25 @@ jobs:
lcov --remove doc-coverage.info \*/deprecated > new.info
genhtml --no-function-coverage --no-branch-coverage new.info -o coverage-report
- name: Compress documentation build artifacts
- name: compress-docs
run: |
tar --use-compress-program="xz -T0" -cf html-output.tar.xz --exclude html/_sources --exclude html/doxygen/xml --directory=doc/_build html
tar --use-compress-program="xz -T0" -cf html-output.tar.xz --directory=doc/_build html
tar --use-compress-program="xz -T0" -cf api-output.tar.xz --directory=doc/_build html/doxygen/html
tar --use-compress-program="xz -T0" -cf api-coverage.tar.xz coverage-report
- name: Upload HTML output
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- name: upload-build
uses: actions/upload-artifact@v4
with:
name: html-output
path: html-output.tar.xz
- name: Upload Doxygen coverage artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- name: upload-api-coverage
uses: actions/upload-artifact@v4
with:
name: api-coverage
path: api-coverage.tar.xz
- name: Summarize PR documentation URLs
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
@@ -182,8 +179,8 @@ jobs:
echo "API Documentation will be available shortly at: ${API_DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Coverage Report will be available shortly at: ${API_COVERAGE_URL}" >> $GITHUB_STEP_SUMMARY
- name: Upload PR number
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- name: upload-pr-number
uses: actions/upload-artifact@v4
if: github.event_name == 'pull_request'
with:
name: pr_num
@@ -193,57 +190,58 @@ jobs:
name: "Documentation Build (PDF)"
needs: [doc-file-check]
if: |
github.event_name != 'pull_request'
runs-on: ubuntu-24.04
github.event_name != 'pull_request' &&
github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-22.04
container: texlive/texlive:latest
timeout-minutes: 120
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: Apply container owner mismatch workaround
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: 3.12
cache: pip
cache-dependency-path: doc/requirements.txt
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends graphviz librsvg2-bin \
texlive-latex-base texlive-latex-extra latexmk \
texlive-fonts-recommended texlive-fonts-extra texlive-xetex \
imagemagick fonts-noto xindy
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
echo "${DOXYGEN_SHA256SUM} doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz" | sha256sum -c
if [ $? -ne 0 ]; then
echo "Failed to verify doxygen tarball"
exit 1
fi
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin imagemagick
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
- name: cache-pip
uses: actions/cache@v4
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
enable-ccache: false
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: install-pip-pkgs
working-directory: zephyr
- name: setup-venv
run: |
pip install -r doc/requirements.txt --require-hashes
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
working-directory: zephyr
continue-on-error: true
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -259,13 +257,13 @@ jobs:
- name: upload-build
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: pdf-output
if-no-files-found: ignore
path: |
zephyr/doc/_build/latex/zephyr.pdf
zephyr/doc/_build/latex/zephyr.log
doc/_build/latex/zephyr.pdf
doc/_build/latex/zephyr.log
doc-build-status-check:
if: always()


@@ -10,13 +10,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -25,7 +22,7 @@ jobs:
steps:
- name: Download artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v6
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
@@ -33,7 +30,7 @@ jobs:
- name: Load PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@v7
with:
script: |
let fs = require("fs");
@@ -43,7 +40,7 @@ jobs:
- name: Check PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
id: check-pr
uses: carpentries/actions/check-valid-pr@2e20fd5ee53b691e27455ce7ca3b16ea885140e8 # v0.15.0
uses: carpentries/actions/check-valid-pr@v0.14.0
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
@@ -66,7 +63,7 @@ jobs:
- name: Configure AWS Credentials
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}


@@ -13,13 +13,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -27,7 +24,7 @@ jobs:
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v6
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
@@ -40,7 +37,7 @@ jobs:
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}


@@ -5,32 +5,30 @@ on:
- '.github/workflows/errno.yml'
- 'lib/libc/minimal/include/errno.h'
- 'scripts/ci/errno.py'
- 'SDK_VERSION'
permissions:
contents: read
jobs:
check-errno:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
runs-on: ubuntu-22.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.27.4
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
west-group-filter: -hal,-tools,-bootloader,-babblesim
west-project-filter: -nrf_hw_models
enable-ccache: false
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v4
- name: Environment Setup
run: |
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Run errno.py
working-directory: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
export ZEPHYR_BASE=${PWD}
./scripts/ci/errno.py


@@ -1,8 +1,9 @@
name: Footprint Tracking
# Run every 12 hours and on tags
on:
schedule:
- cron: '50 00 * * *'
- cron: '50 1/12 * * *'
push:
paths:
- 'VERSION'
@@ -15,9 +16,6 @@ on:
# same commit
- 'v*'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
@@ -28,7 +26,7 @@ jobs:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
defaults:
run:
@@ -60,24 +58,14 @@ jobs:
run: |
sudo apt-get update
sudo apt-get install -y python3-venv
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Environment Setup
run: |
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
@@ -89,7 +77,7 @@ jobs:
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
@@ -106,6 +94,7 @@ jobs:
run: |
python3 -m venv .venv
. .venv/bin/activate
pip3 install awscli
aws s3 sync --quiet footprint_data/ s3://testing.zephyrproject.org/footprint_data/
- name: Transform Footprint data to Twister JSON reports
@@ -124,6 +113,7 @@ jobs:
ELASTICSEARCH_INDEX: ${{ vars.FOOTPRINT_TRACKING_INDEX }}
run: |
shopt -s globstar
pip3 install -U elasticsearch
run_date=`date --iso-8601=minutes`
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--flatten footprint \


@@ -6,20 +6,14 @@ on:
pull_request_target:
types: [opened, closed]
permissions:
contents: read
jobs:
check_for_first_interaction:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on pull requests
issues: write # to comment on issues
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: zephyrproject-rtos/action-first-interaction@58853996b1ac504b8e0f6964301f369d2bb22e5c # v1.1.1+zephyr.6
- uses: actions/checkout@v4
- uses: zephyrproject-rtos/action-first-interaction@v1.1.1-zephyr-5
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
@@ -45,7 +39,7 @@ jobs:
and update (by amending and force-pushing the commits) your pull request if necessary.
If you are stuck or need help please join us on [Discord](https://chat.zephyrproject.org/)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-technical-escalation)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-review-escalation)
when applicable. 😊
pr-merged-message: >


@@ -12,13 +12,11 @@ on:
- v*-branch
- collab-*
paths:
- 'scripts/**'
- 'scripts/build/**'
- 'scripts/requirements*.txt'
- '.github/workflows/hello_world_multiplatform.yaml'
- 'SDK_VERSION'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
@@ -28,11 +26,11 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, ubuntu-24.04, ubuntu-24.04-arm, macos-14, windows-2022]
os: [ubuntu-22.04, ubuntu-24.04, macos-13, macos-14, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
path: zephyr
fetch-depth: 0
@@ -47,23 +45,20 @@ jobs:
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: 3.12
python-version: 3.11
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
uses: zephyrproject-rtos/action-zephyr-setup@v1
with:
app-path: zephyr
toolchains: aarch64-zephyr-elf:arc-zephyr-elf:arc64-zephyr-elf:arm-zephyr-eabi:mips-zephyr-elf:riscv64-zephyr-elf:sparc-zephyr-elf:x86_64-zephyr-elf:xtensa-dc233c_zephyr-elf:xtensa-sample_controller32_zephyr-elf:rx-zephyr-elf
ccache-cache-key: hw-${{ matrix.os }}
toolchains: all
- name: Build firmware
working-directory: zephyr
@@ -73,14 +68,12 @@ jobs:
EXTRA_TWISTER_FLAGS="-P native_sim --build-only"
elif [ "${{ runner.os }}" = "Windows" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --short-build-path -O/tmp/twister-out"
elif [ "${{ runner.os }}-${{ runner.arch }}" == "Linux-ARM64" ]; then
EXTRA_TWISTER_FLAGS="--exclude-platform native_sim/native"
fi
west twister --runtime-artifact-cleanup --force-color --inline-logs -T samples/hello_world -T samples/cpp/hello_world -v $EXTRA_TWISTER_FLAGS
./scripts/twister --force-color --inline-logs -T samples/hello_world -T samples/cpp/hello_world -v $EXTRA_TWISTER_FLAGS
- name: Upload artifacts
if: failure()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
if-no-files-found: ignore
path:


@@ -2,10 +2,7 @@ name: Issue Tracker
on:
schedule:
- cron: '10 1/4 * * *'
permissions:
contents: read
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
@@ -17,7 +14,7 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -30,7 +27,7 @@ jobs:
sudo apt-get update
sudo apt-get install discount
- uses: brcrista/summarize-issues@54c549b7d38b7db39e5c6e06fd9617e12e5c3491 # v4
- uses: brcrista/summarize-issues@v4
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
@@ -38,14 +35,14 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}


@@ -2,25 +2,22 @@ name: Scancode
on: [pull_request]
permissions:
contents: read
jobs:
scancode_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@23ef91ce31cd4b954366a7b71eea47520da9b380 # v4
uses: zephyrproject-rtos/action_scancode@v4
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: scancode
path: ./artifacts


@@ -2,20 +2,16 @@ name: Manifest
on:
pull_request_target:
permissions:
contents: read
jobs:
contribs:
runs-on: ubuntu-24.04
permissions:
pull-requests: write # to create/update pull request comments
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -24,15 +20,15 @@ jobs:
BASE_REF: ${{ github.base_ref }}
working-directory: zephyrproject/zephyr
run: |
pip install -r scripts/requirements-west.txt --require-hashes
pip3 install west
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Manifest
uses: zephyrproject-rtos/action-manifest@09983f53d3d878791aa37a7755ae44d695f4c1e5 # v2.0.0
uses: zephyrproject-rtos/action-manifest@v1.3.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-token: ${{ secrets.ZB_GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
use-tree-checkout: 'true'
@@ -40,6 +36,4 @@ jobs:
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest'
dnm-labels: 'DNM (manifest)'
blobs-added-labels: 'Binary Blobs Added'
blobs-modified-labels: 'Binary Blobs Modified'
dnm-labels: 'DNM'


@@ -1,19 +0,0 @@
name: Check SHA-pinned GitHub Actions
on:
pull_request:
paths:
- '.github/workflows/**'
permissions:
contents: read
jobs:
check-sha-pinned-actions:
name: Verify GitHub Actions
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@9e9574ef04ea69da568d6249bd69539ccc704e74 # v4.0.0


@@ -1,47 +0,0 @@
name: PR Metadata Check
on:
pull_request:
types:
- synchronize
- opened
- reopened
- labeled
- unlabeled
- edited
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
do-not-merge:
name: Prevent Merging
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run the check script
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python -u scripts/ci/do_not_merge.py \
-p "${{ github.event.pull_request.number }}" \
-o "${{ github.event.repository.owner.login }}" \
-r "${{ github.event.repository.name }}"


@@ -19,9 +19,6 @@ on:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
permissions:
contents: read
jobs:
pylib-tests:
name: Misc. Pylib Unit Tests
@@ -29,21 +26,25 @@ jobs:
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest for build_helpers
env:
ZEPHYR_BASE: ./


@@ -7,9 +7,6 @@ on:
type: string
required: true
permissions:
contents: read
jobs:
all_jobs_passed:
name: all jobs passed


@@ -6,16 +6,11 @@ on:
- 'v*'
- '!v*rc*'
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-24.04
permissions:
contents: write # to create GitHub release entry
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -26,12 +21,12 @@ jobs:
echo "TRIMMED_VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@676e2d560c9a403aa252096d99fcab3e1132b0f5 # v6.0.0
uses: fsfe/reuse-action@v4
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
continue-on-error: true
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
@@ -43,7 +38,7 @@ jobs:
- name: Create Release
id: create_release
uses: actions/create-release@0cb9c9b65d5d1901c1f53e5e66eaf4afd303e70e # v1.1.4
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -55,7 +50,7 @@ jobs:
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@e8f9f06c4b078e705bd2ea027f0926603fc9b4d5 # v1.0.2
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -29,12 +29,12 @@ jobs:
steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4.1.7
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
uses: ossf/scorecard-action@62b2cac7ed8198b15735ed49ab1e5cf35480ba46 # v2.4.0
with:
results_file: results.sarif
results_format: sarif
@@ -47,7 +47,7 @@ jobs:
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@89ef406dd8d7e03cfd12d9e0a4a378f454709029 # v4.3.5
with:
name: SARIF file
path: results.sarif
@@ -56,6 +56,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
uses: github/codeql-action/upload-sarif@afb54ba388a7dca6ecae48f608c4ff05ff4cc77a # v3.25.15
with:
sarif_file: results.sarif


@@ -19,9 +19,6 @@ on:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
permissions:
contents: read
jobs:
scripts-tests:
name: Scripts tests
@@ -29,10 +26,10 @@ jobs:
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -45,22 +42,27 @@ jobs:
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest
env:


@@ -2,13 +2,11 @@ name: Stale Workflow Queue Cleanup
on:
workflow_dispatch:
branches: [main]
schedule:
# everyday at 15:00
- cron: '0 15 * * *'
permissions:
contents: read
concurrency:
group: stale-workflow-queue-cleanup
cancel-in-progress: true
@@ -16,14 +14,11 @@ concurrency:
jobs:
cleanup:
name: Cleanup
runs-on: ubuntu-24.04
if: github.ref == 'refs/heads/main'
permissions:
actions: write # to delete stale workflow runs
runs-on: ubuntu-22.04
steps:
- name: Delete stale queued workflow runs
uses: MajorScruffy/delete-old-workflow-runs@78b5af714fefaefdf74862181c467b061782719e # v0.3.0
uses: MajorScruffy/delete-old-workflow-runs@v0.3.0
with:
repository: ${{ github.repository }}
# Remove any workflow runs in "queued" state for more than 1 day


@@ -3,20 +3,13 @@ on:
schedule:
- cron: "16 00 * * *"
permissions:
contents: read
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on stale pull requests
issues: write # to comment on stale issues
steps:
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
- uses: actions/stale@v9
with:
stale-pr-message: 'This pull request has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you


@@ -6,28 +6,13 @@ on:
- main
- v*-branch
types: [closed]
permissions:
contents: read
jobs:
record_merged:
if: github.event.pull_request.merged == true && github.repository == 'zephyrproject-rtos/zephyr'
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
uses: actions/checkout@v4
- name: PR event
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -35,4 +20,5 @@ jobs:
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
PR_STAT_ES_INDEX: ${{ vars.PR_STAT_ES_INDEX }}
run: |
pip3 install pygithub elasticsearch
python3 ./scripts/ci/stats/merged_prs.py --pull-request ${{ github.event.pull_request.number }} --repo ${{ github.repository }}


@@ -1,64 +0,0 @@
name: Publish Twister Test Results
on:
workflow_run:
workflows: ["Run tests with twister"]
branches:
- main
types:
- completed
permissions:
contents: read
jobs:
upload-to-elasticsearch:
if: |
github.repository == 'zephyrproject-rtos/zephyr' &&
github.event.workflow_run.event != 'pull_request'
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
runs-on: ubuntu-24.04
steps:
# Needed for elasticearch and upload script
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
with:
path: artifacts
workflow: twister.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Upload to elasticsearch
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event.workflow_run.event}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event.workflow_run.event}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi


@@ -6,17 +6,14 @@ on:
- main
- v*-branch
- collab-*
pull_request:
pull_request_target:
branches:
- main
- v*-branch
- collab-*
schedule:
# Run at 02:00 UTC on every Sunday
- cron: '0 2 * * 0'
permissions:
contents: read
# Run at 03:00 UTC on every Sunday
- cron: '0 3 * * 0'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -25,7 +22,11 @@ concurrency:
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-24.04
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -33,58 +34,61 @@ jobs:
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 20
WEEKLY_MATRIX_SIZE: 200
DAILY_MATRIX_SIZE: 80
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TESTS_PER_BUILDER: 900
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request'
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: zephyr
persist-credentials: false
- name: Set up Python
if: github.event_name == 'pull_request'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: install-packages
working-directory: zephyr
if: github.event_name == 'pull_request'
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Setup Zephyr project
if: github.event_name == 'pull_request'
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
enable-ccache: false
- name: Environment Setup
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Generate Test Plan with Twister
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
export ZEPHYR_BASE=${PWD}
@@ -100,23 +104,22 @@ jobs:
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" -a "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
subset="[$(seq -s',' 1 ${WEEKLY_MATRIX_SIZE})]"
size=${WEEKLY_MATRIX_SIZE}
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
@@ -127,7 +130,7 @@ jobs:
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.27.4.20241026
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
@@ -142,13 +145,12 @@ jobs:
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --test-config tests/test_config_ci.yaml --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
WEEKLY_OPTIONS: ' -M --build-only --all --show-footprint --report-filtered -j 32'
PR_OPTIONS: ' --clobber-output --integration -j 16'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint --report-filtered -j 16'
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
DAILY_OPTIONS: ' -M --build-only --all --show-footprint'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint --report-filtered'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
steps:
- name: Print cloud service information
run: |
@@ -171,7 +173,7 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -179,11 +181,10 @@ jobs:
- name: Environment Setup
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
@@ -192,7 +193,7 @@ jobs:
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
west init -l . || true
west config manifest.group-filter -- +ci,+optional,+testing
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
@@ -210,11 +211,6 @@ jobs:
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
west packages pip --install
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
@@ -234,7 +230,6 @@ jobs:
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
id: run_twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
@@ -246,9 +241,8 @@ jobs:
fi
fi
- if: github.event_name == 'pull_request'
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
id: run_twister_pr
run: |
rm -f testplan.json
export ZEPHYR_BASE=${PWD}
@@ -263,16 +257,15 @@ jobs:
fi
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Weekly)
id: run_twister_sched
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${DAILY_OPTIONS}
fi
fi
@@ -283,7 +276,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -301,11 +294,11 @@ jobs:
timestamp="$(date)"
version="$(git describe --abbrev=12 --always)"
echo -e "# Generated at $timestamp ($version)\n" > $FREEZE_FILE
pip freeze | tee -a $FREEZE_FILE
pip3 freeze | tee -a $FREEZE_FILE
- if: matrix.subset == 1 && github.event_name == 'push'
name: Upload the list of Python packages
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: Frozen PIP package set
path: |
@@ -313,84 +306,63 @@ jobs:
twister-test-results:
name: "Publish Unit Tests Results"
needs:
- twister-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create the check run entry with Twister test results
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
needs: twister-build
runs-on: ubuntu-22.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Needed for elasticearch and upload script
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Checkout
uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v4
with:
path: artifacts
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Upload to elasticsearch
run: |
pip3 install elasticsearch
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event_name}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event_name}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/*/twister.xml junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v4
with:
name: Unit Test Results
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
junit.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() && (github.event_name == 'push') }}
uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Unit Test Results
files: "**/twister.xml"
comment_mode: off
- name: Analyze Twister Reports
if: needs.twister-build.result == 'failure'
run: |
./scripts/ci/twister_report_analyzer.py artifacts/*/*/twister.json --long-summary --platforms --output-md errors.md
if [[ -s "errors.md" ]]; then
echo '### Error Summary! 🚀' >> $GITHUB_STEP_SUMMARY
cat errors.md >> $GITHUB_STEP_SUMMARY
fi
- name: Upload Twister Analysis Results
if: needs.twister-build.result == 'failure'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: Twister Analysis Results
if-no-files-found: ignore
path: |
twister_report_summary.json
twister-status-check:
if: always()
name: "Check Twister Status"


@@ -26,9 +26,6 @@ on:
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
permissions:
contents: read
jobs:
twister-tests:
name: Twister Unit Tests
@@ -36,22 +33,25 @@ jobs:
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt -r scripts/requirements-run-test.txt
- name: Run pytest for twisterlib
env:
ZEPHYR_BASE: ./


@@ -15,54 +15,65 @@ on:
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/twister_tests_blackbox.yml'
permissions:
contents: read
env:
PYTHONIOENCODING: utf-8
jobs:
twister-tests:
name: Twister Black Box Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04, macos-14, windows-2022]
fail-fast: false
runs-on: ${{ matrix.os }}
os: [ubuntu-22.04]
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.27.4
steps:
- name: Apply Container Owner Mismatch Workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
fetch-depth: 0
uses: actions/checkout@v4
- name: Environment Setup
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
# we do not depend on any hals, tools or bootloader, save some time and space...
west config manifest.group-filter -- -hal,-tools,-bootloader
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
toolchains: all
enable-ccache: false
west-group-filter: -tools,-bootloader,-babblesim,-hal
west-project-filter: -nrf_hw_models,+cmsis,+hal_xtensa,+cmsis_6
- name: Go Into Venv
shell: bash
run: |
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
echo "$(which python)"
- name: Install Packages
run: |
python3 -m pip install -U -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt -r scripts/requirements-run-test.txt
- name: Run Pytest For Twister Black Box Tests
if: ${{ runner.os == 'Linux' }}
working-directory: zephyr
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
sudo apt-get install -y lcov
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox/
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox


@@ -23,11 +23,8 @@ on:
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
permissions:
contents: read
jobs:
west-commands:
west-commnads:
name: West Command Tests
runs-on: ${{ matrix.os }}
strategy:
@@ -36,24 +33,44 @@ jobs:
os: [ubuntu-22.04, macos-14, windows-2022]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v4
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v4
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest west pyelftools canopen natsort progress mypy intelhex psutil ply pyserial anytree
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |

.gitignore

@@ -1,6 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The Zephyr Project Contributors
*.o
*.a
*.d
@@ -58,7 +55,6 @@ doc/doc.warnings
hide-defaults-note
venv
.venv
.envrc
.DS_Store
.clangd
new.info
@@ -76,7 +72,6 @@ target/
# CI output
compliance.xml
dts_linter.patch
_error.types
# Tag files
@@ -89,13 +84,11 @@ tags
.idea
# from check_compliance.py
# zephyr-keep-sorted-start
BinaryFiles.txt
BoardYml.txt
Checkpatch.txt
ClangFormat.txt
DevicetreeBindings.txt
DevicetreeLinting.txt
GitDiffCheck.txt
Gitlint.txt
Identity.txt
@@ -105,21 +98,10 @@ KconfigBasic.txt
KconfigBasicNoModules.txt
KconfigHWMv2.txt
KeepSorted.txt
LicenseAndCopyrightCheck.txt
MaintainersFormat.txt
ModulesMaintainers.txt
Nits.txt
Pylint.txt
PythonCompat.txt
Ruff.txt
SphinxLint.txt
SysbuildKconfig.txt
SysbuildKconfigBasic.txt
SysbuildKconfigBasicNoModules.txt
TextEncoding.txt
YAMLLint.txt
ZephyrModuleFile.txt
# zephyr-keep-sorted-stop
# Node dependecies
node_modules


@@ -1,7 +1,6 @@
# All these sections are optional, edit this file as you like.
# Zephyr-specific defaults are located in scripts/gitlint/zephyr_commit_rules.py
[general]
regex-style-search=true
ignore=title-trailing-punctuation, T3, title-max-length, T1, body-hard-tab, B3, B1
# verbosity should be a value between 1 and 3, the commandline -v flags take precedence over this
verbosity = 3
@@ -60,7 +59,3 @@ ignore-merge-commits=false
# By specifying this rule, developers can only change the file when they explicitly reference
# it in the commit message.
#files=gitlint/rules.py,README.md
[ignore-by-author-name]
regex=^dependabot\[bot\]$
ignore=all


@@ -1,14 +1,14 @@
Alexandr Kolosov <rikorsev@gmail.com>
Alexandre d'Alton <alexandre.dalton@intel.com>
Amir Kaplan <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Anthony Smigielski <thebasti0ncode@gmail.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>
Aska Wu <aska.wu@linaro.org>
Bit Pathe <bitpathe@gmail.com>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Bjarki Arge Andreasen <baa@trackunit.com>
Carles Cufi <carles.cufi@nordicsemi.no>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
chao an <anchao@xiaomi.com>
Charles E. Youse <charles.youse@intel.com>
Chen Xingyu <hi@xingrz.me>
@@ -16,32 +16,32 @@ Christoph Schnetzler <christoph.schnetzler@husqvarnagroup.com>
Christoph Schramm <schramm@makaio.com>
Christopher Friedt <cfriedt@meta.com>
Christopher Friedt <cfriedt@meta.com> <cfriedt@fb.com>
Chuck Jordan <cjordan@synopsys.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Chunlin Han <chunlin.han@linaro.org> <chunlin.han@acer.com>
David B. Kinder <david.b.kinder@intel.com>
David Komel <a8961713@gmail.com>
David Leach <david.leach@nxp.com>
Dirk Brandewie <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Enjia Mai <enjia.mai@intel.com>
Enjia Mai <enjia.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com>
Enjia Mai <enjia.mai@intel.com> <enjiax.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com> <Eugeniy.Paltsev@synopsys.com>
Felipe Neves <ryukokki.felipe@gmail.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Findlay Feng <i@fengch.me>
Flavio Arieta Netto <flavio@exati.com.br>
Francois Ramu <francois.ramu@st.com>
Gerardo Aceves <gerardo.aceves@intel.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Gregory Shue <gregory.shue@legrand.com>
Gregory Shue <gregory.shue@legrand.com> <gregory.shue@legrand.us>
HaiLong Yang <cameledyang@pm.me>
James Johnson <james.johnson672@t-mobile.com>
Jarno Lämsä <jarno.lamsa@nordicsemi.no>
Jędrzej Ciupis <jedrzej.ciupis@nordicsemi.no>
Jeremie Garcia <jeremie.garcia@intel.com>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Jim Benjamin Luther <jilu@oticon.com>
Johan Kruger <johan.kruger@windriver.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Johann Fischer <j.fischer@phytec.de>
Jørgen Kvalvaag <jorgen.kvalvaag@nordicsemi.no>
Juan Manuel Cruz Alcaraz <juan.m.cruz.alcaraz@intel.com>
@@ -51,11 +51,11 @@ Jun Li <jun.r.li@intel.com>
Justin Watson <jwatson5@gmail.com>
Kamil Sroka <kamil.sroka@nordicsemi.no>
Katarzyna Giądła <katarzyna.giadla@nordicsemi.no>
Keren Siman-Tov <keren.siman-tov@intel.com>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Lixin Guo <lixinx.guo@intel.com>
Łukasz Mazur <lukasz.mazur@hidglobal.com>
@@ -72,14 +72,14 @@ Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.bolivar@linaro.org>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.f.bolivar@gmail.com>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@foundries.io>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@opensourcefoundries.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Mateusz Hołenko <mholenko@antmicro.com>
Michael Rosen <michael.r.rosen@intel.com>
Michal Narajowski <michal.narajowski@codecoup.pl>
Mike Hirst <michael.hirst@windriver.com>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Ming Shao <ming.shao@intel.com>
Mohan Kumar Kumar <mohankm@fb.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Navin Sankar Velliangiri <navin@linumiz.com>
NingX Zhao <ningx.zhao@intel.com>
Nishikant Nayak <nishikantax.nayak@intel.com>
@@ -100,23 +100,21 @@ Radosław Koppel <radoslaw.koppel@nordicsemi.no> <r.koppel@k-el.com>
Raja D. Singh <rdsingh@iotwizards.com>
Ricardo Salveti <ricardo@opensourcefoundries.com>
Ricardo Salveti <ricardo@opensourcefoundries.com> <ricardo.salveti@linaro.org>
Ruud Derwig <Ruud.Derwig@synopsys.com>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Saku Rautio <saku.rautio@nordicsemi.no>
Scott Worley <scott.worley@microchip.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Sharron Liu <sharron.liu@intel.com>
Shilpashree L C <shilpashree.lc@intel.com>
Shuang He <shuang.he@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Sigvart Hovland <sigvart.hovland@nordicsemi.no>
Stéphane D'Alu <sdalu@sdalu.com>
Stine Åkredalen <stine.akredalen@nordicsemi.no>
Thomas Heeley <thomas.heeley@intel.com>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Tim Sørensen <tims@demant.com>
Tim Sørensen <tims@demant.com> <tims@oticon.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa@gmail.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com>

File diff suppressed because it is too large.


@@ -1,30 +0,0 @@
# Copyright (c) 2024 Basalte bv
# SPDX-License-Identifier: Apache-2.0
extend = ".ruff-excludes.toml"
line-length = 100
target-version = "py310"
[lint]
select = [
# zephyr-keep-sorted-start
"B", # flake8-bugbear
"E", # pycodestyle
"F", # pyflakes
"I", # isort
"SIM", # flake8-simplify
"UP", # pyupgrade
"W", # pycodestyle warnings
# zephyr-keep-sorted-stop
]
ignore = [
# zephyr-keep-sorted-start
"SIM108", # Allow if-else blocks instead of forcing ternary operator
# zephyr-keep-sorted-stop
]
[format]
quote-style = "preserve"
line-ending = "lf"


@@ -71,15 +71,6 @@ set(ZEPHYR_CURRENT_LINKER_PASS 0)
set(ZEPHYR_CURRENT_LINKER_CMD linker_zephyr_pre${ZEPHYR_CURRENT_LINKER_PASS}.cmd)
set(ZEPHYR_LINK_STAGE_EXECUTABLE zephyr_pre${ZEPHYR_CURRENT_LINKER_PASS})
# Make kconfig variables available to the linker script generator
zephyr_linker_include_generated(KCONFIG ${CMAKE_CURRENT_BINARY_DIR}/.config)
# The linker generator also needs sections.h to be able to access e.g. _APP_SMEM_SECTION_NAME
# for linkerscripts that do not support c-preprocessing.
zephyr_linker_include_generated(HEADER ${ZEPHYR_BASE}/include/zephyr/linker/sections.h)
zephyr_linker_include_var(VAR CMAKE_VERBOSE_MAKEFILE VALUE ${CMAKE_VERBOSE_MAKEFILE})
# ZEPHYR_PREBUILT_EXECUTABLE is used outside of this file, therefore keep the
# existing variable to allow slowly cleanup of linking stage handling.
# Three stage linking active: pre0 -> pre1 -> final, this will correspond to `pre1`
@@ -96,7 +87,7 @@ set(OFFSETS_H_TARGET offsets_h)
set(SYSCALL_LIST_H_TARGET syscall_list_h_target)
set(DRIVER_VALIDATION_H_TARGET driver_validation_h_target)
set(KOBJ_TYPES_H_TARGET kobj_types_h_target)
set(DEVICE_API_LD_TARGET device_api_ld_target)
set(PARSE_SYSCALLS_TARGET parse_syscalls_target)
define_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT BRIEF_DOCS " " FULL_DOCS " ")
set_property( GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-little${ARCH}) # BFD format
@@ -159,19 +150,13 @@ zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,no_str
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler-cpp,no_strict_aliasing>>)
# Extra warnings options for twister run
if(CONFIG_COMPILER_WARNINGS_AS_ERRORS)
if (CONFIG_COMPILER_WARNINGS_AS_ERRORS)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,warnings_as_errors>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,warnings_as_errors>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,warnings_as_errors>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,warnings_as_errors>)
endif()
if(CONFIG_DEPRECATION_TEST)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,no_deprecation_warning>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,no_deprecation_warning>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,no_deprecation_warning>>)
endif()
# @Intent: Set compiler flags to enable buffer overflow checks in libc functions
# @details:
# Kconfig.zephyr "Detect buffer overflows in libc calls" is a kconfig choice,
@@ -188,17 +173,7 @@ endif()
# @Intent: Set compiler flags to detect general stack overflows across all functions
if(CONFIG_STACK_CANARIES)
zephyr_compile_options("$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,security_canaries>>")
zephyr_compile_options("$<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,security_canaries>>")
elseif(CONFIG_STACK_CANARIES_STRONG)
zephyr_compile_options("$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,security_canaries_strong>>")
zephyr_compile_options("$<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,security_canaries_strong>>")
elseif(CONFIG_STACK_CANARIES_ALL)
zephyr_compile_options("$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,security_canaries_all>>")
zephyr_compile_options("$<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,security_canaries_all>>")
elseif(CONFIG_STACK_CANARIES_EXPLICIT)
zephyr_compile_options("$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,security_canaries_explicit>>")
zephyr_compile_options("$<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,security_canaries_explicit>>")
zephyr_compile_options($<TARGET_PROPERTY:compiler,security_canaries>)
endif()
# @Intent: Obtain compiler optimizations flags and store in variables
@@ -214,82 +189,24 @@ endif()
# 1) Using EXTRA_CFLAGS which is applied regardless of kconfig choice, or
# 2) Rely on override support being implemented by your toolchain_cc_optimize_*()
#
get_property(COMPILER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG TARGET compiler PROPERTY no_optimization)
get_property(COMPILER_OPTIMIZE_FOR_DEBUG_FLAG TARGET compiler PROPERTY optimization_debug)
get_property(COMPILER_OPTIMIZE_FOR_SPEED_FLAG TARGET compiler PROPERTY optimization_speed)
get_property(COMPILER_OPTIMIZE_FOR_SIZE_FLAG TARGET compiler PROPERTY optimization_size)
get_property(COMPILER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG TARGET compiler PROPERTY optimization_size_aggressive)
get_property(ASM_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG TARGET asm PROPERTY no_optimization)
get_property(ASM_OPTIMIZE_FOR_DEBUG_FLAG TARGET asm PROPERTY optimization_debug)
get_property(ASM_OPTIMIZE_FOR_SPEED_FLAG TARGET asm PROPERTY optimization_speed)
get_property(ASM_OPTIMIZE_FOR_SIZE_FLAG TARGET asm PROPERTY optimization_size)
get_property(ASM_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG TARGET asm PROPERTY optimization_size_aggressive)
get_property(LINKER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG TARGET linker PROPERTY no_optimization)
get_property(LINKER_OPTIMIZE_FOR_DEBUG_FLAG TARGET linker PROPERTY optimization_debug)
get_property(LINKER_OPTIMIZE_FOR_SPEED_FLAG TARGET linker PROPERTY optimization_speed)
get_property(LINKER_OPTIMIZE_FOR_SIZE_FLAG TARGET linker PROPERTY optimization_size)
get_property(LINKER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG TARGET linker PROPERTY optimization_size_aggressive)
# Let the assembler inherit the optimization flags of the compiler if it is
# not set explicitly.
if(NOT ASM_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG)
set(ASM_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG ${COMPILER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
endif()
if(NOT ASM_OPTIMIZE_FOR_DEBUG_FLAG)
set(ASM_OPTIMIZE_FOR_DEBUG_FLAG ${COMPILER_OPTIMIZE_FOR_DEBUG_FLAG})
endif()
if(NOT ASM_OPTIMIZE_FOR_SPEED_FLAG)
set(ASM_OPTIMIZE_FOR_SPEED_FLAG ${COMPILER_OPTIMIZE_FOR_SPEED_FLAG})
endif()
if(NOT ASM_OPTIMIZE_FOR_SIZE_FLAG)
set(ASM_OPTIMIZE_FOR_SIZE_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_FLAG})
endif()
if(NOT ASM_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG)
set(ASM_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
endif()
# Let the linker inherit the optimization flags of the compiler if it is
# not set explicitly.
if(NOT LINKER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG)
set(LINKER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG ${COMPILER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
endif()
if(NOT LINKER_OPTIMIZE_FOR_DEBUG_FLAG)
set(LINKER_OPTIMIZE_FOR_DEBUG_FLAG ${COMPILER_OPTIMIZE_FOR_DEBUG_FLAG})
endif()
if(NOT LINKER_OPTIMIZE_FOR_SPEED_FLAG)
set(LINKER_OPTIMIZE_FOR_SPEED_FLAG ${COMPILER_OPTIMIZE_FOR_SPEED_FLAG})
endif()
if(NOT LINKER_OPTIMIZE_FOR_SIZE_FLAG)
set(LINKER_OPTIMIZE_FOR_SIZE_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_FLAG})
endif()
if(NOT LINKER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG)
set(LINKER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
endif()
get_property(OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG TARGET compiler PROPERTY no_optimization)
get_property(OPTIMIZE_FOR_DEBUG_FLAG TARGET compiler PROPERTY optimization_debug)
get_property(OPTIMIZE_FOR_SPEED_FLAG TARGET compiler PROPERTY optimization_speed)
get_property(OPTIMIZE_FOR_SIZE_FLAG TARGET compiler PROPERTY optimization_size)
get_property(OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG TARGET compiler PROPERTY optimization_size_aggressive)
# From kconfig choice, pick the actual OPTIMIZATION_FLAG to use.
# Kconfig choice ensures only one of these CONFIG_*_OPTIMIZATIONS is set.
if(CONFIG_NO_OPTIMIZATIONS)
set(COMPILER_OPTIMIZATION_FLAG ${COMPILER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
set(ASM_OPTIMIZATION_FLAG ${ASM_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
set(LINKER_OPTIMIZATION_FLAG ${LINKER_OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG})
elseif(CONFIG_DEBUG_OPTIMIZATIONS)
set(COMPILER_OPTIMIZATION_FLAG ${COMPILER_OPTIMIZE_FOR_DEBUG_FLAG})
set(ASM_OPTIMIZATION_FLAG ${ASM_OPTIMIZE_FOR_DEBUG_FLAG})
set(LINKER_OPTIMIZATION_FLAG ${LINKER_OPTIMIZE_FOR_DEBUG_FLAG})
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_DEBUG_FLAG})
elseif(CONFIG_SPEED_OPTIMIZATIONS)
set(COMPILER_OPTIMIZATION_FLAG ${COMPILER_OPTIMIZE_FOR_SPEED_FLAG})
set(ASM_OPTIMIZATION_FLAG ${ASM_OPTIMIZE_FOR_SPEED_FLAG})
set(LINKER_OPTIMIZATION_FLAG ${LINKER_OPTIMIZE_FOR_SPEED_FLAG})
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SPEED_FLAG})
elseif(CONFIG_SIZE_OPTIMIZATIONS)
set(COMPILER_OPTIMIZATION_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_FLAG}) # Default in kconfig
set(ASM_OPTIMIZATION_FLAG ${ASM_OPTIMIZE_FOR_SIZE_FLAG})
set(LINKER_OPTIMIZATION_FLAG ${LINKER_OPTIMIZE_FOR_SIZE_FLAG})
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SIZE_FLAG}) # Default in kconfig
elseif(CONFIG_SIZE_OPTIMIZATIONS_AGGRESSIVE)
set(COMPILER_OPTIMIZATION_FLAG ${COMPILER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
set(ASM_OPTIMIZATION_FLAG ${ASM_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
set(LINKER_OPTIMIZATION_FLAG ${LINKER_OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
else()
message(FATAL_ERROR
"Unreachable code. Expected optimization level to have been chosen. See Kconfig.zephyr")
@@ -303,21 +220,11 @@ SOC_* symbol.")
endif()
# Apply the final optimization flag(s)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:${ASM_OPTIMIZATION_FLAG}>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:${COMPILER_OPTIMIZATION_FLAG}>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:${COMPILER_OPTIMIZATION_FLAG}>)
add_link_options(${LINKER_OPTIMIZATION_FLAG})
compiler_simple_options(simple_options)
toolchain_linker_add_compiler_options(${simple_options})
zephyr_compile_options(${OPTIMIZATION_FLAG})
if(CONFIG_LTO)
if(CONFIG_LTO_SINGLE_THREADED)
zephyr_compile_options($<TARGET_PROPERTY:compiler,optimization_lto_st>)
add_link_options($<TARGET_PROPERTY:linker,lto_arguments_st>)
else()
zephyr_compile_options($<TARGET_PROPERTY:compiler,optimization_lto>)
add_link_options($<TARGET_PROPERTY:linker,lto_arguments>)
endif()
zephyr_compile_options($<TARGET_PROPERTY:compiler,optimization_lto>)
add_link_options($<TARGET_PROPERTY:linker,lto_arguments>)
endif()
if(CONFIG_STD_C23)
@@ -368,9 +275,6 @@ if(CONFIG_CPP)
elseif(CONFIG_STD_CPP2B)
set(STD_CPP_DIALECT_FLAGS $<TARGET_PROPERTY:compiler-cpp,dialect_cpp2b>)
list(APPEND CMAKE_CXX_COMPILE_FEATURES ${compile_features_cpp20})
elseif(CONFIG_STD_CPP23)
set(STD_CPP_DIALECT_FLAGS $<TARGET_PROPERTY:compiler-cpp,dialect_cpp23>)
list(APPEND CMAKE_CXX_COMPILE_FEATURES ${compile_features_cpp23})
else()
message(FATAL_ERROR
"Unreachable code. Expected C++ standard to have been chosen. See Kconfig.zephyr.")
@@ -404,9 +308,7 @@ if(CONFIG_CODING_GUIDELINE_CHECK)
endif()
# @Intent: Set compiler specific macro inclusion of AUTOCONF_H
zephyr_compile_options("SHELL: $<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,imacros> ${AUTOCONF_H}>")
zephyr_compile_options("SHELL: $<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,imacros> ${AUTOCONF_H}>")
zephyr_compile_options("SHELL: $<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,imacros> ${AUTOCONF_H}>")
zephyr_compile_options("SHELL: $<TARGET_PROPERTY:compiler,imacros> ${AUTOCONF_H}")
if(CONFIG_COMPILER_FREESTANDING)
# @Intent: Set compiler specific flag for bare metal freestanding option
@@ -414,23 +316,11 @@ if(CONFIG_COMPILER_FREESTANDING)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:compiler,freestanding>>)
endif()
if(CONFIG_PICOLIBC AND NOT CONFIG_PICOLIBC_IO_FLOAT)
if (CONFIG_PICOLIBC AND NOT CONFIG_PICOLIBC_IO_FLOAT)
# @Intent: Set compiler specific flag to disable printf-related optimizations
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,no_printf_return_value>>)
endif()
if(CONFIG_UBSAN)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,sanitizer_undefined>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,sanitizer_undefined>)
if(CONFIG_UBSAN_LIBRARY)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,sanitizer_undefined_library>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,sanitizer_undefined_library>)
elseif(CONFIG_UBSAN_TRAP)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,sanitizer_undefined_trap>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,sanitizer_undefined_trap>)
endif()
endif()
# @Intent: Set compiler specific flag for tentative definitions, no-common
zephyr_compile_options($<TARGET_PROPERTY:compiler,no_common>)
@@ -463,9 +353,7 @@ zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,required>
# @Intent: Enforce standard integer type correspondence to match Zephyr usage.
# (must be after compiler specific flags)
if(CONFIG_ENFORCE_ZEPHYR_STDINT)
zephyr_compile_options("SHELL:$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,imacros> ${ZEPHYR_BASE}/include/zephyr/toolchain/zephyr_stdint.h>")
zephyr_compile_options("SHELL:$<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,imacros> ${ZEPHYR_BASE}/include/zephyr/toolchain/zephyr_stdint.h>")
zephyr_compile_options("SHELL:$<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,imacros> ${ZEPHYR_BASE}/include/zephyr/toolchain/zephyr_stdint.h>")
zephyr_compile_options("SHELL: $<TARGET_PROPERTY:compiler,imacros> ${ZEPHYR_BASE}/include/zephyr/toolchain/zephyr_stdint.h")
endif()
# Common toolchain-agnostic assembly flags
@@ -479,16 +367,6 @@ if(DEFINED TOOLCHAIN_LD_FLAGS)
zephyr_ld_options(${TOOLCHAIN_LD_FLAGS})
endif()
foreach(GROUPED_FLAGS IN LISTS TOOLCHAIN_GROUPED_LD_FLAGS)
if(DEFINED ${GROUPED_FLAGS})
zephyr_ld_options(NO_SPLIT ${${GROUPED_FLAGS}})
else()
message(FATAL_ERROR "Variable '${GROUPED_FLAGS}' is not defined, "
"please ensure the variable is defined and points to valid linker flags. "
"If '${GROUPED_FLAGS}' is a linker flags itself, then it must be placed in a list")
endif()
endforeach()
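# Illustrative sketch of the contract enforced by the loop above (the variable
# name and flag values below are hypothetical): TOOLCHAIN_GROUPED_LD_FLAGS holds
# the *names* of list variables, and each named variable holds linker flags that
# must be passed together (hence NO_SPLIT).
set(my_grouped_ld_flags -Wl,--start-group -lm -lc -Wl,--end-group)
list(APPEND TOOLCHAIN_GROUPED_LD_FLAGS my_grouped_ld_flags)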
zephyr_link_libraries(PROPERTY base)
zephyr_link_libraries_ifndef(CONFIG_LINKER_USE_RELAX PROPERTY no_relax)
@@ -583,24 +461,6 @@ zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,no_p
zephyr_link_libraries_ifndef(CONFIG_NATIVE_LIBRARY
$<TARGET_PROPERTY:linker,no_position_independent>)
if(CONFIG_INSTRUMENTATION)
# @Intent: Enable function instrumentation injection at compile time
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,func_instrumentation>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,func_instrumentation>>)
# @Intent: Enable function blocklist for the instrumentation subsystem
if(CONFIG_INSTRUMENTATION_EXCLUDE_FUNCTION_LIST)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,func_instrumentation_exclude_function_list>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,func_instrumentation_exclude_function_list>>)
endif()
# @Intent: Enable file blocklist for the instrumentation subsystem
if(CONFIG_INSTRUMENTATION_EXCLUDE_FILE_LIST)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,func_instrumentation_exclude_file_list>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler,func_instrumentation_exclude_file_list>>)
endif()
endif()
# Allow the user to inject options when calling cmake, e.g.
# 'cmake -DEXTRA_CFLAGS="-Werror -Wno-deprecated-declarations" ..'
include(cmake/extra_flags.cmake)
@@ -742,10 +602,6 @@ zephyr_get(KERNEL_VERSION_CUSTOMIZATION SYSBUILD LOCAL)
set_property(TARGET version_h PROPERTY KERNEL_VERSION_CUSTOMIZATION ${KERNEL_VERSION_CUSTOMIZATION})
if(EXISTS ${APPLICATION_SOURCE_DIR}/VERSION)
# A generator expression is used to allow applications to add their own dependencies for the
# file version, this might typically be the git index file if the application is in a git
# repository. To add a dependency to the application version file, this can be used:
# set_property(TARGET app_version_h PROPERTY APP_VERSION_DEPENDS <FILE>)
add_custom_command(
OUTPUT ${PROJECT_BINARY_DIR}/include/generated/zephyr/app_version.h
COMMAND ${CMAKE_COMMAND} -DZEPHYR_BASE=${ZEPHYR_BASE}
@@ -756,7 +612,6 @@ if(EXISTS ${APPLICATION_SOURCE_DIR}/VERSION)
${build_version_argument}
-P ${ZEPHYR_BASE}/cmake/gen_version_h.cmake
DEPENDS ${APPLICATION_SOURCE_DIR}/VERSION ${git_dependency}
$<TARGET_PROPERTY:app_version_h,APP_VERSION_DEPENDS>
COMMAND_EXPAND_LISTS
)
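# A concrete sketch of the APP_VERSION_DEPENDS hook described in the comment
# above (the path is only an example and would live in the application's own
# CMakeLists.txt): an application kept in git can have app_version.h regenerated
# whenever the git index changes.
set_property(TARGET app_version_h PROPERTY APP_VERSION_DEPENDS
             ${APPLICATION_SOURCE_DIR}/.git/index)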
add_custom_target(
@@ -942,7 +797,7 @@ add_custom_command(
--file-list ${syscalls_file_list_output}
$<$<BOOL:${CONFIG_EMIT_ALL_SYSCALLS}>:--emit-all-syscalls>
DEPENDS ${syscalls_subdirs_trigger} ${PARSE_SYSCALLS_HEADER_DEPENDS}
${syscalls_file_list_output} syscalls_interface
${syscalls_file_list_output} ${syscalls_interface}
)
# Make sure Picolibc is built before the rest of the system; there's no explicit
@@ -960,6 +815,12 @@ set_property(TARGET ${SYSCALL_LIST_H_TARGET}
${CMAKE_CURRENT_BINARY_DIR}/include/generated/zephyr/syscalls
)
add_custom_target(${PARSE_SYSCALLS_TARGET}
DEPENDS
${syscalls_json}
${struct_tags_json}
)
# 64-bit systems do not require special handling of 64-bit system call
# parameters or return values, indicate this to the system call boilerplate
# generation script.
@@ -980,12 +841,7 @@ if(CONFIG_LEGACY_GENERATED_INCLUDE_PATH)
${CMAKE_CURRENT_BINARY_DIR}/include/generated/syscall_list.h)
endif()
add_custom_command(
OUTPUT
include/generated/zephyr/syscall_dispatch.c
include/generated/zephyr/syscall_exports_llext.c
syscall_weakdefs_llext.c
${syscall_list_h}
add_custom_command(OUTPUT include/generated/zephyr/syscall_dispatch.c ${syscall_list_h}
# Also, some files are written to include/generated/zephyr/syscalls/
COMMAND
${PYTHON_EXECUTABLE}
@@ -993,8 +849,7 @@ add_custom_command(
--json-file ${syscalls_json} # Read this file
--base-output include/generated/zephyr/syscalls # Write to this dir
--syscall-dispatch include/generated/zephyr/syscall_dispatch.c # Write this file
--syscall-exports-llext include/generated/zephyr/syscall_exports_llext.c
--syscall-weakdefs-llext syscall_weakdefs_llext.c # compiled in CMake library 'syscall_weakdefs'
--syscall-export-llext include/generated/zephyr/syscall_export_llext.c
--syscall-list ${syscall_list_h}
$<$<BOOL:${CONFIG_USERSPACE}>:--gen-mrsh-files>
${SYSCALL_LONG_REGISTERS_ARG}
@@ -1002,66 +857,35 @@ add_custom_command(
COMMAND
${LEGACY_SYSCALL_LIST_H_ARGS}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
DEPENDS ${syscalls_json}
DEPENDS ${PARSE_SYSCALLS_TARGET}
)
include(${ZEPHYR_BASE}/cmake/kobj.cmake)
# This is passed into all calls to the gen_kobject_list.py script.
set(gen_kobject_list_include_args --include-subsystem-list ${struct_tags_json})
set(DRV_VALIDATION ${PROJECT_BINARY_DIR}/include/generated/zephyr/driver-validation.h)
gen_kobject_list(
TARGET ${DRIVER_VALIDATION_H_TARGET}
OUTPUTS ${DRV_VALIDATION}
SCRIPT_ARGS --validation-output ${DRV_VALIDATION}
INCLUDES ${struct_tags_json}
DEPENDS ${struct_tags_json}
)
gen_kobject_list_headers(
INCLUDES ${struct_tags_json}
DEPENDS ${struct_tags_json}
)
# Generate sections for kernel device subsystems
set(
DEVICE_API_LD_SECTIONS
${CMAKE_CURRENT_BINARY_DIR}/include/generated/device-api-sections.ld
)
set(DEVICE_API_LINKER_SECTIONS_CMAKE
${CMAKE_CURRENT_BINARY_DIR}/include/generated/device-api-sections.cmake
)
add_custom_command(
OUTPUT ${DEVICE_API_LD_SECTIONS} ${DEVICE_API_LINKER_SECTIONS_CMAKE}
OUTPUT ${DRV_VALIDATION}
COMMAND
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/build/gen_iter_sections.py
--alignment ${CONFIG_LINKER_ITERABLE_SUBALIGN}
--input ${struct_tags_json}
--tag __subsystem
--ld-output ${DEVICE_API_LD_SECTIONS}
--cmake-output ${DEVICE_API_LINKER_SECTIONS_CMAKE}
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/build/gen_kobject_list.py
--validation-output ${DRV_VALIDATION}
${gen_kobject_list_include_args}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
${ZEPHYR_BASE}/scripts/build/gen_iter_sections.py
${struct_tags_json}
${ZEPHYR_BASE}/scripts/build/gen_kobject_list.py
${PARSE_SYSCALLS_TARGET}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(${DRIVER_VALIDATION_H_TARGET} DEPENDS ${DRV_VALIDATION})
add_custom_target(${DEVICE_API_LD_TARGET}
DEPENDS ${DEVICE_API_LD_SECTIONS}
${DEVICE_API_LINKER_SECTIONS_CMAKE}
)
include(${ZEPHYR_BASE}/cmake/kobj.cmake)
gen_kobj(KOBJ_INCLUDE_PATH)
zephyr_linker_include_generated(CMAKE ${DEVICE_API_LINKER_SECTIONS_CMAKE})
# Add a pseudo-target that is up-to-date when all generated headers
# are up-to-date.
# Create an interface library which collects the dependencies to Zephyr
# generated headers. Libraries containing files that include these headers can
# use add_dependencies or target_link_libraries against this target to ensure
# that generated headers are up-to-date before the libraries are built. This
# ordering dependency can be propagated to these libraries' dependents via
# the PUBLIC or INTERFACE option of target_link_libraries.
add_library(zephyr_generated_headers INTERFACE)
add_custom_target(zephyr_generated_headers)
add_dependencies(zephyr_generated_headers
offsets_h version_h
)
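# A minimal usage sketch (the library name is hypothetical): a library whose
# sources include generated headers can order itself after them, as described
# in the comment above.
add_library(my_driver_lib STATIC my_driver.c)
target_link_libraries(my_driver_lib PUBLIC zephyr_generated_headers)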
@@ -1088,7 +912,6 @@ add_dependencies(zephyr_interface
${SYSCALL_LIST_H_TARGET}
${DRIVER_VALIDATION_H_TARGET}
${KOBJ_TYPES_H_TARGET}
${DEVICE_API_LD_TARGET}
)
add_custom_command(
@@ -1153,21 +976,9 @@ else()
set(NO_WHOLE_ARCHIVE_LIBS kernel)
endif()
if(CONFIG_LLEXT)
# LLEXT exports symbols for all syscalls, including unimplemented ones.
# Weak definitions for these must be added at the end of the link order
# to avoid shadowing actual implementations.
add_library(syscall_weakdefs syscall_weakdefs_llext.c)
target_link_libraries(syscall_weakdefs
zephyr_generated_headers
zephyr_interface
)
list(APPEND NO_WHOLE_ARCHIVE_LIBS syscall_weakdefs)
endif()
get_property(OUTPUT_FORMAT GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT)
if(CONFIG_CODE_DATA_RELOCATION)
if (CONFIG_CODE_DATA_RELOCATION)
set(CODE_RELOCATION_DEP code_relocation_source_lib)
endif() # CONFIG_CODE_DATA_RELOCATION
@@ -1219,23 +1030,23 @@ if(CONFIG_CODE_DATA_RELOCATION)
endif()
if(CONFIG_USERSPACE)
# Go for raw properties here since zephyr_get_compile_options_for_lang()
# processes the list of options and wraps it in a $<JOIN:...> expression. When
# generating the build system this leads to some malformed command lines,
# with SHELL: not being present and other "random" list-join related issues
# (e.g. for IAR toolchains the lists were joined with "gnu" appended to a
# bunch of entries).
get_property(compiler_flags_priv TARGET zephyr_interface PROPERTY INTERFACE_COMPILE_OPTIONS)
zephyr_get_compile_options_for_lang_as_string(C compiler_flags_priv)
string(REPLACE "$<TARGET_PROPERTY:compiler,coverage>" ""
KOBJECT_HASH_COMPILE_OPTIONS "${compiler_flags_priv}")
list(APPEND KOBJECT_HASH_COMPILE_OPTIONS
$<TARGET_PROPERTY:compiler,no_function_sections>
$<TARGET_PROPERTY:compiler,no_data_sections>)
NO_COVERAGE_FLAGS "${compiler_flags_priv}"
)
set(GEN_KOBJ_LIST ${ZEPHYR_BASE}/scripts/build/gen_kobject_list.py)
set(PROCESS_GPERF ${ZEPHYR_BASE}/scripts/build/process_gperf.py)
endif()
get_property(GLOBAL_CSTD GLOBAL PROPERTY CSTD)
if(DEFINED GLOBAL_CSTD)
message(DEPRECATION
"Global CSTD property is deprecated, see Kconfig.zephyr for C Standard options.")
set(CSTD ${GLOBAL_CSTD})
list(APPEND CMAKE_C_COMPILE_FEATURES ${compile_features_${CSTD}})
endif()
# @Intent: Obtain compiler specific flag for specifying the c standard
zephyr_compile_options(
$<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,cstd>${CSTD}>
@@ -1251,15 +1062,17 @@ set_ifndef( TOPT "${COMPILER_TOPT}")
set_ifndef( TOPT -Wl,-T) # Use this if the compiler driver doesn't set a value
if(CONFIG_HAVE_CUSTOM_LINKER_SCRIPT)
string(CONFIGURE ${APPLICATION_SOURCE_DIR}/${CONFIG_CUSTOM_LINKER_SCRIPT} LINKER_SCRIPT)
set(LINKER_SCRIPT ${APPLICATION_SOURCE_DIR}/${CONFIG_CUSTOM_LINKER_SCRIPT})
if(NOT EXISTS ${LINKER_SCRIPT})
string(CONFIGURE ${CONFIG_CUSTOM_LINKER_SCRIPT} LINKER_SCRIPT)
assert_exists(LINKER_SCRIPT)
set(LINKER_SCRIPT ${CONFIG_CUSTOM_LINKER_SCRIPT})
assert_exists(CONFIG_CUSTOM_LINKER_SCRIPT)
endif()
elseif(DEFINED BOARD_LINKER_SCRIPT)
set(LINKER_SCRIPT ${BOARD_LINKER_SCRIPT})
elseif(DEFINED SOC_LINKER_SCRIPT)
set(LINKER_SCRIPT ${SOC_LINKER_SCRIPT})
else()
find_package(Deprecated COMPONENTS SEARCHED_LINKER_SCRIPT)
endif()
if(NOT EXISTS ${LINKER_SCRIPT})
@@ -1267,20 +1080,14 @@ if(NOT EXISTS ${LINKER_SCRIPT})
endif()
if(CONFIG_USERSPACE)
if(CONFIG_CMAKE_LINKER_GENERATOR)
set(APP_SMEM_LD_EXT "cmake")
else()
set(APP_SMEM_LD_EXT "ld")
endif()
set(APP_SMEM_ALIGNED_LD "${PROJECT_BINARY_DIR}/include/generated/app_smem_aligned.${APP_SMEM_LD_EXT}")
set(APP_SMEM_UNALIGNED_LD "${PROJECT_BINARY_DIR}/include/generated/app_smem_unaligned.${APP_SMEM_LD_EXT}")
set(APP_SMEM_ALIGNED_LD "${PROJECT_BINARY_DIR}/include/generated/app_smem_aligned.ld")
set(APP_SMEM_UNALIGNED_LD "${PROJECT_BINARY_DIR}/include/generated/app_smem_unaligned.ld")
if(CONFIG_LINKER_USE_PINNED_SECTION)
set(APP_SMEM_PINNED_ALIGNED_LD
"${PROJECT_BINARY_DIR}/include/generated/app_smem_pinned_aligned.${APP_SMEM_LD_EXT}")
"${PROJECT_BINARY_DIR}/include/generated/app_smem_pinned_aligned.ld")
set(APP_SMEM_PINNED_UNALIGNED_LD
"${PROJECT_BINARY_DIR}/include/generated/app_smem_pinned_unaligned.${APP_SMEM_LD_EXT}")
"${PROJECT_BINARY_DIR}/include/generated/app_smem_pinned_unaligned.ld")
if(NOT CONFIG_LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT)
# The libc partition may hold symbols that are required during the boot process,
@@ -1345,11 +1152,9 @@ if(CONFIG_USERSPACE)
set(APP_SMEM_UNALIGNED_LIB app_smem_unaligned_output_obj_renamed_lib)
list(APPEND LINKER_PASS_${ZEPHYR_CURRENT_LINKER_PASS}_DEFINE "LINKER_APP_SMEM_UNALIGNED")
endif()
foreach(dep ${APP_SMEM_UNALIGNED_LD} ${APP_SMEM_PINNED_UNALIGNED_LD})
zephyr_linker_include_generated(CMAKE ${dep} PASS LINKER_APP_SMEM_UNALIGNED)
endforeach()
if (CONFIG_USERSPACE)
add_custom_command(
OUTPUT ${APP_SMEM_ALIGNED_LD} ${APP_SMEM_PINNED_ALIGNED_LD}
COMMAND ${PYTHON_EXECUTABLE}
@@ -1369,9 +1174,6 @@ if(CONFIG_USERSPACE)
COMMAND_EXPAND_LISTS
COMMENT "Generating app_smem_aligned linker section"
)
foreach(dep ${APP_SMEM_ALIGNED_LD} ${APP_SMEM_PINNED_ALIGNED_LD})
zephyr_linker_include_generated(CMAKE ${dep} PASS NOT LINKER_APP_SMEM_UNALIGNED)
endforeach()
endif()
if(CONFIG_USERSPACE)
@@ -1386,13 +1188,23 @@ if(CONFIG_USERSPACE)
set(KOBJECT_PREBUILT_HASH_OUTPUT_SRC_PRE kobject_prebuilt_hash_preprocessed.c)
set(KOBJECT_PREBUILT_HASH_OUTPUT_SRC kobject_prebuilt_hash.c)
gen_kobject_list_gperf(
TARGET kobj_prebuilt_hash_list
add_custom_command(
OUTPUT ${KOBJECT_PREBUILT_HASH_LIST}
KERNEL_TARGET ${ZEPHYR_LINK_STAGE_EXECUTABLE}
INCLUDES ${struct_tags_json}
DEPENDS ${struct_tags_json}
COMMAND
${PYTHON_EXECUTABLE}
${GEN_KOBJ_LIST}
--kernel $<TARGET_FILE:${ZEPHYR_LINK_STAGE_EXECUTABLE}>
--gperf-output ${KOBJECT_PREBUILT_HASH_LIST}
${gen_kobject_list_include_args}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
${ZEPHYR_LINK_STAGE_EXECUTABLE}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(
kobj_prebuilt_hash_list
DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${KOBJECT_PREBUILT_HASH_LIST}
)
add_custom_command(
OUTPUT ${KOBJECT_PREBUILT_HASH_OUTPUT_SRC_PRE}
@@ -1429,13 +1241,11 @@ if(CONFIG_USERSPACE)
add_library(
kobj_prebuilt_hash_output_lib
OBJECT ${CMAKE_CURRENT_BINARY_DIR}/${KOBJECT_PREBUILT_HASH_OUTPUT_SRC}
)
)
# set_target_properties() replaces the whole COMPILE_OPTIONS property, whereas
# target_compile_options() only appends to it; KOBJECT_HASH_COMPILE_OPTIONS
# already contains all the options needed.
set_target_properties(kobj_prebuilt_hash_output_lib PROPERTIES
COMPILE_OPTIONS "${KOBJECT_HASH_COMPILE_OPTIONS}"
)
set_source_files_properties(${KOBJECT_PREBUILT_HASH_OUTPUT_SRC}
PROPERTIES COMPILE_FLAGS
"${NO_COVERAGE_FLAGS} -fno-function-sections -fno-data-sections")
target_compile_definitions(kobj_prebuilt_hash_output_lib
PRIVATE $<TARGET_PROPERTY:zephyr_interface,INTERFACE_COMPILE_DEFINITIONS>
@@ -1471,13 +1281,6 @@ if(CONFIG_USERSPACE)
DEPENDS
${KOBJECT_LINKER_HEADER_DATA}
)
# gen_kobject_placeholders.py generates linker-kobject-prebuild-data.h,
# linker-kobject-prebuild-priv-stacks.h and linker-kobject-prebuild-rodata.h
foreach(ext "-data.h" "-priv-stacks.h" "-rodata.h")
string(REGEX REPLACE "-data.h$" ${ext} file ${KOBJECT_LINKER_HEADER_DATA})
zephyr_linker_include_generated(HEADER ${file} PASS LINKER_ZEPHYR_PREBUILT LINKER_ZEPHYR_FINAL)
endforeach()
endif()
if(CONFIG_USERSPACE OR CONFIG_DEVICE_DEPS)
@@ -1585,13 +1388,23 @@ if(CONFIG_USERSPACE)
# Use the script GEN_KOBJ_LIST to scan the kernel binary's
# (${ZEPHYR_LINK_STAGE_EXECUTABLE}) DWARF information to produce a table of kernel
# objects (KOBJECT_HASH_LIST) which we will then pass to gperf
gen_kobject_list_gperf(
TARGET kobj_hash_list
add_custom_command(
OUTPUT ${KOBJECT_HASH_LIST}
KERNEL_TARGET ${ZEPHYR_LINK_STAGE_EXECUTABLE}
INCLUDES ${struct_tags_json}
DEPENDS ${struct_tags_json}
COMMAND
${PYTHON_EXECUTABLE}
${GEN_KOBJ_LIST}
--kernel $<TARGET_FILE:${ZEPHYR_LINK_STAGE_EXECUTABLE}>
--gperf-output ${KOBJECT_HASH_LIST}
${gen_kobject_list_include_args}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
${ZEPHYR_LINK_STAGE_EXECUTABLE}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(
kobj_hash_list
DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${KOBJECT_HASH_LIST}
)
# Use gperf to generate C code (KOBJECT_HASH_OUTPUT_SRC_PRE) which implements a
# perfect hashtable based on KOBJECT_HASH_LIST
@@ -1640,9 +1453,9 @@ if(CONFIG_USERSPACE)
OBJECT ${CMAKE_CURRENT_BINARY_DIR}/${KOBJECT_HASH_OUTPUT_SRC}
)
set_target_properties(kobj_hash_output_lib PROPERTIES
COMPILE_OPTIONS "${KOBJECT_HASH_COMPILE_OPTIONS}"
)
set_source_files_properties(${KOBJECT_HASH_OUTPUT_SRC}
PROPERTIES COMPILE_FLAGS
"${NO_COVERAGE_FLAGS} -fno-function-sections -fno-data-sections")
target_compile_definitions(kobj_hash_output_lib
PRIVATE $<TARGET_PROPERTY:zephyr_interface,INTERFACE_COMPILE_DEFINITIONS>
@@ -1809,12 +1622,9 @@ list(APPEND
)
list(APPEND post_build_byproducts ${KERNEL_MAP_NAME})
# Use ';' as separator to get proper space in resulting command.
# Ignore gapfill for the RX target as some RX toolchains generate an incorrect
# output image when running gapfill for the binary format.
if(NOT CONFIG_RX)
set(gap_fill_prop "$<TARGET_PROPERTY:bintools,elfconvert_flag_gapfill>")
set(gap_fill "$<$<BOOL:${gap_fill_prop}>:${gap_fill_prop}${CONFIG_BUILD_GAP_FILL_PATTERN}>")
if(NOT CONFIG_BUILD_NO_GAP_FILL)
# Use ';' as separator to get proper space in resulting command.
set(GAP_FILL "$<TARGET_PROPERTY:bintools,elfconvert_flag_gapfill>0xff")
endif()
if(CONFIG_OUTPUT_PRINT_MEMORY_USAGE)
@@ -1858,8 +1668,13 @@ if(CONFIG_BUILD_OUTPUT_ADJUST_LMA)
)
endif()
if(NOT CONFIG_CPP_EXCEPTIONS)
set(eh_frame_section ".eh_frame")
else()
set(eh_frame_section "")
endif()
set(remove_sections_argument_list "")
foreach(section .comment COMMON)
foreach(section .comment COMMON ${eh_frame_section})
list(APPEND remove_sections_argument_list
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>${section})
endforeach()
@@ -1871,7 +1686,7 @@ if(CONFIG_BUILD_OUTPUT_HEX OR BOARD_FLASH_RUNNER STREQUAL openocd)
post_build_commands
COMMAND $<TARGET_PROPERTY:bintools,elfconvert_command>
$<TARGET_PROPERTY:bintools,elfconvert_flag>
$<$<BOOL:${CONFIG_BUILD_OUTPUT_HEX_GAP_FILL}>:${gap_fill}>
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>ihex
${remove_sections_argument_list}
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
@@ -1893,7 +1708,7 @@ if(CONFIG_BUILD_OUTPUT_BIN)
post_build_commands
COMMAND $<TARGET_PROPERTY:bintools,elfconvert_command>
$<TARGET_PROPERTY:bintools,elfconvert_flag>
${gap_fill}
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>binary
${remove_sections_argument_list}
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
@@ -1910,39 +1725,26 @@ endif()
if(CONFIG_BUILD_OUTPUT_BIN AND CONFIG_BUILD_OUTPUT_UF2)
if(CONFIG_BUILD_OUTPUT_UF2_USE_FLASH_BASE)
set(code_address "${CONFIG_FLASH_BASE_ADDRESS}")
set(flash_addr "${CONFIG_FLASH_BASE_ADDRESS}")
else()
set(code_address "${CONFIG_FLASH_LOAD_OFFSET}")
set(flash_addr "${CONFIG_FLASH_LOAD_OFFSET}")
endif()
if(CONFIG_BUILD_OUTPUT_UF2_USE_FLASH_OFFSET)
# Note: the `+ 0` in the formula below avoids errors in cases where a Kconfig
# variable is undefined and thus expands to nothing.
math(EXPR code_address
"${code_address} + ${CONFIG_FLASH_LOAD_OFFSET} + 0"
math(EXPR flash_addr
"${flash_addr} + ${CONFIG_FLASH_LOAD_OFFSET} + 0"
OUTPUT_FORMAT HEXADECIMAL
)
endif()
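# A small sketch of the `+ 0` guard noted above (variable names are
# hypothetical): maybe_offset is deliberately left unset here, and per the note
# above the trailing `+ 0` keeps the expression evaluable when a Kconfig symbol
# expands to nothing.
set(base_addr "0x10000")
math(EXPR load_addr "${base_addr} + ${maybe_offset} + 0" OUTPUT_FORMAT HEXADECIMAL)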
# No-XIP images (such as ones for RP2350 with CONFIG_XIP=n)
# are typically loaded to RAM
if(NOT CONFIG_XIP)
if(CONFIG_BUILD_OUTPUT_ADJUST_LMA)
math(EXPR code_address
"${CONFIG_SRAM_BASE_ADDRESS} + ${CONFIG_BUILD_OUTPUT_ADJUST_LMA} + 0"
OUTPUT_FORMAT HEXADECIMAL
)
else()
set(code_address "${CONFIG_SRAM_BASE_ADDRESS}")
endif()
endif()
list(APPEND
post_build_commands
COMMAND ${PYTHON_EXECUTABLE} ${ZEPHYR_BASE}/scripts/build/uf2conv.py
-c
-f ${CONFIG_BUILD_OUTPUT_UF2_FAMILY_ID}
-b ${code_address}
-b ${flash_addr}
-o ${KERNEL_UF2_NAME}
${KERNEL_BIN_NAME}
)
@@ -1953,28 +1755,6 @@ if(CONFIG_BUILD_OUTPUT_BIN AND CONFIG_BUILD_OUTPUT_UF2)
set(BYPRODUCT_KERNEL_UF2_NAME "${PROJECT_BINARY_DIR}/${KERNEL_UF2_NAME}" CACHE FILEPATH "Kernel uf2 file" FORCE)
endif()
if(CONFIG_BUILD_OUTPUT_MOT)
get_property(elfconvert_formats TARGET bintools PROPERTY elfconvert_formats)
if(srec IN_LIST elfconvert_formats)
list(APPEND
post_build_commands
COMMAND $<TARGET_PROPERTY:bintools,elfconvert_command>
$<TARGET_PROPERTY:bintools,elfconvert_flag>
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>srec
$<TARGET_PROPERTY:bintools,elfconvert_flag_intarget>${OUTPUT_FORMAT}
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outfile>${KERNEL_MOT_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_final>
)
list(APPEND
post_build_byproducts
${KERNEL_MOT_NAME}
)
set(BYPRODUCT_KERNEL_MOT_NAME "${PROJECT_BINARY_DIR}/${KERNEL_MOT_NAME}" CACHE FILEPATH "Kernel mot file" FORCE)
endif()
endif()
set(KERNEL_META_PATH ${PROJECT_BINARY_DIR}/${KERNEL_META_NAME} CACHE INTERNAL "")
if(CONFIG_BUILD_OUTPUT_META)
list(APPEND
@@ -2015,7 +1795,7 @@ if(CONFIG_BUILD_OUTPUT_S19)
post_build_commands
COMMAND $<TARGET_PROPERTY:bintools,elfconvert_command>
$<TARGET_PROPERTY:bintools,elfconvert_flag>
$<$<BOOL:${CONFIG_BUILD_OUTPUT_S19_GAP_FILL}>:${gap_fill}>
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>srec
$<TARGET_PROPERTY:bintools,elfconvert_flag_srec_len>1
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
@@ -2031,14 +1811,11 @@ if(CONFIG_BUILD_OUTPUT_S19)
endif()
if(CONFIG_OUTPUT_DISASSEMBLY)
if(CONFIG_OUTPUT_DISASSEMBLE_ALL)
if(CONFIG_OUTPUT_DISASSEMBLE_ALL)
set(disassembly_type "$<TARGET_PROPERTY:bintools,disassembly_flag_all>")
elseif(CONFIG_OUTPUT_DISASSEMBLY_WITH_SOURCE)
elseif (CONFIG_OUTPUT_DISASSEMBLY_WITH_SOURCE)
set(disassembly_type "$<TARGET_PROPERTY:bintools,disassembly_flag_inline_source>")
endif()
if(CONFIG_OUTPUT_DISASSEMBLY_NO_ALIASES)
list(APPEND disassembly_type "$<TARGET_PROPERTY:bintools,disassembly_flag_no_aliases>")
endif()
list(APPEND
post_build_commands
COMMAND $<TARGET_PROPERTY:bintools,disassembly_command>
@@ -2113,20 +1890,32 @@ if(CONFIG_BUILD_OUTPUT_COMPRESS_DEBUG_SECTIONS)
endif()
if(CONFIG_BUILD_OUTPUT_EXE)
if(CMAKE_GENERATOR STREQUAL "Unix Makefiles")
set(MAKE "${CMAKE_MAKE_PROGRAM}" CACHE FILEPATH "cmake defined make")
if (NOT CONFIG_NATIVE_LIBRARY)
list(APPEND
post_build_commands
COMMAND
${CMAKE_COMMAND} -E copy ${KERNEL_ELF_NAME} ${KERNEL_EXE_NAME}
)
list(APPEND
post_build_byproducts
${KERNEL_EXE_NAME}
)
else()
if(CMAKE_GENERATOR STREQUAL "Unix Makefiles")
set(MAKE "${CMAKE_MAKE_PROGRAM}" CACHE FILEPATH "cmake defined make")
endif()
find_program(MAKE make REQUIRED)
add_custom_target(native_runner_executable
ALL
COMMENT "Building native simulator runner, and linking final executable"
COMMAND
${MAKE} -f ${ZEPHYR_BASE}/scripts/native_simulator/Makefile all --warn-undefined-variables
-r NSI_CONFIG_FILE=${APPLICATION_BINARY_DIR}/zephyr/NSI/nsi_config
# nsi_config is created by the board cmake file
DEPENDS ${logical_target_for_zephyr_elf}
BYPRODUCTS ${KERNEL_EXE_NAME}
)
endif()
find_program(MAKE make REQUIRED)
add_custom_target(native_runner_executable
ALL
COMMENT "Building native simulator runner, and linking final executable"
COMMAND
${MAKE} -f ${ZEPHYR_BASE}/scripts/native_simulator/Makefile all --warn-undefined-variables
-r NSI_CONFIG_FILE=${APPLICATION_BINARY_DIR}/zephyr/NSI/nsi_config
# nsi_config is created by the board cmake file
DEPENDS ${logical_target_for_zephyr_elf}
BYPRODUCTS ${KERNEL_EXE_NAME}
)
set(BYPRODUCT_KERNEL_EXE_NAME "${PROJECT_BINARY_DIR}/${KERNEL_EXE_NAME}" CACHE FILEPATH "Kernel exe file" FORCE)
endif()
@@ -2144,7 +1933,7 @@ if(CONFIG_BUILD_OUTPUT_INFO_HEADER)
)
endif()
if(CONFIG_LLEXT AND CONFIG_LLEXT_EXPORT_BUILTINS_BY_SLID)
if (CONFIG_LLEXT AND CONFIG_LLEXT_EXPORT_BUILTINS_BY_SLID)
#slidgen must be the first post-build command to be executed
#on the Zephyr ELF to ensure that all other commands, such as
#binary file generation, are operating on a prepared ELF.
@@ -2155,6 +1944,7 @@ if(CONFIG_LLEXT AND CONFIG_LLEXT_EXPORT_BUILTINS_BY_SLID)
--elf-file ${PROJECT_BINARY_DIR}/${KERNEL_ELF_NAME}
--slid-listing ${PROJECT_BINARY_DIR}/slid_listing.txt
)
endif()
if(NOT CMAKE_C_COMPILER_ID STREQUAL "ARMClang")
@@ -2210,20 +2000,10 @@ if(signing_script)
endif()
# Generate USB-C VIF policies in XML format
if(CONFIG_BUILD_OUTPUT_VIF)
if (CONFIG_BUILD_OUTPUT_VIF)
include(${CMAKE_CURRENT_LIST_DIR}/cmake/vif.cmake)
endif()
get_property(post_build_patch_elf_commands
GLOBAL PROPERTY
post_build_patch_elf_commands
)
list(PREPEND
post_build_commands
${post_build_patch_elf_commands}
)
get_property(extra_post_build_commands
GLOBAL PROPERTY
extra_post_build_commands
@@ -2261,7 +2041,7 @@ if(LOG_DICT_DB_NAME_ARG)
--build-header ${PROJECT_BINARY_DIR}/include/generated/zephyr/version.h
)
if(NOT CONFIG_LOG_DICTIONARY_DB_TARGET)
if (NOT CONFIG_LOG_DICTIONARY_DB_TARGET)
# If not using a separate target for generating the logging dictionary
# database, add the generation to the post-build commands to make sure
# the database is actually generated.
@@ -2395,57 +2175,51 @@ if((CMAKE_BUILD_TYPE IN_LIST build_types) AND (NOT NO_BUILD_TYPE_WARNING))
endif()
# Extension Development Kit (EDK) generation.
if(CONFIG_LLEXT_EDK)
if(CONFIG_LLEXT_EDK_FORMAT_TAR_XZ)
set(llext_edk_extension "tar.xz")
elseif(CONFIG_LLEXT_EDK_FORMAT_TAR_ZSTD)
set(llext_edk_extension "tar.Z")
elseif(CONFIG_LLEXT_EDK_FORMAT_ZIP)
set(llext_edk_extension "zip")
else()
message(FATAL_ERROR "Unsupported LLEXT_EDK_FORMAT choice")
endif()
set(llext_edk_file ${PROJECT_BINARY_DIR}/${CONFIG_LLEXT_EDK_NAME}.${llext_edk_extension})
set(llext_edk_file ${PROJECT_BINARY_DIR}/${CONFIG_LLEXT_EDK_NAME}.tar.xz)
# TODO maybe generate flags for C CXX ASM
zephyr_get_compile_definitions_for_lang(C zephyr_defs)
zephyr_get_compile_options_for_lang(C zephyr_flags)
# TODO maybe generate flags for C CXX ASM
zephyr_get_compile_definitions_for_lang(C zephyr_defs)
zephyr_get_compile_options_for_lang(C zephyr_flags)
# Filter out non LLEXT and LLEXT_EDK flags - and add required ones
llext_filter_zephyr_flags(LLEXT_REMOVE_FLAGS ${zephyr_flags} llext_filt_flags)
llext_filter_zephyr_flags(LLEXT_EDK_REMOVE_FLAGS ${llext_filt_flags} llext_filt_flags)
# Filter out non LLEXT and LLEXT_EDK flags - and add required ones
llext_filter_zephyr_flags(LLEXT_REMOVE_FLAGS ${zephyr_flags} llext_filt_flags)
llext_filter_zephyr_flags(LLEXT_EDK_REMOVE_FLAGS ${llext_filt_flags} llext_filt_flags)
set(llext_edk_cflags ${zephyr_defs} -DLL_EXTENSION_BUILD)
list(APPEND llext_edk_cflags ${llext_filt_flags})
list(APPEND llext_edk_cflags ${LLEXT_APPEND_FLAGS})
list(APPEND llext_edk_cflags ${LLEXT_EDK_APPEND_FLAGS})
set(llext_edk_cflags ${zephyr_defs} -DLL_EXTENSION_BUILD)
list(APPEND llext_edk_cflags ${llext_filt_flags})
list(APPEND llext_edk_cflags ${LLEXT_APPEND_FLAGS})
list(APPEND llext_edk_cflags ${LLEXT_EDK_APPEND_FLAGS})
build_info(llext-edk file PATH ${llext_edk_file})
build_info(llext-edk cflags VALUE ${llext_edk_cflags})
build_info(llext-edk include-dirs VALUE "$<TARGET_PROPERTY:zephyr_interface,INTERFACE_INCLUDE_DIRECTORIES>")
add_custom_command(
add_custom_command(
OUTPUT ${llext_edk_file}
# Regenerate syscalls in case CONFIG_LLEXT_EDK_USERSPACE_ONLY
COMMAND ${CMAKE_COMMAND}
-E make_directory edk/include/generated/zephyr
-E make_directory edk/include/generated/zephyr
COMMAND
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/build/gen_syscalls.py
--json-file ${syscalls_json} # Read this file
--base-output edk/include/generated/zephyr/syscalls # Write to this dir
--syscall-dispatch edk/include/generated/zephyr/syscall_dispatch.c # Write this file
--syscall-list ${edk_syscall_list_h}
$<$<BOOL:${CONFIG_LLEXT_EDK_USERSPACE_ONLY}>:--userspace-only>
${SYSCALL_LONG_REGISTERS_ARG}
${SYSCALL_SPLIT_TIMEOUT_ARG}
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/build/gen_syscalls.py
--json-file ${syscalls_json} # Read this file
--base-output edk/include/generated/zephyr/syscalls # Write to this dir
--syscall-dispatch edk/include/generated/zephyr/syscall_dispatch.c # Write this file
--syscall-list ${edk_syscall_list_h}
$<$<BOOL:${CONFIG_LLEXT_EDK_USERSPACE_ONLY}>:--userspace-only>
${SYSCALL_LONG_REGISTERS_ARG}
${SYSCALL_SPLIT_TIMEOUT_ARG}
COMMAND ${CMAKE_COMMAND}
-DPROJECT_BINARY_DIR=${PROJECT_BINARY_DIR}
-DAPPLICATION_SOURCE_DIR=${APPLICATION_SOURCE_DIR}
-DINTERFACE_INCLUDE_DIRECTORIES="$<TARGET_PROPERTY:zephyr_interface,INTERFACE_INCLUDE_DIRECTORIES>"
-Dllext_edk_file=${llext_edk_file}
-Dllext_edk_cflags="${llext_edk_cflags}"
-Dllext_edk_name=${CONFIG_LLEXT_EDK_NAME}
-DWEST_TOPDIR=${WEST_TOPDIR}
-DZEPHYR_BASE=${ZEPHYR_BASE}
-DCONFIG_LLEXT_EDK_USERSPACE_ONLY=${CONFIG_LLEXT_EDK_USERSPACE_ONLY}
-P ${ZEPHYR_BASE}/cmake/llext-edk.cmake
DEPENDS ${logical_target_for_zephyr_elf} ${syscalls_json} build_info_yaml_saved
DEPENDS ${logical_target_for_zephyr_elf}
COMMAND_EXPAND_LISTS
)
add_custom_target(llext-edk DEPENDS ${llext_edk_file})
endif()
)
add_custom_target(llext-edk DEPENDS ${llext_edk_file})
# @Intent: Set compiler specific flags for standard C/C++ includes
# Done at the very end, so any other system includes which may
@@ -2464,13 +2238,11 @@ add_subdirectory_ifdef(
cmake/makefile_exports
)
# Ask the compiler to set the lib_include_dir and rt_library properties
compiler_set_linker_properties()
toolchain_linker_finalize()
# export build information
build_info(zephyr version VALUE ${PROJECT_VERSION_STR})
build_info(zephyr zephyr-base VALUE ${ZEPHYR_BASE})
yaml_save(NAME build_info)
yaml_context(EXISTS NAME build_info result)
if(result)
build_info(zephyr version VALUE ${PROJECT_VERSION_STR})
build_info(zephyr zephyr-base VALUE ${ZEPHYR_BASE})
yaml_save(NAME build_info)
endif()


@@ -1,4 +1,491 @@
# Instead of the CODEOWNERS file, the Zephyr Project uses a custom format.
# See MAINTAINERS.yml for details.
# CODEOWNERS for automatic review assignment in GitHub
# https://help.github.com/en/articles/about-code-owners#codeowners-syntax
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
#
# DO NOT EDIT THIS FILE.
# +++++++++++ NOTE ++++++++++++++++
#
# Please use the MAINTAINERS file to add yourself to an area or to add a new
# component or code. This file is going to be deprecated and currently only has
# entries that are not covered by the MAINTAINERS file.
/soc/arm/aspeed/ @aspeeddylan
/soc/atmel/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/ene/ @ene-steven
/soc/arm/infineon_cat1/ @ifyall @npal-cy
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/st_stm32/stm32h7/*stm32h735* @benediktibk
/soc/arm/st_stm32/stm32l4/*stm32l451* @benediktibk
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynq7000/ @ibirnbaum
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/arm/renesas_rcar/ @aaillet
/soc/riscv/openisa*/ @dleach02
/soc/riscv/riscv-privileged/andes_v5/ @cwshu @kevinwang821020 @jimmyzhe
/soc/riscv/riscv-privileged/neorv32/ @henrikbrixandersen
/soc/riscv/riscv-privileged/gd32vf103/ @soburi
/soc/starfive/jh71xx/ @pfarwsi
/soc/riscv/riscv-privileged/niosv/ @sweeaun
/boards/adafruit/feather_nrf52840/ @jacobw
/boards/ene/ @ene-steven
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
/boards/arm/96b_stm32_sensor_mez/ @Mani-Sadhasivam
/boards/arm/96b_wistrio/ @Mani-Sadhasivam
/boards/arm/acn52832/ @sven-hm
/boards/arm/arduino_mkrzero/ @soburi
/boards/arm/bbc_microbit_v2/ @LingaoM
/boards/arm/blackpill_f401ce/ @coderkalyan
/boards/arm/blackpill_f411ce/ @coderkalyan
/boards/arm/bt*10/ @greg-leach
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/cy8ckit_062_ble/ @ifyall @npal-cy
/boards/arm/cy8ckit_062s4/ @DaWei8823
/boards/arm/cy8ckit_062_wifi_bt/ @ifyall @npal-cy
/boards/arm/cy8cproto_062_4343w/ @ifyall @npal-cy
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @mmahadevan108 @dleach02
/boards/arm/gd32*/ @nandojve
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @mmahadevan108 @dleach02
/boards/arm/ip_k66f/ @parthitce @lmajewski
/boards/arm/legend/ @mbittan @simonguinot
/boards/arm/lpcxpresso*/ @mmahadevan108 @dleach02
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimx8mm_phyboard_polis @pefech
/boards/arm/mimxrt*/ @mmahadevan108 @dleach02
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao @ChiHuaL
/boards/arm/nrf*/ @carlescufi @lemrey
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/qemu_cortex_a9/ @ibirnbaum
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg @stephanosio
/boards/arm/quick_feather/ @fkokosinski @kgugala
/boards/arm/rak4631_nrf52840/ @gpaquet85
/boards/arm/rak5010_nrf52840/ @gpaquet85
/boards/arm/rpi_pico/ @yonsch
/boards/arm/ronoth_lodev/ @NorthernDean
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/atmel/ @nandojve
/boards/arm/scobc_module1/ @yashi
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/s32*/ @manuargue
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32h735g_disco/ @benediktibk
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/rcar_*/ @aaillet
/boards/arm/ubx_bmd345eval_nrf52840/ @Navin-Sankar @brec-u-blox
/boards/arm/nrf5340_audio_dk_nrf5340 @koffes @alexsven @erikrobstad @rick1082 @gWacey
/boards/arm/stm32_min_dev/ @sidcha
/boards/ezurio/* @rerickson1
/boards/riscv/rv32m1_vega/ @dleach02
/boards/riscv/adp_xc7k_ae350/ @cwshu @kevinwang821020 @jimmyzhe
/boards/riscv/longan_nano/ @soburi
/boards/riscv/neorv32/ @henrikbrixandersen
/boards/riscv/niosv*/ @sweeaun
/boards/riscv/sparkfun_red_v_things_plus/ @soburi
/boards/riscv/stamp_c3/ @soburi
/boards/starfive/visionfive2/ @kanakshilledar @pfarwsi
/boards/shields/atmel_rf2xx/ @nandojve
/boards/shields/esp_8266/ @nandojve
/boards/shields/inventek_eswifi/ @nandojve
/boards/xtensa/odroid_go/ @ydamigos
/boards/xtensa/nxp_adsp_imx8/ @iuliana-prodan @dbaluta
/boards/xtensa/kincony_kc868_a32/ @bbilas
/boards/arm64/qemu_cortex_a53/ @carlocaione
/boards/arm64/bcm958402m2_a72/ @abhishek-brcm
/boards/arm64/mimx8mm_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mp_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mn_evk/ @JiafeiPan
/boards/arm64/mimx93_evk/ @JiafeiPan
/boards/arm64/nxp_ls1046ardb/ @JiafeiPan
/boards/arm64/xenvm/ @lorc @firscity
/boards/arm64/fvp_baser_aemv8r/ @povergoing
/boards/arm64/fvp_base_revc_2xaemv8a/ @carlocaione
/boards/arm64/intel_socfpga_agilex_socdk/ @siclim @ngboonkhai
/boards/arm64/intel_socfpga_agilex5_socdk/ @teikheng @gdengi
/boards/arm64/rcar_*/ @lorc @xakep-amatop
# All cmake related files
/doc/develop/tools/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/services/device_mgmt/smp_protocol.rst @de-nordic @nordicjm
/doc/services/device_mgmt/smp_groups/ @de-nordic @nordicjm
/doc/services/sensing/ @lixuzha @ghu0510 @qianruh
/doc/CMakeLists.txt @carlescufi
/doc/_scripts/ @carlescufi
/drivers/*/*sam4l* @nandojve
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*gd32* @nandojve
/drivers/*/*mcux* @mmahadevan108 @dleach02
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao @ChiHuaL
/drivers/*/*andes* @cwshu @kevinwang821020 @jimmyzhe
/drivers/*/*ifx_cat1* @ifyall @npal-cy
/drivers/*/*neorv32* @henrikbrixandersen
/drivers/*/*_s32* @manuargue
/drivers/adc/adc_ads1x1x.c @XenuIsWatching
/drivers/adc/adc_stm32.c @cybertale
/drivers/adc/adc_rpi_pico.c @soburi
/drivers/adc/*ads114s0x* @benediktibk
/drivers/adc/*max11102_17* @benediktibk
/drivers/adc/*kb1200* @ene-steven
/drivers/adc/adc_ad559x.c @bbilas
/drivers/audio/*nrfx* @anangl
/drivers/auxdisplay/*pt6314* @xingrz
/drivers/auxdisplay/* @thedjnK
/drivers/bbram/* @yperess @sjg20 @jackrosenthal
/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/drivers/bluetooth/hci/hci_esp32.c @sylvioalves
/drivers/can/*mcp2515* @karstenkoenig
/drivers/can/*rcar* @aaillet
/drivers/clock_control/*agilex* @siclim @gdengi
/drivers/clock_control/*nrf* @nordic-krch
/drivers/clock_control/*esp32* @extremegtx @sylvioalves
/drivers/clock_control/*cpg_mssr* @aaillet
/drivers/console/ipm_console.c @finikorg
/drivers/console/semihost_console.c @luozhongyao
/drivers/console/jailhouse_debug_console.c @MrVan
/drivers/counter/counter_cmos.c @dcpleung
/drivers/counter/counter_ll_stm32_timer.c @kentjhall
/drivers/counter/*esp32* @sylvioalves
/drivers/counter/dw_timer.c @pbalsundar
/drivers/counter/counter_timer_shell.c @pbalsundar
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/display/*rm68200* @mmahadevan108
/drivers/display/display_ili9342c.* @extremegtx
/drivers/dac/*ad56xx* @benediktibk
/drivers/dac/dac_ad559x.c @bbilas
/drivers/dai/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ssp/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/dmic/ @marcinszkudlinski @abonislawski
/drivers/dai/intel/alh/ @abonislawski
/drivers/dma/dma_dw_axi.c @pbalsundar
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*dw_common* @abonislawski
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale @lowlander
/drivers/dma/*pl330* @raveenp
/drivers/dma/*iproc_pax* @raveenp
/drivers/dma/*intel_adsp* @teburd @abonislawski
/drivers/dma/*rpi_pico* @soburi
/drivers/dma/*xmc4xxx* @talih0
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*b91* @andy-liu-telink
/drivers/entropy/*bt_hci* @JordanYates
/drivers/entropy/*rv32m1* @dleach02
/drivers/ethernet/*dwmac* @npitre
/drivers/ethernet/*stm32* @Nukersson @lochej
/drivers/ethernet/*w5500* @parthitce
/drivers/ethernet/*xlnx_gem* @ibirnbaum
/drivers/ethernet/*smsc91x* @sgrrzhf
/drivers/ethernet/*adin2111* @GeorgeCGV
/drivers/ethernet/*oa_tc6* @lmajewski
/drivers/ethernet/*lan865x* @lmajewski
/drivers/ethernet/dwc_xgmac @Smale-12048867
/drivers/ethernet/dwc_xgmac/dwc_xgmac @Smale-12048867
/drivers/ethernet/phy/ @rlubos @tbursztyka @arvinf @jukkar
/drivers/ethernet/phy/*adin2111* @GeorgeCGV
/drivers/mdio/*adin2111* @GeorgeCGV
/drivers/flash/*stm32_qspi* @lmajewski
/drivers/flash/*b91* @andy-liu-telink
/drivers/flash/*cadence* @ngboonkhai
/drivers/flash/*cc13xx_cc26xx* @pepe2k
/drivers/flash/*nrf* @de-nordic
/drivers/flash/*esp32* @sylvioalves
/drivers/flash/flash_cadence_nand* @nbalabak
/drivers/gpio/*b91* @andy-liu-telink
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*nct38xx* @MulinChao @ChiHuaL
/drivers/gpio/*eos_s3* @fkokosinski @kgugala
/drivers/gpio/*rcar* @aaillet
/drivers/gpio/*esp32* @sylvioalves
/drivers/gpio/*rpi_pico* @yonsch
/drivers/gpio/*xlnx_ps* @ibirnbaum
/drivers/gpio/*ads114s0x* @benediktibk
/drivers/gpio/*bd8lb600fs* @benediktibk
/drivers/gpio/*pcal64xxa* @benediktibk
/drivers/gpio/*kb1200* @ene-steven
/drivers/gpio/gpio_altera_pio.c @shilinte
/drivers/gpio/gpio_ad559x.c @bbilas
/drivers/i2c/i2c_common.c @sjg20
/drivers/i2c/i2c_emul.c @sjg20
/drivers/i2c/i2c_ite_enhance.c @GTLin08
/drivers/i2c/i2c_ite_it8xxx2.c @GTLin08
/drivers/i2c/i2c_shell.c @nashif
/drivers/i2c/Kconfig.i2c_emul @sjg20
/drivers/i2c/Kconfig.it8xxx2 @GTLin08
/drivers/i2c/target/*eeprom* @henrikbrixandersen
/drivers/i2c/Kconfig.test @mbolivar-ampere
/drivers/i2c/i2c_test.c @mbolivar-ampere
/drivers/i2c/*rcar* @aaillet
/drivers/i2c/*kb1200* @ene-steven
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2s/*nrfx* @anangl
/drivers/i3c/i3c_cdns.c @XenuIsWatching
/drivers/ieee802154/ @rlubos @tbursztyka @jukkar @fgrandel
/drivers/ieee802154/*b91* @andy-liu-telink
/drivers/ieee802154/ieee802154_nrf5* @ankuns
/drivers/ieee802154/ieee802154_rf2xx* @tbursztyka @nandojve
/drivers/ieee802154/ieee802154_cc13xx* @bwitherspoon @cfriedt @vaishnavachath
/drivers/interrupt_controller/ @dcpleung @nashif
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/interrupt_controller/*esp32* @sylvioalves
/drivers/interrupt_controller/intc_nuclei_eclic.c @soburi
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic
/drivers/ipm/ipm_cavs* @dcpleung @andyross
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/ipm/ipm_stm32_hsem.c @cameled
/drivers/ipm/ipm_esp32.c @uLipe
/drivers/ipm/ipm_ivshmem.c @uLipe
/drivers/kscan/*xec* @franciscomunoz @sjvasanth1
/drivers/kscan/*ft5336* @MaureenHelm
/drivers/kscan/*ht16k33* @henrikbrixandersen
/drivers/led_strip/ @mbolivar-ampere
/drivers/mfd/mfd_ad559x.c @bbilas
/drivers/mfd/mfd_max20335.c @bbilas
/drivers/misc/ft8xx/ @hubertmis
/drivers/modem/hl7800.c @rerickson1
/drivers/modem/simcom-sim7080.c @lgehreke
/drivers/modem/simcom-sim7080.h @lgehreke
/drivers/modem/Kconfig.hl7800 @rerickson1
/drivers/modem/Kconfig.simcom-sim7080 @lgehreke
/drivers/pinctrl/*esp32* @sylvioalves
/drivers/pinctrl/*it8xxx2* @ite
/drivers/pinctrl/*kb1200* @ene-steven
/drivers/pm_cpu_ops/psci_shell.c @nbalabak @gdengi
/drivers/power_domain/ @ceolin
/drivers/ps2/*xec* @franciscomunoz @sjvasanth1
/drivers/ps2/*npcx* @MulinChao @ChiHuaL
/drivers/pwm/*b91* @andy-liu-telink
/drivers/pwm/*pca9685* @nixward
/drivers/pwm/*rpi_pico* @burumaj
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/*sam0* @nzmichaelh
/drivers/pwm/*test* @JordanYates
/drivers/pwm/*xlnx* @henrikbrixandersen
/drivers/pwm/pwm_capture.c @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/pwm/*gecko* @sun681
/drivers/pwm/*it8xxx2* @RuibinChang
/drivers/pwm/*esp32* @LucasTambor
/drivers/pwm/*rcar* @aaillet
/drivers/pwm/*max31790* @benediktibk
/drivers/pwm/*kb1200* @ene-steven
/drivers/regulator/* @gmarull
/drivers/regulator/regulator_max20335.c @bbilas
/drivers/regulator/regulator_pca9420.c @danieldegrasse
/drivers/regulator/regulator_rpi_pico.c @soburi
/drivers/regulator/regulator_shell.c @danieldegrasse
/drivers/reset/reset_intel_socfpga.c @nbalabak
/drivers/reset/Kconfig.intel_socfpga @nbalabak
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
/drivers/sensor/grow_r502a/ @DineshDK03
/drivers/sensor/hts*/ @avisconti
/drivers/sensor/ina23*/ @bbilas
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/mpr/ @sven-hm
/drivers/sensor/qdec_stm32/ @valeriosetti
/drivers/sensor/rpi_pico_temp/ @soburi
/drivers/sensor/st*/ @avisconti
/drivers/sensor/veaa_x_3/ @jeppenodgaard @MaureenHelm
/drivers/sensor/ene_tack_kb1200/ @ene-steven
/drivers/serial/*b91* @andy-liu-telink
/drivers/serial/uart_altera_jtag.c @nashif @gohshunjing
/drivers/serial/uart_altera.c @gohshunjing
/drivers/serial/*ns16550* @dcpleung @nashif @gdengi
/drivers/serial/*nrfx* @anangl
/drivers/serial/Kconfig.mcux_iuart @Mani-Sadhasivam
/drivers/serial/uart_mcux_iuart.c @Mani-Sadhasivam
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/*rpi_pico* @yonsch
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/uart_xlnx_uartlite.c @henrikbrixandersen
/drivers/serial/*xmc4xxx* @parthitce
/drivers/serial/*numicro* @ssekar15
/drivers/serial/*apbuart* @julius-barendt
/drivers/serial/*rcar* @aaillet
/drivers/serial/Kconfig.xen @lorc @firscity
/drivers/serial/uart_hvc_xen.c @lorc @firscity
/drivers/serial/uart_hvc_xen_consoleio.c @lorc @firscity
/drivers/serial/Kconfig.it8xxx2 @GTLin08
/drivers/serial/uart_ite_it8xxx2.c @GTLin08
/drivers/serial/*intel_lw* @shilinte
/drivers/serial/*kb1200* @ene-steven
/drivers/disk/sdmmc_stm32.c @anthonybrandon
/drivers/ptp_clock/ @tbursztyka @jukkar
/drivers/spi/*b91* @andy-liu-telink
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/spi/*esp32* @sylvioalves
/drivers/spi/*pl022* @soburi
/drivers/sdhc/ @danieldegrasse
/drivers/sdhc/sdhc_cdns* @roymurlidhar @tanmaykathpalia
/drivers/timer/*arm_arch* @carlocaione
/drivers/timer/*cortex_m_systick* @anangl
/drivers/timer/*altera_avalon* @nashif
/drivers/timer/*riscv_machine* @kgugala @pgielda
/drivers/timer/*ite_it8xxx2* @ite
/drivers/timer/*xlnx_psttc* @wjliang @stephanosio
/drivers/timer/*cc13xx_cc26xx_rtc* @vanti
/drivers/timer/*cavs* @dcpleung
/drivers/timer/*leon_gptimer* @julius-barendt
/drivers/timer/*mips_cp0* @frantony
/drivers/timer/*rcar_cmt* @aaillet
/drivers/timer/*esp32_sys* @uLipe
/drivers/timer/*sam0_rtc* @bendiscz
/drivers/timer/*xtensa* @dcpleung
/drivers/timer/*rv32m1_lptmr* @mbolivar
/drivers/timer/*nrf_rtc* @anangl
/drivers/timer/*hpet* @dcpleung
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/i2c/*b91* @andy-liu-telink
/drivers/i2c/i2c_ll_stm32* @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/i2c/*tca954x* @kurddt
/drivers/*/*xec* @franciscomunoz @albertofloyd @sjvasanth1
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/*sifive* @katsuster
/drivers/watchdog/wdt_handlers.c @dcpleung @nashif
/drivers/watchdog/*cc32xx* @pavlohamov
/drivers/watchdog/wdt_ite_it8xxx2.c @RuibinChang
/drivers/watchdog/Kconfig.it8xxx2 @RuibinChang
/drivers/watchdog/wdt_counter.c @nordic-krch
/drivers/watchdog/*rpi_pico* @thedjnK
/drivers/watchdog/*dw* @softwarecki @pbalsundar
/drivers/watchdog/*ifx* @sreeramIfx
/drivers/watchdog/*kb1200* @ene-steven
/drivers/wifi/esp_at/ @mniestroj
/drivers/wifi/eswifi/ @loicpoulain @nandojve
/drivers/wifi/winc1500/ @kludentwo
/drivers/virtualization/ @tbursztyka
/dts/arm/acsip/ @NorthernDean
/dts/arm/aspeed/ @aspeeddylan
/dts/arm/atmel/ @galak @nandojve
/dts/arm/broadcom/ @sbranden
/dts/arm/cypress/ @ifyall @npal-cy
/dts/arm/ene/kb1200 @ene-steven
/dts/arm/gd/ @nandojve
/dts/arm/infineon/xmc4* @parthitce @ifyall @npal-cy
/dts/arm/infineon/psoc6/ @ifyall @npal-cy
/dts/arm64/armv8-r.dtsi @povergoing
/dts/arm64/intel/*intel_socfpga* @siclim
/dts/arm64/nxp/ @JiafeiPan
/dts/arm64/renesas/ @lorc @xakep-amatop
/dts/arm/quicklogic/ @fkokosinski @kgugala
/dts/arm/seeed_studio/ @str4t0m
/dts/arm/st/h7/*stm32h735* @benediktibk
/dts/arm/st/l4/*stm32l451* @benediktibk
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @anangl @carlescufi
/dts/arm/nuvoton/ @ssekar15 @MulinChao @ChiHuaL
/dts/arm/nxp/ @mmahadevan108 @dleach02
/dts/arm/nxp/nxp_s32* @manuargue
/dts/arm/microchip/ @franciscomunoz @albertofloyd @sjvasanth1
/dts/arm/rpi_pico/ @yonsch
/dts/arm/silabs/efm32_pg_1b.dtsi @rdmeneze
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efr32bg22* @kgugala @fkokosinski @pczarnecki
/dts/arm/silabs/efr32xg13p* @mnkp
/dts/arm/silabs/efm32pg1b* @rdmeneze
/dts/arm/silabs/efr32mg21* @l-alfred
/dts/arm/silabs/efr32fg13* @yonsch
/dts/riscv/ite/ @ite
/dts/riscv/microchip/microchip-miv.dtsi @galak
/dts/riscv/openisa/rv32m1* @dleach02
/dts/riscv/starfive/ @rajnesh-kanwal @pfarwsi
/dts/riscv/andes/andes_v5* @cwshu @kevinwang821020 @jimmyzhe
/dts/riscv/niosv/ @sweeaun
/dts/arm/armv*m.dtsi @galak @ioannisg
/dts/arm/armv7-a.dtsi @ibirnbaum
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/arm/renesas/rcar/ @aaillet
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/xtensa/espressif/ @sylvioalves
/dts/xtensa/nxp/ @iuliana-prodan @dbaluta
/dts/bindings/can/ @alexanderwachter @henrikbrixandersen
/dts/bindings/i2c/zephyr*i2c-emul*.yaml @sjg20
/dts/bindings/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/adc/*ads114s08.yaml @benediktibk
/dts/bindings/adc/*max111* @benediktibk
/dts/bindings/modem/*hl7800.yaml @rerickson1
/dts/bindings/serial/ns16550.yaml @dcpleung @nashif
/dts/bindings/counter/snps,dw-timers.yaml @pbalsundar
/dts/bindings/wifi/*esp-at.yaml @mniestroj
/dts/bindings/*/*gd32* @nandojve
/dts/bindings/*/*sam* @nandojve
/dts/bindings/*/*npcx* @MulinChao @ChiHuaL
/dts/bindings/*/*psoc6* @ifyall @npal-cy
/dts/bindings/*/*infineon*cat1* @ifyall @npal-cy
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @mmahadevan108 @dleach02
/dts/bindings/*/nxp*s32* @manuargue
/dts/bindings/*/openisa* @dleach02
/dts/bindings/*/raspberrypi*pico* @yonsch
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/andes* @cwshu @kevinwang821020 @jimmyzhe
/dts/bindings/*/neorv32* @henrikbrixandersen
/dts/bindings/*/*lan91c111* @sgrrzhf
/dts/bindings/i3c/ @dcpleung
/dts/bindings/pm_cpu_ops/* @carlocaione
/dts/bindings/ethernet/*gem.yaml @ibirnbaum
/dts/bindings/auxdisplay/*pt6314.yaml @xingrz
/dts/bindings/auxdisplay/* @thedjnK
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/*ina23* @bbilas
/dts/bindings/sensor/st* @avisconti
/dts/bindings/sensor/zephyr,sensing.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/sensor/zephyr,sensing*.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/smbus/ @finikorg
/dts/bindings/sip_svc/ @maheshraotm
/dts/bindings/cpu/intel,niosv.yaml @sweeaun
/dts/bindings/reset/intel,socfpga-reset.yaml @nbalabak
/dts/bindings/gpio/*pcal64* @benediktibk
/dts/bindings/gpio/*bd8lb600fs* @benediktibk
/dts/bindings/gpio/*ads114s0x* @benediktibk
/dts/bindings/pwm/*max31790* @benediktibk
/dts/bindings/dac/*ad56* @benediktibk


@@ -1,4 +1,4 @@
# Zephyr Project Code of Conduct
# Contributor Covenant Code of Conduct
## Our Pledge


@@ -7,11 +7,7 @@
source "Kconfig.constants"
osource "$(APPLICATION_SOURCE_DIR)/VERSION"
# This should be sourced early since the autogen Kconfig.dts options
# and macros may get used by shields/boards/SoC defconfig or modules.
source "dts/Kconfig"
osource "${APPLICATION_SOURCE_DIR}/VERSION"
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
@@ -37,6 +33,10 @@ osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig.defconfig"
# This loads the testsuite defconfig
source "subsys/testsuite/Kconfig.defconfig"
# This should be early since the autogen Kconfig.dts symbols may get
# used by modules
source "dts/Kconfig"
menu "Modules"
source "modules/Kconfig"
@@ -215,6 +215,12 @@ config LINKER_SORT_BY_ALIGNMENT
in decreasing size of symbols. This helps to minimize
padding between symbols.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
help
The option specifies that the vector table should be placed at the
start of SRAM instead of the start of flash.
config HAS_SRAM_OFFSET
bool
help
@@ -357,19 +363,18 @@ menu "Compiler Options"
config REQUIRES_STD_C99
bool
select DEPRECATED
help
Hidden option to require compiler support for the C99 standard or higher.
config REQUIRES_STD_C11
bool
select DEPRECATED
select REQUIRES_STD_C99
help
Hidden option to require compiler support for the C11 standard or higher.
config REQUIRES_STD_C17
bool
select REQUIRES_STD_C11
help
Hidden option to select compiler support C17 standard or higher.
@@ -382,28 +387,27 @@ config REQUIRES_STD_C23
choice STD_C
prompt "C Standard"
default STD_C23 if REQUIRES_STD_C23
default STD_C17
default STD_C17 if REQUIRES_STD_C17
default STD_C11 if REQUIRES_STD_C11
default STD_C99
help
C Standards.
config STD_C90
bool "C90 [DEPRECATED]"
select DEPRECATED
bool "C90"
depends on !REQUIRES_STD_C99
help
1989 C standard as completed in 1989 and ratified by ISO/IEC
as ISO/IEC 9899:1990. This version is known as "ANSI C".
config STD_C99
bool "C99 [DEPRECATED]"
select DEPRECATED
bool "C99"
depends on !REQUIRES_STD_C11
help
1999 C standard.
config STD_C11
bool "C11 [DEPRECATED]"
select DEPRECATED
bool "C11"
depends on !REQUIRES_STD_C17
help
2011 C standard.
@@ -444,7 +448,6 @@ config CODING_GUIDELINE_CHECK
config NATIVE_LIBC
bool
select FULL_LIBC_SUPPORTED
select TC_PROVIDES_POSIX_C_LANG_SUPPORT_R
help
Zephyr will use the host system C library.
@@ -462,6 +465,15 @@ config NATIVE_BUILD
Zephyr will be built targeting the host system for debug and
development purposes.
config NATIVE_APPLICATION
bool
default y if ARCH_POSIX
depends on !NATIVE_LIBRARY
select NATIVE_BUILD
help
Build as a native application that can run on the host and using
resources and libraries provided by the host.
config NATIVE_LIBRARY
bool
select NATIVE_BUILD
@@ -482,9 +494,7 @@ choice COMPILER_OPTIMIZATIONS
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
# gcc 14.3 -Os is broken on riscv. This setting should be in the SDK, it's here for testing
default SPEED_OPTIMIZATIONS if "$(TOOLCHAIN_VARIANT_COMPILER)" = "gnu" && RISCV
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(TOOLCHAIN_VARIANT_COMPILER)" = "llvm"
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm"
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
@@ -500,7 +510,7 @@ config SIZE_OPTIMIZATIONS
config SIZE_OPTIMIZATIONS_AGGRESSIVE
bool "Aggressively optimize for size"
help
Compiler optimizations will be set to -Oz independently of other
Compiler optimizations wil be set to -Oz independently of other
options.
config SPEED_OPTIMIZATIONS
@@ -533,27 +543,11 @@ config LTO
help
This option enables Link Time Optimization.
config LTO_SINGLE_THREADED
bool "Single-threaded LTO"
depends on LTO
help
This option instructs the linker to use a single thread to process
LTO. See the following issue for more info:
https://github.com/zephyrproject-rtos/sdk-ng/issues/1038
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config DEPRECATION_TEST
bool "Indicate test for deprecated feature"
help
This option is selected by tests which check functionality of
deprecated features. It ensures that COMPILER_WARNINGS_AS_ERRORS
is not selected as that would generate errors when the deprecated
features are used.
config COMPILER_SAVE_TEMPS
bool "Save temporary object files"
help
@@ -624,13 +618,6 @@ config MISRA_SANE
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
config TOOLCHAIN_SUPPORTS_VLA_IN_STATEMENTS
bool
default y
help
Hidden symbol to state if the toolchain can handle vla in
statements.
endmenu
choice
@@ -698,13 +685,6 @@ config OUTPUT_DISASSEMBLY_WITH_SOURCE
the .lst file that can vary across platforms because
of reasons such as having ".." include paths.
config OUTPUT_DISASSEMBLY_NO_ALIASES
bool "Disassemble with raw instruction mnemonics"
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain raw instruction mnemonic instead of
pseudo instructions.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -726,21 +706,8 @@ config CLEANUP_INTERMEDIATE_FILES
from the build process. Note this breaks incremental builds, west spdx
(Software Bill of Material generation), and maybe others.
config BUILD_GAP_FILL_PATTERN
hex "Gap fill pattern"
default 0xFF
help
Pattern used for gap filling of output files.
This value should be set to the value of a clean flash as this can
significantly reduce flash write times.
This setting only defines the gap fill pattern and doesn't enable gap
filling.
Note: binary files are always gap filled as they contain no address
information.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/s19 files [DEPRECATED]."
select DEPRECATED
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
@@ -748,12 +715,6 @@ config BUILD_OUTPUT_HEX
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_HEX_GAP_FILL
bool "Fill gaps in hex files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in hex based files.
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
@@ -788,12 +749,6 @@ config BUILD_OUTPUT_S19
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_S19_GAP_FILL
bool "Fill gaps in s19 files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in s19 based files.
config BUILD_OUTPUT_UF2
bool "Build a binary in UF2 format"
depends on BUILD_OUTPUT_BIN
@@ -801,12 +756,6 @@ config BUILD_OUTPUT_UF2
Build a UF2 binary zephyr/zephyr.uf2 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_MOT
bool "Build a binary in MOT format"
help
Build a MOT binary zephyr/zephyr.mot in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
if BUILD_OUTPUT_UF2
config BUILD_OUTPUT_UF2_FAMILY_ID
@@ -816,8 +765,7 @@ config BUILD_OUTPUT_UF2_FAMILY_ID
default "0xada52840" if SOC_NRF52840_QIAA
default "0x4fb2d5bd" if SOC_SERIES_IMXRT10XX || SOC_SERIES_IMXRT11XX
default "0x2abc77ec" if SOC_SERIES_LPC55XXX
default "0xe48bff56" if SOC_SERIES_RP2040
default "0xe48bff57" if SOC_SERIES_RP2350
default "0xe48bff56" if SOC_SERIES_RP2XXX
default "0x68ed2b88" if SOC_SERIES_SAMD21
default "0x55114460" if SOC_SERIES_SAMD51
default "0x647824b6" if SOC_SERIES_STM32F0X
@@ -975,7 +923,7 @@ config BUILD_OUTPUT_STRIP_PATHS
bool "Strip absolute paths from binaries"
default y
help
If the compiler supports it, strip the $(ZEPHYR_BASE) prefix from the
If the compiler supports it, strip the ${ZEPHYR_BASE} prefix from the
__FILE__ macro used in __ASSERT*, in the
.noinit."/home/joe/zephyr/fu/bar.c" section names and in any
application code.
@@ -1038,12 +986,6 @@ config WARN_EXPERIMENTAL
Print a warning when the Kconfig tree is parsed if any experimental
features are enabled.
config NOT_SECURE
bool
help
Symbol to be selected by a feature to indicate that feature is
not secure.
config TAINT
bool
help
@@ -1073,10 +1015,38 @@ menu "Boot Options"
config IS_BOOTLOADER
bool "Act as a bootloader"
depends on XIP
depends on ARM
help
This option indicates that Zephyr will act as a bootloader to execute
a separate Zephyr image payload.
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader [DEPRECATED]"
default 0
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
This option specifies the amount of SRAM (measure in kB) reserved for
a bootloader image, when either:
- the Zephyr image itself is to act as the bootloader, or
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
This option is deprecated, users should transition to using DTS to set this, if needed.
To be removed after Zephyr 3.7 release.
config BOOTLOADER_SRAM_SIZE_DEPRECATED
bool
default y
select DEPRECATED
depends on BOOTLOADER_SRAM_SIZE != 0
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
Non-prompt symbol to indicate that the deprecated BOOTLOADER_SRAM_SIZE Kconfig has a
non-0 value. Please transition to using devicetree.
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION

View File

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,10 +0,0 @@
> [!IMPORTANT]
> The license files in this directory are maintained as per the recommendation
> of the REUSE specification (https://reuse.software/spec-3.3).
> These are licenses that may apply to some components hosted in the source tree
> but that do not end up in built binaries
> (see https://docs.zephyrproject.org/latest/LICENSING.html#zephyr-licensing).
>
> These files do _not_ define the license of the Zephyr project itself.
> Zephyr is licensed under the Apache 2.0 license, as specified in
> the [LICENSE](/LICENSE) file at the root of the repository.

File diff suppressed because it is too large

View File

@@ -24,7 +24,7 @@ resource-constrained systems: from simple embedded environmental sensors and
LED wearables to sophisticated smart watches and IoT wireless gateways.
The Zephyr kernel supports multiple architectures, including ARM (Cortex-A,
Cortex-R, Cortex-M), Intel x86, ARC, Tensilica Xtensa, and RISC-V,
Cortex-R, Cortex-M), Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V,
SPARC, MIPS, and a large number of `supported boards`_.
.. below included in doc/introduction/introduction.rst

View File

@@ -1,25 +0,0 @@
version = 1
# Declare default license and copyright text for files that typically do not include them.
[[annotations]]
path = [
# zephyr-keep-sorted-start
"**/*.cfg",
"**/*.cmake",
"**/*.conf",
"**/*.ecl",
"**/*.html",
"**/*.rst",
"**/*.yaml",
"**/*.yml",
"**/*_defconfig",
"**/CMakeLists.txt",
"**/Kconfig*",
"doc/requirements.in",
"doc/requirements.txt",
"scripts/requirements-*.in",
"scripts/requirements-*.txt",
# zephyr-keep-sorted-stop
]
SPDX-License-Identifier = "Apache-2.0"
SPDX-FileCopyrightText = "Copyright The Zephyr Project Contributors"

View File

@@ -1 +1 @@
0.17.4
0.17.0

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 4
VERSION_MINOR = 3
PATCHLEVEL = 99
VERSION_MINOR = 0
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -8,7 +8,7 @@
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
source "$(KCONFIG_BINARY_DIR)/arch/Kconfig"
source "$(ARCH_DIR)/Kconfig.$(HWM_SCHEME)"
# ToDo: Generate a Kconfig.arch for loading of additional arch in HWMv2.
osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
@@ -33,7 +33,6 @@ config ARM
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_THREADS if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if CPU_CORTEX_M
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
@@ -51,13 +50,11 @@ config ARM64
select ARCH_HAS_THREAD_LOCAL_STORAGE
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select BARRIER_OPERATIONS_ARCH
select ARCH_HAS_DIRECTED_IPIS
select ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_MAPPING
select ARCH_SUPPORTS_EVICTION_TRACKING
select EVICTION_TRACKING if DEMAND_PAGING
select MEM_DOMAIN_HAS_THREAD_LIST if ARM_MPU
help
ARM64 (AArch64) architecture
@@ -96,6 +93,7 @@ config X86
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_DEMAND_PAGING if !X86_64
select ARCH_HAS_DEMAND_MAPPING if ARCH_HAS_DEMAND_PAGING
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select NEED_LIBC_MEM_PARTITION if USERSPACE && TIMING_FUNCTIONS \
&& !BOARD_HAS_TIMING_FUNCTIONS \
&& !SOC_HAS_TIMING_FUNCTIONS
@@ -105,17 +103,25 @@ config X86
help
x86 architecture
config NIOS2
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
imply XIP
select ARCH_HAS_TIMING_FUNCTIONS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C if !RISCV_ISA_EXT_A
select ATOMIC_OPERATIONS_BUILTIN if RISCV_ISA_EXT_A
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !SOC_FAMILY_ESPRESSIF_ESP32
select ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_THREAD_LOCAL_STORAGE
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select USE_SWITCH_SUPPORTED
select USE_SWITCH
select SCHED_IPI_SUPPORTED if SMP
@@ -130,15 +136,13 @@ config XTENSA
select ARCH_IS_SET
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_MEM_DOMAIN_DATA if USERSPACE
select ARCH_HAS_DIRECTED_IPIS
select THREAD_STACK_INFO
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if !SMP
select ARCH_HAS_USERSPACE if XTENSA_MMU || XTENSA_MPU
imply ARCH_HAS_RESERVED_PAGE_FRAMES if XTENSA_MMU
help
Xtensa architecture
@@ -155,18 +159,15 @@ config ARCH_POSIX
select BARRIER_OPERATIONS_BUILTIN
# POSIX arch based targets get their memory cleared on entry by the host OS
select SKIP_BSS_CLEAR
# Override the C standard used for compilation to C 2011
# This is due to some tests using _Static_assert which is a 2011 feature, but
# otherwise relying on compilers supporting it also when set to C99.
# This was in general ok, but with some host compilers and C library versions
# it led to problems. So we override it to 2011 for the native targets.
select REQUIRES_STD_C11
help
POSIX (native) architecture
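As a rough illustration of why the C11 override above matters (not part of the diff; the struct and assertion below are invented for the example): _Static_assert is only a keyword from C11 onward, so native-target builds pinned to C99 relied on compiler extensions for file-scope checks like this.

#include <stddef.h>
#include <stdint.h>

struct sample {
	uint32_t word;  /* 4 bytes */
	uint8_t flag;   /* expected at offset 4 */
};

/* File-scope static assertion: standard in C11, an extension under C99. */
_Static_assert(offsetof(struct sample, flag) == 4, "unexpected struct layout");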
config RX
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
Renesas RX architecture
config ARCH_IS_SET
bool
help
@@ -228,17 +229,7 @@ config SRAM_BASE_ADDRESS
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
config XIP
bool "Execute in place"
help
This option allows zephyr to operate with its text and read-only
sections residing in ROM (or similar read-only memory). Not all boards
support this option so it must be used with care; you must also
supply a linker command file when building your image. Enabling this
option increases both the code and data footprint of the image.
if ARC || ARM || ARM64 || X86 || RISCV || RX || ARCH_POSIX
if ARC || ARM || ARM64 || NIOS2 || X86 || RISCV
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
@@ -261,7 +252,7 @@ config FLASH_BASE_ADDRESS
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARM64 || ARC || X86 || RISCV || RX || ARCH_POSIX
endif # ARM || ARM64 || ARC || NIOS2 || X86 || RISCV
if ARCH_HAS_TRUSTED_EXECUTION
@@ -324,21 +315,11 @@ config USERSPACE
privileged mode or handling a system call; to ensure these are always
caught, enable CONFIG_HW_STACK_PROTECTION.
config NOINIT_SNIPPET_FIRST
bool "Place the no-init linker script snippet first"
help
By default the include/zephyr/linker/common-noinit.ld file inserts the
snippets-noinit.ld file at the end of the section. There are times when
the directives in the snippets-noinit.ld file apply to the other directives
in this file. And in that case the include statement for the snippets-noinit.ld
file needs to come at the start of the section. This configuration option
allows that to happen.
config PRIVILEGED_STACK_SIZE
int "Size of privileged stack"
default 2048 if EMUL
default 1024
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
@@ -348,19 +329,18 @@ config PRIVILEGED_STACK_SIZE
config KOBJECT_TEXT_AREA
int "Size of kobject text area"
default 1024 if UBSAN
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 512 if STACK_CANARIES && RISCV
default 256
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
int "Reserve extra kobject data area (in percentage)"
default 100
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Multiplication factor used to calculate the size of placeholder to
reserve space for kobject metadata hash table. The hash table is
@@ -374,7 +354,7 @@ config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
config KOBJECT_RODATA_AREA_EXTRA_BYTES
int "Reserve extra bytes for kobject rodata area"
default 16
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Reserve a few more bytes for the RODATA region for kobject metadata.
This is to account for the uncertainty of tables generated by gperf.
@@ -460,25 +440,20 @@ config ARCH_STACKWALK_MAX_FRAMES
menu "Interrupt Configuration"
config TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
bool
help
Hidden option to signal that toolchain supports local declaration of
interrupt tables.
config ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
bool
default y
# Userspace is currently not supported
depends on !USERSPACE
# List of currently supported architectures
depends on ARM || ARM64 || RISCV
depends on ARM || ARM64
# List of currently supported toolchains
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "gnuarmemb" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm" || TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "gnuarmemb"
config ISR_TABLES_LOCAL_DECLARATION
bool "ISR tables created locally and placed by linker"
bool "ISR tables created locally and placed by linker [EXPERIMENTAL]"
depends on ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
select EXPERIMENTAL
help
Enable new scheme of interrupt tables generation.
This is totally different generator that would create tables entries locally
@@ -490,7 +465,6 @@ config ISR_TABLES_LOCAL_DECLARATION
config DYNAMIC_INTERRUPTS
bool "Installation of IRQs at runtime"
select SRAM_SW_ISR_TABLE
help
Enable installation of interrupts at runtime, which will move some
interrupt-related data structures to RAM instead of ROM, and
@@ -542,12 +516,6 @@ config ARCH_IRQ_VECTOR_TABLE_ALIGN
to be aligned to architecture specific size. The default
size is 0 for no alignment.
config ARCH_DEVICE_STATE_ALIGN
int "Alignment size of device state"
default 4
help
This option controls alignment size of device state.
choice IRQ_VECTOR_TABLE_TYPE
prompt "IRQ vector table type"
depends on GEN_IRQ_VECTOR_TABLE
@@ -608,30 +576,14 @@ config IRQ_OFFLOAD
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
depends on ARCH_HAS_VECTOR_TABLE_RELOCATION
depends on XIP
depends on !ROMSTART_RELOCATION_ROM
help
When XiP is enabled, this option will result in the vector table being
relocated from Flash to SRAM.
config SRAM_SW_ISR_TABLE
bool "Place the software ISR table in SRAM instead of flash"
help
The option specifies that the software interrupts vector table will be
placed inside SRAM instead of the flash.
config IRQ_OFFLOAD_NESTED
bool "irq_offload() supports nested IRQs"
depends on IRQ_OFFLOAD
default y if ARM64 || X86 || RISCV || XTENSA
help
When set by the platform layers, indicates that
irq_offload() may legally be called in interrupt context to
cause a synchronous nested interrupt on the current CPU.
Not all hardware is capable.
When set by the arch layer, indicates that irq_offload() may
legally be called in interrupt context to cause a
synchronous nested interrupt on the current CPU. Not all
hardware is capable.
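For context, a minimal sketch of what irq_offload() is used for (illustrative only, not part of the change; assumes a Zephyr build with CONFIG_IRQ_OFFLOAD=y, and the non-kernel function names are made up): test code uses it to force a routine to run in interrupt context, which is what the nested-IRQ capability above extends to calls made from an ISR.

#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>
#include <zephyr/sys/__assert.h>

static void isr_side_check(const void *arg)
{
	ARG_UNUSED(arg);
	/* Runs in interrupt context on the current CPU. */
	__ASSERT(k_is_in_isr(), "expected to run in ISR context");
}

void run_in_irq_context(void)
{
	/* On architectures selecting IRQ_OFFLOAD_NESTED this call is also
	 * legal from within an ISR, causing a synchronous nested interrupt. */
	irq_offload(isr_side_check, NULL);
}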
config EXCEPTION_DEBUG
bool "Unhandled exception debugging"
@@ -721,9 +673,6 @@ config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_VECTOR_TABLE_RELOCATION
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
@@ -736,9 +685,6 @@ config ARCH_SUPPORTS_COREDUMP_THREADS
config ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
bool
config ARCH_SUPPORTS_COREDUMP_STACK_PTR
bool
config ARCH_SUPPORTS_ARCH_HW_INIT
bool
@@ -748,18 +694,19 @@ config ARCH_SUPPORTS_ROM_START
config ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
bool
config ARCH_SUPPORTS_EVICTION_TRACKING
bool
help
Architecture code supports page tracking for eviction algorithms
when demand paging is enabled.
config ARCH_HAS_EXTRA_EXCEPTION_INFO
bool
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_COHERENCE
bool
help
When selected, the architecture supports the
arch_mem_coherent() API and can link into incoherent/cached
memory using the ".cached" linker section.
config ARCH_HAS_THREAD_LOCAL_STORAGE
bool
@@ -781,9 +728,6 @@ config ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET
help
Select when the architecture implements arch_thread_priv_stack_space_get().
config ARCH_HAS_HW_SHADOW_STACK
bool
#
# Other architecture related options
#
@@ -1078,22 +1022,10 @@ config ICACHE
help
This option enables the support for the instruction cache (i-cache).
config CACHE_HAS_MIRRORED_MEMORY_REGIONS
bool "Mirrored memory region(s) for both cached and uncached access"
depends on CPU_CACHE_INCOHERENT
help
Enable this if hardware has mirrored memory regions at different
addressed when accessing one would go through cache, but accessing
the other would go to memory directly. A pointer can be cheaply
converted to cached or uncached access.
This applies to intra-CPU multiprocessing incoherence and makes only
sense when MP_MAX_NUM_CPUS > 1.
config CACHE_DOUBLEMAP
bool "Cache double-mapping support"
select CACHE_HAS_MIRRORED_MEMORY_REGIONS
select DEPRECATED
depends on CPU_CACHE_INCOHERENT
default y
help
Double-mapping behavior where a pointer can be cheaply converted to
point to the same cached/uncached memory at different locations.
@@ -1108,12 +1040,9 @@ config CACHE_MANAGEMENT
This option enables the cache management functions backed by arch or
driver code.
if CACHE_MANAGEMENT
if DCACHE
config DCACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
depends on CACHE_MANAGEMENT && DCACHE
help
This option enables querying some architecture-specific hardware for
finding the d-cache line size at the expense of taking more memory and
@@ -1125,21 +1054,18 @@ config DCACHE_LINE_SIZE_DETECT
config DCACHE_LINE_SIZE
int "d-cache line size"
depends on !DCACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,d-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,d-cache-line-size)
depends on CACHE_MANAGEMENT && DCACHE && !DCACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU d-cache line.
Size in bytes of a CPU d-cache line. If this is set to 0 the value is
obtained from the 'd-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting DCACHE_LINE_SIZE_DETECT.
endif # DCACHE
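Quick illustration of where that value ends up (hedged sketch, not part of the diff; the buffer is hypothetical): whether it comes from Kconfig, the devicetree property, or runtime detection, the d-cache line size is what sys_cache_data_line_size_get() reports, and it is the granularity cache range operations work in.

#include <zephyr/kernel.h>
#include <zephyr/cache.h>

static uint8_t rx_buf[256];

void flush_rx_buf(void)
{
	/* Sourced from CONFIG_DCACHE_LINE_SIZE, the 'd-cache-line-size'
	 * devicetree property, or DCACHE_LINE_SIZE_DETECT at runtime. */
	size_t line = sys_cache_data_line_size_get();

	if (line != 0U) {
		/* Range operations round addresses to this granularity. */
		sys_cache_data_flush_range(rx_buf, sizeof(rx_buf));
	}
}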
if ICACHE
config ICACHE_LINE_SIZE_DETECT
bool "Detect i-cache line size at runtime"
depends on CACHE_MANAGEMENT && ICACHE
help
This option enables querying some architecture-specific hardware for
finding the i-cache line size at the expense of taking more memory and
@@ -1151,19 +1077,17 @@ config ICACHE_LINE_SIZE_DETECT
config ICACHE_LINE_SIZE
int "i-cache line size"
depends on !ICACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,i-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,i-cache-line-size)
depends on CACHE_MANAGEMENT && ICACHE && !ICACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU i-cache line.
Size in bytes of a CPU i-cache line. If this is set to 0 the value is
obtained from the 'i-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting ICACHE_LINE_SIZE_DETECT.
endif # ICACHE
choice CACHE_TYPE
prompt "Cache type"
depends on CACHE_MANAGEMENT
default ARCH_CACHE
config ARCH_CACHE
@@ -1171,14 +1095,6 @@ config ARCH_CACHE
help
Integrated on-core cache controller
config SOC_CACHE
bool "SoC specific cache controller"
depends on SOC_HAS_CACHE_FUNCTIONS
help
SoC specific cache controller.
This requires soc_cache.h file to exist in search path.
config EXTERNAL_CACHE
bool "External cache controller"
help
@@ -1186,16 +1102,6 @@ config EXTERNAL_CACHE
endchoice
config CACHE_CAN_SAY_MEM_COHERENCE
bool
help
sys_cache_is_mem_coherent() is defined when enabled. This function can be
used to determine if a pointer lies inside "coherence regions" and can be
safely used in multiprocessor code without explicit flush or invalidate
operations.
endif # CACHE_MANAGEMENT
endmenu
config ARCH
@@ -1224,9 +1130,9 @@ config ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
config ARCH_HAS_CUSTOM_SWAP_TO_MAIN
bool
help
It's possible that an architecture port cannot use z_swap_unlocked()
to swap to the main thread (bg_thread_main), but instead must do
something custom. It must enable this option in that case.
It's possible that an architecture port cannot use _Swap() to swap to
the _main() thread, but instead must do something custom. It must
enable this option in that case.
config ARCH_HAS_CUSTOM_BUSY_WAIT
bool
@@ -1234,15 +1140,3 @@ config ARCH_HAS_CUSTOM_BUSY_WAIT
It's possible that an architecture port cannot or does not want to use
the provided k_busy_wait(), but instead must do something custom. It must
enable this option in that case.
config ARCH_HAS_CUSTOM_CURRENT_IMPL
bool
help
Select when architecture implements arch_current_thread() &
arch_current_thread_set().
config ARCH_IPI_LAZY_COPROCESSORS_SAVE
bool
help
Select when the architecture has multi-CPU lazy context switching
of coprocessor registers.

arch/Kconfig.v1 (new file, +5 lines)

View File

@@ -0,0 +1,5 @@
# Copyright (c) 2023 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"

arch/Kconfig.v2 (new file, +5 lines)
View File

@@ -0,0 +1,5 @@
# Copyright (c) 2023 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
source "$(KCONFIG_BINARY_DIR)/arch/Kconfig"

View File

@@ -258,11 +258,11 @@ config ARC_CURRENT_THREAD_USE_NO_TLS
select CURRENT_THREAD_USE_NO_TLS
default y if (RGF_NUM_BANKS > 1) || ("$(ZEPHYR_TOOLCHAIN_VARIANT)" = "arcmwdt")
help
Disable current Thread Local Storage for ARC. For cores with more than one
RGF_NUM_BANKS the parameter is disabled by-default because banks synchronization
Disable current Thread Local Storage for ARC. For cores with more then one
RGF_NUM_BANKS the parameter is disabled by-default because banks syncronization
requires significant time, and it slows down performance.
ARCMWDT works with TLS pointer in different way then GCC. Optimized access to
TLS pointer via the _current symbol does not provide significant advantages
ARCMWDT works with tls pointer in different way then GCC. Optimized access to
TLS pointer via _current variable does not provide significant advantages
in case of MetaWare.
config GEN_ISR_TABLES
@@ -274,7 +274,7 @@ config GEN_IRQ_START_VECTOR
config HARVARD
bool "Harvard Architecture"
help
The ARC CPU can be configured to have two buses;
The ARC CPU can be configured to have two busses;
one for instruction fetching and another that serves as a data bus.
config CODE_DENSITY
@@ -370,28 +370,6 @@ endmenu
config DCACHE_LINE_SIZE
default 32
config ARC_DCACHE_REGION_OPERATIONS
bool "DCACHE region operations"
depends on CACHE_MANAGEMENT && DCACHE
default n
help
Perform L1 data cache management operations by regions rather than line by line in a loop,
improves performance of cache management operations.
config ARC_SLC
bool "System level cache"
depends on CACHE_MANAGEMENT && DCACHE && (CPU_HS4X || CPU_HS3X)
default n
help
This option enables System Level Cache, and adds SLC support to the data cache management operations.
config ARC_SLC_LINE_SIZE
int "SLC line size"
depends on ARC_SLC
default 128
help
Size in bytes of a CPU system level cache line.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768 if !64BIT
@@ -411,11 +389,6 @@ config ARC_EARLY_SOC_INIT
(before C runtime initialization). Setup code is called in form of
soc_early_asm_init_percpu assembler macro.
# ARC vector table must be aligned to 1KiB boundary, and will be at the
# start of the ROM region.
config ROM_START_OFFSET
default 0x400 if BOOTLOADER_MCUBOOT
config MAIN_STACK_SIZE
default 4096 if 64BIT

View File

@@ -8,6 +8,14 @@
__weak void *__dso_handle;
int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
{
ARG_UNUSED(destructor);
ARG_UNUSED(objptr);
ARG_UNUSED(dso);
return 0;
}
int atexit(void (*function)(void))
{
return 0;

View File

@@ -34,5 +34,3 @@ add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_LLEXT elf.c)

View File

@@ -2,7 +2,6 @@
/*
* Copyright (c) 2016 Synopsys, Inc. All rights reserved.
* Copyright (c) 2025 GSI Technology, All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -30,34 +29,16 @@
size_t sys_cache_line_size;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100 /* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
#define DC_CTRL_INVALIDATE_MODE 0x40 /* d-cache invalidate mode bit */
#define DC_CTRL_REGION_OP 0xe00 /* d-cache region operation */
#define MMU_BUILD_PHYSICAL_ADDR_EXTENSION 0x1000 /* physical address extension enable mask */
#define SLC_CTRL_DISABLE 0x1 /* SLC disable */
#define SLC_CTRL_INVALIDATE_MODE 0x40 /* SLC invalidate mode */
#define SLC_CTRL_BUSY_STATUS 0x100 /* SLC busy status */
#define SLC_CTRL_REGION_OP 0xe00 /* SLC region operation */
#if defined(CONFIG_ARC_SLC)
/*
* spinlock is used for SLC access because depending on HW configuration, the SLC might be shared
* between the cores, and in this case, only one core is allowed to access the SLC register
* interface at a time.
*/
static struct k_spinlock slc_lock;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100/* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
static bool dcache_available(void)
{
@@ -74,189 +55,9 @@ static void dcache_dc_ctrl(uint32_t dcache_en_mask)
}
}
static bool pae_exists(void)
{
uint32_t bcr = z_arc_v2_aux_reg_read(_ARC_V2_MMU_BUILD);
return 1 == FIELD_GET(MMU_BUILD_PHYSICAL_ADDR_EXTENSION, bcr);
}
#if defined(CONFIG_ARC_SLC)
static void slc_enable(void)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
val &= ~SLC_CTRL_DISABLE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, val);
}
static void slc_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END1, 0);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START1, 0);
}
static void slc_flush_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_all(void)
{
K_SPINLOCK(&slc_lock) {
z_arc_v2_aux_reg_write(_ARC_V2_SLC_FLUSH, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
#endif /* CONFIG_ARC_SLC */
void arch_dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
#if defined(CONFIG_ARC_SLC)
slc_enable();
#endif
}
void arch_dcache_disable(void)
@@ -264,107 +65,17 @@ void arch_dcache_disable(void)
/* nothing */
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
static void dcache_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_DC_PTAG_HI, 0);
}
static void dcache_flush_region(void *start_addr_ptr, size_t size)
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
arch_irq_unlock(key); /* --exit critical section-- */
}
#else /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
static void dcache_flush_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
@@ -387,81 +98,6 @@ static void dcache_flush_lines(void *start_addr_ptr, size_t size)
} while (start_addr < end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
static void dcache_flush_and_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
#endif /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_region(start_addr_ptr, size);
#else
dcache_flush_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_region(start_addr_ptr, size);
#endif
return 0;
}
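To make the flush/invalidate split concrete (hedged sketch, not from the tree; the buffer and "DMA" framing are placeholders): dirty lines are written back before a device reads a buffer, and stale lines are dropped before the CPU reads data a device wrote.

#include <string.h>
#include <zephyr/cache.h>

static uint8_t dma_buf[512];

void dma_tx_prepare(const void *payload, size_t len)
{
	if (len > sizeof(dma_buf)) {
		len = sizeof(dma_buf);
	}
	memcpy(dma_buf, payload, len);
	/* Write dirty cache lines back so the DMA engine sees the data. */
	sys_cache_data_flush_range(dma_buf, len);
}

void dma_rx_complete(size_t len)
{
	/* Drop stale cached copies before the CPU reads what DMA wrote. */
	sys_cache_data_invd_range(dma_buf, len);
	/* ... consume dma_buf[0..len) ... */
}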
@@ -469,127 +105,48 @@ int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
int arch_dcache_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_invalidate_region(start_addr_ptr, size);
#else
dcache_invalidate_lines(start_addr_ptr, size);
#endif
key = arch_irq_lock(); /* -enter critical section- */
#if defined(CONFIG_ARC_SLC)
slc_invalidate_region(start_addr_ptr, size);
#endif
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
return 0;
}
int arch_dcache_flush_and_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_and_invalidate_region(start_addr_ptr, size);
#else
dcache_flush_and_invalidate_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_region(start_addr_ptr, size);
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLSH, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_and_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
@@ -667,19 +224,6 @@ static int init_dcache(void)
init_dcache_line_size();
#endif
/*
* Init high address registers to 0 if PAE exists, cache operations for 40 bit addresses not
* implemented
*/
if (pae_exists()) {
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_high_addr_init();
#endif
#if defined(CONFIG_ARC_SLC)
slc_high_addr_init();
#endif
}
return 0;
}

View File

@@ -39,7 +39,7 @@ config ARC_XY_ENABLE
bool "ARC address generation unit registers"
help
Processors with XY memory and AGU registers can configure this
option to accelerate DSP instructions.
option to accelerate DSP instrctions.
config ARC_AGU_SHARING
bool "ARC address generation unit register sharing"

View File

@@ -1,118 +0,0 @@
/*
* Copyright (c) 2024 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/llext/elf.h>
#include <zephyr/llext/llext.h>
#include <zephyr/llext/llext_internal.h>
#include <zephyr/llext/loader.h>
#include <zephyr/logging/log.h>
#include <zephyr/sys/util.h>
LOG_MODULE_REGISTER(elf, CONFIG_LLEXT_LOG_LEVEL);
#define R_ARC_32 4
#define R_ARC_B26 5 /* AKA R_ARC_64 */
#define R_ARC_S25H_PCREL 16
#define R_ARC_S25W_PCREL 17
#define R_ARC_32_ME 27
/* ARCompact insns packed in memory have Middle Endian encoding */
#define ME(x) (((x & 0xffff0000) >> 16) | ((x & 0xffff) << 16))
/**
* @brief Architecture specific function for relocating shared elf
*
* Elf files contain a series of relocations described in multiple sections.
* These relocation instructions are architecture specific and each architecture
* supporting modules must implement this.
*
* The relocation codes are well documented:
* https://github.com/foss-for-synopsys-dwc-arc-processors/arc-ABI-manual/blob/master/ARCv2_ABI.pdf
* https://github.com/zephyrproject-rtos/binutils-gdb
*/
int arch_elf_relocate(struct llext_loader *ldr, struct llext *ext, elf_rela_t *rel,
const elf_shdr_t *shdr)
{
int ret = 0;
uint32_t value;
const uintptr_t loc = llext_get_reloc_instruction_location(ldr, ext, shdr->sh_info, rel);
uint32_t insn = UNALIGNED_GET((uint32_t *)loc);
elf_sym_t sym;
uintptr_t sym_base_addr;
const char *sym_name;
ret = llext_read_symbol(ldr, ext, rel, &sym);
if (ret != 0) {
LOG_ERR("Could not read symbol from binary!");
return ret;
}
sym_name = llext_symbol_name(ldr, ext, &sym);
ret = llext_lookup_symbol(ldr, ext, &sym_base_addr, rel, &sym, sym_name, shdr);
if (ret != 0) {
LOG_ERR("Could not find symbol %s!", sym_name);
return ret;
}
sym_base_addr += rel->r_addend;
int reloc_type = ELF32_R_TYPE(rel->r_info);
switch (reloc_type) {
case R_ARC_32:
case R_ARC_B26:
UNALIGNED_PUT(sym_base_addr, (uint32_t *)loc);
break;
case R_ARC_S25H_PCREL:
/* ((S + A) - P) >> 1
* S = symbol address
* A = addend
* P = relative offset to PCL
*/
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 1;
insn = ME(insn);
/* disp25h */
insn = insn & ~0x7feffcf;
insn |= ((value >> 0) & 0x03ff) << 17;
insn |= ((value >> 10) & 0x03ff) << 6;
insn |= ((value >> 20) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_S25W_PCREL:
/* ((S + A) - P) >> 2 */
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 2;
insn = ME(insn);
/* disp25w */
insn = insn & ~0x7fcffcf;
insn |= ((value >> 0) & 0x01ff) << 18;
insn |= ((value >> 9) & 0x03ff) << 6;
insn |= ((value >> 19) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_32_ME:
UNALIGNED_PUT(ME(sym_base_addr), (uint32_t *)loc);
break;
default:
LOG_ERR("unknown relocation: %u\n", reloc_type);
ret = -ENOEXEC;
break;
}
return ret;
}
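To make the middle-endian handling above concrete, here is a small self-contained sketch; the values are hypothetical and only the half-word swap itself is taken from the loader code above:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Same half-word swap as the ME() macro in the loader above. */
#define ME(x) ((((x) & 0xffff0000U) >> 16) | (((x) & 0xffffU) << 16))

int main(void)
{
	uint32_t insn = 0x12345678U;

	/* ARCompact instructions are stored middle-endian: the two 16-bit
	 * halves of each 32-bit word are swapped, so 0x12345678 in CPU
	 * order sits in memory as 0x56781234.
	 */
	printf("memory order: 0x%08" PRIx32 "\n", ME(insn));

	/* Swapping twice restores the original value, which is why the
	 * PC-relative cases above apply ME() once before patching the
	 * displacement fields and once more before writing the word back.
	 */
	printf("round trip:   0x%08" PRIx32 "\n", ME(ME(insn)));

	return 0;
}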


@@ -17,25 +17,26 @@
#include <zephyr/arch/cpu.h>
#include <zephyr/logging/log.h>
#include <kernel_arch_data.h>
#include <zephyr/arch/exception.h>
#include <zephyr/arch/arc/v2/exception.h>
#include <err_dump_handling.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_EXCEPTION_DEBUG
static void dump_arc_esf(const struct arch_esf *esf)
{
EXCEPTION_DUMP(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR
ARC_EXCEPTION_DUMP(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR
" r3: 0x%" PRIxPTR "", esf->r0, esf->r1, esf->r2, esf->r3);
EXCEPTION_DUMP(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR
ARC_EXCEPTION_DUMP(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR
" r7: 0x%" PRIxPTR "", esf->r4, esf->r5, esf->r6, esf->r7);
EXCEPTION_DUMP(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR
ARC_EXCEPTION_DUMP(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR
" r11: 0x%" PRIxPTR "", esf->r8, esf->r9, esf->r10, esf->r11);
EXCEPTION_DUMP("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
ARC_EXCEPTION_DUMP("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
esf->r12, esf->r13, esf->pc);
EXCEPTION_DUMP(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "",
ARC_EXCEPTION_DUMP(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "",
esf->blink, esf->status32);
#ifdef CONFIG_ARC_HAS_ZOL
EXCEPTION_DUMP("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR
ARC_EXCEPTION_DUMP("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR
" lp_count: 0x%" PRIxPTR "", esf->lp_end, esf->lp_start, esf->lp_count);
#endif /* CONFIG_ARC_HAS_ZOL */
}


@@ -20,6 +20,7 @@
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/common/exc_handle.h>
#include <zephyr/logging/log.h>
#include <err_dump_handling.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
@@ -137,32 +138,32 @@ static void dump_protv_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Instruction fetch violation (%s)",
ARC_EXCEPTION_DUMP("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
EXCEPTION_DUMP("Memory read protection violation (%s)",
ARC_EXCEPTION_DUMP("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
EXCEPTION_DUMP("Memory write protection violation (%s)",
ARC_EXCEPTION_DUMP("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
EXCEPTION_DUMP("Memory read-modify-write violation (%s)",
ARC_EXCEPTION_DUMP("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
EXCEPTION_DUMP("Normal vector table in secure memory");
ARC_EXCEPTION_DUMP("Normal vector table in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NS handler code located in S memory");
ARC_EXCEPTION_DUMP("NS handler code located in S memory");
break;
case 0x12:
EXCEPTION_DUMP("NSC Table Range Violation");
ARC_EXCEPTION_DUMP("NSC Table Range Violation");
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
}
@@ -171,46 +172,46 @@ static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("double fault");
ARC_EXCEPTION_DUMP("double fault");
break;
case 0x1:
EXCEPTION_DUMP("overlapping TLB entries");
ARC_EXCEPTION_DUMP("overlapping TLB entries");
break;
case 0x2:
EXCEPTION_DUMP("fatal TLB error");
ARC_EXCEPTION_DUMP("fatal TLB error");
break;
case 0x3:
EXCEPTION_DUMP("fatal cache error");
ARC_EXCEPTION_DUMP("fatal cache error");
break;
case 0x4:
EXCEPTION_DUMP("internal memory error on instruction fetch");
ARC_EXCEPTION_DUMP("internal memory error on instruction fetch");
break;
case 0x5:
EXCEPTION_DUMP("internal memory error on data fetch");
ARC_EXCEPTION_DUMP("internal memory error on data fetch");
break;
case 0x6:
EXCEPTION_DUMP("illegal overlapping MPU entries");
ARC_EXCEPTION_DUMP("illegal overlapping MPU entries");
if (parameter == 0x1) {
EXCEPTION_DUMP(" - jump and branch target");
ARC_EXCEPTION_DUMP(" - jump and branch target");
}
break;
case 0x10:
EXCEPTION_DUMP("secure vector table not located in secure memory");
ARC_EXCEPTION_DUMP("secure vector table not located in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NSC jump table not located in secure memory");
ARC_EXCEPTION_DUMP("NSC jump table not located in secure memory");
break;
case 0x12:
EXCEPTION_DUMP("secure handler code not located in secure memory");
ARC_EXCEPTION_DUMP("secure handler code not located in secure memory");
break;
case 0x13:
EXCEPTION_DUMP("NSC target address not located in secure memory");
ARC_EXCEPTION_DUMP("NSC target address not located in secure memory");
break;
case 0x80:
EXCEPTION_DUMP("uncorrectable ECC or parity error in vector memory");
ARC_EXCEPTION_DUMP("uncorrectable ECC or parity error in vector memory");
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
}
@@ -219,54 +220,54 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Privilege violation");
ARC_EXCEPTION_DUMP("Privilege violation");
break;
case 0x1:
EXCEPTION_DUMP("disabled extension");
ARC_EXCEPTION_DUMP("disabled extension");
break;
case 0x2:
EXCEPTION_DUMP("action point hit");
ARC_EXCEPTION_DUMP("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
EXCEPTION_DUMP("N to S return using incorrect return mechanism");
ARC_EXCEPTION_DUMP("N to S return using incorrect return mechanism");
break;
case 0x2:
EXCEPTION_DUMP("N to S return with incorrect operating mode");
ARC_EXCEPTION_DUMP("N to S return with incorrect operating mode");
break;
case 0x3:
EXCEPTION_DUMP("IRQ/exception return fetch from wrong mode");
ARC_EXCEPTION_DUMP("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
EXCEPTION_DUMP("attempt to halt secure processor in NS mode");
ARC_EXCEPTION_DUMP("attempt to halt secure processor in NS mode");
break;
case 0x20:
EXCEPTION_DUMP("attempt to access secure resource from normal mode");
ARC_EXCEPTION_DUMP("attempt to access secure resource from normal mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on resource access (APEX/UAUX/key NVM)");
ARC_EXCEPTION_DUMP("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
EXCEPTION_DUMP("attempt to access secure APEX feature from NS mode");
ARC_EXCEPTION_DUMP("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on access to APEX feature");
ARC_EXCEPTION_DUMP("SID violation on access to APEX feature");
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
}
@@ -274,7 +275,7 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
EXCEPTION_DUMP("interrupt %u", vector);
ARC_EXCEPTION_DUMP("interrupt %u", vector);
return;
}
@@ -283,55 +284,55 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
*/
switch (vector) {
case ARC_EV_RESET:
EXCEPTION_DUMP("Reset");
ARC_EXCEPTION_DUMP("Reset");
break;
case ARC_EV_MEM_ERROR:
EXCEPTION_DUMP("Memory Error");
ARC_EXCEPTION_DUMP("Memory Error");
break;
case ARC_EV_INS_ERROR:
EXCEPTION_DUMP("Instruction Error");
ARC_EXCEPTION_DUMP("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
EXCEPTION_DUMP("EV_MachineCheck");
ARC_EXCEPTION_DUMP("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
EXCEPTION_DUMP("EV_TLBMissI");
ARC_EXCEPTION_DUMP("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
EXCEPTION_DUMP("EV_TLBMissD");
ARC_EXCEPTION_DUMP("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
EXCEPTION_DUMP("EV_ProtV");
ARC_EXCEPTION_DUMP("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
EXCEPTION_DUMP("EV_PrivilegeV");
ARC_EXCEPTION_DUMP("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
EXCEPTION_DUMP("EV_SWI");
ARC_EXCEPTION_DUMP("EV_SWI");
break;
case ARC_EV_TRAP:
EXCEPTION_DUMP("EV_Trap");
ARC_EXCEPTION_DUMP("EV_Trap");
break;
case ARC_EV_EXTENSION:
EXCEPTION_DUMP("EV_Extension");
ARC_EXCEPTION_DUMP("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
EXCEPTION_DUMP("EV_DivZero");
ARC_EXCEPTION_DUMP("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
EXCEPTION_DUMP("EV_DCError");
ARC_EXCEPTION_DUMP("EV_DCError");
break;
case ARC_EV_MISALIGNED:
EXCEPTION_DUMP("EV_Misaligned");
ARC_EXCEPTION_DUMP("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
EXCEPTION_DUMP("EV_VecUnit");
ARC_EXCEPTION_DUMP("EV_VecUnit");
break;
default:
EXCEPTION_DUMP("unknown");
ARC_EXCEPTION_DUMP("unknown");
break;
}
}
@@ -345,7 +346,7 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
void _Fault(struct arch_esf *esf, uint32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
@@ -385,9 +386,9 @@ void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
}
#ifdef CONFIG_EXCEPTION_DEBUG
EXCEPTION_DUMP("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
ARC_EXCEPTION_DUMP("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
EXCEPTION_DUMP("Address 0x%x", exc_addr);
ARC_EXCEPTION_DUMP("Address 0x%x", exc_addr);
dump_exception_info(vector, cause, parameter);
#endif


@@ -19,7 +19,7 @@
#include <zephyr/syscall.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
GTEXT(z_arc_fault)
GTEXT(_Fault)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -99,11 +99,11 @@ _exc_entry:
_save_exc_regs_into_stack
/* sp is parameter of z_arc_fault */
/* sp is parameter of _Fault */
MOVR r0, sp
/* ilink is the thread's original sp */
MOVR r1, ilink
jl z_arc_fault
jl _Fault
_exc_return:
/* the exception cause must be fixed in exception handler when exception returns


@@ -22,11 +22,53 @@
#include <zephyr/arch/arc/v2/aux_regs.h>
#include <zephyr/arch/arc/cluster.h>
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/common/xip.h>
#include <zephyr/arch/common/init.h>
#include <kernel_internal.h>
#include <zephyr/platform/hooks.h>
#include <zephyr/arch/cache.h>
/* XXX - keep for future use in full-featured cache APIs */
#if 0
/**
* @brief Disable the i-cache if present
*
* For those ARC CPUs that have a i-cache present,
* invalidate the i-cache and then disable it.
*/
static void disable_icache(void)
{
unsigned int val;
val = z_arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val &= 0xff; /* version field */
if (val == 0) {
return; /* skip if i-cache is not present */
}
z_arc_v2_aux_reg_write(_ARC_V2_IC_IVIC, 0);
__builtin_arc_nop();
z_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, 1);
}
/**
* @brief Invalidate the data cache if present
*
* For those ARC CPUs that have a data cache present,
* invalidate the data cache.
*/
static void invalidate_dcache(void)
{
unsigned int val;
val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
val &= 0xff; /* version field */
if (val == 0) {
return; /* skip if d-cache is not present */
}
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 1);
}
#endif
#ifdef CONFIG_ISA_ARCV3
/* NOTE: it will be called from early C code - we must NOT use global / static variables in it! */
static void arc_cluster_scm_enable(void)
@@ -50,8 +92,8 @@ static void arc_cluster_scm_enable(void)
/* Invalidate SCM before enabling. */
arc_cln_write_reg_nolock(ARC_CLN_CACHE_CMD,
ARC_CLN_CACHE_CMD_OP_REG_INV | ARC_CLN_CACHE_CMD_INCR);
while (arc_cln_read_reg_nolock(ARC_CLN_CACHE_STATUS) & ARC_CLN_CACHE_STATUS_BUSY) {
}
while (arc_cln_read_reg_nolock(ARC_CLN_CACHE_STATUS) & ARC_CLN_CACHE_STATUS_BUSY)
;
arc_cln_write_reg_nolock(ARC_CLN_CACHE_STATUS, ARC_CLN_CACHE_STATUS_EN);
}
@@ -68,7 +110,7 @@ extern char __device_states_end[];
*/
static void dev_state_zero(void)
{
arch_early_memset(__device_states_start, 0, __device_states_end - __device_states_start);
z_early_memset(__device_states_start, 0, __device_states_end - __device_states_start);
}
#endif
@@ -82,19 +124,21 @@ extern void arc_secureshield_init(void);
* This routine prepares for the execution of and runs C code.
*/
FUNC_NORETURN void z_prep_c(void)
void z_prep_c(void)
{
#if defined(CONFIG_SOC_PREP_HOOK)
soc_prep_hook();
#endif
#ifdef CONFIG_ISA_ARCV3
arc_cluster_scm_enable();
#endif
arch_bss_zero();
z_bss_zero();
#ifdef __CCAC__
dev_state_zero();
#endif
arch_data_copy();
z_data_copy();
#if CONFIG_ARCH_CACHE
arch_cache_init();
#endif


@@ -163,20 +163,20 @@ hw_pf_setup_done:
#if defined(CONFIG_SMP) || CONFIG_MP_MAX_NUM_CPUS > 1
_get_cpu_id r0
breq r0, 0, _primary_core_startup
breq r0, 0, _master_core_startup
/*
* Non-primary cores wait for primary core (core 0) to boot enough
* Non-masters wait for master core (core 0) to boot enough
*/
_secondary_core_wait:
_slave_core_wait:
#if CONFIG_MP_MAX_NUM_CPUS == 1
kflag 1
#endif
ld r1, [arc_cpu_wake_flag]
brne r0, r1, _secondary_core_wait
brne r0, r1, _slave_core_wait
LDR sp, arc_cpu_sp
/* signal primary core that secondary core runs */
/* signal master core that slave core runs */
st 0, [arc_cpu_wake_flag]
#if defined(CONFIG_ARC_FIRQ_STACK)
@@ -186,7 +186,7 @@ _secondary_core_wait:
#endif
j arch_secondary_cpu_init
_primary_core_startup:
_master_core_startup:
#endif
#ifdef CONFIG_INIT_STACKS


@@ -11,13 +11,12 @@
*/
#include <zephyr/device.h>
#include <zephyr/kernel.h>
#include <zephyr/irq.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <ipi.h>
#include <zephyr/init.h>
#include <zephyr/platform/hooks.h>
#include <zephyr/irq.h>
#include <arc_irq_offload.h>
#include <kernel_arch_func.h>
volatile struct {
arch_cpustart_t fn;
@@ -25,10 +24,10 @@ volatile struct {
} arc_cpu_init[CONFIG_MP_MAX_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up primary core and secondary cores
* Secondary core will spin for arc_cpu_wake_flag until primary core sets
* it to the core id of secondary core. Then, secondary core clears it to notify
* primary core that it's waken
* arc_cpu_wake_flag is used to sync up master core and slave cores
* Slave core will spin for arc_cpu_wake_flag until master core sets
* it to the core id of slave core. Then, slave core clears it to notify
* master core that it's waken
*
*/
volatile uint32_t arc_cpu_wake_flag;
@@ -50,13 +49,13 @@ void arch_cpu_start(int cpu_num, k_thread_stack_t *stack, int sz,
/* set the initial sp of target sp through arc_cpu_sp
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one secondary cpu can read it per time
* only one slave cpu can read it per time
*/
arc_cpu_sp = K_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait secondary cpu to start */
/* wait slave cpu to start */
while (arc_cpu_wake_flag != 0U) {
;
}
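Condensing the wake-flag protocol that the comment above describes (the secondary-core half lives in the assembly startup shown earlier), a simplified C sketch; the names here are illustrative, not the actual kernel symbols:

#include <stdint.h>

/* Shared handshake state, mirroring arc_cpu_wake_flag / arc_cpu_sp above. */
static volatile uint32_t wake_flag;
static volatile uintptr_t cpu_sp;

/* Primary-core side: publish a stack pointer, select the core to wake,
 * then wait for the acknowledgement (flag written back to 0).
 */
static void primary_start_cpu(uint32_t cpu_num, uintptr_t stack_top)
{
	cpu_sp = stack_top;
	wake_flag = cpu_num;

	while (wake_flag != 0U) {
		/* spin until the woken core has taken the stack */
	}
}

/* Secondary-core side (assembly in the real port): spin until this
 * core's id shows up, adopt the published stack, then acknowledge so
 * the shared stack pointer can be reused for the next core.
 */
static uintptr_t secondary_wait(uint32_t my_id)
{
	while (wake_flag != my_id) {
		/* not selected yet */
	}

	uintptr_t sp = cpu_sp;

	wake_flag = 0U;
	return sp;
}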
@@ -90,7 +89,7 @@ static void arc_connect_debug_mask_update(int cpu_num)
void arc_core_private_intc_init(void);
/* the C entry of secondary cores */
/* the C entry of slave cores */
void arch_secondary_cpu_init(int cpu_num)
{
arch_cpustart_t fn;
@@ -116,11 +115,6 @@ void arch_secondary_cpu_init(int cpu_num)
DT_IRQ(DT_NODELABEL(ici), priority), 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
#endif
#ifdef CONFIG_SOC_PER_CORE_INIT_HOOK
soc_per_core_init_hook();
#endif /* CONFIG_SOC_PER_CORE_INIT_HOOK */
/* call the function set by arch_cpu_start */
fn = arc_cpu_init[cpu_num].fn;
@@ -162,7 +156,7 @@ int arch_smp_init(void)
{
struct arc_connect_bcr bcr;
/* necessary primary core init */
/* necessary master core init */
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
@@ -173,7 +167,7 @@ int arch_smp_init(void)
}
if (bcr.ipi) {
/* register ici interrupt, just need primary core to register once */
/* register ici interrupt, just need master core to register once */
z_arc_connect_ici_clear();
IRQ_CONNECT(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority),


@@ -276,15 +276,6 @@ int arch_float_enable(struct k_thread *thread, unsigned int options)
}
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
int arch_coprocessors_disable(struct k_thread *thread)
{
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
return arch_float_disable(thread);
#else
return -ENOTSUP;
#endif
}
#if !defined(CONFIG_MULTITHREADING)
K_KERNEL_STACK_ARRAY_DECLARE(z_interrupt_stacks, CONFIG_MP_MAX_NUM_CPUS, CONFIG_ISR_STACK_SIZE);


@@ -47,7 +47,7 @@ struct vector_table {
uintptr_t unused_2;
};
const struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
(uintptr_t)__reset,
(uintptr_t)__memory_error,
(uintptr_t)__instruction_error,


@@ -0,0 +1,16 @@
/*
* Copyright (c) 2023 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_ERR_DUMP_HANDLING_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_ERR_DUMP_HANDLING_H_
#if defined CONFIG_LOG
#define ARC_EXCEPTION_DUMP(...) LOG_ERR(__VA_ARGS__)
#else
#define ARC_EXCEPTION_DUMP(format, ...) printk(format "\n", ##__VA_ARGS__)
#endif
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_ERR_DUMP_HANDLING_H_ */
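One subtlety of the new header: it only defines the macro, so the LOG_ERR()/printk() it expands to must be declared by the including file, as the fault handlers above do. A minimal caller sketch (report_fault() is a hypothetical example, not a function from this diff):

#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>
#include <err_dump_handling.h>

LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);

static void report_fault(uint32_t vector, uint32_t cause)
{
	/* Expands to LOG_ERR(...) when CONFIG_LOG is enabled, otherwise to
	 * printk() with a trailing newline appended by the macro.
	 */
	ARC_EXCEPTION_DUMP("vector: 0x%x, cause code: 0x%x", vector, cause);
}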


@@ -26,8 +26,6 @@
#include <v2/irq.h>
#include <zephyr/platform/hooks.h>
#ifdef __cplusplus
extern "C" {
#endif
@@ -35,10 +33,6 @@ extern "C" {
static ALWAYS_INLINE void arch_kernel_init(void)
{
z_irq_setup();
#ifdef CONFIG_SOC_PER_CORE_INIT_HOOK
soc_per_core_init_hook();
#endif /* CONFIG_SOC_PER_CORE_INIT_HOOK */
}


@@ -1,31 +1,21 @@
archs:
- name: arc
path: arc
full_name: Synopsys DesignWare ARC
- name: arm
path: arm
full_name: ARM
- name: arm64
path: arm64
full_name: ARM 64
- name: mips
path: mips
full_name: MIPS
- name: nios2
path: nios2
- name: posix
path: posix
full_name: POSIX
- name: riscv
path: riscv
full_name: RISC-V
- name: sparc
path: sparc
full_name: SPARC
- name: xtensa
path: xtensa
full_name: Xtensa
- name: x86
path: x86
full_name: x86
- name: rx
path: rx
full_name: Renesas RX


@@ -49,7 +49,7 @@ config ROMSTART_RELOCATION_ROM
Most SOCs include an alias for the boot-vector at address 0x00000000
so a default which might be supported by the corresponding Linux rproc driver.
If it is not, additional options allow to specify the addresses.
If it is not, additionnal options allows to specify the addresses.
In general this option should be chosen if the zephyr,flash chosen node
is not placed into the boot-vector memory area.
@@ -155,12 +155,12 @@ config CPU_HAS_ARM_MPU
This option is enabled when the CPU has a Memory Protection Unit (MPU)
in ARM flavor.
config CPU_HAS_NXP_SYSMPU
config CPU_HAS_NXP_MPU
bool
select CPU_HAS_MPU
help
This option is enabled when the CPU has an NXP System Memory Protection
Unit (SYSMPU).
This option is enabled when the CPU has a Memory Protection Unit (MPU)
in NXP flavor.
config CPU_HAS_CUSTOM_FIXED_SOC_MPU_REGIONS
bool "Custom fixed SoC MPU region definition"
@@ -190,43 +190,4 @@ config HAS_SWO
help
When enabled, indicates that SoC has an SWO output
DT_CHOSEN_Z_DTCM := zephyr,dtcm
DT_CHOSEN_Z_ITCM := zephyr,itcm
choice
prompt "Vector table memory location"
depends on SRAM_VECTOR_TABLE
default ARM_VECTOR_TABLE_SRAM
config ARM_VECTOR_TABLE_SRAM
bool "Place the vector table in DT Chosen SRAM instead of DT Chosen Flash"
help
When executing in place (XiP), selecting this option will result in the
interrupt vector table being relocated from DT 'zephyr,flash' chosen
memory to DT 'zephyr,sram' chosen memory.
config ARM_VECTOR_TABLE_DTCM
bool "Place the vector table in DT Chosen DTCM instead of DT Chosen Flash"
depends on $(dt_chosen_enabled,$(DT_CHOSEN_Z_DTCM))
help
When executing in place (XiP), selecting this option will result in the
interrupt vector table being relocated from DT 'zephyr,flash' chosen
memory to DT 'zephyr,dtcm' chosen memory. While the vector table is
instruction-fetched during exception entry, a DTCM option is provided
for systems where ITCM is unavailable. DTCM still offers low-latency,
deterministic access compared to normal RAM, but is not the optimal
location for instruction fetch performance.
config ARM_VECTOR_TABLE_ITCM
bool "Place the vector table in DT Chosen ITCM instead of DT Chosen Flash"
depends on $(dt_chosen_enabled,$(DT_CHOSEN_Z_ITCM))
help
When executing in place (XiP), selecting this option will result in the
interrupt vector table being relocated from DT 'zephyr,flash' chosen
memory to DT 'zephyr,itcm' chosen memory. ITCM provides single-cycle,
deterministic instruction fetches via the CPU instruction bus, it offers
the lowest interrupt latency and is the preferred location when available.
endchoice
endmenu


@@ -27,7 +27,7 @@ add_subdirectory_ifdef(CONFIG_ARM_AARCH32_MMU mmu)
add_subdirectory_ifdef(CONFIG_CPU_AARCH32_CORTEX_R cortex_a_r)
add_subdirectory_ifdef(CONFIG_CPU_AARCH32_CORTEX_A cortex_a_r)
if(CONFIG_ARM_ZIMAGE_HEADER)
if (CONFIG_ARM_ZIMAGE_HEADER)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors zimage_header.ld)
zephyr_linker_sources(ROM_START SORT_KEY 0x1vectors vector_table.ld)
zephyr_linker_sources(ROM_START SORT_KEY 0x2vectors cortex_m/vector_table_pad.ld)


@@ -16,7 +16,6 @@ config CPU_CORTEX_M
select ARCH_HAS_USERSPACE if ARM_MPU
select ARCH_HAS_NOCACHE_MEMORY_SUPPORT if ARM_MPU && CPU_HAS_ARM_MPU && CPU_HAS_DCACHE
select ARCH_HAS_RAMFUNC_SUPPORT
select ARCH_HAS_VECTOR_TABLE_RELOCATION if CPU_CORTEX_M_HAS_VTOR
select ARCH_HAS_NESTED_EXCEPTION_DETECTION
select SWAP_NONATOMIC
select ARCH_HAS_EXTRA_EXCEPTION_INFO


@@ -109,17 +109,6 @@ config VFP_U_DP_D16_FP16_FMAC
fused multiply-accumulate) and floating-point exception trapping with 16
double-word registers.
config VFP_DP_D32
bool
select CPU_HAS_VFP
select VFP_FEATURE_SINGLE_PRECISION
select VFP_FEATURE_DOUBLE_PRECISION
select VFP_FEATURE_REGS_S64_D32
help
This option signifies the use of a VFP floating-point coprocessor
that supports single- and double-precision operations
with 32 double-word registers.
config VFP_DP_D32_FP16_FMAC
bool
select CPU_HAS_VFP

Some files were not shown because too many files have changed in this diff.