Compare commits


124 Commits

Author SHA1 Message Date
Vinayak Kariappa Chettimada
ac9c9d2c8d Bluetooth: Controller: Add explicit LLCP error code check
Add unit tests to cover explicit LLCP error code check and
cover the same in the Controller implementation.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit d6f2bc9669)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Vinayak Kariappa Chettimada
f780ff8154 Bluetooth: Controller: Use BT_HCI_ERR_UNSPECIFIED as needed
A Host shall consider any error code that it does not
explicitly understand equivalent to the error code
Unspecified Error (0x1F).
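
A minimal sketch of that rule in C (the "understood" codes listed are illustrative, not the Controller's actual set):

    #include <stdint.h>
    #include <zephyr/bluetooth/hci.h>

    /* Map any error code we do not explicitly understand to
     * BT_HCI_ERR_UNSPECIFIED (0x1F).
     */
    static uint8_t map_error_code(uint8_t err)
    {
        switch (err) {
        case BT_HCI_ERR_SUCCESS:
        case BT_HCI_ERR_PIN_OR_KEY_MISSING:
        case BT_HCI_ERR_UNSUPP_REMOTE_FEATURE:
            return err;
        default:
            return BT_HCI_ERR_UNSPECIFIED;
        }
    }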

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 78466c8f52)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Vinayak Kariappa Chettimada
acfd2b279d Bluetooth: Controller: Refactor BT_CTLR_LE_ENC implementation
Refactor reused function in BT_CTLR_LE_ENC feature.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit fe205a598e)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Vinayak Kariappa Chettimada
636d73e67a Bluetooth: Controller: Fix missing conn update ind PDU validation
Fix missing validation of Connection Update Ind PDU. Ignore
invalid connection update parameters and force a silent
local connection termination.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 4b6d3f1e16)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Erik Brockhoff
0ea4e22c6a Bluetooth: controller: minor cleanup and a fix-up re. LLCP
Only perform retention if not already done.
Ensure 'sched' is performed on phy ntf even if dle is not.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 9d8059b6e5)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Erik Brockhoff
e267abdc5d Bluetooth: controller: fix node_rx retention mechanism
Ensure that, in LLCP, the reference to node_rx is cleared when
retention is NOT used, to avoid corruption of the node_rx when it is
later re-allocated.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 806a4fcf92)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Erik Brockhoff
3ddd2734a9 Bluetooth: controller: fixing rx node leak on CPR reject of parallel CPR
In case a CPR is initiated but rejected because CPR is active on
another connection, rx nodes are leaked because the retained node is
not properly released.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit edef1b7cf4)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Morten Priess
6b2e403a75 Bluetooth: controller: Prevent invalid compiler code reordering
In ull_disable, it is imperative that the callback is set up before the
second reference counter check; otherwise an LLL done event may already
have passed by the time the disable callback and semaphore are
assigned.

This causes the HCI thread to wait until timeout and assert after
ull_ticker_stop_with_mark.

For certain compilers, due to compiler optimizations, it can be seen
from the assembler code that the callback is assigned after the second
reference counter check.

By adding memory barriers, the code is kept in the expected sequence.
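
A minimal sketch of the pattern, assuming hypothetical names (disable_cb,
ref) and the barrier API from <zephyr/sys/barrier.h>; the barrier keeps the
callback store from being moved past the second reference count check:

    #include <stdint.h>
    #include <zephyr/sys/barrier.h>

    static volatile uint8_t ref;          /* LLL reference count (sketch) */
    static void (*disable_cb)(void *);    /* invoked from the LLL done event */

    static void disable_sketch(void (*cb)(void *))
    {
        disable_cb = cb;                /* must be visible before the re-check */
        barrier_dmem_fence_full();      /* prevent reordering past this point */
        if (ref == 0U) {
            /* LLL work already completed; run the callback directly */
            disable_cb(NULL);
        }
    }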

Signed-off-by: Morten Priess <mtpr@oticon.com>
(cherry picked from commit 7f82b6a219)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-22 10:35:25 +03:00
Robert Lubos
fcd271d0dc net: mqtt: Fix possible socket leak with websocket transport
In case the underlying TCP/TLS connection is already down, the
websocket_disconnect() call is expected to fail, as it involves
communication. Therefore, mqtt_client_websocket_disconnect() should not
quit early in such cases, as that could lead to an underlying socket leak.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 4625fa713f)
2024-07-09 11:50:32 +01:00
Jared Kangas
b5d53e922f drivers: adc: lmp90xxx: fix checksum mismatch return value
During channel reads, zero is returned on CRC mismatches: the returned
error variable is not written to after a previous non-zero check. Return
-EIO to mirror other drivers' checksum validation behaviors.
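
A minimal sketch of the corrected pattern (the CRC variable names are
hypothetical, not the driver's actual ones):

    #include <errno.h>
    #include <stdint.h>

    /* Report a checksum failure explicitly instead of falling through
     * and returning the stale zero "success" value.
     */
    static int check_channel_crc(uint8_t crc_actual, uint8_t crc_expected)
    {
        if (crc_actual != crc_expected) {
            return -EIO;
        }
        return 0;
    }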

Signed-off-by: Jared Kangas <kangas.jd@gmail.com>
(cherry picked from commit 8ec3c045f8)
2024-07-08 19:16:28 +03:00
Jamie McCrae
aa96c4d3cb cmake: modules: extensions: Fix dts watch file processing
Fixes and simplifies the handling of how the dts watch file is
processed

Signed-off-by: Grzegorz Swiderski <grzegorz.swiderski@nordicsemi.no>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 11c1f3de61)
2024-07-01 10:59:51 +01:00
Jamie McCrae
d32ad62356 cmake: modules: extension: Fix dts file watch
Fixes an issue in the code that processes a compiler output file to
determine which files should be watched: the compiler can combine
multiple files into a single line instead of putting each on a separate
line when the file paths are short, so account for this and split such
lines into multiple elements.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit f4cfb8cd96)
2024-07-01 10:59:51 +01:00
Johan Hedberg
1424b494cd Bluetooth: Host: Avoid processing "no change" encryption changes
If the new encryption state is the same as the old one, there's no point in
doing additional processing or callbacks. Simply log a warning and ignore
the HCI event in such a case.

Signed-off-by: Johan Hedberg <johan.hedberg@silabs.com>
(cherry picked from commit bf363d7c3e)
2024-06-25 14:18:44 +01:00
Stephanos Ioannidis
48ba8b406e tests: kernel: thread_runtime_stats: Relax precision test for QEMU
This commit relaxes the idle event statistics test precision requirement
for emulated QEMU targets because the cycle counts may be inaccurate when
the host CPU is overloaded (e.g. when running tests with twister) and a
high failure rate is observed for this test in the CI.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 2b2dd01c38)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
75f737a5cc testsuite: ztest: Increase ZTEST_TEST_DELAY_MS to 5000
This commit increases the default value of `ZTEST_TEST_DELAY_MS` from 3000
to 5000 milliseconds because the current value of 3000ms may not be
sufficient for the hosts with lower CPU clock frequency (e.g. new Zephyr CI
runners with cost-effective processors).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1bf751074d)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
316637f865 ci: footprint: Use zephyr-runner v2
This commit updates the footprint workflow to use the new zephyr-runner v2
CI runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
68b759a214 ci: bsim-tests: Use zephyr-runner v2
This commit updates the bsim-tests workflow to use the new zephyr-runner v2
CI runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9f9a6c547b)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
3366400449 ci: clang: Prioritise remote Redis cache storage
This commit updates the clang workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 95e7eb31e6)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
a8ac372cee ci: clang: Use Redis remote storage for ccache
This commit updates the clang workflow to, when available, use Redis remote
storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 4a2884c652)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
eb8acf9d37 ci: clang: Store ccache data in node cache
This commit updates the clang workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cd83f0724b)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
c1298b7af0 ci: clang: Use zephyr-runner v2
This commit updates the clang workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 64ca699fc8)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
c34def10af ci: codecov: Set twister timeout multiplier to 2
This commit sets the codecov workflow twister timeout multiplier to 2,
which effectively increases the default test timeout from 60 to 120
seconds, because the new cost-effective Zephyr runners may take longer to
execute tests and the default timeout is not sufficient for some tests to
complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 550bb4e4a6)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
d54f5337e3 ci: codecov: Prioritise remote Redis cache storage
This commit updates the codecov workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit a636c52b6a)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
43c538e8cc ci: codecov: Add --specs to ccache ignore option list
This commit adds the compiler `--specs=*` flag to the ccache ignore option
list because ccache is unable to resolve the toolchain-provided specs file
path and will consider source files to be uncacheable if it is unable to
read the specified specs file.

Note that adding `--specs=*` to the ignore option list is not a problem
because it is unlikely for the content of the toolchain libc spec file to
change without the compiler executable itself changing.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit ab9f6b456b)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
207639a3fd ci: codecov: Use Redis remote storage for ccache
This commit updates the codecov workflow to, when available, use Redis
remote storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit b57f1b5a15)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
c641a3e2d0 ci: codecov: Store ccache data in node cache
This commit updates the codecov workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 36b0b101d4)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
91a8ed6064 ci: codecov: Use zephyr-runner v2
This commit updates the codecov workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 354e290a23)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
a0aef88ee2 ci: codecov: Run on all zephyrproject-rtos organisation repositories
This commit updates the codecov workflow to run on all forks under the
zephyrproject-rtos organisation.

The purpose of this is mainly to simplify the process of testing of this
workflow under the zephyr-testing repository.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit c1bd5a613f)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
96c278e008 ci: footprint-tracking: Use zephyr-runner v2
This commit updates the footprint-tracking workflow to use the new
zephyr-runner v2 CI runner deployment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9838633c0e)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
dff8361d9a ci: doc-build: Use zephyr-runner v2
This commit updates the doc-build workflow to use the new zephyr-runner v2
CI runner deployment.

It also installs additional system packages that are not available by
default in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 2819c3526a)
2024-06-11 02:40:49 +09:00
Anas Nashif
24b9511fa5 ci: twister: increase matrix size for push jobs
Increase matrix size to 20 from 15 on push events.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
(cherry picked from commit 9970724652)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
012e361818 ci: twister: Set build job timeout to 24 hours
This commit increases the twister build job timeout from the default value
of 6 hours to 24 hours because scheduled (weekly) build runs take longer
than 6 hours to complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 7df7e834d9)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
399f06e287 ci: twister: Set twister timeout multiplier to 2
This commit sets the twister timeout multiplier to 2, which effectively
increases the default test timeout from 60 to 120 seconds, because the new
cost-effective Zephyr runners may take longer to execute tests and the
default timeout is not sufficient for some tests to complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit de68ea7ce0)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
ab6606312d ci: twister: Prioritise remote Redis cache storage
This commit updates the twister workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 527435d642)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
0bc62cc4bc ci: twister: Add --specs to ccache ignore option list
This commit adds the compiler `--specs=*` flag to the ccache ignore option
list because ccache is unable to resolve the toolchain-provided specs file
path and will consider source files to be uncacheable if it is unable to
read the specified specs file.

Note that adding `--specs=*` to the ignore option list is not a problem
because it is unlikely for the content of the toolchain libc spec file to
change without the compiler executable itself changing.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit d3f9f391ad)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
491acd95c1 ci: twister: Use Redis remote storage for ccache
This commit updates the twister workflow to, when available, use Redis
remote storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 3823f1f0db)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
4728630fe5 ci: twister: Store ccache data in node cache
This commit updates the twister workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1be3aacad3)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
411c7eb494 ci: twister: Print cloud service information
This commit updates the twister workflow jobs that run on the zephyr-runner
v2 to print the underlying cloud service information in the logs to help
trace and debug potential runner issues.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9a9bebb9f9)
2024-06-11 02:40:49 +09:00
Stephanos Ioannidis
fd1972be62 ci: twister: Use zephyr-runner v2
This commit updates the twister workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 7c19bc70bf)
2024-06-11 02:40:49 +09:00
Sean Nyekjaer
af85375f11 dts: arm: st: mp1: fix exti interrupt numbering
Align interrupt numbering with RM0436 for STM32MP157.
This will allow EXTI interrupt for line 6, 7, 8, 9, 10 and 11.

Fixes: ff231fa20a ("dts: stm32: Populate new properties for exti nodes")
Signed-off-by: Sean Nyekjaer <sean@geanix.com>
(cherry picked from commit ede866440d)
2024-06-07 15:47:08 +03:00
Gerson Fernando Budke
77df4454f8 mgmt: updatehub: Fix mark for update
This fixes compatibility with recent bootutils API.

Fixes #69297

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
(cherry picked from commit 94cd46d6ef)
2024-06-02 20:58:00 +03:00
Gerson Fernando Budke
4507175d61 mgmt: updatehub: Fix json arrays
After the changes introduced by #50816, UpdateHub could no longer decode
the JSON object. This introduces the missing parsing definitions so that
the JSON parser understands the correct UpdateHub probe object.

Fixes #69297

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
(cherry picked from commit 5fb62ca960)
2024-06-02 20:58:00 +03:00
Henrik Brix Andersen
d3f72d7072 drivers: can: shell: print raw DLC when sending frame, not bytes
Print the raw DLC when enqueuing a CAN frame for sending, not the
corresponding number of bytes.
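
A minimal sketch of the change in the shell output path (the function and
format string are illustrative):

    #include <zephyr/drivers/can.h>
    #include <zephyr/shell/shell.h>

    /* Report the raw DLC field of the frame being enqueued, not the byte
     * count derived from it via can_dlc_to_bytes().
     */
    static void report_tx(const struct shell *sh, const struct can_frame *frame)
    {
        shell_print(sh, "enqueuing CAN frame with DLC %u", frame->dlc);
    }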

Fixes: #73309

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 6a070ee165)
2024-06-02 20:56:50 +03:00
Henrik Brix Andersen
76a15672e6 drivers: can: shell: fully initialize frame before sending
Zero-initialize the CAN frame before filling in data to ensure all data
bytes are initialized.
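
A minimal sketch, assuming the frame is built on the stack in the shell
command handler (the field values are illustrative):

    #include <string.h>
    #include <zephyr/drivers/can.h>

    static void build_frame(struct can_frame *frame)
    {
        /* Zero the whole frame first so that padding and unused data
         * bytes are initialized before individual fields are set.
         */
        memset(frame, 0, sizeof(*frame));
        frame->id = 0x123;
        frame->dlc = 2;
        frame->data[0] = 0xAA;
        frame->data[1] = 0x55;
    }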

Fixes: #73309

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit fb4f67b775)
2024-06-02 20:56:50 +03:00
Henrik Brix Andersen
d47129f0f7 drivers: sensor: nxp: kinetis: temp: fix memset() length
Use the correct buffer size when calling memset().

Fixes: #73093

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 59402fd82e)
2024-05-25 11:21:42 +03:00
Henrik Brix Andersen
b3dc1ff89d drivers: sensors: nxp: kinetis: temp: select CONFIG_ADC
The NXP Kinetis temperature sensor depends on CONFIG_ADC. Make the driver
Kconfig select CONFIG_ADC to get better CI coverage (enabling the driver
when CONFIG_SENSOR is enabled without depending on CONFIG_ADC=y).

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 0a49b788a4)
2024-05-25 11:21:42 +03:00
Henrik Brix Andersen
0e0cd2a0d7 drivers: can: mcan: enable transmitter delay compensation when possible
Enable Transmitter Delay Compensation whenever the data phase timing
parameters allow it.

Fixes: #70447

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit bfad7bc00e)
2024-04-05 00:00:48 +03:00
Henrik Brix Andersen
ee3fab469d drivers: can: add utility macro for calculating TDC offset
Add a utility macro for calculating the Transmitter Delay Compensation
(TDC) Offset using the sample point and CAN core clock prescaler specified
by a set of data phase timing parameters.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit a3631264d1)
2024-04-05 00:00:48 +03:00
Henrik Brix Andersen
927ab297ef dts: bindings: can: can-fd-controller: remove tx-delay-comp-offset prop
Remove the unused "tx-delay-comp-offset" property from the base CAN FD
controller devicetree binding.

Having a static Transmitter Delay Compensation (TDC) offset is useless.
The offset needs to match the data phase timing parameters in order to
properly configure the second sample point when transmitting CAN FD frames
with BRS enabled.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 744f20d005)
2024-04-05 00:00:48 +03:00
Henrik Brix Andersen
e6f480fc89 drivers: can: mcan: remove broken transmitter delay compensation support
Remove broken support for Transmitter Delay Compensation from the Bosch
M_CAN backend driver.

Even if this was enabled via Kconfig, the TDC bit in the DBTP register set
during driver initialization is overwritten in can_mcan_set_timing_data(),
turning TDC off.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit ec75dc2232)
2024-04-05 00:00:48 +03:00
Flavio Ceolin
d0d9125eec fs: fuse: Avoid possible buffer overflow
Check the path's size before copying it to a local variable.
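
A minimal sketch of that kind of check (the buffer name and size are
hypothetical):

    #include <errno.h>
    #include <string.h>

    #define LOCAL_PATH_MAX 128   /* hypothetical local buffer size */

    static int copy_path(char dst[LOCAL_PATH_MAX], const char *path)
    {
        /* Reject paths that would not fit (including the terminator)
         * instead of overflowing the local buffer.
         */
        if (strlen(path) >= LOCAL_PATH_MAX) {
            return -ENAMETOOLONG;
        }
        strcpy(dst, path);
        return 0;
    }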

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 3267bdc4b7)
2024-04-05 00:00:12 +03:00
Fin Maaß
2c700c1b5e mgmt: hawkbit: remove hb_context.status_buffer_size
Remove hb_context.status_buffer_size and replace it with
sizeof(hb_context.status_buffer), because hb_context.status_buffer_size
is never set.
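
A minimal sketch of the replacement (the context struct and payload are
illustrative stand-ins, not the hawkBit module's actual types):

    #include <stdio.h>

    struct hb_context_sketch {
        char status_buffer[64];    /* illustrative size */
    };

    static void format_status(struct hb_context_sketch *ctx, const char *payload)
    {
        /* sizeof(ctx->status_buffer) replaces the never-initialized
         * status_buffer_size field as the length limit.
         */
        snprintf(ctx->status_buffer, sizeof(ctx->status_buffer), "%s", payload);
    }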

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
(cherry picked from commit 1bea938c9f)
2024-03-27 10:46:50 +02:00
Yasushi SHOJI
e486e6ce62 drivers: sensor: ams_as5600: Fix calculation of fractional part
commit 98903d48c3 upstream.

The original calculation has two bugs. One is that the calculated value is
wrong, and the other is that the value is not in one-millionth parts.

What the original calculation does is compute a scaled position value by
multiplying the raw sensor value (`dev_data->position`) by
`AS5600_FULL_ANGLE`, which represents a full rotation in degrees. It then
subtracts the product of the whole number of pulses (`val->val1`) and
`AS5600_PULSES_PER_REV` from this scaled position value.

    ((int32_t)dev_data->position * AS5600_FULL_ANGLE)
    - (val->val1 * AS5600_PULSES_PER_REV);

What is actually needed is to extract the fractional part of the value by
taking the scaled position value modulo AS5600_PULSES_PER_REV.

   (((int32_t)dev_data->position * AS5600_FULL_ANGLE)
   % AS5600_PULSES_PER_REV)

Then convert the value to one-millionth part.

   * (AS5600_MILLION_UNIT / AS5600_PULSES_PER_REV);
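
Putting the two steps together (assuming the fractional part is stored in
val->val2, as is conventional for struct sensor_value):

    val->val2 = (((int32_t)dev_data->position * AS5600_FULL_ANGLE)
                 % AS5600_PULSES_PER_REV)
                * (AS5600_MILLION_UNIT / AS5600_PULSES_PER_REV);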

Signed-off-by: Yasushi SHOJI <yashi@spacecubics.com>
2024-03-18 18:34:16 +00:00
Jamie McCrae
cf24d48c52 scripts: snippets: Fix path output on windows
Fixes an issue with paths being output in windows-style with back
slashes, this causes issues for certain escape sequences when
cmake interprets them. Replace these paths with posix paths so
that they are not treated as possible escape sequences.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit b680a6ec72)
2024-03-13 18:35:16 +02:00
Marcin Gasiorek
95b772f231 net: ip: Fix for improper offset return by net_pkt_find_offset()
The original packet's link-layer destination and source addresses can be
stored in separately allocated memory. This allocated memory can be
placed just after the pkt data buffers.
When `net_pkt_find_offset()` uses the condition:
`if (buf->data <= ptr && ptr <= (buf->data + buf->len)) {`
the offset can be set outside the packet's buffer, and the function returns
an incorrect offset instead of an error code.
Finally, the offset is used to set the ll address in the cloned packet,
which can lead to unexpected behavior (e.g. a crash when the cursor is
set to empty memory).

Signed-off-by: Marcin Gasiorek <marcin.gasiorek@nordicsemi.no>
(cherry picked from commit fb99f65fe9)
2024-03-11 17:54:07 +00:00
Abram Early
0de7085475 scripts/requirements: bump imgtool to 2.0.0
Resolves incorrectly located `image_ok` tag in generated hex files when
CONFIG_MCUBOOT_GENERATE_CONFIRMED_IMAGE is used.

Fixes #64098

Signed-off-by: Abram Early <abram.early@gmail.com>
(cherry picked from commit 5dec4fcab8)
2024-03-05 17:35:07 +00:00
Henrik Brix Andersen
002392c646 drivers: can: mcan: fix handling of bus-off status events
Fix handling bus-off events in the Bosch M_CAN driver backend:
- Cancel all pending TX buffers when entering bus-off state
- Call all pending TX buffer callbacks with -ENETUNREACH when entering
  bus-off
- Automatically initiate bus-off recovery if
  CONFIG_CAN_AUTO_BUS_OFF_RECOVERY=y

Fixes: #68953

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 6e5c1ba3c1)
2024-03-05 17:35:00 +00:00
Vinayak Kariappa Chettimada
e867618593 Bluetooth: Controller: Fix extended scanning data length assertion
Fix an assertion caused by LLL scheduling of auxiliary PDU reception not
being considered in the ULL when checking whether the accumulated data
length exceeds the supported maximum scan data length. This caused the
auxiliary context to be released while LLL was still active with a
scheduled PDU reception.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 4158d9523e)
2024-02-01 10:16:43 +02:00
Guillaume Gautier
48e9a63446 drivers: adc: stm32: do not disable adc after measurement
Do not disable the ADC after the end of the measurement, to avoid the
systematic re-enabling that is time-consuming when the configuration is
unchanged.

Signed-off-by: Guillaume Gautier <guillaume.gautier-ext@st.com>
(cherry picked from commit 62f1105550)
2024-02-01 10:05:13 +02:00
Jamie McCrae
bc7d3dcc25 sysbuild: kconfig: Unset shield config value variable
Fixes an issue with shields that have Kconfig file fragments when used
with sysbuild: the fragments would be loaded into sysbuild itself, which
would then fail because it does not have the Kconfig tree that Zephyr
applications have.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 49f9d8e19c)
2024-01-09 13:09:52 -08:00
Jukka Rissanen
62e3c7d871 tests: net: dhcpv6: Adjust the source address of test pkt
We would drop the received packet if the source address is our own
address, so tweak the test and make the source address different.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit fa0e04e2ed)
2024-01-09 13:09:33 -08:00
Jukka Rissanen
bc69b8054b tests: net: ipv6: Adjust the source address of test pkt
We would drop the received packet if the source address is our own
address, so tweak the test and make the source address different.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 155e2149f2)
2024-01-09 13:09:33 -08:00
Jukka Rissanen
a6f82ce330 net: ipv6: Check that received src address is not mine
Drop received packet if the source address is the same as
the device address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 8d3d48e057)
2024-01-09 13:09:33 -08:00
Jukka Rissanen
f5f896f5c9 net: ipv4: Drop packet if source address is my address
If we receive a packet where the source address is our own
address, then we should drop it.
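
A minimal sketch of the check, assuming the net_ipv4_is_my_addr() helper
declared in <zephyr/net/net_if.h>; the surrounding verdict handling is
illustrative:

    #include <zephyr/net/net_core.h>
    #include <zephyr/net/net_if.h>
    #include <zephyr/net/net_ip.h>

    /* Drop a received packet whose IPv4 source address is one of this
     * node's own interface addresses.
     */
    static enum net_verdict check_ipv4_src(struct in_addr *src)
    {
        if (net_ipv4_is_my_addr(src)) {
            return NET_DROP;
        }
        return NET_CONTINUE;
    }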

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 19392a6d2b)
2023-12-29 10:50:03 +00:00
Jukka Rissanen
325c922846 net: ipv4: Check localhost for incoming packet
If we receive a packet from non localhost interface, then
drop it if either source or destination address is a localhost
address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 6d41e68352)
2023-12-29 10:50:03 +00:00
Yong Cong Sin
5b999233ef scripts: build: gen_isr_tables: add some debug prints
Add some debug prints for the interrupts bits and bitmasks
in each level.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit 0884a33ee3)
2023-12-27 16:04:50 +00:00
Yong Cong Sin
e231b667aa scripts: build: gen_isr_tables: change naming of bitmask variables
Rename the bitmask variables from `*_LVL_INTERRUPTS` to
`INTERRUPT_LVL_BITMASK[]` array to be consistent with
`INTERRUPT_BITS`, making it easier to loop over the bitmasks.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit b4db285c1f)
2023-12-27 16:04:50 +00:00
Yong Cong Sin
db5475da27 scripts: build: gen_isr_tables: fix calculation of THIRD_LVL_INTERRUPTS
The calculation of the `THIRD_LVL_INTERRUPTS` bitmask in the
`update_masks()` function is wrong; the number of bits to shift
should be the sum of the first two levels.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit 56570cc8c1)
2023-12-27 16:04:50 +00:00
Flavio Ceolin
64e2ec47cb settings: shell: Fix possible buffer overflow
Check the size of the given string before copying it to the internal
buffer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 8977784afe)
2023-12-21 10:00:28 +00:00
Henrik Brix Andersen
b7dae51454 drivers: spi: mcux: lpspi: fix error on first configure on MKE1xF
Fix error writing to the CR register on the first call to SPI configure on
NXP MKE1xF. On the first call, the module clock is not enabled and writing
to the CR register will fail.

Fixes: #66036

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 7ddc0f713f)
2023-12-21 10:00:16 +00:00
Yuval Peress
e9f997e2a1 llvm: Allow llvm-readelf
Enable multiple names for the readelf program under llvm

Signed-off-by: Yuval Peress <peress@google.com>
(cherry picked from commit 3ddd36ff77)
2023-12-21 10:00:07 +00:00
Yong Cong Sin
f91ebabb28 boards: arm: Add RTC clock source for mikroe_mini_m4_for_stm32
The `mikroe_mini_m4_for_stm32` board doesn't have its RTC node
enabled, and is failing the following test:

`tests/benchmarks/footprints/benchmark.kernel.footprints.pm`

This board seems to have been missed in 44b8370; enable the rtc &
clk_lsi here.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit 54573e7c4c)
2023-12-18 17:27:00 +00:00
Christopher Friedt
15ada1522f doc: posix: structural reorganization of posix docs
Revise the structure of the POSIX API docs. This separates
related items out to dedicated pages. Further improvements
could yet be made - e.g. using the 'collapse' feature to
expand and collapse large sections of text or tables.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 3b45235887)
2023-12-07 16:15:03 +00:00
Grant Ramsay
2d8508ba32 drivers: ethernet: Fix eth_ivshmem shared memory mapping
This driver assumed the ivshmem-v2 output sections would be mapped
contiguously, which is no longer true.

Modify eth_ivshmem to treat each output section independently

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
(cherry picked from commit 82644a31c2)
2023-12-04 14:27:26 +02:00
Grant Ramsay
469b2051ab drivers: virtualization: Map ivshmem-v2 sections individually
Recent changes to the arm64 MMU code mean that you can no longer map
R/O memory as R/W. Mapping R/W memory now causes a cache invalidation
instruction (DC IVAC) that requires write permissions or else a fault
is generated.

Modify ivshmem-v2 to map each R/O and R/W section individually

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
(cherry picked from commit 64cc0764ee)
2023-12-04 14:27:26 +02:00
Yong Cong Sin
6e4539f924 driver: intc: plic: fix trigger type register bit calculation
The bit position calculation for the trigger type is wrong,
fix that.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2023-12-04 14:26:21 +02:00
Pieter De Gendt
60c661dfc8 net: ip: icmp: Cleanup packet on failed priority check
A network memory leak would occur if the priority check fails.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
(cherry picked from commit 473cc03c38)
2023-12-01 11:00:22 +02:00
Christopher Friedt
edad58e168 tests: posix: add tests to ensure pthread_key_delete() works
Ensure that the correct key is deleted via pthread_key_delete().

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 0e11bcf5a0)
2023-11-28 20:11:17 +02:00
Christopher Friedt
8a911d6bdd posix: pthread: ensure pthread_key_delete() removes correct key
Previously, `pthread_key_delete()` was only ever deleting key 0
rather than the key corresponding to the provided argument.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit a89fa32dac)
2023-11-28 20:11:17 +02:00
Jamie McCrae
014571b046 cmake: modules: dts: Fix board revision 0 overlay
Fixes an issue whereby, when a board revision is 0 and the overlay file
exists, the overlay would not be included.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 46889819e6)
2023-11-28 20:10:30 +02:00
Jonathan Rico
49f04fd840 Revert "Bluetooth: att: use a dedicated metadata struct for RSP PDUs"
This reverts commit 14858d96d8.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit dfd7624270)
2023-11-24 14:16:24 +02:00
Jonathan Rico
456496efa6 Revert "Bluetooth: att: re-use REQ buf for RSP"
This reverts commit aa7954bd47.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit bd9c35b496)
2023-11-24 14:16:24 +02:00
Jonathan Rico
7eb8be25e3 Revert "Bluetooth: att: don't re-use the ATT buffer for confirmations"
This reverts commit 4cd0748a40.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 530e845f92)
2023-11-24 14:16:24 +02:00
Henrik Brix Andersen
8415778a1a drivers: can: mcan: use __nocache_noinit for MRAM data variables
Use __nocache_noinit for the Bosch M_CAN MRAM data variables on SoCs
without dedicated MRAM.

Fixes: #64691

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 58e1963c6b)
2023-11-21 08:47:06 +00:00
Henrik Brix Andersen
aff7cbe65e linker: allow tagging variables with __nocache_noinit
Allow tagging variables with __nocache_noinit.

With CONFIG_NOCACHE_MEMORY=y, this will resolve to __nocache, which implies
__noinit. With CONFIG_NOCACHE_MEMORY=n, this simply resolves to __noinit.
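
A sketch of the resulting definition, following the behaviour described
above:

    #ifdef CONFIG_NOCACHE_MEMORY
    /* __nocache already implies no-init placement */
    #define __nocache_noinit __nocache
    #else
    #define __nocache_noinit __noinit
    #endif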

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit afe1ca6847)
2023-11-21 08:47:06 +00:00
Henrik Brix Andersen
54c11a17f5 modules: canopennode: use zephyr/dsp/types.h for float32_t/float64_t
Include the zephyr/dsp/types.h header for float32_t/float64_t type
definitions to avoid conflicts with other subsystems including this header.

Add compile-time asserts to ensure the typedefs meet the requirements of
the CANopenNode module.
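
The compile-time checks described above could look like this (a sketch;
the actual assertion messages may differ):

    #include <zephyr/dsp/types.h>
    #include <zephyr/toolchain.h>

    /* CANopenNode expects IEEE 754 single/double precision types */
    BUILD_ASSERT(sizeof(float32_t) == 4U, "float32_t must be 32 bits wide");
    BUILD_ASSERT(sizeof(float64_t) == 8U, "float64_t must be 64 bits wide");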

Fixes: #63896

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit c9da68290a)
2023-11-21 08:46:56 +00:00
Declan Snyder
86517b3353 soc: lpc55xxx: Fix system hw clock cycle rate
Commit c6e3bac4f changed the core clock frequency of LPC55XXX series.
That clock is used by the cortex-m systick timer, which is the
default timer used for system time in zephyr on this series.
The bug is that the config SYS_CLOCK_HW_CYCLES_PER_SEC default was not
updated on the affected platforms to account for this change, so system
time is currently recorded as 150% of reality. Fix this by changing the
kconfig to be set automatically at SOC level and remove board defaults.

Signed-off-by: Declan Snyder <declan.snyder@nxp.com>
(cherry picked from commit 4d654250a5)
2023-11-13 16:13:06 +02:00
Mykola Kvach
214b74e103 arch: arm64: avoid invalidating of RO mem after mem map
The Cortex ARM documentation states that the DC IVAC instruction
requires write access permission to the virtual address (VA);
otherwise, it may generate a permission fault.

Therefore, invalidating read-only memory after the memory map operation
must be avoided.

This issue has been produced by commit c9b534c.
This commit resolves the issue #64758.

Signed-off-by: Mykola Kvach <mykola_kvach@epam.com>
(cherry picked from commit c4ffadb0b6)
2023-11-13 11:47:43 +02:00
Daniel DeGrasse
6b12e12312 drivers: flash: mcux_flexspi_nor: Remove flash reads while programming
Care must be taken to avoid any flash access while programming the flash
attached to the FlexSPI either via executing XIP code or reading RO data.

Remove locations where a constant device pointer might be dereferenced
within the mcux_flexspi_nor driver, to help avoid RWW hazards.

Fixes #64702

Signed-off-by: Daniel DeGrasse <daniel.degrasse@nxp.com>
2023-11-11 19:05:13 +02:00
Yong Cong Sin
5a11de06b6 kernel: mmu: fix compilation warnings when memory address and size are 0
When both the memory base address & its size are zero, the
assertions test will be comparing an unsigned int against zero
which result in compilation warning, and will be raised to
error in Twister

Fix them with more conditional compilations.
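
A generic sketch of the pattern, with a hypothetical REGION_BASE macro
standing in for the real Kconfig-derived value:

    #include <stdint.h>
    #include <zephyr/sys/__assert.h>
    #include <zephyr/toolchain.h>

    #define REGION_BASE 0x0   /* hypothetical memory base address */

    static void check_addr(uintptr_t addr)
    {
    #if REGION_BASE != 0
        __ASSERT(addr >= REGION_BASE, "address below region base");
    #else
        /* "addr >= 0" on an unsigned type is always true and would
         * trigger the compiler warning, so skip the check entirely.
         */
        ARG_UNUSED(addr);
    #endif
    }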

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
(cherry picked from commit 3ed8c4d64b)
2023-11-10 19:33:50 +02:00
Vinayak Kariappa Chettimada
086b5b96db Bluetooth: Controller: Fix missing ext adv terminate event
Fix missing Extended Advertising terminate event and
advertising scheduling not being stopped.

Under race conditions, the auxiliary event is aborted without the
generation of the done extra event, which is supposed to stop the
scheduling when the max events count is used.

The fix now generates the done extra event.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit f64d123a3d)
2023-11-10 19:31:36 +02:00
Vinayak Kariappa Chettimada
abd9ecdd23 Bluetooth: Controller: Fix periodic advertising sync window
Fix the periodic advertising sync window calculation to include the
scheduling resolution margin, i.e. doubled, as is done with the event
jitter value.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 05c85ddbcf)
2023-11-10 19:31:36 +02:00
Fabio Baltieri
a14c841af4 docs: migration-guide-3.5: add a note about optional modules
Add a note about modules moved to optional and how to enable them again.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
(cherry picked from commit 125e0e8741)
2023-11-06 11:11:18 +00:00
Jukka Rissanen
b872bca7c1 tests: net: tcp: Add test case for connect timeout
This checks that if connect() times out, the TCP pointer is checked
properly in select() and poll() in order to catch the situation.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit b510073db3)
2023-11-06 10:30:26 +02:00
Jukka Rissanen
d8165a3fea net: sockets: Set writefds in case of error in select()
The writefds set is typically signalled if there is an error while
waiting, for example, for connect() to finish. So check whether the user
supplied writefds and update it accordingly.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 5bf18e39ad)
2023-11-06 10:30:26 +02:00
Jukka Rissanen
91bf1b4ed4 net: tcp: Set errno properly if connecting to non listening port
If we try to connect to a port on which no socket is listening, we will
get a packet with the "ACK | RST" flags set. In this case the errno
should be ECONNREFUSED instead of ETIMEDOUT, which we used to return
earlier.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit ec4973dd15)
2023-11-06 10:30:26 +02:00
Jukka Rissanen
954bd84d36 net: sockets: Add SO_ERROR socket option to SOL_SOCKET level
Return the last socket error to the user.
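
Taken together, these changes let an application detect a failed
non-blocking connect() the usual BSD-socket way; a minimal usage sketch
(error handling trimmed, timeout value illustrative):

    #include <zephyr/net/socket.h>

    /* After a non-blocking connect() attempt, wait for writability and
     * then read the result via SO_ERROR; ECONNREFUSED is reported when
     * the peer answered with RST.
     */
    static int wait_connect_result(int sock)
    {
        struct zsock_pollfd pfd = { .fd = sock, .events = ZSOCK_POLLOUT };
        int err = 0;
        socklen_t optlen = sizeof(err);

        (void)zsock_poll(&pfd, 1, 5000);
        (void)zsock_getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &optlen);

        return (err != 0) ? -err : 0;
    }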

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit b864880000)
2023-11-06 10:30:26 +02:00
Jukka Rissanen
f9f78b4415 net: tcp: Increment ref count in initial SYN
Increase reference count already when initial SYN is sent.
This way the tcp pointer in net_context is fully valid for
the duration of the connection.

Fixes #63952

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit fd1c226cd8)
2023-11-06 10:30:26 +02:00
Johan Hedberg
94ab004c84 acpi: Fix ACPI PCI bus handle
The PRT bus name for most (especially older) platforms is _SB.PCI0. Only
newer platforms use something else (like _SB.PC00).

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit ff0b803334)
2023-11-02 11:04:38 +02:00
Johan Hedberg
a000c24e64 acpi: Fix using correct buffer length for irq rt_table
We should use the given rt_size variable to indicate the size of
rt_table.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit b95931859e)
2023-11-02 11:04:38 +02:00
Johan Hedberg
da3858eaaf acpi: Fix ACPICA initialization routine order
The upstream ACPICA example initialization order does AcpiLoadTables()
before calling AcpiEnableSubsystem(), so use this order in Zephyr too.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit ab46cef69f)
2023-11-02 11:04:38 +02:00
Johan Hedberg
98f09e3442 acpi: Fix trying to register a system memory handler
ACPICA itself already registers its own handler (which works perfectly
fine for our purposes). Furthermore, ACPICA will always fail trying to
register another handler, unless the previous one is explicitly cleared
(which is something the Zephyr code didn't do).

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit a0203d1b20)
2023-11-02 11:04:38 +02:00
Johan Hedberg
c65b84d2aa boards: x86: Increase minimum system heap size for ACPI
ACPICA initialization takes considerably more than 32k. E.g. on
up_squared the utilization is 400-500k, qemu_x86_64 about 120k and on
EHL about 1.5M. Increase the default on these platforms to be big enough
if ACPI is enabled.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit 542f95b811)
2023-11-02 11:04:38 +02:00
Johan Hedberg
6c80bf4811 manifest: Update to latest ACPICA Zephyr fixes
There are a couple of important fixes without which ACPICA doesn't work
on all supported platforms.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
(cherry picked from commit c92c7ab873)
2023-11-02 11:04:38 +02:00
Tomasz Bursztyka
6f207d4d06 arch/x86: Remove useless legacy ACPI code
ACPI is now being handled through ACPICA.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
(cherry picked from commit f5ce4ddc79)
2023-11-01 20:01:58 +02:00
Tomasz Bursztyka
79a538dad0 lib/acpi: Fix the behavior to fit with how it used to be
z_acpi_get_cpu() used to retrieve the local APIC of the n'th enabled
CPU, not just the n'th local APIC. The system indeed keeps local APIC
info also for non-enabled CPUs, and we don't care about those as there
is nothing to be done with them.

This issue exists on up_squared board for instance, but it's a common
one anyway.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
(cherry picked from commit c294b7d270)
2023-11-01 20:01:58 +02:00
Pedro Sousa
3b52bc7b85 kernel: timer: Fix race condition in k_timer_start
The documentation suggests that k_timer_start can be invoked from ISR
and preemptive contexts; however, an assertion failure occurs if one
k_timer_start call preempts another for the same timer instance. This
commit mitigates the issue by implementing a spinlock throughout the
k_timer_start function, ensuring thread-safety.
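
A simplified sketch of the mitigation (the real k_timer_start() body is
elided; names are illustrative):

    #include <zephyr/kernel.h>
    #include <zephyr/spinlock.h>

    static struct k_spinlock lock;

    /* The whole start path runs under one spinlock, so a call that
     * preempts another for the same timer can no longer interleave
     * with it.
     */
    static void timer_start_locked(void)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);

        /* ...the original k_timer_start() logic executes here... */

        k_spin_unlock(&lock, key);
    }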

Fixes: #62908

Signed-off-by: Pedro Sousa <sousapedro596@gmail.com>
(cherry picked from commit 4207f4add8)
2023-11-01 10:59:39 +00:00
Jay Vasanth
a2bb849472 microchip: ps2: fix compilation error
Fix a compilation error when CONFIG_PM_DEVICE is not enabled.

Signed-off-by: Jay Vasanth <jay.vasanth@microchip.com>
(cherry picked from commit 7c43370997)
2023-10-31 15:18:35 +02:00
Jamie McCrae
45c19a9917 mgmt: mcumgr: transport: Fix UDP user data buffer overflow
Fixes a buffer overflow issue if UDP is enabled for IPv4 only
but IPv6 networking is enabled

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit e3c06c5d8f)
2023-10-31 14:10:50 +02:00
Robert Lubos
7ff91648cd tests: lib: thrift: Fix timing issues with TCP
The test delegates putting the server socket into listening mode (i.e.
calling listen() on the socket) to a separate thread, which did not
always have a chance to run before the client attempted to establish the
TCP connection.

This was not visible before, as we did not reply with RST to a
connection attempt on a closed port, so the connection was eventually
established after a SYN retransmission. But as we now reject such a
connection with RST, the connection attempt failed. Therefore, a small
delay was added after spawning the server thread, to give it a chance to
configure the server socket.

Additionally, lower the CONFIG_NET_TCP_TIME_WAIT_DELAY value so that TCP
contexts are released earlier, and add a respective delay in the test
teardown function. Not doing so also triggered unneeded SYN
retransmissions, as there were not enough TCP contexts to accept the
incoming connection before the resources allocated for the previous one
were freed.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 822a82d619)
2023-10-31 14:10:11 +02:00
Robert Lubos
2560cea906 net: tcp: Add Kconfig option to enable TCP RST on unbound ports
Add a Kconfig option to control TCP RST behavior on connection attempts
on unbound ports. If enabled, the TCP stack will reply with an RST packet
(enabled by default).

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 4ea3d247d0)
2023-10-31 14:10:11 +02:00
Robert Lubos
76b340d210 tests: net: tcp: Add tests to cover improved RST handling
Add tests which verify that Zephyr's TCP stack replies with RST packet
as expected.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 2b601b34ff)
2023-10-31 14:10:11 +02:00
Robert Lubos
b110f80cc8 net: tcp: Send RST reply for unexpected TCP packets
Send RST as a reply for unexpected TCP packets in the following
scenarios:
1) Unexpected ACK value received during handshake (connection still open
   on the peer side),
2) Unexpected data packet on a listening port (accepted connection
   closed),
3) SYN received on a closed port.

This allows the other end to detect that the connection is no longer
valid (for example due to reboot) and release the resources.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 95d9543e2e)
2023-10-31 14:10:11 +02:00
Robert Lubos
796b8c7f96 net: tcp: Add helper function to send RST packet w/o active connection
Add a helper function which allows sending an RST packet in response to
an unexpected TCP packet, without an associated connection or net context.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit a28215d028)
2023-10-31 14:10:11 +02:00
Flavio Ceolin
2d4916cf78 ztest: Do not abort k_current_get from ISR
Do not abort k_current_get() from ISR.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit c166685fcf)
2023-10-31 14:09:27 +02:00
Flavio Ceolin
9eee5b3681 treewide: Add CODE_UNREACHABLE after k_thread_abort(current)
Compiler can't tell that k_thread_abort() won't return and issues a
warning unless we tell it that control never gets this far.
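
The resulting pattern at each call site looks like this (the surrounding
function is illustrative):

    #include <zephyr/kernel.h>

    static void abort_self(void)
    {
        k_thread_abort(k_current_get());
        CODE_UNREACHABLE; /* aborting the current thread never returns */
    }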

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 564adad952)
2023-10-31 14:09:27 +02:00
Sophie 'Tyalie' Friedrich
a802b34af6 boards: esp23: m5stack_core2: Fix i2c1 pin definitions
For GPIO pins above 31, one needs to use the `&gpio1` (or similar)
definition, as zephyr seperates GPIOs into chunks of 32.

Signed-off-by: Sophie 'Tyalie' Friedrich <dev@flowerpot.me>
(cherry picked from commit 7857996e90)
2023-10-31 14:08:41 +02:00
Sophie 'Tyalie' Friedrich
933e579728 boards: xiao_esp32s3: Fix connector definition
Previously the Seeed connector was defined incorrectly, and as such D6 and
D7 weren't usable as, e.g., inputs. Zephyr distinguishes GPIO pins in
blocks of 32, a distinction the ESP32 reference manual doesn't make. As
such, one needs to write `&gpio1 11` in order to access `GPIO43`.

Signed-off-by: Sophie 'Tyalie' Friedrich <dev@flowerpot.me>
(cherry picked from commit 70cb934959)
2023-10-31 14:08:41 +02:00
Dave Desrochers
480c64d3a7 arch: Fixes potential mangled isr table when SHARED_INTERRUPTS is enabled
The linker was optimizing away z_shared_isr() in the zephyr_pre0.elf
image since the function is not used there. However, zephyr.elf does use
this function via the isr_tables.c file, generated after zephyr_pre0.elf.

This difference caused a shift in all symbol addresses by the size of the
z_shared_isr() function. isr_tables.c had pointers for zephyr_pre0.elf.
The build system assumes the ISR addresses are the same between both .elf
files, when in fact, before this fix, they were not.

When run on a device, interrupts were calling non-interrupt functions.

Signed-off-by: Dave Desrochers <dave@intercreate.io>
(cherry picked from commit 68b9cea690)
2023-10-31 14:07:58 +02:00
Rodrigo Peixoto
ebe1ef02d3 doc: zbus: fix VDED notification sequence figure
The figure and table related to the VDED notification sequence were wrong.
This fixes that by changing the image and adjusting the table content.

Signed-off-by: Rodrigo Peixoto <rodrigopex@gmail.com>
(cherry picked from commit 32bdb24a92)
2023-10-31 14:06:47 +02:00
Alberto Escolar Piedras
63a769afad tests bsim cis: Fix sim_id collision
This test was reusing the sim_id from an l2cap test,
which was colliding with the corresponding l2cap test
when they were run in parallel.
Fix it.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit e934e49aa9)
2023-10-31 11:19:51 +00:00
Flavio Ceolin
ced26e2340 doc: release: 3.5: Add info about CVE-2023-5753
Update 3.5 release notes with published CVE

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 27dedec4cc)
2023-10-31 10:55:23 +01:00
Flavio Ceolin
46cf4486fa doc: vuln: Add information about CVE-2023-5753
Information about CVE-2023-5753

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 31a92fc5e3)
2023-10-31 10:55:23 +01:00
Henrik Brix Andersen
3a67f3d906 drivers: can: be consistent in filter_id checks when removing rx filters
Change the CAN controller driver implementations for the
can_remove_rx_filter() API call to be consistent in their validation of the
supplied filter_id.

Fixes: #64398

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 6c5400d2e1)
2023-10-27 11:36:03 +01:00
56802 changed files with 836770 additions and 3237962 deletions


@@ -5,6 +5,7 @@
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
--ignore BRACES
--ignore PRINTK_WITHOUT_KERN_LEVEL
--ignore SPLIT_STRING
--ignore VOLATILE
@@ -28,5 +29,3 @@
--ignore MULTISTATEMENT_MACRO_USE_DO_WHILE
--ignore ENOSYS
--ignore IS_ENABLED_CONFIG
--ignore EXPORT_SYMBOL
--ignore COMPARISON_TO_NULL


@@ -32,21 +32,17 @@ ColumnLimit: 100
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
ForEachMacros:
- 'ARRAY_FOR_EACH'
- 'ARRAY_FOR_EACH_PTR'
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'FOR_EACH_IDX'
- 'FOR_EACH_IDX_FIXED_ARG'
- 'FOR_EACH_NONEMPTY_TERM'
- 'FOR_EACH_FIXED_ARG_NONEMPTY_TERM'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_DLIST_FOR_EACH_NODE'
- 'SYS_DLIST_FOR_EACH_NODE_SAFE'
- 'SYS_SEM_LOCK'
- 'SYS_SFLIST_FOR_EACH_CONTAINER'
- 'SYS_SFLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SFLIST_FOR_EACH_NODE'
@@ -70,19 +66,7 @@ ForEachMacros:
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- 'STRUCT_SECTION_FOREACH'
- 'STRUCT_SECTION_FOREACH_ALTERNATE'
- 'TYPE_SECTION_FOREACH'
- 'K_SPINLOCK'
- 'COAP_RESOURCE_FOREACH'
- 'COAP_SERVICE_FOREACH'
- 'COAP_SERVICE_FOREACH_RESOURCE'
- 'HTTP_RESOURCE_FOREACH'
- 'HTTP_SERVER_CONTENT_TYPE_FOREACH'
- 'HTTP_SERVICE_FOREACH'
- 'HTTP_SERVICE_FOREACH_RESOURCE'
- 'I3C_BUS_FOR_EACH_I3CDEV'
- 'I3C_BUS_FOR_EACH_I2CDEV'
- 'MIN_HEAP_FOREACH'
IfMacros:
- 'CHECKIF'
# Disabled for now, see bug https://github.com/zephyrproject-rtos/zephyr/issues/48520
@@ -97,20 +81,11 @@ IncludeCategories:
- Regex: '.*'
Priority: 3
IndentCaseLabels: false
IndentGotoLabels: false
IndentWidth: 8
InsertBraces: true
InsertNewlineAtEOF: true
SpaceBeforeInheritanceColon: False
SpaceBeforeParens: ControlStatementsExceptControlMacros
SortIncludes: Never
UseTab: ForContinuationAndIndentation
WhitespaceSensitiveMacros:
- COND_CODE_0
- COND_CODE_1
- IF_DISABLED
- IF_ENABLED
- LISTIFY
- STRINGIFY
- Z_STRINGIFY
- DT_FOREACH_PROP_ELEM_SEP


@@ -1,21 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (c) 2024, Basalte bv
analyzer:
# Start by disabling all
- --disable-all
# Enable the sensitive profile
- --enable=sensitive
# Disable unused cases
- --disable=boost
- --disable=mpi
# Many identifiers in zephyr start with _
- --disable=clang-diagnostic-reserved-identifier
- --disable=clang-diagnostic-reserved-macro-identifier
# Cleanup
- --clean


@@ -86,10 +86,6 @@ indent_size = 8
[COMMIT_EDITMSG]
max_line_length = 75
# Patches
[{*.patch,*.diff}]
trim_trailing_whitespace = false
# Kconfig
[Kconfig*]
indent_style = tab


@@ -0,0 +1,57 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Notes (delete this)**
Github Discussions (https://github.com/zephyrproject-rtos/zephyr/discussions)
are available to first verify that the issue is a genuine Zephyr bug and not a
consequence of Zephyr services misuse.
This issue list is only for bugs in the main Zephyr code base
(https://github.com/zephyrproject-rtos/zephyr/). If the bug is for a project
fork (such as NCS) specific feature, please open an issue in the fork project
instead.
**Describe the bug**
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
- ...
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...


@@ -1,85 +0,0 @@
name: Bug Report
description: File a bug report.
labels: ["bug"]
type: "Bug"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: textarea
id: what-happened
attributes:
label: Describe the bug
description: |
A clear and concise description of what the bug is.
placeholder: |
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
validations:
required: true
- type: checkboxes
id: regression
attributes:
label: Regression
description: |
Check this box if this is a regression and provide a SHA if you were able to "git bisect" to a specific commit.
options:
- label: This is a regression.
required: false
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Steps to reproduce the behavior.
placeholder: |
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
validations:
required: false
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
attributes:
label: Impact
description: Impact of this bug
multiple: false
options:
- Showstopper Prevents release or major functionality; system unusable.
- Major Severely degrades functionality; workaround is difficult or unavailable.
- Functional Limitation Some features not working as expected, but system usable.
- Annoyance Minor irritation; no significant impact on usability or functionality.
- Intermittent Occurs occasionally; hard to reproduce.
- Not sure
default: 3
validations:
required: true
- type: textarea
id: env
attributes:
label: Environment
description: please complete the following information
placeholder: |
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
- type: textarea
id: context
attributes:
label: Additional Context
description: Provide other context that could be relevant to the bug, such as pin setting, target configuration, etc.

View File

@@ -0,0 +1,20 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,42 +0,0 @@
name: Enhancement
description: Submit an Enhancement
labels: ["Enhancement"]
type: "Enhancement"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement proposal.
- type: textarea
id: description
attributes:
label: Summary
description: |
Is your enhancement proposal related to a problem? Please describe.
placeholder: |
A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: |
Describe the solution you'd like
placeholder: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: |
A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: context
attributes:
label: Additional Context
description: Add any other context or graphics (drag-and-drop an image) about the enhancement here.

View File

@@ -0,0 +1,51 @@
---
name: RFC / Proposal
about: Submit an RFC / Proposal
title: ''
labels: RFC
assignees: ''
---
## Introduction
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
### Problem description
Why do we want this change and what problem are we trying to address?
### Proposed change
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
## Detailed RFC
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
### Proposed change (Detailed)
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
### Dependencies
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
### Concerns and Unresolved Questions
List any concerns, unknowns, and generally unresolved questions etc.
## Alternatives
List any alternatives considered, and the reasons for choosing this option
over them.

View File

@@ -1,75 +0,0 @@
name: RFC / Proposal
description: Submit a Proposal (RFC)
labels: ["RFC"]
type: RFC
assignees: []
body:
- type: markdown
attributes:
value: |
## Introduction
This section targets end users, TSC members, maintainers and anyone else
that might need a quick explanation of your proposed change.
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Why do we want this change and what problem are we trying to address?
placeholder: Explain the problem or limitation this RFC is meant to resolve.
validations:
required: true
- type: textarea
id: proposed-change-summary
attributes:
label: Proposed Change (Summary)
description: A high-level summary of the proposed change.
placeholder: Brief summary of what will change if this RFC is implemented.
validations:
required: true
- type: markdown
attributes:
value: |
## Detailed RFC
This section targets the development team. Upon reading it, each engineer
should understand what must be done to implement the proposed feature.
- type: textarea
id: detailed-change
attributes:
label: Proposed Change (Detailed)
description: Describe the change in as much detail as possible. Include context or background info, and reuse of existing components if applicable.
placeholder: Explain exactly what you're planning to change and how.
validations:
required: true
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: Highlight how this change may affect the rest of the project or other teams/components.
placeholder: List components, modules, or teams affected.
validations:
required: false
- type: textarea
id: concerns
attributes:
label: Concerns and Unresolved Questions
description: List any concerns, unknowns, or unresolved questions related to this proposal.
placeholder: Any areas of uncertainty?
validations:
required: false
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: What alternative solutions were considered? Why was this proposal chosen?
placeholder: List alternatives and explain the rationale behind your choice.
validations:
required: false

View File

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,29 +0,0 @@
name: Feature Request
description: Suggest a new feature or enhancement
labels: ["Feature Request"]
type: Feature
assignees: []
body:
- type: textarea
id: problem
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
placeholder: e.g., I'm frustrated when I need to do X manually because Y is missing.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
placeholder: e.g., It would be great if the system could automatically handle X by doing Y.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: Include any alternative solutions or features

View File

@@ -0,0 +1,42 @@
---
name: Contributor Nomination
about: Nominate a GitHub user for additional rights on the Zephyr Project
title: ''
labels: Role Nomination
assignees: ''
---
# Background
The [TSC Project Roles] defines the main roles for the Zephyr Project, including
Maintainer, Collaborator, and Contributor.
By default anyone that contributes code or documentation is a Contributor, but
with the lowest [GitHub Permission Level] of Read. For example, Contributors
with Read permission do not have the permission to add reviewers to a pull
request.
Use this template to nominate a GitHub user for the Contributor role with
Triage permission level, which allows the user to add reviewers to a pull
request and be added as a reviewer by other users.
# Nomination
## GitHub User
Provide the following information about the GitHub user:
1. Full Name
1. GitHub username
1. Organization (optional)
## Supporting Documents
Add links to 3-5 GitHub pull requests, in the Zephyr project, authored or
reviewed by the GitHub user that demonstrate the user's dedication to the
Zephyr project.
[TSC Project Roles]: <https://docs.zephyrproject.org/latest/project/project_roles.html>
[GitHub Permission Level]: <https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization>

View File

@@ -1,57 +0,0 @@
name: Contributor Nomination
description: Nominate a GitHub user for the Contributor role with triage permissions
labels: [Role Nomination]
assignees: ['nashif']
body:
- type: markdown
attributes:
value: |
## Background
The [TSC Project Roles](https://docs.zephyrproject.org/latest/project/project_roles.html) defines the main roles for the Zephyr Project, including Maintainer, Collaborator, and Contributor.
By default, anyone who contributes code or documentation is a Contributor, but with the lowest [GitHub Permission Level](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization) of **Read**.
Use this form to nominate a user for the **Contributor** role with **Triage** permission, which allows the user to:
- Add reviewers to pull requests
- Be added as a reviewer by others
- type: input
id: full-name
attributes:
label: Full Name
description: Full name of the nominated contributor.
placeholder: e.g., Jane Doe
validations:
required: true
- type: input
id: github-username
attributes:
label: GitHub Username
description: GitHub handle of the nominated contributor.
placeholder: e.g., @janedoe
validations:
required: true
- type: input
id: organization
attributes:
label: Organization
description: Organization the nominee is affiliated with (optional).
placeholder: e.g., Acme Corp
validations:
required: false
- type: textarea
id: supporting-documents
attributes:
label: Supporting Documents
description: Provide links to 3-5 pull requests authored or reviewed by the nominee that demonstrate their dedication to the Zephyr project.
placeholder: |
e.g.,
- https://github.com/zephyrproject-rtos/zephyr/pull/12345
- https://github.com/zephyrproject-rtos/zephyr/pull/23456
- https://github.com/zephyrproject-rtos/zephyr/pull/34567
validations:
required: true

View File

@@ -0,0 +1,68 @@
---
name: External Source Code
about: Submit a proposal to integrate external source code
title: ''
labels: TSC
assignees: ''
---
## Origin
Name of project hosting the original open source code
Provide a link to the source
## Purpose
Brief description of what this software does
## Mode of integration
Describe whether you'd like to integrate this external component in the main tree
or as a module, and why. If the mode of integration is a module, suggest a
repository name for the module
## Maintainership
List the person(s) that will be maintaining the integration of this external code
for the foreseeable future. Please use GitHub IDs to identify them. You can
choose to identify a single maintainer only or add collaborators as well
## Pull Request
Pull request (if any) with the actual implementation of the integration, be it
in the main tree or as a module (pointing to your own fork for now). Make sure
the PR is correctly labeled as "DNM"
## Description
Long description that will help reviewers discuss suitability of the
component to solve the problem at hand (there may be a better options
available.)
What is its primary functionality (e.g., SQLLite is a lightweight
database)?
What problem are you trying to solve? (e.g., a state store is
required to maintain ...)
Why is this the right component to solve it (e.g., SQLite is small,
easy to use, and has a very liberal license.)
## Dependencies
What other components does this package depend on?
Will the Zephyr project have a direct dependency on the component, or
will it be included via an abstraction layer with this component as a
replaceable implementation?
## Revision
Version or SHA you would like to integrate initially
## License
Please use an SPDX identifier (https://spdx.org/licenses/), such as
``BSD-3-Clause``

View File

@@ -1,106 +0,0 @@
name: External Component Integration
description: Propose integration of an external open source component
labels: ["TSC"]
assignees: []
body:
- type: textarea
id: origin
attributes:
label: Origin
description: Name of project hosting the original open source code. Provide a link to the source.
placeholder: e.g., SQLite - https://sqlite.org
validations:
required: true
- type: textarea
id: purpose
attributes:
label: Purpose
description: Brief description of what this software does.
placeholder: |
e.g., A small, fast, self-contained SQL database engine.
validations:
required: true
- type: textarea
id: integration-mode
attributes:
label: Mode of Integration
description: Should this be integrated in the main tree or as a module? Explain your choice and suggest a module repo name if applicable.
placeholder: |
e.g., As a module - proposed repo name: zephyr-sqlite
validations:
required: true
- type: textarea
id: maintainership
attributes:
label: Maintainership
description: List maintainers (GitHub IDs) for this integration. Include at least one primary maintainer.
placeholder: |
e.g., @username1 (primary), @username2 (collaborator)
validations:
required: true
- type: input
id: pull-request
attributes:
label: Pull Request
description: Link to the pull request (if any) for this integration. Must be labeled "DNM" (Do Not Merge).
placeholder: |
e.g., https://github.com/zephyrproject-rtos/zephyr/pull/12345
validations:
required: false
- type: textarea
id: description
attributes:
label: Description
description: Long-form description to justify suitability of this component.
placeholder: |
- What is its primary functionality?
- What problem does it solve?
- Why is this the right component?
validations:
required: true
- type: textarea
id: security
attributes:
label: Security
description: Security-related aspects of this component, including cryptographic functions and known vulnerabilities.
placeholder: |
- Does it use cryptography?
- How are vulnerabilities handled?
- Any known CVEs?
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: What does this component depend on, and how will it be integrated (directly or via abstraction)?
placeholder: |
- Other external packages?
- Direct or abstracted use in Zephyr?
validations:
required: false
- type: input
id: revision
attributes:
label: Version or SHA
description: Which version or specific commit should be initially integrated?
placeholder: e.g., v3.45.0 or 79cc94d
validations:
required: true
- type: input
id: license
attributes:
label: License (SPDX)
description: Provide the license using a valid SPDX identifier (e.g., BSD-3-Clause).
placeholder: e.g., MIT or BSD-3-Clause
validations:
required: true

.github/ISSUE_TEMPLATE/008_bin-blobs.md
View File

@@ -0,0 +1,41 @@
---
name: Binary blobs
about: Submit a proposal to integrate binary blob(s)
title: ''
labels: TSC
assignees: ''
---
## Origin
Describe where the binary blob(s) originate from
## Type
- [ ] Precompiled library
- [ ] Firmware image
## Module
The Zephyr module that this blob(s) will be referenced from
## Purpose
Brief description of what the blob(s) do. It is especially important to describe
the functionality that the blob(s) provide, to the largest extent possible
## Pull Request
Link to the Pull request with the actual implementation of the integration. If
you are submitting this as part of a new module you may link to the same Pull
Request that introduced the new module.
## Dependencies
What other components do the blob(s) depend on, if any?
## License
Document the license the blob(s) are distributed under

View File

@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Zephyr Community Support
url: https://github.com/zephyrproject-rtos/zephyr/discussions
about: Please ask and answer questions here.

.github/SECURITY.md
View File

@@ -8,12 +8,12 @@ updates:
- The most recent release, and the release prior to that.
- Active LTS releases.
At this time, with the latest release of v4.3, the supported
At this time, with the latest release of v3.5, the supported
versions are:
- v4.3: Current release
- v4.2: Prior release
- v3.7: Current LTS
- v2.7: Current LTS
- v3.4: Prior release
- v3.5: Current release
## Reporting process

View File

@@ -1,2 +0,0 @@
paths:
- .github

View File

@@ -1,2 +0,0 @@
paths:
- doc

View File

@@ -1,30 +0,0 @@
version: 2
enable-beta-ecosystems: true
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
cooldown:
default-days: 7
commit-message:
prefix: "ci: github: "
labels: []
groups:
actions-deps:
patterns:
- "*"
- package-ecosystem: "uv"
directory: "/doc"
schedule:
interval: "weekly"
cooldown:
default-days: 7
commit-message:
prefix: "ci: doc: "
labels: []
groups:
doc-deps:
patterns:
- "*"

View File

@@ -1,4 +1,4 @@
name: Pull Request/Issue Assigner
name: Pull Request Assigner
on:
pull_request_target:
@@ -9,80 +9,43 @@ on:
- ready_for_review
branches:
- main
- collab-*
- v*-branch
issues:
types:
- labeled
permissions:
contents: read
jobs:
assignment:
name: Pull Request/Issue Assignment
name: Pull Request Assignment
if: github.event.pull_request.draft == false
runs-on: ubuntu-24.04
permissions:
issues: write # to add assignees to issues
runs-on: ubuntu-22.04
steps:
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U PyGithub>=1.55 west
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: zephyrproject-rtos/action-python-env@32e53bef090c33d53aa94f1d9a9d29c93cfdc5f7 # main
with:
python-version: 3.12
- name: Fetch west.yml/Maintainer.yml from pull request
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
run: |
git fetch origin pull/${{ github.event.pull_request.number }}/merge
git show FETCH_HEAD:west.yml > pr_west.yml
git show FETCH_HEAD:MAINTAINERS.yml > pr_MAINTAINERS.yml
- name: west setup
if: >
github.event_name == 'pull_request_target'
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
uses: actions/checkout@v3
- name: Run assignment script
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
FLAGS="-v"
FLAGS+=" -o ${{ github.event.repository.owner.login }}"
FLAGS+=" -r ${{ github.event.repository.name }}"
FLAGS+=" -M MAINTAINERS.yml"
if [ "${{ github.event_name }}" = "pull_request_target" ]; then
if [ "${{ github.base_ref }}" = "main" ]; then
FLAGS+=" -P ${{ github.event.pull_request.number }} --updated-manifest pr_west.yml --updated-maintainer-file pr_MAINTAINERS.yml"
else
FLAGS+=" -P ${{ github.event.pull_request.number }}"
fi
elif [ "${{ github.event_name }}" = "issues" ]; then
FLAGS+=" -I ${{ github.event.issue.number }}"
FLAGS+=" -I ${{ github.event.issue.number }}"
elif [ "${{ github.event_name }}" = "schedule" ]; then
FLAGS+=" --modules"
FLAGS+=" --modules"
else
echo "Unknown event: ${{ github.event_name }}"
exit 1
fi
python3 scripts/ci/set_assignees.py $FLAGS
- name: Check maintainer file changes
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
run: |
python ./scripts/ci/check_maintainer_changes.py \
--repo zephyrproject-rtos/zephyr MAINTAINERS.yml pr_MAINTAINERS.yml
python3 scripts/set_assignees.py $FLAGS
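For reference, the flags assembled above reduce to an invocation along these lines for a pull request event (a sketch; the owner, repository, PR number, and token are placeholders, and the script path differs between the two revisions shown):

    # sketch of the assignment script invocation for a pull_request_target event;
    # the token and PR number are placeholders
    export GITHUB_TOKEN=<bot-token>
    python3 scripts/ci/set_assignees.py -v \
        -o zephyrproject-rtos -r zephyr \
        -M MAINTAINERS.yml \
        -P 12345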

View File

@@ -7,17 +7,10 @@ on:
branches:
- main
permissions:
contents: read
jobs:
backport:
name: Backport
runs-on: ubuntu-24.04
permissions:
contents: write # to create/push backport branches
pull-requests: write # to create backport PRs
issues: write # to add labels to issue created if backport fails
runs-on: ubuntu-22.04
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
@@ -31,8 +24,8 @@ jobs:
)
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@7e74f601d11eaca577742445e87775b5651a965f # v2.0.3-3
uses: zephyrproject-rtos/action-backport@v2.0.3-3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_token: ${{ secrets.ZB_GITHUB_TOKEN }}
issue_labels: Backport
labels_template: '["Backport"]'

View File

@@ -2,49 +2,30 @@ name: Backport Issue Check
on:
pull_request_target:
types:
- edited
- opened
- reopened
- synchronize
branches:
- v*-branch
permissions:
contents: read
jobs:
backport:
name: Backport Issue Check
concurrency:
group: backport-issue-check-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
issues: read # to check if associated issue exists for backport
steps:
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}
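Equivalently, the checker above can be run by hand roughly as follows (a sketch; the branch, PR number, and token are placeholders):

    # sketch of a manual run of the backport issue checker;
    # the branch and PR number are placeholders
    export GITHUB_TOKEN=<token>
    ./scripts/release/list_backports.py \
        -o zephyrproject-rtos -r zephyr \
        -b v3.5-branch \
        -p 12345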

View File

@@ -5,26 +5,20 @@ on:
workflows: ["BabbleSim Tests"]
types:
- completed
permissions:
contents: read
jobs:
bsim-test-results:
name: "Publish BabbleSim Test Results"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.event.workflow_run.conclusion != 'skipped'
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
uses: dawidd6/action-download-artifact@v2
with:
run_id: ${{ github.event.workflow_run.id }}
- name: Publish BabbleSim Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: BabbleSim Test Results
comment_mode: off

View File

@@ -8,29 +8,18 @@ on:
- "west.yml"
- "subsys/bluetooth/**"
- "tests/bsim/**"
- "boards/nordic/nrf5*/*dt*"
- "dts/*/nordic/**"
- "tests/bluetooth/**"
- "samples/bluetooth/**"
- "boards/native/**"
- "soc/native/**"
- "boards/posix/**"
- "soc/posix/**"
- "arch/posix/**"
- "include/zephyr/arch/posix/**"
- "scripts/native_simulator/**"
- "samples/net/sockets/echo_*/**"
- "modules/hal_nordic/**"
- "modules/mbedtls/**"
- "modules/openthread/**"
- "subsys/net/l2/openthread/**"
- "include/zephyr/net/openthread.h"
- "drivers/ieee802154/**"
- "include/zephyr/net/ieee802154*"
- "drivers/serial/*nrfx*"
- "tests/drivers/uart/**"
- '!**.rst'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -42,16 +31,17 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
permissions:
checks: write # to create the check run entry with test results
bsim_bt_52_test_results_file: ./bsim_bt/52_bsim_results.xml
bsim_bt_53_test_results_file: ./bsim_bt/53_bsim_results.xml
bsim_net_52_test_results_file: ./bsim_net/52_bsim_results.xml
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -74,7 +64,7 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -85,9 +75,7 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
@@ -95,117 +83,95 @@ jobs:
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Check common triggering files
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
uses: tj-actions/changed-files@v35
id: check-common-files
with:
files: |
.github/workflows/bsim-tests.yaml
.github/workflows/bsim-tests-publish.yaml
west.yml
boards/native/
soc/native/
arch/posix/
include/zephyr/arch/posix/
scripts/native_simulator/
boards/posix/**
soc/posix/**
arch/posix/**
include/zephyr/arch/posix/**
scripts/native_simulator/**
tests/bsim/*
boards/nordic/nrf5*/*dt*
dts/*/nordic/
modules/mbedtls/**
modules/hal_nordic/**
- name: Check if Bluethooth files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
uses: tj-actions/changed-files@v35
id: check-bluetooth-files
with:
files: |
samples/bluetooth/
subsys/bluetooth/
tests/bluetooth/
tests/bsim/bluetooth/
tests/bsim/bluetooth/**
samples/bluetooth/**
subsys/bluetooth/**
- name: Check if Networking files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
uses: tj-actions/changed-files@v35
id: check-networking-files
with:
files: |
tests/bsim/net/
samples/net/sockets/echo_*/
modules/openthread/
subsys/net/l2/openthread/
tests/bsim/net/**
samples/net/sockets/echo_*/**
modules/openthread/**
subsys/net/l2/openthread/**
include/zephyr/net/openthread.h
drivers/ieee802154/
drivers/ieee802154/**
include/zephyr/net/ieee802154*
- name: Check if UART files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-uart-files
with:
files: |
tests/bsim/drivers/uart/
drivers/serial/*nrfx*
tests/drivers/uart/
- name: Update BabbleSim to manifest revision
if: >
steps.check-bluetooth-files.outputs.any_modified == 'true'
|| steps.check-networking-files.outputs.any_modified == 'true'
|| steps.check-uart-files.outputs.any_modified == 'true'
|| steps.check-common-files.outputs.any_modified == 'true'
steps.check-bluetooth-files.outputs.any_changed == 'true'
|| steps.check-networking-files.outputs.any_changed == 'true'
|| steps.check-common-files.outputs.any_changed == 'true'
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
git config --global advice.detachedHead false
git checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Bluetooth Tests with BSIM
if: steps.check-bluetooth-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
if: steps.check-bluetooth-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.bt.sh
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_bt nice tests/bsim/bluetooth/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_52_test_results_file} \
SEARCH_PATH=tests/bsim/bluetooth/ tests/bsim/run_parallel.sh
# Run the BT controller tests also for the nrf5340
BOARD=nrf5340bsim_nrf5340_cpunet \
WORK_DIR=${ZEPHYR_BASE}/bsim_bt nice tests/bsim/bluetooth/ll/compile.sh
BOARD=nrf5340bsim_nrf5340_cpunet \
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_53_test_results_file} \
SEARCH_PATH=tests/bsim/bluetooth/ll/ tests/bsim/run_parallel.sh
- name: Run Networking Tests with BSIM
if: steps.check-networking-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
if: steps.check-networking-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.net.sh
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_net nice tests/bsim/net/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_net_52_test_results_file} \
SEARCH_PATH=tests/bsim/net/ tests/bsim/run_parallel.sh
- name: Run UART Tests with BSIM
if: steps.check-uart-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
run: |
tests/bsim/ci.uart.sh
- name: Merge Test Results
run: |
junitparser merge --glob "./bsim_*/*bsim_results.*.xml" "./twister-out/twister.xml" junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results in HTML
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
name: bsim-test-results
path: |
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
with:
check_name: Bsim Test Results
files: "junit.xml"
comment_mode: off
./bsim_bt/52_bsim_results.xml
./bsim_bt/53_bsim_results.xml
./bsim_net/52_bsim_results.xml
${{ github.event_path }}
if-no-files-found: warn
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: event
path: |

View File

@@ -8,34 +8,25 @@ name: Bug Snapshot
on:
workflow_dispatch:
branches: [main]
schedule:
# Run daily at 00:05
- cron: '5 00 * * *'
permissions:
contents: read
# Run daily at 14:05
- cron: '5 14 * * *'
jobs:
make_bugs_pickle:
name: Make bugs pickle
runs-on: ubuntu-24.04
if: github.repository_owner == 'zephyrproject-rtos' && github.ref == 'refs/heads/main'
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Snapshot bugs
env:
@@ -51,7 +42,7 @@ jobs:
echo "BUGS_PICKLE_PATH=${BUGS_PICKLE_PATH}" >> ${GITHUB_ENV}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_SECRET_ACCESS_KEY }}

View File

@@ -1,12 +1,6 @@
name: Build with Clang/LLVM
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -18,19 +12,22 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
matrix:
subset: [1, 2]
platform: ["native_posix"]
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-16
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -53,8 +50,9 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -64,8 +62,7 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git clean -f -d
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config --global update.narrow true
@@ -77,8 +74,6 @@ jobs:
# west caching).
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Check Environment
run: |
cmake --version
@@ -86,10 +81,6 @@ jobs:
gcc --version
ls -la
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
@@ -97,24 +88,24 @@ jobs:
ccache -p
ccache -z -s -vv
- name: Update BabbleSim to manifest revision
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Tests with Twister
id: twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
./scripts/twister -p native_sim --force-color --inline-logs -M -N -v --retry-failed 2 \
-T tests --subset ${{matrix.subset}}/2 -j 16
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: Print ccache stats
if: always()
@@ -122,53 +113,31 @@ jobs:
ccache -s -vv
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
path: |
twister-out/twister.xml
twister-out/twister.json
if-no-files-found: ignore
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
clang-build-results:
name: "Publish Unit Tests Results"
needs: clang-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create GitHub annotations
if: (success() || failure())
runs-on: ubuntu-22.04
if: (success() || failure() ) && needs.clang-build.outputs.report_needed != 0
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v3
with:
path: artifacts
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/twister.xml junit.xml
junit2html junit.xml junit-clang.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
@@ -176,7 +145,7 @@ jobs:
junit-clang.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
check_name: Unit Test Results
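The change-based test selection shown in the clang job above can be reproduced locally roughly as follows (a sketch; the toolchain variant, LLVM path, and base branch are assumptions):

    # sketch of the change-based Twister run from the job above;
    # toolchain paths and the base branch are assumptions
    export ZEPHYR_TOOLCHAIN_VARIANT=llvm
    export LLVM_TOOLCHAIN_PATH=/usr/lib/llvm-16
    python3 ./scripts/ci/test_plan.py --platform native_posix -c origin/main..
    if [ -s testplan.json ]; then
        ./scripts/twister --force-color --inline-logs -M -N -v \
            --load-tests testplan.json --retry-failed 2
    fi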

View File

@@ -1,14 +1,8 @@
name: Code Coverage with codecov
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
schedule:
- cron: '25 */3 * * 1-5'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -20,27 +14,19 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
matrix:
platform: ["mps2/an385", "native_sim", "qemu_x86", "unit_testing"]
include:
- platform: 'mps2/an385'
normalized: 'mps2_an385'
- platform: 'native_sim'
normalized: 'native_sim'
- platform: 'qemu_x86'
normalized: 'qemu_x86'
- platform: 'unit_testing'
normalized: 'unit_testing'
platform: ["native_posix", "qemu_x86", "unit_testing"]
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resovle the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
CCACHE_IGNOREOPTIONS: '--specs=*'
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -67,34 +53,21 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west update 1> west.update.log || west update 1> west.update-2.log
- name: Environment Setup
- name: Check Environment
run: |
cmake --version
gcc --version
ls -la
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
@@ -108,91 +81,56 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
./scripts/twister --save-tests ${{matrix.normalized}}-testplan.json
ls -la
./scripts/twister \
-i --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage \
-T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano \
--timeout-multiplier 2
./scripts/twister --force-color -N -v --filter runnable -p ${{ matrix.platform }} \
--coverage -T tests --timeout-multiplier 2
- name: Build Doxygen Coverage
if: matrix.platform == 'unit_testing'
- name: Generate Coverage Report
run: |
pip install -r doc/requirements.txt --require-hashes
sudo apt-get update
sudo apt-get install -y graphviz # dot is needed but currently missing from the Docker image
cmake -B doc/_build -S doc
cmake --build doc/_build --target doxygen-coverage
- name: Upload Doxygen Coverage Results
if: matrix.platform == 'unit_testing'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: doxygen-coverage-results
path: |
doc/_build/new.info
doc/_build/coverage-report
mv twister-out/coverage.info lcov.pre.info
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* \
--remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* \
--remove lcov.pre.info *generated* \
-o coverage/reports/${{ matrix.platform }}.info --rc lcov_branch_coverage=1
- name: Print ccache stats
if: always()
run: |
ccache -s -vv
- name: Rename coverage files
if: always()
run: |
mv twister-out/coverage.json coverage/reports/${{matrix.normalized}}.json
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.normalized }})
path: |
coverage/reports/${{ matrix.normalized }}.json
${{ matrix.normalized }}-testplan.json
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
codecov-results:
name: "Publish Coverage Results"
needs: codecov
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
# the codecov job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v3
with:
path: coverage/reports
- name: Move coverage files
run: |
ls -lRt ./coverage/reports
mv ./coverage/reports/*/*testplan.json .
mv ./coverage/reports/*/coverage/reports/*.json ./coverage/reports
mv ./coverage/reports/*/*.info ./coverage/reports
ls -la ./coverage/reports
- name: Generate list of coverage files
id: get-coverage-files
shell: cmake -P {0}
run: |
file(GLOB INPUT_FILES_LIST "coverage/reports/*.json")
file(GLOB INPUT_FILES_LIST "coverage/reports/*.info")
set(MERGELIST "")
set(FILELIST "")
foreach(ITEM ${INPUT_FILES_LIST})
@@ -206,7 +144,7 @@ jobs:
foreach(ITEM ${INPUT_FILES_LIST})
get_filename_component(f ${ITEM} NAME)
if(MERGELIST STREQUAL "")
set(MERGELIST "--add-tracefile ${f}")
set(MERGELIST "-a ${f}")
else()
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
@@ -216,60 +154,17 @@ jobs:
- name: Merge coverage files
run: |
pushd ./coverage/reports
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --json merged.json
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --cobertura merged.xml
popd
sudo apt-get update
sudo apt-get install -y lcov
cd ./coverage/reports
lcov ${{ steps.get-coverage-files.outputs.mergefiles }} -o merged.info --rc lcov_branch_coverage=1
- name: Get current date
id: run_date
run: |
echo "run_date=$(date --iso-8601=minutes)" >> "$GITHUB_OUTPUT"
echo "run_date_short=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
echo "run_date_year=$(date +'%Y')" >> "$GITHUB_OUTPUT"
echo "run_date_month=$(date +'%m')" >> "$GITHUB_OUTPUT"
- name: Generate Coverage Report
- name: Upload coverage to Codecov
if: always()
run: |
python3 ./scripts/ci/coverage/coverage_analysis.py \
-t native_sim-testplan.json \
-m MAINTAINERS.yml \
-c coverage/reports/merged.json \
-o coverage-report-${{ steps.run_date.outputs.run_date_short }} \
-f all
cp coverage-report-* coverage/reports/
- name: Upload Merged Coverage Results and Report
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Coverage Data and report
path: |
coverage/reports/merged.json
coverage/reports/merged.xml
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.json
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.xlsx
- name: Upload test coverage to Codecov
if: always()
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@v3
with:
directory: ./coverage/reports
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/merged.xml
flags: unittests-coverage
- name: Upload Doxygen coverage to Codecov
if: always()
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/doxygen-coverage-results/new.info
disable_search: true
flags: doxygen-coverage
files: merged.info
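For a single platform, the coverage flow above boils down to roughly the following (a sketch; the platform choice is an assumption and the lcov filter list is abbreviated):

    # sketch of the per-platform coverage collection and filtering from the job above;
    # the platform is an assumption and the filter list is abbreviated
    ./scripts/twister --force-color -N -v --filter runnable -p native_posix \
        --coverage -T tests --timeout-multiplier 2
    mv twister-out/coverage.info lcov.pre.info
    lcov -q --remove lcov.pre.info 'tests/*' 'samples/*' '*generated*' \
        -o coverage/reports/native_posix.info --rc lcov_branch_coverage=1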

View File

@@ -1,58 +0,0 @@
name: "CodeQL"
on:
push:
branches:
- main
- v*-branch
- collab-*
schedule:
- cron: '34 16 * * 6'
pull_request:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
jobs:
analyze:
name: Analyze (${{ matrix.language }})
runs-on: ubuntu-24.04
permissions:
security-events: write
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
- language: actions
build-mode: none
config: ./.github/codeql/codeql-actions-config.yml
- language: javascript-typescript
build-mode: none
config: ./.github/codeql/codeql-js-config.yml
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Initialize CodeQL
uses: github/codeql-action/init@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
queries: security-extended
config-file: ${{ matrix.config }}
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo "nothing yet"
exit 0
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
category: "/language:${{matrix.language}}"

View File

@@ -2,37 +2,35 @@ name: Coding Guidelines
on: pull_request
permissions:
contents: read
jobs:
compliance_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: cache-pip
uses: actions/cache@v3
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install Python packages
- name: Install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install unidiff
pip3 install wheel
pip3 install sh
- name: Install Packages
run: |
sudo apt-get update
sudo apt-get install coccinelle
- name: Run Coding Guidelines Checks
- name: Run Coding Guildeines Checks
continue-on-error: true
id: coding_guidelines
env:
@@ -42,10 +40,7 @@ jobs:
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
source zephyr-env.sh
# debug
ls -la

View File

@@ -1,19 +1,10 @@
name: Compliance Checks
on:
pull_request:
types:
- edited
- opened
- reopened
- synchronize
permissions:
contents: read
on: pull_request
jobs:
check_compliance:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -21,12 +12,25 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase onto the target branch
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic lxml junitparser gitlint pylint pykwalify yamllint
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
@@ -36,40 +40,12 @@ jobs:
# Ensure there's no merge commits in the PR
[[ "$(git rev-list --merges --count origin/${BASE_REF}..)" == "0" ]] || \
(echo "::error ::Merge commits not allowed, rebase instead";false)
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
# debug
git log --pretty=oneline | head -n 10
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west config manifest.group-filter -- +ci,-optional
west update -o=--depth=1 -n 2>&1 1> west.update.log || west update -o=--depth=1 -n 2>&1 1> west.update2.log
- name: Setup Node.js
uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
with:
node-version: "lts/*"
cache: npm
check-latest: true
cache-dependency-path: ./scripts/ci/package-lock.json
- name: Install Node dependencies
run: npm --prefix ./scripts/ci ci
west config manifest.group-filter -- +ci,+optional
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Run Compliance Tests
continue-on-error: true
@@ -81,58 +57,35 @@ jobs:
# debug
ls -la
git log --pretty=oneline | head -n 10
# Increase rename limit to allow for large PRs
git config diff.renameLimit 10000
excludes="-e KconfigBasic -e SysbuildKconfigBasic -e ClangFormat"
# The signed-off-by check for dependabot should be skipped
if [ "${{ github.actor }}" == "dependabot[bot]" ]; then
excludes="$excludes -e Identity"
fi
./scripts/ci/check_compliance.py --annotate $excludes -c origin/${BASE_REF}..
./scripts/ci/check_compliance.py --annotate -e KconfigBasic \
-c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: compliance.xml
path: compliance.xml
- name: Upload dts linter patch
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
continue-on-error: true
if: hashFiles('dts_linter.patch') != ''
with:
name: dts_linter.patch
path: dts_linter.patch
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
warns=("ClangFormat" "LicenseAndCopyrightCheck")
files=($(./scripts/ci/check_compliance.py -l))
for file in "${files[@]}"; do
f="${file}.txt"
if [[ -s $f ]]; then
results=$(cat $f)
results="${results//'%'/'%25'}"
results="${results//$'\n'/'%0A'}"
results="${results//$'\r'/'%0D'}"
if [[ "${warns[@]}" =~ "${file}" ]]; then
echo "::warning file=${f}::$results"
else
echo "::error file=${f}::$results"
exit=1
fi
errors=$(cat $f)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=${f}::$errors"
exit=1
fi
done
if [ "${exit}" == "1" ]; then
echo "Compliance error, check for error messages in the \"Run Compliance Tests\" step"
echo "You can run this step locally with the ./scripts/ci/check_compliance.py script."
exit 1;
fi
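As hinted in the final message above, the same check can be run locally; a minimal sketch, assuming the base branch and the excluded check:

    # sketch of a local compliance run, per the hint in the workflow above;
    # the base branch and the excluded check are assumptions
    pip3 install python-magic lxml junitparser gitlint pylint pykwalify yamllint west
    ./scripts/ci/check_compliance.py --annotate -e KconfigBasic -c origin/main..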

View File

@@ -10,38 +10,28 @@ on:
branches:
- refs/tags/*
permissions:
contents: read
jobs:
get_version:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .

View File

@@ -20,32 +20,55 @@ on:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
permissions:
contents: read
jobs:
devicetree-checks:
name: Devicetree script tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest pyyaml tox
- name: run tox
working-directory: scripts/dts/python-devicetree
run: |

.github/workflows/do_not_merge.yml
View File

@@ -0,0 +1,18 @@
name: Do Not Merge
on:
pull_request:
types: [synchronize, opened, reopened, labeled, unlabeled]
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') ||
contains(github.event.*.labels.*.name, 'TSC') }}
name: Prevent Merging
runs-on: ubuntu-22.04
steps:
- name: Check for label
run: |
echo "Pull request is labeled as 'DNM' or 'TSC'"
echo "This workflow fails so that the pull request cannot be merged"
exit 1

View File

@@ -4,126 +4,112 @@
name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
push:
branches:
- main
tags:
- v*
pull_request:
permissions:
contents: read
paths:
- 'doc/**'
- '**.rst'
- 'include/**'
- 'kernel/include/kernel_arch_interface.h'
- 'lib/libc/**'
- 'subsys/testsuite/ztest/include/**'
- 'tests/**'
- '**/Kconfig*'
- 'west.yml'
- '.github/workflows/doc-build.yml'
- 'scripts/dts/**'
- 'doc/requirements.txt'
env:
DOXYGEN_VERSION: 1.15.0
DOXYGEN_SHA256SUM: 0ec2e5b2c3cd82b7106d19cb42d8466450730b8cb7a9e85af712be38bf4523a1
JOB_COUNT: 8
# NOTE: west docstrings will be extracted from the version listed here
WEST_VERSION: 1.0.0
# The latest CMake available directly with apt is 3.18, but we need >=3.20
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.9.6
jobs:
doc-file-check:
name: Check for doc changes
runs-on: ubuntu-24.04
outputs:
file_check: ${{ steps.check-doc-files.outputs.any_modified }}
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Check if Documentation related files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-doc-files
with:
files: |
doc/
boards/**/doc/
**.rst
include/
kernel/include/kernel_arch_interface.h
lib/libc/**
subsys/testsuite/ztest/include/**
**/Kconfig*
west.yml
scripts/dts/
doc/requirements.txt
.github/workflows/doc-build.yml
scripts/pylib/pytest-twister-harness/src/twister_harness/device/device_adapter.py
scripts/pylib/pytest-twister-harness/src/twister_harness/helpers/shell.py
doc-build-html:
name: "Documentation Build (HTML)"
needs: [doc-file-check]
if: >
needs.doc-file-check.outputs.file_check == 'true' || github.event_name != 'pull_request'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
options: '--entrypoint /bin/bash'
timeout-minutes: 60
timeout-minutes: 45
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
env:
BASE_REF: ${{ github.base_ref }}
steps:
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Apply container owner mismatch workaround
- name: install-pkgs
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
sudo apt-get update
sudo apt-get install -y wget python3-pip git ninja-build graphviz
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
echo "${HOME}/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Environment Setup
- name: Rebase
continue-on-error: true
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git rebase origin/${BASE_REF}
git log --graph --oneline HEAD...${PR_HEAD}
west init -l . || true
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages required for documentation build
- name: Rebase
continue-on-error: true
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip install -r doc/requirements.txt --require-hashes
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git rebase origin/${BASE_REF}
git log --graph --oneline HEAD...${PR_HEAD}
- name: Build HTML documentation
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: install-pip
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -138,52 +124,30 @@ jobs:
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-j ${JOB_COUNT} -W --keep-going -T" \
SPHINXOPTS_EXTRA="-q -t publish" \
make -C doc ${DOC_TARGET}
DOC_TAG=${DOC_TAG} SPHINXOPTS_EXTRA="-q -t publish" make -C doc ${DOC_TARGET}
# API documentation coverage
python3 -m coverxygen --xml-dir doc/_build/html/doxygen/xml/ --src-dir include/ --output doc-coverage.info
# deprecated page causing issues
lcov --remove doc-coverage.info \*/deprecated > new.info
genhtml --no-function-coverage --no-branch-coverage new.info -o coverage-report
- name: Compress documentation build artifacts
- name: compress-docs
run: |
tar --use-compress-program="xz -T0" -cf html-output.tar.xz --exclude html/_sources --exclude html/doxygen/xml --directory=doc/_build html
tar --use-compress-program="xz -T0" -cf api-output.tar.xz --directory=doc/_build html/doxygen/html
tar --use-compress-program="xz -T0" -cf api-coverage.tar.xz coverage-report
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: Upload HTML output
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
- name: upload-build
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: Upload Doxygen coverage artifacts
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: api-coverage
path: api-coverage.tar.xz
- name: Summarize PR documentation URLs
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
API_DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/doxygen/html/"
API_COVERAGE_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/api-coverage/"
echo "${PR_NUM}" > pr_num
echo "Documentation will be available shortly at: ${DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Documentation will be available shortly at: ${API_DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Coverage Report will be available shortly at: ${API_COVERAGE_URL}" >> $GITHUB_STEP_SUMMARY
- name: Upload PR number
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
@@ -191,59 +155,55 @@ jobs:
doc-build-pdf:
name: "Documentation Build (PDF)"
needs: [doc-file-check]
if: |
github.event_name != 'pull_request'
runs-on: ubuntu-24.04
timeout-minutes: 120
if: github.event_name != 'pull_request'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container: texlive/texlive:latest
timeout-minutes: 60
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: doc/requirements.txt
- name: checkout
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends graphviz librsvg2-bin \
texlive-latex-base texlive-latex-extra latexmk \
texlive-fonts-recommended texlive-fonts-extra texlive-xetex \
imagemagick fonts-noto xindy
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
echo "${DOXYGEN_SHA256SUM} doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz" | sha256sum -c
if [ $? -ne 0 ]; then
echo "Failed to verify doxygen tarball"
exit 1
fi
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
- name: cache-pip
uses: actions/cache@v3
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
enable-ccache: false
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: install-pip-pkgs
working-directory: zephyr
- name: setup-venv
run: |
pip install -r doc/requirements.txt --require-hashes
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
working-directory: zephyr
continue-on-error: true
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -252,28 +212,14 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-q -j ${JOB_COUNT}" \
LATEXMKOPTS="-quiet -halt-on-error" \
make -C doc pdf
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: pdf-output
if-no-files-found: ignore
path: |
zephyr/doc/_build/latex/zephyr.pdf
zephyr/doc/_build/latex/zephyr.log
doc-build-status-check:
if: always()
name: "Documentation Build Status"
needs:
- doc-build-pdf
- doc-file-check
- doc-build-html
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}
doc/_build/latex/zephyr.pdf
doc/_build/latex/zephyr.log


@@ -10,13 +10,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -24,64 +21,43 @@ jobs:
steps:
- name: Download artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Load PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
let fs = require("fs");
let pr_number = Number(fs.readFileSync("./pr_num/pr_num"));
core.exportVariable("PR_NUM", pr_number);
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
id: check-pr
uses: carpentries/actions/check-valid-pr@083bb9952b1414bd2b9e10ecec1717c938aba4c5 # v0.17.0
uses: carpentries/actions/check-valid-pr@v0.14.0
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: |
steps.download-artifacts.outputs.found_artifact == 'true' &&
steps.check-pr.outputs.VALID != 'true'
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
if: steps.download-artifacts.outputs.found_artifact == 'true'
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/api-coverage \
--delete
fi


@@ -13,13 +13,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -27,7 +24,7 @@ jobs:
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
@@ -35,12 +32,9 @@ jobs:
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
@@ -58,7 +52,4 @@ jobs:
aws s3 sync --quiet html-output/html s3://docs.zephyrproject.org/${VERSION} --delete
aws s3 sync --quiet html-output/html/doxygen/html s3://docs.zephyrproject.org/apidoc/${VERSION} --delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ s3://docs.zephyrproject.org/api-coverage/${VERSION} --delete
fi
aws s3 cp --quiet pdf-output/zephyr.pdf s3://docs.zephyrproject.org/${VERSION}/zephyr.pdf


@@ -5,32 +5,28 @@ on:
- '.github/workflows/errno.yml'
- 'lib/libc/minimal/include/errno.h'
- 'scripts/ci/errno.py'
- 'SDK_VERSION'
permissions:
contents: read
jobs:
check-errno:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
runs-on: ubuntu-22.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.5
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
west-group-filter: -hal,-tools,-bootloader,-babblesim
west-project-filter: -nrf_hw_models
enable-ccache: false
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v3
- name: Run errno.py
working-directory: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
export ZEPHYR_BASE=${PWD}
./scripts/ci/errno.py


@@ -1,8 +1,9 @@
name: Footprint Tracking
# Run every 12 hours and on tags
on:
schedule:
- cron: '50 00 * * *'
- cron: '50 1/12 * * *'
push:
paths:
- 'VERSION'
@@ -15,9 +16,6 @@ on:
# same commit
- 'v*'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
@@ -28,14 +26,12 @@ jobs:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
defaults:
run:
shell: bash
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
@@ -60,28 +56,14 @@ jobs:
run: |
sudo apt-get update
sudo apt-get install -y python3-venv
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Environment Setup
run: |
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: west setup
run: |
west init -l . || true
@@ -89,7 +71,7 @@ jobs:
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
@@ -106,33 +88,5 @@ jobs:
run: |
python3 -m venv .venv
. .venv/bin/activate
pip3 install awscli
aws s3 sync --quiet footprint_data/ s3://testing.zephyrproject.org/footprint_data/
- name: Transform Footprint data to Twister JSON reports
run: |
shopt -s globstar
export ZEPHYR_BASE=${PWD}
python3 ./scripts/footprint/pack_as_twister.py -vvv \
--plan ./scripts/footprint/plan.txt \
--test-name='name.feature' \
./footprint_data/**/
- name: Upload to ElasticSearch
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
ELASTICSEARCH_INDEX: ${{ vars.FOOTPRINT_TRACKING_INDEX }}
run: |
shopt -s globstar
run_date=`date --iso-8601=minutes`
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--flatten footprint \
--flatten-list-names "{'children':'name'}" \
--transform "{ 'footprint_name': '^(?P<footprint_area>([^\/]+\/){0,2})(?P<footprint_path>([^\/]*\/)*)(?P<footprint_symbol>[^\/]*)$' }" \
--run-id "${{ github.run_id }}" \
--run-attempt "${{ github.run_attempt }}" \
--run-workflow "footprint-tracking:${{ github.event_name }}" \
--run-branch "${{ github.ref_name }}" \
-i ${ELASTICSEARCH_INDEX} \
./footprint_data/**/twister_footprint.json
#

.github/workflows/footprint.yml (new file, 78 lines)

@@ -0,0 +1,78 @@
name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-delta:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository == 'zephyrproject-rtos/zephyr'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
- name: Detect Changes in Footprint
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=${PWD}
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
git rebase origin/${BASE_REF}
git checkout -b this_pr
west update
west build -b frdm_k64f tests/benchmarks/footprints -t ram_report
cp build/ram.json ram2.json
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/rom.json rom2.json
git checkout origin/${BASE_REF}
west update
west build -p always -b frdm_k64f tests/benchmarks/footprints -t ram_report
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/ram.json ram1.json
cp build/rom.json rom1.json
git checkout this_pr
./scripts/footprint/fpdiff.py ram1.json ram2.json
./scripts/footprint/fpdiff.py rom1.json rom2.json


@@ -6,20 +6,14 @@ on:
pull_request_target:
types: [opened, closed]
permissions:
contents: read
jobs:
check_for_first_interaction:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on pull requests
issues: write # to comment on issues
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: zephyrproject-rtos/action-first-interaction@58853996b1ac504b8e0f6964301f369d2bb22e5c # v1.1.1+zephyr.6
- uses: actions/checkout@v3
- uses: zephyrproject-rtos/action-first-interaction@v1.1.1-zephyr-4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
@@ -45,7 +39,7 @@ jobs:
and update (by amending and force-pushing the commits) your pull request if necessary.
If you are stuck or need help please join us on [Discord](https://chat.zephyrproject.org/)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-technical-escalation)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-review-escalation)
when applicable. 😊
pr-merged-message: >


@@ -1,96 +0,0 @@
name: Hello World (Multiplatform)
on:
push:
branches:
- main
- v*-branch
- collab-*
pull_request:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/**'
- '.github/workflows/hello_world_multiplatform.yaml'
- 'SDK_VERSION'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
build:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, ubuntu-24.04, ubuntu-24.04-arm, macos-14, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
fetch-depth: 0
- name: Rebase
if: github.event_name == 'pull_request'
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
working-directory: zephyr
shell: bash
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: arm-zephyr-eabi:riscv64-zephyr-elf
ccache-cache-key: hw-${{ matrix.os }}
- name: Build firmware
working-directory: zephyr
shell: bash
run: |
if [ "${{ runner.os }}" = "macOS" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --build-only"
elif [ "${{ runner.os }}" = "Windows" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --short-build-path -O/tmp/twister-out"
elif [ "${{ runner.os }}-${{ runner.arch }}" == "Linux-ARM64" ]; then
EXTRA_TWISTER_FLAGS="--exclude-platform native_sim/native"
fi
west twister \
-p native_sim -p qemu_cortex_m0 -p qemu_riscv32 -p qemu_riscv64 \
--runtime-artifact-cleanup \
--force-color \
--inline-logs \
-T samples/hello_world \
-T samples/cpp/hello_world \
-v \
$EXTRA_TWISTER_FLAGS
- name: Upload artifacts
if: failure()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
if-no-files-found: ignore
path:
zephyr/twister-out/*/samples/hello_world/sample.basic.helloworld/build.log
zephyr/twister-out/*/samples/cpp/hello_world/sample.cpp.helloworld/build.log


@@ -2,10 +2,7 @@ name: Issue Tracker
on:
schedule:
- cron: '10 1/4 * * *'
permissions:
contents: read
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
@@ -17,7 +14,7 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -30,7 +27,7 @@ jobs:
sudo apt-get update
sudo apt-get install discount
- uses: brcrista/summarize-issues@54c549b7d38b7db39e5c6e06fd9617e12e5c3491 # v4
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
@@ -38,14 +35,14 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}


@@ -2,25 +2,20 @@ name: Scancode
on: [pull_request]
permissions:
contents: read
jobs:
scancode_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
uses: actions/checkout@v3
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@23ef91ce31cd4b954366a7b71eea47520da9b380 # v4
uses: zephyrproject-rtos/action_scancode@v4
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts


@@ -2,20 +2,16 @@ name: Manifest
on:
pull_request_target:
permissions:
contents: read
jobs:
contribs:
runs-on: ubuntu-24.04
permissions:
pull-requests: write # to create/update pull request comments
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -24,22 +20,19 @@ jobs:
BASE_REF: ${{ github.base_ref }}
working-directory: zephyrproject/zephyr
run: |
pip install -r scripts/requirements-west.txt --require-hashes
pip3 install west
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Manifest
uses: zephyrproject-rtos/action-manifest@09983f53d3d878791aa37a7755ae44d695f4c1e5 # v2.0.0
uses: zephyrproject-rtos/action-manifest@v1.2.0
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-token: ${{ secrets.ZB_GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
use-tree-checkout: 'true'
check-impostor-commits: 'true'
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest'
dnm-labels: 'DNM (manifest)'
blobs-added-labels: 'Binary Blobs Added'
blobs-modified-labels: 'Binary Blobs Modified'
dnm-labels: 'DNM'


@@ -1,19 +0,0 @@
name: Check SHA-pinned GitHub Actions
on:
pull_request:
paths:
- '.github/workflows/**'
permissions:
contents: read
jobs:
check-sha-pinned-actions:
name: Verify GitHub Actions
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@6124774845927d14c601359ab8138699fa5b70c3 # v4.0.1

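Several workflows in this compare switch third-party actions between floating tag references and full commit-SHA pins, which is exactly what the check above enforces. A rough sketch of the pinned convention, reusing a reference that already appears elsewhere in this diff; the trailing comment records the release tag the SHA corresponds to:

steps:
  - name: Checkout code
    # Pin to an immutable commit SHA; the comment keeps the human-readable
    # tag visible for reviewers and for automated update tooling.
    uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2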

@@ -1,46 +0,0 @@
name: PR Metadata Check
on:
pull_request:
types:
- synchronize
- opened
- reopened
- labeled
- unlabeled
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
do-not-merge:
name: Prevent Merging
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run the check script
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python -u scripts/ci/do_not_merge.py \
-p "${{ github.event.pull_request.number }}" \
-o "${{ github.event.repository.owner.login }}" \
-r "${{ github.event.repository.name }}"


@@ -19,31 +19,32 @@ on:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
permissions:
contents: read
jobs:
pylib-tests:
name: Misc. Pylib Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest for build_helpers
env:
ZEPHYR_BASE: ./


@@ -1,31 +0,0 @@
name: ready to merge
on:
workflow_call:
inputs:
needs_context:
type: string
required: true
permissions:
contents: read
jobs:
all_jobs_passed:
name: all jobs passed
runs-on: ubuntu-latest
steps:
- name: "Check status of all required jobs"
run: |-
NEEDS_CONTEXT='${{ inputs.needs_context }}'
JOB_IDS=$(echo "$NEEDS_CONTEXT" | jq -r 'keys[]')
for JOB_ID in $JOB_IDS; do
RESULT=$(echo "$NEEDS_CONTEXT" | jq -r ".[\"$JOB_ID\"].result")
echo "$JOB_ID job result: $RESULT"
if [[ $RESULT != "success" && $RESULT != "skipped" ]]; then
echo "***"
echo "Error: The $JOB_ID job did not pass."
exit 1
fi
done
echo "All jobs passed or were skipped."

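The needs_context input consumed above is the JSON-serialized needs object of the calling workflow, passed as needs_context: ${{ toJson(needs) }} by the status-check jobs shown elsewhere in this compare. A hedged sketch of the shape the jq loop iterates over; the job names and results below are illustrative only:

{
  "doc-build-html": { "result": "success" },
  "doc-build-pdf": { "result": "skipped" },
  "doc-file-check": { "result": "success" }
}

Each top-level key is a job id from the caller's needs list, and the script fails unless every result is success or skipped.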

@@ -6,16 +6,11 @@ on:
- 'v*'
- '!v*rc*'
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-24.04
permissions:
contents: write # to create GitHub release entry
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -26,12 +21,12 @@ jobs:
echo "TRIMMED_VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@676e2d560c9a403aa252096d99fcab3e1132b0f5 # v6.0.0
uses: fsfe/reuse-action@v1
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
@@ -43,7 +38,7 @@ jobs:
- name: Create Release
id: create_release
uses: actions/create-release@0cb9c9b65d5d1901c1f53e5e66eaf4afd303e70e # v1.1.4
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -55,7 +50,7 @@ jobs:
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@e8f9f06c4b078e705bd2ea027f0926603fc9b4d5 # v1.0.2
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -1,61 +0,0 @@
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecards supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '43 7 * * 6'
push:
branches:
- main
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed for Code scanning upload
security-events: write
# Needed for GitHub OIDC token if publish_results is true
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
with:
results_file: results.sarif
results_format: sarif
# Publish results to OpenSSF REST API for easy access by consumers.
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
sarif_file: results.sarif


@@ -19,20 +19,17 @@ on:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
permissions:
contents: read
jobs:
scripts-tests:
name: Scripts tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -45,22 +42,26 @@ jobs:
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest
env:


@@ -2,13 +2,11 @@ name: Stale Workflow Queue Cleanup
on:
workflow_dispatch:
branches: [main]
schedule:
# every day at 15:00
- cron: '0 15 * * *'
permissions:
contents: read
concurrency:
group: stale-workflow-queue-cleanup
cancel-in-progress: true
@@ -16,14 +14,11 @@ concurrency:
jobs:
cleanup:
name: Cleanup
runs-on: ubuntu-24.04
if: github.ref == 'refs/heads/main'
permissions:
actions: write # to delete stale workflow runs
runs-on: ubuntu-22.04
steps:
- name: Delete stale queued workflow runs
uses: MajorScruffy/delete-old-workflow-runs@78b5af714fefaefdf74862181c467b061782719e # v0.3.0
uses: MajorScruffy/delete-old-workflow-runs@v0.3.0
with:
repository: ${{ github.repository }}
# Remove any workflow runs in "queued" state for more than 1 day


@@ -3,20 +3,13 @@ on:
schedule:
- cron: "16 00 * * *"
permissions:
contents: read
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on stale pull requests
issues: write # to comment on stale issues
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@v8
with:
stale-pr-message: 'This pull request has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
@@ -31,5 +24,5 @@ jobs:
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process,Coverity'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process'
operations-per-run: 400


@@ -1,38 +0,0 @@
name: Merged PR stats
on:
pull_request_target:
branches:
- main
- v*-branch
types: [closed]
permissions:
contents: read
jobs:
record_merged:
if: github.event.pull_request.merged == true && github.repository == 'zephyrproject-rtos/zephyr'
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: PR event
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
PR_STAT_ES_INDEX: ${{ vars.PR_STAT_ES_INDEX }}
run: |
python3 ./scripts/ci/stats/merged_prs.py --pull-request ${{ github.event.pull_request.number }} --repo ${{ github.repository }}


@@ -1,64 +0,0 @@
name: Publish Twister Test Results
on:
workflow_run:
workflows: ["Run tests with twister"]
branches:
- main
types:
- completed
permissions:
contents: read
jobs:
upload-to-elasticsearch:
if: |
github.repository == 'zephyrproject-rtos/zephyr' &&
github.event.workflow_run.event != 'pull_request'
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
runs-on: ubuntu-24.04
steps:
# Needed for elasticsearch and upload script
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
with:
path: artifacts
workflow: twister.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Upload to elasticsearch
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event.workflow_run.event}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event.workflow_run.event}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi


@@ -6,17 +6,14 @@ on:
- main
- v*-branch
- collab-*
pull_request:
pull_request_target:
branches:
- main
- v*-branch
- collab-*
schedule:
# Run at 02:00 UTC on every Sunday
- cron: '0 2 * * 0'
permissions:
contents: read
# Run at 03:00 UTC on every Sunday
- cron: '0 3 * * 0'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -25,7 +22,11 @@ concurrency:
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-24.04
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -33,58 +34,59 @@ jobs:
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 20
WEEKLY_MATRIX_SIZE: 200
DAILY_MATRIX_SIZE: 80
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TESTS_PER_BUILDER: 900
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: zephyr
persist-credentials: false
- name: Set up Python
if: github.event_name == 'pull_request'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: install-packages
working-directory: zephyr
if: github.event_name == 'pull_request'
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Setup Zephyr project
if: github.event_name == 'pull_request'
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
enable-ccache: false
- name: Environment Setup
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Generate Test Plan with Twister
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
export ZEPHYR_BASE=${PWD}
@@ -100,23 +102,22 @@ jobs:
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" -a "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
subset="[$(seq -s',' 1 ${WEEKLY_MATRIX_SIZE})]"
size=${WEEKLY_MATRIX_SIZE}
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
@@ -127,7 +128,7 @@ jobs:
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.5.20231213
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
@@ -135,20 +136,20 @@ jobs:
subset: ${{fromJSON(needs.twister-build-prep.outputs.subset)}}
timeout-minutes: 1440
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
CCACHE_IGNOREOPTIONS: '--specs=*'
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --test-config tests/test_config_ci.yaml --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
WEEKLY_OPTIONS: ' -M --build-only --all --show-footprint --report-filtered -j 32'
PR_OPTIONS: ' --clobber-output --integration -j 16'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint --report-filtered -j 16'
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
DAILY_OPTIONS: ' -M --build-only --all --show-footprint'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
steps:
- name: Print cloud service information
run: |
@@ -171,7 +172,7 @@ jobs:
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -179,42 +180,30 @@ jobs:
- name: Environment Setup
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
west init -l . || true
west config manifest.group-filter -- +ci,+optional,+testing
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Check Environment
run: |
cmake --version
gcc --version
cargo --version
rustup target list --installed
ls -la
echo "github.ref: ${{ github.ref }}"
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
west packages pip --install
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
@@ -222,19 +211,8 @@ jobs:
ccache -p
ccache -z -s -vv
- name: Update BabbleSim to manifest revision
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
id: run_twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
@@ -246,9 +224,8 @@ jobs:
fi
fi
- if: github.event_name == 'pull_request'
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
id: run_twister_pr
run: |
rm -f testplan.json
export ZEPHYR_BASE=${PWD}
@@ -263,16 +240,15 @@ jobs:
fi
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Weekly)
id: run_twister_sched
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${DAILY_OPTIONS}
fi
fi
@@ -283,7 +259,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -293,110 +269,62 @@ jobs:
module_tests/twister.xml
testplan.json
- if: matrix.subset == 1 && github.event_name == 'push'
name: Save the list of Python packages
shell: bash
run: |
FREEZE_FILE="frozen-requirements.txt"
timestamp="$(date)"
version="$(git describe --abbrev=12 --always)"
echo -e "# Generated at $timestamp ($version)\n" > $FREEZE_FILE
pip freeze | tee -a $FREEZE_FILE
- if: matrix.subset == 1 && github.event_name == 'push'
name: Upload the list of Python packages
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Frozen PIP package set
path: |
frozen-requirements.txt
twister-test-results:
name: "Publish Unit Tests Results"
needs:
- twister-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create the check run entry with Twister test results
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
needs: twister-build
runs-on: ubuntu-22.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# Needed for opensearch and upload script
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v3
with:
path: artifacts
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Upload to opensearch
run: |
pip3 install elasticsearch
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event_name}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event_name}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/*/twister.xml junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
junit.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() && (github.event_name == 'push') }}
uses: codecov/test-results-action@0fa95f0e1eeaafde2c782583b36b28ad0d8c77d3 # v1.2.1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Unit Test Results
files: "**/twister.xml"
comment_mode: off
- name: Analyze Twister Reports
if: needs.twister-build.result == 'failure'
run: |
./scripts/ci/twister_report_analyzer.py artifacts/*/*/twister.json --long-summary --platforms --output-md errors.md
if [[ -s "errors.md" ]]; then
echo '### Error Summary! 🚀' >> $GITHUB_STEP_SUMMARY
cat errors.md >> $GITHUB_STEP_SUMMARY
fi
- name: Upload Twister Analysis Results
if: needs.twister-build.result == 'failure'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Twister Analysis Results
if-no-files-found: ignore
path: |
twister_report_summary.json
twister-status-check:
if: always()
name: "Check Twister Status"
needs:
- twister-build-prep
- twister-build
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}


@@ -8,26 +8,20 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
permissions:
contents: read
jobs:
twister-tests:
@@ -35,23 +29,26 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest for twisterlib
env:
ZEPHYR_BASE: ./


@@ -7,60 +7,85 @@ on:
pull_request:
branches:
- main
- collab-*
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/twister_tests_blackbox.yml'
permissions:
contents: read
env:
PYTHONIOENCODING: utf-8
jobs:
twister-tests:
name: Twister Black Box Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
fail-fast: false
runs-on: ubuntu-24.04
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.5
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.3
steps:
- name: Apply Container Owner Mismatch Workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
fetch-depth: 0
uses: actions/checkout@v3
- name: Environment Setup
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi:riscv64-zephyr-elf:x86_64-zephyr-elf'
enable-ccache: false
west-group-filter: -tools,-bootloader,-babblesim,-hal
west-project-filter: -nrf_hw_models,+cmsis,+hal_xtensa,+cmsis_6
- name: Go Into Venv
shell: bash
run: |
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
echo "$(which python)"
- name: Install Packages
run: |
python3 -m pip install -U -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt -r scripts/requirements-run-test.txt
- name: Run Pytest For Twister Black Box Tests
working-directory: zephyr
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
sudo apt-get install -y lcov
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox/
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox
- name: Upload Unit Test Results
if: success() || failure()
uses: actions/upload-artifact@v2
with:
name: Black Box Test Results (Python ${{ matrix.python-version }})
path: |
twister-out*/twister.log
twister-out*/twister.json
twister-out*/testplan.log
retention-days: 14
- name: Clear Workspace
if: success() || failure()
run: |
rm -rf twister-out*/


@@ -8,7 +8,6 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -17,43 +16,64 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
permissions:
contents: read
jobs:
west-commands:
west-commnads:
name: West Command Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest west pyelftools canopen natsort progress mypy intelhex psutil ply pyserial
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |

.gitignore

@@ -1,6 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The Zephyr Project Contributors
*.o
*.a
*.d
@@ -10,14 +7,11 @@
*.swp
*.swo
*~
# Emacs
.\#*
\#*\#
build*/
!doc/build/
!scripts/build
!share/sysbuild/build
!tests/drivers/build_all
!scripts/pylib/build_helpers
cscope.*
@@ -33,8 +27,6 @@ outdir
outdir-*
scripts/basic/fixdep
scripts/gen_idt/gen_idt
coverage-report
doc-coverage.info
doc/_build
doc/doxygen
doc/xml
@@ -47,7 +39,6 @@ sanity-out*
twister-out*
bsim_out
bsim_bt_out
myresults.xml
tests/RunResults.xml
scripts/grub
doc/reference/kconfig/*.rst
@@ -59,25 +50,11 @@ doc/doc.warnings
hide-defaults-note
venv
.venv
.envrc
.DS_Store
.clangd
new.info
# Cargo drops lock files in projects to capture resolved dependencies.
# We don't want to record these.
Cargo.lock
# Cargo encourages a .cargo/config.toml file to symlink to a generated file. Don't save these.
.cargo/
# Normal west builds will place the Rust target directory under the build directory. However,
# sometimes IDEs and such will litter these target directories as well.
target/
# CI output
compliance.xml
dts_linter.patch
_error.types
# Tag files
@@ -90,37 +67,17 @@ tags
.idea
# from check_compliance.py
# zephyr-keep-sorted-start
BinaryFiles.txt
BoardYml.txt
Checkpatch.txt
ClangFormat.txt
DevicetreeBindings.txt
DevicetreeLinting.txt
GitDiffCheck.txt
Gitlint.txt
Identity.txt
ImageSize.txt
Kconfig.txt
KconfigBasic.txt
KconfigBasicNoModules.txt
KconfigHWMv2.txt
KeepSorted.txt
LicenseAndCopyrightCheck.txt
MaintainersFormat.txt
ModulesMaintainers.txt
Nits.txt
Pylint.txt
PythonCompat.txt
Ruff.txt
SphinxLint.txt
SysbuildKconfig.txt
SysbuildKconfigBasic.txt
SysbuildKconfigBasicNoModules.txt
TextEncoding.txt
YAMLLint.txt
ZephyrModuleFile.txt
# zephyr-keep-sorted-stop
# Node dependencies
node_modules


@@ -1,7 +1,6 @@
# All these sections are optional, edit this file as you like.
# Zephyr-specific defaults are located in scripts/gitlint/zephyr_commit_rules.py
[general]
regex-style-search=true
ignore=title-trailing-punctuation, T3, title-max-length, T1, body-hard-tab, B3, B1
# verbosity should be a value between 1 and 3, the commandline -v flags take precedence over this
verbosity = 3
@@ -27,7 +26,7 @@ extra-path=scripts/gitlint
# max-line-count=200
[title-starts-with-subsystem]
regex = ^(?!subsys:)(?!treewide:)(([^:]+):)(\s([^:]+):)*\s(.+)$
regex = ^(?!subsys:)(([^:]+):)(\s([^:]+):)*\s(.+)$
[title-must-not-contain-word]
# Comma-separated list of words that should not occur in the title. Matching is case
@@ -60,7 +59,3 @@ ignore-merge-commits=false
# By specifying this rule, developers can only change the file when they explicitly reference
# it in the commit message.
#files=gitlint/rules.py,README.md
[ignore-by-author-name]
regex=^dependabot\[bot\]$
ignore=all


@@ -1,14 +1,14 @@
Alexandr Kolosov <rikorsev@gmail.com>
Alexandre d'Alton <alexandre.dalton@intel.com>
Amir Kaplan <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Anthony Smigielski <thebasti0ncode@gmail.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>
Aska Wu <aska.wu@linaro.org>
Bit Pathe <bitpathe@gmail.com>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Bjarki Arge Andreasen <baa@trackunit.com>
Carles Cufi <carles.cufi@nordicsemi.no>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
chao an <anchao@xiaomi.com>
Charles E. Youse <charles.youse@intel.com>
Chen Xingyu <hi@xingrz.me>
@@ -16,32 +16,32 @@ Christoph Schnetzler <christoph.schnetzler@husqvarnagroup.com>
Christoph Schramm <schramm@makaio.com>
Christopher Friedt <cfriedt@meta.com>
Christopher Friedt <cfriedt@meta.com> <cfriedt@fb.com>
Chuck Jordan <cjordan@synopsys.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Chunlin Han <chunlin.han@linaro.org> <chunlin.han@acer.com>
David B. Kinder <david.b.kinder@intel.com>
David Komel <a8961713@gmail.com>
David Leach <david.leach@nxp.com>
Dirk Brandewie <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Enjia Mai <enjia.mai@intel.com>
Enjia Mai <enjia.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com>
Enjia Mai <enjia.mai@intel.com> <enjiax.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com> <Eugeniy.Paltsev@synopsys.com>
Felipe Neves <ryukokki.felipe@gmail.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Findlay Feng <i@fengch.me>
Flavio Arieta Netto <flavio@exati.com.br>
Francois Ramu <francois.ramu@st.com>
Gerardo Aceves <gerardo.aceves@intel.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Gregory Shue <gregory.shue@legrand.com>
Gregory Shue <gregory.shue@legrand.com> <gregory.shue@legrand.us>
HaiLong Yang <cameledyang@pm.me>
James Johnson <james.johnson672@t-mobile.com>
Jarno Lämsä <jarno.lamsa@nordicsemi.no>
Jędrzej Ciupis <jedrzej.ciupis@nordicsemi.no>
Jeremie Garcia <jeremie.garcia@intel.com>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Jim Benjamin Luther <jilu@oticon.com>
Johan Kruger <johan.kruger@windriver.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Johann Fischer <j.fischer@phytec.de>
Jørgen Kvalvaag <jorgen.kvalvaag@nordicsemi.no>
Juan Manuel Cruz Alcaraz <juan.m.cruz.alcaraz@intel.com>
@@ -51,17 +51,16 @@ Jun Li <jun.r.li@intel.com>
Justin Watson <jwatson5@gmail.com>
Kamil Sroka <kamil.sroka@nordicsemi.no>
Katarzyna Giądła <katarzyna.giadla@nordicsemi.no>
Keren Siman-Tov <keren.siman-tov@intel.com>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Lixin Guo <lixinx.guo@intel.com>
Łukasz Mazur <lukasz.mazur@hidglobal.com>
Manuel Argüelles <manuel.arguelles@nxp.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <manuel.arguelles@coredumplabs.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <marguelles.dev@gmail.com>
Marc Herbert <marc.herbert@intel.com> <46978960+marc-hb@users.noreply.github.com>
Marin Jurjević <marin.jurjevic@hotmail.com>
Mariusz Ryndzionek <mariusz.ryndzionek@firmwave.com>
@@ -72,14 +71,14 @@ Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.bolivar@linaro.org>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.f.bolivar@gmail.com>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@foundries.io>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@opensourcefoundries.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Mateusz Hołenko <mholenko@antmicro.com>
Michael Rosen <michael.r.rosen@intel.com>
Michal Narajowski <michal.narajowski@codecoup.pl>
Mike Hirst <michael.hirst@windriver.com>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Ming Shao <ming.shao@intel.com>
Mohan Kumar Kumar <mohankm@fb.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Navin Sankar Velliangiri <navin@linumiz.com>
NingX Zhao <ningx.zhao@intel.com>
Nishikant Nayak <nishikantax.nayak@intel.com>
@@ -100,23 +99,21 @@ Radosław Koppel <radoslaw.koppel@nordicsemi.no> <r.koppel@k-el.com>
Raja D. Singh <rdsingh@iotwizards.com>
Ricardo Salveti <ricardo@opensourcefoundries.com>
Ricardo Salveti <ricardo@opensourcefoundries.com> <ricardo.salveti@linaro.org>
Ruud Derwig <Ruud.Derwig@synopsys.com>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Saku Rautio <saku.rautio@nordicsemi.no>
Scott Worley <scott.worley@microchip.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Sharron Liu <sharron.liu@intel.com>
Shilpashree L C <shilpashree.lc@intel.com>
Shuang He <shuang.he@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Sigvart Hovland <sigvart.hovland@nordicsemi.no>
Stéphane D'Alu <sdalu@sdalu.com>
Stine Åkredalen <stine.akredalen@nordicsemi.no>
Thomas Heeley <thomas.heeley@intel.com>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Tim Sørensen <tims@demant.com>
Tim Sørensen <tims@demant.com> <tims@oticon.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa@gmail.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com>

File diff suppressed because it is too large


@@ -1,30 +0,0 @@
# Copyright (c) 2024 Basalte bv
# SPDX-License-Identifier: Apache-2.0
extend = ".ruff-excludes.toml"
line-length = 100
target-version = "py310"
[lint]
select = [
# zephyr-keep-sorted-start
"B", # flake8-bugbear
"E", # pycodestyle
"F", # pyflakes
"I", # isort
"SIM", # flake8-simplify
"UP", # pyupgrade
"W", # pycodestyle warnings
# zephyr-keep-sorted-stop
]
ignore = [
# zephyr-keep-sorted-start
"SIM108", # Allow if-else blocks instead of forcing ternary operator
# zephyr-keep-sorted-stop
]
[format]
quote-style = "preserve"
line-ending = "lf"

File diff suppressed because it is too large


@@ -1,4 +1,963 @@
# Instead of the CODEOWNERS file, The Zephyr Project uses a custom format.
# See MAINTAINERS.yml for details.
#
# DO NOT EDIT THIS FILE.
# CODEOWNERS for autoreview assigning in github
# https://help.github.com/en/articles/about-code-owners#codeowners-syntax
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
# * @galak @nashif
/.github/ @nashif @stephanosio
/.github/workflows/ @galak @nashif
/MAINTAINERS.yml @MaureenHelm
/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/cortex_m/cmse/ @ioannisg
/arch/arm/include/cortex_m/cmse.h @ioannisg
/arch/arm/core/cortex_a_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/arm64/ @carlocaione
/arch/arm64/core/cortex_r/ @povergoing
/arch/arm64/core/xen/ @lorc @firscity
/arch/common/ @ioannisg @andyross
/arch/mips/ @frantony
/soc/arc/snps_*/ @abrodkin @ruuddw @evgeniy-paltsev
/soc/nios2/ @nashif
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/aspeed/ @aspeeddylan
/soc/arm/atmel_sam/common/*_sam4l_*.c @nandojve
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4l/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/same70/ @nandojve
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/cypress/ @ifyall @npal-cy
/soc/arm/bcm*/ @sbranden
/soc/arm/gigadevice/ @nandojve
/soc/arm/infineon_cat1/ @ifyall @npal-cy
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/nxp*/ @mmahadevan108 @dleach02
/soc/arm/nxp_s32/ @manuargue
/soc/arm/nordic_nrf/ @anangl
/soc/arm/nuvoton_npcx/ @MulinChao @ChiHuaL
/soc/arm/nuvoton_numicro/ @ssekar15
/soc/arm/quicklogic_eos_s3/ @fkokosinski @kgugala
/soc/arm/rpi_pico/ @yonsch
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/*/power.c @FRASTM
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/st_stm32/stm32h7/*stm32h735* @benediktibk
/soc/arm/st_stm32/stm32l4/*stm32l451* @benediktibk
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynq7000/ @ibirnbaum
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/arm/renesas_rcar/ @aaillet
/soc/arm64/ @carlocaione
/soc/arm64/qemu_cortex_a53/ @carlocaione
/soc/arm64/bcm_vk/ @abhishek-brcm
/soc/arm64/nxp_layerscape/ @JiafeiPan
/soc/arm64/xenvm/ @lorc @firscity
/soc/arm64/nxp_imx/ @MrVan @JiafeiPan
/soc/arm64/arm/ @povergoing
/soc/arm64/arm/fvp_aemv8a/ @carlocaione
/soc/arm64/intel_socfpga/* @siclim
/soc/arm64/renesas_rcar/ @lorc @xakep-amatop
/soc/Kconfig @tejlmand @galak @nashif @nordicjm
/submanifests/* @mbolivar-ampere
/arch/x86/ @jhedberg @nashif
/arch/nios2/ @nashif
/arch/posix/ @aescolar @daor-oti
/arch/riscv/ @kgugala @pgielda
/soc/mips/ @frantony
/soc/posix/ @aescolar @daor-oti
/soc/riscv/ @kgugala @pgielda
/soc/riscv/openisa*/ @dleach02
/soc/riscv/riscv-privileged/andes_v5/ @cwshu @kevinwang821020 @jimmyzhe
/soc/riscv/riscv-privileged/neorv32/ @henrikbrixandersen
/soc/riscv/riscv-privileged/gd32vf103/ @soburi
/soc/riscv/riscv-privileged/niosv/ @sweeaun
/soc/x86/ @dcpleung @nashif
/arch/xtensa/ @dcpleung @andyross @nashif
/soc/xtensa/ @dcpleung @andyross @nashif
/arch/sparc/ @julius-barendt
/soc/sparc/ @julius-barendt
/boards/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
/boards/arm/96b_stm32_sensor_mez/ @Mani-Sadhasivam
/boards/arm/96b_wistrio/ @Mani-Sadhasivam
/boards/arm/arduino_due/ @ioannisg
/boards/arm/acn52832/ @sven-hm
/boards/arm/arduino_mkrzero/ @soburi
/boards/arm/bbc_microbit_v2/ @LingaoM
/boards/arm/bl5340_dvk/ @lairdjm
/boards/arm/bl65*/ @lairdjm
/boards/arm/blackpill_f401ce/ @coderkalyan
/boards/arm/blackpill_f411ce/ @coderkalyan
/boards/arm/bt*10/ @greg-leach
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/cy8ckit_062_ble/ @ifyall @npal-cy
/boards/arm/cy8ckit_062s4/ @DaWei8823
/boards/arm/cy8ckit_062_wifi_bt/ @ifyall @npal-cy
/boards/arm/cy8cproto_062_4343w/ @ifyall @npal-cy
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @mmahadevan108 @dleach02
/boards/arm/gd32*/ @nandojve
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @mmahadevan108 @dleach02
/boards/arm/ip_k66f/ @parthitce @lmajewski
/boards/arm/legend/ @mbittan @simonguinot
/boards/arm/lpcxpresso*/ @mmahadevan108 @dleach02
/boards/arm/mg100/ @rerickson1
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimx8mm_phyboard_polis @pefech
/boards/arm/mimxrt*/ @mmahadevan108 @dleach02
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao @ChiHuaL
/boards/arm/nrf*/ @carlescufi @lemrey
/boards/arm/nucleo*/ @erwango @ABOSTM @FRASTM
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/pinnacle_100_dvk/ @rerickson1
/boards/arm/qemu_cortex_a9/ @ibirnbaum
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg @stephanosio
/boards/arm/quick_feather/ @fkokosinski @kgugala
/boards/arm/rak4631_nrf52840/ @gpaquet85
/boards/arm/rak5010_nrf52840/ @gpaquet85
/boards/arm/rpi_pico/ @yonsch
/boards/arm/ronoth_lodev/ @NorthernDean
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4l_ek/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_e70_xplained/ @nandojve
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/scobc_module1/ @yashi
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/s32*/ @manuargue
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango @ABOSTM @FRASTM
/boards/arm/stm32h735g_disco/ @benediktibk
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango @ABOSTM @FRASTM
/boards/arm/rcar_*/ @aaillet
/boards/arm/ubx_bmd345eval_nrf52840/ @Navin-Sankar @brec-u-blox
/boards/arm/nrf5340_audio_dk_nrf5340 @koffes @alexsven @erikrobstad @rick1082 @gWacey
/boards/common/ @mbolivar-ampere
/boards/deprecated.cmake @tejlmand
/boards/mips/ @frantony
/boards/nios2/ @nashif
/boards/nios2/altera_max10/ @nashif
/boards/arm/stm32_min_dev/ @sidcha
/boards/posix/ @aescolar @daor-oti
/boards/posix/nrf52_bsim/ @aescolar @wopu-ot
/boards/riscv/ @kgugala @pgielda
/boards/riscv/rv32m1_vega/ @dleach02
/boards/riscv/adp_xc7k_ae350/ @cwshu @kevinwang821020 @jimmyzhe
/boards/riscv/longan_nano/ @soburi
/boards/riscv/neorv32/ @henrikbrixandersen
/boards/riscv/niosv*/ @sweeaun
/boards/riscv/sparkfun_red_v_things_plus/ @soburi
/boards/riscv/stamp_c3/ @soburi
/boards/shields/ @erwango
/boards/shields/atmel_rf2xx/ @nandojve
/boards/shields/esp_8266/ @nandojve
/boards/shields/inventek_eswifi/ @nandojve
/boards/x86/ @dcpleung @nashif
/boards/x86/acrn/ @enjiamai
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
/boards/xtensa/nxp_adsp_imx8/ @iuliana-prodan @dbaluta
/boards/sparc/ @julius-barendt
/boards/arm64/qemu_cortex_a53/ @carlocaione
/boards/arm64/bcm958402m2_a72/ @abhishek-brcm
/boards/arm64/mimx8mm_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mp_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mn_evk/ @JiafeiPan
/boards/arm64/mimx93_evk/ @JiafeiPan
/boards/arm64/nxp_ls1046ardb/ @JiafeiPan
/boards/arm64/xenvm/ @lorc @firscity
/boards/arm64/fvp_baser_aemv8r/ @povergoing
/boards/arm64/fvp_base_revc_2xaemv8a/ @carlocaione
/boards/arm64/intel_socfpga_agilex_socdk/ @siclim @ngboonkhai
/boards/arm64/intel_socfpga_agilex5_socdk/ @chongteikheng
/boards/arm64/rcar_*/ @lorc @xakep-amatop
/boards/Kconfig @tejlmand @galak @nashif @nordicjm
# All cmake related files
/cmake/ @tejlmand @nashif
/cmake/*/arcmwdt/ @abrodkin @evgeniy-paltsev @tejlmand
/CMakeLists.txt @tejlmand @nashif
/doc/ @carlescufi
/doc/develop/tools/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/services/device_mgmt/smp_protocol.rst @de-nordic @nordicjm
/doc/services/device_mgmt/smp_groups/ @de-nordic @nordicjm
/doc/services/sensing/ @lixuzha @ghu0510 @qianruh
/doc/CMakeLists.txt @carlescufi
/doc/_scripts/ @carlescufi
/doc/connectivity/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/doc/connectivity/networking/conn_mgr @carlescufi @glarsennordic
/doc/build/dts/ @galak @mbolivar-ampere
/doc/build/sysbuild/ @tejlmand @nordicjm
/doc/hardware/peripherals/canbus/ @alexanderwachter @henrikbrixandersen
/doc/security/ @ceolin @d3zd3z
/drivers/debug/ @nashif
/drivers/*/*sam4l* @nandojve
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*gd32* @nandojve
/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/*/*mcux* @mmahadevan108 @dleach02
/drivers/*/*stm32* @erwango @ABOSTM @FRASTM
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao @ChiHuaL
/drivers/*/*andes* @cwshu @kevinwang821020 @jimmyzhe
/drivers/*/*ifx_cat1* @ifyall @npal-cy
/drivers/*/*neorv32* @henrikbrixandersen
/drivers/*/*_s32* @manuargue
/drivers/adc/ @anangl
/drivers/adc/adc_ads1x1x.c @XenuIsWatching
/drivers/adc/adc_stm32.c @cybertale
/drivers/adc/adc_rpi_pico.c @soburi
/drivers/adc/*ads114s0x* @benediktibk
/drivers/adc/*max11102_17* @benediktibk
/drivers/audio/*nrfx* @anangl
/drivers/auxdisplay/*pt6314* @xingrz
/drivers/auxdisplay/* @thedjnK
/drivers/bbram/* @yperess @sjg20 @jackrosenthal
/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/drivers/bluetooth/hci/hci_esp32.c @sylvioalves
/drivers/cache/ @carlocaione
/drivers/syscon/ @carlocaione @yperess
/drivers/can/ @alexanderwachter @henrikbrixandersen
/drivers/can/*mcp2515* @karstenkoenig
/drivers/can/*rcar* @aaillet
/drivers/clock_control/*agilex* @siclim @gdengi
/drivers/clock_control/*nrf* @nordic-krch
/drivers/clock_control/*esp32* @extremegtx @sylvioalves
/drivers/clock_control/*cpg_mssr* @aaillet
/drivers/counter/ @nordic-krch
/drivers/console/ipm_console.c @finikorg
/drivers/console/semihost_console.c @luozhongyao
/drivers/console/jailhouse_debug_console.c @MrVan
/drivers/counter/counter_cmos.c @dcpleung
/drivers/counter/counter_ll_stm32_timer.c @kentjhall
/drivers/counter/*esp32* @sylvioalves
/drivers/counter/dw_timer.c @pbalsundar
/drivers/counter/counter_timer_shell.c @pbalsundar
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/display/*rm68200* @mmahadevan108
/drivers/display/display_ili9342c.* @extremegtx
/drivers/dac/ @martinjaeger
/drivers/dac/*ad56xx* @benediktibk
/drivers/dai/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ssp/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/dmic/ @marcinszkudlinski @abonislawski
/drivers/dai/intel/alh/ @abonislawski
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*dw_common* @abonislawski
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale @lowlander
/drivers/dma/*pl330* @raveenp
/drivers/dma/*iproc_pax* @raveenp
/drivers/dma/*intel_adsp* @teburd @abonislawski
/drivers/dma/*rpi_pico* @soburi
/drivers/dma/*xmc4xxx* @talih0
/drivers/edac/ @finikorg
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*b91* @andy-liu-telink
/drivers/entropy/*bt_hci* @JordanYates
/drivers/entropy/*rv32m1* @dleach02
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/espi/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/ethernet/ @tbursztyka @jukkar
/drivers/ethernet/*dwmac* @npitre
/drivers/ethernet/*stm32* @Nukersson @lochej
/drivers/ethernet/*w5500* @parthitce
/drivers/ethernet/*xlnx_gem* @ibirnbaum
/drivers/ethernet/*smsc91x* @sgrrzhf
/drivers/ethernet/*adin2111* @GeorgeCGV
/drivers/ethernet/phy/ @rlubos @tbursztyka @arvinf @jukkar
/drivers/ethernet/phy/*adin2111* @GeorgeCGV
/drivers/mdio/ @rlubos @tbursztyka @arvinf
/drivers/mdio/*adin2111* @GeorgeCGV
/drivers/flash/ @nashif @de-nordic
/drivers/flash/*stm32_qspi* @lmajewski
/drivers/flash/*b91* @andy-liu-telink
/drivers/flash/*cadence* @ngboonkhai
/drivers/flash/*cc13xx_cc26xx* @pepe2k
/drivers/flash/*nrf* @de-nordic
/drivers/flash/*esp32* @sylvioalves
/drivers/fpga/ @tgorochowik @kgugala
/drivers/gpio/ @mnkp
/drivers/gpio/*b91* @andy-liu-telink
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*nct38xx* @MulinChao @ChiHuaL
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*eos_s3* @fkokosinski @kgugala
/drivers/gpio/*rcar* @aaillet
/drivers/gpio/*esp32* @sylvioalves
/drivers/gpio/*rpi_pico* @yonsch
/drivers/gpio/*xlnx_ps* @ibirnbaum
/drivers/gpio/*ads114s0x* @benediktibk
/drivers/gpio/*bd8lb600fs* @benediktibk
/drivers/gpio/*pcal64xxa* @benediktibk
/drivers/gpio/gpio_altera_pio.c @shilinte
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/i2c_common.c @sjg20
/drivers/i2c/i2c_emul.c @sjg20
/drivers/i2c/i2c_ite_enhance.c @GTLin08
/drivers/i2c/i2c_ite_it8xxx2.c @GTLin08
/drivers/i2c/i2c_shell.c @nashif
/drivers/i2c/Kconfig.i2c_emul @sjg20
/drivers/i2c/Kconfig.it8xxx2 @GTLin08
/drivers/i2c/target/*eeprom* @henrikbrixandersen
/drivers/i2c/Kconfig.test @mbolivar-ampere
/drivers/i2c/i2c_test.c @mbolivar-ampere
/drivers/i2c/*rcar* @aaillet
/drivers/i2s/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2s/*nrfx* @anangl
/drivers/i3c/ @dcpleung
/drivers/i3c/i3c_cdns.c @XenuIsWatching
/drivers/ieee802154/ @rlubos @tbursztyka @jukkar @fgrandel
/drivers/ieee802154/*b91* @andy-liu-telink
/drivers/ieee802154/ieee802154_nrf5* @jciupis
/drivers/ieee802154/ieee802154_rf2xx* @tbursztyka @nandojve
/drivers/ieee802154/ieee802154_cc13xx* @bwitherspoon @cfriedt @vaishnavachath
/drivers/interrupt_controller/ @dcpleung @nashif
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/interrupt_controller/*esp32* @sylvioalves
/drivers/interrupt_controller/intc_nuclei_eclic.c @soburi
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic
/drivers/ipm/ipm_cavs* @dcpleung @andyross
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/ipm/ipm_stm32_hsem.c @cameled
/drivers/ipm/ipm_esp32.c @uLipe
/drivers/ipm/ipm_ivshmem.c @uLipe
/drivers/kscan/ @VenkatKotakonda @franciscomunoz @sjvasanth1
/drivers/kscan/*xec* @franciscomunoz @sjvasanth1
/drivers/kscan/*ft5336* @MaureenHelm
/drivers/kscan/*ht16k33* @henrikbrixandersen
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar-ampere
/drivers/lora/ @Mani-Sadhasivam
/drivers/mbox/ @carlocaione
/drivers/misc/ @tejlmand
/drivers/misc/ft8xx/ @hubertmis
/drivers/mm/ @dcpleung
/drivers/modem/hl7800.c @rerickson1
/drivers/modem/simcom-sim7080.c @lgehreke
/drivers/modem/simcom-sim7080.h @lgehreke
/drivers/modem/Kconfig.hl7800 @rerickson1
/drivers/modem/Kconfig.simcom-sim7080 @lgehreke
/drivers/pcie/ @dcpleung @nashif @jhedberg
/drivers/peci/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/pinctrl/ @gmarull
/drivers/pinctrl/*esp32* @sylvioalves
/drivers/pinctrl/*it8xxx2* @ite
/drivers/pm_cpu_ops/ @carlocaione @gdengi
/drivers/pm_cpu_ops/psci_shell.c @nbalabak @gdengi
/drivers/power_domain/ @ceolin
/drivers/ps2/ @franciscomunoz @sjvasanth1
/drivers/ps2/*xec* @franciscomunoz @sjvasanth1
/drivers/ps2/*npcx* @MulinChao @ChiHuaL
/drivers/pwm/*b91* @andy-liu-telink
/drivers/pwm/*pca9685* @nixward
/drivers/pwm/*rpi_pico* @burumaj
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/*sam0* @nzmichaelh
/drivers/pwm/*test* @JordanYates
/drivers/pwm/*xlnx* @henrikbrixandersen
/drivers/pwm/pwm_capture.c @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/pwm/*gecko* @sun681
/drivers/pwm/*it8xxx2* @RuibinChang
/drivers/pwm/*esp32* @LucasTambor
/drivers/pwm/*rcar* @aaillet
/drivers/pwm/*max31790* @benediktibk
/drivers/regulator/* @gmarull
/drivers/regulator/regulator_pca9420.c @danieldegrasse
/drivers/regulator/regulator_rpi_pico.c @soburi
/drivers/regulator/regulator_shell.c @danieldegrasse
/drivers/reset/ @andrei-edward-popa
/drivers/reset/reset_intel_socfpga.c @nbalabak
/drivers/reset/Kconfig.intel_socfpga @nbalabak
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
/drivers/sensor/grow_r502a/ @DineshDK03
/drivers/sensor/hts*/ @avisconti
/drivers/sensor/ina23*/ @bbilas
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/mpr/ @sven-hm
/drivers/sensor/qdec_stm32/ @valeriosetti
/drivers/sensor/rpi_pico_temp/ @soburi
/drivers/sensor/st*/ @avisconti
/drivers/serial/*b91* @andy-liu-telink
/drivers/serial/uart_altera_jtag.c @nashif @gohshunjing
/drivers/serial/uart_altera.c @gohshunjing
/drivers/serial/*ns16550* @dcpleung @nashif @gdengi
/drivers/serial/*nrfx* @anangl
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.mcux_iuart @Mani-Sadhasivam
/drivers/serial/uart_mcux_iuart.c @Mani-Sadhasivam
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/*rpi_pico* @yonsch
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/uart_xlnx_uartlite.c @henrikbrixandersen
/drivers/serial/*xmc4xxx* @parthitce
/drivers/serial/*numicro* @ssekar15
/drivers/serial/*apbuart* @julius-barendt
/drivers/serial/*rcar* @aaillet
/drivers/serial/Kconfig.test @str4t0m
/drivers/serial/serial_test.c @str4t0m
/drivers/serial/Kconfig.xen @lorc @firscity
/drivers/serial/uart_hvc_xen.c @lorc @firscity
/drivers/serial/uart_hvc_xen_consoleio.c @lorc @firscity
/drivers/serial/Kconfig.it8xxx2 @GTLin08
/drivers/serial/uart_ite_it8xxx2.c @GTLin08
/drivers/smbus/ @finikorg
/drivers/sip_svc/ @maheshraotm
/drivers/disk/ @jfischer-no
/drivers/disk/sdmmc_sdhc.h @JunYangNXP
/drivers/disk/sdmmc_stm32.c @anthonybrandon
/drivers/net/ @rlubos @tbursztyka @jukkar
/drivers/ptp_clock/ @tbursztyka @jukkar
/drivers/spi/ @tbursztyka
/drivers/spi/*b91* @andy-liu-telink
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/spi/*esp32* @sylvioalves
/drivers/spi/*pl022* @soburi
/drivers/sdhc/ @danieldegrasse
/drivers/timer/*apic* @dcpleung @nashif
/drivers/timer/apic_tsc.c @andyross
/drivers/timer/*arm_arch* @carlocaione
/drivers/timer/*cortex_m_systick* @anangl
/drivers/timer/*altera_avalon* @nashif
/drivers/timer/*riscv_machine* @kgugala @pgielda
/drivers/timer/*ite_it8xxx2* @ite
/drivers/timer/*xlnx_psttc* @wjliang @stephanosio
/drivers/timer/*cc13xx_cc26xx_rtc* @vanti
/drivers/timer/*cavs* @dcpleung
/drivers/timer/*stm32_lptim* @FRASTM
/drivers/timer/*leon_gptimer* @julius-barendt
/drivers/timer/*mips_cp0* @frantony
/drivers/timer/*rcar_cmt* @aaillet
/drivers/timer/*esp32c3_sys* @uLipe
/drivers/timer/*sam0_rtc* @bendiscz
/drivers/timer/*arcv2* @ruuddw
/drivers/timer/*xtensa* @dcpleung
/drivers/timer/*rv32m1_lptmr* @mbolivar
/drivers/timer/*nrf_rtc* @anangl
/drivers/timer/*hpet* @dcpleung
/drivers/usb/ @jfischer-no
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/usb_c/ @sambhurst
/drivers/video/ @loicpoulain
/drivers/i2c/*b91* @andy-liu-telink
/drivers/i2c/i2c_ll_stm32* @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/i2c/*tca954x* @kurddt
/drivers/*/*xec* @franciscomunoz @albertofloyd @sjvasanth1
/drivers/w1/ @str4t0m
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/*sifive* @katsuster
/drivers/watchdog/wdt_handlers.c @dcpleung @nashif
/drivers/watchdog/*cc32xx* @pavlohamov
/drivers/watchdog/wdt_ite_it8xxx2.c @RuibinChang
/drivers/watchdog/Kconfig.it8xxx2 @RuibinChang
/drivers/watchdog/wdt_counter.c @nordic-krch
/drivers/watchdog/*rpi_pico* @thedjnK
/drivers/watchdog/*dw* @softwarecki
/drivers/watchdog/*ifx* @sreeramIfx
/drivers/wifi/ @rlubos @tbursztyka @jukkar
/drivers/wifi/esp_at/ @mniestroj
/drivers/wifi/eswifi/ @loicpoulain @nandojve
/drivers/wifi/winc1500/ @kludentwo
/drivers/virtualization/ @tbursztyka
/drivers/xen/ @lorc @firscity
/dts/arc/ @abrodkin @ruuddw @iriszzw @evgeniy-paltsev
/dts/arm/acsip/ @NorthernDean
/dts/arm/aspeed/ @aspeeddylan
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/sam4l* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/same70* @nandojve
/dts/arm/atmel/samv71* @nandojve
/dts/arm/atmel/ @galak
/dts/arm/broadcom/ @sbranden
/dts/arm/cypress/ @ifyall @npal-cy
/dts/arm/gigadevice/ @nandojve
/dts/arm/infineon/xmc4* @parthitce @ifyall @npal-cy
/dts/arm/infineon/psoc6/ @ifyall @npal-cy
/dts/arm64/ @carlocaione
/dts/arm64/armv8-r.dtsi @povergoing
/dts/arm64/intel/*intel_socfpga* @siclim
/dts/arm64/nxp/ @JiafeiPan
/dts/arm64/renesas/ @lorc @xakep-amatop
/dts/arm/quicklogic/ @fkokosinski @kgugala
/dts/arm/seeed/ @str4t0m
/dts/arm/st/ @erwango
/dts/arm/st/h7/*stm32h735* @benediktibk
/dts/arm/st/l4/*stm32l451* @benediktibk
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @anangl @carlescufi
/dts/arm/nuvoton/ @ssekar15 @MulinChao @ChiHuaL
/dts/arm/nxp/ @mmahadevan108 @dleach02
/dts/arm/nxp/nxp_s32* @manuargue
/dts/arm/microchip/ @franciscomunoz @albertofloyd @sjvasanth1
/dts/arm/rpi_pico/ @yonsch
/dts/arm/silabs/efm32_pg_1b.dtsi @rdmeneze
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efr32bg22* @kgugala @fkokosinski @pczarnecki
/dts/arm/silabs/efr32xg13p* @mnkp
/dts/arm/silabs/efm32pg1b* @rdmeneze
/dts/arm/silabs/efr32mg21* @l-alfred
/dts/arm/silabs/efr32fg13* @yonsch
/dts/riscv/ @kgugala @pgielda
/dts/riscv/ite/ @ite
/dts/riscv/microchip/microchip-miv.dtsi @galak
/dts/riscv/openisa/rv32m1* @dleach02
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/riscv/starfive/ @rajnesh-kanwal
/dts/riscv/andes/andes_v5* @cwshu @kevinwang821020 @jimmyzhe
/dts/riscv/niosv/ @sweeaun
/dts/arm/armv*m.dtsi @galak @ioannisg
/dts/arm/armv7-a.dtsi @ibirnbaum
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/arm/renesas/rcar/ @aaillet
/dts/x86/ @jhedberg
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/xtensa/espressif/ @sylvioalves
/dts/xtensa/nxp/ @iuliana-prodan @dbaluta
/dts/sparc/ @julius-barendt
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter @henrikbrixandersen
/dts/bindings/i2c/zephyr*i2c-emul*.yaml @sjg20
/dts/bindings/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/adc/*ads114s08.yaml @benediktibk
/dts/bindings/adc/*max111* @benediktibk
/dts/bindings/modem/*hl7800.yaml @rerickson1
/dts/bindings/serial/ns16550.yaml @dcpleung @nashif
/dts/bindings/counter/snps,dw-timers.yaml @pbalsundar
/dts/bindings/wifi/*esp-at.yaml @mniestroj
/dts/bindings/*/*gd32* @nandojve
/dts/bindings/*/*npcx* @MulinChao @ChiHuaL
/dts/bindings/*/*psoc6* @ifyall @npal-cy
/dts/bindings/*/*infineon*cat1* @ifyall @npal-cy
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @mmahadevan108 @dleach02
/dts/bindings/*/nxp*s32* @manuargue
/dts/bindings/*/openisa* @dleach02
/dts/bindings/*/raspberrypi*pico* @yonsch
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/andes* @cwshu @kevinwang821020 @jimmyzhe
/dts/bindings/*/neorv32* @henrikbrixandersen
/dts/bindings/*/*lan91c111* @sgrrzhf
/dts/bindings/i3c/ @dcpleung
/dts/bindings/pm_cpu_ops/* @carlocaione
/dts/bindings/ethernet/*gem.yaml @ibirnbaum
/dts/bindings/auxdisplay/*pt6314.yaml @xingrz
/dts/bindings/auxdisplay/* @thedjnK
/dts/posix/ @aescolar @daor-oti
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/*ina23* @bbilas
/dts/bindings/sensor/st* @avisconti
/dts/bindings/sensor/zephyr,sensing.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/sensor/zephyr,sensing*.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/smbus/ @finikorg
/dts/bindings/sip_svc/ @maheshraotm
/dts/bindings/cpu/intel,niosv.yaml @sweeaun
/dts/bindings/reset/intel,socfpga-reset.yaml @nbalabak
/dts/bindings/gpio/*pcal64* @benediktibk
/dts/bindings/gpio/*bd8lb600fs* @benediktibk
/dts/bindings/gpio/*ads114s0x* @benediktibk
/dts/bindings/pwm/*max31790* @benediktibk
/dts/bindings/dac/*ad56* @benediktibk
/dts/common/ @galak
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/zephyr/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/include/zephyr/drivers/adc.h @anangl
/include/zephyr/drivers/adc/ads114s0x.h @benediktibk
/include/zephyr/drivers/auxdisplay.h @thedjnK
/include/zephyr/drivers/can.h @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/can/ @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/counter.h @nordic-krch
/include/zephyr/drivers/dac.h @martinjaeger
/include/zephyr/drivers/espi.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/include/zephyr/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @de-nordic
/include/zephyr/drivers/i2c_emul.h @sjg20
/include/zephyr/drivers/i3c.h @dcpleung
/include/zephyr/drivers/i3c/ @dcpleung
/include/zephyr/drivers/led/ht16k33.h @henrikbrixandersen
/include/zephyr/drivers/interrupt_controller/ @dcpleung @nashif
/include/zephyr/drivers/interrupt_controller/gic.h @stephanosio
/include/zephyr/drivers/modem/hl7800.h @rerickson1
/include/zephyr/drivers/pcie/ @dcpleung
/include/zephyr/drivers/hwinfo.h @alexanderwachter
/include/zephyr/drivers/led.h @Mani-Sadhasivam
/include/zephyr/drivers/led_strip.h @mbolivar-ampere
/include/zephyr/drivers/sensor.h @MaureenHelm
/include/zephyr/drivers/smbus.h @finikorg
/include/zephyr/drivers/spi.h @tbursztyka
/include/zephyr/drivers/sip_svc/ @maheshraotm
/include/zephyr/drivers/lora.h @Mani-Sadhasivam
/include/zephyr/drivers/peci.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/pm_cpu_ops.h @carlocaione
/include/zephyr/drivers/pm_cpu_ops/ @carlocaione
/include/zephyr/drivers/w1.h @str4t0m
/include/zephyr/drivers/pwm/max31790.h @benediktibk
/include/zephyr/app_memory/ @dcpleung
/include/zephyr/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/arch.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/v2/irq.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arm @MaureenHelm @galak @ioannisg
/include/zephyr/arch/arm/cortex_a_r/ @stephanosio
/include/zephyr/arch/arm64/ @carlocaione
/include/zephyr/arch/arm64/cortex_r/ @povergoing
/include/zephyr/arch/arm/irq.h @carlocaione
/include/zephyr/arch/mips/ @frantony
/include/zephyr/arch/nios2/ @nashif
/include/zephyr/arch/nios2/arch.h @nashif
/include/zephyr/arch/posix/ @aescolar @daor-oti
/include/zephyr/arch/riscv/ @kgugala @pgielda
/include/zephyr/arch/x86/ @jhedberg @dcpleung
/include/zephyr/arch/common/ @andyross @nashif
/include/zephyr/arch/xtensa/ @andyross @dcpleung
/include/zephyr/arch/sparc/ @julius-barendt
/include/zephyr/sys/atomic.h @andyross
/include/zephyr/bluetooth/ @alwa-nordic @jhedberg @Vudentz @sjanc
/include/zephyr/bluetooth/audio/ @jhedberg @Vudentz @Thalley
/include/zephyr/cache.h @carlocaione @andyross
/include/zephyr/canbus/ @alexanderwachter @henrikbrixandersen
/include/zephyr/tracing/ @nashif
/include/zephyr/debug/ @nashif
/include/zephyr/debug/coredump.h @dcpleung
/include/zephyr/debug/gdbstub.h @ceolin
/include/zephyr/device.h @tbursztyka @nashif
/include/zephyr/devicetree.h @galak
/include/zephyr/devicetree/can.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/zephyr/dt-bindings/ethernet/xlnx_gem.h @ibirnbaum
/include/zephyr/dt-bindings/pcie/ @dcpleung
/include/zephyr/dt-bindings/pinctrl/esp* @sylvioalves
/include/zephyr/dt-bindings/pwm/*it8xxx2* @RuibinChang
/include/zephyr/dt-bindings/usb/usb.h @galak
/include/zephyr/dt-bindings/adc/ads114s0x_adc.h @benediktibk
/include/zephyr/drivers/emul.h @sjg20
/include/zephyr/data/ @d3zd3z
/include/zephyr/fs/ @nashif @de-nordic
/include/zephyr/init.h @nashif @andyross
/include/zephyr/irq.h @dcpleung @nashif @andyross
/include/zephyr/irq_offload.h @dcpleung @nashif @andyross
/include/zephyr/kernel.h @dcpleung @nashif @andyross
/include/zephyr/kernel_version.h @dcpleung @nashif @andyross
/include/zephyr/linker/app_smem*.ld @dcpleung @nashif
/include/zephyr/linker/ @dcpleung @nashif @andyross
/include/zephyr/logging/ @nordic-krch
/include/zephyr/lorawan/lorawan.h @Mani-Sadhasivam
/include/zephyr/mgmt/osdp.h @sidcha
/include/zephyr/mgmt/mcumgr/ @nordicjm
/include/zephyr/net/ @rlubos @tbursztyka @jukkar
/include/zephyr/net/buf.h @jhedberg @tbursztyka @rlubos @jukkar
/include/zephyr/net/coap*.h @rlubos
/include/zephyr/net/conn_mgr*.h @rlubos @glarsennordic @jukkar
/include/zephyr/net/gptp.h @rlubos @jukkar @fgrandel
/include/zephyr/net/ieee802154*.h @rlubos @tbursztyka @jukkar @fgrandel
/include/zephyr/net/lwm2m*.h @rlubos
/include/zephyr/net/mqtt.h @rlubos
/include/zephyr/net/mqtt_sn.h @rlubos @BeckmaR
/include/zephyr/net/net_pkt_filter.h @npitre
/include/zephyr/posix/ @cfriedt
/include/zephyr/pm/pm.h @nashif @ceolin
/include/zephyr/drivers/ptp_clock.h @tbursztyka @jukkar
/include/zephyr/rtio/ @teburd
/include/zephyr/sensing/ @lixuzha @ghu0510 @qianruh
/include/zephyr/shared_irq.h @dcpleung @nashif @andyross
/include/zephyr/shell/ @jakub-uC @nordic-krch
/include/zephyr/shell/shell_mqtt.h @ycsin
/include/zephyr/sw_isr_table.h @dcpleung @nashif @andyross
/include/zephyr/sd/ @danieldegrasse
/include/zephyr/sip_svc/ @maheshraotm
/include/zephyr/sys_clock.h @dcpleung @nashif @andyross
/include/zephyr/sys/sys_io.h @dcpleung @nashif @andyross
/include/zephyr/sys/kobject.h @dcpleung @nashif
/include/zephyr/toolchain.h @dcpleung @andyross @nashif
/include/zephyr/toolchain/ @dcpleung @nashif @andyross
/include/zephyr/zephyr.h @dcpleung @nashif @andyross
/kernel/ @dcpleung @nashif @andyross
/lib/cpp/ @stephanosio
/lib/smf/ @sambhurst
/lib/util/ @carlescufi @jakub-uC
/lib/util/fnmatch/ @carlescufi @jakub-uC
/lib/open-amp/ @arnopo
/lib/os/ @dcpleung @nashif @andyross
/lib/os/cbprintf_packaged.c @npitre
/lib/os/json.c @d3zd3z
/lib/posix/ @cfriedt
/lib/posix/getopt/ @jakub-uC
/subsys/portability/ @nashif
/subsys/sensing/ @lixuzha @ghu0510 @qianruh
/lib/libc/ @nashif
/lib/libc/arcmwdt/ @abrodkin @ruuddw @evgeniy-paltsev
/misc/ @tejlmand
/modules/ @nashif
/modules/canopennode/ @henrikbrixandersen
/modules/mbedtls/ @ceolin @d3zd3z
/modules/hal_gigadevice/ @nandojve
/modules/hal_nordic/nrf_802154/ @jciupis
/modules/trusted-firmware-m/ @microbuilder
/kernel/device.c @andyross @nashif
/kernel/idle.c @andyross @nashif
/samples/ @nashif
/samples/application_development/sysbuild/ @tejlmand @nordicjm
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/boards/*microbit* @jhe
/samples/bluetooth/ @jhedberg @Vudentz @alwa-nordic @sjanc
/samples/compression/ @Navin-Sankar
/samples/drivers/can/ @alexanderwachter @henrikbrixandersen
/samples/drivers/clock_control_litex/ @mateusz-holenko @kgugala @pgielda
/samples/drivers/eeprom/ @henrikbrixandersen
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/drivers/smbus/ @finikorg
/samples/subsys/lorawan/ @Mani-Sadhasivam
/samples/modules/canopennode/ @henrikbrixandersen
/samples/net/ @rlubos @tbursztyka @jukkar
/samples/net/cloud/tagoio_http_post/ @nandojve
/samples/net/dns_resolve/ @rlubos @tbursztyka @jukkar
/samples/net/gptp/ @rlubos @jukkar @fgrandel
/samples/net/lwm2m_client/ @rlubos
/samples/net/mqtt_publisher/ @rlubos
/samples/net/mqtt_sn_publisher/ @rlubos @BeckmaR
/samples/net/sockets/coap_*/ @rlubos
/samples/net/sockets/ @rlubos @tbursztyka @jukkar
/samples/sensor/ @MaureenHelm
/samples/shields/ @avisconti
/samples/subsys/ipc/ipc_service/icmsg @emob-nordic
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/logging/syst/ @dcpleung
/samples/subsys/shell/ @jakub-uC @nordic-krch @gdengi
/samples/subsys/sip_svc/ @maheshraotm
/samples/subsys/mgmt/mcumgr/ @de-nordic @nordicjm
/samples/subsys/mgmt/updatehub/ @nandojve @otavio
/samples/subsys/mgmt/osdp/ @sidcha
/samples/subsys/usb/ @jfischer-no
/samples/subsys/usb_c/ @sambhurst
/samples/subsys/pm/ @nashif @ceolin
/samples/subsys/sensing/ @lixuzha @ghu0510 @qianruh
/samples/tfm_integration/ @microbuilder
/samples/userspace/ @dcpleung @nashif
/scripts/release/bug_bash.py @cfriedt
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/coredump/ @dcpleung
/scripts/footprint/ @nashif
/scripts/kconfig/ @ulfalizer
/scripts/logging/dictionary/ @dcpleung
/scripts/native_simulator/ @aescolar
/scripts/pylib/twister/expr_parser.py @nashif
/scripts/schemas/twister/ @nashif
/scripts/build/gen_app_partitions.py @dcpleung @nashif
scripts/build/gen_image_info.py @tejlmand
/scripts/get_maintainer.py @nashif
/scripts/dts/ @mbolivar-ampere @galak
/scripts/release/ @nashif
/scripts/ci/ @nashif
/scripts/ci/check_compliance.py @nashif @carlescufi
/arch/x86/gen_gdt.py @dcpleung @nashif
/arch/x86/gen_idt.py @dcpleung @nashif
/scripts/build/gen_kobject_list.py @dcpleung @nashif
/scripts/build/gen_kobject_placeholders.py @dcpleung
/scripts/build/gen_syscalls.py @dcpleung @nashif
/scripts/list_boards.py @mbolivar-ampere
/scripts/build/process_gperf.py @dcpleung @nashif
/scripts/build/gen_relocate_app.py @dcpleung
/scripts/generate_usb_vif/ @madhurimaparuchuri
/scripts/requirements*.txt @mbolivar-ampere @galak @nashif
/scripts/tests/build/test_subfolder_list.py @rmstoi
/scripts/tracing/ @nashif
/scripts/pylib/twister/ @nashif
/scripts/twister @nashif
/scripts/series-push-hook.sh @erwango
/scripts/utils/pinctrl_nrf_migrate.py @gmarull
/scripts/utils/migrate_mcumgr_kconfigs.py @de-nordic
/scripts/west_commands/ @mbolivar-ampere
/scripts/west_commands/blobs.py @carlescufi
/scripts/west_commands/fetchers/ @carlescufi
/scripts/west_commands/runners/gd32isp.py @mbolivar-ampere @nandojve
/scripts/west_commands/tests/test_gd32isp.py @mbolivar-ampere @nandojve
/scripts/west-commands.yml @mbolivar-ampere
/scripts/zephyr_module.py @tejlmand
/scripts/build/uf2conv.py @petejohanson
/scripts/build/user_wordsize.py @cfriedt
/scripts/valgrind.supp @aescolar @daor-oti
/share/sysbuild/ @tejlmand @nordicjm
/share/zephyr-package/ @tejlmand
/share/zephyrunittest-package/ @tejlmand
/subsys/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/subsys/bluetooth/audio/ @jhedberg @Vudentz @Thalley @sjanc
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot @kruithofa
/subsys/bluetooth/host/ @alwa-nordic @jhedberg @Vudentz @sjanc
/subsys/bluetooth/mesh/ @jhedberg @PavelVPV @Vudentz @LingaoM
/subsys/canbus/ @alexanderwachter @henrikbrixandersen
/subsys/debug/ @nashif
/subsys/debug/coredump/ @dcpleung
/subsys/debug/gdbstub/ @ceolin
/subsys/debug/gdbstub.c @ceolin
/subsys/dfu/ @de-nordic @nordicjm
/subsys/disk/ @jfischer-no
/subsys/dsp/ @yperess
/subsys/tracing/ @nashif
/subsys/debug/asan_hacks.c @aescolar @daor-oti
/subsys/demand_paging/ @dcpleung @nashif
/subsys/emul/ @sjg20
/subsys/fb/ @jfischer-no
/subsys/fs/ @nashif
/subsys/fs/nvs/ @Laczen
/subsys/ipc/ @carlocaione
/subsys/ipc/ipc_service/*/*icmsg* @emob-nordic
/subsys/jwt/ @d3zd3z
/subsys/logging/ @nordic-krch
/subsys/logging/backends/log_backend_net.c @nordic-krch @rlubos @jukkar
/subsys/lorawan/ @Mani-Sadhasivam
/subsys/mgmt/ec_host_cmd/ @jettr
/subsys/mgmt/mcumgr/ @carlescufi @de-nordic @nordicjm
/subsys/mgmt/hawkbit/ @Navin-Sankar
/subsys/mgmt/updatehub/ @nandojve @otavio
/subsys/mgmt/osdp/ @sidcha
/subsys/modbus/ @jfischer-no
/subsys/net/buf.c @jhedberg @tbursztyka @rlubos @jukkar
/subsys/net/conn_mgr/ @rlubos @glarsennordic @jukkar
/subsys/net/ip/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/dns/ @rlubos @tbursztyka @cfriedt @jukkar
/subsys/net/lib/lwm2m/ @rlubos
/subsys/net/lib/config/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/mqtt/ @rlubos
/subsys/net/lib/mqtt_sn/ @rlubos @BeckmaR
/subsys/net/lib/coap/ @rlubos
/subsys/net/lib/sockets/socketpair.c @cfriedt
/subsys/net/lib/sockets/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @rlubos @tbursztyka @jukkar
/subsys/net/l2/ethernet/gptp/ @rlubos @jukkar @fgrandel
/subsys/net/l2/ieee802154/ @rlubos @tbursztyka @jukkar @fgrandel
/subsys/net/l2/canbus/ @alexanderwachter
/subsys/net/pkt_filter/ @npitre
/subsys/net/*/openthread/ @rlubos
/subsys/pm/ @nashif @ceolin
/subsys/random/ @dleach02
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/shell/backends/shell_mqtt.c @ycsin
/subsys/sd/ @danieldegrasse
/subsys/sip_svc/ @maheshraotm
/subsys/task_wdt/ @martinjaeger
/subsys/testsuite/ @nashif
/subsys/testsuite/ztest/*/ztress* @nordic-krch
/subsys/timing/ @nashif @dcpleung
/subsys/usb/ @jfischer-no
/subsys/usb/usb_c/ @sambhurst
/tests/ @nashif
/tests/arch/arm/ @ioannisg @stephanosio
/tests/benchmarks/cmsis_dsp/ @stephanosio
/tests/boards/native_posix/ @aescolar @daor-oti
/tests/bluetooth/ @alwa-nordic @jhedberg @Vudentz @sjanc
/tests/bluetooth/audio/ @jhedberg @Vudentz @wopu-ot @Thalley
/tests/bluetooth/controller/ @cvinayak @thoh-ot @kruithofa @erbr-ot @sjanc @ppryga
/tests/bsim/bluetooth/ @alwa-nordic @jhedberg @Vudentz @wopu-ot
/tests/bsim/bluetooth/audio/ @jhedberg @Vudentz @wopu-ot @Thalley
/tests/bsim/bluetooth/mesh/ @jhedberg @Vudentz @wopu-ot @PavelVPV
/tests/bluetooth/mesh_shell/ @jhedberg @Vudentz @sjanc @PavelVPV
/tests/bluetooth/tester/ @alwa-nordic @jhedberg @Vudentz @sjanc
/tests/posix/ @cfriedt
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin @d3zd3z
/tests/drivers/can/ @alexanderwachter @henrikbrixandersen
/tests/drivers/counter/ @nordic-krch
/tests/drivers/eeprom/ @henrikbrixandersen @sjg20
/tests/drivers/flash_simulator/ @de-nordic
/tests/drivers/gpio/ @mnkp
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/smbus/ @finikorg
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @anangl
/tests/drivers/w1/ @str4t0m
/tests/kernel/ @dcpleung @andyross @nashif
/tests/lib/ @nashif
/tests/lib/cmsis_dsp/ @stephanosio
/tests/lib/json/ @d3zd3z
/tests/net/ @rlubos @tbursztyka @jukkar
/tests/net/buf/ @jhedberg @tbursztyka @jukkar
/tests/net/conn_mgr_monitor/ @rlubos @glarsennordic @jukkar
/tests/net/conn_mgr_conn/ @rlubos @glarsennordic @jukkar
/tests/net/ieee802154/l2/ @rlubos @tbursztyka @jukkar @fgrandel
/tests/net/lib/ @rlubos @tbursztyka @jukkar
/tests/net/lib/http_header_fields/ @rlubos @tbursztyka @jukkar
/tests/net/lib/mqtt_packet/ @rlubos
/tests/net/lib/mqtt_sn_packet/ @rlubos @BeckmaR
/tests/net/lib/mqtt_sn_client/ @rlubos @BeckmaR
/tests/net/lib/coap/ @rlubos
/tests/net/npf/ @npitre
/tests/net/socket/socketpair/ @cfriedt
/tests/net/socket/ @rlubos @tbursztyka @jukkar
/tests/subsys/debug/coredump/ @dcpleung
/tests/subsys/fs/ @nashif @de-nordic
/tests/subsys/jwt/ @d3zd3z
/tests/subsys/mgmt/mcumgr/ @de-nordic @nordicjm
/tests/subsys/sd/ @danieldegrasse
/tests/subsys/rtio/ @teburd
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
/doc/kernel/ @andyross @nashif
*posix*.rst @aescolar @daor-oti


@@ -1,139 +1,78 @@
# Zephyr Project Code of Conduct
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
Examples of behavior that contributes to creating a positive environment
include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior include:
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
professional setting
## Enforcement Responsibilities
## Our Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@zephyrproject.org. Reports will be received by the Chair of the Zephyr
Governing Board, the Zephyr Project Director (Linux Foundation), and the Zephyr
Project Developer Advocate (Linux Foundation). You may refer to the [Governing
Board](https://zephyrproject.org/governing-board/) and [Linux Foundation
Staff](https://zephyrproject.org/staff/) web pages to identify who are the
individuals currently holding these positions.
All complaints will be reviewed and investigated promptly and fairly.
reported by contacting the project team at conduct@zephyrproject.org.
Reports will be received by Kate Stewart (Linux Foundation) and Amy Occhialino
(Intel). All complaints will be reviewed and investigated, and will result in a
response that is deemed necessary and appropriate to the circumstances. The
project team is obligated to maintain confidentiality with regard to the
reporter of an incident. Further details of specific enforcement policies may
be posted separately.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
The only changes made by The Zephyr Project to the original document were to
make explicit who the recipients of Code of Conduct incident reports are.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -1,19 +0,0 @@
# Constant variables to be used across Kconfig options
# Copyright (c) 2024 basalte bv
# SPDX-License-Identifier: Apache-2.0
INT8_MIN := -128
INT16_MIN := -32768
INT32_MIN := -2147483648
INT64_MIN := -9223372036854775808
INT8_MAX := 127
INT16_MAX := 32767
INT32_MAX := 2147483647
INT64_MAX := 9223372036854775807
UINT8_MAX := 255
UINT16_MAX := 65535
UINT32_MAX := 4294967295
UINT64_MAX := 18446744073709551615
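These constants only become useful once another Kconfig file references them through preprocessor expansion ($(...)) after sourcing Kconfig.constants, as the next diff does. A minimal sketch, with a hypothetical APP_RX_BUFFER_SIZE option invented purely for illustration:

config APP_RX_BUFFER_SIZE
    int "Receive buffer size in bytes"
    default 1024
    # Bound the value with the shared constant instead of a hard-coded literal.
    range 1 $(INT32_MAX)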


@@ -5,13 +5,7 @@
# Copyright (c) 2023 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
source "Kconfig.constants"
osource "$(APPLICATION_SOURCE_DIR)/VERSION"
# This should be sourced early since the autogen Kconfig.dts options
# and macros may get used by shields/boards/SoC defconfig or modules.
source "dts/Kconfig"
osource "${APPLICATION_SOURCE_DIR}/VERSION"
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
@@ -21,22 +15,27 @@ source "dts/Kconfig"
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $KCONFIG_BOARD_DIR will be glob patterns when building documentation.
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
# This loads custom shields defconfigs (from BOARD_ROOT)
osource "$(KCONFIG_BINARY_DIR)/Kconfig.shield.defconfig"
# This loads Zephyr base shield defconfigs
source "boards/shields/*/Kconfig.defconfig"
osource "$(KCONFIG_BOARD_DIR)/Kconfig.defconfig"
# This loads Zephyr specific SoC root defconfigs
source "$(KCONFIG_BINARY_DIR)/soc/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
# This loads custom SoC root defconfigs
osource "$(KCONFIG_BINARY_DIR)/Kconfig.soc.defconfig"
# This loads Zephyr base SoC root defconfigs
osource "soc/$(ARCH)/*/Kconfig.defconfig"
# This loads the toolchain defconfigs
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig.defconfig"
# This loads the testsuite defconfig
source "subsys/testsuite/Kconfig.defconfig"
# This should be early since the autogen Kconfig.dts symbols may get
# used by modules
source "dts/Kconfig"
menu "Modules"
source "modules/Kconfig"
@@ -142,17 +141,6 @@ config ROM_START_OFFSET
alignment requirements on most ARM targets, although some targets
may require smaller or larger values.
config ROM_END_OFFSET
hex "ROM end offset"
default 0
help
If non-zero, this option reduces the maximum size that the Zephyr image is allowed to
occupy, this is to allow for additional image storage which can be created and used by
other systems such as bootloaders (for MCUboot, this would include the image swap
fields and TLV storage at the end of the image).
If unsure, leave at the default value 0.
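A minimal sketch of how this could be set from an application configuration fragment; the 0x2000 value is purely illustrative and stands in for whatever trailer space the chosen bootloader actually needs:

# Hypothetical prj.conf fragment: keep the last 8 KiB of flash free for
# bootloader metadata such as image swap fields and TLV storage.
CONFIG_ROM_END_OFFSET=0x2000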
config LD_LINKER_SCRIPT_SUPPORTED
bool
default y
@@ -215,6 +203,12 @@ config LINKER_SORT_BY_ALIGNMENT
in decreasing size of symbols. This helps to minimize
padding between symbols.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
help
The option specifies that the vector table should be placed at the
start of SRAM instead of the start of flash.
config HAS_SRAM_OFFSET
bool
help
@@ -254,20 +248,6 @@ config LINKER_USE_PINNED_SECTION
Requires that pinned sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_USE_ONDEMAND_SECTION
bool "Use Evictable Linker Section"
depends on DEMAND_MAPPING
depends on !LINKER_USE_PINNED_SECTION
depends on !ARCH_MAPS_ALL_RAM
help
If enabled, the symbols which may be evicted from memory
will be put into a linker section reserved for on-demand symbols.
During boot, the corresponding memory will be mapped as paged out.
This is conceptually the opposite of CONFIG_LINKER_USE_PINNED_SECTION.
Requires that on-demand sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
bool "Generic sections are present at boot" if DEMAND_PAGING && LINKER_USE_PINNED_SECTION
default y
@@ -283,7 +263,7 @@ config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
config LINKER_LAST_SECTION_ID
bool "Last section identifier"
default y if !ARM64
default y
depends on ARM || ARM64 || RISCV
help
If enabled, the last section will contain an identifier.
@@ -326,142 +306,33 @@ config LINKER_USE_RELAX
endmenu # "Linker Sections"
config LINKER_ITERABLE_SUBALIGN
int
default 8 if 64BIT
default 4
help
Hidden option for the default subalignment of iterable sections.
config LINKER_DEVNULL_SUPPORT
bool
default y if CPU_CORTEX_M || (RISCV && !64BIT)
config LINKER_DEVNULL_MEMORY
bool "Devnull region"
depends on LINKER_DEVNULL_SUPPORT
help
Devnull region is created. It is stripped from final binary but remains
in byproduct elf file.
config LINKER_DEVNULL_MEMORY_SIZE
int "Devnull region size"
depends on LINKER_DEVNULL_MEMORY
default 262144
help
Size can be adjusted so it fits all data placed in that region.
endmenu
menu "Compiler Options"
config REQUIRES_STD_C99
bool
select DEPRECATED
help
Hidden option to select compiler support C99 standard or higher.
config REQUIRES_STD_C11
bool
select DEPRECATED
select REQUIRES_STD_C99
help
Hidden option to select compiler support C11 standard or higher.
config REQUIRES_STD_C17
bool
help
Hidden option to select compiler support C17 standard or higher.
config REQUIRES_STD_C23
bool
select REQUIRES_STD_C17
help
Hidden option to select compiler support C23 standard or higher.
choice STD_C
prompt "C Standard"
default STD_C23 if REQUIRES_STD_C23
default STD_C17
help
C Standards.
config STD_C90
bool "C90 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C99
help
1989 C standard as completed in 1989 and ratified by ISO/IEC
as ISO/IEC 9899:1990. This version is known as "ANSI C".
config STD_C99
bool "C99 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C11
help
1999 C standard.
config STD_C11
bool "C11 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C17
help
2011 C standard.
config STD_C17
bool "C17"
depends on !REQUIRES_STD_C23
help
2017 C standard, addresses defects in C11 without introducing
new language features.
config STD_C23
bool "C23"
help
2023 C standard.
endchoice
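The REQUIRES_STD_C* symbols above are hidden, so other options drive them via select. A minimal sketch, with a hypothetical MY_PARSER option invented for illustration; selecting REQUIRES_STD_C11 makes the deprecated C90 and C99 entries of the STD_C choice unavailable:

config MY_PARSER
    bool "Enable the example parser"
    # Needs at least C11; this also pulls in REQUIRES_STD_C99 transitively.
    select REQUIRES_STD_C11
    help
      Uses C11 anonymous unions, so an older C standard cannot be
      selected while this option is enabled.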
config TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
bool
default y if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr"
help
Hidden option to signal that toolchain supports GNU Extensions.
config GNU_C_EXTENSIONS
bool "GNU C Extensions"
depends on TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
help
Enable GNU C Extensions. GNU C provides several language features
not found in ISO standard C.
config CODING_GUIDELINE_CHECK
bool "Enforce coding guideline rules"
help
Use available compiler flags to check coding guideline rules during
the build.
config NATIVE_LIBC
bool
select FULL_LIBC_SUPPORTED
select TC_PROVIDES_POSIX_C_LANG_SUPPORT_R
help
Zephyr will use the host system C library.
config NATIVE_LIBCPP
bool
select FULL_LIBCPP_SUPPORTED
help
Zephyr will use the host system C++ library
config NATIVE_BUILD
bool
select NATIVE_LIBC if EXTERNAL_LIBC
select NATIVE_LIBCPP if EXTERNAL_LIBCPP
select FULL_LIBC_SUPPORTED
select FULL_LIBCPP_SUPPORTED if CPP
help
Zephyr will be built targeting the host system for debug and
development purposes.
config NATIVE_APPLICATION
bool
default y if ARCH_POSIX
depends on !NATIVE_LIBRARY
select NATIVE_BUILD
help
Build as a native application that can run on the host and using
resources and libraries provided by the host.
config NATIVE_LIBRARY
bool
select NATIVE_BUILD
@@ -482,7 +353,6 @@ choice COMPILER_OPTIMIZATIONS
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm"
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
@@ -495,12 +365,6 @@ config SIZE_OPTIMIZATIONS
Compiler optimizations will be set to -Os independently of other
options.
config SIZE_OPTIMIZATIONS_AGGRESSIVE
bool "Aggressively optimize for size"
help
Compiler optimizations will be set to -Oz independently of other
options.
config SPEED_OPTIMIZATIONS
bool "Optimize for speed"
help
@@ -523,35 +387,11 @@ config NO_OPTIMIZATIONS
default stack sizes in order to avoid stack overflows.
endchoice
config LTO
bool "Link Time Optimization"
depends on !(GEN_ISR_TABLES || GEN_IRQ_VECTOR_TABLE) || ISR_TABLES_LOCAL_DECLARATION
depends on !NATIVE_LIBRARY
depends on !CODE_DATA_RELOCATION
help
This option enables Link Time Optimization.
config LTO_SINGLE_THREADED
bool "Single-threaded LTO"
depends on LTO
help
This option instructs the linker to use a single thread to process
LTO. See the following issue for more info:
https://github.com/zephyrproject-rtos/sdk-ng/issues/1038
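A minimal sketch of a configuration fragment enabling these options together; whether it applies depends on the target, since LTO as defined above needs either no generated ISR tables or the locally declared ISR table scheme:

# Hypothetical prj.conf fragment for a target using generated ISR tables.
CONFIG_ISR_TABLES_LOCAL_DECLARATION=y
CONFIG_LTO=y
# Optional workaround for parallel-LTO linker issues, at the cost of link time.
CONFIG_LTO_SINGLE_THREADED=y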
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config DEPRECATION_TEST
bool "Indicate test for deprecated feature"
help
This option is selected by tests which check functionality of
deprecated features. It ensures that COMPILER_WARNINGS_AS_ERRORS
is not selected as that would generate errors when the deprecated
features are used.
config COMPILER_SAVE_TEMPS
bool "Save temporary object files"
help
@@ -622,13 +462,6 @@ config MISRA_SANE
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
config TOOLCHAIN_SUPPORTS_VLA_IN_STATEMENTS
bool
default y
help
Hidden symbol to state if the toolchain can handle vla in
statements.
endmenu
choice
@@ -638,17 +471,17 @@ choice
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECKIF() macro.
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECKIF() macro.
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Runtime error checks"
help
Always perform runtime checks covered with the CHECKIF() macro. This
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
@@ -685,24 +518,6 @@ config OUTPUT_DISASSEMBLE_ALL
The .lst file will contain complete disassembly of the firmware
not just those expected to contain instructions including zeros
config OUTPUT_DISASSEMBLY_WITH_SOURCE
bool "Include source code in output disassembly file"
default y
depends on OUTPUT_DISASSEMBLY && !OUTPUT_DISASSEMBLE_ALL
help
The .lst file will also contain the source code. Having
control over this can be useful for reproducible builds
since it can be used to remove one of the elements of
the .lst file that can vary across platforms because
of reasons such as having ".." include paths.
config OUTPUT_DISASSEMBLY_NO_ALIASES
bool "Disassemble with raw instruction mnemonics"
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain raw instruction mnemonic instead of
pseudo instructions.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -724,21 +539,8 @@ config CLEANUP_INTERMEDIATE_FILES
from the build process. Note this breaks incremental builds, west spdx
(Software Bill of Material generation), and maybe others.
config BUILD_GAP_FILL_PATTERN
hex "Gap fill pattern"
default 0xFF
help
Pattern used for gap filling of output files.
This value should be set to the value of a clean flash as this can
significantly reduce flash write times.
This setting only defines the gap fill pattern and doesn't enable gap
filling.
Note: binary files are always gap filled as they contain no address
information.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/s19 files [DEPRECATED]."
select DEPRECATED
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
@@ -746,12 +548,6 @@ config BUILD_OUTPUT_HEX
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_HEX_GAP_FILL
bool "Fill gaps in hex files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in hex based files.
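In the tree where these per-format options exist, a gap-filled hex image could be requested with a fragment like this sketch; the 0xFF pattern is only an assumption that matches the erased state of typical NOR flash:

# Hypothetical prj.conf fragment: emit zephyr.hex with gaps filled.
CONFIG_BUILD_OUTPUT_HEX=y
CONFIG_BUILD_OUTPUT_HEX_GAP_FILL=y
CONFIG_BUILD_GAP_FILL_PATTERN=0xFF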
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
@@ -786,12 +582,6 @@ config BUILD_OUTPUT_S19
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_S19_GAP_FILL
bool "Fill gaps in s19 files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in s19 based files.
config BUILD_OUTPUT_UF2
bool "Build a binary in UF2 format"
depends on BUILD_OUTPUT_BIN
@@ -799,12 +589,6 @@ config BUILD_OUTPUT_UF2
Build a UF2 binary zephyr/zephyr.uf2 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_MOT
bool "Build a binary in MOT format"
help
Build a MOT binary zephyr/zephyr.mot in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
if BUILD_OUTPUT_UF2
config BUILD_OUTPUT_UF2_FAMILY_ID
@@ -812,10 +596,9 @@ config BUILD_OUTPUT_UF2_FAMILY_ID
default "0x1c5f21b0" if SOC_SERIES_ESP32
default "0x621e937a" if SOC_NRF52833_QIAA
default "0xada52840" if SOC_NRF52840_QIAA
default "0x4fb2d5bd" if SOC_SERIES_IMXRT10XX || SOC_SERIES_IMXRT11XX
default "0x4fb2d5bd" if SOC_SERIES_IMX_RT
default "0x2abc77ec" if SOC_SERIES_LPC55XXX
default "0xe48bff56" if SOC_SERIES_RP2040
default "0xe48bff57" if SOC_SERIES_RP2350
default "0xe48bff56" if SOC_SERIES_RP2XXX
default "0x68ed2b88" if SOC_SERIES_SAMD21
default "0x55114460" if SOC_SERIES_SAMD51
default "0x647824b6" if SOC_SERIES_STM32F0X
@@ -856,11 +639,6 @@ config BUILD_OUTPUT_STRIPPED
Build a stripped binary zephyr/zephyr.strip in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_COMPRESS_DEBUG_SECTIONS
bool "Compress debug sections in the ELF file"
help
Compress debug sections in the ELF file to reduce the file size.
config BUILD_OUTPUT_ADJUST_LMA
string
help
@@ -885,23 +663,6 @@ config BUILD_OUTPUT_ADJUST_LMA
default "$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_IMAGE_M4))-\
$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH))"
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
def_string "*"
depends on BUILD_OUTPUT_ADJUST_LMA!=""
help
This determines the output sections to which the above LMA adjustment
will be applied.
The value can be the name of a section in the final ELF, like "text".
It can also be a pattern with wildcards, such as "*bss", which could
match more than one section name. Multiple such patterns can be given
as a ";"-separated list. It's possible to supply a 'negative' pattern
starting with "!", to exclude sections matched by a preceding pattern.
By default, all sections will have their LMA adjusted. The following
example excludes one section produced by the code relocation feature:
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
default "*;!.extflash_text_reloc"
config BUILD_OUTPUT_INFO_HEADER
bool "Create a image information header"
help
@@ -973,7 +734,7 @@ config BUILD_OUTPUT_STRIP_PATHS
bool "Strip absolute paths from binaries"
default y
help
If the compiler supports it, strip the $(ZEPHYR_BASE) prefix from the
If the compiler supports it, strip the ${ZEPHYR_BASE} prefix from the
__FILE__ macro used in __ASSERT*, in the
.noinit."/home/joe/zephyr/fu/bar.c" section names and in any
application code.
@@ -986,9 +747,7 @@ config BUILD_OUTPUT_STRIP_PATHS
config CHECK_INIT_PRIORITIES
bool "Build time initialization priorities check"
default y
# If we are building a native_simulator target, we can only check the init priorities
# if we are building the final output but we are not assembling several images together
depends on !(NATIVE_LIBRARY && (!BUILD_OUTPUT_EXE || NATIVE_SIMULATOR_EXTRA_IMAGE_PATHS != ""))
depends on !NATIVE_LIBRARY
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "armclang"
help
Check the build for initialization priority issues by comparing the
@@ -996,7 +755,16 @@ config CHECK_INIT_PRIORITIES
derived from the devicetree definition.
Fails the build on priority errors (dependent devices, inverted
priority).
priority), see CHECK_INIT_PRIORITIES_FAIL_ON_WARNING to fail on
warnings (dependent devices, same priority) as well.
config CHECK_INIT_PRIORITIES_FAIL_ON_WARNING
bool "Fail the build on priority check warnings"
depends on CHECK_INIT_PRIORITIES
help
Fail the build if the dependency check script identifies any pair of
devices depending on each other but initialized with the same
priority.
config EMIT_ALL_SYSCALLS
bool "Emit all possible syscalls in the tree"
@@ -1012,8 +780,6 @@ config DEPRECATED
help
Symbol that must be selected by a feature or module if it is
considered to be deprecated.
When adding this to an option, remember to follow the instructions in
https://docs.zephyrproject.org/latest/develop/api/api_lifecycle.html#deprecated
config WARN_DEPRECATED
bool
@@ -1021,8 +787,7 @@ config WARN_DEPRECATED
prompt "Warn on deprecated usage"
help
Print a warning when the Kconfig tree is parsed if any deprecated
features are enabled, or at compile time, when deprecated macros or
symbols are used.
features are enabled.
config EXPERIMENTAL
bool
@@ -1037,12 +802,6 @@ config WARN_EXPERIMENTAL
Print a warning when the Kconfig tree is parsed if any experimental
features are enabled.
config NOT_SECURE
bool
help
Symbol to be selected by a feature to indicate that feature is
not secure.
config TAINT
bool
help
@@ -1072,10 +831,33 @@ menu "Boot Options"
config IS_BOOTLOADER
bool "Act as a bootloader"
depends on XIP
depends on ARM
help
This option indicates that Zephyr will act as a bootloader to execute
a separate Zephyr image payload.
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader"
default 0
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
This option specifies the amount of SRAM (measured in kB) reserved for
a bootloader image, when either:
- the Zephyr image itself is to act as the bootloader, or
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
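A minimal sketch of the corresponding configuration fragment; the 16 kB figure is invented for illustration and would normally be derived from the actual bootloader footprint:

# Hypothetical prj.conf fragment for a RAM-loaded (non-XIP) image whose
# bootloader occupies the first 16 kB of SRAM.
CONFIG_XIP=n
CONFIG_BOOTLOADER_SRAM_SIZE=16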
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
depends on SOC_FAMILY_ESP32 && !BOOTLOADER_MCUBOOT && !MCUBOOT
default y
help
This option will trigger the compilation of the ESP-IDF bootloader
inside the build folder.
At flash time, the bootloader will be flashed with the zephyr image
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION
@@ -1121,17 +903,11 @@ endmenu
menu "Compatibility"
config LEGACY_GENERATED_INCLUDE_PATH
bool "Legacy include path for generated headers"
config COMPAT_INCLUDES
bool "Suppress warnings when using header shims"
default y
help
Allow applications and libraries to use the Zephyr legacy include
path for the generated headers which does not use the `zephyr/` prefix.
From now on, the preferred way to include the `version.h` header is to
use <zephyr/version.h>, this Kconfig is currently enabled by default so that
user applications won't immediately fail to compile.
This Kconfig will be deprecated and eventually removed in the future releases.
Suppress any warnings from the pre-processor when including
deprecated header files.
endmenu


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,10 +0,0 @@
> [!IMPORTANT]
> The license files in this directory are maintained as per the recommendation
> of the REUSE specification (https://reuse.software/spec-3.3).
> These are licenses that may apply to some components hosted in the source tree
> but that do not end up in built binaries
> (see https://docs.zephyrproject.org/latest/LICENSING.html#zephyr-licensing).
>
> These files do _not_ define the license of the Zephyr project itself.
> Zephyr is licensed under the Apache 2.0 license, as specified in
> the [LICENSE](/LICENSE) file at the root of the repository.

File diff suppressed because it is too large


@@ -10,9 +10,12 @@
</p>
</a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a href="https://scorecard.dev/viewer/?uri=github.com/zephyrproject-rtos/zephyr"><img src="https://api.securityscorecards.dev/projects/github.com/zephyrproject-rtos/zephyr/badge"></a>
<a href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain"><img src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a
href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain">
<img
src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -24,7 +27,7 @@ resource-constrained systems: from simple embedded environmental sensors and
LED wearables to sophisticated smart watches and IoT wireless gateways.
The Zephyr kernel supports multiple architectures, including ARM (Cortex-A,
Cortex-R, Cortex-M), Intel x86, ARC, Tensilica Xtensa, and RISC-V,
Cortex-R, Cortex-M), Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V,
SPARC, MIPS, and a large number of `supported boards`_.
.. below included in doc/introduction/introduction.rst


@@ -1,26 +0,0 @@
version = 1
# Declare default license and copyright text for files that typically do not or cannot include them.
[[annotations]]
path = [
# zephyr-keep-sorted-start
"**/*.cfg",
"**/*.cmake",
"**/*.conf",
"**/*.ecl",
"**/*.html",
"**/*.json",
"**/*.rst",
"**/*.yaml",
"**/*.yml",
"**/*_defconfig",
"**/CMakeLists.txt",
"**/Kconfig*",
"doc/requirements.in",
"doc/requirements.txt",
"scripts/requirements-*.in",
"scripts/requirements-*.txt",
# zephyr-keep-sorted-stop
]
SPDX-License-Identifier = "Apache-2.0"
SPDX-FileCopyrightText = "Copyright The Zephyr Project Contributors"


@@ -1 +0,0 @@
0.17.4


@@ -1,5 +1,5 @@
VERSION_MAJOR = 4
VERSION_MINOR = 3
PATCHLEVEL = 99
VERSION_MAJOR = 3
VERSION_MINOR = 5
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -8,10 +8,8 @@
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
source "$(KCONFIG_BINARY_DIR)/arch/Kconfig"
# ToDo: Generate a Kconfig.arch for loading of additional arch in HWMv2.
osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"
# Architecture symbols
#
@@ -21,10 +19,10 @@ osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
config ARC
bool
select ARCH_IS_SET
select HAS_DTS
imply XIP
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_SUPPORTS_ROM_START
select ARCH_HAS_DIRECTED_IPIS
help
ARC architecture
@@ -32,8 +30,7 @@ config ARM
bool
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_THREADS if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if CPU_CORTEX_M
select HAS_DTS
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
@@ -46,18 +43,14 @@ config ARM64
bool
select ARCH_IS_SET
select 64BIT
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select HAS_ARM_SMCCC
select ARCH_HAS_THREAD_LOCAL_STORAGE
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select BARRIER_OPERATIONS_ARCH
select ARCH_HAS_DIRECTED_IPIS
select ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_MAPPING
select ARCH_SUPPORTS_EVICTION_TRACKING
select EVICTION_TRACKING if DEMAND_PAGING
select MEM_DOMAIN_HAS_THREAD_LIST if ARM_MPU
help
ARM64 (AArch64) architecture
@@ -65,12 +58,14 @@ config MIPS
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
help
MIPS architecture
config SPARC
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select BIG_ENDIAN
@@ -85,8 +80,8 @@ config X86
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !X86_64
select CPU_HAS_MMU
select ARCH_MEM_DOMAIN_DATA if USERSPACE && !X86_COMMON_PAGE_TABLE
@@ -94,79 +89,69 @@ config X86
select ARCH_HAS_GDBSTUB if !X86_64
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_DEMAND_PAGING if !X86_64
select ARCH_HAS_DEMAND_MAPPING if ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_PAGING
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select NEED_LIBC_MEM_PARTITION if USERSPACE && TIMING_FUNCTIONS \
&& !BOARD_HAS_TIMING_FUNCTIONS \
&& !SOC_HAS_TIMING_FUNCTIONS
select ARCH_HAS_STACK_CANARIES_TLS
select ARCH_SUPPORTS_MEM_MAPPED_STACKS if X86_MMU && !DEMAND_PAGING
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
help
x86 architecture
config NIOS2
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
imply XIP
select ARCH_HAS_TIMING_FUNCTIONS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C if !RISCV_ISA_EXT_A
select ATOMIC_OPERATIONS_BUILTIN if RISCV_ISA_EXT_A
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !SOC_FAMILY_ESPRESSIF_ESP32
select ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
select ARCH_SUPPORTS_ROM_START if !SOC_SERIES_ESP32C3
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_THREAD_LOCAL_STORAGE
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select USE_SWITCH_SUPPORTED
select USE_SWITCH
select SCHED_IPI_SUPPORTED if SMP
select ARCH_HAS_DIRECTED_IPIS
select BARRIER_OPERATIONS_BUILTIN
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
imply XIP
help
RISCV architecture
config XTENSA
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_MEM_DOMAIN_DATA if USERSPACE
select ARCH_HAS_DIRECTED_IPIS
select THREAD_STACK_INFO
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if !SMP
select ARCH_HAS_USERSPACE if XTENSA_MMU || XTENSA_MPU
imply ARCH_HAS_RESERVED_PAGE_FRAMES if XTENSA_MMU
imply ATOMIC_OPERATIONS_ARCH
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
select HAS_DTS
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
select ARCH_HAS_THREAD_ABORT
select ARCH_HAS_THREAD_NAME_HOOK
select NATIVE_BUILD
select HAS_COVERAGE_SUPPORT
select BARRIER_OPERATIONS_BUILTIN
# POSIX arch based targets get their memory cleared on entry by the host OS
select SKIP_BSS_CLEAR
help
POSIX (native) architecture
config RX
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
Renesas RX architecture
config ARCH_IS_SET
bool
help
@@ -188,9 +173,8 @@ config BIG_ENDIAN
Little-endian architecture is the default and should leave this option
unselected. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally avoid
modifying it. The option is used to select linker script OUTPUT_FORMAT,
the toolchain flags (TOOLCHAIN_C_FLAGS, TOOLCHAIN_LD_FLAGS), and command
line option for gen_isr_tables.py.
modifying it. The option is used to select linker script OUTPUT_FORMAT
and command line option for gen_isr_tables.py.
config LITTLE_ENDIAN
# Hidden Kconfig option representing the default little-endian architecture
@@ -224,21 +208,11 @@ config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
The SRAM base address. The default value comes from
The SRAM base address. The default value comes from from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
config XIP
bool "Execute in place"
help
This option allows zephyr to operate with its text and read-only
sections residing in ROM (or similar read-only memory). Not all boards
support this option so it must be used with care; you must also
supply a linker command file when building your image. Enabling this
option increases both the code and data footprint of the image.
if ARC || ARM || ARM64 || X86 || RISCV || RX || ARCH_POSIX
if ARC || ARM || ARM64 || NIOS2 || X86 || RISCV
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
@@ -246,7 +220,6 @@ DT_CHOSEN_Z_FLASH := zephyr,flash
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && (ARM ||ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
@@ -255,13 +228,12 @@ config FLASH_SIZE
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && (ARM || ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARM64 || ARC || X86 || RISCV || RX || ARCH_POSIX
endif # ARM || ARM64 || ARC || NIOS2 || X86 || RISCV
if ARCH_HAS_TRUSTED_EXECUTION
@@ -324,21 +296,11 @@ config USERSPACE
privileged mode or handling a system call; to ensure these are always
caught, enable CONFIG_HW_STACK_PROTECTION.
config NOINIT_SNIPPET_FIRST
bool "Place the no-init linker script snippet first"
help
By default the include/zephyr/linker/common-noinit.ld file inserts the
snippets-noinit.ld file at the end of the section. There are times when
the directives in the snippets-noinit.ld file apply to the other directives
in this file. And in that case the include statement for the snippets-noinit.ld
file needs to come at the start of the section. This configuration option
allows that to happen.
config PRIVILEGED_STACK_SIZE
int "Size of privileged stack"
default 2048 if EMUL
default 1024
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
@@ -348,19 +310,18 @@ config PRIVILEGED_STACK_SIZE
config KOBJECT_TEXT_AREA
int "Size of kobject text area"
default 1024 if UBSAN
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 512 if STACK_CANARIES && RISCV
default 256
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
int "Reserve extra kobject data area (in percentage)"
default 100
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Multiplication factor used to calculate the size of placeholder to
reserve space for kobject metadata hash table. The hash table is
@@ -374,7 +335,7 @@ config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
config KOBJECT_RODATA_AREA_EXTRA_BYTES
int "Reserve extra bytes for kobject rodata area"
default 16
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Reserve a few more bytes for the RODATA region for kobject metadata.
This is to account for the uncertainty of tables generated by gperf.
@@ -436,61 +397,10 @@ config NOCACHE_MEMORY
transfers when cache coherence issues are not optimal or can not
be solved using cache maintenance operations.
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
select OVERRIDE_FRAME_POINTER_DEFAULT
help
Select Y here to gain precise stack traces at the expense of slightly
increased size and decreased speed.
config ARCH_STACKWALK
bool "Compile the stack walking function"
default y
depends on ARCH_HAS_STACKWALK
help
Select Y here to compile the `arch_stack_walk()` function
config ARCH_STACKWALK_MAX_FRAMES
int "Max depth for stack walk function"
default 8
depends on ARCH_STACKWALK
help
Depending on implementation, this can place a hard limit on the depths of the stack
for the stack walk function to examine.
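A minimal sketch of a fragment tuning these options on an architecture that provides arch_stack_walk(); the depth of 16 is only an illustrative value:

# Hypothetical prj.conf fragment: build the stack walker and allow deeper traces.
CONFIG_ARCH_STACKWALK=y
CONFIG_ARCH_STACKWALK_MAX_FRAMES=16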
menu "Interrupt Configuration"
config TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
bool
help
Hidden option to signal that toolchain supports local declaration of
interrupt tables.
config ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
bool
default y
# Userspace is currently not supported
depends on !USERSPACE
# List of currently supported architectures
depends on ARM || ARM64 || RISCV
# List of currently supported toolchains
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "gnuarmemb" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm" || TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
config ISR_TABLES_LOCAL_DECLARATION
bool "ISR tables created locally and placed by linker"
depends on ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
help
Enable new scheme of interrupt tables generation.
This is totally different generator that would create tables entries locally
where the IRQ_CONNECT macro is called and then use the linker script to position it
in the right place in memory.
The most important advantage of such approach is that the generated interrupt tables
are LTO compatible.
The drawback is that the support on the architecture port is required.
config DYNAMIC_INTERRUPTS
bool "Installation of IRQs at runtime"
select SRAM_SW_ISR_TABLE
help
Enable installation of interrupts at runtime, which will move some
interrupt-related data structures to RAM instead of ROM, and
@@ -542,12 +452,6 @@ config ARCH_IRQ_VECTOR_TABLE_ALIGN
to be aligned to architecture specific size. The default
size is 0 for no alignment.
config ARCH_DEVICE_STATE_ALIGN
int "Alignment size of device state"
default 4
help
This option controls alignment size of device state.
choice IRQ_VECTOR_TABLE_TYPE
prompt "IRQ vector table type"
depends on GEN_IRQ_VECTOR_TABLE
@@ -608,40 +512,14 @@ config IRQ_OFFLOAD
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
depends on ARCH_HAS_VECTOR_TABLE_RELOCATION
depends on XIP
depends on !ROMSTART_RELOCATION_ROM
help
When XiP is enabled, this option will result in the vector table being
relocated from Flash to SRAM.
config SRAM_SW_ISR_TABLE
bool "Place the software ISR table in SRAM instead of flash"
help
The option specifies that the software interrupts vector table will be
placed inside SRAM instead of the flash.
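A minimal sketch of a fragment relocating both tables to SRAM on an XIP target whose architecture supports vector table relocation; whether this helps depends on the flash wait states of the part:

# Hypothetical prj.conf fragment: keep interrupt dispatch data in SRAM.
CONFIG_SRAM_VECTOR_TABLE=y
CONFIG_SRAM_SW_ISR_TABLE=y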
config IRQ_OFFLOAD_NESTED
bool "irq_offload() supports nested IRQs"
depends on IRQ_OFFLOAD
default y if ARM64 || X86 || RISCV || XTENSA
help
When set by the platform layers, indicates that
irq_offload() may legally be called in interrupt context to
cause a synchronous nested interrupt on the current CPU.
Not all hardware is capable.
config EXCEPTION_DEBUG
bool "Unhandled exception debugging"
default y
depends on PRINTK || LOG
help
Install handlers for various CPU exception/trap vectors to
make debugging them easier, at a small expense in code size.
This prints out the specific exception vector and any associated
error codes.
When set by the arch layer, indicates that irq_offload() may
legally be called in interrupt context to cause a
synchronous nested interrupt on the current CPU. Not all
hardware is capable.
config EXTRA_EXCEPTION_INFO
bool "Collect extra exception info"
@@ -663,14 +541,6 @@ config SIMPLIFIED_EXCEPTION_CODES
down to the generic K_ERR_CPU_EXCEPTION, which makes testing code
much more portable.
config EMPTY_IRQ_SPURIOUS
bool "Create empty spurious interrupt handler"
depends on ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
help
This option changes the body of the spurious interrupt handler. When enabled,
the handler contains only an infinite while loop; when disabled, the handler
contains the whole Zephyr fault handling procedure.
endmenu # Interrupt configuration
config INIT_ARCH_HW_AT_BOOT
@@ -721,45 +591,31 @@ config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_VECTOR_TABLE_RELOCATION
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
config ARCH_SUPPORTS_COREDUMP
bool
config ARCH_SUPPORTS_COREDUMP_THREADS
bool
config ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
bool
config ARCH_SUPPORTS_COREDUMP_STACK_PTR
bool
config ARCH_SUPPORTS_ARCH_HW_INIT
bool
config ARCH_SUPPORTS_ROM_START
bool
config ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
bool
config ARCH_SUPPORTS_EVICTION_TRACKING
bool
help
Architecture code supports page tracking for eviction algorithms
when demand paging is enabled.
config ARCH_HAS_EXTRA_EXCEPTION_INFO
bool
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_COHERENCE
bool
help
When selected, the architecture supports the
arch_mem_coherent() API and can link into incoherent/cached
memory using the ".cached" linker section.
config ARCH_HAS_THREAD_LOCAL_STORAGE
bool
@@ -771,19 +627,6 @@ config ARCH_HAS_SUSPEND_TO_RAM
config ARCH_HAS_STACK_CANARIES_TLS
bool
config ARCH_SUPPORTS_MEM_MAPPED_STACKS
bool
help
Select when the architecture supports memory mapped stacks.
config ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET
bool
help
Select when the architecture implements arch_thread_priv_stack_space_get().
config ARCH_HAS_HW_SHADOW_STACK
bool
#
# Other architecture related options
#
@@ -820,11 +663,6 @@ config CPU_HAS_FPU
This option is enabled when the CPU has hardware floating point
unit.
config CPU_HAS_DSP
bool
help
This option is enabled when the CPU has hardware DSP unit.
config CPU_HAS_FPU_DOUBLE_PRECISION
bool
select CPU_HAS_FPU
@@ -849,13 +687,6 @@ config ARCH_HAS_DEMAND_PAGING
This hidden configuration should be selected by the architecture if
demand paging is supported.
config ARCH_HAS_DEMAND_MAPPING
bool
help
This hidden configuration should be selected by the architecture if
demand paging is supported and arch_mem_map() supports
K_MEM_MAP_UNPAGED.
config ARCH_HAS_RESERVED_PAGE_FRAMES
bool
help
@@ -864,25 +695,11 @@ config ARCH_HAS_RESERVED_PAGE_FRAMES
memory mappings. The architecture will need to implement
arch_reserved_pages_update().
config ARCH_HAS_DIRECTED_IPIS
bool
help
This hidden configuration should be selected by the architecture if
it has an implementation for arch_sched_directed_ipi() which allows
for IPIs to be directed to specific CPUs.
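As a rough sketch of what such an implementation can look like on ARC (the arch_sched_directed_ipi() prototype, the bitmask semantics, and the repeated helper prototype are assumptions here; only z_arc_connect_ici_generate() itself appears elsewhere in this diff):

#include <zephyr/kernel.h>
#include <zephyr/sys/util.h>

/* assumed prototype of the ARC connect helper used by the SMP code in this diff */
void z_arc_connect_ici_generate(uint32_t core_id);

void arch_sched_directed_ipi(uint32_t cpu_bitmap)
{
	unsigned int num_cpus = arch_num_cpus();

	for (unsigned int i = 0; i < num_cpus; i++) {
		if (cpu_bitmap & BIT(i)) {
			z_arc_connect_ici_generate(i);
		}
	}
}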
config CPU_HAS_DCACHE
bool
help
This hidden configuration should be selected when the CPU has a d-cache.
config CPU_CACHE_INCOHERENT
bool
help
This hidden configuration should be selected when the CPU has
incoherent cache. This applies to intra-CPU multiprocessing
incoherence and only makes sense when MP_MAX_NUM_CPUS > 1.
config CPU_HAS_ICACHE
bool
help
@@ -906,7 +723,7 @@ config ARCH_MAPS_ALL_RAM
virtual addresses elsewhere, this is limited to only management of the
virtual address space. The kernel's page frame ontology will not consider
this mapping at all; non-kernel pages will be considered free (unless marked
as reserved) and K_MEM_PAGE_FRAME_MAPPED will not be set.
as reserved) and Z_PAGE_FRAME_MAPPED will not be set.
config DCLS
bool "Processor is configured in DCLS mode"
@@ -1010,17 +827,6 @@ config CODE_DATA_RELOCATION
the target regions should be specified in CMakeLists.txt using
zephyr_code_relocate().
menu "DSP Options"
config DSP_SHARING
bool "DSP register sharing"
depends on CPU_HAS_DSP
help
This option enables preservation of the hardware DSP registers
across context switches to allow multiple threads to perform concurrent
DSP operations.
endmenu
menu "Floating Point Options"
config FPU
@@ -1078,29 +884,6 @@ config ICACHE
help
This option enables the support for the instruction cache (i-cache).
config CACHE_HAS_MIRRORED_MEMORY_REGIONS
bool "Mirrored memory region(s) for both cached and uncached access"
depends on CPU_CACHE_INCOHERENT
help
Enable this if the hardware has mirrored memory regions at different
addresses, where accessing one goes through the cache while accessing
the other goes to memory directly. A pointer can be cheaply
converted to cached or uncached access.
This applies to intra-CPU multiprocessing incoherence and only makes
sense when MP_MAX_NUM_CPUS > 1.
config CACHE_DOUBLEMAP
bool "Cache double-mapping support"
select CACHE_HAS_MIRRORED_MEMORY_REGIONS
select DEPRECATED
help
Double-mapping behavior where a pointer can be cheaply converted to
point to the same cached/uncached memory at different locations.
This applies to intra-CPU multiprocessing incoherence and only makes
sense when MP_MAX_NUM_CPUS > 1.
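A minimal sketch of the cheap pointer conversion mentioned above (assuming the sys_cache_uncached_ptr_get()/sys_cache_cached_ptr_get() helpers of the cache API are available; not taken from this diff):

#include <stdint.h>
#include <zephyr/cache.h>

struct dma_desc {
	uint32_t ctrl;
	uint32_t addr;
};

static void descriptor_write_uncached(struct dma_desc *d, uint32_t ctrl, uint32_t addr)
{
	/* mirror address of the same memory that bypasses the d-cache */
	struct dma_desc *unc = sys_cache_uncached_ptr_get(d);

	unc->ctrl = ctrl;
	unc->addr = addr;
}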
config CACHE_MANAGEMENT
bool "Cache management features"
depends on DCACHE || ICACHE
@@ -1108,12 +891,9 @@ config CACHE_MANAGEMENT
This option enables the cache management functions backed by arch or
driver code.
if CACHE_MANAGEMENT
if DCACHE
config DCACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
depends on CACHE_MANAGEMENT && DCACHE
help
This option enables querying some architecture-specific hardware for
finding the d-cache line size at the expense of taking more memory and
@@ -1125,21 +905,18 @@ config DCACHE_LINE_SIZE_DETECT
config DCACHE_LINE_SIZE
int "d-cache line size"
depends on !DCACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,d-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,d-cache-line-size)
depends on CACHE_MANAGEMENT && DCACHE && !DCACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU d-cache line.
Size in bytes of a CPU d-cache line. If this is set to 0 the value is
obtained from the 'd-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting DCACHE_LINE_SIZE_DETECT.
endif # DCACHE
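Driver code normally does not care which mechanism provided the line size (fixed Kconfig value, DT property, or runtime detection); it simply queries it through the cache API, as in this small sketch (illustrative only, not from this diff):

#include <stddef.h>
#include <zephyr/cache.h>

static void flush_before_dma(void *buf, size_t len)
{
	size_t line = sys_cache_data_line_size_get();

	if (line != 0U) {
		/* the flush operates on whole d-cache lines covering the range */
		sys_cache_data_flush_range(buf, len);
	}
}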
if ICACHE
config ICACHE_LINE_SIZE_DETECT
bool "Detect i-cache line size at runtime"
depends on CACHE_MANAGEMENT && ICACHE
help
This option enables querying some architecture-specific hardware for
finding the i-cache line size at the expense of taking more memory and
@@ -1151,19 +928,17 @@ config ICACHE_LINE_SIZE_DETECT
config ICACHE_LINE_SIZE
int "i-cache line size"
depends on !ICACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,i-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,i-cache-line-size)
depends on CACHE_MANAGEMENT && ICACHE && !ICACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU i-cache line.
Size in bytes of a CPU i-cache line. If this is set to 0 the value is
obtained from the 'i-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting ICACHE_LINE_SIZE_DETECT.
endif # ICACHE
choice CACHE_TYPE
prompt "Cache type"
depends on CACHE_MANAGEMENT
default ARCH_CACHE
config ARCH_CACHE
@@ -1171,14 +946,6 @@ config ARCH_CACHE
help
Integrated on-core cache controller
config SOC_CACHE
bool "SoC specific cache controller"
depends on SOC_HAS_CACHE_FUNCTIONS
help
SoC specific cache controller.
This requires soc_cache.h file to exist in search path.
config EXTERNAL_CACHE
bool "External cache controller"
help
@@ -1186,16 +953,6 @@ config EXTERNAL_CACHE
endchoice
config CACHE_CAN_SAY_MEM_COHERENCE
bool
help
sys_cache_is_mem_coherent() is defined when enabled. This function can be
used to determine if a pointer lies inside "coherence regions" and can be
safely used in multiprocessor code without explicit flush or invalidate
operations.
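A sketch of the intended use (the function name comes from the help text above; its exact prototype, pointer plus length, is an assumption here):

#include <zephyr/cache.h>

static void publish_shared_buffer(void *buf, size_t len)
{
	/* assumed prototype: bool sys_cache_is_mem_coherent(void *ptr, size_t size) */
	if (!sys_cache_is_mem_coherent(buf, len)) {
		/* outside the coherence regions: flush explicitly before sharing */
		sys_cache_data_flush_range(buf, len);
	}
}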
endif # CACHE_MANAGEMENT
endmenu
config ARCH
@@ -1203,46 +960,29 @@ config ARCH
help
System architecture string.
config SOC
string
help
SoC name which can be found under soc/<arch>/<soc name>.
This option holds the directory name used by the build system to locate
the correct linker and header files for the SoC.
config SOC_SERIES
string
help
SoC series name which can be found under soc/<arch>/<family>/<series>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config SOC_FAMILY
string
help
SoC family name which can be found under soc/<arch>/<family>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config TOOLCHAIN_HAS_BUILTIN_FFS
bool
default y if !(64BIT && RISCV)
help
Hidden option to signal that toolchain has __builtin_ffs*().
config ARCH_HAS_CUSTOM_CPU_IDLE
bool
help
This option allows applications to override the default arch idle implementation with
a custom one.
config ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
bool
help
This option allows applications to override the default arch idle implementation with
a custom one.
config ARCH_HAS_CUSTOM_SWAP_TO_MAIN
bool
help
It's possible that an architecture port cannot use z_swap_unlocked()
to swap to the main thread (bg_thread_main), but instead must do
something custom. It must enable this option in that case.
config ARCH_HAS_CUSTOM_BUSY_WAIT
bool
help
It's possible that an architecture port cannot or does not want to use
the provided k_busy_wait(), but instead must do something custom. It must
enable this option in that case.
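For context, a port that selects this option supplies its own arch_busy_wait() behind k_busy_wait(); a minimal cycle-counter based sketch (illustrative only, not from this diff):

#include <zephyr/kernel.h>

void arch_busy_wait(uint32_t usec_to_wait)
{
	uint32_t start = k_cycle_get_32();
	uint32_t cycles = k_us_to_cyc_ceil32(usec_to_wait);

	while ((k_cycle_get_32() - start) < cycles) {
		/* spin until enough hardware cycles have elapsed */
	}
}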
config ARCH_HAS_CUSTOM_CURRENT_IMPL
bool
help
Select when architecture implements arch_current_thread() &
arch_current_thread_set().
config ARCH_IPI_LAZY_COPROCESSORS_SAVE
bool
help
Select when the architecture has multi-CPU lazy context switching
of coprocessor registers.

View File

@@ -9,6 +9,7 @@ menu "ARC Options"
config ARCH
default "arc"
config CPU_ARCEM
bool
select ATOMIC_OPERATIONS_C
@@ -18,7 +19,6 @@ config CPU_ARCEM
config CPU_ARCHS
bool
select ATOMIC_OPERATIONS_BUILTIN
select BARRIER_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
@@ -253,17 +253,20 @@ config ARC_USE_UNALIGNED_MEM_ACCESS
to support unaligned memory access which is then disabled by default.
Enable unaligned access in hardware and make the software use it.
config ARC_CURRENT_THREAD_USE_NO_TLS
bool
select CURRENT_THREAD_USE_NO_TLS
default y if (RGF_NUM_BANKS > 1) || ("$(ZEPHYR_TOOLCHAIN_VARIANT)" = "arcmwdt")
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
help
Disable current Thread Local Storage for ARC. For cores with more than one
register bank (RGF_NUM_BANKS > 1) TLS is disabled by default because bank
synchronization requires significant time and slows down performance.
ARCMWDT handles the TLS pointer differently than GCC: optimized access to
the TLS pointer via the _current symbol does not provide significant
advantages with MetaWare.
Different levels for display information when a fault occurs.
2: The default. Display specific and verbose information. Consumes
the most memory (long strings).
1: Display general and short information. Consumes less memory
(short strings).
0: Off.
config GEN_ISR_TABLES
default y
@@ -274,7 +277,7 @@ config GEN_IRQ_START_VECTOR
config HARVARD
bool "Harvard Architecture"
help
The ARC CPU can be configured to have two buses;
The ARC CPU can be configured to have two busses;
one for instruction fetching and another that serves as a data bus.
config CODE_DENSITY
@@ -343,15 +346,6 @@ config ARC_NORMAL_FIRMWARE
resources of the ARC processors, and, therefore, it shall avoid
accessing them.
config ARC_VPX_COOPERATIVE_SHARING
bool "Cooperative sharing of ARC VPX vector registers"
select SCHED_CPU_MASK if MP_MAX_NUM_CPUS > 1
help
This option enables the cooperative sharing of the ARC VPX vector
registers. Threads that want to use those registers must successfully
call arc_vpx_lock() before using them, and call arc_vpx_unlock()
when done using them.
source "arch/arc/core/dsp/Kconfig"
menu "ARC MPU Options"
@@ -370,28 +364,6 @@ endmenu
config DCACHE_LINE_SIZE
default 32
config ARC_DCACHE_REGION_OPERATIONS
bool "DCACHE region operations"
depends on CACHE_MANAGEMENT && DCACHE
default n
help
Perform L1 data cache management operations by region rather than line by line
in a loop; this improves the performance of cache management operations.
config ARC_SLC
bool "System level cache"
depends on CACHE_MANAGEMENT && DCACHE && (CPU_HS4X || CPU_HS3X)
default n
help
This option enables System Level Cache, and adds SLC support to the data cache management operations.
config ARC_SLC_LINE_SIZE
int "SLC line size"
depends on ARC_SLC
default 128
help
Size in bytes of a CPU system level cache line.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768 if !64BIT
@@ -404,6 +376,15 @@ config ARC_EXCEPTION_STACK_SIZE
endmenu
config ARC_EXCEPTION_DEBUG
bool "Unhandled exception debugging information"
default n
depends on PRINTK || LOG
help
Print human-readable information about exception vectors, cause codes,
and parameters, at a cost of code/data size for the human-readable
strings.
config ARC_EARLY_SOC_INIT
bool "Make early stage SoC-specific initialization"
help
@@ -411,10 +392,7 @@ config ARC_EARLY_SOC_INIT
(before C runtime initialization). Setup code is called in form of
soc_early_asm_init_percpu assembler macro.
# ARC vector table must be aligned to 1KiB boundary, and will be at the
# start of the ROM region.
config ROM_START_OFFSET
default 0x400 if BOOTLOADER_MCUBOOT
endmenu
config MAIN_STACK_SIZE
default 4096 if 64BIT
@@ -442,5 +420,3 @@ config CMSIS_V2_THREAD_MAX_STACK_SIZE
config CMSIS_V2_THREAD_DYNAMIC_STACK_SIZE
default 2048 if 64BIT
endmenu

View File

@@ -8,6 +8,14 @@
__weak void *__dso_handle;
int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
{
ARG_UNUSED(destructor);
ARG_UNUSED(objptr);
ARG_UNUSED(dso);
return 0;
}
int atexit(void (*function)(void))
{
return 0;

View File

@@ -26,7 +26,7 @@ zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT smp.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
@@ -34,5 +34,3 @@ add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_LLEXT elf.c)

arch/arc/core/arc_smp.c (new file, 193 lines)
View File

@@ -0,0 +1,193 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief code required for ARC multicore and Zephyr SMP support
*
*/
#include <zephyr/device.h>
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <zephyr/init.h>
#include <zephyr/irq.h>
#include <arc_irq_offload.h>
volatile struct {
arch_cpustart_t fn;
void *arg;
} arc_cpu_init[CONFIG_MP_MAX_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up the master core and the slave cores.
* A slave core spins on arc_cpu_wake_flag until the master core sets it to
* that slave's core id; the slave core then clears it to notify the master
* core that it has woken up.
*
*/
volatile uint32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
* _curr_cpu records the _cpu_t struct of each cpu
* for efficient use from assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_MAX_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* pass the initial stack pointer of the target core through arc_cpu_sp;
* arc_cpu_wake_flag protects arc_cpu_sp so that only one slave cpu
* reads it at a time
*/
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave cpu to start */
while (arc_cpu_wake_flag != 0U) {
;
}
}
#ifdef CONFIG_SMP
static void arc_connect_debug_mask_update(int cpu_num)
{
uint32_t core_mask = 1 << cpu_num;
/*
* MDB debugger may modify debug_select and debug_mask registers on start, so we can't
* rely on debug_select reset value.
*/
if (cpu_num != ARC_MP_PRIMARY_CPU_ID) {
core_mask |= z_arc_connect_debug_select_read();
}
z_arc_connect_debug_select_set(core_mask);
/* Debugger halts cores at all conditions:
* ARC_CONNECT_CMD_DEBUG_MASK_H: Core global halt.
* ARC_CONNECT_CMD_DEBUG_MASK_AH: Actionpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_BH: Software breakpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_SH: Self halt.
*/
z_arc_connect_debug_mask_set(core_mask, (ARC_CONNECT_CMD_DEBUG_MASK_SH
| ARC_CONNECT_CMD_DEBUG_MASK_BH | ARC_CONNECT_CMD_DEBUG_MASK_AH
| ARC_CONNECT_CMD_DEBUG_MASK_H));
}
#endif
void arc_core_private_intc_init(void);
/* the C entry of slave cores */
void z_arc_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
struct arc_connect_bcr bcr;
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(cpu_num);
}
z_irq_setup();
arc_core_private_intc_init();
arc_irq_offload_init_smp();
z_arc_connect_ici_clear();
z_irq_priority_set(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority), 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
#endif
/* call the function set by arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(const void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
uint32_t i;
/* broadcast sched_ipi request to other cores
* if the target is current core, hardware will ignore it
*/
unsigned int num_cpus = arch_num_cpus();
for (i = 0U; i < num_cpus; i++) {
z_arc_connect_ici_generate(i);
}
}
static int arc_smp_init(void)
{
struct arc_connect_bcr bcr;
/* necessary master core init */
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(ARC_MP_PRIMARY_CPU_ID);
}
if (bcr.ipi) {
/* register ici interrupt, just need master core to register once */
z_arc_connect_ici_clear();
IRQ_CONNECT(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority),
sched_ipi_handler, NULL, 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
} else {
__ASSERT(0,
"ARC connect has no inter-core interrupt\n");
return -ENODEV;
}
if (bcr.gfrc) {
/* global free running count init */
z_arc_connect_gfrc_enable();
/* when all cores halt, gfrc halt */
z_arc_connect_gfrc_core_set((1 << arch_num_cpus()) - 1);
z_arc_connect_gfrc_clear();
} else {
__ASSERT(0,
"ARC connect has no global free running counter\n");
return -ENODEV;
}
return 0;
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

View File

@@ -2,7 +2,6 @@
/*
* Copyright (c) 2016 Synopsys, Inc. All rights reserved.
* Copyright (c) 2025 GSI Technology, All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -30,34 +29,16 @@
size_t sys_cache_line_size;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100 /* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
#define DC_CTRL_INVALIDATE_MODE 0x40 /* d-cache invalidate mode bit */
#define DC_CTRL_REGION_OP 0xe00 /* d-cache region operation */
#define MMU_BUILD_PHYSICAL_ADDR_EXTENSION 0x1000 /* physical address extension enable mask */
#define SLC_CTRL_DISABLE 0x1 /* SLC disable */
#define SLC_CTRL_INVALIDATE_MODE 0x40 /* SLC invalidate mode */
#define SLC_CTRL_BUSY_STATUS 0x100 /* SLC busy status */
#define SLC_CTRL_REGION_OP 0xe00 /* SLC region operation */
#if defined(CONFIG_ARC_SLC)
/*
* spinlock is used for SLC access because depending on HW configuration, the SLC might be shared
* between the cores, and in this case, only one core is allowed to access the SLC register
* interface at a time.
*/
static struct k_spinlock slc_lock;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100/* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
static bool dcache_available(void)
{
@@ -74,189 +55,9 @@ static void dcache_dc_ctrl(uint32_t dcache_en_mask)
}
}
static bool pae_exists(void)
{
uint32_t bcr = z_arc_v2_aux_reg_read(_ARC_V2_MMU_BUILD);
return 1 == FIELD_GET(MMU_BUILD_PHYSICAL_ADDR_EXTENSION, bcr);
}
#if defined(CONFIG_ARC_SLC)
static void slc_enable(void)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
val &= ~SLC_CTRL_DISABLE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, val);
}
static void slc_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END1, 0);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START1, 0);
}
static void slc_flush_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_all(void)
{
K_SPINLOCK(&slc_lock) {
z_arc_v2_aux_reg_write(_ARC_V2_SLC_FLUSH, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
#endif /* CONFIG_ARC_SLC */
void arch_dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
#if defined(CONFIG_ARC_SLC)
slc_enable();
#endif
}
void arch_dcache_disable(void)
@@ -264,107 +65,17 @@ void arch_dcache_disable(void)
/* nothing */
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
static void dcache_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_DC_PTAG_HI, 0);
}
static void dcache_flush_region(void *start_addr_ptr, size_t size)
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
arch_irq_unlock(key); /* --exit critical section-- */
}
#else /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
static void dcache_flush_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
@@ -387,81 +98,6 @@ static void dcache_flush_lines(void *start_addr_ptr, size_t size)
} while (start_addr < end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
static void dcache_flush_and_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
#endif /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_region(start_addr_ptr, size);
#else
dcache_flush_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_region(start_addr_ptr, size);
#endif
return 0;
}
@@ -469,127 +105,48 @@ int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
int arch_dcache_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_invalidate_region(start_addr_ptr, size);
#else
dcache_invalidate_lines(start_addr_ptr, size);
#endif
key = arch_irq_lock(); /* -enter critical section- */
#if defined(CONFIG_ARC_SLC)
slc_invalidate_region(start_addr_ptr, size);
#endif
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
return 0;
}
int arch_dcache_flush_and_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_and_invalidate_region(start_addr_ptr, size);
#else
dcache_flush_and_invalidate_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_region(start_addr_ptr, size);
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLSH, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_and_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
@@ -661,30 +218,14 @@ int arch_icache_flush_and_invd_range(void *addr, size_t size)
static int init_dcache(void)
{
sys_cache_data_enable();
arch_dcache_enable();
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
init_dcache_line_size();
#endif
/*
* Init high address registers to 0 if PAE exists, cache operations for 40 bit addresses not
* implemented
*/
if (pae_exists()) {
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_high_addr_init();
#endif
#if defined(CONFIG_ARC_SLC)
slc_high_addr_init();
#endif
}
return 0;
}
void arch_cache_init(void)
{
init_dcache();
}
SYS_INIT(init_dcache, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);

View File

@@ -26,7 +26,6 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
.align 4
.word 0
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_IDLE
/*
* @brief Put the CPU in low-power mode
*
@@ -49,9 +48,7 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
sleep r1
j_s [blink]
nop
#endif
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
/*
* @brief Put the CPU in low-power mode, entered with IRQs locked
*
@@ -59,7 +56,6 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
*
* void arch_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
@@ -74,4 +70,3 @@ SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
sleep r1
j_s.d [blink]
seti r0
#endif

View File

@@ -3,8 +3,13 @@
# Copyright (c) 2022 Synopsys
# SPDX-License-Identifier: Apache-2.0
config ARC_HAS_DSP
bool
help
This option is enabled when the ARC CPU has hardware DSP unit.
menu "ARC DSP Options"
depends on CPU_HAS_DSP
depends on ARC_HAS_DSP
config ARC_DSP
bool "digital signal processing (DSP)"
@@ -17,7 +22,7 @@ config ARC_DSP_TURNED_OFF
help
This option disables DSP block via resetting DSP_CRTL register.
config DSP_SHARING
config ARC_DSP_SHARING
bool "DSP register sharing"
depends on ARC_DSP && MULTITHREADING
select ARC_HAS_ACCL_REGS
@@ -39,12 +44,12 @@ config ARC_XY_ENABLE
bool "ARC address generation unit registers"
help
Processors with XY memory and AGU registers can configure this
option to accelerate DSP instructions.
option to accelerate DSP instrctions.
config ARC_AGU_SHARING
bool "ARC address generation unit register sharing"
depends on ARC_XY_ENABLE && MULTITHREADING
default y if DSP_SHARING
default y if ARC_DSP_SHARING
help
This option enables preservation of the hardware AGU registers
across context switches to allow multiple threads to perform concurrent

View File

@@ -9,7 +9,7 @@
* @brief ARCv2 DSP and AGU structure member offset definition file
*
*/
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_ctrl);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_glo);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_ghi);

View File

@@ -10,7 +10,7 @@
*
*/
.macro _save_dsp_regs
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_save
lr r13, [_ARC_V2_DSP_CTRL]
@@ -136,7 +136,7 @@ agu_skip_save :
.endm
.macro _load_dsp_regs
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_load
ld_s r13, [sp, ___callee_saved_stack_t_dsp_ctrl_OFFSET]

View File

@@ -1,118 +0,0 @@
/*
* Copyright (c) 2024 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/llext/elf.h>
#include <zephyr/llext/llext.h>
#include <zephyr/llext/llext_internal.h>
#include <zephyr/llext/loader.h>
#include <zephyr/logging/log.h>
#include <zephyr/sys/util.h>
LOG_MODULE_REGISTER(elf, CONFIG_LLEXT_LOG_LEVEL);
#define R_ARC_32 4
#define R_ARC_B26 5 /* AKA R_ARC_64 */
#define R_ARC_S25H_PCREL 16
#define R_ARC_S25W_PCREL 17
#define R_ARC_32_ME 27
/* ARCompact insns packed in memory have Middle Endian encoding */
#define ME(x) (((x & 0xffff0000) >> 16) | ((x & 0xffff) << 16))
/**
* @brief Architecture specific function for relocating shared elf
*
* Elf files contain a series of relocations described in multiple sections.
* These relocation instructions are architecture specific and each architecture
* that supports modules must implement this.
*
* The relocation codes are well documented:
* https://github.com/foss-for-synopsys-dwc-arc-processors/arc-ABI-manual/blob/master/ARCv2_ABI.pdf
* https://github.com/zephyrproject-rtos/binutils-gdb
*/
int arch_elf_relocate(struct llext_loader *ldr, struct llext *ext, elf_rela_t *rel,
const elf_shdr_t *shdr)
{
int ret = 0;
uint32_t value;
const uintptr_t loc = llext_get_reloc_instruction_location(ldr, ext, shdr->sh_info, rel);
uint32_t insn = UNALIGNED_GET((uint32_t *)loc);
elf_sym_t sym;
uintptr_t sym_base_addr;
const char *sym_name;
ret = llext_read_symbol(ldr, ext, rel, &sym);
if (ret != 0) {
LOG_ERR("Could not read symbol from binary!");
return ret;
}
sym_name = llext_symbol_name(ldr, ext, &sym);
ret = llext_lookup_symbol(ldr, ext, &sym_base_addr, rel, &sym, sym_name, shdr);
if (ret != 0) {
LOG_ERR("Could not find symbol %s!", sym_name);
return ret;
}
sym_base_addr += rel->r_addend;
int reloc_type = ELF32_R_TYPE(rel->r_info);
switch (reloc_type) {
case R_ARC_32:
case R_ARC_B26:
UNALIGNED_PUT(sym_base_addr, (uint32_t *)loc);
break;
case R_ARC_S25H_PCREL:
/* ((S + A) - P) >> 1
* S = symbol address
* A = addend
* P = relative offset to PCL
*/
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 1;
insn = ME(insn);
/* disp25h */
insn = insn & ~0x7feffcf;
insn |= ((value >> 0) & 0x03ff) << 17;
insn |= ((value >> 10) & 0x03ff) << 6;
insn |= ((value >> 20) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_S25W_PCREL:
/* ((S + A) - P) >> 2 */
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 2;
insn = ME(insn);
/* disp25w */
insn = insn & ~0x7fcffcf;
insn |= ((value >> 0) & 0x01ff) << 18;
insn |= ((value >> 9) & 0x03ff) << 6;
insn |= ((value >> 19) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_32_ME:
UNALIGNED_PUT(ME(sym_base_addr), (uint32_t *)loc);
break;
default:
LOG_ERR("unknown relocation: %u\n", reloc_type);
ret = -ENOEXEC;
break;
}
return ret;
}
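Not part of the diff: a tiny self-check of the ME() middle-endian helper defined above. ME() swaps the two 16-bit halves of a 32-bit word, matching how ARCompact packs instruction words in memory, and applying it twice restores the original value.

#include <assert.h>
#include <stdint.h>

#define ME(x) (((x & 0xffff0000) >> 16) | ((x & 0xffff) << 16))

static void me_selfcheck(void)
{
	/* halfword swap: 0x1122_3344 -> 0x3344_1122 */
	assert(ME((uint32_t)0x11223344) == 0x33441122);
	/* ME() is its own inverse */
	assert(ME(ME((uint32_t)0x11223344)) == 0x11223344);
}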

View File

@@ -17,37 +17,36 @@
#include <zephyr/arch/cpu.h>
#include <zephyr/logging/log.h>
#include <kernel_arch_data.h>
#include <zephyr/arch/exception.h>
#include <zephyr/arch/arc/v2/exc.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_EXCEPTION_DEBUG
static void dump_arc_esf(const struct arch_esf *esf)
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
static void dump_arc_esf(const z_arch_esf_t *esf)
{
EXCEPTION_DUMP(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR
" r3: 0x%" PRIxPTR "", esf->r0, esf->r1, esf->r2, esf->r3);
EXCEPTION_DUMP(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR
" r7: 0x%" PRIxPTR "", esf->r4, esf->r5, esf->r6, esf->r7);
EXCEPTION_DUMP(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR
" r11: 0x%" PRIxPTR "", esf->r8, esf->r9, esf->r10, esf->r11);
EXCEPTION_DUMP("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
LOG_ERR(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR " r3: 0x%" PRIxPTR "",
esf->r0, esf->r1, esf->r2, esf->r3);
LOG_ERR(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR " r7: 0x%" PRIxPTR "",
esf->r4, esf->r5, esf->r6, esf->r7);
LOG_ERR(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR " r11: 0x%" PRIxPTR "",
esf->r8, esf->r9, esf->r10, esf->r11);
LOG_ERR("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
esf->r12, esf->r13, esf->pc);
EXCEPTION_DUMP(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "",
esf->blink, esf->status32);
LOG_ERR(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "", esf->blink, esf->status32);
#ifdef CONFIG_ARC_HAS_ZOL
EXCEPTION_DUMP("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR
" lp_count: 0x%" PRIxPTR "", esf->lp_end, esf->lp_start, esf->lp_count);
LOG_ERR("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR " lp_count: 0x%" PRIxPTR "",
esf->lp_end, esf->lp_start, esf->lp_count);
#endif /* CONFIG_ARC_HAS_ZOL */
}
#endif
void z_arc_fatal_error(unsigned int reason, const struct arch_esf *esf)
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
if (esf != NULL) {
dump_arc_esf(esf);
}
#endif /* CONFIG_EXCEPTION_DEBUG */
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
z_fatal_error(reason, esf);
}

View File

@@ -20,7 +20,6 @@
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/common/exc_handle.h>
#include <zephyr/logging/log.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_USERSPACE
@@ -52,8 +51,9 @@ static const struct z_exc_handle exceptions[] = {
*/
static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
{
#if defined(CONFIG_MULTITHREADING)
uint32_t guard_end, guard_start;
#if defined(CONFIG_MULTITHREADING)
const struct k_thread *thread = _current;
if (!thread) {
@@ -69,7 +69,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
* "guard" installed in this case, instead what's
* happening is that the stack pointer is crashing
* into the privilege mode stack buffer which
* immediately precedes it.
* immediately precededs it.
*/
guard_end = thread->stack_info.start;
guard_start = (uint32_t)thread->stack_obj;
@@ -88,6 +88,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
guard_end = thread->stack_info.start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
}
#endif /* CONFIG_MULTITHREADING */
/* treat any MPU exceptions within the guard region as a stack
* overflow. As some instructions
@@ -98,13 +99,12 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
if (fault_addr < guard_end && fault_addr >= guard_start) {
return true;
}
#endif /* CONFIG_MULTITHREADING */
return false;
}
#endif
#ifdef CONFIG_EXCEPTION_DEBUG
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
/* For EV_ProtV, the numbering/semantics of the parameter are consistent across
* several codes, although not all combinations will be reported.
*
@@ -137,32 +137,32 @@ static void dump_protv_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Instruction fetch violation (%s)",
LOG_ERR("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
EXCEPTION_DUMP("Memory read protection violation (%s)",
LOG_ERR("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
EXCEPTION_DUMP("Memory write protection violation (%s)",
LOG_ERR("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
EXCEPTION_DUMP("Memory read-modify-write violation (%s)",
LOG_ERR("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
EXCEPTION_DUMP("Normal vector table in secure memory");
LOG_ERR("Normal vector table in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NS handler code located in S memory");
LOG_ERR("NS handler code located in S memory");
break;
case 0x12:
EXCEPTION_DUMP("NSC Table Range Violation");
LOG_ERR("NSC Table Range Violation");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -171,46 +171,46 @@ static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("double fault");
LOG_ERR("double fault");
break;
case 0x1:
EXCEPTION_DUMP("overlapping TLB entries");
LOG_ERR("overlapping TLB entries");
break;
case 0x2:
EXCEPTION_DUMP("fatal TLB error");
LOG_ERR("fatal TLB error");
break;
case 0x3:
EXCEPTION_DUMP("fatal cache error");
LOG_ERR("fatal cache error");
break;
case 0x4:
EXCEPTION_DUMP("internal memory error on instruction fetch");
LOG_ERR("internal memory error on instruction fetch");
break;
case 0x5:
EXCEPTION_DUMP("internal memory error on data fetch");
LOG_ERR("internal memory error on data fetch");
break;
case 0x6:
EXCEPTION_DUMP("illegal overlapping MPU entries");
LOG_ERR("illegal overlapping MPU entries");
if (parameter == 0x1) {
EXCEPTION_DUMP(" - jump and branch target");
LOG_ERR(" - jump and branch target");
}
break;
case 0x10:
EXCEPTION_DUMP("secure vector table not located in secure memory");
LOG_ERR("secure vector table not located in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NSC jump table not located in secure memory");
LOG_ERR("NSC jump table not located in secure memory");
break;
case 0x12:
EXCEPTION_DUMP("secure handler code not located in secure memory");
LOG_ERR("secure handler code not located in secure memory");
break;
case 0x13:
EXCEPTION_DUMP("NSC target address not located in secure memory");
LOG_ERR("NSC target address not located in secure memory");
break;
case 0x80:
EXCEPTION_DUMP("uncorrectable ECC or parity error in vector memory");
LOG_ERR("uncorrectable ECC or parity error in vector memory");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -219,54 +219,54 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Privilege violation");
LOG_ERR("Privilege violation");
break;
case 0x1:
EXCEPTION_DUMP("disabled extension");
LOG_ERR("disabled extension");
break;
case 0x2:
EXCEPTION_DUMP("action point hit");
LOG_ERR("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
EXCEPTION_DUMP("N to S return using incorrect return mechanism");
LOG_ERR("N to S return using incorrect return mechanism");
break;
case 0x2:
EXCEPTION_DUMP("N to S return with incorrect operating mode");
LOG_ERR("N to S return with incorrect operating mode");
break;
case 0x3:
EXCEPTION_DUMP("IRQ/exception return fetch from wrong mode");
LOG_ERR("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
EXCEPTION_DUMP("attempt to halt secure processor in NS mode");
LOG_ERR("attempt to halt secure processor in NS mode");
break;
case 0x20:
EXCEPTION_DUMP("attempt to access secure resource from normal mode");
LOG_ERR("attempt to access secure resource from normal mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on resource access (APEX/UAUX/key NVM)");
LOG_ERR("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
EXCEPTION_DUMP("attempt to access secure APEX feature from NS mode");
LOG_ERR("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on access to APEX feature");
LOG_ERR("SID violation on access to APEX feature");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -274,7 +274,7 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
EXCEPTION_DUMP("interrupt %u", vector);
LOG_ERR("interrupt %u", vector);
return;
}
@@ -283,59 +283,59 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
*/
switch (vector) {
case ARC_EV_RESET:
EXCEPTION_DUMP("Reset");
LOG_ERR("Reset");
break;
case ARC_EV_MEM_ERROR:
EXCEPTION_DUMP("Memory Error");
LOG_ERR("Memory Error");
break;
case ARC_EV_INS_ERROR:
EXCEPTION_DUMP("Instruction Error");
LOG_ERR("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
EXCEPTION_DUMP("EV_MachineCheck");
LOG_ERR("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
EXCEPTION_DUMP("EV_TLBMissI");
LOG_ERR("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
EXCEPTION_DUMP("EV_TLBMissD");
LOG_ERR("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
EXCEPTION_DUMP("EV_ProtV");
LOG_ERR("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
EXCEPTION_DUMP("EV_PrivilegeV");
LOG_ERR("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
EXCEPTION_DUMP("EV_SWI");
LOG_ERR("EV_SWI");
break;
case ARC_EV_TRAP:
EXCEPTION_DUMP("EV_Trap");
LOG_ERR("EV_Trap");
break;
case ARC_EV_EXTENSION:
EXCEPTION_DUMP("EV_Extension");
LOG_ERR("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
EXCEPTION_DUMP("EV_DivZero");
LOG_ERR("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
EXCEPTION_DUMP("EV_DCError");
LOG_ERR("EV_DCError");
break;
case ARC_EV_MISALIGNED:
EXCEPTION_DUMP("EV_Misaligned");
LOG_ERR("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
EXCEPTION_DUMP("EV_VecUnit");
LOG_ERR("EV_VecUnit");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
#endif /* CONFIG_EXCEPTION_DEBUG */
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
/*
* @brief Fault handler
@@ -345,7 +345,7 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
@@ -384,11 +384,10 @@ void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
return;
}
#ifdef CONFIG_EXCEPTION_DEBUG
EXCEPTION_DUMP("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
LOG_ERR("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
EXCEPTION_DUMP("Address 0x%x", exc_addr);
LOG_ERR("Address 0x%x", exc_addr);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
dump_exception_info(vector, cause, parameter);
#endif

View File

@@ -19,7 +19,7 @@
#include <zephyr/syscall.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
GTEXT(z_arc_fault)
GTEXT(_Fault)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -99,11 +99,11 @@ _exc_entry:
_save_exc_regs_into_stack
/* sp is parameter of z_arc_fault */
/* sp is parameter of _Fault */
MOVR r0, sp
/* ilink is the thread's original sp */
MOVR r1, ilink
jl z_arc_fault
jl _Fault
_exc_return:
/* the exception cause must be fixed in exception handler when exception returns

View File

@@ -44,11 +44,11 @@ K_KERNEL_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = K_KERNEL_STACK_BUFFER(
char *firq_sp = Z_KERNEL_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = K_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
char *firq_sp = Z_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif

View File

@@ -54,7 +54,7 @@ void arch_irq_offload(irq_offload_routine_t routine, const void *parameter)
}
/* need to be executed on every core in the system */
void arch_irq_offload_init(void)
int arc_irq_offload_init(void)
{
IRQ_CONNECT(IRQ_OFFLOAD_LINE, IRQ_OFFLOAD_PRIO, arc_irq_offload_handler, NULL, 0);
@@ -64,4 +64,8 @@ void arch_irq_offload_init(void)
* with generic irq_enable() but via z_arc_v2_irq_unit_int_enable().
*/
z_arc_v2_irq_unit_int_enable(IRQ_OFFLOAD_LINE);
return 0;
}
SYS_INIT(arc_irq_offload_init, POST_KERNEL, 0);

View File

@@ -26,7 +26,7 @@ GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
#if defined(CONFIG_PM)
GTEXT(pm_system_resume)
GTEXT(z_pm_save_idle_exit)
#endif
/*
@@ -253,7 +253,7 @@ rirq_path:
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
PUSHR blink
jl pm_system_resume
jl z_pm_save_idle_exit
POPR blink
_skip_pm_save_idle_exit:

View File

@@ -35,7 +35,5 @@ config ARC_MPU
select GEN_PRIV_STACKS if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if (ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select ARCH_MEM_DOMAIN_SUPPORTS_ISOLATED_STACKS
select MEM_DOMAIN_ISOLATED_STACKS
help
Target has ARC MPU

View File

@@ -34,7 +34,7 @@ int arch_mem_domain_max_partitions_get(void)
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(const void *addr, size_t size, int write)
int arch_buffer_validate(void *addr, size_t size, int write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}

View File

@@ -207,7 +207,7 @@ int arc_core_mpu_get_max_domain_partition_regions(void)
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(const void *addr, size_t size, int write)
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
/*
* For ARC MPU, smaller region number takes priority.
@@ -238,7 +238,7 @@ int arc_core_mpu_buffer_validate(const void *addr, size_t size, int write)
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
void arc_mpu_init(void)
static int arc_mpu_init(void)
{
uint32_t num_regions = get_num_regions();
@@ -246,6 +246,7 @@ void arc_mpu_init(void)
if (mpu_config.num_regions > num_regions) {
__ASSERT(0, "Request to configure: %u regions (supported: %u)\n",
mpu_config.num_regions, num_regions);
return -EINVAL;
}
/* Disable MPU */
@@ -277,7 +278,10 @@ void arc_mpu_init(void)
/* Enable MPU */
arc_core_mpu_enable();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_COMMON_INTERNAL_H_ */

View File

@@ -118,7 +118,7 @@ static inline bool _is_enabled_region(uint32_t r_index)
}
/**
* This internal function check if the given buffer is in the region
* This internal function check if the given buffer in in the region
*/
static inline bool _is_in_region(uint32_t r_index, uint32_t start, uint32_t size)
{

Some files were not shown because too many files have changed in this diff.