Compare commits

..

119 Commits

Author SHA1 Message Date
Yudong Zhang
0c70f3bad7 shell: mqtt: fix call to bin2hex
Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
Yudong Zhang
5875676425 mgmt: hawkbit: fix call to bin2hex
Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
Yudong Zhang
5bae87ae69 mgmt: updatehub: fix call to bin2hex
We noticed that in the master branch, updatehub fails to start.
That is because of the behaviour change in bin2hex caused by
commit f2affbd ("os: lib: bin2hex: fix memory overwrite").

Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
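For context, a minimal sketch of the calling pattern these three fixes restore, assuming the `bin2hex()` helper declared in `zephyr/sys/util.h`; the wrapper function, buffer name and size below are illustrative only.

```c
#include <zephyr/sys/printk.h>
#include <zephyr/sys/util.h>

/* Illustrative only: after commit f2affbd, bin2hex() refuses to write past
 * the destination buffer, so the hex buffer must hold two characters per
 * input byte plus the terminating NUL.
 */
static void print_token_hex(const uint8_t *token, size_t token_len)
{
	char hex[2 * 16 + 1]; /* this sketch assumes token_len <= 16 */

	if (bin2hex(token, token_len, hex, sizeof(hex)) == 0) {
		/* Destination too small: bin2hex() now returns 0 instead of
		 * silently overwriting adjacent memory.
		 */
		return;
	}

	printk("token: %s\n", hex);
}
```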
Stephanos Ioannidis
eb5bbf9863 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:13 +09:00
Stephanos Ioannidis
20b20501e0 ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 02:59:28 +09:00
Gerard Marull-Paretas
74929f47b0 ci: doc-build: fix PDF build
The new LaTeX Docker image (Debian-based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the system
environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:22:01 +09:00
Théo Battrel
3c4bdfa595 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`, which could
lead to the value being uninitialized and to unknown behavior.

To fix that issue, the `rp->le_max_num` value is checked the same way
`bt_dev.le.acl_mtu` was already checked. The same has been done for
`rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 09:20:05 -08:00
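A hedged sketch of the guard described above, assuming the `bt_hci_rp_le_read_buffer_size` response struct from `<zephyr/bluetooth/hci.h>`; the semaphore name and error handling are placeholders, not the actual host code.

```c
#include <zephyr/kernel.h>
#include <zephyr/bluetooth/hci.h>

static struct k_sem le_pkts_sem; /* placeholder name */

static void read_buffer_size_complete(const struct bt_hci_rp_le_read_buffer_size *rp)
{
	/* Reject a zero/uninitialized count before it reaches k_sem_init(),
	 * mirroring the existing bt_dev.le.acl_mtu validation.
	 */
	if (!rp->le_max_num) {
		return;
	}

	k_sem_init(&le_pkts_sem, rp->le_max_num, rp->le_max_num);
}
```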
Chris Friedt
18869d0f5d net: sockets: socketpair: do not allow blocking IO in ISR context
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.

It is not acceptable to block in ISR context, so robustness here
means returning -1 to indicate an error and setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).

Fixes #25417

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit d832b04e96)
2023-01-10 09:51:56 -08:00
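A minimal sketch of that robustness rule, assuming `k_is_in_isr()` as the ISR-context check; the write helper shown is illustrative rather than the actual socketpair code.

```c
#include <errno.h>
#include <sys/types.h>
#include <zephyr/kernel.h>

/* Illustrative sketch: never block in ISR context; report EAGAIN
 * (synonymous with EWOULDBLOCK) instead.
 */
static ssize_t spair_write_checked(const void *buf, size_t len)
{
	if (k_is_in_isr()) {
		errno = EAGAIN;
		return -1;
	}

	/* ...the normal, possibly blocking, write path would follow here... */
	return (ssize_t)len;
}
```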
Martí Bolívar
a7d946331f python-devicetree: CI hotfix
Pin the types-PyYAML version to 6.0.7. Version 6.0.8 is causing CI
errors for other pull requests, so we need this in to get other PRs
moving.

Fixes: #46286

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
207bd9a52e ci: Clone cached Zephyr repository with shared objects
In the new ephemeral Zephyr runners, the cached repository files are
located in a foreign file system and Git clone operation cannot create
hard-links to the cached repository objects, which forces the Git clone
operation to copy the objects from the cache file system to the runner
container file system.

This commit updates the CI workflows to instead perform a "shared
clone" of the cached repository, which allows the cloned repository to
utilise the object database of the cached repository.

While "shared clone" can be often dangerous because the source
repository objects can be deleted, in this case, the source repository
(i.e. cached repository) is mounted as read-only and immutable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
7fdafbefc4 ci: Limit workflow scope branches
This commit updates the CI workflows that trigger on both push and pull
request events to limit their event trigger scope to the main and the
release branches.

This prevents these workflows from simultaneously triggering on both
push and pull request events when a pull request is created from an
upstream branch to another upstream branch (e.g. pull requests from
the backport branches to the release branches).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
50800033f7 ci: codecov: Clone cached Zephyr repository
This commit updates the codecov workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
4f83f58ae3 ci: codecov: Use zephyr-runner
This commit updates the codecov workflow to use the new Kubernetes-
based zephyr-runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
7b03c3783c ci: clang: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
79a4063ceb ci: clang: Clone cached Zephyr repository
This commit updates the clang workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ca020d65f8 ci: clang: Use zephyr-runner
This commit updates the clang workflow to use the new Kubernetes-based
zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
dc12754b51 ci: twister: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
869566c139 ci: twister: Clone cached Zephyr repository
This commit updates the twister workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
5923021c60 ci: twister: Use zephyr-runner
This commit updates the twister workflow to use the new Kubernetes-
based zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
e164b03f70 ci: Use actions/cache@v3
This commit updates the CI workflows to use the latest "cache" action
v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
74f9846f46 ci: Use actions/setup-python@v4
This commit updates the CI workflows to use the latest "setup-python"
action v4, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
74039285dc ci: Use actions/upload-artifact@v3
This commit updates the CI workflows to use the latest
"upload-artifact" action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
c6af6d8595 ci: Use actions/checkout@v3
This commit updates the CI workflows to use the latest "checkout"
action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1065bccac3 ci: twister: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
34e66d8f0e ci: release: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
172b858e19 ci: codecov: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
0a8f0510e6 ci: clang: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ed880e19d9 ci: footprint-tracking: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
41ce47af4b ci: footprint: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2dd9b5b5e7 ci: codecov: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
59203c1137 ci: clang: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
a0b55ac113 ci: twister: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
67209f59f1 ci: bluetooth-tests: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Anas Nashif
244b352d9f ci: update cancel-workflow-action action to 0.11.0
Update the action to use the latest release, which resolves a warning
about Node 12 being deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
284868c9d1 ci: compliance: Use upload-artifact action v3
This commit updates the "Create a release" workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
aa6fc02be8 ci: issue_count: Use upload-artifact action v3
This commit updates the issue count tracker workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
c02e8142d0 ci: doc-build: Use upload-artifact action v3
This commit updates the documentation build workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
34854a8e78 ci: compliance: Use upload-artifact action v3
This commit updates the compliance check workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
aae46a1840 ci: backport: Use Ubuntu 20.04 runner image
This commit updates the backport workflow to use the ubuntu-20.04
runner image because the ubuntu-18.04 image is deprecated and will
become unsupported by December 1, 2022.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
fbc2ba9a8a ci: west_cmds: Use specific version of runner image
This commit updates the "West Command Tests" workflow to use a specific
runner image version (ubuntu-20.04, macos-11, windows-2022) instead of
the latest version in order to prevent any potential breakages due to
the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
86eaae79f4 ci: devicetree_checks: Use specific version of runner image
This commit updates the "Devicetree script tests" workflow to use a
specific runner image version (ubuntu-20.04, macos-11, windows-2022)
instead of the latest version in order to prevent any potential
breakages due to the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2e4423814d ci: twister: Use Ubuntu 20.04 runner image
This commit updates the "Run tests with twister" workflow to use a
specific runner image version, ubuntu-20.04, instead of the latest
version in order to prevent any potential breakages due to the 'latest'
version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
230337f51b ci: twister_tests: Use Ubuntu 20.04 runner image
This commit updates the Twister Testsuite workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
648cc40c57 ci: stale_issue: Use Ubuntu 20.04 runner image
This commit updates the stale issue workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
0aa9a177ca ci: release: Use Ubuntu 20.04 runner image
This commit updates the "Create a Release" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
36b7fe0e5c ci: manifest: Use Ubuntu 20.04 runner image
This commit updates the manifest check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
a6f817562e ci: license_check: Use Ubuntu 20.04 runner image
This commit updates the license check workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
9415cd4de5 ci: issue_count: Use Ubuntu 20.04 runner image
This commit updates the issue count tracker workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
6d8cc95715 ci: footprint: Use Ubuntu 20.04 runner image
This commit updates the footprint delta workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
91af641b6c ci: footprint-tracking: Use Ubuntu 20.04 runner image
This commit updates the footprint tracking workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ae679d0a60 ci: errno: Use Ubuntu 20.04 runner image
This commit updates the error number check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1806442523 ci: doc: Use Ubuntu 20.04 runner image
This commit updates the documentation build and publish workflows to
use a specific runner image version, ubuntu-20.04, instead of the
latest version in order to prevent any potential breakages due to the
'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
46b3d0f4d9 ci: do_not_merge: Use Ubuntu 20.04 runner image
This commit updates the "Do Not Merge" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2445934b93 ci: daily_test_version: Use Ubuntu 20.04 runner image
This commit updates the daily test version workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
80fa032333 ci: compliance: Use Ubuntu 20.04 runner image
This commit updates the compliance check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1a4105b6b5 ci: coding_guidelines: Use Ubuntu 20.04 runner image
This commit updates the coding guidelines workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
d5b2e2fed0 ci: clang: Use Ubuntu 20.04 runner image
This commit updates the Clang workflow to use a specific runner image
version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
6c38e20028 ci: backport_issue_check: Use Ubuntu 20.04 runner image
This commit updates the backport issue check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
87de8ef5a5 ci: bluetooth-tests: Use Ubuntu 20.04 runner image
This commit updates the Bluetooth tests workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
41310ecbcd ci: issue_count: Fix stale reference to master branch
This commit fixes a stale reference to the 'master' branch when
downloading the issue report configuration file.

Note that the 'master' branch is no longer the default
branch -- 'main' is.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Andries Kruithof
133dc2da84 Bluetooth: controller: llcp: initialise DLE parameters
The initialisation of the DLE parameters for the peripheral was done
before the initialisation of the PHY settings. Since the DLE parameters
depend on the PHY settings, this can result in incorrect parameters for
tx/rx time and octets.

One scenario is where a previous connection set the PHY to 2M or CODED;
when a new connection is then established it uses the same memory
locations for connection settings as the previous connection, so the
(uninitialised) PHY settings will be set to 2M or CODED and the DLE
parameters will be wrong.

This PR moves the initialisation of the DLE parameters after that of the
PHY settings.

EBQ tests affected include LL/CON/PER/BV-77-C.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 6d7a04a0ba)
2022-11-02 07:54:57 -07:00
Chen Peng1
2f793c8d8d tests: interrupt: remove unused macro TRIGGER_IRQ_INT.
On X86 platforms, the interrupt trigger method has been changed to use
an APIC IPI; we no longer use the INT instruction to trigger interrupts,
so remove this unused macro.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Chen Peng1
30cdfad7b2 test: interrupt: change the test interrupt line to a bigger one.
Change the test interrupt line number to a higher one, because the
low-numbered lines are usually used by other devices.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Chen Peng1
db28910f7f tests: interrupt: add some nop operations in trigger_irq function.
On X86 platforms, the interrupt trigger method has been changed from
using the INT instruction to using an APIC IPI. We need to make sure the
IPI interrupt is handled before doing our check, so add some nop
operations.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Erik Brockhoff
71ccc7ee7e Bluetooth: controller: fixing issue re. erroneous DLE changed events
Only apply the change to the effective DLE times if the current max
times are too small to accommodate it, similar to the legacy
implementation. Update the unit tests to the new DLE ntf behavior.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 522e0b5ade)
2022-10-07 13:39:26 +02:00
Andries Kruithof
62d4b476e2 Bluetooth: controller: llcp: fix DLE related EBQ tests
Calculation of the DLE-related parameters (rx/tx octets and time) depends
on the actual PHY in use. For this reason the PHY settings must be
initialised before doing the DLE parameter calculations.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 19bf928ffb)
2022-10-07 13:39:18 +02:00
Andries Kruithof
ef2fd9306a Bluetooth: controller: llcp: fix typo
Fixed a typo: the correct term is 'link layer', not
'linked layer'

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 9eaf102e1b)
2022-10-07 13:39:08 +02:00
Andries Kruithof
88d6d34394 Bluetooth: controller: llcp: avoid regression errors
The change in this commit is required to avoid regression errors on the
EBQ tests for the PHY update procedure. When in the peripheral role,
transmission of data must be resumed while waiting for the PHY IND
response from the peer. In other words: in the
LP_PU_STATE_WAIT_TX_ACK_PHY_REQ state, data transmission must resume
when acting as peripheral, but not when in the central role.

The following tests are affected:
LL/CON/PER/BV-49-C
LL/CON/PER/BV-50-C
LL/CON/PER/BV-52-C
LL/CON/PER/BV-53-C
LL/CON/PER/BV-54-C
LL/CON/PER/BV-55-C
LL/CON/PER/BV-56-C
LL/CON/PER/BV-58-C

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 1d1a2f8b57)
2022-10-07 13:39:08 +02:00
Andries Kruithof
0e0d3dcb03 Bluetooth: controller: llcp: update unit tests with changes in PHY proc
The pausing and timing of data transmission is changed in the PHY update
procedure so that the conformance tests pass. This requires an update in
the unit tests as well.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit c16f788959)
2022-10-07 13:39:08 +02:00
Andries Kruithof
5e5b9e7867 Bluetooth: controller: llcp: fix PHY procedure for conformance test
This PR fixes the PHY update procedure for conformance tests when acting
as a Central. The problem was that data was in the LLL tx queue and was
still being queued before the PHY IND was queued (with a given instant).
As a result, by the time the PHY IND was transmitted over the air the
instant was in the past. The fix is to ensure that the LLL tx queue is
empty, and to stop queueing new data before queueing the PHY IND.

The following tests are fixed:
LL/CON/CEN/BV-49-C
LL/CON/CEN/BV-50-C
LL/CON/CEN/BV-53-C
LL/CON/CEN/BV-54-C

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit aa80f7da5f)
2022-10-07 13:39:08 +02:00
Thomas Ebert Hansen
3f23efbcd1 Bluetooth: controller: llcp: fix issue re. version exchange
Complete the remote initiated version exchange if a LL_VERSION_IND is
received while already having responded in an earlier version exchange
procedure.

Clarify comment regarding how to handle this invalid behaviour.

This has been seen when running the LL/CON/CEN/BI-12-C test on EBQ.

Signed-off-by: Thomas Ebert Hansen <thoh@oticon.com>
(cherry picked from commit d9774bd925)
2022-10-07 13:39:01 +02:00
Erik Brockhoff
f64f70b1a5 Bluetooth: controller: llcp: fixing tx buffer queue handling
Misc. fixups to get the tx buffer alloc mechanism to work as intended

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 1ff458ec87)
2022-10-07 13:38:52 +02:00
Thomas Ebert Hansen
049a3d1b5f Bluetooth: controller: llcp: Fix data pause/resume
llcp_tx_pause_data() calls ull_tx_q_pause_data() for each pause_mask,
while llcp_tx_resume_data() only calls ull_tx_q_resume_data() when
conn->llcp.tx_q_pause_data_mask == 0 leading to an unbalanced number of
calls to ull_tx_q_pause_data()/ull_tx_q_resume_data() which can leave
the data path of the TX Q paused.

Fix such that only the first call to llcp_tx_pause_data() will pause the
data path of the TX Q.

Add unit test to verify correct pause/resume behavior.

Signed-off-by: Thomas Ebert Hansen <thoh@oticon.com>
(cherry picked from commit 154d67d90b)
2022-10-07 13:38:36 +02:00
Vinayak Kariappa Chettimada
d81bc781ab Bluetooth: Controller: Fix missing recv fifo reset
Fix missing recv fifo reset on HCI reset. This fix handles a scenario in
which the Rx Prio thread has enqueued a node rx, the Tx thread handles
the HCI Reset Command, and the Rx thread wakes up from a call to
k_fifo_get to handle an invalid node rx. The changes here ensure the Rx
thread does not get any invalid node rx after the HCI Reset Command has
been handled in the Tx thread.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit e027f0671a)
2022-10-07 13:36:30 +02:00
Yong Cong Sin
338acfc80a subsys/mgmt/hawkbit: Set ai_socktype if IPV4/IPV6
Follows the implementation of updatehub and sets `ai_socktype` only if
IPV4/IPV6 is used.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit dd9d6bbb44)
2022-10-07 13:36:23 +02:00
Yong Cong Sin
6fc5d08bf9 subsys/mgmt/hawkbit: Init the hints struct to a known value
Initialize the `hints` struct to a known value so that it won't
cause undetermined behavior when used in `getaddrinfo()`.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 2ed88e998a)
2022-10-07 13:36:23 +02:00
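A short sketch of the pattern these two hawkbit fixes describe, using the `zsock_getaddrinfo()` wrapper from `<zephyr/net/socket.h>`; the helper function and its arguments are illustrative.

```c
#include <string.h>
#include <zephyr/net/socket.h>
#include <zephyr/sys/util.h>

/* Illustrative sketch: zero the hints so getaddrinfo() never reads
 * indeterminate fields, and set ai_socktype only for IPv4/IPv6.
 */
static int resolve_server(const char *host, const char *port,
			  struct zsock_addrinfo **res)
{
	struct zsock_addrinfo hints;

	memset(&hints, 0, sizeof(hints));

	if (IS_ENABLED(CONFIG_NET_IPV6)) {
		hints.ai_family = AF_INET6;
		hints.ai_socktype = SOCK_STREAM;
	} else if (IS_ENABLED(CONFIG_NET_IPV4)) {
		hints.ai_family = AF_INET;
		hints.ai_socktype = SOCK_STREAM;
	}

	return zsock_getaddrinfo(host, port, &hints, res);
}
```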
Reto Schneider
fceb688a2c net: context: Fix memory leak
Allocated, but undersized packets must not just be logged, but also
unreferenced before returning an error.

Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
(cherry picked from commit 6de54e0d03)
2022-10-07 13:36:17 +02:00
Yong Cong Sin
1a98e00f13 mgmt/hawkbit: Print hrefs only if there's an update
If there is no update from the server, the _links will be NULL.
Check whether it is NULL before trying to LOG these strings.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2022-09-29 16:43:58 +02:00
Ruud Derwig
c1fc2d957a ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for the initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
(cherry picked from commit 9bccb5cc4b)
2022-09-22 18:02:09 +02:00
Daniel Leung
4b670b584c soc: esp32: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit b820cde7a9)
2022-09-22 18:01:52 +02:00
Daniel Leung
97c9de27fc soc: intel_adsp: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit 74df88d8f5)
2022-09-22 18:01:52 +02:00
Erwan Gouriou
c4957772c1 boards: nucleo_wb55rg: Update regarding supported M0 BLE f/w
Since STM32WBCube release V1.13.2, only the "HCI Only" f/w is compatible
for use with Zephyr. The "Full stack" f/w that used to be supported is no
longer compatible.

Update board documentation to mention this restriction.

Additionally, update the flash partitions to reflect the space gained by
using this smaller f/w.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit d00f8505bf)
2022-09-22 15:24:20 +02:00
Thomas Stranger
aa8cf13e25 dts/arm: stm32f105: enable master can gating clock for can2
The can2 peripheral only works if the gating clock of the master CAN
(can1) is enabled, so also set that bit for can2.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 24594cf7ce)
2022-09-22 15:24:12 +02:00
Thomas Stranger
1cb45f17c3 include: dt-bindings: pinctrl: stm32f1-afio: fix can & eth pinmap
The bindings for the stm32f105 afio pin remap had defined the wrong
offsets for CAN and ETH.
This commit corrects them to the bits specified in RM0008 Rev. 21,
but the changes could not be verified on hardware.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 705c203f26)
2022-09-22 15:24:02 +02:00
Erwan Gouriou
c32e7773ae drivers: gpio: stm32: Apply GPIOG specific code to U5 series
On STM32U5 it is also required to enable VDD before use.
The difference is that U5 enables this under the PWR_SVMCR_IO2SV flag.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit e1cb0845b4)
2022-09-22 15:23:54 +02:00
Nicolas Pitre
1884e2b889 riscv: smp: fix secondary cpus' initial stack
Z_THREAD_STACK_BUFFER() must not be used here. This is meant for stacks
defined with K_THREAD_STACK_ARRAY_DEFINE() whereas in this case we are
given a stack created with K_KERNEL_STACK_ARRAY_DEFINE().

If CONFIG_USERSPACE=y then K_THREAD_STACK_RESERVED gets defined with
a bigger value than K_KERNEL_STACK_RESERVED. Then Z_THREAD_STACK_BUFFER()
returns a pointer that is more advanced than expected, resulting in a
stack pointer outside its actual stack area and therefore memory
corruption ensues.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit c76d8c88c0)
2022-09-21 15:35:47 +02:00
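A hedged sketch of the distinction drawn above; the stack macros are the in-tree kernel internals named in the commit messages, while the array name and sizes are placeholders.

```c
#include <zephyr/kernel.h>

#define NUM_CPUS   2    /* placeholder */
#define STACK_SIZE 2048 /* placeholder */

/* Stacks created with K_KERNEL_STACK_ARRAY_DEFINE() reserve
 * K_KERNEL_STACK_RESERVED bytes, not K_THREAD_STACK_RESERVED.
 */
K_KERNEL_STACK_ARRAY_DEFINE(secondary_stacks, NUM_CPUS, STACK_SIZE);

static char *secondary_stack_top(int cpu)
{
	/* Use the kernel-stack accessor: with CONFIG_USERSPACE=y,
	 * Z_THREAD_STACK_BUFFER() would skip the larger thread-stack
	 * reserved area and point outside this stack.
	 */
	return Z_KERNEL_STACK_BUFFER(secondary_stacks[cpu]) + STACK_SIZE;
}
```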
Stephanos Ioannidis
5fc3da40f2 ci: twister: Check out west modules for test plan
This commit updates the twister workflow to check out the west modules
before running the test plan script for the pull request runs because
the twister test plan logic resolves the module dependencies and
filters tests based on the local availability of the required modules.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit fad899d2ad)
2022-09-16 21:14:11 +09:00
Sylvio Alves
2cd7fa9cbf soc: esp32: opt to make device handles in dram
The ESP32 linker loader needs all sections to be aligned correctly.
When MCUboot is enabled, the device handles provided by device-handles.ld
do not end with ALIGN(4), which breaks the loader initialization.
This PR makes sure that this particular section is placed in DRAM
instead.

For now this is a workaround until it can be handled in the loader script.

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
(cherry picked from commit 54ca96f523)
2022-08-18 22:11:10 -07:00
Benjamin Gwin
6628b0631a twister: Exit with a bad status if there were build errors
Previously, twister would exit with a 0 status code if there were build
failures but no execution failures. This makes it easier to catch
build breakages when using twister in CI workflows.

Signed-off-by: Benjamin Gwin <bgwin@google.com>
(cherry picked from commit 3f8d5c49b3)
2022-07-26 15:03:34 -04:00
Erik Brockhoff
7a5bdff5cd Bluetooth: controller: llcp: phy update proc, validate phys and instant
Implement proper validation of PHY selection for the PHY UPDATE procedure.
Implement connection termination on PHY UPDATE with an instant in the past.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 96817164ea)
2022-07-25 10:56:50 -07:00
Erwan Gouriou
4a3e1dd05d boards: nucleo_wb55rg: Fix documentation about BLE binary compatibility
Rather than stating a version information that will get out of date
at each release, refer to the source of information located in hal_stm32
module.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit 6656607d02)
2022-07-25 10:54:59 -07:00
Henrik Brix Andersen
bbe4c639da drivers: can: mcux: mcan: add pinctrl support
Add pinctrl support to the NXP LPC driver front-end.

Fixes: #47742

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 7b6ca29941)
2022-07-25 10:52:20 -07:00
Henrik Brix Andersen
434d3f98f3 tests: drivers: can: api: add test for RTR filter matching
Add test for CAN RX filtering of RTR frames.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 097cb04916)
2022-07-25 10:52:10 -07:00
Henrik Brix Andersen
4fba98cf9b drivers: can: loopback: check frame ID type and RTR bit in filters
Check the frame ID type and RTR bit when comparing loopback CAN frames
against installed RX filters.

Fixes: #47904

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 588f06d511)
2022-07-25 10:52:10 -07:00
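A hedged sketch of the filter comparison these CAN fixes are about, assuming the `zcan_frame`/`zcan_filter` structures from `<zephyr/drivers/can.h>` of that release line; the helper itself is illustrative, not the loopback driver's code.

```c
#include <stdbool.h>
#include <zephyr/drivers/can.h>

/* Illustrative sketch: a frame matches only if the ID type, the masked ID
 * and the masked RTR bit all agree with the installed filter.
 */
static bool frame_matches(const struct zcan_frame *frame,
			  const struct zcan_filter *filter)
{
	if (frame->id_type != filter->id_type) {
		return false;
	}

	if ((frame->rtr & filter->rtr_mask) != (filter->rtr & filter->rtr_mask)) {
		return false;
	}

	return ((frame->id ^ filter->id) & filter->id_mask) == 0;
}
```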
Henrik Brix Andersen
41ba03d0be drivers: can: mcux: flexcan: fix handling of RTR frames
When installing an RX filter, the driver uses "filter->rtr &
filter->rtr_mask" for setting the filter mask. It should just be using
filter->rtr_mask; otherwise filters for non-RTR frames will match RTR
frames as well.

When transmitting a RTR frame, the hardware automatically switches the
mailbox used for TX to RX in order to receive the reply. This, however,
does not match the Zephyr CAN driver model, where mailboxes are dedicated
to either RX or TX. Attempting to reuse the TX mailbox (which was
automatically switched to an RX mailbox by the hardware) fails on the first
call, after which the mailbox is reset and can be reused for TX. To
overcome this, the driver must abort the RX mailbox operation when the
hardware performs the TX to RX switch.

Fixes: #47902

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 3e5aaf837e)
2022-07-25 10:52:10 -07:00
Henrik Brix Andersen
b285c2a275 drivers: can: mcan: acknowledge all received frames
The Bosch M_CAN IP does not support RX filtering of the RTR bit, so the
driver handles this bit in software.

If a received frame matches a filter with RTR enabled, the RTR bit of the
frame must match that of the filter in order to be passed to the RX
callback function. If the RTR bits do not match, the frame must be dropped.

Improve the readability of the logic for determining whether a frame
should be dropped and add a missing FIFO acknowledge write for dropped
frames.

Fixes: #47204

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 5e74f72220)
2022-07-25 10:52:10 -07:00
Andriy Gelman
01e51fed23 tests: net: Check leaks when appending frags to a cloned packet
This test checks that new frags appended to a cloned packet are properly
freed.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit d67a130368)
2022-07-25 10:51:56 -07:00
Andriy Gelman
2b6bbef0e5 tests: net: Add test to detect leak in pkt shallow clone
Check that taking a shallow clone of a packet does not leak buffers after
the packet and its clone are unreffed.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit f119a368a2)
2022-07-25 10:51:56 -07:00
Andriy Gelman
bd110756f5 net: pkt: Fix leak when using shallow clone
Currently a shallow clone of a packet will bump the reference count on
all the fragments. The net_pkt_unref() function, however, only drops the
reference count on the head fragment. Fix this by only bumping the ref
count on the head buf during shallow clone.

Only bumping the ref count of head is more in line with the idea that
head buf is not responsible for the fragments of its child.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit f12f9d5e95)
2022-07-25 10:51:56 -07:00
Andriy Gelman
c0c1390ecf net: route: Fix pkt leak if net_send_data() fails
If the call to net_send_data() fails, for example if the forwarding
interface is down, then the pkt will leak: the reference taken by
net_pkt_shallow_clone() will never be released. Fix the problem by
dropping the reference count in the error path.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit a3cdb2102c)
2022-07-25 10:50:30 -07:00
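A small sketch of the error-path rule the two net fixes above describe, using the public `net_pkt` APIs named in the commit messages; the forwarding wrapper itself is illustrative.

```c
#include <errno.h>
#include <zephyr/net/net_pkt.h>
#include <zephyr/net/net_core.h>

/* Illustrative sketch: if net_send_data() rejects the packet (e.g. the
 * forwarding interface is down), release the reference taken by
 * net_pkt_shallow_clone() so the packet is not leaked.
 */
static int forward_pkt(struct net_pkt *pkt)
{
	struct net_pkt *clone = net_pkt_shallow_clone(pkt, K_NO_WAIT);

	if (clone == NULL) {
		return -ENOMEM;
	}

	if (net_send_data(clone) < 0) {
		net_pkt_unref(clone);
		return -EIO;
	}

	return 0;
}
```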
Stephanos Ioannidis
c6b1d932ff tests: cpp: cxx: Add qemu_cortex_a53 as integration platform
This commit adds the `qemu_cortex_a53`, which is an MMU-based platform,
as an integration platform for the C++ subsystem tests.

This ensures that the `test_global_static_ctor_dynmem` test, which
verifies that the dynamic memory allocation service is functional
during the global static object constructor invocation, is tested on
an MMU-based platform, which may have a different libc heap
initialisation path.

In addition to the above, this increases the overall test coverage
ensuring that the C++ subsystem is functional on an MMU-based platform
in general.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 8e6322a21a)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
1a58ca22e1 tests: cpp: cxx: Test with various types of libc
This commit changes the C++ subsystem test, which previously was only
being run with the minimal libc, to be run with all the mainstream C
libraries (minimal libc, newlib, newlib-nano, picolibc).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 03f0693125)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
0dc782efae tests: cpp: cxx: Add dynamic memory availability test for static init
This commit adds a test to verify that the dynamic memory allocation
service (the `new` operator) is available and functional when the C++
static global object constructors are invoked during system
initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit dc4895b876)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
eff3c0d65a tests: cpp: cxx: Add static global constructor invocation test
This commit adds a test to verify that the C++ static global object
constructors are invoked during system initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 6e0063af29)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
2c3b4ba5cf lib: libc: newlib: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the newlib malloc heap
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 43e1c28a25)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
0a5feb6832 lib: libc: minimal: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the minimal libc malloc
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit db0748c462)
2022-07-19 14:04:02 -07:00
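A minimal sketch of why the earlier heap initialisation matters, assuming the standard `SYS_INIT()` mechanism of that release line (init functions still took a `const struct device *` argument); the hook and priority shown are illustrative, not the libc code itself.

```c
#include <errno.h>
#include <stdlib.h>
#include <zephyr/device.h>
#include <zephyr/init.h>

/* Illustrative sketch: an APPLICATION-phase init hook can now rely on the
 * libc heap, because the heap itself is prepared during POST_KERNEL.
 */
static int app_buffers_init(const struct device *dev)
{
	void *buf = malloc(256);

	ARG_UNUSED(dev);

	if (buf == NULL) {
		return -ENOMEM;
	}

	free(buf);
	return 0;
}

SYS_INIT(app_buffers_init, APPLICATION, CONFIG_APPLICATION_INIT_PRIORITY);
```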
Christopher Friedt
63e27ea69a scripts: release: list_backports: use older python dict merge method
In Python versions >= 3.9, dicts can be merged with the `|` operator.

This is not the case for Python versions < 3.9, where the simplest
alternative is to use `dict_c = {**dict_a, **dict_b}`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3783cf8353)
2022-07-19 01:04:45 +09:00
Christopher Friedt
c5498f4be0 ci: backports: check if a backport PR has a valid issue
This is an automated check for the Backports project to
require one or more `Fixes #<issue>` items in the body
of the pull request.

Fixes #46164

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit aa4e437573)
2022-07-18 23:32:23 +09:00
Christopher Friedt
551ef9c2e4 scripts: release: list_backports.py
Created list_backports.py to examine PRs applied to a backport
branch and extract the associated issues. This is helpful for
adding to release notes.

The script may also be used to ensure that backported changes
also have one or more associated issues.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 57762ca12c)
2022-07-18 23:32:23 +09:00
Christopher Friedt
89fd222c70 scripts: release: use GITHUB_TOKEN and start_date in scripts
Updated bug_bash.py and list_issues.py to use the GITHUB_TOKEN
environment variable for consistency with other scripts.

Updated bug_bash.py to use `-s / --start-date` instead of
`-b / --begin-date`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3b3fc27860)
2022-07-18 23:32:23 +09:00
Thomas Stranger
a8cebfc437 dts: bindings: clock: Fix STM32G4 device clk src selection definitions
Some device clock source selection helpers were not correctly defined.

With this commit the definitions are updated to match the description
in the reference manual RM0453.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 87ed12d796)
2022-07-12 18:08:03 +02:00
Erik Brockhoff
0f6c57f3dd Bluetooth: controller: llcp: fix issue re. missing ack of terminate ind
On a remote terminate on the central, the connection clean-up would
happen before the ack of the terminate ind was sent to the peer.
Clean-up is now 'postponed' until the subsequent event.
Data tx is also now paused on rx of the terminate ind to ensure no data
is tx'ed after the terminate ind has been received.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 8b1d50b981)
2022-07-12 18:07:56 +02:00
Erik Brockhoff
8e09d5445d Bluetooth: controller: llcp: fix issue re. missing release of tx node
On disconnect with the refactored LLCP, if data tx is paused,
'waiting' tx nodes could fail to be released.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 8b912f1488)
2022-07-12 18:07:50 +02:00
Vinayak Kariappa Chettimada
689e5f209d Bluetooth: Controller: Replace k_sem_take loop with k_sem_reset
Replace k_sem_take loop used for consuming the remaining
sem give counts with k_sem_reset.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 7ff8581916)
2022-07-12 18:07:41 +02:00
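A before/after sketch of the pattern, assuming a plain `k_sem`; the semaphore name is illustrative.

```c
#include <zephyr/kernel.h>

static struct k_sem pdu_free_sem; /* illustrative name */

/* Before: consume any remaining give counts one at a time. */
static void drain_sem_old(void)
{
	while (k_sem_take(&pdu_free_sem, K_NO_WAIT) == 0) {
		/* nothing to do; just consume the count */
	}
}

/* After: a single k_sem_reset() clears the count. */
static void drain_sem_new(void)
{
	k_sem_reset(&pdu_free_sem);
}
```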
Vinayak Kariappa Chettimada
9e950f5259 Bluetooth: Controller: Fix pdu_free_sem_give assertion under ZLI use
Fix assertion due to multiple mayfly_enqueue calls used
under ZLI when pdu_free_sem_give is invoked from the LLL.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit ae8e7f4c22)
2022-07-12 18:07:41 +02:00
Glauber Maroto Ferreira
c30d375272 west.yml: hal_espressif: updates to latest revision
Updates hal_espressif's revision to include:
- latest pinctrl definitions.
- support for building ESP32C3 USB driver

Signed-off-by: Glauber Maroto Ferreira <glauber.ferreira@espressif.com>
(cherry picked from commit 7e1baee44b)
2022-07-12 18:07:29 +02:00
Martin Jäger
1c45956e06 drivers: can: stm32: enable can1 clock also for can2
For devices with more than one CAN peripheral, CAN1 is the master and
its clock has to be enabled even if only CAN2 is used.

Signed-off-by: Martin Jäger <martin@libre.solar>
(cherry picked from commit 8ab81b02dd)
2022-07-12 18:07:17 +02:00
Stephanos Ioannidis
4ab83425e0 drivers: i2c: Fix infinite recursion in driver unregister function
This commit fixes an infinite recursion in the
`z_vrfy_i2c_slave_driver_unregister` function.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 745b7d202e)
2022-06-22 11:55:15 -07:00
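A hedged sketch of the usual fix for this class of bug: the syscall verification wrapper must dispatch to the `z_impl_` implementation rather than calling itself. The verification checks are elided and the exact handler boilerplate varies between Zephyr versions.

```c
/* Illustrative sketch only (verification checks elided). */
static inline int z_vrfy_i2c_slave_driver_unregister(const struct device *dev)
{
	/* Calling z_vrfy_i2c_slave_driver_unregister() here again would
	 * recurse forever; the wrapper must hand off to the implementation.
	 */
	return z_impl_i2c_slave_driver_unregister(dev);
}
```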
Carles Cufi
44358e62ec version: Fix the EXTRAVERSION field
This was supposed to be left empty, but was instead set to 0.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-06-05 20:17:02 +02:00
61571 changed files with 965041 additions and 4342994 deletions


@@ -5,6 +5,7 @@
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
--ignore BRACES
--ignore PRINTK_WITHOUT_KERN_LEVEL
--ignore SPLIT_STRING
--ignore VOLATILE
@@ -28,5 +29,4 @@
--ignore MULTISTATEMENT_MACRO_USE_DO_WHILE
--ignore ENOSYS
--ignore IS_ENABLED_CONFIG
--ignore EXPORT_SYMBOL
--ignore COMPARISON_TO_NULL
--exclude ext


@@ -1,52 +1,82 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-License-Identifier: GPL-2.0
#
# Note: The list of ForEachMacros can be obtained using:
# clang-format configuration file. Intended for clang-format >= 4.
#
# git grep -h '^#define [^[:space:]]*FOR_EACH[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*FOR_EACH[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
# For more information, see:
#
# Documentation/process/clang-format.rst
# https://clang.llvm.org/docs/ClangFormat.html
# https://clang.llvm.org/docs/ClangFormatStyleOptions.html
#
# References:
# - https://clang.llvm.org/docs/ClangFormatStyleOptions.html
---
BasedOnStyle: LLVM
AlignConsecutiveMacros: AcrossComments
AllowShortBlocksOnASingleLine: Never
AccessModifierOffset: -4
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
#AlignEscapedNewlines: Left # Unknown to clang-format-4.0
AlignOperands: true
AlignTrailingComments: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortEnumsOnASingleLine: false
AllowShortFunctionsOnASingleLine: None
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AttributeMacros:
- __aligned
- __deprecated
- __packed
- __printf_like
- __syscall
- __syscall_always_inline
- __subsystem
BitFieldColonSpacing: After
BreakBeforeBraces: Linux
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
AfterClass: false
AfterControlStatement: false
AfterEnum: false
AfterFunction: true
AfterNamespace: true
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
#AfterExternBlock: false # Unknown to clang-format-5.0
BeforeCatch: false
BeforeElse: false
IndentBraces: false
#SplitEmptyFunction: true # Unknown to clang-format-4.0
#SplitEmptyRecord: true # Unknown to clang-format-4.0
#SplitEmptyNamespace: true # Unknown to clang-format-4.0
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Custom
#BreakBeforeInheritanceComma: false # Unknown to clang-format-4.0
BreakBeforeTernaryOperators: false
BreakConstructorInitializersBeforeComma: false
#BreakConstructorInitializers: BeforeComma # Unknown to clang-format-4.0
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: false
ColumnLimit: 100
CommentPragmas: '^ IWYU pragma:'
#CompactNamespaces: false # Unknown to clang-format-4.0
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
Cpp11BracedListStyle: false
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
#FixNamespaceComments: false # Unknown to clang-format-4.0
# Taken from:
# git grep -h '^#define [^[:space:]]*FOR_EACH[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*FOR_EACH[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
ForEachMacros:
- 'ARRAY_FOR_EACH'
- 'ARRAY_FOR_EACH_PTR'
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'FOR_EACH_IDX'
- 'FOR_EACH_IDX_FIXED_ARG'
- 'FOR_EACH_NONEMPTY_TERM'
- 'FOR_EACH_FIXED_ARG_NONEMPTY_TERM'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_DLIST_FOR_EACH_NODE'
- 'SYS_DLIST_FOR_EACH_NODE_SAFE'
- 'SYS_SEM_LOCK'
- 'SYS_SFLIST_FOR_EACH_CONTAINER'
- 'SYS_SFLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SFLIST_FOR_EACH_NODE'
@@ -55,62 +85,61 @@ ForEachMacros:
- 'SYS_SLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SLIST_FOR_EACH_NODE'
- 'SYS_SLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
- 'Z_FOR_EACH'
- 'Z_FOR_EACH_ENGINE'
- 'Z_FOR_EACH_EXEC'
- 'Z_FOR_EACH_FIXED_ARG'
- 'Z_FOR_EACH_FIXED_ARG_EXEC'
- 'Z_FOR_EACH_IDX'
- 'Z_FOR_EACH_IDX_EXEC'
- 'Z_FOR_EACH_IDX_FIXED_ARG'
- 'Z_FOR_EACH_IDX_FIXED_ARG_EXEC'
- 'Z_GENLIST_FOR_EACH_CONTAINER'
- 'Z_GENLIST_FOR_EACH_CONTAINER_SAFE'
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- 'STRUCT_SECTION_FOREACH'
- 'STRUCT_SECTION_FOREACH_ALTERNATE'
- 'TYPE_SECTION_FOREACH'
- 'K_SPINLOCK'
- 'COAP_RESOURCE_FOREACH'
- 'COAP_SERVICE_FOREACH'
- 'COAP_SERVICE_FOREACH_RESOURCE'
- 'HTTP_RESOURCE_FOREACH'
- 'HTTP_SERVER_CONTENT_TYPE_FOREACH'
- 'HTTP_SERVICE_FOREACH'
- 'HTTP_SERVICE_FOREACH_RESOURCE'
- 'I3C_BUS_FOR_EACH_I3CDEV'
- 'I3C_BUS_FOR_EACH_I2CDEV'
- 'MIN_HEAP_FOREACH'
IfMacros:
- 'CHECKIF'
# Disabled for now, see bug https://github.com/zephyrproject-rtos/zephyr/issues/48520
#IncludeBlocks: Regroup
- '_WAIT_Q_FOR_EACH'
#IncludeBlocks: Preserve # Unknown to clang-format-5.0
IncludeCategories:
- Regex: '^".*\.h"$'
Priority: 0
- Regex: '^<(assert|complex|ctype|errno|fenv|float|inttypes|limits|locale|math|setjmp|signal|stdarg|stdbool|stddef|stdint|stdio|stdlib|string|tgmath|time|wchar|wctype)\.h>$'
Priority: 1
- Regex: '^\<zephyr/.*\.h\>$'
Priority: 2
- Regex: '.*'
Priority: 3
Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
IndentGotoLabels: false
#IndentPPDirectives: None # Unknown to clang-format-5.0
IndentWidth: 8
InsertBraces: true
InsertNewlineAtEOF: true
SpaceBeforeInheritanceColon: False
SpaceBeforeParens: ControlStatementsExceptControlMacros
SortIncludes: Never
UseTab: ForContinuationAndIndentation
WhitespaceSensitiveMacros:
- COND_CODE_0
- COND_CODE_1
- IF_DISABLED
- IF_ENABLED
- LISTIFY
- STRINGIFY
- Z_STRINGIFY
- DT_FOREACH_PROP_ELEM_SEP
IndentWrappedFunctionNames: false
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: Inner
#ObjCBinPackProtocolList: Auto # Unknown to clang-format-5.0
ObjCBlockIndentWidth: 8
ObjCSpaceAfterProperty: true
ObjCSpaceBeforeProtocolList: true
# Taken from git's rules
#PenaltyBreakAssignment: 10 # Unknown to clang-format-4.0
PenaltyBreakBeforeFirstCallParameter: 30
PenaltyBreakComment: 10
PenaltyBreakFirstLessLess: 0
PenaltyBreakString: 10
PenaltyExcessCharacter: 100
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: false
SortIncludes: false
#SortUsingDeclarations: false # Unknown to clang-format-4.0
SpaceAfterCStyleCast: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
#SpaceBeforeCtorInitializerColon: true # Unknown to clang-format-5.0
#SpaceBeforeInheritanceColon: true # Unknown to clang-format-5.0
SpaceBeforeParens: ControlStatements
#SpaceBeforeRangeBasedForLoopColon: true # Unknown to clang-format-5.0
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: false
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp03
TabWidth: 8
UseTab: Always
...


@@ -1,21 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (c) 2024, Basalte bv
analyzer:
# Start by disabling all
- --disable-all
# Enable the sensitive profile
- --enable=sensitive
# Disable unused cases
- --disable=boost
- --disable=mpi
# Many identifiers in zephyr start with _
- --disable=clang-diagnostic-reserved-identifier
- --disable=clang-diagnostic-reserved-macro-identifier
# Cleanup
- --clean


@@ -12,10 +12,10 @@ coverage:
patch: yes
changes: no
# ignore:
# - "tests/**/*"
# - "samples/**/*"
# - "ext/hal/**/*"
#ignore:
# - "tests/**/*"
# - "samples/**/*"
# - "ext/hal/**/*"
parsers:
gcov:


@@ -86,10 +86,6 @@ indent_size = 8
[COMMIT_EDITMSG]
max_line_length = 75
# Patches
[{*.patch,*.diff}]
trim_trailing_whitespace = false
# Kconfig
[Kconfig*]
indent_style = tab


@@ -1,85 +0,0 @@
name: Bug Report
description: File a bug report.
labels: ["bug"]
type: "Bug"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: textarea
id: what-happened
attributes:
label: Describe the bug
description: |
A clear and concise description of what the bug is.
placeholder: |
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
validations:
required: true
- type: checkboxes
id: regression
attributes:
label: Regression
description: |
Check this box if this is a regression and provide a SHA if you were able to "git bisect" to a specific commit.
options:
- label: This is a regression.
required: false
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Steps to reproduce the behavior.
placeholder: |
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
validations:
required: false
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
attributes:
label: Impact
description: Impact of this bug
multiple: false
options:
- Showstopper - Prevents release or major functionality; system unusable.
- Major - Severely degrades functionality; workaround is difficult or unavailable.
- Functional Limitation - Some features not working as expected, but system usable.
- Annoyance - Minor irritation; no significant impact on usability or functionality.
- Intermittent - Occurs occasionally; hard to reproduce.
- Not sure
default: 3
validations:
required: true
- type: textarea
id: env
attributes:
label: Environment
description: please complete the following information
placeholder: |
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
- type: textarea
id: context
attributes:
label: Additional Context
description: Provide other context that could be relevant to the bug, such as pin setting, target configuration, etc.

View File

@@ -1,42 +0,0 @@
name: Enhancement
description: Submit an Enhancement
labels: ["Enhancement"]
type: "Enhancement"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement proposal.
- type: textarea
id: description
attributes:
label: Summary
description: |
Is your enhancement proposal related to a problem? Please describe.
placeholder: |
A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: |
Describe the solution you'd like
placeholder: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: |
A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: context
attributes:
label: Additional Context
description: Add any other context or graphics (drag-and-drop an image) about the enhancement here.

View File

@@ -1,75 +0,0 @@
name: RFC / Proposal
description: Submit a Proposal (RFC)
labels: ["RFC"]
type: RFC
assignees: []
body:
- type: markdown
attributes:
value: |
## Introduction
This section targets end users, TSC members, maintainers and anyone else
that might need a quick explanation of your proposed change.
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Why do we want this change and what problem are we trying to address?
placeholder: Explain the problem or limitation this RFC is meant to resolve.
validations:
required: true
- type: textarea
id: proposed-change-summary
attributes:
label: Proposed Change (Summary)
description: A high-level summary of the proposed change.
placeholder: Brief summary of what will change if this RFC is implemented.
validations:
required: true
- type: markdown
attributes:
value: |
## Detailed RFC
This section targets the development team. Upon reading it, each engineer
should understand what must be done to implement the proposed feature.
- type: textarea
id: detailed-change
attributes:
label: Proposed Change (Detailed)
description: Describe the change in as much detail as possible. Include context or background info, and reuse of existing components if applicable.
placeholder: Explain exactly what you're planning to change and how.
validations:
required: true
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: Highlight how this change may affect the rest of the project or other teams/components.
placeholder: List components, modules, or teams affected.
validations:
required: false
- type: textarea
id: concerns
attributes:
label: Concerns and Unresolved Questions
description: List any concerns, unknowns, or unresolved questions related to this proposal.
placeholder: Any areas of uncertainty?
validations:
required: false
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: What alternative solutions were considered? Why was this proposal chosen?
placeholder: List alternatives and explain the rationale behind your choice.
validations:
required: false

View File

@@ -1,29 +0,0 @@
name: Feature Request
description: Suggest a new feature or enhancement
labels: ["Feature Request"]
type: Feature
assignees: []
body:
- type: textarea
id: problem
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
placeholder: e.g., I'm frustrated when I need to do X manually because Y is missing.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
placeholder: e.g., It would be great if the system could automatically handle X by doing Y.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: Include any alternative solutions or features

View File

@@ -1,57 +0,0 @@
name: Contributor Nomination
description: Nominate a GitHub user for the Contributor role with triage permissions
labels: [Role Nomination]
assignees: ['nashif']
body:
- type: markdown
attributes:
value: |
## Background
The [TSC Project Roles](https://docs.zephyrproject.org/latest/project/project_roles.html) defines the main roles for the Zephyr Project, including Maintainer, Collaborator, and Contributor.
By default, anyone who contributes code or documentation is a Contributor, but with the lowest [GitHub Permission Level](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization) of **Read**.
Use this form to nominate a user for the **Contributor** role with **Triage** permission, which allows the user to:
- Add reviewers to pull requests
- Be added as a reviewer by others
- type: input
id: full-name
attributes:
label: Full Name
description: Full name of the nominated contributor.
placeholder: e.g., Jane Doe
validations:
required: true
- type: input
id: github-username
attributes:
label: GitHub Username
description: GitHub handle of the nominated contributor.
placeholder: e.g., @janedoe
validations:
required: true
- type: input
id: organization
attributes:
label: Organization
description: Organization the nominee is affiliated with (optional).
placeholder: e.g., Acme Corp
validations:
required: false
- type: textarea
id: supporting-documents
attributes:
label: Supporting Documents
description: Provide links to 3-5 pull requests authored or reviewed by the nominee that demonstrate their dedication to the Zephyr project.
placeholder: |
e.g.,
- https://github.com/zephyrproject-rtos/zephyr/pull/12345
- https://github.com/zephyrproject-rtos/zephyr/pull/23456
- https://github.com/zephyrproject-rtos/zephyr/pull/34567
validations:
required: true

View File

@@ -1,106 +0,0 @@
name: External Component Integration
description: Propose integration of an external open source component
labels: ["TSC"]
assignees: []
body:
- type: textarea
id: origin
attributes:
label: Origin
description: Name of project hosting the original open source code. Provide a link to the source.
placeholder: e.g., SQLite - https://sqlite.org
validations:
required: true
- type: textarea
id: purpose
attributes:
label: Purpose
description: Brief description of what this software does.
placeholder: |
e.g., A small, fast, self-contained SQL database engine.
validations:
required: true
- type: textarea
id: integration-mode
attributes:
label: Mode of Integration
description: Should this be integrated in the main tree or as a module? Explain your choice and suggest a module repo name if applicable.
placeholder: |
e.g., As a module - proposed repo name: zephyr-sqlite
validations:
required: true
- type: textarea
id: maintainership
attributes:
label: Maintainership
description: List maintainers (GitHub IDs) for this integration. Include at least one primary maintainer.
placeholder: |
e.g., @username1 (primary), @username2 (collaborator)
validations:
required: true
- type: input
id: pull-request
attributes:
label: Pull Request
description: Link to the pull request (if any) for this integration. Must be labeled "DNM" (Do Not Merge).
placeholder: |
e.g., https://github.com/zephyrproject-rtos/zephyr/pull/12345
validations:
required: false
- type: textarea
id: description
attributes:
label: Description
description: Long-form description to justify suitability of this component.
placeholder: |
- What is its primary functionality?
- What problem does it solve?
- Why is this the right component?
validations:
required: true
- type: textarea
id: security
attributes:
label: Security
description: Security-related aspects of this component, including cryptographic functions and known vulnerabilities.
placeholder: |
- Does it use cryptography?
- How are vulnerabilities handled?
- Any known CVEs?
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: What does this component depend on, and how will it be integrated (directly or via abstraction)?
placeholder: |
- Other external packages?
- Direct or abstracted use in Zephyr?
validations:
required: false
- type: input
id: revision
attributes:
label: Version or SHA
description: Which version or specific commit should be initially integrated?
placeholder: e.g., v3.45.0 or 79cc94d
validations:
required: true
- type: input
id: license
attributes:
label: License (SPDX)
description: Provide the license using a valid SPDX identifier (e.g., BSD-3-Clause).
placeholder: e.g., MIT or BSD-3-Clause
validations:
required: true

45 .github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,45 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- ...
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...

View File

@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Zephyr Community Support
url: https://github.com/zephyrproject-rtos/zephyr/discussions
about: Please ask and answer questions here.

20 .github/ISSUE_TEMPLATE/enhancement.md vendored Normal file
View File

@@ -0,0 +1,20 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

62 .github/ISSUE_TEMPLATE/ext-source.md vendored Normal file
View File

@@ -0,0 +1,62 @@
---
name: External Source Code
about: Submit a proposal to integrate external source code
title: ''
labels: TSC
assignees: ''
---
## Origin
Name of project hosting the original open source code
Provide a link to the source
## Purpose
Brief description of what this software does
## Mode of integration
Describe whether you'd like to integrate this external component in the main tree
or as a module, and why. If the mode of integration is a module, suggest a
repository name for the module
## Pull Request
Pull request (if any) with the actual implementation of the integration, be it
in the main tree or as a module (pointing to your own fork for now). Make sure
the PR is correctly labeled as "DNM"
## Description
Long description that will help reviewers discuss suitability of the
component to solve the problem at hand (there may be better options
available.)
What is its primary functionality (e.g., SQLite is a lightweight
database)?
What problem are you trying to solve? (e.g., a state store is
required to maintain ...)
Why is this the right component to solve it (e.g., SQLite is small,
easy to use, and has a very liberal license.)
## Dependencies
What other components does this package depend on?
Will the Zephyr project have a direct dependency on the component, or
will it be included via an abstraction layer with this component as a
replaceable implementation?
## Revision
Version or SHA you would like to integrate initially
## License
Please use an SPDX identifier (https://spdx.org/licenses/), such as
``BSD-3-Clause``

View File

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -0,0 +1,19 @@
---
name: Hardware Support
about: Suggest adding hardware support
title: ''
labels: Hardware Support
assignees: ''
---
**Is this request related to a missing driver support for a particular hardware platform, SoC or board? Please describe.**
Describe in detail the hardware support being requested and why this support benefits Zephyr.
**Describe why you are asking for this support?**
Describe why you are asking for this support instead of contributing it directly to the tree
If this is a new board or SoC, please state whether you are willing to maintain the Zephyr support for it if it is included in the main tree
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the hardware here.

41 .github/ISSUE_TEMPLATE/nomination.md vendored Normal file
View File

@@ -0,0 +1,41 @@
---
name: Contributor Nomination
about: Nominate a GitHub user for additional rights on the Zephyr Project
title: ''
labels: Role Nomination
assignees: ''
---
# Background
The [TSC Project Roles] defines the main roles for the Zephyr Project, including
Maintainer, Collaborator, and Contributor.
By default anyone that contributes code or documentation is a Contributor, but
with the lowest [GitHub Permission Level] of Read. For example, Contributors
with Read permission do not have the permission to add reviewers to a pull
request.
Use this template to nominate a GitHub user for an elevated permission level in
the Contributor role.
# Nomination
## GitHub User
Provide the following information about the GitHub user:
1. Full Name
1. GitHub username
1. Organization (optional)
## Supporting Documents
Add links to 3-5 GitHub pull requests, in the Zephyr project, authored or
reviewed by the GitHub user that demonstrate the user's dedication to the
Zephyr project.
[TSC Project Roles]: <https://docs.zephyrproject.org/latest/project/project_roles.html>
[GitHub Permission Level]: <https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-permission-levels-for-an-organization>

51 .github/ISSUE_TEMPLATE/rfc-proposal.md vendored Normal file
View File

@@ -0,0 +1,51 @@
---
name: RFC / Proposal
about: Submit an RFC / Proposal
title: ''
labels: RFC
assignees: ''
---
## Introduction
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
### Problem description
Why do we want this change and what problem are we trying to address?
### Proposed change
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
## Detailed RFC
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
### Proposed change (Detailed)
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
### Dependencies
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
### Concerns and Unresolved Questions
List any concerns, unknowns, and generally unresolved questions etc.
## Alternatives
List any alternatives considered, and the reasons for choosing this option
over them.

8 .github/SECURITY.md vendored
View File

@@ -8,12 +8,12 @@ updates:
- The most recent release, and the release prior to that.
- Active LTS releases.
At this time, with the latest release of v4.3, the supported
At this time, with the latest release of v3.0.0, the supported
versions are:
- v4.3: Current release
- v4.2: Prior release
- v3.7: Current LTS
- v2.7.0: Current LTS
- v2.7.0: Prior release
- v3.0.0: Current release
## Reporting process

View File

@@ -1,2 +0,0 @@
paths:
- .github

View File

@@ -1,2 +0,0 @@
paths:
- doc

View File

@@ -1,30 +0,0 @@
version: 2
enable-beta-ecosystems: true
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
cooldown:
default-days: 7
commit-message:
prefix: "ci: github: "
labels: []
groups:
actions-deps:
patterns:
- "*"
- package-ecosystem: "uv"
directory: "/doc"
schedule:
interval: "weekly"
cooldown:
default-days: 7
commit-message:
prefix: "ci: doc: "
labels: []
groups:
doc-deps:
patterns:
- "*"

171 .github/labeler.yml vendored Normal file
View File

@@ -0,0 +1,171 @@
"Release Notes":
- "doc/releases/**/*"
"area: Modem":
- "drivers/modem/**/*"
"area: PWM":
- "drivers/pwm/**/*"
"area: Watchdog":
- "drivers/watchdog/**/*"
"area: Sensors":
- "drivers/sensors/**/*"
"area: ADC":
- "drivers/adc/**/*"
"area: Counter":
- "drivers/counter/**/*"
"area: CAN":
- "include/drivers/can.h"
- "include/canbus/*/**"
- "drivers/can/**/*"
- "subsys/canbus/*/**"
"area: EEPROM":
- "include/drivers/eeprom.h"
- "drivers/eeprom/**/*"
"area: Timer":
- "drivers/timer/**/*"
"area: I2S":
- "drivers/i2s/**/*"
"area: C Library":
- "lib/libc/**/*"
"area: Devicetree":
- "dts/**/*"
- "**/*.dts"
- "**/*.dtsi"
- "include/devicetree.h"
- "include/devicetree/*"
- "doc/guides/dts/**/*"
"area: Devicetree Binding":
- "include/dt-bindings/**/*"
- "dts/bindings/**/*"
"area: Devicetree Tooling":
- "scripts/dts/**/*"
"area: I2C":
- "drivers/i2c/**/*"
"area: SPI":
- "drivers/spi/**/*"
"area: Boards":
- "boards/**/*"
"area: POSIX":
- "lib/posix/**/*"
"area: native port":
- "arch/posix/**/*"
- "include/arch/posix/**/*"
- "soc/posix/**/*"
- "**/*native_posix*"
"area: X86":
- "arch/x86/**/*"
- "include/arch/x86/**/*"
"area: ARM":
- "arch/arm/**/*"
- "include/arch/arm/**/*"
"area: ARM64":
- "arch/arm64/**/*"
- "include/arch/arm64/**/*"
"area: MIPS":
- "arch/mips/**/*"
- "include/arch/mips/**/*"
"area: NIOS2":
- "arch/nios2/**/*"
- "include/arch/nios2/**/*"
"area: Xtensa":
- "arch/xtensa/**/*"
- "include/arch/xtensa/**/*"
"area: RISCV":
- "arch/riscv/**/*"
- "include/arch/riscv/**/*"
"area: ARC":
- "arch/arc/**/*"
- "include/arch/arc/**/*"
"area: Networking":
- "subsys/net/**/*"
- "samples/net/**/*"
- "tests/net/**/*"
- "include/net/**/*"
- "include/drivers/ieee802154/**/*"
- "drivers/ethernet/**/*"
- "drivers/ieee802154/**/*"
- "drivers/wifi/**/*"
- "drivers/ptp_clock/**/*"
- "drivers/net/**/*"
"area: Logging":
- "subsys/logging/**/*"
"area: Shell":
- "subsys/shell/**/*"
"area: Console":
- "subsys/console/**/*"
"area: Test Framework":
- "subsys/testsuite/**/*"
"area: Settings":
- "subsys/settings/**/*"
"area: File System":
- "subsys/fs/**/*"
"area: Storage":
- "subsys/storage/**/*"
"area: Bluetooth":
- "subsys/bluetooth/**/*"
- "**/*bluetooth*"
"area: Bluetooth Mesh":
- "subsys/bluetooth/mesh/**/*"
"area: Bluetooth Audio":
- "subsys/bluetooth/audio/**/*"
"area: Bluetooth Controller":
- "subsys/bluetooth/controller/**/*"
"area: Bluetooth Host":
- "subsys/bluetooth/host/**/*"
- "subsys/bluetooth/services/**/*"
"area: API":
- "include/**/*"
"area: Samples":
- "samples/**/*"
"area: Tests":
- "tests/**/*"
"area: Kernel":
- "kernel/**/*"
- "tests/kernel/**/*"
"area: Documentation":
- "**/*.rst"
- "**/*.md"
"area: Build System":
- "cmake/**/*"
- "CmakeLists.txt"
"area: Kconfig":
- "scripts/kconfig/**/*"
- "Kconfig"
- "Kconfig.zephyr"
"area: Twister":
- "scripts/twister"
- "scripts/pylib/twister/**/*"
"area: Modules":
- "west.yml"
- "modules/**/*"
"area: Shields":
- "boards/shields/**"
- "samples/shields/**"
"area: Power Management":
- "subsys/pm/**/*"
- "include/pm/**/*"
- "tests/subsys/pm/**/*"
- "samples/subsys/pm/**/*"
"platform: NXP":
- "boards/arm/frdm*/**"
- "boards/arm/hexiwear*/**"
- "boards/arm/lpcxpresso*/**"
- "boards/arm/*imx*/**"
- "drivers/**/*imx*"
- "drivers/**/*mcux*"
- "dts/arm/nxp/*/*"
- "dts/bindings/**/nxp*"
- "soc/arm/nxp*/**"
"platform: STM32":
- "boards/arm/nucleo_*/**"
- "boards/arm/*stm32*/**"
- "drivers/**/*stm32*"
- "dts/arm/st/*/*"
- "include/drivers/*/*stm32*"
- "soc/arm/st_stm32/**"
"platform: SiLabs":
- "boards/arm/efr32_*/**/*"
- "boards/arm/efm32_*/**/*"
- "drivers/**/*gecko*"
- "dts/arm/silabs/**/*"
- "dts/bindings/**/silabs,gecko*"
- "soc/arm/silabs_exx32/**/*"

View File

@@ -1,88 +0,0 @@
name: Pull Request/Issue Assigner
on:
pull_request_target:
types:
- opened
- synchronize
- reopened
- ready_for_review
branches:
- main
- collab-*
- v*-branch
issues:
types:
- labeled
permissions:
contents: read
jobs:
assignment:
name: Pull Request/Issue Assignment
if: github.event.pull_request.draft == false
runs-on: ubuntu-24.04
permissions:
issues: write # to add assignees to issues
steps:
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: zephyrproject-rtos/action-python-env@32e53bef090c33d53aa94f1d9a9d29c93cfdc5f7 # main
with:
python-version: 3.12
- name: Fetch west.yml/Maintainer.yml from pull request
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
run: |
git fetch origin pull/${{ github.event.pull_request.number }}/merge
git show FETCH_HEAD:west.yml > pr_west.yml
git show FETCH_HEAD:MAINTAINERS.yml > pr_MAINTAINERS.yml
- name: west setup
if: >
github.event_name == 'pull_request_target'
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Run assignment script
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
run: |
FLAGS="-v"
FLAGS+=" -o ${{ github.event.repository.owner.login }}"
FLAGS+=" -r ${{ github.event.repository.name }}"
FLAGS+=" -M MAINTAINERS.yml"
if [ "${{ github.event_name }}" = "pull_request_target" ]; then
if [ "${{ github.base_ref }}" = "main" ]; then
FLAGS+=" -P ${{ github.event.pull_request.number }} --updated-manifest pr_west.yml --updated-maintainer-file pr_MAINTAINERS.yml"
else
FLAGS+=" -P ${{ github.event.pull_request.number }}"
fi
elif [ "${{ github.event_name }}" = "issues" ]; then
FLAGS+=" -I ${{ github.event.issue.number }}"
elif [ "${{ github.event_name }}" = "schedule" ]; then
FLAGS+=" --modules"
else
echo "Unknown event: ${{ github.event_name }}"
exit 1
fi
python3 scripts/ci/set_assignees.py $FLAGS
- name: Check maintainer file changes
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
run: |
python ./scripts/ci/check_maintainer_changes.py \
--repo zephyrproject-rtos/zephyr MAINTAINERS.yml pr_MAINTAINERS.yml

View File

@@ -7,32 +7,13 @@ on:
branches:
- main
permissions:
contents: read
jobs:
backport:
runs-on: ubuntu-20.04
name: Backport
runs-on: ubuntu-24.04
permissions:
contents: write # to create/push backport branches
pull-requests: write # to create backport PRs
issues: write # to add labels to issue created if backport fails
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
github.event.pull_request.merged &&
(
github.event.action == 'closed' ||
(
github.event.action == 'labeled' &&
contains(github.event.label.name, 'backport')
)
)
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@7e74f601d11eaca577742445e87775b5651a965f # v2.0.3-3
uses: zephyrproject-rtos/action-backport@v1.1.1-3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
issue_labels: Backport
labels_template: '["Backport"]'
github_token: ${{ secrets.ZB_GITHUB_TOKEN }}
issue_labels: backport

View File

@@ -2,49 +2,29 @@ name: Backport Issue Check
on:
pull_request_target:
types:
- edited
- opened
- reopened
- synchronize
branches:
- v*-branch
permissions:
contents: read
jobs:
backport:
name: Backport Issue Check
concurrency:
group: backport-issue-check-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-24.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
issues: read # to check if associated issue exists for backport
runs-on: ubuntu-22.04
steps:
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}

View File

@@ -0,0 +1,29 @@
name: Publish Bluetooth Tests Results
on:
workflow_run:
workflows: ["Bluetooth Tests"]
types:
- completed
jobs:
bluetooth-test-results:
name: "Publish Bluetooth Test Results"
runs-on: ubuntu-20.04
if: github.event.workflow_run.conclusion != 'skipped'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: bluetooth-tests.yaml
run_id: ${{ github.event.workflow_run.id }}
- name: Publish Bluetooth Test Results
uses: EnricoMi/publish-unit-test-result-action@v1
with:
check_name: Bluetooth Test Results
comment_mode: off
commit: ${{ github.event.workflow_run.head_sha }}
event_file: event/event.json
event_name: ${{ github.event.workflow_run.event }}
files: "bluetooth-test-results/**/bsim_results.xml"

84 .github/workflows/bluetooth-tests.yaml vendored Normal file
View File

@@ -0,0 +1,84 @@
name: Bluetooth Tests
on:
pull_request:
paths:
- ".github/workflows/bluetooth-test*.yaml"
- "west.yml"
- "subsys/bluetooth/**"
- "tests/bluetooth/bsim_bt/**"
- "boards/posix/**"
- "soc/posix/**"
- "arch/posix/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bluetooth-test:
runs-on: ubuntu-20.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
bsim_bt_test_results_file: ./bsim_bt_out/bsim_results.xml
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
git remote -v
git rebase origin/${BASE_REF}
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Run Bluetooth Tests with BSIM
run: |
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_bt_out tests/bluetooth/bsim_bt/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_test_results_file} \
SEARCH_PATH=tests/bluetooth/bsim_bt/ tests/bluetooth/bsim_bt/run_parallel.sh
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v3
with:
name: bluetooth-test-results
path: |
./bsim_bt_out/bsim_results.xml
${{ github.event_path }}
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v3
with:
name: event
path: |
${{ github.event_path }}

View File

@@ -1,34 +0,0 @@
name: Publish BabbleSim Tests Results
on:
workflow_run:
workflows: ["BabbleSim Tests"]
types:
- completed
permissions:
contents: read
jobs:
bsim-test-results:
name: "Publish BabbleSim Test Results"
runs-on: ubuntu-24.04
if: github.event.workflow_run.conclusion != 'skipped'
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
with:
run_id: ${{ github.event.workflow_run.id }}
- name: Publish BabbleSim Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
with:
check_name: BabbleSim Test Results
comment_mode: off
commit: ${{ github.event.workflow_run.head_sha }}
event_file: event/event.json
event_name: ${{ github.event.workflow_run.event }}
files: "bsim-test-results/**/bsim_results.xml"

View File

@@ -1,212 +0,0 @@
name: BabbleSim Tests
on:
pull_request:
paths:
- ".github/workflows/bsim-tests.yaml"
- ".github/workflows/bsim-tests-publish.yaml"
- "west.yml"
- "subsys/bluetooth/**"
- "tests/bsim/**"
- "boards/nordic/nrf5*/*dt*"
- "dts/*/nordic/**"
- "tests/bluetooth/**"
- "samples/bluetooth/**"
- "boards/native/**"
- "soc/native/**"
- "arch/posix/**"
- "include/zephyr/arch/posix/**"
- "scripts/native_simulator/**"
- "samples/net/sockets/echo_*/**"
- "modules/hal_nordic/**"
- "modules/mbedtls/**"
- "modules/openthread/**"
- "subsys/net/l2/openthread/**"
- "include/zephyr/net/openthread.h"
- "drivers/ieee802154/**"
- "include/zephyr/net/ieee802154*"
- "drivers/serial/*nrfx*"
- "tests/drivers/uart/**"
- '!**.rst'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bsim-test:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
options: '--entrypoint /bin/bash'
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Environment Setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Check common triggering files
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-common-files
with:
files: |
.github/workflows/bsim-tests.yaml
.github/workflows/bsim-tests-publish.yaml
west.yml
boards/native/
soc/native/
arch/posix/
include/zephyr/arch/posix/
scripts/native_simulator/
tests/bsim/*
boards/nordic/nrf5*/*dt*
dts/*/nordic/
modules/mbedtls/**
modules/hal_nordic/**
- name: Check if Bluetooth files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-bluetooth-files
with:
files: |
samples/bluetooth/
subsys/bluetooth/
tests/bluetooth/
tests/bsim/bluetooth/
- name: Check if Networking files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-networking-files
with:
files: |
tests/bsim/net/
samples/net/sockets/echo_*/
modules/openthread/
subsys/net/l2/openthread/
include/zephyr/net/openthread.h
drivers/ieee802154/
include/zephyr/net/ieee802154*
- name: Check if UART files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-uart-files
with:
files: |
tests/bsim/drivers/uart/
drivers/serial/*nrfx*
tests/drivers/uart/
- name: Update BabbleSim to manifest revision
if: >
steps.check-bluetooth-files.outputs.any_modified == 'true'
|| steps.check-networking-files.outputs.any_modified == 'true'
|| steps.check-uart-files.outputs.any_modified == 'true'
|| steps.check-common-files.outputs.any_modified == 'true'
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Bluetooth Tests with BSIM
if: steps.check-bluetooth-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
run: |
tests/bsim/ci.bt.sh
- name: Run Networking Tests with BSIM
if: steps.check-networking-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
run: |
tests/bsim/ci.net.sh
- name: Run UART Tests with BSIM
if: steps.check-uart-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
run: |
tests/bsim/ci.uart.sh
- name: Merge Test Results
run: |
junitparser merge --glob "./bsim_*/*bsim_results.*.xml" "./twister-out/twister.xml" junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
with:
check_name: Bsim Test Results
files: "junit.xml"
comment_mode: off
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: event
path: |
${{ github.event_path }}

View File

@@ -8,34 +8,21 @@ name: Bug Snapshot
on:
workflow_dispatch:
schedule:
# Run daily at 00:05
- cron: '5 00 * * *'
permissions:
contents: read
branches: [ main ]
jobs:
make_bugs_pickle:
name: Make bugs pickle
runs-on: ubuntu-24.04
if: github.repository_owner == 'zephyrproject-rtos' && github.ref == 'refs/heads/main'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Snapshot bugs
env:
@@ -51,23 +38,17 @@ jobs:
echo "BUGS_PICKLE_PATH=${BUGS_PICKLE_PATH}" >> ${GITHUB_ENV}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
run: |
REPOSITORY_NAME="${GITHUB_REPOSITORY#*/}"
PUBLISH_BUCKET="builds.zephyrproject.org"
PUBLISH_DOMAIN="builds.zephyrproject.io"
if [ "${{github.event_name}}" = "schedule" ]; then
PUBLISH_ROOT="${REPOSITORY_NAME}/bug-snapshot/daily"
else
PUBLISH_ROOT="${REPOSITORY_NAME}/bug-snapshot"
fi
PUBLISH_ROOT="${{ github.event.repository.name }}/bug-snapshot"
PUBLISH_UPLOAD_URI="s3://${PUBLISH_BUCKET}/${PUBLISH_ROOT}/${BUGS_PICKLE_FILENAME}"
PUBLISH_DOWNLOAD_URI="https://${PUBLISH_DOMAIN}/${PUBLISH_ROOT}/${BUGS_PICKLE_FILENAME}"

View File

@@ -1,12 +1,6 @@
name: Build with Clang/LLVM
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -14,23 +8,23 @@ concurrency:
jobs:
clang-build:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: [1, 2]
platform: ["native_posix"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -40,21 +34,16 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -64,48 +53,47 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git clean -f -d
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config --global update.narrow true
west config manifest.group-filter -- +ci,+optional
# In some cases modules are left in a state where they can't be
# updated (i.e. when we cancel a job and the builder is killed),
# So first retry to update, if that does not work, remove all modules
# and start over. (Workaround until we implement more robust module
# west caching).
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
- name: Check Environment
run: |
cmake --version
${LLVM_TOOLCHAIN_PATH}/bin/clang --version
${CLANG_ROOT_DIR}/bin/clang --version
gcc --version
ls -la
- name: Install Python packages
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: Set up ccache
- name: ccache stats initial
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- name: Update BabbleSim to manifest revision
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
test -d github/home/.ccache && rm -rf /github/home/.ccache && mv github/home/.ccache /github/home/.ccache
ccache -M 10G -s
- name: Run Tests with Twister
id: twister
@@ -113,62 +101,49 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
./scripts/twister -p native_sim --force-color --inline-logs -M -N -v --retry-failed 2 \
-T tests --subset ${{matrix.subset}}/2 -j 16
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
- name: Print ccache stats
if: always()
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
run: |
ccache -s -vv
ccache -s
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
path: |
twister-out/twister.xml
twister-out/twister.json
if-no-files-found: ignore
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
clang-build-results:
name: "Publish Unit Tests Results"
needs: clang-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create GitHub annotations
if: (success() || failure())
runs-on: ubuntu-20.04
if: (success() || failure() ) && needs.clang-build.outputs.report_needed != 0
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v2
with:
path: artifacts
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/twister.xml junit.xml
junit2html junit.xml junit-clang.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
@@ -176,9 +151,10 @@ jobs:
junit-clang.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
uses: EnricoMi/publish-unit-test-result-action@v1
if: always()
with:
check_name: Unit Test Results
github_token: ${{ secrets.GITHUB_TOKEN }}
files: "**/twister.xml"
comment_mode: off

View File

@@ -1,14 +1,8 @@
name: Code Coverage with codecov
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
schedule:
- cron: '25 */3 * * 1-5'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -16,31 +10,19 @@ concurrency:
jobs:
codecov:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["mps2/an385", "native_sim", "qemu_x86", "unit_testing"]
include:
- platform: 'mps2/an385'
normalized: 'mps2_an385'
- platform: 'native_sim'
normalized: 'native_sim'
- platform: 'qemu_x86'
normalized: 'qemu_x86'
- platform: 'unit_testing'
normalized: 'unit_testing'
platform: ["native_posix", "qemu_x86", "unit_testing"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -50,12 +32,6 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
@@ -63,44 +39,48 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west update 1> west.update.log || west update 1> west.update-2.log
- name: Environment Setup
- name: Check Environment
run: |
cmake --version
${CLANG_ROOT_DIR}/bin/clang --version
gcc --version
ls -la
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Set up ccache
- name: Prepare ccache keys
id: ccache_cache_prop
shell: cmake -P {0}
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
with:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
test -d github/home/.ccache && mv github/home/.ccache /github/home/.ccache
ccache -M 10G -s
- name: Run Tests with Twister (Push)
continue-on-error: true
@@ -108,91 +88,54 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
./scripts/twister --save-tests ${{matrix.normalized}}-testplan.json
ls -la
./scripts/twister \
-i --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage \
-T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano \
--timeout-multiplier 2
./scripts/twister --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage -T tests
- name: Build Doxygen Coverage
if: matrix.platform == 'unit_testing'
- name: Generate Coverage Report
run: |
pip install -r doc/requirements.txt --require-hashes
sudo apt-get update
sudo apt-get install -y graphviz # dot is needed but currently missing from the Docker image
cmake -B doc/_build -S doc
cmake --build doc/_build --target doxygen-coverage
mv twister-out/coverage.info lcov.pre.info
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* \
--remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* \
--remove lcov.pre.info *generated* \
-o coverage/reports/${{ matrix.platform }}.info --rc lcov_branch_coverage=1
- name: Upload Doxygen Coverage Results
if: matrix.platform == 'unit_testing'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: doxygen-coverage-results
path: |
doc/_build/new.info
doc/_build/coverage-report
- name: Print ccache stats
if: always()
- name: ccache stats post
run: |
ccache -s -vv
- name: Rename coverage files
if: always()
run: |
mv twister-out/coverage.json coverage/reports/${{matrix.normalized}}.json
ccache -s
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.normalized }})
path: |
coverage/reports/${{ matrix.normalized }}.json
${{ matrix.normalized }}-testplan.json
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
codecov-results:
name: "Publish Coverage Results"
needs: codecov
runs-on: ubuntu-24.04
# the codecov job might be skipped, we don't need to run this job then
runs-on: ubuntu-latest
# the codecov job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v2
with:
path: coverage/reports
- name: Move coverage files
run: |
ls -lRt ./coverage/reports
mv ./coverage/reports/*/*testplan.json .
mv ./coverage/reports/*/coverage/reports/*.json ./coverage/reports
mv ./coverage/reports/*/*.info ./coverage/reports
ls -la ./coverage/reports
- name: Generate list of coverage files
id: get-coverage-files
shell: cmake -P {0}
run: |
file(GLOB INPUT_FILES_LIST "coverage/reports/*.json")
file(GLOB INPUT_FILES_LIST "coverage/reports/*.info")
set(MERGELIST "")
set(FILELIST "")
foreach(ITEM ${INPUT_FILES_LIST})
@@ -206,7 +149,7 @@ jobs:
foreach(ITEM ${INPUT_FILES_LIST})
get_filename_component(f ${ITEM} NAME)
if(MERGELIST STREQUAL "")
set(MERGELIST "--add-tracefile ${f}")
set(MERGELIST "-a ${f}")
else()
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
@@ -216,60 +159,17 @@ jobs:
- name: Merge coverage files
run: |
pushd ./coverage/reports
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --json merged.json
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --cobertura merged.xml
popd
sudo apt-get update
sudo apt-get install -y lcov
cd ./coverage/reports
lcov ${{ steps.get-coverage-files.outputs.mergefiles }} -o merged.info --rc lcov_branch_coverage=1
- name: Get current date
id: run_date
run: |
echo "run_date=$(date --iso-8601=minutes)" >> "$GITHUB_OUTPUT"
echo "run_date_short=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
echo "run_date_year=$(date +'%Y')" >> "$GITHUB_OUTPUT"
echo "run_date_month=$(date +'%m')" >> "$GITHUB_OUTPUT"
- name: Generate Coverage Report
- name: Upload coverage to Codecov
if: always()
run: |
python3 ./scripts/ci/coverage/coverage_analysis.py \
-t native_sim-testplan.json \
-m MAINTAINERS.yml \
-c coverage/reports/merged.json \
-o coverage-report-${{ steps.run_date.outputs.run_date_short }} \
-f all
cp coverage-report-* coverage/reports/
- name: Upload Merged Coverage Results and Report
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Coverage Data and report
path: |
coverage/reports/merged.json
coverage/reports/merged.xml
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.json
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.xlsx
- name: Upload test coverage to Codecov
if: always()
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@v2
with:
directory: ./coverage/reports
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/merged.xml
flags: unittests-coverage
- name: Upload Doxygen coverage to Codecov
if: always()
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
with:
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/doxygen-coverage-results/new.info
disable_search: true
flags: doxygen-coverage
files: merged.info


@@ -1,58 +0,0 @@
name: "CodeQL"
on:
push:
branches:
- main
- v*-branch
- collab-*
schedule:
- cron: '34 16 * * 6'
pull_request:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
jobs:
analyze:
name: Analyze (${{ matrix.language }})
runs-on: ubuntu-24.04
permissions:
security-events: write
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
- language: actions
build-mode: none
config: ./.github/codeql/codeql-actions-config.yml
- language: javascript-typescript
build-mode: none
config: ./.github/codeql/codeql-js-config.yml
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Initialize CodeQL
uses: github/codeql-action/init@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
queries: security-extended
config-file: ${{ matrix.config }}
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo "nothing yet"
exit 0
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
category: "/language:${{matrix.language}}"


@@ -2,37 +2,37 @@ name: Coding Guidelines
on: pull_request
permissions:
contents: read
jobs:
compliance_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: cache-pip
uses: actions/cache@v3
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install Python packages
- name: Install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install unidiff
pip3 install wheel
pip3 install sh
- name: Install Packages
run: |
sudo apt-get update
sudo apt-get install coccinelle
sudo apt-get install ocaml-base-nox
wget https://launchpad.net/~npalix/+archive/ubuntu/coccinelle/+files/coccinelle_1.0.8~20.04npalix1_amd64.deb
sudo dpkg -i coccinelle_1.0.8~20.04npalix1_amd64.deb
- name: Run Coding Guidelines Checks
- name: Run Coding Guildeines Checks
continue-on-error: true
id: coding_guidelines
env:
@@ -42,10 +42,7 @@ jobs:
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
source zephyr-env.sh
# debug
ls -la


@@ -1,19 +1,26 @@
name: Compliance Checks
on:
pull_request:
types:
- edited
- opened
- reopened
- synchronize
permissions:
contents: read
on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-20.04
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Run Maintainers Script
id: maintainer
env:
BASE_REF: ${{ github.base_ref }}
run: |
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -21,55 +28,36 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase onto the target branch
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic junitparser==1.6.3 gitlint pylint pykwalify
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
git remote -v
# Ensure there's no merge commits in the PR
[[ "$(git rev-list --merges --count origin/${BASE_REF}..)" == "0" ]] || \
(echo "::error ::Merge commits not allowed, rebase instead";false)
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
# debug
git log --pretty=oneline | head -n 10
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west config manifest.group-filter -- +ci,-optional
west update -o=--depth=1 -n 2>&1 1> west.update.log || west update -o=--depth=1 -n 2>&1 1> west.update2.log
- name: Setup Node.js
uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
with:
node-version: "lts/*"
cache: npm
check-latest: true
cache-dependency-path: ./scripts/ci/package-lock.json
- name: Install Node dependencies
run: npm --prefix ./scripts/ci ci
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Run Compliance Tests
continue-on-error: true
@@ -81,58 +69,32 @@ jobs:
# debug
ls -la
git log --pretty=oneline | head -n 10
# Increase rename limit to allow for large PRs
git config diff.renameLimit 10000
excludes="-e KconfigBasic -e SysbuildKconfigBasic -e ClangFormat"
# The signed-off-by check for dependabot should be skipped
if [ "${{ github.actor }}" == "dependabot[bot]" ]; then
excludes="$excludes -e Identity"
fi
./scripts/ci/check_compliance.py --annotate $excludes -c origin/${BASE_REF}..
./scripts/ci/check_compliance.py -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
continue-on-error: true
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: compliance.xml
path: compliance.xml
- name: Upload dts linter patch
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
continue-on-error: true
if: hashFiles('dts_linter.patch') != ''
with:
name: dts_linter.patch
path: dts_linter.patch
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
warns=("ClangFormat" "LicenseAndCopyrightCheck")
files=($(./scripts/ci/check_compliance.py -l))
for file in "${files[@]}"; do
f="${file}.txt"
if [[ -s $f ]]; then
results=$(cat $f)
results="${results//'%'/'%25'}"
results="${results//$'\n'/'%0A'}"
results="${results//$'\r'/'%0D'}"
if [[ "${warns[@]}" =~ "${file}" ]]; then
echo "::warning file=${f}::$results"
else
echo "::error file=${f}::$results"
exit=1
fi
for file in Nits.txt checkpatch.txt Identity.txt Gitlint.txt pylint.txt Devicetree.txt Kconfig.txt; do
if [[ -s $file ]]; then
errors=$(cat $file)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=${file}::$errors"
exit=1
fi
done
if [ "${exit}" == "1" ]; then
echo "Compliance error, check for error messages in the \"Run Compliance Tests\" step"
echo "You can run this step locally with the ./scripts/ci/check_compliance.py script."
exit 1;
fi
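The ::warning and ::error strings printed by the loop above are GitHub Actions workflow commands: anything echoed in that form becomes an annotation on the run, and because each command must fit on a single line, percent, newline and carriage-return characters are escaped beforehand. A minimal standalone step showing the same mechanism; the messages are invented and the file names are simply reused from the check list above:

      - name: Annotation demo (illustrative only)
        run: |
          # A one-line warning annotation attached to a report file.
          echo "::warning file=ClangFormat.txt::Formatting differs from clang-format output"
          # Multi-line content has to be escaped first: '%' -> %25, newline -> %0A,
          # carriage return -> %0D, which is what the bash substitutions above do.
          echo "::error file=checkpatch.txt::First finding%0ASecond finding"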

.github/workflows/conflict.yml vendored Normal file

@@ -0,0 +1,14 @@
name: Conflict Finder
on:
push:
branches-ignore:
- '**'
jobs:
conflict:
runs-on: ubuntu-latest
steps:
- uses: mschilde/auto-label-merge-conflicts@v2
with:
CONFLICT_LABEL_NAME: "has conflicts"
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -5,43 +5,33 @@ name: Publish commit for daily testing
on:
schedule:
- cron: '50 22 * * *'
- cron: '50 22 * * *'
push:
branches:
- refs/tags/*
permissions:
contents: read
- refs/tags/*
jobs:
get_version:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .


@@ -20,32 +20,55 @@ on:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
permissions:
contents: read
jobs:
devicetree-checks:
name: Devicetree script tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest pyyaml tox
- name: run tox
working-directory: scripts/dts/python-devicetree
run: |

.github/workflows/do_not_merge.yml vendored Normal file

@@ -0,0 +1,17 @@
name: Do Not Merge
on:
pull_request:
types: [synchronize, opened, reopened, labeled, unlabeled]
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') }}
name: Prevent Merging
runs-on: ubuntu-20.04
steps:
- name: Check for label
run: |
echo "Pull request is labeled as 'DNM'"
echo "This workflow fails so that the pull request cannot be merged"
exit 1


@@ -4,127 +4,74 @@
name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
push:
branches:
- main
tags:
- v*
pull_request:
permissions:
contents: read
paths:
- 'doc/**'
- '**.rst'
- 'include/**'
- 'kernel/include/kernel_arch_interface.h'
- 'lib/libc/**'
- 'subsys/testsuite/ztest/include/**'
- 'tests/**'
- '**/Kconfig*'
- 'west.yml'
- '.github/workflows/doc-build.yml'
- 'scripts/dts/**'
- 'scripts/requirements-doc.txt'
env:
DOXYGEN_VERSION: 1.15.0
DOXYGEN_SHA256SUM: 0ec2e5b2c3cd82b7106d19cb42d8466450730b8cb7a9e85af712be38bf4523a1
JOB_COUNT: 8
# NOTE: west docstrings will be extracted from the version listed here
WEST_VERSION: 0.13.0a1
# The latest CMake available directly with apt is 3.18, but we need >=3.20
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.9.4
jobs:
doc-file-check:
name: Check for doc changes
runs-on: ubuntu-24.04
outputs:
file_check: ${{ steps.check-doc-files.outputs.any_modified }}
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Check if Documentation related files changed
uses: tj-actions/changed-files@e0021407031f5be11a464abee9a0776171c79891 # v47.0.1
id: check-doc-files
with:
files: |
doc/
boards/**/doc/
**.rst
include/
kernel/include/kernel_arch_interface.h
lib/libc/**
subsys/testsuite/ztest/include/**
**/Kconfig*
west.yml
scripts/dts/
doc/requirements.txt
.github/workflows/doc-build.yml
scripts/pylib/pytest-twister-harness/src/twister_harness/device/device_adapter.py
scripts/pylib/pytest-twister-harness/src/twister_harness/helpers/shell.py
doc-build-html:
name: "Documentation Build (HTML)"
needs: [doc-file-check]
if: >
needs.doc-file-check.outputs.file_check == 'true' || github.event_name != 'pull_request'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
options: '--entrypoint /bin/bash'
timeout-minutes: 60
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
env:
BASE_REF: ${{ github.base_ref }}
steps:
- name: checkout
uses: actions/checkout@v3
- name: Print cloud service information
- name: install-pkgs
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
sudo apt-get update
sudo apt-get install -y ninja-build graphviz libclang1-9 libclang-cpp9
wget --no-verbose https://downloads.sourceforge.net/project/doxygen/rel-${DOXYGEN_VERSION}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz
tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz
echo "${PWD}/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: cache-pip
uses: actions/cache@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: Environment Setup
- name: install-pip
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
sudo pip3 install -U setuptools wheel pip
pip3 install -r scripts/requirements-doc.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
west init -l . || true
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages required for documentation build
- name: west setup
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip install -r doc/requirements.txt --require-hashes
west init -l .
- name: Build HTML documentation
shell: bash
- name: build-docs
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
DOC_TAG="release"
@@ -138,52 +85,30 @@ jobs:
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-j ${JOB_COUNT} -W --keep-going -T" \
SPHINXOPTS_EXTRA="-q -t publish" \
make -C doc ${DOC_TARGET}
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W -t publish" make -C doc ${DOC_TARGET}
# API documentation coverage
python3 -m coverxygen --xml-dir doc/_build/html/doxygen/xml/ --src-dir include/ --output doc-coverage.info
# deprecated page causing issues
lcov --remove doc-coverage.info \*/deprecated > new.info
genhtml --no-function-coverage --no-branch-coverage new.info -o coverage-report
- name: Compress documentation build artifacts
- name: compress-docs
run: |
tar --use-compress-program="xz -T0" -cf html-output.tar.xz --exclude html/_sources --exclude html/doxygen/xml --directory=doc/_build html
tar --use-compress-program="xz -T0" -cf api-output.tar.xz --directory=doc/_build html/doxygen/html
tar --use-compress-program="xz -T0" -cf api-coverage.tar.xz coverage-report
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: Upload HTML output
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
- name: upload-build
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: Upload Doxygen coverage artifacts
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: api-coverage
path: api-coverage.tar.xz
- name: Summarize PR documentation URLs
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
API_DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/doxygen/html/"
API_COVERAGE_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/api-coverage/"
echo "${PR_NUM}" > pr_num
echo "Documentation will be available shortly at: ${DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Documentation will be available shortly at: ${API_DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Coverage Report will be available shortly at: ${API_COVERAGE_URL}" >> $GITHUB_STEP_SUMMARY
- name: Upload PR number
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
@@ -191,60 +116,46 @@ jobs:
doc-build-pdf:
name: "Documentation Build (PDF)"
needs: [doc-file-check]
if: |
github.event_name != 'pull_request'
runs-on: ubuntu-24.04
timeout-minutes: 120
runs-on: ubuntu-20.04
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: doc/requirements.txt
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends graphviz librsvg2-bin \
texlive-latex-base texlive-latex-extra latexmk \
texlive-fonts-recommended texlive-fonts-extra texlive-xetex \
imagemagick fonts-noto xindy
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
echo "${DOXYGEN_SHA256SUM} doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz" | sha256sum -c
if [ $? -ne 0 ]; then
echo "Failed to verify doxygen tarball"
exit 1
fi
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
- name: cache-pip
uses: actions/cache@v3
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
enable-ccache: false
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: install-pip-pkgs
working-directory: zephyr
- name: setup-venv
run: |
pip install -r doc/requirements.txt --require-hashes
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
pip3 install -r scripts/requirements-doc.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
working-directory: zephyr
continue-on-error: true
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
DOC_TAG="release"
@@ -252,28 +163,10 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-q -j ${JOB_COUNT}" \
LATEXMKOPTS="-quiet -halt-on-error" \
make -C doc pdf
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: pdf-output
if-no-files-found: ignore
path: |
zephyr/doc/_build/latex/zephyr.pdf
zephyr/doc/_build/latex/zephyr.log
doc-build-status-check:
if: always()
name: "Documentation Build Status"
needs:
- doc-build-pdf
- doc-file-check
- doc-build-html
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}
path: doc/_build/latex/zephyr.pdf


@@ -10,13 +10,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -24,64 +21,43 @@ jobs:
steps:
- name: Download artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Load PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
let fs = require("fs");
let pr_number = Number(fs.readFileSync("./pr_num/pr_num"));
core.exportVariable("PR_NUM", pr_number);
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
id: check-pr
uses: carpentries/actions/check-valid-pr@083bb9952b1414bd2b9e10ecec1717c938aba4c5 # v0.17.0
uses: carpentries/actions/check-valid-pr@v0.8
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: |
steps.download-artifacts.outputs.found_artifact == 'true' &&
steps.check-pr.outputs.VALID != 'true'
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
if: steps.download-artifacts.outputs.found_artifact == 'true'
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/api-coverage \
--delete
fi


@@ -13,13 +13,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -27,7 +24,7 @@ jobs:
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
@@ -35,15 +32,12 @@ jobs:
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
@@ -58,7 +52,4 @@ jobs:
aws s3 sync --quiet html-output/html s3://docs.zephyrproject.org/${VERSION} --delete
aws s3 sync --quiet html-output/html/doxygen/html s3://docs.zephyrproject.org/apidoc/${VERSION} --delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ s3://docs.zephyrproject.org/api-coverage/${VERSION} --delete
fi
aws s3 cp --quiet pdf-output/zephyr.pdf s3://docs.zephyrproject.org/${VERSION}/zephyr.pdf


@@ -5,32 +5,28 @@ on:
- '.github/workflows/errno.yml'
- 'lib/libc/minimal/include/errno.h'
- 'scripts/ci/errno.py'
- 'SDK_VERSION'
permissions:
contents: read
jobs:
check-errno:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
runs-on: ubuntu-20.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
west-group-filter: -hal,-tools,-bootloader,-babblesim
west-project-filter: -nrf_hw_models
enable-ccache: false
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v3
- name: Run errno.py
working-directory: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
export ZEPHYR_BASE=${PWD}
./scripts/ci/errno.py


@@ -1,41 +1,34 @@
name: Footprint Tracking
# Run every 12 hours and on tags
on:
schedule:
- cron: '50 00 * * *'
- cron: '50 1/12 * * *'
push:
paths:
- 'VERSION'
- '.github/workflows/footprint-tracking.yml'
branches:
- main
- v*-branch
tags:
# only publish v* tags, do not care about zephyr-v* which point to the
# same commit
- 'v*'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
defaults:
run:
shell: bash
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
@@ -46,42 +39,19 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install packages
- name: Install pip packages
run: |
sudo apt-get update
sudo apt-get install -y python3-venv
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Environment Setup
run: |
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: west setup
run: |
west init -l . || true
@@ -89,10 +59,10 @@ jobs:
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ secrets.FOOTPRINT_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.FOOTPRINT_AWS_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint
@@ -101,38 +71,4 @@ jobs:
run: |
export ZEPHYR_BASE=${PWD}
./scripts/footprint/track.py -p scripts/footprint/plan.txt
- name: Upload footprint data
run: |
python3 -m venv .venv
. .venv/bin/activate
aws s3 sync --quiet footprint_data/ s3://testing.zephyrproject.org/footprint_data/
- name: Transform Footprint data to Twister JSON reports
run: |
shopt -s globstar
export ZEPHYR_BASE=${PWD}
python3 ./scripts/footprint/pack_as_twister.py -vvv \
--plan ./scripts/footprint/plan.txt \
--test-name='name.feature' \
./footprint_data/**/
- name: Upload to ElasticSearch
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
ELASTICSEARCH_INDEX: ${{ vars.FOOTPRINT_TRACKING_INDEX }}
run: |
shopt -s globstar
run_date=`date --iso-8601=minutes`
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--flatten footprint \
--flatten-list-names "{'children':'name'}" \
--transform "{ 'footprint_name': '^(?P<footprint_area>([^\/]+\/){0,2})(?P<footprint_path>([^\/]*\/)*)(?P<footprint_symbol>[^\/]*)$' }" \
--run-id "${{ github.run_id }}" \
--run-attempt "${{ github.run_attempt }}" \
--run-workflow "footprint-tracking:${{ github.event_name }}" \
--run-branch "${{ github.ref_name }}" \
-i ${ELASTICSEARCH_INDEX} \
./footprint_data/**/twister_footprint.json
#

.github/workflows/footprint.yml vendored Normal file

@@ -0,0 +1,72 @@
name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-delta:
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
- name: Detect Changes in Footprint
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=${PWD}
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
git rebase origin/${BASE_REF}
git checkout -b this_pr
west update
west build -b frdm_k64f tests/benchmarks/footprints -t ram_report
cp build/ram.json ram2.json
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/rom.json rom2.json
git checkout origin/${BASE_REF}
west update
west build -p always -b frdm_k64f tests/benchmarks/footprints -t ram_report
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/ram.json ram1.json
cp build/rom.json rom1.json
git checkout this_pr
./scripts/footprint/fpdiff.py ram1.json ram2.json
./scripts/footprint/fpdiff.py rom1.json rom2.json


@@ -1,64 +0,0 @@
name: Greet first time contributor
on:
issues:
types: [opened]
pull_request_target:
types: [opened, closed]
permissions:
contents: read
jobs:
check_for_first_interaction:
runs-on: ubuntu-24.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on pull requests
issues: write # to comment on issues
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: zephyrproject-rtos/action-first-interaction@58853996b1ac504b8e0f6964301f369d2bb22e5c # v1.1.1+zephyr.6
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: >
Hi @${{github.event.issue.user.login}}! We appreciate you submitting your first issue
for our open-source project. 🌟
Even though I'm a bot, I can assure you that the whole community is genuinely grateful
for your time and effort. 🤖💙
pr-opened-message: >
Hello @${{ github.event.pull_request.user.login }}, and thank you very much for your
first pull request to the Zephyr project!
Our Continuous Integration pipeline will execute a series of checks on your Pull Request
commit messages and code, and you are expected to address any failures by updating the PR.
Please take a look at [our commit message guidelines](https://docs.zephyrproject.org/latest/contribute/guidelines.html#commit-message-guidelines)
to find out how to format your commit messages, and at [our contribution workflow](https://docs.zephyrproject.org/latest/contribute/guidelines.html#contribution-workflow)
to understand how to update your Pull Request.
If you haven't already, please make sure to review the project's [Contributor
Expectations](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html)
and update (by amending and force-pushing the commits) your pull request if necessary.
If you are stuck or need help please join us on [Discord](https://chat.zephyrproject.org/)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-technical-escalation)
when applicable. 😊
pr-merged-message: >
Hi @${{ github.event.pull_request.user.login }}!
Congratulations on getting your very first Zephyr pull request merged 🎉🥳. This is a
fantastic achievement, and we're thrilled to have you as part of our community!
To celebrate this milestone and showcase your contribution, we'd love to award you the
Zephyr Technical Contributor badge. If you're interested, please claim your badge by
filling out this form: [Claim Your Zephyr Badge](https://forms.gle/oCw9iAPLhUsHTapc8).
Thank you for your valuable input, and we look forward to seeing more of your
contributions in the future! 🪁


@@ -1,96 +0,0 @@
name: Hello World (Multiplatform)
on:
push:
branches:
- main
- v*-branch
- collab-*
pull_request:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/**'
- '.github/workflows/hello_world_multiplatform.yaml'
- 'SDK_VERSION'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
build:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, ubuntu-24.04, ubuntu-24.04-arm, macos-14, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
fetch-depth: 0
- name: Rebase
if: github.event_name == 'pull_request'
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
working-directory: zephyr
shell: bash
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: arm-zephyr-eabi:riscv64-zephyr-elf
ccache-cache-key: hw-${{ matrix.os }}
- name: Build firmware
working-directory: zephyr
shell: bash
run: |
if [ "${{ runner.os }}" = "macOS" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --build-only"
elif [ "${{ runner.os }}" = "Windows" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --short-build-path -O/tmp/twister-out"
elif [ "${{ runner.os }}-${{ runner.arch }}" == "Linux-ARM64" ]; then
EXTRA_TWISTER_FLAGS="--exclude-platform native_sim/native"
fi
west twister \
-p native_sim -p qemu_cortex_m0 -p qemu_riscv32 -p qemu_riscv64 \
--runtime-artifact-cleanup \
--force-color \
--inline-logs \
-T samples/hello_world \
-T samples/cpp/hello_world \
-v \
$EXTRA_TWISTER_FLAGS
- name: Upload artifacts
if: failure()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
if-no-files-found: ignore
path:
zephyr/twister-out/*/samples/hello_world/sample.basic.helloworld/build.log
zephyr/twister-out/*/samples/cpp/hello_world/sample.cpp.helloworld/build.log


@@ -2,10 +2,7 @@ name: Issue Tracker
on:
schedule:
- cron: '10 1/4 * * *'
permissions:
contents: read
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
@@ -17,7 +14,7 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -30,7 +27,7 @@ jobs:
sudo apt-get update
sudo apt-get install discount
- uses: brcrista/summarize-issues@54c549b7d38b7db39e5c6e06fd9617e12e5c3491 # v4
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
@@ -38,17 +35,17 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
continue-on-error: true
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@61815dcd50bd041e203e49132bacad1fd04d2708 # v5.1.1
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: Post Results

.github/workflows/labeler.yml vendored Normal file

@@ -0,0 +1,12 @@
name: 'Pull Request Labeler'
on:
- pull_request_target
jobs:
labeler:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2.1.1
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'
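The labeler action applies labels according to a mapping it reads from a separate configuration file, conventionally .github/labeler.yml, which is not part of this diff. As a rough sketch of the v2-era format, with label names and path globs that are purely illustrative and not taken from the Zephyr tree:

# .github/labeler.yml (illustrative sketch only)
"area: Documentation":
  - doc/**/*
"area: Devicetree":
  - dts/**/*
  - scripts/dts/**/*
"area: Continuous Integration":
  - .github/workflows/*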


@@ -2,25 +2,20 @@ name: Scancode
on: [pull_request]
permissions:
contents: read
jobs:
scancode_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
uses: actions/checkout@v1
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@23ef91ce31cd4b954366a7b71eea47520da9b380 # v4
uses: zephyrproject-rtos/action_scancode@v4
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts


@@ -1,45 +1,29 @@
name: Manifest
on:
pull_request_target:
permissions:
contents: read
paths:
- 'west.yml'
jobs:
contribs:
runs-on: ubuntu-24.04
permissions:
pull-requests: write # to create/update pull request comments
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
working-directory: zephyrproject/zephyr
run: |
pip install -r scripts/requirements-west.txt --require-hashes
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Manifest
uses: zephyrproject-rtos/action-manifest@09983f53d3d878791aa37a7755ae44d695f4c1e5 # v2.0.0
uses: zephyrproject-rtos/action-manifest@2f1ad2908599d4fe747f886f9d733dd7eebae4ef
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-token: ${{ secrets.ZB_GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
use-tree-checkout: 'true'
check-impostor-commits: 'true'
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest'
dnm-labels: 'DNM (manifest)'
blobs-added-labels: 'Binary Blobs Added'
blobs-modified-labels: 'Binary Blobs Modified'
labels: 'manifest, west'
dnm-labels: 'DNM'


@@ -1,19 +0,0 @@
name: Check SHA-pinned GitHub Actions
on:
pull_request:
paths:
- '.github/workflows/**'
permissions:
contents: read
jobs:
check-sha-pinned-actions:
name: Verify GitHub Actions
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@6124774845927d14c601359ab8138699fa5b70c3 # v4.0.1


@@ -1,46 +0,0 @@
name: PR Metadata Check
on:
pull_request:
types:
- synchronize
- opened
- reopened
- labeled
- unlabeled
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
do-not-merge:
name: Prevent Merging
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run the check script
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python -u scripts/ci/do_not_merge.py \
-p "${{ github.event.pull_request.number }}" \
-o "${{ github.event.repository.owner.login }}" \
-r "${{ github.event.repository.name }}"


@@ -1,53 +0,0 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Misc. Pylib Scripts TestSuite
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
permissions:
contents: read
jobs:
pylib-tests:
name: Misc. Pylib Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run pytest for build_helpers
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run build_helpers tests"
PYTHONPATH=./scripts/tests pytest ./scripts/tests/build_helpers


@@ -1,31 +0,0 @@
name: ready to merge
on:
workflow_call:
inputs:
needs_context:
type: string
required: true
permissions:
contents: read
jobs:
all_jobs_passed:
name: all jobs passed
runs-on: ubuntu-latest
steps:
- name: "Check status of all required jobs"
run: |-
NEEDS_CONTEXT='${{ inputs.needs_context }}'
JOB_IDS=$(echo "$NEEDS_CONTEXT" | jq -r 'keys[]')
for JOB_ID in $JOB_IDS; do
RESULT=$(echo "$NEEDS_CONTEXT" | jq -r ".[\"$JOB_ID\"].result")
echo "$JOB_ID job result: $RESULT"
if [[ $RESULT != "success" && $RESULT != "skipped" ]]; then
echo "***"
echo "Error: The $JOB_ID job did not pass."
exit 1
fi
done
echo "All jobs passed or were skipped."


@@ -4,18 +4,12 @@ on:
push:
tags:
- 'v*'
- '!v*rc*'
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-24.04
permissions:
contents: write # to create GitHub release entry
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -23,39 +17,42 @@ jobs:
id: get_version
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
echo "TRIMMED_VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@676e2d560c9a403aa252096d99fcab3e1132b0f5 # v6.0.0
uses: fsfe/reuse-action@v1
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
continue-on-error: true
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
path: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: Create empty release notes body
- name: Get Diff since last tag
run: |
echo "TODO: add release overview and notes link" > release-notes.txt
oldtag=$(git describe --abbrev=0 ${{ github.ref }}^)
echo "Changes since ${oldtag}:" > release-notes.txt
echo "" >> release-notes.txt
echo "" >> release-notes.txt
git shortlog ${oldtag}..${{ github.ref }} >> release-notes.txt
- name: Create Release
id: create_release
uses: actions/create-release@0cb9c9b65d5d1901c1f53e5e66eaf4afd303e70e # v1.1.4
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Zephyr ${{ steps.get_version.outputs.TRIMMED_VERSION }}
release_name: Zephyr ${{ github.ref }}
body_path: release-notes.txt
draft: true
prerelease: true
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@e8f9f06c4b078e705bd2ea027f0926603fc9b4d5 # v1.0.2
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:


@@ -1,61 +0,0 @@
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecards supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '43 7 * * 6'
push:
branches:
- main
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed for Code scanning upload
security-events: write
# Needed for GitHub OIDC token if publish_results is true
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
with:
results_file: results.sarif
results_format: sarif
# Publish results to OpenSSF REST API for easy access by consumers.
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@cdefb33c0f6224e58673d9004f47f7cb3e328b89 # v4.31.10
with:
sarif_file: results.sarif


@@ -1,70 +0,0 @@
# Copyright 2023 Google LLC
# SPDX-License-Identifier: Apache-2.0
name: Scripts tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
permissions:
contents: read
jobs:
scripts-tests:
name: Scripts tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase
continue-on-error: true
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run pytest
env:
ZEPHYR_BASE: ./
run: |
echo "Run script tests"
pytest ./scripts/build


@@ -1,33 +0,0 @@
name: Stale Workflow Queue Cleanup
on:
workflow_dispatch:
schedule:
# everyday at 15:00
- cron: '0 15 * * *'
permissions:
contents: read
concurrency:
group: stale-workflow-queue-cleanup
cancel-in-progress: true
jobs:
cleanup:
name: Cleanup
runs-on: ubuntu-24.04
if: github.ref == 'refs/heads/main'
permissions:
actions: write # to delete stale workflow runs
steps:
- name: Delete stale queued workflow runs
uses: MajorScruffy/delete-old-workflow-runs@78b5af714fefaefdf74862181c467b061782719e # v0.3.0
with:
repository: ${{ github.repository }}
# Remove any workflow runs in "queued" state for more than 1 day
older-than-seconds: 86400
status: queued
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -3,33 +3,21 @@ on:
schedule:
- cron: "16 00 * * *"
permissions:
contents: read
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-24.04
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on stale pull requests
issues: write # to comment on stale issues
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@v3
with:
stale-pr-message: 'This pull request has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
would like to have the label removed otherwise this pull request will automatically be
closed in 14 days. Note, that you can always re-open a closed pull request at any time.'
stale-issue-message: 'This issue has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
would like to have the label removed otherwise this issue will automatically be closed in
14 days. Note, that you can always re-open a closed issue at any time.'
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: 'This pull request has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this pull request will automatically be closed in 14 days. Note, that you can always re-open a closed pull request at any time.'
stale-issue-message: 'This issue has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this issue will automatically be closed in 14 days. Note, that you can always re-open a closed issue at any time.'
days-before-stale: 60
days-before-close: 14
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process,Coverity'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process'
operations-per-run: 400


@@ -1,38 +0,0 @@
name: Merged PR stats
on:
pull_request_target:
branches:
- main
- v*-branch
types: [closed]
permissions:
contents: read
jobs:
record_merged:
if: github.event.pull_request.merged == true && github.repository == 'zephyrproject-rtos/zephyr'
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: PR event
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
PR_STAT_ES_INDEX: ${{ vars.PR_STAT_ES_INDEX }}
run: |
python3 ./scripts/ci/stats/merged_prs.py --pull-request ${{ github.event.pull_request.number }} --repo ${{ github.repository }}
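
The job above depends on the closed pull_request_target event combined with a merged == true guard. A minimal sketch of just that trigger/guard pattern, trimmed down from the hunk above (the echo step is only a placeholder):

    on:
      pull_request_target:
        types: [closed]          # fires when a PR is closed, whether merged or not
    jobs:
      record_merged:
        # only proceed for PRs that were actually merged, and only in the upstream repo
        if: github.event.pull_request.merged == true && github.repository == 'zephyrproject-rtos/zephyr'
        runs-on: ubuntu-24.04
        steps:
          - run: echo "PR ${{ github.event.pull_request.number }} was merged"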

@@ -1,64 +0,0 @@
name: Publish Twister Test Results
on:
workflow_run:
workflows: ["Run tests with twister"]
branches:
- main
types:
- completed
permissions:
contents: read
jobs:
upload-to-elasticsearch:
if: |
github.repository == 'zephyrproject-rtos/zephyr' &&
github.event.workflow_run.event != 'pull_request'
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
runs-on: ubuntu-24.04
steps:
# Needed for elasticearch and upload script
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
with:
path: artifacts
workflow: twister.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Upload to elasticsearch
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event.workflow_run.event}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event.workflow_run.event}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi
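
A condensed sketch of the workflow_run chaining used above: the publishing workflow runs after "Run tests with twister" completes and pulls that run's artifacts by run id, skipping quietly when none were produced. All action names, inputs, and outputs are taken from this hunk; the final echo is only a placeholder.

    on:
      workflow_run:
        workflows: ["Run tests with twister"]
        types: [completed]
    jobs:
      upload-to-elasticsearch:
        runs-on: ubuntu-24.04
        steps:
          - name: Download Artifacts
            id: download-artifacts
            uses: dawidd6/action-download-artifact@0bd50d53a6d7fb5cb921e607957e9cc12b4ce392 # v12
            with:
              path: artifacts
              workflow: twister.yml
              run_id: ${{ github.event.workflow_run.id }}   # artifacts of the triggering run
              if_no_artifact_found: ignore                   # do not fail when nothing was uploaded
          - name: Upload to elasticsearch
            if: steps.download-artifacts.outputs.found_artifact == 'true'
            run: echo "would upload artifacts/*/*/twister.json here"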

@@ -5,18 +5,13 @@ on:
branches:
- main
- v*-branch
- collab-*
pull_request:
pull_request_target:
branches:
- main
- v*-branch
- collab-*
schedule:
# Run at 02:00 UTC on every Sunday
- cron: '0 2 * * 0'
permissions:
contents: read
# Run at 00:00 on Wednesday and Saturday
- cron: '0 0 * * 3,6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -24,67 +19,65 @@ concurrency:
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-24.04
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
fullrun: ${{ steps.output-services.outputs.fullrun }}
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 20
WEEKLY_MATRIX_SIZE: 200
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TESTS_PER_BUILDER: 900
PUSH_MATRIX_SIZE: 15
DAILY_MATRIX_SIZE: 80
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: zephyr
persist-credentials: false
- name: Set up Python
if: github.event_name == 'pull_request'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: install-packages
working-directory: zephyr
if: github.event_name == 'pull_request'
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Setup Zephyr project
if: github.event_name == 'pull_request'
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
enable-ccache: false
- name: Environment Setup
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
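
The single-line west update fallback above is easier to follow when written out. A sketch of the same retry logic as a standalone step, assuming the same /github/cache/zephyrproject path cache; the behaviour is unchanged (two attempts against the cache, then a clean re-fetch of modules, bootloader and tools):

    - name: Update west projects (with retry and clean fallback)
      run: |
        update() {
          west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log
        }
        # first attempt, then a plain retry, then wipe the checked-out projects and retry once more
        update || update || {
          rm -rf ../modules ../bootloader ../tools
          west update --path-cache /github/cache/zephyrproject
        }
        west forall -c 'git reset --hard HEAD'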
- name: Generate Test Plan with Twister
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
export ZEPHYR_BASE=${PWD}
@@ -100,62 +93,49 @@ jobs:
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" -a "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
subset="[$(seq -s',' 1 ${WEEKLY_MATRIX_SIZE})]"
size=${WEEKLY_MATRIX_SIZE}
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
twister-build:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: ${{fromJSON(needs.twister-build-prep.outputs.subset)}}
timeout-minutes: 1440
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --test-config tests/test_config_ci.yaml --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
WEEKLY_OPTIONS: ' -M --build-only --all --show-footprint --report-filtered -j 32'
PR_OPTIONS: ' --clobber-output --integration -j 16'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint --report-filtered -j 16'
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.14.2
CLANG_ROOT_DIR: /usr/lib/llvm-12
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 '
DAILY_OPTIONS: ' -M --build-only --all'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
steps:
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
@@ -167,11 +147,11 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -179,62 +159,58 @@ jobs:
- name: Environment Setup
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
west init -l . || true
west config manifest.group-filter -- +ci,+optional,+testing
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Check Environment
run: |
cmake --version
${CLANG_ROOT_DIR}/bin/clang --version
gcc --version
cargo --version
rustup target list --installed
ls -la
echo "github.ref: ${{ github.ref }}"
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Install Python packages
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
west packages pip --install
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
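
For readers unfamiliar with the ccache invocations above, the same setup step as a sketch with each flag annotated (the 10G limit and CCACHE_DIR value come from the env block earlier in this hunk):

    - name: Set up ccache
      run: |
        mkdir -p ${CCACHE_DIR}   # CCACHE_DIR comes from the env block above
        ccache -M 10G            # cap the cache size at 10G
        ccache -p                # print the effective configuration into the job log
        ccache -z -s -vv         # reset statistics, then print a verbose baseline snapshot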
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
continue-on-error: true
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: Update BabbleSim to manifest revision
- name: ccache stats initial
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
test -d github/home/.ccache && rm -rf /github/home/.ccache && mv github/home/.ccache /github/home/.ccache
ccache -M 10G -s
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
id: run_twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
@@ -246,9 +222,8 @@ jobs:
fi
fi
- if: github.event_name == 'pull_request'
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
id: run_twister_pr
run: |
rm -f testplan.json
export ZEPHYR_BASE=${PWD}
@@ -263,140 +238,64 @@ jobs:
fi
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Weekly)
id: run_twister_sched
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${DAILY_OPTIONS}
fi
fi
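
The --subset option above shards the generated test plan across the matrix. Here is the same scheduled step again, as a sketch with the sharding spelled out in comments (using the WEEKLY_OPTIONS variant from the newer side of this hunk):

    - if: github.event_name == 'schedule'
      name: Run Tests with Twister (scheduled)
      run: |
        export ZEPHYR_BASE=${PWD}
        export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
        # --subset i/N builds and runs only the i-th of N equal slices of the test plan,
        # so the N matrix jobs cover the whole plan between them.
        ./scripts/twister --subset ${{ matrix.subset }}/${{ strategy.job-total }} ${TWISTER_COMMON} ${WEEKLY_OPTIONS}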
- name: Print ccache stats
if: always()
- name: ccache stats post
run: |
ccache -s -vv
ccache -s
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
path: |
twister-out/twister.xml
twister-out/twister.json
module_tests/twister.xml
testplan.json
- if: matrix.subset == 1 && github.event_name == 'push'
name: Save the list of Python packages
shell: bash
run: |
FREEZE_FILE="frozen-requirements.txt"
timestamp="$(date)"
version="$(git describe --abbrev=12 --always)"
echo -e "# Generated at $timestamp ($version)\n" > $FREEZE_FILE
pip freeze | tee -a $FREEZE_FILE
- if: matrix.subset == 1 && github.event_name == 'push'
name: Upload the list of Python packages
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Frozen PIP package set
path: |
frozen-requirements.txt
twister-test-results:
name: "Publish Unit Tests Results"
needs:
- twister-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create the check run entry with Twister test results
# the build-and-test job might be skipped, we don't need to run this job then
needs: twister-build
runs-on: ubuntu-20.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: Check out source code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@v2
with:
path: artifacts
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/*/twister.xml junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
junit.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() && (github.event_name == 'push') }}
uses: codecov/test-results-action@0fa95f0e1eeaafde2c782583b36b28ad0d8c77d3 # v1.2.1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@27d65e188ec43221b20d26de30f4892fad91df2f # v2.22.0
uses: EnricoMi/publish-unit-test-result-action@v1
with:
check_name: Unit Test Results
github_token: ${{ secrets.GITHUB_TOKEN }}
files: "**/twister.xml"
comment_mode: off
- name: Analyze Twister Reports
if: needs.twister-build.result == 'failure'
run: |
./scripts/ci/twister_report_analyzer.py artifacts/*/*/twister.json --long-summary --platforms --output-md errors.md
if [[ -s "errors.md" ]]; then
echo '### Error Summary! 🚀' >> $GITHUB_STEP_SUMMARY
cat errors.md >> $GITHUB_STEP_SUMMARY
fi
- name: Upload Twister Analysis Results
if: needs.twister-build.result == 'failure'
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
with:
name: Twister Analysis Results
if-no-files-found: ignore
path: |
twister_report_summary.json
twister-status-check:
if: always()
name: "Check Twister Status"
needs:
- twister-build-prep
- twister-build
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}
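
The final status-check job above delegates to a reusable workflow and hands it the whole needs context as JSON. The called side is not part of this diff, so purely as an illustration, and with every name other than needs_context being an assumption, a reusable workflow consuming such an input might look like:

    # hypothetical sketch of a called workflow such as ready-to-merge.yml
    on:
      workflow_call:
        inputs:
          needs_context:
            type: string
            required: true
    jobs:
      all-jobs-passed:
        runs-on: ubuntu-24.04
        steps:
          - name: Check that no needed job failed or was cancelled
            run: |
              echo '${{ inputs.needs_context }}' \
                | jq -e '[.[] | .result] | all(. == "success" or . == "skipped")'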

@@ -8,26 +8,20 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
permissions:
contents: read
jobs:
twister-tests:
@@ -35,35 +29,30 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run pytest for twisterlib
pip3 install pytest colorama pyyaml ply mock
- name: Run pytest
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run twister tests"
PYTHONPATH=./scripts/tests pytest ./scripts/tests/twister
- name: Run pytest for pytest-twister-harness
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
PYTHONPATH: ./scripts/pylib/pytest-twister-harness/src:${PYTHONPATH}
run: |
echo "Run twister tests"
pytest ./scripts/pylib/pytest-twister-harness/tests
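
Both pip-caching styles appear in the hunk above: an explicit actions/cache keyed on runner.os and the Python version, and setup-python's built-in cache input. A condensed sketch of the built-in variant, which derives its cache key from the listed requirements file:

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
      with:
        python-version: ${{ matrix.python-version }}
        cache: pip                                                # let setup-python manage the pip cache
        cache-dependency-path: scripts/requirements-actions.txt   # cache key follows this file's hash
    - name: Install Python packages
      run: |
        pip install -r scripts/requirements-actions.txt --require-hashes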

@@ -1,66 +0,0 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister BlackBox TestSuite
on:
pull_request:
branches:
- main
- collab-*
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/twister_tests_blackbox.yml'
permissions:
contents: read
env:
PYTHONIOENCODING: utf-8
jobs:
twister-tests:
name: Twister Black Box Tests
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
fail-fast: false
runs-on: ubuntu-24.04
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
path: zephyr
fetch-depth: 0
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@360ff9b36e58499d9eb28015cdcde7ca03a5b04d # v1.0.12
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi:riscv64-zephyr-elf:x86_64-zephyr-elf'
enable-ccache: false
west-group-filter: -tools,-bootloader,-babblesim,-hal
west-project-filter: -nrf_hw_models,+cmsis,+hal_xtensa,+cmsis_6
- name: Run Pytest For Twister Black Box Tests
working-directory: zephyr
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
sudo apt-get install -y lcov
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox/

@@ -8,7 +8,6 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -17,43 +16,64 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
permissions:
contents: read
jobs:
west-commands:
west-commnads:
name: West Command Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest west pyelftools canopen progress mypy intelhex psutil
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |

.gitignore

@@ -1,6 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The Zephyr Project Contributors
*.o
*.a
*.d
@@ -10,16 +7,11 @@
*.swp
*.swo
*~
# Emacs
.\#*
\#*\#
build*/
!doc/build/
!scripts/build
!share/sysbuild/build
!doc/guides/build
!tests/drivers/build_all
!scripts/pylib/build_helpers
cscope.*
.dir
@@ -33,8 +25,6 @@ outdir
outdir-*
scripts/basic/fixdep
scripts/gen_idt/gen_idt
coverage-report
doc-coverage.info
doc/_build
doc/doxygen
doc/xml
@@ -45,10 +35,7 @@ doc/latex
doc/themes/zephyr-docs-theme
sanity-out*
twister-out*
bsim_out
bsim_bt_out
myresults.xml
tests/RunResults.xml
scripts/grub
doc/reference/kconfig/*.rst
doc/doc.warnings
@@ -59,25 +46,11 @@ doc/doc.warnings
hide-defaults-note
venv
.venv
.envrc
.DS_Store
.clangd
new.info
# Cargo drops lock files in projects to capture resolved dependencies.
# We don't want to record these.
Cargo.lock
# Cargo encourages a .cargo/config.toml file to symlink to a generated file. Don't save these.
.cargo/
# Normal west builds will place the Rust target directory under the build directory. However,
# sometimes IDEs and such will litter these target directories as well.
target/
# CI output
compliance.xml
dts_linter.patch
_error.types
# Tag files
@@ -88,39 +61,3 @@ TAGS
tags
.idea
# from check_compliance.py
# zephyr-keep-sorted-start
BinaryFiles.txt
BoardYml.txt
Checkpatch.txt
ClangFormat.txt
DevicetreeBindings.txt
DevicetreeLinting.txt
GitDiffCheck.txt
Gitlint.txt
Identity.txt
ImageSize.txt
Kconfig.txt
KconfigBasic.txt
KconfigBasicNoModules.txt
KconfigHWMv2.txt
KeepSorted.txt
LicenseAndCopyrightCheck.txt
MaintainersFormat.txt
ModulesMaintainers.txt
Nits.txt
Pylint.txt
PythonCompat.txt
Ruff.txt
SphinxLint.txt
SysbuildKconfig.txt
SysbuildKconfigBasic.txt
SysbuildKconfigBasicNoModules.txt
TextEncoding.txt
YAMLLint.txt
ZephyrModuleFile.txt
# zephyr-keep-sorted-stop
# Node dependecies
node_modules

@@ -1,7 +1,5 @@
# All these sections are optional, edit this file as you like.
# Zephyr-specific defaults are located in scripts/gitlint/zephyr_commit_rules.py
[general]
regex-style-search=true
ignore=title-trailing-punctuation, T3, title-max-length, T1, body-hard-tab, B3, B1
# verbosity should be a value between 1 and 3, the commandline -v flags take precedence over this
verbosity = 3
@@ -18,16 +16,16 @@ debug = false
extra-path=scripts/gitlint
[title-max-length-no-revert]
# line-length=75
line-length=75
[body-min-line-count]
# min-line-count=1
min-line-count=1
[body-max-line-count]
# max-line-count=200
max-line-count=200
[title-starts-with-subsystem]
regex = ^(?!subsys:)(?!treewide:)(([^:]+):)(\s([^:]+):)*\s(.+)$
regex = ^(?!subsys:)(([^:]+):)(\s([^:]+):)*\s(.+)$
[title-must-not-contain-word]
# Comma-separated list of words that should not occur in the title. Matching is case
@@ -44,7 +42,7 @@ words=wip
[max-line-length-with-exceptions]
# B1 = body-max-line-length
# line-length=75
line-length=75
[body-min-length]
min-length=3
@@ -60,7 +58,3 @@ ignore-merge-commits=false
# By specifying this rule, developers can only change the file when they explicitly reference
# it in the commit message.
#files=gitlint/rules.py,README.md
[ignore-by-author-name]
regex=^dependabot\[bot\]$
ignore=all

.mailmap

@@ -1,124 +1,35 @@
Alexandr Kolosov <rikorsev@gmail.com>
Alexandre d'Alton <alexandre.dalton@intel.com>
Amir Kaplan <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com>
Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Anthony Smigielski <thebasti0ncode@gmail.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>
Aska Wu <aska.wu@linaro.org>
Bit Pathe <bitpathe@gmail.com>
Bjarki Arge Andreasen <baa@trackunit.com>
Carles Cufi <carles.cufi@nordicsemi.no>
chao an <anchao@xiaomi.com>
Charles E. Youse <charles.youse@intel.com>
Chen Xingyu <hi@xingrz.me>
Christoph Schnetzler <christoph.schnetzler@husqvarnagroup.com>
Christoph Schramm <schramm@makaio.com>
Christopher Friedt <cfriedt@meta.com>
Christopher Friedt <cfriedt@meta.com> <cfriedt@fb.com>
Chuck Jordan <cjordan@synopsys.com>
Chunlin Han <chunlin.han@linaro.org> <chunlin.han@acer.com>
David B. Kinder <david.b.kinder@intel.com>
David Komel <a8961713@gmail.com>
David Leach <david.leach@nxp.com>
Dirk Brandewie <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com>
Enjia Mai <enjia.mai@intel.com>
Enjia Mai <enjia.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com> <Eugeniy.Paltsev@synopsys.com>
Felipe Neves <ryukokki.felipe@gmail.com>
Findlay Feng <i@fengch.me>
Flavio Arieta Netto <flavio@exati.com.br>
Francois Ramu <francois.ramu@st.com>
Gerardo Aceves <gerardo.aceves@intel.com>
Gregory Shue <gregory.shue@legrand.com>
Gregory Shue <gregory.shue@legrand.com> <gregory.shue@legrand.us>
HaiLong Yang <cameledyang@pm.me>
James Johnson <james.johnson672@t-mobile.com>
Jarno Lämsä <jarno.lamsa@nordicsemi.no>
Jędrzej Ciupis <jedrzej.ciupis@nordicsemi.no>
Jeremie Garcia <jeremie.garcia@intel.com>
Jim Benjamin Luther <jilu@oticon.com>
Johan Kruger <johan.kruger@windriver.com>
Johann Fischer <j.fischer@phytec.de>
Jørgen Kvalvaag <jorgen.kvalvaag@nordicsemi.no>
Juan Manuel Cruz Alcaraz <juan.m.cruz.alcaraz@intel.com>
Juan Solano <juanx.solano.menacho@intel.com>
Julien D'Ascenzio <julien.dascenzio@paratronic.fr>
Jun Li <jun.r.li@intel.com>
Justin Watson <jwatson5@gmail.com>
Kamil Sroka <kamil.sroka@nordicsemi.no>
Katarzyna Giądła <katarzyna.giadla@nordicsemi.no>
Keren Siman-Tov <keren.siman-tov@intel.com>
Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Lixin Guo <lixinx.guo@intel.com>
Łukasz Mazur <lukasz.mazur@hidglobal.com>
Manuel Argüelles <manuel.arguelles@nxp.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <manuel.arguelles@coredumplabs.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <marguelles.dev@gmail.com>
Marc Herbert <marc.herbert@intel.com> <46978960+marc-hb@users.noreply.github.com>
Marin Jurjević <marin.jurjevic@hotmail.com>
Mariusz Ryndzionek <mariusz.ryndzionek@firmwave.com>
Mariusz Skamra <mariusz.skamra@codecoup.pl>
Mariusz Skamra <mariusz.skamra@codecoup.pl> <mariusz.skamra@tieto.com>
Martí Bolívar <marti.bolivar@nordicsemi.no>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.bolivar@linaro.org>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.f.bolivar@gmail.com>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@foundries.io>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@opensourcefoundries.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Mateusz Hołenko <mholenko@antmicro.com>
Michael Rosen <michael.r.rosen@intel.com>
Michal Narajowski <michal.narajowski@codecoup.pl>
Mike Hirst <michael.hirst@windriver.com>
Ming Shao <ming.shao@intel.com>
Mohan Kumar Kumar <mohankm@fb.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com>
Navin Sankar Velliangiri <navin@linumiz.com>
NingX Zhao <ningx.zhao@intel.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Flavio Arieta Netto <flavio@exati.com.br>
Nishikant Nayak <nishikantax.nayak@intel.com>
Ole Sæther <ole.saether@nordicsemi.no>
Pavel Král <pavel.kral@omsquare.com>
Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
Paweł Czarnecki <pczarnecki@antmicro.com>
Paweł Czarnecki <pczarnecki@antmicro.com>
Paweł Czarnecki <pczarnecki@antmicro.com> <pczarnecki@internships.antmicro.com>
Paweł Kwiek <pawel.kwiek@nordicsemi.no>
Peng Chen <peng1.chen@intel.com>
Peter Bigot <peter.bigot@nordicsemi.no>
Peter Bigot <peter.bigot@nordicsemi.no> <pab@pabigot.com>
Peter Johanson <peter@peterjohanson.com>
Piyush Itankar <piyush.t.itankar@intel.com>
Radosław Koppel <radoslaw.koppel@nordicsemi.no>
Radosław Koppel <radoslaw.koppel@nordicsemi.no> <r.koppel@k-el.com>
Raja D. Singh <rdsingh@iotwizards.com>
Ricardo Salveti <ricardo@opensourcefoundries.com>
Ricardo Salveti <ricardo@opensourcefoundries.com> <ricardo.salveti@linaro.org>
Ruud Derwig <Ruud.Derwig@synopsys.com>
Saku Rautio <saku.rautio@nordicsemi.no>
Scott Worley <scott.worley@microchip.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Sharron Liu <sharron.liu@intel.com>
Shilpashree L C <shilpashree.lc@intel.com>
Shuang He <shuang.he@intel.com>
Sigvart Hovland <sigvart.hovland@nordicsemi.no>
Stéphane D'Alu <sdalu@sdalu.com>
Stine Åkredalen <stine.akredalen@nordicsemi.no>
Thomas Heeley <thomas.heeley@intel.com>
Tim Sørensen <tims@demant.com>
Tim Sørensen <tims@demant.com> <tims@oticon.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa@gmail.com>
Justin Watson <jwatson5@gmail.com>
Johann Fischer <j.fischer@phytec.de>
Jun Li <jun.r.li@intel.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com> <yonattan.a.louise.mendoza@linux.intel.com>
YouhuaX Zhu <youhuax.zhu@intel.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Marc Herbert <marc.herbert@intel.com> <46978960+marc-hb@users.noreply.github.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>

File diff suppressed because it is too large.

@@ -1,30 +0,0 @@
# Copyright (c) 2024 Basalte bv
# SPDX-License-Identifier: Apache-2.0
extend = ".ruff-excludes.toml"
line-length = 100
target-version = "py310"
[lint]
select = [
# zephyr-keep-sorted-start
"B", # flake8-bugbear
"E", # pycodestyle
"F", # pyflakes
"I", # isort
"SIM", # flake8-simplify
"UP", # pyupgrade
"W", # pycodestyle warnings
# zephyr-keep-sorted-stop
]
ignore = [
# zephyr-keep-sorted-start
"SIM108", # Allow if-else blocks instead of forcing ternary operator
# zephyr-keep-sorted-stop
]
[format]
quote-style = "preserve"
line-ending = "lf"

.uncrustify.cfg

@@ -0,0 +1,87 @@
indent_with_tabs = 2 # 1=indent to level only, 2=indent with tabs
input_tab_size = 8 # original tab size
output_tab_size = 8 # new tab size
indent_columns = output_tab_size
indent_label = 1 # pos: absolute col, neg: relative column
indent_switch_case = 0 # number
#
# inter-symbol newlines
#
nl_enum_brace = remove # "enum {" vs "enum \n {"
nl_union_brace = remove # "union {" vs "union \n {"
nl_struct_brace = remove # "struct {" vs "struct \n {"
nl_do_brace = remove # "do {" vs "do \n {"
nl_if_brace = remove # "if () {" vs "if () \n {"
nl_for_brace = remove # "for () {" vs "for () \n {"
nl_else_brace = remove # "else {" vs "else \n {"
nl_while_brace = remove # "while () {" vs "while () \n {"
nl_switch_brace = remove # "switch () {" vs "switch () \n {"
nl_brace_while = remove # "} while" vs "} \n while" - cuddle while
nl_brace_else = remove # "} \n else" vs "} else"
nl_func_var_def_blk = 1
nl_fcall_brace = remove # "list_for_each() {" vs "list_for_each()\n{"
nl_fdef_brace = add # "int foo() {" vs "int foo()\n{"
#
# End of file behavior
#
nl_end_of_file = force # string (add/force/ignore/remove)
nl_end_of_file_min = 1 # The min number of newlines at end of file
#
# Source code modifications
#
mod_paren_on_return = ignore # "return 1;" vs "return (1);"
mod_full_brace_if = add # "if() { } else { }" vs "if() else"
#
# inter-character spacing options
#
sp_sizeof_paren = remove # "sizeof (int)" vs "sizeof(int)"
sp_before_sparen = force # "if (" vs "if("
sp_after_sparen = force # "if () {" vs "if (){"
sp_inside_braces = add # "{ 1 }" vs "{1}"
sp_inside_braces_struct = add # "{ 1 }" vs "{1}"
sp_inside_braces_enum = add # "{ 1 }" vs "{1}"
sp_assign = add
sp_arith = add
sp_bool = add
sp_compare = add
sp_assign = add
sp_after_comma = add
sp_func_def_paren = remove # "int foo (){" vs "int foo(){"
sp_func_call_paren = remove # "foo (" vs "foo("
sp_func_proto_paren = remove # "int foo ();" vs "int foo();"
sp_inside_fparen = remove # "func( arg )" vs "func(arg)"
sp_else_brace = add # ignore/add/remove/force
sp_before_ptr_star = add # ignore/add/remove/force
sp_after_ptr_star = remove # ignore/add/remove/force
sp_between_ptr_star = remove # ignore/add/remove/force
sp_inside_paren = remove # remove spaces inside parens
sp_paren_paren = remove # remove spaces between nested parens
sp_inside_sparen = remove # remove spaces inside parens for if, while and the like
sp_brace_else = add # ignore/add/remove/force
sp_before_nl_cont = ignore
sp_cmt_cpp_start = add
sp_brace_typedef = add # }typedefd_name -> } typedefd_name
cmt_sp_after_star_cont = 1
#
# Aligning stuff
#
align_with_tabs = FALSE # use tabs to align
align_on_tabstop = TRUE # align on tabstops
align_enum_equ_span = 4 # '=' in enum definition
align_struct_init_span = 0 # align stuff in a structure init '= { }'
align_right_cmt_span = 3
align_nl_cont = TRUE
sp_pp_concat = ignore # ignore/add/remove/force

@@ -1,16 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
extends: default
rules:
line-length:
max: 100
comments:
min-spaces-from-content: 1
indentation:
spaces: 2
indent-sequences: consistent
document-start:
present: false
truthy:
check-keys: false

File diff suppressed because it is too large.

@@ -1,4 +1,824 @@
# Instead of the CODEOWNERS file, The Zephyr Project uses a custom format.
# See MAINTAINERS.yml for details.
#
# DO NOT EDIT THIS FILE.
# CODEOWNERS for autoreview assigning in github
# https://help.github.com/en/articles/about-code-owners#codeowners-syntax
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
# * @galak @nashif
/.github/ @nashif @stephanosio
/.github/workflows/ @galak @nashif
/MAINTAINERS.yml @MaureenHelm
/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/aarch32/cortex_m/cmse/ @ioannisg
/arch/arm/include/aarch32/cortex_m/cmse.h @ioannisg
/arch/arm/core/aarch32/cortex_a_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/arm64/ @carlocaione
/arch/arm64/core/cortex_r/ @povergoing
/arch/arm64/core/xen/ @lorc @firscity
/arch/common/ @ioannisg @andyross
/arch/mips/ @frantony
/soc/arc/snps_*/ @abrodkin @ruuddw @evgeniy-paltsev
/soc/nios2/ @nashif
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/aspeed/ @aspeeddylan
/soc/arm/atmel_sam/common/*_sam4l_*.c @nandojve
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4l/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/same70/ @nandojve
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/cypress/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/gigadevice/ @nandojve
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/nxp*/ @mmahadevan108 @dleach02
/soc/arm/nordic_nrf/ @anangl
/soc/arm/nuvoton_npcx/ @MulinChao @WealianLiao @ChiHuaL
/soc/arm/nuvoton_numicro/ @ssekar15
/soc/arm/quicklogic_eos_s3/ @kowalewskijan @kgugala
/soc/arm/rpi_pico/ @yonsch
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/*/power.c @FRASTM
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynq7000/ @ibirnbaum
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/arm/renesas_rcar/ @aaillet @pmarzin
/soc/xtensa/intel_s1000/ @sathishkuttan @dcpleung
/soc/arm64/ @carlocaione
/soc/arm64/qemu_cortex_a53/ @carlocaione
/soc/arm64/bcm_vk/ @abhishek-brcm
/soc/arm64/nxp_layerscape/ @JiafeiPan
/soc/arm64/xenvm/ @lorc @firscity
/soc/arm64/nxp_imx/ @MrVan
/soc/arm64/arm/ @povergoing
/soc/arm64/arm/fvp_aemv8a/ @carlocaione
/submanifests/* @mbolivar-nordic
/arch/x86/ @jhedberg @nashif @jenmwms @aasthagr
/arch/nios2/ @nashif
/arch/posix/ @aescolar @daor-oti
/arch/riscv/ @kgugala @pgielda
/soc/mips/ @frantony
/soc/posix/ @aescolar @daor-oti
/soc/riscv/ @kgugala @pgielda
/soc/riscv/openisa*/ @dleach02
/soc/riscv/riscv-privilege/andes_v5/ @cwshu @kevinwang821020 @jimmyzhe
/soc/riscv/riscv-privilege/neorv32/ @henrikbrixandersen
/soc/riscv/riscv-privilege/gd32vf103/ @soburi
/soc/x86/ @dcpleung @nashif @jenmwms @aasthagr
/arch/xtensa/ @dcpleung @andyross @nashif
/soc/xtensa/ @dcpleung @andyross @nashif
/arch/sparc/ @martin-aberg
/soc/sparc/ @martin-aberg
/boards/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
/boards/arm/96b_stm32_sensor_mez/ @Mani-Sadhasivam
/boards/arm/96b_wistrio/ @Mani-Sadhasivam
/boards/arm/arduino_due/ @ioannisg
/boards/arm/bbc_microbit_v2/ @LingaoM
/boards/arm/bl5340_dvk/ @lairdjm
/boards/arm/bl65*/ @lairdjm
/boards/arm/blackpill_f401ce/ @coderkalyan
/boards/arm/blackpill_f411ce/ @coderkalyan
/boards/arm/bt*10/ @greg-leach
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/cy8* @nandojve
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @mmahadevan108 @dleach02
/boards/arm/frdm*/doc/ @dleach02 @MeganHansen
/boards/arm/gd32*/ @nandojve
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @mmahadevan108 @dleach02
/boards/arm/hexiwear*/doc/ @dleach02 @MeganHansen
/boards/arm/ip_k66f/ @parthitce @lmajewski
/boards/arm/legend/ @mbittan @simonguinot
/boards/arm/lpcxpresso*/ @mmahadevan108 @dleach02
/boards/arm/lpcxpresso*/doc/ @dleach02 @MeganHansen
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimxrt*/ @mmahadevan108 @dleach02
/boards/arm/mimxrt*/doc/ @dleach02 @MeganHansen
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao @WealianLiao @ChiHuaL
/boards/arm/nrf*/ @carlescufi @lemrey
/boards/arm/nucleo*/ @erwango @ABOSTM @FRASTM
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/pinnacle_100_dvk/ @rerickson1
/boards/arm/qemu_cortex_a9/ @ibirnbaum
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg @stephanosio
/boards/arm/quick_feather/ @kowalewskijan @kgugala
/boards/arm/rak4631_nrf52840/ @gpaquet85
/boards/arm/rak5010_nrf52840/ @gpaquet85
/boards/arm/rpi_pico/ @yonsch
/boards/arm/ronoth_lodev/ @NorthernDean
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4l_ek/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_e70_xplained/ @nandojve
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/scobc_module1/ @yashi
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango @ABOSTM @FRASTM
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango @ABOSTM @FRASTM
/boards/arm/rcar_h3ulcb/ @aaillet @pmarzin
/boards/arm/ubx_bmd345eval_nrf52840/ @Navin-Sankar @brec-u-blox
/boards/common/ @mbolivar-nordic
/boards/deprecated.cmake @tejlmand
/boards/mips/ @frantony
/boards/nios2/ @nashif
/boards/nios2/altera_max10/ @nashif
/boards/arm/stm32_min_dev/ @sidcha
/boards/posix/ @aescolar @daor-oti
/boards/posix/nrf52_bsim/ @aescolar @wopu-ot
/boards/riscv/ @kgugala @pgielda
/boards/riscv/rv32m1_vega/ @dleach02
/boards/riscv/beaglev_starlight_jh7100/ @rajnesh-kanwal
/boards/riscv/adp_xc7k_ae350/ @cwshu @kevinwang821020 @jimmyzhe
/boards/riscv/gd32*/ @gmarull
/boards/riscv/longan_nano/ @soburi
/boards/riscv/neorv32/ @henrikbrixandersen
/boards/shields/ @erwango
/boards/shields/atmel_rf2xx/ @nandojve
/boards/shields/esp_8266/ @nandojve
/boards/shields/inventek_eswifi/ @nandojve
/boards/x86/ @dcpleung @nashif @jenmwms @aasthagr
/boards/x86/acrn/ @enjiamai
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/intel_s1000_crb/ @sathishkuttan @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
/boards/xtensa/nxp_adsp_imx8/ @iuliana-prodan @dbaluta
/boards/sparc/ @martin-aberg
/boards/arm64/qemu_cortex_a53/ @carlocaione
/boards/arm64/bcm958402m2_a72/ @abhishek-brcm
/boards/arm64/imx8mm_evk/ @MrVan
/boards/arm64/imx8mp_evk/ @MrVan
/boards/arm64/nxp_ls1046ardb/ @JiafeiPan
/boards/arm64/xenvm/ @lorc @firscity
/boards/arm64/fvp_baser_aemv8r/ @povergoing
/boards/arm64/fvp_base_revc_2xaemv8a/ @carlocaione
/boards/arm64/intel_socfpga_agilex_socdk/ @siclim @ngboonkhai
# All cmake related files
/cmake/ @tejlmand @nashif
/cmake/*/arcmwdt/ @abrodkin @evgeniy-paltsev @tejlmand
/CMakeLists.txt @tejlmand @nashif
/doc/ @carlescufi
/doc/develop/tools/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/services/device_mgmt/smp_protocol.rst @de-nordic
/doc/services/device_mgmt/smp_groups/ @de-nordic
/doc/CMakeLists.txt @carlescufi
/doc/_scripts/ @carlescufi
/doc/connectivity/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/doc/build/dts/ @galak @mbolivar-nordic
/doc/hardware/preipherals/canbus/ @alexanderwachter @henrikbrixandersen
/doc/security/ @ceolin @d3zd3z
/drivers/debug/ @nashif
/drivers/*/*sam4l* @nandojve
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*gd32* @nandojve
/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/*/*mcux* @mmahadevan108 @dleach02
/drivers/*/*stm32* @erwango @ABOSTM @FRASTM
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao @WealianLiao @ChiHuaL
/drivers/*/*andes* @cwshu @kevinwang821020 @jimmyzhe
/drivers/*/*neorv32* @henrikbrixandersen
/drivers/adc/ @anangl
/drivers/adc/adc_stm32.c @cybertale
/drivers/audio/*nrfx* @anangl
/drivers/bbram/* @yperess @sjg20 @jackrosenthal
/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/drivers/bluetooth/hci/hci_esp32.c @sylvioalves
/drivers/cache/ @carlocaione
/drivers/syscon/ @carlocaione @yperess
/drivers/can/ @alexanderwachter @henrikbrixandersen
/drivers/can/*mcp2515* @karstenkoenig
/drivers/can/*rcar* @aaillet @pmarzin
/drivers/clock_control/*agilex* @siclim
/drivers/clock_control/*nrf* @nordic-krch
/drivers/clock_control/*esp32* @extremegtx @glaubermaroto
/drivers/clock_control/*rcar* @aaillet
/drivers/counter/ @nordic-krch
/drivers/console/ipm_console.c @finikorg
/drivers/console/semihost_console.c @luozhongyao
/drivers/console/jailhouse_debug_console.c @MrVan
/drivers/counter/counter_cmos.c @dcpleung
/drivers/counter/counter_ll_stm32_timer.c @kentjhall
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/display/display_framebuf.c @dcpleung
/drivers/display/*rm68200* @mmahadevan108
/drivers/dac/ @martinjaeger
/drivers/dai/ @juimonen
/drivers/dai/intel/ @juimonen
/drivers/dai/intel/ssp/ @juimonen
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale @lowlander
/drivers/dma/*pl330* @raveenp
/drivers/dma/*iproc_pax* @raveenp
/drivers/dma/*cavs* @teburd
/drivers/ec_host_cmd_periph/ @jettr
/drivers/edac/ @finikorg
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*b91* @yurvyn
/drivers/entropy/*bt_hci* @JordanYates
/drivers/entropy/*rv32m1* @dleach02
/drivers/entropy/*gecko* @chrta
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/espi/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/ethernet/ @tbursztyka @pfalcon
/drivers/ethernet/*dwmac* @npitre
/drivers/ethernet/*stm32* @Nukersson @lochej
/drivers/ethernet/*w5500* @parthitce
/drivers/ethernet/*xlnx_gem* @ibirnbaum
/drivers/ethernet/phy/ @rlubos @tbursztyka @arvinf
/drivers/mdio/ @rlubos @tbursztyka @arvinf
/drivers/flash/ @nashif @nvlsianpu
/drivers/flash/*stm32_qspi* @lmajewski
/drivers/flash/*b91* @yurvyn
/drivers/flash/*nrf* @nvlsianpu
/drivers/flash/*esp32* @glaubermaroto
/drivers/fpga/ @tgorochowik @kgugala
/drivers/gpio/ @mnkp
/drivers/gpio/*b91* @yurvyn
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*nct38xx* @MulinChao @WealianLiao @ChiHuaL
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*eos_s3* @wtatarski @kowalewskijan @kgugala
/drivers/gpio/*rcar* @aaillet
/drivers/gpio/*esp32* @glaubermaroto
/drivers/gpio/*rpi_pico* @yonsch
/drivers/gpio/*xlnx_ps* @ibirnbaum
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/i2c_common.c @sjg20
/drivers/i2c/i2c_emul.c @sjg20
/drivers/i2c/i2c_ite_enhance.c @GTLin08
/drivers/i2c/i2c_ite_it8xxx2.c @GTLin08
/drivers/i2c/i2c_shell.c @nashif
/drivers/i2c/Kconfig.i2c_emul @sjg20
/drivers/i2c/Kconfig.it8xxx2 @GTLin08
/drivers/i2c/slave/*eeprom* @henrikbrixandersen
/drivers/i2c/Kconfig.test @mbolivar-nordic
/drivers/i2c/i2c_test.c @mbolivar-nordic
/drivers/i2c/*rcar* @aaillet
/drivers/i2s/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2s/*nrfx* @anangl
/drivers/ieee802154/ @rlubos @tbursztyka
/drivers/ieee802154/*b91* @yurvyn
/drivers/ieee802154/ieee802154_nrf5* @jciupis
/drivers/ieee802154/ieee802154_rf2xx* @tbursztyka @nandojve
/drivers/ieee802154/ieee802154_cc13xx* @bwitherspoon @cfriedt
/drivers/interrupt_controller/ @dcpleung @nashif
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/interrupt_controller/*esp32* @glaubermaroto
/drivers/interrupt_controller/intc_nuclei_eclic.c @soburi
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic
/drivers/ipm/ipm_cavs* @dcpleung @andyross
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/ipm/ipm_stm32_hsem.c @cameled
/drivers/kscan/ @VenkatKotakonda @franciscomunoz @sjvasanth1
/drivers/kscan/*xec* @franciscomunoz @sjvasanth1
/drivers/kscan/*ft5336* @MaureenHelm
/drivers/kscan/*ht16k33* @henrikbrixandersen
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar-nordic
/drivers/lora/ @Mani-Sadhasivam
/drivers/mbox/ @carlocaione
/drivers/memc/ @gmarull
/drivers/misc/ @tejlmand
/drivers/misc/ft8xx/ @hubertmis
/drivers/mm/ @dcpleung
/drivers/modem/hl7800.c @LairdCP/zephyr
/drivers/modem/simcom-sim7080.c @lgehreke
/drivers/modem/simcom-sim7080.h @lgehreke
/drivers/modem/Kconfig.hl7800 @LairdCP/zephyr
/drivers/modem/Kconfig.simcom-sim7080 @lgehreke
/drivers/pcie/ @dcpleung @nashif @jhedberg
/drivers/peci/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/pinctrl/ @gmarull
/drivers/pinctrl/*esp32* @glaubermaroto
/drivers/pinmux/*it8xxx2* @ite
/drivers/pm_cpu_ops/ @carlocaione
/drivers/power_domain/ @ceolin
/drivers/ps2/ @franciscomunoz @sjvasanth1
/drivers/ps2/*xec* @franciscomunoz @sjvasanth1
/drivers/ps2/*npcx* @MulinChao @WealianLiao @ChiHuaL
/drivers/pwm/*b91* @yurvyn
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/*sam0* @nzmichaelh
/drivers/pwm/*stm32* @gmarull
/drivers/pwm/*test* @JordanYates
/drivers/pwm/*xlnx* @henrikbrixandersen
/drivers/pwm/pwm_capture.c @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/pwm/*gecko* @sun681
/drivers/pwm/*it8xxx2* @RuibinChang
/drivers/regulator/*pmic* @danieldegrasse
/drivers/reset/ @andrei-edward-popa
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
/drivers/sensor/hts*/ @avisconti
/drivers/sensor/ina23*/ @bbilas
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/mpr/ @sven-hm
/drivers/sensor/st*/ @avisconti
/drivers/serial/*b91* @yurvyn
/drivers/serial/uart_altera_jtag_hal.c @nashif
/drivers/serial/*ns16550* @dcpleung @nashif @jenmwms @aasthagr
/drivers/serial/*nrfx* @Mierunski @anangl
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.mcux_iuart @Mani-Sadhasivam
/drivers/serial/uart_mcux_iuart.c @Mani-Sadhasivam
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/*rpi_pico* @yonsch
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/uart_xlnx_uartlite.c @henrikbrixandersen
/drivers/serial/*xmc4xxx* @parthitce
/drivers/serial/*numicro* @ssekar15
/drivers/serial/*apbuart* @martin-aberg
/drivers/serial/*rcar* @aaillet
/drivers/serial/Kconfig.test @str4t0m
/drivers/serial/serial_test.c @str4t0m
/drivers/serial/Kconfig.xen @lorc @firscity
/drivers/serial/uart_hvc_xen.c @lorc @firscity
/drivers/serial/uart_hvc_xen_consoleio.c @lorc @firscity
/drivers/serial/Kconfig.it8xxx2 @GTLin08
/drivers/serial/uart_ite_it8xxx2.c @GTLin08
/drivers/disk/ @jfischer-no
/drivers/disk/sdmmc_sdhc.h @JunYangNXP
/drivers/disk/sdmmc_spi.c @JunYangNXP
/drivers/disk/usdhc.c @JunYangNXP
/drivers/disk/sdmmc_stm32.c @anthonybrandon
/drivers/net/ @rlubos @tbursztyka
/drivers/ptp_clock/ @tbursztyka
/drivers/spi/ @tbursztyka
/drivers/spi/*b91* @yurvyn
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/spi/*esp32* @glaubermaroto
/drivers/sdhc/ @danieldegrasse
/drivers/timer/*apic* @dcpleung @nashif
/drivers/timer/apic_tsc.c @andyross
/drivers/timer/*arm_arch* @carlocaione
/drivers/timer/*cortex_m_systick* @anangl
/drivers/timer/*altera_avalon* @nashif
/drivers/timer/*riscv_machine* @kgugala @pgielda
/drivers/timer/*ite_it8xxx2* @ite
/drivers/timer/*xlnx_psttc* @wjliang @stephanosio
/drivers/timer/*cc13x2_cc26x2_rtc* @vanti
/drivers/timer/*cavs* @dcpleung
/drivers/timer/*stm32_lptim* @FRASTM
/drivers/timer/*leon_gptimer* @martin-aberg
/drivers/timer/*mips_cp0* @frantony
/drivers/timer/*rcar_cmt* @aaillet
/drivers/timer/*esp32c3_sys* @uLipe
/drivers/timer/*sam0_rtc* @bendiscz
/drivers/timer/*arcv2* @ruuddw
/drivers/timer/*xtensa* @dcpleung
/drivers/timer/*rv32m1_lptmr* @mbolivar
/drivers/timer/*nrf_rtc* @anangl
/drivers/timer/*hpet* @dcpleung
/drivers/usb/ @jfischer-no
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/usbc/ @sambhurst
/drivers/video/ @loicpoulain
/drivers/i2c/*b91* @yurvyn
/drivers/i2c/i2c_ll_stm32* @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/i2c/*tca954x* @kurddt
/drivers/*/*xec* @franciscomunoz @albertofloyd @sjvasanth1
/drivers/mipi_dsi/ @gmarull
/drivers/mipi_dsi/mipi_dsi.c @gmarull
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/*sifive* @katsuster
/drivers/watchdog/wdt_handlers.c @dcpleung @nashif
/drivers/watchdog/*cc32xx* @pavlohamov
/drivers/watchdog/wdt_ite_it8xxx2.c @RuibinChang
/drivers/watchdog/Kconfig.it8xxx2 @RuibinChang
/drivers/watchdog/wdt_counter.c @nordic-krch
/drivers/wifi/ @rlubos @tbursztyka @pfalcon
/drivers/wifi/esp_at/ @mniestroj
/drivers/wifi/eswifi/ @loicpoulain @nandojve
/drivers/wifi/winc1500/ @kludentwo
/drivers/virtualization/ @tbursztyka
/drivers/xen/ @lorc @firscity
/dts/arc/ @abrodkin @ruuddw @iriszzw @evgeniy-paltsev
/dts/arm/acsip/ @NorthernDean
/dts/arm/aspeed/ @aspeeddylan
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/sam4l* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/same70* @nandojve
/dts/arm/atmel/samv71* @nandojve
/dts/arm/atmel/ @galak
/dts/arm/broadcom/ @sbranden
/dts/arm/cypress/ @nandojve
/dts/arm/gigadevice/ @nandojve
/dts/arm/infineon/ @parthitce
/dts/arm64/ @carlocaione
/dts/arm64/armv8-r.dtsi @povergoing
/dts/arm64/nxp/ @JiafeiPan
/dts/arm/quicklogic/ @wtatarski @kowalewskijan @kgugala
/dts/arm/seeed/ @str4t0m
/dts/arm/st/ @erwango
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @anangl @carlescufi
/dts/arm/nuvoton/ @ssekar15 @MulinChao @WealianLiao @ChiHuaL
/dts/arm/nxp/ @mmahadevan108 @dleach02
/dts/arm/microchip/ @franciscomunoz @albertofloyd @sjvasanth1
/dts/arm/rpi_pico/ @yonsch
/dts/arm/silabs/efm32_pg_1b.dtsi @rdmeneze
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efm32_jg_pg* @chrta
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efr32xg13p* @mnkp
/dts/arm/silabs/efm32jg12b* @chrta
/dts/arm/silabs/efm32pg12b* @chrta
/dts/arm/silabs/efm32pg1b* @rdmeneze
/dts/arm/silabs/efr32mg21* @l-alfred
/dts/arm/silabs/efr32fg13* @yonsch
/dts/riscv/ @kgugala @pgielda
/dts/riscv/ite/ @ite
/dts/riscv/microsemi-miv.dtsi @galak
/dts/riscv/rv32m1* @dleach02
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/riscv/starfive/ @rajnesh-kanwal
/dts/riscv/andes_v5* @cwshu @kevinwang821020 @jimmyzhe
/dts/arm/armv*m.dtsi @galak @ioannisg
/dts/arm/armv7-a.dtsi @ibirnbaum
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/arm/renesas/ @aaillet @pmarzin
/dts/x86/ @jhedberg
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/xtensa/espressif/ @glaubermaroto
/dts/xtensa/nxp/ @iuliana-prodan @dbaluta
/dts/sparc/ @martin-aberg
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter @henrikbrixandersen
/dts/bindings/i2c/zephyr*i2c-emul*.yaml @sjg20
/dts/bindings/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/modem/*hl7800.yaml @LairdCP/zephyr
/dts/bindings/serial/ns16550.yaml @dcpleung @nashif
/dts/bindings/wifi/*esp-at.yaml @mniestroj
/dts/bindings/*/*gd32* @nandojve
/dts/bindings/*/*npcx* @MulinChao @WealianLiao @ChiHuaL
/dts/bindings/*/*psoc6* @nandojve
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @mmahadevan108 @dleach02
/dts/bindings/*/openisa* @dleach02
/dts/bindings/*/raspberrypi*pico* @yonsch
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/andes* @cwshu @kevinwang821020 @jimmyzhe
/dts/bindings/*/neorv32* @henrikbrixandersen
/dts/bindings/pm_cpu_ops/* @carlocaione
/dts/bindings/ethernet/*gem.yaml @ibirnbaum
/dts/posix/ @aescolar @daor-oti
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/*ina23* @bbilas
/dts/bindings/sensor/st* @avisconti
/dts/common/ @galak
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/zephyr/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/include/zephyr/drivers/adc.h @anangl
/include/zephyr/drivers/can.h @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/can/ @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/counter.h @nordic-krch
/include/zephyr/drivers/dac.h @martinjaeger
/include/zephyr/drivers/espi.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/include/zephyr/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @nvlsianpu
/include/zephyr/drivers/i2c_emul.h @sjg20
/include/zephyr/drivers/led/ht16k33.h @henrikbrixandersen
/include/zephyr/drivers/interrupt_controller/ @dcpleung @nashif
/include/zephyr/drivers/interrupt_controller/gic.h @stephanosio
/include/zephyr/drivers/modem/hl7800.h @LairdCP/zephyr
/include/zephyr/drivers/pcie/ @dcpleung
/include/zephyr/drivers/hwinfo.h @alexanderwachter
/include/zephyr/drivers/led.h @Mani-Sadhasivam
/include/zephyr/drivers/led_strip.h @mbolivar-nordic
/include/zephyr/drivers/sensor.h @MaureenHelm
/include/zephyr/drivers/spi.h @tbursztyka
/include/zephyr/drivers/lora.h @Mani-Sadhasivam
/include/zephyr/drivers/peci.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/pm_cpu_ops.h @carlocaione
/include/zephyr/drivers/pm_cpu_ops/ @carlocaione
/include/zephyr/app_memory/ @dcpleung
/include/zephyr/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/arch.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/v2/irq.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arm/aarch32/ @MaureenHelm @galak @ioannisg
/include/zephyr/arch/arm/aarch32/cortex_a_r/ @stephanosio
/include/zephyr/arch/arm64/ @carlocaione
/include/zephyr/arch/arm64/cortex_r/ @povergoing
/include/zephyr/arch/arm/aarch32/irq.h @carlocaione
/include/zephyr/arch/mips/ @frantony
/include/zephyr/arch/nios2/ @nashif
/include/zephyr/arch/nios2/arch.h @nashif
/include/zephyr/arch/posix/ @aescolar @daor-oti
/include/zephyr/arch/riscv/ @kgugala @pgielda
/include/zephyr/arch/x86/ @jhedberg @dcpleung
/include/zephyr/arch/common/ @andyross @nashif
/include/zephyr/arch/xtensa/ @andyross @dcpleung
/include/zephyr/arch/sparc/ @martin-aberg
/include/zephyr/sys/atomic.h @andyross
/include/zephyr/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/include/zephyr/bluetooth/audio/ @alwa-nordic @jhedberg @Vudentz @Thalley @asbjornsabo
/include/zephyr/cache.h @carlocaione @andyross
/include/zephyr/canbus/ @alexanderwachter @henrikbrixandersen
/include/zephyr/tracing/ @nashif
/include/zephyr/debug/ @nashif
/include/zephyr/debug/coredump.h @dcpleung
/include/zephyr/debug/gdbstub.h @ceolin
/include/zephyr/device.h @tbursztyka @nashif
/include/zephyr/devicetree.h @galak
/include/zephyr/devicetree/can.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/zephyr/dt-bindings/ethernet/xlnx_gem.h @ibirnbaum
/include/zephyr/dt-bindings/pcie/ @dcpleung
/include/zephyr/dt-bindings/pinctrl/esp* @glaubermaroto
/include/zephyr/dt-bindings/pwm/*it8xxx2* @RuibinChang
/include/zephyr/dt-bindings/usb/usb.h @galak
/include/zephyr/drivers/emul.h @sjg20
/include/zephyr/fs/ @nashif @nvlsianpu @de-nordic
/include/zephyr/init.h @nashif @andyross
/include/zephyr/irq.h @dcpleung @nashif @andyross
/include/zephyr/irq_offload.h @dcpleung @nashif @andyross
/include/zephyr/kernel.h @dcpleung @nashif @andyross
/include/zephyr/kernel_version.h @dcpleung @nashif @andyross
/include/zephyr/linker/app_smem*.ld @dcpleung @nashif
/include/zephyr/linker/ @dcpleung @nashif @andyross
/include/zephyr/logging/ @nordic-krch
/include/zephyr/lorawan/lorawan.h @Mani-Sadhasivam
/include/zephyr/mgmt/osdp.h @sidcha
/include/zephyr/net/ @rlubos @tbursztyka @pfalcon
/include/zephyr/net/buf.h @jhedberg @tbursztyka @pfalcon @rlubos
/include/zephyr/net/coap*.h @rlubos
/include/zephyr/net/lwm2m*.h @rlubos
/include/zephyr/net/mqtt.h @rlubos
/include/zephyr/net/net_pkt_filter.h @npitre
/include/zephyr/posix/ @pfalcon
/include/zephyr/pm/pm.h @nashif @ceolin
/include/zephyr/drivers/ptp_clock.h @tbursztyka
/include/zephyr/shared_irq.h @dcpleung @nashif @andyross
/include/zephyr/shell/ @jakub-uC @nordic-krch
/include/zephyr/shell/shell_mqtt.h @ycsin
/include/zephyr/sw_isr_table.h @dcpleung @nashif @andyross
/include/zephyr/sd/ @danieldegrasse
/include/zephyr/sys_clock.h @dcpleung @nashif @andyross
/include/zephyr/sys/sys_io.h @dcpleung @nashif @andyross
/include/zephyr/sys/kobject.h @dcpleung @nashif
/include/zephyr/toolchain.h @dcpleung @andyross @nashif
/include/zephyr/toolchain/ @dcpleung @nashif @andyross
/include/zephyr/zephyr.h @dcpleung @nashif @andyross
/kernel/ @dcpleung @nashif @andyross
/lib/smf/ @sambhurst
/lib/util/ @carlescufi @jakub-uC
/lib/util/fnmatch/ @carlescufi @jakub-uC
/lib/open-amp/ @arnopo
/lib/os/ @dcpleung @nashif @andyross
/lib/os/cbprintf_packaged.c @npitre
/lib/posix/ @pfalcon
/lib/posix/getopt/ @jakub-uC
/subsys/portability/ @nashif
/lib/libc/ @nashif
/lib/libc/arcmwdt/ @abrodkin @ruuddw @evgeniy-paltsev
/misc/ @tejlmand
/modules/ @nashif
/modules/canopennode/ @henrikbrixandersen
/modules/mbedtls/ @ceolin @d3zd3z
/modules/hal_gigadevice/ @nandojve
/modules/hal_nordic/nrf_802154/ @jciupis
/modules/trusted-firmware-m/ @microbuilder
/kernel/device.c @andyross @nashif
/kernel/idle.c @andyross @nashif
/samples/ @nashif
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/boards/*microbit* @jhe
/samples/bluetooth/ @jhedberg @Vudentz @alwa-nordic
/samples/boards/intel_s1000_crb/ @sathishkuttan @dcpleung @nashif
/samples/compression/ @Navin-Sankar
/samples/drivers/can/ @alexanderwachter @henrikbrixandersen
/samples/drivers/clock_control_litex/ @mateusz-holenko @kgugala @pgielda
/samples/drivers/eeprom/ @henrikbrixandersen
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/subsys/lorawan/ @Mani-Sadhasivam
/samples/modules/canopennode/ @henrikbrixandersen
/samples/net/ @rlubos @tbursztyka @pfalcon
/samples/net/cloud/tagoio_http_post/ @nandojve
/samples/net/dns_resolve/ @rlubos @tbursztyka @pfalcon
/samples/net/lwm2m_client/ @rlubos
/samples/net/mqtt_publisher/ @rlubos
/samples/net/sockets/coap_*/ @rlubos
/samples/net/sockets/ @rlubos @tbursztyka @pfalcon
/samples/net/*civetweb* @Nukersson
/samples/sensor/ @MaureenHelm
/samples/shields/ @avisconti
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/logging/syst/ @dcpleung
/samples/subsys/shell/ @jakub-uC @nordic-krch
/samples/subsys/mgmt/mcumgr/smp_svr/ @aunsbjerg @nvlsianpu
/samples/subsys/mgmt/updatehub/ @nandojve @otavio
/samples/subsys/mgmt/osdp/ @sidcha
/samples/subsys/usb/ @jfischer-no
/samples/subsys/pm/ @nashif @ceolin
/samples/tfm_integration/ @microbuilder
/samples/userspace/ @dcpleung @nashif
/scripts/release/bug_bash.py @cfriedt
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/coredump/ @dcpleung
/scripts/footprint/ @nashif
/scripts/kconfig/ @ulfalizer
/scripts/logging/dictionary/ @dcpleung
/scripts/pylib/twister/expr_parser.py @nashif
/scripts/schemas/twister/ @nashif
/scripts/gen_app_partitions.py @dcpleung @nashif
scripts/gen_image_info.py @tejlmand
/scripts/get_maintainer.py @nashif
/scripts/dts/ @mbolivar-nordic @galak
/scripts/release/ @nashif
/scripts/ci/ @nashif
/arch/x86/gen_gdt.py @dcpleung @nashif
/arch/x86/gen_idt.py @dcpleung @nashif
/scripts/gen_kobject_list.py @dcpleung @nashif
/scripts/gen_kobject_placeholders.py @dcpleung
/scripts/gen_syscalls.py @dcpleung @nashif
/scripts/list_boards.py @mbolivar-nordic
/scripts/process_gperf.py @dcpleung @nashif
/scripts/gen_relocate_app.py @dcpleung
/scripts/requirements*.txt @mbolivar-nordic @galak @nashif
/scripts/tests/twister/ @aasthagr
/scripts/tests/build/test_subfolder_list.py @rmstoi
/scripts/tracing/ @nashif
/scripts/pylib/twister/ @nashif
/scripts/twister @nashif
/scripts/series-push-hook.sh @erwango
/scripts/utils/pinctrl_nrf_migrate.py @gmarull
/scripts/west_commands/ @mbolivar-nordic
/scripts/west_commands/runners/gd32isp.py @mbolivar-nordic @nandojve
/scripts/west_commands/tests/test_gd32isp.py @mbolivar-nordic @nandojve
/scripts/west-commands.yml @mbolivar-nordic
/scripts/zephyr_module.py @tejlmand
/scripts/uf2conv.py @petejohanson
/scripts/user_wordsize.py @cfriedt
/scripts/valgrind.supp @aescolar @daor-oti
/share/zephyr-package/ @tejlmand
/share/zephyrunittest-package/ @tejlmand
/subsys/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/subsys/bluetooth/audio/ @jhedberg @Vudentz @Thalley @asbjornsabo
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot @kruithofa
/subsys/bluetooth/mesh/ @jhedberg @PavelVPV @Vudentz @LingaoM
/subsys/canbus/ @alexanderwachter @henrikbrixandersen
/subsys/debug/ @nashif
/subsys/debug/coredump/ @dcpleung
/subsys/debug/gdbstub/ @ceolin
/subsys/debug/gdbstub.c @ceolin
/subsys/dfu/ @nvlsianpu
/subsys/disk/ @jfischer-no
/subsys/tracing/ @nashif
/subsys/debug/asan_hacks.c @aescolar @daor-oti
/subsys/demand_paging/ @dcpleung @nashif
/subsys/emul/ @sjg20
/subsys/fb/ @jfischer-no
/subsys/fs/ @nashif
/subsys/fs/fcb/ @nvlsianpu
/subsys/fs/nvs/ @Laczen
/subsys/ipc/ @carlocaione
/subsys/logging/ @nordic-krch
/subsys/logging/log_backend_net.c @nordic-krch @rlubos
/subsys/lorawan/ @Mani-Sadhasivam
/subsys/mgmt/ec_host_cmd/ @jettr
/subsys/mgmt/mcumgr/ @carlescufi @nvlsianpu
/subsys/mgmt/mcumgr/lib/ @de-nordic
/subsys/mgmt/hawkbit/ @Navin-Sankar
/subsys/mgmt/mcumgr/smp_udp.c @aunsbjerg
/subsys/mgmt/updatehub/ @nandojve @otavio
/subsys/mgmt/osdp/ @sidcha
/subsys/modbus/ @jfischer-no
/subsys/net/buf.c @jhedberg @tbursztyka @pfalcon @rlubos
/subsys/net/ip/ @rlubos @tbursztyka @pfalcon
/subsys/net/lib/ @rlubos @tbursztyka @pfalcon
/subsys/net/lib/dns/ @rlubos @tbursztyka @pfalcon @cfriedt
/subsys/net/lib/lwm2m/ @rlubos
/subsys/net/lib/config/ @rlubos @tbursztyka @pfalcon
/subsys/net/lib/mqtt/ @rlubos
/subsys/net/lib/coap/ @rlubos
/subsys/net/lib/sockets/socketpair.c @cfriedt
/subsys/net/lib/sockets/ @rlubos @tbursztyka @pfalcon
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @rlubos @tbursztyka
/subsys/net/l2/canbus/ @alexanderwachter
/subsys/net/pkt_filter/ @npitre
/subsys/net/*/openthread/ @rlubos
/subsys/pm/ @nashif @ceolin
/subsys/random/ @dleach02
/subsys/settings/ @nvlsianpu
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/shell/backends/shell_mqtt.c @ycsin
/subsys/stats/ @nvlsianpu
/subsys/storage/ @nvlsianpu
/subsys/sd/ @danieldegrasse
/subsys/task_wdt/ @martinjaeger
/subsys/testsuite/ @nashif
/subsys/testsuite/ztest/*/ztress* @nordic-krch
/subsys/timing/ @nashif @dcpleung
/subsys/usb/ @jfischer-no
/subsys/usb/device/class/dfu/usb_dfu.c @nvlsianpu
/tests/ @nashif
/tests/arch/arm/ @ioannisg @stephanosio
/tests/benchmarks/cmsis_dsp/ @stephanosio
/tests/boards/native_posix/ @aescolar @daor-oti
/tests/boards/intel_s1000_crb/ @dcpleung @sathishkuttan
/tests/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/tests/bluetooth/controller/ @cvinayak @thoh-ot @kruithofa @erbr-ot @sjanc @ppryga
/tests/bluetooth/bsim_bt/ @alwa-nordic @jhedberg @Vudentz @wopu-ot
/tests/bluetooth/bsim_bt/bsim_test_audio/ @alwa-nordic @jhedberg @Vudentz @wopu-ot @Thalley @asbjornsabo
/tests/bluetooth/tester/ @alwa-nordic @jhedberg @Vudentz @sjanc
/tests/posix/ @pfalcon
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin @d3zd3z
/tests/drivers/can/ @alexanderwachter @henrikbrixandersen
/tests/drivers/counter/ @nordic-krch
/tests/drivers/eeprom/ @henrikbrixandersen @sjg20
/tests/drivers/flash_simulator/ @nvlsianpu
/tests/drivers/gpio/ @mnkp
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @Mierunski
/tests/kernel/ @dcpleung @andyross @nashif
/tests/lib/ @nashif
/tests/lib/cmsis_dsp/ @stephanosio
/tests/net/ @rlubos @tbursztyka @pfalcon
/tests/net/buf/ @jhedberg @tbursztyka @pfalcon
/tests/net/lib/ @rlubos @tbursztyka @pfalcon
/tests/net/lib/http_header_fields/ @rlubos @tbursztyka
/tests/net/lib/mqtt_packet/ @rlubos
/tests/net/lib/coap/ @rlubos
/tests/net/npf/ @npitre
/tests/net/socket/socketpair/ @cfriedt
/tests/net/socket/ @rlubos @tbursztyka @pfalcon
/tests/subsys/debug/coredump/ @dcpleung
/tests/subsys/fs/ @nashif @nvlsianpu @de-nordic
/tests/subsys/sd/ @danieldegrasse
/tests/subsys/settings/ @nvlsianpu
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
/doc/kernel/ @andyross @nashif
*posix*.rst @aescolar @daor-oti


@@ -1,139 +1,78 @@
# Zephyr Project Code of Conduct
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
Examples of behavior that contributes to creating a positive environment
include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior include:
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
professional setting
## Enforcement Responsibilities
## Our Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@zephyrproject.org. Reports will be received by the Chair of the Zephyr
Governing Board, the Zephyr Project Director (Linux Foundation), and the Zephyr
Project Developer Advocate (Linux Foundation). You may refer to the [Governing
Board](https://zephyrproject.org/governing-board/) and [Linux Foundation
Staff](https://zephyrproject.org/staff/) web pages to identify who are the
individuals currently holding these positions.
All complaints will be reviewed and investigated promptly and fairly.
reported by contacting the project team at conduct@zephyrproject.org.
Reports will be received by Kate Stewart (Linux Foundation) and Amy Occhialino
(Intel). All complaints will be reviewed and investigated, and will result in a
response that is deemed necessary and appropriate to the circumstances. The
project team is obligated to maintain confidentiality with regard to the
reporter of an incident. Further details of specific enforcement policies may
be posted separately.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
The only changes made by The Zephyr Project to the original document were to
make explicit who the recipients of Code of Conduct incident reports are.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -1,19 +0,0 @@
# Constant variables to be used across Kconfig options
# Copyright (c) 2024 basalte bv
# SPDX-License-Identifier: Apache-2.0
INT8_MIN := -128
INT16_MIN := -32768
INT32_MIN := -2147483648
INT64_MIN := -9223372036854775808
INT8_MAX := 127
INT16_MAX := 32767
INT32_MAX := 2147483647
INT64_MAX := 9223372036854775807
UINT8_MAX := 255
UINT16_MAX := 65535
UINT32_MAX := 4294967295
UINT64_MAX := 18446744073709551615


@@ -2,16 +2,8 @@
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2016 Intel Corporation
# Copyright (c) 2023 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
source "Kconfig.constants"
osource "$(APPLICATION_SOURCE_DIR)/VERSION"
# This should be sourced early since the autogen Kconfig.dts options
# and macros may get used by shields/boards/SoC defconfig or modules.
source "dts/Kconfig"
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
@@ -21,21 +13,20 @@ source "dts/Kconfig"
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $KCONFIG_BOARD_DIR will be glob patterns when building documentation.
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
# This loads custom shields defconfigs (from BOARD_ROOT)
osource "$(KCONFIG_BINARY_DIR)/Kconfig.shield.defconfig"
# This loads Zephyr base shield defconfigs
source "boards/shields/*/Kconfig.defconfig"
osource "$(KCONFIG_BOARD_DIR)/Kconfig.defconfig"
# This loads Zephyr specific SoC root defconfigs
source "$(KCONFIG_BINARY_DIR)/soc/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
# This loads custom SoC root defconfigs
osource "$(KCONFIG_BINARY_DIR)/Kconfig.soc.defconfig"
# This loads Zephyr base SoC root defconfigs
osource "soc/$(ARCH)/*/Kconfig.defconfig"
# This loads the toolchain defconfigs
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig.defconfig"
# This loads the testsuite defconfig
source "subsys/testsuite/Kconfig.defconfig"
menu "Modules"
@@ -47,6 +38,7 @@ source "boards/Kconfig"
source "soc/Kconfig"
source "arch/Kconfig"
source "kernel/Kconfig"
source "dts/Kconfig"
source "drivers/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
@@ -57,7 +49,7 @@ menu "Build and Link Features"
menu "Linker Options"
choice LINKER_ORPHAN_CONFIGURATION
choice
prompt "Linker Orphan Section Handling"
default LINKER_ORPHAN_SECTION_WARN
@@ -142,17 +134,6 @@ config ROM_START_OFFSET
alignment requirements on most ARM targets, although some targets
may require smaller or larger values.
config ROM_END_OFFSET
hex "ROM end offset"
default 0
help
If non-zero, this option reduces the maximum size that the Zephyr image is allowed to
occupy; this leaves room for additional image storage which can be created and used by
other systems such as bootloaders (for MCUboot, this would include the image swap
fields and TLV storage at the end of the image).
If unsure, leave at the default value 0.
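As a rough sketch of how this option is typically consumed (the value below is a placeholder, not taken from this diff), an application that leaves room for an MCUboot image trailer at the end of flash might add to its prj.conf:

    # Reserve space at the end of the image for trailer/TLV data (placeholder size)
    CONFIG_ROM_END_OFFSET=0x800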
config LD_LINKER_SCRIPT_SUPPORTED
bool
default y
@@ -215,6 +196,12 @@ config LINKER_SORT_BY_ALIGNMENT
in decreasing size of symbols. This helps to minimize
padding between symbols.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
help
The option specifies that the vector table should be placed at the
start of SRAM instead of the start of flash.
config HAS_SRAM_OFFSET
bool
help
@@ -254,20 +241,6 @@ config LINKER_USE_PINNED_SECTION
Requires that pinned sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_USE_ONDEMAND_SECTION
bool "Use Evictable Linker Section"
depends on DEMAND_MAPPING
depends on !LINKER_USE_PINNED_SECTION
depends on !ARCH_MAPS_ALL_RAM
help
If enabled, the symbols which may be evicted from memory
will be put into a linker section reserved for on-demand symbols.
During boot, the corresponding memory will be mapped as paged out.
This is conceptually the opposite of CONFIG_LINKER_USE_PINNED_SECTION.
Requires that on-demand sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
bool "Generic sections are present at boot" if DEMAND_PAGING && LINKER_USE_PINNED_SECTION
default y
@@ -281,208 +254,28 @@ config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
If unsure, say Y.
config LINKER_LAST_SECTION_ID
bool "Last section identifier"
default y if !ARM64
depends on ARM || ARM64 || RISCV
help
If enabled, the last section will contain an identifier.
This ensures that the '_flash_used' linker symbol will always be
correctly calculated, even in cases where the location counter may
have been incremented for alignment purposes but no data is placed
after alignment.
Note: in cases where the flash is fully used, for example when application
specific data is written at the end of the flash area, writing a
last section identifier may cause rom region overflow.
In such cases this setting should be disabled.
config LINKER_LAST_SECTION_ID_PATTERN
hex "Last section identifier pattern"
default "0xE015E015"
depends on LINKER_LAST_SECTION_ID
help
Pattern to fill into last section as identifier.
Default pattern is 0xE015 (end of last section), but any pattern can
be used.
The size of the pattern must not exceed 4 bytes.
config LINKER_USE_NO_RELAX
bool
help
Hidden symbol to allow features to force the use of no relax.
config LINKER_USE_RELAX
bool "Linker optimization of call addressing"
depends on !LINKER_USE_NO_RELAX
default y
help
This option performs global optimizations that become possible when the linker resolves
addressing in the program, such as relaxing address modes and synthesizing new
instructions in the output object file. For ld and lld, this enables `--relax`.
On platforms where this is not supported, `--relax` is accepted but ignored.
Disabling it can reduce performance, as the linker is no longer able to substitute long /
ineffective jump calls with shorter / more effective instructions.
endmenu # "Linker Sections"
config LINKER_ITERABLE_SUBALIGN
int
default 8 if 64BIT
default 4
help
Hidden option for the default subalignment of iterable sections.
config LINKER_DEVNULL_SUPPORT
bool
default y if CPU_CORTEX_M || (RISCV && !64BIT)
config LINKER_DEVNULL_MEMORY
bool "Devnull region"
depends on LINKER_DEVNULL_SUPPORT
help
A devnull region is created. It is stripped from the final binary but remains
in the byproduct ELF file.
config LINKER_DEVNULL_MEMORY_SIZE
int "Devnull region size"
depends on LINKER_DEVNULL_MEMORY
default 262144
help
Size can be adjusted so it fits all data placed in that region.
endmenu
menu "Compiler Options"
config REQUIRES_STD_C99
bool
select DEPRECATED
help
Hidden option to select compiler support for the C99 standard or higher.
config REQUIRES_STD_C11
bool
select DEPRECATED
select REQUIRES_STD_C99
help
Hidden option to select compiler support for the C11 standard or higher.
config REQUIRES_STD_C17
bool
help
Hidden option to select compiler support for the C17 standard or higher.
config REQUIRES_STD_C23
bool
select REQUIRES_STD_C17
help
Hidden option to select compiler support for the C23 standard or higher.
choice STD_C
prompt "C Standard"
default STD_C23 if REQUIRES_STD_C23
default STD_C17
help
C Standards.
config STD_C90
bool "C90 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C99
help
1989 C standard as completed in 1989 and ratified by ISO/IEC
as ISO/IEC 9899:1990. This version is known as "ANSI C".
config STD_C99
bool "C99 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C11
help
1999 C standard.
config STD_C11
bool "C11 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C17
help
2011 C standard.
config STD_C17
bool "C17"
depends on !REQUIRES_STD_C23
help
2017 C standard, addresses defects in C11 without introducing
new language features.
config STD_C23
bool "C23"
help
2023 C standard.
endchoice
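For illustration (assuming the toolchain supports it and no hidden REQUIRES_STD_* symbol forces a newer standard), an application could pick a standard from the choice above in its prj.conf:

    # Build with the 2023 C standard instead of the C17 default
    CONFIG_STD_C23=y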
config TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
bool
default y if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr"
help
Hidden option to signal that toolchain supports GNU Extensions.
config GNU_C_EXTENSIONS
bool "GNU C Extensions"
depends on TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
help
Enable GNU C Extensions. GNU C provides several language features
not found in ISO standard C.
config CODING_GUIDELINE_CHECK
bool "Enforce coding guideline rules"
help
Use available compiler flags to check coding guideline rules during
the build.
config NATIVE_LIBC
bool
select FULL_LIBC_SUPPORTED
select TC_PROVIDES_POSIX_C_LANG_SUPPORT_R
config NATIVE_APPLICATION
bool "Build as a native host application"
help
Zephyr will use the host system C library.
config NATIVE_LIBCPP
bool
select FULL_LIBCPP_SUPPORTED
help
Zephyr will use the host system C++ library
config NATIVE_BUILD
bool
select NATIVE_LIBC if EXTERNAL_LIBC
select NATIVE_LIBCPP if EXTERNAL_LIBCPP
help
Zephyr will be built targeting the host system for debug and
development purposes.
config NATIVE_LIBRARY
bool
select NATIVE_BUILD
help
Build as a prelinked library for the native host target.
This library can later be built into an executable for the host.
config COMPILER_FREESTANDING
bool "Build in a freestanding compiler mode"
help
Configure the compiler to operate in freestanding mode according to
the C and C++ language specifications. Freestanding mode reduces the
requirements of the compiler and language environment, which can
negatively impact the ability for the compiler to detect errors and
perform optimizations.
Build as a native application that can run on the host, using
resources and libraries provided by the host.
choice COMPILER_OPTIMIZATIONS
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm"
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
@@ -495,12 +288,6 @@ config SIZE_OPTIMIZATIONS
Compiler optimizations will be set to -Os independently of other
options.
config SIZE_OPTIMIZATIONS_AGGRESSIVE
bool "Aggressively optimize for size"
help
Compiler optimizations will be set to -Oz independently of other
options.
config SPEED_OPTIMIZATIONS
bool "Optimize for speed"
help
@@ -519,91 +306,14 @@ config NO_OPTIMIZATIONS
Compiler optimizations will be set to -O0 independently of other
options.
Selecting this option will likely require manual tuning of the
default stack sizes in order to avoid stack overflows.
endchoice
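As a hedged example of overriding the default from the choice above (which level is appropriate is project-specific), a debug-oriented build might set:

    # Favor debuggability over size; CONFIG_NO_OPTIMIZATIONS=y would disable
    # optimization entirely but may require larger stack sizes, per the help above
    CONFIG_DEBUG_OPTIMIZATIONS=y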
config LTO
bool "Link Time Optimization"
depends on !(GEN_ISR_TABLES || GEN_IRQ_VECTOR_TABLE) || ISR_TABLES_LOCAL_DECLARATION
depends on !NATIVE_LIBRARY
depends on !CODE_DATA_RELOCATION
help
This option enables Link Time Optimization.
config LTO_SINGLE_THREADED
bool "Single-threaded LTO"
depends on LTO
help
This option instructs the linker to use a single thread to process
LTO. See the following issue for more info:
https://github.com/zephyrproject-rtos/sdk-ng/issues/1038
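A minimal sketch of enabling both options together in prj.conf, assuming the target satisfies the dependencies listed above:

    # Enable link-time optimization; fall back to single-threaded LTO processing
    CONFIG_LTO=y
    CONFIG_LTO_SINGLE_THREADED=y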
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config DEPRECATION_TEST
bool "Indicate test for deprecated feature"
help
This option is selected by tests which check functionality of
deprecated features. It ensures that COMPILER_WARNINGS_AS_ERRORS
is not selected as that would generate errors when the deprecated
features are used.
config COMPILER_SAVE_TEMPS
bool "Save temporary object files"
help
Instruct the compiler to save the temporary intermediate files
permanently. These can be useful for troubleshooting build issues.
config COMPILER_TRACK_MACRO_EXPANSION
bool "Track macro expansion"
default y
help
When enabled, locations of tokens across macro expansions will be
tracked. Disabling this option may be useful to debug long macro
expansion chains.
config COMPILER_COLOR_DIAGNOSTICS
bool "Colored diagnostics"
default y
help
Compiler diagnostic messages are colorized.
choice COMPILER_SECURITY_FORTIFY
prompt "Detect buffer overflows in libc calls"
default FORTIFY_SOURCE_NONE if NO_OPTIMIZATIONS || MINIMAL_LIBC || NATIVE_BUILD
default FORTIFY_SOURCE_COMPILE_TIME
help
Buffer overflow checking in libc calls. Supported by Clang and
GCC when using Picolibc or Newlib. Requires compiler optimization
to be enabled.
config FORTIFY_SOURCE_NONE
bool "No detection"
help
Disables both compile-time and run-time checking.
config FORTIFY_SOURCE_COMPILE_TIME
bool "Compile-time detection"
help
Enables only compile-time checking. Compile-time checking
doesn't increase executable size or reduce performance; it
limits checking to what can be done with information available
at compile time.
config FORTIFY_SOURCE_RUN_TIME
bool "Compile-time and run-time detection"
help
Enables both compile-time and run-time checking. Run-time
checking increases coverage at the expense of additional code,
and means that applications will raise a runtime exception
when buffer overflow is detected.
endchoice
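For example (assuming a GCC or Clang build with Picolibc or Newlib and optimization enabled, per the help above), a project willing to pay the extra code size could select run-time checking:

    # Enable compile-time and run-time buffer-overflow detection in libc calls
    CONFIG_FORTIFY_SOURCE_RUN_TIME=y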
config COMPILER_OPT
string "Custom compiler options"
help
@@ -622,13 +332,6 @@ config MISRA_SANE
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
config TOOLCHAIN_SUPPORTS_VLA_IN_STATEMENTS
bool
default y
help
Hidden symbol to state if the toolchain can handle vla in
statements.
endmenu
choice
@@ -638,17 +341,17 @@ choice
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECKIF() macro.
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECKIF() macro.
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Runtime error checks"
help
Always perform runtime checks covered with the CHECKIF() macro. This
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
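As a sketch only (RUNTIME_ERROR_CHECKS remains the default used during testing), a development build that prefers to halt on CHECKIF() failures could select the following, assuming assertions are also enabled:

    # Turn CHECKIF() failures into assertions during development
    CONFIG_ASSERT=y
    CONFIG_ASSERT_ON_ERRORS=y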
@@ -667,13 +370,9 @@ config OUTPUT_STAT
help
Create a stat file using readelf -e <elf>
config OUTPUT_SYMBOLS
bool "Create a symbol file"
help
Create a symbol file using nm <elf>
config OUTPUT_DISASSEMBLY
bool "Create a disassembly file"
default y
help
Create an .lst file with the assembly listing of the firmware.
@@ -685,24 +384,6 @@ config OUTPUT_DISASSEMBLE_ALL
The .lst file will contain a complete disassembly of the firmware,
not just the sections expected to contain instructions, including zeros.
config OUTPUT_DISASSEMBLY_WITH_SOURCE
bool "Include source code in output disassembly file"
default y
depends on OUTPUT_DISASSEMBLY && !OUTPUT_DISASSEMBLE_ALL
help
The .lst file will also contain the source code. Having
control over this can be useful for reproducible builds
since it can be used to remove one of the elements of
the .lst file that can vary across platforms because
of reasons such as having ".." include paths.
config OUTPUT_DISASSEMBLY_NO_ALIASES
bool "Disassemble with raw instruction mnemonics"
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain raw instruction mnemonics instead of
pseudo-instructions.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -721,24 +402,10 @@ config CLEANUP_INTERMEDIATE_FILES
bool "Remove all intermediate files"
help
Delete intermediate files to save space and clean up clutter resulting
from the build process. Note this breaks incremental builds, west spdx
(Software Bill of Materials generation), and maybe others.
config BUILD_GAP_FILL_PATTERN
hex "Gap fill pattern"
default 0xFF
help
Pattern used for gap filling of output files.
This value should be set to the value of a clean flash as this can
significantly reduce flash write times.
This setting only defines the gap fill pattern and doesn't enable gap
filling.
Note: binary files are always gap filled as they contain no address
information.
from the build process.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/s19 files [DEPRECATED]."
select DEPRECATED
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
@@ -746,12 +413,6 @@ config BUILD_OUTPUT_HEX
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_HEX_GAP_FILL
bool "Fill gaps in hex files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in hex based files.
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
@@ -786,12 +447,6 @@ config BUILD_OUTPUT_S19
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_S19_GAP_FILL
bool "Fill gaps in s19 files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in s19 based files.
config BUILD_OUTPUT_UF2
bool "Build a binary in UF2 format"
depends on BUILD_OUTPUT_BIN
@@ -799,23 +454,16 @@ config BUILD_OUTPUT_UF2
Build a UF2 binary zephyr/zephyr.uf2 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_MOT
bool "Build a binary in MOT format"
help
Build a MOT binary zephyr/zephyr.mot in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
if BUILD_OUTPUT_UF2
config BUILD_OUTPUT_UF2_FAMILY_ID
string "UF2 device family ID"
default "0x1c5f21b0" if SOC_SERIES_ESP32
default "0x1c5f21b0" if SOC_ESP32
default "0x621e937a" if SOC_NRF52833_QIAA
default "0xada52840" if SOC_NRF52840_QIAA
default "0x4fb2d5bd" if SOC_SERIES_IMXRT10XX || SOC_SERIES_IMXRT11XX
default "0x4fb2d5bd" if SOC_SERIES_IMX_RT
default "0x2abc77ec" if SOC_SERIES_LPC55XXX
default "0xe48bff56" if SOC_SERIES_RP2040
default "0xe48bff57" if SOC_SERIES_RP2350
default "0xe48bff56" if SOC_SERIES_RP2XXX
default "0x68ed2b88" if SOC_SERIES_SAMD21
default "0x55114460" if SOC_SERIES_SAMD51
default "0x647824b6" if SOC_SERIES_STM32F0X
@@ -831,7 +479,7 @@ config BUILD_OUTPUT_UF2_FAMILY_ID
default "0x04240bdf" if SOC_SERIES_STM32L5X
default "0x70d16653" if SOC_SERIES_STM32WBX
default "0x5ee21072" if SOC_STM32F103XE
default "0x57755a57" if SOC_SERIES_STM32F4X && (!SOC_STM32F407XE) && (!SOC_STM32F407XG)
default "0x57755a57" if SOC_STM32F401XC || SOC_STM32F401XE
default "0x6d0922fa" if SOC_STM32F407XE
default "0x8fb060fe" if SOC_STM32F407XG
help
@@ -856,11 +504,6 @@ config BUILD_OUTPUT_STRIPPED
Build a stripped binary zephyr/zephyr.strip in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_COMPRESS_DEBUG_SECTIONS
bool "Compress debug sections in the ELF file"
help
Compress debug sections in the ELF file to reduce the file size.
config BUILD_OUTPUT_ADJUST_LMA
string
help
@@ -885,23 +528,6 @@ config BUILD_OUTPUT_ADJUST_LMA
default "$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_IMAGE_M4))-\
$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH))"
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
def_string "*"
depends on BUILD_OUTPUT_ADJUST_LMA!=""
help
This determines the output sections to which the above LMA adjustment
will be applied.
The value can be the name of a section in the final ELF, like "text".
It can also be a pattern with wildcards, such as "*bss", which could
match more than one section name. Multiple such patterns can be given
as a ";"-separated list. It's possible to supply a 'negative' pattern
starting with "!", to exclude sections matched by a preceding pattern.
By default, all sections will have their LMA adjusted. The following
example excludes one section produced by the code relocation feature:
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
default "*;!.extflash_text_reloc"
config BUILD_OUTPUT_INFO_HEADER
bool "Create a image information header"
help
@@ -913,15 +539,6 @@ config BUILD_OUTPUT_INFO_HEADER
- VMA address of each segment
- Size of each segment
config BUILD_ALIGN_LMA
bool "Align LMA in output image"
default y if BUILD_OUTPUT_ADJUST_LMA!=""
help
Ensure that the LMA for each section in the output image respects
the alignment requirements of that section. This is required for
some tooling, such as objcopy, to be able to adjust the LMA of the
ELF file.
config APPLICATION_DEFINED_SYSCALL
bool "Scan application folder for any syscall definition"
help
@@ -969,61 +586,8 @@ config BUILD_OUTPUT_META_STATE_PROPAGATE
defined when `west update` was run the last time (`manifest-rev`).
The off state is only present if a west workspace is found.
config BUILD_OUTPUT_STRIP_PATHS
bool "Strip absolute paths from binaries"
default y
help
If the compiler supports it, strip the $(ZEPHYR_BASE) prefix from the
__FILE__ macro used in __ASSERT*, in the
.noinit."/home/joe/zephyr/fu/bar.c" section names and in any
application code.
This saves some memory, stops leaking user locations in binaries, makes
failure logs more deterministic and most importantly makes builds more
deterministic.
Debuggers usually have a path mapping feature to ensure the files are
still found.
config CHECK_INIT_PRIORITIES
bool "Build time initialization priorities check"
default y
# If we are building a native_simulator target, we can only check the init priorities
# if we are building the final output but we are not assembling several images together
depends on !(NATIVE_LIBRARY && (!BUILD_OUTPUT_EXE || NATIVE_SIMULATOR_EXTRA_IMAGE_PATHS != ""))
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "armclang"
help
Check the build for initialization priority issues by comparing the
initialization priority in the build with the device dependency
derived from the devicetree definition.
Fails the build on priority errors (dependent devices, inverted
priority).
config EMIT_ALL_SYSCALLS
bool "Emit all possible syscalls in the tree"
help
This tells the build system to emit all possible syscalls found
in the tree, instead of only those syscalls associated with enabled
drivers and subsystems.
endmenu
config DEPRECATED
bool
help
Symbol that must be selected by a feature or module if it is
considered to be deprecated.
When adding this to an option, remember to follow the instructions in
https://docs.zephyrproject.org/latest/develop/api/api_lifecycle.html#deprecated
config WARN_DEPRECATED
bool
default y
prompt "Warn on deprecated usage"
help
Print a warning when the Kconfig tree is parsed if any deprecated
features are enabled, or at compile time, when deprecated macros or
symbols are used.
config EXPERIMENTAL
bool
help
@@ -1037,34 +601,6 @@ config WARN_EXPERIMENTAL
Print a warning when the Kconfig tree is parsed if any experimental
features are enabled.
config NOT_SECURE
bool
help
Symbol to be selected by a feature to indicate that feature is
not secure.
config TAINT
bool
help
Symbol that must be selected by a feature or module if the Zephyr
build is considered tainted.
config ENFORCE_ZEPHYR_STDINT
bool
prompt "Enforce Zephyr convention for stdint"
depends on !ARCH_POSIX
default y
help
This enforces the Zephyr stdint convention where int32_t = int,
int64_t = long long, and intptr_t = long so that short string
format length modifiers can be used universally across ILP32
and LP64 architectures. Sometimes this is not possible e.g. when
linking against a binary-only C++ library whose type mangling
is incompatible with the Zephyr convention, or if the build
environment doesn't allow such enforcement, in which case this
should be turned off with the caveat that argument type validation
on Zephyr code will be skipped.
endmenu
@@ -1072,10 +608,145 @@ menu "Boot Options"
config IS_BOOTLOADER
bool "Act as a bootloader"
depends on XIP
depends on ARM
help
This option indicates that Zephyr will act as a bootloader to execute
a separate Zephyr image payload.
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader"
default 16
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
This option specifies the amount of SRAM (measured in kB) reserved for
a bootloader image, when either:
- the Zephyr image itself is to act as the bootloader, or
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
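For instance (the amount is a placeholder and highly target-specific), a RAM-loaded (!XIP) image that must leave room for its loader might set:

    # Reserve 64 kB of SRAM for the bootloader that copies this image into RAM
    CONFIG_BOOTLOADER_SRAM_SIZE=64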
config MCUBOOT
bool
help
Hidden option used to indicate that the current image is MCUboot.
config BOOTLOADER_MCUBOOT
bool "MCUboot bootloader support"
select USE_DT_CODE_PARTITION
imply INIT_ARCH_HW_AT_BOOT if ARCH_SUPPORTS_ARCH_HW_INIT
depends on !MCUBOOT
help
This option signifies that the target uses MCUboot as a bootloader,
or in other words that the image is to be chain-loaded by MCUboot.
This sets several required build system and Device Tree options in
order for the image generated to be bootable using the MCUboot open
source bootloader. Currently this includes:
* Setting ROM_START_OFFSET to a default value that allows space
for the MCUboot image header
* Activating SW_VECTOR_RELAY_CLIENT on Cortex-M0
(or Armv8-M baseline) targets with no built-in vector relocation
mechanisms
By default, this option instructs Zephyr to initialize the core
architecture HW registers during boot, when this is supported by
the application. This removes the need for MCUboot to reset
the core registers' state itself.
if BOOTLOADER_MCUBOOT
config MCUBOOT_SIGNATURE_KEY_FILE
string "Path to the mcuboot signing key file"
default ""
depends on !MCUBOOT_GENERATE_UNSIGNED_IMAGE
help
The file contains a key pair whose public half is verified
by your target's MCUboot image. The file is in PEM format.
If set to a non-empty value, the build system tries to
sign the final binaries using a 'west sign -t imgtool' command.
The signed binaries are placed in the build directory
at zephyr/zephyr.signed.bin and zephyr/zephyr.signed.hex.
The file names can be customized with CONFIG_KERNEL_BIN_NAME.
The existence of bin and hex files depends on CONFIG_BUILD_OUTPUT_BIN
and CONFIG_BUILD_OUTPUT_HEX.
This option should contain a path to the same file as the
BOOT_SIGNATURE_KEY_FILE option in your MCUboot .config. The path
may be absolute or relative to the west workspace topdir. (The MCUboot
config option is used for the MCUboot bootloader image; this option is
for your application which is to be loaded by MCUboot. The MCUboot
config option can be a relative path from the MCUboot repository
root.)
If left empty, you must sign the Zephyr binaries manually.
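A minimal, hypothetical prj.conf fragment for an image that is chain-loaded by and signed for MCUboot (the key path is a placeholder and must match BOOT_SIGNATURE_KEY_FILE in the MCUboot build):

    CONFIG_BOOTLOADER_MCUBOOT=y
    # Placeholder path, absolute or relative to the west workspace topdir
    CONFIG_MCUBOOT_SIGNATURE_KEY_FILE="bootloader/mcuboot/root-rsa-2048.pem"

With these set, the build invokes 'west sign -t imgtool' as described above and places zephyr.signed.bin and zephyr.signed.hex in the build directory.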
config MCUBOOT_ENCRYPTION_KEY_FILE
string "Path to the mcuboot encryption key file"
default ""
depends on MCUBOOT_SIGNATURE_KEY_FILE != ""
help
The file contains the public key that is used to encrypt the
ephemeral key that encrypts the image. The corresponding
private key is hard coded in the MCUboot source code and is
used to decrypt the ephemeral key that is embedded in the
image. The file is in PEM format.
If set to a non-empty value, the build system tries to
sign and encrypt the final binaries using a 'west sign -t imgtool'
command. The binaries are placed in the build directory at
zephyr/zephyr.signed.encrypted.bin and
zephyr/zephyr.signed.encrypted.hex.
The file names can be customized with CONFIG_KERNEL_BIN_NAME.
The existence of bin and hex files depends on CONFIG_BUILD_OUTPUT_BIN
and CONFIG_BUILD_OUTPUT_HEX.
This option should either be an absolute path or a path relative to
the west workspace topdir.
Example: './bootloader/mcuboot/enc-rsa2048-pub.pem'
If left empty, you must encrypt the Zephyr binaries manually.
config MCUBOOT_EXTRA_IMGTOOL_ARGS
string "Extra arguments to pass to imgtool"
default ""
help
If CONFIG_MCUBOOT_SIGNATURE_KEY_FILE is a non-empty string,
you can use this option to pass extra options to imgtool.
For example, you could set this to "--version 1.2".
config MCUBOOT_GENERATE_UNSIGNED_IMAGE
bool "Generate unsigned binary image bootable with MCUboot"
help
Enabling this configuration allows automatic unsigned binary image
generation when the MCUboot signing key is not provided,
i.e., MCUBOOT_SIGNATURE_KEY_FILE is left empty.
config MCUBOOT_GENERATE_CONFIRMED_IMAGE
bool "Also generate a padded, confirmed image"
help
The signed, padded, and confirmed binaries are placed in the build
directory at zephyr/zephyr.signed.confirmed.bin and
zephyr/zephyr.signed.confirmed.hex.
The file names can be customized with CONFIG_KERNEL_BIN_NAME.
The existence of bin and hex files depends on CONFIG_BUILD_OUTPUT_BIN
and CONFIG_BUILD_OUTPUT_HEX.
endif # BOOTLOADER_MCUBOOT
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
depends on (SOC_ESP32 || SOC_ESP32S2 || SOC_ESP32C3) && !BOOTLOADER_MCUBOOT
default y
help
This option will trigger the compilation of the ESP-IDF bootloader
inside the build folder.
At flash time, the bootloader will be flashed along with the Zephyr image.
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION
@@ -1121,17 +792,23 @@ endmenu
menu "Compatibility"
config LEGACY_GENERATED_INCLUDE_PATH
bool "Legacy include path for generated headers"
config COMPAT_INCLUDES
bool "Suppress warnings when using header shims"
default y
help
Suppress any warnings from the pre-processor when including
deprecated header files.
endmenu
config LEGACY_INCLUDE_PATH
bool "Allow for the legacy include paths (without the zephyr/ prefix)"
default y
help
Allow applications and libraries to use the Zephyr legacy include
path for the generated headers which does not use the `zephyr/` prefix.
The preferred way to include the `version.h` header is now
<zephyr/version.h>; this Kconfig is currently enabled by default so that
user applications won't immediately fail to compile.
This Kconfig will be deprecated and eventually removed in future releases.
endmenu
path which does not use the zephyr/ prefix. For example, the
preferred way to include a Zephyr header is to use <zephyr/kernel.h>,
but enabling CONFIG_LEGACY_INCLUDE_PATH will allow developers to
use <kernel.h> instead. This (without the zephyr/ prefix) is
deprecated and should be avoided. Eventually, it will not be
supported.
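A small sketch of how a project might opt out of the legacy paths early (both symbols default to y per the help above, so these lines are only needed to disable them):

    # Fail the build on includes that omit the zephyr/ prefix
    CONFIG_LEGACY_INCLUDE_PATH=n
    CONFIG_LEGACY_GENERATED_INCLUDE_PATH=n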


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -1,10 +0,0 @@
> [!IMPORTANT]
> The license files in this directory are maintained as per the recommendation
> of the REUSE specification (https://reuse.software/spec-3.3).
> These are licenses that may apply to some components hosted in the source tree
> but that do not end up in built binaries
> (see https://docs.zephyrproject.org/latest/LICENSING.html#zephyr-licensing).
>
> These files do _not_ define the license of the Zephyr project itself.
> Zephyr is licensed under the Apache 2.0 license, as specified in
> the [LICENSE](/LICENSE) file at the root of the repository.

File diff suppressed because it is too large

View File

@@ -2,17 +2,16 @@
<a href="https://www.zephyrproject.org">
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="doc/_static/images/logo-readme-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="doc/_static/images/logo-readme-light.svg">
<img src="doc/_static/images/logo-readme-light.svg">
</picture>
<img src="doc/_static/images/logo-readme.svg">
</p>
</a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a href="https://scorecard.dev/viewer/?uri=github.com/zephyrproject-rtos/zephyr"><img src="https://api.securityscorecards.dev/projects/github.com/zephyrproject-rtos/zephyr/badge"></a>
<a href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain"><img src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a
href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain">
<img
src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -24,7 +23,7 @@ resource-constrained systems: from simple embedded environmental sensors and
LED wearables to sophisticated smart watches and IoT wireless gateways.
The Zephyr kernel supports multiple architectures, including ARM (Cortex-A,
Cortex-R, Cortex-M), Intel x86, ARC, Tensilica Xtensa, and RISC-V,
Cortex-R, Cortex-M), Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V,
SPARC, MIPS, and a large number of `supported boards`_.
.. below included in doc/introduction/introduction.rst
@@ -51,59 +50,39 @@ Resources
Here's a quick summary of resources to help you find your way around:
Getting Started
---------------
* **Help**: `Asking for Help Tips`_
* **Documentation**: http://docs.zephyrproject.org (`Getting Started Guide`_)
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://github.com/zephyrproject-rtos/zephyr/releases
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,
respectively. You can join the developer's list and search its archives at
`Zephyr Development mailing list`_. The other `Zephyr mailing list
subgroups`_ have their own archives and sign-up pages.
* **Nightly CI Build Status**: https://lists.zephyrproject.org/g/builds
The builds@lists.zephyrproject.org mailing list archives the CI nightly build results.
* **Chat**: Real-time chat happens in Zephyr's Discord Server. Use
this `Discord Invite`_ to register.
* **Contributing**: see the `Contribution Guide`_
* **Wiki**: `Zephyr GitHub wiki`_
* **Issues**: https://github.com/zephyrproject-rtos/zephyr/issues
* **Security Issues**: Email vulnerabilities@zephyrproject.org to report
security issues; also see our `Security`_ documentation. Security issues are
tracked separately at https://zephyrprojectsec.atlassian.net.
* **Zephyr Project Website**: https://zephyrproject.org
| 📖 `Zephyr Documentation`_
| 🚀 `Getting Started Guide`_
| 🙋🏽 `Tips when asking for help`_
| 💻 `Code samples`_
Code and Development
--------------------
| 🌐 `Source Code Repository`_
| 📦 `Releases`_
| 🤝 `Contribution Guide`_
Community and Support
---------------------
| 💬 `Discord Server`_ for real-time community discussions
| 📧 `User mailing list (users@lists.zephyrproject.org)`_
| 📧 `Developer mailing list (devel@lists.zephyrproject.org)`_
| 📬 `Other project mailing lists`_
| 📚 `Project Wiki`_
Issue Tracking and Security
---------------------------
| 🐛 `GitHub Issues`_
| 🔒 `Security documentation`_
| 🛡️ `Security Advisories Repository`_
| ⚠️ Report security vulnerabilities at vulnerabilities@zephyrproject.org
Additional Resources
--------------------
| 🌐 `Zephyr Project Website`_
| 📺 `Zephyr Tech Talks`_
.. _Zephyr Project Website: https://www.zephyrproject.org
.. _Discord Server: https://chat.zephyrproject.org
.. _supported boards: https://docs.zephyrproject.org/latest/boards/index.html
.. _Zephyr Documentation: https://docs.zephyrproject.org
.. _Introduction to Zephyr: https://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: https://docs.zephyrproject.org/latest/develop/getting_started/index.html
.. _Contribution Guide: https://docs.zephyrproject.org/latest/contribute/index.html
.. _Source Code Repository: https://github.com/zephyrproject-rtos/zephyr
.. _GitHub Issues: https://github.com/zephyrproject-rtos/zephyr/issues
.. _Releases: https://github.com/zephyrproject-rtos/zephyr/releases
.. _Project Wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _User mailing list (users@lists.zephyrproject.org): https://lists.zephyrproject.org/g/users
.. _Developer mailing list (devel@lists.zephyrproject.org): https://lists.zephyrproject.org/g/devel
.. _Other project mailing lists: https://lists.zephyrproject.org/g/main/subgroups
.. _Code samples: https://docs.zephyrproject.org/latest/samples/index.html
.. _Security documentation: https://docs.zephyrproject.org/latest/security/index.html
.. _Security Advisories Repository: https://github.com/zephyrproject-rtos/zephyr/security
.. _Tips when asking for help: https://docs.zephyrproject.org/latest/develop/getting_started/index.html#asking-for-help
.. _Zephyr Tech Talks: https://www.zephyrproject.org/tech-talks
.. _Discord Invite: https://chat.zephyrproject.org
.. _supported boards: http://docs.zephyrproject.org/latest/boards/index.html
.. _Zephyr Documentation: http://docs.zephyrproject.org
.. _Introduction to Zephyr: http://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: http://docs.zephyrproject.org/latest/getting_started/index.html
.. _Contribution Guide: http://docs.zephyrproject.org/latest/contribute/index.html
.. _Zephyr GitHub wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _Zephyr Development mailing list: https://lists.zephyrproject.org/g/devel
.. _Zephyr mailing list subgroups: https://lists.zephyrproject.org/g/main/subgroups
.. _Sample and Demo Code Examples: http://docs.zephyrproject.org/latest/samples/index.html
.. _Security: http://docs.zephyrproject.org/latest/security/index.html
.. _Asking for Help Tips: https://docs.zephyrproject.org/latest/getting_started/index.html#asking-for-help

View File

@@ -1,26 +0,0 @@
version = 1
# Declare default license and copyright text for files that typically do not or cannot include them.
[[annotations]]
path = [
# zephyr-keep-sorted-start
"**/*.cfg",
"**/*.cmake",
"**/*.conf",
"**/*.ecl",
"**/*.html",
"**/*.json",
"**/*.rst",
"**/*.yaml",
"**/*.yml",
"**/*_defconfig",
"**/CMakeLists.txt",
"**/Kconfig*",
"doc/requirements.in",
"doc/requirements.txt",
"scripts/requirements-*.in",
"scripts/requirements-*.txt",
# zephyr-keep-sorted-stop
]
SPDX-License-Identifier = "Apache-2.0"
SPDX-FileCopyrightText = "Copyright The Zephyr Project Contributors"

View File

@@ -1 +0,0 @@
0.17.4

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 4
VERSION_MINOR = 3
PATCHLEVEL = 99
VERSION_MAJOR = 3
VERSION_MINOR = 1
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -1,8 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
# FIXME: SHADOW_VARS: Remove this once we have enabled -Wshadow globally.
add_compile_options($<TARGET_PROPERTY:compiler,warning_shadow_variables>)
add_definitions(-D__ZEPHYR_SUPERVISOR__)
include_directories(

View File

@@ -8,10 +8,8 @@
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
source "$(KCONFIG_BINARY_DIR)/arch/Kconfig"
# ToDo: Generate a Kconfig.arch for loading of additional arch in HWMv2.
osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"
# Architecture symbols
#
@@ -21,10 +19,9 @@ osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
config ARC
bool
select ARCH_IS_SET
select HAS_DTS
imply XIP
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_SUPPORTS_ROM_START
select ARCH_HAS_DIRECTED_IPIS
help
ARC architecture
@@ -32,13 +29,11 @@ config ARM
bool
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_THREADS if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if CPU_CORTEX_M
select HAS_DTS
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
select ARCH_HAS_THREAD_LOCAL_STORAGE if CPU_AARCH32_CORTEX_R || CPU_CORTEX_M || CPU_AARCH32_CORTEX_A
select BARRIER_OPERATIONS_ARCH
help
ARM architecture
@@ -46,18 +41,12 @@ config ARM64
bool
select ARCH_IS_SET
select 64BIT
select ARCH_SUPPORTS_COREDUMP
select HAS_DTS
select HAS_ARM_SMCCC
select ARCH_HAS_THREAD_LOCAL_STORAGE
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select BARRIER_OPERATIONS_ARCH
select ARCH_HAS_DIRECTED_IPIS
select ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_MAPPING
select ARCH_SUPPORTS_EVICTION_TRACKING
select EVICTION_TRACKING if DEMAND_PAGING
select MEM_DOMAIN_HAS_THREAD_LIST if ARM_MPU
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
help
ARM64 (AArch64) architecture
@@ -65,12 +54,14 @@ config MIPS
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
help
MIPS architecture
config SPARC
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select BIG_ENDIAN
@@ -85,88 +76,70 @@ config X86
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !X86_64
select CPU_HAS_MMU
select ARCH_MEM_DOMAIN_DATA if USERSPACE && !X86_COMMON_PAGE_TABLE
select ARCH_MEM_DOMAIN_SYNCHRONOUS_API if USERSPACE
select ARCH_HAS_GDBSTUB if !X86_64
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_DEMAND_PAGING if !X86_64
select ARCH_HAS_DEMAND_MAPPING if ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_PAGING
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select NEED_LIBC_MEM_PARTITION if USERSPACE && TIMING_FUNCTIONS \
&& !BOARD_HAS_TIMING_FUNCTIONS \
&& !SOC_HAS_TIMING_FUNCTIONS
select ARCH_HAS_STACK_CANARIES_TLS
select ARCH_SUPPORTS_MEM_MAPPED_STACKS if X86_MMU && !DEMAND_PAGING
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
help
x86 architecture
config NIOS2
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
imply XIP
select ARCH_HAS_TIMING_FUNCTIONS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C if !RISCV_ISA_EXT_A
select ATOMIC_OPERATIONS_BUILTIN if RISCV_ISA_EXT_A
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !SOC_FAMILY_ESPRESSIF_ESP32
select ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_THREAD_LOCAL_STORAGE
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select USE_SWITCH_SUPPORTED
select USE_SWITCH
select SCHED_IPI_SUPPORTED if SMP
select ARCH_HAS_DIRECTED_IPIS
select BARRIER_OPERATIONS_BUILTIN
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
imply XIP
help
RISCV architecture
config XTENSA
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_MEM_DOMAIN_DATA if USERSPACE
select ARCH_HAS_DIRECTED_IPIS
select THREAD_STACK_INFO
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if !SMP
select ARCH_HAS_USERSPACE if XTENSA_MMU || XTENSA_MPU
imply ARCH_HAS_RESERVED_PAGE_FRAMES if XTENSA_MMU
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
imply ATOMIC_OPERATIONS_ARCH
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
select HAS_DTS
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
select ARCH_HAS_THREAD_ABORT
select ARCH_HAS_THREAD_NAME_HOOK
select NATIVE_BUILD
select NATIVE_APPLICATION
select HAS_COVERAGE_SUPPORT
select BARRIER_OPERATIONS_BUILTIN
# POSIX arch based targets get their memory cleared on entry by the host OS
select SKIP_BSS_CLEAR
help
POSIX (native) architecture
config RX
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
Renesas RX architecture
config ARCH_IS_SET
bool
help
@@ -175,12 +148,16 @@ config ARCH_IS_SET
menu "General Architecture Options"
source "arch/common/Kconfig"
source "$(ARCH_DIR)/common/Kconfig"
module = ARCH
module-str = arch
source "subsys/logging/Kconfig.template.log_config"
module = MPU
module-str = mpu
source "subsys/logging/Kconfig.template.log_config"
config BIG_ENDIAN
bool
help
@@ -188,17 +165,8 @@ config BIG_ENDIAN
Little-endian architecture is the default and should leave this option
unselected. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally avoid
modifying it. The option is used to select linker script OUTPUT_FORMAT,
the toolchain flags (TOOLCHAIN_C_FLAGS, TOOLCHAIN_LD_FLAGS), and command
line option for gen_isr_tables.py.
config LITTLE_ENDIAN
# Hidden Kconfig option representing the default little-endian architecture
# This is just the opposite of BIG_ENDIAN and is used for non-negative
# conditional compilation
bool
depends on !BIG_ENDIAN
default y
modifying it. The option is used to select linker script OUTPUT_FORMAT
and command line option for gen_isr_tables.py.
config 64BIT
bool
@@ -224,21 +192,11 @@ config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
The SRAM base address. The default value comes from
The SRAM base address. The default value comes from from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
config XIP
bool "Execute in place"
help
This option allows zephyr to operate with its text and read-only
sections residing in ROM (or similar read-only memory). Not all boards
support this option so it must be used with care; you must also
supply a linker command file when building your image. Enabling this
option increases both the code and data footprint of the image.
if ARC || ARM || ARM64 || X86 || RISCV || RX || ARCH_POSIX
if ARC || ARM || ARM64 || NIOS2 || X86
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
@@ -246,7 +204,6 @@ DT_CHOSEN_Z_FLASH := zephyr,flash
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && (ARM ||ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
@@ -255,13 +212,12 @@ config FLASH_SIZE
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && (ARM || ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARM64 || ARC || X86 || RISCV || RX || ARCH_POSIX
endif # ARM || ARM64 || ARC || NIOS2 || X86
if ARCH_HAS_TRUSTED_EXECUTION
@@ -311,7 +267,6 @@ config USERSPACE
depends on RUNTIME_ERROR_CHECKS
depends on SRAM_REGION_PERMISSIONS
select THREAD_STACK_INFO
select LINKER_USE_NO_RELAX
help
When enabled, threads may be created or dropped down to user mode,
which has significantly restricted permissions and must interact
@@ -324,21 +279,10 @@ config USERSPACE
privileged mode or handling a system call; to ensure these are always
caught, enable CONFIG_HW_STACK_PROTECTION.
config NOINIT_SNIPPET_FIRST
bool "Place the no-init linker script snippet first"
help
By default the include/zephyr/linker/common-noinit.ld file inserts the
snippets-noinit.ld file at the end of the section. There are times when
the directives in the snippets-noinit.ld file apply to the other directives
in this file. And in that case the include statement for the snippets-noinit.ld
file needs to come at the start of the section. This configuration option
allows that to happen.
config PRIVILEGED_STACK_SIZE
int "Size of privileged stack"
default 2048 if EMUL
default 1024
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
@@ -348,19 +292,18 @@ config PRIVILEGED_STACK_SIZE
config KOBJECT_TEXT_AREA
int "Size of kobject text area"
default 1024 if UBSAN
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 512 if STACK_CANARIES && RISCV
default 256
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
int "Reserve extra kobject data area (in percentage)"
default 100
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Multiplication factor used to calculate the size of placeholder to
reserve space for kobject metadata hash table. The hash table is
@@ -374,7 +317,7 @@ config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
config KOBJECT_RODATA_AREA_EXTRA_BYTES
int "Reserve extra bytes for kobject rodata area"
default 16
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Reserve a few more bytes for the RODATA region for kobject metadata.
This is to account for the uncertainty of tables generated by gperf.
@@ -436,83 +379,15 @@ config NOCACHE_MEMORY
transfers when cache coherence issues are not optimal or can not
be solved using cache maintenance operations.
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
select OVERRIDE_FRAME_POINTER_DEFAULT
help
Select Y here to gain precise stack traces at the expense of slightly
increased size and decreased speed.
config ARCH_STACKWALK
bool "Compile the stack walking function"
default y
depends on ARCH_HAS_STACKWALK
help
Select Y here to compile the `arch_stack_walk()` function
config ARCH_STACKWALK_MAX_FRAMES
int "Max depth for stack walk function"
default 8
depends on ARCH_STACKWALK
help
Depending on implementation, this can place a hard limit on the depths of the stack
for the stack walk function to examine.
menu "Interrupt Configuration"
config TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
bool
help
Hidden option to signal that toolchain supports local declaration of
interrupt tables.
config ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
bool
default y
# Userspace is currently not supported
depends on !USERSPACE
# List of currently supported architectures
depends on ARM || ARM64 || RISCV
# List of currently supported toolchains
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "gnuarmemb" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm" || TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
config ISR_TABLES_LOCAL_DECLARATION
bool "ISR tables created locally and placed by linker"
depends on ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
help
Enable new scheme of interrupt tables generation.
This is totally different generator that would create tables entries locally
where the IRQ_CONNECT macro is called and then use the linker script to position it
in the right place in memory.
The most important advantage of such approach is that the generated interrupt tables
are LTO compatible.
The drawback is that the support on the architecture port is required.
config DYNAMIC_INTERRUPTS
bool "Installation of IRQs at runtime"
select SRAM_SW_ISR_TABLE
help
Enable installation of interrupts at runtime, which will move some
interrupt-related data structures to RAM instead of ROM, and
on some architectures increase code size.
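
For context, a minimal sketch (not part of this diff) of installing an interrupt at runtime when this option is enabled; irq_connect_dynamic() and irq_enable() are existing Zephyr APIs, while the IRQ number, priority, and flags below are placeholders:

#include <zephyr/kernel.h>
#include <zephyr/irq.h>

#define MY_IRQ       27   /* hypothetical IRQ line */
#define MY_IRQ_PRIO  2    /* hypothetical priority */

static void my_isr(const void *arg)
{
	ARG_UNUSED(arg);
	/* handle the interrupt */
}

void install_my_irq(void)
{
	/* Requires CONFIG_DYNAMIC_INTERRUPTS=y */
	irq_connect_dynamic(MY_IRQ, MY_IRQ_PRIO, my_isr, NULL, 0);
	irq_enable(MY_IRQ);
}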
config SHARED_INTERRUPTS
bool "Set this to enable support for shared interrupts"
depends on GEN_SW_ISR_TABLE
select EXPERIMENTAL
help
Set this to enable support for shared interrupts. Use this with
caution as enabling this will increase the image size by a
non-negligible amount.
config SHARED_IRQ_MAX_NUM_CLIENTS
int "Maximum number of clients allowed per shared interrupt"
default 2
depends on SHARED_INTERRUPTS
help
This option controls the maximum number of clients allowed
per shared interrupt. Set this according to your needs.
config GEN_ISR_TABLES
bool "Use generated IRQ tables"
help
@@ -532,41 +407,6 @@ config GEN_IRQ_VECTOR_TABLE
indexed by IRQ line. In the latter case, the vector table must be
supplied by the application or architecture code.
config ARCH_IRQ_VECTOR_TABLE_ALIGN
int "Alignment size of the interrupt vector table"
default 4
depends on GEN_IRQ_VECTOR_TABLE
help
This option controls alignment size of generated
_irq_vector_table. Some architecture needs an IRQ vector table
to be aligned to architecture specific size. The default
size is 0 for no alignment.
config ARCH_DEVICE_STATE_ALIGN
int "Alignment size of device state"
default 4
help
This option controls alignment size of device state.
choice IRQ_VECTOR_TABLE_TYPE
prompt "IRQ vector table type"
depends on GEN_IRQ_VECTOR_TABLE
default IRQ_VECTOR_TABLE_JUMP_BY_CODE if (RISCV && !RISCV_HAS_CLIC)
default IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS
config IRQ_VECTOR_TABLE_JUMP_BY_ADDRESS
bool "Jump by address"
help
The IRQ vector table contains the address of the interrupt handler.
config IRQ_VECTOR_TABLE_JUMP_BY_CODE
bool "Jump by code"
help
The IRQ vector table contains the opcode of a jump instruction to the
interrupt handler address.
endchoice
config GEN_SW_ISR_TABLE
bool "Generate a software ISR table"
default y
@@ -579,14 +419,13 @@ config GEN_SW_ISR_TABLE
config ARCH_SW_ISR_TABLE_ALIGN
int "Alignment size of a software ISR table"
default 64 if RISCV_HAS_CLIC
default 4
default 0
depends on GEN_SW_ISR_TABLE
help
This option controls alignment size of generated
_sw_isr_table. Some architecture needs a software ISR table
to be aligned to architecture specific size. The default
size is 4.
size is 0 for no alignment.
config GEN_IRQ_START_VECTOR
int
@@ -608,40 +447,14 @@ config IRQ_OFFLOAD
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
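
For context, a minimal sketch (not part of this diff) of how a test might use irq_offload() to run a routine synchronously in interrupt context; the routine and assertion are illustrative only:

#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>
#include <zephyr/sys/__assert.h>

static volatile bool ran_in_irq;

static void offload_fn(const void *param)
{
	ARG_UNUSED(param);
	ran_in_irq = k_is_in_isr();   /* true: we are executing in IRQ context */
}

void check_irq_context(void)
{
	/* Requires CONFIG_IRQ_OFFLOAD=y; offload_fn runs before this returns. */
	irq_offload(offload_fn, NULL);
	__ASSERT(ran_in_irq, "offload routine did not run in IRQ context");
}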
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
depends on ARCH_HAS_VECTOR_TABLE_RELOCATION
depends on XIP
depends on !ROMSTART_RELOCATION_ROM
help
When XiP is enabled, this option will result in the vector table being
relocated from Flash to SRAM.
config SRAM_SW_ISR_TABLE
bool "Place the software ISR table in SRAM instead of flash"
help
The option specifies that the software interrupts vector table will be
placed inside SRAM instead of the flash.
config IRQ_OFFLOAD_NESTED
bool "irq_offload() supports nested IRQs"
depends on IRQ_OFFLOAD
default y if ARM64 || X86 || RISCV || XTENSA
help
When set by the platform layers, indicates that
irq_offload() may legally be called in interrupt context to
cause a synchronous nested interrupt on the current CPU.
Not all hardware is capable.
config EXCEPTION_DEBUG
bool "Unhandled exception debugging"
default y
depends on PRINTK || LOG
help
Install handlers for various CPU exception/trap vectors to
make debugging them easier, at a small expense in code size.
This prints out the specific exception vector and any associated
error codes.
When set by the arch layer, indicates that irq_offload() may
legally be called in interrupt context to cause a
synchronous nested interrupt on the current CPU. Not all
hardware is capable.
config EXTRA_EXCEPTION_INFO
bool "Collect extra exception info"
@@ -651,26 +464,6 @@ config EXTRA_EXCEPTION_INFO
register state, when a fault occurs. This information can be useful
to collect for post-mortem analysis and debug of issues.
config SIMPLIFIED_EXCEPTION_CODES
bool "Convert arch specific exception codes to K_ERR_CPU_EXCEPTION"
default y if ZTEST
help
The same piece of faulty code (NULL dereference, etc) can result in
a multitude of potential exception codes at the CPU level, depending
upon whether addresses exist, an MPU is configured, the particular
implementation of the CPU or any number of other reasons. Enabling
this option collapses all the architecture specific exception codes
down to the generic K_ERR_CPU_EXCEPTION, which makes testing code
much more portable.
config EMPTY_IRQ_SPURIOUS
bool "Create empty spurious interrupt handler"
depends on ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
help
This option changes body of spurious interrupt handler. When enabled,
handler contains only an infinite while loop, when disabled, handler
contains the whole Zephyr fault handling procedure.
endmenu # Interrupt configuration
config INIT_ARCH_HW_AT_BOOT
@@ -721,69 +514,31 @@ config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_VECTOR_TABLE_RELOCATION
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
config ARCH_SUPPORTS_COREDUMP
bool
config ARCH_SUPPORTS_COREDUMP_THREADS
bool
config ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
bool
config ARCH_SUPPORTS_COREDUMP_STACK_PTR
bool
config ARCH_SUPPORTS_ARCH_HW_INIT
bool
config ARCH_SUPPORTS_ROM_START
bool
config ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
bool
config ARCH_SUPPORTS_EVICTION_TRACKING
bool
help
Architecture code supports page tracking for eviction algorithms
when demand paging is enabled.
config ARCH_HAS_EXTRA_EXCEPTION_INFO
bool
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_COHERENCE
bool
help
When selected, the architecture supports the
arch_mem_coherent() API and can link into incoherent/cached
memory using the ".cached" linker section.
config ARCH_HAS_THREAD_LOCAL_STORAGE
bool
config ARCH_HAS_SUSPEND_TO_RAM
bool
help
When selected, the architecture supports suspend-to-RAM (S2RAM).
config ARCH_HAS_STACK_CANARIES_TLS
bool
config ARCH_SUPPORTS_MEM_MAPPED_STACKS
bool
help
Select when the architecture supports memory mapped stacks.
config ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET
bool
help
Select when the architecture implements arch_thread_priv_stack_space_get().
config ARCH_HAS_HW_SHADOW_STACK
bool
#
# Other architecture related options
#
@@ -791,12 +546,6 @@ config ARCH_HAS_HW_SHADOW_STACK
config ARCH_HAS_THREAD_ABORT
bool
config ARCH_HAS_CODE_DATA_RELOCATION
bool
help
When selected, the architecture/SoC implements support for
CODE_DATA_RELOCATION in its linker scripts.
#
# Hidden CPU family configs
#
@@ -811,7 +560,7 @@ config CPU_HAS_TEE
config CPU_HAS_DCLS
bool
help
This option is enabled when the processor hardware has support for
This option is enabled when the processor hardware is configured in
Dual-redundant Core Lock-step (DCLS) topology.
config CPU_HAS_FPU
@@ -820,11 +569,6 @@ config CPU_HAS_FPU
This option is enabled when the CPU has hardware floating point
unit.
config CPU_HAS_DSP
bool
help
This option is enabled when the CPU has hardware DSP unit.
config CPU_HAS_FPU_DOUBLE_PRECISION
bool
select CPU_HAS_FPU
@@ -849,13 +593,6 @@ config ARCH_HAS_DEMAND_PAGING
This hidden configuration should be selected by the architecture if
demand paging is supported.
config ARCH_HAS_DEMAND_MAPPING
bool
help
This hidden configuration should be selected by the architecture if
demand paging is supported and arch_mem_map() supports
K_MEM_MAP_UNPAGED.
config ARCH_HAS_RESERVED_PAGE_FRAMES
bool
help
@@ -864,30 +601,6 @@ config ARCH_HAS_RESERVED_PAGE_FRAMES
memory mappings. The architecture will need to implement
arch_reserved_pages_update().
config ARCH_HAS_DIRECTED_IPIS
bool
help
This hidden configuration should be selected by the architecture if
it has an implementation for arch_sched_directed_ipi() which allows
for IPIs to be directed to specific CPUs.
config CPU_HAS_DCACHE
bool
help
This hidden configuration should be selected when the CPU has a d-cache.
config CPU_CACHE_INCOHERENT
bool
help
This hidden configuration should be selected when the CPU has
incoherent cache. This applies to intra-CPU multiprocessing
incoherence and makes only sense when MP_MAX_NUM_CPUS > 1.
config CPU_HAS_ICACHE
bool
help
This hidden configuration should be selected when the CPU has an i-cache.
config ARCH_MAPS_ALL_RAM
bool
help
@@ -906,17 +619,7 @@ config ARCH_MAPS_ALL_RAM
virtual addresses elsewhere, this is limited to only management of the
virtual address space. The kernel's page frame ontology will not consider
this mapping at all; non-kernel pages will be considered free (unless marked
as reserved) and K_MEM_PAGE_FRAME_MAPPED will not be set.
config DCLS
bool "Processor is configured in DCLS mode"
depends on CPU_HAS_DCLS
default y
help
This option is enabled when the processor hardware is configured in
Dual-redundant Core Lock-step (DCLS) topology. For the processor that
supports DCLS, but is configured in split-lock mode (by default or
changed at flash time), this option should be disabled.
as reserved) and Z_PAGE_FRAME_MAPPED will not be set.
menuconfig MPU
bool "MPU features"
@@ -926,10 +629,6 @@ menuconfig MPU
is enabled.
if MPU
module = MPU
module-str = mpu
source "subsys/logging/Kconfig.template.log_config"
config MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT
bool
help
@@ -1001,26 +700,6 @@ config SRAM_REGION_PERMISSIONS
paging, do not need memory protection, and would rather not use up
RAM for the alignment between regions.
config CODE_DATA_RELOCATION
bool "Support code/data section relocation"
depends on ARCH_HAS_CODE_DATA_RELOCATION
help
Enable support for relocating .text, data and .bss sections from specified
files and placing them in a chosen memory region. Files to relocate and
the target regions should be specified in CMakeLists.txt using
zephyr_code_relocate().
menu "DSP Options"
config DSP_SHARING
bool "DSP register sharing"
depends on CPU_HAS_DSP
help
This option enables preservation of the hardware DSP registers
across context switches to allow multiple threads to perform concurrent
DSP operations.
endmenu
menu "Floating Point Options"
config FPU
@@ -1065,55 +744,20 @@ endmenu
menu "Cache Options"
config DCACHE
bool "Data cache (d-cache) support"
depends on CPU_HAS_DCACHE
bool "Data cache support"
default y
help
This option enables the support for the data cache (d-cache).
config ICACHE
bool "Instruction cache (i-cache) support"
depends on CPU_HAS_ICACHE
default y
help
This option enables the support for the instruction cache (i-cache).
config CACHE_HAS_MIRRORED_MEMORY_REGIONS
bool "Mirrored memory region(s) for both cached and uncached access"
depends on CPU_CACHE_INCOHERENT
help
Enable this if hardware has mirrored memory regions at different
addressed when accessing one would go through cache, but accessing
the other would go to memory directly. A pointer can be cheaply
converted to cached or uncached access.
This applies to intra-CPU multiprocessing incoherence and makes only
sense when MP_MAX_NUM_CPUS > 1.
config CACHE_DOUBLEMAP
bool "Cache double-mapping support"
select CACHE_HAS_MIRRORED_MEMORY_REGIONS
select DEPRECATED
help
Double-mapping behavior where a pointer can be cheaply converted to
point to the same cached/uncached memory at different locations.
This applies to intra-CPU multiprocessing incoherence and makes only
sense when MP_MAX_NUM_CPUS > 1.
This option enables data cache (d-cache).
config CACHE_MANAGEMENT
bool "Cache management features"
depends on DCACHE || ICACHE
help
This option enables the cache management functions backed by arch or
driver code.
if CACHE_MANAGEMENT
if DCACHE
This links in the cache management functions (for d-cache and i-cache
where possible).
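
For context, a minimal sketch (not part of this diff) of the kind of d-cache maintenance these options back, for example around a DMA transfer; sys_cache_data_flush_range() and sys_cache_data_invd_range() are the Zephyr cache-management calls, while the buffer, its size, and its alignment are placeholders:

#include <zephyr/kernel.h>
#include <zephyr/cache.h>

static uint8_t dma_buf[256] __aligned(32);   /* alignment chosen for illustration */

void prepare_dma_buffer(void)
{
	/* After writing data to dma_buf, flush it so the device sees it. */
	sys_cache_data_flush_range(dma_buf, sizeof(dma_buf));
}

void consume_dma_buffer(void)
{
	/* Invalidate before reading data the device wrote behind the cache. */
	sys_cache_data_invd_range(dma_buf, sizeof(dma_buf));
}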
config DCACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
depends on CACHE_MANAGEMENT
help
This option enables querying some architecture-specific hardware for
finding the d-cache line size at the expense of taking more memory and
@@ -1124,22 +768,19 @@ config DCACHE_LINE_SIZE_DETECT
using the 'd-cache-line-size' property.
config DCACHE_LINE_SIZE
int "d-cache line size"
depends on !DCACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,d-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,d-cache-line-size)
int "d-cache line size" if !DCACHE_LINE_SIZE_DETECT
depends on CACHE_MANAGEMENT
default 0
help
Size in bytes of a CPU d-cache line.
Size in bytes of a CPU d-cache line. If this is set to 0 the value is
obtained from the 'd-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting DCACHE_LINE_SIZE_DETECT.
endif # DCACHE
if ICACHE
config ICACHE_LINE_SIZE_DETECT
bool "Detect i-cache line size at runtime"
depends on CACHE_MANAGEMENT
help
This option enables querying some architecture-specific hardware for
finding the i-cache line size at the expense of taking more memory and
@@ -1150,52 +791,32 @@ config ICACHE_LINE_SIZE_DETECT
using the 'i-cache-line-size' property.
config ICACHE_LINE_SIZE
int "i-cache line size"
depends on !ICACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,i-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,i-cache-line-size)
int "i-cache line size" if !ICACHE_LINE_SIZE_DETECT
depends on CACHE_MANAGEMENT
default 0
help
Size in bytes of a CPU i-cache line.
Size in bytes of a CPU i-cache line. If this is set to 0 the value is
obtained from the 'i-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting ICACHE_LINE_SIZE_DETECT.
endif # ICACHE
choice CACHE_TYPE
prompt "Cache type"
default ARCH_CACHE
depends on CACHE_MANAGEMENT
default HAS_ARCH_CACHE
config ARCH_CACHE
config HAS_ARCH_CACHE
bool "Integrated cache controller"
help
Integrated on-core cache controller
"Integrated on-core cache controller"
config SOC_CACHE
bool "SoC specific cache controller"
depends on SOC_HAS_CACHE_FUNCTIONS
help
SoC specific cache controller.
This requires soc_cache.h file to exist in search path.
config EXTERNAL_CACHE
config HAS_EXTERNAL_CACHE
bool "External cache controller"
help
External cache controller
"External cache controller or cache management system"
endchoice
config CACHE_CAN_SAY_MEM_COHERENCE
bool
help
sys_cache_is_mem_coherent() is defined when enabled. This function can be
used to determine if a pointer lies inside "coherence regions" and can be
safely used in multiprocessor code without explicit flush or invalidate
operations.
endif # CACHE_MANAGEMENT
endmenu
config ARCH
@@ -1203,46 +824,29 @@ config ARCH
help
System architecture string.
config SOC
string
help
SoC name which can be found under soc/<arch>/<soc name>.
This option holds the directory name used by the build system to locate
the correct linker and header files for the SoC.
config SOC_SERIES
string
help
SoC series name which can be found under soc/<arch>/<family>/<series>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config SOC_FAMILY
string
help
SoC family name which can be found under soc/<arch>/<family>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config TOOLCHAIN_HAS_BUILTIN_FFS
bool
default y if !(64BIT && RISCV)
help
Hidden option to signal that toolchain has __builtin_ffs*().
config ARCH_HAS_CUSTOM_CPU_IDLE
bool
help
This options allows applications to override the default arch idle implementation with
a custom one.
config ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
bool
help
This options allows applications to override the default arch idle implementation with
a custom one.
config ARCH_HAS_CUSTOM_SWAP_TO_MAIN
bool
help
It's possible that an architecture port cannot use z_swap_unlocked()
to swap to the main thread (bg_thread_main), but instead must do
something custom. It must enable this option in that case.
config ARCH_HAS_CUSTOM_BUSY_WAIT
bool
help
It's possible that an architecture port cannot or does not want to use
the provided k_busy_wait(), but instead must do something custom. It must
enable this option in that case.
config ARCH_HAS_CUSTOM_CURRENT_IMPL
bool
help
Select when architecture implements arch_current_thread() &
arch_current_thread_set().
config ARCH_IPI_LAZY_COPROCESSORS_SAVE
bool
help
Select when the architecture has multi-CPU lazy context switching
of coprocessor registers.

View File

@@ -12,16 +12,11 @@ zephyr_cc_option(-fno-delete-null-pointer-checks)
zephyr_cc_option_ifdef(CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS -munaligned-access)
if(NOT COMPILER STREQUAL arcmwdt)
if(CONFIG_THREAD_LOCAL_STORAGE)
# Instruct compiler to use proper register as cached thread pointer for thread local storage.
# For ARCv2 the default register is usually not specified - so we need to specify it
# For ARCv3 the register is fixed to r30, so we don't need to specify it
zephyr_compile_options_ifdef(CONFIG_ISA_ARCV2 -mtp-regno=26)
else()
# If thread local storage isn't used - we can safely schedule thread pointer register
zephyr_compile_options_ifdef(CONFIG_ISA_ARCV2 -mtp-regno=none)
endif()
if(CONFIG_ISA_ARCV2)
# Instruct compiler to use register R26 as thread pointer
# for thread local storage.
# For ARCv3 the register is fixed to r30, so we don't need to specify it
zephyr_cc_option_ifdef(CONFIG_THREAD_LOCAL_STORAGE -mtp-regno=26)
endif()
add_subdirectory(core)
@@ -39,7 +34,7 @@ if(COMPILER STREQUAL arcmwdt)
endif()
if(CONFIG_ISA_ARCV3 AND CONFIG_64BIT)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littlearc64)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littlearc)
elseif(CONFIG_ISA_ARCV3 AND NOT CONFIG_64BIT)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearc64)
else()

View File

@@ -9,6 +9,7 @@ menu "ARC Options"
config ARCH
default "arc"
config CPU_ARCEM
bool
select ATOMIC_OPERATIONS_C
@@ -18,7 +19,6 @@ config CPU_ARCEM
config CPU_ARCHS
bool
select ATOMIC_OPERATIONS_BUILTIN
select BARRIER_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
@@ -31,7 +31,6 @@ config ISA_ARCV2
bool "ARC ISA v2"
select ARCH_HAS_STACK_PROTECTION if ARC_HAS_STACK_CHECKING || (ARC_MPU && ARC_MPU_VER !=2)
select ARCH_HAS_USERSPACE if ARC_MPU
select ARCH_HAS_SINGLE_THREAD_SUPPORT if !SMP
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
@@ -39,7 +38,6 @@ config ISA_ARCV2
config ISA_ARCV3
bool "ARC ISA v3"
select ARCH_HAS_SINGLE_THREAD_SUPPORT if !SMP
select USE_SWITCH
select USE_SWITCH_SUPPORTED
@@ -76,26 +74,14 @@ config CPU_EM4_FPUDA
config CPU_EM6
bool
select CPU_ARCEM
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC EM6 CPU
config CPU_HS3X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS3x CPU
config CPU_HS4X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an HS4X CPU
If y, the SoC uses an ARC HS3x or HS4x CPU
endif #ISA_ARCV2
@@ -104,8 +90,6 @@ if ISA_ARCV3
config CPU_HS5X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS6x CPU
@@ -113,8 +97,6 @@ config CPU_HS6X
bool
select CPU_ARCHS
select 64BIT
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS6x CPU
@@ -179,7 +161,6 @@ config ARC_FIRQ
bool "FIRQ enable"
depends on ISA_ARCV2
depends on NUM_IRQ_PRIO_LEVELS > 1
depends on !ARC_HAS_SECURE
default y
help
Fast interrupts are supported (FIRQ). If FIRQ enabled, for interrupts
@@ -253,17 +234,20 @@ config ARC_USE_UNALIGNED_MEM_ACCESS
to support unaligned memory access which is then disabled by default.
Enable unaligned access in hardware and make software to use it.
config ARC_CURRENT_THREAD_USE_NO_TLS
bool
select CURRENT_THREAD_USE_NO_TLS
default y if (RGF_NUM_BANKS > 1) || ("$(ZEPHYR_TOOLCHAIN_VARIANT)" = "arcmwdt")
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
help
Disable current Thread Local Storage for ARC. For cores with more than one
register bank (RGF_NUM_BANKS > 1) the parameter is disabled by default because bank
synchronization requires significant time and slows down performance.
ARCMWDT handles the TLS pointer in a different way than GCC. Optimized access to the
TLS pointer via the _current symbol does not provide significant advantages
in the case of MetaWare.
Different levels for display information when a fault occurs.
2: The default. Display specific and verbose information. Consumes
the most memory (long strings).
1: Display general and short information. Consumes less memory
(short strings).
0: Off.
config GEN_ISR_TABLES
default y
@@ -274,7 +258,7 @@ config GEN_IRQ_START_VECTOR
config HARVARD
bool "Harvard Architecture"
help
The ARC CPU can be configured to have two buses;
The ARC CPU can be configured to have two busses;
one for instruction fetching and another that serves as a data bus.
config CODE_DENSITY
@@ -283,8 +267,8 @@ config CODE_DENSITY
Enable code density option to get better code density
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6 and/or DSP)"
default y if CPU_HS3X || CPU_HS4X || CPU_HS5X || CPU_HS6X
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y if CPU_HS3X
help
Depending on the configuration, CPU can contain accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPR so
@@ -343,17 +327,6 @@ config ARC_NORMAL_FIRMWARE
resources of the ARC processors, and, therefore, it shall avoid
accessing them.
config ARC_VPX_COOPERATIVE_SHARING
bool "Cooperative sharing of ARC VPX vector registers"
select SCHED_CPU_MASK if MP_MAX_NUM_CPUS > 1
help
This option enables the cooperative sharing of the ARC VPX vector
registers. Threads that want to use those registers must successfully
call arc_vpx_lock() before using them, and call arc_vpx_unlock()
when done using them.
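
For context, a minimal sketch (not part of this diff) of the cooperative locking pattern this help text describes; the header location and the exact arc_vpx_lock() signature (taking a k_timeout_t and returning 0 on success) are assumptions here, not confirmed by this diff:

#include <zephyr/kernel.h>
#include <zephyr/arch/arc/v2/vpx_lock.h>   /* assumed header location */

void do_vector_work(void)
{
	/* Gain exclusive, cooperative access to the VPX vector registers. */
	if (arc_vpx_lock(K_MSEC(100)) != 0) {    /* signature assumed */
		return;                          /* could not acquire the registers */
	}

	/* ... use the VPX vector registers here ... */

	arc_vpx_unlock();
}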
source "arch/arc/core/dsp/Kconfig"
menu "ARC MPU Options"
depends on CPU_HAS_MPU
@@ -370,28 +343,6 @@ endmenu
config DCACHE_LINE_SIZE
default 32
config ARC_DCACHE_REGION_OPERATIONS
bool "DCACHE region operations"
depends on CACHE_MANAGEMENT && DCACHE
default n
help
Perform L1 data cache management operations by regions rather than line by line in a loop,
improves performance of cache management operations.
config ARC_SLC
bool "System level cache"
depends on CACHE_MANAGEMENT && DCACHE && (CPU_HS4X || CPU_HS3X)
default n
help
This option enables System Level Cache, and adds SLC support to the data cache management operations.
config ARC_SLC_LINE_SIZE
int "SLC line size"
depends on ARC_SLC
default 128
help
Size in bytes of a CPU system level cache line.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768 if !64BIT
@@ -404,17 +355,16 @@ config ARC_EXCEPTION_STACK_SIZE
endmenu
config ARC_EARLY_SOC_INIT
bool "Make early stage SoC-specific initialization"
config ARC_EXCEPTION_DEBUG
bool "Unhandled exception debugging information"
default n
depends on PRINTK || LOG
help
Call SoC per-core setup code on early stage initialization
(before C runtime initialization). Setup code is called in form of
soc_early_asm_init_percpu assembler macro.
Print human-readable information about exception vectors, cause codes,
and parameters, at a cost of code/data size for the human-readable
strings.
# ARC vector table must be aligned to 1KiB boundary, and will be at the
# start of the ROM region.
config ROM_START_OFFSET
default 0x400 if BOOTLOADER_MCUBOOT
endmenu
config MAIN_STACK_SIZE
default 4096 if 64BIT
@@ -442,5 +392,3 @@ config CMSIS_V2_THREAD_MAX_STACK_SIZE
config CMSIS_V2_THREAD_DYNAMIC_STACK_SIZE
default 2048 if 64BIT
endmenu

View File

@@ -1,5 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_ARCMWDT_LIBC OR CONFIG_CPP)
if(CONFIG_ARCMWDT_LIBC OR CONFIG_CPLUSPLUS)
zephyr_sources(arcmwdt-dtr-stubs.c)
endif()

View File

@@ -8,6 +8,14 @@
__weak void *__dso_handle;
int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
{
ARG_UNUSED(destructor);
ARG_UNUSED(objptr);
ARG_UNUSED(dso);
return 0;
}
int atexit(void (*function)(void))
{
return 0;

View File

@@ -19,14 +19,14 @@ zephyr_library_sources(
vector_table.c
)
zephyr_library_sources_ifdef(CONFIG_ARCH_CACHE cache.c)
zephyr_library_sources_ifdef(CONFIG_CACHE_MANAGEMENT cache.c)
zephyr_library_sources_ifdef(CONFIG_ARC_FIRQ fast_irq.S)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT smp.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
@@ -34,5 +34,3 @@ add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_LLEXT elf.c)

View File

@@ -20,7 +20,7 @@ static struct k_spinlock arc_connect_spinlock;
/* Generate an inter-core interrupt to the target core */
void z_arc_connect_ici_generate(uint32_t core)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_IRQ, core);
}
}
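
One side of these hunks uses the K_SPINLOCK() scoped-locking macro from <zephyr/spinlock.h>, the other the older file-local LOCKED() helper. A minimal sketch (not part of this diff) of the scoped form next to the equivalent explicit k_spin_lock()/k_spin_unlock() calls:

#include <zephyr/spinlock.h>

static struct k_spinlock my_lock;
static unsigned int counter;

void increment_counter(void)
{
	/* Scoped form: the lock is released when the block is left. */
	K_SPINLOCK(&my_lock) {
		counter++;
	}

	/* Equivalent explicit form. */
	k_spinlock_key_t key = k_spin_lock(&my_lock);

	counter++;
	k_spin_unlock(&my_lock, key);
}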
@@ -28,7 +28,7 @@ void z_arc_connect_ici_generate(uint32_t core)
/* Acknowledge the inter-core interrupt raised by core */
void z_arc_connect_ici_ack(uint32_t core)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_ACK, core);
}
}
@@ -38,7 +38,7 @@ uint32_t z_arc_connect_ici_read_status(uint32_t core)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_READ_STATUS, core);
ret = z_arc_connect_cmd_readback();
}
@@ -51,7 +51,7 @@ uint32_t z_arc_connect_ici_check_src(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -64,7 +64,7 @@ void z_arc_connect_ici_clear(void)
{
uint32_t cpu, c;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
cpu = z_arc_connect_cmd_readback(); /* 1,2,4,8... */
@@ -85,7 +85,7 @@ void z_arc_connect_ici_clear(void)
/* Reset the cores in core_mask */
void z_arc_connect_debug_reset(uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RESET,
0, core_mask);
}
@@ -94,7 +94,7 @@ void z_arc_connect_debug_reset(uint32_t core_mask)
/* Halt the cores in core_mask */
void z_arc_connect_debug_halt(uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_HALT,
0, core_mask);
}
@@ -103,7 +103,7 @@ void z_arc_connect_debug_halt(uint32_t core_mask)
/* Run the cores in core_mask */
void z_arc_connect_debug_run(uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RUN,
0, core_mask);
}
@@ -112,7 +112,7 @@ void z_arc_connect_debug_run(uint32_t core_mask)
/* Set core mask */
void z_arc_connect_debug_mask_set(uint32_t core_mask, uint32_t mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_MASK,
mask, core_mask);
}
@@ -123,7 +123,7 @@ uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_READ_MASK,
0, core_mask);
ret = z_arc_connect_cmd_readback();
@@ -137,7 +137,7 @@ uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
*/
void z_arc_connect_debug_select_set(uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_SELECT,
0, core_mask);
}
@@ -148,7 +148,7 @@ uint32_t z_arc_connect_debug_select_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_SELECT, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -161,7 +161,7 @@ uint32_t z_arc_connect_debug_en_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_EN, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -174,7 +174,7 @@ uint32_t z_arc_connect_debug_cmd_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CMD, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -187,7 +187,7 @@ uint32_t z_arc_connect_debug_core_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -198,7 +198,7 @@ uint32_t z_arc_connect_debug_core_read(void)
/* Clear global free running counter */
void z_arc_connect_gfrc_clear(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_CLEAR, 0);
}
}
@@ -233,7 +233,7 @@ uint64_t z_arc_connect_gfrc_read(void)
/* Enable global free running counter */
void z_arc_connect_gfrc_enable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_ENABLE, 0);
}
}
@@ -241,7 +241,7 @@ void z_arc_connect_gfrc_enable(void)
/* Disable global free running counter */
void z_arc_connect_gfrc_disable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_DISABLE, 0);
}
}
@@ -249,7 +249,7 @@ void z_arc_connect_gfrc_disable(void)
/* Set the cores whose halt state halts the global free running counter */
void z_arc_connect_gfrc_core_set(uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_GFRC_SET_CORE,
0, core_mask);
}
@@ -260,7 +260,7 @@ uint32_t z_arc_connect_gfrc_halt_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HALT, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -273,7 +273,7 @@ uint32_t z_arc_connect_gfrc_core_read(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -284,7 +284,7 @@ uint32_t z_arc_connect_gfrc_core_read(void)
/* Enable interrupt distribution unit */
void z_arc_connect_idu_enable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ENABLE, 0);
}
}
@@ -292,7 +292,7 @@ void z_arc_connect_idu_enable(void)
/* Disable interrupt distribution unit */
void z_arc_connect_idu_disable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_DISABLE, 0);
}
}
@@ -302,7 +302,7 @@ uint32_t z_arc_connect_idu_read_enable(void)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_ENABLE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -317,7 +317,7 @@ uint32_t z_arc_connect_idu_read_enable(void)
void z_arc_connect_idu_set_mode(uint32_t irq_num,
uint16_t trigger_mode, uint16_t distri_mode)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
irq_num, (distri_mode | (trigger_mode << 4)));
}
@@ -328,7 +328,7 @@ uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MODE, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -342,7 +342,7 @@ uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
*/
void z_arc_connect_idu_set_dest(uint32_t irq_num, uint32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_DEST,
irq_num, core_mask);
}
@@ -353,7 +353,7 @@ uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_DEST, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -364,7 +364,7 @@ uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
/* Assert the specified common interrupt */
void z_arc_connect_idu_gen_cirq(uint32_t irq_num)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_GEN_CIRQ, irq_num);
}
}
@@ -372,7 +372,7 @@ void z_arc_connect_idu_gen_cirq(uint32_t irq_num)
/* Acknowledge the specified common interrupt */
void z_arc_connect_idu_ack_cirq(uint32_t irq_num)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ACK_CIRQ, irq_num);
}
}
@@ -382,7 +382,7 @@ uint32_t z_arc_connect_idu_check_status(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_STATUS, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -395,7 +395,7 @@ uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_SOURCE, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -406,7 +406,7 @@ uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
/* Mask or unmask the specified common interrupt */
void z_arc_connect_idu_set_mask(uint32_t irq_num, uint32_t mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MASK,
irq_num, mask);
}
@@ -417,7 +417,7 @@ uint32_t z_arc_connect_idu_read_mask(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MASK, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -433,7 +433,7 @@ uint32_t z_arc_connect_idu_check_first(uint32_t irq_num)
{
uint32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_FIRST, irq_num);
ret = z_arc_connect_cmd_readback();
}
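
For context on the locking pattern swapped throughout this file: Zephyr's K_SPINLOCK() macro runs the statement block that follows it with the given spinlock held and releases the lock when the block exits; the LOCKED() helper on the other side of this diff is used with the same block syntax. A minimal, self-contained sketch of the K_SPINLOCK pattern (the lock, counter and function names below are illustrative and not part of this diff):

#include <zephyr/kernel.h>
#include <zephyr/spinlock.h>

static struct k_spinlock demo_lock;	/* illustrative lock, not from this diff */
static uint32_t demo_counter;

uint32_t demo_increment(void)
{
	uint32_t snapshot = 0;

	K_SPINLOCK(&demo_lock) {
		/* body runs with demo_lock held; the lock is released on exit */
		demo_counter++;
		snapshot = demo_counter;
	}

	return snapshot;
}

Each z_arc_connect_*() helper above follows the same shape: take arc_connect_spinlock, issue a command, optionally read the result back, and drop the lock.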

arch/arc/core/arc_smp.c (new file)

@@ -0,0 +1,192 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Code required for ARC multicore and Zephyr SMP support
*
*/
#include <zephyr/device.h>
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <soc.h>
#include <zephyr/init.h>
#ifndef IRQ_ICI
#define IRQ_ICI 19
#endif
#define ARCV2_ICI_IRQ_PRIORITY 1
#define MP_PRIMARY_CPU_ID 0
volatile struct {
arch_cpustart_t fn;
void *arg;
} arc_cpu_init[CONFIG_MP_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to synchronize the master core and the slave cores.
* A slave core spins on arc_cpu_wake_flag until the master core sets it to that
* slave's core id. The slave core then clears the flag to notify the master
* core that it has woken up (an illustrative sketch of the slave-side loop
* appears after this file).
*
*/
volatile uint32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
* _curr_cpu is used to record the struct of _cpu_t of each cpu.
* for efficient usage in assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* Pass the initial stack pointer of the target core through arc_cpu_sp.
* arc_cpu_wake_flag guards arc_cpu_sp so that only one slave CPU reads
* it at a time.
*/
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave CPU to start */
while (arc_cpu_wake_flag != 0U) {
;
}
}
#ifdef CONFIG_SMP
static void arc_connect_debug_mask_update(int cpu_num)
{
uint32_t core_mask = 1 << cpu_num;
/*
* The MDB debugger may modify the debug_select and debug_mask registers on
* start, so we can't rely on the debug_select reset value.
*/
if (cpu_num != MP_PRIMARY_CPU_ID) {
core_mask |= z_arc_connect_debug_select_read();
}
z_arc_connect_debug_select_set(core_mask);
/* Debugger halts cores under all of the following conditions:
* ARC_CONNECT_CMD_DEBUG_MASK_H: Core global halt.
* ARC_CONNECT_CMD_DEBUG_MASK_AH: Actionpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_BH: Software breakpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_SH: Self halt.
*/
z_arc_connect_debug_mask_set(core_mask, (ARC_CONNECT_CMD_DEBUG_MASK_SH
| ARC_CONNECT_CMD_DEBUG_MASK_BH | ARC_CONNECT_CMD_DEBUG_MASK_AH
| ARC_CONNECT_CMD_DEBUG_MASK_H));
}
#endif
/* C entry point for slave cores */
void z_arc_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
struct arc_connect_bcr bcr;
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(cpu_num);
}
z_irq_setup();
z_arc_connect_ici_clear();
z_irq_priority_set(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY, 0);
irq_enable(IRQ_ICI);
#endif
/* call the function set by arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(const void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
uint32_t i;
/* Broadcast the sched_ipi request to all cores;
* if the target is the current core, the hardware ignores it.
*/
for (i = 0U; i < CONFIG_MP_NUM_CPUS; i++) {
z_arc_connect_ici_generate(i);
}
}
static int arc_smp_init(const struct device *dev)
{
ARG_UNUSED(dev);
struct arc_connect_bcr bcr;
/* necessary master core init */
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(MP_PRIMARY_CPU_ID);
}
if (bcr.ipi) {
/* register the ICI interrupt; only the master core needs to do this, once */
z_arc_connect_ici_clear();
IRQ_CONNECT(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY,
sched_ipi_handler, NULL, 0);
irq_enable(IRQ_ICI);
} else {
__ASSERT(0,
"ARC connect has no inter-core interrupt\n");
return -ENODEV;
}
if (bcr.gfrc) {
/* global free running counter init */
z_arc_connect_gfrc_enable();
/* when all cores halt, the GFRC halts */
z_arc_connect_gfrc_core_set((1 << CONFIG_MP_NUM_CPUS) - 1);
z_arc_connect_gfrc_clear();
} else {
__ASSERT(0,
"ARC connect has no global free running counter\n");
return -ENODEV;
}
return 0;
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif
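
The arc_cpu_wake_flag / arc_cpu_sp handshake described in the comments above has a slave-side counterpart that lives in the ARC startup assembly and is not part of this diff. Purely as an illustration of the protocol those comments describe (the helper name and details below are hypothetical, not the actual implementation), the slave side behaves roughly like this:

#include <stdint.h>

extern volatile uint32_t arc_cpu_wake_flag;
extern volatile char *arc_cpu_sp;

/* Hypothetical C rendering of the slave-side handshake; the real code is
 * assembly in the ARC reset path.
 */
void slave_wait_for_wakeup(uint32_t my_cpu_id)
{
	/* Spin until the master publishes this core's id in arc_cpu_wake_flag. */
	while (arc_cpu_wake_flag != my_cpu_id) {
		;
	}

	/* arc_cpu_sp now holds this core's initial stack pointer; consume it
	 * before clearing the flag so the master can reuse the variable for
	 * the next core.
	 */
	void *initial_sp = (void *)arc_cpu_sp;
	(void)initial_sp;	/* the real code loads this into the SP register */

	/* Clear the flag so arch_start_cpu() stops spinning on it. */
	arc_cpu_wake_flag = 0;
}

After this point the slave enters z_arc_slave_start(), the C entry point shown above.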


@@ -2,7 +2,6 @@
/*
* Copyright (c) 2016 Synopsys, Inc. All rights reserved.
* Copyright (c) 2025 GSI Technology, All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -30,34 +29,17 @@
size_t sys_cache_line_size;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100 /* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
#define DC_CTRL_INVALIDATE_MODE 0x40 /* d-cache invalidate mode bit */
#define DC_CTRL_REGION_OP 0xe00 /* d-cache region operation */
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100/* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
#define MMU_BUILD_PHYSICAL_ADDR_EXTENSION 0x1000 /* physical address extension enable mask */
#define SLC_CTRL_DISABLE 0x1 /* SLC disable */
#define SLC_CTRL_INVALIDATE_MODE 0x40 /* SLC invalidate mode */
#define SLC_CTRL_BUSY_STATUS 0x100 /* SLC busy status */
#define SLC_CTRL_REGION_OP 0xe00 /* SLC region operation */
#if defined(CONFIG_ARC_SLC)
/*
* A spinlock is used for SLC access because, depending on the HW configuration,
* the SLC might be shared between the cores; in that case only one core is
* allowed to access the SLC register interface at a time.
*/
static struct k_spinlock slc_lock;
#endif
static bool dcache_available(void)
{
@@ -74,297 +56,22 @@ static void dcache_dc_ctrl(uint32_t dcache_en_mask)
}
}
static bool pae_exists(void)
{
uint32_t bcr = z_arc_v2_aux_reg_read(_ARC_V2_MMU_BUILD);
return 1 == FIELD_GET(MMU_BUILD_PHYSICAL_ADDR_EXTENSION, bcr);
}
#if defined(CONFIG_ARC_SLC)
static void slc_enable(void)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
val &= ~SLC_CTRL_DISABLE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, val);
}
static void slc_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END1, 0);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START1, 0);
}
static void slc_flush_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be set up before START (the latter triggers the operation).
* END can't be the same as START, so add (l2_line_sz - 1) to the size.
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be set up before START (the latter triggers the operation).
* END can't be the same as START, so add (l2_line_sz - 1) to the size.
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be set up before START (the latter triggers the operation).
* END can't be the same as START, so add (l2_line_sz - 1) to the size.
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_all(void)
{
K_SPINLOCK(&slc_lock) {
z_arc_v2_aux_reg_write(_ARC_V2_SLC_FLUSH, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
#endif /* CONFIG_ARC_SLC */
void arch_dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
#if defined(CONFIG_ARC_SLC)
slc_enable();
#endif
}
void arch_dcache_disable(void)
{
/* nothing */
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
static void dcache_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_DC_PTAG_HI, 0);
}
static void dcache_flush_region(void *start_addr_ptr, size_t size)
static void arch_dcache_flush(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return;
}
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
arch_irq_unlock(key); /* --exit critical section-- */
}
#else /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
static void dcache_flush_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
@@ -387,25 +94,24 @@ static void dcache_flush_lines(void *start_addr_ptr, size_t size)
} while (start_addr < end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_lines(void *start_addr_ptr, size_t size)
static void arch_dcache_invd(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
@@ -416,179 +122,20 @@ static void dcache_invalidate_lines(void *start_addr_ptr, size_t size)
irq_unlock(key); /* -exit critical section- */
}
static void dcache_flush_and_invalidate_lines(void *start_addr_ptr, size_t size)
int arch_dcache_range(void *addr, size_t size, int op)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
#endif /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
if (op == K_CACHE_INVD) {
/*
* TODO: On invalidate we can contextually flush by setting the
* DC_CTRL_INVALID_FLUSH bit
*/
arch_dcache_invd(addr, size);
} else if (op == K_CACHE_WB) {
arch_dcache_flush(addr, size);
} else {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_region(start_addr_ptr, size);
#else
dcache_flush_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_region(start_addr_ptr, size);
#endif
return 0;
}
int arch_dcache_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_invalidate_region(start_addr_ptr, size);
#else
dcache_invalidate_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_invalidate_region(start_addr_ptr, size);
#endif
return 0;
}
int arch_dcache_flush_and_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_and_invalidate_region(start_addr_ptr, size);
#else
dcache_flush_and_invalidate_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_region(start_addr_ptr, size);
#endif
return 0;
}
int arch_dcache_flush_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLSH, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_all();
#endif
return 0;
}
int arch_dcache_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_invalidate_all();
#endif
return 0;
}
int arch_dcache_flush_and_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_all();
#endif
return 0;
}
@@ -610,81 +157,17 @@ size_t arch_dcache_line_size_get(void)
}
#endif
void arch_icache_enable(void)
static int init_dcache(const struct device *unused)
{
/* nothing */
}
ARG_UNUSED(unused);
void arch_icache_disable(void)
{
/* nothing */
}
int arch_icache_flush_all(void)
{
return -ENOTSUP;
}
int arch_icache_invd_all(void)
{
return -ENOTSUP;
}
int arch_icache_flush_and_invd_all(void)
{
return -ENOTSUP;
}
int arch_icache_flush_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
int arch_icache_invd_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
int arch_icache_flush_and_invd_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
static int init_dcache(void)
{
sys_cache_data_enable();
arch_dcache_enable();
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
init_dcache_line_size();
#endif
/*
* Initialize the high address registers to 0 if PAE exists; cache operations
* for 40-bit addresses are not implemented.
*/
if (pae_exists()) {
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_high_addr_init();
#endif
#if defined(CONFIG_ARC_SLC)
slc_high_addr_init();
#endif
}
return 0;
}
void arch_cache_init(void)
{
init_dcache();
}
SYS_INIT(init_dcache, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
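
For context on how the d-cache entry points in this file are normally reached: in recent Zephyr releases, drivers and applications typically call the sys_cache_* wrappers (seen above in init_dcache() via sys_cache_data_enable()) rather than the arch_* functions directly. A minimal usage sketch for a shared DMA buffer (the buffer name, size and alignment below are illustrative; assumes data caching is enabled in the build):

#include <zephyr/cache.h>
#include <zephyr/kernel.h>

#define DMA_BUF_SIZE 256	/* illustrative size */
static uint8_t dma_buf[DMA_BUF_SIZE] __aligned(32);	/* line size is configuration-dependent */

void share_buffer_with_device(void)
{
	/* CPU has written dma_buf: write dirty lines back so the device sees the data. */
	sys_cache_data_flush_range(dma_buf, sizeof(dma_buf));

	/* ... device DMA transfer runs here ... */

	/* Device has written dma_buf: drop stale lines before the CPU reads them. */
	sys_cache_data_invd_range(dma_buf, sizeof(dma_buf));
}

Internally these wrappers dispatch to the per-arch implementations, such as the arch_dcache_*_range() functions being reworked in this diff.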


@@ -26,7 +26,6 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
.align 4
.word 0
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_IDLE
/*
* @brief Put the CPU in low-power mode
*
@@ -49,9 +48,7 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
sleep r1
j_s [blink]
nop
#endif
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
/*
* @brief Put the CPU in low-power mode, entered with IRQs locked
*
@@ -59,7 +56,6 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
*
* void arch_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
@@ -74,4 +70,3 @@ SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
sleep r1
j_s.d [blink]
seti r0
#endif


@@ -1,70 +0,0 @@
# Digital Signal Processing (DSP) configuration options
# Copyright (c) 2022 Synopsys
# SPDX-License-Identifier: Apache-2.0
menu "ARC DSP Options"
depends on CPU_HAS_DSP
config ARC_DSP
bool "digital signal processing (DSP)"
help
This option enables the DSP block and DSP instructions.
config ARC_DSP_TURNED_OFF
bool "Turn off DSP if it presents"
depends on !ARC_DSP
help
This option disables the DSP block by resetting the DSP_CTRL register.
config DSP_SHARING
bool "DSP register sharing"
depends on ARC_DSP && MULTITHREADING
select ARC_HAS_ACCL_REGS
help
This option enables preservation of the hardware DSP registers
across context switches to allow multiple threads to perform concurrent
DSP operations.
config ARC_DSP_BFLY_SHARING
bool "ARC complex DSP operation"
depends on ARC_DSP && CPU_ARCEM
help
This option enables Zephyr to store and restore the DSP_BFLY0
and FFT_CTRL registers during context switches. It is only
required when butterfly instructions are used by multiple
threads.
config ARC_XY_ENABLE
bool "ARC address generation unit registers"
help
Processors with XY memory and AGU registers can enable this
option to accelerate DSP instructions.
config ARC_AGU_SHARING
bool "ARC address generation unit register sharing"
depends on ARC_XY_ENABLE && MULTITHREADING
default y if DSP_SHARING
help
This option enables preservation of the hardware AGU registers
across context switches to allow multiple threads to perform concurrent
operations on XY memory. Saving and restoring the small AGU register set
is the default: 4 address pointer regs, 2 address offset regs
and 4 modifier regs.
config ARC_AGU_MEDIUM
bool "ARC AGU medium size register"
depends on ARC_AGU_SHARING
help
Save and restore the medium AGU register set: 8 address pointer regs,
4 address offset regs and 12 modifier regs.
config ARC_AGU_LARGE
bool "ARC AGU large size register"
depends on ARC_AGU_SHARING
select ARC_AGU_MEDIUM
help
Save and restore the large AGU register set: 12 address pointer regs,
8 address offset regs and 24 modifier regs.
endmenu


@@ -1,70 +0,0 @@
/*
* Copyright (c) 2022 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARCv2 DSP and AGU structure member offset definition file
*
*/
#ifdef CONFIG_DSP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_ctrl);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_glo);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_ghi);
#ifdef CONFIG_ARC_DSP_BFLY_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_bfly0);
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_fft_ctrl);
#endif
#endif
#ifdef CONFIG_ARC_AGU_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap3);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod3);
#ifdef CONFIG_ARC_AGU_MEDIUM
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os3);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod8);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod9);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod10);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod11);
#endif
#ifdef CONFIG_ARC_AGU_LARGE
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap8);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap9);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap10);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap11);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod12);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod13);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod14);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod15);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod16);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod17);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod18);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod19);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod20);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod21);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod22);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod23);
#endif
#endif

Some files were not shown because too many files have changed in this diff.