Compare commits

..

4 Commits

Author SHA1 Message Date
Gerard Marull-Paretas
9ff30a149d init: remove support for devices
Devices no longer use SYS_INIT infrastructure.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-09-21 12:43:04 +00:00
Fabio Baltieri
db747c085f scripts: check_init_priorities: handle init and device decoupling
Fix check_init_priorities.py to handle the decoupling between SYS_INIT
and devices.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2023-09-21 12:43:04 +00:00
Gerard Marull-Paretas
c55650c200 [RFC] device: devices no longer depend on SYS_INIT
Background
----------

Nowadays, devices rely on the `init.h` (i.e. `SYS_INIT`) infrastructure
to get initialized automatically. The `init.h` infrastructure is a basic
mechanism that allows registering an init function to be run when the
system is initialized (before `main`). It provides multiple
initialization levels: `PRE_KERNEL_*`, `POST_KERNEL`, etc., and within
each level a numeric priority (0-99). Using this information, each
registered init entry is sorted by the linker so that the Kernel can
later iterate over them in the correct order. This all sounds nice and
simple, but when it comes to devices, this mechanism has proven to be
insufficient.
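
For illustration, a plain `SYS_INIT` registration looks roughly like
this (simplified sketch; exact signatures may vary between releases):

```
#include <zephyr/init.h>

/* Runs automatically before main(), at the given level and priority. */
static int my_subsys_init(void)
{
        /* one-time setup for some system service */
        return 0;
}

SYS_INIT(my_subsys_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
```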

Before diving into the changes proposed in this patch, let's first look
at the implementation details of the current model. When devices are
defined using any of the `DEVICE_*DEFINE` macros, they also create an
init entry using the internal `init.h` APIs (see
`Z_DEVICE_INIT_ENTRY_DEFINE`). This entry stores a pointer to the
device init call and to the device itself. As the reader can imagine,
this implies a coupling between `init.h` and `device.h`. The only link
between a device and an init entry is the device pointer stored in the
init entry. This allows the Kernel init machinery to call the init
function with the right device pointer. However, there is no direct
relationship between a device and its init function; that is, `struct
device` does not keep a reference to the device init function. This is
not a problem today, but it could become one if features like deferred
initialization or init/de-init have to be implemented. In reality,
though, this is a _secondary_ problem.

The most problematic issue we have today is that devices are mixed with
`SYS_INIT` calls. They are all part of the same init block and are
treated equally. So, for example, one can theoretically have a system
where the init sequence looks like this:

```
- PRE_KERNEL_1
  - init call 1
  - device 0
  - init call 2
  - init call 3
  - device 1
  - device 2
  - init call 4
- PRE_KERNEL_2
  - init call 5
  - device 3
  ...
```
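
The coupling described above boils down to roughly the following (a
simplified sketch, not the exact Zephyr definitions):

```
/* Simplified sketch of the current model (not the exact definitions). */
struct init_entry {
        int (*init)(const struct device *dev); /* init call */
        const struct device *dev;              /* NULL for plain SYS_INIT */
};

/* Kernel init machinery, per level: walk the linker-sorted entries and
 * call each init function with its (possibly NULL) device pointer.
 */
static void run_init_level(const struct init_entry *start,
                           const struct init_entry *end)
{
        for (const struct init_entry *entry = start; entry < end; entry++) {
                int rc = entry->init(entry->dev);

                if (entry->dev != NULL && rc != 0) {
                        /* mark the device as failed to initialize */
                }
        }
}
```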

This is problematic because:

(1) Init calls can depend on devices, but this dependency is not
    tracked anywhere, so the user must verify that init priorities are
    correct to avoid runtime failures.
(2) Device drivers can have multiple instances, and each instance may
    require a different set of priorities, while `SYS_INIT` calls are
    singletons.
(3) Devices likely don't need so many init levels (`SMP`?
    `APPLICATION`?).

(1) is particularly important because init calls do not have a specific
purpose: their usage ranges from SoC init code to system services, so
it is _unpredictable_ what can happen in there. (2) is a tangential
topic, not actually fixed by this patch, even though it helps. (3) is
more of a post-patch cleanup we need.

So, what does this patch propose...?
------------------------------------

**First, this patch is still a HACK, so please focus on the description
rather than on the implementation.**

The idea is to give devices their own init infrastructure, minimizing
the coupling with `init.h`. This patch groups devices in a separate
section but keeps the same init levels as before. Therefore, the list
above would look like:

```
/* devices */
- PRE_KERNEL_1
  - device 0
  - device 1
  - device 2
- PRE_KERNEL_2
  - device 3
  ...

/* init calls */
- PRE_KERNEL_1
  - init call 1
  - init call 2
  - init call 3
  - init call 4
- PRE_KERNEL_2
  - init call 5
  ...
```

This means that **it is no longer possible** to mix init calls and
devices within the same level. So how does it work now? Within each
level, devices are initialized first, then init calls. It is done this
way because I believe it is init calls that depend on devices, and not
vice versa. I may be wrong, in which case the whole proposal would need
a rework. So the init order would look like:

```
- PRE_KERNEL_1
  - device 0
  - device 1
  - device 2
  - init call 1
  - init call 2
  - init call 3
  - init call 4
- PRE_KERNEL_2
  - device 3
  - init call 5
  ...
```
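
A hand-wavy sketch of what the per-level loop would then do (the helper
names below are hypothetical, not the actual implementation):

```
/* Hypothetical sketch of the proposed per-level ordering. Helper names
 * are illustrative only.
 */
static void run_level(int level)
{
        /* 1. Devices registered for this level, from the device section. */
        for (const struct device *dev = device_level_start(level);
             dev < device_level_end(level); dev++) {
                int rc = dev->init(dev); /* init call kept in struct device */

                if (rc != 0) {
                        /* record the failure in the device state */
                }
        }

        /* 2. Plain SYS_INIT entries for this level, from the init section. */
        for (const struct init_entry *entry = init_level_start(level);
             entry < init_level_end(level); entry++) {
                entry->init(NULL);
        }
}
```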

Why does this matter for future development?
--------------------------------------------

First, we now have devices grouped at least on a per-level basis. This
means that we could likely start using devicetree ordinals, reducing
the source of ordering problems to the init level only. We can also
re-think device init levels (and probably init levels in general,
hello `PRE_KERNEL_1/2`). For example, there could be pre-kernel and
post-kernel devices only. The other side effect of this change is that
`struct device` stores the device init call, so we can potentially
defer device initialization or implement things like init/de-init.
These all come with their own complexity, of course, but this patch
could be a step forward.
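
For example, deferred initialization could then look something like
this (a hypothetical sketch; the names are illustrative only):

```
/* Hypothetical sketch: with the init call stored in struct device, a
 * device could be skipped at boot and initialized later, on demand.
 */
int device_init_deferred(const struct device *dev)
{
        if (dev->state->initialized) {
                return 0; /* nothing to do */
        }

        return dev->init(dev); /* init call kept in struct device */
}
```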

Problems
--------

This is a breaking change; sorry, another one for the list. The API
does not change, so builds should continue to work. However, the system
changes its behavior, so if anyone was relying on mixed init
call/device sequences, things will break. This is why it is important
to analyze lots of use cases before moving forward with this proposal.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-09-21 12:43:04 +00:00
Gerard Marull-Paretas
dac81a202f device: device handles should not be related with SYS_INIT
Device handles are only meant to identify a device, nothing else.
Reserve negative values for any potential future use, though.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
2023-09-21 12:43:04 +00:00
55569 changed files with 824988 additions and 3191069 deletions

View File

@@ -5,6 +5,7 @@
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
--ignore BRACES
--ignore PRINTK_WITHOUT_KERN_LEVEL
--ignore SPLIT_STRING
--ignore VOLATILE
@@ -28,5 +29,3 @@
--ignore MULTISTATEMENT_MACRO_USE_DO_WHILE
--ignore ENOSYS
--ignore IS_ENABLED_CONFIG
--ignore EXPORT_SYMBOL
--ignore COMPARISON_TO_NULL

View File

@@ -32,21 +32,17 @@ ColumnLimit: 100
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
ForEachMacros:
- 'ARRAY_FOR_EACH'
- 'ARRAY_FOR_EACH_PTR'
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'FOR_EACH_IDX'
- 'FOR_EACH_IDX_FIXED_ARG'
- 'FOR_EACH_NONEMPTY_TERM'
- 'FOR_EACH_FIXED_ARG_NONEMPTY_TERM'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_DLIST_FOR_EACH_NODE'
- 'SYS_DLIST_FOR_EACH_NODE_SAFE'
- 'SYS_SEM_LOCK'
- 'SYS_SFLIST_FOR_EACH_CONTAINER'
- 'SYS_SFLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SFLIST_FOR_EACH_NODE'
@@ -70,19 +66,7 @@ ForEachMacros:
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- 'STRUCT_SECTION_FOREACH'
- 'STRUCT_SECTION_FOREACH_ALTERNATE'
- 'TYPE_SECTION_FOREACH'
- 'K_SPINLOCK'
- 'COAP_RESOURCE_FOREACH'
- 'COAP_SERVICE_FOREACH'
- 'COAP_SERVICE_FOREACH_RESOURCE'
- 'HTTP_RESOURCE_FOREACH'
- 'HTTP_SERVER_CONTENT_TYPE_FOREACH'
- 'HTTP_SERVICE_FOREACH'
- 'HTTP_SERVICE_FOREACH_RESOURCE'
- 'I3C_BUS_FOR_EACH_I3CDEV'
- 'I3C_BUS_FOR_EACH_I2CDEV'
- 'MIN_HEAP_FOREACH'
IfMacros:
- 'CHECKIF'
# Disabled for now, see bug https://github.com/zephyrproject-rtos/zephyr/issues/48520
@@ -97,20 +81,11 @@ IncludeCategories:
- Regex: '.*'
Priority: 3
IndentCaseLabels: false
IndentGotoLabels: false
IndentWidth: 8
InsertBraces: true
InsertNewlineAtEOF: true
SpaceBeforeInheritanceColon: False
SpaceBeforeParens: ControlStatementsExceptControlMacros
SortIncludes: Never
UseTab: ForContinuationAndIndentation
WhitespaceSensitiveMacros:
- COND_CODE_0
- COND_CODE_1
- IF_DISABLED
- IF_ENABLED
- LISTIFY
- STRINGIFY
- Z_STRINGIFY
- DT_FOREACH_PROP_ELEM_SEP

View File

@@ -1,21 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (c) 2024, Basalte bv
analyzer:
# Start by disabling all
- --disable-all
# Enable the sensitive profile
- --enable=sensitive
# Disable unused cases
- --disable=boost
- --disable=mpi
# Many identifiers in zephyr start with _
- --disable=clang-diagnostic-reserved-identifier
- --disable=clang-diagnostic-reserved-macro-identifier
# Cleanup
- --clean

View File

@@ -86,10 +86,6 @@ indent_size = 8
[COMMIT_EDITMSG]
max_line_length = 75
# Patches
[{*.patch,*.diff}]
trim_trailing_whitespace = false
# Kconfig
[Kconfig*]
indent_style = tab

View File

@@ -0,0 +1,57 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Notes (delete this)**
Github Discussions (https://github.com/zephyrproject-rtos/zephyr/discussions)
are available to first verify that the issue is a genuine Zephyr bug and not a
consequence of Zephyr services misuse.
This issue list is only for bugs in the main Zephyr code base
(https://github.com/zephyrproject-rtos/zephyr/). If the bug is for a project
fork (such as NCS) specific feature, please open an issue in the fork project
instead.
**Describe the bug**
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
- ...
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...

View File

@@ -1,85 +0,0 @@
name: Bug Report
description: File a bug report.
labels: ["bug"]
type: "Bug"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: textarea
id: what-happened
attributes:
label: Describe the bug
description: |
A clear and concise description of what the bug is.
placeholder: |
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
validations:
required: true
- type: checkboxes
id: regression
attributes:
label: Regression
description: |
Check this box if this is a regression and provide a SHA if you were able to "git bisect" to a specific commit.
options:
- label: This is a regression.
required: false
- type: textarea
id: reproduce
attributes:
label: Steps to reproduce
description: |
Steps to reproduce the behavior.
placeholder: |
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
validations:
required: false
- type: textarea
id: logs
attributes:
label: Relevant log output
description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
render: shell
- type: dropdown
attributes:
label: Impact
description: Impact of this bug
multiple: false
options:
- Showstopper - Prevents release or major functionality; system unusable.
- Major - Severely degrades functionality; workaround is difficult or unavailable.
- Functional Limitation - Some features not working as expected, but system usable.
- Annoyance - Minor irritation; no significant impact on usability or functionality.
- Intermittent - Occurs occasionally; hard to reproduce.
- Not sure
default: 3
validations:
required: true
- type: textarea
id: env
attributes:
label: Environment
description: please complete the following information
placeholder: |
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
- type: textarea
id: context
attributes:
label: Additional Context
description: Provide other context that could be relevant to the bug, such as pin setting, target configuration, etc.

View File

@@ -0,0 +1,20 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,42 +0,0 @@
name: Enhancement
description: Submit an Enhancement
labels: ["Enhancement"]
type: "Enhancement"
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement proposal.
- type: textarea
id: description
attributes:
label: Summary
description: |
Is your enhancement proposal related to a problem? Please describe.
placeholder: |
A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: |
Describe the solution you'd like
placeholder: |
A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Alternatives
description: Describe alternatives you've considered
placeholder: |
A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: context
attributes:
label: Additional Context
description: Add any other context or graphics (drag-and-drop an image) about the enhancement here.

View File

@@ -0,0 +1,51 @@
---
name: RFC / Proposal
about: Submit an RFC / Proposal
title: ''
labels: RFC
assignees: ''
---
## Introduction
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
### Problem description
Why do we want this change and what problem are we trying to address?
### Proposed change
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
## Detailed RFC
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
### Proposed change (Detailed)
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
### Dependencies
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
### Concerns and Unresolved Questions
List any concerns, unknowns, and generally unresolved questions etc.
## Alternatives
List any alternatives considered, and the reasons for choosing this option
over them.

View File

@@ -1,75 +0,0 @@
name: RFC / Proposal
description: Submit a Proposal (RFC)
labels: ["RFC"]
type: RFC
assignees: []
body:
- type: markdown
attributes:
value: |
## Introduction
This section targets end users, TSC members, maintainers and anyone else
that might need a quick explanation of your proposed change.
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Why do we want this change and what problem are we trying to address?
placeholder: Explain the problem or limitation this RFC is meant to resolve.
validations:
required: true
- type: textarea
id: proposed-change-summary
attributes:
label: Proposed Change (Summary)
description: A high-level summary of the proposed change.
placeholder: Brief summary of what will change if this RFC is implemented.
validations:
required: true
- type: markdown
attributes:
value: |
## Detailed RFC
This section targets the development team. Upon reading it, each engineer
should understand what must be done to implement the proposed feature.
- type: textarea
id: detailed-change
attributes:
label: Proposed Change (Detailed)
description: Describe the change in as much detail as possible. Include context or background info, and reuse of existing components if applicable.
placeholder: Explain exactly what you're planning to change and how.
validations:
required: true
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: Highlight how this change may affect the rest of the project or other teams/components.
placeholder: List components, modules, or teams affected.
validations:
required: false
- type: textarea
id: concerns
attributes:
label: Concerns and Unresolved Questions
description: List any concerns, unknowns, or unresolved questions related to this proposal.
placeholder: Any areas of uncertainty?
validations:
required: false
- type: textarea
id: alternatives
attributes:
label: Alternatives Considered
description: What alternative solutions were considered? Why was this proposal chosen?
placeholder: List alternatives and explain the rationale behind your choice.
validations:
required: false

View File

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,29 +0,0 @@
name: Feature Request
description: Suggest a new feature or enhancement
labels: ["Feature Request"]
type: Feature
assignees: []
body:
- type: textarea
id: problem
attributes:
label: Is your feature request related to a problem? Please describe.
description: A clear and concise description of what the problem is.
placeholder: e.g., I'm frustrated when I need to do X manually because Y is missing.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
placeholder: e.g., It would be great if the system could automatically handle X by doing Y.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: Include any alternative solutions or features

View File

@@ -0,0 +1,42 @@
---
name: Contributor Nomination
about: Nominate a GitHub user for additional rights on the Zephyr Project
title: ''
labels: Role Nomination
assignees: ''
---
# Background
The [TSC Project Roles] defines the main roles for the Zephyr Project, including
Maintainer, Collaborator, and Contributor.
By default anyone that contributes code or documentation is a Contributor, but
with the lowest [GitHub Permission Level] of Read. For example, Contributors
with Read permission do not have the permission to add reviewers to a pull
request.
Use this template to nominate a GitHub user for the Contributor role with
Triage permission level, which allows the user to add reviewers to a pull
request and be added as a reviewer by other users.
# Nomination
## GitHub User
Provide the following information about the GitHub user:
1. Full Name
1. GitHub username
1. Organization (optional)
## Supporting Documents
Add links to 3-5 GitHub pull requests, in the Zephyr project, authored or
reviewed by the GitHub user that demonstrate the user's dedication to the
Zephyr project.
[TSC Project Roles]: <https://docs.zephyrproject.org/latest/project/project_roles.html>
[GitHub Permission Level]: <https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization>

View File

@@ -1,57 +0,0 @@
name: Contributor Nomination
description: Nominate a GitHub user for the Contributor role with triage permissions
labels: [Role Nomination]
assignees: ['nashif']
body:
- type: markdown
attributes:
value: |
## Background
The [TSC Project Roles](https://docs.zephyrproject.org/latest/project/project_roles.html) defines the main roles for the Zephyr Project, including Maintainer, Collaborator, and Contributor.
By default, anyone who contributes code or documentation is a Contributor, but with the lowest [GitHub Permission Level](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization) of **Read**.
Use this form to nominate a user for the **Contributor** role with **Triage** permission, which allows the user to:
- Add reviewers to pull requests
- Be added as a reviewer by others
- type: input
id: full-name
attributes:
label: Full Name
description: Full name of the nominated contributor.
placeholder: e.g., Jane Doe
validations:
required: true
- type: input
id: github-username
attributes:
label: GitHub Username
description: GitHub handle of the nominated contributor.
placeholder: e.g., @janedoe
validations:
required: true
- type: input
id: organization
attributes:
label: Organization
description: Organization the nominee is affiliated with (optional).
placeholder: e.g., Acme Corp
validations:
required: false
- type: textarea
id: supporting-documents
attributes:
label: Supporting Documents
description: Provide links to 3-5 pull requests authored or reviewed by the nominee that demonstrate their dedication to the Zephyr project.
placeholder: |
e.g.,
- https://github.com/zephyrproject-rtos/zephyr/pull/12345
- https://github.com/zephyrproject-rtos/zephyr/pull/23456
- https://github.com/zephyrproject-rtos/zephyr/pull/34567
validations:
required: true

View File

@@ -0,0 +1,68 @@
---
name: External Source Code
about: Submit a proposal to integrate external source code
title: ''
labels: TSC
assignees: ''
---
## Origin
Name of project hosting the original open source code
Provide a link to the source
## Purpose
Brief description of what this software does
## Mode of integration
Describe whether you'd like to integrate this external component in the main tree
or as a module, and why. If the mode of integration is a module, suggest a
repository name for the module
## Maintainership
List the person(s) that will be maintaining the integration of this external code
for the foreseeable future. Please use GitHub IDs to identify them. You can
choose to identify a single maintainer only or add collaborators as well
## Pull Request
Pull request (if any) with the actual implementation of the integration, be it
in the main tree or as a module (pointing to your own fork for now). Make sure
the PR is correctly labeled as "DNM"
## Description
Long description that will help reviewers discuss suitability of the
component to solve the problem at hand (there may be a better options
available.)
What is its primary functionality (e.g., SQLLite is a lightweight
database)?
What problem are you trying to solve? (e.g., a state store is
required to maintain ...)
Why is this the right component to solve it (e.g., SQLite is small,
easy to use, and has a very liberal license.)
## Dependencies
What other components does this package depend on?
Will the Zephyr project have a direct dependency on the component, or
will it be included via an abstraction layer with this component as a
replaceable implementation?
## Revision
Version or SHA you would like to integrate initially
## License
Please use an SPDX identifier (https://spdx.org/licenses/), such as
``BSD-3-Clause``

View File

@@ -1,106 +0,0 @@
name: External Component Integration
description: Propose integration of an external open source component
labels: ["TSC"]
assignees: []
body:
- type: textarea
id: origin
attributes:
label: Origin
description: Name of project hosting the original open source code. Provide a link to the source.
placeholder: e.g., SQLite - https://sqlite.org
validations:
required: true
- type: textarea
id: purpose
attributes:
label: Purpose
description: Brief description of what this software does.
placeholder: |
e.g., A small, fast, self-contained SQL database engine.
validations:
required: true
- type: textarea
id: integration-mode
attributes:
label: Mode of Integration
description: Should this be integrated in the main tree or as a module? Explain your choice and suggest a module repo name if applicable.
placeholder: |
e.g., As a module - proposed repo name: zephyr-sqlite
validations:
required: true
- type: textarea
id: maintainership
attributes:
label: Maintainership
description: List maintainers (GitHub IDs) for this integration. Include at least one primary maintainer.
placeholder: |
e.g., @username1 (primary), @username2 (collaborator)
validations:
required: true
- type: input
id: pull-request
attributes:
label: Pull Request
description: Link to the pull request (if any) for this integration. Must be labeled "DNM" (Do Not Merge).
placeholder: |
e.g., https://github.com/zephyrproject-rtos/zephyr/pull/12345
validations:
required: false
- type: textarea
id: description
attributes:
label: Description
description: Long-form description to justify suitability of this component.
placeholder: |
- What is its primary functionality?
- What problem does it solve?
- Why is this the right component?
validations:
required: true
- type: textarea
id: security
attributes:
label: Security
description: Security-related aspects of this component, including cryptographic functions and known vulnerabilities.
placeholder: |
- Does it use cryptography?
- How are vulnerabilities handled?
- Any known CVEs?
validations:
required: false
- type: textarea
id: dependencies
attributes:
label: Dependencies
description: What does this component depend on, and how will it be integrated (directly or via abstraction)?
placeholder: |
- Other external packages?
- Direct or abstracted use in Zephyr?
validations:
required: false
- type: input
id: revision
attributes:
label: Version or SHA
description: Which version or specific commit should be initially integrated?
placeholder: e.g., v3.45.0 or 79cc94d
validations:
required: true
- type: input
id: license
attributes:
label: License (SPDX)
description: Provide the license using a valid SPDX identifier (e.g., BSD-3-Clause).
placeholder: e.g., MIT or BSD-3-Clause
validations:
required: true

41
.github/ISSUE_TEMPLATE/008_bin-blobs.md vendored Normal file
View File

@@ -0,0 +1,41 @@
---
name: Binary blobs
about: Submit a proposal to integrate binary blob(s)
title: ''
labels: TSC
assignees: ''
---
## Origin
Describe where the binary blob(s) originate from
## Type
- [ ] Precompiled library
- [ ] Firmware image
## Module
The Zephyr module that this blob(s) will be referenced from
## Purpose
Brief description of what the blob(s) do. It is especially important to describe
the functionality that the blob(s) provide, to the largest extent possible
## Pull Request
Link to the Pull request with the actual implementation of the integration. If
you are submitting this as part of a new module you may link to the same Pull
Request that introduced the new module.
## Dependencies
What other components do the blob(s) depend on, if any?
## License
Document the license the blob(s) are distributed under

View File

@@ -1,5 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Zephyr Community Support
url: https://github.com/zephyrproject-rtos/zephyr/discussions
about: Please ask and answer questions here.

8
.github/SECURITY.md vendored
View File

@@ -8,12 +8,12 @@ updates:
- The most recent release, and the release prior to that.
- Active LTS releases.
At this time, with the latest release of v4.3, the supported
At this time, with the latest release of v3.3, the supported
versions are:
- v4.3: Current release
- v4.2: Prior release
- v3.7: Current LTS
- v2.7: Current LTS
- v3.2: Prior release
- v3.3: Current release
## Reporting process

View File

@@ -1,2 +0,0 @@
paths:
- .github

View File

@@ -1,2 +0,0 @@
paths:
- doc

View File

@@ -1,26 +0,0 @@
version: 2
enable-beta-ecosystems: true
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
commit-message:
prefix: "ci: github: "
labels: []
groups:
actions-deps:
patterns:
- "*"
- package-ecosystem: "uv"
directory: "/doc"
schedule:
interval: "weekly"
commit-message:
prefix: "ci: doc: "
labels: []
groups:
doc-deps:
patterns:
- "*"

View File

@@ -1,4 +1,4 @@
name: Pull Request/Issue Assigner
name: Pull Request Assigner
on:
pull_request_target:
@@ -9,80 +9,43 @@ on:
- ready_for_review
branches:
- main
- collab-*
- v*-branch
issues:
types:
- labeled
permissions:
contents: read
jobs:
assignment:
name: Pull Request/Issue Assignment
name: Pull Request Assignment
if: github.event.pull_request.draft == false
runs-on: ubuntu-24.04
permissions:
issues: write # to add assignees to issues
runs-on: ubuntu-22.04
steps:
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U PyGithub>=1.55 west
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: zephyrproject-rtos/action-python-env@32e53bef090c33d53aa94f1d9a9d29c93cfdc5f7 # main
with:
python-version: 3.12
- name: Fetch west.yml/Maintainer.yml from pull request
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
run: |
git fetch origin pull/${{ github.event.pull_request.number }}/merge
git show FETCH_HEAD:west.yml > pr_west.yml
git show FETCH_HEAD:MAINTAINERS.yml > pr_MAINTAINERS.yml
- name: west setup
if: >
github.event_name == 'pull_request_target'
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
uses: actions/checkout@v3
- name: Run assignment script
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
FLAGS="-v"
FLAGS+=" -o ${{ github.event.repository.owner.login }}"
FLAGS+=" -r ${{ github.event.repository.name }}"
FLAGS+=" -M MAINTAINERS.yml"
if [ "${{ github.event_name }}" = "pull_request_target" ]; then
if [ "${{ github.base_ref }}" != "main" ]; then
FLAGS+=" -P ${{ github.event.pull_request.number }} --updated-manifest pr_west.yml --updated-maintainer-file pr_MAINTAINERS.yml"
else
FLAGS+=" -P ${{ github.event.pull_request.number }}"
fi
elif [ "${{ github.event_name }}" = "issues" ]; then
FLAGS+=" -I ${{ github.event.issue.number }}"
FLAGS+=" -I ${{ github.event.issue.number }}"
elif [ "${{ github.event_name }}" = "schedule" ]; then
FLAGS+=" --modules"
FLAGS+=" --modules"
else
echo "Unknown event: ${{ github.event_name }}"
exit 1
fi
python3 scripts/ci/set_assignees.py $FLAGS
- name: Check maintainer file changes
if: >
github.event_name == 'pull_request_target' && github.base_ref == 'main'
env:
GITHUB_TOKEN: ${{ secrets.ZB_PR_ASSIGNER_GITHUB_TOKEN }}
run: |
python ./scripts/ci/check_maintainer_changes.py \
--repo zephyrproject-rtos/zephyr MAINTAINERS.yml pr_MAINTAINERS.yml
python3 scripts/set_assignees.py $FLAGS

View File

@@ -7,17 +7,10 @@ on:
branches:
- main
permissions:
contents: read
jobs:
backport:
name: Backport
runs-on: ubuntu-24.04
permissions:
contents: write # to create/push backport branches
pull-requests: write # to create backport PRs
issues: write # to add labels to issue created if backport fails
runs-on: ubuntu-22.04
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
@@ -31,8 +24,8 @@ jobs:
)
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@7e74f601d11eaca577742445e87775b5651a965f # v2.0.3-3
uses: zephyrproject-rtos/action-backport@v2.0.3-3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_token: ${{ secrets.ZB_GITHUB_TOKEN }}
issue_labels: Backport
labels_template: '["Backport"]'

View File

@@ -2,49 +2,30 @@ name: Backport Issue Check
on:
pull_request_target:
types:
- edited
- opened
- reopened
- synchronize
branches:
- v*-branch
permissions:
contents: read
jobs:
backport:
name: Backport Issue Check
concurrency:
group: backport-issue-check-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
issues: read # to check if associated issue exists for backport
steps:
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}

99
.github/workflows/blackbox_tests.yml vendored Normal file
View File

@@ -0,0 +1,99 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister BlackBox TestSuite
on:
push:
branches:
- main
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/blackbox_tests.yml'
pull_request:
branches:
- main
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/blackbox_tests.yml'
jobs:
twister-tests:
name: Twister Black Box Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
steps:
- name: Apply Container Owner Mismatch Workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Checkout
uses: actions/checkout@v3
- name: Environment Setup
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Go Into Venv
shell: bash
run: |
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
echo "$(which python)"
- name: Install Packages
run: |
python3 -m pip install -U -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt -r scripts/requirements-run-test.txt
- name: Run Pytest For Twister Black Box Tests
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox
- name: Upload Unit Test Results
if: success() || failure()
uses: actions/upload-artifact@v2
with:
name: Black Box Test Results (Python ${{ matrix.python-version }})
path: |
twister-out*/twister.log
twister-out*/twister.json
twister-out*/testplan.log
retention-days: 14
- name: Clear Workspace
if: success() || failure()
run: |
rm -rf twister-out*/

View File

@@ -5,26 +5,20 @@ on:
workflows: ["BabbleSim Tests"]
types:
- completed
permissions:
contents: read
jobs:
bsim-test-results:
name: "Publish BabbleSim Test Results"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.event.workflow_run.conclusion != 'skipped'
permissions:
checks: write # to create the check run entry with test results
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v2
with:
run_id: ${{ github.event.workflow_run.id }}
- name: Publish BabbleSim Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: BabbleSim Test Results
comment_mode: off

View File

@@ -8,29 +8,18 @@ on:
- "west.yml"
- "subsys/bluetooth/**"
- "tests/bsim/**"
- "boards/nordic/nrf5*/*dt*"
- "dts/*/nordic/**"
- "tests/bluetooth/**"
- "samples/bluetooth/**"
- "boards/native/**"
- "soc/native/**"
- "boards/posix/**"
- "soc/posix/**"
- "arch/posix/**"
- "include/zephyr/arch/posix/**"
- "scripts/native_simulator/**"
- "samples/net/sockets/echo_*/**"
- "modules/hal_nordic/**"
- "modules/mbedtls/**"
- "modules/openthread/**"
- "subsys/net/l2/openthread/**"
- "include/zephyr/net/openthread.h"
- "drivers/ieee802154/**"
- "include/zephyr/net/ieee802154*"
- "drivers/serial/*nrfx*"
- "tests/drivers/uart/**"
- '!**.rst'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -39,19 +28,20 @@ concurrency:
jobs:
bsim-test:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
permissions:
checks: write # to create the check run entry with test results
bsim_bluetooth_test_results_file: ./bsim_bluetooth/bsim_results.xml
bsim_networking_test_results_file: ./bsim_net/bsim_results.xml
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -61,20 +51,14 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -85,127 +69,96 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Check common triggering files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v35
id: check-common-files
with:
files: |
.github/workflows/bsim-tests.yaml
.github/workflows/bsim-tests-publish.yaml
west.yml
boards/native/
soc/native/
arch/posix/
include/zephyr/arch/posix/
scripts/native_simulator/
boards/posix/**
soc/posix/**
arch/posix/**
include/zephyr/arch/posix/**
scripts/native_simulator/**
tests/bsim/*
boards/nordic/nrf5*/*dt*
dts/*/nordic/
modules/mbedtls/**
modules/hal_nordic/**
- name: Check if Bluethooth files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v35
id: check-bluetooth-files
with:
files: |
samples/bluetooth/
subsys/bluetooth/
tests/bluetooth/
tests/bsim/bluetooth/
tests/bsim/bluetooth/**
samples/bluetooth/**
subsys/bluetooth/**
- name: Check if Networking files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
uses: tj-actions/changed-files@v35
id: check-networking-files
with:
files: |
tests/bsim/net/
samples/net/sockets/echo_*/
modules/openthread/
subsys/net/l2/openthread/
tests/bsim/net/**
samples/net/sockets/echo_*/**
modules/openthread/**
subsys/net/l2/openthread/**
include/zephyr/net/openthread.h
drivers/ieee802154/
drivers/ieee802154/**
include/zephyr/net/ieee802154*
- name: Check if UART files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
id: check-uart-files
with:
files: |
tests/bsim/drivers/uart/
drivers/serial/*nrfx*
tests/drivers/uart/
- name: Update BabbleSim to manifest revision
if: >
steps.check-bluetooth-files.outputs.any_modified == 'true'
|| steps.check-networking-files.outputs.any_modified == 'true'
|| steps.check-uart-files.outputs.any_modified == 'true'
|| steps.check-common-files.outputs.any_modified == 'true'
steps.check-bluetooth-files.outputs.any_changed == 'true'
|| steps.check-networking-files.outputs.any_changed == 'true'
|| steps.check-common-files.outputs.any_changed == 'true'
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
git config --global advice.detachedHead false
git checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Bluetooth Tests with BSIM
if: steps.check-bluetooth-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
if: steps.check-bluetooth-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.bt.sh
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_bluetooth nice tests/bsim/bluetooth/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bluetooth_test_results_file} \
SEARCH_PATH=tests/bsim/bluetooth/ tests/bsim/run_parallel.sh
- name: Run Networking Tests with BSIM
if: steps.check-networking-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
if: steps.check-networking-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.net.sh
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_net nice tests/bsim/net/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_networking_test_results_file} \
SEARCH_PATH=tests/bsim/net/ tests/bsim/run_parallel.sh
- name: Run UART Tests with BSIM
if: steps.check-uart-files.outputs.any_modified == 'true' || steps.check-common-files.outputs.any_modified == 'true'
run: |
tests/bsim/ci.uart.sh
- name: Merge Test Results
run: |
junitparser merge --glob "./bsim_*/*bsim_results.*.xml" "./twister-out/twister.xml" junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results in HTML
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
name: bsim-test-results
path: |
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
with:
check_name: Bsim Test Results
files: "junit.xml"
comment_mode: off
./bsim_bluetooth/bsim_results.xml
./bsim_net/bsim_results.xml
${{ github.event_path }}
if-no-files-found: warn
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: event
path: |

View File

@@ -8,34 +8,25 @@ name: Bug Snapshot
on:
workflow_dispatch:
branches: [main]
schedule:
# Run daily at 00:05
- cron: '5 00 * * *'
permissions:
contents: read
# Run daily at 14:05
- cron: '5 14 * * *'
jobs:
make_bugs_pickle:
name: Make bugs pickle
runs-on: ubuntu-24.04
if: github.repository_owner == 'zephyrproject-rtos' && github.ref == 'refs/heads/main'
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Snapshot bugs
env:
@@ -51,7 +42,7 @@ jobs:
echo "BUGS_PICKLE_PATH=${BUGS_PICKLE_PATH}" >> ${GITHUB_ENV}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_SECRET_ACCESS_KEY }}

View File

@@ -1,12 +1,6 @@
name: Build with Clang/LLVM
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -15,22 +9,23 @@ concurrency:
jobs:
clang-build:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: [1, 2]
platform: ["native_posix"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-16
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -40,21 +35,16 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -64,20 +54,16 @@ jobs:
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git clean -f -d
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config --global update.narrow true
west config manifest.group-filter -- +ci,+optional
# In some cases modules are left in a state where they can't be
# updated (i.e. when we cancel a job and the builder is killed),
# So first retry to update, if that does not work, remove all modules
# and start over. (Workaround until we implement more robust module
# west caching).
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
- name: Check Environment
run: |
@@ -86,26 +72,31 @@ jobs:
gcc --version
ls -la
- name: Install Python packages
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: Update BabbleSim to manifest revision
- name: ccache stats initial
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- name: Run Tests with Twister
id: twister
@@ -113,62 +104,50 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
./scripts/twister -p native_sim --force-color --inline-logs -M -N -v --retry-failed 2 \
-T tests --subset ${{matrix.subset}}/2 -j 16
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
- name: Print ccache stats
if: always()
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
run: |
ccache -s -vv
ccache -s
ccache -p
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
path: |
twister-out/twister.xml
twister-out/twister.json
if-no-files-found: ignore
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
clang-build-results:
name: "Publish Unit Tests Results"
needs: clang-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create GitHub annotations
if: (success() || failure())
runs-on: ubuntu-22.04
if: (success() || failure() ) && needs.clang-build.outputs.report_needed != 0
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v3
with:
path: artifacts
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/twister.xml junit.xml
junit2html junit.xml junit-clang.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
@@ -176,7 +155,7 @@ jobs:
junit-clang.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
check_name: Unit Test Results

View File

@@ -1,14 +1,8 @@
name: Code Coverage with codecov
on:
push:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
schedule:
- cron: '25 */3 * * 1-5'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -16,31 +10,19 @@ concurrency:
jobs:
codecov:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository == 'zephyrproject-rtos/zephyr'
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["mps2/an385", "native_sim", "qemu_x86", "unit_testing"]
include:
- platform: 'mps2/an385'
normalized: 'mps2_an385'
- platform: 'native_sim'
normalized: 'native_sim'
- platform: 'qemu_x86'
normalized: 'qemu_x86'
- platform: 'unit_testing'
normalized: 'unit_testing'
platform: ["native_posix", "qemu_x86", "unit_testing"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -50,12 +32,6 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
@@ -63,44 +39,48 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west update 1> west.update.log || west update 1> west.update-2.log
- name: Environment Setup
- name: Check Environment
run: |
cmake --version
gcc --version
ls -la
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Set up ccache
- name: Prepare ccache keys
id: ccache_cache_prop
shell: cmake -P {0}
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
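# Illustration: for this repository the two REPLACE calls turn
# "zephyrproject-rtos/zephyr" into "zephyrproject_rtos_zephyr", yielding a cache key
# component with no '/' or '-' characters.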
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- name: Run Tests with Twister (Push)
continue-on-error: true
@@ -108,91 +88,55 @@ jobs:
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
./scripts/twister --save-tests ${{matrix.normalized}}-testplan.json
ls -la
./scripts/twister \
-i --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage \
-T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano \
--timeout-multiplier 2
./scripts/twister --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage -T tests
- name: Build Doxygen Coverage
if: matrix.platform == 'unit_testing'
- name: Generate Coverage Report
run: |
pip install -r doc/requirements.txt --require-hashes
sudo apt-get update
sudo apt-get install -y graphviz # dot is needed but currently missing from the Docker image
cmake -B doc/_build -S doc
cmake --build doc/_build --target doxygen-coverage
mv twister-out/coverage.info lcov.pre.info
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* \
--remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* \
--remove lcov.pre.info *generated* \
-o coverage/reports/${{ matrix.platform }}.info --rc lcov_branch_coverage=1
- name: Upload Doxygen Coverage Results
if: matrix.platform == 'unit_testing'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: doxygen-coverage-results
path: |
doc/_build/new.info
doc/_build/coverage-report
- name: Print ccache stats
if: always()
- name: ccache stats post
run: |
ccache -s -vv
- name: Rename coverage files
if: always()
run: |
mv twister-out/coverage.json coverage/reports/${{matrix.normalized}}.json
ccache -s
ccache -p
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.normalized }})
path: |
coverage/reports/${{ matrix.normalized }}.json
${{ matrix.normalized }}-testplan.json
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
codecov-results:
name: "Publish Coverage Results"
needs: codecov
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
# the codecov job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v3
with:
path: coverage/reports
- name: Move coverage files
run: |
ls -lRt ./coverage/reports
mv ./coverage/reports/*/*testplan.json .
mv ./coverage/reports/*/coverage/reports/*.json ./coverage/reports
mv ./coverage/reports/*/*.info ./coverage/reports
ls -la ./coverage/reports
- name: Generate list of coverage files
id: get-coverage-files
shell: cmake -P {0}
run: |
file(GLOB INPUT_FILES_LIST "coverage/reports/*.json")
file(GLOB INPUT_FILES_LIST "coverage/reports/*.info")
set(MERGELIST "")
set(FILELIST "")
foreach(ITEM ${INPUT_FILES_LIST})
@@ -206,7 +150,7 @@ jobs:
foreach(ITEM ${INPUT_FILES_LIST})
get_filename_component(f ${ITEM} NAME)
if(MERGELIST STREQUAL "")
set(MERGELIST "--add-tracefile ${f}")
set(MERGELIST "-a ${f}")
else()
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
@@ -216,60 +160,17 @@ jobs:
- name: Merge coverage files
run: |
pushd ./coverage/reports
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --json merged.json
gcovr ${{ steps.get-coverage-files.outputs.mergefiles }} --merge-mode-functions=separate --cobertura merged.xml
popd
sudo apt-get update
sudo apt-get install -y lcov
cd ./coverage/reports
lcov ${{ steps.get-coverage-files.outputs.mergefiles }} -o merged.info --rc lcov_branch_coverage=1
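# In both variants, the "mergefiles" output computed in the previous step expands to a
# flag list (illustrative names) such as "--add-tracefile native_sim.json -a qemu_x86.json"
# for gcovr or "-a native_posix.info -a qemu_x86.info" for lcov, combining the
# per-platform reports into a single merged file.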
- name: Get current date
id: run_date
run: |
echo "run_date=$(date --iso-8601=minutes)" >> "$GITHUB_OUTPUT"
echo "run_date_short=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
echo "run_date_year=$(date +'%Y')" >> "$GITHUB_OUTPUT"
echo "run_date_month=$(date +'%m')" >> "$GITHUB_OUTPUT"
- name: Generate Coverage Report
- name: Upload coverage to Codecov
if: always()
run: |
python3 ./scripts/ci/coverage/coverage_analysis.py \
-t native_sim-testplan.json \
-m MAINTAINERS.yml \
-c coverage/reports/merged.json \
-o coverage-report-${{ steps.run_date.outputs.run_date_short }} \
-f all
cp coverage-report-* coverage/reports/
- name: Upload Merged Coverage Results and Report
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: Coverage Data and report
path: |
coverage/reports/merged.json
coverage/reports/merged.xml
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.json
coverage/reports/coverage-report-${{ steps.run_date.outputs.run_date_short }}.xlsx
- name: Upload test coverage to Codecov
if: always()
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
uses: codecov/codecov-action@v3
with:
directory: ./coverage/reports
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/merged.xml
flags: unittests-coverage
- name: Upload Doxygen coverage to Codecov
if: always()
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7 # v5.5.1
with:
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
token: ${{ secrets.CODECOV_TOKEN }}
files: coverage/reports/doxygen-coverage-results/new.info
disable_search: true
flags: doxygen-coverage
files: merged.info

View File

@@ -1,58 +0,0 @@
name: "CodeQL"
on:
push:
branches:
- main
- v*-branch
- collab-*
schedule:
- cron: '34 16 * * 6'
pull_request:
branches:
- main
- v*-branch
- collab-*
permissions:
contents: read
jobs:
analyze:
name: Analyze (${{ matrix.language }})
runs-on: ubuntu-24.04
permissions:
security-events: write
strategy:
fail-fast: false
matrix:
include:
- language: python
build-mode: none
- language: actions
build-mode: none
config: ./.github/codeql/codeql-actions-config.yml
- language: javascript-typescript
build-mode: none
config: ./.github/codeql/codeql-js-config.yml
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Initialize CodeQL
uses: github/codeql-action/init@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
queries: security-extended
config-file: ${{ matrix.config }}
- if: matrix.build-mode == 'manual'
shell: bash
run: |
echo "nothing yet"
exit 0
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
category: "/language:${{matrix.language}}"

View File

@@ -2,37 +2,35 @@ name: Coding Guidelines
on: pull_request
permissions:
contents: read
jobs:
compliance_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
- name: cache-pip
uses: actions/cache@v3
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install Python packages
- name: Install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install unidiff
pip3 install wheel
pip3 install sh
- name: Install Packages
run: |
sudo apt-get update
sudo apt-get install coccinelle
- name: Run Coding Guidelines Checks
- name: Run Coding Guildeines Checks
continue-on-error: true
id: coding_guidelines
env:
@@ -42,10 +40,7 @@ jobs:
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
source zephyr-env.sh
# debug
ls -la

View File

@@ -1,19 +1,10 @@
name: Compliance Checks
on:
pull_request:
types:
- edited
- opened
- reopened
- synchronize
permissions:
contents: read
on: pull_request
jobs:
check_compliance:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -21,12 +12,25 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase onto the target branch
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic lxml junitparser gitlint pylint pykwalify yamllint
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
@@ -36,40 +40,11 @@ jobs:
# Ensure there are no merge commits in the PR
[[ "$(git rev-list --merges --count origin/${BASE_REF}..)" == "0" ]] || \
(echo "::error ::Merge commits not allowed, rebase instead";false)
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
# debug
git log --pretty=oneline | head -n 10
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: west setup
run: |
west init -l . || true
west config manifest.group-filter -- +ci,-optional
west update -o=--depth=1 -n 2>&1 1> west.update.log || west update -o=--depth=1 -n 2>&1 1> west.update2.log
- name: Setup Node.js
uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0
with:
node-version: "lts/*"
cache: npm
check-latest: true
cache-dependency-path: ./scripts/ci/package-lock.json
- name: Install Node dependencies
run: npm --prefix ./scripts/ci ci
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Run Compliance Tests
continue-on-error: true
@@ -81,58 +56,35 @@ jobs:
# debug
ls -la
git log --pretty=oneline | head -n 10
# Increase rename limit to allow for large PRs
git config diff.renameLimit 10000
excludes="-e KconfigBasic -e SysbuildKconfigBasic -e ClangFormat"
# The signed-off-by check for dependabot should be skipped
if [ "${{ github.actor }}" == "dependabot[bot]" ]; then
excludes="$excludes -e Identity"
fi
./scripts/ci/check_compliance.py --annotate $excludes -c origin/${BASE_REF}..
./scripts/ci/check_compliance.py --annotate -e KconfigBasic \
-c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: compliance.xml
path: compliance.xml
- name: Upload dts linter patch
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
continue-on-error: true
if: hashFiles('dts_linter.patch') != ''
with:
name: dts_linter.patch
path: dts_linter.patch
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
warns=("ClangFormat" "LicenseAndCopyrightCheck")
files=($(./scripts/ci/check_compliance.py -l))
for file in "${files[@]}"; do
f="${file}.txt"
if [[ -s $f ]]; then
results=$(cat $f)
results="${results//'%'/'%25'}"
results="${results//$'\n'/'%0A'}"
results="${results//$'\r'/'%0D'}"
if [[ "${warns[@]}" =~ "${file}" ]]; then
echo "::warning file=${f}::$results"
else
echo "::error file=${f}::$results"
exit=1
fi
errors=$(cat $f)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=${f}::$errors"
exit=1
fi
done
if [ "${exit}" == "1" ]; then
echo "Compliance error, check for error messages in the \"Run Compliance Tests\" step"
echo "You can run this step locally with the ./scripts/ci/check_compliance.py script."
exit 1;
fi
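# The same checks can be reproduced locally from a zephyr checkout with something like
# (sketch, the branch name is illustrative):
#   ./scripts/ci/check_compliance.py --annotate -c origin/main..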

View File

@@ -10,38 +10,28 @@ on:
branches:
- refs/tags/*
permissions:
contents: read
jobs:
get_version:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .

View File

@@ -20,32 +20,55 @@ on:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
permissions:
contents: read
jobs:
devicetree-checks:
name: Devicetree script tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest pyyaml tox
- name: run tox
working-directory: scripts/dts/python-devicetree
run: |

18
.github/workflows/do_not_merge.yml vendored Normal file
View File

@@ -0,0 +1,18 @@
name: Do Not Merge
on:
pull_request:
types: [synchronize, opened, reopened, labeled, unlabeled]
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') ||
contains(github.event.*.labels.*.name, 'TSC') }}
name: Prevent Merging
runs-on: ubuntu-22.04
steps:
- name: Check for label
run: |
echo "Pull request is labeled as 'DNM' or 'TSC'"
echo "This workflow fails so that the pull request cannot be merged"
exit 1
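# Note: this job only runs when a 'DNM' or 'TSC' label is present and always fails, so a
# labeled pull request shows a failing check and cannot be merged until the label is removed.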

View File

@@ -4,126 +4,73 @@
name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
push:
branches:
- main
tags:
- v*
pull_request:
permissions:
contents: read
paths:
- 'doc/**'
- '**.rst'
- 'include/**'
- 'kernel/include/kernel_arch_interface.h'
- 'lib/libc/**'
- 'subsys/testsuite/ztest/include/**'
- 'tests/**'
- '**/Kconfig*'
- 'west.yml'
- '.github/workflows/doc-build.yml'
- 'scripts/dts/**'
- 'doc/requirements.txt'
env:
DOXYGEN_VERSION: 1.15.0
DOXYGEN_SHA256SUM: 0ec2e5b2c3cd82b7106d19cb42d8466450730b8cb7a9e85af712be38bf4523a1
JOB_COUNT: 8
# NOTE: west docstrings will be extracted from the version listed here
WEST_VERSION: 1.0.0
# The latest CMake available directly with apt is 3.18, but we need >=3.20
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.9.6
jobs:
doc-file-check:
name: Check for doc changes
runs-on: ubuntu-24.04
outputs:
file_check: ${{ steps.check-doc-files.outputs.any_modified }}
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Check if Documentation related files changed
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
id: check-doc-files
with:
files: |
doc/
boards/**/doc/
**.rst
include/
kernel/include/kernel_arch_interface.h
lib/libc/**
subsys/testsuite/ztest/include/**
**/Kconfig*
west.yml
scripts/dts/
doc/requirements.txt
.github/workflows/doc-build.yml
scripts/pylib/pytest-twister-harness/src/twister_harness/device/device_adapter.py
scripts/pylib/pytest-twister-harness/src/twister_harness/helpers/shell.py
doc-build-html:
name: "Documentation Build (HTML)"
needs: [doc-file-check]
if: >
needs.doc-file-check.outputs.file_check == 'true' || github.event_name != 'pull_request'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
options: '--entrypoint /bin/bash'
timeout-minutes: 60
runs-on: zephyr-runner-linux-x64-4xlarge
timeout-minutes: 45
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
env:
BASE_REF: ${{ github.base_ref }}
steps:
- name: checkout
uses: actions/checkout@v3
- name: Print cloud service information
- name: install-pkgs
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
sudo apt-get update
sudo apt-get install -y ninja-build graphviz
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz
echo "${PWD}/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: cache-pip
uses: actions/cache@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: Environment Setup
- name: install-pip
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
sudo pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
west init -l . || true
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Install Python packages required for documentation build
- name: west setup
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip install -r doc/requirements.txt --require-hashes
west init -l .
- name: Build HTML documentation
- name: build-docs
shell: bash
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -138,52 +85,30 @@ jobs:
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-j ${JOB_COUNT} -W --keep-going -T" \
SPHINXOPTS_EXTRA="-q -t publish" \
make -C doc ${DOC_TARGET}
DOC_TAG=${DOC_TAG} SPHINXOPTS_EXTRA="-q -t publish" make -C doc ${DOC_TARGET}
# API documentation coverage
python3 -m coverxygen --xml-dir doc/_build/html/doxygen/xml/ --src-dir include/ --output doc-coverage.info
# deprecated page causing issues
lcov --remove doc-coverage.info \*/deprecated > new.info
genhtml --no-function-coverage --no-branch-coverage new.info -o coverage-report
- name: Compress documentation build artifacts
- name: compress-docs
run: |
tar --use-compress-program="xz -T0" -cf html-output.tar.xz --exclude html/_sources --exclude html/doxygen/xml --directory=doc/_build html
tar --use-compress-program="xz -T0" -cf api-output.tar.xz --directory=doc/_build html/doxygen/html
tar --use-compress-program="xz -T0" -cf api-coverage.tar.xz coverage-report
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: Upload HTML output
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- name: upload-build
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: Upload Doxygen coverage artifacts
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: api-coverage
path: api-coverage.tar.xz
- name: Summarize PR documentation URLs
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
API_DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/doxygen/html/"
API_COVERAGE_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/api-coverage/"
echo "${PR_NUM}" > pr_num
echo "Documentation will be available shortly at: ${DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Documentation will be available shortly at: ${API_DOC_URL}" >> $GITHUB_STEP_SUMMARY
echo "API Coverage Report will be available shortly at: ${API_COVERAGE_URL}" >> $GITHUB_STEP_SUMMARY
- name: Upload PR number
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
@@ -191,59 +116,48 @@ jobs:
doc-build-pdf:
name: "Documentation Build (PDF)"
needs: [doc-file-check]
if: |
github.event_name != 'pull_request'
runs-on: ubuntu-24.04
timeout-minutes: 120
if: github.event_name != 'pull_request'
runs-on: zephyr-runner-linux-x64-4xlarge
container: texlive/texlive:latest
timeout-minutes: 60
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: doc/requirements.txt
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install --no-install-recommends graphviz librsvg2-bin \
texlive-latex-base texlive-latex-extra latexmk \
texlive-fonts-recommended texlive-fonts-extra texlive-xetex \
imagemagick fonts-noto xindy
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
echo "${DOXYGEN_SHA256SUM} doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz" | sha256sum -c
if [ $? -ne 0 ]; then
echo "Failed to verify doxygen tarball"
exit 1
fi
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
- name: cache-pip
uses: actions/cache@v3
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
enable-ccache: false
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: install-pip-pkgs
working-directory: zephyr
- name: setup-venv
run: |
pip install -r doc/requirements.txt --require-hashes
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
working-directory: zephyr
continue-on-error: true
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
@@ -252,28 +166,14 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-q -j ${JOB_COUNT}" \
LATEXMKOPTS="-quiet -halt-on-error" \
make -C doc pdf
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: pdf-output
if-no-files-found: ignore
path: |
zephyr/doc/_build/latex/zephyr.pdf
zephyr/doc/_build/latex/zephyr.log
doc-build-status-check:
if: always()
name: "Documentation Build Status"
needs:
- doc-build-pdf
- doc-file-check
- doc-build-html
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}
doc/_build/latex/zephyr.pdf
doc/_build/latex/zephyr.log

View File

@@ -10,13 +10,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -24,64 +21,43 @@ jobs:
steps:
- name: Download artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Load PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
with:
script: |
let fs = require("fs");
let pr_number = Number(fs.readFileSync("./pr_num/pr_num"));
core.exportVariable("PR_NUM", pr_number);
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
if: steps.download-artifacts.outputs.found_artifact == 'true'
id: check-pr
uses: carpentries/actions/check-valid-pr@2e20fd5ee53b691e27455ce7ca3b16ea885140e8 # v0.15.0
uses: carpentries/actions/check-valid-pr@v0.14.0
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: |
steps.download-artifacts.outputs.found_artifact == 'true' &&
steps.check-pr.outputs.VALID != 'true'
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
if: steps.download-artifacts.outputs.found_artifact == 'true'
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
if: steps.download-artifacts.outputs.found_artifact == 'true'
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/api-coverage \
--delete
fi

View File

@@ -13,13 +13,10 @@ on:
types:
- completed
permissions:
contents: read
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
@@ -27,7 +24,7 @@ jobs:
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
@@ -35,12 +32,9 @@ jobs:
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
if [ -f api-coverage/api-coverage.tar.xz ]; then
tar xf api-coverage/api-coverage.tar.xz -C api-coverage
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
@@ -58,7 +52,4 @@ jobs:
aws s3 sync --quiet html-output/html s3://docs.zephyrproject.org/${VERSION} --delete
aws s3 sync --quiet html-output/html/doxygen/html s3://docs.zephyrproject.org/apidoc/${VERSION} --delete
if [ -d api-coverage/coverage-report ]; then
aws s3 sync --quiet api-coverage/coverage-report/ s3://docs.zephyrproject.org/api-coverage/${VERSION} --delete
fi
aws s3 cp --quiet pdf-output/zephyr.pdf s3://docs.zephyrproject.org/${VERSION}/zephyr.pdf

View File

@@ -5,32 +5,28 @@ on:
- '.github/workflows/errno.yml'
- 'lib/libc/minimal/include/errno.h'
- 'scripts/ci/errno.py'
- 'SDK_VERSION'
permissions:
contents: read
jobs:
check-errno:
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
runs-on: ubuntu-22.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
toolchains: 'arm-zephyr-eabi'
west-group-filter: -hal,-tools,-bootloader,-babblesim
west-project-filter: -nrf_hw_models
enable-ccache: false
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v3
- name: Run errno.py
working-directory: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
export ZEPHYR_BASE=${PWD}
./scripts/ci/errno.py

View File

@@ -1,8 +1,9 @@
name: Footprint Tracking
# Run every 12 hours and on tags
on:
schedule:
- cron: '50 00 * * *'
- cron: '50 1/12 * * *'
push:
paths:
- 'VERSION'
@@ -15,27 +16,21 @@ on:
# same commit
- 'v*'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
defaults:
run:
shell: bash
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
@@ -46,12 +41,6 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
@@ -60,28 +49,14 @@ jobs:
run: |
sudo apt-get update
sudo apt-get install -y python3-venv
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Environment Setup
run: |
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: west setup
run: |
west init -l . || true
@@ -89,7 +64,7 @@ jobs:
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
@@ -106,33 +81,5 @@ jobs:
run: |
python3 -m venv .venv
. .venv/bin/activate
pip3 install awscli
aws s3 sync --quiet footprint_data/ s3://testing.zephyrproject.org/footprint_data/
- name: Transform Footprint data to Twister JSON reports
run: |
shopt -s globstar
export ZEPHYR_BASE=${PWD}
python3 ./scripts/footprint/pack_as_twister.py -vvv \
--plan ./scripts/footprint/plan.txt \
--test-name='name.feature' \
./footprint_data/**/
- name: Upload to ElasticSearch
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
ELASTICSEARCH_INDEX: ${{ vars.FOOTPRINT_TRACKING_INDEX }}
run: |
shopt -s globstar
run_date=`date --iso-8601=minutes`
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--flatten footprint \
--flatten-list-names "{'children':'name'}" \
--transform "{ 'footprint_name': '^(?P<footprint_area>([^\/]+\/){0,2})(?P<footprint_path>([^\/]*\/)*)(?P<footprint_symbol>[^\/]*)$' }" \
--run-id "${{ github.run_id }}" \
--run-attempt "${{ github.run_attempt }}" \
--run-workflow "footprint-tracking:${{ github.event_name }}" \
--run-branch "${{ github.ref_name }}" \
-i ${ELASTICSEARCH_INDEX} \
./footprint_data/**/twister_footprint.json
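# The --transform regex above splits each footprint_name of the form
# "area1/area2/dir/.../symbol" into footprint_area (up to the first two path
# components), footprint_path (any remaining directories) and footprint_symbol
# (the final component) before the records are uploaded to ElasticSearch.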
#

71
.github/workflows/footprint.yml vendored Normal file
View File

@@ -0,0 +1,71 @@
name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-delta:
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
- name: Detect Changes in Footprint
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=${PWD}
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
git rebase origin/${BASE_REF}
git checkout -b this_pr
west update
west build -b frdm_k64f tests/benchmarks/footprints -t ram_report
cp build/ram.json ram2.json
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/rom.json rom2.json
git checkout origin/${BASE_REF}
west update
west build -p always -b frdm_k64f tests/benchmarks/footprints -t ram_report
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/ram.json ram1.json
cp build/rom.json rom1.json
git checkout this_pr
./scripts/footprint/fpdiff.py ram1.json ram2.json
./scripts/footprint/fpdiff.py rom1.json rom2.json
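# In short: RAM/ROM reports are built for the PR tree (ram2/rom2) and for the target
# branch (ram1/rom1) on frdm_k64f, then fpdiff.py prints the per-symbol deltas. A
# stand-alone comparison would look roughly like (sketch, file names illustrative):
#   west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
#   cp build/rom.json rom_base.json   # repeat on the PR branch to get rom_pr.json
#   ./scripts/footprint/fpdiff.py rom_base.json rom_pr.json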

View File

@@ -6,20 +6,14 @@ on:
pull_request_target:
types: [opened, closed]
permissions:
contents: read
jobs:
check_for_first_interaction:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on pull requests
issues: write # to comment on issues
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: zephyrproject-rtos/action-first-interaction@58853996b1ac504b8e0f6964301f369d2bb22e5c # v1.1.1+zephyr.6
- uses: actions/checkout@v3
- uses: zephyrproject-rtos/action-first-interaction@v1.1.1-zephyr-4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
@@ -35,18 +29,12 @@ jobs:
Hello @${{ github.event.pull_request.user.login }}, and thank you very much for your
first pull request to the Zephyr project!
Our Continuous Integration pipeline will execute a series of checks on your Pull Request
commit messages and code, and you are expected to address any failures by updating the PR.
Please take a look at [our commit message guidelines](https://docs.zephyrproject.org/latest/contribute/guidelines.html#commit-message-guidelines)
to find out how to format your commit messages, and at [our contribution workflow](https://docs.zephyrproject.org/latest/contribute/guidelines.html#contribution-workflow)
to understand how to update your Pull Request.
If you haven't already, please make sure to review the project's [Contributor
Expectations](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html)
and update (by amending and force-pushing the commits) your pull request if necessary.
If you are stuck or need help please join us on [Discord](https://chat.zephyrproject.org/)
and ask your question there. Additionally, you can [escalate the review](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html#pr-technical-escalation)
when applicable. 😊
A project maintainer just triggered our CI pipeline to run it against your PR and
ensure it's compliant and doesn't cause any issues. You might want to take this
opportunity to review the project's [Contributor
Expectations](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html)
and make any updates to your pull request if necessary. 😊
pr-merged-message: >
Hi @${{ github.event.pull_request.user.login }}!

View File

@@ -1,88 +0,0 @@
name: Hello World (Multiplatform)
on:
push:
branches:
- main
- v*-branch
- collab-*
pull_request:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/**'
- '.github/workflows/hello_world_multiplatform.yaml'
- 'SDK_VERSION'
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
build:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, ubuntu-24.04, ubuntu-24.04-arm, macos-14, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
fetch-depth: 0
- name: Rebase
if: github.event_name == 'pull_request'
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
working-directory: zephyr
shell: bash
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
toolchains: aarch64-zephyr-elf:arc-zephyr-elf:arc64-zephyr-elf:arm-zephyr-eabi:mips-zephyr-elf:riscv64-zephyr-elf:sparc-zephyr-elf:x86_64-zephyr-elf:xtensa-dc233c_zephyr-elf:xtensa-sample_controller32_zephyr-elf:rx-zephyr-elf
ccache-cache-key: hw-${{ matrix.os }}
- name: Build firmware
working-directory: zephyr
shell: bash
run: |
if [ "${{ runner.os }}" = "macOS" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --build-only"
elif [ "${{ runner.os }}" = "Windows" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --short-build-path -O/tmp/twister-out"
elif [ "${{ runner.os }}-${{ runner.arch }}" == "Linux-ARM64" ]; then
EXTRA_TWISTER_FLAGS="--exclude-platform native_sim/native"
fi
west twister --runtime-artifact-cleanup --force-color --inline-logs -T samples/hello_world -T samples/cpp/hello_world -v $EXTRA_TWISTER_FLAGS
- name: Upload artifacts
if: failure()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
if-no-files-found: ignore
path:
zephyr/twister-out/*/samples/hello_world/sample.basic.helloworld/build.log
zephyr/twister-out/*/samples/cpp/hello_world/sample.cpp.helloworld/build.log

View File

@@ -2,10 +2,7 @@ name: Issue Tracker
on:
schedule:
- cron: '10 1/4 * * *'
permissions:
contents: read
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
@@ -17,7 +14,7 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -30,7 +27,7 @@ jobs:
sudo apt-get update
sudo apt-get install discount
- uses: brcrista/summarize-issues@54c549b7d38b7db39e5c6e06fd9617e12e5c3491 # v4
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
@@ -38,14 +35,14 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@00943011d9042930efac3dcd3a170e4273319bc8 # v5.1.0
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}

View File

@@ -2,25 +2,20 @@ name: Scancode
on: [pull_request]
permissions:
contents: read
jobs:
scancode_job:
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
uses: actions/checkout@v3
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@23ef91ce31cd4b954366a7b71eea47520da9b380 # v4
uses: zephyrproject-rtos/action_scancode@v4
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts

View File

@@ -2,20 +2,16 @@ name: Manifest
on:
pull_request_target:
permissions:
contents: read
jobs:
contribs:
runs-on: ubuntu-24.04
permissions:
pull-requests: write # to create/update pull request comments
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
@@ -24,22 +20,19 @@ jobs:
BASE_REF: ${{ github.base_ref }}
working-directory: zephyrproject/zephyr
run: |
pip install -r scripts/requirements-west.txt --require-hashes
pip3 install west
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Manifest
uses: zephyrproject-rtos/action-manifest@09983f53d3d878791aa37a7755ae44d695f4c1e5 # v2.0.0
uses: zephyrproject-rtos/action-manifest@f223dce288b0d8f30bfd57eb2b14b18c230a7d8b
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
github-token: ${{ secrets.ZB_GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
use-tree-checkout: 'true'
check-impostor-commits: 'true'
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest'
dnm-labels: 'DNM (manifest)'
blobs-added-labels: 'Binary Blobs Added'
blobs-modified-labels: 'Binary Blobs Modified'
dnm-labels: 'DNM'

View File

@@ -1,19 +0,0 @@
name: Check SHA-pinned GitHub Actions
on:
pull_request:
paths:
- '.github/workflows/**'
permissions:
contents: read
jobs:
check-sha-pinned-actions:
name: Verify GitHub Actions
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@9e9574ef04ea69da568d6249bd69539ccc704e74 # v4.0.0

View File

@@ -1,47 +0,0 @@
name: PR Metadata Check
on:
pull_request:
types:
- synchronize
- opened
- reopened
- labeled
- unlabeled
- edited
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
do-not-merge:
name: Prevent Merging
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python dependencies
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run the check script
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
python -u scripts/ci/do_not_merge.py \
-p "${{ github.event.pull_request.number }}" \
-o "${{ github.event.repository.owner.login }}" \
-r "${{ github.event.repository.name }}"

View File

@@ -1,53 +0,0 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Misc. Pylib Scripts TestSuite
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/build_helpers/**'
- '.github/workflows/pylib_tests.yml'
permissions:
contents: read
jobs:
pylib-tests:
name: Misc. Pylib Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Run pytest for build_helpers
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run build_helpers tests"
PYTHONPATH=./scripts/tests pytest ./scripts/tests/build_helpers

View File

@@ -1,31 +0,0 @@
name: ready to merge
on:
workflow_call:
inputs:
needs_context:
type: string
required: true
permissions:
contents: read
jobs:
all_jobs_passed:
name: all jobs passed
runs-on: ubuntu-latest
steps:
- name: "Check status of all required jobs"
run: |-
NEEDS_CONTEXT='${{ inputs.needs_context }}'
JOB_IDS=$(echo "$NEEDS_CONTEXT" | jq -r 'keys[]')
for JOB_ID in $JOB_IDS; do
RESULT=$(echo "$NEEDS_CONTEXT" | jq -r ".[\"$JOB_ID\"].result")
echo "$JOB_ID job result: $RESULT"
if [[ $RESULT != "success" && $RESULT != "skipped" ]]; then
echo "***"
echo "Error: The $JOB_ID job did not pass."
exit 1
fi
done
echo "All jobs passed or were skipped."

View File

@@ -6,16 +6,11 @@ on:
- 'v*'
- '!v*rc*'
permissions:
contents: read
jobs:
release:
runs-on: ubuntu-24.04
permissions:
contents: write # to create GitHub release entry
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -26,12 +21,12 @@ jobs:
echo "TRIMMED_VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@676e2d560c9a403aa252096d99fcab3e1132b0f5 # v6.0.0
uses: fsfe/reuse-action@v1
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
@@ -43,7 +38,7 @@ jobs:
- name: Create Release
id: create_release
uses: actions/create-release@0cb9c9b65d5d1901c1f53e5e66eaf4afd303e70e # v1.1.4
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
@@ -55,7 +50,7 @@ jobs:
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@e8f9f06c4b078e705bd2ea027f0926603fc9b4d5 # v1.0.2
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:

View File

@@ -1,61 +0,0 @@
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecards supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
# To guarantee Maintained check is occasionally updated. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
schedule:
- cron: '43 7 * * 6'
push:
branches:
- main
permissions: read-all
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
permissions:
# Needed for Code scanning upload
security-events: write
# Needed for GitHub OIDC token if publish_results is true
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@4eaacf0543bb3f2c246792bd56e8cdeffafb205a # v2.4.3
with:
results_file: results.sarif
results_format: sarif
# Publish results to OpenSSF REST API for easy access by consumers.
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable
# uploads of run results in SARIF format to the repository Actions tab.
# https://docs.github.com/en/actions/advanced-guides/storing-workflow-data-as-artifacts
- name: "Upload artifact"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@0499de31b99561a6d14a36a5f662c2a54f91beee # v4.31.2
with:
sarif_file: results.sarif

View File

@@ -19,20 +19,17 @@ on:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
permissions:
contents: read
jobs:
scripts-tests:
name: Scripts tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -45,22 +42,26 @@ jobs:
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest
env:
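
The scripts-tests changes above replace a hand-rolled actions/cache step with setup-python's built-in pip cache plus a hash-pinned requirements file. A minimal self-contained sketch of that pattern (hypothetical job, not a file in this diff):

```yaml
jobs:
  pytest:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
          # Built-in pip caching, keyed on the dependency file below; replaces a
          # separate actions/cache step with a hand-written cache key.
          cache: pip
          cache-dependency-path: scripts/requirements-actions.txt
      - name: Install Python packages
        run: pip install -r scripts/requirements-actions.txt --require-hashes
      - name: Run pytest
        run: pytest scripts/tests
```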


@@ -2,13 +2,11 @@ name: Stale Workflow Queue Cleanup
on:
workflow_dispatch:
branches: [main]
schedule:
# everyday at 15:00
- cron: '0 15 * * *'
permissions:
contents: read
concurrency:
group: stale-workflow-queue-cleanup
cancel-in-progress: true
@@ -16,14 +14,11 @@ concurrency:
jobs:
cleanup:
name: Cleanup
runs-on: ubuntu-24.04
if: github.ref == 'refs/heads/main'
permissions:
actions: write # to delete stale workflow runs
runs-on: ubuntu-22.04
steps:
- name: Delete stale queued workflow runs
uses: MajorScruffy/delete-old-workflow-runs@78b5af714fefaefdf74862181c467b061782719e # v0.3.0
uses: MajorScruffy/delete-old-workflow-runs@v0.3.0
with:
repository: ${{ github.repository }}
# Remove any workflow runs in "queued" state for more than 1 day


@@ -3,20 +3,13 @@ on:
schedule:
- cron: "16 00 * * *"
permissions:
contents: read
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-24.04
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
permissions:
pull-requests: write # to comment on stale pull requests
issues: write # to comment on stale issues
steps:
- uses: actions/stale@5f858e3efba33a5ca4407a664cc011ad407f2008 # v10.1.0
- uses: actions/stale@v8
with:
stale-pr-message: 'This pull request has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
@@ -31,5 +24,5 @@ jobs:
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process,Coverity'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process'
operations-per-run: 400


@@ -1,38 +0,0 @@
name: Merged PR stats
on:
pull_request_target:
branches:
- main
- v*-branch
types: [closed]
permissions:
contents: read
jobs:
record_merged:
if: github.event.pull_request.merged == true && github.repository == 'zephyrproject-rtos/zephyr'
runs-on: ubuntu-24.04
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: PR event
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
PR_STAT_ES_INDEX: ${{ vars.PR_STAT_ES_INDEX }}
run: |
python3 ./scripts/ci/stats/merged_prs.py --pull-request ${{ github.event.pull_request.number }} --repo ${{ github.repository }}


@@ -1,64 +0,0 @@
name: Publish Twister Test Results
on:
workflow_run:
workflows: ["Run tests with twister"]
branches:
- main
types:
- completed
permissions:
contents: read
jobs:
upload-to-elasticsearch:
if: |
github.repository == 'zephyrproject-rtos/zephyr' &&
github.event.workflow_run.event != 'pull_request'
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
runs-on: ubuntu-24.04
steps:
# Needed for elasticsearch and upload script
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
id: download-artifacts
uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
with:
path: artifacts
workflow: twister.yml
run_id: ${{ github.event.workflow_run.id }}
if_no_artifact_found: ignore
- name: Upload to elasticsearch
if: steps.download-artifacts.outputs.found_artifact == 'true'
run: |
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event.workflow_run.event}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event.workflow_run.event}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--run-attempt ${{github.run_attempt}} \
--run-branch ${{github.ref_name}} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi


@@ -5,18 +5,13 @@ on:
branches:
- main
- v*-branch
- collab-*
pull_request:
pull_request_target:
branches:
- main
- v*-branch
- collab-*
schedule:
# Run at 02:00 UTC on every Sunday
- cron: '0 2 * * 0'
permissions:
contents: read
# Run at 03:00 UTC on every Sunday
- cron: '0 3 * * 0'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
@@ -25,66 +20,66 @@ concurrency:
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: ubuntu-24.04
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
fullrun: ${{ steps.output-services.outputs.fullrun }}
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 20
WEEKLY_MATRIX_SIZE: 200
PUSH_MATRIX_SIZE: 15
DAILY_MATRIX_SIZE: 80
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TESTS_PER_BUILDER: 900
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request'
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
path: zephyr
persist-credentials: false
- name: Set up Python
if: github.event_name == 'pull_request'
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: install-packages
working-directory: zephyr
if: github.event_name == 'pull_request'
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Setup Zephyr project
if: github.event_name == 'pull_request'
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
enable-ccache: false
- name: Environment Setup
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Generate Test Plan with Twister
working-directory: zephyr
if: github.event_name == 'pull_request'
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
export ZEPHYR_BASE=${PWD}
@@ -100,62 +95,50 @@ jobs:
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" -a "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
subset="[$(seq -s',' 1 ${WEEKLY_MATRIX_SIZE})]"
size=${WEEKLY_MATRIX_SIZE}
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
twister-build:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.28.7.20251127
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: ${{fromJSON(needs.twister-build-prep.outputs.subset)}}
timeout-minutes: 1440
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '-specs=* --specs=*'
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --test-config tests/test_config_ci.yaml --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
WEEKLY_OPTIONS: ' -M --build-only --all --show-footprint --report-filtered -j 32'
PR_OPTIONS: ' --clobber-output --integration -j 16'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint --report-filtered -j 16'
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 '
DAILY_OPTIONS: ' -M --build-only --all --show-footprint'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-20
steps:
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
@@ -167,11 +150,11 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /repo-cache/zephyrproject/zephyr .
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -179,62 +162,58 @@ jobs:
- name: Environment Setup
run: |
if [ "${{github.event_name}}" = "pull_request" ]; then
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
rm -fr ".git/rebase-merge"
git rebase origin/${BASE_REF}
git clean -f -d
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "$HOME/.cargo/bin" >> $GITHUB_PATH
west init -l . || true
west config manifest.group-filter -- +ci,+optional,+testing
west config --global update.narrow true
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Check Environment
run: |
cmake --version
gcc --version
cargo --version
rustup target list --installed
ls -la
echo "github.ref: ${{ github.ref }}"
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Install Python packages
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
west packages pip --install
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: Set up ccache
run: |
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
continue-on-error: true
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: Update BabbleSim to manifest revision
- name: ccache stats initial
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git -c advice.detachedHead=false checkout ${BSIM_VERSION}
west update
make everything -s -j 8
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
id: run_twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
@@ -246,9 +225,8 @@ jobs:
fi
fi
- if: github.event_name == 'pull_request'
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
id: run_twister_pr
run: |
rm -f testplan.json
export ZEPHYR_BASE=${PWD}
@@ -263,27 +241,26 @@ jobs:
fi
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Weekly)
id: run_twister_sched
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${WEEKLY_OPTIONS}
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${DAILY_OPTIONS}
fi
fi
- name: Print ccache stats
if: always()
- name: ccache stats post
run: |
ccache -s -vv
ccache -p
ccache -s
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -293,110 +270,62 @@ jobs:
module_tests/twister.xml
testplan.json
- if: matrix.subset == 1 && github.event_name == 'push'
name: Save the list of Python packages
shell: bash
run: |
FREEZE_FILE="frozen-requirements.txt"
timestamp="$(date)"
version="$(git describe --abbrev=12 --always)"
echo -e "# Generated at $timestamp ($version)\n" > $FREEZE_FILE
pip freeze | tee -a $FREEZE_FILE
- if: matrix.subset == 1 && github.event_name == 'push'
name: Upload the list of Python packages
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: Frozen PIP package set
path: |
frozen-requirements.txt
twister-test-results:
name: "Publish Unit Tests Results"
needs:
- twister-build
runs-on: ubuntu-24.04
permissions:
checks: write # to create the check run entry with Twister test results
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
needs: twister-build
runs-on: ubuntu-22.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: Check out source code
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Needed for opensearch and upload script
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: 3.12
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
- name: Download Artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
uses: actions/download-artifact@v3
with:
path: artifacts
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Upload to opensearch
run: |
pip3 install elasticsearch
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event_name}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event_name}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi
- name: Merge Test Results
run: |
junitparser merge --glob 'artifacts/*/twister.xml' 'artifacts/*/*/twister.xml' junit.xml
pip3 install junitparser junit2html
junitparser merge artifacts/*/*/twister.xml junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
junit.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() && (github.event_name == 'push') }}
uses: codecov/test-results-action@47f89e9acb64b76debcd5ea40642d25a4adced9f # v1.1.1
with:
token: ${{ secrets.CODECOV_TOKEN }}
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@34d7c956a59aed1bfebf31df77b8de55db9bbaaf # v2.21.0
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Unit Test Results
files: "**/twister.xml"
comment_mode: off
- name: Analyze Twister Reports
if: needs.twister-build.result == 'failure'
run: |
./scripts/ci/twister_report_analyzer.py artifacts/*/*/twister.json --long-summary --platforms --output-md errors.md
if [[ -s "errors.md" ]]; then
echo '### Error Summary! 🚀' >> $GITHUB_STEP_SUMMARY
cat errors.md >> $GITHUB_STEP_SUMMARY
fi
- name: Upload Twister Analysis Results
if: needs.twister-build.result == 'failure'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: Twister Analysis Results
if-no-files-found: ignore
path: |
twister_report_summary.json
twister-status-check:
if: always()
name: "Check Twister Status"
needs:
- twister-build-prep
- twister-build
uses: ./.github/workflows/ready-to-merge.yml
with:
needs_context: ${{ toJson(needs) }}
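
In the twister workflow above, twister-build-prep computes a JSON subset list and a size as job outputs, and twister-build fans out over that list with fromJSON. A minimal sketch of the output-to-matrix pattern, with hypothetical job and step names:

```yaml
jobs:
  prep:
    runs-on: ubuntu-24.04
    outputs:
      subset: ${{ steps.plan.outputs.subset }}
      size: ${{ steps.plan.outputs.size }}
    steps:
      - id: plan
        # Emit a JSON array such as [1,2,3,4,5] plus its length as job outputs.
        run: |
          size=5
          echo "subset=[$(seq -s',' 1 ${size})]" >> "$GITHUB_OUTPUT"
          echo "size=${size}" >> "$GITHUB_OUTPUT"
  build:
    needs: prep
    if: needs.prep.outputs.size != 0
    runs-on: ubuntu-24.04
    strategy:
      fail-fast: false
      matrix:
        # Each element of the JSON array becomes one matrix job.
        subset: ${{ fromJSON(needs.prep.outputs.subset) }}
    steps:
      - run: echo "Running subset ${{ matrix.subset }} of ${{ strategy.job-total }}"
```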


@@ -8,26 +8,20 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/**'
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
- 'scripts/schemas/twister/'
permissions:
contents: read
jobs:
twister-tests:
@@ -35,23 +29,26 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest for twisterlib
env:
ZEPHYR_BASE: ./


@@ -1,68 +0,0 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister BlackBox TestSuite
on:
pull_request:
branches:
- main
- collab-*
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/twister_tests_blackbox.yml'
permissions:
contents: read
env:
PYTHONIOENCODING: utf-8
jobs:
twister-tests:
name: Twister Black Box Tests
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-24.04, macos-14, windows-2022]
fail-fast: false
runs-on: ${{ matrix.os }}
steps:
- name: Checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: zephyr
fetch-depth: 0
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Setup Zephyr project
uses: zephyrproject-rtos/action-zephyr-setup@c125c5ebeeadbd727fa740b407f862734af1e52a # v1.0.9
with:
app-path: zephyr
toolchains: all
enable-ccache: false
west-group-filter: -tools,-bootloader,-babblesim,-hal
west-project-filter: -nrf_hw_models,+cmsis,+hal_xtensa,+cmsis_6
- name: Run Pytest For Twister Black Box Tests
if: ${{ runner.os == 'Linux' }}
working-directory: zephyr
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
export ZEPHYR_SDK_INSTALL_DIR=${{ github.workspace }}/zephyr-sdk
sudo apt-get install -y lcov
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox/


@@ -8,7 +8,6 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -17,43 +16,64 @@ on:
branches:
- main
- v*-branch
- collab-*
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
permissions:
contents: read
jobs:
west-commands:
west-commnads:
name: West Command Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12', '3.13']
os: [ubuntu-22.04, macos-14, windows-2022]
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@e797f83bcb11b83ae66e0230d6156d7c80228e7c # v6.0.0
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: pip
cache-dependency-path: scripts/requirements-actions.txt
- name: Install Python packages
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip install -r scripts/requirements-actions.txt --require-hashes
pip3 install wheel
pip3 install pytest west pyelftools canopen natsort progress mypy intelhex psutil ply pyserial
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |

.gitignore

@@ -1,6 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright The Zephyr Project Contributors
*.o
*.a
*.d
@@ -10,10 +7,8 @@
*.swp
*.swo
*~
# Emacs
.\#*
\#*\#
build*/
!doc/build/
!scripts/build
@@ -32,8 +27,6 @@ outdir
outdir-*
scripts/basic/fixdep
scripts/gen_idt/gen_idt
coverage-report
doc-coverage.info
doc/_build
doc/doxygen
doc/xml
@@ -46,7 +39,6 @@ sanity-out*
twister-out*
bsim_out
bsim_bt_out
myresults.xml
tests/RunResults.xml
scripts/grub
doc/reference/kconfig/*.rst
@@ -58,25 +50,11 @@ doc/doc.warnings
hide-defaults-note
venv
.venv
.envrc
.DS_Store
.clangd
new.info
# Cargo drops lock files in projects to capture resolved dependencies.
# We don't want to record these.
Cargo.lock
# Cargo encourages a .cargo/config.toml file to symlink to a generated file. Don't save these.
.cargo/
# Normal west builds will place the Rust target directory under the build directory. However,
# sometimes IDEs and such will litter these target directories as well.
target/
# CI output
compliance.xml
dts_linter.patch
_error.types
# Tag files
@@ -89,37 +67,17 @@ tags
.idea
# from check_compliance.py
# zephyr-keep-sorted-start
BinaryFiles.txt
BoardYml.txt
Checkpatch.txt
ClangFormat.txt
DevicetreeBindings.txt
DevicetreeLinting.txt
GitDiffCheck.txt
Gitlint.txt
Identity.txt
ImageSize.txt
Kconfig.txt
KconfigBasic.txt
KconfigBasicNoModules.txt
KconfigHWMv2.txt
KeepSorted.txt
LicenseAndCopyrightCheck.txt
MaintainersFormat.txt
ModulesMaintainers.txt
Nits.txt
Pylint.txt
PythonCompat.txt
Ruff.txt
SphinxLint.txt
SysbuildKconfig.txt
SysbuildKconfigBasic.txt
SysbuildKconfigBasicNoModules.txt
TextEncoding.txt
YAMLLint.txt
ZephyrModuleFile.txt
# zephyr-keep-sorted-stop
# Node dependencies
node_modules


@@ -1,7 +1,6 @@
# All these sections are optional, edit this file as you like.
# Zephyr-specific defaults are located in scripts/gitlint/zephyr_commit_rules.py
[general]
regex-style-search=true
ignore=title-trailing-punctuation, T3, title-max-length, T1, body-hard-tab, B3, B1
# verbosity should be a value between 1 and 3, the commandline -v flags take precedence over this
verbosity = 3
@@ -27,7 +26,7 @@ extra-path=scripts/gitlint
# max-line-count=200
[title-starts-with-subsystem]
regex = ^(?!subsys:)(?!treewide:)(([^:]+):)(\s([^:]+):)*\s(.+)$
regex = ^(?!subsys:)(([^:]+):)(\s([^:]+):)*\s(.+)$
[title-must-not-contain-word]
# Comma-separated list of words that should not occur in the title. Matching is case
@@ -60,7 +59,3 @@ ignore-merge-commits=false
# By specifying this rule, developers can only change the file when they explicitly reference
# it in the commit message.
#files=gitlint/rules.py,README.md
[ignore-by-author-name]
regex=^dependabot\[bot\]$
ignore=all


@@ -1,14 +1,14 @@
Alexandr Kolosov <rikorsev@gmail.com>
Alexandre d'Alton <alexandre.dalton@intel.com>
Amir Kaplan <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Anthony Smigielski <thebasti0ncode@gmail.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>
Aska Wu <aska.wu@linaro.org>
Bit Pathe <bitpathe@gmail.com>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Bjarki Arge Andreasen <baa@trackunit.com>
Carles Cufi <carles.cufi@nordicsemi.no>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
chao an <anchao@xiaomi.com>
Charles E. Youse <charles.youse@intel.com>
Chen Xingyu <hi@xingrz.me>
@@ -16,32 +16,32 @@ Christoph Schnetzler <christoph.schnetzler@husqvarnagroup.com>
Christoph Schramm <schramm@makaio.com>
Christopher Friedt <cfriedt@meta.com>
Christopher Friedt <cfriedt@meta.com> <cfriedt@fb.com>
Chuck Jordan <cjordan@synopsys.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Chunlin Han <chunlin.han@linaro.org> <chunlin.han@acer.com>
David B. Kinder <david.b.kinder@intel.com>
David Komel <a8961713@gmail.com>
David Leach <david.leach@nxp.com>
Dirk Brandewie <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Enjia Mai <enjia.mai@intel.com>
Enjia Mai <enjia.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com>
Enjia Mai <enjia.mai@intel.com> <enjiax.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com> <Eugeniy.Paltsev@synopsys.com>
Felipe Neves <ryukokki.felipe@gmail.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Findlay Feng <i@fengch.me>
Flavio Arieta Netto <flavio@exati.com.br>
Francois Ramu <francois.ramu@st.com>
Gerardo Aceves <gerardo.aceves@intel.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Gregory Shue <gregory.shue@legrand.com>
Gregory Shue <gregory.shue@legrand.com> <gregory.shue@legrand.us>
HaiLong Yang <cameledyang@pm.me>
James Johnson <james.johnson672@t-mobile.com>
Jarno Lämsä <jarno.lamsa@nordicsemi.no>
Jędrzej Ciupis <jedrzej.ciupis@nordicsemi.no>
Jeremie Garcia <jeremie.garcia@intel.com>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Jim Benjamin Luther <jilu@oticon.com>
Johan Kruger <johan.kruger@windriver.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Johann Fischer <j.fischer@phytec.de>
Jørgen Kvalvaag <jorgen.kvalvaag@nordicsemi.no>
Juan Manuel Cruz Alcaraz <juan.m.cruz.alcaraz@intel.com>
@@ -51,17 +51,16 @@ Jun Li <jun.r.li@intel.com>
Justin Watson <jwatson5@gmail.com>
Kamil Sroka <kamil.sroka@nordicsemi.no>
Katarzyna Giądła <katarzyna.giadla@nordicsemi.no>
Keren Siman-Tov <keren.siman-tov@intel.com>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Lixin Guo <lixinx.guo@intel.com>
Łukasz Mazur <lukasz.mazur@hidglobal.com>
Manuel Argüelles <manuel.arguelles@nxp.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <manuel.arguelles@coredumplabs.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <marguelles.dev@gmail.com>
Marc Herbert <marc.herbert@intel.com> <46978960+marc-hb@users.noreply.github.com>
Marin Jurjević <marin.jurjevic@hotmail.com>
Mariusz Ryndzionek <mariusz.ryndzionek@firmwave.com>
@@ -72,14 +71,14 @@ Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.bolivar@linaro.org>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.f.bolivar@gmail.com>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@foundries.io>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@opensourcefoundries.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Mateusz Hołenko <mholenko@antmicro.com>
Michael Rosen <michael.r.rosen@intel.com>
Michal Narajowski <michal.narajowski@codecoup.pl>
Mike Hirst <michael.hirst@windriver.com>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Ming Shao <ming.shao@intel.com>
Mohan Kumar Kumar <mohankm@fb.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Navin Sankar Velliangiri <navin@linumiz.com>
NingX Zhao <ningx.zhao@intel.com>
Nishikant Nayak <nishikantax.nayak@intel.com>
@@ -100,23 +99,21 @@ Radosław Koppel <radoslaw.koppel@nordicsemi.no> <r.koppel@k-el.com>
Raja D. Singh <rdsingh@iotwizards.com>
Ricardo Salveti <ricardo@opensourcefoundries.com>
Ricardo Salveti <ricardo@opensourcefoundries.com> <ricardo.salveti@linaro.org>
Ruud Derwig <Ruud.Derwig@synopsys.com>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Saku Rautio <saku.rautio@nordicsemi.no>
Scott Worley <scott.worley@microchip.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Sharron Liu <sharron.liu@intel.com>
Shilpashree L C <shilpashree.lc@intel.com>
Shuang He <shuang.he@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Sigvart Hovland <sigvart.hovland@nordicsemi.no>
Stéphane D'Alu <sdalu@sdalu.com>
Stine Åkredalen <stine.akredalen@nordicsemi.no>
Thomas Heeley <thomas.heeley@intel.com>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Tim Sørensen <tims@demant.com>
Tim Sørensen <tims@demant.com> <tims@oticon.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa@gmail.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com>

File diff suppressed because it is too large


@@ -1,30 +0,0 @@
# Copyright (c) 2024 Basalte bv
# SPDX-License-Identifier: Apache-2.0
extend = ".ruff-excludes.toml"
line-length = 100
target-version = "py310"
[lint]
select = [
# zephyr-keep-sorted-start
"B", # flake8-bugbear
"E", # pycodestyle
"F", # pyflakes
"I", # isort
"SIM", # flake8-simplify
"UP", # pyupgrade
"W", # pycodestyle warnings
# zephyr-keep-sorted-stop
]
ignore = [
# zephyr-keep-sorted-start
"SIM108", # Allow if-else blocks instead of forcing ternary operator
# zephyr-keep-sorted-stop
]
[format]
quote-style = "preserve"
line-ending = "lf"

File diff suppressed because it is too large


@@ -1,4 +1,956 @@
# Instead of the CODEOWNERS file, The Zephyr Project uses a custom format.
# See MAINTAINERS.yml for details.
#
# DO NOT EDIT THIS FILE.
# CODEOWNERS for autoreview assigning in github
# https://help.github.com/en/articles/about-code-owners#codeowners-syntax
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
# * @galak @nashif
/.github/ @nashif @stephanosio
/.github/workflows/ @galak @nashif
/MAINTAINERS.yml @MaureenHelm
/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/cortex_m/cmse/ @ioannisg
/arch/arm/include/cortex_m/cmse.h @ioannisg
/arch/arm/core/cortex_a_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/arm64/ @carlocaione
/arch/arm64/core/cortex_r/ @povergoing
/arch/arm64/core/xen/ @lorc @firscity
/arch/common/ @ioannisg @andyross
/arch/mips/ @frantony
/soc/arc/snps_*/ @abrodkin @ruuddw @evgeniy-paltsev
/soc/nios2/ @nashif
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/aspeed/ @aspeeddylan
/soc/arm/atmel_sam/common/*_sam4l_*.c @nandojve
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4l/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/same70/ @nandojve
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/cypress/ @ifyall @npal-cy
/soc/arm/bcm*/ @sbranden
/soc/arm/gigadevice/ @nandojve
/soc/arm/infineon_cat1/ @ifyall @npal-cy
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/nxp*/ @mmahadevan108 @dleach02
/soc/arm/nxp_s32/ @manuargue
/soc/arm/nordic_nrf/ @anangl
/soc/arm/nuvoton_npcx/ @MulinChao @ChiHuaL
/soc/arm/nuvoton_numicro/ @ssekar15
/soc/arm/quicklogic_eos_s3/ @fkokosinski @kgugala
/soc/arm/rpi_pico/ @yonsch
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/*/power.c @FRASTM
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/st_stm32/stm32h7/*stm32h735* @benediktibk
/soc/arm/st_stm32/stm32l4/*stm32l451* @benediktibk
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynq7000/ @ibirnbaum
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/arm/renesas_rcar/ @aaillet
/soc/arm64/ @carlocaione
/soc/arm64/qemu_cortex_a53/ @carlocaione
/soc/arm64/bcm_vk/ @abhishek-brcm
/soc/arm64/nxp_layerscape/ @JiafeiPan
/soc/arm64/xenvm/ @lorc @firscity
/soc/arm64/nxp_imx/ @MrVan @JiafeiPan
/soc/arm64/arm/ @povergoing
/soc/arm64/arm/fvp_aemv8a/ @carlocaione
/soc/arm64/intel_socfpga/* @siclim
/soc/arm64/renesas_rcar/ @lorc @xakep-amatop
/soc/Kconfig @tejlmand @galak @nashif @nordicjm
/submanifests/* @mbolivar-ampere
/arch/x86/ @jhedberg @nashif
/arch/nios2/ @nashif
/arch/posix/ @aescolar @daor-oti
/arch/riscv/ @kgugala @pgielda
/soc/mips/ @frantony
/soc/posix/ @aescolar @daor-oti
/soc/riscv/ @kgugala @pgielda
/soc/riscv/openisa*/ @dleach02
/soc/riscv/riscv-privileged/andes_v5/ @cwshu @kevinwang821020 @jimmyzhe
/soc/riscv/riscv-privileged/neorv32/ @henrikbrixandersen
/soc/riscv/riscv-privileged/gd32vf103/ @soburi
/soc/riscv/riscv-privileged/niosv/ @sweeaun
/soc/x86/ @dcpleung @nashif
/arch/xtensa/ @dcpleung @andyross @nashif
/soc/xtensa/ @dcpleung @andyross @nashif
/arch/sparc/ @julius-barendt
/soc/sparc/ @julius-barendt
/boards/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
/boards/arm/96b_stm32_sensor_mez/ @Mani-Sadhasivam
/boards/arm/96b_wistrio/ @Mani-Sadhasivam
/boards/arm/arduino_due/ @ioannisg
/boards/arm/acn52832/ @sven-hm
/boards/arm/arduino_mkrzero/ @soburi
/boards/arm/bbc_microbit_v2/ @LingaoM
/boards/arm/bl5340_dvk/ @lairdjm
/boards/arm/bl65*/ @lairdjm
/boards/arm/blackpill_f401ce/ @coderkalyan
/boards/arm/blackpill_f411ce/ @coderkalyan
/boards/arm/bt*10/ @greg-leach
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/cy8ckit_062_ble/ @ifyall @npal-cy
/boards/arm/cy8ckit_062s4/ @DaWei8823
/boards/arm/cy8ckit_062_wifi_bt/ @ifyall @npal-cy
/boards/arm/cy8cproto_062_4343w/ @ifyall @npal-cy
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @mmahadevan108 @dleach02
/boards/arm/gd32*/ @nandojve
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @mmahadevan108 @dleach02
/boards/arm/ip_k66f/ @parthitce @lmajewski
/boards/arm/legend/ @mbittan @simonguinot
/boards/arm/lpcxpresso*/ @mmahadevan108 @dleach02
/boards/arm/mg100/ @rerickson1
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimx8mm_phyboard_polis @pefech
/boards/arm/mimxrt*/ @mmahadevan108 @dleach02
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao @ChiHuaL
/boards/arm/nrf*/ @carlescufi @lemrey
/boards/arm/nucleo*/ @erwango @ABOSTM @FRASTM
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/pinnacle_100_dvk/ @rerickson1
/boards/arm/qemu_cortex_a9/ @ibirnbaum
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg @stephanosio
/boards/arm/quick_feather/ @fkokosinski @kgugala
/boards/arm/rak4631_nrf52840/ @gpaquet85
/boards/arm/rak5010_nrf52840/ @gpaquet85
/boards/arm/rpi_pico/ @yonsch
/boards/arm/ronoth_lodev/ @NorthernDean
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4l_ek/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_e70_xplained/ @nandojve
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/scobc_module1/ @yashi
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/s32*/ @manuargue
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango @ABOSTM @FRASTM
/boards/arm/stm32h735g_disco/ @benediktibk
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango @ABOSTM @FRASTM
/boards/arm/rcar_*/ @aaillet
/boards/arm/ubx_bmd345eval_nrf52840/ @Navin-Sankar @brec-u-blox
/boards/arm/nrf5340_audio_dk_nrf5340 @koffes @alexsven @erikrobstad @rick1082 @gWacey
/boards/common/ @mbolivar-ampere
/boards/deprecated.cmake @tejlmand
/boards/mips/ @frantony
/boards/nios2/ @nashif
/boards/nios2/altera_max10/ @nashif
/boards/arm/stm32_min_dev/ @sidcha
/boards/posix/ @aescolar @daor-oti
/boards/posix/nrf52_bsim/ @aescolar @wopu-ot
/boards/riscv/ @kgugala @pgielda
/boards/riscv/rv32m1_vega/ @dleach02
/boards/riscv/adp_xc7k_ae350/ @cwshu @kevinwang821020 @jimmyzhe
/boards/riscv/longan_nano/ @soburi
/boards/riscv/neorv32/ @henrikbrixandersen
/boards/riscv/niosv*/ @sweeaun
/boards/riscv/sparkfun_red_v_things_plus/ @soburi
/boards/riscv/stamp_c3/ @soburi
/boards/shields/ @erwango
/boards/shields/atmel_rf2xx/ @nandojve
/boards/shields/esp_8266/ @nandojve
/boards/shields/inventek_eswifi/ @nandojve
/boards/x86/ @dcpleung @nashif
/boards/x86/acrn/ @enjiamai
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
/boards/xtensa/nxp_adsp_imx8/ @iuliana-prodan @dbaluta
/boards/sparc/ @julius-barendt
/boards/arm64/qemu_cortex_a53/ @carlocaione
/boards/arm64/bcm958402m2_a72/ @abhishek-brcm
/boards/arm64/mimx8mm_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mp_evk/ @MrVan @JiafeiPan
/boards/arm64/mimx8mn_evk/ @JiafeiPan
/boards/arm64/mimx93_evk/ @JiafeiPan
/boards/arm64/nxp_ls1046ardb/ @JiafeiPan
/boards/arm64/xenvm/ @lorc @firscity
/boards/arm64/fvp_baser_aemv8r/ @povergoing
/boards/arm64/fvp_base_revc_2xaemv8a/ @carlocaione
/boards/arm64/intel_socfpga_agilex_socdk/ @siclim @ngboonkhai
/boards/arm64/intel_socfpga_agilex5_socdk/ @chongteikheng
/boards/arm64/rcar_*/ @lorc @xakep-amatop
/boards/Kconfig @tejlmand @galak @nashif @nordicjm
# All cmake related files
/cmake/ @tejlmand @nashif
/cmake/*/arcmwdt/ @abrodkin @evgeniy-paltsev @tejlmand
/CMakeLists.txt @tejlmand @nashif
/doc/ @carlescufi
/doc/develop/tools/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/services/device_mgmt/smp_protocol.rst @de-nordic @nordicjm
/doc/services/device_mgmt/smp_groups/ @de-nordic @nordicjm
/doc/services/sensing/ @lixuzha @ghu0510 @qianruh
/doc/CMakeLists.txt @carlescufi
/doc/_scripts/ @carlescufi
/doc/connectivity/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/doc/build/dts/ @galak @mbolivar-ampere
/doc/build/sysbuild/ @tejlmand @nordicjm
/doc/hardware/peripherals/canbus/ @alexanderwachter @henrikbrixandersen
/doc/security/ @ceolin @d3zd3z
/drivers/debug/ @nashif
/drivers/*/*sam4l* @nandojve
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*gd32* @nandojve
/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/*/*mcux* @mmahadevan108 @dleach02
/drivers/*/*stm32* @erwango @ABOSTM @FRASTM
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao @ChiHuaL
/drivers/*/*andes* @cwshu @kevinwang821020 @jimmyzhe
/drivers/*/*ifx_cat1* @ifyall @npal-cy
/drivers/*/*neorv32* @henrikbrixandersen
/drivers/*/*_s32* @manuargue
/drivers/adc/ @anangl
/drivers/adc/adc_ads1x1x.c @XenuIsWatching
/drivers/adc/adc_stm32.c @cybertale
/drivers/adc/adc_rpi_pico.c @soburi
/drivers/adc/*ads114s0x* @benediktibk
/drivers/adc/*max11102_17* @benediktibk
/drivers/audio/*nrfx* @anangl
/drivers/auxdisplay/*pt6314* @xingrz
/drivers/auxdisplay/* @thedjnK
/drivers/bbram/* @yperess @sjg20 @jackrosenthal
/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/drivers/bluetooth/hci/hci_esp32.c @sylvioalves
/drivers/cache/ @carlocaione
/drivers/syscon/ @carlocaione @yperess
/drivers/can/ @alexanderwachter @henrikbrixandersen
/drivers/can/*mcp2515* @karstenkoenig
/drivers/can/*rcar* @aaillet
/drivers/clock_control/*agilex* @siclim @gdengi
/drivers/clock_control/*nrf* @nordic-krch
/drivers/clock_control/*esp32* @extremegtx @sylvioalves
/drivers/clock_control/*cpg_mssr* @aaillet
/drivers/counter/ @nordic-krch
/drivers/console/ipm_console.c @finikorg
/drivers/console/semihost_console.c @luozhongyao
/drivers/console/jailhouse_debug_console.c @MrVan
/drivers/counter/counter_cmos.c @dcpleung
/drivers/counter/counter_ll_stm32_timer.c @kentjhall
/drivers/counter/*esp32* @sylvioalves
/drivers/counter/dw_timer.c @pbalsundar
/drivers/counter/counter_timer_shell.c @pbalsundar
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/display/*rm68200* @mmahadevan108
/drivers/display/display_ili9342c.* @extremegtx
/drivers/dac/ @martinjaeger
/drivers/dac/*ad56xx* @benediktibk
/drivers/dai/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/ssp/ @kv2019i @marcinszkudlinski @abonislawski
/drivers/dai/intel/dmic/ @marcinszkudlinski @abonislawski
/drivers/dai/intel/alh/ @abonislawski
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*dw_common* @abonislawski
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale @lowlander
/drivers/dma/*pl330* @raveenp
/drivers/dma/*iproc_pax* @raveenp
/drivers/dma/*intel_adsp* @teburd @abonislawski
/drivers/dma/*rpi_pico* @soburi
/drivers/dma/*xmc4xxx* @talih0
/drivers/edac/ @finikorg
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*b91* @andy-liu-telink
/drivers/entropy/*bt_hci* @JordanYates
/drivers/entropy/*rv32m1* @dleach02
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/espi/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/ethernet/ @tbursztyka @jukkar
/drivers/ethernet/*dwmac* @npitre
/drivers/ethernet/*stm32* @Nukersson @lochej
/drivers/ethernet/*w5500* @parthitce
/drivers/ethernet/*xlnx_gem* @ibirnbaum
/drivers/ethernet/*smsc91x* @sgrrzhf
/drivers/ethernet/*adin2111* @GeorgeCGV
/drivers/ethernet/phy/ @rlubos @tbursztyka @arvinf @jukkar
/drivers/ethernet/phy/*adin2111* @GeorgeCGV
/drivers/mdio/ @rlubos @tbursztyka @arvinf
/drivers/mdio/*adin2111* @GeorgeCGV
/drivers/flash/ @nashif @de-nordic
/drivers/flash/*stm32_qspi* @lmajewski
/drivers/flash/*b91* @andy-liu-telink
/drivers/flash/*cadence* @ngboonkhai
/drivers/flash/*cc13xx_cc26xx* @pepe2k
/drivers/flash/*nrf* @de-nordic
/drivers/flash/*esp32* @sylvioalves
/drivers/fpga/ @tgorochowik @kgugala
/drivers/gpio/ @mnkp
/drivers/gpio/*b91* @andy-liu-telink
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*nct38xx* @MulinChao @ChiHuaL
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*eos_s3* @fkokosinski @kgugala
/drivers/gpio/*rcar* @aaillet
/drivers/gpio/*esp32* @sylvioalves
/drivers/gpio/*rpi_pico* @yonsch
/drivers/gpio/*xlnx_ps* @ibirnbaum
/drivers/gpio/*ads114s0x* @benediktibk
/drivers/gpio/*bd8lb600fs* @benediktibk
/drivers/gpio/*pcal64xxa* @benediktibk
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/i2c_common.c @sjg20
/drivers/i2c/i2c_emul.c @sjg20
/drivers/i2c/i2c_ite_enhance.c @GTLin08
/drivers/i2c/i2c_ite_it8xxx2.c @GTLin08
/drivers/i2c/i2c_shell.c @nashif
/drivers/i2c/Kconfig.i2c_emul @sjg20
/drivers/i2c/Kconfig.it8xxx2 @GTLin08
/drivers/i2c/target/*eeprom* @henrikbrixandersen
/drivers/i2c/Kconfig.test @mbolivar-ampere
/drivers/i2c/i2c_test.c @mbolivar-ampere
/drivers/i2c/*rcar* @aaillet
/drivers/i2s/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2s/*nrfx* @anangl
/drivers/i3c/ @dcpleung
/drivers/i3c/i3c_cdns.c @XenuIsWatching
/drivers/ieee802154/ @rlubos @tbursztyka @jukkar @fgrandel
/drivers/ieee802154/*b91* @andy-liu-telink
/drivers/ieee802154/ieee802154_nrf5* @jciupis
/drivers/ieee802154/ieee802154_rf2xx* @tbursztyka @nandojve
/drivers/ieee802154/ieee802154_cc13xx* @bwitherspoon @cfriedt @vaishnavachath
/drivers/interrupt_controller/ @dcpleung @nashif
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/interrupt_controller/*esp32* @sylvioalves
/drivers/interrupt_controller/intc_nuclei_eclic.c @soburi
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic
/drivers/ipm/ipm_cavs* @dcpleung @andyross
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/ipm/ipm_stm32_hsem.c @cameled
/drivers/ipm/ipm_esp32.c @uLipe
/drivers/ipm/ipm_ivshmem.c @uLipe
/drivers/kscan/ @VenkatKotakonda @franciscomunoz @sjvasanth1
/drivers/kscan/*xec* @franciscomunoz @sjvasanth1
/drivers/kscan/*ft5336* @MaureenHelm
/drivers/kscan/*ht16k33* @henrikbrixandersen
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar-ampere
/drivers/lora/ @Mani-Sadhasivam
/drivers/mbox/ @carlocaione
/drivers/misc/ @tejlmand
/drivers/misc/ft8xx/ @hubertmis
/drivers/mm/ @dcpleung
/drivers/modem/hl7800.c @rerickson1
/drivers/modem/simcom-sim7080.c @lgehreke
/drivers/modem/simcom-sim7080.h @lgehreke
/drivers/modem/Kconfig.hl7800 @rerickson1
/drivers/modem/Kconfig.simcom-sim7080 @lgehreke
/drivers/pcie/ @dcpleung @nashif @jhedberg
/drivers/peci/ @albertofloyd @franciscomunoz @sjvasanth1
/drivers/pinctrl/ @gmarull
/drivers/pinctrl/*esp32* @sylvioalves
/drivers/pinctrl/*it8xxx2* @ite
/drivers/pm_cpu_ops/ @carlocaione @gdengi
/drivers/pm_cpu_ops/psci_shell.c @nbalabak @gdengi
/drivers/power_domain/ @ceolin
/drivers/ps2/ @franciscomunoz @sjvasanth1
/drivers/ps2/*xec* @franciscomunoz @sjvasanth1
/drivers/ps2/*npcx* @MulinChao @ChiHuaL
/drivers/pwm/*b91* @andy-liu-telink
/drivers/pwm/*pca9685* @nixward
/drivers/pwm/*rpi_pico* @burumaj
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/*sam0* @nzmichaelh
/drivers/pwm/*test* @JordanYates
/drivers/pwm/*xlnx* @henrikbrixandersen
/drivers/pwm/pwm_capture.c @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/pwm/*gecko* @sun681
/drivers/pwm/*it8xxx2* @RuibinChang
/drivers/pwm/*esp32* @LucasTambor
/drivers/pwm/*rcar* @aaillet
/drivers/pwm/*max31790* @benediktibk
/drivers/regulator/* @gmarull
/drivers/regulator/regulator_pca9420.c @danieldegrasse
/drivers/regulator/regulator_rpi_pico.c @soburi
/drivers/regulator/regulator_shell.c @danieldegrasse
/drivers/reset/ @andrei-edward-popa
/drivers/reset/reset_intel_socfpga.c @nbalabak
/drivers/reset/Kconfig.intel_socfpga @nbalabak
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
/drivers/sensor/grow_r502a/ @DineshDK03
/drivers/sensor/hts*/ @avisconti
/drivers/sensor/ina23*/ @bbilas
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/mpr/ @sven-hm
/drivers/sensor/qdec_stm32/ @valeriosetti
/drivers/sensor/rpi_pico_temp/ @soburi
/drivers/sensor/st*/ @avisconti
/drivers/serial/*b91* @andy-liu-telink
/drivers/serial/uart_altera_jtag.c @nashif @gohshunjing
/drivers/serial/uart_altera.c @gohshunjing
/drivers/serial/*ns16550* @dcpleung @nashif @gdengi
/drivers/serial/*nrfx* @anangl
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.mcux_iuart @Mani-Sadhasivam
/drivers/serial/uart_mcux_iuart.c @Mani-Sadhasivam
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/*rpi_pico* @yonsch
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/uart_xlnx_uartlite.c @henrikbrixandersen
/drivers/serial/*xmc4xxx* @parthitce
/drivers/serial/*numicro* @ssekar15
/drivers/serial/*apbuart* @julius-barendt
/drivers/serial/*rcar* @aaillet
/drivers/serial/Kconfig.test @str4t0m
/drivers/serial/serial_test.c @str4t0m
/drivers/serial/Kconfig.xen @lorc @firscity
/drivers/serial/uart_hvc_xen.c @lorc @firscity
/drivers/serial/uart_hvc_xen_consoleio.c @lorc @firscity
/drivers/serial/Kconfig.it8xxx2 @GTLin08
/drivers/serial/uart_ite_it8xxx2.c @GTLin08
/drivers/smbus/ @finikorg
/drivers/sip_svc/ @maheshraotm
/drivers/disk/ @jfischer-no
/drivers/disk/sdmmc_sdhc.h @JunYangNXP
/drivers/disk/sdmmc_stm32.c @anthonybrandon
/drivers/net/ @rlubos @tbursztyka @jukkar
/drivers/ptp_clock/ @tbursztyka @jukkar
/drivers/spi/ @tbursztyka
/drivers/spi/*b91* @andy-liu-telink
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/spi/*esp32* @sylvioalves
/drivers/spi/*pl022* @soburi
/drivers/sdhc/ @danieldegrasse
/drivers/timer/*apic* @dcpleung @nashif
/drivers/timer/apic_tsc.c @andyross
/drivers/timer/*arm_arch* @carlocaione
/drivers/timer/*cortex_m_systick* @anangl
/drivers/timer/*altera_avalon* @nashif
/drivers/timer/*riscv_machine* @kgugala @pgielda
/drivers/timer/*ite_it8xxx2* @ite
/drivers/timer/*xlnx_psttc* @wjliang @stephanosio
/drivers/timer/*cc13xx_cc26xx_rtc* @vanti
/drivers/timer/*cavs* @dcpleung
/drivers/timer/*stm32_lptim* @FRASTM
/drivers/timer/*leon_gptimer* @julius-barendt
/drivers/timer/*mips_cp0* @frantony
/drivers/timer/*rcar_cmt* @aaillet
/drivers/timer/*esp32c3_sys* @uLipe
/drivers/timer/*sam0_rtc* @bendiscz
/drivers/timer/*arcv2* @ruuddw
/drivers/timer/*xtensa* @dcpleung
/drivers/timer/*rv32m1_lptmr* @mbolivar
/drivers/timer/*nrf_rtc* @anangl
/drivers/timer/*hpet* @dcpleung
/drivers/usb/ @jfischer-no
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/usb_c/ @sambhurst
/drivers/video/ @loicpoulain
/drivers/i2c/*b91* @andy-liu-telink
/drivers/i2c/i2c_ll_stm32* @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/i2c/*tca954x* @kurddt
/drivers/*/*xec* @franciscomunoz @albertofloyd @sjvasanth1
/drivers/w1/ @str4t0m
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/*sifive* @katsuster
/drivers/watchdog/wdt_handlers.c @dcpleung @nashif
/drivers/watchdog/*cc32xx* @pavlohamov
/drivers/watchdog/wdt_ite_it8xxx2.c @RuibinChang
/drivers/watchdog/Kconfig.it8xxx2 @RuibinChang
/drivers/watchdog/wdt_counter.c @nordic-krch
/drivers/watchdog/*rpi_pico* @thedjnK
/drivers/watchdog/*dw* @softwarecki
/drivers/watchdog/*ifx* @sreeramIfx
/drivers/wifi/ @rlubos @tbursztyka @jukkar
/drivers/wifi/esp_at/ @mniestroj
/drivers/wifi/eswifi/ @loicpoulain @nandojve
/drivers/wifi/winc1500/ @kludentwo
/drivers/virtualization/ @tbursztyka
/drivers/xen/ @lorc @firscity
/dts/arc/ @abrodkin @ruuddw @iriszzw @evgeniy-paltsev
/dts/arm/acsip/ @NorthernDean
/dts/arm/aspeed/ @aspeeddylan
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/sam4l* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/same70* @nandojve
/dts/arm/atmel/samv71* @nandojve
/dts/arm/atmel/ @galak
/dts/arm/broadcom/ @sbranden
/dts/arm/cypress/ @ifyall @npal-cy
/dts/arm/gigadevice/ @nandojve
/dts/arm/infineon/xmc4* @parthitce @ifyall @npal-cy
/dts/arm/infineon/psoc6/ @ifyall @npal-cy
/dts/arm64/ @carlocaione
/dts/arm64/armv8-r.dtsi @povergoing
/dts/arm64/intel/*intel_socfpga* @siclim
/dts/arm64/nxp/ @JiafeiPan
/dts/arm64/renesas/ @lorc @xakep-amatop
/dts/arm/quicklogic/ @fkokosinski @kgugala
/dts/arm/seeed/ @str4t0m
/dts/arm/st/ @erwango
/dts/arm/st/h7/*stm32h735* @benediktibk
/dts/arm/st/l4/*stm32l451* @benediktibk
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @anangl @carlescufi
/dts/arm/nuvoton/ @ssekar15 @MulinChao @ChiHuaL
/dts/arm/nxp/ @mmahadevan108 @dleach02
/dts/arm/nxp/nxp_s32* @manuargue
/dts/arm/microchip/ @franciscomunoz @albertofloyd @sjvasanth1
/dts/arm/rpi_pico/ @yonsch
/dts/arm/silabs/efm32_pg_1b.dtsi @rdmeneze
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efr32bg22* @kgugala @fkokosinski @pczarnecki
/dts/arm/silabs/efr32xg13p* @mnkp
/dts/arm/silabs/efm32pg1b* @rdmeneze
/dts/arm/silabs/efr32mg21* @l-alfred
/dts/arm/silabs/efr32fg13* @yonsch
/dts/riscv/ @kgugala @pgielda
/dts/riscv/ite/ @ite
/dts/riscv/microchip/microchip-miv.dtsi @galak
/dts/riscv/openisa/rv32m1* @dleach02
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/riscv/starfive/ @rajnesh-kanwal
/dts/riscv/andes/andes_v5* @cwshu @kevinwang821020 @jimmyzhe
/dts/riscv/niosv/ @sweeaun
/dts/arm/armv*m.dtsi @galak @ioannisg
/dts/arm/armv7-a.dtsi @ibirnbaum
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/arm/renesas/rcar/ @aaillet
/dts/x86/ @jhedberg
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/xtensa/espressif/ @sylvioalves
/dts/xtensa/nxp/ @iuliana-prodan @dbaluta
/dts/sparc/ @julius-barendt
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter @henrikbrixandersen
/dts/bindings/i2c/zephyr*i2c-emul*.yaml @sjg20
/dts/bindings/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/adc/*ads114s08.yaml @benediktibk
/dts/bindings/adc/*max111* @benediktibk
/dts/bindings/modem/*hl7800.yaml @rerickson1
/dts/bindings/serial/ns16550.yaml @dcpleung @nashif
/dts/bindings/counter/snps,dw-timers.yaml @pbalsundar
/dts/bindings/wifi/*esp-at.yaml @mniestroj
/dts/bindings/*/*gd32* @nandojve
/dts/bindings/*/*npcx* @MulinChao @ChiHuaL
/dts/bindings/*/*psoc6* @ifyall @npal-cy
/dts/bindings/*/*infineon*cat1* @ifyall @npal-cy
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @mmahadevan108 @dleach02
/dts/bindings/*/nxp*s32* @manuargue
/dts/bindings/*/openisa* @dleach02
/dts/bindings/*/raspberrypi*pico* @yonsch
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/andes* @cwshu @kevinwang821020 @jimmyzhe
/dts/bindings/*/neorv32* @henrikbrixandersen
/dts/bindings/*/*lan91c111* @sgrrzhf
/dts/bindings/i3c/ @dcpleung
/dts/bindings/pm_cpu_ops/* @carlocaione
/dts/bindings/ethernet/*gem.yaml @ibirnbaum
/dts/bindings/auxdisplay/*pt6314.yaml @xingrz
/dts/bindings/auxdisplay/* @thedjnK
/dts/posix/ @aescolar @daor-oti
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/*ina23* @bbilas
/dts/bindings/sensor/st* @avisconti
/dts/bindings/sensor/zephyr,sensing.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/sensor/zephyr,sensing*.yaml @lixuzha @ghu0510 @qianruh
/dts/bindings/smbus/ @finikorg
/dts/bindings/sip_svc/ @maheshraotm
/dts/bindings/cpu/intel,niosv.yaml @sweeaun
/dts/bindings/reset/intel,socfpga-reset.yaml @nbalabak
/dts/bindings/gpio/*pcal64* @benediktibk
/dts/bindings/gpio/*bd8lb600fs* @benediktibk
/dts/bindings/gpio/*ads114s0x* @benediktibk
/dts/bindings/pwm/*max31790* @benediktibk
/dts/bindings/dac/*ad56* @benediktibk
/dts/common/ @galak
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/zephyr/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/include/zephyr/drivers/adc.h @anangl
/include/zephyr/drivers/adc/ads114s0x.h @benediktibk
/include/zephyr/drivers/auxdisplay.h @thedjnK
/include/zephyr/drivers/can.h @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/can/ @alexanderwachter @henrikbrixandersen
/include/zephyr/drivers/counter.h @nordic-krch
/include/zephyr/drivers/dac.h @martinjaeger
/include/zephyr/drivers/espi.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/include/zephyr/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @de-nordic
/include/zephyr/drivers/i2c_emul.h @sjg20
/include/zephyr/drivers/i3c.h @dcpleung
/include/zephyr/drivers/i3c/ @dcpleung
/include/zephyr/drivers/led/ht16k33.h @henrikbrixandersen
/include/zephyr/drivers/interrupt_controller/ @dcpleung @nashif
/include/zephyr/drivers/interrupt_controller/gic.h @stephanosio
/include/zephyr/drivers/modem/hl7800.h @rerickson1
/include/zephyr/drivers/pcie/ @dcpleung
/include/zephyr/drivers/hwinfo.h @alexanderwachter
/include/zephyr/drivers/led.h @Mani-Sadhasivam
/include/zephyr/drivers/led_strip.h @mbolivar-ampere
/include/zephyr/drivers/sensor.h @MaureenHelm
/include/zephyr/drivers/smbus.h @finikorg
/include/zephyr/drivers/spi.h @tbursztyka
/include/zephyr/drivers/sip_svc/ @maheshraotm
/include/zephyr/drivers/lora.h @Mani-Sadhasivam
/include/zephyr/drivers/peci.h @albertofloyd @franciscomunoz @sjvasanth1
/include/zephyr/drivers/pm_cpu_ops.h @carlocaione
/include/zephyr/drivers/pm_cpu_ops/ @carlocaione
/include/zephyr/drivers/w1.h @str4t0m
/include/zephyr/drivers/pwm/max31790.h @benediktibk
/include/zephyr/app_memory/ @dcpleung
/include/zephyr/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/arch.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arc/v2/irq.h @abrodkin @ruuddw @evgeniy-paltsev
/include/zephyr/arch/arm @MaureenHelm @galak @ioannisg
/include/zephyr/arch/arm/cortex_a_r/ @stephanosio
/include/zephyr/arch/arm64/ @carlocaione
/include/zephyr/arch/arm64/cortex_r/ @povergoing
/include/zephyr/arch/arm/irq.h @carlocaione
/include/zephyr/arch/mips/ @frantony
/include/zephyr/arch/nios2/ @nashif
/include/zephyr/arch/nios2/arch.h @nashif
/include/zephyr/arch/posix/ @aescolar @daor-oti
/include/zephyr/arch/riscv/ @kgugala @pgielda
/include/zephyr/arch/x86/ @jhedberg @dcpleung
/include/zephyr/arch/common/ @andyross @nashif
/include/zephyr/arch/xtensa/ @andyross @dcpleung
/include/zephyr/arch/sparc/ @julius-barendt
/include/zephyr/sys/atomic.h @andyross
/include/zephyr/bluetooth/ @alwa-nordic @jhedberg @Vudentz @sjanc
/include/zephyr/bluetooth/audio/ @jhedberg @Vudentz @Thalley
/include/zephyr/cache.h @carlocaione @andyross
/include/zephyr/canbus/ @alexanderwachter @henrikbrixandersen
/include/zephyr/tracing/ @nashif
/include/zephyr/debug/ @nashif
/include/zephyr/debug/coredump.h @dcpleung
/include/zephyr/debug/gdbstub.h @ceolin
/include/zephyr/device.h @tbursztyka @nashif
/include/zephyr/devicetree.h @galak
/include/zephyr/devicetree/can.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/zephyr/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/zephyr/dt-bindings/ethernet/xlnx_gem.h @ibirnbaum
/include/zephyr/dt-bindings/pcie/ @dcpleung
/include/zephyr/dt-bindings/pinctrl/esp* @sylvioalves
/include/zephyr/dt-bindings/pwm/*it8xxx2* @RuibinChang
/include/zephyr/dt-bindings/usb/usb.h @galak
/include/zephyr/dt-bindings/adc/ads114s0x_adc.h @benediktibk
/include/zephyr/drivers/emul.h @sjg20
/include/zephyr/fs/ @nashif @de-nordic
/include/zephyr/init.h @nashif @andyross
/include/zephyr/irq.h @dcpleung @nashif @andyross
/include/zephyr/irq_offload.h @dcpleung @nashif @andyross
/include/zephyr/kernel.h @dcpleung @nashif @andyross
/include/zephyr/kernel_version.h @dcpleung @nashif @andyross
/include/zephyr/linker/app_smem*.ld @dcpleung @nashif
/include/zephyr/linker/ @dcpleung @nashif @andyross
/include/zephyr/logging/ @nordic-krch
/include/zephyr/lorawan/lorawan.h @Mani-Sadhasivam
/include/zephyr/mgmt/osdp.h @sidcha
/include/zephyr/mgmt/mcumgr/ @nordicjm
/include/zephyr/net/ @rlubos @tbursztyka @jukkar
/include/zephyr/net/buf.h @jhedberg @tbursztyka @rlubos @jukkar
/include/zephyr/net/coap*.h @rlubos
/include/zephyr/net/conn_mgr*.h @rlubos @glarsennordic @jukkar
/include/zephyr/net/gptp.h @rlubos @jukkar @fgrandel
/include/zephyr/net/ieee802154*.h @rlubos @tbursztyka @jukkar @fgrandel
/include/zephyr/net/lwm2m*.h @rlubos
/include/zephyr/net/mqtt.h @rlubos
/include/zephyr/net/mqtt_sn.h @rlubos @BeckmaR
/include/zephyr/net/net_pkt_filter.h @npitre
/include/zephyr/posix/ @cfriedt
/include/zephyr/pm/pm.h @nashif @ceolin
/include/zephyr/drivers/ptp_clock.h @tbursztyka @jukkar
/include/zephyr/rtio/ @teburd
/include/zephyr/sensing/ @lixuzha @ghu0510 @qianruh
/include/zephyr/shared_irq.h @dcpleung @nashif @andyross
/include/zephyr/shell/ @jakub-uC @nordic-krch
/include/zephyr/shell/shell_mqtt.h @ycsin
/include/zephyr/sw_isr_table.h @dcpleung @nashif @andyross
/include/zephyr/sd/ @danieldegrasse
/include/zephyr/sip_svc/ @maheshraotm
/include/zephyr/sys_clock.h @dcpleung @nashif @andyross
/include/zephyr/sys/sys_io.h @dcpleung @nashif @andyross
/include/zephyr/sys/kobject.h @dcpleung @nashif
/include/zephyr/toolchain.h @dcpleung @andyross @nashif
/include/zephyr/toolchain/ @dcpleung @nashif @andyross
/include/zephyr/zephyr.h @dcpleung @nashif @andyross
/kernel/ @dcpleung @nashif @andyross
/lib/cpp/ @stephanosio
/lib/smf/ @sambhurst
/lib/util/ @carlescufi @jakub-uC
/lib/util/fnmatch/ @carlescufi @jakub-uC
/lib/open-amp/ @arnopo
/lib/os/ @dcpleung @nashif @andyross
/lib/os/cbprintf_packaged.c @npitre
/lib/posix/ @cfriedt
/lib/posix/getopt/ @jakub-uC
/subsys/portability/ @nashif
/subsys/sensing/ @lixuzha @ghu0510 @qianruh
/lib/libc/ @nashif
/lib/libc/arcmwdt/ @abrodkin @ruuddw @evgeniy-paltsev
/misc/ @tejlmand
/modules/ @nashif
/modules/canopennode/ @henrikbrixandersen
/modules/mbedtls/ @ceolin @d3zd3z
/modules/hal_gigadevice/ @nandojve
/modules/hal_nordic/nrf_802154/ @jciupis
/modules/trusted-firmware-m/ @microbuilder
/kernel/device.c @andyross @nashif
/kernel/idle.c @andyross @nashif
/samples/ @nashif
/samples/application_development/sysbuild/ @tejlmand @nordicjm
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/boards/*microbit* @jhe
/samples/bluetooth/ @jhedberg @Vudentz @alwa-nordic @sjanc
/samples/compression/ @Navin-Sankar
/samples/drivers/can/ @alexanderwachter @henrikbrixandersen
/samples/drivers/clock_control_litex/ @mateusz-holenko @kgugala @pgielda
/samples/drivers/eeprom/ @henrikbrixandersen
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/drivers/smbus/ @finikorg
/samples/subsys/lorawan/ @Mani-Sadhasivam
/samples/modules/canopennode/ @henrikbrixandersen
/samples/net/ @rlubos @tbursztyka @jukkar
/samples/net/cloud/tagoio_http_post/ @nandojve
/samples/net/dns_resolve/ @rlubos @tbursztyka @jukkar
/samples/net/gptp/ @rlubos @jukkar @fgrandel
/samples/net/lwm2m_client/ @rlubos
/samples/net/mqtt_publisher/ @rlubos
/samples/net/mqtt_sn_publisher/ @rlubos @BeckmaR
/samples/net/sockets/coap_*/ @rlubos
/samples/net/sockets/ @rlubos @tbursztyka @jukkar
/samples/sensor/ @MaureenHelm
/samples/shields/ @avisconti
/samples/subsys/ipc/ipc_service/icmsg @emob-nordic
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/logging/syst/ @dcpleung
/samples/subsys/shell/ @jakub-uC @nordic-krch @gdengi
/samples/subsys/sip_svc/ @maheshraotm
/samples/subsys/mgmt/mcumgr/ @de-nordic @nordicjm
/samples/subsys/mgmt/updatehub/ @nandojve @otavio
/samples/subsys/mgmt/osdp/ @sidcha
/samples/subsys/usb/ @jfischer-no
/samples/subsys/usb_c/ @sambhurst
/samples/subsys/pm/ @nashif @ceolin
/samples/subsys/sensing/ @lixuzha @ghu0510 @qianruh
/samples/tfm_integration/ @microbuilder
/samples/userspace/ @dcpleung @nashif
/scripts/release/bug_bash.py @cfriedt
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/coredump/ @dcpleung
/scripts/footprint/ @nashif
/scripts/kconfig/ @ulfalizer
/scripts/logging/dictionary/ @dcpleung
/scripts/native_simulator/ @aescolar
/scripts/pylib/twister/expr_parser.py @nashif
/scripts/schemas/twister/ @nashif
/scripts/build/gen_app_partitions.py @dcpleung @nashif
scripts/build/gen_image_info.py @tejlmand
/scripts/get_maintainer.py @nashif
/scripts/dts/ @mbolivar-ampere @galak
/scripts/release/ @nashif
/scripts/ci/ @nashif
/scripts/ci/check_compliance.py @nashif @carlescufi
/arch/x86/gen_gdt.py @dcpleung @nashif
/arch/x86/gen_idt.py @dcpleung @nashif
/scripts/build/gen_kobject_list.py @dcpleung @nashif
/scripts/build/gen_kobject_placeholders.py @dcpleung
/scripts/build/gen_syscalls.py @dcpleung @nashif
/scripts/list_boards.py @mbolivar-ampere
/scripts/build/process_gperf.py @dcpleung @nashif
/scripts/build/gen_relocate_app.py @dcpleung
/scripts/generate_usb_vif/ @madhurimaparuchuri
/scripts/requirements*.txt @mbolivar-ampere @galak @nashif
/scripts/tests/build/test_subfolder_list.py @rmstoi
/scripts/tracing/ @nashif
/scripts/pylib/twister/ @nashif
/scripts/twister @nashif
/scripts/series-push-hook.sh @erwango
/scripts/utils/pinctrl_nrf_migrate.py @gmarull
/scripts/utils/migrate_mcumgr_kconfigs.py @de-nordic
/scripts/west_commands/ @mbolivar-ampere
/scripts/west_commands/blobs.py @carlescufi
/scripts/west_commands/fetchers/ @carlescufi
/scripts/west_commands/runners/gd32isp.py @mbolivar-ampere @nandojve
/scripts/west_commands/tests/test_gd32isp.py @mbolivar-ampere @nandojve
/scripts/west-commands.yml @mbolivar-ampere
/scripts/zephyr_module.py @tejlmand
/scripts/build/uf2conv.py @petejohanson
/scripts/build/user_wordsize.py @cfriedt
/scripts/valgrind.supp @aescolar @daor-oti
/share/sysbuild/ @tejlmand @nordicjm
/share/zephyr-package/ @tejlmand
/share/zephyrunittest-package/ @tejlmand
/subsys/bluetooth/ @alwa-nordic @jhedberg @Vudentz
/subsys/bluetooth/audio/ @jhedberg @Vudentz @Thalley @sjanc
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot @kruithofa
/subsys/bluetooth/host/ @alwa-nordic @jhedberg @Vudentz @sjanc
/subsys/bluetooth/mesh/ @jhedberg @PavelVPV @Vudentz @LingaoM
/subsys/canbus/ @alexanderwachter @henrikbrixandersen
/subsys/debug/ @nashif
/subsys/debug/coredump/ @dcpleung
/subsys/debug/gdbstub/ @ceolin
/subsys/debug/gdbstub.c @ceolin
/subsys/dfu/ @de-nordic @nordicjm
/subsys/disk/ @jfischer-no
/subsys/dsp/ @yperess
/subsys/tracing/ @nashif
/subsys/debug/asan_hacks.c @aescolar @daor-oti
/subsys/demand_paging/ @dcpleung @nashif
/subsys/emul/ @sjg20
/subsys/fb/ @jfischer-no
/subsys/fs/ @nashif
/subsys/fs/nvs/ @Laczen
/subsys/ipc/ @carlocaione
/subsys/ipc/ipc_service/*/*icmsg* @emob-nordic
/subsys/logging/ @nordic-krch
/subsys/logging/backends/log_backend_net.c @nordic-krch @rlubos @jukkar
/subsys/lorawan/ @Mani-Sadhasivam
/subsys/mgmt/ec_host_cmd/ @jettr
/subsys/mgmt/mcumgr/ @carlescufi @de-nordic @nordicjm
/subsys/mgmt/hawkbit/ @Navin-Sankar
/subsys/mgmt/updatehub/ @nandojve @otavio
/subsys/mgmt/osdp/ @sidcha
/subsys/modbus/ @jfischer-no
/subsys/net/buf.c @jhedberg @tbursztyka @rlubos @jukkar
/subsys/net/conn_mgr/ @rlubos @glarsennordic @jukkar
/subsys/net/ip/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/dns/ @rlubos @tbursztyka @cfriedt @jukkar
/subsys/net/lib/lwm2m/ @rlubos
/subsys/net/lib/config/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/mqtt/ @rlubos
/subsys/net/lib/mqtt_sn/ @rlubos @BeckmaR
/subsys/net/lib/coap/ @rlubos
/subsys/net/lib/sockets/socketpair.c @cfriedt
/subsys/net/lib/sockets/ @rlubos @tbursztyka @jukkar
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @rlubos @tbursztyka @jukkar
/subsys/net/l2/ethernet/gptp/ @rlubos @jukkar @fgrandel
/subsys/net/l2/ieee802154/ @rlubos @tbursztyka @jukkar @fgrandel
/subsys/net/l2/canbus/ @alexanderwachter
/subsys/net/pkt_filter/ @npitre
/subsys/net/*/openthread/ @rlubos
/subsys/pm/ @nashif @ceolin
/subsys/random/ @dleach02
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/shell/backends/shell_mqtt.c @ycsin
/subsys/sd/ @danieldegrasse
/subsys/sip_svc/ @maheshraotm
/subsys/task_wdt/ @martinjaeger
/subsys/testsuite/ @nashif
/subsys/testsuite/ztest/*/ztress* @nordic-krch
/subsys/timing/ @nashif @dcpleung
/subsys/usb/ @jfischer-no
/subsys/usb/usb_c/ @sambhurst
/tests/ @nashif
/tests/arch/arm/ @ioannisg @stephanosio
/tests/benchmarks/cmsis_dsp/ @stephanosio
/tests/boards/native_posix/ @aescolar @daor-oti
/tests/bluetooth/ @alwa-nordic @jhedberg @Vudentz @sjanc
/tests/bluetooth/audio/ @jhedberg @Vudentz @wopu-ot @Thalley
/tests/bluetooth/controller/ @cvinayak @thoh-ot @kruithofa @erbr-ot @sjanc @ppryga
/tests/bsim/bluetooth/ @alwa-nordic @jhedberg @Vudentz @wopu-ot
/tests/bsim/bluetooth/audio/ @jhedberg @Vudentz @wopu-ot @Thalley
/tests/bsim/bluetooth/mesh/ @jhedberg @Vudentz @wopu-ot @PavelVPV
/tests/bluetooth/mesh_shell/ @jhedberg @Vudentz @sjanc @PavelVPV
/tests/bluetooth/tester/ @alwa-nordic @jhedberg @Vudentz @sjanc
/tests/posix/ @cfriedt
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin @d3zd3z
/tests/drivers/can/ @alexanderwachter @henrikbrixandersen
/tests/drivers/counter/ @nordic-krch
/tests/drivers/eeprom/ @henrikbrixandersen @sjg20
/tests/drivers/flash_simulator/ @de-nordic
/tests/drivers/gpio/ @mnkp
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/smbus/ @finikorg
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @anangl
/tests/drivers/w1/ @str4t0m
/tests/kernel/ @dcpleung @andyross @nashif
/tests/lib/ @nashif
/tests/lib/cmsis_dsp/ @stephanosio
/tests/net/ @rlubos @tbursztyka @jukkar
/tests/net/buf/ @jhedberg @tbursztyka @jukkar
/tests/net/conn_mgr_monitor/ @rlubos @glarsennordic @jukkar
/tests/net/conn_mgr_conn/ @rlubos @glarsennordic @jukkar
/tests/net/ieee802154/l2/ @rlubos @tbursztyka @jukkar @fgrandel
/tests/net/lib/ @rlubos @tbursztyka @jukkar
/tests/net/lib/http_header_fields/ @rlubos @tbursztyka @jukkar
/tests/net/lib/mqtt_packet/ @rlubos
/tests/net/lib/mqtt_sn_packet/ @rlubos @BeckmaR
/tests/net/lib/mqtt_sn_client/ @rlubos @BeckmaR
/tests/net/lib/coap/ @rlubos
/tests/net/npf/ @npitre
/tests/net/socket/socketpair/ @cfriedt
/tests/net/socket/ @rlubos @tbursztyka @jukkar
/tests/subsys/debug/coredump/ @dcpleung
/tests/subsys/fs/ @nashif @de-nordic
/tests/subsys/mgmt/mcumgr/ @de-nordic @nordicjm
/tests/subsys/sd/ @danieldegrasse
/tests/subsys/rtio/ @teburd
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
/doc/kernel/ @andyross @nashif
*posix*.rst @aescolar @daor-oti


@@ -1,139 +1,78 @@
# Zephyr Project Code of Conduct
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
Examples of behavior that contributes to creating a positive environment
include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior include:
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
professional setting
## Enforcement Responsibilities
## Our Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
conduct@zephyrproject.org. Reports will be received by the Chair of the Zephyr
Governing Board, the Zephyr Project Director (Linux Foundation), and the Zephyr
Project Developer Advocate (Linux Foundation). You may refer to the [Governing
Board](https://zephyrproject.org/governing-board/) and [Linux Foundation
Staff](https://zephyrproject.org/staff/) web pages to identify who are the
individuals currently holding these positions.
All complaints will be reviewed and investigated promptly and fairly.
reported by contacting the project team at conduct@zephyrproject.org.
Reports will be received by Kate Stewart (Linux Foundation) and Amy Occhialino
(Intel). All complaints will be reviewed and investigated, and will result in a
response that is deemed necessary and appropriate to the circumstances. The
project team is obligated to maintain confidentiality with regard to the
reporter of an incident. Further details of specific enforcement policies may
be posted separately.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
The only changes made by The Zephyr Project to the original document were to
make explicit who the recipients of Code of Conduct incident reports are.
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -1,19 +0,0 @@
# Constant variables to be used across Kconfig options
# Copyright (c) 2024 basalte bv
# SPDX-License-Identifier: Apache-2.0
INT8_MIN := -128
INT16_MIN := -32768
INT32_MIN := -2147483648
INT64_MIN := -9223372036854775808
INT8_MAX := 127
INT16_MAX := 32767
INT32_MAX := 2147483647
INT64_MAX := 9223372036854775807
UINT8_MAX := 255
UINT16_MAX := 65535
UINT32_MAX := 4294967295
UINT64_MAX := 18446744073709551615
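These constants are Kconfig preprocessor variables, so other Kconfig files can expand them with `$(...)`. A minimal sketch, assuming a hypothetical `MY_SIGNED_SETTING` symbol that clamps its value to the signed 8-bit range:

```
# Hypothetical symbol, shown only to illustrate how the constants above
# can be expanded in range expressions.
config MY_SIGNED_SETTING
	int "Example signed setting"
	range $(INT8_MIN) $(INT8_MAX)
	default 0
```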


@@ -5,13 +5,7 @@
# Copyright (c) 2023 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
source "Kconfig.constants"
osource "$(APPLICATION_SOURCE_DIR)/VERSION"
# This should be sourced early since the autogen Kconfig.dts options
# and macros may get used by shields/boards/SoC defconfig or modules.
source "dts/Kconfig"
osource "${APPLICATION_SOURCE_DIR}/VERSION"
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
@@ -21,22 +15,27 @@ source "dts/Kconfig"
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $KCONFIG_BOARD_DIR will be glob patterns when building documentation.
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
# This loads custom shields defconfigs (from BOARD_ROOT)
osource "$(KCONFIG_BINARY_DIR)/Kconfig.shield.defconfig"
# This loads Zephyr base shield defconfigs
source "boards/shields/*/Kconfig.defconfig"
osource "$(KCONFIG_BOARD_DIR)/Kconfig.defconfig"
# This loads Zephyr specific SoC root defconfigs
source "$(KCONFIG_BINARY_DIR)/soc/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
# This loads custom SoC root defconfigs
osource "$(KCONFIG_BINARY_DIR)/Kconfig.soc.defconfig"
# This loads Zephyr base SoC root defconfigs
osource "soc/$(ARCH)/*/Kconfig.defconfig"
# This loads the toolchain defconfigs
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig.defconfig"
# This loads the testsuite defconfig
source "subsys/testsuite/Kconfig.defconfig"
# This should be early since the autogen Kconfig.dts symbols may get
# used by modules
source "dts/Kconfig"
menu "Modules"
source "modules/Kconfig"
@@ -142,17 +141,6 @@ config ROM_START_OFFSET
alignment requirements on most ARM targets, although some targets
may require smaller or larger values.
config ROM_END_OFFSET
hex "ROM end offset"
default 0
help
If non-zero, this option reduces the maximum size that the Zephyr image is allowed to
occupy, this is to allow for additional image storage which can be created and used by
other systems such as bootloaders (for MCUboot, this would include the image swap
fields and TLV storage at the end of the image).
If unsure, leave at the default value 0.
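For example, an application that wants to leave room at the end of the image for data written by another system could set this from its configuration; the 0x200 figure below is only an illustrative assumption, not a value required by any particular bootloader:

```
# Hypothetical prj.conf fragment: keep the last 0x200 bytes of ROM free
# for externally managed data such as an image trailer.
CONFIG_ROM_END_OFFSET=0x200
```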
config LD_LINKER_SCRIPT_SUPPORTED
bool
default y
@@ -215,6 +203,12 @@ config LINKER_SORT_BY_ALIGNMENT
in decreasing size of symbols. This helps to minimize
padding between symbols.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
help
The option specifies that the vector table should be placed at the
start of SRAM instead of the start of flash.
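A sketch of enabling this from an application configuration, assuming the SoC/architecture port actually supports a relocatable vector table:

```
# Hypothetical prj.conf fragment placing the vector table in SRAM,
# e.g. so entries can be patched at run time.
CONFIG_SRAM_VECTOR_TABLE=y
```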
config HAS_SRAM_OFFSET
bool
help
@@ -254,20 +248,6 @@ config LINKER_USE_PINNED_SECTION
Requires that pinned sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_USE_ONDEMAND_SECTION
bool "Use Evictable Linker Section"
depends on DEMAND_MAPPING
depends on !LINKER_USE_PINNED_SECTION
depends on !ARCH_MAPS_ALL_RAM
help
If enabled, the symbols which may be evicted from memory
will be put into a linker section reserved for on-demand symbols.
During boot, the corresponding memory will be mapped as paged out.
This is conceptually the opposite of CONFIG_LINKER_USE_PINNED_SECTION.
Requires that on-demand sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
bool "Generic sections are present at boot" if DEMAND_PAGING && LINKER_USE_PINNED_SECTION
default y
@@ -283,7 +263,7 @@ config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
config LINKER_LAST_SECTION_ID
bool "Last section identifier"
default y if !ARM64
default y
depends on ARM || ARM64 || RISCV
help
If enabled, the last section will contain an identifier.
@@ -326,142 +306,33 @@ config LINKER_USE_RELAX
endmenu # "Linker Sections"
config LINKER_ITERABLE_SUBALIGN
int
default 8 if 64BIT
default 4
help
Hidden option for the default subalignment of iterable sections.
config LINKER_DEVNULL_SUPPORT
bool
default y if CPU_CORTEX_M || (RISCV && !64BIT)
config LINKER_DEVNULL_MEMORY
bool "Devnull region"
depends on LINKER_DEVNULL_SUPPORT
help
A devnull region is created. It is stripped from the final binary but remains
in the byproduct ELF file.
config LINKER_DEVNULL_MEMORY_SIZE
int "Devnull region size"
depends on LINKER_DEVNULL_MEMORY
default 262144
help
Size can be adjusted so it fits all data placed in that region.
endmenu
menu "Compiler Options"
config REQUIRES_STD_C99
bool
select DEPRECATED
help
Hidden option to select compiler support C99 standard or higher.
config REQUIRES_STD_C11
bool
select DEPRECATED
select REQUIRES_STD_C99
help
Hidden option to select compiler support C11 standard or higher.
config REQUIRES_STD_C17
bool
help
Hidden option to select compiler support C17 standard or higher.
config REQUIRES_STD_C23
bool
select REQUIRES_STD_C17
help
Hidden option to select compiler support C23 standard or higher.
choice STD_C
prompt "C Standard"
default STD_C23 if REQUIRES_STD_C23
default STD_C17
help
C Standards.
config STD_C90
bool "C90 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C99
help
1989 C standard as completed in 1989 and ratified by ISO/IEC
as ISO/IEC 9899:1990. This version is known as "ANSI C".
config STD_C99
bool "C99 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C11
help
1999 C standard.
config STD_C11
bool "C11 [DEPRECATED]"
select DEPRECATED
depends on !REQUIRES_STD_C17
help
2011 C standard.
config STD_C17
bool "C17"
depends on !REQUIRES_STD_C23
help
2017 C standard, addresses defects in C11 without introducing
new language features.
config STD_C23
bool "C23"
help
2023 C standard.
endchoice
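An application can pin the standard explicitly instead of relying on the choice default; a minimal sketch, assuming the active toolchain accepts the corresponding -std flag:

```
# Hypothetical prj.conf fragment opting into the 2023 C standard.
CONFIG_STD_C23=y
```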
config TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
bool
default y if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr"
help
Hidden option to signal that toolchain supports GNU Extensions.
config GNU_C_EXTENSIONS
bool "GNU C Extensions"
depends on TOOLCHAIN_SUPPORTS_GNU_EXTENSIONS
help
Enable GNU C Extensions. GNU C provides several language features
not found in ISO standard C.
config CODING_GUIDELINE_CHECK
bool "Enforce coding guideline rules"
help
Use available compiler flags to check coding guideline rules during
the build.
config NATIVE_LIBC
bool
select FULL_LIBC_SUPPORTED
select TC_PROVIDES_POSIX_C_LANG_SUPPORT_R
help
Zephyr will use the host system C library.
config NATIVE_LIBCPP
bool
select FULL_LIBCPP_SUPPORTED
help
Zephyr will use the host system C++ library
config NATIVE_BUILD
bool
select NATIVE_LIBC if EXTERNAL_LIBC
select NATIVE_LIBCPP if EXTERNAL_LIBCPP
select FULL_LIBC_SUPPORTED
select FULL_LIBCPP_SUPPORTED if CPP
help
Zephyr will be built targeting the host system for debug and
development purposes.
config NATIVE_APPLICATION
bool
default y if ARCH_POSIX
depends on !NATIVE_LIBRARY
select NATIVE_BUILD
help
Build as a native application that can run on the host, using
resources and libraries provided by the host.
config NATIVE_LIBRARY
bool
select NATIVE_BUILD
@@ -482,9 +353,6 @@ choice COMPILER_OPTIMIZATIONS
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
# gcc 14.3 -Os is broken on riscv. This setting should be in the SDK, it's here for testing
default SPEED_OPTIMIZATIONS if "$(TOOLCHAIN_VARIANT_COMPILER)" = "gnu" && RISCV
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(TOOLCHAIN_VARIANT_COMPILER)" = "llvm"
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
@@ -497,12 +365,6 @@ config SIZE_OPTIMIZATIONS
Compiler optimizations will be set to -Os independently of other
options.
config SIZE_OPTIMIZATIONS_AGGRESSIVE
bool "Aggressively optimize for size"
help
Compiler optimizations will be set to -Oz independently of other
options.
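A sketch of selecting the aggressive level from an application configuration, assuming the toolchain in use supports -Oz:

```
# Hypothetical prj.conf fragment trading speed for minimum code size.
CONFIG_SIZE_OPTIMIZATIONS_AGGRESSIVE=y
```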
config SPEED_OPTIMIZATIONS
bool "Optimize for speed"
help
@@ -525,35 +387,11 @@ config NO_OPTIMIZATIONS
default stack sizes in order to avoid stack overflows.
endchoice
config LTO
bool "Link Time Optimization"
depends on !(GEN_ISR_TABLES || GEN_IRQ_VECTOR_TABLE) || ISR_TABLES_LOCAL_DECLARATION
depends on !NATIVE_LIBRARY
depends on !CODE_DATA_RELOCATION
help
This option enables Link Time Optimization.
config LTO_SINGLE_THREADED
bool "Single-threaded LTO"
depends on LTO
help
This option instructs the linker to use a single thread to process
LTO. See the following issue for more info:
https://github.com/zephyrproject-rtos/sdk-ng/issues/1038
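A sketch of turning LTO on from an application configuration; it assumes the target already satisfies the ISR-table and code-relocation dependencies listed above:

```
# Hypothetical prj.conf fragment enabling link time optimization and
# forcing the linker to process LTO on a single thread.
CONFIG_LTO=y
CONFIG_LTO_SINGLE_THREADED=y
```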
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config DEPRECATION_TEST
bool "Indicate test for deprecated feature"
help
This option is selected by tests which check functionality of
deprecated features. It ensures that COMPILER_WARNINGS_AS_ERRORS
is not selected as that would generate errors when the deprecated
features are used.
config COMPILER_SAVE_TEMPS
bool "Save temporary object files"
help
@@ -624,13 +462,6 @@ config MISRA_SANE
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
config TOOLCHAIN_SUPPORTS_VLA_IN_STATEMENTS
bool
default y
help
Hidden symbol to state if the toolchain can handle vla in
statements.
endmenu
choice
@@ -640,17 +471,17 @@ choice
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECKIF() macro.
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECKIF() macro.
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Runtime error checks"
help
Always perform runtime checks covered with the CHECKIF() macro. This
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
@@ -687,24 +518,6 @@ config OUTPUT_DISASSEMBLE_ALL
The .lst file will contain complete disassembly of the firmware
not just those expected to contain instructions including zeros
config OUTPUT_DISASSEMBLY_WITH_SOURCE
bool "Include source code in output disassembly file"
default y
depends on OUTPUT_DISASSEMBLY && !OUTPUT_DISASSEMBLE_ALL
help
The .lst file will also contain the source code. Having
control over this can be useful for reproducible builds
since it can be used to remove one of the elements of
the .lst file that can vary across platforms because
of reasons such as having ".." include paths.
config OUTPUT_DISASSEMBLY_NO_ALIASES
bool "Disassemble with raw instruction mnemonics"
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain raw instruction mnemonic instead of
pseudo instructions.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -726,21 +539,8 @@ config CLEANUP_INTERMEDIATE_FILES
from the build process. Note this breaks incremental builds, west spdx
(Software Bill of Material generation), and maybe others.
config BUILD_GAP_FILL_PATTERN
hex "Gap fill pattern"
default 0xFF
help
Pattern used for gap filling of output files.
This value should be set to the value of a clean flash as this can
significantly reduce flash write times.
This setting only defines the gap fill pattern and doesn't enable gap
filling.
Note: binary files are always gap filled as they contain no address
information.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/s19 files [DEPRECATED]."
select DEPRECATED
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
@@ -748,12 +548,6 @@ config BUILD_OUTPUT_HEX
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_HEX_GAP_FILL
bool "Fill gaps in hex files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in hex based files.
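Putting the gap-fill options together, a hedged sketch of a configuration that produces a hex file whose gaps are filled with the erased-flash value:

```
# Hypothetical prj.conf fragment: emit a hex file and fill its gaps with
# 0xFF so already-erased flash pages do not need to be rewritten.
CONFIG_BUILD_OUTPUT_HEX=y
CONFIG_BUILD_OUTPUT_HEX_GAP_FILL=y
CONFIG_BUILD_GAP_FILL_PATTERN=0xFF
```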
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
@@ -788,12 +582,6 @@ config BUILD_OUTPUT_S19
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_S19_GAP_FILL
bool "Fill gaps in s19 files"
depends on !BUILD_NO_GAP_FILL
help
Fill gaps in s19 based files.
config BUILD_OUTPUT_UF2
bool "Build a binary in UF2 format"
depends on BUILD_OUTPUT_BIN
@@ -801,12 +589,6 @@ config BUILD_OUTPUT_UF2
Build a UF2 binary zephyr/zephyr.uf2 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_MOT
bool "Build a binary in MOT format"
help
Build a MOT binary zephyr/zephyr.mot in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
if BUILD_OUTPUT_UF2
config BUILD_OUTPUT_UF2_FAMILY_ID
@@ -814,10 +596,9 @@ config BUILD_OUTPUT_UF2_FAMILY_ID
default "0x1c5f21b0" if SOC_SERIES_ESP32
default "0x621e937a" if SOC_NRF52833_QIAA
default "0xada52840" if SOC_NRF52840_QIAA
default "0x4fb2d5bd" if SOC_SERIES_IMXRT10XX || SOC_SERIES_IMXRT11XX
default "0x4fb2d5bd" if SOC_SERIES_IMX_RT
default "0x2abc77ec" if SOC_SERIES_LPC55XXX
default "0xe48bff56" if SOC_SERIES_RP2040
default "0xe48bff57" if SOC_SERIES_RP2350
default "0xe48bff56" if SOC_SERIES_RP2XXX
default "0x68ed2b88" if SOC_SERIES_SAMD21
default "0x55114460" if SOC_SERIES_SAMD51
default "0x647824b6" if SOC_SERIES_STM32F0X
@@ -858,11 +639,6 @@ config BUILD_OUTPUT_STRIPPED
Build a stripped binary zephyr/zephyr.strip in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_COMPRESS_DEBUG_SECTIONS
bool "Compress debug sections in the ELF file"
help
Compress debug sections in the ELF file to reduce the file size.
config BUILD_OUTPUT_ADJUST_LMA
string
help
@@ -887,23 +663,6 @@ config BUILD_OUTPUT_ADJUST_LMA
default "$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_IMAGE_M4))-\
$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH))"
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
def_string "*"
depends on BUILD_OUTPUT_ADJUST_LMA!=""
help
This determines the output sections to which the above LMA adjustment
will be applied.
The value can be the name of a section in the final ELF, like "text".
It can also be a pattern with wildcards, such as "*bss", which could
match more than one section name. Multiple such patterns can be given
as a ";"-separated list. It's possible to supply a 'negative' pattern
starting with "!", to exclude sections matched by a preceding pattern.
By default, all sections will have their LMA adjusted. The following
example excludes one section produced by the code relocation feature:
config BUILD_OUTPUT_ADJUST_LMA_SECTIONS
default "*;!.extflash_text_reloc"
config BUILD_OUTPUT_INFO_HEADER
bool "Create a image information header"
help
@@ -975,7 +734,7 @@ config BUILD_OUTPUT_STRIP_PATHS
bool "Strip absolute paths from binaries"
default y
help
If the compiler supports it, strip the $(ZEPHYR_BASE) prefix from the
If the compiler supports it, strip the ${ZEPHYR_BASE} prefix from the
__FILE__ macro used in __ASSERT*, in the
.noinit."/home/joe/zephyr/fu/bar.c" section names and in any
application code.
@@ -988,9 +747,7 @@ config BUILD_OUTPUT_STRIP_PATHS
config CHECK_INIT_PRIORITIES
bool "Build time initialization priorities check"
default y
# If we are building a native_simulator target, we can only check the init priorities
# if we are building the final output but we are not assembling several images together
depends on !(NATIVE_LIBRARY && (!BUILD_OUTPUT_EXE || NATIVE_SIMULATOR_EXTRA_IMAGE_PATHS != ""))
depends on !NATIVE_LIBRARY
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "armclang"
help
Check the build for initialization priority issues by comparing the
@@ -998,7 +755,16 @@ config CHECK_INIT_PRIORITIES
derived from the devicetree definition.
Fails the build on priority errors (dependent devices, inverted
priority).
priority), see CHECK_INIT_PRIORITIES_FAIL_ON_WARNING to fail on
warnings (dependent devices, same priority) as well.
config CHECK_INIT_PRIORITIES_FAIL_ON_WARNING
bool "Fail the build on priority check warnings"
depends on CHECK_INIT_PRIORITIES
help
Fail the build if the dependency check script identifies any pair of
devices depending on each other but initialized with the same
priority.
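A sketch of enabling the strict form of this check from an application configuration, so the build fails on warnings as well as errors:

```
# Hypothetical prj.conf fragment: fail the build on init-priority
# warnings (dependent devices sharing a priority) as well as errors.
CONFIG_CHECK_INIT_PRIORITIES=y
CONFIG_CHECK_INIT_PRIORITIES_FAIL_ON_WARNING=y
```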
config EMIT_ALL_SYSCALLS
bool "Emit all possible syscalls in the tree"
@@ -1014,8 +780,6 @@ config DEPRECATED
help
Symbol that must be selected by a feature or module if it is
considered to be deprecated.
When adding this to an option, remember to follow the instructions in
https://docs.zephyrproject.org/latest/develop/api/api_lifecycle.html#deprecated
config WARN_DEPRECATED
bool
@@ -1038,12 +802,6 @@ config WARN_EXPERIMENTAL
Print a warning when the Kconfig tree is parsed if any experimental
features are enabled.
config NOT_SECURE
bool
help
Symbol to be selected by a feature to indicate that feature is
not secure.
config TAINT
bool
help
@@ -1073,10 +831,33 @@ menu "Boot Options"
config IS_BOOTLOADER
bool "Act as a bootloader"
depends on XIP
depends on ARM
help
This option indicates that Zephyr will act as a bootloader to execute
a separate Zephyr image payload.
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader"
default 0
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
This option specifies the amount of SRAM (measured in kB) reserved for
a bootloader image, when either:
- the Zephyr image itself is to act as the bootloader, or
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
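A sketch of a configuration for an image that acts as its own bootloader; the 16 kB reservation is an assumption for illustration only:

```
# Hypothetical prj.conf fragment: run as a bootloader and keep 16 kB of
# SRAM reserved for the payload image it will load.
CONFIG_IS_BOOTLOADER=y
CONFIG_BOOTLOADER_SRAM_SIZE=16
```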
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
depends on SOC_FAMILY_ESP32 && !BOOTLOADER_MCUBOOT && !MCUBOOT
default y
help
This option will trigger the compilation of the ESP-IDF bootloader
inside the build folder.
At flash time, the bootloader will be flashed with the Zephyr image.
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION
@@ -1122,17 +903,11 @@ endmenu
menu "Compatibility"
config LEGACY_GENERATED_INCLUDE_PATH
bool "Legacy include path for generated headers"
config COMPAT_INCLUDES
bool "Suppress warnings when using header shims"
default y
help
Allow applications and libraries to use the Zephyr legacy include
path for the generated headers which does not use the `zephyr/` prefix.
The preferred way to include the `version.h` header is now
<zephyr/version.h>. This Kconfig is currently enabled by default so that
user applications won't immediately fail to compile. It will be
deprecated and eventually removed in a future release.
Suppress any warnings from the pre-processor when including
deprecated header files.
endmenu
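A sketch of opting out of the legacy path so that only the `zephyr/`-prefixed generated headers resolve, which surfaces stale includes early; it assumes the application has already migrated its include directives:

```
# Hypothetical prj.conf fragment disabling the legacy include path for
# generated headers such as version.h.
CONFIG_LEGACY_GENERATED_INCLUDE_PATH=n
```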


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


@@ -1,10 +0,0 @@
> [!IMPORTANT]
> The license files in this directory are maintained as per the recommendation
> of the REUSE specification (https://reuse.software/spec-3.3).
> These are licenses that may apply to some components hosted in the source tree
> but that do not end up in built binaries
> (see https://docs.zephyrproject.org/latest/LICENSING.html#zephyr-licensing).
>
> These files do _not_ define the license of the Zephyr project itself.
> Zephyr is licensed under the Apache 2.0 license, as specified in
> the [LICENSE](/LICENSE) file at the root of the repository.

File diff suppressed because it is too large


@@ -10,9 +10,12 @@
</p>
</a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a href="https://scorecard.dev/viewer/?uri=github.com/zephyrproject-rtos/zephyr"><img src="https://api.securityscorecards.dev/projects/github.com/zephyrproject-rtos/zephyr/badge"></a>
<a href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain"><img src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a
href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain">
<img
src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -24,7 +27,7 @@ resource-constrained systems: from simple embedded environmental sensors and
LED wearables to sophisticated smart watches and IoT wireless gateways.
The Zephyr kernel supports multiple architectures, including ARM (Cortex-A,
Cortex-R, Cortex-M), Intel x86, ARC, Tensilica Xtensa, and RISC-V,
Cortex-R, Cortex-M), Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V,
SPARC, MIPS, and a large number of `supported boards`_.
.. below included in doc/introduction/introduction.rst
@@ -51,59 +54,39 @@ Resources
Here's a quick summary of resources to help you find your way around:
Getting Started
---------------
* **Help**: `Asking for Help Tips`_
* **Documentation**: http://docs.zephyrproject.org (`Getting Started Guide`_)
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://github.com/zephyrproject-rtos/zephyr/releases
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,
respectively. You can join the developer's list and search its archives at
`Zephyr Development mailing list`_. The other `Zephyr mailing list
subgroups`_ have their own archives and sign-up pages.
* **Nightly CI Build Status**: https://lists.zephyrproject.org/g/builds
The builds@lists.zephyrproject.org mailing list archives the CI nightly build results.
* **Chat**: Real-time chat happens in Zephyr's Discord Server. Use
this `Discord Invite`_ to register.
* **Contributing**: see the `Contribution Guide`_
* **Wiki**: `Zephyr GitHub wiki`_
* **Issues**: https://github.com/zephyrproject-rtos/zephyr/issues
* **Security Issues**: Email vulnerabilities@zephyrproject.org to report
security issues; also see our `Security`_ documentation. Security issues are
tracked separately at https://zephyrprojectsec.atlassian.net.
* **Zephyr Project Website**: https://zephyrproject.org
| 📖 `Zephyr Documentation`_
| 🚀 `Getting Started Guide`_
| 🙋🏽 `Tips when asking for help`_
| 💻 `Code samples`_
Code and Development
--------------------
| 🌐 `Source Code Repository`_
| 📦 `Releases`_
| 🤝 `Contribution Guide`_
Community and Support
---------------------
| 💬 `Discord Server`_ for real-time community discussions
| 📧 `User mailing list (users@lists.zephyrproject.org)`_
| 📧 `Developer mailing list (devel@lists.zephyrproject.org)`_
| 📬 `Other project mailing lists`_
| 📚 `Project Wiki`_
Issue Tracking and Security
---------------------------
| 🐛 `GitHub Issues`_
| 🔒 `Security documentation`_
| 🛡️ `Security Advisories Repository`_
| ⚠️ Report security vulnerabilities at vulnerabilities@zephyrproject.org
Additional Resources
--------------------
| 🌐 `Zephyr Project Website`_
| 📺 `Zephyr Tech Talks`_
.. _Zephyr Project Website: https://www.zephyrproject.org
.. _Discord Server: https://chat.zephyrproject.org
.. _supported boards: https://docs.zephyrproject.org/latest/boards/index.html
.. _Zephyr Documentation: https://docs.zephyrproject.org
.. _Introduction to Zephyr: https://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: https://docs.zephyrproject.org/latest/develop/getting_started/index.html
.. _Contribution Guide: https://docs.zephyrproject.org/latest/contribute/index.html
.. _Source Code Repository: https://github.com/zephyrproject-rtos/zephyr
.. _GitHub Issues: https://github.com/zephyrproject-rtos/zephyr/issues
.. _Releases: https://github.com/zephyrproject-rtos/zephyr/releases
.. _Project Wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _User mailing list (users@lists.zephyrproject.org): https://lists.zephyrproject.org/g/users
.. _Developer mailing list (devel@lists.zephyrproject.org): https://lists.zephyrproject.org/g/devel
.. _Other project mailing lists: https://lists.zephyrproject.org/g/main/subgroups
.. _Code samples: https://docs.zephyrproject.org/latest/samples/index.html
.. _Security documentation: https://docs.zephyrproject.org/latest/security/index.html
.. _Security Advisories Repository: https://github.com/zephyrproject-rtos/zephyr/security
.. _Tips when asking for help: https://docs.zephyrproject.org/latest/develop/getting_started/index.html#asking-for-help
.. _Zephyr Tech Talks: https://www.zephyrproject.org/tech-talks
.. _Discord Invite: https://chat.zephyrproject.org
.. _supported boards: http://docs.zephyrproject.org/latest/boards/index.html
.. _Zephyr Documentation: http://docs.zephyrproject.org
.. _Introduction to Zephyr: http://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: http://docs.zephyrproject.org/latest/develop/getting_started/index.html
.. _Contribution Guide: http://docs.zephyrproject.org/latest/contribute/index.html
.. _Zephyr GitHub wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _Zephyr Development mailing list: https://lists.zephyrproject.org/g/devel
.. _Zephyr mailing list subgroups: https://lists.zephyrproject.org/g/main/subgroups
.. _Sample and Demo Code Examples: http://docs.zephyrproject.org/latest/samples/index.html
.. _Security: http://docs.zephyrproject.org/latest/security/index.html
.. _Asking for Help Tips: https://docs.zephyrproject.org/latest/develop/getting_started/index.html#asking-for-help


@@ -1,25 +0,0 @@
version = 1
# Declare default license and copyright text for files that typically do not include them.
[[annotations]]
path = [
# zephyr-keep-sorted-start
"**/*.cfg",
"**/*.cmake",
"**/*.conf",
"**/*.ecl",
"**/*.html",
"**/*.rst",
"**/*.yaml",
"**/*.yml",
"**/*_defconfig",
"**/CMakeLists.txt",
"**/Kconfig*",
"doc/requirements.in",
"doc/requirements.txt",
"scripts/requirements-*.in",
"scripts/requirements-*.txt",
# zephyr-keep-sorted-stop
]
SPDX-License-Identifier = "Apache-2.0"
SPDX-FileCopyrightText = "Copyright The Zephyr Project Contributors"


@@ -1 +0,0 @@
0.17.4


@@ -1,5 +1,5 @@
VERSION_MAJOR = 4
VERSION_MINOR = 3
VERSION_MAJOR = 3
VERSION_MINOR = 4
PATCHLEVEL = 99
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -8,10 +8,8 @@
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
source "$(KCONFIG_BINARY_DIR)/arch/Kconfig"
# ToDo: Generate a Kconfig.arch for loading of additional arch in HWMv2.
osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"
# Architecture symbols
#
@@ -21,10 +19,9 @@ osource "$(KCONFIG_BINARY_DIR)/Kconfig.arch"
config ARC
bool
select ARCH_IS_SET
select HAS_DTS
imply XIP
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_SUPPORTS_ROM_START
select ARCH_HAS_DIRECTED_IPIS
help
ARC architecture
@@ -32,8 +29,7 @@ config ARM
bool
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_THREADS if CPU_CORTEX_M
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if CPU_CORTEX_M
select HAS_DTS
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
@@ -46,18 +42,14 @@ config ARM64
bool
select ARCH_IS_SET
select 64BIT
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select HAS_ARM_SMCCC
select ARCH_HAS_THREAD_LOCAL_STORAGE
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select BARRIER_OPERATIONS_ARCH
select ARCH_HAS_DIRECTED_IPIS
select ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_MAPPING
select ARCH_SUPPORTS_EVICTION_TRACKING
select EVICTION_TRACKING if DEMAND_PAGING
select MEM_DOMAIN_HAS_THREAD_LIST if ARM_MPU
help
ARM64 (AArch64) architecture
@@ -65,12 +57,14 @@ config MIPS
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
help
MIPS architecture
config SPARC
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select BIG_ENDIAN
@@ -85,88 +79,76 @@ config X86
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !X86_64
select CPU_HAS_MMU
select ARCH_MEM_DOMAIN_DATA if USERSPACE && !X86_COMMON_PAGE_TABLE
select ARCH_MEM_DOMAIN_SYNCHRONOUS_API if USERSPACE
select ARCH_HAS_GDBSTUB if !X86_64
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_DEMAND_PAGING if !X86_64
select ARCH_HAS_DEMAND_MAPPING if ARCH_HAS_DEMAND_PAGING
select ARCH_HAS_DEMAND_PAGING
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select NEED_LIBC_MEM_PARTITION if USERSPACE && TIMING_FUNCTIONS \
&& !BOARD_HAS_TIMING_FUNCTIONS \
&& !SOC_HAS_TIMING_FUNCTIONS
select ARCH_HAS_STACK_CANARIES_TLS
select ARCH_SUPPORTS_MEM_MAPPED_STACKS if X86_MMU && !DEMAND_PAGING
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
help
x86 architecture
config NIOS2
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
imply XIP
select ARCH_HAS_TIMING_FUNCTIONS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C if !RISCV_ISA_EXT_A
select ATOMIC_OPERATIONS_BUILTIN if RISCV_ISA_EXT_A
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
select ARCH_SUPPORTS_ROM_START if !SOC_FAMILY_ESPRESSIF_ESP32
select ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_THREAD_LOCAL_STORAGE
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select USE_SWITCH_SUPPORTED
select USE_SWITCH
select SCHED_IPI_SUPPORTED if SMP
select ARCH_HAS_DIRECTED_IPIS
select BARRIER_OPERATIONS_BUILTIN
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
imply XIP
help
RISCV architecture
config XTENSA
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_MEM_DOMAIN_DATA if USERSPACE
select ARCH_HAS_DIRECTED_IPIS
select THREAD_STACK_INFO
select ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET if USERSPACE
select ARCH_SUPPORTS_COREDUMP_STACK_PTR if !SMP
select ARCH_HAS_USERSPACE if XTENSA_MMU || XTENSA_MPU
imply ARCH_HAS_RESERVED_PAGE_FRAMES if XTENSA_MMU
imply ATOMIC_OPERATIONS_ARCH
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
select HAS_DTS
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
select ARCH_HAS_THREAD_ABORT
select ARCH_HAS_THREAD_NAME_HOOK
select NATIVE_BUILD
select HAS_COVERAGE_SUPPORT
select BARRIER_OPERATIONS_BUILTIN
# POSIX arch based targets get their memory cleared on entry by the host OS
select SKIP_BSS_CLEAR
help
POSIX (native) architecture
config RX
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
Renesas RX architecture
config ARCH_IS_SET
bool
help
@@ -188,9 +170,8 @@ config BIG_ENDIAN
Little-endian architecture is the default and should leave this option
unselected. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally avoid
modifying it. The option is used to select linker script OUTPUT_FORMAT,
the toolchain flags (TOOLCHAIN_C_FLAGS, TOOLCHAIN_LD_FLAGS), and command
line option for gen_isr_tables.py.
modifying it. The option is used to select linker script OUTPUT_FORMAT
and command line option for gen_isr_tables.py.
config LITTLE_ENDIAN
# Hidden Kconfig option representing the default little-endian architecture
@@ -224,21 +205,11 @@ config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
The SRAM base address. The default value comes from
The SRAM base address. The default value comes from from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
config XIP
bool "Execute in place"
help
This option allows zephyr to operate with its text and read-only
sections residing in ROM (or similar read-only memory). Not all boards
support this option so it must be used with care; you must also
supply a linker command file when building your image. Enabling this
option increases both the code and data footprint of the image.
if ARC || ARM || ARM64 || X86 || RISCV || RX || ARCH_POSIX
if ARC || ARM || ARM64 || NIOS2 || X86 || RISCV
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
@@ -246,7 +217,6 @@ DT_CHOSEN_Z_FLASH := zephyr,flash
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && (ARM ||ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
@@ -255,13 +225,12 @@ config FLASH_SIZE
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && (ARM || ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARM64 || ARC || X86 || RISCV || RX || ARCH_POSIX
endif # ARM || ARM64 || ARC || NIOS2 || X86 || RISCV
if ARCH_HAS_TRUSTED_EXECUTION
@@ -324,21 +293,11 @@ config USERSPACE
privileged mode or handling a system call; to ensure these are always
caught, enable CONFIG_HW_STACK_PROTECTION.
config NOINIT_SNIPPET_FIRST
bool "Place the no-init linker script snippet first"
help
By default the include/zephyr/linker/common-noinit.ld file inserts the
snippets-noinit.ld file at the end of the section. There are times when
the directives in the snippets-noinit.ld file apply to the other directives
in this file; in that case the include statement for the snippets-noinit.ld
file needs to come at the start of the section. This configuration option
allows that to happen.
config PRIVILEGED_STACK_SIZE
int "Size of privileged stack"
default 2048 if EMUL
default 1024
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
@@ -348,19 +307,18 @@ config PRIVILEGED_STACK_SIZE
config KOBJECT_TEXT_AREA
int "Size of kobject text area"
default 1024 if UBSAN
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 512 if STACK_CANARIES && RISCV
default 256
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
int "Reserve extra kobject data area (in percentage)"
default 100
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Multiplication factor used to calculate the size of placeholder to
reserve space for kobject metadata hash table. The hash table is
@@ -374,7 +332,7 @@ config KOBJECT_DATA_AREA_RESERVE_EXTRA_PERCENT
config KOBJECT_RODATA_AREA_EXTRA_BYTES
int "Reserve extra bytes for kobject rodata area"
default 16
depends on USERSPACE
depends on ARCH_HAS_USERSPACE
help
Reserve a few more bytes for the RODATA region for kobject metadata.
This is to account for the uncertainty of tables generated by gperf.
@@ -436,61 +394,10 @@ config NOCACHE_MEMORY
transfers when cache coherence issues are not optimal or can not
be solved using cache maintenance operations.
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
select OVERRIDE_FRAME_POINTER_DEFAULT
help
Select Y here to gain precise stack traces at the expense of slightly
increased size and decreased speed.
config ARCH_STACKWALK
bool "Compile the stack walking function"
default y
depends on ARCH_HAS_STACKWALK
help
Select Y here to compile the `arch_stack_walk()` function
config ARCH_STACKWALK_MAX_FRAMES
int "Max depth for stack walk function"
default 8
depends on ARCH_STACKWALK
help
Depending on the implementation, this can place a hard limit on the depth of
the stack that the stack walk function will examine.
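As a rough sketch of how the arch_stack_walk() hook named above is typically consumed (the callback shape and the NULL-esf convention are assumptions for illustration, not part of this change):

```
/* Hedged usage sketch for arch_stack_walk(); callback signature assumed */
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

static bool print_frame(void *cookie, unsigned long addr)
{
	int *depth = cookie;

	printk("  #%d: %#lx\n", (*depth)++, addr);
	return true; /* keep walking, bounded by ARCH_STACKWALK_MAX_FRAMES */
}

void dump_own_stack(void)
{
	int depth = 0;

	/* NULL esf: walk from the current context rather than an exception frame */
	arch_stack_walk(print_frame, &depth, k_current_get(), NULL);
}
```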
menu "Interrupt Configuration"
config TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
bool
help
Hidden option to signal that toolchain supports local declaration of
interrupt tables.
config ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
bool
default y
# Userspace is currently not supported
depends on !USERSPACE
# List of currently supported architectures
depends on ARM || ARM64 || RISCV
# List of currently supported toolchains
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "zephyr" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "gnuarmemb" || "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm" || TOOLCHAIN_SUPPORTS_ISR_TABLES_LOCAL_DECLARATION
config ISR_TABLES_LOCAL_DECLARATION
bool "ISR tables created locally and placed by linker"
depends on ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
help
Enable a new scheme of interrupt table generation.
This generator creates the table entries locally, where the IRQ_CONNECT macro is
called, and then uses the linker script to position them in the right place in
memory.
The most important advantage of this approach is that the generated interrupt
tables are LTO compatible.
The drawback is that support in the architecture port is required.
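For reference, a sketch of the IRQ_CONNECT() call site that this generator turns into a locally emitted table entry placed by the linker; the IRQ number and priority below are made-up values:

```
/* Sketch: static ISR registration at the call site; 25 and 2 are made up */
#include <zephyr/kernel.h>
#include <zephyr/irq.h>

static void my_isr(const void *arg)
{
	ARG_UNUSED(arg);
	/* acknowledge and handle the interrupt here */
}

void my_driver_init(void)
{
	IRQ_CONNECT(25, 2, my_isr, NULL, 0);
	irq_enable(25);
}
```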
config DYNAMIC_INTERRUPTS
bool "Installation of IRQs at runtime"
select SRAM_SW_ISR_TABLE
help
Enable installation of interrupts at runtime, which will move some
interrupt-related data structures to RAM instead of ROM, and
@@ -542,12 +449,6 @@ config ARCH_IRQ_VECTOR_TABLE_ALIGN
to be aligned to architecture specific size. The default
size is 0 for no alignment.
config ARCH_DEVICE_STATE_ALIGN
int "Alignment size of device state"
default 4
help
This option controls alignment size of device state.
choice IRQ_VECTOR_TABLE_TYPE
prompt "IRQ vector table type"
depends on GEN_IRQ_VECTOR_TABLE
@@ -608,40 +509,14 @@ config IRQ_OFFLOAD
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
depends on ARCH_HAS_VECTOR_TABLE_RELOCATION
depends on XIP
depends on !ROMSTART_RELOCATION_ROM
help
When XiP is enabled, this option will result in the vector table being
relocated from Flash to SRAM.
config SRAM_SW_ISR_TABLE
bool "Place the software ISR table in SRAM instead of flash"
help
This option specifies that the software interrupt vector table will be
placed in SRAM instead of flash.
config IRQ_OFFLOAD_NESTED
bool "irq_offload() supports nested IRQs"
depends on IRQ_OFFLOAD
default y if ARM64 || X86 || RISCV || XTENSA
help
When set by the platform layers, indicates that
irq_offload() may legally be called in interrupt context to
cause a synchronous nested interrupt on the current CPU.
Not all hardware is capable.
config EXCEPTION_DEBUG
bool "Unhandled exception debugging"
default y
depends on PRINTK || LOG
help
Install handlers for various CPU exception/trap vectors to
make debugging them easier, at a small expense in code size.
This prints out the specific exception vector and any associated
error codes.
When set by the arch layer, indicates that irq_offload() may
legally be called in interrupt context to cause a
synchronous nested interrupt on the current CPU. Not all
hardware is capable.
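A small sketch of the irq_offload() pattern this option is about, roughly as a test case would use it:

```
/* Sketch: run a routine synchronously in interrupt context via irq_offload() */
#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>

static volatile bool ran_in_irq;

static void offloaded(const void *param)
{
	ARG_UNUSED(param);
	ran_in_irq = k_is_in_isr();
}

void offload_smoke_test(void)
{
	irq_offload(offloaded, NULL);
	__ASSERT(ran_in_irq, "routine did not run in interrupt context");
}
```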
config EXTRA_EXCEPTION_INFO
bool "Collect extra exception info"
@@ -663,14 +538,6 @@ config SIMPLIFIED_EXCEPTION_CODES
down to the generic K_ERR_CPU_EXCEPTION, which makes testing code
much more portable.
config EMPTY_IRQ_SPURIOUS
bool "Create empty spurious interrupt handler"
depends on ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
help
This option changes the body of the spurious interrupt handler. When enabled,
the handler contains only an infinite while loop; when disabled, it contains
the whole Zephyr fault handling procedure.
endmenu # Interrupt configuration
config INIT_ARCH_HW_AT_BOOT
@@ -721,45 +588,28 @@ config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_VECTOR_TABLE_RELOCATION
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
config ARCH_SUPPORTS_COREDUMP
bool
config ARCH_SUPPORTS_COREDUMP_THREADS
bool
config ARCH_SUPPORTS_COREDUMP_PRIV_STACKS
bool
config ARCH_SUPPORTS_COREDUMP_STACK_PTR
bool
config ARCH_SUPPORTS_ARCH_HW_INIT
bool
config ARCH_SUPPORTS_ROM_START
bool
config ARCH_SUPPORTS_EMPTY_IRQ_SPURIOUS
bool
config ARCH_SUPPORTS_EVICTION_TRACKING
bool
help
Architecture code supports page tracking for eviction algorithms
when demand paging is enabled.
config ARCH_HAS_EXTRA_EXCEPTION_INFO
bool
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_COHERENCE
bool
help
When selected, the architecture supports the
arch_mem_coherent() API and can link into incoherent/cached
memory using the ".cached" linker section.
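A hedged sketch of what that implies for code on such architectures (the pointer-in/bool-out shape of arch_mem_coherent() is an assumption based on the description above):

```
/* Sketch; arch_mem_coherent() signature assumed from the help text */
#include <zephyr/kernel.h>

/* deliberately linked into the incoherent ".cached" section */
static int fast_scratch[64] __attribute__((section(".cached")));

bool can_share_with_other_cpu(void *ptr)
{
	/* fast_scratch above would not pass this check on such platforms */
	return arch_mem_coherent(ptr);
}
```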
config ARCH_HAS_THREAD_LOCAL_STORAGE
bool
@@ -771,19 +621,6 @@ config ARCH_HAS_SUSPEND_TO_RAM
config ARCH_HAS_STACK_CANARIES_TLS
bool
config ARCH_SUPPORTS_MEM_MAPPED_STACKS
bool
help
Select when the architecture supports memory mapped stacks.
config ARCH_HAS_THREAD_PRIV_STACK_SPACE_GET
bool
help
Select when the architecture implements arch_thread_priv_stack_space_get().
config ARCH_HAS_HW_SHADOW_STACK
bool
#
# Other architecture related options
#
@@ -820,11 +657,6 @@ config CPU_HAS_FPU
This option is enabled when the CPU has hardware floating point
unit.
config CPU_HAS_DSP
bool
help
This option is enabled when the CPU has a hardware DSP unit.
config CPU_HAS_FPU_DOUBLE_PRECISION
bool
select CPU_HAS_FPU
@@ -849,13 +681,6 @@ config ARCH_HAS_DEMAND_PAGING
This hidden configuration should be selected by the architecture if
demand paging is supported.
config ARCH_HAS_DEMAND_MAPPING
bool
help
This hidden configuration should be selected by the architecture if
demand paging is supported and arch_mem_map() supports
K_MEM_MAP_UNPAGED.
config ARCH_HAS_RESERVED_PAGE_FRAMES
bool
help
@@ -864,25 +689,11 @@ config ARCH_HAS_RESERVED_PAGE_FRAMES
memory mappings. The architecture will need to implement
arch_reserved_pages_update().
config ARCH_HAS_DIRECTED_IPIS
bool
help
This hidden configuration should be selected by the architecture if
it has an implementation for arch_sched_directed_ipi() which allows
for IPIs to be directed to specific CPUs.
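Sketch of the hook an architecture port provides when it selects this option; the uint32_t CPU bitmask parameter and the soc_raise_ipi() helper are assumptions for illustration:

```
/* Sketch of a directed-IPI implementation; soc_raise_ipi() is hypothetical */
#include <zephyr/kernel.h>
#include <zephyr/sys/util.h>

extern void soc_raise_ipi(unsigned int cpu); /* hypothetical SoC helper */

void arch_sched_directed_ipi(uint32_t cpu_bitmap)
{
	unsigned int num_cpus = arch_num_cpus();

	for (unsigned int i = 0; i < num_cpus; i++) {
		if ((cpu_bitmap & BIT(i)) != 0U) {
			soc_raise_ipi(i);
		}
	}
}
```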
config CPU_HAS_DCACHE
bool
help
This hidden configuration should be selected when the CPU has a d-cache.
config CPU_CACHE_INCOHERENT
bool
help
This hidden configuration should be selected when the CPU has an
incoherent cache. This applies to intra-CPU multiprocessing
incoherence and only makes sense when MP_MAX_NUM_CPUS > 1.
config CPU_HAS_ICACHE
bool
help
@@ -906,7 +717,7 @@ config ARCH_MAPS_ALL_RAM
virtual addresses elsewhere, this is limited to only management of the
virtual address space. The kernel's page frame ontology will not consider
this mapping at all; non-kernel pages will be considered free (unless marked
as reserved) and K_MEM_PAGE_FRAME_MAPPED will not be set.
as reserved) and Z_PAGE_FRAME_MAPPED will not be set.
config DCLS
bool "Processor is configured in DCLS mode"
@@ -1010,17 +821,6 @@ config CODE_DATA_RELOCATION
the target regions should be specified in CMakeLists.txt using
zephyr_code_relocate().
menu "DSP Options"
config DSP_SHARING
bool "DSP register sharing"
depends on CPU_HAS_DSP
help
This option enables preservation of the hardware DSP registers
across context switches to allow multiple threads to perform concurrent
DSP operations.
endmenu
menu "Floating Point Options"
config FPU
@@ -1078,29 +878,6 @@ config ICACHE
help
This option enables the support for the instruction cache (i-cache).
config CACHE_HAS_MIRRORED_MEMORY_REGIONS
bool "Mirrored memory region(s) for both cached and uncached access"
depends on CPU_CACHE_INCOHERENT
help
Enable this if the hardware has mirrored memory regions at different
addresses: accessing one goes through the cache, while accessing the
other goes to memory directly. A pointer can be cheaply converted
between cached and uncached access.
This applies to intra-CPU multiprocessing incoherence and only makes
sense when MP_MAX_NUM_CPUS > 1.
config CACHE_DOUBLEMAP
bool "Cache double-mapping support"
select CACHE_HAS_MIRRORED_MEMORY_REGIONS
select DEPRECATED
help
Double-mapping behavior where a pointer can be cheaply converted to
point to the same cached/uncached memory at different locations.
This applies to intra-CPU multiprocessing incoherence and only makes
sense when MP_MAX_NUM_CPUS > 1.
config CACHE_MANAGEMENT
bool "Cache management features"
depends on DCACHE || ICACHE
@@ -1108,12 +885,9 @@ config CACHE_MANAGEMENT
This option enables the cache management functions backed by arch or
driver code.
if CACHE_MANAGEMENT
if DCACHE
config DCACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
depends on CACHE_MANAGEMENT && DCACHE
help
This option enables querying some architecture-specific hardware for
finding the d-cache line size at the expense of taking more memory and
@@ -1125,21 +899,18 @@ config DCACHE_LINE_SIZE_DETECT
config DCACHE_LINE_SIZE
int "d-cache line size"
depends on !DCACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,d-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,d-cache-line-size)
depends on CACHE_MANAGEMENT && DCACHE && !DCACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU d-cache line.
Size in bytes of a CPU d-cache line. If this is set to 0 the value is
obtained from the 'd-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting DCACHE_LINE_SIZE_DETECT.
endif # DCACHE
if ICACHE
config ICACHE_LINE_SIZE_DETECT
bool "Detect i-cache line size at runtime"
depends on CACHE_MANAGEMENT && ICACHE
help
This option enables querying some architecture-specific hardware for
finding the i-cache line size at the expense of taking more memory and
@@ -1151,19 +922,17 @@ config ICACHE_LINE_SIZE_DETECT
config ICACHE_LINE_SIZE
int "i-cache line size"
depends on !ICACHE_LINE_SIZE_DETECT
default $(dt_node_int_prop_int,/cpus/cpu@0,i-cache-line-size) \
if $(dt_node_has_prop,/cpus/cpu@0,i-cache-line-size)
depends on CACHE_MANAGEMENT && ICACHE && !ICACHE_LINE_SIZE_DETECT
default 0
help
Size in bytes of a CPU i-cache line.
Size in bytes of a CPU i-cache line. If this is set to 0 the value is
obtained from the 'i-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting ICACHE_LINE_SIZE_DETECT.
endif # ICACHE
choice CACHE_TYPE
prompt "Cache type"
depends on CACHE_MANAGEMENT
default ARCH_CACHE
config ARCH_CACHE
@@ -1171,14 +940,6 @@ config ARCH_CACHE
help
Integrated on-core cache controller
config SOC_CACHE
bool "SoC specific cache controller"
depends on SOC_HAS_CACHE_FUNCTIONS
help
SoC specific cache controller.
This requires soc_cache.h file to exist in search path.
config EXTERNAL_CACHE
bool "External cache controller"
help
@@ -1186,16 +947,6 @@ config EXTERNAL_CACHE
endchoice
config CACHE_CAN_SAY_MEM_COHERENCE
bool
help
sys_cache_is_mem_coherent() is defined when enabled. This function can be
used to determine if a pointer lies inside "coherence regions" and can be
safely used in multiprocessor code without explicit flush or invalidate
operations.
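A sketch of the intended use; the (pointer, size) signature of sys_cache_is_mem_coherent() is assumed purely from the description above:

```
/* Sketch; sys_cache_is_mem_coherent() signature assumed from the help text */
#include <stddef.h>
#include <zephyr/cache.h>

void publish_buffer(void *buf, size_t len)
{
	if (!sys_cache_is_mem_coherent(buf, len)) {
		/* outside a coherence region: push the data out to memory first */
		sys_cache_data_flush_range(buf, len);
	}
	/* ... hand buf over to another CPU or a DMA engine ... */
}
```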
endif # CACHE_MANAGEMENT
endmenu
config ARCH
@@ -1203,46 +954,29 @@ config ARCH
help
System architecture string.
config SOC
string
help
SoC name which can be found under soc/<arch>/<soc name>.
This option holds the directory name used by the build system to locate
the correct linker and header files for the SoC.
config SOC_SERIES
string
help
SoC series name which can be found under soc/<arch>/<family>/<series>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config SOC_FAMILY
string
help
SoC family name which can be found under soc/<arch>/<family>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config TOOLCHAIN_HAS_BUILTIN_FFS
bool
default y if !(64BIT && RISCV)
help
Hidden option to signal that toolchain has __builtin_ffs*().
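For context, the builtin this option advertises returns the 1-based index of the least significant set bit (and 0 for a zero input):

```
/* find-first-set via the toolchain builtin: 0x08 -> 4, 0x01 -> 1, 0 -> 0 */
static inline unsigned int first_set_bit(unsigned int mask)
{
	return (unsigned int)__builtin_ffs((int)mask);
}
```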
config ARCH_HAS_CUSTOM_CPU_IDLE
bool
help
This option allows applications to override the default arch idle implementation with
a custom one.
config ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
bool
help
This option allows applications to override the default arch idle implementation with
a custom one.
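A rough sketch of the overrides these two options enable; enable_interrupts_and_wait() stands in for whatever arch-specific sleep/wait-for-interrupt instruction a port would use and is purely hypothetical:

```
/* Sketch of custom idle hooks; the wait helper is hypothetical */
#include <zephyr/kernel.h>

extern void enable_interrupts_and_wait(void); /* hypothetical, e.g. a sleep/wfi-style op */

void arch_cpu_idle(void)
{
	/* re-enable interrupts and wait for the next one */
	enable_interrupts_and_wait();
}

void arch_cpu_atomic_idle(unsigned int key)
{
	/* atomically enable interrupts and wait, then restore the caller's state */
	enable_interrupts_and_wait();
	irq_unlock(key);
}
```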
config ARCH_HAS_CUSTOM_SWAP_TO_MAIN
bool
help
It's possible that an architecture port cannot use z_swap_unlocked()
to swap to the main thread (bg_thread_main), but instead must do
something custom. It must enable this option in that case.
config ARCH_HAS_CUSTOM_BUSY_WAIT
bool
help
It's possible that an architecture port cannot or does not want to use
the provided k_busy_wait(), but instead must do something custom. It must
enable this option in that case.
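Sketch of what such a custom busy-wait could look like; the arch_busy_wait() hook name is an assumption (it is not spelled out in this help text), and the body just spins on the cycle counter:

```
/* Sketch; the arch_busy_wait() override name is assumed, not taken from this patch */
#include <zephyr/kernel.h>

void arch_busy_wait(uint32_t usec_to_wait)
{
	uint32_t start = k_cycle_get_32();
	uint32_t cycles = (uint32_t)k_us_to_cyc_ceil32(usec_to_wait);

	while ((k_cycle_get_32() - start) < cycles) {
		/* spin */
	}
}
```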
config ARCH_HAS_CUSTOM_CURRENT_IMPL
bool
help
Select when architecture implements arch_current_thread() &
arch_current_thread_set().
config ARCH_IPI_LAZY_COPROCESSORS_SAVE
bool
help
Select when the architecture has multi-CPU lazy context switching
of coprocessor registers.


@@ -9,6 +9,7 @@ menu "ARC Options"
config ARCH
default "arc"
config CPU_ARCEM
bool
select ATOMIC_OPERATIONS_C
@@ -18,7 +19,6 @@ config CPU_ARCEM
config CPU_ARCHS
bool
select ATOMIC_OPERATIONS_BUILTIN
select BARRIER_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
@@ -253,17 +253,20 @@ config ARC_USE_UNALIGNED_MEM_ACCESS
to support unaligned memory access which is then disabled by default.
Enable unaligned access in hardware and make software to use it.
config ARC_CURRENT_THREAD_USE_NO_TLS
bool
select CURRENT_THREAD_USE_NO_TLS
default y if (RGF_NUM_BANKS > 1) || ("$(ZEPHYR_TOOLCHAIN_VARIANT)" = "arcmwdt")
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
help
Disable current Thread Local Storage for ARC. For cores with more than one
register bank (RGF_NUM_BANKS > 1) this is the default, because bank synchronization
requires significant time and slows down performance.
ARCMWDT works with the TLS pointer in a different way than GCC. Optimized access to
the TLS pointer via the _current symbol does not provide significant advantages
in the case of MetaWare.
Different levels of information to display when a fault occurs.
2: The default. Display specific and verbose information. Consumes
the most memory (long strings).
1: Display general and short information. Consumes less memory
(short strings).
0: Off.
config GEN_ISR_TABLES
default y
@@ -274,7 +277,7 @@ config GEN_IRQ_START_VECTOR
config HARVARD
bool "Harvard Architecture"
help
The ARC CPU can be configured to have two buses;
The ARC CPU can be configured to have two busses;
one for instruction fetching and another that serves as a data bus.
config CODE_DENSITY
@@ -343,15 +346,6 @@ config ARC_NORMAL_FIRMWARE
resources of the ARC processors, and, therefore, it shall avoid
accessing them.
config ARC_VPX_COOPERATIVE_SHARING
bool "Cooperative sharing of ARC VPX vector registers"
select SCHED_CPU_MASK if MP_MAX_NUM_CPUS > 1
help
This option enables the cooperative sharing of the ARC VPX vector
registers. Threads that want to use those registers must successfully
call arc_vpx_lock() before using them, and call arc_vpx_unlock()
when done using them.
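Sketch of the cooperative protocol described above; the timeout argument and zero-on-success return convention of arc_vpx_lock() are assumptions:

```
/* Sketch; arc_vpx_lock()/arc_vpx_unlock() calling convention assumed */
#include <zephyr/kernel.h>

void do_vector_work(void)
{
	if (arc_vpx_lock(K_MSEC(10)) != 0) {
		return; /* another thread currently owns the VPX registers */
	}

	/* ... use the ARC VPX vector registers here ... */

	arc_vpx_unlock();
}
```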
source "arch/arc/core/dsp/Kconfig"
menu "ARC MPU Options"
@@ -370,28 +364,6 @@ endmenu
config DCACHE_LINE_SIZE
default 32
config ARC_DCACHE_REGION_OPERATIONS
bool "DCACHE region operations"
depends on CACHE_MANAGEMENT && DCACHE
default n
help
Perform L1 data cache management operations by region rather than line by line in a loop,
which improves the performance of cache management operations.
config ARC_SLC
bool "System level cache"
depends on CACHE_MANAGEMENT && DCACHE && (CPU_HS4X || CPU_HS3X)
default n
help
This option enables System Level Cache, and adds SLC support to the data cache management operations.
config ARC_SLC_LINE_SIZE
int "SLC line size"
depends on ARC_SLC
default 128
help
Size in bytes of a CPU system level cache line.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768 if !64BIT
@@ -404,6 +376,15 @@ config ARC_EXCEPTION_STACK_SIZE
endmenu
config ARC_EXCEPTION_DEBUG
bool "Unhandled exception debugging information"
default n
depends on PRINTK || LOG
help
Print human-readable information about exception vectors, cause codes,
and parameters, at a cost of code/data size for the human-readable
strings.
config ARC_EARLY_SOC_INIT
bool "Make early stage SoC-specific initialization"
help
@@ -411,10 +392,7 @@ config ARC_EARLY_SOC_INIT
(before C runtime initialization). Setup code is called in form of
soc_early_asm_init_percpu assembler macro.
# ARC vector table must be aligned to 1KiB boundary, and will be at the
# start of the ROM region.
config ROM_START_OFFSET
default 0x400 if BOOTLOADER_MCUBOOT
endmenu
config MAIN_STACK_SIZE
default 4096 if 64BIT
@@ -442,5 +420,3 @@ config CMSIS_V2_THREAD_MAX_STACK_SIZE
config CMSIS_V2_THREAD_DYNAMIC_STACK_SIZE
default 2048 if 64BIT
endmenu


@@ -8,6 +8,14 @@
__weak void *__dso_handle;
int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
{
ARG_UNUSED(destructor);
ARG_UNUSED(objptr);
ARG_UNUSED(dso);
return 0;
}
int atexit(void (*function)(void))
{
return 0;


@@ -26,7 +26,7 @@ zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT smp.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
@@ -34,5 +34,3 @@ add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_LLEXT elf.c)

arch/arc/core/arc_smp.c Normal file (193 lines)

@@ -0,0 +1,193 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief code required for ARC multicore and Zephyr SMP support
*
*/
#include <zephyr/device.h>
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <zephyr/init.h>
#include <zephyr/irq.h>
#include <arc_irq_offload.h>
volatile struct {
arch_cpustart_t fn;
void *arg;
} arc_cpu_init[CONFIG_MP_MAX_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up the master core and the slave cores.
* A slave core spins on arc_cpu_wake_flag until the master core sets it to
* the slave core's id; the slave core then clears it to notify the master
* core that it has woken up.
*
*/
volatile uint32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
* _curr_cpu records the _cpu_t struct of each cpu
* for efficient access from assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_MAX_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* set the initial sp of the target cpu through arc_cpu_sp;
* arc_cpu_wake_flag protects arc_cpu_sp so that
* only one slave cpu can read it at a time
*/
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave cpu to start */
while (arc_cpu_wake_flag != 0U) {
;
}
}
#ifdef CONFIG_SMP
static void arc_connect_debug_mask_update(int cpu_num)
{
uint32_t core_mask = 1 << cpu_num;
/*
* MDB debugger may modify debug_select and debug_mask registers on start, so we can't
* rely on debug_select reset value.
*/
if (cpu_num != ARC_MP_PRIMARY_CPU_ID) {
core_mask |= z_arc_connect_debug_select_read();
}
z_arc_connect_debug_select_set(core_mask);
/* Debugger halts cores at all conditions:
* ARC_CONNECT_CMD_DEBUG_MASK_H: Core global halt.
* ARC_CONNECT_CMD_DEBUG_MASK_AH: Actionpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_BH: Software breakpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_SH: Self halt.
*/
z_arc_connect_debug_mask_set(core_mask, (ARC_CONNECT_CMD_DEBUG_MASK_SH
| ARC_CONNECT_CMD_DEBUG_MASK_BH | ARC_CONNECT_CMD_DEBUG_MASK_AH
| ARC_CONNECT_CMD_DEBUG_MASK_H));
}
#endif
void arc_core_private_intc_init(void);
/* the C entry of slave cores */
void z_arc_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
struct arc_connect_bcr bcr;
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(cpu_num);
}
z_irq_setup();
arc_core_private_intc_init();
arc_irq_offload_init_smp();
z_arc_connect_ici_clear();
z_irq_priority_set(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority), 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
#endif
/* call the function set by arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(const void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
uint32_t i;
/* broadcast sched_ipi request to other cores
* if the target is current core, hardware will ignore it
*/
unsigned int num_cpus = arch_num_cpus();
for (i = 0U; i < num_cpus; i++) {
z_arc_connect_ici_generate(i);
}
}
static int arc_smp_init(void)
{
struct arc_connect_bcr bcr;
/* necessary master core init */
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(ARC_MP_PRIMARY_CPU_ID);
}
if (bcr.ipi) {
/* register ici interrupt, just need master core to register once */
z_arc_connect_ici_clear();
IRQ_CONNECT(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority),
sched_ipi_handler, NULL, 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
} else {
__ASSERT(0,
"ARC connect has no inter-core interrupt\n");
return -ENODEV;
}
if (bcr.gfrc) {
/* global free running count init */
z_arc_connect_gfrc_enable();
/* when all cores halt, gfrc halt */
z_arc_connect_gfrc_core_set((1 << arch_num_cpus()) - 1);
z_arc_connect_gfrc_clear();
} else {
__ASSERT(0,
"ARC connect has no global free running counter\n");
return -ENODEV;
}
return 0;
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif


@@ -2,7 +2,6 @@
/*
* Copyright (c) 2016 Synopsys, Inc. All rights reserved.
* Copyright (c) 2025 GSI Technology, All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -30,34 +29,16 @@
size_t sys_cache_line_size;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100 /* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
#define DC_CTRL_INVALIDATE_MODE 0x40 /* d-cache invalidate mode bit */
#define DC_CTRL_REGION_OP 0xe00 /* d-cache region operation */
#define MMU_BUILD_PHYSICAL_ADDR_EXTENSION 0x1000 /* physical address extension enable mask */
#define SLC_CTRL_DISABLE 0x1 /* SLC disable */
#define SLC_CTRL_INVALIDATE_MODE 0x40 /* SLC invalidate mode */
#define SLC_CTRL_BUSY_STATUS 0x100 /* SLC busy status */
#define SLC_CTRL_REGION_OP 0xe00 /* SLC region operation */
#if defined(CONFIG_ARC_SLC)
/*
* spinlock is used for SLC access because depending on HW configuration, the SLC might be shared
* between the cores, and in this case, only one core is allowed to access the SLC register
* interface at a time.
*/
static struct k_spinlock slc_lock;
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
#define DC_CTRL_DC_DISABLE 0x1 /* disable d-cache */
#define DC_CTRL_INVALID_ONLY 0x0 /* invalid d-cache only */
#define DC_CTRL_INVALID_FLUSH 0x40 /* invalid and flush d-cache */
#define DC_CTRL_ENABLE_FLUSH_LOCKED 0x80 /* locked d-cache can be flushed */
#define DC_CTRL_DISABLE_FLUSH_LOCKED 0x0 /* locked d-cache cannot be flushed */
#define DC_CTRL_FLUSH_STATUS 0x100 /* flush status */
#define DC_CTRL_DIRECT_ACCESS 0x0 /* direct access mode */
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
static bool dcache_available(void)
{
@@ -74,189 +55,9 @@ static void dcache_dc_ctrl(uint32_t dcache_en_mask)
}
}
static bool pae_exists(void)
{
uint32_t bcr = z_arc_v2_aux_reg_read(_ARC_V2_MMU_BUILD);
return 1 == FIELD_GET(MMU_BUILD_PHYSICAL_ADDR_EXTENSION, bcr);
}
#if defined(CONFIG_ARC_SLC)
static void slc_enable(void)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
val &= ~SLC_CTRL_DISABLE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, val);
}
static void slc_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END1, 0);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START1, 0);
}
static void slc_flush_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
ctrl &= ~SLC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(SLC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
/*
* END needs to be setup before START (latter triggers the operation)
* END can't be same as START, so add (l2_line_sz - 1) to sz
*/
end_addr = start_addr + size + CONFIG_ARC_SLC_LINE_SIZE - 1;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_END, end_addr);
/* Align start address to cache line size, see STAR 5103816 */
z_arc_v2_aux_reg_write(_ARC_V2_SLC_RGN_START,
start_addr & ~(CONFIG_ARC_SLC_LINE_SIZE - 1));
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_all(void)
{
K_SPINLOCK(&slc_lock) {
z_arc_v2_aux_reg_write(_ARC_V2_SLC_FLUSH, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl &= ~SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
static void slc_flush_and_invalidate_all(void)
{
uint32_t ctrl;
K_SPINLOCK(&slc_lock) {
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
ctrl |= SLC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_SLC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_SLC_INVALIDATE, 0x1);
/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL);
while (z_arc_v2_aux_reg_read(_ARC_V2_SLC_CTRL) & SLC_CTRL_BUSY_STATUS) {
/* Do Nothing */
}
}
}
#endif /* CONFIG_ARC_SLC */
void arch_dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
#if defined(CONFIG_ARC_SLC)
slc_enable();
#endif
}
void arch_dcache_disable(void)
@@ -264,107 +65,17 @@ void arch_dcache_disable(void)
/* nothing */
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
static void dcache_high_addr_init(void)
{
z_arc_v2_aux_reg_write(_ARC_V2_DC_PTAG_HI, 0);
}
static void dcache_flush_region(void *start_addr_ptr, size_t size)
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_REGION_OP;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_flush_and_invalidate_region(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
uint32_t ctrl;
unsigned int key;
key = arch_irq_lock(); /* --enter critical section-- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
ctrl &= ~DC_CTRL_REGION_OP;
ctrl |= FIELD_PREP(DC_CTRL_REGION_OP, 0x1);
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
end_addr = start_addr + size + line_size - 1;
z_arc_v2_aux_reg_write(_ARC_V2_DC_ENDR, end_addr);
z_arc_v2_aux_reg_write(_ARC_V2_DC_STARTR, start_addr);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
arch_irq_unlock(key); /* --exit critical section-- */
}
#else /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
static void dcache_flush_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
@@ -387,81 +98,6 @@ static void dcache_flush_lines(void *start_addr_ptr, size_t size)
} while (start_addr < end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
}
static void dcache_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
static void dcache_flush_and_invalidate_lines(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
uint32_t ctrl;
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
}
#endif /* CONFIG_ARC_DCACHE_REGION_OPERATIONS */
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_region(start_addr_ptr, size);
#else
dcache_flush_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_region(start_addr_ptr, size);
#endif
return 0;
}
@@ -469,127 +105,48 @@ int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
int arch_dcache_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_invalidate_region(start_addr_ptr, size);
#else
dcache_invalidate_lines(start_addr_ptr, size);
#endif
key = arch_irq_lock(); /* -enter critical section- */
#if defined(CONFIG_ARC_SLC)
slc_invalidate_region(start_addr_ptr, size);
#endif
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
return 0;
}
int arch_dcache_flush_and_invd_range(void *start_addr_ptr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_flush_and_invalidate_region(start_addr_ptr, size);
#else
dcache_flush_and_invalidate_lines(start_addr_ptr, size);
#endif
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_region(start_addr_ptr, size);
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLSH, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl &= ~DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
int arch_dcache_flush_and_invd_all(void)
{
size_t line_size = sys_cache_data_line_size_get();
unsigned int key;
uint32_t ctrl;
if (!dcache_available() || line_size == 0U) {
return -ENOTSUP;
}
key = irq_lock();
ctrl = z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL);
ctrl |= DC_CTRL_INVALIDATE_MODE;
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, ctrl);
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 0x1);
while (z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) & DC_CTRL_FLUSH_STATUS) {
/* Do nothing */
}
irq_unlock(key);
#if defined(CONFIG_ARC_SLC)
slc_flush_and_invalidate_all();
#endif
return 0;
return -ENOTSUP;
}
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
@@ -661,30 +218,14 @@ int arch_icache_flush_and_invd_range(void *addr, size_t size)
static int init_dcache(void)
{
sys_cache_data_enable();
arch_dcache_enable();
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
init_dcache_line_size();
#endif
/*
* Init high address registers to 0 if PAE exists; cache operations for 40-bit
* addresses are not implemented
*/
if (pae_exists()) {
#if defined(CONFIG_ARC_DCACHE_REGION_OPERATIONS)
dcache_high_addr_init();
#endif
#if defined(CONFIG_ARC_SLC)
slc_high_addr_init();
#endif
}
return 0;
}
void arch_cache_init(void)
{
init_dcache();
}
SYS_INIT(init_dcache, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);


@@ -26,7 +26,6 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
.align 4
.word 0
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_IDLE
/*
* @brief Put the CPU in low-power mode
*
@@ -49,9 +48,7 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
sleep r1
j_s [blink]
nop
#endif
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
/*
* @brief Put the CPU in low-power mode, entered with IRQs locked
*
@@ -59,7 +56,6 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
*
* void arch_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
@@ -74,4 +70,3 @@ SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
sleep r1
j_s.d [blink]
seti r0
#endif

View File

@@ -3,8 +3,13 @@
# Copyright (c) 2022 Synopsys
# SPDX-License-Identifier: Apache-2.0
config ARC_HAS_DSP
bool
help
This option is enabled when the ARC CPU has a hardware DSP unit.
menu "ARC DSP Options"
depends on CPU_HAS_DSP
depends on ARC_HAS_DSP
config ARC_DSP
bool "digital signal processing (DSP)"
@@ -17,7 +22,7 @@ config ARC_DSP_TURNED_OFF
help
This option disables the DSP block by resetting the DSP_CTRL register.
config DSP_SHARING
config ARC_DSP_SHARING
bool "DSP register sharing"
depends on ARC_DSP && MULTITHREADING
select ARC_HAS_ACCL_REGS
@@ -39,12 +44,12 @@ config ARC_XY_ENABLE
bool "ARC address generation unit registers"
help
Processors with XY memory and AGU registers can configure this
option to accelerate DSP instructions.
config ARC_AGU_SHARING
bool "ARC address generation unit register sharing"
depends on ARC_XY_ENABLE && MULTITHREADING
default y if DSP_SHARING
default y if ARC_DSP_SHARING
help
This option enables preservation of the hardware AGU registers
across context switches to allow multiple threads to perform concurrent

View File

@@ -9,7 +9,7 @@
* @brief ARCv2 DSP and AGU structure member offset definition file
*
*/
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_ctrl);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_glo);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_ghi);

View File

@@ -10,7 +10,7 @@
*
*/
.macro _save_dsp_regs
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_save
lr r13, [_ARC_V2_DSP_CTRL]
@@ -136,7 +136,7 @@ agu_skip_save :
.endm
.macro _load_dsp_regs
#ifdef CONFIG_DSP_SHARING
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_load
ld_s r13, [sp, ___callee_saved_stack_t_dsp_ctrl_OFFSET]

View File

@@ -1,118 +0,0 @@
/*
* Copyright (c) 2024 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/llext/elf.h>
#include <zephyr/llext/llext.h>
#include <zephyr/llext/llext_internal.h>
#include <zephyr/llext/loader.h>
#include <zephyr/logging/log.h>
#include <zephyr/sys/util.h>
LOG_MODULE_REGISTER(elf, CONFIG_LLEXT_LOG_LEVEL);
#define R_ARC_32 4
#define R_ARC_B26 5 /* AKA R_ARC_64 */
#define R_ARC_S25H_PCREL 16
#define R_ARC_S25W_PCREL 17
#define R_ARC_32_ME 27
/* ARCompact insns packed in memory have Middle Endian encoding */
#define ME(x) (((x & 0xffff0000) >> 16) | ((x & 0xffff) << 16))
/**
* @brief Architecture-specific function for relocating shared ELF objects
*
* ELF files contain a series of relocations described in multiple sections.
* These relocation instructions are architecture specific, and each
* architecture that supports loadable extensions must implement this.
*
* The relocation codes are well documented:
* https://github.com/foss-for-synopsys-dwc-arc-processors/arc-ABI-manual/blob/master/ARCv2_ABI.pdf
* https://github.com/zephyrproject-rtos/binutils-gdb
*/
int arch_elf_relocate(struct llext_loader *ldr, struct llext *ext, elf_rela_t *rel,
const elf_shdr_t *shdr)
{
int ret = 0;
uint32_t value;
const uintptr_t loc = llext_get_reloc_instruction_location(ldr, ext, shdr->sh_info, rel);
uint32_t insn = UNALIGNED_GET((uint32_t *)loc);
elf_sym_t sym;
uintptr_t sym_base_addr;
const char *sym_name;
ret = llext_read_symbol(ldr, ext, rel, &sym);
if (ret != 0) {
LOG_ERR("Could not read symbol from binary!");
return ret;
}
sym_name = llext_symbol_name(ldr, ext, &sym);
ret = llext_lookup_symbol(ldr, ext, &sym_base_addr, rel, &sym, sym_name, shdr);
if (ret != 0) {
LOG_ERR("Could not find symbol %s!", sym_name);
return ret;
}
sym_base_addr += rel->r_addend;
int reloc_type = ELF32_R_TYPE(rel->r_info);
switch (reloc_type) {
case R_ARC_32:
case R_ARC_B26:
UNALIGNED_PUT(sym_base_addr, (uint32_t *)loc);
break;
case R_ARC_S25H_PCREL:
/* ((S + A) - P) >> 1
* S = symbol address
* A = addend
* P = relative offset to PCL
*/
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 1;
insn = ME(insn);
/* disp25h */
insn = insn & ~0x7feffcf;
insn |= ((value >> 0) & 0x03ff) << 17;
insn |= ((value >> 10) & 0x03ff) << 6;
insn |= ((value >> 20) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_S25W_PCREL:
/* ((S + A) - P) >> 2 */
value = (sym_base_addr + rel->r_addend - (loc & ~0x3)) >> 2;
insn = ME(insn);
/* disp25w */
insn = insn & ~0x7fcffcf;
insn |= ((value >> 0) & 0x01ff) << 18;
insn |= ((value >> 9) & 0x03ff) << 6;
insn |= ((value >> 19) & 0x000f) << 0;
insn = ME(insn);
UNALIGNED_PUT(insn, (uint32_t *)loc);
break;
case R_ARC_32_ME:
UNALIGNED_PUT(ME(sym_base_addr), (uint32_t *)loc);
break;
default:
LOG_ERR("unknown relocation: %u\n", reloc_type);
ret = -ENOEXEC;
break;
}
return ret;
}
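To make the PC-relative math above easier to follow, here is a small standalone program (invented addresses, not part of the tree) that mirrors the ME() halfword swap and the `((S + A) - P) >> 2` computation from the R_ARC_S25W_PCREL case.

```
#include <stdint.h>
#include <stdio.h>

/* same halfword swap as above: ARC stores 32-bit instruction words
 * middle-endian, so the two 16-bit halves are exchanged in memory
 */
#define ME(x) ((((x) & 0xffff0000) >> 16) | (((x) & 0xffff) << 16))

int main(void)
{
	uint32_t S = 0x20001000u; /* resolved symbol address */
	uint32_t A = 0u;          /* relocation addend */
	uint32_t P = 0x20000104u; /* address of the instruction being patched */

	/* R_ARC_S25W_PCREL: word-aligned PC-relative displacement */
	uint32_t value = ((S + A) - (P & ~0x3u)) >> 2;

	printf("disp25w value = 0x%08x\n", value);            /* 0x000003bf */
	printf("ME(0xAABBCCDD) = 0x%08x\n", ME(0xAABBCCDDu)); /* 0xccddaabb */
	return 0;
}
```

The three masked/shifted OR operations in the relocation code then scatter the 23 significant bits of that value into the non-contiguous disp25w field of the (already swapped) instruction word before it is swapped back and written out.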

View File

@@ -17,37 +17,36 @@
#include <zephyr/arch/cpu.h>
#include <zephyr/logging/log.h>
#include <kernel_arch_data.h>
#include <zephyr/arch/exception.h>
#include <zephyr/arch/arc/v2/exc.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_EXCEPTION_DEBUG
static void dump_arc_esf(const struct arch_esf *esf)
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
static void dump_arc_esf(const z_arch_esf_t *esf)
{
EXCEPTION_DUMP(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR
" r3: 0x%" PRIxPTR "", esf->r0, esf->r1, esf->r2, esf->r3);
EXCEPTION_DUMP(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR
" r7: 0x%" PRIxPTR "", esf->r4, esf->r5, esf->r6, esf->r7);
EXCEPTION_DUMP(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR
" r11: 0x%" PRIxPTR "", esf->r8, esf->r9, esf->r10, esf->r11);
EXCEPTION_DUMP("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
LOG_ERR(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR " r3: 0x%" PRIxPTR "",
esf->r0, esf->r1, esf->r2, esf->r3);
LOG_ERR(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR " r7: 0x%" PRIxPTR "",
esf->r4, esf->r5, esf->r6, esf->r7);
LOG_ERR(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR " r11: 0x%" PRIxPTR "",
esf->r8, esf->r9, esf->r10, esf->r11);
LOG_ERR("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
esf->r12, esf->r13, esf->pc);
EXCEPTION_DUMP(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "",
esf->blink, esf->status32);
LOG_ERR(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "", esf->blink, esf->status32);
#ifdef CONFIG_ARC_HAS_ZOL
EXCEPTION_DUMP("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR
" lp_count: 0x%" PRIxPTR "", esf->lp_end, esf->lp_start, esf->lp_count);
LOG_ERR("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR " lp_count: 0x%" PRIxPTR "",
esf->lp_end, esf->lp_start, esf->lp_count);
#endif /* CONFIG_ARC_HAS_ZOL */
}
#endif
void z_arc_fatal_error(unsigned int reason, const struct arch_esf *esf)
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
if (esf != NULL) {
dump_arc_esf(esf);
}
#endif /* CONFIG_EXCEPTION_DEBUG */
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
z_fatal_error(reason, esf);
}
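For completeness, a sketch of where this ends up on the application side: z_fatal_error() ultimately invokes the overridable k_sys_fatal_error_handler(), so an application can act on the same ESF dumped above. The ESF type is spelled `struct arch_esf` on one side of this diff and `z_arch_esf_t` on the other; the sketch assumes the former, and the `pc` field access is ARC-specific.

```
#include <zephyr/fatal.h>
#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(app_fatal, LOG_LEVEL_ERR);

/* overrides the weak default handler provided by the kernel */
void k_sys_fatal_error_handler(unsigned int reason, const struct arch_esf *esf)
{
	if (esf != NULL) {
		LOG_ERR("fatal error %u, pc=0x%lx", reason, (unsigned long)esf->pc);
	}

	k_fatal_halt(reason); /* never returns */
}
```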

View File

@@ -20,7 +20,6 @@
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/common/exc_handle.h>
#include <zephyr/logging/log.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_USERSPACE
@@ -52,8 +51,9 @@ static const struct z_exc_handle exceptions[] = {
*/
static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
{
#if defined(CONFIG_MULTITHREADING)
uint32_t guard_end, guard_start;
#if defined(CONFIG_MULTITHREADING)
const struct k_thread *thread = _current;
if (!thread) {
@@ -69,7 +69,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
* "guard" installed in this case, instead what's
* happening is that the stack pointer is crashing
* into the privilege mode stack buffer which
* immediately precedes it.
*/
guard_end = thread->stack_info.start;
guard_start = (uint32_t)thread->stack_obj;
@@ -88,6 +88,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
guard_end = thread->stack_info.start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
}
#endif /* CONFIG_MULTITHREADING */
/* treat any MPU exceptions within the guard region as a stack
* overflow. As some instructions
@@ -98,13 +99,12 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
if (fault_addr < guard_end && fault_addr >= guard_start) {
return true;
}
#endif /* CONFIG_MULTITHREADING */
return false;
}
#endif
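A minimal illustration (invented addresses) of the containment test z_check_thread_stack_fail() performs: a faulting access counts as a stack overflow only if it falls inside the guard window [guard_start, guard_end) computed above.

```
#include <stdbool.h>
#include <stdint.h>

static bool in_guard(uint32_t fault_addr, uint32_t guard_start, uint32_t guard_end)
{
	return (fault_addr >= guard_start) && (fault_addr < guard_end);
}

/* Example: a guard placed just below the usable stack area.
 * guard_start = 0x20000fe0, guard_end = 0x20001000 (stack_info.start)
 * in_guard(0x20000ff8, 0x20000fe0, 0x20001000) -> true  -> report overflow
 * in_guard(0x20001010, 0x20000fe0, 0x20001000) -> false -> some other fault
 */
```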
#ifdef CONFIG_EXCEPTION_DEBUG
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
/* For EV_ProtV, the numbering/semantics of the parameter are consistent across
* several codes, although not all combinations will be reported.
*
@@ -137,32 +137,32 @@ static void dump_protv_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Instruction fetch violation (%s)",
LOG_ERR("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
EXCEPTION_DUMP("Memory read protection violation (%s)",
LOG_ERR("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
EXCEPTION_DUMP("Memory write protection violation (%s)",
LOG_ERR("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
EXCEPTION_DUMP("Memory read-modify-write violation (%s)",
LOG_ERR("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
EXCEPTION_DUMP("Normal vector table in secure memory");
LOG_ERR("Normal vector table in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NS handler code located in S memory");
LOG_ERR("NS handler code located in S memory");
break;
case 0x12:
EXCEPTION_DUMP("NSC Table Range Violation");
LOG_ERR("NSC Table Range Violation");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -171,46 +171,46 @@ static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("double fault");
LOG_ERR("double fault");
break;
case 0x1:
EXCEPTION_DUMP("overlapping TLB entries");
LOG_ERR("overlapping TLB entries");
break;
case 0x2:
EXCEPTION_DUMP("fatal TLB error");
LOG_ERR("fatal TLB error");
break;
case 0x3:
EXCEPTION_DUMP("fatal cache error");
LOG_ERR("fatal cache error");
break;
case 0x4:
EXCEPTION_DUMP("internal memory error on instruction fetch");
LOG_ERR("internal memory error on instruction fetch");
break;
case 0x5:
EXCEPTION_DUMP("internal memory error on data fetch");
LOG_ERR("internal memory error on data fetch");
break;
case 0x6:
EXCEPTION_DUMP("illegal overlapping MPU entries");
LOG_ERR("illegal overlapping MPU entries");
if (parameter == 0x1) {
EXCEPTION_DUMP(" - jump and branch target");
LOG_ERR(" - jump and branch target");
}
break;
case 0x10:
EXCEPTION_DUMP("secure vector table not located in secure memory");
LOG_ERR("secure vector table not located in secure memory");
break;
case 0x11:
EXCEPTION_DUMP("NSC jump table not located in secure memory");
LOG_ERR("NSC jump table not located in secure memory");
break;
case 0x12:
EXCEPTION_DUMP("secure handler code not located in secure memory");
LOG_ERR("secure handler code not located in secure memory");
break;
case 0x13:
EXCEPTION_DUMP("NSC target address not located in secure memory");
LOG_ERR("NSC target address not located in secure memory");
break;
case 0x80:
EXCEPTION_DUMP("uncorrectable ECC or parity error in vector memory");
LOG_ERR("uncorrectable ECC or parity error in vector memory");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -219,54 +219,54 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
{
switch (cause) {
case 0x0:
EXCEPTION_DUMP("Privilege violation");
LOG_ERR("Privilege violation");
break;
case 0x1:
EXCEPTION_DUMP("disabled extension");
LOG_ERR("disabled extension");
break;
case 0x2:
EXCEPTION_DUMP("action point hit");
LOG_ERR("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
EXCEPTION_DUMP("N to S return using incorrect return mechanism");
LOG_ERR("N to S return using incorrect return mechanism");
break;
case 0x2:
EXCEPTION_DUMP("N to S return with incorrect operating mode");
LOG_ERR("N to S return with incorrect operating mode");
break;
case 0x3:
EXCEPTION_DUMP("IRQ/exception return fetch from wrong mode");
LOG_ERR("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
EXCEPTION_DUMP("attempt to halt secure processor in NS mode");
LOG_ERR("attempt to halt secure processor in NS mode");
break;
case 0x20:
EXCEPTION_DUMP("attempt to access secure resource from normal mode");
LOG_ERR("attempt to access secure resource from normal mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on resource access (APEX/UAUX/key NVM)");
LOG_ERR("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
EXCEPTION_DUMP("attempt to access secure APEX feature from NS mode");
LOG_ERR("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
EXCEPTION_DUMP("SID violation on access to APEX feature");
LOG_ERR("SID violation on access to APEX feature");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
@@ -274,7 +274,7 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
EXCEPTION_DUMP("interrupt %u", vector);
LOG_ERR("interrupt %u", vector);
return;
}
@@ -283,59 +283,59 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
*/
switch (vector) {
case ARC_EV_RESET:
EXCEPTION_DUMP("Reset");
LOG_ERR("Reset");
break;
case ARC_EV_MEM_ERROR:
EXCEPTION_DUMP("Memory Error");
LOG_ERR("Memory Error");
break;
case ARC_EV_INS_ERROR:
EXCEPTION_DUMP("Instruction Error");
LOG_ERR("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
EXCEPTION_DUMP("EV_MachineCheck");
LOG_ERR("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
EXCEPTION_DUMP("EV_TLBMissI");
LOG_ERR("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
EXCEPTION_DUMP("EV_TLBMissD");
LOG_ERR("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
EXCEPTION_DUMP("EV_ProtV");
LOG_ERR("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
EXCEPTION_DUMP("EV_PrivilegeV");
LOG_ERR("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
EXCEPTION_DUMP("EV_SWI");
LOG_ERR("EV_SWI");
break;
case ARC_EV_TRAP:
EXCEPTION_DUMP("EV_Trap");
LOG_ERR("EV_Trap");
break;
case ARC_EV_EXTENSION:
EXCEPTION_DUMP("EV_Extension");
LOG_ERR("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
EXCEPTION_DUMP("EV_DivZero");
LOG_ERR("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
EXCEPTION_DUMP("EV_DCError");
LOG_ERR("EV_DCError");
break;
case ARC_EV_MISALIGNED:
EXCEPTION_DUMP("EV_Misaligned");
LOG_ERR("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
EXCEPTION_DUMP("EV_VecUnit");
LOG_ERR("EV_VecUnit");
break;
default:
EXCEPTION_DUMP("unknown");
LOG_ERR("unknown");
break;
}
}
#endif /* CONFIG_EXCEPTION_DEBUG */
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
/*
* @brief Fault handler
@@ -345,7 +345,7 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
@@ -384,11 +384,10 @@ void z_arc_fault(struct arch_esf *esf, uint32_t old_sp)
return;
}
#ifdef CONFIG_EXCEPTION_DEBUG
EXCEPTION_DUMP("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
LOG_ERR("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
EXCEPTION_DUMP("Address 0x%x", exc_addr);
LOG_ERR("Address 0x%x", exc_addr);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
dump_exception_info(vector, cause, parameter);
#endif

View File

@@ -19,7 +19,7 @@
#include <zephyr/syscall.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
GTEXT(z_arc_fault)
GTEXT(_Fault)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -99,11 +99,11 @@ _exc_entry:
_save_exc_regs_into_stack
/* sp is parameter of z_arc_fault */
/* sp is parameter of _Fault */
MOVR r0, sp
/* ilink is the thread's original sp */
MOVR r1, ilink
jl z_arc_fault
jl _Fault
_exc_return:
/* the exception cause must be fixed in exception handler when exception returns

View File

@@ -44,11 +44,11 @@ K_KERNEL_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = K_KERNEL_STACK_BUFFER(
char *firq_sp = Z_KERNEL_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = K_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
char *firq_sp = Z_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif

View File

@@ -54,7 +54,7 @@ void arch_irq_offload(irq_offload_routine_t routine, const void *parameter)
}
/* need to be executed on every core in the system */
void arch_irq_offload_init(void)
int arc_irq_offload_init(void)
{
IRQ_CONNECT(IRQ_OFFLOAD_LINE, IRQ_OFFLOAD_PRIO, arc_irq_offload_handler, NULL, 0);
@@ -64,4 +64,8 @@ void arch_irq_offload_init(void)
* with generic irq_enable() but via z_arc_v2_irq_unit_int_enable().
*/
z_arc_v2_irq_unit_int_enable(IRQ_OFFLOAD_LINE);
return 0;
}
SYS_INIT(arc_irq_offload_init, POST_KERNEL, 0);
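As a usage note, the machinery initialized here backs Zephyr's irq_offload() test helper; a rough sketch of a caller (function names ours, API from `<zephyr/irq_offload.h>`) looks like this:

```
#include <zephyr/irq_offload.h>
#include <zephyr/kernel.h>
#include <zephyr/sys/__assert.h>

static void isr_work(const void *param)
{
	/* runs synchronously in interrupt context, on the offload line
	 * connected and enabled by arc_irq_offload_init() above
	 */
	ARG_UNUSED(param);
	__ASSERT(k_is_in_isr(), "expected ISR context");
}

void run_in_irq_context(void)
{
	int arg = 42;

	irq_offload(isr_work, &arg);
}
```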

View File

@@ -26,7 +26,7 @@ GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
#if defined(CONFIG_PM)
GTEXT(pm_system_resume)
GTEXT(z_pm_save_idle_exit)
#endif
/*
@@ -253,7 +253,7 @@ rirq_path:
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
PUSHR blink
jl pm_system_resume
jl z_pm_save_idle_exit
POPR blink
_skip_pm_save_idle_exit:

View File

@@ -35,7 +35,5 @@ config ARC_MPU
select GEN_PRIV_STACKS if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if (ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select ARCH_MEM_DOMAIN_SUPPORTS_ISOLATED_STACKS
select MEM_DOMAIN_ISOLATED_STACKS
help
Target has ARC MPU

View File

@@ -34,7 +34,7 @@ int arch_mem_domain_max_partitions_get(void)
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(const void *addr, size_t size, int write)
int arch_buffer_validate(void *addr, size_t size, int write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}

View File

@@ -207,7 +207,7 @@ int arc_core_mpu_get_max_domain_partition_regions(void)
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(const void *addr, size_t size, int write)
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
/*
* For ARC MPU, smaller region number takes priority.
@@ -238,7 +238,7 @@ int arc_core_mpu_buffer_validate(const void *addr, size_t size, int write)
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
void arc_mpu_init(void)
static int arc_mpu_init(void)
{
uint32_t num_regions = get_num_regions();
@@ -246,6 +246,7 @@ void arc_mpu_init(void)
if (mpu_config.num_regions > num_regions) {
__ASSERT(0, "Request to configure: %u regions (supported: %u)\n",
mpu_config.num_regions, num_regions);
return -EINVAL;
}
/* Disable MPU */
@@ -277,7 +278,10 @@ void arc_mpu_init(void)
/* Enable MPU */
arc_core_mpu_enable();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_COMMON_INTERNAL_H_ */

Some files were not shown because too many files have changed in this diff.