Compare commits


15 Commits

Author SHA1 Message Date
Johan Hedberg
8746c3fd01 Bluetooth: Mesh: Fix matching for "All Proxies" group address
The bt_mesh_fixed_group_match() function is intended to match the
various well-known group addresses; however, it was never updated when
Proxy support was added.

Fixes #19015

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-12-19 13:09:38 -05:00
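A minimal C sketch of the kind of check this commit describes (not the actual Zephyr implementation): a node accepts a well-known group address only if the corresponding feature is enabled, and the "All Proxies" case had been left out. The address values follow the Mesh Profile specification; the feature flags are local stand-ins.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Well-known group addresses from the Bluetooth Mesh Profile spec. */
#define ADDR_ALL_PROXIES 0xfffc
#define ADDR_ALL_FRIENDS 0xfffd
#define ADDR_ALL_RELAYS  0xfffe
#define ADDR_ALL_NODES   0xffff

/* Stand-ins for the node's feature state; in Zephyr these would come
 * from Kconfig options and runtime configuration. */
static bool proxy_enabled = true;
static bool friend_enabled;
static bool relay_enabled;

/* Accept a fixed group address only if the matching feature is enabled. */
static bool fixed_group_match(uint16_t addr)
{
	switch (addr) {
	case ADDR_ALL_NODES:
		return true;
	case ADDR_ALL_PROXIES:
		return proxy_enabled;   /* the case the fix adds */
	case ADDR_ALL_FRIENDS:
		return friend_enabled;
	case ADDR_ALL_RELAYS:
		return relay_enabled;
	default:
		return false;
	}
}

int main(void)
{
	printf("all-proxies matched: %d\n", fixed_group_match(ADDR_ALL_PROXIES));
	printf("all-friends matched: %d\n", fixed_group_match(ADDR_ALL_FRIENDS));
	return 0;
}
```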
Mariusz Skamra
a4aaf8bbf4 Bluetooth: ATT: Fix responding to unknown ATT command
The host shall ignore an unknown ATT PDU that has the Command Flag set.
Fixes a regression introduced in 3b271b8455.

Fixes: GATT/SR/UNS/BI-02-C
Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-12-19 12:37:04 -05:00
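A sketch of the rule this commit restores, assuming the standard ATT opcode layout in which bit 6 is the Command Flag: unknown commands are dropped silently, while unknown requests get an Error Response with "Request Not Supported" (0x06). The helper names are illustrative, not Zephyr API.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ATT_OP_CMD_FLAG       0x40 /* bit 6 of the ATT opcode */
#define ATT_ERR_NOT_SUPPORTED 0x06 /* "Request Not Supported" */

/* Illustrative stand-in: opcodes this stack actually implements. */
static bool att_opcode_known(uint8_t op)
{
	return op == 0x02 /* Exchange MTU Request */ ||
	       op == 0x0a /* Read Request */;
}

static void att_handle_pdu(uint8_t op)
{
	if (att_opcode_known(op)) {
		printf("dispatch opcode 0x%02x\n", op);
		return;
	}

	if (op & ATT_OP_CMD_FLAG) {
		/* Unknown *command*: ignore it silently. */
		printf("ignore unknown command 0x%02x\n", op);
		return;
	}

	/* Unknown *request*: reply with an Error Response. */
	printf("send error 0x%02x for unknown request 0x%02x\n",
	       ATT_ERR_NOT_SUPPORTED, op);
}

int main(void)
{
	att_handle_pdu(0x0a); /* known request */
	att_handle_pdu(0x1f); /* unknown request -> error response */
	att_handle_pdu(0x5f); /* unknown command -> ignored */
	return 0;
}
```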
Luiz Augusto von Dentz
f8ba0cccb7 Bluetooth: GATT: Fix not storing SC changes
The CCC storage is no longer declared separately, so checking whether
ccc->cfg matches sc_ccc_cfg no longer works. Instead, use the
cfg_changed callback and match against sc_ccc_cfg_changed.

Fixes #19267

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-12-19 11:01:57 -05:00
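A schematic illustration of the change described above, using simplified local types rather than the real Zephyr structures: since the CCC configuration is no longer a separately declared object whose pointer can be compared, the Service Changed CCC is identified by its cfg_changed callback instead.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the GATT CCC bookkeeping structures. */
struct ccc_cfg {
	unsigned short value;
};

struct ccc {
	struct ccc_cfg cfg; /* embedded, so no shared pointer to compare */
	void (*cfg_changed)(unsigned short value);
};

static void sc_ccc_cfg_changed(unsigned short value)
{
	printf("Service Changed CCC updated: 0x%04x\n", value);
}

/* Recognise the Service Changed CCC by its callback, not its storage. */
static bool is_sc_ccc(const struct ccc *ccc)
{
	return ccc->cfg_changed == sc_ccc_cfg_changed;
}

int main(void)
{
	struct ccc sc = { .cfg = { 0 }, .cfg_changed = sc_ccc_cfg_changed };
	struct ccc other = { .cfg = { 0 }, .cfg_changed = NULL };

	printf("sc matches: %d, other matches: %d\n",
	       is_sc_ccc(&sc), is_sc_ccc(&other));
	return 0;
}
```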
Vinayak Kariappa Chettimada
1b31b7eb79 Bluetooth: controller: split: Fix to reject invalid enable command
Fix to reject invalid advertise and scan enable commands.

Fixes BT HCI.TS.5.1.1 tests:
HCI/DDI/BI-06-C
HCI/DDI/BI-07-C

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-12-19 11:01:37 -05:00
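A sketch of the parameter validation this fix adds, assuming the usual HCI convention that an enable field may only be 0x00 or 0x01 and that anything else is rejected with "Invalid HCI Command Parameters" (0x12); the function name is illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define HCI_ERR_SUCCESS       0x00
#define HCI_ERR_INVALID_PARAM 0x12 /* Invalid HCI Command Parameters */

/* Validate the 'enable' parameter of LE Set Advertising Enable /
 * LE Set Scan Enable before acting on it. */
static uint8_t le_set_enable(uint8_t enable)
{
	if (enable > 0x01) {
		return HCI_ERR_INVALID_PARAM; /* reject instead of enabling */
	}

	/* ... apply the new advertising/scanning state here ... */
	return HCI_ERR_SUCCESS;
}

int main(void)
{
	printf("enable=0x01 -> status 0x%02x\n", le_set_enable(0x01));
	printf("enable=0x02 -> status 0x%02x\n", le_set_enable(0x02));
	return 0;
}
```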
Mariusz Skamra
0bab5dcec1 Bluetooth: tester: Adapt to BTP Get Attribute Value API change
Adapt gatt_get_attribute_value_cmd to the recent changes in the API.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-12-19 11:01:20 -05:00
Johan Hedberg
feef3a64ca Bluetooth: Mesh: Fix Clear Procedure start timestamp initialization
The start timestamp was supposed to signify the starting point of the
clear procedure. The code was incorrectly initializing it to the *end*
point of the procedure.

Fixes #19263

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-12-19 11:01:05 -05:00
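A small sketch of the initialization bug described above, with a stand-in uptime counter in place of the kernel clock: the start timestamp must be captured when the procedure begins, not computed as the point where it will end.

```c
#include <stdint.h>
#include <stdio.h>

#define CLEAR_DURATION_MS 10000

/* Stand-in for a monotonic uptime clock (e.g. Zephyr's k_uptime_get()). */
static int64_t uptime_ms = 5000;

struct clear_proc {
	int64_t start;
};

static void clear_start_buggy(struct clear_proc *p)
{
	/* Bug: this records where the procedure will *end*. */
	p->start = uptime_ms + CLEAR_DURATION_MS;
}

static void clear_start_fixed(struct clear_proc *p)
{
	/* Fix: record the actual starting point. */
	p->start = uptime_ms;
}

int main(void)
{
	struct clear_proc a, b;

	clear_start_buggy(&a);
	clear_start_fixed(&b);
	printf("buggy start=%lld ms, fixed start=%lld ms\n",
	       (long long)a.start, (long long)b.start);
	return 0;
}
```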
Yannis Damigos
3bfd36efd1 i2c_ll_stm32_v2: Send STOP manually after NACK
In master transmitter mode, AutoEndMode is
always disabled, so we need to send STOP
manually if a NACK is received.

Fixes #19059

Signed-off-by: Yannis Damigos <giannis.damigos@gmail.com>
2019-12-19 11:00:44 -05:00
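A hedged sketch of the NACK handling the commit describes. The bit names mirror common STM32 I2C v2 conventions (NACKF in ISR, STOP in CR2), but the register block here is a local stand-in rather than the vendor header.

```c
#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the STM32 I2C v2 register block. */
struct i2c_regs {
	uint32_t ISR;
	uint32_t ICR;
	uint32_t CR2;
};

#define I2C_ISR_NACKF  (1u << 4)  /* NACK received flag */
#define I2C_ICR_NACKCF (1u << 4)  /* NACK flag clear */
#define I2C_CR2_STOP   (1u << 14) /* generate a STOP condition */

/* Master-transmitter path: with AutoEnd disabled the hardware will not
 * generate a STOP by itself, so do it explicitly when the slave NACKs. */
static void handle_nack(struct i2c_regs *i2c)
{
	if (i2c->ISR & I2C_ISR_NACKF) {
		i2c->ICR |= I2C_ICR_NACKCF; /* clear the flag */
		i2c->CR2 |= I2C_CR2_STOP;   /* release the bus */
	}
}

int main(void)
{
	struct i2c_regs regs = { .ISR = I2C_ISR_NACKF };

	handle_nack(&regs);
	printf("STOP requested: %d\n", (regs.CR2 & I2C_CR2_STOP) != 0);
	return 0;
}
```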
Joakim Andersson
2f507cd9ff Bluetooth: ATT: Fix disconnected ATT not releasing buffers
Fix a bug in ATT reset handling: queued notification buffers were not
released when the connection was terminated.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 11:00:16 -05:00
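A generic sketch of the cleanup this fix adds, using a plain linked list in place of Zephyr's buffer and FIFO types: when the connection is terminated, every notification buffer still queued on the ATT channel must be released, otherwise the buffer pool leaks.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for a queued notification buffer. */
struct buf {
	struct buf *next;
	int id;
};

struct att_chan {
	struct buf *tx_queue; /* buffers waiting to be sent */
};

/* Called on disconnect: drain the queue and release every buffer. */
static void att_reset(struct att_chan *chan)
{
	struct buf *b = chan->tx_queue;

	while (b != NULL) {
		struct buf *next = b->next;

		printf("releasing queued buffer %d\n", b->id);
		free(b); /* in Zephyr this would be a net_buf unref */
		b = next;
	}
	chan->tx_queue = NULL;
}

int main(void)
{
	struct att_chan chan = { .tx_queue = NULL };

	for (int i = 0; i < 3; i++) {
		struct buf *b = malloc(sizeof(*b));

		if (b == NULL) {
			break;
		}
		b->id = i;
		b->next = chan.tx_queue;
		chan.tx_queue = b;
	}
	att_reset(&chan);
	return 0;
}
```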
Joakim Andersson
5ccd5c753d Bluetooth: host: Fix whitelist for non-central bluetooth applications
Fix a compilation issue when using the whitelist in Bluetooth
applications that do not have CONFIG_BT_CENTRAL defined.
These functions are useful even for the broadcaster and observer roles.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:59:54 -05:00
Joakim Andersson
d515cf298a Bluetooth: GATT: Fix bug in bt_gatt_attr_next for static handles
Fix bug in bt_gatt_attr_next when given an attribute that has static
allocation. The handle is then 0 and the function would always return
the attribute with handle 1.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:59:37 -05:00
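A simplified illustration of the lookup problem described above: statically registered attributes can carry handle 0, so a "find handle + 1" lookup degenerates into always searching for handle 1. The sketch walks the static table by position instead; the types and names are local stand-ins, not the Zephyr API.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct attr {
	uint16_t handle; /* 0 for statically declared attributes */
	const char *name;
};

static struct attr table[] = {
	{ 0, "svc" }, { 0, "chrc" }, { 0, "value" },
};
#define TABLE_LEN (sizeof(table) / sizeof(table[0]))

/* Buggy idea: with handle == 0 this always searches for handle 1. */
static struct attr *next_by_handle(struct attr *a)
{
	uint16_t want = a->handle + 1;

	for (size_t i = 0; i < TABLE_LEN; i++) {
		if (table[i].handle == want) {
			return &table[i];
		}
	}
	return NULL;
}

/* Fixed idea: for static attributes, step to the next table slot. */
static struct attr *next_by_position(struct attr *a)
{
	size_t idx = (size_t)(a - table);

	return (idx + 1 < TABLE_LEN) ? &table[idx + 1] : NULL;
}

int main(void)
{
	struct attr *n = next_by_position(&table[1]);

	printf("next of 'chrc' by position: %s\n", n ? n->name : "(none)");
	printf("next of 'chrc' by handle:   %p\n",
	       (void *)next_by_handle(&table[1]));
	return 0;
}
```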
Lingao Meng
1ebe758115 Bluetooth: Mesh: Fixed Provision Random buffer size
Fixed a minor issue: the buffer was missing a byte for the opcode.

Fixes: #19767

Signed-off-by: Lingao Meng <mengabc1086@gmail.com>
2019-12-19 10:58:59 -05:00
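A tiny sketch of the sizing issue, under the assumption that the PDU carries a one-byte opcode followed by the 16-byte Random value: the buffer has to account for both.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PROV_OPCODE_LEN 1
#define PROV_RANDOM_LEN 16

int main(void)
{
	/* Too small: leaves no room for the opcode byte. */
	uint8_t buggy[PROV_RANDOM_LEN];
	/* Correct: opcode + 16-byte random value. */
	uint8_t fixed[PROV_OPCODE_LEN + PROV_RANDOM_LEN];

	memset(fixed, 0, sizeof(fixed));
	fixed[0] = 0x06; /* Provisioning Random PDU type */

	printf("buggy size=%zu, fixed size=%zu\n",
	       sizeof(buggy), sizeof(fixed));
	return 0;
}
```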
Joakim Andersson
7304c52ecc Bluetooth: GATT: Fix gatt buffer leak for write commands and notify
Fix a GATT buffer leak: when bt_att_send returns an error, the
allocated buffer is never freed. A case was discovered where the link
was disconnected during the function call, so when GATT checked, the
link was still connected, but when ATT checked, the link had been
disconnected.

Fixes: #19889

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:58:40 -05:00
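A generic sketch of the leak pattern this commit fixes, with stand-in allocate/send/unref helpers: if the lower-layer send fails (for example because the link dropped between the GATT check and the ATT check), the caller must release the buffer it allocated.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct buf { int len; };

static struct buf *buf_alloc(void)
{
	return calloc(1, sizeof(struct buf));
}

static void buf_unref(struct buf *b)
{
	free(b);
}

/* On success the lower layer consumes the buffer; on failure the caller
 * keeps ownership and must release it. */
static int att_send(struct buf *b, bool link_up)
{
	if (!link_up) {
		return -1; /* e.g. the link dropped mid-call */
	}
	buf_unref(b);
	return 0;
}

/* Notification / write-without-response path. */
static int gatt_notify(bool link_up)
{
	struct buf *b = buf_alloc();
	int err;

	if (b == NULL) {
		return -1;
	}

	err = att_send(b, link_up);
	if (err) {
		buf_unref(b); /* the fix: without this the buffer leaks */
		return err;
	}
	return 0;
}

int main(void)
{
	printf("connected:    %d\n", gatt_notify(true));
	printf("disconnected: %d\n", gatt_notify(false));
	return 0;
}
```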
Erwan Gouriou
d4a1812e9e dt: stm32f0: Fix clock bus for SPI1 and few timers
There is no APB2 bus on the stm32f0 series.
What appears as APB2 in the CMSIS files is actually the
second group of APB (a.k.a. APB1_2).
Fix the nodes that use this wrong reference across the series.

Fixes #20310

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2019-12-19 10:58:20 -05:00
Pavlo Hamov
cc004036f6 drivers: i2c: stm32_Slave: Fix addr flag handling
In the main Addr handler code, the F1 workaround was used
unconditionally. Add a compile-time switch depending on the SoC family,
so the workaround does not affect the F2/F4 families.

Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
2019-12-19 10:57:43 -05:00
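A sketch of the compile-time gating described here. CONFIG_SOC_SERIES_STM32F1X is the style of Kconfig symbol Zephyr uses for SoC series selection; in this standalone example it is simply defined by hand for illustration.

```c
#include <stdio.h>

/* Pretend we are building for the F1 series; in a real build this would
 * come from Kconfig, not be defined by hand. */
#define CONFIG_SOC_SERIES_STM32F1X 1

static void slave_addr_match(void)
{
#if defined(CONFIG_SOC_SERIES_STM32F1X)
	/* F1-specific workaround for the ADDR flag handling. */
	printf("apply F1 ADDR workaround\n");
#else
	/* F2/F4 and friends: plain handling, no workaround applied. */
	printf("standard ADDR handling\n");
#endif
}

int main(void)
{
	slave_addr_match();
	return 0;
}
```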
Anas Nashif
ca3eb0eb31 doc: link-roles: convert bytes to string
Get the correct branch name as a string instead of bytes.

Fixes #19165

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-17 04:37:48 +08:00
32026 changed files with 648378 additions and 3021850 deletions


@@ -1,7 +1,8 @@
--mailback
--emacs
--summary-file
--show-types
--max-line-length=100
--max-line-length=80
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
@@ -10,22 +11,10 @@
--ignore SPLIT_STRING
--ignore VOLATILE
--ignore CONFIG_EXPERIMENTAL
--ignore PREFER_KERNEL_TYPES
--ignore PREFER_SECTION
--ignore AVOID_EXTERNS
--ignore NETWORKING_BLOCK_COMMENT_STYLE
--ignore DATE_TIME
--ignore MINMAX
--ignore CONST_STRUCT
--ignore FILE_PATH_CHANGES
--ignore SPDX_LICENSE_TAG
--ignore C99_COMMENT_TOLERANCE
--ignore REPEATED_WORD
--ignore UNDOCUMENTED_DT_STRING
--ignore DT_SPLIT_BINDING_PATCH
--ignore DT_SCHEMA_BINDING_PATCH
--ignore TRAILING_SEMICOLON
--ignore COMPLEX_MACRO
--ignore MULTISTATEMENT_MACRO_USE_DO_WHILE
--ignore ENOSYS
--ignore IS_ENABLED_CONFIG
--exclude ext


@@ -1,42 +1,82 @@
# SPDX-License-Identifier: Apache-2.0
# SPDX-License-Identifier: GPL-2.0
#
# Note: The list of ForEachMacros can be obtained using:
# clang-format configuration file. Intended for clang-format >= 4.
#
# git grep -h '^#define [^[:space:]]*FOR_EACH[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*FOR_EACH[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
# For more information, see:
#
# Documentation/process/clang-format.rst
# https://clang.llvm.org/docs/ClangFormat.html
# https://clang.llvm.org/docs/ClangFormatStyleOptions.html
#
# References:
# - https://clang.llvm.org/docs/ClangFormatStyleOptions.html
---
BasedOnStyle: LLVM
AlignConsecutiveMacros: AcrossComments
AllowShortBlocksOnASingleLine: Never
AccessModifierOffset: -4
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
#AlignEscapedNewlines: Left # Unknown to clang-format-4.0
AlignOperands: true
AlignTrailingComments: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortEnumsOnASingleLine: false
AllowShortFunctionsOnASingleLine: None
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AttributeMacros:
- __aligned
- __deprecated
- __packed
- __printf_like
- __syscall
- __syscall_always_inline
- __subsystem
BitFieldColonSpacing: After
BreakBeforeBraces: Linux
ColumnLimit: 100
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
AfterClass: false
AfterControlStatement: false
AfterEnum: false
AfterFunction: true
AfterNamespace: true
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
#AfterExternBlock: false # Unknown to clang-format-5.0
BeforeCatch: false
BeforeElse: false
IndentBraces: false
#SplitEmptyFunction: true # Unknown to clang-format-4.0
#SplitEmptyRecord: true # Unknown to clang-format-4.0
#SplitEmptyNamespace: true # Unknown to clang-format-4.0
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Custom
#BreakBeforeInheritanceComma: false # Unknown to clang-format-4.0
BreakBeforeTernaryOperators: false
BreakConstructorInitializersBeforeComma: false
#BreakConstructorInitializers: BeforeComma # Unknown to clang-format-4.0
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: false
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
#CompactNamespaces: false # Unknown to clang-format-4.0
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
Cpp11BracedListStyle: false
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
#FixNamespaceComments: false # Unknown to clang-format-4.0
# Taken from:
# git grep -h '^#define [^[:space:]]*for_each[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*for_each[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
ForEachMacros:
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'FOR_EACH_IDX'
- 'FOR_EACH_IDX_FIXED_ARG'
- 'FOR_EACH_NONEMPTY_TERM'
- 'for_each_linux_bus'
- 'for_each_linux_driver'
- 'metal_bitmap_for_each_clear_bit'
- 'metal_bitmap_for_each_set_bit'
- 'metal_for_each_page_size_down'
- 'metal_for_each_page_size_up'
- 'metal_list_for_each'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
@@ -52,40 +92,60 @@ ForEachMacros:
- 'SYS_SLIST_FOR_EACH_NODE'
- 'SYS_SLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
- 'Z_FOR_EACH'
- 'Z_FOR_EACH_ENGINE'
- 'Z_FOR_EACH_EXEC'
- 'Z_FOR_EACH_FIXED_ARG'
- 'Z_FOR_EACH_FIXED_ARG_EXEC'
- 'Z_FOR_EACH_IDX'
- 'Z_FOR_EACH_IDX_EXEC'
- 'Z_FOR_EACH_IDX_FIXED_ARG'
- 'Z_FOR_EACH_IDX_FIXED_ARG_EXEC'
- 'Z_GENLIST_FOR_EACH_CONTAINER'
- 'Z_GENLIST_FOR_EACH_CONTAINER_SAFE'
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- 'STRUCT_SECTION_FOREACH'
- 'TYPE_SECTION_FOREACH'
IfMacros:
- 'CHECKIF'
# Disabled for now, see bug https://github.com/zephyrproject-rtos/zephyr/issues/48520
#IncludeBlocks: Regroup
#IncludeBlocks: Preserve # Unknown to clang-format-5.0
IncludeCategories:
- Regex: '^".*\.h"$'
Priority: 0
- Regex: '^<(assert|complex|ctype|errno|fenv|float|inttypes|limits|locale|math|setjmp|signal|stdarg|stdbool|stddef|stdint|stdio|stdlib|string|tgmath|time|wchar|wctype)\.h>$'
Priority: 1
- Regex: '^\<zephyr/.*\.h\>$'
Priority: 2
- Regex: '.*'
Priority: 3
Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
#IndentPPDirectives: None # Unknown to clang-format-5.0
IndentWidth: 8
InsertBraces: true
SpaceBeforeParens: ControlStatementsExceptControlMacros
SortIncludes: Never
UseTab: ForContinuationAndIndentation
WhitespaceSensitiveMacros:
- STRINGIFY
- Z_STRINGIFY
IndentWrappedFunctionNames: false
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: Inner
#ObjCBinPackProtocolList: Auto # Unknown to clang-format-5.0
ObjCBlockIndentWidth: 8
ObjCSpaceAfterProperty: true
ObjCSpaceBeforeProtocolList: true
# Taken from git's rules
#PenaltyBreakAssignment: 10 # Unknown to clang-format-4.0
PenaltyBreakBeforeFirstCallParameter: 30
PenaltyBreakComment: 10
PenaltyBreakFirstLessLess: 0
PenaltyBreakString: 10
PenaltyExcessCharacter: 100
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: false
SortIncludes: false
#SortUsingDeclarations: false # Unknown to clang-format-4.0
SpaceAfterCStyleCast: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
#SpaceBeforeCtorInitializerColon: true # Unknown to clang-format-5.0
#SpaceBeforeInheritanceColon: true # Unknown to clang-format-5.0
SpaceBeforeParens: ControlStatements
#SpaceBeforeRangeBasedForLoopColon: true # Unknown to clang-format-5.0
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: false
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp03
TabWidth: 8
UseTab: Always
...


@@ -12,10 +12,10 @@ coverage:
patch: yes
changes: no
# ignore:
# - "tests/**/*"
# - "samples/**/*"
# - "ext/hal/**/*"
#ignore:
# - "tests/**/*"
# - "samples/**/*"
# - "ext/hal/**/*"
parsers:
gcov:


@@ -9,28 +9,12 @@ charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 100
# Assembly
[*.S]
indent_style = tab
indent_size = 8
# C
[*.{c,h}]
indent_style = tab
indent_size = 8
# C++
[*.{cpp,hpp}]
indent_style = tab
indent_size = 8
# Linker Script
[*.ld]
indent_style = tab
indent_size = 8
# Python
[*.py]
indent_style = space
@@ -41,13 +25,8 @@ indent_size = 4
indent_style = tab
indent_size = 8
# reStructuredText
[*.rst]
indent_style = space
indent_size = 3
# YAML
[*.{yml,yaml}]
[*.yml]
indent_style = space
indent_size = 2
@@ -75,18 +54,3 @@ indent_size = 2
# Makefile
[Makefile]
indent_style = tab
indent_size = 8
# Device tree
[*.{dts,dtsi,overlay}]
indent_style = tab
indent_size = 8
# Git commit messages
[COMMIT_EDITMSG]
max_line_length = 75
# Kconfig
[Kconfig*]
indent_style = tab
indent_size = 8

.gitattributes

@@ -3,11 +3,3 @@
.gitattributes export-ignore
.gitignore export-ignore
.mailmap export-ignore
# Tell git to not diff certain files
*.svg -diff
# Tell linguist that generated test pattern files should not be included in the
# language statistics.
*.pat linguist-generated
*.svg linguist-generated


@@ -1,57 +0,0 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Notes (delete this)**
Github Discussions (https://github.com/zephyrproject-rtos/zephyr/discussions)
are available to first verify that the issue is a genuine Zephyr bug and not a
consequence of Zephyr services misuse.
This issue list is only for bugs in the main Zephyr code base
(https://github.com/zephyrproject-rtos/zephyr/). If the bug is for a project
fork (such as NCS) specific feature, please open an issue in the fork project
instead.
**Describe the bug**
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
the problem you're facing:
- What target platform are you using?
- What have you tried to diagnose or workaround this issue?
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
- ...
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...


@@ -1,20 +0,0 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Feature Request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.


@@ -1,42 +0,0 @@
---
name: Contributor Nomination
about: Nominate a GitHub user for additional rights on the Zephyr Project
title: ''
labels: Role Nomination
assignees: ''
---
# Background
The [TSC Project Roles] defines the main roles for the Zephyr Project, including
Maintainer, Collaborator, and Contributor.
By default anyone that contributes code or documentation is a Contributor, but
with the lowest [GitHub Permission Level] of Read. For example, Contributors
with Read permission do not have the permission to add reviewers to a pull
request.
Use this template to nominate a GitHub user for the Contributor role with
Triage permission level, which allows the user to add reviewers to a pull
request and be added as a reviewer by other users.
# Nomination
## GitHub User
Provide the following information about the GitHub user:
1. Full Name
1. GitHub username
1. Organization (optional)
## Supporting Documents
Add links to 3-5 GitHub pull requests, in the Zephyr project, authored or
reviewed by the GitHub user that demonstrate the user's dedication to the
Zephyr project.
[TSC Project Roles]: <https://docs.zephyrproject.org/latest/project/project_roles.html>
[GitHub Permission Level]: <https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization>


@@ -1,68 +0,0 @@
---
name: External Source Code
about: Submit a proposal to integrate external source code
title: ''
labels: TSC
assignees: ''
---
## Origin
Name of project hosting the original open source code
Provide a link to the source
## Purpose
Brief description of what this software does
## Mode of integration
Describe whether you'd like to integrate this external component in the main tree
or as a module, and why. If the mode of integration is a module, suggest a
repository name for the module
## Maintainership
List the person(s) that will be maintaining the integration of this external code
for the foreseeable future. Please use GitHub IDs to identify them. You can
choose to identify a single maintainer only or add collaborators as well
## Pull Request
Pull request (if any) with the actual implementation of the integration, be it
in the main tree or as a module (pointing to your own fork for now). Make sure
the PR is correctly labeled as "DNM"
## Description
Long description that will help reviewers discuss suitability of the
component to solve the problem at hand (there may be a better options
available.)
What is its primary functionality (e.g., SQLLite is a lightweight
database)?
What problem are you trying to solve? (e.g., a state store is
required to maintain ...)
Why is this the right component to solve it (e.g., SQLite is small,
easy to use, and has a very liberal license.)
## Dependencies
What other components does this package depend on?
Will the Zephyr project have a direct dependency on the component, or
will it be included via an abstraction layer with this component as a
replaceable implementation?
## Revision
Version or SHA you would like to integrate initially
## License
Please use an SPDX identifier (https://spdx.org/licenses/), such as
``BSD-3-Clause``


@@ -1,41 +0,0 @@
---
name: Binary blobs
about: Submit a proposal to integrate binary blob(s)
title: ''
labels: TSC
assignees: ''
---
## Origin
Describe where the binary blob(s) originate from
## Type
- [ ] Precompiled library
- [ ] Firmware image
## Module
The Zephyr module that this blob(s) will be referenced from
## Purpose
Brief description of what the blob(s) do. It is especially important to describe
the functionality that the blob(s) provide, to the largest extent possible
## Pull Request
Link to the Pull request with the actual implementation of the integration. If
you are submitting this as part of a new module you may link to the same Pull
Request that introduced the new module.
## Dependencies
What other components do the blob(s) depend on, if any?
## License
Document the license the blob(s) are distributed under

.github/ISSUE_TEMPLATE/bug_report.md

@@ -0,0 +1,39 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
What have you tried to diagnose or workaround this issue?
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Screenshots or console output**
If applicable, add a screenshot (drag-and-drop an image), or console logs
(cut-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue.
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/enhancement.md

@@ -0,0 +1,20 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: feature request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

.github/SECURITY.md

@@ -1,22 +0,0 @@
# Security Policy
## Supported versions
The Zephyr project supports the following versions with security
updates:
- The most recent release, and the release prior to that.
- Active LTS releases.
At this time, with the latest release of v3.3, the supported
versions are:
- v2.7: Current LTS
- v3.2: Prior release
- v3.3: Current release
## Reporting process
Please see our [Security Vulnerability
Reporting](https://docs.zephyrproject.org/latest/security/reporting.html)
page for details on the process.


@@ -1,16 +0,0 @@
license:
main: apache-2.0
report_missing: true
category: Permissive
copyright:
check: true
exclude:
extensions:
- yml
- yaml
- html
- rst
- conf
- cfg
langs:
- HTML


@@ -1,51 +0,0 @@
name: Pull Request Assigner
on:
pull_request_target:
types:
- opened
- synchronize
- reopened
- ready_for_review
branches:
- main
- v*-branch
issues:
types:
- labeled
jobs:
assignment:
name: Pull Request Assignment
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U PyGithub>=1.55 west
- name: Check out source code
uses: actions/checkout@v3
- name: Run assignment script
env:
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
FLAGS="-v"
FLAGS+=" -o ${{ github.event.repository.owner.login }}"
FLAGS+=" -r ${{ github.event.repository.name }}"
FLAGS+=" -M MAINTAINERS.yml"
if [ "${{ github.event_name }}" = "pull_request_target" ]; then
FLAGS+=" -P ${{ github.event.pull_request.number }}"
elif [ "${{ github.event_name }}" = "issues" ]; then
FLAGS+=" -I ${{ github.event.issue.number }}"
elif [ "${{ github.event_name }}" = "schedule" ]; then
FLAGS+=" --modules"
else
echo "Unknown event: ${{ github.event_name }}"
exit 1
fi
python3 scripts/set_assignees.py $FLAGS


@@ -1,31 +0,0 @@
name: Backport
on:
pull_request_target:
types:
- closed
- labeled
branches:
- main
jobs:
backport:
name: Backport
runs-on: ubuntu-22.04
# Only react to merged PRs for security reasons.
# See https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target.
if: >
github.event.pull_request.merged &&
(
github.event.action == 'closed' ||
(
github.event.action == 'labeled' &&
contains(github.event.label.name, 'backport')
)
)
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@v2.0.3-3
with:
github_token: ${{ secrets.ZB_GITHUB_TOKEN }}
issue_labels: Backport
labels_template: '["Backport"]'


@@ -1,31 +0,0 @@
name: Backport Issue Check
on:
pull_request_target:
branches:
- v*-branch
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Check out source code
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}


@@ -1,99 +0,0 @@
# Copyright (c) 2023 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister BlackBox TestSuite
on:
push:
branches:
- main
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/blackbox_tests.yml'
pull_request:
branches:
- main
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister_blackbox/**'
- '.github/workflows/blackbox_tests.yml'
jobs:
twister-tests:
name: Twister Black Box Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
steps:
- name: Apply Container Owner Mismatch Workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Checkout
uses: actions/checkout@v3
- name: Environment Setup
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Set Up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Go Into Venv
shell: bash
run: |
python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
echo "$(which python)"
- name: Install Packages
run: |
python3 -m pip install -U -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt -r scripts/requirements-run-test.txt
- name: Run Pytest For Twister Black Box Tests
shell: bash
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run twister tests"
source zephyr-env.sh
PYTHONPATH="./scripts/tests" pytest ./scripts/tests/twister_blackbox
- name: Upload Unit Test Results
if: success() || failure()
uses: actions/upload-artifact@v2
with:
name: Black Box Test Results (Python ${{ matrix.python-version }})
path: |
twister-out*/twister.log
twister-out*/twister.json
twister-out*/testplan.log
retention-days: 14
- name: Clear Workspace
if: success() || failure()
run: |
rm -rf twister-out*/


@@ -1,28 +0,0 @@
name: Publish BabbleSim Tests Results
on:
workflow_run:
workflows: ["BabbleSim Tests"]
types:
- completed
jobs:
bsim-test-results:
name: "Publish BabbleSim Test Results"
runs-on: ubuntu-22.04
if: github.event.workflow_run.conclusion != 'skipped'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
run_id: ${{ github.event.workflow_run.id }}
- name: Publish BabbleSim Test Results
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: BabbleSim Test Results
comment_mode: off
commit: ${{ github.event.workflow_run.head_sha }}
event_file: event/event.json
event_name: ${{ github.event.workflow_run.event }}
files: "bsim-test-results/**/bsim_results.xml"


@@ -1,165 +0,0 @@
name: BabbleSim Tests
on:
pull_request:
paths:
- ".github/workflows/bsim-tests.yaml"
- ".github/workflows/bsim-tests-publish.yaml"
- "west.yml"
- "subsys/bluetooth/**"
- "tests/bsim/**"
- "samples/bluetooth/**"
- "boards/posix/**"
- "soc/posix/**"
- "arch/posix/**"
- "include/zephyr/arch/posix/**"
- "scripts/native_simulator/**"
- "samples/net/sockets/echo_*/**"
- "modules/openthread/**"
- "subsys/net/l2/openthread/**"
- "include/zephyr/net/openthread.h"
- "drivers/ieee802154/**"
- "include/zephyr/net/ieee802154*"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bsim-test:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
bsim_bluetooth_test_results_file: ./bsim_bluetooth/bsim_results.xml
bsim_networking_test_results_file: ./bsim_net/bsim_results.xml
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Environment Setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Check common triggering files
uses: tj-actions/changed-files@v35
id: check-common-files
with:
files: |
.github/workflows/bsim-tests.yaml
.github/workflows/bsim-tests-publish.yaml
west.yml
boards/posix/**
soc/posix/**
arch/posix/**
include/zephyr/arch/posix/**
scripts/native_simulator/**
tests/bsim/*
- name: Check if Bluethooth files changed
uses: tj-actions/changed-files@v35
id: check-bluetooth-files
with:
files: |
tests/bsim/bluetooth/**
samples/bluetooth/**
subsys/bluetooth/**
- name: Check if Networking files changed
uses: tj-actions/changed-files@v35
id: check-networking-files
with:
files: |
tests/bsim/net/**
samples/net/sockets/echo_*/**
modules/openthread/**
subsys/net/l2/openthread/**
include/zephyr/net/openthread.h
drivers/ieee802154/**
include/zephyr/net/ieee802154*
- name: Update BabbleSim to manifest revision
if: >
steps.check-bluetooth-files.outputs.any_changed == 'true'
|| steps.check-networking-files.outputs.any_changed == 'true'
|| steps.check-common-files.outputs.any_changed == 'true'
run: |
export BSIM_VERSION=$( west list bsim -f {revision} )
echo "Manifest points to bsim sha $BSIM_VERSION"
cd /opt/bsim_west/bsim
git fetch -n origin ${BSIM_VERSION}
git config --global advice.detachedHead false
git checkout ${BSIM_VERSION}
west update
make everything -s -j 8
- name: Run Bluetooth Tests with BSIM
if: steps.check-bluetooth-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_bluetooth nice tests/bsim/bluetooth/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bluetooth_test_results_file} \
SEARCH_PATH=tests/bsim/bluetooth/ tests/bsim/run_parallel.sh
- name: Run Networking Tests with BSIM
if: steps.check-networking-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_net nice tests/bsim/net/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_networking_test_results_file} \
SEARCH_PATH=tests/bsim/net/ tests/bsim/run_parallel.sh
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v3
with:
name: bsim-test-results
path: |
./bsim_bluetooth/bsim_results.xml
./bsim_net/bsim_results.xml
${{ github.event_path }}
if-no-files-found: warn
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v3
with:
name: event
path: |
${{ github.event_path }}


@@ -1,67 +0,0 @@
# Copyright (c) 2021, 2022 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
# Make a snapshot of open bugs as a python pickle file, compressed
# using xz. Upload the xz file to Amazon S3.
name: Bug Snapshot
on:
workflow_dispatch:
branches: [main]
schedule:
# Run daily at 14:05
- cron: '5 14 * * *'
jobs:
make_bugs_pickle:
name: Make bugs pickle
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Snapshot bugs
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
BUGS_PICKLE_FILENAME="zephyr-bugs-$(date -I).pickle.xz"
BUGS_PICKLE_PATH="${RUNNER_TEMP}/${BUGS_PICKLE_FILENAME}"
set -euo pipefail
python3 scripts/make_bugs_pickle.py | xz > ${BUGS_PICKLE_PATH}
echo "BUGS_PICKLE_FILENAME=${BUGS_PICKLE_FILENAME}" >> ${GITHUB_ENV}
echo "BUGS_PICKLE_PATH=${BUGS_PICKLE_PATH}" >> ${GITHUB_ENV}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_BUG_SNAPSHOT_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
run: |
REPOSITORY_NAME="${GITHUB_REPOSITORY#*/}"
PUBLISH_BUCKET="builds.zephyrproject.org"
PUBLISH_DOMAIN="builds.zephyrproject.io"
if [ "${{github.event_name}}" = "schedule" ]; then
PUBLISH_ROOT="${REPOSITORY_NAME}/bug-snapshot/daily"
else
PUBLISH_ROOT="${REPOSITORY_NAME}/bug-snapshot"
fi
PUBLISH_UPLOAD_URI="s3://${PUBLISH_BUCKET}/${PUBLISH_ROOT}/${BUGS_PICKLE_FILENAME}"
PUBLISH_DOWNLOAD_URI="https://${PUBLISH_DOMAIN}/${PUBLISH_ROOT}/${BUGS_PICKLE_FILENAME}"
aws s3 cp --quiet ${BUGS_PICKLE_PATH} ${PUBLISH_UPLOAD_URI}
echo "Bug pickle is available at: ${PUBLISH_DOWNLOAD_URI}" >> ${GITHUB_STEP_SUMMARY}


@@ -1,163 +0,0 @@
name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["native_posix"]
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-16
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config --global update.narrow true
# In some cases modules are left in a state where they can't be
# updated (i.e. when we cancel a job and the builder is killed),
# So first retry to update, if that does not work, remove all modules
# and start over. (Workaround until we implement more robust module
# west caching).
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
- name: Check Environment
run: |
cmake --version
${LLVM_TOOLCHAIN_PATH}/bin/clang --version
gcc --version
ls -la
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- name: Run Tests with Twister
id: twister
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
run: |
ccache -s
ccache -p
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
clang-build-results:
name: "Publish Unit Tests Results"
needs: clang-build
runs-on: ubuntu-22.04
if: (success() || failure() ) && needs.clang-build.outputs.report_needed != 0
steps:
- name: Download Artifacts
uses: actions/download-artifact@v3
with:
path: artifacts
- name: Merge Test Results
run: |
pip3 install junitparser junit2html
junitparser merge artifacts/*/twister.xml junit.xml
junit2html junit.xml junit-clang.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit-clang.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
check_name: Unit Test Results
files: "**/twister.xml"
comment_mode: off


@@ -1,176 +0,0 @@
name: Code Coverage with codecov
on:
schedule:
- cron: '25 */3 * * 1-5'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov:
if: github.repository == 'zephyrproject-rtos/zephyr'
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["native_posix", "qemu_x86", "unit_testing"]
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west update 1> west.update.log || west update 1> west.update-2.log
- name: Check Environment
run: |
cmake --version
gcc --version
ls -la
- name: Prepare ccache keys
id: ccache_cache_prop
shell: cmake -P {0}
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- name: Run Tests with Twister (Push)
continue-on-error: true
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
./scripts/twister --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage -T tests
- name: Generate Coverage Report
run: |
mv twister-out/coverage.info lcov.pre.info
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* \
--remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* \
--remove lcov.pre.info *generated* \
-o coverage/reports/${{ matrix.platform }}.info --rc lcov_branch_coverage=1
- name: ccache stats post
run: |
ccache -s
ccache -p
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
codecov-results:
name: "Publish Coverage Results"
needs: codecov
runs-on: ubuntu-22.04
# the codecov job might be skipped, we don't need to run this job then
if: success() || failure()
steps:
- name: checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Download Artifacts
uses: actions/download-artifact@v3
with:
path: coverage/reports
- name: Move coverage files
run: |
mv ./coverage/reports/*/*.info ./coverage/reports
ls -la ./coverage/reports
- name: Generate list of coverage files
id: get-coverage-files
shell: cmake -P {0}
run: |
file(GLOB INPUT_FILES_LIST "coverage/reports/*.info")
set(MERGELIST "")
set(FILELIST "")
foreach(ITEM ${INPUT_FILES_LIST})
get_filename_component(f ${ITEM} NAME)
if(FILELIST STREQUAL "")
set(FILELIST "${f}")
else()
set(FILELIST "${FILELIST},${f}")
endif()
endforeach()
foreach(ITEM ${INPUT_FILES_LIST})
get_filename_component(f ${ITEM} NAME)
if(MERGELIST STREQUAL "")
set(MERGELIST "-a ${f}")
else()
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
- name: Merge coverage files
run: |
sudo apt-get update
sudo apt-get install -y lcov
cd ./coverage/reports
lcov ${{ steps.get-coverage-files.outputs.mergefiles }} -o merged.info --rc lcov_branch_coverage=1
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@v3
with:
directory: ./coverage/reports
env_vars: OS,PYTHON
fail_ci_if_error: false
verbose: true
files: merged.info


@@ -1,59 +0,0 @@
name: Coding Guidelines
on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-22.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install unidiff
pip3 install wheel
pip3 install sh
- name: Install Packages
run: |
sudo apt-get update
sudo apt-get install coccinelle
- name: Run Coding Guildeines Checks
continue-on-error: true
id: coding_guidelines
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=$PWD
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
git rebase origin/${BASE_REF}
source zephyr-env.sh
# debug
ls -la
git log --pretty=oneline | head -n 10
./scripts/ci/guideline_check.py --output output.txt -c origin/${BASE_REF}..
- name: check-warns
run: |
if [[ -s "output.txt" ]]; then
errors=$(cat output.txt)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=output.txt::$errors"
exit 1;
fi


@@ -1,90 +0,0 @@
name: Compliance Checks
on: pull_request
jobs:
check_compliance:
runs-on: ubuntu-22.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic lxml junitparser gitlint pylint pykwalify yamllint
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
git remote -v
# Ensure there's no merge commits in the PR
[[ "$(git rev-list --merges --count origin/${BASE_REF}..)" == "0" ]] || \
(echo "::error ::Merge commits not allowed, rebase instead";false)
git rebase origin/${BASE_REF}
# debug
git log --pretty=oneline | head -n 10
west init -l . || true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Run Compliance Tests
continue-on-error: true
id: compliance
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=$PWD
# debug
ls -la
git log --pretty=oneline | head -n 10
./scripts/ci/check_compliance.py --annotate -e KconfigBasic \
-c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: compliance.xml
path: compliance.xml
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
files=($(./scripts/ci/check_compliance.py -l))
for file in "${files[@]}"; do
f="${file}.txt"
if [[ -s $f ]]; then
errors=$(cat $f)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=${f}::$errors"
exit=1
fi
done
if [ "${exit}" == "1" ]; then
exit 1;
fi


@@ -1,38 +0,0 @@
# Copyright (c) 2020 Intel Corp.
# SPDX-License-Identifier: Apache-2.0
name: Publish commit for daily testing
on:
schedule:
- cron: '50 22 * * *'
push:
branches:
- refs/tags/*
jobs:
get_version:
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .
aws s3 cp versions.json s3://testing.zephyrproject.org/daily_tests/versions.json


@@ -1,75 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2020 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Devicetree script tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
jobs:
devicetree-checks:
name: Devicetree script tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip3 install wheel
pip3 install pytest pyyaml tox
- name: run tox
working-directory: scripts/dts/python-devicetree
run: |
tox


@@ -1,18 +0,0 @@
name: Do Not Merge
on:
pull_request:
types: [synchronize, opened, reopened, labeled, unlabeled]
jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') ||
contains(github.event.*.labels.*.name, 'TSC') }}
name: Prevent Merging
runs-on: ubuntu-22.04
steps:
- name: Check for label
run: |
echo "Pull request is labeled as 'DNM' or 'TSC'"
echo "This workflow fails so that the pull request cannot be merged"
exit 1


@@ -1,179 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
push:
tags:
- v*
pull_request:
paths:
- 'doc/**'
- '**.rst'
- 'include/**'
- 'kernel/include/kernel_arch_interface.h'
- 'lib/libc/**'
- 'subsys/testsuite/ztest/include/**'
- 'tests/**'
- '**/Kconfig*'
- 'west.yml'
- '.github/workflows/doc-build.yml'
- 'scripts/dts/**'
- 'doc/requirements.txt'
env:
# NOTE: west docstrings will be extracted from the version listed here
WEST_VERSION: 1.0.0
# The latest CMake available directly with apt is 3.18, but we need >=3.20
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.9.6
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: zephyr-runner-linux-x64-4xlarge
timeout-minutes: 45
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install -y ninja-build graphviz
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz
echo "${PWD}/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: install-pip
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
DOC_TAG="release"
else
DOC_TAG="development"
fi
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
DOC_TARGET="html-fast"
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS_EXTRA="-q -t publish" make -C doc ${DOC_TARGET}
- name: compress-docs
run: |
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
echo "${PR_NUM}" > pr_num
echo "Documentation will be available shortly at: ${DOC_URL}" >> $GITHUB_STEP_SUMMARY
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
path: pr_num
doc-build-pdf:
name: "Documentation Build (PDF)"
if: github.event_name != 'pull_request'
runs-on: zephyr-runner-linux-x64-4xlarge
container: texlive/texlive:latest
timeout-minutes: 60
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v3
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('doc/requirements.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
pip3 install -r doc/requirements.txt
pip3 install west==${WEST_VERSION}
pip3 install cmake==${CMAKE_VERSION}
- name: west setup
run: |
west init -l .
- name: build-docs
shell: bash
continue-on-error: true
run: |
if [[ "$GITHUB_REF" =~ "refs/tags/v" ]]; then
DOC_TAG="release"
else
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
if: always()
uses: actions/upload-artifact@v3
with:
name: pdf-output
if-no-files-found: ignore
path: |
doc/_build/latex/zephyr.pdf
doc/_build/latex/zephyr.log


@@ -1,63 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish (Pull Request)
on:
workflow_run:
workflows: ["Documentation Build"]
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Load PR number
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
id: check-pr
uses: carpentries/actions/check-valid-pr@v0.14.0
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete


@@ -1,55 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish
on:
workflow_run:
workflows: ["Documentation Build"]
branches:
- main
- v*
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-22.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
if [ "${HEAD_BRANCH:0:1}" == "v" ]; then
VERSION=${HEAD_BRANCH:1}
else
VERSION="latest"
fi
aws s3 sync --quiet html-output/html s3://docs.zephyrproject.org/${VERSION} --delete
aws s3 sync --quiet html-output/html/doxygen/html s3://docs.zephyrproject.org/apidoc/${VERSION} --delete
aws s3 cp --quiet pdf-output/zephyr.pdf s3://docs.zephyrproject.org/${VERSION}/zephyr.pdf


@@ -1,32 +0,0 @@
name: Error numbers
on:
pull_request:
paths:
- '.github/workflows/errno.yml'
- 'lib/libc/minimal/include/errno.h'
- 'scripts/ci/errno.py'
jobs:
check-errno:
runs-on: ubuntu-22.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v3
- name: Run errno.py
run: |
export ZEPHYR_BASE=${PWD}
./scripts/ci/errno.py


@@ -1,85 +0,0 @@
name: Footprint Tracking
# Run every 12 hours and on tags
on:
schedule:
- cron: '50 1/12 * * *'
push:
paths:
- 'VERSION'
- '.github/workflows/footprint-tracking.yml'
branches:
- main
- v*-branch
tags:
# only publish v* tags, do not care about zephyr-v* which point to the
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking:
runs-on: ubuntu-22.04
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Install packages
run: |
sudo apt-get update
sudo apt-get install -y python3-venv
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update2.log
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=${PWD}
./scripts/footprint/track.py -p scripts/footprint/plan.txt
- name: Upload footprint data
run: |
python3 -m venv .venv
. .venv/bin/activate
pip3 install awscli
aws s3 sync --quiet footprint_data/ s3://testing.zephyrproject.org/footprint_data/


@@ -1,71 +0,0 @@
name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-delta:
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: west setup
run: |
west init -l . || true
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
- name: Detect Changes in Footprint
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=${PWD}
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git remote -v
git rebase origin/${BASE_REF}
git checkout -b this_pr
west update
west build -b frdm_k64f tests/benchmarks/footprints -t ram_report
cp build/ram.json ram2.json
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/rom.json rom2.json
git checkout origin/${BASE_REF}
west update
west build -p always -b frdm_k64f tests/benchmarks/footprints -t ram_report
west build -b frdm_k64f tests/benchmarks/footprints -t rom_report
cp build/ram.json ram1.json
cp build/rom.json rom1.json
git checkout this_pr
./scripts/footprint/fpdiff.py ram1.json ram2.json
./scripts/footprint/fpdiff.py rom1.json rom2.json


@@ -1,52 +0,0 @@
name: Greet first time contributor
on:
issues:
types: [opened]
pull_request_target:
types: [opened, closed]
jobs:
check_for_first_interaction:
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/checkout@v3
- uses: zephyrproject-rtos/action-first-interaction@v1.1.1-zephyr-4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: >
Hi @${{github.event.issue.user.login}}! We appreciate you submitting your first issue
for our open-source project. 🌟
Even though I'm a bot, I can assure you that the whole community is genuinely grateful
for your time and effort. 🤖💙
pr-opened-message: >
Hello @${{ github.event.pull_request.user.login }}, and thank you very much for your
first pull request to the Zephyr project!
A project maintainer just triggered our CI pipeline to run it against your PR and
ensure it's compliant and doesn't cause any issues. You might want to take this
opportunity to review the project's [Contributor
Expectations](https://docs.zephyrproject.org/latest/contribute/contributor_expectations.html)
and make any updates to your pull request if necessary. 😊
pr-merged-message: >
Hi @${{ github.event.pull_request.user.login }}!
Congratulations on getting your very first Zephyr pull request merged 🎉🥳. This is a
fantastic achievement, and we're thrilled to have you as part of our community!
To celebrate this milestone and showcase your contribution, we'd love to award you the
Zephyr Technical Contributor badge. If you're interested, please claim your badge by
filling out this form: [Claim Your Zephyr Badge](https://forms.gle/oCw9iAPLhUsHTapc8).
Thank you for your valuable input, and we look forward to seeing more of your
contributions in the future! 🪁


@@ -1,54 +0,0 @@
name: Issue Tracker
on:
schedule:
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
COMMITTER_EMAIL: actions@github.com
COMMITTER_NAME: github-actions
COMMITTER_USERNAME: github-actions
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
sudo apt-get update
sudo apt-get install discount
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
outputPath: ${{ env.OUTPUT_FILE_NAME }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v2
with:
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Post Results
run: |
mkd2html IssuesReport.md IssuesReport.html
aws s3 cp --quiet IssuesReport.html s3://testing.zephyrproject.org/issues/$GITHUB_REPOSITORY/index.html


@@ -1,37 +0,0 @@
[
{
"section": "High Priority Bugs",
"labels": ["bug", "priority: high"],
"threshold": 0
},
{
"section": "Medium Priority Bugs",
"labels": ["bug", "priority: medium"],
"threshold": 20
},
{
"section": "Low Priority Bugs",
"labels": ["bug", "priority: low"],
"threshold": 100
},
{
"section": "Enhancements",
"labels": ["Enhancement"],
"threshold": 500
},
{
"section": "Features",
"labels": ["Feature"],
"threshold": 100
},
{
"section": "Questions",
"labels": ["question"],
"threshold": 100
},
{
"section": "Static Analysis",
"labels": ["Coverity"],
"threshold": 100
}
]


@@ -1,32 +0,0 @@
name: Scancode
on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-22.04
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@v3
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@v4
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts
- name: Verify
run: |
if [ -s ./artifacts/report.txt ]; then
report=$(cat ./artifacts/report.txt)
report="${report//'%'/'%25'}"
report="${report//$'\n'/'%0A'}"
report="${report//$'\r'/'%0D'}"
echo "::error file=./artifacts/report.txt::$report"
exit 1
fi


@@ -1,38 +0,0 @@
name: Manifest
on:
pull_request_target:
jobs:
contribs:
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
working-directory: zephyrproject/zephyr
run: |
pip3 install west
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
west init -l . || true
- name: Manifest
uses: zephyrproject-rtos/action-manifest@f223dce288b0d8f30bfd57eb2b14b18c230a7d8b
with:
github-token: ${{ secrets.ZB_GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
use-tree-checkout: 'true'
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest'
dnm-labels: 'DNM'


@@ -1,60 +0,0 @@
name: Create a Release
on:
push:
tags:
- 'v*'
- '!v*rc*'
jobs:
release:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
echo "TRIMMED_VERSION=${GITHUB_REF#refs/tags/v}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@v3
continue-on-error: true
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
path: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: Create empty release notes body
run: |
echo "TODO: add release overview and notes link" > release-notes.txt
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Zephyr ${{ steps.get_version.outputs.TRIMMED_VERSION }}
body_path: release-notes.txt
draft: true
prerelease: true
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
asset_name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
asset_content_type: text/plain


@@ -1,71 +0,0 @@
# Copyright 2023 Google LLC
# SPDX-License-Identifier: Apache-2.0
name: Scripts tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/build/**'
- '.github/workflows/scripts_tests.yml'
jobs:
scripts-tests:
name: Scripts tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Rebase
continue-on-error: true
env:
BASE_REF: ${{ github.base_ref }}
PR_HEAD: ${{ github.event.pull_request.head.sha }}
run: |
git config --global user.email "actions@zephyrproject.org"
git config --global user.name "Github Actions"
git rebase origin/${BASE_REF}
git log --graph --oneline HEAD...${PR_HEAD}
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest
env:
ZEPHYR_BASE: ./
run: |
echo "Run script tests"
pytest ./scripts/build


@@ -1,28 +0,0 @@
name: Stale Workflow Queue Cleanup
on:
workflow_dispatch:
branches: [main]
schedule:
# everyday at 15:00
- cron: '0 15 * * *'
concurrency:
group: stale-workflow-queue-cleanup
cancel-in-progress: true
jobs:
cleanup:
name: Cleanup
runs-on: ubuntu-22.04
steps:
- name: Delete stale queued workflow runs
uses: MajorScruffy/delete-old-workflow-runs@v0.3.0
with:
repository: ${{ github.repository }}
# Remove any workflow runs in "queued" state for more than 1 day
older-than-seconds: 86400
status: queued
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,28 +0,0 @@
name: "Close stale pull requests/issues"
on:
schedule:
- cron: "16 00 * * *"
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v8
with:
stale-pr-message: 'This pull request has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
would like to have the label removed; otherwise this pull request will automatically be
closed in 14 days. Note that you can always re-open a closed pull request at any time.'
stale-issue-message: 'This issue has been marked as stale because it has been open (more
than) 60 days with no activity. Remove the stale label or add a comment saying that you
would like to have the label removed; otherwise this issue will automatically be closed in
14 days. Note that you can always re-open a closed issue at any time.'
days-before-stale: 60
days-before-close: 14
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta,Process'
operations-per-run: 400


@@ -1,331 +0,0 @@
name: Run tests with twister
on:
push:
branches:
- main
- v*-branch
pull_request_target:
branches:
- main
- v*-branch
schedule:
# Run at 03:00 UTC on every Sunday
- cron: '0 3 * * 0'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
fullrun: ${{ steps.output-services.outputs.fullrun }}
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 15
DAILY_MATRIX_SIZE: 80
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
if: github.event_name == 'pull_request_target'
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
rm -fr ".git/rebase-apply"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Generate Test Plan with Twister
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
python3 ./scripts/ci/test_plan.py -c origin/${BASE_REF}.. --pull-request -t $TESTS_PER_BUILDER
if [ -s .testplan ]; then
cat .testplan >> $GITHUB_ENV
else
echo "TWISTER_NODES=${MATRIX_SIZE}" >> $GITHUB_ENV
fi
rm -f testplan.json .testplan
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" -a "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: ${{fromJSON(needs.twister-build-prep.outputs.subset)}}
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.16.1
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 '
DAILY_OPTIONS: ' -M --build-only --all --show-footprint'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
# match the container user UID because of the way GitHub
# Actions runner is implemented. Remove this workaround when
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
run: |
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
rm -fr ".git/rebase-apply"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Check Environment
run: |
cmake --version
gcc --version
ls -la
echo "github.ref: ${{ github.ref }}"
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
run: |
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
continue-on-error: true
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${PUSH_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${PUSH_OPTIONS}
fi
fi
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
run: |
rm -f testplan.json
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
python3 ./scripts/ci/test_plan.py -c origin/${BASE_REF}.. --pull-request
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} --load-tests testplan.json ${TWISTER_COMMON} ${PR_OPTIONS}
if [ "${{matrix.subset}}" = "1" -a ${{needs.twister-build-prep.outputs.fullrun}} = 'True' ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${PR_OPTIONS}
fi
fi
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
if [ "${{matrix.subset}}" = "1" ]; then
./scripts/zephyr_module.py --twister-out module_tests.args
if [ -s module_tests.args ]; then
./scripts/twister +module_tests.args --outdir module_tests ${TWISTER_COMMON} ${DAILY_OPTIONS}
fi
fi
- name: ccache stats post
run: |
ccache -p
ccache -s
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
path: |
twister-out/twister.xml
twister-out/twister.json
module_tests/twister.xml
testplan.json
twister-test-results:
name: "Publish Unit Tests Results"
env:
ELASTICSEARCH_KEY: ${{ secrets.ELASTICSEARCH_KEY }}
ELASTICSEARCH_SERVER: "https://elasticsearch.zephyrproject.io:443"
needs: twister-build
runs-on: ubuntu-22.04
# the build-and-test job might be skipped, in which case we don't need to run this job
if: success() || failure()
steps:
# Needed for opensearch and upload script
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
persist-credentials: false
- name: Download Artifacts
uses: actions/download-artifact@v3
with:
path: artifacts
- if: github.event_name == 'push' || github.event_name == 'schedule'
name: Upload to opensearch
run: |
pip3 install elasticsearch
# set run date on upload to get consistent and unified data across the matrix.
run_date=`date --iso-8601=minutes`
if [ "${{github.event_name}}" = "push" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-push-1 artifacts/*/*/twister.json
elif [ "${{github.event_name}}" = "schedule" ]; then
python3 ./scripts/ci/upload_test_results_es.py -r ${run_date} \
--index zephyr-main-ci-weekly-1 artifacts/*/*/twister.json
fi
- name: Merge Test Results
run: |
pip3 install junitparser junit2html
junitparser merge artifacts/*/*/twister.xml junit.xml
junit2html junit.xml junit.html
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore
path: |
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Unit Test Results
files: "**/twister.xml"
comment_mode: off


@@ -1,66 +0,0 @@
# Copyright (c) 2020 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister TestSuite
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
jobs:
twister-tests:
name: Twister Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04]
steps:
- name: checkout
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip3 install -r scripts/requirements-base.txt -r scripts/requirements-build-test.txt
- name: Run pytest for twisterlib
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run twister tests"
PYTHONPATH=./scripts/tests pytest ./scripts/tests/twister
- name: Run pytest for pytest-twister-harness
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
PYTHONPATH: ./scripts/pylib/pytest-twister-harness/src:${PYTHONPATH}
run: |
echo "Run twister tests"
pytest ./scripts/pylib/pytest-twister-harness/tests


@@ -1,80 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Zephyr West Command Tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
jobs:
west-commands:
name: West Command Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.8, 3.9, '3.10']
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip3 install wheel
pip3 install pytest west pyelftools canopen natsort progress mypy intelhex psutil ply pyserial
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |
./scripts/west_commands/run_tests.py

.gitignore

@@ -7,13 +7,7 @@
*.swp
*.swo
*~
.\#*
\#*\#
build*/
!doc/build/
!scripts/build
!tests/drivers/build_all
!scripts/pylib/build_helpers
cscope.*
.dir
@@ -36,10 +30,7 @@ doc/samples
doc/latex
doc/themes/zephyr-docs-theme
sanity-out*
twister-out*
bsim_out
bsim_bt_out
tests/RunResults.xml
scripts/grub
doc/reference/kconfig/*.rst
doc/doc.warnings
@@ -48,14 +39,6 @@ doc/doc.warnings
.envrc
.vscode
hide-defaults-note
venv
.venv
.DS_Store
.clangd
# CI output
compliance.xml
_error.types
# Tag files
GPATH
@@ -65,19 +48,3 @@ TAGS
tags
.idea
# from check_compliance.py
BinaryFiles.txt
Checkpatch.txt
DevicetreeBindings.txt
Gitlint.txt
Identity.txt
ImageSize.txt
Kconfig.txt
KconfigBasic.txt
KconfigBasicNoModules.txt
MaintainersFormat.txt
ModulesMaintainers.txt
Nits.txt
Pylint.txt
YAMLLint.txt


@@ -1,14 +1,10 @@
# All these sections are optional, edit this file as you like.
# Zephyr-specific defaults are located in scripts/gitlint/zephyr_commit_rules.py
[general]
ignore=title-trailing-punctuation, T3, title-max-length, T1, body-hard-tab, B3, B1
# verbosity should be a value between 1 and 3, the commandline -v flags take precedence over this
verbosity = 3
# By default gitlint will ignore merge commits. Set to 'false' to disable.
ignore-merge-commits=false
ignore-revert-commits=false
ignore-fixup-commits=false
ignore-squash-commits=false
ignore-merge-commits=true
# Enable debug mode (prints more output). Disabled by default
debug = false
@@ -17,13 +13,13 @@ debug = false
extra-path=scripts/gitlint
[title-max-length-no-revert]
# line-length=75
line-length=75
[body-min-line-count]
# min-line-count=1
min-line-count=1
[body-max-line-count]
# max-line-count=200
max-line-count=200
[title-starts-with-subsystem]
regex = ^(?!subsys:)(([^:]+):)(\s([^:]+):)*\s(.+)$
@@ -43,7 +39,7 @@ words=wip
[max-line-length-with-exceptions]
# B1 = body-max-line-length
# line-length=75
line-length=72
[body-min-length]
min-length=3

.known-issues/README

@@ -0,0 +1,55 @@
This directory contains configuration files to ignore errors found in
the build and test process which are known to the developers and for
now can be safely ignored.
To use:
$ cd zephyr
$ make SOMETHING >& result
$ scripts/filter-known-issues.py result
The directory is included in the source tree so that anyone who has to submit
something that triggers a false-positive error can include a properly
documented "ignore me" entry for it.
Each file can contain one or more multiline Python regular expressions
(https://docs.python.org/2/library/re.html#regular-expression-syntax)
that match an error message. Multiple regular expressions are
separated by comment blocks (lines that start with #). Note that an empty
line is still considered part of the multiline regular expression.
For example
---beginning---
#
# This testcase always fails, pending fix ZEP-1234
#
.*/tests/kernel/grumpy .* FAIL
#
# Documentation issue, masks:
#
# /home/e/inaky/z/kernel.git/doc/api/io_interfaces.rst:28: WARNING: Invalid definition: Expected identifier in nested name. [error at 19]
# struct dev_config::@65 dev_config::bits
# -------------------^
#
^(?P<filename>.+/doc/api/io_interfaces.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^\s+struct dev_config::@[0-9]+ dev_config::bits.*
^\s+-+\^
---end---
Note you want to:
- use relative paths; instead of
/home/me/mydir/zephyr/something/somewhere.c you will want
^.*/something/somewhere.c (as they will depend on where it is being
built)
- Replace line numbers with [0-9]+, as they will change
- (?P<filename>[-._/\w]+/something/somewhere.c) saves the match on
that file path in a "variable" called 'filename' that later you can
match with (?P=filename) if you want to match multiple lines of the
same error message.
These can get really twisted and interesting in terms of regexps; they are
powerful, so start small :)
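
To make the flow above concrete, here is a minimal Python sketch of how a tool
along the lines of scripts/filter-known-issues.py could consume such a file:
comment blocks split the file into chunks, each chunk is compiled as one
multiline regular expression, and every match is removed from the captured
build output. This is only an illustration of the format described above, not
the actual script; the helper names load_patterns and filter_output are
invented for the example.

import re
import sys

def load_patterns(path):
    """Split a .known-issues file on comment lines and compile each remaining
    chunk (empty lines included) as one multiline regular expression."""
    patterns, chunk = [], []
    with open(path) as config:
        for line in config:
            if line.startswith("#"):
                # Comment blocks separate one regular expression from the next.
                if any(part.strip() for part in chunk):
                    patterns.append(re.compile("".join(chunk), re.MULTILINE))
                chunk = []
            else:
                # Empty lines stay part of the current multiline expression.
                chunk.append(line)
    if any(part.strip() for part in chunk):
        patterns.append(re.compile("".join(chunk), re.MULTILINE))
    return patterns

def filter_output(text, patterns):
    """Drop every region of the build output matched by a known-issue pattern."""
    for pattern in patterns:
        text = pattern.sub("", text)
    return text

if __name__ == "__main__":
    known = load_patterns(sys.argv[1])
    sys.stdout.write(filter_output(sys.stdin.read(), known))

Fed a captured log, a script like this behaves like the
"scripts/filter-known-issues.py result" invocation above: whatever survives
the filtering is output that still needs a human look.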


@@ -0,0 +1,50 @@
#
# Bluetooth unnamed struct definition
#
# FIXME: all these should match the relative filename
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^[ \t]*$
^[ \t]*\^$
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^[ \t]*$
^[ \t]*\^$
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]$
^.*bt_conn_info.__unnamed__.*$
^[- \t]*\^$
#
# bt_gatt_discover_params unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_gatt_discover_params.__unnamed__.*
^[- \t]*\^
#
# Bluetooth GATT unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_gatt_read_params.__unnamed__.*
^[- \t]*\^
#
# Bluetooth mesh unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model.__unnamed__.*
^[- \t]*\^


@@ -0,0 +1,15 @@
#
# Display
#
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]display_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*mb_image.__unnamed__
^[- \t]*\^


@@ -0,0 +1,6 @@
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.


@@ -0,0 +1,15 @@
#
# Display
#
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]misc_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*json_obj_descr.__unnamed__
^[- \t]*\^


@@ -0,0 +1,70 @@
#
# Networking
#
#
# include/net/net_ip.h warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*in[_6]+addr.in[46]_u
^[- \t]*\^
#
# include/net/net_mgmt.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_mgmt_event_callback.__unnamed__
^[- \t]*\^
#
# include/net/buf.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_buf.__unnamed__
^[- \t]*\^
#
# include/net/ieee802154.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*ieee802154_req_params.__unnamed__
^[- \t]*\^
#
# include/net/net_context.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_context.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_context.options
^[- \t]*\^
#
# include/net/net_stats.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_stats.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_stats_tc.[a-z]+
^[- \t]*\^
#
# stray duplicate definition warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.


@@ -0,0 +1,31 @@
#
# UART unnamed struct definition
#doc/api/peripherals/uart.rst
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_device_config.__unnamed__.*
^[- \t]*\^
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_event.data
^[- \t]*\^

.known-issues/make.conf

@@ -0,0 +1,6 @@
#
# When filtering output of the build process, ignore lines that don't
# provide any information that helps the invoker tell if there was an
# error.
#
^make: (Entering|Leaving) directory .*


@@ -0,0 +1,11 @@
#
# When executing test cases, ignore the following messages as they are
# not to be considered hard errors.
#
# Block line when test case cannot run in the HW due to server or connection issues
#
^BLCK0/[-a-z0-9:]+ (.+)#test @[^/]+/[^:]+:[^:]+: evaluation blocked(.*)$
#
# Block line when there is an issue with the YKUSH serial connection
#
^BLCK0/[-a-z0-9:]+ (.+)#test @[^/]+/(?P<board>[^:]+):[^:]+: exception: 400: (?P=board): Cannot find YKUSH serial '[A-Z0-9]+'$


@@ -0,0 +1,18 @@
#
# When executing test cases under TCF, ignore the following messages
# as they are not to be considered hard errors.
#
# TCF is run under make for taking advantage of the jobserver; when
# the testcase execution fail, make will complain, which we can
# ignore ('sommersault' was the old name of the target).
#
^/tmp/tcf-[a-zA-Z0-9]+.mk:[0-9]+: recipe for target ('tcf-jobserver-run'|'sommersault') failed$
#
# More of the same
#
^make: \*\*\* \[(tcf-jobserver-run|sommersault)\] Error 1$
#
# TCF's summary line. We don't need to consider it to determine if the
# run failed or passed.
#
^[A-Z]+0/\S+:\s+\S+\s+@\S+: [0-9]+ tests \([0-9]+ passed, [0-9]+ failed, [0-9]+ blocked, [0-9]+ skipped\).*$


@@ -0,0 +1,4 @@
#
# Skip line when test case is eliminated due to filters
#
^SKIP0/\S+\s+\S+: No targets can be used \(all [0-9]+ selected from [0-9]+ available eliminated by testcase filtering\)$

.mailmap

@@ -1,121 +1,30 @@
Alexandr Kolosov <rikorsev@gmail.com>
Alexandre d'Alton <alexandre.dalton@intel.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Andrzej Kuroś <andrzej.kuros@nordicsemi.no>
Anthony Smigielski <thebasti0ncode@gmail.com>
Armand Ciejak <armand@riedonetworks.com> <armandciejak@users.noreply.github.com>
Aska Wu <aska.wu@linaro.org>
Bit Pathe <bitpathe@gmail.com> <bitpathe@gmail.com>
Bjarki Arge Andreasen <baa@trackunit.com>
Carles Cufi <carles.cufi@nordicsemi.no> <carles.cufi@nordicsemi.no>
chao an <anchao@xiaomi.com>
Charles E. Youse <charles.youse@intel.com>
Chen Xingyu <hi@xingrz.me>
Christoph Schnetzler <christoph.schnetzler@husqvarnagroup.com>
Christoph Schramm <schramm@makaio.com>
Christopher Friedt <cfriedt@meta.com>
Christopher Friedt <cfriedt@meta.com> <cfriedt@fb.com>
Chuck Jordan <cjordan@synopsys.com> <cjordan@synopsys.com>
Chunlin Han <chunlin.han@linaro.org> <chunlin.han@acer.com>
David B. Kinder <david.b.kinder@intel.com>
David Komel <a8961713@gmail.com>
David Leach <david.leach@nxp.com>
Dirk Brandewie <dirk.j.brandewie@intel.com> <dirk.j.brandewie@intel.com>
Douglas Su <d0u9.su@outlook.com> <d0u9.su@outlook.com>
Enjia Mai <enjia.mai@intel.com>
Enjia Mai <enjia.mai@intel.com> <enjiax.mai@intel.com>
Evan Couzens <evanx.couzens@intel.com> <evanx.couzens@intel.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Evgeniy Paltsev <PaltsevEvgeniy@gmail.com> <Eugeniy.Paltsev@synopsys.com>
Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Findlay Feng <i@fengch.me>
Flavio Arieta Netto <flavio@exati.com.br>
Francois Ramu <francois.ramu@st.com>
Gerardo Aceves <gerardo.aceves@intel.com> <gerardo.aceves@intel.com>
Gregory Shue <gregory.shue@legrand.com>
Gregory Shue <gregory.shue@legrand.com> <gregory.shue@legrand.us>
HaiLong Yang <cameledyang@pm.me>
James Johnson <james.johnson672@t-mobile.com>
Jarno Lämsä <jarno.lamsa@nordicsemi.no>
Jędrzej Ciupis <jedrzej.ciupis@nordicsemi.no>
Jeremie Garcia <jeremie.garcia@intel.com> <jeremie.garcia@intel.com>
Jim Benjamin Luther <jilu@oticon.com>
Johan Kruger <johan.kruger@windriver.com> <johan.kruger@windriver.com>
Johann Fischer <j.fischer@phytec.de>
Jørgen Kvalvaag <jorgen.kvalvaag@nordicsemi.no>
Juan Manuel Cruz Alcaraz <juan.m.cruz.alcaraz@intel.com>
Juan Solano <juanx.solano.menacho@intel.com>
Julien D'Ascenzio <julien.dascenzio@paratronic.fr>
Jun Li <jun.r.li@intel.com>
Justin Watson <jwatson5@gmail.com>
Kamil Sroka <kamil.sroka@nordicsemi.no>
Katarzyna Giądła <katarzyna.giadla@nordicsemi.no>
Keren Siman-Tov <keren.siman-tov@intel.com> <keren.siman-tov@intel.com>
Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
Kuo-Lang Tseng <kuo-lang.tseng@intel.com> <kuo-lang.tseng@intel.com>
Lei Liu <lei.a.liu@intel.com> <lei.a.liu@intel.com>
Leona Cook <leonax.cook@intel.com> <leonax.cook@intel.com>
Leona Cook <leonax.cook@intel.com> <lsc@hackeress.com>
Lixin Guo <lixinx.guo@intel.com>
Łukasz Mazur <lukasz.mazur@hidglobal.com>
Manuel Argüelles <manuel.arguelles@nxp.com>
Manuel Argüelles <manuel.arguelles@nxp.com> <manuel.arguelles@coredumplabs.com>
Marc Herbert <marc.herbert@intel.com> <46978960+marc-hb@users.noreply.github.com>
Marin Jurjević <marin.jurjevic@hotmail.com>
Mariusz Ryndzionek <mariusz.ryndzionek@firmwave.com>
Mariusz Skamra <mariusz.skamra@codecoup.pl>
Mariusz Skamra <mariusz.skamra@codecoup.pl> <mariusz.skamra@tieto.com>
Martí Bolívar <marti.bolivar@nordicsemi.no>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.bolivar@linaro.org>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti.f.bolivar@gmail.com>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@foundries.io>
Martí Bolívar <marti.bolivar@nordicsemi.no> <marti@opensourcefoundries.com>
Martin Jäger <martin@libre.solar> <17674105+martinjaeger@users.noreply.github.com>
Mateusz Hołenko <mholenko@antmicro.com>
Michael Rosen <michael.r.rosen@intel.com>
Michal Narajowski <michal.narajowski@codecoup.pl>
Mike Hirst <michael.hirst@windriver.com> <michael.hirst@windriver.com>
Ming Shao <ming.shao@intel.com>
Mohan Kumar Kumar <mohankm@fb.com>
Naga Raja Rao Tulasi <tulasi.r@tcs.com> <tulasi.r@tcs.com>
Navin Sankar Velliangiri <navin@linumiz.com>
NingX Zhao <ningx.zhao@intel.com>
Nishikant Nayak <nishikantax.nayak@intel.com>
Ole Sæther <ole.saether@nordicsemi.no>
Pavel Král <pavel.kral@omsquare.com>
Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
Paweł Czarnecki <pczarnecki@antmicro.com>
Paweł Czarnecki <pczarnecki@antmicro.com>
Paweł Czarnecki <pczarnecki@antmicro.com> <pczarnecki@internships.antmicro.com>
Paweł Kwiek <pawel.kwiek@nordicsemi.no>
Peng Chen <peng1.chen@intel.com>
Peter Bigot <peter.bigot@nordicsemi.no>
Peter Bigot <peter.bigot@nordicsemi.no> <pab@pabigot.com>
Peter Johanson <peter@peterjohanson.com>
Piyush Itankar <piyush.t.itankar@intel.com>
Radosław Koppel <radoslaw.koppel@nordicsemi.no>
Radosław Koppel <radoslaw.koppel@nordicsemi.no> <r.koppel@k-el.com>
Raja D. Singh <rdsingh@iotwizards.com>
Ricardo Salveti <ricardo@opensourcefoundries.com>
Ricardo Salveti <ricardo@opensourcefoundries.com> <ricardo.salveti@linaro.org>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Saku Rautio <saku.rautio@nordicsemi.no>
Scott Worley <scott.worley@microchip.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>
Sharron Liu <sharron.liu@intel.com>
Shilpashree L C <shilpashree.lc@intel.com>
Shuang He <shuang.he@intel.com> <shuang.he@intel.com>
Sigvart Hovland <sigvart.hovland@nordicsemi.no>
Stéphane D'Alu <sdalu@sdalu.com>
Stine Åkredalen <stine.akredalen@nordicsemi.no>
Thomas Heeley <thomas.heeley@intel.com> <thomas.heeley@intel.com>
Tim Sørensen <tims@demant.com>
Tim Sørensen <tims@demant.com> <tims@oticon.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Flavio Arieta Netto <flavio@exati.com.br>
Nishikant Nayak <nishikantax.nayak@intel.com>
Justin Watson <jwatson5@gmail.com>
Johann Fischer <j.fischer@phytec.de>
Jun Li <jun.r.li@intel.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com>
Yonattan Louise <yonattan.a.louise.mendoza@intel.com> <yonattan.a.louise.mendoza@linux.intel.com>
YouhuaX Zhu <youhuax.zhu@intel.com>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>

.shippable.yml

@@ -0,0 +1,75 @@
language: c
compiler: gcc
env:
global:
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.10.3
- ZEPHYR_TOOLCHAIN_VARIANT=zephyr
- MATRIX_BUILDS="5"
matrix:
- MATRIX_BUILD="1"
- MATRIX_BUILD="2"
- MATRIX_BUILD="3"
- MATRIX_BUILD="4"
- MATRIX_BUILD="5"
build:
cache: false
cache_dir_list:
- ${SHIPPABLE_BUILD_DIR}/ccache
pre_ci_boot:
image_name: zephyrprojectrtos/ci
image_tag: v0.9
pull: true
options: "-e HOME=/home/buildslave --privileged=true --tty --net=bridge --user buildslave"
ci:
- export CCACHE_DIR=${SHIPPABLE_BUILD_DIR}/ccache/.ccache
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -c -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -c -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
- ccache -s
on_failure:
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -f -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -f -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
on_success:
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -s -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -s -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
branches:
only:
- master
- v*-branch
- topic-*
integrations:
notifications:
- integrationName: slack_integration
type: slack
recipients:
- "#ci"
branches:
only:
- master
on_success: never
on_failure: never
- integrationName: email
type: email
recipients:
- builds@zephyrproject.org
branches:
only:
- master
on_success: never
on_failure: always
on_pull_request: never

.uncrustify.cfg

@@ -0,0 +1,80 @@
indent_with_tabs = 2 # 1=indent to level only, 2=indent with tabs
input_tab_size = 8 # original tab size
output_tab_size = 8 # new tab size
indent_columns = output_tab_size
indent_label = 1 # pos: absolute col, neg: relative column
indent_switch_case = 0 # number
#
# inter-symbol newlines
#
nl_enum_brace = remove # "enum {" vs "enum \n {"
nl_union_brace = remove # "union {" vs "union \n {"
nl_struct_brace = remove # "struct {" vs "struct \n {"
nl_do_brace = remove # "do {" vs "do \n {"
nl_if_brace = remove # "if () {" vs "if () \n {"
nl_for_brace = remove # "for () {" vs "for () \n {"
nl_else_brace = remove # "else {" vs "else \n {"
nl_while_brace = remove # "while () {" vs "while () \n {"
nl_switch_brace = remove # "switch () {" vs "switch () \n {"
nl_brace_while = remove # "} while" vs "} \n while" - cuddle while
nl_brace_else = remove # "} \n else" vs "} else"
nl_func_var_def_blk = 1
nl_fcall_brace = remove # "list_for_each() {" vs "list_for_each()\n{"
nl_fdef_brace = add # "int foo() {" vs "int foo()\n{"
#
# Source code modifications
#
mod_paren_on_return = ignore # "return 1;" vs "return (1);"
mod_full_brace_if = add # "if() { } else { }" vs "if() else"
#
# inter-character spacing options
#
sp_sizeof_paren = remove # "sizeof (int)" vs "sizeof(int)"
sp_before_sparen = force # "if (" vs "if("
sp_after_sparen = force # "if () {" vs "if (){"
sp_inside_braces = add # "{ 1 }" vs "{1}"
sp_inside_braces_struct = add # "{ 1 }" vs "{1}"
sp_inside_braces_enum = add # "{ 1 }" vs "{1}"
sp_assign = add
sp_arith = add
sp_bool = add
sp_compare = add
sp_assign = add
sp_after_comma = add
sp_func_def_paren = remove # "int foo (){" vs "int foo(){"
sp_func_call_paren = remove # "foo (" vs "foo("
sp_func_proto_paren = remove # "int foo ();" vs "int foo();"
sp_inside_fparen = remove # "func( arg )" vs "func(arg)"
sp_else_brace = add # ignore/add/remove/force
sp_before_ptr_star = add # ignore/add/remove/force
sp_after_ptr_star = remove # ignore/add/remove/force
sp_between_ptr_star = remove # ignore/add/remove/force
sp_inside_paren = remove # remove spaces inside parens
sp_paren_paren = remove # remove spaces between nested parens
sp_inside_sparen = remove # remove spaces inside parens for if, while and the like
sp_brace_else = add # ignore/add/remove/force
sp_before_nl_cont = ignore
sp_cmt_cpp_start = add
sp_brace_typedef = add # }typedefd_name -> } typedefd_name
cmt_sp_after_star_cont = 1
#
# Aligning stuff
#
align_with_tabs = FALSE # use tabs to align
align_on_tabstop = TRUE # align on tabstops
align_enum_equ_span = 4 # '=' in enum definition
align_struct_init_span = 0 # align stuff in a structure init '= { }'
align_right_cmt_span = 3
align_nl_cont = TRUE
sp_pp_concat = ignore # ignore/add/remove/force
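Taken together, the rules above describe a K&R-like layout: cuddled braces on control statements, a newline before function-definition braces, a space before control-statement parentheses, and "type *ptr" pointer spacing. A small C sketch (illustrative only, not taken from the tree) that conforms to them:

/* style_demo.c - illustrative only; formatted per the uncrustify rules above */
#include <stdio.h>

static int clamp(int v, int lo, int hi)
{					/* nl_fdef_brace = add */
	if (v < lo) {			/* nl_if_brace = remove, sp_before_sparen = force */
		return lo;
	} else if (v > hi) {		/* nl_brace_else / nl_else_brace = remove */
		return hi;
	}
	return v;
}

int main(void)
{
	int values[] = { 1, 5, -2 };	/* sp_inside_braces = add: "{ 1 }" not "{1}" */
	int *first = &values[0];	/* sp_before_ptr_star = add, sp_after_ptr_star = remove */

	printf("%d %d\n", clamp(*first, 0, 3), clamp(values[1], 0, 3));
	return 0;
}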


@@ -1,16 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
extends: default
rules:
line-length:
max: 100
comments:
min-spaces-from-content: 1
indentation:
spaces: 2
indent-sequences: consistent
document-start:
present: false
truthy:
check-keys: false

(file diff suppressed because it is too large)

CODEOWNERS: 1117 lines changed (diff suppressed because it is too large)

@@ -1,8 +1,10 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
mainmenu "Zephyr Kernel Configuration"
source "Kconfig.zephyr"


@@ -1,62 +1,56 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2016 Intel Corporation
# Copyright (c) 2023 Nordic Semiconductor ASA
#
# SPDX-License-Identifier: Apache-2.0
osource "${APPLICATION_SOURCE_DIR}/VERSION"
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
# After merging all definitions for a symbol/choice, Kconfig picks the first
# property (e.g. the first default) with a satisfied condition.
#
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
# This loads custom shields defconfigs (from BOARD_ROOT)
osource "$(KCONFIG_BINARY_DIR)/Kconfig.shield.defconfig"
# This loads Zephyr base shield defconfigs
source "boards/shields/*/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
# This loads custom SoC root defconfigs
osource "$(KCONFIG_BINARY_DIR)/Kconfig.soc.defconfig"
# This loads Zephyr base SoC root defconfigs
osource "soc/$(ARCH)/*/Kconfig.defconfig"
# This loads the toolchain defconfigs
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig.defconfig"
# This loads the testsuite defconfig
source "subsys/testsuite/Kconfig.defconfig"
# This should be early since the autogen Kconfig.dts symbols may get
# used by modules
source "dts/Kconfig"
menu "Modules"
source "$(CMAKE_BINARY_DIR)/Kconfig.modules"
source "modules/Kconfig"
endmenu
# Include these first so that any properties (e.g. defaults) below can be
# overridden in *.defconfig files (by defining symbols in multiple locations).
# After merging all the symbol definitions, Kconfig picks the first property
# (e.g. the first default) with a satisfied condition.
#
# Board defaults should be parsed before SoC defaults, because boards usually
# override SoC values.
#
# Note: $ARCH and $BOARD_DIR might be glob patterns.
source "$(BOARD_DIR)/Kconfig.defconfig"
source "$(SOC_DIR)/$(ARCH)/*/Kconfig.defconfig"
source "boards/Kconfig"
source "soc/Kconfig"
source "$(SOC_DIR)/Kconfig"
source "arch/Kconfig"
source "kernel/Kconfig"
source "dts/Kconfig"
source "drivers/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig"
source "ext/Kconfig"
menu "Build and Link Features"
menu "Linker Options"
choice LINKER_ORPHAN_CONFIGURATION
choice
prompt "Linker Orphan Section Handling"
default LINKER_ORPHAN_SECTION_WARN
@@ -79,30 +73,32 @@ config LINKER_ORPHAN_SECTION_ERROR
endchoice
config CODE_DATA_RELOCATION
bool "Relocate code/data sections"
depends on ARM
help
When selected, this will relocate the .text, data and .bss sections from
the specified files and place them in the required memory region. The
files should be specified in the CMakeLists.txt file with the
CMake API zephyr_code_relocate().
config HAS_FLASH_LOAD_OFFSET
bool
help
This option is selected by targets having a FLASH_LOAD_OFFSET
and FLASH_LOAD_SIZE.
if HAS_FLASH_LOAD_OFFSET
config USE_DT_CODE_PARTITION
bool "Link application into /chosen/zephyr,code-partition from devicetree"
config USE_CODE_PARTITION
bool "link into code-partition"
depends on HAS_FLASH_LOAD_OFFSET
help
When enabled, the application will be linked into the flash partition
selected by the zephyr,code-partition property in /chosen in devicetree.
When this is disabled, the flash load offset and size can be set manually
below.
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_CODE_PARTITION := zephyr,code-partition
When selected, the application will be linked into the chosen code-partition.
config FLASH_LOAD_OFFSET
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load offset" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
hex "Kernel load offset"
default $(dt_hex_val,DT_CODE_PARTITION_OFFSET) if USE_CODE_PARTITION
default 0
depends on HAS_FLASH_LOAD_OFFSET
help
This option specifies the byte offset from the beginning of flash that
the kernel should be loaded into. Changing this value from zero will
@@ -112,10 +108,10 @@ config FLASH_LOAD_OFFSET
If unsure, leave at the default value 0.
config FLASH_LOAD_SIZE
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load size" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_size_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
hex "Kernel load size"
default $(dt_hex_val,DT_CODE_PARTITION_SIZE) if USE_CODE_PARTITION
default 0
depends on HAS_FLASH_LOAD_OFFSET
help
If non-zero, this option specifies the size, in bytes, of the flash
area that the Zephyr image will be allowed to occupy. If zero, the
@@ -124,53 +120,23 @@ config FLASH_LOAD_SIZE
If unsure, leave at the default value 0.
endif # HAS_FLASH_LOAD_OFFSET
config ROM_START_OFFSET
config TEXT_SECTION_OFFSET
hex
prompt "ROM start offset" if !BOOTLOADER_MCUBOOT
prompt "TEXT section offset" if !BOOTLOADER_MCUBOOT
default 0x200 if BOOTLOADER_MCUBOOT
default 0
help
If the application is built for chain-loading by a bootloader this
variable is required to be set to value that leaves sufficient
space between the beginning of the image and the start of the first
space between the beginning of the image and the start of the .text
section to store an image header or any other metadata.
In the particular case of the MCUboot bootloader this reserves enough
space to store the image header, which should also meet vector table
alignment requirements on most ARM targets, although some targets
may require smaller or larger values.
config LD_LINKER_SCRIPT_SUPPORTED
bool
default y
choice LINKER_SCRIPT
prompt "Linker script"
default LD_LINKER_TEMPLATE if LD_LINKER_SCRIPT_SUPPORTED
config LD_LINKER_TEMPLATE
bool "LD template"
depends on LD_LINKER_SCRIPT_SUPPORTED
help
Select this option to use the LD linker script templates.
The templates are pre-processed by the C pre-processor to create the
final LD linker script.
config CMAKE_LINKER_GENERATOR
bool "CMake generator"
depends on ARM
help
Select this option to use the Zephyr CMake linker script generator.
The linker configuration is written in CMake and the final linker
script will be generated by the toolchain specific linker generator.
For LD based linkers, this will be the ld generator, for ARMClang /
armlink based linkers it will be the scatter generator.
endchoice
config HAVE_CUSTOM_LINKER_SCRIPT
bool "Custom linker script provided"
bool "Custom linker scripts provided"
help
Set this option if you have a custom linker script which needs to
be defined in CUSTOM_LINKER_SCRIPT.
@@ -189,6 +155,33 @@ config CUSTOM_LINKER_SCRIPT
linker script and avoid having to change the script provided by
Zephyr.
config CUSTOM_RODATA_LD
bool "(DEPRECATED) Include custom-rodata.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the rodata section.
config CUSTOM_RWDATA_LD
bool "(DEPRECATED) Include custom-rwdata.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the data section.
config CUSTOM_SECTIONS_LD
bool "(DEPRECATED) Include custom-sections.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
arbitrary sections.
config LINK_WHOLE_ARCHIVE
bool "Allow linking with --whole-archive"
help
This option allows linking external libraries with the
--whole-archive option to keep all symbols.
config KERNEL_ENTRY
string "Kernel entry symbol"
default "__start"
@@ -203,153 +196,17 @@ config LINKER_SORT_BY_ALIGNMENT
in decreasing size of symbols. This helps to minimize
padding between symbols.
config SRAM_VECTOR_TABLE
bool "Place the vector table in SRAM instead of flash"
help
The option specifies that the vector table should be placed at the
start of SRAM instead of the start of flash.
config HAS_SRAM_OFFSET
bool
help
This option is selected by targets that require SRAM_OFFSET.
config SRAM_OFFSET
hex "Kernel SRAM offset" if HAS_SRAM_OFFSET
default 0
help
This option specifies the byte offset from the beginning of SRAM
where the kernel begins. Changing this value from zero will affect
the Zephyr image's link, and will decrease the total amount of
SRAM available for use by application code.
If unsure, leave at the default value 0.
menu "Linker Sections"
config LINKER_USE_BOOT_SECTION
bool "Use Boot Linker Section"
help
If enabled, the symbols which are needed for the boot process
will be put into another linker section reserved for these
symbols.
Requires that boot sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_USE_PINNED_SECTION
bool "Use Pinned Linker Section"
help
If enabled, the symbols which need to be pinned in memory
will be put into another linker section reserved for pinned
symbols. During boot, the corresponding memory will be marked
as pinned.
Requires that pinned sections exist in the architecture, SoC,
board or custom linker script.
config LINKER_GENERIC_SECTIONS_PRESENT_AT_BOOT
bool "Generic sections are present at boot" if DEMAND_PAGING && LINKER_USE_PINNED_SECTION
default y
help
When disabled, the linker sections other than the boot and
pinned sections will be marked as not present in the page
tables. This allows kernel to pull in data pages on demand
as required by current execution context when demand paging
is enabled. There is no need to load all code and data into
memory at once.
If unsure, say Y.
config LINKER_LAST_SECTION_ID
bool "Last section identifier"
default y
depends on ARM || ARM64 || RISCV
help
If enabled, the last section will contain an identifier.
This ensures that the '_flash_used' linker symbol will always be
correctly calculated, even in cases where the location counter may
have been incremented for alignment purposes but no data is placed
after alignment.
Note: in cases where the flash is fully used, for example application
specific data is written at the end of the flash area, then writing a
last section identifier may cause rom region overflow.
In such cases this setting should be disabled.
config LINKER_LAST_SECTION_ID_PATTERN
hex "Last section identifier pattern"
default "0xE015E015"
depends on LINKER_LAST_SECTION_ID
help
Pattern to fill into last section as identifier.
Default pattern is 0xE015 (end of last section), but any pattern can
be used.
The size of the pattern must not exceed 4 bytes.
config LINKER_USE_NO_RELAX
bool
help
Hidden symbol to allow features to force the use of no relax.
config LINKER_USE_RELAX
bool "Linker optimization of call addressing"
depends on !LINKER_USE_NO_RELAX
default y
help
This option performs global optimizations that become possible when the linker resolves
addressing in the program, such as relaxing address modes and synthesizing new
instructions in the output object file. For ld and lld, this enables `--relax`.
On platforms where this is not supported, `--relax' is accepted, but ignored.
Disabling it can reduce performance, as the linker is no longer able to substitute long /
ineffective jump calls with shorter / more effective instructions.
endmenu # "Linker Sections"
endmenu
menu "Compiler Options"
config CODING_GUIDELINE_CHECK
bool "Enforce coding guideline rules"
help
Use available compiler flags to check coding guideline rules during
the build.
config NATIVE_BUILD
bool
select FULL_LIBC_SUPPORTED
select FULL_LIBCPP_SUPPORTED if CPP
help
Zephyr will be built targeting the host system for debug and
development purposes.
config NATIVE_APPLICATION
bool
default y if ARCH_POSIX
depends on !NATIVE_LIBRARY
select NATIVE_BUILD
bool "Build as a native host application"
help
Build as a native application that can run on the host, using
resources and libraries provided by the host.
config NATIVE_LIBRARY
bool
select NATIVE_BUILD
help
Build as a prelinked library for the native host target.
This library can later be built into an executable for the host.
config COMPILER_FREESTANDING
bool "Build in a freestanding compiler mode"
help
Configure the compiler to operate in freestanding mode according to
the C and C++ language specifications. Freestanding mode reduces the
requirements of the compiler and language environment, which can
negatively impact the ability for the compiler to detect errors and
perform optimizations.
choice COMPILER_OPTIMIZATIONS
choice
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
@@ -383,65 +240,6 @@ config NO_OPTIMIZATIONS
Compiler optimizations will be set to -O0 independently of other
options.
Selecting this option will likely require manual tuning of the
default stack sizes in order to avoid stack overflows.
endchoice
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config COMPILER_SAVE_TEMPS
bool "Save temporary object files"
help
Instruct the compiler to save the temporary intermediate files
permanently. These can be useful for troubleshooting build issues.
config COMPILER_TRACK_MACRO_EXPANSION
bool "Track macro expansion"
default y
help
When enabled, locations of tokens across macro expansions will be
tracked. Disabling this option may be useful to debug long macro
expansion chains.
config COMPILER_COLOR_DIAGNOSTICS
bool "Colored diagnostics"
default y
help
Compiler diagnostic messages are colorized.
choice COMPILER_SECURITY_FORTIFY
prompt "Detect buffer overflows in libc calls"
default FORTIFY_SOURCE_NONE if NO_OPTIMIZATIONS || MINIMAL_LIBC || NATIVE_BUILD
default FORTIFY_SOURCE_COMPILE_TIME
help
Buffer overflow checking in libc calls. Supported by Clang and
GCC when using Picolibc or Newlib. Requires compiler optimization
to be enabled.
config FORTIFY_SOURCE_NONE
bool "No detection"
help
Disables both compile-time and run-time checking.
config FORTIFY_SOURCE_COMPILE_TIME
bool "Compile-time detection"
help
Enables only compile-time checking. Compile-time checking
doesn't increase executable size or reduce performance; it
limits checking to what can be done with information available
at compile time.
config FORTIFY_SOURCE_RUN_TIME
bool "Compile-time and run-time detection"
help
Enables both compile-time and run-time checking. Run-time
checking increases coverage at the expense of additional code,
and means that applications will raise a runtime exception
when buffer overflow is detected.
endchoice
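As a rough illustration of the compile-time variant (a sketch under the assumption of a GCC- or Clang-based toolchain with fortification support and optimization enabled, not code from the tree): the copy below overflows a fixed-size buffer by a statically known amount, which compile-time fortification can flag at build time, while the run-time variant would instead trap when the copy executes.

/* fortify_demo.c - illustrative sketch only */
#include <string.h>

static char id[8];

void set_id(void)
{
	/* 15 characters plus the terminating NUL into an 8-byte buffer:
	 * the overflow is provable at compile time, so FORTIFY-style
	 * checking can reject it without ever running the code. */
	strcpy(id, "overlong-ident!");
}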
config COMPILER_OPT
@@ -454,38 +252,8 @@ config COMPILER_OPT
and can be used to change compiler optimization, warning and error
messages, and so on.
config MISRA_SANE
bool "MISRA standards compliance features"
help
Causes the source code to build in "MISRA" mode, which
disallows some otherwise-permitted features of the C
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
endmenu
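For example (a sketch, not from the tree), the commented-out buffer below is a variable length array of the kind this mode rejects, while the fixed-size alternative is still accepted:

/* vla_demo.c - illustrative only */
#include <stddef.h>

#define MAX_SAMPLES 32

int sum_samples(const int *samples, size_t n)
{
	/* int scratch[n]; */		/* VLA: rejected when building in "MISRA" mode */
	int scratch[MAX_SAMPLES];	/* fixed upper bound: always accepted */
	int total = 0;

	for (size_t i = 0; i < n && i < MAX_SAMPLES; i++) {
		scratch[i] = samples[i];
		total += scratch[i];
	}
	return total;
}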
choice
prompt "Error checking behavior for CHECK macro"
default RUNTIME_ERROR_CHECKS
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Runtime error checks"
help
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
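A hypothetical sketch of how one CHECK-style macro can expand to the three behaviors selected above (the macro name, polarity and exact expansion in the tree may differ; this only illustrates the idea):

/* check_policy.h - hypothetical sketch, not the in-tree definition.
 * Intended usage:
 *
 *	CHECK(buf != NULL) {
 *		return -EINVAL;		// with runtime checks, the handler block runs
 *	}
 */
#include <assert.h>

#if defined(CONFIG_ASSERT_ON_ERRORS)
/* Assert on the error; the handler block becomes unreachable. */
#define CHECK(cond) assert(cond); if (0)
#elif defined(CONFIG_NO_RUNTIME_CHECKS)
/* No checking at all: neither the condition nor the handler block runs. */
#define CHECK(cond) if (0)
#else /* CONFIG_RUNTIME_ERROR_CHECKS (the default) */
/* Evaluate the condition; on failure, fall into the handler block. */
#define CHECK(cond) if (!(cond))
#endif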
menu "Build Options"
config KERNEL_BIN_NAME
@@ -500,24 +268,12 @@ config OUTPUT_STAT
help
Create a stat file using readelf -e <elf>
config OUTPUT_SYMBOLS
bool "Create a symbol file"
help
Create a symbol file using nm <elf>
config OUTPUT_DISASSEMBLY
bool "Create a disassembly file"
default y
help
Create an .lst file with the assembly listing of the firmware.
config OUTPUT_DISASSEMBLE_ALL
bool "Disassemble all sections with source. Fill zeros."
default n
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain a complete disassembly of the firmware,
covering all sections (including zero fill), not just those expected
to contain instructions.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -532,156 +288,40 @@ config OUTPUT_PRINT_MEMORY_USAGE
ram_report and
https://sourceware.org/binutils/docs/ld/MEMORY.html
config CLEANUP_INTERMEDIATE_FILES
bool "Remove all intermediate files"
help
Delete intermediate files to save space and cleanup clutter resulting
from the build process. Note this breaks incremental builds, west spdx
(Software Bill of Material generation), and maybe others.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
help
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in HEX format. This will build a zephyr.hex file needed
by some platforms.
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
help
Build a "raw" binary zephyr/zephyr.bin in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_EFI
bool "Build as an EFI application"
default n
depends on X86_64
help
Build as an EFI application.
This works by creating a "zephyr.efi" EFI binary containing a zephyr
image extracted from a built zephyr.elf file. EFI applications are
relocatable, and cannot be placed at specific locations in memory.
Instead, the stub code will copy the embedded zephyr sections to the
appropriate locations at startup, clear any zero-filled (BSS, etc...)
areas, then jump into the 64 bit entry point.
Build a binary in BIN format. This will build a zephyr.bin file needed
by some platforms.
config BUILD_OUTPUT_EXE
bool "Build a binary in ELF format with .exe extension"
help
Build an ELF binary that can run in the host system at
zephyr/zephyr.exe in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in ELF format that can run in the host system. This
will build a zephyr.exe file.
config BUILD_OUTPUT_S19
bool "Build a binary in S19 format"
help
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in S19 format. This will build a zephyr.s19 file needed
by some platforms.
config BUILD_OUTPUT_UF2
bool "Build a binary in UF2 format"
depends on BUILD_OUTPUT_BIN
help
Build a UF2 binary zephyr/zephyr.uf2 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
if BUILD_OUTPUT_UF2
config BUILD_OUTPUT_UF2_FAMILY_ID
string "UF2 device family ID"
default "0x1c5f21b0" if SOC_SERIES_ESP32
default "0x621e937a" if SOC_NRF52833_QIAA
default "0xada52840" if SOC_NRF52840_QIAA
default "0x4fb2d5bd" if SOC_SERIES_IMX_RT
default "0x2abc77ec" if SOC_SERIES_LPC55XXX
default "0xe48bff56" if SOC_SERIES_RP2XXX
default "0x68ed2b88" if SOC_SERIES_SAMD21
default "0x55114460" if SOC_SERIES_SAMD51
default "0x647824b6" if SOC_SERIES_STM32F0X
default "0x5d1a0a2e" if SOC_SERIES_STM32F2X
default "0x6b846188" if SOC_SERIES_STM32F3X
default "0x53b80f00" if SOC_SERIES_STM32F7X
default "0x300f5633" if SOC_SERIES_STM32G0X
default "0x4c71240a" if SOC_SERIES_STM32G4X
default "0x6db66082" if SOC_SERIES_STM32H7X
default "0x202e3a91" if SOC_SERIES_STM32L0X
default "0x1e1f432d" if SOC_SERIES_STM32L1X
default "0x00ff6919" if SOC_SERIES_STM32L4X
default "0x04240bdf" if SOC_SERIES_STM32L5X
default "0x70d16653" if SOC_SERIES_STM32WBX
default "0x5ee21072" if SOC_STM32F103XE
default "0x57755a57" if SOC_SERIES_STM32F4X && (!SOC_STM32F407XE) && (!SOC_STM32F407XG)
default "0x6d0922fa" if SOC_STM32F407XE
default "0x8fb060fe" if SOC_STM32F407XG
help
UF2 bootloaders only accept UF2 files with a matching family ID.
This can be either a hex, e.g. 0x68ed2b88, or well-known family
name string. If the SoC in use is known by UF2, the Family ID will
be pre-filled with the known value.
config BUILD_OUTPUT_UF2_USE_FLASH_BASE
bool
default n
config BUILD_OUTPUT_UF2_USE_FLASH_OFFSET
bool
default n
endif # BUILD_OUTPUT_UF2
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
depends on BUILD_OUTPUT_HEX || BUILD_OUTPUT_BIN || BUILD_OUTPUT_S19
config BUILD_OUTPUT_STRIPPED
bool "Build a stripped binary"
help
Build a stripped binary zephyr/zephyr.strip in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_ADJUST_LMA
string
help
This will adjust the LMA address in the final ELF and hex files with
the value provided.
This will not affect the internal address symbols inside the image but
can be useful when adjusting the LMA address for flash tools or multi
stage loaders where a pre-loader may copy image to a second location
before booting a second core.
The value will be evaluated as a math expression; this means that the
following are valid expressions:
- 1024
- 0x1000
- -0x1000
- 0x20000000 - 0x10000000
Note: negative numbers are valid.
To adjust according to a chosen flash partition one can specify a
default as:
DT_CHOSEN_IMAGE_<name> := <name>,<name>-partition
DT_CHOSEN_Z_FLASH := zephyr,flash
config BUILD_OUTPUT_ADJUST_LMA
default "$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_IMAGE_M4))-\
$(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH))"
config BUILD_OUTPUT_INFO_HEADER
bool "Create a image information header"
help
Create an image information header which will contain image
information from the Zephyr binary.
Example of information contained in the header file:
- Number of segments in the image
- LMA address of each segment
- VMA address of each segment
- Size of each segment
config BUILD_ALIGN_LMA
bool "Align LMA in output image"
default y if BUILD_OUTPUT_ADJUST_LMA!=""
help
Ensure that the LMA for each section in the output image respects
the alignment requirements of that section. This is required for
some tooling, such as objcopy, to be able to adjust the LMA of the
ELF file.
Build a stripped binary. This will build a zephyr.stripped file needed
by some platforms.
config APPLICATION_DEFINED_SYSCALL
bool "Scan application folder for any syscall definition"
@@ -689,141 +329,7 @@ config APPLICATION_DEFINED_SYSCALL
Scan additional folders inside application source folder
for application defined syscalls.
config MAKEFILE_EXPORTS
bool "Generate build metadata files named Makefile.exports"
help
Generates a file with build information that can be read by
third party Makefile-based build systems.
config BUILD_OUTPUT_META
bool "Create a build meta file"
help
Create a build meta file in the build directory containing lists of:
- Zephyr: path and revision (if git repo)
- Zephyr modules: name, path, and revision (if git repo)
- West:
- manifest: path and revision
- projects: path and revision
- Workspace:
- dirty: one or more repositories are marked dirty
- extra: extra Zephyr modules are manually included in the build
- off: the SHA of one or more west projects are not what the manifest
defined when `west update` was run the last time (`manifest-rev`).
The off state is only present if a west workspace is found.
File extension is .meta
config BUILD_OUTPUT_META_STATE_PROPAGATE
bool "Propagate module and project state"
depends on BUILD_OUTPUT_META
help
Propagate the state of each module to the Zephyr revision field.
If west is used the state of each west project is also propagated to
the Zephyr revision field.
West manifest repo revision field will also
be marked with the same state as the Zephyr revision.
The final revision will become: <SHA>-<state1>-<state2>-<state3>...
If no states are appended to the SHA it means the build is of a clean
tree.
- dirty: one or more repositories are marked dirty
- extra: extra Zephyr modules are manually included in the build
- off: the SHA of one or more west projects are not what the manifest
defined when `west update` was run the last time (`manifest-rev`).
The off state is only present if a west workspace is found.
config BUILD_OUTPUT_STRIP_PATHS
bool "Strip absolute paths from binaries"
default y
help
If the compiler supports it, strip the ${ZEPHYR_BASE} prefix from the
__FILE__ macro used in __ASSERT*, in the
.noinit."/home/joe/zephyr/fu/bar.c" section names and in any
application code.
This saves some memory, stops leaking user locations in binaries, makes
failure logs more deterministic and most importantly makes builds more
deterministic.
Debuggers usually have a path mapping feature to ensure the files are
still found.
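A small illustration of what gets stripped (a sketch; the exact flag Zephyr passes is an assumption here, GCC/Clang's -fmacro-prefix-map being the usual mechanism):

/* path_demo.c - illustrative only */
#include <stdio.h>

void report_origin(void)
{
	/* Without stripping, __FILE__ expands to the absolute path used at
	 * build time, e.g. "/home/joe/zephyrproject/zephyr/drivers/foo.c",
	 * baking the checkout location into the image.  With the prefix
	 * stripped (e.g. -fmacro-prefix-map=${ZEPHYR_BASE}=), the same
	 * expansion becomes a tree-relative "drivers/foo.c". */
	printf("built from %s:%d\n", __FILE__, __LINE__);
}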
config CHECK_INIT_PRIORITIES
bool "Build time initialization priorities check"
default y
depends on !NATIVE_LIBRARY
depends on "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "armclang"
help
Check the build for initialization priority issues by comparing the
initialization priority in the build with the device dependency
derived from the devicetree definition.
Fails the build on priority errors (dependent devices, inverted
priority), see CHECK_INIT_PRIORITIES_FAIL_ON_WARNING to fail on
warnings (dependent devices, same priority) as well.
config CHECK_INIT_PRIORITIES_FAIL_ON_WARNING
bool "Fail the build on priority check warnings"
depends on CHECK_INIT_PRIORITIES
help
Fail the build if the dependency check script identifies any pair of
devices depending on each other but initialized with the same
priority.
config EMIT_ALL_SYSCALLS
bool "Emit all possible syscalls in the tree"
help
This tells the build system to emit all possible syscalls found
in the tree, instead of only those syscalls associated with enabled
drivers and subsystems.
endmenu
config DEPRECATED
bool
help
Symbol that must be selected by a feature or module if it is
considered to be deprecated.
config WARN_DEPRECATED
bool
default y
prompt "Warn on deprecated usage"
help
Print a warning when the Kconfig tree is parsed if any deprecated
features are enabled.
config EXPERIMENTAL
bool
help
Symbol that must be selected by a feature if it is considered to be
at an experimental implementation stage.
config WARN_EXPERIMENTAL
bool
prompt "Warn on experimental usage"
help
Print a warning when the Kconfig tree is parsed if any experimental
features are enabled.
config TAINT
bool
help
Symbol that must be selected by a feature or module if the Zephyr
build is considered tainted.
config ENFORCE_ZEPHYR_STDINT
bool
prompt "Enforce Zephyr convention for stdint"
depends on !ARCH_POSIX
default y
help
This enforces the Zephyr stdint convention where int32_t = int,
int64_t = long long, and intptr_t = long so that short string
format length modifiers can be used universally across ILP32
and LP64 architectures. Sometimes this is not possible e.g. when
linking against a binary-only C++ library whose type mangling
is incompatible with the Zephyr convention, or if the build
environment doesn't allow such enforcement, in which case this
should be turned off with the caveat that argument type validation
on Zephyr code will be skipped.
endmenu
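A short C illustration of why the convention matters (a sketch, not from the tree): with int32_t defined as int and intptr_t as long, the short length modifiers below are correct on both ILP32 and LP64 targets, so code can avoid the PRId32-style macros.

/* stdint_demo.c - illustrative only */
#include <stdint.h>
#include <stdio.h>

void log_event(int32_t count, intptr_t where)
{
	/* "%d" matches int32_t (== int) and "%ld" matches intptr_t (== long)
	 * under the Zephyr convention, on 32-bit and 64-bit targets alike. */
	printf("count=%d at=%ld\n", count, where);
}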
@@ -839,7 +345,7 @@ config IS_BOOTLOADER
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader"
default 0
default 16
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
@@ -849,58 +355,66 @@ config BOOTLOADER_SRAM_SIZE
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
config BOOTLOADER_MCUBOOT
bool "MCUboot bootloader support"
select USE_CODE_PARTITION
help
This option signifies that the target uses MCUboot as a bootloader,
or in other words that the image is to be chain-loaded by MCUboot.
This sets several required build system and Device Tree options in
order for the image generated to be bootable using the MCUboot open
source bootloader. Currently this includes:
* Setting TEXT_SECTION_OFFSET to a default value that allows space
for the MCUboot image header
* Activating SW_VECTOR_RELAY on Cortex-M0 (or Armv8-M baseline)
targets with no built-in vector relocation mechanisms
* Including dts/common/mcuboot.overlay when building the Device
Tree in order to place and link the image at the slot0 offset
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
depends on SOC_FAMILY_ESP32 && !BOOTLOADER_MCUBOOT && !MCUBOOT
default y
depends on SOC_ESP32
help
This option will trigger the compilation of the ESP-IDF bootloader
inside the build folder.
At flash time, the bootloader will be flashed with the zephyr image
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION
config BOOTLOADER_KEXEC
bool "Boot using Linux kexec() system call"
depends on X86
help
Signifies that the target uses a BOSSA compatible bootloader. If CDC
ACM USB support is also enabled then the board will reboot into the
bootloader automatically when bossac is run.
This option signifies that Linux boots the kernel using the kexec system call
and utility. This method is used to boot the kernel over the network.
config BOOTLOADER_BOSSA_DEVICE_NAME
string "BOSSA CDC ACM device name"
depends on BOOTLOADER_BOSSA && CDC_ACM_DTE_RATE_CALLBACK_SUPPORT
default "CDC_ACM_0"
config BOOTLOADER_CONTEXT_RESTORE
bool "Boot loader has context restore support"
default y
depends on SYS_POWER_DEEP_SLEEP_STATES && BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
help
Sets the CDC ACM port to watch for reboot commands.
This option signifies that the target has a bootloader
that restores CPU context upon resuming from deep sleep
power state.
choice
prompt "BOSSA bootloader variant"
depends on BOOTLOADER_BOSSA
config BOOTLOADER_BOSSA_LEGACY
bool "Legacy"
config REBOOT
bool "Reboot functionality"
select SYSTEM_CLOCK_DISABLE
help
Select the Legacy variant of the BOSSA bootloader. This is defined
for compatibility mode only. The recommendation is to use newer
versions like Arduino or Adafruit UF2.
Enable the sys_reboot() API. Enabling this can drag in other subsystems
needed to perform a "safe" reboot (e.g. SYSTEM_CLOCK_DISABLE, to stop the
system clock before issuing a reset).
config BOOTLOADER_BOSSA_ARDUINO
bool "Arduino"
config MISRA_SANE
bool "MISRA standards compliance features"
help
Select the Arduino variant of the BOSSA bootloader. Uses 0x07738135
as the magic value to enter the bootloader.
config BOOTLOADER_BOSSA_ADAFRUIT_UF2
bool "Adafruit UF2"
help
Select the Adafruit UF2 variant of the BOSSA bootloader. Uses
0xf01669ef as the magic value to enter the bootloader.
endchoice
Causes the source code to build in "MISRA" mode, which
disallows some otherwise-permitted features of the C
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
endmenu
menu "Compatibility"
config COMPAT_INCLUDES

(file diff suppressed because it is too large)

Makefile (new file)

@@ -0,0 +1,25 @@
#
# Top level makefile for documentation build
#
ifndef ZEPHYR_BASE
$(error The ZEPHYR_BASE environment variable must be set)
endif
BUILDDIR ?= doc/_build
DOC_TAG ?= development
SPHINXOPTS ?= -q
# Documentation targets
# ---------------------------------------------------------------------------
clean:
rm -rf ${BUILDDIR}
htmldocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
htmldocs-fast:
mkdir -p ${BUILDDIR} && cmake -GNinja -DKCONFIG_TURBO_MODE=1 -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
pdfdocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} pdfdocs


@@ -1,21 +1,16 @@
.. raw:: html
<a href="https://www.zephyrproject.org">
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="doc/_static/images/logo-readme-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="doc/_static/images/logo-readme-light.svg">
<img src="doc/_static/images/logo-readme-light.svg">
</picture>
<img src="doc/images/Zephyr-Project.png">
</p>
</a>
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a
href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain">
<img
src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
src="https://api.shippable.com/projects/58ffb2b8baa5e307002e1d79/badge?branch=master">
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -26,12 +21,13 @@ The Zephyr OS is based on a small-footprint kernel designed for use on
resource-constrained systems: from simple embedded environmental sensors and
LED wearables to sophisticated smart watches and IoT wireless gateways.
The Zephyr kernel supports multiple architectures, including ARM (Cortex-A,
Cortex-R, Cortex-M), Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V,
SPARC, MIPS, and a large number of `supported boards`_.
The Zephyr kernel supports multiple architectures, including ARM Cortex-M,
Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V, and a large number of
`supported boards`_.
.. below included in doc/introduction/introduction.rst
.. start_include_here
Getting Started
***************
@@ -39,16 +35,12 @@ Getting Started
Welcome to Zephyr! See the `Introduction to Zephyr`_ for a high-level overview,
and the documentation's `Getting Started Guide`_ to start developing.
.. start_include_here
Community Support
*****************
Community support is provided via mailing lists and Discord; see the Resources
Community support is provided via mailing lists and Slack; see the Resources
below for details.
.. _project-resources:
Resources
*********
@@ -59,7 +51,7 @@ Here's a quick summary of resources to help you find your way around:
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://github.com/zephyrproject-rtos/zephyr/releases
* **Releases**: https://zephyrproject.org/developers/#downloads
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,
@@ -67,9 +59,10 @@ Here's a quick summary of resources to help you find your way around:
`Zephyr Development mailing list`_. The other `Zephyr mailing list
subgroups`_ have their own archives and sign-up pages.
* **Nightly CI Build Status**: https://lists.zephyrproject.org/g/builds
The builds@lists.zephyrproject.org mailing list archives the CI nightly build results.
* **Chat**: Real-time chat happens in Zephyr's Discord Server. Use
this `Discord Invite`_ to register.
The builds@lists.zephyrproject.org mailing list archives the CI
(shippable) nightly build results.
* **Chat**: Zephyr's Slack workspace is https://zephyrproject.slack.com. Use
this `Slack Invite`_ to register.
* **Contributing**: see the `Contribution Guide`_
* **Wiki**: `Zephyr GitHub wiki`_
* **Issues**: https://github.com/zephyrproject-rtos/zephyr/issues
@@ -78,15 +71,15 @@ Here's a quick summary of resources to help you find your way around:
tracked separately at https://zephyrprojectsec.atlassian.net.
* **Zephyr Project Website**: https://zephyrproject.org
.. _Discord Invite: https://chat.zephyrproject.org
.. _Slack Invite: https://tinyurl.com/y5glwylp
.. _supported boards: http://docs.zephyrproject.org/latest/boards/index.html
.. _Zephyr Documentation: http://docs.zephyrproject.org
.. _Introduction to Zephyr: http://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: http://docs.zephyrproject.org/latest/develop/getting_started/index.html
.. _Getting Started Guide: http://docs.zephyrproject.org/latest/getting_started/index.html
.. _Contribution Guide: http://docs.zephyrproject.org/latest/contribute/index.html
.. _Zephyr GitHub wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _Zephyr Development mailing list: https://lists.zephyrproject.org/g/devel
.. _Zephyr mailing list subgroups: https://lists.zephyrproject.org/g/main/subgroups
.. _Sample and Demo Code Examples: http://docs.zephyrproject.org/latest/samples/index.html
.. _Security: http://docs.zephyrproject.org/latest/security/index.html
.. _Asking for Help Tips: https://docs.zephyrproject.org/latest/develop/getting_started/index.html#asking-for-help
.. _Asking for Help Tips: https://docs.zephyrproject.org/latest/guides/getting-help.html


@@ -1,5 +1,5 @@
VERSION_MAJOR = 3
VERSION_MINOR = 4
PATCHLEVEL = 99
VERSION_MAJOR = 2
VERSION_MINOR = 0
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -1,14 +1,6 @@
# SPDX-License-Identifier: Apache-2.0
# FIXME: SHADOW_VARS: Remove this once we have enabled -Wshadow globally.
add_compile_options($<TARGET_PROPERTY:compiler,warning_shadow_variables>)
add_definitions(-D__ZEPHYR_SUPERVISOR__)
include_directories(
${ZEPHYR_BASE}/kernel/include
${ARCH_DIR}/${ARCH}/include
)
add_subdirectory(common)
add_subdirectory(${ARCH_DIR}/${ARCH} arch/${ARCH})

(file diff suppressed because it is too large)

@@ -12,36 +12,4 @@ zephyr_cc_option(-fno-delete-null-pointer-checks)
zephyr_cc_option_ifdef(CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS -munaligned-access)
if(NOT COMPILER STREQUAL arcmwdt)
if(CONFIG_THREAD_LOCAL_STORAGE)
# Instruct compiler to use proper register as cached thread pointer for thread local storage.
# For ARCv2 the default register is usually not specified - so we need to specify it
# For ARCv3 the register is fixed to r30, so we don't need to specify it
zephyr_compile_options_ifdef(CONFIG_ISA_ARCV2 -mtp-regno=26)
else()
# If thread local storage isn't used - we can safely schedule thread pointer register
zephyr_compile_options_ifdef(CONFIG_ISA_ARCV2 -mtp-regno=none)
endif()
endif()
add_subdirectory(core)
if(COMPILER STREQUAL arcmwdt)
add_subdirectory(arcmwdt)
if(CONFIG_64BIT)
zephyr_compile_options(-Ml)
# Instruct MWDT assembler not to warn when we load only lower half (32bit) of symbol
# instead of full 64bit address in ASM code. It is valid as we don't support Zephyr
# linkage to high addresses for 64bit ARC platforms.
zephyr_compile_options(-Wa,-offwarn=168)
endif()
endif()
if(CONFIG_ISA_ARCV3 AND CONFIG_64BIT)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littlearc64)
elseif(CONFIG_ISA_ARCV3 AND NOT CONFIG_64BIT)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearc64)
else()
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearc)
endif()


@@ -1,7 +1,10 @@
# ARC options
#
# Copyright (c) 2014, 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
menu "ARC Options"
depends on ARC
@@ -9,135 +12,47 @@ menu "ARC Options"
config ARCH
default "arc"
choice
prompt "ARC core family"
default CPU_ARCEM
config CPU_ARCEM
bool
config CPU_ARCEM
bool "ARC EM cores"
select CPU_ARCV2
select ATOMIC_OPERATIONS_C
help
This option signifies the use of an ARC EM CPU
config CPU_ARCHS
bool
config CPU_ARCHS
bool "ARC HS cores"
select CPU_ARCV2
select ATOMIC_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
choice
prompt "ARC Instruction Set"
default ISA_ARCV2
config ISA_ARCV2
bool "ARC ISA v2"
select ARCH_HAS_STACK_PROTECTION if ARC_HAS_STACK_CHECKING || (ARC_MPU && ARC_MPU_VER !=2)
select ARCH_HAS_USERSPACE if ARC_MPU
select ARCH_HAS_SINGLE_THREAD_SUPPORT if !SMP
select USE_SWITCH
select USE_SWITCH_SUPPORTED
help
v2 ISA for the ARC-HS & ARC-EM cores
config ISA_ARCV3
bool "ARC ISA v3"
select ARCH_HAS_SINGLE_THREAD_SUPPORT if !SMP
select USE_SWITCH
select USE_SWITCH_SUPPORTED
endchoice
if ISA_ARCV2
menu "ARCv2 Family Options"
config CPU_EM4
config CPU_ARCV2
bool
select CPU_ARCEM
help
If y, the SoC uses an ARC EM4 CPU
config CPU_EM4_DMIPS
bool
select CPU_ARCEM
help
If y, the SoC uses an ARC EM4 DMIPS CPU
config CPU_EM4_FPUS
bool
select CPU_ARCEM
help
If y, the SoC uses an ARC EM4 DMIPS CPU with the single-precision
floating-point extension
config CPU_EM4_FPUDA
bool
select CPU_ARCEM
help
If y, the SoC uses an ARC EM4 DMIPS CPU with single-precision
floating-point and double assist instructions
config CPU_EM6
bool
select CPU_ARCEM
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC EM6 CPU
config CPU_HS3X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS3x CPU
config CPU_HS4X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an HS4X CPU
endif #ISA_ARCV2
if ISA_ARCV3
config CPU_HS5X
bool
select CPU_ARCHS
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS5x CPU
config CPU_HS6X
bool
select CPU_ARCHS
select 64BIT
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
If y, the SoC uses an ARC HS6x CPU
endif #ISA_ARCV3
config FP_FPU_DA
bool
menu "ARC CPU Options"
config ARC_HAS_ZOL
bool
depends on ISA_ARCV2
select ARCH_HAS_STACK_PROTECTION if ARC_HAS_STACK_CHECKING || ARC_MPU
select ARCH_HAS_USERSPACE if ARC_MPU
select USE_SWITCH
select USE_SWITCH_SUPPORTED
default y
help
ARCv2 CPUs have ZOL hardware loop mechanism which the ARCv3 ISA drops.
Architecturally ZOL provides
- LPcc instruction
- LP_COUNT core reg
- LP_START, LP_END aux regs
Disabling this option removes usage of ZOL regs from code
This option signifies the use of a CPU of the ARCv2 family.
config NUM_IRQ_PRIO_LEVELS
config DATA_ENDIANNESS_LITTLE
bool
default y
help
This is driven by the processor implementation, since it is fixed in
hardware. The BSP should set this value to 'n' if the data is
implemented as big endian.
config NUM_IRQ_PRIO_LEVELS
int "Number of supported interrupt priority levels"
range 1 16
help
@@ -146,7 +61,7 @@ config NUM_IRQ_PRIO_LEVELS
The BSP must provide a valid default for proper operation.
config NUM_IRQS
config NUM_IRQS
int "Upper limit of interrupt numbers/IDs used"
range 17 256
help
@@ -157,10 +72,9 @@ config NUM_IRQS
The BSP must provide a valid default. This drives the size of the
vector table.
config RGF_NUM_BANKS
config RGF_NUM_BANKS
int "Number of General Purpose Register Banks"
depends on ARC_FIRQ
depends on NUM_IRQ_PRIO_LEVELS > 1
depends on CPU_ARCV2
range 1 2
default 2
help
@@ -170,16 +84,9 @@ config RGF_NUM_BANKS
If fast interrupts are supported but there is only 1
register bank, the fast interrupt handler must save
and restore general purpose registers.
NOTE: more than one interrupt priority level is required to use the
second register bank - otherwise all interrupts will use the same
register bank. Such a configuration isn't supported in software and
is not beneficial from a performance point of view.
config ARC_FIRQ
bool "FIRQ enable"
depends on ISA_ARCV2
depends on NUM_IRQ_PRIO_LEVELS > 1
depends on !ARC_HAS_SECURE
default y
help
Fast interrupts are supported (FIRQ). If FIRQ enabled, for interrupts
@@ -187,52 +94,32 @@ config ARC_FIRQ
other regs will be saved according to the number of register bank;
If FIRQ is disabled, the handle of interrupts with highest priority
will be same with other interrupts.
NOTE: the configuration with FIRQ enabled and only one interrupt
priority level (so all interrupts are FIRQ) is not allowed. Such a
configuration isn't supported in software and is not beneficial
from a performance point of view.
config ARC_FIRQ_STACK
bool "Separate firq stack"
depends on ARC_FIRQ && RGF_NUM_BANKS > 1
help
Use a separate stack for FIRQ handling. When the fast irq is also a direct
irq, this gives the minimal interrupt latency.
config ARC_FIRQ_STACK_SIZE
int "FIRQ stack size"
depends on ARC_FIRQ_STACK
default 1024
help
The size of firq stack.
config ARC_HAS_STACK_CHECKING
config ARC_HAS_STACK_CHECKING
bool "ARC has STACK_CHECKING"
depends on ISA_ARCV2
default y
help
ARC is configured with STACK_CHECKING which is a mechanism for
checking stack accesses and raising an exception when a stack
overflow or underflow is detected.
config ARC_CONNECT
config ARC_CONNECT
bool "ARC has ARC connect"
select SCHED_IPI_SUPPORTED
help
ARC is configured with ARC CONNECT which is a hardware for connecting
multi cores.
config ARC_STACK_CHECKING
config ARC_STACK_CHECKING
bool
select NO_UNUSED_STACK_INSPECTION
help
Use ARC STACK_CHECKING to do stack protection
config ARC_STACK_PROTECTION
config ARC_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
select ARC_STACK_CHECKING if ARC_HAS_STACK_CHECKING
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU && ARC_MPU_VER !=2)
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU)
select THREAD_STACK_INFO
help
This option enables either:
@@ -244,8 +131,9 @@ config ARC_STACK_PROTECTION
selection of the ARC stack checking is
prioritized over the MPU-based stack guard.
config ARC_USE_UNALIGNED_MEM_ACCESS
bool "Unaligned access in HW"
config ARC_USE_UNALIGNED_MEM_ACCESS
bool "Enable unaligned access in HW"
default n if CPU_ARCEM
default y if CPU_ARCHS
depends on (CPU_ARCEM && !ARC_HAS_SECURE) || CPU_ARCHS
help
@@ -253,7 +141,7 @@ config ARC_USE_UNALIGNED_MEM_ACCESS
to support unaligned memory access which is then disabled by default.
Enable unaligned access in hardware and make software use it.
config FAULT_DUMP
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
@@ -268,6 +156,9 @@ config FAULT_DUMP
0: Off.
config XIP
default y if !UART_NSIM
config GEN_ISR_TABLES
default y
@@ -286,8 +177,8 @@ config CODE_DENSITY
Enable code density option to get better code density
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6 and/or DSP)"
default y if CPU_HS3X || CPU_HS4X || CPU_HS5X || CPU_HS6X
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y if FLOAT
help
Depending on the configuration, CPU can contain accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPR so
@@ -295,7 +186,6 @@ config ARC_HAS_ACCL_REGS
config ARC_HAS_SECURE
bool "ARC has SecureShield"
depends on ISA_ARCV2
select CPU_HAS_TEE
select ARCH_HAS_TRUSTED_EXECUTION
help
@@ -311,7 +201,8 @@ config SJLI_TABLE_SIZE
sjli instruction.
config ARC_SECURE_FIRMWARE
bool "Generate Secure Firmware"
prompt "Generate Secure Firmware"
bool
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_SECURE
help
@@ -327,7 +218,8 @@ config ARC_SECURE_FIRMWARE
and normal resources of the ARC processors.
config ARC_NORMAL_FIRMWARE
bool "Generate Normal Firmware"
prompt "Generate Normal Firmware"
bool
depends on !ARC_SECURE_FIRMWARE
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_NONSECURE
@@ -346,13 +238,11 @@ config ARC_NORMAL_FIRMWARE
resources of the ARC processors, and, therefore, it shall avoid
accessing them.
source "arch/arc/core/dsp/Kconfig"
menu "ARC MPU Options"
depends on CPU_HAS_MPU
config ARC_MPU_ENABLE
bool "Memory Protection Unit (MPU)"
bool "Enable MPU"
select ARC_MPU
help
Enable MPU
@@ -361,18 +251,34 @@ source "arch/arc/core/mpu/Kconfig"
endmenu
config DCACHE_LINE_SIZE
default 32
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768 if !64BIT
default 2048 if 64BIT
config CACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
help
Size in bytes of the exception handling stack, which sits at the top of
the interrupt stack to keep the memory footprint small, since exceptions
are infrequent. To limit the impact on interrupt handling, especially
nested interrupts, it cannot be too large.
This option enables querying the d-cache build register for finding
the d-cache line size at the expense of taking more memory and code
and a slightly increased boot time.
If the CPU's d-cache line size is known in advance, disable this
option and manually enter the value for CACHE_LINE_SIZE.
config CACHE_LINE_SIZE
int "Cache line size" if !CACHE_LINE_SIZE_DETECT
default 32
help
Size in bytes of a CPU d-cache line.
Detect automatically at runtime by selecting CACHE_LINE_SIZE_DETECT.
config ARCH_CACHE_FLUSH_DETECT
bool
config CACHE_FLUSHING
bool "Enable d-cache flushing mechanism"
help
This links in the sys_cache_flush() function, which provides a
way to flush multiple lines of the d-cache.
If the d-cache is present, set this to y.
If the d-cache is NOT present, set this to n.
endmenu
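For reference, a usage sketch of the function named above (the prototype is an assumption here, a start address plus a length in bytes; the authoritative signature is in the tree's cache header):

/* cache_demo.c - usage sketch only */
#include <stdint.h>
#include <string.h>

/* Assumed prototype; take the real one from the tree's cache header. */
extern void sys_cache_flush(uintptr_t addr, size_t size);

static uint8_t dma_buf[64];

void publish_dma_buffer(const void *src, size_t len)
{
	if (len > sizeof(dma_buf)) {
		len = sizeof(dma_buf);
	}
	memcpy(dma_buf, src, len);

	/* Push the freshly written lines out of the d-cache so a DMA master
	 * (or another observer of memory) sees the new contents. */
	sys_cache_flush((uintptr_t)dma_buf, sizeof(dma_buf));
}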
@@ -385,38 +291,4 @@ config ARC_EXCEPTION_DEBUG
and parameters, at a cost of code/data size for the human-readable
strings.
config ARC_EARLY_SOC_INIT
bool "Make early stage SoC-specific initialization"
help
Call SoC per-core setup code on early stage initialization
(before C runtime initialization). Setup code is called in form of
soc_early_asm_init_percpu assembler macro.
endmenu
config MAIN_STACK_SIZE
default 4096 if 64BIT
config ISR_STACK_SIZE
default 4096 if 64BIT
config SYSTEM_WORKQUEUE_STACK_SIZE
default 4096 if 64BIT
config IDLE_STACK_SIZE
default 1024 if 64BIT
config IPM_CONSOLE_STACK_SIZE
default 2048 if 64BIT
config TEST_EXTRA_STACK_SIZE
default 2048 if 64BIT
config CMSIS_THREAD_MAX_STACK_SIZE
default 2048 if 64BIT
config CMSIS_V2_THREAD_MAX_STACK_SIZE
default 2048 if 64BIT
config CMSIS_V2_THREAD_DYNAMIC_STACK_SIZE
default 2048 if 64BIT


@@ -1,5 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_ARCMWDT_LIBC OR CONFIG_CPP)
zephyr_sources(arcmwdt-dtr-stubs.c)
endif()


@@ -1,22 +0,0 @@
/*
* Copyright (c) 2021 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/toolchain.h>
__weak void *__dso_handle;
int __cxa_atexit(void (*destructor)(void *), void *objptr, void *dso)
{
ARG_UNUSED(destructor);
ARG_UNUSED(objptr);
ARG_UNUSED(dso);
return 0;
}
int atexit(void (*function)(void))
{
return 0;
}


@@ -3,34 +3,28 @@
zephyr_library()
zephyr_library_sources(
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
timestamp.c
isr_wrapper.S
regular_irq.S
switch.S
prep_c.c
reset.S
vector_table.c
)
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
timestamp.c
isr_wrapper.S
regular_irq.S
switch.S
prep_c.c
reset.S
vector_table.c
)
zephyr_library_sources_ifdef(CONFIG_ARCH_CACHE cache.c)
zephyr_library_sources_ifdef(CONFIG_CACHE_FLUSHING cache.c)
zephyr_library_sources_ifdef(CONFIG_ARC_FIRQ fast_irq.S)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
zephyr_library_sources_if_kconfig(irq_offload.c)
add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_SMP arc_smp.c)


@@ -10,35 +10,41 @@
*
*/
#include <zephyr/kernel.h>
#include <zephyr/arch/cpu.h>
#include <zephyr/spinlock.h>
#include <kernel_internal.h>
#include <kernel.h>
#include <arch/cpu.h>
#include <spinlock.h>
static struct k_spinlock arc_connect_spinlock;
#define LOCKED(lck) for (k_spinlock_key_t __i = {}, \
__key = k_spin_lock(lck); \
!__i.key; \
k_spin_unlock(lck, __key), __i.key = 1)
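/*
 * Note on the LOCKED() helper above (illustrative explanation only): the
 * for-loop runs its body exactly once with the lock held.  __key takes the
 * spinlock in the initializer, the body executes while __i.key is still 0,
 * and the "increment" expression releases the lock and sets __i.key to 1 so
 * the loop exits.  Typical usage, as in the functions below:
 *
 *	LOCKED(&arc_connect_spinlock) {
 *		z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_IRQ, core);
 *	}
 *
 * i.e. take the lock, run the block once, and release the lock when the
 * block completes normally.
 */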
/* Generate an inter-core interrupt to the target core */
void z_arc_connect_ici_generate(uint32_t core)
void z_arc_connect_ici_generate(u32_t core)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_IRQ, core);
}
}
/* Acknowledge the inter-core interrupt raised by core */
void z_arc_connect_ici_ack(uint32_t core)
void z_arc_connect_ici_ack(u32_t core)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_ACK, core);
}
}
/* Read inter-core interrupt status */
uint32_t z_arc_connect_ici_read_status(uint32_t core)
u32_t z_arc_connect_ici_read_status(u32_t core)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_READ_STATUS, core);
ret = z_arc_connect_cmd_readback();
}
@@ -47,11 +53,11 @@ uint32_t z_arc_connect_ici_read_status(uint32_t core)
}
/* Check the source of inter-core interrupt */
uint32_t z_arc_connect_ici_check_src(void)
u32_t z_arc_connect_ici_check_src(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -62,9 +68,9 @@ uint32_t z_arc_connect_ici_check_src(void)
/* Clear the inter-core interrupt */
void z_arc_connect_ici_clear(void)
{
uint32_t cpu, c;
u32_t cpu, c;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
cpu = z_arc_connect_cmd_readback(); /* 1,2,4,8... */
@@ -83,47 +89,47 @@ void z_arc_connect_ici_clear(void)
}
/* Reset the cores in core_mask */
void z_arc_connect_debug_reset(uint32_t core_mask)
void z_arc_connect_debug_reset(u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RESET,
0, core_mask);
}
}
/* Halt the cores in core_mask */
void z_arc_connect_debug_halt(uint32_t core_mask)
void z_arc_connect_debug_halt(u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_HALT,
0, core_mask);
}
}
/* Run the cores in core_mask */
void z_arc_connect_debug_run(uint32_t core_mask)
void z_arc_connect_debug_run(u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RUN,
0, core_mask);
}
}
/* Set core mask */
void z_arc_connect_debug_mask_set(uint32_t core_mask, uint32_t mask)
void z_arc_connect_debug_mask_set(u32_t core_mask, u32_t mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_MASK,
mask, core_mask);
}
}
/* Read core mask */
uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
u32_t z_arc_connect_debug_mask_read(u32_t core_mask)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_READ_MASK,
0, core_mask);
ret = z_arc_connect_cmd_readback();
@@ -135,20 +141,20 @@ uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
/*
* Select cores that should be halted if the core issuing the command is halted
*/
void z_arc_connect_debug_select_set(uint32_t core_mask)
void z_arc_connect_debug_select_set(u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_SELECT,
0, core_mask);
}
}
/* Read the select value */
uint32_t z_arc_connect_debug_select_read(void)
u32_t z_arc_connect_debug_select_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_SELECT, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -157,11 +163,11 @@ uint32_t z_arc_connect_debug_select_read(void)
}
/* Read the status, halt or run of all cores in the system */
uint32_t z_arc_connect_debug_en_read(void)
u32_t z_arc_connect_debug_en_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_EN, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -170,11 +176,11 @@ uint32_t z_arc_connect_debug_en_read(void)
}
/* Read the last command sent */
uint32_t z_arc_connect_debug_cmd_read(void)
u32_t z_arc_connect_debug_cmd_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CMD, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -183,11 +189,11 @@ uint32_t z_arc_connect_debug_cmd_read(void)
}
/* Read the value of internal MCD_CORE register */
uint32_t z_arc_connect_debug_core_read(void)
u32_t z_arc_connect_debug_core_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -198,17 +204,17 @@ uint32_t z_arc_connect_debug_core_read(void)
/* Clear global free running counter */
void z_arc_connect_gfrc_clear(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_CLEAR, 0);
}
}
/* Read total 64 bits of global free running counter */
uint64_t z_arc_connect_gfrc_read(void)
u64_t z_arc_connect_gfrc_read(void)
{
uint32_t low;
uint32_t high;
uint32_t key;
u32_t low;
u32_t high;
u32_t key;
/*
* each core has its own arc connect interface, i.e.,
@@ -217,7 +223,7 @@ uint64_t z_arc_connect_gfrc_read(void)
* sub-components. For GFRC, the hardware allows simultaneous access to
* the counters, so an irq lock is enough.
*/
key = arch_irq_lock();
key = z_arch_irq_lock();
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_LO, 0);
low = z_arc_connect_cmd_readback();
@@ -225,15 +231,15 @@ uint64_t z_arc_connect_gfrc_read(void)
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HI, 0);
high = z_arc_connect_cmd_readback();
arch_irq_unlock(key);
z_arch_irq_unlock(key);
return (((uint64_t)high) << 32) | low;
return (((u64_t)high) << 32) | low;
}
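Because z_arc_connect_gfrc_read() returns the full 64-bit counter, it can serve as a monotonic, cross-core timestamp. A hedged usage sketch (the workload call is hypothetical):

/* Hedged sketch: measure elapsed GFRC ticks around a region of code. */
uint64_t start = z_arc_connect_gfrc_read();

do_work();                                   /* hypothetical workload */

uint64_t elapsed = z_arc_connect_gfrc_read() - start;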
/* Enable global free running counter */
void z_arc_connect_gfrc_enable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_ENABLE, 0);
}
}
@@ -241,26 +247,26 @@ void z_arc_connect_gfrc_enable(void)
/* Disable global free running counter */
void z_arc_connect_gfrc_disable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_DISABLE, 0);
}
}
/* Set the relevant cores to halt the global free running counter */
void z_arc_connect_gfrc_core_set(uint32_t core_mask)
void z_arc_connect_gfrc_core_set(u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_GFRC_SET_CORE,
0, core_mask);
}
}
/* Read the halt status of the global free running counter */
uint32_t z_arc_connect_gfrc_halt_read(void)
u32_t z_arc_connect_gfrc_halt_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HALT, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -269,11 +275,11 @@ uint32_t z_arc_connect_gfrc_halt_read(void)
}
/* Read the internal CORE register */
uint32_t z_arc_connect_gfrc_core_read(void)
u32_t z_arc_connect_gfrc_core_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -284,7 +290,7 @@ uint32_t z_arc_connect_gfrc_core_read(void)
/* Enable interrupt distribute unit */
void z_arc_connect_idu_enable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ENABLE, 0);
}
}
@@ -292,17 +298,17 @@ void z_arc_connect_idu_enable(void)
/* Disable interrupt distribute unit */
void z_arc_connect_idu_disable(void)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_DISABLE, 0);
}
}
/* Read enable status of interrupt distribute unit */
uint32_t z_arc_connect_idu_read_enable(void)
u32_t z_arc_connect_idu_read_enable(void)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_ENABLE, 0);
ret = z_arc_connect_cmd_readback();
}
@@ -314,21 +320,21 @@ uint32_t z_arc_connect_idu_read_enable(void)
* Set the triggering mode and distribution mode for the specified common
* interrupt
*/
void z_arc_connect_idu_set_mode(uint32_t irq_num,
uint16_t trigger_mode, uint16_t distri_mode)
void z_arc_connect_idu_set_mode(u32_t irq_num,
u16_t trigger_mode, u16_t distri_mode)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
irq_num, (distri_mode | (trigger_mode << 4)));
}
}
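The IDU helpers in this file are typically called in sequence when routing a shared peripheral interrupt to several cores. A hedged sketch follows; the common-interrupt number and the trigger/distribution mode encodings are assumptions for illustration only:

/* Hedged sketch: route common interrupt 24 (hypothetical) to cores 0 and 1. */
z_arc_connect_idu_disable();
z_arc_connect_idu_set_mode(24, 0 /* trigger mode, assumed encoding */,
                           0 /* distribution mode, assumed encoding */);
z_arc_connect_idu_set_dest(24, BIT(0) | BIT(1));  /* target cores 0 and 1 */
z_arc_connect_idu_set_mask(24, 0);                /* unmask */
z_arc_connect_idu_enable();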
/* Read the internal MODE register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
u32_t z_arc_connect_idu_read_mode(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MODE, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -340,20 +346,20 @@ uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
* Set the target cores to receive the specified common interrupt
* when it is triggered
*/
void z_arc_connect_idu_set_dest(uint32_t irq_num, uint32_t core_mask)
void z_arc_connect_idu_set_dest(u32_t irq_num, u32_t core_mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_DEST,
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
irq_num, core_mask);
}
}
/* Read the internal DEST register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
u32_t z_arc_connect_idu_read_dest(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_DEST, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -362,27 +368,27 @@ uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
}
/* Assert the specified common interrupt */
void z_arc_connect_idu_gen_cirq(uint32_t irq_num)
void z_arc_connect_idu_gen_cirq(u32_t irq_num)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_GEN_CIRQ, irq_num);
}
}
/* Acknowledge the specified common interrupt */
void z_arc_connect_idu_ack_cirq(uint32_t irq_num)
void z_arc_connect_idu_ack_cirq(u32_t irq_num)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ACK_CIRQ, irq_num);
}
}
/* Read the internal STATUS register of the specified common interrupt */
uint32_t z_arc_connect_idu_check_status(uint32_t irq_num)
u32_t z_arc_connect_idu_check_status(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_STATUS, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -391,11 +397,11 @@ uint32_t z_arc_connect_idu_check_status(uint32_t irq_num)
}
/* Read the internal SOURCE register of the specified common interrupt */
uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
u32_t z_arc_connect_idu_check_source(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_SOURCE, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -404,20 +410,20 @@ uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
}
/* Mask or unmask the specified common interrupt */
void z_arc_connect_idu_set_mask(uint32_t irq_num, uint32_t mask)
void z_arc_connect_idu_set_mask(u32_t irq_num, u32_t mask)
{
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MASK,
irq_num, mask);
}
}
/* Read the internal MASK register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_mask(uint32_t irq_num)
u32_t z_arc_connect_idu_read_mask(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MASK, irq_num);
ret = z_arc_connect_cmd_readback();
}
@@ -429,11 +435,11 @@ uint32_t z_arc_connect_idu_read_mask(uint32_t irq_num)
* Check if it is the first-acknowledging core to the common interrupt
* if IDU is programmed in the first-acknowledged mode
*/
uint32_t z_arc_connect_idu_check_first(uint32_t irq_num)
u32_t z_arc_connect_idu_check_first(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
K_SPINLOCK(&arc_connect_spinlock) {
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_FIRST, irq_num);
ret = z_arc_connect_cmd_readback();
}


@@ -6,21 +6,67 @@
/**
* @file
* @brief codes required for ARC multicore and Zephyr smp support
* @brief codes required for ARC smp support
*
*/
#include <zephyr/device.h>
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <device.h>
#include <kernel.h>
#include <kernel_structs.h>
#include <ksched.h>
#include <zephyr/init.h>
#include <zephyr/irq.h>
#include <arc_irq_offload.h>
#include <soc.h>
#include <init.h>
#ifndef IRQ_ICI
#define IRQ_ICI 19
#endif
#define ARCV2_ICI_IRQ_PRIORITY 1
static void sched_ipi_handler(void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
}
/**
* @brief Check whether need to do thread switch in isr context
*
* @details u64_t is used to let compiler use (r0, r1) as return register.
* use register r0 and register r1 as return value, r0 has
* new thread, r1 has old thread. If r0 == 0, it means no thread switch.
*/
u64_t z_arch_smp_switch_in_isr(void)
{
u64_t ret = 0;
u32_t new_thread;
u32_t old_thread;
if (!_current_cpu->swap_ok) {
return 0;
}
old_thread = (u32_t)_current;
new_thread = (u32_t)z_get_next_ready_thread();
if (new_thread != old_thread) {
_current_cpu->swap_ok = 0;
((struct k_thread *)new_thread)->base.cpu =
z_arch_curr_cpu()->id;
_current = (struct k_thread *) new_thread;
ret = new_thread | ((u64_t)(old_thread) << 32);
}
return ret;
}
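For readers following the packing scheme described in the comment above, here is a hedged C-level view of how the two halves come apart (the real consumer is the interrupt-exit assembly, which simply reads r0 and r1):

/* Hedged sketch: decode the packed return value. */
u64_t packed = z_arch_smp_switch_in_isr();
struct k_thread *new_thread = (struct k_thread *)(u32_t)packed;          /* r0 */
struct k_thread *old_thread = (struct k_thread *)(u32_t)(packed >> 32);  /* r1 */

if (new_thread != NULL) {
        /* a switch to new_thread is required; old_thread is the outgoing one */
}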
volatile struct {
arch_cpustart_t fn;
void (*fn)(int, void*);
void *arg;
} arc_cpu_init[CONFIG_MP_MAX_NUM_CPUS];
} arc_cpu_init[CONFIG_MP_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up master core and slave cores
@@ -29,144 +75,78 @@ volatile struct {
* master core that it is awake
*
*/
volatile uint32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
volatile u32_t arc_cpu_wake_flag;
/*
* _curr_cpu is used to record the struct of _cpu_t of each cpu.
* for efficient usage in assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_MAX_NUM_CPUS];
volatile _cpu_t *_curr_cpu[CONFIG_MP_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
void z_arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
void (*fn)(int, void *), void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* set the initial sp of the target cpu through arc_cpu_sp;
* arc_cpu_wake_flag protects arc_cpu_sp so that
* only one slave cpu can read it at a time
*/
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait slave cpu to start */
while (arc_cpu_wake_flag != 0U) {
while (arc_cpu_wake_flag != 0) {
;
}
}
#ifdef CONFIG_SMP
static void arc_connect_debug_mask_update(int cpu_num)
{
uint32_t core_mask = 1 << cpu_num;
/*
* MDB debugger may modify debug_select and debug_mask registers on start, so we can't
* rely on debug_select reset value.
*/
if (cpu_num != ARC_MP_PRIMARY_CPU_ID) {
core_mask |= z_arc_connect_debug_select_read();
}
z_arc_connect_debug_select_set(core_mask);
/* Debugger halts cores at all conditions:
* ARC_CONNECT_CMD_DEBUG_MASK_H: Core global halt.
* ARC_CONNECT_CMD_DEBUG_MASK_AH: Actionpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_BH: Software breakpoint halt.
* ARC_CONNECT_CMD_DEBUG_MASK_SH: Self halt.
*/
z_arc_connect_debug_mask_set(core_mask, (ARC_CONNECT_CMD_DEBUG_MASK_SH
| ARC_CONNECT_CMD_DEBUG_MASK_BH | ARC_CONNECT_CMD_DEBUG_MASK_AH
| ARC_CONNECT_CMD_DEBUG_MASK_H));
}
#endif
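The same two calls can also be used directly to gang cores for debugging. A hedged sketch using the mask bits named in the comment above (the core selection is illustrative):

/* Hedged sketch: have cores 0 and 1 halt together on any halt condition. */
uint32_t core_mask = BIT(0) | BIT(1);

z_arc_connect_debug_select_set(core_mask);
z_arc_connect_debug_mask_set(core_mask, ARC_CONNECT_CMD_DEBUG_MASK_SH
                             | ARC_CONNECT_CMD_DEBUG_MASK_BH
                             | ARC_CONNECT_CMD_DEBUG_MASK_AH
                             | ARC_CONNECT_CMD_DEBUG_MASK_H);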
void arc_core_private_intc_init(void);
/* the C entry of slave cores */
void z_arc_slave_start(int cpu_num)
void z_arch_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
struct arc_connect_bcr bcr;
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(cpu_num);
}
void (*fn)(int, void*);
z_icache_setup();
z_irq_setup();
arc_core_private_intc_init();
z_irq_priority_set(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY, 0);
irq_enable(IRQ_ICI);
arc_irq_offload_init_smp();
z_arc_connect_ici_clear();
z_irq_priority_set(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority), 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
#endif
/* call the function set by arch_start_cpu */
/* call the function set by z_arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(const void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
fn(cpu_num, arc_cpu_init[cpu_num].arg);
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
void z_arch_sched_ipi(void)
{
uint32_t i;
u32_t i;
/* broadcast sched_ipi request to other cores
/* broadcast sched_ipi request to all cores
* if the target is the current core, the hardware will ignore it
*/
unsigned int num_cpus = arch_num_cpus();
for (i = 0U; i < num_cpus; i++) {
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
z_arc_connect_ici_generate(i);
}
}
static int arc_smp_init(void)
static int arc_smp_init(struct device *dev)
{
ARG_UNUSED(dev);
struct arc_connect_bcr bcr;
/* necessary master core init */
_kernel.cpus[0].id = 0;
_kernel.cpus[0].irq_stack = Z_THREAD_STACK_BUFFER(_interrupt_stack)
+ CONFIG_ISR_STACK_SIZE;
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.dbg) {
/* configure inter-core debug unit if available */
arc_connect_debug_mask_update(ARC_MP_PRIMARY_CPU_ID);
}
if (bcr.ipi) {
/* register ici interrupt, just need master core to register once */
z_arc_connect_ici_clear();
IRQ_CONNECT(DT_IRQN(DT_NODELABEL(ici)),
DT_IRQ(DT_NODELABEL(ici), priority),
sched_ipi_handler, NULL, 0);
IRQ_CONNECT(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY,
sched_ipi_handler, NULL, 0);
irq_enable(DT_IRQN(DT_NODELABEL(ici)));
irq_enable(IRQ_ICI);
} else {
__ASSERT(0,
"ARC connect has no inter-core interrupt\n");
@@ -178,7 +158,7 @@ static int arc_smp_init(void)
z_arc_connect_gfrc_enable();
/* when all cores halt, gfrc halt */
z_arc_connect_gfrc_core_set((1 << arch_num_cpus()) - 1);
z_arc_connect_gfrc_core_set((1 << CONFIG_MP_NUM_CPUS) - 1);
z_arc_connect_gfrc_clear();
} else {
__ASSERT(0,
@@ -190,4 +170,3 @@ static int arc_smp_init(void)
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif


@@ -13,20 +13,26 @@
* This module contains functions for manipulation of the d-cache.
*/
#include <zephyr/kernel.h>
#include <zephyr/arch/cpu.h>
#include <zephyr/sys/util.h>
#include <zephyr/toolchain.h>
#include <zephyr/cache.h>
#include <zephyr/linker/linker-defs.h>
#include <zephyr/arch/arc/v2/aux_regs.h>
#include <kernel.h>
#include <arch/cpu.h>
#include <sys/util.h>
#include <toolchain.h>
#include <cache.h>
#include <linker/linker-defs.h>
#include <arch/arc/v2/aux_regs.h>
#include <kernel_internal.h>
#include <zephyr/sys/__assert.h>
#include <zephyr/init.h>
#include <sys/__assert.h>
#include <init.h>
#include <stdbool.h>
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
size_t sys_cache_line_size;
#if (CONFIG_CACHE_LINE_SIZE == 0) && !defined(CONFIG_CACHE_LINE_SIZE_DETECT)
#error Cannot use this implementation with a cache line size of 0
#endif
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
#define DCACHE_LINE_SIZE sys_cache_line_size
#else
#define DCACHE_LINE_SIZE CONFIG_CACHE_LINE_SIZE
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
@@ -40,6 +46,7 @@ size_t sys_cache_line_size;
#define DC_CTRL_INDIRECT_ACCESS 0x20 /* indirect access mode */
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
static bool dcache_available(void)
{
unsigned long val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
@@ -48,45 +55,54 @@ static bool dcache_available(void)
return (val == 0) ? false : true;
}
static void dcache_dc_ctrl(uint32_t dcache_en_mask)
static void dcache_dc_ctrl(u32_t dcache_en_mask)
{
if (dcache_available()) {
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, dcache_en_mask);
}
}
void arch_dcache_enable(void)
static void dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
}
void arch_dcache_disable(void)
{
/* nothing */
}
int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
/**
*
* @brief Flush multiple d-cache lines to memory
*
* No alignment is required for either <start_addr> or <size>, but since
* dcache_flush_mlines() iterates on the d-cache lines, a cache line
* alignment for both is optimal.
*
* The d-cache line size is specified either via the CONFIG_CACHE_LINE_SIZE
* kconfig option or it is detected at runtime.
*
* @param start_addr the pointer to start the multi-line flush
* @param size the number of bytes that are to be flushed
*
* @return N/A
*/
static void dcache_flush_mlines(u32_t start_addr, u32_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
u32_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
if (!dcache_available() || (size == 0U)) {
return;
}
end_addr = start_addr + size;
end_addr = start_addr + size - 1;
start_addr &= (u32_t)(~(DCACHE_LINE_SIZE - 1));
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* --enter critical section-- */
key = irq_lock(); /* --enter critical section-- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
/* wait for flush completion */
do {
if ((z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) &
@@ -94,65 +110,42 @@ int arch_dcache_flush_range(void *start_addr_ptr, size_t size)
break;
}
} while (1);
start_addr += line_size;
} while (start_addr < end_addr);
start_addr += DCACHE_LINE_SIZE;
} while (start_addr <= end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
irq_unlock(key); /* --exit critical section-- */
return 0;
}
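A typical caller flushes a buffer before handing it to a bus master. A hedged usage sketch against the newer arch_dcache_flush_range() signature shown in this diff (buffer name and size are illustrative):

/* Hedged sketch: make sure a DMA engine sees the latest buffer contents. */
static uint8_t dma_buf[256];                 /* hypothetical DMA buffer */

if (arch_dcache_flush_range(dma_buf, sizeof(dma_buf)) != 0) {
        /* d-cache not available, or line size unknown on this target */
}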
int arch_dcache_invd_range(void *start_addr_ptr, size_t size)
/**
*
* @brief Flush d-cache lines to main memory
*
* No alignment is required for either <start_addr> or <size>, but since
* sys_cache_flush() iterates on the d-cache lines, a d-cache line alignment for
* both is optimal.
*
* The d-cache line size is specified either via the CONFIG_CACHE_LINE_SIZE
* kconfig option or it is detected at runtime.
*
* @param start_addr the pointer to start the multi-line flush
* @param size the number of bytes that are to be flushed
*
* @return N/A
*/
void sys_cache_flush(vaddr_t start_addr, size_t size)
{
size_t line_size = sys_cache_data_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return -ENOTSUP;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
return 0;
dcache_flush_mlines((u32_t)start_addr, (u32_t)size);
}
int arch_dcache_flush_and_invd_range(void *start_addr_ptr, size_t size)
{
return -ENOTSUP;
}
int arch_dcache_flush_all(void)
{
return -ENOTSUP;
}
int arch_dcache_invd_all(void)
{
return -ENOTSUP;
}
int arch_dcache_flush_and_invd_all(void)
{
return -ENOTSUP;
}
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
size_t sys_cache_line_size;
static void init_dcache_line_size(void)
{
uint32_t val;
u32_t val;
val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
__ASSERT((val&0xff) != 0U, "d-cache is not present");
@@ -160,68 +153,15 @@ static void init_dcache_line_size(void)
val *= 16U;
sys_cache_line_size = (size_t) val;
}
size_t arch_dcache_line_size_get(void)
{
return sys_cache_line_size;
}
#endif
void arch_icache_enable(void)
static int init_dcache(struct device *unused)
{
/* nothing */
}
ARG_UNUSED(unused);
void arch_icache_disable(void)
{
/* nothing */
}
dcache_enable();
int arch_icache_flush_all(void)
{
return -ENOTSUP;
}
int arch_icache_invd_all(void)
{
return -ENOTSUP;
}
int arch_icache_flush_and_invd_all(void)
{
return -ENOTSUP;
}
int arch_icache_flush_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
int arch_icache_invd_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
int arch_icache_flush_and_invd_range(void *addr, size_t size)
{
ARG_UNUSED(addr);
ARG_UNUSED(size);
return -ENOTSUP;
}
static int init_dcache(void)
{
arch_dcache_enable();
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
init_dcache_line_size();
#endif


@@ -11,19 +11,18 @@
* CPU power management routines.
*/
#include <zephyr/kernel_structs.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <zephyr/arch/cpu.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
GTEXT(arch_cpu_idle)
GTEXT(arch_cpu_atomic_idle)
GDATA(z_arc_cpu_sleep_mode)
GTEXT(k_cpu_idle)
GTEXT(k_cpu_atomic_idle)
GDATA(k_cpu_sleep_mode)
SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
.align 4
SECTION_VAR(BSS, k_cpu_sleep_mode)
.balign 4
.word 0
/*
@@ -34,16 +33,15 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
* void nanCpuIdle(void)
*/
SECTION_FUNC(TEXT, arch_cpu_idle)
SECTION_FUNC(TEXT, k_cpu_idle)
#ifdef CONFIG_TRACING
PUSHR blink
jl sys_trace_idle
POPR blink
push_s blink
jl z_sys_trace_idle
pop_s blink
#endif
/* z_arc_cpu_sleep_mode is 32 bit regardless of platform bitness */
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
sleep r1
j_s [blink]
@@ -54,18 +52,17 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
*
* This function exits with interrupts restored to <key>.
*
* void arch_cpu_atomic_idle(unsigned int key)
* void k_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
SECTION_FUNC(TEXT, k_cpu_atomic_idle)
#ifdef CONFIG_TRACING
PUSHR blink
jl sys_trace_idle
POPR blink
push_s blink
jl z_sys_trace_idle
pop_s blink
#endif
/* z_arc_cpu_sleep_mode is 32 bit regardless of platform bitness */
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
sleep r1
j_s.d [blink]


@@ -1,75 +0,0 @@
# Digital Signal Processing (DSP) configuration options
# Copyright (c) 2022 Synopsys
# SPDX-License-Identifier: Apache-2.0
config ARC_HAS_DSP
bool
help
This option is enabled when the ARC CPU has hardware DSP unit.
menu "ARC DSP Options"
depends on ARC_HAS_DSP
config ARC_DSP
bool "digital signal processing (DSP)"
help
This option enables DSP and DSP instructions.
config ARC_DSP_TURNED_OFF
bool "Turn off DSP if it presents"
depends on !ARC_DSP
help
This option disables DSP block via resetting DSP_CRTL register.
config ARC_DSP_SHARING
bool "DSP register sharing"
depends on ARC_DSP && MULTITHREADING
select ARC_HAS_ACCL_REGS
help
This option enables preservation of the hardware DSP registers
across context switches to allow multiple threads to perform concurrent
DSP operations.
config ARC_DSP_BFLY_SHARING
bool "ARC complex DSP operation"
depends on ARC_DSP && CPU_ARCEM
help
This option is to enable Zephyr to store and restore DSP_BFLY0
and FFT_CTRL registers during context switch. This option is
only required when butterfly instructions are used in
multi-thread.
config ARC_XY_ENABLE
bool "ARC address generation unit registers"
help
Processors with XY memory and AGU registers can configure this
option to accelerate DSP instructions.
config ARC_AGU_SHARING
bool "ARC address generation unit register sharing"
depends on ARC_XY_ENABLE && MULTITHREADING
default y if ARC_DSP_SHARING
help
This option enables preservation of the hardware AGU registers
across context switches to allow multiple threads to perform concurrent
operations on XY memory. Save and restore small size AGU registers is
set as default, including 4 address pointers regs, 2 address offset regs
and 4 modifiers regs.
config ARC_AGU_MEDIUM
bool "ARC AGU medium size register"
depends on ARC_AGU_SHARING
help
Save and restore medium AGU registers, including 8 address pointers regs,
4 address offset regs and 12 modifiers regs.
config ARC_AGU_LARGE
bool "ARC AGU large size register"
depends on ARC_AGU_SHARING
select ARC_AGU_MEDIUM
help
Save and restore large AGU registers, including 12 address pointers regs,
8 address offset regs and 24 modifiers regs.
endmenu


@@ -1,70 +0,0 @@
/*
* Copyright (c) 2022 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARCv2 DSP and AGU structure member offset definition file
*
*/
#ifdef CONFIG_ARC_DSP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_ctrl);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_glo);
GEN_OFFSET_SYM(_callee_saved_stack_t, acc0_ghi);
#ifdef CONFIG_ARC_DSP_BFLY_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_bfly0);
GEN_OFFSET_SYM(_callee_saved_stack_t, dsp_fft_ctrl);
#endif
#endif
#ifdef CONFIG_ARC_AGU_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap3);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod0);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod1);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod3);
#ifdef CONFIG_ARC_AGU_MEDIUM
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os2);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os3);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod8);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod9);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod10);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod11);
#endif
#ifdef CONFIG_ARC_AGU_LARGE
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap8);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap9);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap10);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_ap11);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os4);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os5);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os6);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_os7);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod12);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod13);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod14);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod15);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod16);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod17);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod18);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod19);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod20);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod21);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod22);
GEN_OFFSET_SYM(_callee_saved_stack_t, agu_mod23);
#endif
#endif


@@ -1,269 +0,0 @@
/*
* Copyright (c) 2022 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief save and load macro for ARCv2 DSP and AGU regs
*
*/
.macro _save_dsp_regs
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_save
lr r13, [_ARC_V2_DSP_CTRL]
st_s r13, [sp, ___callee_saved_stack_t_dsp_ctrl_OFFSET]
lr r13, [_ARC_V2_ACC0_GLO]
st_s r13, [sp, ___callee_saved_stack_t_acc0_glo_OFFSET]
lr r13, [_ARC_V2_ACC0_GHI]
st_s r13, [sp, ___callee_saved_stack_t_acc0_ghi_OFFSET]
#ifdef CONFIG_ARC_DSP_BFLY_SHARING
lr r13, [_ARC_V2_DSP_BFLY0]
st_s r13, [sp, ___callee_saved_stack_t_dsp_bfly0_OFFSET]
lr r13, [_ARC_V2_DSP_FFT_CTRL]
st_s r13, [sp, ___callee_saved_stack_t_dsp_fft_ctrl_OFFSET]
#endif
#endif
dsp_skip_save :
#ifdef CONFIG_ARC_AGU_SHARING
_save_agu_regs
#endif
.endm
.macro _save_agu_regs
#ifdef CONFIG_ARC_AGU_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
btst r13, K_AGU_IDX
jeq agu_skip_save
lr r13, [_ARC_V2_AGU_AP0]
st r13, [sp, ___callee_saved_stack_t_agu_ap0_OFFSET]
lr r13, [_ARC_V2_AGU_AP1]
st r13, [sp, ___callee_saved_stack_t_agu_ap1_OFFSET]
lr r13, [_ARC_V2_AGU_AP2]
st r13, [sp, ___callee_saved_stack_t_agu_ap2_OFFSET]
lr r13, [_ARC_V2_AGU_AP3]
st r13, [sp, ___callee_saved_stack_t_agu_ap3_OFFSET]
lr r13, [_ARC_V2_AGU_OS0]
st r13, [sp, ___callee_saved_stack_t_agu_os0_OFFSET]
lr r13, [_ARC_V2_AGU_OS1]
st r13, [sp, ___callee_saved_stack_t_agu_os1_OFFSET]
lr r13, [_ARC_V2_AGU_MOD0]
st r13, [sp, ___callee_saved_stack_t_agu_mod0_OFFSET]
lr r13, [_ARC_V2_AGU_MOD1]
st r13, [sp, ___callee_saved_stack_t_agu_mod1_OFFSET]
lr r13, [_ARC_V2_AGU_MOD2]
st r13, [sp, ___callee_saved_stack_t_agu_mod2_OFFSET]
lr r13, [_ARC_V2_AGU_MOD3]
st r13, [sp, ___callee_saved_stack_t_agu_mod3_OFFSET]
#ifdef CONFIG_ARC_AGU_MEDIUM
lr r13, [_ARC_V2_AGU_AP4]
st r13, [sp, ___callee_saved_stack_t_agu_ap4_OFFSET]
lr r13, [_ARC_V2_AGU_AP5]
st r13, [sp, ___callee_saved_stack_t_agu_ap5_OFFSET]
lr r13, [_ARC_V2_AGU_AP6]
st r13, [sp, ___callee_saved_stack_t_agu_ap6_OFFSET]
lr r13, [_ARC_V2_AGU_AP7]
st r13, [sp, ___callee_saved_stack_t_agu_ap7_OFFSET]
lr r13, [_ARC_V2_AGU_OS2]
st r13, [sp, ___callee_saved_stack_t_agu_os2_OFFSET]
lr r13, [_ARC_V2_AGU_OS3]
st r13, [sp, ___callee_saved_stack_t_agu_os3_OFFSET]
lr r13, [_ARC_V2_AGU_MOD4]
st r13, [sp, ___callee_saved_stack_t_agu_mod4_OFFSET]
lr r13, [_ARC_V2_AGU_MOD5]
st r13, [sp, ___callee_saved_stack_t_agu_mod5_OFFSET]
lr r13, [_ARC_V2_AGU_MOD6]
st r13, [sp, ___callee_saved_stack_t_agu_mod6_OFFSET]
lr r13, [_ARC_V2_AGU_MOD7]
st r13, [sp, ___callee_saved_stack_t_agu_mod7_OFFSET]
lr r13, [_ARC_V2_AGU_MOD8]
st r13, [sp, ___callee_saved_stack_t_agu_mod8_OFFSET]
lr r13, [_ARC_V2_AGU_MOD9]
st r13, [sp, ___callee_saved_stack_t_agu_mod9_OFFSET]
lr r13, [_ARC_V2_AGU_MOD10]
st r13, [sp, ___callee_saved_stack_t_agu_mod10_OFFSET]
lr r13, [_ARC_V2_AGU_MOD11]
st r13, [sp, ___callee_saved_stack_t_agu_mod11_OFFSET]
#endif
#ifdef CONFIG_ARC_AGU_LARGE
lr r13, [_ARC_V2_AGU_AP8]
st r13, [sp, ___callee_saved_stack_t_agu_ap8_OFFSET]
lr r13, [_ARC_V2_AGU_AP9]
st r13, [sp, ___callee_saved_stack_t_agu_ap9_OFFSET]
lr r13, [_ARC_V2_AGU_AP10]
st r13, [sp, ___callee_saved_stack_t_agu_ap10_OFFSET]
lr r13, [_ARC_V2_AGU_AP11]
st r13, [sp, ___callee_saved_stack_t_agu_ap11_OFFSET]
lr r13, [_ARC_V2_AGU_OS4]
st r13, [sp, ___callee_saved_stack_t_agu_os4_OFFSET]
lr r13, [_ARC_V2_AGU_OS5]
st r13, [sp, ___callee_saved_stack_t_agu_os5_OFFSET]
lr r13, [_ARC_V2_AGU_OS6]
st r13, [sp, ___callee_saved_stack_t_agu_os6_OFFSET]
lr r13, [_ARC_V2_AGU_OS7]
st r13, [sp, ___callee_saved_stack_t_agu_os7_OFFSET]
lr r13, [_ARC_V2_AGU_MOD12]
st r13, [sp, ___callee_saved_stack_t_agu_mod12_OFFSET]
lr r13, [_ARC_V2_AGU_MOD13]
st r13, [sp, ___callee_saved_stack_t_agu_mod13_OFFSET]
lr r13, [_ARC_V2_AGU_MOD14]
st r13, [sp, ___callee_saved_stack_t_agu_mod14_OFFSET]
lr r13, [_ARC_V2_AGU_MOD15]
st r13, [sp, ___callee_saved_stack_t_agu_mod15_OFFSET]
lr r13, [_ARC_V2_AGU_MOD16]
st r13, [sp, ___callee_saved_stack_t_agu_mod16_OFFSET]
lr r13, [_ARC_V2_AGU_MOD17]
st r13, [sp, ___callee_saved_stack_t_agu_mod17_OFFSET]
lr r13, [_ARC_V2_AGU_MOD18]
st r13, [sp, ___callee_saved_stack_t_agu_mod18_OFFSET]
lr r13, [_ARC_V2_AGU_MOD19]
st r13, [sp, ___callee_saved_stack_t_agu_mod19_OFFSET]
lr r13, [_ARC_V2_AGU_MOD20]
st r13, [sp, ___callee_saved_stack_t_agu_mod20_OFFSET]
lr r13, [_ARC_V2_AGU_MOD21]
_st32_huge_offset r13, sp, ___callee_saved_stack_t_agu_mod21_OFFSET, r1
lr r13, [_ARC_V2_AGU_MOD22]
_st32_huge_offset r13, sp, ___callee_saved_stack_t_agu_mod22_OFFSET, r1
lr r13, [_ARC_V2_AGU_MOD23]
_st32_huge_offset r13, sp, ___callee_saved_stack_t_agu_mod23_OFFSET, r1
#endif
#endif
agu_skip_save :
.endm
.macro _load_dsp_regs
#ifdef CONFIG_ARC_DSP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
bbit0 r13, K_DSP_IDX, dsp_skip_load
ld_s r13, [sp, ___callee_saved_stack_t_dsp_ctrl_OFFSET]
sr r13, [_ARC_V2_DSP_CTRL]
ld_s r13, [sp, ___callee_saved_stack_t_acc0_glo_OFFSET]
sr r13, [_ARC_V2_ACC0_GLO]
ld_s r13, [sp, ___callee_saved_stack_t_acc0_ghi_OFFSET]
sr r13, [_ARC_V2_ACC0_GHI]
#ifdef CONFIG_ARC_DSP_BFLY_SHARING
ld_s r13, [sp, ___callee_saved_stack_t_dsp_bfly0_OFFSET]
sr r13, [_ARC_V2_DSP_BFLY0]
ld_s r13, [sp, ___callee_saved_stack_t_dsp_fft_ctrl_OFFSET]
sr r13, [_ARC_V2_DSP_FFT_CTRL]
#endif
#endif
dsp_skip_load :
#ifdef CONFIG_ARC_AGU_SHARING
_load_agu_regs
#endif
.endm
.macro _load_agu_regs
#ifdef CONFIG_ARC_AGU_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
btst r13, K_AGU_IDX
jeq agu_skip_load
ld r13, [sp, ___callee_saved_stack_t_agu_ap0_OFFSET]
sr r13, [_ARC_V2_AGU_AP0]
ld r13, [sp, ___callee_saved_stack_t_agu_ap1_OFFSET]
sr r13, [_ARC_V2_AGU_AP1]
ld r13, [sp, ___callee_saved_stack_t_agu_ap2_OFFSET]
sr r13, [_ARC_V2_AGU_AP2]
ld r13, [sp, ___callee_saved_stack_t_agu_ap3_OFFSET]
sr r13, [_ARC_V2_AGU_AP3]
ld r13, [sp, ___callee_saved_stack_t_agu_os0_OFFSET]
sr r13, [_ARC_V2_AGU_OS0]
ld r13, [sp, ___callee_saved_stack_t_agu_os1_OFFSET]
sr r13, [_ARC_V2_AGU_OS1]
ld r13, [sp, ___callee_saved_stack_t_agu_mod0_OFFSET]
sr r13, [_ARC_V2_AGU_MOD0]
ld r13, [sp, ___callee_saved_stack_t_agu_mod1_OFFSET]
sr r13, [_ARC_V2_AGU_MOD1]
ld r13, [sp, ___callee_saved_stack_t_agu_mod2_OFFSET]
sr r13, [_ARC_V2_AGU_MOD2]
ld r13, [sp, ___callee_saved_stack_t_agu_mod3_OFFSET]
sr r13, [_ARC_V2_AGU_MOD3]
#ifdef CONFIG_ARC_AGU_MEDIUM
ld r13, [sp, ___callee_saved_stack_t_agu_ap4_OFFSET]
sr r13, [_ARC_V2_AGU_AP4]
ld r13, [sp, ___callee_saved_stack_t_agu_ap5_OFFSET]
sr r13, [_ARC_V2_AGU_AP5]
ld r13, [sp, ___callee_saved_stack_t_agu_ap6_OFFSET]
sr r13, [_ARC_V2_AGU_AP6]
ld r13, [sp, ___callee_saved_stack_t_agu_ap7_OFFSET]
sr r13, [_ARC_V2_AGU_AP7]
ld r13, [sp, ___callee_saved_stack_t_agu_os2_OFFSET]
sr r13, [_ARC_V2_AGU_OS2]
ld r13, [sp, ___callee_saved_stack_t_agu_os3_OFFSET]
sr r13, [_ARC_V2_AGU_OS3]
ld r13, [sp, ___callee_saved_stack_t_agu_mod4_OFFSET]
sr r13, [_ARC_V2_AGU_MOD4]
ld r13, [sp, ___callee_saved_stack_t_agu_mod5_OFFSET]
sr r13, [_ARC_V2_AGU_MOD5]
ld r13, [sp, ___callee_saved_stack_t_agu_mod6_OFFSET]
sr r13, [_ARC_V2_AGU_MOD6]
ld r13, [sp, ___callee_saved_stack_t_agu_mod7_OFFSET]
sr r13, [_ARC_V2_AGU_MOD7]
ld r13, [sp, ___callee_saved_stack_t_agu_mod8_OFFSET]
sr r13, [_ARC_V2_AGU_MOD8]
ld r13, [sp, ___callee_saved_stack_t_agu_mod9_OFFSET]
sr r13, [_ARC_V2_AGU_MOD9]
ld r13, [sp, ___callee_saved_stack_t_agu_mod10_OFFSET]
sr r13, [_ARC_V2_AGU_MOD10]
ld r13, [sp, ___callee_saved_stack_t_agu_mod11_OFFSET]
sr r13, [_ARC_V2_AGU_MOD11]
#endif
#ifdef CONFIG_ARC_AGU_LARGE
ld r13, [sp, ___callee_saved_stack_t_agu_ap8_OFFSET]
sr r13, [_ARC_V2_AGU_AP8]
ld r13, [sp, ___callee_saved_stack_t_agu_ap9_OFFSET]
sr r13, [_ARC_V2_AGU_AP9]
ld r13, [sp, ___callee_saved_stack_t_agu_ap10_OFFSET]
sr r13, [_ARC_V2_AGU_AP10]
ld r13, [sp, ___callee_saved_stack_t_agu_ap11_OFFSET]
sr r13, [_ARC_V2_AGU_AP11]
ld r13, [sp, ___callee_saved_stack_t_agu_os4_OFFSET]
sr r13, [_ARC_V2_AGU_OS4]
ld r13, [sp, ___callee_saved_stack_t_agu_os5_OFFSET]
sr r13, [_ARC_V2_AGU_OS5]
ld r13, [sp, ___callee_saved_stack_t_agu_os6_OFFSET]
sr r13, [_ARC_V2_AGU_OS6]
ld r13, [sp, ___callee_saved_stack_t_agu_os7_OFFSET]
sr r13, [_ARC_V2_AGU_OS7]
ld r13, [sp, ___callee_saved_stack_t_agu_mod12_OFFSET]
sr r13, [_ARC_V2_AGU_MOD12]
ld r13, [sp, ___callee_saved_stack_t_agu_mod13_OFFSET]
sr r13, [_ARC_V2_AGU_MOD13]
ld r13, [sp, ___callee_saved_stack_t_agu_mod14_OFFSET]
sr r13, [_ARC_V2_AGU_MOD14]
ld r13, [sp, ___callee_saved_stack_t_agu_mod15_OFFSET]
sr r13, [_ARC_V2_AGU_MOD15]
ld r13, [sp, ___callee_saved_stack_t_agu_mod16_OFFSET]
sr r13, [_ARC_V2_AGU_MOD16]
ld r13, [sp, ___callee_saved_stack_t_agu_mod17_OFFSET]
sr r13, [_ARC_V2_AGU_MOD17]
ld r13, [sp, ___callee_saved_stack_t_agu_mod18_OFFSET]
sr r13, [_ARC_V2_AGU_MOD18]
ld r13, [sp, ___callee_saved_stack_t_agu_mod19_OFFSET]
sr r13, [_ARC_V2_AGU_MOD19]
ld r13, [sp, ___callee_saved_stack_t_agu_mod20_OFFSET]
sr r13, [_ARC_V2_AGU_MOD20]
ld r13, [sp, ___callee_saved_stack_t_agu_mod21_OFFSET]
sr r13, [_ARC_V2_AGU_MOD21]
ld r13, [sp, ___callee_saved_stack_t_agu_mod22_OFFSET]
sr r13, [_ARC_V2_AGU_MOD22]
ld r13, [sp, ___callee_saved_stack_t_agu_mod23_OFFSET]
sr r13, [_ARC_V2_AGU_MOD23]
#endif
#endif
agu_skip_load :
.endm
.macro _dsp_extension_probe
#ifdef CONFIG_ARC_DSP_TURNED_OFF
mov r0, 0 /* DSP_CTRL_DISABLED_ALL */
sr r0, [_ARC_V2_DSP_CTRL]
#endif
.endm


@@ -13,17 +13,17 @@
* See isr_wrapper.S for details.
*/
#include <zephyr/kernel_structs.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <zephyr/arch/cpu.h>
#include <toolchain.h>
#include <arch/cpu.h>
#include <swap_macros.h>
GTEXT(_firq_enter)
GTEXT(_firq_exit)
/**
*
* @brief Work to be done before handing control to a FIRQ ISR
*
* The processor switches to a second register bank so registers from the
@@ -40,6 +40,8 @@ GTEXT(_firq_exit)
* interrupt. An exception, however, can be taken.
*
* Assumption by _isr_demux: r3 is untouched by _firq_enter.
*
* @return N/A
*/
SECTION_FUNC(TEXT, _firq_enter)
@@ -78,7 +80,7 @@ SECTION_FUNC(TEXT, _firq_enter)
_check_and_inc_int_nest_counter r0, r1
bne.d firq_nest
mov_s r0, sp
mov r0, sp
_get_curr_cpu_irq_stack sp
#if CONFIG_RGF_NUM_BANKS != 1
@@ -99,36 +101,34 @@ firq_nest:
* save original value of _ARC_V2_USER_SP and ilink into
* the stack of interrupted context first, then restore them later
*/
push ilink
PUSHAX ilink, _ARC_V2_USER_SP
st ilink, [sp]
lr ilink, [_ARC_V2_USER_SP]
st ilink, [sp, -4]
/* sp here is the sp of interrupted context */
sr sp, [_ARC_V2_USER_SP]
/* here, bank 0 sp must go back to the value before push and
* PUSHAX as we will switch to bank1, the pop and POPAX later will
* change bank1's sp, not bank0's sp
*/
add sp, sp, 8
/* switch back to banked reg, only ilink can be used */
lr ilink, [_ARC_V2_STATUS32]
or ilink, ilink, _ARC_V2_STATUS32_RB(1)
kflag ilink
lr sp, [_ARC_V2_USER_SP]
POPAX ilink, _ARC_V2_USER_SP
pop ilink
ld ilink, [sp, -4]
sr ilink, [_ARC_V2_USER_SP]
ld ilink, [sp]
firq_nest_1:
#else
firq_nest:
#endif
push_s r0
j _isr_demux
j @_isr_demux
/**
*
* @brief Work to be done exiting a FIRQ
*
* @return N/A
*/
SECTION_FUNC(TEXT, _firq_exit)
@@ -143,23 +143,34 @@ SECTION_FUNC(TEXT, _firq_exit)
_check_nest_int_by_irq_act r0, r1
jne _firq_no_switch
jne _firq_no_reschedule
/* sp is struct k_thread **old of z_arc_switch_in_isr which is a wrapper of
* z_get_next_switch_handle. r0 contains the 1st thread in ready queue. If it isn't NULL,
* then do switch to this thread.
*/
_get_next_switch_handle
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
CMPR r0, 0
bne _firq_switch
#ifdef CONFIG_PREEMPT_ENABLED
/* fall to no switch */
#ifdef CONFIG_SMP
bl z_arch_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
brne r0, 0, _firq_reschedule
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
.align 4
_firq_no_switch:
/* restore interrupted context' sp */
/* Check if the current thread (in r2) is the cached thread */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
brne r0, r2, _firq_reschedule
#endif
/* fall to no rescheduling */
#endif /* CONFIG_PREEMPT_ENABLED */
.balign 4
_firq_no_reschedule:
pop sp
/*
* Keeping this code block close to those that use it allows using a brxx
* instruction instead of a pair of cmp and bxx instructions
@@ -169,55 +180,33 @@ _firq_no_switch:
#endif
rtie
.align 4
_firq_switch:
/* restore interrupted context' sp */
#ifdef CONFIG_PREEMPT_ENABLED
.balign 4
_firq_reschedule:
pop sp
#if CONFIG_RGF_NUM_BANKS != 1
#ifdef CONFIG_SMP
/*
* save r0, r2 in irq stack for a while, as they will be changed by register
* save r0, r1 in irq stack for a while, as they will be changed by register
* bank switch
*/
_get_curr_cpu_irq_stack r1
st r0, [r1, -4]
st r2, [r1, -8]
_get_curr_cpu_irq_stack r2
st r0, [r2, -4]
st r1, [r2, -8]
#endif
/*
* We know there is no interrupted interrupt of lower priority at this
* point, so when switching back to register bank 0, it will contain the
* registers from the interrupted thread.
*/
#if defined(CONFIG_USERSPACE)
/* when USERSPACE is configured, we need to consider the case where the firq
* is taken in user mode; according to the ARCv2 ISA and nSIM, the following
* micro-ops will be executed:
* sp <- reg bank1's sp
* swap sp and _ARC_V2_USER_SP
* then:
* sp is the sp of the kernel stack of the interrupted thread
* _ARC_V2_USER_SP is reg bank1's sp
* the sp of the user stack of the interrupted thread is reg bank0's sp
* if the firq is taken in kernel mode, the following micro-op will be executed:
* sp <- reg bank1's sp
* so software needs to do the necessary handling to set up the correct sp
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bbit0 r0, 31, _firq_from_kernel
aex sp, [_ARC_V2_USER_SP]
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
aex sp, [_ARC_V2_USER_SP]
b _firq_create_irq_stack_frame
_firq_from_kernel:
#endif
/* chose register bank #0 */
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
_firq_create_irq_stack_frame:
/* we're back on the outgoing thread's stack */
_create_irq_stack_frame
@@ -230,35 +219,68 @@ _firq_create_irq_stack_frame:
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET] /* ilink into pc */
#ifdef CONFIG_SMP
/*
* load r0, r2 from irq stack
* load r0, r1 from irq stack
*/
_get_curr_cpu_irq_stack r1
ld r0, [r1, -4]
ld r2, [r1, -8]
_get_curr_cpu_irq_stack r2
ld r0, [r2, -4]
ld r1, [r2, -8]
#endif
/* r2 is old thread */
#endif
#ifdef CONFIG_SMP
mov r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
#endif
_save_callee_saved_regs
st _CAUSE_FIRQ, [r2, _thread_offset_to_relinquish_cause]
_irq_store_old_thread_callee_regs
/* mov new thread (r0) to r2 */
#ifdef CONFIG_SMP
mov r2, r0
_load_new_thread_callee_regs
#else
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
breq r3, _CAUSE_RIRQ, _firq_switch_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _firq_switch_from_firq
nop_s
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* see comments in regular_irq.S
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _firq_return_from_rirq
nop
breq r3, _CAUSE_FIRQ, _firq_return_from_firq
nop
/* fall through */
.align 4
_firq_switch_from_coop:
_set_misc_regs_irq_switch_from_coop
.balign 4
_firq_return_from_coop:
/* pc into ilink */
pop_s r0
mov ilink, r0
@@ -266,20 +288,11 @@ _firq_switch_from_coop:
pop_s r0 /* status32 into r0 */
sr r0, [_ARC_V2_STATUS32_P0]
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#endif
rtie
.align 4
_firq_switch_from_rirq:
_firq_switch_from_firq:
_set_misc_regs_irq_switch_from_irq
.balign 4
_firq_return_from_rirq:
_firq_return_from_firq:
_pop_irq_stack_frame
@@ -287,12 +300,7 @@ _firq_switch_from_firq:
sr ilink, [_ARC_V2_STATUS32_P0]
ld ilink, [sp, -8] /* pc into ilink */
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#endif
/* LP registers are already restored, just switch back to bank 0 */
rtie
#endif /* CONFIG_PREEMPT_ENABLED */


@@ -12,59 +12,24 @@
* ARCv2 CPUs.
*/
#include <zephyr/kernel.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <zephyr/arch/cpu.h>
#include <zephyr/logging/log.h>
#include <kernel_arch_data.h>
#include <zephyr/arch/arc/v2/exc.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
static void dump_arc_esf(const z_arch_esf_t *esf)
{
LOG_ERR(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR " r3: 0x%" PRIxPTR "",
esf->r0, esf->r1, esf->r2, esf->r3);
LOG_ERR(" r4: 0x%" PRIxPTR " r5: 0x%" PRIxPTR " r6: 0x%" PRIxPTR " r7: 0x%" PRIxPTR "",
esf->r4, esf->r5, esf->r6, esf->r7);
LOG_ERR(" r8: 0x%" PRIxPTR " r9: 0x%" PRIxPTR " r10: 0x%" PRIxPTR " r11: 0x%" PRIxPTR "",
esf->r8, esf->r9, esf->r10, esf->r11);
LOG_ERR("r12: 0x%" PRIxPTR " r13: 0x%" PRIxPTR " pc: 0x%" PRIxPTR "",
esf->r12, esf->r13, esf->pc);
LOG_ERR(" blink: 0x%" PRIxPTR " status32: 0x%" PRIxPTR "", esf->blink, esf->status32);
#ifdef CONFIG_ARC_HAS_ZOL
LOG_ERR("lp_end: 0x%" PRIxPTR " lp_start: 0x%" PRIxPTR " lp_count: 0x%" PRIxPTR "",
esf->lp_end, esf->lp_start, esf->lp_count);
#endif /* CONFIG_ARC_HAS_ZOL */
}
#endif
#include <toolchain.h>
#include <arch/cpu.h>
#include <logging/log_ctrl.h>
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
if (esf != NULL) {
dump_arc_esf(esf);
if (reason == K_ERR_CPU_EXCEPTION) {
z_fatal_print("Faulting instruction address = 0x%lx",
z_arc_v2_aux_reg_read(_ARC_V2_ERET));
}
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
z_fatal_error(reason, esf);
}
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
FUNC_NORETURN void z_arch_syscall_oops(void *ssf_ptr)
{
/* TODO: convert ssf_ptr contents into an esf, they are not the same */
ARG_UNUSED(ssf_ptr);
z_arc_fatal_error(K_ERR_KERNEL_OOPS, NULL);
CODE_UNREACHABLE;
}
FUNC_NORETURN void arch_system_halt(unsigned int reason)
{
ARG_UNUSED(reason);
__asm__("brk");
z_arc_fatal_error(K_ERR_KERNEL_OOPS, ssf_ptr);
CODE_UNREACHABLE;
}


@@ -11,97 +11,114 @@
* Common fault handler for ARCv2 processors.
*/
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <inttypes.h>
#include <zephyr/kernel.h>
#include <kernel_internal.h>
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/common/exc_handle.h>
#include <zephyr/logging/log.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#include <kernel.h>
#include <kernel_structs.h>
#include <exc_handle.h>
#include <logging/log_ctrl.h>
#ifdef CONFIG_USERSPACE
Z_EXC_DECLARE(z_arc_user_string_nlen);
Z_EXC_DECLARE(z_arch_user_string_nlen);
static const struct z_exc_handle exceptions[] = {
Z_EXC_HANDLE(z_arc_user_string_nlen)
Z_EXC_HANDLE(z_arch_user_string_nlen)
};
#endif
#if defined(CONFIG_MPU_STACK_GUARD)
#define IS_MPU_GUARD_VIOLATION(guard_start, fault_addr, stack_ptr) \
((fault_addr >= guard_start) && \
(fault_addr < (guard_start + STACK_GUARD_SIZE)) && \
(stack_ptr <= (guard_start + STACK_GUARD_SIZE)))
/**
* @brief Assess occurrence of current thread's stack corruption
*
* This function performs an assessment whether a memory fault (on a given
* memory address) is the result of a stack overflow of the current thread.
* This function performs an assessment whether a memory fault (on a
* given memory address) is the result of stack memory corruption of
* the current thread.
*
* When called, we know at this point that we received an ARC
* protection violation, with any cause code, with the protection access
* error either "MPU" or "Secure MPU". In other words, an MPU fault of
* some kind. Need to determine whether this is a general MPU access
* exception or the specific case of a stack overflow.
* Thread stack corruption for supervisor threads or user threads in
* privilege mode (when User Space is supported) is reported upon an
* attempt to access the stack guard area (if MPU Stack Guard feature
* is supported). Additionally the current thread stack pointer
* must be pointing inside or below the guard area.
*
* Thread stack corruption for user threads in user mode is reported,
* if the current stack pointer is pointing below the start of the current
* thread's stack.
*
* Notes:
* - we assume a fully descending stack,
* - we assume a stacking error has occurred,
* - the function shall be called when handling MPU privilege violation
*
* If stack corruption is detected, the function returns the lowest
* allowed address where the Stack Pointer can safely point to, to
* prevent from errors when un-stacking the corrupted stack frame
* upon exception return.
*
* @param fault_addr memory address on which memory access violation
* has been reported.
* @param sp stack pointer when exception comes out
* @retval True if this appears to be a stack overflow
* @retval False if this does not appear to be a stack overflow
*
* @return The lowest allowed stack frame pointer, if error is a
* thread stack corruption, otherwise return 0.
*/
static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
static u32_t z_check_thread_stack_fail(const u32_t fault_addr, u32_t sp)
{
uint32_t guard_end, guard_start;
#if defined(CONFIG_MULTITHREADING)
const struct k_thread *thread = _current;
if (!thread) {
/* TODO: Under what circumstances could we get here? */
return false;
return 0;
}
#ifdef CONFIG_USERSPACE
if ((thread->base.user_options & K_USER) != 0) {
if ((z_arc_v2_aux_reg_read(_ARC_V2_ERSTATUS) &
_ARC_V2_STATUS32_U) != 0) {
/* Normal user mode context. There is no specific
* "guard" installed in this case, instead what's
* happening is that the stack pointer is crashing
* into the privilege mode stack buffer which
* immediately precededs it.
*/
guard_end = thread->stack_info.start;
guard_start = (uint32_t)thread->stack_obj;
#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
/* User thread */
if (z_arc_v2_aux_reg_read(_ARC_V2_ERSTATUS)
& _ARC_V2_STATUS32_U) {
/* Thread's user stack corruption */
#ifdef CONFIG_ARC_HAS_SECURE
sp = z_arc_v2_aux_reg_read(_ARC_V2_SEC_U_SP);
#else
sp = z_arc_v2_aux_reg_read(_ARC_V2_USER_SP);
#endif
if (sp <= (u32_t)thread->stack_obj) {
return (u32_t)thread->stack_obj;
}
} else {
/* Special case: handling a syscall on privilege stack.
* There is guard memory reserved immediately before
* it.
*/
guard_end = thread->arch.priv_stack_start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
/* User thread in privilege mode */
if (IS_MPU_GUARD_VIOLATION(
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
fault_addr, sp)) {
/* Thread's privilege stack corruption */
return thread->arch.priv_stack_start;
}
}
} else
#endif /* CONFIG_USERSPACE */
{
} else {
/* Supervisor thread */
guard_end = thread->stack_info.start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
if (IS_MPU_GUARD_VIOLATION((u32_t)thread->stack_obj,
fault_addr, sp)) {
/* Supervisor thread stack corruption */
return (u32_t)thread->stack_obj + STACK_GUARD_SIZE;
}
}
#endif /* CONFIG_MULTITHREADING */
/* Treat any MPU exception within the guard region as a stack
* overflow, as some instructions
* (like enter_s {r13-r26, fp, blink}) push a collection of
* registers onto the stack. In this situation, fault_addr
* will be less than guard_end, but sp will be greater than guard_end.
*/
if (fault_addr < guard_end && fault_addr >= guard_start) {
return true;
#else /* CONFIG_USERSPACE */
if (IS_MPU_GUARD_VIOLATION(thread->stack_info.start,
fault_addr, sp)) {
/* Thread stack corruption */
return thread->stack_info.start + STACK_GUARD_SIZE;
}
#endif /* CONFIG_USERSPACE */
return false;
return 0;
}
#endif
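The guard test encoded by IS_MPU_GUARD_VIOLATION() above can be illustrated with concrete numbers; all addresses here are made up:

/* Hedged illustration: a fault inside the guard area, with the stack
 * pointer at or below the guard end, is classified as a stack overflow. */
uint32_t guard_start = 0x20010000;
uint32_t fault_addr  = 0x20010010;           /* inside the guard region */
uint32_t sp          = 0x20010020;           /* at or below the guard end */
bool overflow = (fault_addr >= guard_start) &&
                (fault_addr < (guard_start + STACK_GUARD_SIZE)) &&
                (sp <= (guard_start + STACK_GUARD_SIZE));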
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
@@ -111,7 +128,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
These codes and parameters do not have associated names in
* the technical manual, just switch on the values in Table 6-5
*/
static const char *get_protv_access_err(uint32_t parameter)
static const char *get_protv_access_err(u32_t parameter)
{
switch (parameter) {
case 0x1:
@@ -133,148 +150,148 @@ static const char *get_protv_access_err(uint32_t parameter)
}
}
static void dump_protv_exception(uint32_t cause, uint32_t parameter)
static void dump_protv_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
LOG_ERR("Memory read protection violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
LOG_ERR("Memory write protection violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
LOG_ERR("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
LOG_ERR("Normal vector table in secure memory");
z_fatal_print("Normal vector table in secure memory");
break;
case 0x11:
LOG_ERR("NS handler code located in S memory");
z_fatal_print("NS handler code located in S memory");
break;
case 0x12:
LOG_ERR("NSC Table Range Violation");
z_fatal_print("NSC Table Range Violation");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
static void dump_machine_check_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("double fault");
z_fatal_print("double fault");
break;
case 0x1:
LOG_ERR("overlapping TLB entries");
z_fatal_print("overlapping TLB entries");
break;
case 0x2:
LOG_ERR("fatal TLB error");
z_fatal_print("fatal TLB error");
break;
case 0x3:
LOG_ERR("fatal cache error");
z_fatal_print("fatal cache error");
break;
case 0x4:
LOG_ERR("internal memory error on instruction fetch");
z_fatal_print("internal memory error on instruction fetch");
break;
case 0x5:
LOG_ERR("internal memory error on data fetch");
z_fatal_print("internal memory error on data fetch");
break;
case 0x6:
LOG_ERR("illegal overlapping MPU entries");
z_fatal_print("illegal overlapping MPU entries");
if (parameter == 0x1) {
LOG_ERR(" - jump and branch target");
z_fatal_print(" - jump and branch target");
}
break;
case 0x10:
LOG_ERR("secure vector table not located in secure memory");
z_fatal_print("secure vector table not located in secure memory");
break;
case 0x11:
LOG_ERR("NSC jump table not located in secure memory");
z_fatal_print("NSC jump table not located in secure memory");
break;
case 0x12:
LOG_ERR("secure handler code not located in secure memory");
z_fatal_print("secure handler code not located in secure memory");
break;
case 0x13:
LOG_ERR("NSC target address not located in secure memory");
z_fatal_print("NSC target address not located in secure memory");
break;
case 0x80:
LOG_ERR("uncorrectable ECC or parity error in vector memory");
z_fatal_print("uncorrectable ECC or parity error in vector memory");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
static void dump_privilege_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Privilege violation");
z_fatal_print("Privilege violation");
break;
case 0x1:
LOG_ERR("disabled extension");
z_fatal_print("disabled extension");
break;
case 0x2:
LOG_ERR("action point hit");
z_fatal_print("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
LOG_ERR("N to S return using incorrect return mechanism");
z_fatal_print("N to S return using incorrect return mechanism");
break;
case 0x2:
LOG_ERR("N to S return with incorrect operating mode");
z_fatal_print("N to S return with incorrect operating mode");
break;
case 0x3:
LOG_ERR("IRQ/exception return fetch from wrong mode");
z_fatal_print("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
LOG_ERR("attempt to halt secure processor in NS mode");
z_fatal_print("attempt to halt secure processor in NS mode");
break;
case 0x20:
LOG_ERR("attempt to access secure resource from normal mode");
z_fatal_print("attempt to access secure resource from normal mode");
break;
case 0x40:
LOG_ERR("SID violation on resource access (APEX/UAUX/key NVM)");
z_fatal_print("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
LOG_ERR("attempt to access secure APEX feature from NS mode");
z_fatal_print("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
LOG_ERR("SID violation on access to APEX feature");
z_fatal_print("SID violation on access to APEX feature");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parameter)
static void dump_exception_info(u32_t vector, u32_t cause, u32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
LOG_ERR("interrupt %u", vector);
z_fatal_print("interrupt %u", vector);
return;
}
@@ -283,55 +300,55 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
*/
switch (vector) {
case ARC_EV_RESET:
LOG_ERR("Reset");
z_fatal_print("Reset");
break;
case ARC_EV_MEM_ERROR:
LOG_ERR("Memory Error");
z_fatal_print("Memory Error");
break;
case ARC_EV_INS_ERROR:
LOG_ERR("Instruction Error");
z_fatal_print("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
LOG_ERR("EV_MachineCheck");
z_fatal_print("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
LOG_ERR("EV_TLBMissI");
z_fatal_print("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
LOG_ERR("EV_TLBMissD");
z_fatal_print("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
LOG_ERR("EV_ProtV");
z_fatal_print("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
LOG_ERR("EV_PrivilegeV");
z_fatal_print("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
LOG_ERR("EV_SWI");
z_fatal_print("EV_SWI");
break;
case ARC_EV_TRAP:
LOG_ERR("EV_Trap");
z_fatal_print("EV_Trap");
break;
case ARC_EV_EXTENSION:
LOG_ERR("EV_Extension");
z_fatal_print("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
LOG_ERR("EV_DivZero");
z_fatal_print("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
LOG_ERR("EV_DCError");
z_fatal_print("EV_DCError");
break;
case ARC_EV_MISALIGNED:
LOG_ERR("EV_Misaligned");
z_fatal_print("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
LOG_ERR("EV_VecUnit");
z_fatal_print("EV_VecUnit");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
@@ -345,19 +362,19 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
void _Fault(z_arch_esf_t *esf, u32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
uint32_t ecr = z_arc_v2_aux_reg_read(_ARC_V2_ECR);
u32_t vector, cause, parameter;
u32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
u32_t ecr = z_arc_v2_aux_reg_read(_ARC_V2_ECR);
#ifdef CONFIG_USERSPACE
for (int i = 0; i < ARRAY_SIZE(exceptions); i++) {
uint32_t start = (uint32_t)exceptions[i].start;
uint32_t end = (uint32_t)exceptions[i].end;
u32_t start = (u32_t)exceptions[i].start;
u32_t end = (u32_t)exceptions[i].end;
if (esf->pc >= start && esf->pc < end) {
esf->pc = (uint32_t)(exceptions[i].fixup);
esf->pc = (u32_t)(exceptions[i].fixup);
return;
}
}
@@ -384,9 +401,9 @@ void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
return;
}
LOG_ERR("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
LOG_ERR("Address 0x%x", exc_addr);
z_fatal_print("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
z_fatal_print("Address 0x%x", exc_addr);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
dump_exception_info(vector, cause, parameter);
#endif
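The doc comment above notes that _Fault() ultimately defers policy to the user-provided k_sys_fatal_error_handler(). A minimal, hedged sketch of an application-side override follows; the header paths assume the newer zephyr/ include layout seen in this diff, and the only esf field used (esf->pc) is the same one the fixup loop above relies on.

#include <zephyr/kernel.h>
#include <zephyr/fatal.h>
#include <zephyr/sys/printk.h>

/* Overrides the kernel's weak default. "reason" is one of the K_ERR_*
 * codes (e.g. K_ERR_SPURIOUS_IRQ, used elsewhere in this diff).
 */
void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
{
	if (esf != NULL) {
		printk("fatal error %u at pc 0x%x\n", reason, esf->pc);
	} else {
		printk("fatal error %u (no register context)\n", reason);
	}

	/* Example policy: halt on everything except spurious IRQs. */
	if (reason != K_ERR_SPURIOUS_IRQ) {
		k_fatal_halt(reason);
	}
}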

View File

@@ -12,14 +12,14 @@
* Fault handlers for ARCv2 processors.
*/
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <zephyr/arch/cpu.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
#include <zephyr/syscall.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
#include <syscall.h>
GTEXT(_Fault)
GTEXT(z_do_kernel_oops)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -34,29 +34,12 @@ GTEXT(__ev_extension)
GTEXT(__ev_div_zero)
GTEXT(__ev_dc_error)
GTEXT(__ev_maligned)
.macro _save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#ifdef CONFIG_IRQ_OFFLOAD
GTEXT(z_irq_do_offload);
#endif
LRR r0, [_ARC_V2_ERET]
STR r0, sp, ___isf_t_pc_OFFSET
LRR r0, [_ARC_V2_ERSTATUS]
STR r0, sp, ___isf_t_status32_OFFSET
.endm
/*
* The exception handling uses the top part of the interrupt stack to
* keep the memory footprint small, because exceptions are not frequent.
* To reduce the impact on interrupt handling, especially nested interrupts,
* the top part of the interrupt stack cannot be too large, so add a check
* here
*/
#if CONFIG_ARC_EXCEPTION_STACK_SIZE > (CONFIG_ISR_STACK_SIZE >> 1)
#error "interrupt stack size is too small"
#endif
/* the necessary stack size for exception handling */
#define EXCEPTION_STACK_SIZE 384
/*
* @brief Fault handler installed in the fault and reserved vectors
@@ -82,9 +65,9 @@ _exc_entry:
* and exception is raised, then here it's guaranteed that
* exception handling has necessary stack to use
*/
MOVR ilink, sp
mov ilink, sp
_get_curr_cpu_irq_stack sp
SUBR sp, sp, (CONFIG_ISR_STACK_SIZE - CONFIG_ARC_EXCEPTION_STACK_SIZE)
sub sp, sp, (CONFIG_ISR_STACK_SIZE - EXCEPTION_STACK_SIZE)
/*
* save caller saved registers
@@ -97,12 +80,20 @@ _exc_entry:
*/
_create_irq_stack_frame
_save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* sp is parameter of _Fault */
MOVR r0, sp
mov r0, sp
/* ilink is the thread's original sp */
MOVR r1, ilink
mov r1, ilink
jl _Fault
_exc_return:
@@ -114,16 +105,22 @@ _exc_return:
* exception comes out: thread context, irq context, or nested irq context?
*/
_get_next_switch_handle
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arch_smp_switch_in_isr
breq r0, 0, _exc_return_from_exc
mov r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
BREQR r0, 0, _exc_return_from_exc
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_exc
/* Save old thread into switch handle which is required by z_sched_switch_spin which
* will be called during old thread abort.
*/
STR r2, r2, ___thread_t_switch_handle_OFFSET
MOVR r2, r0
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/*
@@ -148,8 +145,8 @@ _exc_return:
*/
mov ilink, r2
#endif
LRR r3, [_ARC_V2_STATUS32]
ANDR r3, r3, (~(_ARC_V2_STATUS32_AE | _ARC_V2_STATUS32_RB(7)))
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~(_ARC_V2_STATUS32_AE | _ARC_V2_STATUS32_RB(7)))
kflag r3
/* pretend lowest priority interrupt happened to use common handler
* if exception is raised in irq, i.e., _ARC_V2_AUX_IRQ_ACT !=0,
@@ -159,78 +156,190 @@ _exc_return:
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
mov_s r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
mov r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
MOVR r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
mov r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
push_s r2
mov_s r0, _ARC_V2_AUX_IRQ_ACT
mov_s r1, r3
mov_s r6, ARC_S_CALL_AUX_WRITE
push r2
mov r0, _ARC_V2_AUX_IRQ_ACT
mov r1, r3
mov r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
pop_s r2
pop r2
#else
SRR r3, [_ARC_V2_AUX_IRQ_ACT]
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
#if defined(CONFIG_ARC_FIRQ) && CONFIG_RGF_NUM_BANKS != 1
mov r2, ilink
#endif
/* Assumption: r2 has next thread */
b _rirq_newthread_switch
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_exc:
/* exception handler may change return address.
* reload it
*/
LDR r0, sp, ___isf_t_pc_OFFSET
SRR r0, [_ARC_V2_ERET]
ld_s r0, [sp, ___isf_t_pc_OFFSET]
sr r0, [_ARC_V2_ERET]
_pop_irq_stack_frame
MOVR sp, ilink
mov sp, ilink
rtie
/* separate entry for trap which may be used by irq_offload, USERSPACE */
SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
/* get the id of trap_s */
LRR ilink, [_ARC_V2_ECR]
ANDR ilink, ilink, 0x3f
lr ilink, [_ARC_V2_ECR]
and ilink, ilink, 0x3f
#ifdef CONFIG_USERSPACE
cmp ilink, _TRAP_S_CALL_SYSTEM_CALL
bne _do_non_syscall_trap
/* do sys_call */
/* do sys_call */
mov ilink, K_SYSCALL_LIMIT
cmp r6, ilink
blo valid_syscall_id
blt valid_syscall_id
mov_s r0, r6
mov_s r6, K_SYSCALL_BAD
mov r0, r6
mov r6, K_SYSCALL_BAD
valid_syscall_id:
/* create a sys call frame
* caller regs (r0 - 12) are saved in _create_irq_stack_frame
* ok to use them later
*/
_create_irq_stack_frame
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr ilink, [_ARC_V2_ERSEC_STAT]
push ilink
#endif
lr ilink, [_ARC_V2_ERET]
push ilink
lr ilink, [_ARC_V2_ERSTATUS]
push ilink
_save_exc_regs_into_stack
/* exc return and do sys call in kernel mode,
* so need to clear U bit, r0 is already loaded
* with ERSTATUS in _save_exc_regs_into_stack
*/
bclr ilink, ilink, _ARC_V2_STATUS32_U_BIT
sr ilink, [_ARC_V2_ERSTATUS]
bclr r0, r0, _ARC_V2_STATUS32_U_BIT
sr r0, [_ARC_V2_ERSTATUS]
mov_s r0, _arc_do_syscall
sr r0, [_ARC_V2_ERET]
mov ilink, _arc_do_syscall
sr ilink, [_ARC_V2_ERET]
rtie
_do_non_syscall_trap:
#endif /* CONFIG_USERSPACE */
#ifdef CONFIG_IRQ_OFFLOAD
/*
* IRQ_OFFLOAD is to simulate interrupt handling through exception,
* so its entry is different with normal exception handling, it is
* handled in isr stack
*/
cmp ilink, _TRAP_S_SCALL_IRQ_OFFLOAD
bne _exc_entry
/* save caller saved registers */
_create_irq_stack_frame
#ifdef CONFIG_ARC_HAS_SECURE
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
bne.d exc_nest_handle
mov r0, sp
_get_curr_cpu_irq_stack sp
exc_nest_handle:
push_s r0
jl z_irq_do_offload
pop sp
_dec_int_nest_counter r0, r1
lr r0, [_ARC_V2_AUX_IRQ_ACT]
and r0, r0, 0xffff
cmp r0, 0
bne _exc_return_from_exc
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arch_smp_switch_in_isr
breq r0, 0, _exc_return_from_irqoffload_trap
mov r2, r1
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
mov r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_irqoffload_trap
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/*
* sync up the ERSEC_STAT.ERM and SEC_STAT.IRM.
* use a fake interrupt return to simulate an exception turn.
* ERM and IRM record which mode the cpu should return, 1: secure
* 0: normal
*/
lr r3,[_ARC_V2_ERSEC_STAT]
btst r3, 31
bset.nz r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
bclr.z r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
sflag r3
/* save _ARC_V2_SEC_STAT */
and r3, r3, 0xff
push r3
#endif
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
/* note: Ok to use _CAUSE_RIRQ since everything is saved */
mov r2, r0
#ifndef CONFIG_SMP
st_s r2, [r1, _kernel_offset_to_current]
#endif
/* clear AE bit to forget this was an exception */
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~_ARC_V2_STATUS32_AE)
kflag r3
/* pretend lowest priority interrupt happened to use common handler */
lr r3, [_ARC_V2_AUX_IRQ_ACT]
#ifdef CONFIG_ARC_SECURE_FIRMWARE
or r3, r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
or r3, r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
push_s r2
mov r0, _ARC_V2_AUX_IRQ_ACT
mov r1, r3
mov r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
pop_s r2
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_irqoffload_trap:
_pop_irq_stack_frame
rtie
#endif /* CONFIG_IRQ_OFFLOAD */
b _exc_entry
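For context on the trap path above: on the caller side, user code places the syscall id in r6 (the register the handler compares against K_SYSCALL_LIMIT) and the arguments in r0..r5 before issuing trap_s. Below is a rough, hypothetical sketch of a one-argument invoke in C with inline assembly; the register usage is inferred from the handler code in this diff, demo_syscall_invoke1 is a made-up name, and the authoritative implementation is the ARC syscall header in the tree.

#include <stdint.h>

/* _TRAP_S_CALL_SYSTEM_CALL comes from the ARC arch headers and is the
 * trap id checked by the handler above.
 */
static inline uintptr_t demo_syscall_invoke1(uintptr_t arg1, uintptr_t call_id)
{
	register uintptr_t ret __asm__("r0") = arg1;  /* argument in r0 */
	register uintptr_t id __asm__("r6") = call_id; /* syscall id in r6 */

	__asm__ volatile("trap_s %[trap_id]"
			 : "=r"(ret)
			 : [trap_id] "i"(_TRAP_S_CALL_SYSTEM_CALL), "r"(ret), "r"(id)
			 : "memory");

	return ret;
}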

View File

@@ -17,191 +17,51 @@
* number from 16 to last IRQ number on the platform.
*/
#include <zephyr/kernel.h>
#include <zephyr/arch/cpu.h>
#include <zephyr/sys/__assert.h>
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <zephyr/sw_isr_table.h>
#include <zephyr/irq.h>
#include <zephyr/sys/printk.h>
#include <kernel.h>
#include <arch/cpu.h>
#include <sys/__assert.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
#include <irq.h>
#include <sys/printk.h>
/*
* storage space for the interrupt stack of fast_irq
*/
#if defined(CONFIG_ARC_FIRQ_STACK)
#if defined(CONFIG_SMP)
K_KERNEL_STACK_ARRAY_DEFINE(_firq_interrupt_stack, CONFIG_MP_MAX_NUM_CPUS,
CONFIG_ARC_FIRQ_STACK_SIZE);
#else
K_KERNEL_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
#endif
/**
* @brief Set the stack pointer for firq handling
*/
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = Z_KERNEL_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = Z_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif
/* z_arc_firq_stack_set must be called with interrupts disabled, as
* it can be called not only in the init phase but also in other places
*/
unsigned int key = arch_irq_lock();
__asm__ volatile (
/* only ilink will not be banked, so use ilink as channel
* between 2 banks
*/
"mov %%ilink, %0\n\t"
"lr %0, [%1]\n\t"
"or %0, %0, %2\n\t"
"kflag %0\n\t"
"mov %%sp, %%ilink\n\t"
/* switch back to bank0, use ilink to avoid the pollution of
* bank1's gp regs.
*/
"lr %%ilink, [%1]\n\t"
"and %%ilink, %%ilink, %3\n\t"
"kflag %%ilink\n\t"
:
: "r"(firq_sp), "i"(_ARC_V2_STATUS32),
"i"(_ARC_V2_STATUS32_RB(1)),
"i"(~_ARC_V2_STATUS32_RB(7))
);
arch_irq_unlock(key);
}
#endif
/*
* ARC CPU interrupt controllers hierarchy.
*
* Single-core (UP) case:
*
* --------------------------
* | CPU core 0 |
* --------------------------
* | core 0 (private) |
* | interrupt controller |
* --------------------------
* |
* [internal interrupts]
* [external interrupts]
*
*
* Multi-core (SMP) case:
*
* -------------------------- --------------------------
* | CPU core 0 | | CPU core 1 |
* -------------------------- --------------------------
* | core 0 (private) | | core 1 (private) |
* | interrupt controller | | interrupt controller |
* -------------------------- --------------------------
* | | | | | |
* | | [core 0 private internal interrupts] | | [core 1 private internal interrupts]
* | | | |
* | | | |
* | ------------------------------------------- |
* | | IDU (Interrupt Distribution Unit) | |
* | ------------------------------------------- |
* | | |
* | [common (shared) interrupts] |
* | |
* | |
* [core 0 private external interrupts] [core 1 private external interrupts]
*
*
*
* The interrupts are grouped in HW in the same order - firstly internal interrupts
* (with lowest line numbers in IVT), than common interrupts (if present), than external
* interrupts (with highest line numbers in IVT).
*
* NOTE: in case of SMP system we currently support in Zephyr only private internal and common
* interrupts, so the core-private external interrupts are currently not supported for SMP.
*/
/**
* @brief Enable an interrupt line
*
* Clear possible pending interrupts on the line, and enable the interrupt
* line. After this call, the CPU will receive interrupts for the specified
* @a irq.
*
* @return N/A
*/
void arch_irq_enable(unsigned int irq);
/**
void z_arch_irq_enable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_enable(irq);
irq_unlock(key);
}
/*
* @brief Disable an interrupt line
*
* Disable an interrupt line. After this call, the CPU will stop receiving
* interrupts for the specified @a irq.
*/
void arch_irq_disable(unsigned int irq);
/**
* @brief Return IRQ enable state
*
* @param irq IRQ line
* @return interrupt enable state, true or false
* @return N/A
*/
int arch_irq_is_enabled(unsigned int irq);
#ifdef CONFIG_ARC_CONNECT
#define IRQ_NUM_TO_IDU_NUM(id) ((id) - ARC_CONNECT_IDU_IRQ_START)
#define IRQ_IS_COMMON(id) ((id) >= ARC_CONNECT_IDU_IRQ_START)
void arch_irq_enable(unsigned int irq)
void z_arch_irq_disable(unsigned int irq)
{
if (IRQ_IS_COMMON(irq)) {
z_arc_connect_idu_set_mask(IRQ_NUM_TO_IDU_NUM(irq), 0x0);
} else {
z_arc_v2_irq_unit_int_enable(irq);
}
}
unsigned int key = irq_lock();
void arch_irq_disable(unsigned int irq)
{
if (IRQ_IS_COMMON(irq)) {
z_arc_connect_idu_set_mask(IRQ_NUM_TO_IDU_NUM(irq), 0x1);
} else {
z_arc_v2_irq_unit_int_disable(irq);
}
}
int arch_irq_is_enabled(unsigned int irq)
{
if (IRQ_IS_COMMON(irq)) {
return !z_arc_connect_idu_read_mask(IRQ_NUM_TO_IDU_NUM(irq));
} else {
return z_arc_v2_irq_unit_int_enabled(irq);
}
}
#else
void arch_irq_enable(unsigned int irq)
{
z_arc_v2_irq_unit_int_enable(irq);
}
void arch_irq_disable(unsigned int irq)
{
z_arc_v2_irq_unit_int_disable(irq);
irq_unlock(key);
}
int arch_irq_is_enabled(unsigned int irq)
{
return z_arc_v2_irq_unit_int_enabled(irq);
}
#endif /* CONFIG_ARC_CONNECT */
/**
/*
* @internal
*
* @brief Set an interrupt's priority
@@ -211,15 +71,19 @@ int arch_irq_is_enabled(unsigned int irq)
* The priority is verified if ASSERT_ON is enabled; max priority level
* depends on CONFIG_NUM_IRQ_PRIO_LEVELS.
*
* @return N/A
*/
void z_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
ARG_UNUSED(flags);
unsigned int key = irq_lock();
__ASSERT(prio < CONFIG_NUM_IRQ_PRIO_LEVELS,
"invalid priority %d for irq %d", prio, irq);
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocated to secure world
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocted to secure world
* left prio levels allocated to normal world
*/
#if defined(CONFIG_ARC_SECURE_FIRMWARE)
@@ -230,25 +94,28 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
ARC_N_IRQ_START_LEVEL : prio;
#endif
z_arc_v2_irq_unit_prio_set(irq, prio);
irq_unlock(key);
}
/**
/*
* @brief Spurious interrupt handler
*
* Installed in all dynamic interrupt slots at boot time. Throws an error if
* called.
*
* @return N/A
*/
void z_irq_spurious(const void *unused)
void z_irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
}
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(const void *parameter),
const void *parameter, uint32_t flags)
int z_arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_irq_priority_set(irq, priority, flags);

View File

@@ -1,6 +1,5 @@
/*
* Copyright (c) 2015 Intel corporation
* Copyright (c) 2022 Synopsys
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -9,63 +8,30 @@
* @file Software interrupts utility code - ARC implementation
*/
#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>
#include <zephyr/init.h>
#include <kernel.h>
#include <irq_offload.h>
/* Choose a reasonable default for interrupt line which is used for irq_offload with the option
* to override it by setting interrupt line via device tree.
*/
#if DT_NODE_EXISTS(DT_NODELABEL(test_irq_offload_line_0))
#define IRQ_OFFLOAD_LINE DT_IRQN(DT_NODELABEL(test_irq_offload_line_0))
#else
/* Last two lines are already used in the IRQ tests, so we choose the 3rd line from the end */
#define IRQ_OFFLOAD_LINE (CONFIG_NUM_IRQS - 3)
#endif
static irq_offload_routine_t offload_routine;
static void *offload_param;
#define IRQ_OFFLOAD_PRIO 0
#define CURR_CPU (IS_ENABLED(CONFIG_SMP) ? arch_curr_cpu()->id : 0)
static struct {
volatile irq_offload_routine_t fn;
const void *volatile arg;
} offload_params[CONFIG_MP_MAX_NUM_CPUS];
static void arc_irq_offload_handler(const void *unused)
/* Called by trap_s exception handler */
void z_irq_do_offload(void)
{
ARG_UNUSED(unused);
offload_params[CURR_CPU].fn(offload_params[CURR_CPU].arg);
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, const void *parameter)
void irq_offload(irq_offload_routine_t routine, void *parameter)
{
offload_params[CURR_CPU].fn = routine;
offload_params[CURR_CPU].arg = parameter;
compiler_barrier();
unsigned int key;
z_arc_v2_aux_reg_write(_ARC_V2_AUX_IRQ_HINT, IRQ_OFFLOAD_LINE);
key = irq_lock();
offload_routine = routine;
offload_param = parameter;
__asm__ volatile("sync");
__asm__ volatile ("trap_s %[id]"
:
: [id] "i"(_TRAP_S_SCALL_IRQ_OFFLOAD) : );
/* If _current was aborted in the offload routine, we shouldn't be here */
__ASSERT_NO_MSG((_current->base.thread_state & _THREAD_DEAD) == 0);
irq_unlock(key);
}
/* need to be executed on every core in the system */
int arc_irq_offload_init(void)
{
IRQ_CONNECT(IRQ_OFFLOAD_LINE, IRQ_OFFLOAD_PRIO, arc_irq_offload_handler, NULL, 0);
/* The line is triggered and controlled by the core-private interrupt controller,
* so even when a common (IDU) interrupt line is used on SMP we need to enable it
* not with the generic irq_enable() but via z_arc_v2_irq_unit_int_enable().
*/
z_arc_v2_irq_unit_int_enable(IRQ_OFFLOAD_LINE);
return 0;
}
SYS_INIT(arc_irq_offload_init, POST_KERNEL, 0);
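A usage sketch for irq_offload() as exercised by the kernel tests: run a routine in interrupt context from thread context. The const-qualified parameter follows the newer prototype shown in this diff; the demo_ names are placeholders.

#include <zephyr/kernel.h>
#include <zephyr/irq_offload.h>
#include <zephyr/sys/__assert.h>

static volatile int demo_offload_ran;

static void demo_offload_fn(const void *param)
{
	/* executes in interrupt context (trap_s or the offload IRQ line) */
	demo_offload_ran = *(const int *)param;
}

void demo_run_in_irq_context(void)
{
	static const int magic = 42;

	irq_offload(demo_offload_fn, &magic);
	__ASSERT(demo_offload_ran == magic, "offload routine did not run");
}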

View File

@@ -14,19 +14,18 @@
*/
#include <offsets_short.h>
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
#include <zephyr/sw_isr_table.h>
#include <zephyr/kernel_structs.h>
#include <zephyr/arch/cpu.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
#include <kernel_structs.h>
#include <arch/cpu.h>
#include <swap_macros.h>
#include <zephyr/arch/arc/asm-compat/assembler.h>
GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
#if defined(CONFIG_PM)
GTEXT(z_pm_save_idle_exit)
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
GTEXT(z_sys_power_save_idle_exit)
#endif
/*
@@ -36,38 +35,40 @@ _rirq_enter/_firq_enter: they are jump points.
The flow is the following:
ISR -> _isr_wrapper -- + -> _rirq_enter -> _isr_demux -> ISR -> _rirq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
Context switch explanation:
The context switch code is spread in these files:
isr_wrapper.s, switch.s, swap_macros.h, fast_irq.s, regular_irq.s
isr_wrapper.s, switch.s, swap_macros.s, fast_irq.s, regular_irq.s
IRQ stack frame layout:
high address
high address
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
low address
low address
The context switch code adopts this standard so that it is easier to follow:
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
- r1 contains _kernel ASAP and is not overwritten over the lifespan of
the functions.
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
Not loading _kernel into r0 allows loading _kernel without stomping on
the parameter in r0 in arch_switch().
the parameter in r0 in z_arch_switch().
ARCv2 processors have two kinds of interrupts: fast (FIRQ) and regular. The
@@ -99,49 +100,47 @@ done upfront, and the rest is done when needed:
o RIRQ
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread's stack.
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread control structure (TCS).
o FIRQ
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is
the case, the FIRQ does not take a scheduling decision and leaves it
the RIRQ to handle. This limits the amount of code that has to run at
interrupt-level.
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is the case,
the FIRQ does not take a scheduling decision and leaves it the RIRQ to
handle. This limits the amount of code that has to run at interrupt-level.
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt
and let the RIRQ do the context switch. At entry, one register is
needed in order to have code to save other registers. r0 is saved
first in the stack and restored later
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt and
let the RIRQ do the context switch. At entry, one register is needed in order
to have code to save other registers. r0 is saved first in a global called
saved_r0.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own
stack. GPRs are banked, loop registers are saved in unused callee saved
regs upon interrupt entry. If returning to a thread, loop registers are
restored and the CPU switches back to bank 0 for the GPRs. If a context
switch is needed, at this point only are all the registers saved.
First, a stack frame with the same layout as the automatic RIRQ one is
created and then the callee-saved GPRs are saved in the stack.
status32_p0 and ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to
using bank0 of registers, since that is where the registers containing
the exiting thread are saved. Care must be taken not to touch any
register before saving them: the only one usable at that point is the
stack pointer.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own stack.
GPRs are banked, loop registers are saved in unused callee saved regs upon
interrupt entry. If returning to a thread, loop registers are restored and the
CPU switches back to bank 0 for the GPRs. If a context switch is
needed, at this point only are all the registers saved. First, a
stack frame with the same layout as the automatic RIRQ one is created
and then the callee-saved GPRs are saved in the TCS. status32_p0 and
ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to using
bank0 of registers, since that is where the registers containing the exiting
thread are saved. Care must be taken not to touch any register before saving
them: the only one usable at that point is the stack pointer.
o coop
When a coop context switch is done, the callee-saved registers are
saved in the stack. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
When a coop context switch is done, the callee-saved registers are
saved in the TCS. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
For restoring the contexts, there are six cases. In all cases, the
callee-saved registers of the incoming thread have to be restored. Then, there
@@ -149,87 +148,88 @@ are specifics for each case:
From coop:
o to coop
o to coop
Do a normal function call return.
Do a normal function call return.
o to any irq
o to any irq
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be
restored, then we jump to the interrupted instruction.
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be restored,
then we jump to the interrupted instruction.
From FIRQ:
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from
bank0, and now has to load the incoming one into bank0.
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from bank0,
and now has to load the incoming one
into bank0.
o to coop
o to coop
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
The address of the returning instruction from z_arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
o to any irq
o to any irq
The IRQ has saved the caller-saved registers in a stack frame, which
must be popped, and status32 and pc loaded in status32_p0 and ilink.
The IRQ has saved the caller-saved registers in a stack frame, which must be
popped, and status32 and pc loaded in status32_p0 and ilink.
From RIRQ:
o to coop
o to coop
The interrupt return mechanism in the processor expects a stack frame,
but the outgoing context did not create one. A fake one is created
here, with only the relevant values filled in: pc, status32.
The interrupt return mechanism in the processor expects a stack frame, but
the outgoing context did not create one. A fake one is created here, with
only the relevant values filled in: pc, status32.
There is a discrepancy between the ABI from the ARCv2 docs,
including the way the processor pushes GPRs in pairs in the IRQ stack
frame, and the ABI GCC uses. r13 should be a callee-saved register,
but GCC treats it as caller-saved. This means that the processor pushes
it in the stack frame along with r12, but the compiler does not save it
before entering a function. So, it is saved as part of the callee-saved
registers, and restored there, but the processor restores it _a second
time_ when popping the IRQ stack frame. Thus, the correct value must
also be put in the fake stack frame when returning to a thread that
context switched out cooperatively.
There is a discrepancy between the ABI from the ARCv2 docs, including the
way the processor pushes GPRs in pairs in the IRQ stack frame, and the ABI
GCC uses. r13 should be a callee-saved register, but GCC treats it as
caller-saved. This means that the processor pushes it in the stack frame
along with r12, but the compiler does not save it before entering a
function. So, it is saved as part of the callee-saved registers, and
restored there, but the processor restores it _a second time_ when popping
the IRQ stack frame. Thus, the correct value must also be put in the fake
stack frame when returning to a thread that context switched out
cooperatively.
o to any irq
o to any irq
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
*/
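For readers following the stack offsets used below, here is an illustrative C view of the IRQ stack frame layout described in the comment above, lowest address first (matching "sp -> r0"). It is a reading aid only, not the kernel's actual _isf_t definition, which also carries Kconfig-guarded secure and code-density fields.

#include <stdint.h>

struct demo_irq_stack_frame {       /* hypothetical mirror of the layout above */
	uint32_t r0, r1, r2, r3, r4, r5, r6;
	uint32_t r7, r8, r9, r10, r11, r12, r13;
	uint32_t blink;
	uint32_t lp_end;
	uint32_t lp_start;
	uint32_t lp_count;
	uint32_t pc;                /* written via ___isf_t_pc_OFFSET */
	uint32_t status32;          /* written via ___isf_t_status32_OFFSET */
};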
SECTION_FUNC(TEXT, _isr_wrapper)
#ifdef CONFIG_ARC_FIRQ
#if defined(CONFIG_ARC_FIRQ)
#if CONFIG_RGF_NUM_BANKS == 1
/* free r0 here, use r0 to check whether irq is firq.
* for rirq, as sp will not change and r0 already saved, this action
* in fact is useless
* in fact is an action like nop.
* for firq, r0 will be restored later
*/
push r0
st r0, [sp]
#endif
lr r0, [_ARC_V2_AUX_IRQ_ACT]
ffs r0, r0
cmp r0, 0
#if CONFIG_RGF_NUM_BANKS == 1
bnz rirq_path
pop r0
ld r0, [sp]
/* 1-register bank FIRQ handling must save registers on stack */
_create_irq_stack_frame
lr r0, [_ARC_V2_STATUS32_P0]
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET]
lr r0, [_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
mov_s r3, _firq_exit
mov_s r2, _firq_enter
mov r3, _firq_exit
mov r2, _firq_enter
j_s [r2]
rirq_path:
add sp, sp, 4
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#else
mov.z r3, _firq_exit
@@ -239,53 +239,71 @@ rirq_path:
j_s [r2]
#endif
#else
MOVR r3, _rirq_exit
MOVR r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#endif
/* r0, r1, and r3 will be used in exit_tickless_idle macro */
#if defined(CONFIG_TRACING)
GTEXT(z_sys_trace_isr_enter)
.macro log_interrupt_k_event
clri r0 /* do not interrupt event logger operations */
push_s r0
push_s blink
jl z_sys_trace_isr_enter
pop_s blink
pop_s r0
seti r0
.endm
#else
#define log_interrupt_k_event
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
.macro exit_tickless_idle
#if defined(CONFIG_PM)
clri r0 /* do not interrupt exiting tickless idle operations */
MOVR r1, _kernel
breq r3, 0, _skip_pm_save_idle_exit
push_s r1
push_s r0
mov_s r1, _kernel
ld_s r0, [r1, _kernel_offset_to_idle] /* requested idle duration */
breq r0, 0, _skip_sys_power_save_idle_exit
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
PUSHR blink
jl z_pm_save_idle_exit
POPR blink
push_s blink
jl z_sys_power_save_idle_exit
pop_s blink
_skip_pm_save_idle_exit:
_skip_sys_power_save_idle_exit:
pop_s r0
pop_s r1
seti r0
#endif
.endm
#else
#define exit_tickless_idle
#endif
/* when getting here, r3 contains the interrupt exit stub to call */
SECTION_FUNC(TEXT, _isr_demux)
PUSHR r3
push_s r3
/* according to ARCv2 ISA, r25, r30, r58, r59 are caller-saved
* scratch registers, possibly used by interrupt handlers
*/
PUSHR r25
PUSHR r30
push r25
push r30
#ifdef CONFIG_ARC_HAS_ACCL_REGS
PUSHR r58
#ifndef CONFIG_64BIT
PUSHR r59
#endif /* !CONFIG_64BIT */
push r58
push r59
#endif
#ifdef CONFIG_SCHED_THREAD_USAGE
bl z_sched_usage_stop
#endif
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_enter
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
#endif
/* cannot be done before this point because we must be able to run C */
/* r0 is available to be stomped here, and exit_tickless_idle uses it */
exit_tickless_idle
log_interrupt_k_event
lr r0, [_ARC_V2_ICAUSE]
/* handle software triggered interrupt */
@@ -296,32 +314,29 @@ irq_hint_handled:
sub r0, r0, 16
MOVR r1, _sw_isr_table
/* SW ISR table entries are 8-bytes wide for 32bit ISA and
* 16-bytes wide for 64bit ISA */
ASLR r0, r0, (ARC_REGSHIFT + 1)
ADDR r0, r1, r0
/* ISR into r1 */
LDR r1, r0, ARC_REGSZ
jl_s.d [r1]
/* delay slot: ISR parameter into r0 */
LDR r0, r0
mov r1, _sw_isr_table
add3 r0, r1, r0 /* table entries are 8-bytes wide */
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_exit
ld_s r1, [r0, 4] /* ISR into r1 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s r1
bl read_timer_end_of_isr
pop_s r1
pop_s r0
#endif
jl_s.d [r1]
ld_s r0, [r0] /* delay slot: ISR parameter into r0 */
#ifdef CONFIG_ARC_HAS_ACCL_REGS
#ifndef CONFIG_64BIT
POPR r59
#endif /* !CONFIG_64BIT */
POPR r58
pop r59
pop r58
#endif
POPR r30
POPR r25
pop r30
pop r25
/* back from ISR, jump to exit stub */
POPR r3
pop_s r3
j_s [r3]
nop_s
nop

View File

@@ -2,5 +2,5 @@
zephyr_library()
zephyr_library_sources_ifdef(CONFIG_ARC_CORE_MPU arc_core_mpu.c)
zephyr_library_sources_ifdef(CONFIG_ARC_MPU arc_mpu.c)
zephyr_library_sources_if_kconfig(arc_core_mpu.c)
zephyr_library_sources_if_kconfig(arc_mpu.c)

View File

@@ -1,16 +1,17 @@
# Memory Protection Unit (MPU) configuration options
# Kconfig - Memory Protection Unit (MPU) configuration options
#
# Copyright (c) 2017 Synopsys
#
# SPDX-License-Identifier: Apache-2.0
#
config ARC_MPU_VER
int "ARC MPU version"
range 2 8
range 2 4
default 2
help
ARC MPU has several versions. For MPU v2, the minimum region is 2048 bytes;
For other versions, the minimum region is 32 bytes; v4 has secure features,
v6 supports up to 32 regions. Note: MPU v5 & v7 are not supported.
For MPU v3, the minimum region is 32 bytes
config ARC_CORE_MPU
bool "ARC Core MPU functionalities"
@@ -19,21 +20,16 @@ config ARC_CORE_MPU
config MPU_STACK_GUARD
bool "Thread Stack Guards"
depends on ARC_CORE_MPU && ARC_MPU_VER !=2
depends on ARC_CORE_MPU
help
Enable thread stack guards via MPU. ARC supports built-in stack protection.
If your core supports that, it is preferred over MPU stack guard.
For ARC_MPU_VER == 2, it requires 2048 extra bytes and strict start-address
alignment, which wastes a lot of memory, so it is not supported there.
If your core supports that, it is preferred over MPU stack guard
config ARC_MPU
bool "ARC MPU Support"
select MPU
select SRAM_REGION_PERMISSIONS
select ARC_CORE_MPU
select THREAD_STACK_INFO
select GEN_PRIV_STACKS if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if !(ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if (ARC_MPU_VER = 4 || ARC_MPU_VER = 8)
select MEMORY_PROTECTION
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if ARC_MPU_VER = 2
help
Target has ARC MPU
Target has ARC MPU (currently only works for EMSK 2.2/2.3 ARCEM7D)

View File

@@ -4,11 +4,12 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/device.h>
#include <zephyr/init.h>
#include <zephyr/kernel.h>
#include <zephyr/arch/arc/v2/mpu/arc_core_mpu.h>
#include <zephyr/kernel_structs.h>
#include <device.h>
#include <init.h>
#include <kernel.h>
#include <soc.h>
#include <arch/arc/v2/mpu/arc_core_mpu.h>
#include <kernel_structs.h>
/*
* @brief Configure MPU for the thread
@@ -26,15 +27,73 @@ void configure_mpu_thread(struct k_thread *thread)
#if defined(CONFIG_USERSPACE)
int arch_mem_domain_max_partitions_get(void)
int z_arch_mem_domain_max_partitions_get(void)
{
return arc_core_mpu_get_max_domain_partition_regions();
}
/*
* Reset MPU region for a single memory partition
*/
void z_arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_remove_mem_partition(domain, partition_id);
arc_core_mpu_enable();
}
/*
* Configure MPU memory domain
*/
void z_arch_mem_domain_thread_add(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_configure_mem_domain(thread);
arc_core_mpu_enable();
}
/*
* Destroy MPU regions for the mem domain
*/
void z_arch_mem_domain_destroy(struct k_mem_domain *domain)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_remove_mem_domain(domain);
arc_core_mpu_enable();
}
void z_arch_mem_domain_partition_add(struct k_mem_domain *domain,
u32_t partition_id)
{
/* No-op on this architecture */
}
void z_arch_mem_domain_thread_remove(struct k_thread *thread)
{
if (_current != thread) {
return;
}
z_arch_mem_domain_destroy(thread->mem_domain_info.mem_domain);
}
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(void *addr, size_t size, int write)
int z_arch_buffer_validate(void *addr, size_t size, int write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}
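The hooks above only reprogram the MPU when the affected thread or domain is the currently running one. For reference, a minimal sketch of the application-facing memory-domain API that drives them; exact signatures vary between Zephyr releases, and the buffer, partition, and domain names are placeholders.

#include <zephyr/kernel.h>

static uint8_t __aligned(32) demo_buf[32];

/* one partition covering demo_buf, read/write for kernel and user */
K_MEM_PARTITION_DEFINE(demo_part, demo_buf, sizeof(demo_buf),
		       K_MEM_PARTITION_P_RW_U_RW);

static struct k_mem_domain demo_domain;

void demo_setup_domain(void)
{
	struct k_mem_partition *parts[] = { &demo_part };

	/* build the domain and attach the current thread to it */
	k_mem_domain_init(&demo_domain, ARRAY_SIZE(parts), parts);
	k_mem_domain_add_thread(&demo_domain, k_current_get());
}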

View File

@@ -4,36 +4,37 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/device.h>
#include <zephyr/init.h>
#include <zephyr/kernel.h>
#include <zephyr/arch/arc/v2/aux_regs.h>
#include <zephyr/arch/arc/v2/mpu/arc_mpu.h>
#include <zephyr/arch/arc/v2/mpu/arc_core_mpu.h>
#include <zephyr/linker/linker-defs.h>
#include <device.h>
#include <init.h>
#include <kernel.h>
#include <soc.h>
#include <arch/arc/v2/aux_regs.h>
#include <arch/arc/v2/mpu/arc_mpu.h>
#include <arch/arc/v2/mpu/arc_core_mpu.h>
#include <linker/linker-defs.h>
#define LOG_LEVEL CONFIG_MPU_LOG_LEVEL
#include <zephyr/logging/log.h>
#include <logging/log.h>
LOG_MODULE_REGISTER(mpu);
/**
* @brief Get the number of supported MPU regions
*
*/
static inline uint8_t get_num_regions(void)
static inline u8_t get_num_regions(void)
{
uint32_t num = z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
u32_t num = z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
num = (num & 0xFF00U) >> 8U;
return (uint8_t)num;
return (u8_t)num;
}
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct parameter set.
*/
static inline uint32_t get_region_attr_by_type(uint32_t type)
static inline u32_t get_region_attr_by_type(u32_t type)
{
switch (type) {
case THREAD_STACK_USER_REGION:
@@ -51,8 +52,8 @@ static inline uint32_t get_region_attr_by_type(uint32_t type)
}
}
#if (CONFIG_ARC_MPU_VER == 4) || (CONFIG_ARC_MPU_VER == 8)
#include "arc_mpu_v4_internal.h"
#else
#include "arc_mpu_common_internal.h"
#if CONFIG_ARC_MPU_VER == 2
#include "arc_mpu_v2_internal.h"
#elif CONFIG_ARC_MPU_VER == 3
#include "arc_mpu_v3_internal.h"
#endif

View File

@@ -1,287 +0,0 @@
/*
* Copyright (c) 2021 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_COMMON_INTERNAL_H_
#define ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_COMMON_INTERNAL_H_
#if CONFIG_ARC_MPU_VER == 2 || CONFIG_ARC_MPU_VER == 3
#include "arc_mpu_v2_internal.h"
#elif CONFIG_ARC_MPU_VER == 6
#include "arc_mpu_v6_internal.h"
#else
#error "Unsupported MPU version"
#endif
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(uint8_t type, uint32_t base, uint32_t size)
{
int32_t region_index = get_region_index_by_type(type);
uint32_t region_attr = get_region_attr_by_type(type);
LOG_DBG("Region info: 0x%x 0x%x", base, size);
if (region_attr == 0U || region_index < 0) {
return -EINVAL;
}
/*
* For ARC MPU, MPU regions can be overlapped, smaller
* region index has higher priority.
*/
_region_init(region_index, base, size, region_attr);
return 0;
}
/* ARC Core MPU Driver API Implementation for ARC MP */
/**
* @brief enable the MPU
*/
void arc_core_mpu_enable(void)
{
/* Enable MPU */
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) | AUX_MPU_EN_ENABLE);
}
/**
* @brief disable the MPU
*/
void arc_core_mpu_disable(void)
{
/* Disable MPU */
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) & AUX_MPU_EN_DISABLE);
}
/**
* @brief configure the thread's MPU regions
*
* @param thread the target thread
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_USERSPACE)
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
if (_mpu_configure(THREAD_STACK_USER_REGION,
(uint32_t)thread->stack_info.start,
thread->stack_info.size) < 0) {
LOG_ERR("user thread %p's stack failed", thread);
return;
}
}
LOG_DBG("configure thread %p's domain", thread);
arc_core_mpu_configure_mem_domain(thread);
#endif
}
/**
* @brief configure the default region
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(uint32_t region_attr)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) & (~AUX_MPU_RDP_ATTR_MASK);
region_attr &= AUX_MPU_RDP_ATTR_MASK;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN, region_attr | val);
}
/**
* @brief configure the MPU region
*
* @param index MPU region index
* @param base base address
* @param region_attr region attribute
*/
int arc_core_mpu_region(uint32_t index, uint32_t base, uint32_t size, uint32_t region_attr)
{
if (index >= get_num_regions()) {
return -EINVAL;
}
region_attr &= AUX_MPU_RDP_ATTR_MASK;
_region_init(index, base, size, region_attr);
return 0;
}
#if defined(CONFIG_USERSPACE)
/**
* @brief configure MPU regions for the memory partitions of the memory domain
*
* @param thread the thread which has memory domain
*/
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
int region_index = get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
uint32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = NULL;
if (thread) {
mem_domain = thread->mem_domain_info.mem_domain;
}
if (mem_domain) {
LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
LOG_DBG("disable domain partition regions");
num_partitions = 0U;
pparts = NULL;
}
for (; region_index >= 0; region_index--) {
if (num_partitions) {
LOG_DBG("set region 0x%x 0x%lx 0x%x",
region_index, pparts->start, pparts->size);
_region_init(region_index, pparts->start, pparts->size, pparts->attr);
num_partitions--;
} else {
/* clear the left mpu entries */
_region_init(region_index, 0, 0, 0);
}
pparts++;
}
}
/**
* @brief remove MPU regions for the memory partitions of the memory domain
*
* @param mem_domain the target memory domain
*/
void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
{
ARG_UNUSED(mem_domain);
int region_index = get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
for (; region_index >= 0; region_index--) {
_region_init(region_index, 0, 0, 0);
}
}
/**
* @brief reset MPU region for a single memory partition
*
* @param domain the target memory domain
* @param partition_id memory partition id
*/
void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain, uint32_t part_id)
{
ARG_UNUSED(domain);
int region_index = get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
LOG_DBG("disable region 0x%x", region_index + part_id);
/* Disable region */
_region_init(region_index + part_id, 0, 0, 0);
}
/**
* @brief get the maximum number of free regions for memory domain partitions
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
return get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) + 1;
}
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
/*
* For ARC MPU, smaller region number takes priority.
* we can stop the iteration immediately once we find the
* matched region that grants permission or denies access.
*
*/
for (int r_index = 0; r_index < get_num_regions(); r_index++) {
if (!_is_enabled_region(r_index) || !_is_in_region(r_index, (uint32_t)addr, size)) {
continue;
}
if (_is_user_accessible_region(r_index, write)) {
return 0;
} else {
return -EPERM;
}
}
return -EPERM;
}
#endif /* CONFIG_USERSPACE */
/* ARC MPU Driver Initial Setup */
/*
* @brief MPU default initialization and configuration
*
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static int arc_mpu_init(void)
{
uint32_t num_regions = get_num_regions();
if (mpu_config.num_regions > num_regions) {
__ASSERT(0, "Request to configure: %u regions (supported: %u)\n",
mpu_config.num_regions, num_regions);
return -EINVAL;
}
/* Disable MPU */
arc_core_mpu_disable();
/*
* the MPU regions are filled in the reverse order.
* According to ARCv2 ISA, the MPU region with smaller
* index has higher priority. The static background MPU
* regions in mpu_config end up at the bottom, and the
* special type regions are placed above them.
*/
int r_index = num_regions - mpu_config.num_regions;
/* clear all the regions first */
for (uint32_t i = 0U; i < r_index; i++) {
_region_init(i, 0, 0, 0);
}
/* configure the static regions */
for (uint32_t i = 0U; i < mpu_config.num_regions; i++) {
_region_init(r_index, mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size, mpu_config.mpu_regions[i].attr);
r_index++;
}
/* default region: no read, write and execute */
arc_core_mpu_default(0);
/* Enable MPU */
arc_core_mpu_enable();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_COMMON_INTERNAL_H_ */
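arc_mpu_init() above consumes a per-SoC mpu_config table of static background regions. The field names (num_regions, base, size, attr) come from the code above; the struct, macro, and attribute names in this sketch are illustrative stand-ins, not the real SoC definitions.

#include <stdint.h>
#include <zephyr/sys/util.h>

struct demo_mpu_region {            /* hypothetical mirror of the SoC region entry */
	uint32_t base;
	uint32_t size;
	uint32_t attr;              /* AUX_MPU_* attribute bits from the ARC MPU header */
};

struct demo_mpu_config {
	uint32_t num_regions;
	const struct demo_mpu_region *mpu_regions;
};

#define DEMO_FLASH_ATTR 0 /* placeholder: kernel read + execute bits */
#define DEMO_SRAM_ATTR  0 /* placeholder: kernel read + write bits */

static const struct demo_mpu_region demo_regions[] = {
	{ 0x00000000, 0x00040000, DEMO_FLASH_ATTR }, /* flash */
	{ 0x80000000, 0x00020000, DEMO_SRAM_ATTR },  /* sram  */
};

static const struct demo_mpu_config demo_mpu_config = {
	.num_regions = ARRAY_SIZE(demo_regions),
	.mpu_regions = demo_regions,
};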

Some files were not shown because too many files have changed in this diff.