Compare commits

76 Commits

Author SHA1 Message Date
Abramo Bagnara
07c3d4529a coding guidelines: comply with MISRA C:2012 Rule 12.2
- made the destination bit-width of left shifts explicit with a cast,
  ensuring the DTS is not broken

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-07-05 10:15:20 -04:00
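The fix described in the commit above can be illustrated with a minimal sketch (the helper name `bit_mask64` is invented for illustration and is not from the Zephyr tree). Without the cast, the shift would be performed in the width of `int`, so shifting into bit 31 and beyond is undefined behavior; casting first makes the destination bit-width explicit:

```c
#include <stdint.h>

/* Rule 12.2: the cast makes the 64-bit destination width of the
 * left shift explicit, so shifts past bit 31 are well defined. */
static inline uint64_t bit_mask64(unsigned int bit)
{
	return (uint64_t)1U << bit;
}
```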
Abramo Bagnara
f77c7bb2fe coding guidelines: comply with MISRA C:2012 Rule 17.7
- added an explicit cast to void when the returned value is intentionally ignored

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-22 07:38:47 -04:00
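A minimal sketch of the Rule 17.7 pattern from the commit above (the function is hypothetical, written only to show the `(void)` cast): the return value of `memset()` is deliberately discarded, and the cast documents that intent.

```c
#include <string.h>

/* Rule 17.7: the (void) cast marks the memset() return value
 * (a pointer to the buffer) as intentionally ignored. */
static int cleared_ok(void)
{
	char buf[8] = {'x', 'x', 'x', 'x', 'x', 'x', 'x', 'x'};

	(void)memset(buf, 0, sizeof(buf));
	return (buf[0] == 0) && (buf[7] == 0);
}
```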
Abramo Bagnara
7b6cdcbed7 coding guidelines: comply with MISRA C:2012 Rule 8.3
- fixed the code generator so as to use the "more" parameter name
  consistently

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-21 20:19:21 -04:00
Abramo Bagnara
878d4338bb coding guidelines: comply with MISRA C:2012 Rule 8.3
- fixed the code generator so as to obtain congruent parameter names

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-21 20:19:21 -04:00
Abramo Bagnara
9043b65acf coding guidelines: comply with MISRA C:2012 Rule 11.2
- avoided converting pointers to an incomplete type via a pointer to the first item

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-07 21:51:02 -04:00
Abramo Bagnara
071def1bf1 coding guidelines: comply with MISRA C:2012 Rule 15.6
- added missing braces

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-07 16:17:33 -04:00
Abramo Bagnara
922cde06dc coding guidelines: comply with MISRA C:2012 Rule 20.9
- avoided using undefined macros in #if expressions

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-07 16:17:06 -04:00
Abramo Bagnara
7229c12721 coding guidelines: comply with MISRA C:2012 Rule 21.15
- made the copied data type explicit

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-07 14:17:44 -04:00
Abramo Bagnara
88608b2e78 coding guidelines: comply with MISRA C:2012 Rule 2.2
- avoided dead stores

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-07 14:06:39 -04:00
Gerard Marull-Paretas
77efdc73ca doc: css: update code documentation directives style
A new Sphinx version (or docutils) has slightly changed the output format
for code documentation directives. These changes try to mimic the previous
behavior, even though the result is not 100% identical. In some cases the
new default style does not require further tweaks, and in others styling
as before is not possible.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-04-07 10:02:32 -04:00
Gerard Marull-Paretas
f30ce73f67 doc: update requirements
breathe: for simplicity, require versions >4.30 (lower versions have
known issues, so do not take risks).
Sphinx: start requiring versions >=4.x. Stay within compatible versions,
since Sphinx major upgrades can easily break extensions, themes, etc.
sphinx_rtd_theme: upgrade to >=1.x. Again, stay within compatible versions,
since we have style customizations that would likely break on major
upgrades.
pygments: allow any version >=2.9 (the version that introduced DT support).
We do not have strong compatibility requirements here.
sphinx-notfound-page: remove any version requirements; we do not have
strong requirements for this one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-04-07 10:02:32 -04:00
Abramo Bagnara
bdc5f2c7da coding guidelines: comply with MISRA C:2012 Rule 8.3
In particular:

- always use the same parameter names in every redeclaration

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-06 14:54:08 -04:00
Abramo Bagnara
29155bdd6c coding guidelines: comply with MISRA C:2012 Rule 12.1
In particular:

- added the required parentheses, verifying the lack of ambiguities

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-06 14:53:55 -04:00
Abramo Bagnara
5b627ad8b3 coding guidelines: comply with MISRA C:2012 Rule 2.7
In particular:

- added missing ARG_UNUSED

- added void to cast where ARG_UNUSED macro is not available

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-06 14:09:08 -04:00
Abramo Bagnara
839fa857c8 coding guidelines: comply with MISRA C:2012 Rule 11.9
In particular:

- avoided obtaining an unwanted null pointer

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-04-06 13:13:43 -04:00
Abramo Bagnara
64336f467c coding guidelines: comply with MISRA C:2012 Rule 13.4
In particular:

- avoided using the value of assignment expressions

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-29 16:29:35 -04:00
Gerard Marull-Paretas
37f1423213 ci: split Bluetooth workflow
Split Bluetooth tests workflow into 2 steps:

- One that runs the actual tests and stores the results
- A second one that fetches and uploads the test results

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-29 18:10:17 +02:00
Gerard Marull-Paretas
4071c6143c ci: make git credentials non-persistent
With this setting enabled, Git credentials are not kept after checkout.
Credentials are not necessary after the checkout step since we do not
do any further manual push/pull operations.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-29 18:09:30 +02:00
Abramo Bagnara
64816d53c0 coding guidelines: comply with MISRA C:2012 Rule 11.6
In particular:

- avoided unneeded conversions from integer to pointer

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-29 10:52:25 -04:00
Abramo Bagnara
ac487cf5a4 coding guidelines: comply with MISRA C:2012 Rule 15.2
In particular:

- moved a switch clause so as to avoid a backward jump

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-29 10:51:17 -04:00
Abramo Bagnara
08619905cd coding guidelines: comply with MISRA C:2012 Rule 20.7
In particular:

- added missing parentheses around macro argument expansions

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-29 10:50:47 -04:00
Abramo Bagnara
aba70ce903 coding guidelines: comply with MISRA C:2012 Rule 8.8
In particular:

- moved declarations so as not to conflict with static inline stubs

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-29 10:49:11 -04:00
Abramo Bagnara
671153b94d coding guidelines: comply with MISRA C:2012 Rule 13.3
In particular:

- avoided ++/-- on volatile

- avoided having more than one ++/-- in the third clause of a for statement

- moved ++/-- before or after the use of the value

- avoided pointless ++/--

- replaced consecutive ++ with a single +=

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-23 07:47:49 -04:00
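The "moved ++/-- before or after the value use" item above can be sketched as follows (a hypothetical example, not code from the tree): the side effect is pulled out of the larger expression so the increment stands alone.

```c
/* Rule 13.3: instead of the non-compliant form
 *     buf[len++] = 'a';
 * the increment is separated from the expression that uses the value. */
static int append_demo(void)
{
	char buf[4] = {0};
	unsigned int len = 0U;

	buf[len] = 'a';
	len++;
	buf[len] = 'b';
	len++;
	return (buf[0] == 'a') && (buf[1] == 'b') && (len == 2U);
}
```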
Abramo Bagnara
f953e929d8 coding guidelines: partially comply with MISRA C:2012 Rule 11.8
In particular:

- modified parameter types to receive a const pointer when a
  non-const pointer is not needed

- avoided redundant casts

- used cast to const pointer when a non-const pointer is not needed

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-14 07:47:43 -04:00
Abramo Bagnara
829f63f93f coding guidelines: partially comply with MISRA C:2012 Rule 7.4
In particular:

- avoided assigning string literals to non-const char *

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-14 07:47:22 -04:00
Abramo Bagnara
d03fa8d8e9 coding guidelines: partially comply with MISRA C:2012 essential types rules.
In particular:

- use bool when the data nature is Boolean;

- use explicit comparison with 0 or NULL;

- avoid mixing signed and unsigned integers in computations and
  comparisons;

- avoid mixing enumerations with integers: when this is unavoidable,
  always convert enums to integers and not the other way around;

- avoid mixing characters with integers;

- ensure computations are done in the destination precision;

- added /*? comments where the developer's intentions are not clear;

- added U suffix to unsigned constants (except for the CONFIG_* macro
  constants, as they cannot be changed, so their use as unsigned
  constants should be prefixed with a cast).

Violations for rules 10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8,
11.7, 12.2, 14.4 and 16.7 in the reference builds have been reduced
from 67818 to 60.  The commit cannot be divided on a per-rule basis
due to numerous cross-dependencies between changes.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-03-01 09:50:15 -05:00
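One of the listed cleanups, "ensure computations are done in the destination precision," might look like this minimal sketch (the function name is hypothetical): the operand is widened to the 64-bit destination type before the multiplication, so the product cannot overflow in 32 bits.

```c
#include <stdint.h>

/* Widening before the multiply makes the computation happen in the
 * 64-bit destination precision rather than overflowing in uint32_t. */
static uint64_t scaled_ticks(uint32_t ticks, uint32_t mul)
{
	return (uint64_t)ticks * mul;
}
```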
Michał Barnaś
40cf447b5a i2c: fix for MISRA in i2c shell commands
This commit changes the shell parameter name to meet the MISRA
check requirements.

Signed-off-by: Michał Barnaś <mb@semihalf.com>
2022-02-22 10:10:44 -08:00
Flavio Ceolin
3d6f386cca lib: rb: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- use explicit comparison with 0 or NULL
- ensure computations are done in the destination precision
- avoid mixing enumerations with integers: when this is unavoidable,
  always convert enums to integers and not the other way around
- avoid mixing signed and unsigned integers in computations and
  comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:52 -05:00
Flavio Ceolin
747db471bb lib: printk: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Ensure computations are done in the destination precision
- Avoid mixing characters with integers

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
ec69bda0d4 lib: hex: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Avoid mixing characters with integers
- Ensure computations are done in the destination precision

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
04379cbe09 lib: heap-validate: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Ensure computations are done in the destination precision
- Avoid mixing signed and unsigned integers in computations and
  comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
019e0d9573 lib: fdtable: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Avoid mixing signed and unsigned integers in computations and
  comparisons
- Ensure computations are done in the destination precision

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
da3a04a17b lib: fdtable: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Use explicit comparison with 0 or NULL to avoid boolean promotion

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
bbc6f78c7c lib: timeutil: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Ensure computations are done in the destination precision
- Avoid mixing signed and unsigned integers in computations and
  comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
259c805c3c lib: timeutil: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- Avoid promotion from boolean to integer

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
c3d715c6ee lib: sem: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- avoid mixing signed and unsigned integers in computations and
  comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
f5884c7db4 lib: sem: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- use bool when the data nature is Boolean

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
672934d206 lib: heap: Code guideline fixes
Fixes code guideline violations related to the essential types rules:

- ensure computations are done in the destination precision
- avoid mixing signed and unsigned integers in computations and
  comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:17:15 -05:00
Flavio Ceolin
9c0f9e6214 tests: footprints: Avoid boolean promotion to int
Explicitly return an int instead of promoting a boolean.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-02-08 10:14:24 -05:00
Flavio Ceolin
844843ce47 tests: footprints: Use proper unsigned types
Avoid mixing signed and unsigned integers in computations and
comparisons.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:14:24 -05:00
Flavio Ceolin
cf873c6eae tests: footprints: Cast variable to proper type
Ensure that the computation is done in the destination precision.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:14:24 -05:00
Flavio Ceolin
5b6fef09da tests: footprints: Use proper type
Change len in run_libc to be size_t since strlen returns
this type.

Avoid mixing signed and unsigned integers in computations and
comparisons

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2022-02-08 10:14:24 -05:00
Peter Mitsis
0075f17e1f kernel: add 'static' keyword to select routines
Applies the 'static' keyword to the following inlined routines:
    z_priq_dumb_add()
    z_priq_mq_add()
    z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-12-10 07:20:58 -05:00
Abramo Bagnara
3c1d1a11e9 coding guidelines: partially comply with MISRA C:2012 Rule 20.7
MISRA C:2012 Rule 20.7 (Expressions resulting from the expansion of
macro parameters shall be enclosed in parentheses.)

Where possible, change, e.g.,

 #define EXAMPLE1(h, k) (h + k)

to

 #define EXAMPLE1(h, k) ((h) + (k))

Where this is not possible and we have a macro we cannot change, e.g.,

 #define EXAMPLE2(h, k) (h * k)

change, e.g.,

  EXAMPLE2(x - y, a - b)

to

  EXAMPLE2((x - y), (a - b))

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-12-10 07:19:31 -05:00
Abramo Bagnara
63afaf742a coding guidelines: partially comply with MISRA C:2012 Rule 10.2
MISRA C:2012 Rule 10.2 (Expressions of essentially character type
shall not be used inappropriately in addition and subtraction
operations.)

Use the compliant form char - integer instead of the non-compliant
one integer - char.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-30 06:56:22 -05:00
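The compliant form named in the commit above can be sketched as follows (a hypothetical helper, not taken from the tree): subtraction between two character operands is the Rule 10.2-compliant way to derive a digit's value.

```c
/* Rule 10.2: char - char is the compliant form for digit conversion,
 * rather than mixing an integer operand on the left of a char. */
static int digit_value(char c)
{
	return c - '0';
}
```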
Abramo Bagnara
7eadb9c5eb coding guidelines: partially comply with MISRA C:2012 Rule 10.5
MISRA C:2012 Rule 10.5 (The value of an expression should not be cast
to an inappropriate essential type.)

Avoid integer-to-Boolean and Boolean-to-integer type casts.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-22 23:14:03 -05:00
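Avoiding the integer-to-Boolean cast described above typically means replacing a cast with an explicit comparison, as in this minimal sketch (the function and mask are hypothetical):

```c
#include <stdbool.h>

/* Rule 10.5: derive a bool via an explicit comparison,
 * not via a cast such as (bool)(status & 0x1U). */
static bool fifo_ready(unsigned int status)
{
	return (status & 0x1U) != 0U;
}
```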
Anas Nashif
8e6e745c76 scripts: gen_app_partitions: do not load empty files
Do not load empty files through the ELF parser, and raise an exception
when the ELF magic number does not match.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-22 21:19:33 -05:00
Anas Nashif
bea9a92819 actions: twister: abort if branch is not up to date
Make sure the latest actions are available and that the PR is rebased
on top of the latest branch HEAD.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-22 12:39:02 -05:00
Anas Nashif
dceab47f74 ci: buildkite: do not use BK on this branch
We now use GitHub actions on this branch, so remove Buildkite-related files.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-22 12:39:02 -05:00
Abramo Bagnara
5d02614e34 coding guidelines: partially comply with MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
Use comparisons with NULL instead of implicitly testing pointers.
Use comparisons with NUL instead of implicitly testing plain chars.
Use `bool' instead of `int' to represent Boolean values.
Use `while (true)' instead of `while (1)' to express infinite loops.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-21 20:11:37 -05:00
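The controlling-expression rules listed above can be shown in one small sketch (a hypothetical function, not from the tree): pointers and integers are compared explicitly instead of being tested implicitly.

```c
#include <stdbool.h>
#include <stddef.h>

static const int sample[3] = {1, 0, 2};

/* Rule 14.4: every controlling expression is essentially Boolean:
 * explicit NULL comparison, explicit != 0, i < n. */
static size_t count_nonzero(const int *p, size_t n)
{
	size_t count = 0U;

	if (p == NULL) {          /* not: if (!p) */
		return 0U;
	}
	for (size_t i = 0U; i < n; i++) {
		if (p[i] != 0) {  /* not: if (p[i]) */
			count++;
		}
	}
	return count;
}
```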
Anas Nashif
5b7cc3bffa actions: twister: fix action cron
Fix cron for twister action and fix target branch.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:57:06 -05:00
Anas Nashif
4c26541842 actions: run twister using github action
This action replaces the current Buildkite workflow and uses GitHub actions
to build and run tests in the zephyr tree using twister. The main
differences to the current Buildkite workflow:

- the action handles all 3 events: pull requests, push and schedule

- the action determines the size of the matrix (number of build hosts)
  based on the change, with a minimum of 1 builder. If more tests are
  built/run due to changes to boards or tests/samples, the matrix size is
  increased. This avoids timeouts when running over capacity due to
  board/test changes.

- We use ccache and store cache files on Amazon S3 for more flexibility

- Results are collected per build host and merged in the final step, and
  failures are posted into GitHub action check runs.

- It runs on more powerful instances that can handle more load.
  Currently we have 10 build hosts per run (that can increase depending
  on number of tests run) and can deliver results within 1 hour.

- the action can deal with non-code changes and will not allocate more
  resources than required for changes to documentation and other files
  that do not require running twister

The long-term goal is to better integrate this workflow with other actions
and not run unnecessarily if other workflows have failed. For example, if
the commit message is bogus, we should stop at that check to avoid wasting
resources, given that the commit message will have to be fixed anyway,
which would later trigger another run on the same code.

Currently there is one open issue with this action, related to a GitHub
workflow bug where the final results are not posted to the same workflow
and might appear under other workflows. GitHub is working on this bug.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
0a15b510f4 actions: clang: add branch name to ccache key
Add branch name to the ccache key to avoid cache contamination from old
branches.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
d21b0e8ffe actions: remove main branch actions
Remove actions not relevant to a branch.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
11dddc5364 ci: hotfix: disable test exclusion by tags
Many tests and CI activities are being missed because tests are mistakenly
excluded when running twister. This is visible when you change one or more
tests in kernel/, for example: twister does not run the changed tests at
all, yet marks the PR as tested and ready to be merged.

Temporary fix for #40235.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
a984ca8b70 actions: clang: set reporting before calling twister
Otherwise reporting is skipped and failures are not recorded.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
4a8da1624d actions: clang: use ccache
Use ccache to speed up builds.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
18b6921b1b actions: retry west update on various workflows
Retry west update when it fails, and use the update.narrow configuration.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
efa7239352 actions: clang: fix typo
Add missing ")".

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
3e68e6b550 actions: clang: do not rebase, use commit range
Avoid rebasing and instead use the commit range. This avoids issues with
trees having intermediate rebase data after a reboot (due to
cancellation).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
16dc0c550f actions: run code coverage only on main tree
Run code coverage reporting on main zephyr repo only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
7a3ca740e7 actions: follow namespace for job names
To avoid conflicts in reporting.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
f0488b4956 actions: conflict: update version
Use released version instead of master.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
3472b3b212 actions: compliance: minor improvements
Namespace job names and retry west update if something goes wrong the
first time.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
093a9ea724 actions: bluetooth: rename action and make it obvious
Rename so the action file name obviously refers to Bluetooth, rather
than to the tool used.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
820aa10ac9 actions: bluetooth: fix job names and description
Misc fixes including:
- unique job names
- Change description to mention Bluetooth
- Retry west update
- Use latest unit test publication action

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
711f44a1fc action: codecov: do not run on weekends
No need to run on weekends; nothing much happens, so let's save some
resources.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
d09cb7f1da actions: fix typo in clang action
Fix a minor typo in the action and always set the variable controlling reports.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
06e25ea826 action: configure git user
Configure git for rebase by setting user name, email.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
3ff5e8b7ce actions: optimize clang actions
- use zephyr runner
- reduce number of builders and adapt matrix to be platform based
- check for changed files and optimize the run accordingly; this should
  reduce build times depending on what has changed
- If no source has changed, skip twister completely.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Anas Nashif
22fdbf18d5 ci: add code coverage action
Add a code coverage collection action that triggers based on a schedule
on the main branch and posts results to

 https://app.codecov.io/gh/zephyrproject-rtos/zephyr

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-11-21 09:49:10 -05:00
Abramo Bagnara
b652faeb91 coding guidelines: comply with MISRA C:2012 Rule 9.3
MISRA C:2012 Rule 9.3 (Arrays shall not be partially initialized.)

Systematically use `{0}' to specify full 0 initialization
(not `{}', not `{0U}').

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-20 16:15:52 -05:00
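The `{0}` initialization convention above can be illustrated with a small hypothetical struct (the type and field names are invented for illustration):

```c
/* Rule 9.3: {0} fully zero-initializes the aggregate;
 * the {} and {0U} spellings are avoided. */
struct sensor_cfg {
	int channel;
	unsigned int rate;
	char name[8];
};

static int default_is_zeroed(void)
{
	struct sensor_cfg cfg = {0};

	return (cfg.channel == 0) && (cfg.rate == 0U) && (cfg.name[0] == '\0');
}
```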
Abramo Bagnara
c87097cad4 coding guidelines: comply with MISRA C:2012 Rule 4.1
MISRA C:2012 Rule 4.1 (Octal and hexadecimal escape sequences shall be
terminated.)

Use string literal concatenation to properly terminate hexadecimal
escape sequences.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-18 17:47:15 -05:00
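The string-literal-concatenation trick above looks like this in practice (a hypothetical example, not a literal from the tree): `"\x12f"` would parse as a single escape `0x12F`, out of range for a byte, so splitting the literal terminates the escape after the intended byte.

```c
/* Rule 4.1: concatenation terminates the hex escape at 0x12,
 * so the 'f' is a separate character rather than part of the escape. */
static const char pattern[2 + 1] = "\x12" "f";
```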
Abramo Bagnara
f3c9c0ae19 coding guidelines: comply with MISRA C:2012 Rule 8.2
MISRA C:2012 Rule 8.2 (Function types shall be in prototype form with
named parameters.)

Added missing parameter names.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-18 17:46:33 -05:00
Abramo Bagnara
835451e36f coding guidelines: comply with MISRA C:2012 Rule 21.13
MISRA C:2012 Rule 21.13 (Any value passed to a function in <ctype.h>
shall be representable as an unsigned char or be the value EOF).

Functions in <ctype.h> have undefined behavior if they are called with
any other value. Callers affected by this change are not prepared to
handle EOF anyway. The addition of these casts avoids the issue
and does not result in any performance penalty.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-15 08:35:17 -05:00
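The cast described above takes this shape in a minimal sketch (the wrapper name is hypothetical): plain `char` may be signed, and passing a negative value other than `EOF` to a `<ctype.h>` function is undefined behavior, so the argument is cast to `unsigned char` first.

```c
#include <ctype.h>

/* Rule 21.13: the cast keeps the argument representable
 * as an unsigned char before it reaches isdigit(). */
static int is_digit_safe(char c)
{
	return isdigit((unsigned char)c) != 0;
}
```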
Abramo Bagnara
f6ce289342 coding guidelines: comply with MISRA C:2012 Rule 7.2
MISRA C:2012 Rule 7.2 (A `u' or `U' suffix shall be applied to all
integer constants that are represented in an unsigned type)

Added missing `U' suffixes in constants that are involved in the
analyzed build, plus a few more so as not to introduce inconsistencies
with respect to nearby constants that are either unused in the
build (but implicitly unsigned) or are used and are immediately
converted to unsigned.

Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
2021-11-14 09:59:26 -05:00
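A minimal sketch of the `U` suffix addition above (macro and function names are hypothetical): without the suffix, a constant like `0x80000000` takes an implementation-dependent type on some platforms, whereas the suffix makes the unsigned intent explicit.

```c
#include <stdint.h>

/* Rule 7.2: the U suffix marks the constant as unsigned. */
#define MSB32 0x80000000U

static uint32_t set_msb(uint32_t v)
{
	return v | MSB32;
}
```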
595 changed files with 4998 additions and 7958 deletions

View File

@@ -1,35 +0,0 @@
steps:
- command:
- .buildkite/run.sh
env:
ZEPHYR_TOOLCHAIN_VARIANT: "zephyr"
ZEPHYR_SDK_INSTALL_DIR: "/opt/toolchains/zephyr-sdk-0.13.1"
parallelism: 475
timeout_in_minutes: 210
retry:
manual: true
plugins:
- docker#v3.5.0:
image: "zephyrprojectrtos/ci:v0.18.4"
propagate-environment: true
volumes:
- "/var/lib/buildkite-agent/git-mirrors:/var/lib/buildkite-agent/git-mirrors"
- "/var/lib/buildkite-agent/zephyr-module-cache:/var/lib/buildkite-agent/zephyr-module-cache"
- "/var/lib/buildkite-agent/zephyr-ccache:/root/.ccache"
workdir: "/workdir/zephyr"
agents:
- "queue=default"
- wait: ~
continue_on_failure: true
- plugins:
- junit-annotate#v1.7.0:
artifacts: twister-*.xml
- command:
- .buildkite/mergejunit.sh
notify:
- email: "builds+int+399+7809482394022958124@lists.zephyrproject.org"
if: build.state != "passed"

View File

@@ -1,8 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
# report disk usage:
echo "--- $0 disk usage"
df -h

View File

@@ -1,44 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
# Save off where we started so we can go back there
WORKDIR=${PWD}
echo "--- $0 disk usage"
df -h
du -hs /var/lib/buildkite-agent/*
docker images -a
docker system df -v
if [ -n "${BUILDKITE_PULL_REQUEST_BASE_BRANCH}" ]; then
git fetch -v origin ${BUILDKITE_PULL_REQUEST_BASE_BRANCH}
git checkout FETCH_HEAD
git config --local user.email "builds@zephyrproject.org"
git config --local user.name "Zephyr CI"
git merge --no-edit "${BUILDKITE_COMMIT}" || {
local merge_result=$?
echo "Merge failed: ${merge_result}"
git merge --abort
exit $merge_result
}
fi
mkdir -p /var/lib/buildkite-agent/zephyr-ccache/
# create cache dirs, no-op if they already exist
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/modules
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/tools
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/bootloader
# Clean cache - if it already exists
cd /var/lib/buildkite-agent/zephyr-module-cache
find -type f -not -path "*/.git/*" -not -name ".git" -delete
# Remove any stale locks
find -name index.lock -delete
# return from where we started so we can find pipeline files from
# git repo
cd ${WORKDIR}

View File

@@ -1,19 +0,0 @@
#!/bin/bash
# Copyright (c) 2021 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
set -eE
buildkite-agent artifact download twister-*.xml .
xmls=""
for f in twister-*xml; do [ -s ${f} ] && xmls+="${f} "; done
if [ "${xmls}" ]; then
junitparser merge ${xmls} junit.xml
buildkite-agent artifact upload junit.xml
junit2html junit.xml
buildkite-agent artifact upload junit.xml.html
buildkite-agent annotate --style "info" "Read the <a href=\"artifact://junit.xml.html\">JUnit test report</a>"
fi

View File

@@ -1,31 +0,0 @@
steps:
- command:
- .buildkite/run.sh
env:
ZEPHYR_TOOLCHAIN_VARIANT: "zephyr"
ZEPHYR_SDK_INSTALL_DIR: "/opt/toolchains/zephyr-sdk-0.13.1"
parallelism: 20
timeout_in_minutes: 180
retry:
manual: true
plugins:
- docker#v3.5.0:
image: "zephyrprojectrtos/ci:v0.18.4"
propagate-environment: true
volumes:
- "/var/lib/buildkite-agent/git-mirrors:/var/lib/buildkite-agent/git-mirrors"
- "/var/lib/buildkite-agent/zephyr-module-cache:/var/lib/buildkite-agent/zephyr-module-cache"
- "/var/lib/buildkite-agent/zephyr-ccache:/root/.ccache"
workdir: "/workdir/zephyr"
agents:
- "queue=default"
- wait: ~
continue_on_failure: true
- plugins:
- junit-annotate#v1.7.0:
artifacts: twister-*.xml
- command:
- .buildkite/mergejunit.sh

View File

@@ -1,78 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
set -eE
function cleanup()
{
# Rename twister junit xml for use with junit-annotate-buildkite-plugin
# create dummy file if twister did nothing
if [ ! -f twister-out/twister.xml ]; then
touch twister-out/twister.xml
fi
mv twister-out/twister.xml twister-${BUILDKITE_JOB_ID}.xml
buildkite-agent artifact upload twister-${BUILDKITE_JOB_ID}.xml
# Upload test_file to get list of tests that are build/run
if [ -f test_file.txt ]; then
buildkite-agent artifact upload test_file.txt
fi
# ccache stats
echo "--- ccache stats at finish"
ccache -s
# Cleanup on exit
rm -fr *
# disk usage
echo "--- disk usage at finish"
df -h
}
trap cleanup ERR
echo "--- run $0"
git log -n 5 --oneline --decorate --abbrev=12
# Setup module cache
cd /workdir
ln -s /var/lib/buildkite-agent/zephyr-module-cache/modules
ln -s /var/lib/buildkite-agent/zephyr-module-cache/tools
ln -s /var/lib/buildkite-agent/zephyr-module-cache/bootloader
cd /workdir/zephyr
export JOB_NUM=$((${BUILDKITE_PARALLEL_JOB}+1))
# ccache stats
echo ""
echo "--- ccache stats at start"
ccache -s
if [ -n "${DAILY_BUILD}" ]; then
TWISTER_OPTIONS=" --inline-logs -M -N --build-only --all --retry-failed 3 -v "
echo "--- DAILY BUILD"
west init -l .
west update 1> west.update.log || west update 1> west.update-2.log
west forall -c 'git reset --hard HEAD'
source zephyr-env.sh
./scripts/twister --subset ${JOB_NUM}/${BUILDKITE_PARALLEL_JOB_COUNT} ${TWISTER_OPTIONS}
else
if [ -n "${BUILDKITE_PULL_REQUEST_BASE_BRANCH}" ]; then
./scripts/ci/run_ci.sh -c -b ${BUILDKITE_PULL_REQUEST_BASE_BRANCH} -r origin \
-m ${JOB_NUM} -M ${BUILDKITE_PARALLEL_JOB_COUNT} -p ${BUILDKITE_PULL_REQUEST}
else
./scripts/ci/run_ci.sh -c -b ${BUILDKITE_BRANCH} -r origin \
-m ${JOB_NUM} -M ${BUILDKITE_PARALLEL_JOB_COUNT};
fi
fi
TWISTER_EXIT_STATUS=$?
cleanup
exit ${TWISTER_EXIT_STATUS}

View File

@@ -9,7 +9,7 @@ on:
jobs:
backport:
runs-on: ubuntu-20.04
runs-on: ubuntu-18.04
name: Backport
steps:
- name: Backport

View File

@@ -1,30 +0,0 @@
name: Backport Issue Check
on:
pull_request_target:
branches:
- v*-branch
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-20.04
steps:
- name: Check out source code
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}


@@ -8,7 +8,7 @@ on:
jobs:
bluetooth-test-results:
name: "Publish Bluetooth Test Results"
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: github.event.workflow_run.conclusion != 'skipped'
steps:


@@ -10,13 +10,17 @@ on:
- "soc/posix/**"
- "arch/posix/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bluetooth-test:
runs-on: ubuntu-20.04
bluetooth-test-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
bluetooth-test-build:
runs-on: ubuntu-latest
needs: bluetooth-test-prep
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -34,7 +38,7 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: west setup
run: |
@@ -51,7 +55,7 @@ jobs:
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: bluetooth-test-results
path: |
@@ -60,7 +64,7 @@ jobs:
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: event
path: |


@@ -2,18 +2,20 @@ name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
clang-build:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on: zephyr_runner
needs: clang-build-prep
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -22,39 +24,33 @@ jobs:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.13.1
CLANG_ROOT_DIR: /usr/lib/llvm-12
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Clone cached Zephyr repository
continue-on-error: true
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Update PATH for west
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout
uses: actions/checkout@v3
- name: checkout
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
- name: west setup
run: |
pip3 install GitPython
echo "$HOME/.local/bin" >> $GITHUB_PATH
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
west config --global update.narrow true
# In some cases modules are left in a state where they can't be
# updated (e.g. when we cancel a job and the builder is killed).
# So first retry the update; if that does not work, remove all modules
# and start over. (Workaround until we implement more robust west
# module caching.)
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules && west update --path-cache /github/cache/zephyrproject)
west update 2>&1 1> west.log || west update 2>&1 1> west2.log || ( rm -rf ../modules && west update)
- name: Check Environment
run: |
@@ -62,7 +58,6 @@ jobs:
${CLANG_ROOT_DIR}/bin/clang --version
gcc --version
ls -la
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
@@ -70,7 +65,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
message("::set-output name=repo::${repo2}")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
@@ -84,26 +79,36 @@ jobs:
- name: ccache stats initial
run: |
test -d github/home/.ccache && rm -rf /github/home/.ccache && mv github/home/.ccache /github/home/.ccache
test -d github/home/.ccache && mv github/home/.ccache /github/home/.ccache
ccache -M 10G -s
- name: Run Tests with Twister
id: twister
run: |
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=llvm
# check if we need to run a full twister or not based on files changed
python3 ./scripts/ci/test_plan.py --platform ${{ matrix.platform }} -c origin/${BASE_REF}..
# We can limit scope to just what has changed
if [ -s testplan.csv ]; then
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --inline-logs -M -N -v --load-tests testplan.csv --retry-failed 2
SC=$(./scripts/ci/what_changed.py --commits ${COMMIT_RANGE})
# Get twister arguments based on the files changed
./scripts/ci/get_twister_opt.py --commits ${COMMIT_RANGE}
if [ "$SC" = "full" ]; then
# Full twister
echo "::set-output name=report_needed::1";
./scripts/twister --inline-logs -M -N -v -p ${{ matrix.platform }} --retry-failed 2
else
# if nothing is run, skip reporting step
echo "report_needed=0" >> $GITHUB_OUTPUT
# We can limit scope to just what has changed
if [ -s modified_tests.args ]; then
# we are working with one platform at a time
sed -i '/--all/d' modified_tests.args
echo "::set-output name=report_needed::1";
# Full twister but with options based on changes
./scripts/twister --inline-logs -M -N -v -p ${{ matrix.platform }} +modified_tests.args --retry-failed 2
else
# if nothing is run, skip reporting step
echo "::set-output name=report_needed::0";
fi
fi
- name: ccache stats post
@@ -112,7 +117,7 @@ jobs:
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
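The `west update` line in the hunk above is a retry-then-reset pattern: try the update twice, and if it still fails, delete the module checkout and update once more from scratch. A sketch of the same control flow, with a hypothetical `fetch_modules` standing in for `west update` (it is rigged to fail twice so the fallback path runs):

```shell
# Count attempts in a temp file so the stub can fail deterministically.
ATTEMPTS_FILE=$(mktemp)
echo 0 > "${ATTEMPTS_FILE}"

fetch_modules() {
    count=$(( $(cat "${ATTEMPTS_FILE}") + 1 ))
    echo "${count}" > "${ATTEMPTS_FILE}"
    # Simulate a broken module state on the first two attempts.
    [ "${count}" -ge 3 ]
}

# Same shape as the workflow: retry once, then wipe and retry.
fetch_modules || fetch_modules || ( rm -rf ./modules-sketch && fetch_modules )
echo "update sketch succeeded after $(cat "${ATTEMPTS_FILE}") attempts"
```

One detail worth noting in the hunk itself: with `west update 2>&1 1> west.log`, stderr is duplicated to the original stdout before stdout is redirected, so only stdout lands in the log file while stderr still reaches the job console.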


@@ -4,18 +4,22 @@ on:
schedule:
- cron: '25 */3 * * 1-5'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov-prep:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
codecov:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on: zephyr_runner
needs: codecov-prep
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -28,14 +32,8 @@ jobs:
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
fetch-depth: 0
@@ -56,7 +54,7 @@ jobs:
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
message("::set-output name=repo::${repo2}")
- name: use cache
id: cache-ccache
@@ -96,7 +94,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
@@ -110,7 +108,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Download Artifacts
@@ -146,8 +144,8 @@ jobs:
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
message("::set-output name=mergefiles::${MERGELIST}")
message("::set-output name=covfiles::${FILELIST}")
- name: Merge coverage files
run: |


@@ -4,17 +4,17 @@ on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip


@@ -4,11 +4,11 @@ on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -20,7 +20,7 @@ jobs:
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -28,13 +28,13 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
@@ -72,7 +72,7 @@ jobs:
./scripts/ci/check_compliance.py -m Codeowners -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: compliance.xml


@@ -1,14 +0,0 @@
name: Conflict Finder
on:
push:
branches-ignore:
- '**'
jobs:
conflict:
runs-on: ubuntu-latest
steps:
- uses: mschilde/auto-label-merge-conflicts@v2
with:
CONFLICT_LABEL_NAME: "has conflicts"
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,38 +0,0 @@
# Copyright (c) 2020 Intel Corp.
# SPDX-License-Identifier: Apache-2.0
name: Publish commit for daily testing
on:
schedule:
- cron: '50 22 * * *'
push:
branches:
- refs/tags/*
jobs:
get_version:
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .
aws s3 cp versions.json s3://testing.zephyrproject.org/daily_tests/versions.json


@@ -6,14 +6,10 @@ name: Devicetree script tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
@@ -25,22 +21,20 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04, macos-11, windows-2022]
os: [ubuntu-latest, macos-latest, windows-latest]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
- os: macos-latest
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -48,7 +42,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -57,7 +51,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}


@@ -5,10 +5,10 @@ name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
- cron: '0 */3 * * *'
push:
tags:
- v*
- v*
pull_request:
paths:
- 'doc/**'
@@ -34,23 +34,18 @@ env:
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-latest
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen graphviz
- name: cache-pip
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -74,53 +69,26 @@ jobs:
DOC_TAG="development"
fi
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
DOC_TARGET="html-fast"
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W" make -C doc ${DOC_TARGET}
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W -j auto" make -C doc html
- name: compress-docs
run: |
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@master
with:
name: html-output
path: html-output.tar.xz
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
echo "${PR_NUM}" > pr_num
echo "::notice:: Documentation will be available shortly at: ${DOC_URL}"
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
path: pr_num
doc-build-pdf:
name: "Documentation Build (PDF)"
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: install-pkgs
run: |
@@ -128,7 +96,7 @@ jobs:
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -155,7 +123,7 @@ jobs:
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@master
with:
name: pdf-output
path: doc/_build/latex/zephyr.pdf
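The doc-build hunk above picks the Sphinx make target from the triggering event: pull requests get the reduced `html-fast` target, everything else the full `html` build. A small sketch of that selection, with the runner-provided `GITHUB_EVENT_NAME` stubbed:

```shell
# Stubbed; GitHub Actions normally sets this for each run.
GITHUB_EVENT_NAME="pull_request"

if [ "${GITHUB_EVENT_NAME}" = "pull_request" ]; then
    DOC_TARGET="html-fast"   # quicker feedback build for PRs
else
    DOC_TARGET="html"        # full build for scheduled and tag runs
fi
echo "would run: make -C doc ${DOC_TARGET}"
```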


@@ -1,63 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish (Pull Request)
on:
workflow_run:
workflows: ["Documentation Build"]
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Load PR number
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
id: check-pr
uses: carpentries/actions/check-valid-pr@v0.8
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete


@@ -2,21 +2,23 @@
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish
name: Publish Documentation
on:
workflow_run:
workflows: ["Documentation Build"]
branches:
- main
- v*
- main
- v*
tags:
- v*
types:
- completed
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:


@@ -6,13 +6,13 @@ on:
jobs:
check-errno:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
container:
image: zephyrprojectrtos/ci:v0.18.4
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: Run errno.py
run: |


@@ -13,14 +13,19 @@ on:
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking:
runs-on: ubuntu-20.04
footprint-tracking-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-tracking:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-tracking-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -39,7 +44,7 @@ jobs:
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0


@@ -2,14 +2,19 @@ name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-delta:
runs-on: ubuntu-20.04
footprint-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-delta:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -20,12 +25,16 @@ jobs:
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0


@@ -1,53 +0,0 @@
name: Issue Tracker
on:
schedule:
- cron: '*/10 * * * *'
env:
OUTPUT_FILE_NAME: IssuesReport.md
COMMITTER_EMAIL: actions@github.com
COMMITTER_NAME: github-actions
COMMITTER_USERNAME: github-actions
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
sudo apt-get install discount
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
outputPath: ${{ env.OUTPUT_FILE_NAME }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: Post Results
run: |
mkd2html IssuesReport.md IssuesReport.html
aws s3 cp --quiet IssuesReport.html s3://testing.zephyrproject.org/issues/$GITHUB_REPOSITORY/index.html


@@ -1,37 +0,0 @@
[
{
"section": "High Priority Bugs",
"labels": ["bug", "priority: high"],
"threshold": 0
},
{
"section": "Medium Priority Bugs",
"labels": ["bug", "priority: medium"],
"threshold": 20
},
{
"section": "Low Priority Bugs",
"labels": ["bug", "priority: low"],
"threshold": 100
},
{
"section": "Enhancements",
"labels": ["Enhancement"],
"threshold": 500
},
{
"section": "Features",
"labels": ["Feature"],
"threshold": 100
},
{
"section": "Questions",
"labels": ["question"],
"threshold": 100
},
{
"section": "Static Analysis",
"labels": ["Coverity"],
"threshold": 100
}
]


@@ -4,7 +4,7 @@ on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Scan code for licenses
steps:
- name: Checkout the code
@@ -15,7 +15,7 @@ jobs:
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v1
with:
name: scancode
path: ./artifacts


@@ -6,11 +6,11 @@ on:
jobs:
contribs:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}


@@ -7,16 +7,15 @@ on:
jobs:
release:
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
@@ -24,7 +23,7 @@ jobs:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
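Two recurring changes in the release hunk above: the tag name is derived from `GITHUB_REF` by stripping the `refs/tags/` prefix, and the step output switches between the deprecated `::set-output` workflow command and an append to the `$GITHUB_OUTPUT` file. A sketch of both forms, with the runner-provided variables stubbed:

```shell
# Stubbed runner variables; on a real runner both are provided.
GITHUB_REF="refs/tags/v2.7.3"
GITHUB_OUTPUT=$(mktemp)

# Tag extraction: strip the refs/tags/ prefix with parameter expansion.
VERSION="${GITHUB_REF#refs/tags/}"

# Deprecated form: a workflow command printed to stdout.
echo "::set-output name=VERSION::${VERSION}"

# Current form: key=value lines appended to the $GITHUB_OUTPUT file.
echo "VERSION=${VERSION}" >> "${GITHUB_OUTPUT}"
cat "${GITHUB_OUTPUT}"
```

The same `::set-output` vs `$GITHUB_OUTPUT` swap appears in several other hunks in this compare (ccache cache keys, coverage merge lists, twister matrix sizes).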


@@ -1,23 +0,0 @@
name: "Close stale pull requests/issues"
on:
schedule:
- cron: "16 00 * * *"
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: 'This pull request has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this pull request will automatically be closed in 14 days. Note, that you can always re-open a closed pull request at any time.'
stale-issue-message: 'This issue has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this issue will automatically be closed in 14 days. Note, that you can always re-open a closed issue at any time.'
days-before-stale: 60
days-before-close: 14
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta'
operations-per-run: 400


@@ -3,117 +3,112 @@ name: Run tests with twister
on:
push:
branches:
- v2.7-branch
- v2.7-auditable-branch
pull_request_target:
branches:
- v2.7-branch
- v2.7-auditable-branch
schedule:
# Run at 00:00 on Saturday
- cron: '20 0 * * 6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
- cron: '0 8 * * 6'
jobs:
twister-build-prep:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on: zephyr_runner
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 15
DAILY_MATRIX_SIZE: 80
DAILY_MATRIX_SIZE: 120
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.13.1
CLANG_ROOT_DIR: /usr/lib/llvm-12
TESTS_PER_BUILDER: 700
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Checkout
- name: checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v3
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
- name: west setup
if: github.event_name == 'pull_request_target'
run: |
pip3 install GitPython
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
west init -l . || true
# no need for west update here
west config --global update.narrow true
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
west forall -c 'git reset --hard HEAD'
- name: Generate Test Plan with Twister
if: github.event_name == 'pull_request_target'
id: test-plan
run: |
sudo apt-get install -y bc
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Bot"
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
# temporary until we have all PRs rebased on top of this commit.
git log -n 500 --oneline | grep -q "run twister using github action" || (
echo "Your branch is not up to date, you need to rebase on top of latest HEAD of main branch"
exit 1
)
python3 ./scripts/ci/test_plan.py -c origin/${BASE_REF}.. --pull-request -t $TESTS_PER_BUILDER
if [ -s .testplan ]; then
cat .testplan >> $GITHUB_ENV
./scripts/ci/run_ci.sh -S -c -b ${{github.base_ref}} -r origin \
-p ${{github.event.pull_request.number}} -R ${COMMIT_RANGE}
# remove all tests to be skipped
grep -v skipped test_file.txt > no_skipped.txt
# get number of tests
lines=$(wc -l < no_skipped.txt)
if [ "$lines" = 1 ]; then
# no tests, so we need 0 nodes
nodes=0
else
echo "TWISTER_NODES=${MATRIX_SIZE}" >> $GITHUB_ENV
nodes=$(echo "${lines} / ${TESTS_PER_BUILDER}" | bc)
if [ "${nodes}" = 0 ]; then
# for less than TESTS_PER_BUILDER, we take at least 1 node
nodes=1
fi
fi
rm -f testplan.csv .testplan
echo "::set-output name=calculated_matrix_size::${nodes}";
rm test_file.txt no_skipped.txt
- name: Determine matrix size
id: output-services
run: |
if [ "${{github.event_name}}" = "pull_request_target" ]; then
if [ -n "${TWISTER_NODES}" ]; then
subset="[$(seq -s',' 1 ${TWISTER_NODES})]"
if [ -n "${{steps.test-plan.outputs.calculated_matrix_size}}" ]; then
subset="[$(seq -s',' 1 ${{steps.test-plan.outputs.calculated_matrix_size}})]"
else
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
fi
size=${TWISTER_NODES}
size=${{ steps.test-plan.outputs.calculated_matrix_size }}
elif [ "${{github.event_name}}" = "push" ]; then
subset="[$(seq -s',' 1 ${PUSH_MATRIX_SIZE})]"
subset="[$(seq -s',' 1 ${MATRIX_SIZE})]"
size=${MATRIX_SIZE}
elif [ "${{github.event_name}}" = "schedule" && "${{github.repository}}" = "zephyrproject-rtos/zephyr" ]; then
else
subset="[$(seq -s',' 1 ${DAILY_MATRIX_SIZE})]"
size=${DAILY_MATRIX_SIZE}
else
size=0
fi
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "::set-output name=subset::${subset}";
echo "::set-output name=size::${size}";
twister-build:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on: zephyr_runner
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -121,40 +116,25 @@ jobs:
env:
ZEPHYR_SDK_INSTALL_DIR: /opt/toolchains/zephyr-sdk-0.13.1
CLANG_ROOT_DIR: /usr/lib/llvm-12
TWISTER_COMMON: ' --inline-logs -v -N -M --retry-failed 3 '
DAILY_OPTIONS: ' -M --build-only --all '
PR_OPTIONS: ' --clobber-output --integration '
PUSH_OPTIONS: ' --clobber-output -M '
DAILY_OPTIONS: ' --inline-logs -M -N --build-only --all --retry-failed 3 -v '
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Clone cached Zephyr repository
continue-on-error: true
- name: Update PATH for west
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout
uses: actions/checkout@v3
- name: checkout
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- name: Environment Setup
- name: west setup
run: |
pip3 install GitPython
if [ "${{github.event_name}}" = "pull_request_target" ]; then
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
git rebase origin/${BASE_REF}
git log --pretty=oneline | head -n 10
fi
echo "$HOME/.local/bin" >> $GITHUB_PATH
west init -l . || true
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules && west update --path-cache /github/cache/zephyrproject)
west update 2>&1 1> west.update.log || west update 2>&1 1> west.update.log
west forall -c 'git reset --hard HEAD'
- name: Check Environment
@@ -174,7 +154,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
message("::set-output name=repo::${repo2}")
- name: use cache
id: cache-ccache
@@ -189,7 +169,7 @@ jobs:
- name: ccache stats initial
run: |
test -d github/home/.ccache && rm -rf /github/home/.ccache && mv github/home/.ccache /github/home/.ccache
test -d github/home/.ccache && mv github/home/.ccache /github/home/.ccache
ccache -M 10G -s
- if: github.event_name == 'push'
@@ -197,23 +177,26 @@ jobs:
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${PUSH_OPTIONS}
./scripts/ci/run_ci.sh -c -b main -r origin -m ${{matrix.subset}} \
-M ${{ strategy.job-total }}
- if: github.event_name == 'pull_request_target'
name: Run Tests with Twister (Pull Request)
run: |
rm -f testplan.csv
git config --global user.email "bot@zephyrproject.org"
git config --global user.name "Zephyr Builder"
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
python3 ./scripts/ci/test_plan.py -c origin/${BASE_REF}.. --pull-request
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} --load-tests testplan.csv ${TWISTER_COMMON} ${PR_OPTIONS}
./scripts/ci/run_ci.sh -c -b ${{github.base_ref}} -r origin \
-m ${{matrix.subset}} -M ${{ strategy.job-total }} \
-p ${{github.event.pull_request.number}} -R ${COMMIT_RANGE}
- if: github.event_name == 'schedule'
name: Run Tests with Twister (Daily)
run: |
export ZEPHYR_BASE=${PWD}
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${TWISTER_COMMON} ${DAILY_OPTIONS}
./scripts/twister --subset ${{matrix.subset}}/${{ strategy.job-total }} ${DAILY_OPTIONS}
- name: ccache stats post
run: |
@@ -221,18 +204,15 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v2
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
path: |
twister-out/twister.xml
testplan.csv
path: twister-out/twister.xml
twister-test-results:
name: "Publish Unit Tests Results"
needs: twister-build
runs-on: ubuntu-20.04
runs-on: ubuntu-latest
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()


@@ -5,16 +5,12 @@ name: Twister TestSuite
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
@@ -28,17 +24,17 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04]
os: [ubuntu-latest]
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}


@@ -5,15 +5,11 @@ name: Zephyr West Command Tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -26,22 +22,20 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-20.04, macos-11, windows-2022]
os: [ubuntu-latest, macos-latest, windows-latest]
exclude:
- os: macos-11
python-version: 3.6
- os: windows-2022
- os: macos-latest
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v3
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -49,7 +43,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -58,7 +52,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v3
uses: actions/cache@v1
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}


@@ -1044,7 +1044,7 @@ if(CONFIG_USERSPACE)
OUTPUT ${APP_SMEM_UNALIGNED_LD} ${APP_SMEM_PINNED_UNALIGNED_LD}
COMMAND ${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/gen_app_partitions.py
-f ${CMAKE_BINARY_DIR}/compile_commands.json
-d ${OBJ_FILE_DIR}
-o ${APP_SMEM_UNALIGNED_LD}
$<$<BOOL:${APP_SMEM_PINNED_UNALIGNED_LD}>:--pinoutput=${APP_SMEM_PINNED_UNALIGNED_LD}>
${APP_SMEM_PINNED_PARTITION_LIST_ARG}
@@ -1385,17 +1385,6 @@ if(CONFIG_OUTPUT_PRINT_MEMORY_USAGE)
endif()
endif()
if(NOT CONFIG_EXCEPTIONS)
set(eh_frame_section ".eh_frame")
else()
set(eh_frame_section "")
endif()
set(remove_sections_argument_list "")
foreach(section .comment COMMON ${eh_frame_section})
list(APPEND remove_sections_argument_list
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>${section})
endforeach()
if(CONFIG_BUILD_OUTPUT_HEX OR BOARD_FLASH_RUNNER STREQUAL openocd)
get_property(elfconvert_formats TARGET bintools PROPERTY elfconvert_formats)
if(ihex IN_LIST elfconvert_formats)
@@ -1405,7 +1394,9 @@ if(CONFIG_BUILD_OUTPUT_HEX OR BOARD_FLASH_RUNNER STREQUAL openocd)
$<TARGET_PROPERTY:bintools,elfconvert_flag>
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>ihex
${remove_sections_argument_list}
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>.comment
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>COMMON
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>.eh_frame
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outfile>${KERNEL_HEX_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_final>
@@ -1427,7 +1418,9 @@ if(CONFIG_BUILD_OUTPUT_BIN)
$<TARGET_PROPERTY:bintools,elfconvert_flag>
${GAP_FILL}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outtarget>binary
${remove_sections_argument_list}
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>.comment
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>COMMON
$<TARGET_PROPERTY:bintools,elfconvert_flag_section_remove>.eh_frame
$<TARGET_PROPERTY:bintools,elfconvert_flag_infile>${KERNEL_ELF_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_outfile>${KERNEL_BIN_NAME}
$<TARGET_PROPERTY:bintools,elfconvert_flag_final>


@@ -15,7 +15,6 @@
/.github/ @nashif
/.github/workflows/ @galak @nashif
/.buildkite/ @galak
/MAINTAINERS.yml @ioannisg @MaureenHelm
/arch/arc/ @abrodkin @ruuddw @evgeniy-paltsev
/arch/arm/ @MaureenHelm @galak @ioannisg


@@ -1632,7 +1632,6 @@ CI:
- galak
files:
- .github/
- .buildkite/
- scripts/ci/
- .checkpatch.conf
- scripts/gitlint/


@@ -8,10 +8,9 @@
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a
href="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml?query=branch%3Amain">
<a href="https://buildkite.com/zephyr/zephyr">
<img
src="https://github.com/zephyrproject-rtos/zephyr/actions/workflows/twister.yaml/badge.svg?event=push"></a>
src="https://badge.buildkite.com/f5bd0dc88306cee17c9b38e78d11bb74a6291e3f40e7d13f31.svg?branch=main"></a>
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -63,7 +62,8 @@ Here's a quick summary of resources to help you find your way around:
`Zephyr Development mailing list`_. The other `Zephyr mailing list
subgroups`_ have their own archives and sign-up pages.
* **Nightly CI Build Status**: https://lists.zephyrproject.org/g/builds
The builds@lists.zephyrproject.org mailing list archives the CI nightly build results.
The builds@lists.zephyrproject.org mailing list archives the CI
(buildkite) nightly build results.
* **Chat**: Real-time chat happens in Zephyr's Discord Server. Use
this `Discord Invite`_ to register.
* **Contributing**: see the `Contribution Guide`_


@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 4
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -56,7 +56,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one slave cpu can read it per time
*/
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;


@@ -35,7 +35,7 @@ int arch_mem_domain_max_partitions_get(void)
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(void *addr, size_t size, int write)
int arch_buffer_validate(const void *addr, size_t size, bool write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}
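The signature change above makes the buffer pointer `const void *` and the write flag `bool`, so caller and callee prototypes agree (MISRA C:2012 Rule 8.3) and the type system documents that validation never writes through the pointer. A minimal sketch of such a validator, checking against a single hypothetical user-accessible window rather than real MPU region hardware:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical user-accessible window; a real port would walk MPU regions. */
#define USER_REGION_START 0x1000u
#define USER_REGION_END   0x2000u /* exclusive */

/* Return 0 on success, -1 if any byte of [addr, addr + size) falls outside
 * the window. The const qualifier guarantees the check cannot modify the
 * buffer being validated.
 */
static int buffer_validate(const void *addr, size_t size, bool write)
{
	uintptr_t start = (uintptr_t)addr;
	uintptr_t end = start + size;

	(void)write; /* single read/write region here; kept for API symmetry */

	if ((end < start) || (start < USER_REGION_START) ||
	    (end > USER_REGION_END)) {
		return -1;
	}
	return 0;
}
```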


@@ -207,7 +207,7 @@ int arc_core_mpu_get_max_domain_partition_regions(void)
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
int arc_core_mpu_buffer_validate(const void *addr, size_t size, bool write)
{
/*
* For ARC MPU, smaller region number takes priority.


@@ -779,7 +779,7 @@ int arc_core_mpu_get_max_domain_partition_regions(void)
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
int arc_core_mpu_buffer_validate(const void *addr, size_t size, bool write)
{
int r_index;
int key = arch_irq_lock();


@@ -849,7 +849,7 @@ static inline z_arch_esf_t *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_ret
bool *nested_exc)
{
bool alternative_state_exc = false;
z_arch_esf_t *ptr_esf = NULL;
z_arch_esf_t *ptr_esf;
*nested_exc = false;


@@ -40,7 +40,7 @@ static inline uint64_t z_arm_dwt_freq_get(void)
/* SysTick and DWT both run at CPU frequency,
* reflected in the system timer HW cycles/sec.
*/
return sys_clock_hw_cycles_per_sec();
return CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC;
#else
static uint64_t dwt_frequency;
uint32_t cyc_start, cyc_end;


@@ -88,7 +88,7 @@ void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
__ASSERT(prio <= (BIT(NUM_IRQ_PRIO_BITS) - 1),
"invalid priority %d for %d irq! values must be less than %lu\n",
prio - _IRQ_PRIO_OFFSET, irq,
BIT(NUM_IRQ_PRIO_BITS) - (_IRQ_PRIO_OFFSET));
(unsigned long)BIT(NUM_IRQ_PRIO_BITS) - (_IRQ_PRIO_OFFSET));
NVIC_SetPriority((IRQn_Type)irq, prio);
}


@@ -324,7 +324,7 @@ int arch_mem_domain_max_partitions_get(void)
return ARM_CORE_MPU_MAX_DOMAIN_PARTITIONS_GET(available_regions);
}
int arch_buffer_validate(void *addr, size_t size, int write)
int arch_buffer_validate(const void *addr, size_t size, bool write)
{
return arm_core_mpu_buffer_validate(addr, size, write);
}


@@ -261,7 +261,7 @@ int arm_core_mpu_get_max_available_dyn_regions(void);
* spans multiple enabled MPU regions (even if these regions all
* permit user access).
*/
int arm_core_mpu_buffer_validate(void *addr, size_t size, int write);
int arm_core_mpu_buffer_validate(const void *addr, size_t size, bool write);
#endif /* CONFIG_ARM_MPU */


@@ -253,7 +253,7 @@ int arm_core_mpu_get_max_available_dyn_regions(void)
*
* Presumes the background mapping is NOT user accessible.
*/
int arm_core_mpu_buffer_validate(void *addr, size_t size, int write)
int arm_core_mpu_buffer_validate(const void *addr, size_t size, bool write)
{
return mpu_buffer_validate(addr, size, write);
}


@@ -169,7 +169,7 @@ static inline int is_user_accessible_region(uint32_t r_index, int write)
* This internal function validates whether a given memory buffer
* is user accessible or not.
*/
static inline int mpu_buffer_validate(void *addr, size_t size, int write)
static inline int mpu_buffer_validate(const void *addr, size_t size, bool write)
{
int32_t r_index;
int rc = -EPERM;


@@ -270,7 +270,7 @@ static inline int is_enabled_region(uint32_t index)
* in case the fast address range check fails.
*
*/
static inline int mpu_buffer_validate(void *addr, size_t size, int write)
static inline int mpu_buffer_validate(const void *addr, size_t size, bool write)
{
uint32_t _addr = (uint32_t)addr;
uint32_t _size = (uint32_t)size;


@@ -536,7 +536,7 @@ static inline int is_user_accessible_region(uint32_t r_index, int write)
/**
* @brief validate the given buffer is user accessible or not
*/
int arm_core_mpu_buffer_validate(void *addr, size_t size, int write)
int arm_core_mpu_buffer_validate(const void *addr, size_t size, bool write)
{
uint8_t r_index;


@@ -26,7 +26,7 @@
extern "C" {
#endif
K_KERNEL_STACK_ARRAY_EXTERN(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
extern K_KERNEL_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
CONFIG_ISR_STACK_SIZE);
/**


@@ -65,7 +65,7 @@
"pop {r0-r3}\n\t" \
load_lr "\n\t" \
::); \
} while (0)
} while (false)
/**
* @brief Macro for "sandwiching" a function call (@p name) in two other calls


@@ -101,28 +101,9 @@ spurious_continue:
* x0: 1st thread in the ready queue
* x1: _current thread
*/
#ifdef CONFIG_SMP
/*
* 2 possibilities here:
* - x0 != NULL (implies x0 != x1): we need to context switch and set
* the switch_handle in the context switch code
* - x0 == NULL: no context switch
*/
cmp x0, #0x0
bne switch
/*
* No context switch. Restore x0 from x1 (they are the same thread).
* See also comments to z_arch_get_next_switch_handle()
*/
mov x0, x1
b exit
switch:
#else
cmp x0, x1
beq exit
#endif
/* Switch thread */
bl z_arm64_context_switch


@@ -129,11 +129,7 @@ void z_arm64_el2_init(void)
zero_cntvoff_el2(); /* Set 64-bit virtual timer offset to 0 */
zero_cnthctl_el2();
#ifdef CONFIG_CPU_AARCH64_CORTEX_R
zero_cnthps_ctl_el2();
#else
zero_cnthp_ctl_el2();
#endif
/*
* Enable this if/when we use the hypervisor timer.
* write_cnthp_cval_el2(~(uint64_t)0);


@@ -124,28 +124,9 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
void *z_arch_get_next_switch_handle(struct k_thread **old_thread)
{
/*
* When returning from this function we will have the current thread
* onto the stack to be popped in x1 and the next thread in x0 returned
* from z_get_next_switch_handle() (see isr_wrapper.S)
*/
*old_thread = _current;
#ifdef CONFIG_SMP
/*
* XXX: see thread in #41840 and #40795
*
* The scheduler API requires a complete switch handle here, but arm64
* optimizes things such that the callee-save registers are still
* unsaved here (they get written out in z_arm64_context_switch()
* below). So pass a NULL instead, which the scheduler will store into
* the thread switch_handle field. The resulting thread won't be
* switched into until we write that ourselves.
*/
return z_get_next_switch_handle(NULL);
#else
return z_get_next_switch_handle(*old_thread);
#endif
}
#ifdef CONFIG_USERSPACE


@@ -50,7 +50,7 @@ strlen_done:
ret
/*
* int arch_buffer_validate(void *addr, size_t size, int write)
* int arch_buffer_validate(const void *addr, size_t size, bool write)
*/
GTEXT(arch_buffer_validate)


@@ -53,15 +53,15 @@ void z_irq_do_offload(void);
#if ALT_CPU_ICACHE_SIZE > 0
void z_nios2_icache_flush_all(void);
#else
#define z_nios2_icache_flush_all() do { } while (0)
#define z_nios2_icache_flush_all() do { } while (false)
#endif
#if ALT_CPU_DCACHE_SIZE > 0
void z_nios2_dcache_flush_all(void);
void z_nios2_dcache_flush_no_writeback(void *start, uint32_t len);
#else
#define z_nios2_dcache_flush_all() do { } while (0)
#define z_nios2_dcache_flush_no_writeback(x, y) do { } while (0)
#define z_nios2_dcache_flush_all() do { } while (false)
#define z_nios2_dcache_flush_no_writeback(x, y) do { } while (false)
#endif
#endif /* _ASMLANGUAGE */
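The `while (0)` controlling expression is essentially integer rather than Boolean; `while (false)` satisfies MISRA C:2012 Rule 14.4 while preserving the usual property of an empty statement-like macro: it expands to a single statement, demands the trailing semicolon, and nests safely under a braceless `if`/`else`. A small illustration (the flush functions here are stand-ins, not the Nios II cache code):

```c
#include <stdbool.h>

static int flush_count;

static void real_flush(void) { flush_count++; }

/* With hardware present, the macro maps to a real call... */
#define icache_flush() real_flush()
/* ...and without it, to an empty do/while(false): still one statement,
 * still requires ';', still safe in a braceless if/else. */
#define dcache_flush() do { } while (false)

static int flush_if(bool cond)
{
	if (cond)
		icache_flush();
	else
		dcache_flush();
	return flush_count;
}
```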


@@ -328,7 +328,7 @@ void z_riscv_pmp_add_dynamic(struct k_thread *thread,
}
}
int arch_buffer_validate(void *addr, size_t size, int write)
int arch_buffer_validate(const void *addr, size_t size, bool write)
{
uint32_t index, i;
ulong_t pmp_type, pmp_addr_start, pmp_addr_stop;


@@ -6,7 +6,6 @@ zephyr_library()
zephyr_library_sources(cpuhalt.c)
zephyr_library_sources(prep_c.c)
zephyr_library_sources(fatal.c)
zephyr_library_sources(cpuid.c)
zephyr_library_sources(spec_ctrl.c)
zephyr_library_sources_ifdef(CONFIG_X86_MEMMAP memmap.c)


@@ -15,7 +15,7 @@ static bool check_sum(struct acpi_sdt *t)
{
uint8_t sum = 0U, *p = (uint8_t *)t;
for (int i = 0; i < t->length; i++) {
for (uint32_t i = 0; i < t->length; i++) {
sum += p[i];
}
@@ -26,7 +26,7 @@ static void find_rsdp(void)
{
uint8_t *bda_seg, *zero_page_base;
uint64_t *search;
uintptr_t search_phys, rsdp_phys = 0U;
uintptr_t search_phys, rsdp_phys;
size_t search_length, rsdp_length;
if (is_rsdp_searched) {
@@ -49,7 +49,7 @@ static void find_rsdp(void)
* first megabyte and are directly accessible.
*/
bda_seg = 0x040e + zero_page_base;
search_phys = (long)(((int)*(uint16_t *)bda_seg) << 4);
search_phys = ((uintptr_t)*(uint16_t *)bda_seg) << 4;
/* Unmap after use */
z_phys_unmap(zero_page_base, 4096);
@@ -57,14 +57,14 @@ static void find_rsdp(void)
/* Might be nothing there, check before we inspect.
* Note that EBDA usually is in 0x80000 to 0x100000.
*/
if ((POINTER_TO_UINT(search_phys) >= 0x80000UL) &&
(POINTER_TO_UINT(search_phys) < 0x100000UL)) {
if ((search_phys >= 0x80000UL) &&
(search_phys < 0x100000UL)) {
search_length = 1024;
z_phys_map((uint8_t **)&search, search_phys, search_length, 0);
for (int i = 0; i < 1024/8; i++) {
for (size_t i = 0; i < (1024/8); i++) {
if (search[i] == ACPI_RSDP_SIGNATURE) {
rsdp_phys = search_phys + i * 8;
rsdp_phys = search_phys + (i * 8);
rsdp = (void *)&search[i];
goto found;
}
@@ -80,10 +80,9 @@ static void find_rsdp(void)
search_length = 128 * 1024;
z_phys_map((uint8_t **)&search, search_phys, search_length, 0);
rsdp_phys = 0U;
for (int i = 0; i < 128*1024/8; i++) {
for (size_t i = 0; i < ((128*1024)/8); i++) {
if (search[i] == ACPI_RSDP_SIGNATURE) {
rsdp_phys = search_phys + i * 8;
rsdp_phys = search_phys + (i * 8);
rsdp = (void *)&search[i];
goto found;
}
@@ -133,11 +132,11 @@ void *z_acpi_find_table(uint32_t signature)
find_rsdp();
if (!rsdp) {
if (rsdp == NULL) {
return NULL;
}
if (rsdp->rsdt_ptr) {
if (rsdp->rsdt_ptr != 0U) {
z_phys_map((uint8_t **)&rsdt, rsdp->rsdt_ptr, sizeof(*rsdt), 0);
tbl_found = false;
@@ -150,11 +149,11 @@ void *z_acpi_find_table(uint32_t signature)
uint32_t *end = (uint32_t *)((char *)rsdt + rsdt->sdt.length);
for (uint32_t *tp = &rsdt->table_ptrs[0]; tp < end; tp++) {
t_phys = (long)*tp;
t_phys = (uintptr_t)*tp;
z_phys_map(&mapped_tbl, t_phys, sizeof(*t), 0);
t = (void *)mapped_tbl;
if (t->signature == signature && check_sum(t)) {
if ((t->signature == signature) && check_sum(t)) {
tbl_found = true;
break;
}
@@ -174,7 +173,7 @@ void *z_acpi_find_table(uint32_t signature)
return NULL;
}
if (rsdp->xsdt_ptr) {
if (rsdp->xsdt_ptr != 0ULL) {
z_phys_map((uint8_t **)&xsdt, rsdp->xsdt_ptr, sizeof(*xsdt), 0);
tbl_found = false;
@@ -187,11 +186,11 @@ void *z_acpi_find_table(uint32_t signature)
uint64_t *end = (uint64_t *)((char *)xsdt + xsdt->sdt.length);
for (uint64_t *tp = &xsdt->table_ptrs[0]; tp < end; tp++) {
t_phys = (long)*tp;
t_phys = (uintptr_t)*tp;
z_phys_map(&mapped_tbl, t_phys, sizeof(*t), 0);
t = (void *)mapped_tbl;
if (t->signature == signature && check_sum(t)) {
if ((t->signature == signature) && check_sum(t)) {
tbl_found = true;
break;
}
@@ -229,7 +228,7 @@ struct acpi_cpu *z_acpi_get_cpu(int n)
uintptr_t base = POINTER_TO_UINT(madt);
uintptr_t offset;
if (!madt) {
if (madt == NULL) {
return NULL;
}
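Several hunks above replace `if (!ptr)` with `if (ptr == NULL)` and give the checksum loop an index of the same unsigned width as the table's `length` field (MISRA C:2012 Rules 14.4 and 10.4). A standalone sketch of the same ACPI-style check, where a table is valid when its bytes sum to zero modulo 256 (the two-field header below is illustrative, not the real `struct acpi_sdt`):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct sdt_header {
	uint32_t signature;
	uint32_t length; /* total table length in bytes, header included */
};

static bool check_sum(const struct sdt_header *t)
{
	const uint8_t *p = (const uint8_t *)t;
	uint8_t sum = 0U;

	if (t == NULL) {
		return false;
	}
	/* Index matches the unsigned width of 'length': no implicit
	 * signed/unsigned conversion in the loop comparison. */
	for (uint32_t i = 0U; i < t->length; i++) {
		sum += p[i];
	}
	return sum == 0U;
}

/* Build a tiny valid table in 'buf': set the header, then fix the last
 * byte so all bytes sum to zero (mod 256). */
static struct sdt_header *make_table(uint8_t *buf, uint32_t len)
{
	struct sdt_header *t = (struct sdt_header *)buf;
	uint8_t sum = 0U;

	memset(buf, 0, len);
	t->signature = 0x54445352U;
	t->length = len;
	for (uint32_t i = 0U; i < (len - 1U); i++) {
		sum += buf[i];
	}
	buf[len - 1U] = (uint8_t)(0x100U - sum);
	return t;
}
```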


@@ -1,51 +0,0 @@
/*
* Copyright (c) 2022 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <cpuid.h> /* Header provided by the toolchain. */
#include <kernel_structs.h>
#include <arch/x86/cpuid.h>
#include <kernel.h>
uint32_t z_x86_cpuid_extended_features(void)
{
uint32_t eax, ebx, ecx = 0U, edx;
if (__get_cpuid(CPUID_EXTENDED_FEATURES_LVL,
&eax, &ebx, &ecx, &edx) == 0) {
return 0;
}
return edx;
}
#define INITIAL_APIC_ID_SHIFT (24)
#define INITIAL_APIC_ID_MASK (0xFF)
uint8_t z_x86_cpuid_get_current_physical_apic_id(void)
{
uint32_t eax, ebx, ecx, edx;
if (IS_ENABLED(CONFIG_X2APIC)) {
/* leaf 0x1F should be used first prior to using 0x0B */
if (__get_cpuid(CPUID_EXTENDED_TOPOLOGY_ENUMERATION_V2,
&eax, &ebx, &ecx, &edx) == 0) {
if (__get_cpuid(CPUID_EXTENDED_TOPOLOGY_ENUMERATION,
&eax, &ebx, &ecx, &edx) == 0) {
return 0;
}
}
} else {
if (__get_cpuid(CPUID_BASIC_INFO_1,
&eax, &ebx, &ecx, &edx) == 0) {
return 0;
}
edx = (ebx >> INITIAL_APIC_ID_SHIFT);
}
return (uint8_t)(edx & INITIAL_APIC_ID_MASK);
}
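The removed `cpuid.c` above extracts the initial APIC ID from bits 31:24 of EBX with an explicit shift and mask. The pattern is worth isolating, since the mask makes the narrowing to `uint8_t` intentional rather than accidental (the register values in the test are made-up constants, not real CPUID results):

```c
#include <stdint.h>

#define INITIAL_APIC_ID_SHIFT (24)
#define INITIAL_APIC_ID_MASK  (0xFFU)

/* Bits 31:24 of CPUID leaf 1 EBX hold the initial (x)APIC ID; the mask
 * and cast make the intended 8-bit result explicit. */
static uint8_t apic_id_from_ebx(uint32_t ebx)
{
	return (uint8_t)((ebx >> INITIAL_APIC_ID_SHIFT) & INITIAL_APIC_ID_MASK);
}
```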


@@ -21,8 +21,8 @@
* together.
*/
static mm_reg_t mmio;
#define IN(reg) (sys_read32(mmio + reg * 4) & 0xff)
#define OUT(reg, val) sys_write32((val) & 0xff, mmio + reg * 4)
#define IN(reg) (sys_read32(mmio + ((reg) * 4U)) & 0xffU)
#define OUT(reg, val) sys_write32((uint32_t)(val) & 0xffU, mmio + ((reg) * 4U))
#elif defined(X86_SOC_EARLY_SERIAL_MMIO8_ADDR)
/* Still other devices use a MMIO region containing packed byte
* registers
@@ -49,21 +49,21 @@ static mm_reg_t mmio;
#define REG_BRDH 0x01 /* Baud rate divisor (MSB) */
#define IER_DISABLE 0x00
#define LCR_8N1 (BIT(0) | BIT(1))
#define LCR_DLAB_SELECT BIT(7)
#define MCR_DTR BIT(0)
#define MCR_RTS BIT(1)
#define LSR_THRE BIT(5)
#define LCR_8N1 (BIT32(0) | BIT32(1))
#define LCR_DLAB_SELECT BIT32(7)
#define MCR_DTR BIT32(0)
#define MCR_RTS BIT32(1)
#define LSR_THRE BIT32(5)
#define FCR_FIFO BIT(0) /* enable XMIT and RCVR FIFO */
#define FCR_RCVRCLR BIT(1) /* clear RCVR FIFO */
#define FCR_XMITCLR BIT(2) /* clear XMIT FIFO */
#define FCR_FIFO_1 0 /* 1 byte in RCVR FIFO */
#define FCR_FIFO_1 0x00U /* 1 byte in RCVR FIFO */
static bool early_serial_init_done;
static uint32_t suppressed_chars;
static void serout(int c)
static void serout(uint8_t c)
{
while ((IN(REG_LSR) & LSR_THRE) == 0) {
}
@@ -77,10 +77,10 @@ int arch_printk_char_out(int c)
return c;
}
if (c == '\n') {
serout('\r');
if (c == (int)'\n') {
serout((uint8_t)'\r');
}
serout(c);
serout((uint8_t)c);
return c;
}
@@ -100,8 +100,8 @@ void z_x86_early_serial_init(void)
OUT(REG_IER, IER_DISABLE); /* Disable interrupts */
OUT(REG_LCR, LCR_DLAB_SELECT); /* DLAB select */
OUT(REG_BRDL, 1); /* Baud divisor = 1 */
OUT(REG_BRDH, 0);
OUT(REG_BRDL, 1U); /* Baud divisor = 1 */
OUT(REG_BRDH, 0U);
OUT(REG_LCR, LCR_8N1); /* LCR = 8n1 + DLAB off */
OUT(REG_MCR, MCR_DTR | MCR_RTS);
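The reworked `IN()`/`OUT()` macros parenthesize every argument and keep every operand unsigned, so `reg * 4` cannot mix signedness (MISRA C:2012 Rules 20.7 and 10.4). The same macro shape can be exercised against a plain byte array standing in for the MMIO window (the `sim_read32`/`sim_write32` helpers below simulate `sys_read32`/`sys_write32`; no real UART hardware is involved):

```c
#include <stdint.h>
#include <string.h>

/* Simulated MMIO window. */
static uint8_t window[32];

static uint32_t sim_read32(uint32_t off)
{
	uint32_t v;

	memcpy(&v, &window[off], sizeof(v));
	return v;
}

static void sim_write32(uint32_t val, uint32_t off)
{
	memcpy(&window[off], &val, sizeof(val));
}

/* Registers are byte-wide but spaced 4 bytes apart; all arithmetic is
 * unsigned and every macro argument is parenthesized. */
#define IN(reg)       (sim_read32((reg) * 4U) & 0xFFU)
#define OUT(reg, val) sim_write32((uint32_t)(val) & 0xFFU, (reg) * 4U)
```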


@@ -57,10 +57,10 @@ bool z_x86_check_stack_bounds(uintptr_t addr, size_t size, uint16_t cs)
{
uintptr_t start, end;
if (_current == NULL || arch_is_in_isr()) {
if ((_current == NULL) || arch_is_in_isr()) {
/* We were servicing an interrupt or in early boot environment
* and are supposed to be on the interrupt stack */
int cpu_id;
uint8_t cpu_id;
#ifdef CONFIG_SMP
cpu_id = arch_curr_cpu()->id;
@@ -71,8 +71,8 @@ bool z_x86_check_stack_bounds(uintptr_t addr, size_t size, uint16_t cs)
z_interrupt_stacks[cpu_id]);
end = start + CONFIG_ISR_STACK_SIZE;
#ifdef CONFIG_USERSPACE
} else if ((cs & 0x3U) == 0U &&
(_current->base.user_options & K_USER) != 0) {
} else if (((cs & 0x3U) == 0U) &&
((_current->base.user_options & K_USER) != 0)) {
/* The low two bits of the CS register is the privilege
* level. It will be 0 in supervisor mode and 3 in user mode
* corresponding to ring 0 / ring 3.
@@ -90,7 +90,7 @@ bool z_x86_check_stack_bounds(uintptr_t addr, size_t size, uint16_t cs)
_current->stack_info.size);
}
return (addr <= start) || (addr + size > end);
return (addr <= start) || ((addr + size) > end);
}
#endif
@@ -158,7 +158,7 @@ static inline uintptr_t get_cr3(const z_arch_esf_t *esf)
/* If the interrupted thread was in user mode, we did a page table
* switch when we took the exception via z_x86_trampoline_to_kernel
*/
if ((esf->cs & 0x3) != 0) {
if ((esf->cs & 0x3U) != 0) {
return _current->arch.ptables;
}
#else
@@ -307,8 +307,8 @@ static void dump_page_fault(z_arch_esf_t *esf)
LOG_ERR("Linear address not present in page tables");
}
LOG_ERR("Access violation: %s thread not allowed to %s",
(err & PF_US) != 0U ? "user" : "supervisor",
(err & PF_ID) != 0U ? "execute" : ((err & PF_WR) != 0U ?
((err & PF_US) != 0U) ? "user" : "supervisor",
((err & PF_ID) != 0U) ? "execute" : (((err & PF_WR) != 0U) ?
"write" :
"read"));
if ((err & PF_PK) != 0) {
@@ -356,7 +356,7 @@ FUNC_NORETURN void z_x86_unhandled_cpu_exception(uintptr_t vector,
#else
ARG_UNUSED(vector);
#endif
z_x86_fatal_error(K_ERR_CPU_EXCEPTION, esf);
z_x86_fatal_error((unsigned int)K_ERR_CPU_EXCEPTION, esf);
}
#ifdef CONFIG_USERSPACE
@@ -413,18 +413,16 @@ void z_x86_page_fault_handler(z_arch_esf_t *esf)
#endif
#ifdef CONFIG_USERSPACE
int i;
for (i = 0; i < ARRAY_SIZE(exceptions); i++) {
for (size_t i = 0; i < ARRAY_SIZE(exceptions); i++) {
#ifdef CONFIG_X86_64
if ((void *)esf->rip >= exceptions[i].start &&
(void *)esf->rip < exceptions[i].end) {
if (((void *)esf->rip >= exceptions[i].start) &&
((void *)esf->rip < exceptions[i].end)) {
esf->rip = (uint64_t)(exceptions[i].fixup);
return;
}
#else
if ((void *)esf->eip >= exceptions[i].start &&
(void *)esf->eip < exceptions[i].end) {
if (((void *)esf->eip >= exceptions[i].start) &&
((void *)esf->eip < exceptions[i].end)) {
esf->eip = (unsigned int)(exceptions[i].fixup);
return;
}
@@ -435,21 +433,21 @@ void z_x86_page_fault_handler(z_arch_esf_t *esf)
dump_page_fault(esf);
#endif
#ifdef CONFIG_THREAD_STACK_INFO
if (z_x86_check_stack_bounds(esf_get_sp(esf), 0, esf->cs)) {
z_x86_fatal_error(K_ERR_STACK_CHK_FAIL, esf);
if (z_x86_check_stack_bounds(esf_get_sp(esf), 0, (uint16_t)esf->cs)) {
z_x86_fatal_error((unsigned int)K_ERR_STACK_CHK_FAIL, esf);
}
#endif
z_x86_fatal_error(K_ERR_CPU_EXCEPTION, esf);
z_x86_fatal_error((unsigned int)K_ERR_CPU_EXCEPTION, esf);
CODE_UNREACHABLE;
}
__pinned_func
void z_x86_do_kernel_oops(const z_arch_esf_t *esf)
{
uintptr_t reason;
unsigned int reason;
#ifdef CONFIG_X86_64
reason = esf->rax;
reason = (unsigned int)esf->rax;
#else
uintptr_t *stack_ptr = (uintptr_t *)esf->esp;
@@ -460,9 +458,9 @@ void z_x86_do_kernel_oops(const z_arch_esf_t *esf)
/* User mode is only allowed to induce oopses and stack check
* failures via this software interrupt
*/
if ((esf->cs & 0x3) != 0 && !(reason == K_ERR_KERNEL_OOPS ||
reason == K_ERR_STACK_CHK_FAIL)) {
reason = K_ERR_KERNEL_OOPS;
if (((esf->cs & 0x3U) != 0) && !((reason == (unsigned int)K_ERR_KERNEL_OOPS) ||
(reason == (unsigned int)K_ERR_STACK_CHK_FAIL))) {
reason = (unsigned int)K_ERR_KERNEL_OOPS;
}
#endif
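The stack-bounds hunk above adds parentheses so that the precedence in `(addr + size) > end` is explicit (MISRA C:2012 Rule 12.1). The core range check, isolated from the Zephyr stack bookkeeping (the limits in the test are arbitrary values, not real stack addresses):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True when [addr, addr + size) is not strictly inside (start, end],
 * mirroring the shape of the check in z_x86_check_stack_bounds(). */
static bool out_of_bounds(uintptr_t addr, size_t size,
			  uintptr_t start, uintptr_t end)
{
	return (addr <= start) || ((addr + size) > end);
}
```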


@@ -42,18 +42,18 @@ void z_x86_spurious_irq(const z_arch_esf_t *esf)
}
__pinned_func
void arch_syscall_oops(void *ssf)
void arch_syscall_oops(void *ssf_ptr)
{
struct _x86_syscall_stack_frame *ssf_ptr =
(struct _x86_syscall_stack_frame *)ssf;
struct _x86_syscall_stack_frame *ssf =
(struct _x86_syscall_stack_frame *)ssf_ptr;
z_arch_esf_t oops = {
.eip = ssf_ptr->eip,
.cs = ssf_ptr->cs,
.eflags = ssf_ptr->eflags
.eip = ssf->eip,
.cs = ssf->cs,
.eflags = ssf->eflags
};
if (oops.cs == USER_CODE_SEG) {
oops.esp = ssf_ptr->esp;
oops.esp = ssf->esp;
}
z_x86_fatal_error(K_ERR_KERNEL_OOPS, &oops);


@@ -79,7 +79,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
void *swap_entry;
struct _x86_initial_frame *initial_frame;
#if CONFIG_X86_STACK_PROTECTION
#ifdef CONFIG_X86_STACK_PROTECTION
z_x86_set_stack_guard(stack);
#endif
@@ -114,18 +114,4 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
thread->arch.excNestCount = 0;
#endif /* CONFIG_LAZY_FPU_SHARING */
thread->arch.flags = 0;
/*
* When "eager FPU sharing" mode is enabled, FPU registers must be
* initialised at the time of thread creation because the floating-point
* context is always active and no further FPU initialisation is performed
* later.
*/
#if defined(CONFIG_EAGER_FPU_SHARING)
thread->arch.preempFloatReg.floatRegsUnion.fpRegs.fcw = 0x037f;
thread->arch.preempFloatReg.floatRegsUnion.fpRegs.ftw = 0xffff;
#if defined(CONFIG_X86_SSE)
thread->arch.preempFloatReg.floatRegsUnion.fpRegsEx.mxcsr = 0x1f80;
#endif /* CONFIG_X86_SSE */
#endif /* CONFIG_EAGER_FPU_SHARING */
}


@@ -128,7 +128,7 @@ struct x86_cpuboot x86_cpuboot[] = {
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
uint8_t vector = ((unsigned long) x86_ap_start) >> 12;
uint8_t vector = (uint8_t)(((uintptr_t)x86_ap_start) >> 12);
uint8_t apic_id;
if (IS_ENABLED(CONFIG_ACPI)) {
@@ -143,8 +143,8 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
apic_id = x86_cpu_loapics[cpu_num];
x86_cpuboot[cpu_num].sp = (uint64_t) Z_KERNEL_STACK_BUFFER(stack) + sz;
x86_cpuboot[cpu_num].stack_size = sz;
x86_cpuboot[cpu_num].sp = (uint64_t) Z_KERNEL_STACK_BUFFER(stack) + (size_t)sz;
x86_cpuboot[cpu_num].stack_size = (size_t)sz;
x86_cpuboot[cpu_num].fn = fn;
x86_cpuboot[cpu_num].arg = arg;
@@ -188,6 +188,6 @@ FUNC_NORETURN void z_x86_cpu_init(struct x86_cpuboot *cpuboot)
#endif
/* Enter kernel, never return */
cpuboot->ready++;
cpuboot->ready += 1;
cpuboot->fn(cpuboot->arg);
}
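The startup-IPI vector above is the 4 KiB page number of the AP entry point, and the new form `(uint8_t)(((uintptr_t)x86_ap_start) >> 12)` makes both the pointer-to-integer conversion and the final narrowing explicit rather than implicit (in the spirit of MISRA C:2012 Rules 10.x and 12.2). As a standalone sketch, with a made-up entry address:

```c
#include <stdint.h>

/* SIPI vectors address a 4 KiB-aligned page below 1 MiB, so the vector is
 * the entry address shifted right by 12; the cast documents the intended
 * 8-bit result width. */
static uint8_t sipi_vector(uintptr_t entry)
{
	return (uint8_t)(entry >> 12);
}
```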


@@ -15,6 +15,8 @@ LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
*/
__weak bool z_x86_do_kernel_nmi(const z_arch_esf_t *esf)
{
ARG_UNUSED(esf);
return false;
}
@@ -46,6 +48,6 @@ void arch_syscall_oops(void *ssf_ptr)
LOG_ERR("Bad system call from RIP 0x%lx", ssf->rip);
z_x86_fatal_error(K_ERR_KERNEL_OOPS, NULL);
z_x86_fatal_error((unsigned int)K_ERR_KERNEL_OOPS, NULL);
}
#endif /* CONFIG_USERSPACE */


@@ -16,7 +16,7 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
unsigned char _irq_to_interrupt_vector[CONFIG_MAX_IRQ_LINES];
uint8_t _irq_to_interrupt_vector[CONFIG_MAX_IRQ_LINES];
/*
* The low-level interrupt code consults these arrays to dispatch IRQs, so
@@ -26,40 +26,43 @@ unsigned char _irq_to_interrupt_vector[CONFIG_MAX_IRQ_LINES];
#define NR_IRQ_VECTORS (IV_NR_VECTORS - IV_IRQS) /* # vectors free for IRQs */
void (*x86_irq_funcs[NR_IRQ_VECTORS])(const void *);
void (*x86_irq_funcs[NR_IRQ_VECTORS])(const void *arg);
const void *x86_irq_args[NR_IRQ_VECTORS];
static void irq_spurious(const void *arg)
{
LOG_ERR("Spurious interrupt, vector %d\n", (uint32_t)(uint64_t)arg);
z_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
z_fatal_error((unsigned int)K_ERR_SPURIOUS_IRQ, NULL);
}
void x86_64_irq_init(void)
{
for (int i = 0; i < NR_IRQ_VECTORS; i++) {
for (unsigned int i = 0; i < NR_IRQ_VECTORS; i++) {
x86_irq_funcs[i] = irq_spurious;
x86_irq_args[i] = (const void *)(long)(i + IV_IRQS);
x86_irq_args[i] = (const void *)((uintptr_t)i + IV_IRQS);
}
}
int z_x86_allocate_vector(unsigned int priority, int prev_vector)
{
const int VECTORS_PER_PRIORITY = 16;
const int MAX_PRIORITY = 13;
const unsigned int VECTORS_PER_PRIORITY = 16;
const unsigned int MAX_PRIORITY = 13;
int vector = prev_vector;
int i;
if (priority >= MAX_PRIORITY) {
priority = MAX_PRIORITY;
}
if (vector == -1) {
vector = (priority * VECTORS_PER_PRIORITY) + IV_IRQS;
const unsigned int uvector = (priority * VECTORS_PER_PRIORITY) + IV_IRQS;
vector = (int)uvector;
}
for (i = 0; i < VECTORS_PER_PRIORITY; ++i, ++vector) {
if (prev_vector != 1 && vector == prev_vector) {
const int end_vector = vector + (int) VECTORS_PER_PRIORITY;
for (; vector < end_vector; ++vector) {
if ((prev_vector != 1) && (vector == prev_vector)) {
continue;
}
@@ -72,7 +75,7 @@ int z_x86_allocate_vector(unsigned int priority, int prev_vector)
continue;
}
if (x86_irq_funcs[vector - IV_IRQS] == irq_spurious) {
if (x86_irq_funcs[(unsigned int)vector - IV_IRQS] == irq_spurious) {
return vector;
}
}
@@ -98,8 +101,8 @@ void z_x86_irq_connect_on_vector(unsigned int irq,
*/
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*func)(const void *arg),
const void *arg, uint32_t flags)
void (*routine)(const void *parameter),
const void *parameter, uint32_t flags)
{
uint32_t key;
int vector;
@@ -110,7 +113,7 @@ int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
vector = z_x86_allocate_vector(priority, -1);
if (vector >= 0) {
z_x86_irq_connect_on_vector(irq, vector, func, arg, flags);
z_x86_irq_connect_on_vector(irq, (uint8_t)vector, routine, parameter, flags);
}
irq_unlock(key);
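The rewritten allocation loop above replaces the separate counter `i` with an explicit `end_vector` bound, leaving `vector` as the only induction variable and avoiding mixed signed/unsigned arithmetic in the loop header. A reduced model of the same search over a flat handler table (the table size and the `irq_spurious` free-slot sentinel are illustrative, not Zephyr's actual values):

```c
#include <stddef.h>

#define VECTORS_PER_PRIORITY 16
#define NUM_VECTORS          64

static void irq_spurious(const void *arg) { (void)arg; }

/* A slot is free while it still points at the spurious handler. */
static void (*irq_funcs[NUM_VECTORS])(const void *);

static void irq_table_init(void)
{
	for (unsigned int i = 0U; i < NUM_VECTORS; i++) {
		irq_funcs[i] = irq_spurious;
	}
}

/* Return the first free vector in the 16-entry block for 'priority',
 * or -1 when the block is fully allocated. */
static int allocate_vector(unsigned int priority)
{
	int vector = (int)(priority * VECTORS_PER_PRIORITY);
	const int end_vector = vector + VECTORS_PER_PRIORITY;

	for (; vector < end_vector; ++vector) {
		if (irq_funcs[vector] == irq_spurious) {
			return vector;
		}
	}
	return -1;
}
```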


@@ -10,7 +10,7 @@
#include <offsets_short.h>
#include <x86_mmu.h>
extern void x86_sse_init(struct k_thread *); /* in locore.S */
extern void x86_sse_init(struct k_thread *thread); /* in locore.S */
/* FIXME: This exists to make space for a "return address" at the top
* of the stack. Obviously this is unused at runtime, but is required
@@ -32,8 +32,10 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
void *switch_entry;
struct x86_initial_frame *iframe;
#if CONFIG_X86_STACK_PROTECTION
#ifdef CONFIG_X86_STACK_PROTECTION
z_x86_set_stack_guard(stack);
#else
ARG_UNUSED(stack);
#endif
#ifdef CONFIG_USERSPACE
switch_entry = z_x86_userspace_prepare_thread(thread);
@@ -44,17 +46,17 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#endif
iframe = Z_STACK_PTR_TO_FRAME(struct x86_initial_frame, stack_ptr);
iframe->rip = 0U;
thread->callee_saved.rsp = (long) iframe;
thread->callee_saved.rip = (long) switch_entry;
thread->callee_saved.rsp = (uint64_t) iframe;
thread->callee_saved.rip = (uint64_t) switch_entry;
thread->callee_saved.rflags = EFLAGS_INITIAL;
/* Parameters to entry point, which is populated in
* thread->callee_saved.rip
*/
thread->arch.rdi = (long) entry;
thread->arch.rsi = (long) p1;
thread->arch.rdx = (long) p2;
thread->arch.rcx = (long) p3;
thread->arch.rdi = (uint64_t) entry;
thread->arch.rsi = (uint64_t) p1;
thread->arch.rdx = (uint64_t) p2;
thread->arch.rcx = (uint64_t) p3;
x86_sse_init(thread);
@@ -65,11 +67,16 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
int arch_float_disable(struct k_thread *thread)
{
/* x86-64 always has FP/SSE enabled so cannot be disabled */
ARG_UNUSED(thread);
return -ENOTSUP;
}
int arch_float_enable(struct k_thread *thread, unsigned int options)
{
/* x86-64 always has FP/SSE enabled so nothing to do here */
ARG_UNUSED(thread);
ARG_UNUSED(options);
return 0;
}

View File

@@ -16,8 +16,6 @@
#include <kernel_arch_func.h>
#include <device.h>
#include <drivers/pcie/msi.h>
#include <drivers/interrupt_controller/sysapic.h>
#include <arch/x86/cpuid.h>
#endif
/* PCI Express Extended Configuration Mechanism (MMIO) */
@@ -39,19 +37,20 @@ static void pcie_mm_init(void)
struct acpi_mcfg *m = z_acpi_find_table(ACPI_MCFG_SIGNATURE);
if (m != NULL) {
int n = (m->sdt.length - sizeof(*m)) / sizeof(m->pci_segs[0]);
size_t n = (m->sdt.length - sizeof(*m)) / sizeof(m->pci_segs[0]);
for (int i = 0; i < n && i < MAX_PCI_BUS_SEGMENTS; i++) {
for (size_t i = 0; (i < n) && (i < MAX_PCI_BUS_SEGMENTS); i++) {
size_t size;
uintptr_t phys_addr;
bus_segs[i].start_bus = m->pci_segs[i].start_bus;
bus_segs[i].n_buses = 1 + m->pci_segs[i].end_bus
bus_segs[i].n_buses = (uint32_t)1 + m->pci_segs[i].end_bus
- m->pci_segs[i].start_bus;
phys_addr = m->pci_segs[i].base_addr;
/* 32 devices & 8 functions per bus, 4k per device */
size = bus_segs[i].n_buses * (32 * 8 * 4096);
size = bus_segs[i].n_buses;
size *= 32 * 8 * 4096;
device_map((mm_reg_t *)&bus_segs[i].mmio, phys_addr,
size, K_MEM_CACHE_NONE);
@@ -65,10 +64,11 @@ static void pcie_mm_init(void)
static inline void pcie_mm_conf(pcie_bdf_t bdf, unsigned int reg,
bool write, uint32_t *data)
{
for (int i = 0; i < ARRAY_SIZE(bus_segs); i++) {
int off = PCIE_BDF_TO_BUS(bdf) - bus_segs[i].start_bus;
for (size_t i = 0; i < ARRAY_SIZE(bus_segs); i++) {
/* Wrapping is deliberate and will be filtered by conditional below */
uint32_t off = PCIE_BDF_TO_BUS(bdf) - bus_segs[i].start_bus;
if (off >= 0 && off < bus_segs[i].n_buses) {
if (off < bus_segs[i].n_buses) {
bdf = PCIE_BDF(off,
PCIE_BDF_TO_DEV(bdf),
PCIE_BDF_TO_FUNC(bdf));
@@ -181,17 +181,16 @@ static bool get_vtd(void)
/* these functions are explained in include/drivers/pcie/msi.h */
#define MSI_MAP_DESTINATION_ID_SHIFT 12
#define MSI_RH BIT(3)
uint32_t pcie_msi_map(unsigned int irq,
msi_vector_t *vector)
{
uint32_t map;
ARG_UNUSED(irq);
ARG_UNUSED(irq);
#if defined(CONFIG_INTEL_VTD_ICTL)
#if !defined(CONFIG_PCIE_MSI_X)
ARG_UNUSED(vector);
if (vector != NULL) {
map = vtd_remap_msi(vtd, vector);
} else
@@ -200,17 +199,11 @@ uint32_t pcie_msi_map(unsigned int irq,
map = vtd_remap_msi(vtd, vector);
} else
#endif
#else
ARG_UNUSED(vector);
#endif
{
uint32_t dest_id;
dest_id = z_x86_cpuid_get_current_physical_apic_id() <<
MSI_MAP_DESTINATION_ID_SHIFT;
/* Directing to current physical CPU (may not be BSP)
* Destination ID - RH 1 - DM 0
*/
map = 0xFEE00000U | dest_id | MSI_RH;
map = 0xFEE00000U; /* standard delivery to BSP local APIC */
}
return map;

View File

@@ -45,7 +45,7 @@ FUNC_NORETURN void z_x86_prep_c(void *arg)
ARG_UNUSED(info);
#endif
#if CONFIG_X86_STACK_PROTECTION
#ifdef CONFIG_X86_STACK_PROTECTION
for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
z_x86_set_stack_guard(z_interrupt_stacks[i]);
}

View File

@@ -4,12 +4,13 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <cpuid.h> /* Header provided by the toolchain. */
#include <init.h>
#include <kernel_structs.h>
#include <kernel_arch_data.h>
#include <kernel_arch_func.h>
#include <arch/x86/msr.h>
#include <arch/x86/cpuid.h>
#include <kernel.h>
/*
@@ -17,13 +18,31 @@
* https://software.intel.com/security-software-guidance/api-app/sites/default/files/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
*/
#define CPUID_EXTENDED_FEATURES_LVL 7
/* Bits to check in CPUID extended features */
#define CPUID_SPEC_CTRL_SSBD BIT(31)
#define CPUID_SPEC_CTRL_IBRS BIT(26)
#if defined(CONFIG_DISABLE_SSBD) || defined(CONFIG_ENABLE_EXTENDED_IBRS)
static uint32_t cpuid_extended_features(void)
{
uint32_t eax, ebx, ecx = 0U, edx;
if (__get_cpuid(CPUID_EXTENDED_FEATURES_LVL,
&eax, &ebx, &ecx, &edx) == 0) {
return 0;
}
return edx;
}
static int spec_ctrl_init(const struct device *dev)
{
ARG_UNUSED(dev);
uint32_t enable_bits = 0U;
uint32_t cpuid7 = z_x86_cpuid_extended_features();
uint32_t cpuid7 = cpuid_extended_features();
#ifdef CONFIG_DISABLE_SSBD
if ((cpuid7 & CPUID_SPEC_CTRL_SSBD) != 0U) {

View File

@@ -177,7 +177,7 @@ static const struct paging_level paging_levels[] = {
}
};
#define NUM_LEVELS ARRAY_SIZE(paging_levels)
#define NUM_LEVELS ((unsigned int)ARRAY_SIZE(paging_levels))
#define PTE_LEVEL (NUM_LEVELS - 1)
#define PDE_LEVEL (NUM_LEVELS - 2)
@@ -203,14 +203,14 @@ static const struct paging_level paging_levels[] = {
#endif /* !CONFIG_X86_64 && !CONFIG_X86_PAE */
/* Memory range covered by an instance of various table types */
#define PT_AREA ((uintptr_t)(CONFIG_MMU_PAGE_SIZE * NUM_PT_ENTRIES))
#define PT_AREA ((uintptr_t)CONFIG_MMU_PAGE_SIZE * NUM_PT_ENTRIES)
#define PD_AREA (PT_AREA * NUM_PD_ENTRIES)
#ifdef CONFIG_X86_64
#define PDPT_AREA (PD_AREA * NUM_PDPT_ENTRIES)
#endif
#define VM_ADDR CONFIG_KERNEL_VM_BASE
#define VM_SIZE CONFIG_KERNEL_VM_SIZE
#define VM_ADDR ((uintptr_t)CONFIG_KERNEL_VM_BASE)
#define VM_SIZE ((uintptr_t)CONFIG_KERNEL_VM_SIZE)
/* Define a range [PT_START, PT_END) which is the memory range
* covered by all the page tables needed for the address space
@@ -257,7 +257,7 @@ static const struct paging_level paging_levels[] = {
#endif /* CONFIG_X86_64 */
#define INITIAL_PTABLE_PAGES \
(NUM_TABLE_PAGES + CONFIG_X86_EXTRA_PAGE_TABLE_PAGES)
(NUM_TABLE_PAGES + (uintptr_t)CONFIG_X86_EXTRA_PAGE_TABLE_PAGES)
#ifdef CONFIG_X86_PAE
/* Toplevel PDPT wasn't included as it is not a page in size */
@@ -265,7 +265,7 @@ static const struct paging_level paging_levels[] = {
((INITIAL_PTABLE_PAGES * CONFIG_MMU_PAGE_SIZE) + 0x20)
#else
#define INITIAL_PTABLE_SIZE \
(INITIAL_PTABLE_PAGES * CONFIG_MMU_PAGE_SIZE)
(INITIAL_PTABLE_PAGES * (uintptr_t)CONFIG_MMU_PAGE_SIZE)
#endif
/* "dummy" pagetables for the first-phase build. The real page tables
@@ -283,48 +283,48 @@ static __used char dummy_pagetables[INITIAL_PTABLE_SIZE];
* the provided virtual address
*/
__pinned_func
static inline int get_index(void *virt, int level)
static inline uintptr_t get_index(void *virt, unsigned int level)
{
return (((uintptr_t)virt >> paging_levels[level].shift) %
paging_levels[level].entries);
}
__pinned_func
static inline pentry_t *get_entry_ptr(pentry_t *ptables, void *virt, int level)
static inline pentry_t *get_entry_ptr(pentry_t *ptables, void *virt, unsigned int level)
{
return &ptables[get_index(virt, level)];
}
__pinned_func
static inline pentry_t get_entry(pentry_t *ptables, void *virt, int level)
static inline pentry_t get_entry(pentry_t *ptables, void *virt, unsigned int level)
{
return ptables[get_index(virt, level)];
}
/* Get the physical memory address associated with this table entry */
__pinned_func
static inline uintptr_t get_entry_phys(pentry_t entry, int level)
static inline uintptr_t get_entry_phys(pentry_t entry, unsigned int level)
{
return entry & paging_levels[level].mask;
}
/* Return the virtual address of a linked table stored in the provided entry */
__pinned_func
static inline pentry_t *next_table(pentry_t entry, int level)
static inline pentry_t *next_table(pentry_t entry, unsigned int level)
{
return z_mem_virt_addr(get_entry_phys(entry, level));
}
/* Number of table entries at this level */
__pinned_func
static inline size_t get_num_entries(int level)
static inline size_t get_num_entries(unsigned int level)
{
return paging_levels[level].entries;
}
/* 4K for everything except PAE PDPTs */
__pinned_func
static inline size_t table_size(int level)
static inline size_t table_size(unsigned int level)
{
return get_num_entries(level) * sizeof(pentry_t);
}
@@ -333,7 +333,7 @@ static inline size_t table_size(int level)
* that an entry within the table covers
*/
__pinned_func
static inline size_t get_entry_scope(int level)
static inline size_t get_entry_scope(unsigned int level)
{
return (1UL << paging_levels[level].shift);
}
@@ -342,7 +342,7 @@ static inline size_t get_entry_scope(int level)
* that this entire table covers
*/
__pinned_func
static inline size_t get_table_scope(int level)
static inline size_t get_table_scope(unsigned int level)
{
return get_entry_scope(level) * get_num_entries(level);
}
@@ -351,7 +351,7 @@ static inline size_t get_table_scope(int level)
* stored in any other bits
*/
__pinned_func
static inline bool is_leaf(int level, pentry_t entry)
static inline bool is_leaf(unsigned int level, pentry_t entry)
{
if (level == PTE_LEVEL) {
/* Always true for PTE */
@@ -363,15 +363,15 @@ static inline bool is_leaf(int level, pentry_t entry)
/* This does NOT (by design) un-flip KPTI PTEs, it's just the raw PTE value */
__pinned_func
static inline void pentry_get(int *paging_level, pentry_t *val,
static inline void pentry_get(unsigned int *paging_level, pentry_t *val,
pentry_t *ptables, void *virt)
{
pentry_t *table = ptables;
for (int level = 0; level < NUM_LEVELS; level++) {
for (unsigned int level = 0; level < NUM_LEVELS; level++) {
pentry_t entry = get_entry(table, virt, level);
if ((entry & MMU_P) == 0 || is_leaf(level, entry)) {
if (((entry & MMU_P) == 0) || is_leaf(level, entry)) {
*val = entry;
if (paging_level != NULL) {
*paging_level = level;
@@ -398,7 +398,7 @@ static inline void tlb_flush_page(void *addr)
__pinned_func
static inline bool is_flipped_pte(pentry_t pte)
{
return (pte & MMU_P) == 0 && (pte & PTE_ZERO) != 0;
return ((pte & MMU_P) == 0) && ((pte & PTE_ZERO) != 0);
}
#endif
@@ -449,6 +449,8 @@ static inline void assert_addr_aligned(uintptr_t addr)
#if __ASSERT_ON
__ASSERT((addr & (CONFIG_MMU_PAGE_SIZE - 1)) == 0U,
"unaligned address 0x%" PRIxPTR, addr);
#else
ARG_UNUSED(addr);
#endif
}
@@ -465,6 +467,8 @@ static inline void assert_region_page_aligned(void *addr, size_t size)
#if __ASSERT_ON
__ASSERT((size & (CONFIG_MMU_PAGE_SIZE - 1)) == 0U,
"unaligned size %zu", size);
#else
ARG_UNUSED(size);
#endif
}
@@ -477,18 +481,18 @@ static inline void assert_region_page_aligned(void *addr, size_t size)
#define COLOR_PAGE_TABLES 1
#if COLOR_PAGE_TABLES
#define ANSI_DEFAULT "\x1B[0m"
#define ANSI_RED "\x1B[1;31m"
#define ANSI_GREEN "\x1B[1;32m"
#define ANSI_YELLOW "\x1B[1;33m"
#define ANSI_BLUE "\x1B[1;34m"
#define ANSI_MAGENTA "\x1B[1;35m"
#define ANSI_CYAN "\x1B[1;36m"
#define ANSI_GREY "\x1B[1;90m"
#define ANSI_DEFAULT "\x1B" "[0m"
#define ANSI_RED "\x1B" "[1;31m"
#define ANSI_GREEN "\x1B" "[1;32m"
#define ANSI_YELLOW "\x1B" "[1;33m"
#define ANSI_BLUE "\x1B" "[1;34m"
#define ANSI_MAGENTA "\x1B" "[1;35m"
#define ANSI_CYAN "\x1B" "[1;36m"
#define ANSI_GREY "\x1B" "[1;90m"
#define COLOR(x) printk(_CONCAT(ANSI_, x))
#else
#define COLOR(x) do { } while (0)
#define COLOR(x) do { } while (false)
#endif
__pinned_func
@@ -521,7 +525,7 @@ static char get_entry_code(pentry_t value)
if ((value & MMU_US) != 0U) {
/* Uppercase indicates user mode access */
ret = toupper(ret);
ret = (char)toupper((int)(unsigned char)ret);

}
}
@@ -529,12 +533,12 @@ static char get_entry_code(pentry_t value)
}
__pinned_func
static void print_entries(pentry_t entries_array[], uint8_t *base, int level,
static void print_entries(pentry_t entries_array[], uint8_t *base, unsigned int level,
size_t count)
{
int column = 0;
for (int i = 0; i < count; i++) {
for (size_t i = 0; i < count; i++) {
pentry_t entry = entries_array[i];
uintptr_t phys = get_entry_phys(entry, level);
@@ -546,7 +550,7 @@ static void print_entries(pentry_t entries_array[], uint8_t *base, int level,
if (phys == virt) {
/* Identity mappings */
COLOR(YELLOW);
} else if (phys + Z_MEM_VM_OFFSET == virt) {
} else if ((phys + Z_MEM_VM_OFFSET) == virt) {
/* Permanent RAM mappings */
COLOR(GREEN);
} else {
@@ -602,7 +606,7 @@ static void print_entries(pentry_t entries_array[], uint8_t *base, int level,
}
__pinned_func
static void dump_ptables(pentry_t *table, uint8_t *base, int level)
static void dump_ptables(pentry_t *table, uint8_t *base, unsigned int level)
{
const struct paging_level *info = &paging_levels[level];
@@ -630,12 +634,12 @@ static void dump_ptables(pentry_t *table, uint8_t *base, int level)
}
/* Dump all linked child tables */
for (int j = 0; j < info->entries; j++) {
for (size_t j = 0; j < info->entries; j++) {
pentry_t entry = table[j];
pentry_t *next;
if ((entry & MMU_P) == 0U ||
(entry & MMU_PS) != 0U) {
if (((entry & MMU_P) == 0U) ||
((entry & MMU_PS) != 0U)) {
/* Not present or big page, skip */
continue;
}
@@ -672,7 +676,10 @@ SYS_INIT(dump_kernel_tables, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
__pinned_func
static void str_append(char **buf, size_t *size, const char *str)
{
int ret = snprintk(*buf, *size, "%s", str);
/* snprintk has int return type, but negative values are not tested
* and not currently returned from implementation
*/
size_t ret = (size_t)snprintk(*buf, *size, "%s", str);
if (ret >= *size) {
/* Truncated */
@@ -685,7 +692,7 @@ static void str_append(char **buf, size_t *size, const char *str)
}
__pinned_func
static void dump_entry(int level, void *virt, pentry_t entry)
static void dump_entry(unsigned int level, void *virt, pentry_t entry)
{
const struct paging_level *info = &paging_levels[level];
char buf[24] = { 0 };
@@ -697,7 +704,7 @@ static void dump_entry(int level, void *virt, pentry_t entry)
if ((entry & MMU_##bit) != 0U) { \
str_append(&pos, &sz, #bit " "); \
} \
} while (0)
} while (false)
DUMP_BIT(RW);
DUMP_BIT(US);
@@ -715,7 +722,7 @@ static void dump_entry(int level, void *virt, pentry_t entry)
}
__pinned_func
void z_x86_pentry_get(int *paging_level, pentry_t *val, pentry_t *ptables,
void z_x86_pentry_get(unsigned int *paging_level, pentry_t *val, pentry_t *ptables,
void *virt)
{
pentry_get(paging_level, val, ptables, virt);
@@ -729,7 +736,7 @@ __pinned_func
void z_x86_dump_mmu_flags(pentry_t *ptables, void *virt)
{
pentry_t entry = 0;
int level = 0;
unsigned int level = 0;
pentry_get(&level, &entry, ptables, virt);
@@ -775,16 +782,19 @@ static inline pentry_t reset_pte(pentry_t old_val)
*/
__pinned_func
static inline pentry_t pte_finalize_value(pentry_t val, bool user_table,
int level)
unsigned int level)
{
#ifdef CONFIG_X86_KPTI
static const uintptr_t shared_phys_addr =
Z_MEM_PHYS_ADDR(POINTER_TO_UINT(&z_shared_kernel_page_start));
if (user_table && (val & MMU_US) == 0 && (val & MMU_P) != 0 &&
get_entry_phys(val, level) != shared_phys_addr) {
if (user_table && ((val & MMU_US) == 0) && ((val & MMU_P) != 0) &&
(get_entry_phys(val, level) != shared_phys_addr)) {
val = ~val;
}
#else
ARG_UNUSED(user_table);
ARG_UNUSED(level);
#endif
return val;
}
@@ -798,7 +808,7 @@ static inline pentry_t pte_finalize_value(pentry_t val, bool user_table,
__pinned_func
static inline pentry_t atomic_pte_get(const pentry_t *target)
{
return (pentry_t)atomic_ptr_get((atomic_ptr_t *)target);
return (pentry_t)atomic_ptr_get((const atomic_ptr_t *)target);
}
__pinned_func
@@ -843,23 +853,23 @@ static inline bool atomic_pte_cas(pentry_t *target, pentry_t old_value,
* page tables need nearly all pages that don't have the US bit to also
* not be Present.
*/
#define OPTION_USER BIT(0)
#define OPTION_USER BIT32(0)
/* Indicates that the operation requires TLBs to be flushed as we are altering
* existing mappings. Not needed for establishing new mappings
*/
#define OPTION_FLUSH BIT(1)
#define OPTION_FLUSH BIT32(1)
/* Indicates that each PTE's permission bits should be restored to their
* original state when the memory was mapped. All other bits in the PTE are
* preserved.
*/
#define OPTION_RESET BIT(2)
#define OPTION_RESET BIT32(2)
/* Indicates that the mapping will need to be cleared entirely. This is
* mainly used for unmapping the memory region.
*/
#define OPTION_CLEAR BIT(3)
#define OPTION_CLEAR BIT32(3)
/**
* Atomically update bits in a page table entry
@@ -954,12 +964,8 @@ static void page_map_set(pentry_t *ptables, void *virt, pentry_t entry_val,
pentry_t *table = ptables;
bool flush = (options & OPTION_FLUSH) != 0U;
for (int level = 0; level < NUM_LEVELS; level++) {
int index;
pentry_t *entryp;
index = get_index(virt, level);
entryp = &table[index];
for (unsigned int level = 0; level < NUM_LEVELS; level++) {
pentry_t *entryp = &table[get_index(virt, level)];
/* Check if we're a PTE */
if (level == PTE_LEVEL) {
@@ -1072,8 +1078,8 @@ __pinned_func
static void range_map(void *virt, uintptr_t phys, size_t size,
pentry_t entry_flags, pentry_t mask, uint32_t options)
{
LOG_DBG("%s: %p -> %p (%zu) flags " PRI_ENTRY " mask "
PRI_ENTRY " opt 0x%x", __func__, (void *)phys, virt, size,
LOG_DBG("%s: 0x%" PRIxPTR " -> %p (%zu) flags " PRI_ENTRY " mask "
PRI_ENTRY " opt 0x%x", __func__, phys, virt, size,
entry_flags, mask, options);
#ifdef CONFIG_X86_64
@@ -1178,7 +1184,7 @@ void arch_mem_map(void *virt, uintptr_t phys, size_t size, uint32_t flags)
/* unmap region addr..addr+size, reset entries and flush TLB */
void arch_mem_unmap(void *addr, size_t size)
{
range_map_unlocked((void *)addr, 0, size, 0, 0,
range_map_unlocked(addr, 0, size, 0, 0,
OPTION_FLUSH | OPTION_CLEAR);
}
@@ -1237,7 +1243,7 @@ void z_x86_mmu_init(void)
#endif
}
#if CONFIG_X86_STACK_PROTECTION
#ifdef CONFIG_X86_STACK_PROTECTION
__pinned_func
void z_x86_set_stack_guard(k_thread_stack_t *stack)
{
@@ -1258,9 +1264,9 @@ void z_x86_set_stack_guard(k_thread_stack_t *stack)
__pinned_func
static bool page_validate(pentry_t *ptables, uint8_t *addr, bool write)
{
pentry_t *table = (pentry_t *)ptables;
pentry_t *table = ptables;
for (int level = 0; level < NUM_LEVELS; level++) {
for (unsigned int level = 0; level < NUM_LEVELS; level++) {
pentry_t entry = get_entry(table, addr, level);
if (is_leaf(level, entry)) {
@@ -1303,7 +1309,7 @@ static inline void bcb_fence(void)
}
__pinned_func
int arch_buffer_validate(void *addr, size_t size, int write)
int arch_buffer_validate(const void *addr, size_t size, bool write)
{
pentry_t *ptables = z_x86_thread_page_tables_get(_current);
uint8_t *virt;
@@ -1311,7 +1317,7 @@ int arch_buffer_validate(void *addr, size_t size, int write)
int ret = 0;
/* addr/size arbitrary, fix this up into an aligned region */
k_mem_region_align((uintptr_t *)&virt, &aligned_size,
(void)k_mem_region_align((uintptr_t *)&virt, &aligned_size,
(uintptr_t)addr, size, CONFIG_MMU_PAGE_SIZE);
for (size_t offset = 0; offset < aligned_size;
@@ -1526,9 +1532,9 @@ static void *page_pool_get(void)
/* Debugging function to show how many pages are free in the pool */
__pinned_func
static inline unsigned int pages_free(void)
static inline size_t pages_free(void)
{
return (page_pos - page_pool) / CONFIG_MMU_PAGE_SIZE;
return (size_t)(page_pos - page_pool) / CONFIG_MMU_PAGE_SIZE;
}
/**
@@ -1548,11 +1554,11 @@ static inline unsigned int pages_free(void)
* @retval -ENOMEM Insufficient page pool memory
*/
__pinned_func
static int copy_page_table(pentry_t *dst, pentry_t *src, int level)
static int copy_page_table(pentry_t *dst, pentry_t *src, unsigned int level)
{
if (level == PTE_LEVEL) {
/* Base case: leaf page table */
for (int i = 0; i < get_num_entries(level); i++) {
for (size_t i = 0; i < get_num_entries(level); i++) {
dst[i] = pte_finalize_value(reset_pte(src[i]), true,
PTE_LEVEL);
}
@@ -1560,7 +1566,7 @@ static int copy_page_table(pentry_t *dst, pentry_t *src, int level)
/* Recursive case: allocate sub-structures as needed and
* make recursive calls on them
*/
for (int i = 0; i < get_num_entries(level); i++) {
for (size_t i = 0; i < get_num_entries(level); i++) {
pentry_t *child_dst;
int ret;
@@ -1647,8 +1653,8 @@ static inline void apply_region(pentry_t *ptables, void *start,
__pinned_func
static void set_stack_perms(struct k_thread *thread, pentry_t *ptables)
{
LOG_DBG("update stack for thread %p's ptables at %p: %p (size %zu)",
thread, ptables, (void *)thread->stack_info.start,
LOG_DBG("update stack for thread %p's ptables at %p: 0x%" PRIxPTR " (size %zu)",
thread, ptables, thread->stack_info.start,
thread->stack_info.size);
apply_region(ptables, (void *)thread->stack_info.start,
thread->stack_info.size,
@@ -1792,8 +1798,8 @@ void arch_mem_domain_thread_add(struct k_thread *thread)
}
thread->arch.ptables = z_mem_phys_addr(domain->arch.ptables);
LOG_DBG("set thread %p page tables to %p", thread,
(void *)thread->arch.ptables);
LOG_DBG("set thread %p page tables to 0x%" PRIxPTR, thread,
thread->arch.ptables);
/* Check if we're doing a migration from a different memory domain
* and have to remove permissions from its old domain.
@@ -1921,7 +1927,8 @@ void arch_reserved_pages_update(void)
int arch_page_phys_get(void *virt, uintptr_t *phys)
{
pentry_t pte = 0;
int level, ret;
unsigned int level;
int ret;
__ASSERT(POINTER_TO_UINT(virt) % CONFIG_MMU_PAGE_SIZE == 0U,
"unaligned address %p to %s", virt, __func__);
@@ -1930,7 +1937,7 @@ int arch_page_phys_get(void *virt, uintptr_t *phys)
if ((pte & MMU_P) != 0) {
if (phys != NULL) {
*phys = (uintptr_t)get_entry_phys(pte, PTE_LEVEL);
*phys = get_entry_phys(pte, PTE_LEVEL);
}
ret = 0;
} else {
@@ -2053,7 +2060,7 @@ __pinned_func
enum arch_page_location arch_page_location_get(void *addr, uintptr_t *location)
{
pentry_t pte;
int level;
unsigned int level;
/* TODO: since we only have to query the current set of page tables,
* could optimize this with recursive page table mapping
@@ -2080,7 +2087,7 @@ __pinned_func
bool z_x86_kpti_is_access_ok(void *addr, pentry_t *ptables)
{
pentry_t pte;
int level;
unsigned int level;
pentry_get(&level, &pte, ptables, addr);

View File

@@ -89,7 +89,7 @@ void z_x86_dump_mmu_flags(pentry_t *ptables, void *virt);
* @param ptables Toplevel pointer to page tables
* @param virt Virtual address to lookup
*/
void z_x86_pentry_get(int *paging_level, pentry_t *val, pentry_t *ptables,
void z_x86_pentry_get(unsigned int *paging_level, pentry_t *val, pentry_t *ptables,
void *virt);
/**
@@ -209,14 +209,16 @@ extern pentry_t z_x86_kernel_ptables[];
static inline pentry_t *z_x86_thread_page_tables_get(struct k_thread *thread)
{
#if defined(CONFIG_USERSPACE) && !defined(CONFIG_X86_COMMON_PAGE_TABLE)
if (!IS_ENABLED(CONFIG_X86_KPTI) ||
(thread->base.user_options & K_USER) != 0U) {
if (!(IS_ENABLED(CONFIG_X86_KPTI)) ||
((thread->base.user_options & K_USER) != 0U)) {
/* If KPTI is enabled, supervisor threads always use
* the kernel's page tables and not the page tables associated
* with their memory domain.
*/
return z_mem_virt_addr(thread->arch.ptables);
}
#else
ARG_UNUSED(thread);
#endif
return z_x86_kernel_ptables;
}

View File

@@ -10,10 +10,10 @@
#define __abi __attribute__((ms_abi))
typedef uintptr_t __abi (*efi_fn1_t)(void *);
typedef uintptr_t __abi (*efi_fn2_t)(void *, void *);
typedef uintptr_t __abi (*efi_fn3_t)(void *, void *, void *);
typedef uintptr_t __abi (*efi_fn4_t)(void *, void *, void *, void *);
typedef uintptr_t __abi (*efi_fn1_t)(void *arg1);
typedef uintptr_t __abi (*efi_fn2_t)(void *arg1, void *arg2);
typedef uintptr_t __abi (*efi_fn3_t)(void *arg1, void *arg2, void *arg3);
typedef uintptr_t __abi (*efi_fn4_t)(void *arg1, void *arg2, void *arg3, void *arg4);
struct efi_simple_text_output {
efi_fn2_t Reset;

View File

@@ -5,6 +5,8 @@
*/
#include <stdarg.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
/* Tiny, but not-as-primitive-as-it-looks implementation of something
* like s/n/printf(). Handles %d, %x, %p, %c and %s only, allows a
@@ -15,21 +17,21 @@
struct _pfr {
char *buf;
int len;
int idx;
size_t len;
size_t idx;
};
/* Set this function pointer to something that generates output */
static void (*z_putchar)(int c);
static void pc(struct _pfr *r, int c)
static void pc(struct _pfr *r, char c)
{
if (r->buf) {
if (r->buf != NULL) {
if (r->idx <= r->len) {
r->buf[r->idx] = c;
}
} else {
z_putchar(c);
z_putchar((int)c);
}
r->idx++;
}
@@ -41,30 +43,34 @@ static void prdec(struct _pfr *r, long v)
v = -v;
}
char digs[11 * sizeof(long)/4];
int i = sizeof(digs) - 1;
char digs[11U * sizeof(long) / 4];
size_t i = sizeof(digs) - 1;
digs[i--] = 0;
while (v || i == 9) {
digs[i--] = '0' + (v % 10);
digs[i] = '\0';
--i;
while ((v != 0) || (i == 9)) {
digs[i] = '0' + (v % 10);
--i;
v /= 10;
}
while (digs[++i]) {
++i;
while (digs[i] != '\0') {
pc(r, digs[i]);
++i;
}
}
static void endrec(struct _pfr *r)
{
if (r->buf && r->idx < r->len) {
r->buf[r->idx] = 0;
if ((r->buf != NULL) && (r->idx < r->len)) {
r->buf[r->idx] = '\0';
}
}
static int vpf(struct _pfr *r, const char *f, va_list ap)
static size_t vpf(struct _pfr *r, const char *f, va_list ap)
{
for (/**/; *f; f++) {
for (/**/; *f != '\0'; f++) {
bool islong = false;
if (*f != '%') {
@@ -78,30 +84,32 @@ static int vpf(struct _pfr *r, const char *f, va_list ap)
}
/* Ignore (but accept) field width and precision values */
while (f[1] >= '0' && f[1] <= '9') {
while ((f[1] >= '0') && (f[1] <= '9')) {
f++;
}
if (f[1] == '.') {
f++;
}
while (f[1] >= '0' && f[1] <= '9') {
while ((f[1] >= '0') && (f[1] <= '9')) {
f++;
}
switch (*(++f)) {
case 0:
case '\0':
return r->idx;
case '%':
pc(r, '%');
break;
case 'c':
pc(r, va_arg(ap, int));
pc(r, (char)va_arg(ap, int));
break;
case 's': {
char *s = va_arg(ap, char *);
while (*s)
pc(r, *s++);
while (*s != '\0') {
pc(r, *s);
++s;
}
break;
}
case 'p':
@@ -109,15 +117,21 @@ static int vpf(struct _pfr *r, const char *f, va_list ap)
pc(r, 'x'); /* fall through... */
islong = sizeof(long) > 4;
case 'x': {
int sig = 0;
bool sig = false;
unsigned long v = islong ? va_arg(ap, unsigned long)
: va_arg(ap, unsigned int);
for (int i = 2*sizeof(long) - 1; i >= 0; i--) {
int d = (v >> (i*4)) & 0xf;
size_t i = 2 * sizeof(v);
sig += !!d;
if (sig || i == 0)
while (i > 0) {
--i;
uint8_t d = (uint8_t)((v >> (i * 4)) & 0x0f);
if (d != 0) {
sig = true;
}
if (sig || (i == 0)) {
pc(r, "0123456789abcdef"[d]);
}
}
break;
}
@@ -136,12 +150,12 @@ static int vpf(struct _pfr *r, const char *f, va_list ap)
#define CALL_VPF(rec) \
va_list ap; \
va_start(ap, f); \
ret = vpf(&r, f, ap); \
ret = (int)vpf(&r, f, ap); \
va_end(ap);
static inline int snprintf(char *buf, unsigned long len, const char *f, ...)
static inline int snprintf(char *buf, size_t len, const char *f, ...)
{
int ret = 0;
int ret;
struct _pfr r = { .buf = buf, .len = len };
CALL_VPF(&r);
@@ -150,7 +164,7 @@ static inline int snprintf(char *buf, unsigned long len, const char *f, ...)
static inline int sprintf(char *buf, const char *f, ...)
{
int ret = 0;
int ret;
struct _pfr r = { .buf = buf, .len = 0x7fffffff };
CALL_VPF(&r);
@@ -159,7 +173,7 @@ static inline int sprintf(char *buf, const char *f, ...)
static inline int printf(const char *f, ...)
{
int ret = 0;
int ret;
struct _pfr r = {0};
CALL_VPF(&r);

View File

@@ -18,7 +18,7 @@
* stuff after.
*/
static __attribute__((section(".runtime_data_end")))
uint64_t runtime_data_end[1] = { 0x1111aa8888aa1111L };
uint64_t runtime_data_end[1] = { 0x1111aa8888aa1111ULL };
#define EXT_DATA_START ((void *) &runtime_data_end[1])
@@ -29,13 +29,14 @@ static void efi_putchar(int c)
static uint16_t efibuf[PUTCHAR_BUFSZ + 1];
static int n;
if (c == '\n') {
efi_putchar('\r');
if (c == (int)'\n') {
efi_putchar((int)'\r');
}
efibuf[n++] = c;
efibuf[n] = (uint16_t)c;
++n;
if (c == '\n' || n == PUTCHAR_BUFSZ) {
if ((c == (int)'\n') || (n == PUTCHAR_BUFSZ)) {
efibuf[n] = 0U;
efi->ConOut->OutputString(efi->ConOut, efibuf);
n = 0;
@@ -57,7 +58,7 @@ static void disable_hpet(void)
{
uint64_t *hpet = (uint64_t *)0xfed00000L;
hpet[32] &= ~4;
hpet[32] &= ~4ULL;
}
/* FIXME: if you check the generated code, "ms_abi" calls like this
@@ -67,29 +68,31 @@ static void disable_hpet(void)
*/
uintptr_t __abi efi_entry(void *img_handle, struct efi_system_table *sys_tab)
{
(void)img_handle;
efi = sys_tab;
z_putchar = efi_putchar;
printf("*** Zephyr EFI Loader ***\n");
for (int i = 0; i < sizeof(zefi_zsegs)/sizeof(zefi_zsegs[0]); i++) {
int bytes = zefi_zsegs[i].sz;
for (size_t i = 0; i < (sizeof(zefi_zsegs)/sizeof(zefi_zsegs[0])); i++) {
uint32_t bytes = zefi_zsegs[i].sz;
uint8_t *dst = (uint8_t *)zefi_zsegs[i].addr;
printf("Zeroing %d bytes of memory at %p\n", bytes, dst);
for (int j = 0; j < bytes; j++) {
printf("Zeroing %u bytes of memory at %p\n", bytes, dst);
for (uint32_t j = 0; j < bytes; j++) {
dst[j] = 0U;
}
}
for (int i = 0; i < sizeof(zefi_dsegs)/sizeof(zefi_dsegs[0]); i++) {
int bytes = zefi_dsegs[i].sz;
int off = zefi_dsegs[i].off;
for (size_t i = 0; i < (sizeof(zefi_dsegs)/sizeof(zefi_dsegs[0])); i++) {
uint32_t bytes = zefi_dsegs[i].sz;
uint32_t off = zefi_dsegs[i].off;
uint8_t *dst = (uint8_t *)zefi_dsegs[i].addr;
uint8_t *src = &((uint8_t *)EXT_DATA_START)[off];
printf("Copying %d data bytes to %p from image offset %d\n",
printf("Copying %u data bytes to %p from image offset %u\n",
bytes, dst, zefi_dsegs[i].off);
for (int j = 0; j < bytes; j++) {
for (uint32_t j = 0; j < bytes; j++) {
dst[j] = src[j];
}
@@ -101,7 +104,7 @@ uintptr_t __abi efi_entry(void *img_handle, struct efi_system_table *sys_tab)
* starts, because the very first thing it does is
* install its own page table that disallows writes.
*/
if (((long)dst & 0xfff) == 0 && dst < (uint8_t *)0x100000L) {
if ((((uintptr_t)dst & 0xfff) == 0) && ((uintptr_t)dst < 0x100000ULL)) {
for (int i = 0; i < 8; i++) {
dst[i] = 0x90; /* 0x90 == 1-byte NOP */
}
@@ -120,7 +123,7 @@ uintptr_t __abi efi_entry(void *img_handle, struct efi_system_table *sys_tab)
* to drain before we start banging on the same UART from the
* OS.
*/
for (volatile int i = 0; i < 50000000; i++) {
for (volatile int i = 0; i < 50000000; i += 1) {
}
__asm__ volatile("cli; jmp *%0" :: "r"(code));

View File

@@ -12,7 +12,6 @@ endif()
string(REPLACE "nsim" "mdb" MDB_ARGS "${BOARD}.args")
if((CONFIG_SOC_NSIM_HS_SMP OR CONFIG_SOC_NSIM_HS6X_SMP))
board_runner_args(mdb-nsim "--cores=${CONFIG_MP_NUM_CPUS}" "--nsim_args=${MDB_ARGS}")
board_runner_args(mdb-hw "--cores=${CONFIG_MP_NUM_CPUS}")
else()
board_runner_args(mdb-nsim "--nsim_args=${MDB_ARGS}")
endif()

View File

@@ -1,6 +1,6 @@
# BL654 USB adapter board configuration
# Copyright (c) 2021-2022 Laird Connectivity
# Copyright (c) 2021 Laird Connectivity
# SPDX-License-Identifier: Apache-2.0
if BOARD_BL654_USB
@@ -13,21 +13,16 @@ config BOARD
# Nordic nRF5 bootloader exists outside of the partitions specified in the
# DTS file, so we manually override FLASH_LOAD_OFFSET to link the application
# correctly, after Nordic MBR, and limit the maximum size to not protrude into
# the bootloader at the end of flash.
# correctly, after Nordic MBR.
# When building MCUBoot, MCUBoot itself will select USE_DT_CODE_PARTITION
# which will make it link into the correct partition specified in DTS file,
# so no override or limit is necessary.
# so no override is necessary.
config FLASH_LOAD_OFFSET
default 0x1000
depends on !USE_DT_CODE_PARTITION
config FLASH_LOAD_SIZE
default 0xdf000
depends on !USE_DT_CODE_PARTITION
if USB_DEVICE_STACK
config USB_UART_CONSOLE

View File

@@ -1,4 +1,4 @@
# BT610 board configuration
# BT6X0 board configuration
# Copyright (c) 2021 Laird Connectivity
# SPDX-License-Identifier: Apache-2.0
@@ -7,4 +7,4 @@ config BOARD_ENABLE_DCDC
bool "Enable DCDC mode"
select SOC_DCDC_NRF52X
default y
depends on BOARD_BT610
depends on BOARD_BT6X0

View File

@@ -1,8 +1,8 @@
# BT610 board configuration
# BT6X0 board configuration
# Copyright (c) 2021 Laird Connectivity
# SPDX-License-Identifier: Apache-2.0
config BOARD_BT610
bool "BT610"
config BOARD_BT6X0
bool "BT6X0"
depends on SOC_NRF52840_QIAA

View File

@@ -3,10 +3,10 @@
# Copyright (c) 2021 Laird Connectivity
# SPDX-License-Identifier: Apache-2.0
if BOARD_BT610
if BOARD_BT6X0
config BOARD
default "bt610"
default "bt6x0"
config IEEE802154_NRF5
default y
@@ -21,4 +21,4 @@ DT_COMPAT_TI_TCA9538 := ti,tca9538
config I2C
default $(dt_compat_on_bus,$(DT_COMPAT_TI_TCA9538),i2c)
endif # BOARD_BT610
endif # BOARD_BT6X0

View File

@@ -8,8 +8,8 @@
#include <nordic/nrf52840_qiaa.dtsi>
/ {
model = "Laird BT610 Sensor";
compatible = "lairdconnect,bt610";
model = "Laird BT6x0 Sensor";
compatible = "lairdconnect,bt6x0";
chosen {
zephyr,console = &uart0;

View File

@@ -1,5 +1,5 @@
identifier: bt610
name: BT610
identifier: bt6x0
name: BT6X0
type: mcu
arch: arm
ram: 256

View File

@@ -2,7 +2,7 @@
CONFIG_SOC_SERIES_NRF52X=y
CONFIG_SOC_NRF52840_QIAA=y
CONFIG_BOARD_BT610=y
CONFIG_BOARD_BT6X0=y
# Enable MPU
CONFIG_ARM_MPU=y

View File

@@ -1,12 +1,12 @@
.. _bt610:
.. _bt6x0:
Laird Connectivity Sentrius BT610 Sensor
Laird Connectivity Sentrius BT6x0 Sensor
########################################
Overview
********
The Sentrius™ BT610 Sensor is a battery powered, Bluetooth v5 Long Range
The Sentrius™ BT6x0 Sensor is a battery powered, Bluetooth v5 Long Range
integrated sensor platform that uses a Nordic Semiconductor nRF52840 ARM
Cortex-M4F CPU.
@@ -28,19 +28,19 @@ The sensor has the following features:
* :abbr:`UART (Universal Asynchronous Receiver-Transmitter)`
* :abbr:`WDT (Watchdog Timer)`
.. figure:: img/bt610_front.jpg
.. figure:: img/bt6x0_front.jpg
:width: 500px
:align: center
:alt: Sentrius BT610 Sensor, front view
:alt: Sentrius BT6x0 Sensor, front view
Sentrius BT610 Sensor, front view
Sentrius BT6x0 Sensor, front view
.. figure:: img/bt610_back.jpg
.. figure:: img/bt6x0_back.jpg
:width: 500px
:align: center
:alt: Sentrius BT610 Sensor, rear view
:alt: Sentrius BT6x0 Sensor, rear view
Sentrius BT610 Sensor, rear view
Sentrius BT6x0 Sensor, rear view
More information about the board can be found at the
`Sentrius BT610 website`_.
@@ -51,7 +51,7 @@ Hardware
Supported Features
==================
The BT610 Sensor supports the following
The BT6x0 Sensor supports the following
hardware features:
+-----------+------------+----------------------+
@@ -89,12 +89,12 @@ hardware features:
| WDT | on-chip | watchdog |
+-----------+------------+----------------------+
.. figure:: img/bt610_board.jpg
.. figure:: img/bt6x0_board.jpg
:width: 500px
:align: center
:alt: Sentrius BT610 Sensor, board layout
:alt: Sentrius BT6x0 Sensor, board layout
Sentrius BT610 Sensor, board layout
Sentrius BT6x0 Sensor, board layout
Connections and IOs
===================
@@ -102,7 +102,7 @@ Connections and IOs
LED
---
Two LEDs are visible through the BT610 housing lid. Note that the LEDs can be
Two LEDs are visible through the BT6x0 housing lid. Note that the LEDs can be
driven either directly, or via PWM. PWM should be used when current consumption
is required to be minimised.
@@ -115,7 +115,7 @@ is required to be minimised.
Push button
------------
The BT610 incorporates three mechanical push buttons. Note these are only
The BT6x0 incorporates three mechanical push buttons. Note these are only
accessible with the housing cover removed.
Two of the buttons are available for use via the board DTS file, as follows.
@@ -129,7 +129,7 @@ microcontroller.
Magnetoresistive sensor
-----------------------
The BT610 incorporates a Honeywell SM351LT magnetoresistive sensor. Refer to
The BT6x0 incorporates a Honeywell SM351LT magnetoresistive sensor. Refer to
the `Honeywell SM351LT datasheet`_ for further details.
* MAG_1 = SW2 = P1.15 (SM351LT_0)
@@ -157,7 +157,7 @@ This can deliver up to 50mA peak and 20mA continuous current.
Sensor connectivity
-------------------
The BT610 incorporates three terminal blocks J5, J6 & J7 that allow
The BT6x0 incorporates three terminal blocks J5, J6 & J7 that allow
connectivity to its sensor inputs, as follows.
Terminal Block J5
@@ -485,12 +485,12 @@ Required pins are as follows.
Programming and Debugging
*************************
Applications for the ``bt610`` board configuration can be
Applications for the ``bt6x0`` board configuration can be
built and flashed in the usual way (see :ref:`build_an_application`
and :ref:`application_run` for more details); however, the standard
debugging targets are not currently available.
The BT610 features a 10 way header, J3, for connection of a
The BT6x0 features a 10 way header, J3, for connection of a
programmer/debugger, with pinout as follows.
+-----------+------------+----------------------+
@@ -537,7 +537,7 @@ pinout as follows.
+-----------+------------+----------------------+-----------+
Note that pin 3 requires a solder bridge to be closed to enable powering of the
BT610 board via the UART connector.
BT6x0 board via the UART connector.
Flashing
========
@@ -552,7 +552,7 @@ Here is an example for the :ref:`hello_world` application.
First, run your favorite terminal program to listen for output.
NOTE: On the BT610, the UART lines are at TTL levels and must be passed through
NOTE: On the BT6x0, the UART lines are at TTL levels and must be passed through
an appropriate line driver circuit for translation to RS232 levels. Refer to
the `MAX3232 datasheet`_ for a suitable driver IC.
@@ -560,14 +560,14 @@ the `MAX3232 datasheet`_ for a suitable driver IC.
$ minicom -D <tty_device> -b 115200
Replace :code:`<tty_device>` with the port where the BT610 can be found. For
Replace :code:`<tty_device>` with the port where the BT6x0 can be found. For
example, under Linux, :code:`/dev/ttyUSB0`.
Then build and flash the application in the usual way.
.. zephyr-app-commands::
:zephyr-app: samples/hello_world
:board: bt610
:board: bt6x0
:goals: build flash
Note that an external debugger is required to perform application flashing.
@@ -575,14 +575,14 @@ Note that an external debugger is required to perform application flashing.
Debugging
=========
The ``bt610`` board does not have an on-board J-Link debug IC
The ``bt6x0`` board does not have an on-board J-Link debug IC
as some nRF5x development boards do; however, instructions from the
:ref:`nordic_segger` page also apply to this board, with the additional step
of connecting an external debugger.
Testing Bluetooth on the BT610
Testing Bluetooth on the BT6x0
***********************************
Many of the Bluetooth examples will work on the BT610.
Many of the Bluetooth examples will work on the BT6x0.
Try them out:
* :ref:`ble_peripheral`
@@ -590,7 +590,7 @@ Try them out:
* :ref:`bluetooth-ibeacon-sample`
Testing the LEDs and buttons on the BT610
Testing the LEDs and buttons on the BT6x0
*****************************************
There are 2 samples that allow you to test that the buttons (switches) and LEDs
@@ -601,7 +601,7 @@ on the board are working properly with Zephyr:
You can build and flash the examples to make sure Zephyr is running correctly
on your board. The button, LED and sensor device definitions can be found in
:zephyr_file:`boards/arm/bt610/bt610.dts`.
:zephyr_file:`boards/arm/bt6x0/bt6x0.dts`.
References

View File

Binary image changed (before: 76 KiB, after: 76 KiB)

View File

Binary image changed (before: 78 KiB, after: 78 KiB)

View File

Binary image changed (before: 48 KiB, after: 48 KiB)

View File

@@ -216,7 +216,7 @@ Run a serial host program to connect with your NUCLEO-H745ZI-Q board.
$ minicom -b 115200 -D /dev/ttyACM0
or use screen:
or use scrreen:
.. code-block:: console
@@ -235,20 +235,15 @@ You should see the following message on the console:
$ Hello World! nucleo_h745zi_q_m7
.. note::
Sometimes, flashing is not working. It is necessary to erase the flash
(with STM32CubeProgrammer for example) to make it work again.
Similarly, you can build and flash samples on the M4 target. For this, please
take care of the resource sharing (UART port used for console for instance).
Here is an example for the :ref:`blinky-sample` application on M4 core.
The Blinky example can also be used:
.. zephyr-app-commands::
:zephyr-app: samples/basic/blinky
:board: nucleo_h745zi_q_m4
:board: nucleo_h745zi_q_m7
:goals: build flash
The M4 core can be flashed in the same way.
.. note::
Flashing both M4 and M7 and pushing RESTART button on the board leads
@@ -266,10 +261,6 @@ You can debug an application in the usual way. Here is an example for the
:maybe-skip-config:
:goals: debug
Debugging with west is currently not available on Cortex M4 side.
In order to debug a Zephyr application on Cortex M4 side, you can use
`STM32CubeIDE`_.
.. _Nucleo H745ZI-Q website:
https://www.st.com/en/evaluation-tools/nucleo-h745zi-q.html

View File

@@ -6,9 +6,6 @@ CONFIG_SOC_STM32H745XX=y
# Board config should be specified since there are 2 possible targets
CONFIG_BOARD_NUCLEO_H745ZI_Q_M7=y
# Enable the internal SMPS regulator
CONFIG_POWER_SUPPLY_DIRECT_SMPS=y
# Enable MPU
CONFIG_ARM_MPU=y

View File

@@ -255,17 +255,19 @@ flashed in the usual way (see :ref:`build_an_application` and
Flashing
========
Nucleo L552ZE Q board includes an ST-LINK/V2-1 embedded debug tool
interface. Support can be enabled on pyocd by adding "pack" support with the
following pyocd command:
Nucleo L552ZE Q board includes an ST-LINK/V3E embedded debug tool
interface. This interface is not yet supported by the openocd version.
Instead, support can be enabled on pyocd by adding "pack" support with
the following pyocd command:
.. code-block:: console
$ pyocd pack --update
$ pyocd pack --install stm32l552ze
Alternatively, this interface is supported by the openocd version
included in the Zephyr SDK since v0.13.1.
Nucleo L552ZE Q board includes an ST-LINK/V2-1 embedded debug tool
interface. This interface is supported by the openocd version
included in the Zephyr SDK since v0.9.2.
Flashing an application to Nucleo L552ZE Q
------------------------------------------

View File

@@ -1,19 +0,0 @@
source [find interface/stlink.cfg]
transport select hla_swd
source [find target/stm32l5x.cfg]
# use hardware reset
reset_config srst_only srst_nogate
$_TARGETNAME configure -event gdb-attach {
echo "Debugger attaching: halting execution"
reset halt
gdb_breakpoint_override hard
}
$_TARGETNAME configure -event gdb-detach {
echo "Debugger detaching: resuming execution"
resume
}

View File

@@ -1,7 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
board_runner_args(pyocd "--target=stm32wb55rgvx")
board_runner_args(stm32cubeprogrammer "--port=swd" "--reset=hw")
include(${ZEPHYR_BASE}/boards/common/openocd.board.cmake)
include(${ZEPHYR_BASE}/boards/common/pyocd.board.cmake)
include(${ZEPHYR_BASE}/boards/common/stm32cubeprogrammer.board.cmake)

View File

@@ -186,9 +186,8 @@ To operate bluetooth on Nucleo WB55RG, Cortex-M0 core should be flashed with
a valid STM32WB Coprocessor binaries (either 'Full stack' or 'HCI Layer').
These binaries are delivered in STM32WB Cube packages, under
Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x/
For compatibility information with the various versions of these binaries,
please check `modules/hal/stm32/lib/stm32wb/hci/README <https://github.com/zephyrproject-rtos/hal_stm32/blob/main/lib/stm32wb/hci/README>`__
in the hal_stm32 repo.
To date, interoperability and backward compatibility has been tested and is
guaranteed up to version 1.5 of STM32Cube package releases.
Connections and IOs
===================

View File

@@ -3,9 +3,6 @@
CONFIG_SOC_SERIES_STM32H7X=y
CONFIG_SOC_STM32H735XX=y
# Enable the internal SMPS regulator
CONFIG_POWER_SUPPLY_DIRECT_SMPS=y
# Enable MPU
CONFIG_ARM_MPU=y

View File

@@ -220,6 +220,24 @@ the USB port to prepare it for flashing. Then build and flash your application.
Here is an example for the :ref:`hello_world` application.
.. zephyr-app-commands::
:zephyr-app: samples/hello_world
:board: stm32h747i_disco_m7
:goals: build
Use the following commands to flash either m7 or m4 target:
.. code-block:: console
$ ./STM32_Programmer_CLI -c port=SWD mode=UR -w <path_to_m7_binary> 0x8000000
$ ./STM32_Programmer_CLI -c port=SWD mode=UR -w <path_to_m4_binary> 0x8100000
Alternatively, it is possible to flash with OpenOCD, but with some restrictions:
Sometimes, flashing is not working. It is necessary to erase the flash
(with STM32CubeProgrammer for example) to make it work again.
Debugging with OpenOCD is currently working for this board only with Cortex M7,
not Cortex M4.
.. zephyr-app-commands::
:zephyr-app: samples/hello_world
:board: stm32h747i_disco_m7
@@ -237,20 +255,6 @@ You should see the following message on the console:
Hello World! stm32h747i_disco_m7
.. note::
Sometimes, flashing is not working. It is necessary to erase the flash
(with STM32CubeProgrammer for example) to make it work again.
Similarly, you can build and flash samples on the M4 target. For this, please
take care of the resource sharing (UART port used for console for instance).
Here is an example for the :ref:`blinky-sample` application on M4 core.
.. zephyr-app-commands::
:zephyr-app: samples/basic/blinky
:board: stm32h747i_disco_m4
:goals: build flash
Debugging
=========
@@ -262,9 +266,6 @@ You can debug an application in the usual way. Here is an example for the
:board: stm32h747i_disco_m7
:goals: debug
Debugging with west is currently not available on Cortex M4 side.
In order to debug a Zephyr application on Cortex M4 side, you can use
`STM32CubeIDE`_.
.. _STM32H747I-DISCO website:
http://www.st.com/en/evaluation-tools/stm32h747i-disco.html
@@ -283,6 +284,3 @@ In order to debug a Zephyr application on Cortex M4 side, you can use
.. _DISCO_H747I modifications for Ethernet:
https://os.mbed.com/teams/ST/wiki/DISCO_H747I-modifications-for-Ethernet
.. _STM32CubeIDE:
https://www.st.com/en/development-tools/stm32cubeide.html

View File

@@ -9,9 +9,6 @@ CONFIG_BOARD_STM32H747I_DISCO_M7=y
# enable pinmux
CONFIG_PINMUX=y
# Enable the internal SMPS regulator
CONFIG_POWER_SUPPLY_DIRECT_SMPS=y
# enable GPIO
CONFIG_GPIO=y

View File

@@ -252,16 +252,18 @@ Flashing
========
STM32L562E-DK Discovery board includes an ST-LINK/V3E embedded debug tool
interface. Support can be enabled on pyocd by adding "pack" support with the
following pyocd command:
interface. This interface is not yet supported by the openocd version.
Instead, support can be enabled on pyocd by adding "pack" support with
the following pyocd command:
.. code-block:: console
$ pyocd pack --update
$ pyocd pack --install stm32l562qe
Alternatively, this interface is supported by the openocd version
included in the Zephyr SDK since v0.13.1.
STM32L562E-DK Discovery board includes an ST-LINK/V2-1 embedded debug tool
interface. This interface is supported by the openocd version
included in the Zephyr SDK since v0.9.2.
Flashing an application to STM32L562E-DK Discovery
--------------------------------------------------

Some files were not shown because too many files have changed in this diff.