Compare commits


127 Commits

Author SHA1 Message Date
Chris Friedt
7da64958f0 release: Zephyr 2.7.4
Set version to 2.7.4

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 20:33:40 -05:00
Chris Friedt
49e965fd63 release: update v2.7.4 release notes
* add bugfixes to v2.7.4 release
* partial CVE notes from security

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 19:17:19 -05:00
Flavio Ceolin
c09b95fafd net: tcp: Fix possible buffer underflow
Fix a possible underflow in TCP flags parsing.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit ac2e13b9a1)
2022-12-22 12:17:46 -05:00
Andy Ross
88f09f2eac kernel/sched: Fix SMP race on pend
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch). This
process happens with the caller's lock held, so local interrupts are
masked.  But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!

Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also.  Now we hold the scheduler lock between pend and
the final context switch.

Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address this with an assert to prevent future misuse.

[1] z_swap() is a historical API implemented in per-arch assembly for
    older architectures (like ARM32!).  It was designed to be called
    with what at the time was a global IRQ lock, so it doesn't
    understand the idea of a separate scheduler lock.  When we finally
    get all architectures on arch_switch() this design can be cleaned up
    quite a bit.

Signed-off-by: Andy Ross <andyross@google.com>
(cherry picked from commit c32f376e99)
2022-12-20 20:54:21 -05:00
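For illustration, a pseudocode sketch of the new flow (`pend()`, `z_swap()` and `_current` are the kernel-internal names used above; this is not the actual patch):

```c
/* Illustrative pseudocode only, using kernel-internal names. */
static int z_pend_curr_sketch(struct k_spinlock *lock, k_spinlock_key_t key,
			      _wait_q_t *wait_q, k_timeout_t timeout)
{
	/* Place the current thread on the wait queue under the caller's
	 * lock, exactly as before.
	 */
	pend(_current, wait_q, timeout);

	/* Previously the code released the lock here and then swapped,
	 * leaving a window where another CPU could grab the pended thread
	 * and run it on this stack. Now the lock is handed to z_swap(),
	 * which releases it only as part of the context switch itself.
	 */
	return z_swap(lock, key);
}
```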
Ian Oliver
568c09ce3a log_core: Add Kconfig symbol for init priority
Users may want to do some configuration after the kernel is up, but
before initializing the log_core. Making the log_core's init priority
configurable makes that possible.

Signed-off-by: Ian Oliver <io@amperecomputing.com>
(cherry picked from commit 1675d49b4c)
2022-12-20 15:23:12 -05:00
Chris Friedt
79f6c538c1 tests: posix: add tests for sleep() and usleep()
Previously, there was no test coverage for `sleep()` and
`usleep()`.

This change adds full test coverage.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 027b79ecc4)
2022-12-06 12:47:16 -05:00
Chris Friedt
3400e6d9db lib: posix: update usleep() to follow the POSIX spec
The original implementation of `usleep()` was not compliant
with the POSIX spec in three ways.
- calling thread may not be suspended (because `k_busy_wait()`
  was previously used for short durations)
- if `usecs` > 1000000, previously we did not return -1 or set
  `errno` to `EINVAL`
- if interrupted, previously we did not return -1 or set
  `errno` to `EINTR`

This change addresses those issues to make `usleep()` more
POSIX-compliant.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7b95428fa0)
2022-12-06 12:47:16 -05:00
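As an illustration, a minimal sketch of the resulting semantics on top of the kernel API, assuming `k_usleep()` returns the remaining time when woken early (as `k_sleep()` does for milliseconds):

```c
#include <errno.h>
#include <zephyr.h>

int usleep_sketch(int32_t usecs)
{
	if (usecs > USEC_PER_SEC) {
		errno = EINVAL;	/* out of the POSIX-permitted range */
		return -1;
	}

	if (k_usleep(usecs) != 0) {
		/* k_usleep() returned early: the sleep was interrupted */
		errno = EINTR;
		return -1;
	}

	return 0;
}
```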
Chris Friedt
fbea9e74c2 lib: posix: sleep() should report unslept time in seconds
In the case that `sleep()` is interrupted, the POSIX spec requires
it to return the number of "unslept" seconds (i.e. the number of
seconds requested minus the number of seconds actually slept).

Since `k_sleep()` already returns the amount of "unslept" time
in ms, we can simply use that.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit dcfcc6454b)
2022-12-06 12:47:16 -05:00
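A minimal sketch of that mapping (rounding the remaining milliseconds up to whole seconds is an assumption here):

```c
#include <zephyr.h>

unsigned int sleep_sketch(unsigned int seconds)
{
	int32_t rem_ms = k_sleep(K_SECONDS(seconds));

	/* Round the unslept milliseconds up to whole seconds, since POSIX
	 * reports unslept time in seconds.
	 */
	return (rem_ms + MSEC_PER_SEC - 1) / MSEC_PER_SEC;
}
```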
Chris Friedt
4d929827ac tests: posix: clock: do not use usleep in a broken way
Using `usleep()` for >= 1000000 microseconds results
in an error, so this test was defective, as it explicitly
called `usleep()` for multiple seconds.

Also, check the return values of `clock_gettime()`.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 23a1f0a672)
2022-12-06 12:47:16 -05:00
Jamie McCrae
37b3641f00 manifest: Update mcumgr revision
Updates mcumgr to resolve an issue with the state of a firmware
update not being reset if an error occurs or if the underlying
area is erased.

Fixes #52247
Backporting commit 4c48b4f21a

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 16:02:09 -05:00
Jamie McCrae
3d940f1d1b net: Synchronise user data size with mcumgr
Fixes an issue with two user data sizes being out of sync, causing
CI failures.

Fixes #52591

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 15:26:32 -05:00
Martí Bolívar
f0d2a3e2fe python-devicetree: CI hotfix
Pin the types-PyYAML version to 6.0.7. Version 6.0.8 is causing CI
errors for other pull requests, so we need this merged to get other
PRs moving.

Fixes: #46286

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d405a43b1 ci: Clone cached Zephyr repository with shared objects
In the new ephemeral Zephyr runners, the cached repository files are
located in a foreign file system and the Git clone operation cannot
create hard links to the cached repository objects, which forces Git
to copy the objects from the cache file system to the runner container
file system.

This commit updates the CI workflows to instead perform a "shared
clone" of the cached repository, which allows the cloned repository to
utilise the object database of the cached repository.

While "shared clone" can be often dangerous because the source
repository objects can be deleted, in this case, the source repository
(i.e. cached repository) is mounted as read-only and immutable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2dbe845f21 ci: codecov: Clone cached Zephyr repository
This commit updates the codecov workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
4efa225daa ci: codecov: Use zephyr-runner
This commit updates the codecov workflow to use the new Kubernetes-
based zephyr-runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0ee1955d5b ci: clang: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
a0eb50be3e ci: clang: Clone cached Zephyr repository
This commit updates the clang workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2da9d7577f ci: clang: Use zephyr-runner
This commit updates the clang workflow to use the new Kubernetes-based
zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d5e2a071c1 ci: twister: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
633dd420d9 ci: twister: Clone cached Zephyr repository
This commit updates the twister workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
780b4e08cb ci: twister: Use zephyr-runner
This commit updates the twister workflow to use the new Kubernetes-
based zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
281185e49d ci: Use actions/cache@v3
This commit updates the CI workflows to use the latest "cache" action
v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
cb240b4f4c ci: Use actions/setup-python@v4
This commit updates the CI workflows to use the latest "setup-python"
action v4, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ff5ee88ac0 ci: Use actions/upload-artifact@v3
This commit updates the CI workflows to use the latest
"upload-artifact" action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3ef958116 ci: Use actions/checkout@v3
This commit updates the CI workflows to use the latest "checkout"
action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
727806f483 ci: twister: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ec6c9d3637 ci: release: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c5e88dbbda ci: codecov: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3e4d65dd1 ci: clang: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
16207ae32f ci: footprint-tracking: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
dbf2ca1b0a ci: footprint: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0e204784ee ci: codecov: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c44406e091 ci: clang: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
1da82633b2 ci: twister: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ad6636f09c ci: bluetooth-tests: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Anas Nashif
8f4b366c0f ci: update cancel-workflow-action action to 0.11.0
Update the action to use the latest release, which resolves a warning
about Node 12 being deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f8960f5ef ci: compliance: Use upload-artifact action v3
This commit updates the "Create a release" workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d9200eb55d ci: issue_count: Use upload-artifact action v3
This commit updates the issue count tracker workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ccdc1d3777 ci: doc-build: Use upload-artifact action v3
This commit updates the documentation build workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
197c4ddcbd ci: compliance: Use upload-artifact action v3
This commit updates the compliance check workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7287947535 ci: backport: Use Ubuntu 20.04 runner image
This commit updates the backport workflow to use the ubuntu-20.04
runner image because the ubuntu-18.04 image is deprecated and will
become unsupported by December 1, 2022.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
b0be164419 ci: west_cmds: Use specific version of runner image
This commit updates the "West Command Tests" workflow to use a specific
runner image version (ubuntu-20.04, macos-11, windows-2022) instead of
the latest version in order to prevent any potential breakages due to
the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
fab06842d5 ci: devicetree_checks: Use specific version of runner image
This commit updates the "Devicetree script tests" workflow to use a
specific runner image version (ubuntu-20.04, macos-11, windows-2022)
instead of the latest version in order to prevent any potential
breakages due to the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b4eafc54a ci: twister: Use Ubuntu 20.04 runner image
This commit updates the "Run tests with twister" workflow to use a
specific runner image version, ubuntu-20.04, instead of the latest
version in order to prevent any potential breakages due to the 'latest'
version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9becb117b2 ci: twister_tests: Use Ubuntu 20.04 runner image
This commit updates the Twister Testsuite workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
87ab3e4d16 ci: stale_issue: Use Ubuntu 20.04 runner image
This commit updates the stale issue workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b8305cc11 ci: release: Use Ubuntu 20.04 runner image
This commit updates the "Create a Release" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
728e5720cc ci: manifest: Use Ubuntu 20.04 runner image
This commit updates the manifest check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6c11685863 ci: license_check: Use Ubuntu 20.04 runner image
This commit updates the license check workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0f783a4ce0 ci: issue_count: Use Ubuntu 20.04 runner image
This commit updates the issue count tracker workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
02dba17a59 ci: footprint: Use Ubuntu 20.04 runner image
This commit updates the footprint delta workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
860e7307bc ci: footprint-tracking: Use Ubuntu 20.04 runner image
This commit updates the footprint tracking workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7b087b8ac5 ci: errno: Use Ubuntu 20.04 runner image
This commit updates the error number check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
15f39300c0 ci: doc: Use Ubuntu 20.04 runner image
This commit updates the documentation build and publish workflows to
use a specific runner image version, ubuntu-20.04, instead of the
latest version in order to prevent any potential breakages due to the
'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
f6f69516ac ci: daily_test_version: Use Ubuntu 20.04 runner image
This commit updates the daily test version workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f9dd18a87 ci: compliance: Use Ubuntu 20.04 runner image
This commit updates the compliance check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d8639b4a8 ci: coding_guidelines: Use Ubuntu 20.04 runner image
This commit updates the coding guidelines workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6e723ff755 ci: clang: Use Ubuntu 20.04 runner image
This commit updates the Clang workflow to use a specific runner image
version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
bff97ed4cc ci: backport_issue_check: Use Ubuntu 20.04 runner image
This commit updates the backport issue check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
97e2959452 ci: bluetooth-tests: Use Ubuntu 20.04 runner image
This commit updates the Bluetooth tests workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
91970658ec ci: issue_count: Fix stale reference to master branch
This commit fixes a stale reference to the 'master' branch when
downloading the issue report configuration file.

Note that the 'master' branch is no longer the default
branch -- 'main' is.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
fc2585af00 ci: doc-build: skip Kconfig docs build on pull requests
The documentation supports a special target named "html-fast" that skips
generation of all Kconfig pages. Instead, it creates a single dummy page
where a reference to all existing Kconfig options is placed. This means
that references are resolved, but content is not rendered. Since Kconfig
help is rendered as a literal, the chances of breaking the documentation
build due to Kconfig changes should be low. The change proposed in this
patch should speed up the documentation build on pull requests while a
proper solution is found for the Kconfig docs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
f95edd3a85 ci: doc-build: use concurrency group to cancel in progress builds
Add the documentation build jobs to a concurrency group so that branch
force pushes will automatically cancel in progress jobs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
2b9ed76734 ci: doc-build: disable parallel build
When an error happens during the Sphinx build (e.g. due to a broken
reference), the process hangs on CI when run with both `-j auto`
(parallel build) and `-W` (warnings as errors) options. The root cause
of the issue is unknown, and it does not seem to happen locally. Parallel
builds are experimental in Sphinx, so they have been disabled on CI for
now. Because the CI runner is a single-core machine, the build time should
remain equal or similar. The option is still left as the default, so local
builds will continue to benefit from parallelization.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
5a041bff3d ci: doc-build: set timeout to 30 minutes
Give the documentation build up to 30 minutes to finish. This should
improve the user experience when Sphinx hangs due to broken references, a
problem that needs some investigation.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Chris Friedt
f61664c6f8 tests: kernel: mutex: move race timeout test to mutex_api
Previously, this change was added to `mutex_error_case`.

That worked fine in `main`, but once the change was backported to
`v2.7-branch`, the test would fail because it *did not* cause a
failure. The reason for that was that the `mutex_error_case`
suite has `CONFIG_ZTEST_FATAL_HOOK=y`.

The newer ztest API allowed a separate suite to be used,
letting the test pass (although it did not really fit in with
the rest of the testsuite).

The solution is to simply merge it with the `mutex_api` suite
which uses non-inverted success logic.

This change will also have to be cherry-picked for the backport
in #49031.

Fixes #48056.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-11-18 17:33:10 -05:00
Qi Yang
e2f05e9328 kernel: mutex: fix races when lock timeout
Say threadA holds a mutex and threadB tries
to lock it with a timeout. A race would occur
if threadA unlocks that mutex after threadB
got unpended by sys_clock and before it gets
scheduled and calls k_spin_lock.

This patch fixes the issue by checking the
mutex's status again after the k_spin_lock call.

Fixes #48056

Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
(cherry picked from commit 89c4a074dc)
2022-11-18 17:33:10 -05:00
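A self-contained, illustrative sketch of the re-check; the spinlock here stands in for the kernel's internal mutex lock:

```c
#include <errno.h>
#include <zephyr.h>

static struct k_spinlock lock;	/* stand-in for the kernel's mutex lock */

/* swap_rc is the pend/swap result: -EAGAIN means the wait timed out. */
static int timed_lock_result(struct k_mutex *mutex, int swap_rc)
{
	k_spinlock_key_t key = k_spin_lock(&lock);
	int rc = swap_rc;

	if (rc == -EAGAIN && mutex->owner == k_current_get()) {
		/* threadA unlocked and handed the mutex over after the
		 * sys_clock wakeup but before we re-took the lock, so the
		 * lock attempt actually succeeded.
		 */
		rc = 0;
	}
	k_spin_unlock(&lock, key);

	return rc;
}
```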
Jamie McCrae
ea0b53b150 mgmt: mcumgr: Fix Bluetooth transport issues
This fixes issues with the Bluetooth SMP transport whereby deadlocks
could arise from connection references being held in long-lasting
mcumgr command processing functions.

Note: Heavily modified from original PR due to differences in MCUmgr
operation since the Zephyr 2.7 release.

Fixes #51846

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 2aca1242f7)
2022-11-17 18:41:57 -05:00
Stephanos Ioannidis
56664826b2 ci: doc: Publish pull request docs to builds.zephyrproject.io
This commit updates the CI documentation build workflow to upload the
HTML pull request documentation builds to the S3 builds.zephyrproject.io
bucket so that they are directly accessible from the web.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1dd92ec865)
2022-11-16 04:23:46 +09:00
Chris Friedt
bbb49dec38 net: sockets: socketpair: do not allow blocking IO in ISR context
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.

It is not acceptable to block in ISR context, so robustness here
means returning -1 to indicate an error and setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).

Fixes #25417

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit d832b04e96)
2022-11-15 12:12:40 -05:00
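A minimal sketch of such a guard, assuming a blocking semaphore wait somewhere inside the socketpair I/O path (helper name hypothetical):

```c
#include <errno.h>
#include <zephyr.h>

/* Hypothetical helper: refuse to block when called from an ISR. */
static int wait_readable(struct k_sem *sem, k_timeout_t timeout)
{
	if (k_is_in_isr() && !K_TIMEOUT_EQ(timeout, K_NO_WAIT)) {
		errno = EAGAIN;	/* blocking I/O is not allowed in ISRs */
		return -1;
	}

	return k_sem_take(sem, timeout);
}
```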
Stephanos Ioannidis
8211ebf759 ci: Limit workflow scope to v2.7-branch
This commit updates the CI workflows to limit their event trigger scope
to the v2.7-branch in order to prevent the workflows from running when
the backport branches are pushed.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-02 03:12:21 +09:00
Jay Vasanth
1f3121b6b2 soc arm: MEC172x soc.h - Include custom IRQn_Type
Fix for issue #41012 to allow the compiler to treat
IRQn_Type as wider than 8 bits. This ensures NVIC numbers
above 127 (required for the MEC172x device) work
correctly with the irq_enable() API.

Signed-off-by: Jay Vasanth <jay.vasanth@microchip.com>
(cherry picked from commit 4495f43dca)
2022-10-11 20:17:47 -04:00
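Illustratively, a custom IRQn_Type only needs an enumerator outside the 8-bit range so the compiler cannot pick a narrow underlying enum type; the values below are hypothetical:

```c
/* Hypothetical sketch: an enumerator above 127 forces a compiler using
 * short enums to pick at least a 16-bit type for IRQn_Type, so NVIC
 * numbers above 127 are not truncated.
 */
typedef enum {
	Reset_IRQn   = -15,	/* Cortex-M core exceptions */
	SysTick_IRQn = -1,
	GIRQ08_IRQn  = 0,	/* first device interrupt */
	MAX_IRQn     = 180,	/* > 127: forces a wider enum type */
} IRQn_Type;
```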
Jamie McCrae
d7820faf7c drivers: counter: Update counter_set_channel_alarm documentation
Adds the -EBUSY return code to the documentation of the
counter_set_channel_alarm function which is returned when an alarm
is already active.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit c607599068)
2022-10-07 18:23:58 -04:00
Jordan Yates
5d29d52445 scripts: zspdx: fix writing custom license IDs
The builtin list function `.sort()` sorts the list in-place and returns
None. As this is an invalid type for iteration, use the builtin `sorted`
function, which returns a sorted copy of the list, which we can iterate
over.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
(cherry picked from commit 06aae61019)
2022-10-04 09:09:14 -07:00
Yong Cong Sin
be11187e09 mgmt/hawkbit: Print hrefs only if there's an update
If there is no update from the server, the _links will be NULL.
Check whether it is NULL before trying to LOG these strings.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2022-10-04 07:43:25 -07:00
Ruud Derwig
9044091e21 ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for the initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
(cherry picked from commit 9bccb5cc4b)
2022-09-23 13:57:53 -04:00
Daniel Leung
170ba8dfcb soc: esp32: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit b820cde7a9)
2022-09-23 13:57:40 -04:00
Daniel Leung
e3f1b6fc54 soc: intel_adsp: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit 74df88d8f5)
2022-09-23 13:57:40 -04:00
Daniel Leung
7ac05528ca tests: coredump: skip acrn_ehl_crb
The coredump tests output quite a large amount of data to
the console. However, the ACRN console only has a very limited
history (comparatively), such that twister is unable to
match the necessary strings to consider the tests passed.
So skip those tests on acrn_ehl_crb.

Fixes #40887

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit c1ac125068)
2022-09-20 17:06:00 +01:00
Martí Bolívar
64f411f0fb edtlib: remove python 3.5 workaround
Remove a yaml monkeypatch. It is no longer needed since we support 3.6
or later on Zephyr v2.7 LTS and 3.8 or later on what will become v3.2.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 7ef9c4b20e)
2022-09-18 08:32:31 -04:00
Anas Nashif
2e2dd96ae4 actions: west/devicetree: exclude python 3.6 on windows
This version of Python is not available anymore. Excluding it for now to
unblock CI.

Fixes: #49139

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 773242cbb7)
2022-09-18 08:32:31 -04:00
Torsten Rasmussen
a311291294 cmake: kconfig: preserved quotes for Kconfig string values
Fixes: #49569

Kconfig requires quoted strings in its configuration files, like this:
> CONFIG_A_STRING="foo bar"

But CMake expects strings without additional quotes, and therefore
quotes are stripped when loading Kconfig config files
into CMake.

This is particularly important when the string in Kconfig is a path to a
file. In this case, not stripping the quotes leads to an error as the
file cannot be found.

When users pass a string to Kconfig through CMake, they are expected to
pass it so that quotes are correct as seen from Kconfig, that is:
> cmake -DCONFIG_A_STRING=\"foo bar\"

In CMake, those quotes are written as-is to the Kconfig extra config
file, and then removed in the CMake cache.
After Kconfig processing, the Kconfig settings are read back into CMake,
but without quotes. Settings that were passed through the CMake cache,
for example using `-D`, are written back to the cache, but this time
without the quotes. This results in Kconfig errors on subsequent CMake
runs.

Instead of writing the Kconfig value setting back to the CMake cache,
introduce an internal shadow symbol in the cache prefixed with `CLI_`.
This allows the CMake cache to keep the value correctly formatted for
Kconfig extra config creation, while at the same time keep existing
behavior for CONFIG_ symbols read from Kconfig.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2022-08-31 06:26:03 -04:00
Gerard Marull-Paretas
5221787303 scripts: west_commands: runners: jlink: support pylink >= 0.14.2
It looks like the latest release, 0.14.2, changed the contents of
JLINK_SDK_NAME back to what it was before the 0.14.0 release. That means
the previous fix is only applicable to a couple of releases: 0.14.0/0.14.1.

Fixes #49564

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit 2d712c6c55)
2022-08-31 06:24:41 -04:00
Gerard Marull-Paretas
63d0c7fcae scripts: west_commands: runners: jlink: support pylink >= 0.14
pylink 0.14.0 changed the class variable where JLink DLL library name
(libjlinkarm) is stored. This patch adds support for new pylink
libraries while keeping backwards compatibility.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit a57001347f)
2022-08-31 06:24:41 -04:00
Yong Cong Sin
8abef50e97 subsys/mgmt/hawkbit: Set ai_socktype if IPV4/IPV6
Follows the implementation of updatehub and sets
`ai_socktype` only for IPv4/IPv6.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit dd9d6bbb44)
2022-08-30 13:01:23 -04:00
Yong Cong Sin
0306e75a5f subsys/mgmt/hawkbit: Init the hints struct to a known value
Initialize the `hints` struct to a known value so that it won't
cause undetermined behavior when used in `getaddrinfo()`.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 2ed88e998a)
2022-08-30 13:01:23 -04:00
Christopher Friedt
003de78ce0 release: Zephyr 2.7.3
Set version to 2.7.3

Fixes #49354

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 12:12:49 -04:00
Flavio Ceolin
9502d500b6 release: security: Notes for 2.7.3
Add security release notes for 2.7.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-08-22 10:54:58 -04:00
Christopher Friedt
2a88e08296 release: update v2.7.3 release notes
* add bugfixes to v2.7.3 release
* awaiting CVE notes from security

Fixes #49310

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 10:54:58 -04:00
Jamie McCrae
e1ee34e55c drivers: sensor: sm351lt: Fix global thread triggering bug
This fixes a bug in the sm351lt driver whereby global triggering will
cause an MPU fault due to an unset pointer.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-08-03 19:05:58 -04:00
Szymon Janc
2ad1ef651b Bluetooth: host: Fix L2CAP reconfigure response with invalid CID
When an L2CAP_CREDIT_BASED_RECONFIGURE_REQ packet is received with
invalid parameters, the recipient shall send an
L2CAP_CREDIT_BASED_RECONFIGURE_RSP PDU with a non-zero Result field
and not change any MTU and MPS values.

This fixes incorrectly reconfiguring valid channels while responding
with the 0x003 (Reconfiguration failed - one or more Destination CIDs
invalid) result code.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 253070b76b)
2022-08-02 12:36:44 -04:00
Szymon Janc
089675af45 Bluetooth: host: Fix L2CAP reconfigure response with invalid MTU
TSE18813 clarified IUT behavior and rejecting reconfiguration which
would result in MTU decrease is enough. There is no need to disconnect
L2CAP channel(s).

This was affecting L2CAP/ECFC/BI-03-C qualification test case
(TCRL 2022-2).

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 266394dea4)
2022-08-02 12:36:44 -04:00
Andriy Gelman
03ff0d471e net: route: Fix pkt leak if net_send_data() fails
If the call to net_send_data() fails, for example if the forwarding
interface is down, then the pkt will leak: the reference taken by
net_pkt_shallow_clone() will never be released. Fix the problem
by dropping the reference count in the error path.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit a3cdb2102c)
2022-08-02 10:41:52 -04:00
Erwan Gouriou
cd96136bcb boards: nucleo_wb55rg: Fix documentation about BLE binary compatibility
Rather than stating version information that will get out of date
at each release, refer to the source of information located in the
hal_stm32 module.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit 6656607d02)
2022-07-25 05:54:23 -04:00
Henrik Brix Andersen
567fda57df tests: drivers: can: api: add test for RTR filter matching
Add test for CAN RX filtering of RTR frames.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
b14f356c96 drivers: can: loopback: check frame ID type and RTR bit in filters
Check the frame ID type and RTR bit when comparing loopback CAN frames
against installed RX filters.

Fixes: #47904

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
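An illustrative sketch of such a filter match, loosely following the Zephyr 2.7 `zcan_frame`/`zcan_filter` field names; treat the exact logic as an assumption:

```c
#include <drivers/can.h>

/* Illustrative RX-filter match: the frame type (standard/extended) and
 * the RTR bit must both match the filter before the ID compare counts.
 */
static bool frame_matches_filter(const struct zcan_frame *frame,
				 const struct zcan_filter *filter)
{
	if (frame->id_type != filter->id_type) {
		return false;	/* standard vs extended mismatch */
	}

	if ((frame->rtr ^ filter->rtr) & filter->rtr_mask) {
		return false;	/* RTR bit is filtered and differs */
	}

	return ((frame->id ^ filter->id) & filter->id_mask) == 0;
}
```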
Henrik Brix Andersen
874d77bc75 drivers: can: mcux: flexcan: fix handling of RTR frames
When installing a RX filter, the driver uses "filter->rtr &
filter->rtr_mask" for setting the filter mask. It should just be using
filter->rtr_mask, otherwise filters for non-RTR frames will match RTR
frames as well.

When transmitting a RTR frame, the hardware automatically switches the
mailbox used for TX to RX in order to receive the reply. This, however,
does not match the Zephyr CAN driver model, where mailboxes are
dedicated to either RX or TX. Attempting to reuse the TX mailbox (which
was automatically switched to an RX mailbox by the hardware) fails on
the first call, after which the mailbox is reset and can be reused for
TX. To overcome this, the driver must abort the RX mailbox operation
when the hardware performs the TX to RX switch.

Fixes: #47902

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
ec0befb938 drivers: can: mcan: acknowledge all received frames
The Bosch M_CAN IP does not support RX filtering of the RTR bit, so the
driver handles this bit in software.

If a received frame matches a filter with RTR enabled, the RTR bit of
the frame must match that of the filter in order to be passed to the RX
callback function. If the RTR bits do not match, the frame must be
dropped.

Improve the readability of the logic for determining whether a frame
should be dropped and add a missing FIFO acknowledge write for dropped
frames.

Fixes: #47204

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Christopher Friedt
273e90a86f scripts: release: list_backports: use older python dict merge method
In Python versions >= 3.9, dicts can be merged with the `|` operator.

This is not the case for Python versions < 3.9, where the simplest way
is to use `dict_c = {**dict_a, **dict_b}`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3783cf8353)
2022-07-19 01:30:16 +09:00
Christopher Friedt
59dc65a7b4 ci: backports: check if a backport PR has a valid issue
This is an automated check for the Backports project to
require one or more `Fixes #<issue>` items in the body
of the pull request.

Fixes #46164

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit aa4e437573)
2022-07-18 23:32:19 +09:00
Christopher Friedt
8ff8cafc18 scripts: release: list_backports.py
Created list_backports.py to examine PRs applied to a backport
branch and extract associated issues. This is helpful for
adding to release notes.

The script may also be used to ensure that backported changes
also have one or more associated issues.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 57762ca12c)
2022-07-18 23:32:19 +09:00
Christopher Friedt
ba07347b60 scripts: release: use GITHUB_TOKEN and start_date in scripts
Updated bug_bash.py and list_issues.py to use the GITHUB_TOKEN
environment variable for consistency with other scripts.

Updated bug_bash.py to use `-s / --start-date` instead of
`-b / --begin-date`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3b3fc27860)
2022-07-18 23:32:19 +09:00
Christopher Friedt
e423902617 tests: posix: pthread: test for pthread descriptor leaks
Add a simple test to ensure that we can create and join a
single thread `CONFIG_MAX_PTHREAD_COUNT` * 2 times. If
there are leaks, then `pthread_create()` should
eventually return `EAGAIN`. If there are no leaks, then
the test should pass.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit d37350bc19)
2022-07-12 16:02:26 -04:00
Christopher Friedt
018f836c4d posix: pthread: consider PTHREAD_EXITED state in pthread_create
If a thread is joined using `pthread_join()`, then the
internal state would be set to `PTHREAD_EXITED`.

Previously, `pthread_create()` would only consider pthreads
with internal state `PTHREAD_TERMINATED` as candidates for new
threads. However, that causes a descriptor leak.

We should be able to reuse a single thread an infinite number
of times.

Here, we also consider threads with internal state
`PTHREAD_EXITED` as candidates in `pthread_create()`.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit da0398d198)
2022-07-12 16:02:26 -04:00
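An illustrative, self-contained sketch of the descriptor search; the pool layout and state values are hypothetical stand-ins for the internals:

```c
enum {
	PTHREAD_TERMINATED = 4,	/* values illustrative */
	PTHREAD_EXITED = 5,
};

struct posix_thread_sketch {
	int state;
};

static int find_free_slot(struct posix_thread_sketch *pool, int count)
{
	for (int i = 0; i < count; i++) {
		/* Both TERMINATED and (joined) EXITED descriptors are now
		 * reusable, so repeated create/join cycles cannot leak.
		 */
		if (pool[i].state == PTHREAD_TERMINATED ||
		    pool[i].state == PTHREAD_EXITED) {
			return i;
		}
	}

	return -1;	/* pthread_create() maps this to EAGAIN */
}
```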
Stephanos Ioannidis
f4466c4760 tests: cpp: cxx: Add qemu_cortex_a53 as integration platform
This commit adds the `qemu_cortex_a53`, which is an MMU-based platform,
as an integration platform for the C++ subsystem tests.

This ensures that the `test_global_static_ctor_dynmem` test, which
verifies that the dynamic memory allocation service is functional
during the global static object constructor invocation, is tested on
an MMU-based platform, which may have a different libc heap
initialisation path.

In addition to the above, this increases the overall test coverage
ensuring that the C++ subsystem is functional on an MMU-based platform
in general.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 8e6322a21a)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
9a5cbe3568 tests: cpp: cxx: Test with various types of libc
This commit changes the C++ subsystem test, which previously was only
being run with the minimal libc, to be run with all the mainstream C
libraries (minimal libc, newlib, newlib-nano, picolibc).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 03f0693125)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
5b7b15fb2d tests: cpp: cxx: Add dynamic memory availability test for static init
This commit adds a test to verify that the dynamic memory allocation
service (the `new` operator) is available and functional when the C++
static global object constructors are invoked during system
initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit dc4895b876)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
e5a92a1fab tests: cpp: cxx: Add static global constructor invocation test
This commit adds a test to verify that the C++ static global object
constructors are invoked during system initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 6e0063af29)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
74f0b6443a lib: libc: newlib: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the newlib malloc heap
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 43e1c28a25)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
6c16b3492b lib: libc: minimal: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the minimal libc malloc
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit db0748c462)
2022-07-12 12:05:56 -04:00
Christopher Friedt
1831431bab lib: posix: semaphore: use consistent timebase in sem_timedwait
In the Zephyr implementation, `sem_timedwait()` uses a
potentially wildly different timebase for comparison via
`k_uptime_get()` (uptime in ms).

The standard specifies `CLOCK_REALTIME`. However, the real-time
clock can be modified to an arbitrary value via clock_settime()
and there is no guarantee that it will always reflect uptime.

This change ensures that `sem_timedwait()` uses a more
consistent timebase for comparison.

Fixes #46807

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
(cherry picked from commit 9d433c89a2)
2022-07-06 07:30:37 -04:00
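One consistent approach, sketched under the assumption that the POSIX clock API is enabled: derive the relative timeout from `CLOCK_REALTIME` itself instead of comparing against `k_uptime_get()`:

```c
#include <time.h>
#include <zephyr.h>

/* Assumes the POSIX clock API so clock_gettime() is available. */
static k_timeout_t abstime_to_rel_timeout(const struct timespec *abstime)
{
	struct timespec now;
	int64_t rel_ms;

	/* Measure the deadline against the same clock it was built from. */
	clock_gettime(CLOCK_REALTIME, &now);

	rel_ms = ((int64_t)abstime->tv_sec - now.tv_sec) * MSEC_PER_SEC +
		 (abstime->tv_nsec - now.tv_nsec) / 1000000;

	return (rel_ms <= 0) ? K_NO_WAIT : K_MSEC(rel_ms);
}
```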
Torsten Rasmussen
765f63c6b9 cmake: remove xtensa workaround in Zephyr toolchain code.
Remove xtensa specific workaround as this code is now present in Zephyr
SDK cmake code.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 92a1ca61eb)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
062306fc0b cmake: zephyr toolchain code cleanup
With the revert of commit 820d327b46,
some additional code can be cleaned up.

This removes the final left-overs from Zephyr SDK 0.11.1 support and
older.

It further aligns message printing when including Zephyr SDK toolchain
to other toolchain message printing.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit fb3a113eb8)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
8fcf7f1d78 Revert "cmake: Zephyr sdk backward compatibility with 0.11.1 and 0.11.2"
This reverts commit 820d327b46.

Commit b973cdc9e8 updated the minimum
required Zephyr SDK version to 0.13.

Therefore revert commit 820d327b46 as
backward support for 0.11.1 and 0.11.2 is no longer required.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit e747fe73cd)
2022-06-28 12:37:43 -04:00
Vinayak Kariappa Chettimada
f06b3d922c Bluetooth: Controller: Fix PHY update for unsupported PHY
Fix the PHY update procedure to handle an unsupported PHY requested
by the peer central device. A PHY update complete event will not be
generated to the Host, the connection is maintained on the old
PHY, and the Controller will not respond to PDUs received on
the unsupported PHY.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 620a5524a5)
2022-06-28 11:39:37 -04:00
Francois Ramu
b75c012c55 drivers: spi: stm32 spi with dma must enable cs after periph
When using DMA to transfer over SPI, the spi_stm32_cs_control call
is done after enabling the SPI. The same sequence applies
in the transceive_dma function as in the transceive function.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
2022-06-28 11:39:17 -04:00
Stephanos Ioannidis
1efe6de3fe drivers: i2c: Fix infinite recursion in driver unregister function
This commit fixes an infinite recursion in the
`z_vrfy_i2c_slave_driver_unregister` function.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 745b7d202e)
2022-06-21 13:00:55 -04:00
Pavel Vasilyev
39270ed4a0 Bluetooth: Mesh: Fix segmentation when sending proxy message
The previous check in the if-statement would never allow sending the last
segment if msg->len + 2 == MTU * x.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit 1efce43a00)
2022-06-21 12:17:16 -04:00
Pavel Vasilyev
81ffa550ee Bluetooth: Mesh: Check SegN when receiving Transaction Start PDU
When receiving a Transaction Start PDU, ensure that the number of
segments needed to send a Provisioning PDU of TotalLength size is equal
to the SegN value provided in the Transaction Start PDU.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit a63c515679)
2022-06-20 11:31:10 -04:00
Aleksandr Khromykh
8c2965e017 Bluetooth: Mesh: add check for rx buffer overflow in pb adv
There is a potential buffer overflow in PB-ADV.
If a Transaction Continuation PDU comes before the
Transaction Start PDU, the last segment number is set to 0xff.
The current implementation has a strictly limited buffer size.
It is possible to receive a malformed frame with a wrong segment
number. All segments with number 2 and above will be stored
in memory beyond the Rx buffer.

Signed-off-by: Aleksandr Khromykh <Aleksandr.Khromykh@nordicsemi.no>
(cherry picked from commit 6896075b62)
2022-06-20 11:25:30 -04:00
Alexander Wachter
7aa38b4ac8 drivers: can: m_can: fix alignment issues
Make sure that all accesses to the msg_sram
are 32-bit aligned.

Signed-off-by: Alexander Wachter <alexander@wachter.cloud>
2022-06-10 21:01:47 -07:00
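A sketch of the access pattern (helper name hypothetical): copy message RAM contents in whole 32-bit words rather than via `memcpy()`, which may emit narrower bus accesses:

```c
#include <stdint.h>
#include <stddef.h>

/* Copy len_bytes (a multiple of 4) into M_CAN message RAM using only
 * word accesses, since the message RAM rejects byte/halfword writes.
 */
static void mcan_copy_words(volatile uint32_t *dst, const uint32_t *src,
			    size_t len_bytes)
{
	for (size_t i = 0; i < len_bytes / sizeof(uint32_t); i++) {
		dst[i] = src[i];
	}
}
```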
Christopher Friedt
6dd320f791 release: update v2.7.2 release notes
* add backported bugfixes
* add backported fix for CVE-2021-3966

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2022-05-24 12:13:21 -04:00
Alexej Rempel
ecac165d36 logging: shell: fix shell stats null pointer dereference
If CONFIG_SHELL_STATS is disabled, shell->stats is NULL and
must not be dereferenced. Guard against it.

Fixes #44089

Signed-off-by: Alexej Rempel <Alexej.Rempel@de.eckerle-gruppe.com>
(cherry picked from commit 476d199752)
2022-05-24 12:12:22 -04:00
Michał Narajowski
132d90d1bc tests/bluetooth/tester: Refactor Read UUID callback
ATT_READ_BY_TYPE_RSP returns Attribute Data List so handle it in the
application.

This affects GATT/CL/GAR/BV-03-C.

Signed-off-by: Michał Narajowski <michal.narajowski@codecoup.pl>
(cherry picked from commit 54cd46ac68)
2022-05-17 18:33:37 +02:00
Mark Holden
58356313ac coredump: adjust mem_region find in gdbstub
Adjust get_mem_region to not return a region when address == end,
as there will be nothing to read there. Also, a subsequent region
may have that address as its start address and would be a more
appropriate selection.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit d04ab82943)
2022-05-17 18:06:16 +02:00
Piotr Pryga
99cfd3e4d7 Bluetooth: Controller: Fix per adv scheduling issue
The sw_switch implementation uses two parallel groups of
PPIs connecting radio and timer tasks and events.
The groups are used interchangeably: one is set up for
the following radio TX/RX event while the other is in use
(enabled).

A group should be disabled by the timer compare event that
starts the radio to TX/RX a PDU. The timer is responsible for
maintenance of TIFS/TMAFS. The disabled group collects
all PPIs required to maintain the TIFS/TMAFS. After
the time is reached, the radio is started and the group is
disabled. It will be enabled again by the software radio
switch during the next call.

If a group is not disabled, it will work in parallel
to the other one. That causes issues in correct maintenance of
the instant when the radio should be started for the next TX/RX
event, e.g. the radio may be enabled too early.

In case PHY CODED was enabled and periodic advertising
included chained PDUs, which are transmitted back-to-back,
the group disable was missing. The missing case was the
sw_switch function being called with dir_curr and dir_next both
set to SW_SWITCH_TX.

Signed-off-by: Piotr Pryga <piotr.pryga@nordicsemi.no>
2022-05-16 10:07:40 +02:00
Andrei Emeltchenko
780588bd33 edac: ibecc: Add support for EHL SKU13, SKU14, SKU15
Add support for missing EHL SKUs. The information about SKUs is
already public and available in Linux kernel:
https://github.com/torvalds/linux/blob/
38f80f42147ff658aff218edb0a88c37e58bf44f/drivers/edac/
igen6_edac.c#L197-L208

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
(cherry picked from commit f6069aa8fa)
2022-05-10 20:03:44 -04:00
99 changed files with 1903 additions and 554 deletions


@@ -9,7 +9,7 @@ on:
 jobs:
   backport:
-    runs-on: ubuntu-18.04
+    runs-on: ubuntu-20.04
     name: Backport
     steps:
       - name: Backport


@@ -0,0 +1,30 @@
+name: Backport Issue Check
+on:
+  pull_request_target:
+    branches:
+      - v*-branch
+jobs:
+  backport:
+    name: Backport Issue Check
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Check out source code
+        uses: actions/checkout@v3
+      - name: Install Python dependencies
+        run: |
+          sudo pip3 install -U setuptools wheel pip
+          pip3 install -U pygithub
+      - name: Run backport issue checker
+        env:
+          GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
+        run: |
+          ./scripts/release/list_backports.py \
+            -o ${{ github.event.repository.owner.login }} \
+            -r ${{ github.event.repository.name }} \
+            -b ${{ github.event.pull_request.base.ref }} \
+            -p ${{ github.event.pull_request.number }}


@@ -8,7 +8,7 @@ on:
 jobs:
   bluetooth-test-results:
     name: "Publish Bluetooth Test Results"
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-20.04
     if: github.event.workflow_run.conclusion != 'skipped'
     steps:


@@ -10,17 +10,13 @@ on:
       - "soc/posix/**"
       - "arch/posix/**"
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
+  cancel-in-progress: true
 jobs:
-  bluetooth-test-prep:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Cancel Previous Runs
-        uses: styfle/cancel-workflow-action@0.6.0
-        with:
-          access_token: ${{ github.token }}
-  bluetooth-test-build:
-    runs-on: ubuntu-latest
-    needs: bluetooth-test-prep
+  bluetooth-test:
+    runs-on: ubuntu-20.04
     container:
       image: zephyrprojectrtos/ci:v0.18.4
       options: '--entrypoint /bin/bash'
@@ -38,7 +34,7 @@ jobs:
           echo "$HOME/.local/bin" >> $GITHUB_PATH
       - name: checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v3
       - name: west setup
         run: |
@@ -55,7 +51,7 @@ jobs:
       - name: Upload Test Results
         if: always()
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v3
        with:
          name: bluetooth-test-results
          path: |
@@ -64,7 +60,7 @@ jobs:
       - name: Upload Event Details
         if: always()
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@v3
         with:
           name: event
           path: |


@@ -2,22 +2,18 @@ name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
clang-build:
runs-on: zephyr_runner
needs: clang-build-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -30,12 +26,14 @@ jobs:
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -72,7 +70,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
@@ -100,12 +98,12 @@ jobs:
# We can limit scope to just what has changed
if [ -s testplan.csv ]; then
echo "::set-output name=report_needed::1";
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --inline-logs -M -N -v --load-tests testplan.csv --retry-failed 2
else
# if nothing is run, skip reporting step
echo "::set-output name=report_needed::0";
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
@@ -114,7 +112,7 @@ jobs:
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
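These workflow diffs repeat a handful of maintenance patterns that recur throughout this compare: runners are pinned to explicit images (ubuntu-20.04, macos-11, windows-2022) instead of floating *-latest labels; actions are bumped to maintained releases (checkout@v3, cache@v3, upload-artifact@v3, setup-python@v4); the third-party styfle/cancel-workflow-action prep jobs are replaced with GitHub's native concurrency groups using cancel-in-progress; and the ::set-output workflow command, deprecated by GitHub in late 2022, is replaced by appending key=value lines to the file named in $GITHUB_OUTPUT.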

View File

@@ -4,22 +4,18 @@ on:
schedule:
- cron: '25 */3 * * 1-5'
jobs:
codecov-prep:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov:
runs-on: zephyr_runner
needs: codecov-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -32,8 +28,14 @@ jobs:
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -54,7 +56,7 @@ jobs:
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -94,7 +96,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
@@ -108,7 +110,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Download Artifacts
@@ -144,8 +146,8 @@ jobs:
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
message("::set-output name=mergefiles::${MERGELIST}")
message("::set-output name=covfiles::${FILELIST}")
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
- name: Merge coverage files
run: |

View File

@@ -4,17 +4,17 @@ on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip

View File

@@ -4,11 +4,11 @@ on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -20,7 +20,7 @@ jobs:
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -28,13 +28,13 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
@@ -72,7 +72,7 @@ jobs:
./scripts/ci/check_compliance.py -m Codeowners -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: compliance.xml

View File

@@ -12,7 +12,7 @@ on:
jobs:
get_version:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -28,7 +28,7 @@ jobs:
pip3 install gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0

View File

@@ -6,10 +6,14 @@ name: Devicetree script tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
@@ -21,20 +25,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -42,7 +48,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -51,7 +57,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,10 +5,10 @@ name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
- cron: '0 */3 * * *'
push:
tags:
- v*
- v*
pull_request:
paths:
- 'doc/**'
@@ -34,18 +34,23 @@ env:
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen graphviz
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -69,26 +74,53 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W -j auto" make -C doc html
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
DOC_TARGET="html-fast"
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W" make -C doc ${DOC_TARGET}
- name: compress-docs
run: |
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
echo "${PR_NUM}" > pr_num
echo "::notice:: Documentation will be available shortly at: ${DOC_URL}"
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
path: pr_num
doc-build-pdf:
name: "Documentation Build (PDF)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
@@ -96,7 +128,7 @@ jobs:
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -123,7 +155,7 @@ jobs:
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: pdf-output
path: doc/_build/latex/zephyr.pdf

.github/workflows/doc-publish-pr.yml (new file, 63 lines)
View File

@@ -0,0 +1,63 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish (Pull Request)
on:
workflow_run:
workflows: ["Documentation Build"]
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Load PR number
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
id: check-pr
uses: carpentries/actions/check-valid-pr@v0.8
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete
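Note the build/publish split: the pull_request build runs without secrets, while this workflow_run-triggered publisher holds the AWS credentials. The check-valid-pr step re-verifies the PR number carried in the pr_num artifact against the actual head SHA, which appears intended to keep a forged artifact from an untrusted fork build from redirecting the S3 upload.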

View File

@@ -2,23 +2,21 @@
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Publish Documentation
name: Documentation Publish
on:
workflow_run:
workflows: ["Documentation Build"]
branches:
- main
- v*
tags:
- v*
- main
- v*
types:
- completed
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:

View File

@@ -6,13 +6,13 @@ on:
jobs:
check-errno:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: zephyrprojectrtos/ci:v0.18.4
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Run errno.py
run: |

View File

@@ -13,19 +13,14 @@ on:
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-tracking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-tracking-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -44,7 +39,7 @@ jobs:
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0

View File

@@ -2,19 +2,14 @@ name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-delta:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -25,16 +20,12 @@ jobs:
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0

View File

@@ -14,13 +14,13 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/master/.github/workflows/issues-report-config.json
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
@@ -34,7 +34,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}

View File

@@ -4,7 +4,7 @@ on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Scan code for licenses
steps:
- name: Checkout the code
@@ -15,7 +15,7 @@ jobs:
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts

View File

@@ -6,11 +6,11 @@ on:
jobs:
contribs:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}

View File

@@ -7,15 +7,16 @@ on:
jobs:
release:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
@@ -23,7 +24,7 @@ jobs:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx

View File

@@ -6,7 +6,7 @@ on:
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3

View File

@@ -2,29 +2,27 @@ name: Run tests with twister
on:
push:
branches:
- v2.7-branch
pull_request_target:
branches:
- v2.7-branch
schedule:
# Run at 00:00 on Saturday
- cron: '20 0 * * 6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
twister-build-cleanup:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
twister-build-prep:
runs-on: zephyr_runner
needs: twister-build-cleanup
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -38,14 +36,16 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -102,18 +102,18 @@ jobs:
else
size=0
fi
echo "::set-output name=subset::${subset}";
echo "::set-output name=size::${size}";
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr_runner
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -128,13 +128,14 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -173,7 +174,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -220,7 +221,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -231,7 +232,7 @@ jobs:
twister-test-results:
name: "Publish Unit Tests Results"
needs: twister-build
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()

View File

@@ -5,12 +5,16 @@ name: Twister TestSuite
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
@@ -24,17 +28,17 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,11 +5,15 @@ name: Zephyr West Command Tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -22,20 +26,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -43,7 +49,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -52,7 +58,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 2
PATCHLEVEL = 4
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -56,7 +56,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one slave cpu can read it per time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;

View File

@@ -186,8 +186,9 @@ To operate bluetooth on Nucleo WB55RG, Cortex-M0 core should be flashed with
a valid STM32WB Coprocessor binaries (either 'Full stack' or 'HCI Layer').
These binaries are delivered in STM32WB Cube packages, under
Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x/
To date, interoperability and backward compatibility has been tested and is
guaranteed up to version 1.5 of STM32Cube package releases.
For compatibility information with the various versions of these binaries,
please check `modules/hal/stm32/lib/stm32wb/hci/README <https://github.com/zephyrproject-rtos/hal_stm32/blob/main/lib/stm32wb/hci/README>`__
in the hal_stm32 repo.
Connections and IOs
===================

View File

@@ -1,6 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
if(ZEPHYR_SDK_HOST_TOOLS)
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
endif()
# dtc is an optional dependency
find_program(

View File

@@ -163,12 +163,23 @@ endforeach()
unset(EXTRA_KCONFIG_OPTIONS)
get_cmake_property(cache_variable_names CACHE_VARIABLES)
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if("${name}" MATCHES "^CLI_CONFIG_")
# Variable was set by user in earlier invocation, let's append to extra
# config unless a new value has been given.
string(REGEX REPLACE "^CLI_" "" org_name ${name})
if(NOT DEFINED ${org_name})
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${org_name}=${${name}}"
)
endif()
elseif("${name}" MATCHES "^CONFIG_")
# When a cache variable starts with 'CONFIG_', it is assumed to be
# a Kconfig symbol assignment from the CMake command line.
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${name}=${${name}}"
)
set(CLI_${name} "${${name}}")
list(APPEND cli_config_list ${name})
endif()
endforeach()
@@ -296,21 +307,20 @@ add_custom_target(config-twister DEPENDS ${DOTCONFIG})
# Remove the CLI Kconfig symbols from the namespace and
# CMakeCache.txt. If the symbols end up in DOTCONFIG they will be
# re-introduced to the namespace through 'import_kconfig'.
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
unset(${name})
unset(${name} CACHE)
endif()
foreach (name ${cli_config_list})
unset(${name})
unset(${name} CACHE)
endforeach()
# Parse the lines prefixed with CONFIG_ in the .config file from Kconfig
import_kconfig(CONFIG_ ${DOTCONFIG})
# Re-introduce the CLI Kconfig symbols that survived
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if(DEFINED ${name})
set(${name} ${${name}} CACHE STRING "")
endif()
# Cache the CLI Kconfig symbols that survived through Kconfig, prefixed with CLI_.
# Remove entries that may have changed since earlier runs if they no longer appear.
foreach (name ${cli_config_list})
if(DEFINED ${name})
set(CLI_${name} ${CLI_${name}} CACHE INTERNAL "")
else()
unset(CLI_${name} CACHE)
endif()
endforeach()
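In effect, a Kconfig symbol passed on the CMake command line (for example -DCONFIG_ASSERT=y) is now mirrored into a CLI_-prefixed cache entry; subsequent CMake re-runs re-apply it unless the user supplies a new value, rather than losing the setting once the CONFIG_ cache entry has been unset and re-imported from .config.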

View File

@@ -1,8 +0,0 @@
# Zephyr 0.11 SDK Toolchain
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
config TOOLCHAIN_ZEPHYR_0_11
def_bool y
select HAS_NEWLIB_LIBC_NANO if (ARC || ARM || RISCV)

View File

@@ -1,34 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(TOOLCHAIN_HOME ${ZEPHYR_SDK_INSTALL_DIR})
set(COMPILER gcc)
set(LINKER ld)
set(BINTOOLS gnu)
# Find some toolchain that is distributed with this particular SDK
file(GLOB toolchain_paths
LIST_DIRECTORIES true
${TOOLCHAIN_HOME}/xtensa/*/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-eabi
)
if(toolchain_paths)
list(GET toolchain_paths 0 some_toolchain_path)
get_filename_component(one_toolchain_root "${some_toolchain_path}" DIRECTORY)
get_filename_component(one_toolchain "${some_toolchain_path}" NAME)
set(CROSS_COMPILE_TARGET ${one_toolchain})
set(SYSROOT_TARGET ${one_toolchain})
endif()
if(NOT CROSS_COMPILE_TARGET)
message(FATAL_ERROR "Unable to find 'x86_64-zephyr-elf' or any other architecture in ${TOOLCHAIN_HOME}")
endif()
set(CROSS_COMPILE ${one_toolchain_root}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
set(SYSROOT_DIR ${one_toolchain_root}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(TOOLCHAIN_HAS_NEWLIB ON CACHE BOOL "True if toolchain supports newlib")

View File

@@ -1,12 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(HOST_TOOLS_HOME ${ZEPHYR_SDK_INSTALL_DIR}/sysroots/${TOOLCHAIN_ARCH}-pokysdk-linux)
# Path used for searching by the find_*() functions, with appropriate
# suffixes added. Ensures that the SDK's host tools will be found when
# we call, e.g. find_program(QEMU qemu-system-x86)
list(APPEND CMAKE_PREFIX_PATH ${HOST_TOOLS_HOME}/usr)
# TODO: Use find_* somehow for these as well?
set_ifndef(QEMU_BIOS ${HOST_TOOLS_HOME}/usr/share/qemu)
set_ifndef(OPENOCD_DEFAULT_PATH ${HOST_TOOLS_HOME}/usr/share/openocd/scripts)

View File

@@ -1,53 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(CROSS_COMPILE_TARGET_arm arm-zephyr-eabi)
set(CROSS_COMPILE_TARGET_arm64 aarch64-zephyr-elf)
set(CROSS_COMPILE_TARGET_nios2 nios2-zephyr-elf)
set(CROSS_COMPILE_TARGET_riscv riscv64-zephyr-elf)
set(CROSS_COMPILE_TARGET_mips mipsel-zephyr-elf)
set(CROSS_COMPILE_TARGET_xtensa xtensa-zephyr-elf)
set(CROSS_COMPILE_TARGET_arc arc-zephyr-elf)
set(CROSS_COMPILE_TARGET_x86 x86_64-zephyr-elf)
set(CROSS_COMPILE_TARGET_sparc sparc-zephyr-elf)
set(CROSS_COMPILE_TARGET ${CROSS_COMPILE_TARGET_${ARCH}})
set(SYSROOT_TARGET ${CROSS_COMPILE_TARGET})
if("${ARCH}" STREQUAL "xtensa")
# Xtensa GCC needs a different toolchain per SOC
if("${SOC_SERIES}" STREQUAL "cavs_v15")
set(SR_XT_TC_SOC intel_apl_adsp)
elseif("${SOC_SERIES}" STREQUAL "cavs_v18")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v20")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v25")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "baytrail_adsp")
set(SR_XT_TC_SOC intel_byt_adsp)
elseif("${SOC_SERIES}" STREQUAL "broadwell_adsp")
set(SR_XT_TC_SOC intel_bdw_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8")
set(SR_XT_TC_SOC nxp_imx_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8m")
set(SR_XT_TC_SOC nxp_imx8m_adsp)
else()
message(FATAL_ERROR "Not compiler set for SOC_SERIES ${SOC_SERIES}")
endif()
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
else()
# Non-Xtensa SDK toolchains follow a simpler convention
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
if("${ARCH}" STREQUAL "x86")
if(CONFIG_X86_64)
list(APPEND TOOLCHAIN_C_FLAGS -m64)
list(APPEND TOOLCHAIN_LD_FLAGS -m64)
else()
list(APPEND TOOLCHAIN_C_FLAGS -m32)
list(APPEND TOOLCHAIN_LD_FLAGS -m32)
endif()
endif()

View File

@@ -1,13 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/generic.cmake)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR})
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
endif()
message(STATUS "Found toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,55 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
# This is the minimum required version for Zephyr to work (Old style)
set(REQUIRED_SDK_VER 0.11.1)
cmake_host_system_information(RESULT TOOLCHAIN_ARCH QUERY OS_PLATFORM)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
if(NOT DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Until https://github.com/zephyrproject-rtos/zephyr/issues/4912 is
# resolved we use ZEPHYR_SDK_INSTALL_DIR to determine whether the user
# wants to use the Zephyr SDK or not.
return()
endif()
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
if(NOT DEFINED SDK_VERSION)
if(ZEPHYR_TOOLCHAIN_VARIANT AND ZEPHYR_SDK_INSTALL_DIR)
# Manual detection for Zephyr SDK 0.11.1 and 0.11.2 for backward compatibility.
set(sdk_version_path ${ZEPHYR_SDK_INSTALL_DIR}/sdk_version)
if(NOT (EXISTS ${sdk_version_path}))
message(FATAL_ERROR
"The file '${ZEPHYR_SDK_INSTALL_DIR}/sdk_version' was not found. \
Is ZEPHYR_SDK_INSTALL_DIR=${ZEPHYR_SDK_INSTALL_DIR} misconfigured?")
endif()
# Read version as published by the SDK
file(READ ${sdk_version_path} SDK_VERSION_PRE1)
# Remove any pre-release data, for example 0.10.0-beta4 -> 0.10.0
string(REGEX REPLACE "-.*" "" SDK_VERSION_PRE2 ${SDK_VERSION_PRE1})
# Strip any trailing spaces/newlines from the version string
string(STRIP ${SDK_VERSION_PRE2} SDK_VERSION)
string(REGEX MATCH "([0-9]*).([0-9]*)" SDK_MAJOR_MINOR ${SDK_VERSION})
string(REGEX MATCH "([0-9]+)\.([0-9]+)\.([0-9]+)" SDK_MAJOR_MINOR_MICRO ${SDK_VERSION})
#at least 0.0.0
if(NOT SDK_MAJOR_MINOR_MICRO)
message(FATAL_ERROR "sdk version: ${SDK_MAJOR_MINOR_MICRO} improper format.
Expected format: x.y.z
Check whether the Zephyr SDK was installed correctly.
")
endif()
endif()
endif()
message(STATUS "Using toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/host-tools.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
endif()
message(STATUS "Found host-tools: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,18 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
elseif(("${ARCH}" STREQUAL "sparc") AND (${SDK_VERSION} VERSION_LESS 0.12))
# SDK 0.11.3, 0.11.4 does not have SPARC target support.
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)
# Workaround, FIXME: Waiting for new SDK.
if("${ARCH}" STREQUAL "xtensa")
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
endif()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)

View File

@@ -90,10 +90,6 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
if (NOT Zephyr-sdk_CONSIDERED_VERSIONS)
set(error_msg "ZEPHYR_TOOLCHAIN_VARIANT not specified and no Zephyr SDK is installed.\n")
string(APPEND error_msg "Please set ZEPHYR_TOOLCHAIN_VARIANT to the toolchain to use or install the Zephyr SDK.")
if(NOT ZEPHYR_TOOLCHAIN_VARIANT AND NOT ZEPHYR_SDK_INSTALL_DIR)
set(error_note "Note: If you are using Zephyr SDK 0.11.1 or 0.11.2, remember to set ZEPHYR_SDK_INSTALL_DIR and ZEPHYR_TOOLCHAIN_VARIANT")
endif()
else()
# Note: When CMake minimum version becomes >= 3.17, change this loop into:
# foreach(version config IN ZIP_LISTS Zephyr-sdk_CONSIDERED_VERSIONS Zephyr-sdk_CONSIDERED_CONFIGS)
@@ -116,11 +112,17 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
message(FATAL_ERROR "${error_msg}
The Zephyr SDK can be downloaded from:
https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}/zephyr-sdk-${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}-setup.run
${error_note}
")
endif()
if(DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
# Use the Zephyr SDK host-tools.
set(ZEPHYR_SDK_HOST_TOOLS TRUE)
endif()
if(CMAKE_SCRIPT_MODE_FILE)
if("${FORMAT}" STREQUAL "json")
set(json "{\"ZEPHYR_TOOLCHAIN_VARIANT\" : \"${ZEPHYR_TOOLCHAIN_VARIANT}\", ")

View File

@@ -2,24 +2,240 @@
.. _zephyr_2.7:
.. _zephyr_2.7.1:
.. _zephyr_2.7.4:
Zephyr 2.7.1
Zephyr 2.7.4
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.3 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`25417` - net: socket: socketpair: check for ISR context
* :github:`41012` - irq_enable() doesn't support enabling NVIC IRQ number more than 127
* :github:`44070` - west spdx TypeError: 'NoneType' object is not iterable
* :github:`46072` - subsys/hawkBit: Debug log error in hawkbit example "CONFIG_LOG_STRDUP_MAX_STRING"
* :github:`48056` - Possible null pointer dereference after k_mutex_lock times out
* :github:`49102` - hawkbit - dns name randomly not resolved
* :github:`49139` - can't run west or DT tests on windows / py 3.6
* :github:`49564` - Newer versions of pylink are not supported in latest zephyr 2.7 release
* :github:`49569` - Backport cmake string cache fix to v2.7 branch
* :github:`50221` - tests: debug: test case subsys/debug/coredump failed on acrn_ehl_crb on branch v2.7
* :github:`50467` - Possible memory corruption on ARC when userspace is enabled
* :github:`50468` - Incorrect Z_THREAD_STACK_BUFFER in arch_start_cpu for Xtensa
* :github:`50961` - drivers: counter: Update counter_set_channel_alarm documentation
* :github:`51714` - Bluetooth: Application with buffer that cannot unref it in disconnect handler leads to advertising issues
* :github:`51776` - POSIX API is not portable across arches
* :github:`52247` - mgmt: mcumgr: image upload, then image erase, then image upload does not restart upload from start
* :github:`52517` - lib: posix: sleep() does not return the number of seconds left if interrupted
* :github:`52518` - lib: posix: usleep() does not follow the POSIX spec
* :github:`52542` - lib: posix: make sleep() and usleep() standards-compliant
* :github:`52591` - mcumgr user data size out of sync with net buffer user data size
* :github:`52829` - kernel/sched: Fix SMP race on pend
* :github:`53088` - Unable to change initialization priority of logging subsys
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* (N/A)
* CVE-2022-2741: `Zephyr project bug tracker GHSA-hx5v-j59q-c3j8
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hx5v-j59q-c3j8>`_
* CVE-2022-1841: `Zephyr project bug tracker GHSA-5c3j-p8cr-2pgh
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-5c3j-p8cr-2pgh>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.3:
Zephyr 2.7.3
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.2 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`41074` - can_mcan_send sends corrupted CAN frames with a byte-by-byte memcpy implementation
* :github:`43479` - Bluetooth: Controller: Fix per adv scheduling issue
* :github:`43694` - drivers: spi: stm32 spi with dma must enable cs after periph
* :github:`44089` - logging: shell backend: null-deref when logs are dropped
* :github:`45341` - Add new EHL SKUs for IBECC
* :github:`45529` - GdbStub get_mem_region bugfix
* :github:`46621` - drivers: i2c: Infinite recursion in driver unregister function
* :github:`46698` - sm351 driver faults when using global thread
* :github:`46706` - add missing checks for segment number
* :github:`46757` - Bluetooth: Controller: Missing validation of unsupported PHY when performing PHY update
* :github:`46807` - lib: posix: semaphore: use consistent timebase in sem_timedwait
* :github:`46822` - L2CAP disconnected packet timing in ecred reconf function
* :github:`46994` - Incorrect Xtensa toolchain path resolution
* :github:`47356` - cpp: global static object initialisation may fail for MMU and MPU platforms
* :github:`47609` - posix: pthread: descriptor leak with pthread_join
* :github:`47955` - drivers: can: various RTR fixes
* :github:`48249` - boards: nucleo_wb55rg: documentation BLE binary compatibility issue
* :github:`48271` - net: Possible net_pkt leak in ipv6 multicast forwarding
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2022-2741: Under embargo until 2022-10-14
* CVE-2022-1042: `Zephyr project bug tracker GHSA-j7v7-w73r-mm5x
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-j7v7-w73r-mm5x>`_
* CVE-2022-1041: `Zephyr project bug tracker GHSA-p449-9hv9-pj38
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-p449-9hv9-pj38>`_
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.2:
Zephyr 2.7.2
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.1 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`23419` - posix: clock: No thread safety clock_getime / clock_settime
* :github:`30367` - TCP2 does not send our MSS to peer
* :github:`37389` - nucleo_g0b1re: Swapping image in mcuboot results in hard fault and softbricks the device
* :github:`38268` - Multiple defects in "Multi Producer Single Consumer Packet Buffer" library
* :github:`38576` - net shell: self-connecting to TCP might lead to a crash
* :github:`39184` - HawkBit hash mismatch
* :github:`39242` - net: sockets: Zephyr Fatal in dns_resolve_cb if dns request was attempted in offline state
* :github:`39399` - linker: Missing align __itcm_load_start / __dtcm_data_load_start linker symbols
* :github:`39608` - stm32: lpuart: 9600 baudrate doesn't work
* :github:`39609` - spi: slave: division by zero in timeout calculation
* :github:`39660` - poll() not notified when a TLS/TCP connection is closed without TLS close_notify
* :github:`39687` - sensor: qdec_nrfx: PM callback has incorrect signature
* :github:`39774` - modem: uart mux reading optimization never used
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`40163` - Use correct clock frequency for systick+DWT
* :github:`40464` - Dereferencing NULL with getsockname() on TI Simplelink Platform
* :github:`40578` - MODBUS RS-485 transceiver support broken on several platforms due to DE race condition
* :github:`40614` - poll: the code judgment condition is always true
* :github:`40640` - drivers: usb_dc_native_posix: segfault when using composite USB device
* :github:`40730` - More power supply modes on STM32H7XX
* :github:`40775` - stm32: multi-threading broken after #40173
* :github:`40795` - Timer signal thread execution loop break SMP on ARM64
* :github:`40925` - mesh_badge not working reel_board_v2
* :github:`40985` - net: icmpv6: Add support for Route Info option in Router Advertisement
* :github:`41026` - LoRa: sx126x: DIO1 interrupt left enabled in sleep mode
* :github:`41077` - console: gsm_mux: could not send more than 128 bytes of data on dlci
* :github:`41089` - power modes for STM32H7
* :github:`41095` - libc: newlib: 'gettimeofday' causes stack overflow on non-POSIX builds
* :github:`41237` - drivers: ieee802154_dw1000: use dedicated workqueue
* :github:`41240` - logging can get messed up when messages are dropped
* :github:`41284` - pthread_cond_wait return value incorrect
* :github:`41339` - stm32, Unable to read UART while checking from Framing error.
* :github:`41488` - Stall logging on nrf52840
* :github:`41499` - drivers: iwdg: stm32: WDT_OPT_PAUSE_HALTED_BY_DBG might not work
* :github:`41503` - including net/socket.h fails with redefinition of struct zsock_timeval (sometimes :-) )
* :github:`41529` - documentation: generate Doxygen tag file
* :github:`41536` - Backport STM32 SMPS Support to v2.7.0
* :github:`41582` - stm32h7: CSI as PLL source is broken
* :github:`41683` - http_client: Unreliable rsp->body_start pointer
* :github:`41915` - regression: Build fails after switching logging to V2
* :github:`41942` - k_delayable_work being used as k_work in work's handler
* :github:`41952` - Log timestamp overflows when using LOGv2
* :github:`42164` - tests/bluetooth/tester broken after switch to logging v2
* :github:`42271` - drivers: can: m_can: The can_set_bitrate() function doesn't work.
* :github:`42299` - spi: nRF HAL driver asserts when PM is used
* :github:`42373` - add k_spin_lock() to doxygen prior to v3.0 release
* :github:`42581` - include: drivers: clock_control: stm32 incorrect DT_PROP is used for xtpre
* :github:`42615` - Bluetooth: Controller: Missing ticks slot offset calculation in Periodic Advertising event scheduling
* :github:`42622` - pm: pm_device structure bigger than necessary when PM_DEVICE_RUNTIME not set
* :github:`42631` - Unable to identify owner of net_mgmt_lock easily
* :github:`42825` - MQTT client disconnection (EAGAIN) on publish with big payload
* :github:`42862` - Bluetooth: L2CAP: Security check on l2cap request is wrong
* :github:`43117` - Not possible to create more than one shield.
* :github:`43130` - STM32WL ADC idles / doesn't work
* :github:`43176` - net/icmpv4: client possible to ddos itself when there's an error for the broadcasted packet
* :github:`43177` - net: shell: errno not cleared before calling the strtol
* :github:`43178` - net: ip: route: log_strdup misuse
* :github:`43179` - net: tcp: forever loop in tcp_resend_data
* :github:`43180` - net: tcp: possible deadlock in tcp_conn_unref()
* :github:`43181` - net: sockets: net_pkt leak in accept
* :github:`43182` - net: arp: ARP retransmission source address selection
* :github:`43183` - net: mqtt: setsockopt leak on failure
* :github:`43184` - arm: Wrong macro used for z_interrupt_stacks declaration in stack.h
* :github:`43185` - arm: cortex-m: uninitialised ptr_esf in get_esf() in fault.c
* :github:`43470` - wifi: esp_at: race condition on mutex's leading to deadlock
* :github:`43490` - net: sockets: userspace accept() crashes with NULL addr/addrlen pointer
* :github:`43548` - gen_relocate_app truncates files on incremental builds
* :github:`43572` - stm32: wrong clock the LSI freq for stm32l0x mcus
* :github:`43580` - hl7800: tcp stack freezes on slow response from modem
* :github:`43807` - Test "cpp.libcxx.newlib.exception" failed on platforms which use zephyr.bin to run tests.
* :github:`43839` - Bluetooth: controller: missing NULL assign to df_cfg in ll_adv_set
* :github:`43853` - X86 MSI messages always get to BSP core (need a fix to be backported)
* :github:`43858` - mcumgr seems to lock up when it receives command for group that does not exist
* :github:`44107` - The SMP nsim boards are started incorrectly when launching on real HW
* :github:`44310` - net: gptp: type mismatch calculation error in gptp_mi
* :github:`44336` - nucleo_wb55rg: stm32cubeprogrammer runner is missing for twister tests
* :github:`44337` - twister: Miss sn option to stm32cubeprogrammer runner
* :github:`44352` - stm32l5x boards missing the openocd runner
* :github:`44497` - Add guide for disabling MSD on JLink OB devices and link to from smp_svr page
* :github:`44531` - bl654_usb without mcuboot maximum image size is not limited
* :github:`44886` - Unable to boot Zephyr on FVP_BaseR_AEMv8R
* :github:`44902` - x86: FPU registers are not initialised for userspace (eager FPU sharing)
* :github:`45869` - doc: update requirements
* :github:`45870` - drivers: virt_ivshmem: Allow multiple instances of ivShMem devices
* :github:`45871` - ci: split Bluetooth workflow
* :github:`45872` - ci: make git credentials non-persistent
* :github:`45873` - soc: esp32: use PYTHON_EXECUTABLE from build system
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.1:
Zephyr 2.7.1
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************

View File

@@ -11,6 +11,9 @@
#include "can_loopback.h"
#include <logging/log.h>
#include "can_utils.h"
LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
K_KERNEL_STACK_DEFINE(tx_thread_stack,
@@ -41,13 +44,6 @@ static void dispatch_frame(const struct zcan_frame *frame,
filter->rx_cb(&frame_tmp, filter->cb_arg);
}
static inline int check_filter_match(const struct zcan_frame *frame,
const struct zcan_filter *filter)
{
return ((filter->id & filter->id_mask) ==
(frame->id & filter->id_mask));
}
void tx_thread(void *data_arg, void *arg2, void *arg3)
{
ARG_UNUSED(arg2);
@@ -63,7 +59,7 @@ void tx_thread(void *data_arg, void *arg2, void *arg3)
for (int i = 0; i < CONFIG_CAN_MAX_FILTER; i++) {
filter = &data->filters[i];
if (filter->rx_cb &&
check_filter_match(&frame.frame, &filter->filter)) {
can_utils_filter_match(&frame.frame, &filter->filter) != 0) {
dispatch_frame(&frame.frame, filter);
}
}

View File

@@ -23,6 +23,34 @@ LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
#define MCAN_MAX_DLC CAN_MAX_DLC
#endif
static void memcpy32_volatile(volatile void *dst_, const volatile void *src_,
size_t len)
{
volatile uint32_t *dst = dst_;
const volatile uint32_t *src = src_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst = *src;
++dst;
++src;
}
}
static void memset32_volatile(volatile void *dst_, uint32_t val, size_t len)
{
volatile uint32_t *dst = dst_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst++ = val;
}
}
static int can_exit_sleep_mode(struct can_mcan_reg *can)
{
uint32_t start_time;
@@ -389,12 +417,7 @@ int can_mcan_init(const struct device *dev, const struct can_mcan_config *cfg,
}
/* No memset because only aligned ptr are allowed */
for (uint32_t *ptr = (uint32_t *)msg_ram;
ptr < (uint32_t *)msg_ram +
sizeof(struct can_mcan_msg_sram) / sizeof(uint32_t);
ptr++) {
*ptr = 0;
}
memset32_volatile(msg_ram, 0, sizeof(struct can_mcan_msg_sram));
return 0;
}
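memcpy32_volatile() and memset32_volatile() exist because the M_CAN message RAM only tolerates aligned 32-bit accesses; a byte-wise memcpy() corrupts frames (see issue 41074 in the release notes above). A stand-alone sketch of the idea, with illustrative names:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for memcpy32_volatile(): each element moves in
 * exactly one 32-bit access, never split into byte transactions. */
static void copy_words(volatile uint32_t *dst, const volatile uint32_t *src,
                       size_t len_bytes)
{
    assert(len_bytes % sizeof(uint32_t) == 0);

    for (size_t i = 0; i < len_bytes / sizeof(uint32_t); i++) {
        dst[i] = src[i];
    }
}

int main(void)
{
    uint32_t src[2] = { 0x11223344u, 0x55667788u };
    uint32_t dst[2] = { 0 };

    copy_words(dst, src, sizeof(src));
    return dst[1] == src[1] ? 0 : 1;
}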
@@ -486,15 +509,17 @@ static void can_mcan_get_message(struct can_mcan_data *data,
uint32_t get_idx, filt_idx;
struct zcan_frame frame;
can_rx_callback_t cb;
volatile uint32_t *src, *dst, *end;
int data_length;
void *cb_arg;
struct can_mcan_rx_fifo_hdr hdr;
bool rtr_filter_mask;
bool rtr_filter;
while ((*fifo_status_reg & CAN_MCAN_RXF0S_F0FL)) {
get_idx = (*fifo_status_reg & CAN_MCAN_RXF0S_F0GI) >>
CAN_MCAN_RXF0S_F0GI_POS;
hdr = fifo[get_idx].hdr;
memcpy32_volatile(&hdr, &fifo[get_idx].hdr,
sizeof(struct can_mcan_rx_fifo_hdr));
if (hdr.xtd) {
frame.id = hdr.ext_id;
@@ -514,24 +539,25 @@ static void can_mcan_get_message(struct can_mcan_data *data,
filt_idx = hdr.fidx;
/* Check if RTR must match */
if ((hdr.xtd && data->ext_filt_rtr_mask & (1U << filt_idx) &&
((data->ext_filt_rtr >> filt_idx) & 1U) != frame.rtr) ||
(data->std_filt_rtr_mask & (1U << filt_idx) &&
((data->std_filt_rtr >> filt_idx) & 1U) != frame.rtr)) {
if (hdr.xtd != 0) {
rtr_filter_mask = (data->ext_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->ext_filt_rtr & BIT(filt_idx)) != 0;
} else {
rtr_filter_mask = (data->std_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->std_filt_rtr & BIT(filt_idx)) != 0;
}
if (rtr_filter_mask && (rtr_filter != frame.rtr)) {
/* RTR bit does not match filter RTR mask and bit, drop frame */
*fifo_ack_reg = get_idx;
continue;
}
data_length = can_dlc_to_bytes(frame.dlc);
if (data_length <= sizeof(frame.data)) {
/* data needs to be written in 32 bit blocks!*/
for (src = fifo[get_idx].data_32,
dst = frame.data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(frame.data_32, fifo[get_idx].data_32,
ROUND_UP(data_length, sizeof(uint32_t)));
if (frame.id_type == CAN_STANDARD_IDENTIFIER) {
LOG_DBG("Frame on filter %d, ID: 0x%x",
@@ -647,8 +673,6 @@ int can_mcan_send(const struct can_mcan_config *cfg,
uint32_t put_idx;
int ret;
struct can_mcan_mm mm;
volatile uint32_t *dst, *end;
const uint32_t *src;
LOG_DBG("Sending %d bytes. Id: 0x%x, ID type: %s %s %s %s",
data_length, frame->id,
@@ -696,15 +720,9 @@ int can_mcan_send(const struct can_mcan_config *cfg,
tx_hdr.ext_id = frame->id;
}
msg_ram->tx_buffer[put_idx].hdr = tx_hdr;
for (src = frame->data_32,
dst = msg_ram->tx_buffer[put_idx].data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(&msg_ram->tx_buffer[put_idx].hdr, &tx_hdr, sizeof(tx_hdr));
memcpy32_volatile(msg_ram->tx_buffer[put_idx].data_32, frame->data_32,
ROUND_UP(data_length, 4));
data->tx_fin_cb[put_idx] = callback;
data->tx_fin_cb_arg[put_idx] = callback_arg;
@@ -761,7 +779,8 @@ int can_mcan_attach_std(struct can_mcan_data *data,
filter_element.sfce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->std_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->std_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_std_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -820,7 +839,8 @@ static int can_mcan_attach_ext(struct can_mcan_data *data,
filter_element.efce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->ext_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->ext_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_ext_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -874,9 +894,6 @@ int can_mcan_attach_isr(struct can_mcan_data *data,
void can_mcan_detach(struct can_mcan_data *data,
struct can_mcan_msg_sram *msg_ram, int filter_nr)
{
const struct can_mcan_ext_filter ext_filter = {0};
const struct can_mcan_std_filter std_filter = {0};
k_mutex_lock(&data->inst_mutex, K_FOREVER);
if (filter_nr >= NUM_STD_FILTER_DATA) {
filter_nr -= NUM_STD_FILTER_DATA;
@@ -885,10 +902,12 @@ void can_mcan_detach(struct can_mcan_data *data,
return;
}
msg_ram->ext_filt[filter_nr] = ext_filter;
memset32_volatile(&msg_ram->ext_filt[filter_nr], 0,
sizeof(struct can_mcan_ext_filter));
data->rx_cb_ext[filter_nr] = NULL;
} else {
msg_ram->std_filt[filter_nr] = std_filter;
memset32_volatile(&msg_ram->std_filt[filter_nr], 0,
sizeof(struct can_mcan_std_filter));
data->rx_cb_std[filter_nr] = NULL;
}

View File

@@ -48,7 +48,7 @@ struct can_mcan_rx_fifo_hdr {
volatile uint32_t res : 2; /* Reserved */
volatile uint32_t fidx : 7; /* Filter Index */
volatile uint32_t anmf : 1; /* Accepted non-matching frame */
} __packed;
} __packed __aligned(4);
struct can_mcan_rx_fifo {
struct can_mcan_rx_fifo_hdr hdr;
@@ -56,7 +56,7 @@ struct can_mcan_rx_fifo {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
struct can_mcan_mm {
volatile uint8_t idx : 5;
@@ -84,7 +84,7 @@ struct can_mcan_tx_buffer_hdr {
volatile uint8_t res2 : 1; /* Reserved */
volatile uint8_t efc : 1; /* Event FIFO control (Store Tx events) */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
struct can_mcan_tx_buffer {
struct can_mcan_tx_buffer_hdr hdr;
@@ -92,7 +92,7 @@ struct can_mcan_tx_buffer {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_TE_TX 0x1 /* TX event */
#define CAN_MCAN_TE_TXC 0x2 /* TX event in spite of cancellation */
@@ -109,7 +109,7 @@ struct can_mcan_tx_event_fifo {
volatile uint8_t fdf : 1; /* FD Format */
volatile uint8_t et : 2; /* Event type */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_FCE_DISABLE 0x0
#define CAN_MCAN_FCE_FIFO0 0x1
@@ -130,7 +130,7 @@ struct can_mcan_std_filter {
volatile uint32_t id1 : 11;
volatile uint32_t sfce : 3; /* Filter config */
volatile uint32_t sft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_EFT_RANGE_XIDAM 0x0
#define CAN_MCAN_EFT_DUAL 0x1
@@ -143,7 +143,7 @@ struct can_mcan_ext_filter {
volatile uint32_t id2 : 29; /* ID2 for dual or range, mask otherwise */
volatile uint32_t res : 1;
volatile uint32_t eft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
struct can_mcan_msg_sram {
volatile struct can_mcan_std_filter std_filt[NUM_STD_FILTER_ELEMENTS];
@@ -153,7 +153,7 @@ struct can_mcan_msg_sram {
volatile struct can_mcan_rx_fifo rx_buffer[NUM_RX_BUF_ELEMENTS];
volatile struct can_mcan_tx_event_fifo tx_event_fifo[NUM_TX_BUF_ELEMENTS];
volatile struct can_mcan_tx_buffer tx_buffer[NUM_TX_BUF_ELEMENTS];
} __packed;
} __packed __aligned(4);
struct can_mcan_data {
struct k_mutex inst_mutex;
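The __aligned(4) additions complement the word-wise copies above: __packed alone lowers a struct's alignment requirement to a single byte, so an instance could legally start at an unaligned address, and the 32-bit volatile accesses in can_mcan.c would then fault or get split on the bus. A reduced, hypothetical sketch of the invariant (GCC/Clang attribute spellings behind Zephyr's __packed and __aligned):

#include <stdint.h>

struct hdr {
    uint32_t id  : 29;
    uint32_t rtr : 1;
    uint32_t xtd : 1;
    uint32_t esi : 1;
} __attribute__((packed, aligned(4)));

_Static_assert(sizeof(struct hdr) % sizeof(uint32_t) == 0,
               "copied in whole 32-bit words");
_Static_assert(_Alignof(struct hdr) == sizeof(uint32_t),
               "must start on a 4-byte boundary");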

View File

@@ -253,13 +253,11 @@ static void mcux_flexcan_copy_zfilter_to_mbconfig(const struct zcan_filter *src,
if (src->id_type == CAN_STANDARD_IDENTIFIER) {
dest->format = kFLEXCAN_FrameFormatStandard;
dest->id = FLEXCAN_ID_STD(src->id);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask, src->rtr_mask, 1);
} else {
dest->format = kFLEXCAN_FrameFormatExtend;
dest->id = FLEXCAN_ID_EXT(src->id);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask, src->rtr_mask, 1);
}
if ((src->rtr & src->rtr_mask) == CAN_DATAFRAME) {
@@ -646,6 +644,7 @@ static inline void mcux_flexcan_transfer_rx_idle(const struct device *dev,
static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
{
struct mcux_flexcan_data *data = (struct mcux_flexcan_data *)userData;
const struct mcux_flexcan_config *config = data->dev->config;
switch (status) {
case kStatus_FLEXCAN_UnHandled:
@@ -654,6 +653,7 @@ static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
mcux_flexcan_transfer_error_status(data->dev, (uint64_t)result);
break;
case kStatus_FLEXCAN_TxSwitchToRx:
FLEXCAN_TransferAbortReceive(config->base, &data->handle, (uint64_t)result);
__fallthrough;
case kStatus_FLEXCAN_TxIdle:
/* The result field is a MB value which is limited to 32bit value */

View File

@@ -302,6 +302,12 @@ int edac_ibecc_init(const struct device *dev)
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU11):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU12):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU13):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU14):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU15):
break;
default:
LOG_ERR("PCI Probe failed");

View File

@@ -18,6 +18,9 @@
#define PCI_DEVICE_ID_SKU10 0x452e
#define PCI_DEVICE_ID_SKU11 0x4532
#define PCI_DEVICE_ID_SKU12 0x4518
#define PCI_DEVICE_ID_SKU13 0x451a
#define PCI_DEVICE_ID_SKU14 0x4534
#define PCI_DEVICE_ID_SKU15 0x4536
/* TODO: Move NMI registers to the correct place */


@@ -71,7 +71,7 @@ static inline int z_vrfy_i2c_slave_driver_register(const struct device *dev)
static inline int z_vrfy_i2c_slave_driver_unregister(const struct device *dev)
{
Z_OOPS(Z_SYSCALL_OBJ(dev, K_OBJ_DRIVER_I2C));
return z_vrfy_i2c_slave_driver_unregister(dev);
return z_impl_i2c_slave_driver_unregister(dev);
}
#include <syscalls/i2c_slave_driver_unregister_mrsh.c>


@@ -209,9 +209,9 @@ static int sm351lt_init(const struct device *dev)
}
#if defined(CONFIG_SM351LT_TRIGGER)
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
data->dev = dev;
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
k_sem_init(&data->gpio_sem, 0, K_SEM_MAX_LIMIT);
k_thread_create(&data->thread, data->thread_stack,


@@ -718,11 +718,11 @@ static int transceive_dma(const struct device *dev,
/* Set buffers info */
spi_context_buffers_setup(&data->ctx, tx_bufs, rx_bufs, 1);
LL_SPI_Enable(spi);
/* This is turned off in spi_stm32_complete(). */
spi_stm32_cs_control(dev, true);
LL_SPI_Enable(spi);
while (data->ctx.rx_len > 0 || data->ctx.tx_len > 0) {
size_t dma_len;


@@ -385,6 +385,7 @@ static inline int z_impl_counter_get_value(const struct device *dev,
* interrupts or requested channel).
* @retval -EINVAL if alarm settings are invalid.
* @retval -ETIME if absolute alarm was set too late.
* @retval -EBUSY if alarm is already active.
*/
__syscall int counter_set_channel_alarm(const struct device *dev,
uint8_t chan_id,

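To show where the newly documented -EBUSY return surfaces, here is a minimal caller sketch against the counter API (the callback, wrapper function, and 1000 us period are hypothetical):

static void my_alarm_cb(const struct device *dev, uint8_t chan_id,
			uint32_t ticks, void *user_data)
{
	/* alarm fired */
}

void arm_alarm(const struct device *dev)
{
	struct counter_alarm_cfg cfg = {
		.callback = my_alarm_cb,
		.ticks = counter_us_to_ticks(dev, 1000),
	};

	if (counter_set_channel_alarm(dev, 0, &cfg) == -EBUSY) {
		/* an alarm is already active on channel 0 */
	}
}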

@@ -161,15 +161,21 @@ int z_impl_k_mutex_lock(struct k_mutex *mutex, k_timeout_t timeout)
key = k_spin_lock(&lock);
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
/*
* Check if mutex was unlocked after this thread was unpended.
* If so, skip adjusting owner's priority down.
*/
if (likely(mutex->owner != NULL)) {
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
LOG_DBG("adjusting prio down on mutex %p", mutex);
LOG_DBG("adjusting prio down on mutex %p", mutex);
resched = adjust_owner_prio(mutex, new_prio) || resched;
resched = adjust_owner_prio(mutex, new_prio) || resched;
}
if (resched) {
z_reschedule(&lock, key);


@@ -626,17 +626,13 @@ static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
}
}
static void pend(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
static void pend_locked(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
#ifdef CONFIG_KERNEL_COHERENCE
__ASSERT_NO_MSG(wait_q == NULL || arch_mem_coherent(wait_q));
#endif
LOCKED(&sched_spinlock) {
add_to_waitq_locked(thread, wait_q);
}
add_to_waitq_locked(thread, wait_q);
add_thread_timeout(thread, timeout);
}
@@ -644,7 +640,9 @@ void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
pend(thread, wait_q, timeout);
LOCKED(&sched_spinlock) {
pend_locked(thread, wait_q, timeout);
}
}
static inline void unpend_thread_no_timeout(struct k_thread *thread)
@@ -686,7 +684,12 @@ void z_thread_timeout(struct _timeout *timeout)
int z_pend_curr_irqlock(uint32_t key, _wait_q_t *wait_q, k_timeout_t timeout)
{
pend(_current, wait_q, timeout);
/* This is a legacy API for pre-switch architectures and isn't
* correctly synchronized for multi-cpu use
*/
__ASSERT_NO_MSG(!IS_ENABLED(CONFIG_SMP));
pend_locked(_current, wait_q, timeout);
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
@@ -709,8 +712,20 @@ int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
#endif
pend(_current, wait_q, timeout);
return z_swap(lock, key);
__ASSERT_NO_MSG(sizeof(sched_spinlock) == 0 || lock != &sched_spinlock);
/* We do a "lock swap" prior to calling z_swap(), such that
* the caller's lock gets released as desired. But we ensure
* that we hold the scheduler lock and leave local interrupts
* masked until we reach the context switch. z_swap() itself
* has similar code; the duplication is because it's a legacy
* API that doesn't expect to be called with scheduler lock
* held.
*/
(void) k_spin_lock(&sched_spinlock);
pend_locked(_current, wait_q, timeout);
k_spin_release(lock);
return z_swap(&sched_spinlock, key);
}
struct k_thread *z_unpend1_no_timeout(_wait_q_t *wait_q)


@@ -94,7 +94,7 @@ void free(void *ptr)
(void) sys_mutex_unlock(&z_malloc_heap_mutex);
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#else /* No malloc arena */
void *malloc(size_t size)
{


@@ -133,7 +133,7 @@ static int malloc_prepare(const struct device *unused)
return 0;
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
/* Current offset from HEAP_BASE of unused memory */
LIBC_BSS static size_t heap_sz;


@@ -19,6 +19,7 @@ config POSIX_API
config PTHREAD_IPC
bool "POSIX pthread IPC API"
default y if POSIX_API
depends on POSIX_CLOCK
help
This enables a mostly-standards-compliant implementation of
the pthread mutex, condition variable and barrier IPC


@@ -150,7 +150,7 @@ int pthread_create(pthread_t *newthread, const pthread_attr_t *attr,
for (pthread_num = 0;
pthread_num < CONFIG_MAX_PTHREAD_COUNT; pthread_num++) {
thread = &posix_thread_pool[pthread_num];
if (thread->state == PTHREAD_TERMINATED) {
if (thread->state == PTHREAD_EXITED || thread->state == PTHREAD_TERMINATED) {
thread->state = PTHREAD_JOINABLE;
break;
}


@@ -91,6 +91,7 @@ int sem_post(sem_t *semaphore)
int sem_timedwait(sem_t *semaphore, struct timespec *abstime)
{
int32_t timeout;
struct timespec current;
int64_t current_ms, abstime_ms;
__ASSERT(abstime, "abstime pointer NULL");
@@ -100,8 +101,12 @@ int sem_timedwait(sem_t *semaphore, struct timespec *abstime)
return -1;
}
current_ms = (int64_t)k_uptime_get();
if (clock_gettime(CLOCK_REALTIME, &current) < 0) {
return -1;
}
abstime_ms = (int64_t)_ts_to_ms(abstime);
current_ms = (int64_t)_ts_to_ms(&current);
if (abstime_ms <= current_ms) {
timeout = 0;

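For context, sem_timedwait() takes an absolute CLOCK_REALTIME timestamp, which is why the deadline is now compared against clock_gettime(CLOCK_REALTIME, ...) rather than the kernel uptime. A minimal caller sketch (the semaphore variable is hypothetical):

struct timespec abstime;

clock_gettime(CLOCK_REALTIME, &abstime);
abstime.tv_sec += 2; /* give up roughly two seconds from now */
if (sem_timedwait(&sem, &abstime) == -1) {
	/* timed out or failed; errno holds the reason */
}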

@@ -4,6 +4,8 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <errno.h>
#include <kernel.h>
#include <posix/unistd.h>
@@ -14,8 +16,12 @@
*/
unsigned sleep(unsigned int seconds)
{
k_sleep(K_SECONDS(seconds));
return 0;
int rem;
rem = k_sleep(K_SECONDS(seconds));
__ASSERT_NO_MSG(rem >= 0);
return rem / MSEC_PER_SEC;
}
/**
* @brief Suspend execution for microsecond intervals.
@@ -24,10 +30,19 @@ unsigned sleep(unsigned int seconds)
*/
int usleep(useconds_t useconds)
{
if (useconds < USEC_PER_MSEC) {
k_busy_wait(useconds);
} else {
k_msleep(useconds / USEC_PER_MSEC);
int32_t rem;
if (useconds >= USEC_PER_SEC) {
errno = EINVAL;
return -1;
}
rem = k_usleep(useconds);
__ASSERT_NO_MSG(rem >= 0);
if (rem > 0) {
/* sleep was interrupted by a call to k_wakeup() */
errno = EINTR;
return -1;
}
return 0;

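A short sketch of how a caller might handle the two failure modes introduced above (hypothetical application code, not part of the change):

if (usleep(500 * USEC_PER_MSEC) == -1) {
	if (errno == EINTR) {
		/* sleep was interrupted, e.g. by k_wakeup() */
	} else if (errno == EINVAL) {
		/* requested >= USEC_PER_SEC; use sleep() instead */
	}
}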

@@ -12,4 +12,5 @@ CONFIG_TEST_RANDOM_GENERATOR=y
# Use Portable threads
CONFIG_PTHREAD_IPC=y
CONFIG_POSIX_CLOCK=y
CONFIG_NET_SOCKETS_POSIX_NAMES=y


@@ -19,3 +19,6 @@ CONFIG_STATS_NAMES=n
# Disable Logging for footprint reduction
CONFIG_LOG=n
# Network settings
CONFIG_NET_BUF_USER_DATA_SIZE=8


@@ -16,3 +16,6 @@ CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE=2304
# Enable file system commands
CONFIG_MCUMGR_CMD_FS_MGMT=y
# Network settings
CONFIG_NET_BUF_USER_DATA_SIZE=8


@@ -122,7 +122,7 @@ class GdbStub(abc.ABC):
def get_mem_region(addr):
for r in self.mem_regions:
if r['start'] <= addr <= r['end']:
if r['start'] <= addr < r['end']:
return r
return None


@@ -2769,18 +2769,6 @@ class _BindingLoader(Loader):
# Add legacy '!include foo.yaml' handling
_BindingLoader.add_constructor("!include", _binding_include)
# Use OrderedDict instead of plain dict for YAML mappings, to preserve
# insertion order on Python 3.5 and earlier (plain dicts only preserve
# insertion order on Python 3.6+). This makes testing easier and avoids
# surprises.
#
# Adapted from
# https://stackoverflow.com/questions/5121931/in-python-how-can-you-load-yaml-mappings-as-ordereddicts.
# Hopefully this API stays stable.
_BindingLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
lambda loader, node: OrderedDict(loader.construct_pairs(node)))
#
# "Default" binding for properties which are defined by the spec.
#


@@ -5,7 +5,7 @@ envlist=py3
deps =
setuptools-scm
pytest
types-PyYAML
types-PyYAML==6.0.7
mypy
setenv =
TOXTEMPDIR={envtmpdir}


@@ -26,17 +26,17 @@ def parse_args():
parser.add_argument('-a', '--all', dest='all',
help='Show all bugs squashed', action='store_true')
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token', metavar='FILE')
parser.add_argument('-b', '--begin', dest='begin', help='begin date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type, required=True)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
metavar='END_DATE', type=valid_date_type, required=True)
args = parser.parse_args()
if args.end < args.begin:
if args.end < args.start:
raise ValueError(
'end date {} is before begin date {}'.format(args.end, args.begin))
'end date {} is before start date {}'.format(args.end, args.start))
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
@@ -53,12 +53,12 @@ def parse_args():
class BugBashTally(object):
def __init__(self, gh, begin_date, end_date):
def __init__(self, gh, start_date, end_date):
"""Create a BugBashTally object with the provided Github object,
begin datetime object, and end datetime object"""
start datetime object, and end datetime object"""
self._gh = gh
self._repo = gh.get_repo('zephyrproject-rtos/zephyr')
self._begin_date = begin_date
self._start_date = start_date
self._end_date = end_date
self._issues = []
@@ -122,12 +122,12 @@ class BugBashTally(object):
cutoff = self._end_date + timedelta(1)
issues = self._repo.get_issues(state='closed', labels=[
'bug'], since=self._begin_date)
'bug'], since=self._start_date)
for i in issues:
# the PyGithub API and v3 REST API do not facilitate 'until'
# or 'end date' :-/
if i.closed_at < self._begin_date or i.closed_at > cutoff:
if i.closed_at < self._start_date or i.closed_at > cutoff:
continue
ipr = i.pull_request
@@ -167,7 +167,7 @@ def print_top_ten(top_ten):
def main():
args = parse_args()
bbt = BugBashTally(Github(args.token), args.begin, args.end)
bbt = BugBashTally(Github(args.token), args.start, args.end)
if args.all:
# print one issue per line
issues = bbt.get_issues()

scripts/release/list_backports.py Executable file

@@ -0,0 +1,341 @@
#!/usr/bin/env python3
# Copyright (c) 2022, Meta
#
# SPDX-License-Identifier: Apache-2.0
"""Query issues in a release branch
This script searches for issues referenced via pull-requests in a release
branch in order to simplify tracking changes such as automated backports,
manual backports, security fixes, and stability fixes.
A formatted report is printed to standard output either in JSON or
reStructuredText.
Since an issue is required for all changes to release branches, merged PRs
must have at least one instance of the phrase "Fixes #1234" in the body. This
script will throw an error if a PR has been made without an associated issue.
Usage:
./scripts/release/list_backports.py \
-t ~/.ghtoken \
-b v2.7-branch \
-s 2021-12-15 -e 2022-04-22 \
-P 45074 -P 45868 -P 44918 -P 41234 -P 41174 \
-j | jq . | tee /tmp/backports.json
GITHUB_TOKEN="<secret>" \
./scripts/release/list_backports.py \
-b v3.0-branch \
-p 43381 \
-j | jq . | tee /tmp/backports.json
"""
import argparse
from datetime import datetime, timedelta
import io
import json
import logging
import os
import re
import sys
# Requires PyGithub
from github import Github
# https://gist.github.com/monkut/e60eea811ef085a6540f
def valid_date_type(arg_date_str):
"""custom argparse *date* type for user dates values given from the
command line"""
try:
return datetime.strptime(arg_date_str, "%Y-%m-%d")
except ValueError:
msg = "Given Date ({0}) not valid! Expected format, YYYY-MM-DD!".format(arg_date_str)
raise argparse.ArgumentTypeError(msg)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-b', '--base', dest='base',
help='branch (base) for PRs (e.g. v2.7-branch)', metavar='BRANCH', required=True)
parser.add_argument('-j', '--json', dest='json', action='store_true',
help='print output in JSON rather than RST')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='END_DATE', type=valid_date_type)
parser.add_argument("-o", "--org", default="zephyrproject-rtos",
help="Github organisation")
parser.add_argument('-p', '--include-pull', dest='includes',
help='include pull request (can be specified multiple times)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument('-P', '--exclude-pull', dest='excludes',
help='exclude pull request (can be specified multiple times, helpful for version bumps and release notes)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument("-r", "--repo", default="zephyr",
help="Github repository")
args = parser.parse_args()
if args.includes:
if getattr(args, 'start'):
logging.error(
'the --start argument should not be used with --include-pull')
return None
if getattr(args, 'end'):
logging.error(
'the --end argument should not be used with --include-pull')
return None
else:
if not getattr(args, 'start'):
logging.error(
'if --include-pull PR is not used, --start START_DATE is required')
return None
if not getattr(args, 'end'):
setattr(args, 'end', datetime.now())
if args.end < args.start:
logging.error(
f'end date {args.end} is before start date {args.start}')
return None
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
token = file.read()
token = token.strip()
else:
if 'GITHUB_TOKEN' not in os.environ:
raise ValueError('No credentials specified')
token = os.environ['GITHUB_TOKEN']
setattr(args, 'token', token)
return args
class Backport(object):
def __init__(self, repo, base, pulls):
self._base = base
self._repo = repo
self._issues = []
self._pulls = pulls
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
@staticmethod
def by_date_range(repo, base, start_date, end_date, excludes):
"""Create a Backport object with the provided repo,
base, start datetime object, and end datetime objects, and
list of excluded PRs"""
pulls = []
unfiltered_pulls = repo.get_pulls(
base=base, state='closed')
for p in unfiltered_pulls:
if not p.merged:
# only consider merged backports
continue
if p.closed_at < start_date or p.closed_at >= end_date + timedelta(1):
# only concerned with PRs within time window
continue
if p.number in excludes:
# skip PRs that have been explicitly excluded
continue
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def by_included_prs(repo, base, includes):
"""Create a Backport object with the provided repo,
base, and list of included PRs"""
pulls = []
for i in includes:
try:
p = repo.get_pull(i)
except Exception:
p = None
if not p:
logging.error(f'{i} is not a valid pull request')
return None
if p.base.ref != base:
logging.error(
f'{i} is not a valid pull request for base {base} ({p.base.label})')
return None
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def sanitize_title(title):
# TODO: sanitize titles such that they are suitable for both JSON and ReStructured Text
# could also automatically fix titles like "Automated backport of PR #1234"
return title
def print(self):
for i in self.get_issues():
title = Backport.sanitize_title(i.title)
# * :github:`38972` - logging: Cleaning references to tracing in logging
print(f'* :github:`{i.number}` - {title}')
def print_json(self):
issue_objects = []
for i in self.get_issues():
obj = {}
obj['id'] = i.number
obj['title'] = Backport.sanitize_title(i.title)
obj['url'] = f'https://github.com/{self._repo.organization.login}/{self._repo.name}/issues/{i.number}'
issue_objects.append(obj)
print(json.dumps(issue_objects))
def get_pulls(self):
return self._pulls
def get_issues(self):
"""Return GitHub issues fixed in the provided date window"""
if self._issues:
return self._issues
issue_map = {}
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
for p in self._pulls:
# check for issues in this pr
issues_for_this_pr = {}
with io.StringIO(p.body) as buf:
for line in buf.readlines():
line = line.strip()
match = re.search(r"^Fixes[:]?\s*#([1-9][0-9]*).*", line)
if not match:
match = re.search(
rf"^Fixes[:]?\s*https://github\.com/{self._repo.organization.login}/{self._repo.name}/issues/([1-9][0-9]*).*", line)
if not match:
continue
issue_number = int(match[1])
issue = self._repo.get_issue(issue_number)
if not issue:
if p.number not in self._pulls_with_invalid_issues:
self._pulls_with_invalid_issues[p.number] = [
issue_number]
else:
self._pulls_with_invalid_issues[p.number].append(
issue_number)
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} references invalid issue number {issue_number}')
continue
issues_for_this_pr[issue_number] = issue
# report prs missing issues later
if len(issues_for_this_pr) == 0:
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} does not have an associated issue')
self._pulls_without_an_issue.append(p)
continue
# FIXME: when we have upgraded to python3.9+, use "issue_map | issues_for_this_pr"
issue_map = {**issue_map, **issues_for_this_pr}
issues = list(issue_map.values())
# paginated_list.sort() does not exist
issues = sorted(issues, key=lambda x: x.number)
self._issues = issues
return self._issues
def get_pulls_without_issues(self):
if self._pulls_without_an_issue:
return self._pulls_without_an_issue
self.get_issues()
return self._pulls_without_an_issue
def get_pulls_with_invalid_issues(self):
if self._pulls_with_invalid_issues:
return self._pulls_with_invalid_issues
self.get_issues()
return self._pulls_with_invalid_issues
def main():
args = parse_args()
if not args:
return os.EX_DATAERR
try:
gh = Github(args.token)
except Exception:
logging.error('failed to authenticate with GitHub')
return os.EX_DATAERR
try:
repo = gh.get_repo(args.org + '/' + args.repo)
except Exception:
logging.error('failed to obtain Github repository')
return os.EX_DATAERR
bp = None
if args.includes:
bp = Backport.by_included_prs(repo, args.base, set(args.includes))
else:
bp = Backport.by_date_range(repo, args.base,
args.start, args.end, set(args.excludes))
if not bp:
return os.EX_DATAERR
pulls_with_invalid_issues = bp.get_pulls_with_invalid_issues()
if pulls_with_invalid_issues:
logging.error('The following PRs link to invalid issues:')
for (p, lst) in pulls_with_invalid_issues.items():
logging.error(
f'\nhttps://github.com/{repo.organization.login}/{repo.name}/pull/{p}: {lst}')
return os.EX_DATAERR
pulls_without_issues = bp.get_pulls_without_issues()
if pulls_without_issues:
logging.error(
'Please ensure the body of each PR to a release branch contains "Fixes #1234"')
logging.error('The following PRs are lacking associated issues:')
for p in pulls_without_issues:
logging.error(
f'https://github.com/{repo.organization.login}/{repo.name}/pull/{p.number}')
return os.EX_DATAERR
if args.json:
bp.print_json()
else:
bp.print()
return os.EX_OK
if __name__ == '__main__':
sys.exit(main())


@@ -147,10 +147,10 @@ def parse_args():
def main():
parse_args()
token = os.environ.get('GH_TOKEN', None)
token = os.environ.get('GITHUB_TOKEN', None)
if not token:
sys.exit("""Github token not set in environment,
set the env. variable GH_TOKEN please and retry.""")
set the env. variable GITHUB_TOKEN please and retry.""")
i = Issues(args.org, args.repo, token)
@@ -213,5 +213,6 @@ set the env. variable GH_TOKEN please and retry.""")
f.write("* :github:`{}` - {}\n".format(
item['number'], item['title']))
if __name__ == '__main__':
main()


@@ -16,6 +16,7 @@ import tempfile
from runners.core import ZephyrBinaryRunner, RunnerCaps
try:
import pylink
from pylink.library import Library
MISSING_REQUIREMENTS = False
except ImportError:
@@ -141,16 +142,23 @@ class JLinkBinaryRunner(ZephyrBinaryRunner):
# to load the shared library distributed with the tools, which
# provides an API call for getting the version.
if not hasattr(self, '_jlink_version'):
# pylink 0.14.0/0.14.1 exposes JLink SDK DLL (libjlinkarm) in
# JLINK_SDK_STARTS_WITH, while other versions use JLINK_SDK_NAME
if pylink.__version__ in ('0.14.0', '0.14.1'):
sdk = Library.JLINK_SDK_STARTS_WITH
else:
sdk = Library.JLINK_SDK_NAME
plat = sys.platform
if plat.startswith('win32'):
libname = Library.get_appropriate_windows_sdk_name() + '.dll'
elif plat.startswith('linux'):
libname = Library.JLINK_SDK_NAME + '.so'
libname = sdk + '.so'
elif plat.startswith('darwin'):
libname = Library.JLINK_SDK_NAME + '.dylib'
libname = sdk + '.dylib'
else:
self.logger.warning(f'unknown platform {plat}; assuming UNIX')
libname = Library.JLINK_SDK_NAME + '.so'
libname = sdk + '.so'
lib = Library(dllpath=os.fspath(Path(self.commander).parent /
libname))


@@ -125,7 +125,7 @@ Created: {datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")}
# write other license info, if any
if len(doc.customLicenseIDs) > 0:
for lic in list(doc.customLicenseIDs).sort():
for lic in sorted(list(doc.customLicenseIDs)):
writeOtherLicenseSPDX(f, lic)
# Open SPDX document file for writing, write the document, and calculate


@@ -24,6 +24,224 @@
#define __FPU_PRESENT CONFIG_CPU_HAS_FPU
#define __MPU_PRESENT CONFIG_CPU_HAS_ARM_MPU
#define __CM4_REV 0x0201 /*!< Core Revision r2p1 */
#define __VTOR_PRESENT 1 /*!< Set to 1 if VTOR is present */
#define __NVIC_PRIO_BITS 3 /*!< Number of Bits used for Priority Levels */
#define __Vendor_SysTickConfig 0 /*!< 0 use default SysTick HW */
#define __FPU_DP 0 /*!< Set to 1 if FPU is double precision */
#define __ICACHE_PRESENT 0 /*!< Set to 1 if I-Cache is present */
#define __DCACHE_PRESENT 0 /*!< Set to 1 if D-Cache is present */
#define __DTCM_PRESENT 0 /*!< Set to 1 if DTCM is present */
/** @brief ARM Cortex-M4 NVIC Interrupt Numbers
* CM4 NVIC implements 16 internal interrupt sources. CMSIS macros use
* negative numbers [-15, -1]. Lower numerical value indicates higher
* priority.
* -15 = Reset Vector invoked on POR or any CPU reset.
* -14 = NMI
* -13 = Hard Fault. At POR or CPU reset all faults map to Hard Fault.
* -12 = Memory Management Fault. If enabled, catches Hard Faults caused by
* access violations, no address match, or MPU mismatch.
* -11 = Bus Fault. If enabled, catches pre-fetch and AHB access faults.
* -10 = Usage Fault. If enabled, catches undefined instructions, illegal
* state transitions (Thumb -> ARM mode), unaligned accesses, etc.
* -9 through -6 are not implemented (reserved).
* -5 = System call via SVC instruction.
* -4 = Debug Monitor.
* -3 = not implemented (reserved).
* -2 = PendSV for system service.
* -1 = SysTick NVIC system timer.
* Numbers >= 0 are external peripheral interrupts.
*/
typedef enum {
/* ========== ARM Cortex-M4 Specific Interrupt Numbers ============ */
Reset_IRQn = -15, /*!< POR/CPU Reset Vector */
NonMaskableInt_IRQn = -14, /*!< NMI */
HardFault_IRQn = -13, /*!< Hard Faults */
MemoryManagement_IRQn = -12, /*!< Memory Management faults */
BusFault_IRQn = -11, /*!< Bus Access faults */
UsageFault_IRQn = -10, /*!< Usage/instruction faults */
SVCall_IRQn = -5, /*!< SVC */
DebugMonitor_IRQn = -4, /*!< Debug Monitor */
PendSV_IRQn = -2, /*!< PendSV */
SysTick_IRQn = -1, /*!< SysTick */
/* ============== MEC172x Specific Interrupt Numbers ============ */
GIRQ08_IRQn = 0, /*!< GPIO 0140 - 0176 */
GIRQ09_IRQn = 1, /*!< GPIO 0100 - 0136 */
GIRQ10_IRQn = 2, /*!< GPIO 0040 - 0076 */
GIRQ11_IRQn = 3, /*!< GPIO 0000 - 0036 */
GIRQ12_IRQn = 4, /*!< GPIO 0200 - 0236 */
GIRQ13_IRQn = 5, /*!< SMBus Aggregated */
GIRQ14_IRQn = 6, /*!< DMA Aggregated */
GIRQ15_IRQn = 7,
GIRQ16_IRQn = 8,
GIRQ17_IRQn = 9,
GIRQ18_IRQn = 10,
GIRQ19_IRQn = 11,
GIRQ20_IRQn = 12,
GIRQ21_IRQn = 13,
/* GIRQ22(peripheral clock wake) is not connected to NVIC */
GIRQ23_IRQn = 14,
GIRQ24_IRQn = 15,
GIRQ25_IRQn = 16,
GIRQ26_IRQn = 17, /*!< GPIO 0240 - 0276 */
/* Reserved 18-19 */
/* GIRQs 8 - 12, 24 - 26 no direct connections */
I2C_SMB_0_IRQn = 20, /*!< GIRQ13 b[0] */
I2C_SMB_1_IRQn = 21, /*!< GIRQ13 b[1] */
I2C_SMB_2_IRQn = 22, /*!< GIRQ13 b[2] */
I2C_SMB_3_IRQn = 23, /*!< GIRQ13 b[3] */
DMA0_IRQn = 24, /*!< GIRQ14 b[0] */
DMA1_IRQn = 25, /*!< GIRQ14 b[1] */
DMA2_IRQn = 26, /*!< GIRQ14 b[2] */
DMA3_IRQn = 27, /*!< GIRQ14 b[3] */
DMA4_IRQn = 28, /*!< GIRQ14 b[4] */
DMA5_IRQn = 29, /*!< GIRQ14 b[5] */
DMA6_IRQn = 30, /*!< GIRQ14 b[6] */
DMA7_IRQn = 31, /*!< GIRQ14 b[7] */
DMA8_IRQn = 32, /*!< GIRQ14 b[8] */
DMA9_IRQn = 33, /*!< GIRQ14 b[9] */
DMA10_IRQn = 34, /*!< GIRQ14 b[10] */
DMA11_IRQn = 35, /*!< GIRQ14 b[11] */
DMA12_IRQn = 36, /*!< GIRQ14 b[12] */
DMA13_IRQn = 37, /*!< GIRQ14 b[13] */
DMA14_IRQn = 38, /*!< GIRQ14 b[14] */
DMA15_IRQn = 39, /*!< GIRQ14 b[15] */
UART0_IRQn = 40, /*!< GIRQ15 b[0] */
UART1_IRQn = 41, /*!< GIRQ15 b[1] */
EMI0_IRQn = 42, /*!< GIRQ15 b[2] */
EMI1_IRQn = 43, /*!< GIRQ15 b[3] */
EMI2_IRQn = 44, /*!< GIRQ15 b[4] */
ACPI_EC0_IBF_IRQn = 45, /*!< GIRQ15 b[5] */
ACPI_EC0_OBE_IRQn = 46, /*!< GIRQ15 b[6] */
ACPI_EC1_IBF_IRQn = 47, /*!< GIRQ15 b[7] */
ACPI_EC1_OBE_IRQn = 48, /*!< GIRQ15 b[8] */
ACPI_EC2_IBF_IRQn = 49, /*!< GIRQ15 b[9] */
ACPI_EC2_OBE_IRQn = 50, /*!< GIRQ15 b[10] */
ACPI_EC3_IBF_IRQn = 51, /*!< GIRQ15 b[11] */
ACPI_EC3_OBE_IRQn = 52, /*!< GIRQ15 b[12] */
ACPI_EC4_IBF_IRQn = 53, /*!< GIRQ15 b[13] */
ACPI_EC4_OBE_IRQn = 54, /*!< GIRQ15 b[14] */
ACPI_PM1_CTL_IRQn = 55, /*!< GIRQ15 b[15] */
ACPI_PM1_EN_IRQn = 56, /*!< GIRQ15 b[16] */
ACPI_PM1_STS_IRQn = 57, /*!< GIRQ15 b[17] */
KBC_OBE_IRQn = 58, /*!< GIRQ15 b[18] */
KBC_IBF_IRQn = 59, /*!< GIRQ15 b[19] */
MBOX_IRQn = 60, /*!< GIRQ15 b[20] */
/* reserved 61 */
P80BD_0_IRQn = 62, /*!< GIRQ15 b[22] */
/* reserved 63-64 */
PKE_IRQn = 65, /*!< GIRQ16 b[0] */
/* reserved 66 */
RNG_IRQn = 67, /*!< GIRQ16 b[2] */
AESH_IRQn = 68, /*!< GIRQ16 b[3] */
/* reserved 69 */
PECI_IRQn = 70, /*!< GIRQ17 b[0] */
TACH_0_IRQn = 71, /*!< GIRQ17 b[1] */
TACH_1_IRQn = 72, /*!< GIRQ17 b[2] */
TACH_2_IRQn = 73, /*!< GIRQ17 b[3] */
RPMFAN_0_FAIL_IRQn = 74, /*!< GIRQ17 b[20] */
RPMFAN_0_STALL_IRQn = 75, /*!< GIRQ17 b[21] */
RPMFAN_1_FAIL_IRQn = 76, /*!< GIRQ17 b[22] */
RPMFAN_1_STALL_IRQn = 77, /*!< GIRQ17 b[23] */
ADC_SNGL_IRQn = 78, /*!< GIRQ17 b[8] */
ADC_RPT_IRQn = 79, /*!< GIRQ17 b[9] */
RCID_0_IRQn = 80, /*!< GIRQ17 b[10] */
RCID_1_IRQn = 81, /*!< GIRQ17 b[11] */
RCID_2_IRQn = 82, /*!< GIRQ17 b[12] */
LED_0_IRQn = 83, /*!< GIRQ17 b[13] */
LED_1_IRQn = 84, /*!< GIRQ17 b[14] */
LED_2_IRQn = 85, /*!< GIRQ17 b[15] */
LED_3_IRQn = 86, /*!< GIRQ17 b[16] */
PHOT_IRQn = 87, /*!< GIRQ17 b[17] */
/* reserved 88-89 */
SPIP_0_IRQn = 90, /*!< GIRQ18 b[0] */
QMSPI_0_IRQn = 91, /*!< GIRQ18 b[1] */
GPSPI_0_TXBE_IRQn = 92, /*!< GIRQ18 b[2] */
GPSPI_0_RXBF_IRQn = 93, /*!< GIRQ18 b[3] */
GPSPI_1_TXBE_IRQn = 94, /*!< GIRQ18 b[4] */
GPSPI_1_RXBF_IRQn = 95, /*!< GIRQ18 b[5] */
BCL_0_ERR_IRQn = 96, /*!< GIRQ18 b[7] */
BCL_0_BCLR_IRQn = 97, /*!< GIRQ18 b[6] */
/* reserved 98-99 */
PS2_0_ACT_IRQn = 100, /*!< GIRQ18 b[10] */
/* reserved 101-102 */
ESPI_PC_IRQn = 103, /*!< GIRQ19 b[0] */
ESPI_BM1_IRQn = 104, /*!< GIRQ19 b[1] */
ESPI_BM2_IRQn = 105, /*!< GIRQ19 b[2] */
ESPI_LTR_IRQn = 106, /*!< GIRQ19 b[3] */
ESPI_OOB_UP_IRQn = 107, /*!< GIRQ19 b[4] */
ESPI_OOB_DN_IRQn = 108, /*!< GIRQ19 b[5] */
ESPI_FLASH_IRQn = 109, /*!< GIRQ19 b[6] */
ESPI_RESET_IRQn = 110, /*!< GIRQ19 b[7] */
RTMR_IRQn = 111, /*!< GIRQ23 b[10] */
HTMR_0_IRQn = 112, /*!< GIRQ23 b[16] */
HTMR_1_IRQn = 113, /*!< GIRQ23 b[17] */
WK_IRQn = 114, /*!< GIRQ21 b[3] */
WKSUB_IRQn = 115, /*!< GIRQ21 b[4] */
WKSEC_IRQn = 116, /*!< GIRQ21 b[5] */
WKSUBSEC_IRQn = 117, /*!< GIRQ21 b[6] */
WKSYSPWR_IRQn = 118, /*!< GIRQ21 b[7] */
RTC_IRQn = 119, /*!< GIRQ21 b[8] */
RTC_ALARM_IRQn = 120, /*!< GIRQ21 b[9] */
VCI_OVRD_IN_IRQn = 121, /*!< GIRQ21 b[10] */
VCI_IN0_IRQn = 122, /*!< GIRQ21 b[11] */
VCI_IN1_IRQn = 123, /*!< GIRQ21 b[12] */
VCI_IN2_IRQn = 124, /*!< GIRQ21 b[13] */
VCI_IN3_IRQn = 125, /*!< GIRQ21 b[14] */
VCI_IN4_IRQn = 126, /*!< GIRQ21 b[15] */
/* reserved 127-128 */
PS2_0A_WAKE_IRQn = 129, /*!< GIRQ21 b[18] */
PS2_0B_WAKE_IRQn = 130, /*!< GIRQ21 b[19] */
/* reserved 131-134 */
KEYSCAN_IRQn = 135, /*!< GIRQ21 b[25] */
B16TMR_0_IRQn = 136, /*!< GIRQ23 b[0] */
B16TMR_1_IRQn = 137, /*!< GIRQ23 b[1] */
B16TMR_2_IRQn = 138, /*!< GIRQ23 b[2] */
B16TMR_3_IRQn = 139, /*!< GIRQ23 b[3] */
B32TMR_0_IRQn = 140, /*!< GIRQ23 b[4] */
B32TMR_1_IRQn = 141, /*!< GIRQ23 b[5] */
CTMR_0_IRQn = 142, /*!< GIRQ23 b[6] */
CTMR_1_IRQn = 143, /*!< GIRQ23 b[7] */
CTMR_2_IRQn = 144, /*!< GIRQ23 b[8] */
CTMR_3_IRQn = 145, /*!< GIRQ23 b[9] */
CCT_IRQn = 146, /*!< GIRQ18 b[20] */
CCT_CAP0_IRQn = 147, /*!< GIRQ18 b[21] */
CCT_CAP1_IRQn = 148, /*!< GIRQ18 b[22] */
CCT_CAP2_IRQn = 149, /*!< GIRQ18 b[23] */
CCT_CAP3_IRQn = 150, /*!< GIRQ18 b[24] */
CCT_CAP4_IRQn = 151, /*!< GIRQ18 b[25] */
CCT_CAP5_IRQn = 152, /*!< GIRQ18 b[26] */
CCT_CMP0_IRQn = 153, /*!< GIRQ18 b[27] */
CCT_CMP1_IRQn = 154, /*!< GIRQ18 b[28] */
EEPROMC_IRQn = 155, /*!< GIRQ18 b[13] */
ESPI_VWIRE_IRQn = 156, /*!< GIRQ19 b[8] */
/* reserved 157 */
I2C_SMB_4_IRQn = 158, /*!< GIRQ13 b[4] */
TACH_3_IRQn = 159, /*!< GIRQ17 b[4] */
/* reserved 160-165 */
SAF_DONE_IRQn = 166, /*!< GIRQ19 b[9] */
SAF_ERR_IRQn = 167, /*!< GIRQ19 b[10] */
/* reserved 168 */
SAF_CACHE_IRQn = 169, /*!< GIRQ19 b[11] */
/* reserved 170 */
WDT_0_IRQn = 171, /*!< GIRQ21 b[2] */
GLUE_IRQn = 172, /*!< GIRQ21 b[26] */
OTP_RDY_IRQn = 173, /*!< GIRQ20 b[3] */
CLK32K_MON_IRQn = 174, /*!< GIRQ20 b[9] */
ACPI_EC0_IRQn = 175, /* ACPI EC OBE and IBF combined into one */
ACPI_EC1_IRQn = 176, /* No GIRQ connection. Status in ACPI blocks */
ACPI_EC2_IRQn = 177, /* Code uses level bits and NVIC bits */
ACPI_EC3_IRQn = 178,
ACPI_EC4_IRQn = 179,
ACPI_PM1_IRQn = 180,
MAX_IRQn
} IRQn_Type;
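/* Illustrative sketch (not part of this header): these IRQn_Type values plug
 * straight into the standard CMSIS NVIC API, e.g.:
 *
 *	NVIC_SetPriority(SysTick_IRQn, 1);	(internal, negative IRQn)
 *	NVIC_EnableIRQ(UART0_IRQn);		(external IRQ 40)
 */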
#include <sys/util.h>
/* chip specific register defines */


@@ -205,12 +205,12 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
sr.cpu = cpu_num;
sr.fn = fn;
sr.stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
sr.stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
sr.arg = arg;
sr.vecbase = vb;
sr.alive = &alive_flag;
appcpu_top = Z_THREAD_STACK_BUFFER(stack) + sz;
appcpu_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
start_rec = &sr;


@@ -331,7 +331,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
start_rec.vecbase = vecbase;
start_rec.alive = 0;
z_mp_stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
z_mp_stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
/* Pre-2.x cAVS delivers the IDC to ROM code, so unmask it */
CAVS_INTCTRL[cpu_num].l2.clear = CAVS_L2_IDC;


@@ -571,7 +571,7 @@ void sw_switch(uint8_t dir_curr, uint8_t dir_next, uint8_t phy_curr, uint8_t fla
hal_radio_sw_switch_coded_tx_config_set(ppi_en, ppi_dis,
cc_s2, sw_tifs_toggle);
} else if (!dir_curr) {
} else {
/* Switching to TX after RX on LE 1M/2M PHY */
hal_radio_sw_switch_coded_config_clear(ppi_en,


@@ -4169,6 +4169,7 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
struct lll_conn *lll = &conn->lll;
struct node_rx_pdu *rx;
uint8_t old_tx, old_rx;
uint8_t phy_bitmask;
/* Acquire additional rx node for Data length notification as
* a peripheral.
@@ -4198,6 +4199,15 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
conn->llcp_ack = conn->llcp_req;
}
/* supported PHYs mask */
phy_bitmask = PHY_1M;
if (IS_ENABLED(CONFIG_BT_CTLR_PHY_2M)) {
phy_bitmask |= PHY_2M;
}
if (IS_ENABLED(CONFIG_BT_CTLR_PHY_CODED)) {
phy_bitmask |= PHY_CODED;
}
/* apply new phy */
old_tx = lll->phy_tx;
old_rx = lll->phy_rx;
@@ -4211,7 +4221,10 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
if (conn->llcp.phy_upd_ind.tx) {
lll->phy_tx = conn->llcp.phy_upd_ind.tx;
if (conn->llcp.phy_upd_ind.tx & phy_bitmask) {
lll->phy_tx = conn->llcp.phy_upd_ind.tx &
phy_bitmask;
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
eff_tx_time = calc_eff_time(lll->max_tx_octets,
@@ -4221,7 +4234,10 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
}
if (conn->llcp.phy_upd_ind.rx) {
lll->phy_rx = conn->llcp.phy_upd_ind.rx;
if (conn->llcp.phy_upd_ind.rx & phy_bitmask) {
lll->phy_rx = conn->llcp.phy_upd_ind.rx &
phy_bitmask;
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
eff_rx_time =


@@ -1295,18 +1295,13 @@ static void le_ecred_reconf_req(struct bt_l2cap *l2cap, uint8_t ident,
chan = bt_l2cap_le_lookup_tx_cid(conn, scid);
if (!chan) {
result = BT_L2CAP_RECONF_INVALID_CID;
continue;
goto response;
}
/* If the MTU value is decreased for any of the included
* channels, then the receiver shall disconnect all
* included channels.
*/
if (BT_L2CAP_LE_CHAN(chan)->tx.mtu > mtu) {
BT_ERR("chan %p decreased MTU %u -> %u", chan,
BT_L2CAP_LE_CHAN(chan)->tx.mtu, mtu);
result = BT_L2CAP_RECONF_INVALID_MTU;
bt_l2cap_chan_disconnect(chan);
goto response;
}


@@ -29,6 +29,7 @@
#define START_PAYLOAD_MAX 20
#define CONT_PAYLOAD_MAX 23
#define RX_BUFFER_MAX 65
#define START_LAST_SEG(gpc) (gpc >> 2)
#define CONT_SEG_INDEX(gpc) (gpc >> 2)
@@ -38,7 +39,8 @@
#define LINK_ACK 0x01
#define LINK_CLOSE 0x02
#define XACT_SEG_DATA(_seg) (&link.rx.buf->data[20 + ((_seg - 1) * 23)])
#define XACT_SEG_OFFSET(_seg) (20 + ((_seg - 1) * 23))
#define XACT_SEG_DATA(_seg) (&link.rx.buf->data[XACT_SEG_OFFSET(_seg)])
#define XACT_SEG_RECV(_seg) (link.rx.seg &= ~(1 << (_seg)))
#define XACT_ID_MAX 0x7f
@@ -116,7 +118,7 @@ struct prov_rx {
uint8_t gpc;
};
NET_BUF_SIMPLE_DEFINE_STATIC(rx_buf, 65);
NET_BUF_SIMPLE_DEFINE_STATIC(rx_buf, RX_BUFFER_MAX);
static struct pb_adv link = { .rx = { .buf = &rx_buf } };
@@ -147,7 +149,7 @@ static struct bt_mesh_send_cb buf_sent_cb = {
.end = buf_sent,
};
static uint8_t last_seg(uint8_t len)
static uint8_t last_seg(uint16_t len)
{
if (len <= START_PAYLOAD_MAX) {
return 0;
@@ -383,6 +385,11 @@ static void gen_prov_cont(struct prov_rx *rx, struct net_buf_simple *buf)
return;
}
if (XACT_SEG_OFFSET(seg) + buf->len > RX_BUFFER_MAX) {
BT_WARN("Rx buffer overflow. Malformed generic prov frame?");
return;
}
memcpy(XACT_SEG_DATA(seg), buf->data, buf->len);
XACT_SEG_RECV(seg);
@@ -475,6 +482,13 @@ static void gen_prov_start(struct prov_rx *rx, struct net_buf_simple *buf)
return;
}
if (START_LAST_SEG(rx->gpc) != last_seg(link.rx.buf->len)) {
BT_ERR("Invalid SegN (%u, calculated %u)", START_LAST_SEG(rx->gpc),
last_seg(link.rx.buf->len));
prov_failed(PROV_ERR_NVAL_FMT);
return;
}
prov_clear_tx();
link.rx.last_seg = START_LAST_SEG(rx->gpc);

View File

@@ -222,7 +222,7 @@ int bt_mesh_proxy_msg_send(struct bt_mesh_proxy_role *role, uint8_t type,
net_buf_simple_pull(msg, mtu);
while (msg->len) {
if (msg->len + 1 < mtu) {
if (msg->len + 1 <= mtu) {
net_buf_simple_push_u8(msg, PDU_HDR(SAR_LAST, type));
err = role->cb.send(conn, msg->data, msg->len, end, user_data);
if (err) {


@@ -10,6 +10,11 @@ menuconfig LOG
if LOG
config LOG_CORE_INIT_PRIORITY
int "Log Core Initialization Priority"
range 0 99
default 0
rsource "Kconfig.mode"
rsource "Kconfig.filtering"


@@ -1276,4 +1276,4 @@ static int enable_logger(const struct device *arg)
return 0;
}
SYS_INIT(enable_logger, POST_KERNEL, 0);
SYS_INIT(enable_logger, POST_KERNEL, CONFIG_LOG_CORE_INIT_PRIORITY);


@@ -239,14 +239,16 @@ static bool start_http_client(void)
int protocol = IPPROTO_TCP;
#endif
(void)memset(&hints, 0, sizeof(hints));
if (IS_ENABLED(CONFIG_NET_IPV6)) {
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_STREAM;
} else if (IS_ENABLED(CONFIG_NET_IPV4)) {
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
}
hints.ai_socktype = SOCK_STREAM;
while (resolve_attempts--) {
ret = getaddrinfo(CONFIG_HAWKBIT_SERVER, CONFIG_HAWKBIT_PORT,
&hints, &addr);
@@ -412,6 +414,8 @@ static int hawkbit_find_cancelAction_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "cancelAction", href);
helper = strstr(href, "cancelAction/");
if (!helper) {
/* A badly formatted cancel base is a server error */
@@ -465,6 +469,8 @@ static int hawkbit_find_deployment_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "deploymentBase", href);
helper = strstr(href, "deploymentBase/");
if (!helper) {
/* A badly formatted deployment base is a server error */
@@ -573,17 +579,6 @@ static int hawkbit_parse_deployment(struct hawkbit_dep_res *res,
return 0;
}
static void hawkbit_dump_base(struct hawkbit_ctl_res *r)
{
LOG_DBG("config.polling.sleep=%s", log_strdup(r->config.polling.sleep));
LOG_DBG("_links.deploymentBase.href=%s",
log_strdup(r->_links.deploymentBase.href));
LOG_DBG("_links.configData.href=%s",
log_strdup(r->_links.configData.href));
LOG_DBG("_links.cancelAction.href=%s",
log_strdup(r->_links.cancelAction.href));
}
static void hawkbit_dump_deployment(struct hawkbit_dep_res *d)
{
struct hawkbit_dep_res_chunk *c = &d->deployment.chunks[0];
@@ -1098,9 +1093,9 @@ enum hawkbit_response hawkbit_probe(void)
if (hawkbit_results.base.config.polling.sleep) {
/* Update the sleep time. */
hawkbit_update_sleep(&hawkbit_results.base);
LOG_DBG("config.polling.sleep=%s", hawkbit_results.base.config.polling.sleep);
}
hawkbit_dump_base(&hawkbit_results.base);
if (hawkbit_results.base._links.cancelAction.href) {
ret = hawkbit_find_cancelAction_base(&hawkbit_results.base,
@@ -1127,6 +1122,8 @@ enum hawkbit_response hawkbit_probe(void)
}
if (hawkbit_results.base._links.configData.href) {
LOG_DBG("_links.%s.href=%s", "configData",
hawkbit_results.base._links.configData.href);
memset(hb_context.url_buffer, 0, sizeof(hb_context.url_buffer));
hb_context.dl.http_content_size = 0;
hb_context.url_buffer_size = URL_BUFFER_SIZE;


@@ -425,13 +425,15 @@ config MCUMGR_BUF_USER_DATA_SIZE
int "Size of mcumgr buffer user data"
default 24 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV6
default 8 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV4
default 8 if MCUMGR_SMP_BT
default 4
help
The size, in bytes, of user data to allocate for each mcumgr buffer.
Different mcumgr transports impose different requirements for this
setting. A value of 4 is sufficient for UART, shell, and bluetooth.
For UDP, the userdata must be large enough to hold a IPv4/IPv6 address.
setting. A value of 4 is sufficient for UART and shell; a value of 8
is sufficient for Bluetooth. For UDP, the userdata must be large
enough to hold an IPv4/IPv6 address.
Note that CONFIG_NET_BUF_USER_DATA_SIZE must be at least as big as
MCUMGR_BUF_USER_DATA_SIZE.

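As an illustration of the sizing described above, a plausible prj.conf fragment for a Bluetooth SMP transport (a sketch consistent with these defaults, not taken from the change itself):

CONFIG_MCUMGR_SMP_BT=y
CONFIG_MCUMGR_BUF_USER_DATA_SIZE=8
CONFIG_NET_BUF_USER_DATA_SIZE=8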

@@ -1,5 +1,6 @@
/*
* Copyright Runtime.io 2018. All rights reserved.
* Copyright (c) 2022 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -21,13 +22,28 @@
#include <mgmt/mcumgr/smp.h>
#include <logging/log.h>
LOG_MODULE_REGISTER(smp_bt, CONFIG_MCUMGR_LOG_LEVEL);
struct device;
struct smp_bt_user_data {
struct bt_conn *conn;
uint8_t id;
};
/* Verification of user data being able to fit */
BUILD_ASSERT(sizeof(struct smp_bt_user_data) <= CONFIG_MCUMGR_BUF_USER_DATA_SIZE,
"CONFIG_MCUMGR_BUF_USER_DATA_SIZE not large enough to fit Bluetooth user data");
struct conn_param_data {
struct bt_conn *conn;
uint8_t id;
};
static uint8_t next_id;
static struct zephyr_smp_transport smp_bt_transport;
static struct conn_param_data conn_data[CONFIG_BT_MAX_CONN];
/* SMP service.
* {8D53DC1D-1DB7-4CD3-868B-8A527460AA84}
@@ -41,6 +57,56 @@ static struct bt_uuid_128 smp_bt_svc_uuid = BT_UUID_INIT_128(
static struct bt_uuid_128 smp_bt_chr_uuid = BT_UUID_INIT_128(
BT_UUID_128_ENCODE(0xda2e7828, 0xfbce, 0x4e01, 0xae9e, 0x261174997c48));
/* Helper function that allocates conn_param_data for a conn. */
static struct conn_param_data *conn_param_data_alloc(struct bt_conn *conn)
{
for (size_t i = 0; i < ARRAY_SIZE(conn_data); i++) {
if (conn_data[i].conn == NULL) {
bool valid = false;
conn_data[i].conn = conn;
/* Generate a unique, non-zero ID for this connection */
while (!valid) {
valid = true;
conn_data[i].id = next_id;
++next_id;
if (next_id == 0) {
/* Avoid use of 0 (invalid ID) */
++next_id;
}
for (size_t l = 0; l < ARRAY_SIZE(conn_data); l++) {
if (l != i && conn_data[l].conn != NULL &&
conn_data[l].id == conn_data[i].id) {
valid = false;
break;
}
}
}
return &conn_data[i];
}
}
/* Conn data must exist. */
__ASSERT_NO_MSG(false);
return NULL;
}
/* Helper function that returns conn_param_data associated with a conn. */
static struct conn_param_data *conn_param_data_get(const struct bt_conn *conn)
{
for (size_t i = 0; i < ARRAY_SIZE(conn_data); i++) {
if (conn_data[i].conn == conn) {
return &conn_data[i];
}
}
return NULL;
}
/**
* Write handler for the SMP characteristic; processes an incoming SMP request.
*/
@@ -51,6 +117,12 @@ static ssize_t smp_bt_chr_write(struct bt_conn *conn,
{
struct smp_bt_user_data *ud;
struct net_buf *nb;
struct conn_param_data *cpd = conn_param_data_get(conn);
if (cpd == NULL) {
printk("cpd is null");
return len;
}
nb = mcumgr_buf_alloc();
if (!nb) {
@@ -59,7 +131,8 @@ static ssize_t smp_bt_chr_write(struct bt_conn *conn,
net_buf_add_mem(nb, buf, len);
ud = net_buf_user_data(nb);
ud->conn = bt_conn_ref(conn);
ud->conn = conn;
ud->id = cpd->id;
zephyr_smp_rx_req(&smp_bt_transport, nb);
@@ -113,7 +186,7 @@ static struct bt_conn *smp_bt_conn_from_pkt(const struct net_buf *nb)
return NULL;
}
return bt_conn_ref(ud->conn);
return ud->conn;
}
/**
@@ -131,7 +204,6 @@ static uint16_t smp_bt_get_mtu(const struct net_buf *nb)
}
mtu = bt_gatt_get_mtu(conn);
bt_conn_unref(conn);
/* Account for the three-byte notification header. */
return mtu - 3;
@@ -142,8 +214,8 @@ static void smp_bt_ud_free(void *ud)
struct smp_bt_user_data *user_data = ud;
if (user_data->conn) {
bt_conn_unref(user_data->conn);
user_data->conn = NULL;
user_data->id = 0;
}
}
@@ -153,7 +225,8 @@ static int smp_bt_ud_copy(struct net_buf *dst, const struct net_buf *src)
struct smp_bt_user_data *dst_ud = net_buf_user_data(dst);
if (src_ud->conn) {
dst_ud->conn = bt_conn_ref(src_ud->conn);
dst_ud->conn = src_ud->conn;
dst_ud->id = src_ud->id;
}
return 0;
@@ -165,17 +238,26 @@ static int smp_bt_ud_copy(struct net_buf *dst, const struct net_buf *src)
static int smp_bt_tx_pkt(struct zephyr_smp_transport *zst, struct net_buf *nb)
{
struct bt_conn *conn;
struct smp_bt_user_data *ud = net_buf_user_data(nb);
int rc;
conn = smp_bt_conn_from_pkt(nb);
if (conn == NULL) {
rc = -1;
} else {
rc = smp_bt_tx_rsp(conn, nb->data, nb->len);
bt_conn_unref(conn);
struct conn_param_data *cpd = conn_param_data_get(conn);
if (cpd == NULL) {
rc = -1;
} else if (cpd->id == 0 || cpd->id != ud->id) {
/* Connection has been lost or is a different device */
rc = -1;
} else {
rc = smp_bt_tx_rsp(conn, nb->data, nb->len);
}
}
smp_bt_ud_free(net_buf_user_data(nb));
smp_bt_ud_free(ud);
mcumgr_buf_free(nb);
return rc;
@@ -191,10 +273,41 @@ int smp_bt_unregister(void)
return bt_gatt_service_unregister(&smp_bt_svc);
}
/* BT connected callback. */
static void connected(struct bt_conn *conn, uint8_t err)
{
if (err == 0) {
(void)conn_param_data_alloc(conn);
}
}
/* BT disconnected callback. */
static void disconnected(struct bt_conn *conn, uint8_t reason)
{
struct conn_param_data *cpd = conn_param_data_get(conn);
/* Clear cpd. */
if (cpd != NULL) {
cpd->id = 0;
cpd->conn = NULL;
} else {
LOG_ERR("Null cpd object for connection %p", (void *)conn);
}
}
static int smp_bt_init(const struct device *dev)
{
ARG_UNUSED(dev);
next_id = 1;
/* Register BT callbacks */
static struct bt_conn_cb conn_callbacks = {
.connected = connected,
.disconnected = disconnected,
};
bt_conn_cb_register(&conn_callbacks);
zephyr_smp_transport_init(&smp_bt_transport, smp_bt_tx_pkt,
smp_bt_get_mtu, smp_bt_ud_copy,
smp_bt_ud_free);


@@ -15,7 +15,9 @@ if NET_BUF
config NET_BUF_USER_DATA_SIZE
int "Size of user_data available in every network buffer"
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_ISO
default 24 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV6
default 8 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV4
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_ISO || MCUMGR_SMP_BT
default 4
range 4 65535 if BT || NET_TCP2
range 0 65535


@@ -691,6 +691,7 @@ int net_route_mcast_forward_packet(struct net_pkt *pkt,
if (net_send_data(pkt_cpy) >= 0) {
++ret;
} else {
net_pkt_unref(pkt_cpy);
--err;
}
}


@@ -231,7 +231,9 @@ static const char *tcp_flags(uint8_t flags)
len += snprintk(buf + len, BUF_SIZE - len, "URG,");
}
buf[len - 1] = '\0'; /* delete the last comma */
if (len > 0) {
buf[len - 1] = '\0'; /* delete the last comma */
}
}
#undef BUF_SIZE
return buf;

View File

@@ -463,6 +463,11 @@ static ssize_t spair_write(void *obj, const void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {
@@ -652,6 +657,11 @@ static ssize_t spair_read(void *obj, void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {


@@ -319,7 +319,9 @@ static void dropped(const struct log_backend *const backend, uint32_t cnt)
const struct shell *shell = (const struct shell *)backend->cb->ctx;
const struct shell_log_backend *log_backend = shell->log_backend;
atomic_add(&shell->stats->log_lost_cnt, cnt);
if (IS_ENABLED(CONFIG_SHELL_STATS)) {
atomic_add(&shell->stats->log_lost_cnt, cnt);
}
atomic_add(&log_backend->control_block->dropped_cnt, cnt);
}


@@ -575,6 +575,12 @@ struct gatt_read_rp {
uint8_t data[];
} __packed;
struct gatt_char_value {
uint16_t handle;
uint8_t data_len;
uint8_t data[0];
} __packed;
#define GATT_READ_UUID 0x12
struct gatt_read_uuid_cmd {
uint8_t address_type;
@@ -586,8 +592,8 @@ struct gatt_read_uuid_cmd {
} __packed;
struct gatt_read_uuid_rp {
uint8_t att_response;
uint16_t data_length;
uint8_t data[];
uint8_t values_count;
struct gatt_char_value values[0];
} __packed;
#define GATT_READ_LONG 0x13


@@ -1395,6 +1395,51 @@ static uint8_t read_cb(struct bt_conn *conn, uint8_t err,
return BT_GATT_ITER_CONTINUE;
}
static uint8_t read_uuid_cb(struct bt_conn *conn, uint8_t err,
struct bt_gatt_read_params *params, const void *data,
uint16_t length)
{
struct gatt_read_uuid_rp *rp = (void *)gatt_buf.buf;
struct gatt_char_value value;
/* Respond to the Lower Tester with ATT Error received */
if (err) {
rp->att_response = err;
}
/* read complete */
if (!data) {
tester_send(BTP_SERVICE_ID_GATT, btp_opcode, CONTROLLER_INDEX,
gatt_buf.buf, gatt_buf.len);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
value.handle = params->by_uuid.start_handle;
value.data_len = length;
if (!gatt_buf_add(&value, sizeof(struct gatt_char_value))) {
tester_rsp(BTP_SERVICE_ID_GATT, btp_opcode,
CONTROLLER_INDEX, BTP_STATUS_FAILED);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
if (!gatt_buf_add(data, length)) {
tester_rsp(BTP_SERVICE_ID_GATT, btp_opcode,
CONTROLLER_INDEX, BTP_STATUS_FAILED);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
rp->values_count++;
return BT_GATT_ITER_CONTINUE;
}
static void read_data(uint8_t *data, uint16_t len)
{
const struct gatt_read_cmd *cmd = (void *) data;
@@ -1448,7 +1493,7 @@ static void read_uuid(uint8_t *data, uint16_t len)
goto fail;
}
if (!gatt_buf_reserve(sizeof(struct gatt_read_rp))) {
if (!gatt_buf_reserve(sizeof(struct gatt_read_uuid_rp))) {
goto fail;
}
@@ -1456,7 +1501,7 @@ static void read_uuid(uint8_t *data, uint16_t len)
read_params.handle_count = 0;
read_params.by_uuid.start_handle = sys_le16_to_cpu(cmd->start_handle);
read_params.by_uuid.end_handle = sys_le16_to_cpu(cmd->end_handle);
read_params.func = read_cb;
read_params.func = read_uuid_cb;
btp_opcode = GATT_READ_UUID;


@@ -89,6 +89,28 @@ const struct zcan_frame test_ext_msg_2 = {
.data = {1, 2, 3, 4, 5, 6, 7, 8}
};
/**
* @brief Standard (11-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_std_rtr_msg_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.dlc = 0,
.data = {0}
};
/**
* @brief Extended (29-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_ext_rtr_msg_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.dlc = 0,
.data = {0}
};
const struct zcan_filter test_std_filter_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_DATAFRAME,
@@ -154,6 +176,30 @@ const struct zcan_filter test_ext_masked_filter_2 = {
.id_mask = TEST_CAN_EXT_MASK
};
/**
* @brief Standard (11-bit) CAN ID RTR filter 1. This filter matches
* ``test_std_rtr_msg_1``.
*/
const struct zcan_filter test_std_rtr_filter_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.rtr_mask = 1,
.id_mask = CAN_STD_ID_MASK
};
/**
* @brief Extended (29-bit) CAN ID RTR filter 1. This filter matches
* ``test_ext_rtr_msg_1``.
*/
const struct zcan_filter test_ext_rtr_filter_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.rtr_mask = 1,
.id_mask = CAN_EXT_ID_MASK
};
const struct zcan_filter test_std_some_filter = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_DATAFRAME,
@@ -517,6 +563,55 @@ static void send_receive(const struct zcan_filter *filter1,
can_detach(can_dev, filter_id_1);
}
/**
* @brief Perform a send/receive test with a set of CAN ID filters and CAN frames, covering both
* RTR and data frames.
*
* @param data_filter CAN data filter
* @param rtr_filter CAN RTR filter
* @param data_frame CAN data frame
* @param rtr_frame CAN RTR frame
*/
void send_receive_rtr(const struct zcan_filter *data_filter,
const struct zcan_filter *rtr_filter,
const struct zcan_frame *data_frame,
const struct zcan_frame *rtr_frame)
{
struct zcan_frame frame;
int filter_id;
int err;
filter_id = attach_msgq(can_dev, rtr_filter);
/* Verify that RTR filter does not match data frame */
send_test_msg(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "Data frame passed RTR filter");
/* Verify that RTR filter matches RTR frame */
send_test_msg(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
check_msg(&frame, rtr_frame, 0);
can_detach(can_dev, filter_id);
filter_id = attach_msgq(can_dev, data_filter);
/* Verify that data filter does not match RTR frame */
send_test_msg(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "RTR frame passed data filter");
/* Verify that data filter matches data frame */
send_test_msg(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
check_msg(&frame, data_frame, 0);
can_detach(can_dev, filter_id);
}
/*
* Set driver to loopback mode
* The driver stays in loopback mode after that.
@@ -691,6 +786,24 @@ void test_send_receive_buffer(void)
can_detach(can_dev, filter_id);
}
/**
* @brief Test send/receive with standard (11-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_std_id_rtr(void)
{
send_receive_rtr(&test_std_filter_1, &test_std_rtr_filter_1,
&test_std_msg_1, &test_std_rtr_msg_1);
}
/**
* @brief Test send/receive with extended (29-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_ext_id_rtr(void)
{
send_receive_rtr(&test_ext_filter_1, &test_ext_rtr_filter_1,
&test_ext_msg_1, &test_ext_rtr_msg_1);
}
/*
* Attach to a filter that should not pass the message and send a message
* with a different id.
@@ -746,6 +859,8 @@ void test_main(void)
ztest_unit_test(test_send_receive_ext),
ztest_unit_test(test_send_receive_std_masked),
ztest_unit_test(test_send_receive_ext_masked),
ztest_user_unit_test(test_send_receive_std_id_rtr),
ztest_user_unit_test(test_send_receive_ext_id_rtr),
ztest_unit_test(test_send_receive_buffer),
ztest_unit_test(test_send_receive_wrong_id));
ztest_run_test_suite(can_driver);


@@ -0,0 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
# Copyright (c) 2022 Meta
config TEST_MUTEX_API_THREAD_CREATE_TICKS
int "Wait time (in ticks) after thread creation"
default 42
source "Kconfig"


@@ -386,6 +386,68 @@ void test_mutex_priority_inheritance(void)
k_msleep(TIMEOUT+1000);
}
static void tThread_mutex_lock_should_fail(void *p1, void *p2, void *p3)
{
k_timeout_t timeout;
struct k_mutex *mutex = (struct k_mutex *)p1;
timeout.ticks = 0;
timeout.ticks |= (uint64_t)(uintptr_t)p2 << 32;
timeout.ticks |= (uint64_t)(uintptr_t)p3 << 0;
zassert_equal(-EAGAIN, k_mutex_lock(mutex, timeout), NULL);
}
/**
* @brief Test fix for subtle race during priority inversion
*
* - A low priority thread (Tlow) locks mutex A.
* - A high priority thread (Thigh) blocks on mutex A, boosting the priority
* of Tlow.
* - Thigh times out waiting for mutex A.
* - Before Thigh has a chance to execute, Tlow unlocks mutex A (which now
* has no owner) and drops its own priority.
* - Thigh now gets a chance to execute and finds that it timed out, and
* then enters the block of code to lower the priority of the thread that
* owns mutex A (now nobody).
* - Thigh tries to dereference the owner of mutex A (which is nobody,
* and thus it is NULL). This leads to an exception.
*
* @ingroup kernel_mutex_tests
*
* @see k_mutex_lock()
*/
static void test_mutex_timeout_race_during_priority_inversion(void)
{
k_timeout_t timeout;
uintptr_t timeout_upper;
uintptr_t timeout_lower;
int helper_prio = k_thread_priority_get(k_current_get()) + 1;
k_mutex_init(&mutex);
/* align to tick boundary */
k_sleep(K_TICKS(1));
/* allow non-kobject data to be shared (via registers) */
timeout = K_TIMEOUT_ABS_TICKS(k_uptime_ticks()
+ CONFIG_TEST_MUTEX_API_THREAD_CREATE_TICKS);
timeout_upper = timeout.ticks >> 32;
timeout_lower = timeout.ticks & BIT64_MASK(32);
k_mutex_lock(&mutex, K_FOREVER);
k_thread_create(&tdata, tstack, K_THREAD_STACK_SIZEOF(tstack),
tThread_mutex_lock_should_fail, &mutex, (void *)timeout_upper,
(void *)timeout_lower, helper_prio,
K_USER | K_INHERIT_PERMS, K_NO_WAIT);
k_thread_priority_set(k_current_get(), K_HIGHEST_THREAD_PRIO);
k_sleep(timeout);
k_mutex_unlock(&mutex);
}
/*test case main entry*/
void test_main(void)
{
@@ -400,7 +462,8 @@ void test_main(void)
ztest_user_unit_test(test_mutex_reent_lock_timeout_fail),
ztest_1cpu_user_unit_test(test_mutex_reent_lock_timeout_pass),
ztest_user_unit_test(test_mutex_recursive),
ztest_user_unit_test(test_mutex_priority_inheritance)
ztest_user_unit_test(test_mutex_priority_inheritance),
ztest_1cpu_unit_test(test_mutex_timeout_race_during_priority_inversion)
);
ztest_run_test_suite(mutex_api);
}


@@ -23,11 +23,9 @@ void test_posix_clock(void)
NULL);
zassert_equal(errno, EINVAL, NULL);
clock_gettime(CLOCK_MONOTONIC, &ts);
/* 2 Sec Delay */
sleep(SLEEP_SECONDS);
usleep(SLEEP_SECONDS * USEC_PER_SEC);
clock_gettime(CLOCK_MONOTONIC, &te);
zassert_ok(clock_gettime(CLOCK_MONOTONIC, &ts), NULL);
zassert_ok(k_sleep(K_SECONDS(SLEEP_SECONDS)), NULL);
zassert_ok(clock_gettime(CLOCK_MONOTONIC, &te), NULL);
if (te.tv_nsec >= ts.tv_nsec) {
secs_elapsed = te.tv_sec - ts.tv_sec;
@@ -38,7 +36,7 @@ void test_posix_clock(void)
}
/*TESTPOINT: Check if POSIX clock API test passes*/
zassert_equal(secs_elapsed, (2 * SLEEP_SECONDS),
zassert_equal(secs_elapsed, SLEEP_SECONDS,
"POSIX clock API test failed");
printk("POSIX clock APIs test done\n");


@@ -21,6 +21,7 @@ extern void test_posix_pthread_create_negative(void);
extern void test_posix_pthread_termination(void);
extern void test_posix_multiple_threads_single_key(void);
extern void test_posix_single_thread_multiple_keys(void);
extern void test_pthread_descriptor_leak(void);
extern void test_nanosleep_NULL_NULL(void);
extern void test_nanosleep_NULL_notNULL(void);
extern void test_nanosleep_notNULL_NULL(void);
@@ -36,6 +37,8 @@ extern void test_nanosleep_0_500000000(void);
extern void test_nanosleep_1_0(void);
extern void test_nanosleep_1_1(void);
extern void test_nanosleep_1_1001(void);
extern void test_sleep(void);
extern void test_usleep(void);
void test_main(void)
{
@@ -45,6 +48,7 @@ void test_main(void)
	ztest_unit_test(test_posix_pthread_termination),
	ztest_unit_test(test_posix_multiple_threads_single_key),
	ztest_unit_test(test_posix_single_thread_multiple_keys),
	ztest_unit_test(test_pthread_descriptor_leak),
	ztest_unit_test(test_posix_clock),
	ztest_unit_test(test_posix_semaphore),
	ztest_unit_test(test_posix_normal_mutex),
@@ -67,7 +71,9 @@ void test_main(void)
	ztest_unit_test(test_nanosleep_1_0),
	ztest_unit_test(test_nanosleep_1_1),
	ztest_unit_test(test_nanosleep_1_1001),
-	ztest_unit_test(test_posix_pthread_create_negative)
+	ztest_unit_test(test_posix_pthread_create_negative),
+	ztest_unit_test(test_sleep),
+	ztest_unit_test(test_usleep)
	);
	ztest_run_test_suite(posix_apis);
}

View File

@@ -560,3 +560,20 @@ void test_posix_pthread_create_negative(void)
	ret = pthread_create(&pthread1, &attr1, create_thread1, (void *)1);
	zassert_equal(ret, EINVAL, "create thread with 0 size");
}

void test_pthread_descriptor_leak(void)
{
	void *unused;
	pthread_t pthread1;
	pthread_attr_t attr;

	zassert_ok(pthread_attr_init(&attr), NULL);
	zassert_ok(pthread_attr_setstack(&attr, &stack_e[0][0], STACKS), NULL);

	/* If we are leaking descriptors, then this loop will never complete */
	for (size_t i = 0; i < CONFIG_MAX_PTHREAD_COUNT * 2; ++i) {
		zassert_ok(pthread_create(&pthread1, &attr, create_thread1, NULL),
			   "unable to create thread %zu", i);
		zassert_ok(pthread_join(pthread1, &unused), "unable to join thread %zu", i);
	}
}

View File

@@ -0,0 +1,87 @@
/*
 * Copyright (c) 2022, Meta
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <posix/unistd.h>
#include <ztest.h>

struct waker_work {
	k_tid_t tid;
	struct k_work_delayable dwork;
};
static struct waker_work ww;

static void waker_func(struct k_work *work)
{
	struct waker_work *ww;
	struct k_work_delayable *dwork = k_work_delayable_from_work(work);

	ww = CONTAINER_OF(dwork, struct waker_work, dwork);
	k_wakeup(ww->tid);
}
K_WORK_DELAYABLE_DEFINE(waker, waker_func);

void test_sleep(void)
{
	uint32_t then;
	uint32_t now;
	/* call sleep(10), wakeup after 1s, expect >= 8s left */
	const uint32_t sleep_min_s = 1;
	const uint32_t sleep_max_s = 10;
	const uint32_t sleep_rem_s = 8;

	/* sleeping for 0s should return 0 */
	zassert_ok(sleep(0), NULL);

	/* test that sleeping for 1s sleeps for at least 1s */
	then = k_uptime_get();
	zassert_equal(0, sleep(1), NULL);
	now = k_uptime_get();
	zassert_true((now - then) >= 1 * MSEC_PER_SEC, NULL);

	/* test that sleeping for 2s sleeps for at least 2s */
	then = k_uptime_get();
	zassert_equal(0, sleep(2), NULL);
	now = k_uptime_get();
	zassert_true((now - then) >= 2 * MSEC_PER_SEC, NULL);

	/* test that sleep reports the remainder */
	ww.tid = k_current_get();
	k_work_init_delayable(&ww.dwork, waker_func);
	zassert_equal(1, k_work_schedule(&ww.dwork, K_SECONDS(sleep_min_s)), NULL);
	zassert_true(sleep(sleep_max_s) >= sleep_rem_s, NULL);
}

void test_usleep(void)
{
	uint32_t then;
	uint32_t now;

	/* test usleep works for small values */
	/* Note: k_usleep(), an implementation detail, is a cancellation point */
	zassert_equal(0, usleep(0), NULL);
	zassert_equal(0, usleep(1), NULL);

	/* sleep for the spec limit */
	then = k_uptime_get();
	zassert_equal(0, usleep(USEC_PER_SEC - 1), NULL);
	now = k_uptime_get();
	zassert_true(((now - then) * USEC_PER_MSEC) / (USEC_PER_SEC - 1) >= 1, NULL);

	/* sleep for exactly the limit threshold */
	zassert_equal(-1, usleep(USEC_PER_SEC), NULL);
	zassert_equal(errno, EINVAL, NULL);

	/* sleep for over the spec limit */
	zassert_equal(-1, usleep((useconds_t)ULONG_MAX), NULL);
	zassert_equal(errno, EINVAL, NULL);

	/* test that sleep reports errno = EINTR when woken up */
	ww.tid = k_current_get();
	k_work_init_delayable(&ww.dwork, waker_func);
	zassert_equal(1, k_work_schedule(&ww.dwork, K_USEC(USEC_PER_SEC / 2)), NULL);
	zassert_equal(-1, usleep(USEC_PER_SEC - 1), NULL);
	zassert_equal(EINTR, errno, NULL);
}
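Taken together, these tests pin down the usleep()/sleep() contract from the commit messages: usleep() must fail with EINVAL for arguments of a full second or more, both calls must actually suspend the caller, and an early wakeup surfaces as EINTR from usleep() and as unslept whole seconds from sleep(). A minimal sketch of that contract on top of Zephyr's kernel primitives (hypothetical sleep_sketch()/usleep_sketch() names; the real implementation lives in lib/posix and may differ in detail):

#include <errno.h>
#include <zephyr.h>
#include <posix/unistd.h>

/* Sketch only: POSIX sleep() built on k_sleep(). */
unsigned int sleep_sketch(unsigned int seconds)
{
	int32_t rem_ms = k_sleep(K_SECONDS(seconds));

	/* k_sleep() returns the unslept time in ms when woken early;
	 * POSIX wants whole unslept seconds, so round up.
	 */
	return (rem_ms + MSEC_PER_SEC - 1) / MSEC_PER_SEC;
}

/* Sketch only: POSIX usleep() built on k_usleep(). */
int usleep_sketch(useconds_t usec)
{
	if (usec >= USEC_PER_SEC) {
		errno = EINVAL;  /* spec requires usec < 1000000 */
		return -1;
	}

	if (k_usleep(usec) != 0) {
		errno = EINTR;   /* woken before the interval elapsed */
		return -1;
	}

	return 0;
}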

View File

@@ -57,6 +57,24 @@ static int test_init(const struct device *dev)
SYS_INIT(test_init, APPLICATION, CONFIG_APPLICATION_INIT_PRIORITY);
/* Check that global static object constructors are called. */
foo_class static_foo(12345678);
static void test_global_static_ctor(void)
{
	zassert_equal(static_foo.get_foo(), 12345678, NULL);
}

/*
 * Check that dynamic memory allocation (usually, the C library heap) is
 * functional when the global static object constructors are called.
 */
foo_class *static_init_dynamic_foo = new foo_class(87654321);

static void test_global_static_ctor_dynmem(void)
{
	zassert_equal(static_init_dynamic_foo->get_foo(), 87654321, NULL);
}

static void test_new_delete(void)
{
@@ -68,6 +86,8 @@ static void test_new_delete(void)
void test_main(void)
{
	ztest_test_suite(cpp_tests,
			 ztest_unit_test(test_global_static_ctor),
			 ztest_unit_test(test_global_static_ctor_dynmem),
			 ztest_unit_test(test_new_delete)
			 );

View File

@@ -1,5 +1,22 @@
-tests:
-  cpp.main:
-    tags: cpp
-    integration_platforms:
-      - mps2_an385
+common:
+  tags: cpp
+  integration_platforms:
+    - mps2_an385
+    - qemu_cortex_a53
+tests:
+  cpp.main.minimal:
+    extra_configs:
+      - CONFIG_MINIMAL_LIBC=y
+  cpp.main.newlib:
+    filter: TOOLCHAIN_HAS_NEWLIB == 1
+    min_ram: 32
+    extra_configs:
+      - CONFIG_NEWLIB_LIBC=y
+      - CONFIG_NEWLIB_LIBC_NANO=n
+  cpp.main.newlib_nano:
+    filter: TOOLCHAIN_HAS_NEWLIB == 1 and CONFIG_HAS_NEWLIB_LIBC_NANO
+    min_ram: 24
+    extra_configs:
+      - CONFIG_NEWLIB_LIBC=y
+      - CONFIG_NEWLIB_LIBC_NANO=y

View File

@@ -2,6 +2,7 @@ tests:
  coredump.logging_backend:
    tags: ignore_faults ignore_qemu_crash
    filter: CONFIG_ARCH_SUPPORTS_COREDUMP
    platform_exclude: acrn_ehl_crb
    harness: console
    harness_config:
      type: multi_line

View File

@@ -2,6 +2,7 @@ tests:
  coredump.backends.logging:
    tags: ignore_faults ignore_qemu_crash
    filter: CONFIG_ARCH_SUPPORTS_COREDUMP
    platform_exclude: acrn_ehl_crb
    harness: console
    harness_config:
      type: multi_line

View File

@@ -161,7 +161,7 @@ manifest:
      revision: 70bfbd21cdf5f6d1402bc8d0031e197222ed2ec0
      path: bootloader/mcuboot
    - name: mcumgr
-     revision: 9ffebd5e92d9d069667b9af2a3a028f4a033cfd3
+     revision: 480cb48e42703e49595a697bb2410a1fcd105f5e
      path: modules/lib/mcumgr
    - name: mipi-sys-t
      path: modules/debug/mipi-sys-t
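With the manifest now pinned to the newer mcumgr commit, a subsequent `west update` on this tag fetches and checks out revision 480cb48e under modules/lib/mcumgr; no other project revisions change in this hunk.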