Compare commits


119 Commits

Author SHA1 Message Date
Yudong Zhang
0c70f3bad7 shell: mqtt: fix call to bin2hex
Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
Yudong Zhang
5875676425 mgmt: hawkbit: fix call to bin2hex
Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
Yudong Zhang
5bae87ae69 mgmt: updatehub: fix call to bin2hex
We noticed that in the master branch, updatehub fails to start.
That is because of the behaviour change in bin2hex caused by
commit f2affbd ("os: lib: bin2hex: fix memory overwrite").

Fixes: f2affbd ("os: lib: bin2hex: fix memory overwrite")
Signed-off-by: Yudong Zhang <mtwget@gmail.com>
2023-03-27 14:20:50 -07:00
Stephanos Ioannidis
eb5bbf9863 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:13 +09:00
Stephanos Ioannidis
20b20501e0 ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 02:59:28 +09:00
Gerard Marull-Paretas
74929f47b0 ci: doc-build: fix PDF build
The new LaTeX Docker image (Debian based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the
system environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:22:01 +09:00
Théo Battrel
3c4bdfa595 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`, which could
lead to an invalid value being used, resulting in unknown behavior.

To fix that issue, the `rp->le_max_num` value is checked the same way
`bt_dev.le.acl_mtu` was already checked. The same thing has been done
for `rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 09:20:05 -08:00
Chris Friedt
18869d0f5d net: sockets: socketpair: do not allow blocking IO in ISR context
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.

It is not acceptable to block in ISR context, so robustness here
means returning -1 to indicate an error and setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).

Fixes #25417

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit d832b04e96)
2023-01-10 09:51:56 -08:00
Martí Bolívar
a7d946331f python-devicetree: CI hotfix
Pin the types-PyYAML version to 6.0.7. Version 6.0.8 is causing CI
errors for other pull requests, so we need this in to get other PRs
moving.

Fixes: #46286

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
207bd9a52e ci: Clone cached Zephyr repository with shared objects
In the new ephemeral Zephyr runners, the cached repository files are
located in a foreign file system, and the Git clone operation cannot
create hard links to the cached repository objects, which forces the
Git clone operation to copy the objects from the cache file system to
the runner container file system.

This commit updates the CI workflows to instead perform a "shared
clone" of the cached repository, which allows the cloned repository to
utilise the object database of the cached repository.

While a "shared clone" can often be dangerous because the source
repository objects can be deleted, in this case the source repository
(i.e. the cached repository) is mounted as read-only and immutable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
7fdafbefc4 ci: Limit workflow scope branches
This commit updates the CI workflows that trigger on both push and pull
request events to limit their event trigger scope to the main and the
release branches.

This prevents these workflows from simultaneously triggering on both
push and pull request events when a pull request is created from an
upstream branch to another upstream branch (e.g. pull requests from
the backport branches to the release branches).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
50800033f7 ci: codecov: Clone cached Zephyr repository
This commit updates the codecov workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
4f83f58ae3 ci: codecov: Use zephyr-runner
This commit updates the codecov workflow to use the new Kubernetes-
based zephyr-runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
7b03c3783c ci: clang: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
79a4063ceb ci: clang: Clone cached Zephyr repository
This commit updates the clang workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ca020d65f8 ci: clang: Use zephyr-runner
This commit updates the clang workflow to use the new Kubernetes-based
zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
dc12754b51 ci: twister: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
869566c139 ci: twister: Clone cached Zephyr repository
This commit updates the twister workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
5923021c60 ci: twister: Use zephyr-runner
This commit updates the twister workflow to use the new Kubernetes-
based zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
e164b03f70 ci: Use actions/cache@v3
This commit updates the CI workflows to use the latest "cache" action
v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
74f9846f46 ci: Use actions/setup-python@v4
This commit updates the CI workflows to use the latest "setup-python"
action v4, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
74039285dc ci: Use actions/upload-artifact@v3
This commit updates the CI workflows to use the latest
"upload-artifact" action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
c6af6d8595 ci: Use actions/checkout@v3
This commit updates the CI workflows to use the latest "checkout"
action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1065bccac3 ci: twister: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
34e66d8f0e ci: release: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
172b858e19 ci: codecov: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
0a8f0510e6 ci: clang: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ed880e19d9 ci: footprint-tracking: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
41ce47af4b ci: footprint: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2dd9b5b5e7 ci: codecov: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
59203c1137 ci: clang: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
a0b55ac113 ci: twister: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
67209f59f1 ci: bluetooth-tests: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Anas Nashif
244b352d9f ci: update cancel-workflow-action action to 0.11.0
Update the action to use the latest release, which resolves a warning
about Node 12 being deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
284868c9d1 ci: compliance: Use upload-artifact action v3
This commit updates the "Create a release" workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
aa6fc02be8 ci: issue_count: Use upload-artifact action v3
This commit updates the issue count tracker workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
c02e8142d0 ci: doc-build: Use upload-artifact action v3
This commit updates the documentation build workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
34854a8e78 ci: compliance: Use upload-artifact action v3
This commit updates the compliance check workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
aae46a1840 ci: backport: Use Ubuntu 20.04 runner image
This commit updates the backport workflow to use the ubuntu-20.04
runner image because the ubuntu-18.04 image is deprecated and will
become unsupported by December 1, 2022.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
fbc2ba9a8a ci: west_cmds: Use specific version of runner image
This commit updates the "West Command Tests" workflow to use a specific
runner image version (ubuntu-20.04, macos-11, windows-2022) instead of
the latest version in order to prevent any potential breakages due to
the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
86eaae79f4 ci: devicetree_checks: Use specific version of runner image
This commit updates the "Devicetree script tests" workflow to use a
specific runner image version (ubuntu-20.04, macos-11, windows-2022)
instead of the latest version in order to prevent any potential
breakages due to the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2e4423814d ci: twister: Use Ubuntu 20.04 runner image
This commit updates the "Run tests with twister" workflow to use a
specific runner image version, ubuntu-20.04, instead of the latest
version in order to prevent any potential breakages due to the 'latest'
version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
230337f51b ci: twister_tests: Use Ubuntu 20.04 runner image
This commit updates the Twister Testsuite workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
648cc40c57 ci: stale_issue: Use Ubuntu 20.04 runner image
This commit updates the stale issue workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
0aa9a177ca ci: release: Use Ubuntu 20.04 runner image
This commit updates the "Create a Release" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
36b7fe0e5c ci: manifest: Use Ubuntu 20.04 runner image
This commit updates the manifest check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
a6f817562e ci: license_check: Use Ubuntu 20.04 runner image
This commit updates the license check workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
9415cd4de5 ci: issue_count: Use Ubuntu 20.04 runner image
This commit updates the issue count tracker workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
6d8cc95715 ci: footprint: Use Ubuntu 20.04 runner image
This commit updates the footprint delta workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
91af641b6c ci: footprint-tracking: Use Ubuntu 20.04 runner image
This commit updates the footprint tracking workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
ae679d0a60 ci: errno: Use Ubuntu 20.04 runner image
This commit updates the error number check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1806442523 ci: doc: Use Ubuntu 20.04 runner image
This commit updates the documentation build and publish workflows to
use a specific runner image version, ubuntu-20.04, instead of the
latest version in order to prevent any potential breakages due to the
'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
46b3d0f4d9 ci: do_not_merge: Use Ubuntu 20.04 runner image
This commit updates the "Do Not Merge" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
2445934b93 ci: daily_test_version: Use Ubuntu 20.04 runner image
This commit updates the daily test version workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
80fa032333 ci: compliance: Use Ubuntu 20.04 runner image
This commit updates the compliance check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
1a4105b6b5 ci: coding_guidelines: Use Ubuntu 20.04 runner image
This commit updates the coding guidelines workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
d5b2e2fed0 ci: clang: Use Ubuntu 20.04 runner image
This commit updates the Clang workflow to use a specific runner image
version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
6c38e20028 ci: backport_issue_check: Use Ubuntu 20.04 runner image
This commit updates the backport issue check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
87de8ef5a5 ci: bluetooth-tests: Use Ubuntu 20.04 runner image
This commit updates the Bluetooth tests workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Stephanos Ioannidis
41310ecbcd ci: issue_count: Fix stale reference to master branch
This commit fixes a stale reference to the 'master' branch when
downloading the issue report configuration file.

Note that the 'master' branch is no longer the default
branch -- 'main' is.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:39 +09:00
Andries Kruithof
133dc2da84 Bluetooth: controller: llcp: initialise DLE parameters
The initialisation of DLE parameters for the peripheral
was done before the initialisation of the PHY settings.
Since the DLE parameters depend on PHY settings, this
can result in incorrect parameters for tx/rx time and
octets.
One scenario is where a previous connection set the PHY to
2M or CODED; when a new connection is then established,
it uses the same memory locations for connection settings as the
previous connection, and the (uninitialised) PHY settings will be
set to 2M or CODED, and thus the DLE parameters will be wrong.
This PR moves the initialisation of DLE parameters after
that of PHY settings.

EBQ tests affected include LL/CON/PER/BV-77-C.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 6d7a04a0ba)
2022-11-02 07:54:57 -07:00
Chen Peng1
2f793c8d8d tests: interrupt: remove unused macro TRIGGER_IRQ_INT.
On X86 platforms, the interrupt trigger method has been changed to
use APIC IPI; we no longer use the INT instruction to trigger
interrupts, so remove this unused macro.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Chen Peng1
30cdfad7b2 test: interrupt: change the test interrupt line to a bigger one.
Change the test interrupt line number to a higher one, because low
numbers are usually used by other devices.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Chen Peng1
db28910f7f tests: interrupt: add some nop operations in trigger_irq function.
On X86 platforms, the interrupt trigger method has been changed from
using the INT instruction to using APIC IPI. We need to make sure the
IPI interrupt is handled before doing our check, so add some nop
operations.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2022-11-01 18:52:34 -07:00
Erik Brockhoff
71ccc7ee7e Bluetooth: controller: fixing issue re. erroneous DLE changed events
Only apply a change to the effective DLE times if the current max
times are too small to accommodate, similar to the legacy
implementation. Update the unit tests to the new DLE ntf behavior.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 522e0b5ade)
2022-10-07 13:39:26 +02:00
Andries Kruithof
62d4b476e2 Bluetooth: controller: llcp: fix DLE related EBQ tests
Calculation of the DLE related parameters (rx/tx octets and time)
depends on the actual PHY in use. For this reason the PHY settings
must be initialised before doing the DLE parameter calculations.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 19bf928ffb)
2022-10-07 13:39:18 +02:00
Andries Kruithof
ef2fd9306a Bluetooth: controller: llcp: fix typo
Fixed a typo: the correct term is 'link layer', not
'linked layer'

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 9eaf102e1b)
2022-10-07 13:39:08 +02:00
Andries Kruithof
88d6d34394 Bluetooth: controller: llcp: avoid regression errors
The change in this commit is required to avoid regression errors
on EBQ tests for the PHY update procedure.
When in the peripheral role, transmission of data must be resumed
while waiting for the PHY IND response from the peer.
In other words: in the LP_PU_STATE_WAIT_TX_ACK_PHY_REQ state,
data transmission must resume when acting as peripheral,
but not when in the central role.

The following tests are affected:
LL/CON/PER/BV-49-C
LL/CON/PER/BV-50-C
LL/CON/PER/BV-52-C
LL/CON/PER/BV-53-C
LL/CON/PER/BV-54-C
LL/CON/PER/BV-55-C
LL/CON/PER/BV-56-C
LL/CON/PER/BV-58-C

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit 1d1a2f8b57)
2022-10-07 13:39:08 +02:00
Andries Kruithof
0e0d3dcb03 Bluetooth: controller: llcp: update unit tests with changes in PHY proc
The pausing and timing of data transmission is changed in the PHY
update procedure so that the conformance tests pass. This requires an
update to the unit tests as well.

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit c16f788959)
2022-10-07 13:39:08 +02:00
Andries Kruithof
5e5b9e7867 Bluetooth: controller: llcp: fix PHY procedure for conformance test
This PR fixes the PHY update procedure for conformance tests when
acting as a Central.
The problem was that data was in the LLL tx queue and was still being
queued before the PHY IND was queued (with a given instant).
As a result, by the time the PHY IND was transmitted over the air,
the instant was in the past.
The fix is to ensure that the LLL tx queue is empty, and to stop
queueing new data before queueing the PHY IND.

Following tests are fixed:
LL/CON/CEN/BV-49-C
LL/CON/CEN/BV-50-C
LL/CON/CEN/BV-53-C
LL/CON/CEN/BV-54-C

Signed-off-by: Andries Kruithof <andries.kruithof@nordicsemi.no>
(cherry picked from commit aa80f7da5f)
2022-10-07 13:39:08 +02:00
Thomas Ebert Hansen
3f23efbcd1 Bluetooth: controller: llcp: fix issue re. version exchange
Complete the remote initiated version exchange if a LL_VERSION_IND is
received while already having responded in an earlier version exchange
procedure.

Clarify comment regarding how to handle this invalid behaviour.

This has been seen when running the LL/CON/CEN/BI-12-C test on EBQ.

Signed-off-by: Thomas Ebert Hansen <thoh@oticon.com>
(cherry picked from commit d9774bd925)
2022-10-07 13:39:01 +02:00
Erik Brockhoff
f64f70b1a5 Bluetooth: controller: llcp: fixing tx buffer queue handling
Misc. fixups to get the tx buffer alloc mechanism to work as intended

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 1ff458ec87)
2022-10-07 13:38:52 +02:00
Thomas Ebert Hansen
049a3d1b5f Bluetooth: controller: llcp: Fix data pause/resume
llcp_tx_pause_data() calls ull_tx_q_pause_data() for each pause_mask,
while llcp_tx_resume_data() only calls ull_tx_q_resume_data() when
conn->llcp.tx_q_pause_data_mask == 0, leading to an unbalanced number
of calls to ull_tx_q_pause_data()/ull_tx_q_resume_data(), which can
leave the data path of the TX Q paused.

Fix such that only the first call to llcp_tx_pause_data() will pause the
data path of the TX Q.

Add unit test to verify correct pause/resume behavior.

Signed-off-by: Thomas Ebert Hansen <thoh@oticon.com>
(cherry picked from commit 154d67d90b)
2022-10-07 13:38:36 +02:00
Vinayak Kariappa Chettimada
d81bc781ab Bluetooth: Controller: Fix missing recv fifo reset
Fix a missing recv fifo reset on HCI reset. This fix handles a
scenario wherein the Rx Prio thread has enqueued a node rx, the Tx
thread handles the HCI Reset Command, and the Rx thread wakes up
from a call to k_fifo_get to handle an invalid node rx. The changes
here ensure the Rx thread does not get any invalid node rx after the
HCI Reset Command is handled in the Tx thread.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit e027f0671a)
2022-10-07 13:36:30 +02:00
Yong Cong Sin
338acfc80a subsys/mgmt/hawkbit: Set ai_socktype if IPV4/IPV6
Follows the implementation of updatehub and sets the
`ai_socktype` only if the family is IPv4/IPv6.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit dd9d6bbb44)
2022-10-07 13:36:23 +02:00
Yong Cong Sin
6fc5d08bf9 subsys/mgmt/hawkbit: Init the hints struct to a known value
Initialize the `hints` struct to a known value so that it won't
cause undetermined behavior when used in `getaddrinfo()`.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 2ed88e998a)
2022-10-07 13:36:23 +02:00
Reto Schneider
fceb688a2c net: context: Fix memory leak
Allocated but undersized packets must not just be logged but also
unreferenced before returning an error.

Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
(cherry picked from commit 6de54e0d03)
2022-10-07 13:36:17 +02:00
Yong Cong Sin
1a98e00f13 mgmt/hawkbit: Print hrefs only if there's an update
If the is no update from the server, the _links will be NULL.
Check if it is NULL before trying to LOG these strings.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2022-09-29 16:43:58 +02:00
Ruud Derwig
c1fc2d957a ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for the initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
(cherry picked from commit 9bccb5cc4b)
2022-09-22 18:02:09 +02:00
Daniel Leung
4b670b584c soc: esp32: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit b820cde7a9)
2022-09-22 18:01:52 +02:00
Daniel Leung
97c9de27fc soc: intel_adsp: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit 74df88d8f5)
2022-09-22 18:01:52 +02:00
Erwan Gouriou
c4957772c1 boards: nucleo_wb55rg: Update regarding supported M0 BLE f/w
Since STM32WBCube release V1.13.2, only "HCI Only" f/w is compatible
for use with Zephyr.
"Full stack" f/w, which used to be supported, is no longer compatible.

Update board documentation to mention this restriction.

Additionally, update flash partition to reflect the gain implied
by the use of this smaller f/w.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit d00f8505bf)
2022-09-22 15:24:20 +02:00
Thomas Stranger
aa8cf13e25 dts/arm: stm32f105: enable master can gating clock for can2
CAN2 only works if the gating clock of the master CAN (can1)
is enabled, therefore also set that bit for can2.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 24594cf7ce)
2022-09-22 15:24:12 +02:00
Thomas Stranger
1cb45f17c3 include: dt-bindings: pinctrl: stm32f1-afio: fix can & eth pinmap
The bindings for the stm32f105 afio pin remap had defined the wrong
offset for CAN and ETH.
This commit corrects those to the bits specified in RM0008 Rev.21,
but the changes could not be verified on hw.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 705c203f26)
2022-09-22 15:24:02 +02:00
Erwan Gouriou
c32e7773ae drivers: gpio: stm32: Apply GPIOG specific code to U5 series
On STM32U5 as well, it is required to enable VDD before use.
The difference is that U5 enables this under the PWR_SVMCR_IO2SV flag.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit e1cb0845b4)
2022-09-22 15:23:54 +02:00
Nicolas Pitre
1884e2b889 riscv: smp: fix secondary cpus' initial stack
Z_THREAD_STACK_BUFFER() must not be used here. This is meant for stacks
defined with K_THREAD_STACK_ARRAY_DEFINE() whereas in this case we are
given a stack created with K_KERNEL_STACK_ARRAY_DEFINE().

If CONFIG_USERSPACE=y then K_THREAD_STACK_RESERVED gets defined with
a bigger value than K_KERNEL_STACK_RESERVED. Then Z_THREAD_STACK_BUFFER()
returns a pointer that is more advanced than expected, resulting in a
stack pointer outside its actual stack area and therefore memory
corruption ensues.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit c76d8c88c0)
2022-09-21 15:35:47 +02:00
Stephanos Ioannidis
5fc3da40f2 ci: twister: Check out west modules for test plan
This commit updates the twister workflow to check out the west modules
before running the test plan script for pull request runs, because
the twister test plan logic resolves the module dependencies and
filters tests based on the local availability of the required modules.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit fad899d2ad)
2022-09-16 21:14:11 +09:00
Sylvio Alves
2cd7fa9cbf soc: esp32: opt to make device handles in dram
The ESP32 linker loader needs all sections to be aligned correctly.
When MCUBoot is enabled, the device handles provided by device-handles.ld
do not get the ALIGN(4) at the end, which breaks the loader
initialization. This PR makes sure that this particular section
is placed in DRAM instead.

For now this is a workaround until this can be handled in loader script.

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
(cherry picked from commit 54ca96f523)
2022-08-18 22:11:10 -07:00
Benjamin Gwin
6628b0631a twister: Exit with a bad status if there were build errors
Previously, twister would exit with a 0 status code if there were build
failures but no execution failures. Exiting with a non-zero status in
that case makes it easier to catch build breakages when using twister
in CI workflows.

Signed-off-by: Benjamin Gwin <bgwin@google.com>
(cherry picked from commit 3f8d5c49b3)
2022-07-26 15:03:34 -04:00
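The policy change can be sketched as a tiny exit-code helper (a toy model, not twister's actual implementation; the function name is hypothetical):

```python
def twister_exit_code(build_failures: int, execution_failures: int) -> int:
    # Hypothetical sketch: any failure, including build-only failures,
    # now yields a non-zero exit status so CI pipelines can flag it.
    return 1 if (build_failures or execution_failures) else 0

# A run with build breakages but no execution failures now fails CI.
print(twister_exit_code(2, 0))  # 1
```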
Erik Brockhoff
7a5bdff5cd Bluetooth: controller: llcp: phy update proc, validate phys and instant
Implementing proper validation of PHY selection for PHY UPDATE procedure
Implement connection termination on PHY UPDATE with instant in the past

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 96817164ea)
2022-07-25 10:56:50 -07:00
Erwan Gouriou
4a3e1dd05d boards: nucleo_wb55rg: Fix documentation about BLE binary compatibility
Rather than stating a version information that will get out of date
at each release, refer to the source of information located in hal_stm32
module.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit 6656607d02)
2022-07-25 10:54:59 -07:00
Henrik Brix Andersen
bbe4c639da drivers: can: mcux: mcan: add pinctrl support
Add pinctrl support to the NXP LPC driver front-end.

Fixes: #47742

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 7b6ca29941)
2022-07-25 10:52:20 -07:00
Henrik Brix Andersen
434d3f98f3 tests: drivers: can: api: add test for RTR filter matching
Add test for CAN RX filtering of RTR frames.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 097cb04916)
2022-07-25 10:52:10 -07:00
Henrik Brix Andersen
4fba98cf9b drivers: can: loopback: check frame ID type and RTR bit in filters
Check the frame ID type and RTR bit when comparing loopback CAN frames
against installed RX filters.

Fixes: #47904

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 588f06d511)
2022-07-25 10:52:10 -07:00
Henrik Brix Andersen
41ba03d0be drivers: can: mcux: flexcan: fix handling of RTR frames
When installing a RX filter, the driver uses "filter->rtr &
filter->rtr_mask" for setting the filter mask. It should just be using
filter->rtr_mask, otherwise filters for non-RTR frames will match RTR
frames as well.

When transmitting a RTR frame, the hardware automatically switches the
mailbox used for TX to RX in order to receive the reply. This, however,
does not match the Zephyr CAN driver model, where mailboxes are dedicated
to either RX or TX. Attempting to reuse the TX mailbox (which was
automatically switched to an RX mailbox by the hardware) fails on the first
call, after which the mailbox is reset and can be reused for TX. To
overcome this, the driver must abort the RX mailbox operation when the
hardware performs the TX to RX switch.

Fixes: #47902

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 3e5aaf837e)
2022-07-25 10:52:10 -07:00
Henrik Brix Andersen
b285c2a275 drivers: can: mcan: acknowledge all received frames
The Bosch M_CAN IP does not support RX filtering of the RTR bit, so the
driver handles this bit in software.

If a received frame matches a filter with RTR enabled, the RTR bit of the
frame must match that of the filter in order to be passed to the RX
callback function. If the RTR bits do not match, the frame must be dropped.

Improve the readability of the logic for determining if a frame should
be dropped and add a missing FIFO acknowledge write for dropped frames.

Fixes: #47204

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 5e74f72220)
2022-07-25 10:52:10 -07:00
Andriy Gelman
01e51fed23 tests: net: Check leaks when appending frags to a cloned packet
This test checks that new frags appended to a cloned packet are properly
freed.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit d67a130368)
2022-07-25 10:51:56 -07:00
Andriy Gelman
2b6bbef0e5 tests: net: Add test to detect leak in pkt shallow clone
Checks that taking a shallow clone of a packet does not leak buffers
after the packet and its clone are unreffed.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit f119a368a2)
2022-07-25 10:51:56 -07:00
Andriy Gelman
bd110756f5 net: pkt: Fix leak when using shallow clone
Currently a shallow clone of a packet will bump the reference count on
all the fragments. The net_pkt_unref() function, however, only drops the
reference count on the head fragment. Fix this by only bumping the ref
count on the head buf during shallow clone.

Only bumping the ref count of the head is more in line with the idea
that the head buf is not responsible for the fragments of its child.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit f12f9d5e95)
2022-07-25 10:51:56 -07:00
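The ref-counting asymmetry the fix addresses can be illustrated with a toy model (plain Python, not the Zephyr net_pkt API; all names here are illustrative):

```python
class Buf:
    """Toy buffer: a head buf owns a chain of fragment bufs."""
    def __init__(self, frags=None):
        self.ref = 1
        self.frags = frags or []

def shallow_clone(head):
    # The fix: bump only the head's ref count, not every fragment's,
    # so one unref per reference releases the whole chain.
    head.ref += 1
    return head

def unref(head):
    head.ref -= 1
    if head.ref == 0:
        # The head is responsible for releasing its fragments.
        for frag in head.frags:
            frag.ref -= 1

head = Buf(frags=[Buf(), Buf()])
clone = shallow_clone(head)
unref(clone)
unref(head)
# Both references dropped: the head and all fragments are released.
assert head.ref == 0 and all(f.ref == 0 for f in head.frags)
```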
Andriy Gelman
c0c1390ecf net: route: Fix pkt leak if net_send_data() fails
If the call to net_send_data() fails, for example if the forwarding
interface is down, then the pkt will leak. The reference taken by
net_pkt_shallow_clone() will never be released. Fix the problem
by dropping the reference count in the error path.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit a3cdb2102c)
2022-07-25 10:50:30 -07:00
Stephanos Ioannidis
c6b1d932ff tests: cpp: cxx: Add qemu_cortex_a53 as integration platform
This commit adds the `qemu_cortex_a53`, which is an MMU-based platform,
as an integration platform for the C++ subsystem tests.

This ensures that the `test_global_static_ctor_dynmem` test, which
verifies that the dynamic memory allocation service is functional
during the global static object constructor invocation, is tested on
an MMU-based platform, which may have a different libc heap
initialisation path.

In addition to the above, this increases the overall test coverage
ensuring that the C++ subsystem is functional on an MMU-based platform
in general.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 8e6322a21a)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
1a58ca22e1 tests: cpp: cxx: Test with various types of libc
This commit changes the C++ subsystem test, which previously was only
being run with the minimal libc, to be run with all the mainstream C
libraries (minimal libc, newlib, newlib-nano, picolibc).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 03f0693125)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
0dc782efae tests: cpp: cxx: Add dynamic memory availability test for static init
This commit adds a test to verify that the dynamic memory allocation
service (the `new` operator) is available and functional when the C++
static global object constructors are invoked during the system
initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit dc4895b876)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
eff3c0d65a tests: cpp: cxx: Add static global constructor invocation test
This commit adds a test to verify that the C++ static global object
constructors are invoked during the system initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 6e0063af29)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
2c3b4ba5cf lib: libc: newlib: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the newlib malloc heap
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 43e1c28a25)
2022-07-19 14:04:02 -07:00
Stephanos Ioannidis
0a5feb6832 lib: libc: minimal: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the minimal libc malloc
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit db0748c462)
2022-07-19 14:04:02 -07:00
Christopher Friedt
63e27ea69a scripts: release: list_backports: use older python dict merge method
In Python versions >= 3.9, dicts can be merged with the `|` operator.

This is not the case for Python versions < 3.9, where the simplest way
is to use `dict_c = {**dict_a, **dict_b}`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3783cf8353)
2022-07-19 01:04:45 +09:00
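The portable idiom mentioned above behaves like the 3.9+ `|` operator, with later dicts winning on duplicate keys:

```python
dict_a = {"x": 1, "y": 2}
dict_b = {"y": 3, "z": 4}

# Python >= 3.9 could use: dict_c = dict_a | dict_b
# Portable spelling for older interpreters; dict_b wins on duplicate keys.
dict_c = {**dict_a, **dict_b}
print(dict_c)  # {'x': 1, 'y': 3, 'z': 4}
```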
Christopher Friedt
c5498f4be0 ci: backports: check if a backport PR has a valid issue
This is an automated check for the Backports project to
require one or more `Fixes #<issue>` items in the body
of the pull request.

Fixes #46164

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit aa4e437573)
2022-07-18 23:32:23 +09:00
Christopher Friedt
551ef9c2e4 scripts: release: list_backports.py
Created list_backports.py to examine PRs applied to a backport
branch and extract associated issues. This is helpful for
adding to release notes.

The script may also be used to ensure that backported changes
also have one or more associated issues.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 57762ca12c)
2022-07-18 23:32:23 +09:00
Christopher Friedt
89fd222c70 scripts: release: use GITHUB_TOKEN and start_date in scripts
Updated bug_bash.py and list_issues.py to use the GITHUB_TOKEN
environment variable for consistency with other scripts.

Updated bug_bash.py to use `-s / --start-date` instead of
`-b / --begin-date`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3b3fc27860)
2022-07-18 23:32:23 +09:00
Thomas Stranger
a8cebfc437 dts: bindings: clock: Fix STM32G4 device clk src selection definitions
Some device clock source selection helpers were not correctly defined.

With this commit the definitions are updated to match the description
in the reference manual RM0453.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit 87ed12d796)
2022-07-12 18:08:03 +02:00
Erik Brockhoff
0f6c57f3dd Bluetooth: controller: llcp: fix issue re. missing ack of terminate ind
On remote terminate on central, the conn clean-up would happen before
the ack of the terminate ind was sent to the peer.
Now clean-up is 'postponed' until the subsequent event.
Also, data tx is now paused on rx of the terminate ind to ensure no data
is tx'ed after rx of the terminate ind.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 8b1d50b981)
2022-07-12 18:07:56 +02:00
Erik Brockhoff
8e09d5445d Bluetooth: controller: llcp: fix issue re. missing release of tx node
On disconnect with the refactored LLCP, if data tx is paused,
'waiting' tx nodes would possibly not get released.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 8b912f1488)
2022-07-12 18:07:50 +02:00
Vinayak Kariappa Chettimada
689e5f209d Bluetooth: Controller: Replace k_sem_take loop with k_sem_reset
Replace the k_sem_take loop used for consuming the remaining
sem give counts with k_sem_reset.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 7ff8581916)
2022-07-12 18:07:41 +02:00
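The simplification can be sketched with a toy counting semaphore (plain Python, not Zephyr's k_sem API; the class and method names are illustrative):

```python
class ToySem:
    """Minimal counting-semaphore model (not Zephyr's k_sem API)."""
    def __init__(self, count):
        self.count = count

    def take_nowait(self):
        # Consume one count if available; report whether the take succeeded.
        if self.count == 0:
            return False
        self.count -= 1
        return True

    def reset(self):
        # Drop all remaining give counts in one call.
        self.count = 0

# Old approach: loop, taking until the remaining counts are drained.
s_old = ToySem(5)
while s_old.take_nowait():
    pass
assert s_old.count == 0

# New approach: a single reset consumes the remaining give counts.
s_new = ToySem(5)
s_new.reset()
assert s_new.count == 0
```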
Vinayak Kariappa Chettimada
9e950f5259 Bluetooth: Controller: Fix pdu_free_sem_give assertion under ZLI use
Fix assertion due to multiple mayfly_enqueue calls used
under ZLI when pdu_free_sem_give is invoked from the LLL.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit ae8e7f4c22)
2022-07-12 18:07:41 +02:00
Glauber Maroto Ferreira
c30d375272 west.yml: hal_espressif: updates to latest revision
Updates hal_espressif's revision to include:
- latest pinctrl definitions.
- support for building ESP32C3 USB driver

Signed-off-by: Glauber Maroto Ferreira <glauber.ferreira@espressif.com>
(cherry picked from commit 7e1baee44b)
2022-07-12 18:07:29 +02:00
Martin Jäger
1c45956e06 drivers: can: stm32: enable can1 clock also for can2
For devices with more than one CAN peripheral, CAN1 is the master and
its clock has to be enabled even if only CAN2 is used.

Signed-off-by: Martin Jäger <martin@libre.solar>
(cherry picked from commit 8ab81b02dd)
2022-07-12 18:07:17 +02:00
Stephanos Ioannidis
4ab83425e0 drivers: i2c: Fix infinite recursion in driver unregister function
This commit fixes an infinite recursion in the
`z_vrfy_i2c_slave_driver_unregister` function.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 745b7d202e)
2022-06-22 11:55:15 -07:00
Carles Cufi
44358e62ec version: Fix the EXTRAVERSION field
This was supposed to be left empty, but was instead set to 0.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2022-06-05 20:17:02 +02:00
93 changed files with 1448 additions and 469 deletions


@@ -9,7 +9,7 @@ on:
jobs:
backport:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
name: Backport
steps:
- name: Backport


@@ -0,0 +1,30 @@
name: Backport Issue Check
on:
pull_request_target:
branches:
- v*-branch
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-22.04
steps:
- name: Check out source code
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}


@@ -8,7 +8,7 @@ on:
jobs:
bluetooth-test-results:
name: "Publish Bluetooth Test Results"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.event.workflow_run.conclusion != 'skipped'
steps:


@@ -11,17 +11,13 @@ on:
- "soc/posix/**"
- "arch/posix/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bluetooth-test-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
bluetooth-test-build:
runs-on: ubuntu-latest
needs: bluetooth-test-prep
bluetooth-test:
runs-on: ubuntu-20.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
@@ -47,7 +43,7 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -72,7 +68,7 @@ jobs:
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: bluetooth-test-results
path: |
@@ -81,7 +77,7 @@ jobs:
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: event
path: |


@@ -17,7 +17,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Install Python dependencies
run: |


@@ -2,22 +2,18 @@ name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
clang-build:
runs-on: zephyr_runner
needs: clang-build-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -38,12 +34,14 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -80,7 +78,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
@@ -108,12 +106,12 @@ jobs:
# We can limit scope to just what has changed
if [ -s testplan.json ]; then
echo "::set-output name=report_needed::1";
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --force-color --inline-logs -M -N -v --load-tests testplan.json --retry-failed 2
else
# if nothing is run, skip reporting step
echo "::set-output name=report_needed::0";
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
@@ -122,7 +120,7 @@ jobs:
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml
@@ -145,7 +143,7 @@ jobs:
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore


@@ -4,22 +4,18 @@ on:
schedule:
- cron: '25 */3 * * 1-5'
jobs:
codecov-prep:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov:
runs-on: zephyr_runner
needs: codecov-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -40,8 +36,14 @@ jobs:
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -62,7 +64,7 @@ jobs:
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -102,7 +104,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
@@ -116,7 +118,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Download Artifacts
@@ -152,8 +154,8 @@ jobs:
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
message("::set-output name=mergefiles::${MERGELIST}")
message("::set-output name=covfiles::${FILELIST}")
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
- name: Merge coverage files
run: |


@@ -4,17 +4,17 @@ on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip


@@ -4,11 +4,11 @@ on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -20,7 +20,7 @@ jobs:
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -28,13 +28,13 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
@@ -72,7 +72,7 @@ jobs:
./scripts/ci/check_compliance.py -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: compliance.xml


@@ -12,7 +12,7 @@ on:
jobs:
get_version:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
@@ -28,7 +28,7 @@ jobs:
pip3 install gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0


@@ -6,10 +6,16 @@ name: Devicetree script tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
@@ -21,22 +27,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-latest
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -44,7 +50,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -53,7 +59,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}


@@ -8,7 +8,7 @@ jobs:
do-not-merge:
if: ${{ contains(github.event.*.labels.*.name, 'DNM') }}
name: Prevent Merging
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: Check for label
run: |


@@ -35,15 +35,16 @@ env:
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
@@ -54,7 +55,7 @@ jobs:
echo "${PWD}/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -91,7 +92,7 @@ jobs:
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
@@ -107,7 +108,7 @@ jobs:
echo "Documentation will be available shortly at: ${DOC_URL}" >> $GITHUB_STEP_SUMMARY
- name: upload-pr-number
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
@@ -115,7 +116,7 @@ jobs:
doc-build-pdf:
name: "Documentation Build (PDF)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
@@ -124,19 +125,25 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
@@ -159,7 +166,7 @@ jobs:
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: pdf-output
path: doc/_build/latex/zephyr.pdf


@@ -13,7 +13,7 @@ on:
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&


@@ -16,7 +16,7 @@ on:
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event != 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&


@@ -8,7 +8,7 @@ on:
jobs:
check-errno:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
env:
@@ -24,7 +24,7 @@ jobs:
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Run errno.py
run: |


@@ -13,19 +13,14 @@ on:
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-tracking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-tracking-cancel
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
@@ -52,7 +47,7 @@ jobs:
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0


@@ -2,19 +2,14 @@ name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-delta:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-cancel
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
@@ -25,11 +20,6 @@ jobs:
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
@@ -43,7 +33,7 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0


@@ -14,13 +14,13 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/master/.github/workflows/issues-report-config.json
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
@@ -35,7 +35,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}

View File

@@ -4,7 +4,7 @@ on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Scan code for licenses
steps:
- name: Checkout the code
@@ -15,7 +15,7 @@ jobs:
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts

View File

@@ -6,11 +6,11 @@ on:
jobs:
contribs:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}

View File

@@ -7,15 +7,16 @@ on:
jobs:
release:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
@@ -23,7 +24,7 @@ jobs:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
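The `::set-output` replacements in this workflow (and in the Twister workflow below) follow GitHub's deprecation of workflow command outputs: a step now appends `KEY=VALUE` lines to the file named by `$GITHUB_OUTPUT` instead of echoing a `::set-output` command. A minimal Python simulation of the new mechanism (the temp file stands in for the runner-provided path):

```python
import os
import tempfile

# Stand-in for the path the Actions runner exports as $GITHUB_OUTPUT.
fd, github_output = tempfile.mkstemp()
os.close(fd)
os.environ["GITHUB_OUTPUT"] = github_output

# Equivalent of: echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
with open(os.environ["GITHUB_OUTPUT"], "a") as f:
    f.write("VERSION=v3.1.0\n")

with open(github_output) as f:
    print(f.read().strip())  # → VERSION=v3.1.0
```

Later steps read the value back as `${{ steps.<id>.outputs.VERSION }}`, exactly as with the old syntax.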

View File

@@ -6,7 +6,7 @@ on:
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3

View File

@@ -13,24 +13,18 @@ on:
# Run at 00:00 on Wednesday and Saturday
- cron: '0 0 * * 3,6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
twister-build-cleanup:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
twister-build-prep:
runs-on: zephyr_runner
needs: twister-build-cleanup
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -53,14 +47,16 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Cleanup
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -76,7 +72,9 @@ jobs:
git log --pretty=oneline | head -n 10
west init -l . || true
west config manifest.group-filter -- +ci
# no need for west update here
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
- name: Generate Test Plan with Twister
if: github.event_name == 'pull_request_target'
@@ -111,19 +109,19 @@ jobs:
else
size=0
fi
echo "::set-output name=subset::${subset}";
echo "::set-output name=size::${size}";
echo "::set-output name=fullrun::${TWISTER_FULL}";
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr_runner
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.23.3
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -146,13 +144,14 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -191,7 +190,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -257,7 +256,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -268,7 +267,7 @@ jobs:
twister-test-results:
name: "Publish Unit Tests Results"
needs: twister-build
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()
@@ -286,7 +285,7 @@ jobs:
- name: Upload Unit Test Results in HTML
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: HTML Unit Test Results
if-no-files-found: ignore

View File

@@ -5,12 +5,18 @@ name: Twister TestSuite
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
@@ -24,17 +30,17 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,11 +5,17 @@ name: Zephyr West Command Tests
on:
push:
branches:
- main
- v*-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- main
- v*-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -22,22 +28,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-latest
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -45,7 +51,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -54,7 +60,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -2,4 +2,4 @@ VERSION_MAJOR = 3
VERSION_MINOR = 1
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION = 0
EXTRAVERSION =

View File

@@ -58,7 +58,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one slave cpu can read it per time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;

View File

@@ -22,7 +22,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
riscv_cpu_init[cpu_num].fn = fn;
riscv_cpu_init[cpu_num].arg = arg;
riscv_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
riscv_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
riscv_cpu_wake_flag = cpu_num;
while (riscv_cpu_wake_flag != 0U) {

View File

@@ -186,8 +186,12 @@ To operate bluetooth on Nucleo WB55RG, Cortex-M0 core should be flashed with
a valid STM32WB Coprocessor binaries (either 'Full stack' or 'HCI Layer').
These binaries are delivered in STM32WB Cube packages, under
Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x/
To date, interoperability and backward compatibility has been tested and is
guaranteed up to version 1.5 of STM32Cube package releases.
For compatibility information with the various versions of these binaries,
please check `modules/hal/stm32/lib/stm32wb/hci/README <https://github.com/zephyrproject-rtos/hal_stm32/blob/main/lib/stm32wb/hci/README>`__
in the hal_stm32 repo.
Note that since STM32WB Cube package V1.13.2, "full stack" binaries are no longer
compatible with Zephyr; only "HCI Only" versions should be used on the M0 side.
Connections and IOs
===================

View File

@@ -196,10 +196,9 @@ zephyr_udc0: &usb {
/*
* Configure partitions while leaving space for M0 BLE f/w
* First 794K are configured for Zephyr to run on M4 core
* Last 232K are left for BLE f/w on the M0 core
* This partition set up is compatible with use of
* stm32wb5x_BLE_Stack_full_fw.bin v1.13.x
* Since STM32WBCube release V1.13.2, only _HCIOnly_ f/w are supported.
* These FW are expected to be located not before 0x080DB000
* Current partition is using the first 876K of the flash for M4
*/
boot_partition: partition@0 {
@@ -208,19 +207,19 @@ zephyr_udc0: &usb {
};
slot0_partition: partition@c000 {
label = "image-0";
reg = <0x0000c000 DT_SIZE_K(360)>;
reg = <0x0000c000 DT_SIZE_K(400)>;
};
slot1_partition: partition@66000 {
slot1_partition: partition@70000 {
label = "image-1";
reg = <0x00066000 DT_SIZE_K(360)>;
reg = <0x00070000 DT_SIZE_K(400)>;
};
scratch_partition: partition@c0000 {
scratch_partition: partition@d4000 {
label = "image-scratch";
reg = <0x000c0000 DT_SIZE_K(16)>;
reg = <0x000d4000 DT_SIZE_K(16)>;
};
storage_partition: partition@c4000 {
storage_partition: partition@d8000 {
label = "storage";
reg = <0x000c4000 DT_SIZE_K(8)>;
reg = <0x000d8000 DT_SIZE_K(8)>;
};
};
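The repartitioned layout above can be sanity-checked arithmetically: every Zephyr/M4 partition must fit in the first 876K (0xDB000) of flash, leaving the rest for the M0 "HCI Only" firmware. A quick Python check (the 48K boot partition size is inferred from slot0's 0xc000 offset, not stated in the diff):

```python
KIB = 1024
# (label, offset, size) taken from the devicetree fragment above
partitions = [
    ("mcuboot", 0x00000, 48 * KIB),
    ("image-0", 0x0c000, 400 * KIB),
    ("image-1", 0x70000, 400 * KIB),
    ("image-scratch", 0xd4000, 16 * KIB),
    ("storage", 0xd8000, 8 * KIB),
]

end = 0
for name, offset, size in partitions:
    # each partition starts exactly where the previous one ended
    assert offset == end, f"{name} does not start where the previous ended"
    end = offset + size

# the M0 firmware is expected no earlier than 0x080DB000 (876K into flash)
assert end <= 876 * KIB
print(hex(end))  # → 0xda000
```

So the layout ends at 0xDA000, 4K below the 0xDB000 boundary reserved for the coprocessor firmware.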

View File

@@ -14,6 +14,8 @@
#include <zephyr/kernel.h>
#include <zephyr/logging/log.h>
#include "can_utils.h"
LOG_MODULE_REGISTER(can_loopback, CONFIG_CAN_LOG_LEVEL);
struct can_loopback_frame {
@@ -56,13 +58,6 @@ static void dispatch_frame(const struct device *dev,
filter->rx_cb(dev, &frame_tmp, filter->cb_arg);
}
static inline int check_filter_match(const struct zcan_frame *frame,
const struct zcan_filter *filter)
{
return ((filter->id & filter->id_mask) ==
(frame->id & filter->id_mask));
}
static void tx_thread(void *arg1, void *arg2, void *arg3)
{
const struct device *dev = arg1;
@@ -80,7 +75,7 @@ static void tx_thread(void *arg1, void *arg2, void *arg3)
for (int i = 0; i < CONFIG_CAN_MAX_FILTER; i++) {
filter = &data->filters[i];
if (filter->rx_cb &&
check_filter_match(&frame.frame, &filter->filter)) {
can_utils_filter_match(&frame.frame, &filter->filter) != 0) {
dispatch_frame(dev, &frame.frame, filter);
}
}

View File

@@ -625,6 +625,8 @@ static void can_mcan_get_message(const struct device *dev,
int data_length;
void *cb_arg;
struct can_mcan_rx_fifo_hdr hdr;
bool rtr_filter_mask;
bool rtr_filter;
while ((*fifo_status_reg & CAN_MCAN_RXF0S_F0FL)) {
get_idx = (*fifo_status_reg & CAN_MCAN_RXF0S_F0GI) >>
@@ -653,11 +655,17 @@ static void can_mcan_get_message(const struct device *dev,
filt_idx = hdr.fidx;
/* Check if RTR must match */
if ((hdr.xtd && data->ext_filt_rtr_mask & (1U << filt_idx) &&
((data->ext_filt_rtr >> filt_idx) & 1U) != frame.rtr) ||
(data->std_filt_rtr_mask & (1U << filt_idx) &&
((data->std_filt_rtr >> filt_idx) & 1U) != frame.rtr)) {
if (hdr.xtd != 0) {
rtr_filter_mask = (data->ext_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->ext_filt_rtr & BIT(filt_idx)) != 0;
} else {
rtr_filter_mask = (data->std_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->std_filt_rtr & BIT(filt_idx)) != 0;
}
if (rtr_filter_mask && (rtr_filter != frame.rtr)) {
/* RTR bit does not match filter RTR mask and bit, drop frame */
*fifo_ack_reg = get_idx;
continue;
}
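The reworked RTR check above drops a received frame only when the per-filter RTR mask bit is set *and* the filter's RTR bit disagrees with the frame's RTR flag. A hedged Python sketch of that logic (illustration only, not the driver code; `BIT` mirrors Zephyr's macro):

```python
def BIT(n):
    # mirrors Zephyr's BIT() macro
    return 1 << n

def drop_frame(rtr_mask_reg, rtr_reg, filt_idx, frame_rtr):
    """Drop a frame only when the filter's RTR mask bit is set and the
    filter's RTR bit disagrees with the frame's RTR flag."""
    rtr_filter_mask = (rtr_mask_reg & BIT(filt_idx)) != 0
    rtr_filter = (rtr_reg & BIT(filt_idx)) != 0
    return rtr_filter_mask and (rtr_filter != frame_rtr)

# mask bit clear: RTR is "don't care", frame kept
print(drop_frame(0x0, 0x0, 3, True))        # → False
# filter wants data frames, frame is remote: dropped
print(drop_frame(BIT(3), 0x0, 3, True))     # → True
# filter wants remote frames, frame is remote: kept
print(drop_frame(BIT(3), BIT(3), 3, True))  # → False
```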

View File

@@ -284,13 +284,11 @@ static void mcux_flexcan_copy_zfilter_to_mbconfig(const struct zcan_filter *src,
if (src->id_type == CAN_STANDARD_IDENTIFIER) {
dest->format = kFLEXCAN_FrameFormatStandard;
dest->id = FLEXCAN_ID_STD(src->id);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask, src->rtr_mask, 1);
} else {
dest->format = kFLEXCAN_FrameFormatExtend;
dest->id = FLEXCAN_ID_EXT(src->id);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask, src->rtr_mask, 1);
}
if ((src->rtr & src->rtr_mask) == CAN_DATAFRAME) {
@@ -644,6 +642,7 @@ static inline void mcux_flexcan_transfer_rx_idle(const struct device *dev,
static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
{
struct mcux_flexcan_data *data = (struct mcux_flexcan_data *)userData;
const struct mcux_flexcan_config *config = data->dev->config;
/*
* The result field can either be a MB index (which is limited to 32 bit
* value) or a status flags value, which is 32 bit on some platforms but
@@ -663,6 +662,7 @@ static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
mcux_flexcan_transfer_error_status(data->dev, status_flags);
break;
case kStatus_FLEXCAN_TxSwitchToRx:
FLEXCAN_TransferAbortReceive(config->base, &data->handle, mb);
__fallthrough;
case kStatus_FLEXCAN_TxIdle:
mcux_flexcan_transfer_tx_idle(data->dev, mb);

View File

@@ -7,6 +7,9 @@
#include <zephyr/device.h>
#include <zephyr/drivers/can.h>
#include <zephyr/drivers/clock_control.h>
#ifdef CONFIG_PINCTRL
#include <zephyr/drivers/pinctrl.h>
#endif
#include <zephyr/logging/log.h>
#include "can_mcan.h"
@@ -19,6 +22,9 @@ struct mcux_mcan_config {
const struct device *clock_dev;
clock_control_subsys_t clock_subsys;
void (*irq_config_func)(const struct device *dev);
#ifdef CONFIG_PINCTRL
const struct pinctrl_dev_config *pincfg;
#endif
};
struct mcux_mcan_data {
@@ -40,6 +46,13 @@ static int mcux_mcan_init(const struct device *dev)
const struct mcux_mcan_config *mcux_config = mcan_config->custom;
int err;
#ifdef CONFIG_PINCTRL
err = pinctrl_apply_state(mcux_config->pincfg, PINCTRL_STATE_DEFAULT);
if (err) {
return err;
}
#endif /* CONFIG_PINCTRL */
err = clock_control_on(mcux_config->clock_dev, mcux_config->clock_subsys);
if (err) {
LOG_ERR("failed to enable clock (err %d)", err);
@@ -120,7 +133,17 @@ static const struct can_driver_api mcux_mcan_driver_api = {
#endif /* CONFIG_CAN_FD_MODE */
};
#ifdef CONFIG_PINCTRL
#define MCUX_MCAN_PINCTRL_DEFINE(n) PINCTRL_DT_INST_DEFINE(n)
#define MCUX_MCAN_PINCTRL_INIT(n) .pincfg = PINCTRL_DT_INST_DEV_CONFIG_GET(n),
#else
#define MCUX_MCAN_PINCTRL_DEFINE(n)
#define MCUX_MCAN_PINCTRL_INIT(n)
#endif
#define MCUX_MCAN_INIT(n) \
MCUX_MCAN_PINCTRL_DEFINE(n); \
\
static void mcux_mcan_irq_config_##n(const struct device *dev); \
\
static const struct mcux_mcan_config mcux_mcan_config_##n = { \
@@ -128,6 +151,7 @@ static const struct can_driver_api mcux_mcan_driver_api = {
.clock_subsys = (clock_control_subsys_t) \
DT_INST_CLOCKS_CELL(n, name), \
.irq_config_func = mcux_mcan_irq_config_##n, \
MCUX_MCAN_PINCTRL_INIT(n) \
}; \
\
static const struct can_mcan_config can_mcan_config_##n = \

View File

@@ -1248,7 +1248,9 @@ static const struct can_stm32_config can_stm32_cfg_2 = {
.ts2 = DT_PROP_OR(DT_NODELABEL(can2), phase_seg2, 0),
.one_shot = DT_PROP(DT_NODELABEL(can2), one_shot),
.pclken = {
.enr = DT_CLOCKS_CELL(DT_NODELABEL(can2), bits),
/* can1 (master) clock must be enabled for can2 as well */
.enr = DT_CLOCKS_CELL(DT_NODELABEL(can1), bits) |
DT_CLOCKS_CELL(DT_NODELABEL(can2), bits),
.bus = DT_CLOCKS_CELL(DT_NODELABEL(can2), bus),
},
.config_irq = config_can_2_irq,

View File

@@ -629,11 +629,16 @@ static int gpio_stm32_init(const struct device *dev)
data->dev = dev;
#if defined(PWR_CR2_IOSV) && DT_NODE_HAS_STATUS(DT_NODELABEL(gpiog), okay)
#if (defined(PWR_CR2_IOSV) || defined(PWR_SVMCR_IO2SV)) && \
DT_NODE_HAS_STATUS(DT_NODELABEL(gpiog), okay)
z_stm32_hsem_lock(CFG_HW_RCC_SEMID, HSEM_LOCK_DEFAULT_RETRY);
/* Port G[15:2] requires external power supply */
/* Cf: L4/L5 RM, Chapter "Independent I/O supply rail" */
/* Cf: L4/L5/U5 RM, Chapter "Independent I/O supply rail" */
#ifdef CONFIG_SOC_SERIES_STM32U5X
LL_PWR_EnableVDDIO2();
#else
LL_PWR_EnableVddIO2();
#endif
z_stm32_hsem_unlock(CFG_HW_RCC_SEMID);
#endif
/* enable port clock (if runtime PM is not enabled) */

View File

@@ -81,7 +81,7 @@ static inline int z_vrfy_i2c_slave_driver_register(const struct device *dev)
static inline int z_vrfy_i2c_slave_driver_unregister(const struct device *dev)
{
Z_OOPS(Z_SYSCALL_OBJ(dev, K_OBJ_DRIVER_I2C));
return z_vrfy_i2c_slave_driver_unregister(dev);
return z_impl_i2c_slave_driver_unregister(dev);
}
#include <syscalls/i2c_slave_driver_unregister_mrsh.c>

View File

@@ -48,7 +48,8 @@
reg = <0x40006800 0x400>;
interrupts = <63 0>, <64 0>, <65 0>, <66 0>;
interrupt-names = "TX", "RX0", "RX1", "SCE";
clocks = <&rcc STM32_CLOCK_BUS_APB1 0x04000000>;
/* also enabling clock for can1 (master instance) */
clocks = <&rcc STM32_CLOCK_BUS_APB1 0x06000000>;
status = "disabled";
label = "CAN_2";
sjw = <1>;
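The mask change above folds the CAN1 (master) clock-enable bit into CAN2's clock cell, since CAN1's clock must remain on for CAN2 to function. As a quick check of the arithmetic (bit names assume the STM32F1 APB1ENR layout, where CAN1EN is bit 25 and CAN2EN bit 26):

```python
CAN1EN = 0x02000000  # APB1ENR bit 25: CAN1, the master instance
CAN2EN = 0x04000000  # APB1ENR bit 26: CAN2, the original cell value

combined = CAN1EN | CAN2EN
print(hex(combined))  # → 0x6000000
```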

View File

@@ -2,7 +2,7 @@ description: NXP LPC SoC series MCAN CAN-FD controller
compatible: "nxp,lpc-mcan"
include: [can-fd-controller.yaml, "bosch,m_can-base.yaml"]
include: [can-fd-controller.yaml, "bosch,m_can-base.yaml", pinctrl-device.yaml]
properties:
reg:

View File

@@ -76,15 +76,15 @@
#define USART2_SEL(val) STM32_CLOCK(val, 3, 2, CCIPR_REG)
#define USART3_SEL(val) STM32_CLOCK(val, 3, 4, CCIPR_REG)
#define USART4_SEL(val) STM32_CLOCK(val, 3, 6, CCIPR_REG)
#define LPUART1_SEL(val) STM32_CLOCK(val, 3, 8, CCIPR_REG)
#define I2C1_SEL(val) STM32_CLOCK(val, 3, 10, CCIPR_REG)
#define I2C2_SEL(val) STM32_CLOCK(val, 3, 12, CCIPR_REG)
#define I2C3_SEL(val) STM32_CLOCK(val, 3, 14, CCIPR_REG)
#define LPTIM1_SEL(val) STM32_CLOCK(val, 3, 16, CCIPR_REG)
#define LPTIM2_SEL(val) STM32_CLOCK(val, 3, 18, CCIPR_REG)
#define LPTIM3_SEL(val) STM32_CLOCK(val, 3, 20, CCIPR_REG)
#define SAI1_SEL(val) STM32_CLOCK(val, 3, 22, CCIPR_REG)
#define SAI2_SEL(val) STM32_CLOCK(val, 3, 24, CCIPR_REG)
#define USART5_SEL(val) STM32_CLOCK(val, 3, 8, CCIPR_REG)
#define LPUART1_SEL(val) STM32_CLOCK(val, 3, 10, CCIPR_REG)
#define I2C1_SEL(val) STM32_CLOCK(val, 3, 12, CCIPR_REG)
#define I2C2_SEL(val) STM32_CLOCK(val, 3, 14, CCIPR_REG)
#define I2C3_SEL(val) STM32_CLOCK(val, 3, 16, CCIPR_REG)
#define LPTIM1_SEL(val) STM32_CLOCK(val, 3, 18, CCIPR_REG)
#define SAI1_SEL(val) STM32_CLOCK(val, 3, 20, CCIPR_REG)
#define I2S23_SEL(val) STM32_CLOCK(val, 3, 22, CCIPR_REG)
#define FDCAN_SEL(val) STM32_CLOCK(val, 3, 24, CCIPR_REG)
#define CLK48_SEL(val) STM32_CLOCK(val, 3, 26, CCIPR_REG)
#define ADC12_SEL(val) STM32_CLOCK(val, 3, 28, CCIPR_REG)
#define ADC34_SEL(val) STM32_CLOCK(val, 3, 30, CCIPR_REG)

View File

@@ -150,14 +150,14 @@
#define CAN1_REMAP2 CAN_REMAP2
/** ETH (no remap) */
#define ETH_REMAP0 STM32_REMAP(0U, 0x1U, 20U, STM32_AFIO_MAPR)
#define ETH_REMAP0 STM32_REMAP(0U, 0x1U, 21U, STM32_AFIO_MAPR)
/** ETH (remap) */
#define ETH_REMAP1 STM32_REMAP(1U, 0x1U, 20U, STM32_AFIO_MAPR)
#define ETH_REMAP1 STM32_REMAP(1U, 0x1U, 21U, STM32_AFIO_MAPR)
/** CAN2 (no remap) */
#define CAN2_REMAP0 STM32_REMAP(0U, 0x1U, 21U, STM32_AFIO_MAPR)
#define CAN2_REMAP0 STM32_REMAP(0U, 0x1U, 22U, STM32_AFIO_MAPR)
/** CAN2 (remap) */
#define CAN2_REMAP1 STM32_REMAP(1U, 0x1U, 21U, STM32_AFIO_MAPR)
#define CAN2_REMAP1 STM32_REMAP(1U, 0x1U, 22U, STM32_AFIO_MAPR)
/** SPI3 (no remap) */
#define SPI3_REMAP0 STM32_REMAP(0U, 0x1U, 28U, STM32_AFIO_MAPR)

View File

@@ -94,7 +94,7 @@ void free(void *ptr)
(void) sys_mutex_unlock(&z_malloc_heap_mutex);
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#else /* No malloc arena */
void *malloc(size_t size)
{

View File

@@ -134,7 +134,7 @@ static int malloc_prepare(const struct device *unused)
return 0;
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
/* Current offset from HEAP_BASE of unused memory */
LIBC_BSS static size_t heap_sz;

View File

@@ -5,7 +5,7 @@ envlist=py3
deps =
setuptools-scm
pytest
types-PyYAML
types-PyYAML==6.0.7
mypy
setenv =
TOXTEMPDIR={envtmpdir}

View File

@@ -26,17 +26,17 @@ def parse_args():
parser.add_argument('-a', '--all', dest='all',
help='Show all bugs squashed', action='store_true')
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token', metavar='FILE')
parser.add_argument('-b', '--begin', dest='begin', help='begin date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type, required=True)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
metavar='END_DATE', type=valid_date_type, required=True)
args = parser.parse_args()
if args.end < args.begin:
if args.end < args.start:
raise ValueError(
'end date {} is before begin date {}'.format(args.end, args.begin))
'end date {} is before start date {}'.format(args.end, args.start))
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
@@ -53,12 +53,12 @@ def parse_args():
class BugBashTally(object):
def __init__(self, gh, begin_date, end_date):
def __init__(self, gh, start_date, end_date):
"""Create a BugBashTally object with the provided Github object,
begin datetime object, and end datetime object"""
start datetime object, and end datetime object"""
self._gh = gh
self._repo = gh.get_repo('zephyrproject-rtos/zephyr')
self._begin_date = begin_date
self._start_date = start_date
self._end_date = end_date
self._issues = []
@@ -122,12 +122,12 @@ class BugBashTally(object):
cutoff = self._end_date + timedelta(1)
issues = self._repo.get_issues(state='closed', labels=[
'bug'], since=self._begin_date)
'bug'], since=self._start_date)
for i in issues:
# the PyGithub API and v3 REST API do not facilitate 'until'
# or 'end date' :-/
if i.closed_at < self._begin_date or i.closed_at > cutoff:
if i.closed_at < self._start_date or i.closed_at > cutoff:
continue
ipr = i.pull_request
@@ -167,7 +167,7 @@ def print_top_ten(top_ten):
def main():
args = parse_args()
bbt = BugBashTally(Github(args.token), args.begin, args.end)
bbt = BugBashTally(Github(args.token), args.start, args.end)
if args.all:
# print one issue per line
issues = bbt.get_issues()

scripts/release/list_backports.py Executable file
View File

@@ -0,0 +1,341 @@
#!/usr/bin/env python3
# Copyright (c) 2022, Meta
#
# SPDX-License-Identifier: Apache-2.0
"""Query issues in a release branch
This script searches for issues referenced via pull-requests in a release
branch in order to simplify tracking changes such as automated backports,
manual backports, security fixes, and stability fixes.
A formatted report is printed to standard output either in JSON or
reStructuredText.
Since an issue is required for all changes to release branches, merged PRs
must have at least one instance of the phrase "Fixes #1234" in the body. This
script will throw an error if a PR has been made without an associated issue.
Usage:
./scripts/release/list_backports.py \
-t ~/.ghtoken \
-b v2.7-branch \
-s 2021-12-15 -e 2022-04-22 \
-P 45074 -P 45868 -P 44918 -P 41234 -P 41174 \
-j | jq . | tee /tmp/backports.json
GITHUB_TOKEN="<secret>" \
./scripts/release/list_backports.py \
-b v3.0-branch \
-p 43381 \
-j | jq . | tee /tmp/backports.json
"""
import argparse
from datetime import datetime, timedelta
import io
import json
import logging
import os
import re
import sys
# Requires PyGithub
from github import Github
# https://gist.github.com/monkut/e60eea811ef085a6540f
def valid_date_type(arg_date_str):
"""custom argparse *date* type for user dates values given from the
command line"""
try:
return datetime.strptime(arg_date_str, "%Y-%m-%d")
except ValueError:
msg = "Given Date ({0}) not valid! Expected format, YYYY-MM-DD!".format(arg_date_str)
raise argparse.ArgumentTypeError(msg)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-b', '--base', dest='base',
help='branch (base) for PRs (e.g. v2.7-branch)', metavar='BRANCH', required=True)
parser.add_argument('-j', '--json', dest='json', action='store_true',
help='print output in JSON rather than RST')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='END_DATE', type=valid_date_type)
parser.add_argument("-o", "--org", default="zephyrproject-rtos",
help="Github organisation")
parser.add_argument('-p', '--include-pull', dest='includes',
help='include pull request (can be specified multiple times)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument('-P', '--exclude-pull', dest='excludes',
help='exclude pull request (can be specified multiple times, helpful for version bumps and release notes)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument("-r", "--repo", default="zephyr",
help="Github repository")
args = parser.parse_args()
if args.includes:
if getattr(args, 'start'):
logging.error(
'the --start argument should not be used with --include-pull')
return None
if getattr(args, 'end'):
logging.error(
'the --end argument should not be used with --include-pull')
return None
else:
if not getattr(args, 'start'):
logging.error(
'if --include-pr PR is not used, --start START_DATE is required')
return None
if not getattr(args, 'end'):
setattr(args, 'end', datetime.now())
if args.end < args.start:
logging.error(
f'end date {args.end} is before start date {args.start}')
return None
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
token = file.read()
token = token.strip()
else:
if 'GITHUB_TOKEN' not in os.environ:
raise ValueError('No credentials specified')
token = os.environ['GITHUB_TOKEN']
setattr(args, 'token', token)
return args
class Backport(object):
def __init__(self, repo, base, pulls):
self._base = base
self._repo = repo
self._issues = []
self._pulls = pulls
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
@staticmethod
def by_date_range(repo, base, start_date, end_date, excludes):
"""Create a Backport object with the provided repo,
base, start datetime object, and end datetime objects, and
list of excluded PRs"""
pulls = []
unfiltered_pulls = repo.get_pulls(
base=base, state='closed')
for p in unfiltered_pulls:
if not p.merged:
# only consider merged backports
continue
if p.closed_at < start_date or p.closed_at >= end_date + timedelta(1):
# only concerned with PRs within time window
continue
if p.number in excludes:
# skip PRs that have been explicitly excluded
continue
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def by_included_prs(repo, base, includes):
"""Create a Backport object with the provided repo,
base, and list of included PRs"""
pulls = []
for i in includes:
try:
p = repo.get_pull(i)
except Exception:
p = None
if not p:
logging.error(f'{i} is not a valid pull request')
return None
if p.base.ref != base:
logging.error(
f'{i} is not a valid pull request for base {base} ({p.base.label})')
return None
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def sanitize_title(title):
# TODO: sanitize titles such that they are suitable for both JSON and ReStructured Text
# could also automatically fix titles like "Automated backport of PR #1234"
return title
def print(self):
for i in self.get_issues():
title = Backport.sanitize_title(i.title)
# * :github:`38972` - logging: Cleaning references to tracing in logging
print(f'* :github:`{i.number}` - {title}')
def print_json(self):
issue_objects = []
for i in self.get_issues():
obj = {}
obj['id'] = i.number
obj['title'] = Backport.sanitize_title(i.title)
obj['url'] = f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{i.number}'
issue_objects.append(obj)
print(json.dumps(issue_objects))
def get_pulls(self):
return self._pulls
def get_issues(self):
"""Return GitHub issues fixed in the provided date window"""
if self._issues:
return self._issues
issue_map = {}
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
for p in self._pulls:
# check for issues in this pr
issues_for_this_pr = {}
with io.StringIO(p.body) as buf:
for line in buf.readlines():
line = line.strip()
match = re.search(r"^Fixes[:]?\s*#([1-9][0-9]*).*", line)
if not match:
match = re.search(
rf"^Fixes[:]?\s*https://github\.com/{self._repo.organization.login}/{self._repo.name}/issues/([1-9][0-9]*).*", line)
if not match:
continue
issue_number = int(match[1])
issue = self._repo.get_issue(issue_number)
if not issue:
if p.number not in self._pulls_with_invalid_issues:
self._pulls_with_invalid_issues[p.number] = [
issue_number]
else:
self._pulls_with_invalid_issues[p.number].append(
issue_number)
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} references invalid issue number {issue_number}')
continue
issues_for_this_pr[issue_number] = issue
# report prs missing issues later
if len(issues_for_this_pr) == 0:
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} does not have an associated issue')
self._pulls_without_an_issue.append(p)
continue
# FIXME: once we have upgraded to Python 3.9+, use "issue_map | issues_for_this_pr"
issue_map = {**issue_map, **issues_for_this_pr}
issues = list(issue_map.values())
# paginated_list.sort() does not exist
issues = sorted(issues, key=lambda x: x.number)
self._issues = issues
return self._issues
def get_pulls_without_issues(self):
if self._pulls_without_an_issue:
return self._pulls_without_an_issue
self.get_issues()
return self._pulls_without_an_issue
def get_pulls_with_invalid_issues(self):
if self._pulls_with_invalid_issues:
return self._pulls_with_invalid_issues
self.get_issues()
return self._pulls_with_invalid_issues
def main():
args = parse_args()
if not args:
return os.EX_DATAERR
try:
gh = Github(args.token)
except Exception:
logging.error('failed to authenticate with GitHub')
return os.EX_DATAERR
try:
repo = gh.get_repo(args.org + '/' + args.repo)
except Exception:
logging.error('failed to obtain Github repository')
return os.EX_DATAERR
bp = None
if args.includes:
bp = Backport.by_included_prs(repo, args.base, set(args.includes))
else:
bp = Backport.by_date_range(repo, args.base,
args.start, args.end, set(args.excludes))
if not bp:
return os.EX_DATAERR
pulls_with_invalid_issues = bp.get_pulls_with_invalid_issues()
if pulls_with_invalid_issues:
logging.error('The following PRs link to invalid issues:')
for (p, lst) in pulls_with_invalid_issues:
logging.error(
f'\nhttps://github.com/{repo.organization.login}/{repo.name}/pull/{p.number}: {lst}')
return os.EX_DATAERR
pulls_without_issues = bp.get_pulls_without_issues()
if pulls_without_issues:
logging.error(
'Please ensure the body of each PR to a release branch contains "Fixes #1234"')
logging.error('The following PRs are lacking associated issues:')
for p in pulls_without_issues:
logging.error(
f'https://github.com/{repo.organization.login}/{repo.name}/pull/{p.number}')
return os.EX_DATAERR
if args.json:
bp.print_json()
else:
bp.print()
return os.EX_OK
if __name__ == '__main__':
sys.exit(main())
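The "Fixes" matching in `get_issues()` above can be exercised standalone; both the short `#1234` form and the full issue-URL form are accepted. A minimal sketch (the `ORG`/`REPO` values are hard-coded here purely for illustration):

```python
import re

ORG, REPO = "zephyrproject-rtos", "zephyr"


def fixed_issue(line):
    """Return the referenced issue number, or None (mirrors the loop in get_issues())."""
    line = line.strip()
    m = re.search(r"^Fixes[:]?\s*#([1-9][0-9]*).*", line)
    if not m:
        # fall back to the full-URL form of the reference
        m = re.search(
            rf"^Fixes[:]?\s*https://github\.com/{ORG}/{REPO}/issues/([1-9][0-9]*).*",
            line)
    return int(m[1]) if m else None


assert fixed_issue("Fixes: #1234") == 1234
assert fixed_issue("Fixes https://github.com/zephyrproject-rtos/zephyr/issues/42") == 42
assert fixed_issue("Related to #99") is None
print("ok")
```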


@@ -147,10 +147,10 @@ def parse_args():
def main():
parse_args()
token = os.environ.get('GH_TOKEN', None)
token = os.environ.get('GITHUB_TOKEN', None)
if not token:
sys.exit("""Github token not set in environment,
set the env. variable GH_TOKEN please and retry.""")
set the env. variable GITHUB_TOKEN please and retry.""")
i = Issues(args.org, args.repo, token)
@@ -213,5 +213,6 @@ set the env. variable GH_TOKEN please and retry.""")
f.write("* :github:`{}` - {}\n".format(
item['number'], item['title']))
if __name__ == '__main__':
main()


@@ -1373,7 +1373,7 @@ def main():
)
logger.info("Run completed")
if results.failed or (tplan.warnings and options.warnings_as_errors):
if results.failed or results.error or (tplan.warnings and options.warnings_as_errors):
sys.exit(1)


@@ -19,6 +19,9 @@ config MCUBOOT_GENERATE_CONFIRMED_IMAGE
config ROM_START_OFFSET
default 0x20
config HAS_DYNAMIC_DEVICE_HANDLES
default y
endif
config SOC


@@ -17,6 +17,9 @@ if BOOTLOADER_MCUBOOT
config ROM_START_OFFSET
default 0x20
config HAS_DYNAMIC_DEVICE_HANDLES
default y
endif
config SOC


@@ -241,12 +241,12 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
sr.cpu = cpu_num;
sr.fn = fn;
sr.stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
sr.stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
sr.arg = arg;
sr.vecbase = vb;
sr.alive = &alive_flag;
appcpu_top = Z_THREAD_STACK_BUFFER(stack) + sz;
appcpu_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
start_rec = &sr;


@@ -115,7 +115,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
start_rec.fn = fn;
start_rec.arg = arg;
z_mp_stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
z_mp_stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
soc_start_core(cpu_num);
}


@@ -455,6 +455,8 @@ static void reset(struct net_buf *buf, struct net_buf **evt)
k_poll_signal_raise(hbuf_signal, 0x0);
}
#endif
hci_recv_fifo_reset();
}
#if defined(CONFIG_BT_HCI_ACL_FLOW_CONTROL)


@@ -201,6 +201,21 @@ isoal_status_t sink_sdu_write_hci(void *dbuf,
}
#endif
void hci_recv_fifo_reset(void)
{
/* NOTE: As there is no equivalent API to wake up a waiting thread and
* reinitialize the queue so it is empty, we use the cancel wait and
* initialize the queue. As the Tx thread and Rx thread are co-operative
* we should be relatively safe doing the below.
* Added k_sched_lock and k_sched_unlock, as native_posix seems to
* swap to waiting thread on call to k_fifo_cancel_wait.
*/
k_sched_lock();
k_fifo_cancel_wait(&recv_fifo);
k_fifo_init(&recv_fifo);
k_sched_unlock();
}
static struct net_buf *process_prio_evt(struct node_rx_pdu *node_rx,
uint8_t *evt_flags)
{
@@ -587,7 +602,7 @@ static void recv_thread(void *p1, void *p2, void *p3)
int err;
err = k_poll(events, 2, K_FOREVER);
LL_ASSERT(err == 0);
LL_ASSERT(err == 0 || err == -EINTR);
if (events[0].state == K_POLL_STATE_SIGNALED) {
events[0].signal->signaled = 0U;
} else if (events[1].state ==


@@ -32,6 +32,7 @@ extern atomic_t hci_state_mask;
void hci_init(struct k_poll_signal *signal_host_buf);
void hci_recv_fifo_reset(void);
struct net_buf *hci_cmd_handle(struct net_buf *cmd, void **node_rx);
void hci_evt_encode(struct node_rx_pdu *node_rx, struct net_buf *buf);
uint8_t hci_get_class(struct node_rx_pdu *node_rx);


@@ -406,8 +406,7 @@ struct pdu_adv *lll_adv_pdu_alloc_pdu_adv(void)
p = MFIFO_DEQUEUE_PEEK(pdu_free);
if (p) {
err = k_sem_take(&sem_pdu_free, K_NO_WAIT);
LL_ASSERT(!err);
k_sem_reset(&sem_pdu_free);
MFIFO_DEQUEUE(pdu_free);
@@ -428,6 +427,8 @@ struct pdu_adv *lll_adv_pdu_alloc_pdu_adv(void)
err = k_sem_take(&sem_pdu_free, K_FOREVER);
LL_ASSERT(!err);
k_sem_reset(&sem_pdu_free);
p = MFIFO_DEQUEUE(pdu_free);
LL_ASSERT(p);
@@ -795,11 +796,10 @@ static void pdu_free_sem_give(void)
{
static memq_link_t link;
static struct mayfly mfy = {0, 0, &link, NULL, mfy_pdu_free_sem_give};
uint32_t retval;
retval = mayfly_enqueue(TICKER_USER_ID_LLL, TICKER_USER_ID_ULL_HIGH, 0,
&mfy);
LL_ASSERT(!retval);
/* Ignore mayfly_enqueue failure on repeated enqueue call */
(void)mayfly_enqueue(TICKER_USER_ID_LLL, TICKER_USER_ID_ULL_HIGH, 0,
&mfy);
}
#else /* !CONFIG_BT_CTLR_ZLI */


@@ -987,15 +987,6 @@ uint8_t ll_adv_enable(uint8_t enable)
#endif /* CONFIG_BT_CTLR_ADV_EXT */
#endif /* CONFIG_BT_CTLR_PHY */
#endif
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
#if defined(CONFIG_BT_CTLR_PHY) && defined(CONFIG_BT_CTLR_ADV_EXT)
const uint8_t phy = lll->phy_s;
#else
const uint8_t phy = PHY_1M;
#endif
ull_dle_init(conn, phy);
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#if defined(CONFIG_BT_CTLR_PHY)
@@ -1133,7 +1124,7 @@ uint8_t ll_adv_enable(uint8_t enable)
conn->tx_head = conn->tx_ctrl = conn->tx_ctrl_last =
conn->tx_data = conn->tx_data_last = 0;
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#else /* !CONFIG_BT_LL_SW_LLCP_LEGACY */
/* Re-initialize the control procedure data structures */
ull_llcp_init(conn);
@@ -1152,9 +1143,22 @@ uint8_t ll_adv_enable(uint8_t enable)
conn->pause_rx_data = 0U;
#endif /* CONFIG_BT_CTLR_LE_ENC */
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
uint8_t phy_in_use = PHY_1M;
#if defined(CONFIG_BT_CTLR_ADV_EXT)
if (pdu_adv->type == PDU_ADV_TYPE_EXT_IND) {
phy_in_use = lll->phy_s;
}
#endif /* CONFIG_BT_CTLR_ADV_EXT */
ull_dle_init(conn, phy_in_use);
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
/* Re-initialize the Tx Q */
ull_tx_q_init(&conn->tx_q);
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#endif /* !CONFIG_BT_LL_SW_LLCP_LEGACY */
/* NOTE: using same link as supplied for terminate ind */
adv->link_cc_free = link;


@@ -211,6 +211,16 @@ uint8_t ll_create_connection(uint16_t scan_interval, uint16_t scan_window,
conn_lll->nesn = 0;
conn_lll->empty = 0;
#if defined(CONFIG_BT_CTLR_PHY)
/* Use the default 1M PHY, extended connection initiation in LLL will
* update this with the correct PHY.
*/
conn_lll->phy_tx = PHY_1M;
conn_lll->phy_flags = 0;
conn_lll->phy_tx_time = PHY_1M;
conn_lll->phy_rx = PHY_1M;
#endif /* CONFIG_BT_CTLR_PHY */
#if defined(CONFIG_BT_LL_SW_LLCP_LEGACY)
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
conn_lll->max_tx_octets = PDU_DC_PAYLOAD_SIZE_MIN;
@@ -230,16 +240,6 @@ uint8_t ll_create_connection(uint16_t scan_interval, uint16_t scan_window,
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#if defined(CONFIG_BT_CTLR_PHY)
/* Use the default 1M PHY, extended connection initiation in LLL will
* update this with the correct PHY.
*/
conn_lll->phy_tx = PHY_1M;
conn_lll->phy_flags = 0;
conn_lll->phy_tx_time = PHY_1M;
conn_lll->phy_rx = PHY_1M;
#endif /* CONFIG_BT_CTLR_PHY */
#if defined(CONFIG_BT_CTLR_CONN_RSSI)
conn_lll->rssi_latest = BT_HCI_LE_RSSI_NOT_AVAILABLE;
#if defined(CONFIG_BT_CTLR_CONN_RSSI_EVENT)


@@ -1452,32 +1452,33 @@ void ull_conn_done(struct node_rx_event_done *done)
}
#endif /* CONFIG_BT_CTLR_LE_ENC */
/* Peripheral received terminate ind or
/* Legacy LLCP:
* Peripheral received terminate ind or
* Central received ack for the transmitted terminate ind or
* Central transmitted ack for the received terminate ind or
* there has been MIC failure
* Refactored LLCP:
* reason_final is set exactly under the above conditions
*/
reason_final = conn->llcp_terminate.reason_final;
if (reason_final && (
#if defined(CONFIG_BT_PERIPHERAL)
lll->role ||
#else /* CONFIG_BT_PERIPHERAL */
0 ||
#endif /* CONFIG_BT_PERIPHERAL */
#if defined(CONFIG_BT_CENTRAL)
#if defined(CONFIG_BT_LL_SW_LLCP_LEGACY)
(((conn->llcp_terminate.req -
conn->llcp_terminate.ack) & 0xFF) ==
TERM_ACKED) ||
conn->central.terminate_ack ||
(reason_final == BT_HCI_ERR_TERM_DUE_TO_MIC_FAIL)
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
1
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
#else /* CONFIG_BT_CENTRAL */
1
#if defined(CONFIG_BT_CENTRAL)
(((conn->llcp_terminate.req -
conn->llcp_terminate.ack) & 0xFF) ==
TERM_ACKED) ||
conn->central.terminate_ack ||
(reason_final == BT_HCI_ERR_TERM_DUE_TO_MIC_FAIL) ||
#endif /* CONFIG_BT_CENTRAL */
)) {
#if defined(CONFIG_BT_PERIPHERAL)
lll->role
#else /* CONFIG_BT_PERIPHERAL */
false
#endif /* CONFIG_BT_PERIPHERAL */
#else /* defined(CONFIG_BT_LL_SW_LLCP_LEGACY) */
true
#endif /* !defined(CONFIG_BT_LL_SW_LLCP_LEGACY) */
)) {
conn_cleanup(conn, reason_final);
return;
@@ -1986,7 +1987,7 @@ void ull_conn_tx_ack(uint16_t handle, memq_link_t *link, struct node_tx *tx)
#if defined(CONFIG_BT_LL_SW_LLCP_LEGACY)
mem_release(tx, &mem_conn_tx_ctrl.free);
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
struct ll_conn *conn = ll_conn_get(handle);
struct ll_conn *conn = ll_connected_get(handle);
ull_cp_release_tx(conn, tx);
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
@@ -2463,6 +2464,11 @@ static void conn_cleanup_finalize(struct ll_conn *conn)
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
ARG_UNUSED(rx);
ull_cp_state_set(conn, ULL_CP_DISCONNECTED);
/* Update tx buffer queue handling */
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
ull_cp_update_tx_buffer_queue(conn);
#endif /* LLCP_TX_CTRL_BUF_QUEUE_ENABLE */
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
/* flush demux-ed Tx buffer still in ULL context */
@@ -2539,6 +2545,8 @@ static void tx_ull_flush(struct ll_conn *conn)
#else /* CONFIG_BT_LL_SW_LLCP_LEGACY */
struct node_tx *tx;
ull_tx_q_resume_data(&conn->tx_q);
tx = tx_ull_dequeue(conn, NULL);
while (tx) {
memq_link_t *link;
@@ -8123,6 +8131,11 @@ void ull_dle_init(struct ll_conn *conn, uint8_t phy)
conn->lll.dle.remote.max_rx_time = max_time_min;
#endif /* CONFIG_BT_CTLR_PHY */
/*
* Per Bluetooth Core Specification version 5.3, Vol. 6,
* Part B, Section 4.5.10, we can call ull_dle_update_eff
* for initialisation
*/
ull_dle_update_eff(conn);
/* Check whether the controller should perform a data length update after
@@ -8166,3 +8179,8 @@ uint8_t ull_conn_lll_phy_active(struct ll_conn *conn, uint8_t phys)
}
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
uint8_t ull_is_lll_tx_queue_empty(struct ll_conn *conn)
{
return (memq_peek(conn->lll.memq_tx.head, conn->lll.memq_tx.tail, NULL) == NULL);
}


@@ -125,3 +125,8 @@ void ull_conn_pause_rx_data(struct ll_conn *conn);
void ull_conn_resume_rx_data(struct ll_conn *conn);
#endif /* CONFIG_BT_LL_SW_LLCP_LEGACY */
/**
* @brief Check if the lower link layer transmit queue is empty
*/
uint8_t ull_is_lll_tx_queue_empty(struct ll_conn *conn);


@@ -462,12 +462,7 @@ struct llcp_struct {
} cte_rsp;
#endif /* CONFIG_BT_CTLR_DF_CONN_CTE_RSP */
#if (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0) &&\
(CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM <\
CONFIG_BT_CTLR_LLCP_TX_PER_CONN_TX_CTRL_BUF_NUM_MAX)
uint8_t tx_buffer_alloc;
#endif /* (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0) */
uint8_t tx_q_pause_data_mask;
}; /* struct llcp_struct */


@@ -92,6 +92,18 @@ void llcp_proc_ctx_release(struct proc_ctx *ctx)
}
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
/*
* @brief Update 'global' tx buffer allowance
*/
void ull_cp_update_tx_buffer_queue(struct ll_conn *conn)
{
if (conn->llcp.tx_buffer_alloc > CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM) {
common_tx_buffer_alloc -= (conn->llcp.tx_buffer_alloc -
CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM);
}
}
/*
* @brief Check for per conn pre-allocated tx buffer allowance
* @return true if buffer is available
@@ -159,8 +171,8 @@ void llcp_tx_alloc_unpeek(struct proc_ctx *ctx)
*/
struct node_tx *llcp_tx_alloc(struct ll_conn *conn, struct proc_ctx *ctx)
{
#if (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0)
conn->llcp.tx_buffer_alloc++;
#if (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0)
if (conn->llcp.tx_buffer_alloc > CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM) {
common_tx_buffer_alloc++;
/* global buffer allocated, so we're at the head and should just pop head */
@@ -239,16 +251,21 @@ void llcp_tx_enqueue(struct ll_conn *conn, struct node_tx *tx)
void llcp_tx_pause_data(struct ll_conn *conn, enum llcp_tx_q_pause_data_mask pause_mask)
{
if ((conn->llcp.tx_q_pause_data_mask & pause_mask) == 0) {
conn->llcp.tx_q_pause_data_mask |= pause_mask;
/* Only pause the TX Q if we have not already paused it (by any procedure) */
if (conn->llcp.tx_q_pause_data_mask == 0) {
ull_tx_q_pause_data(&conn->tx_q);
}
/* Add the procedure that paused data */
conn->llcp.tx_q_pause_data_mask |= pause_mask;
}
void llcp_tx_resume_data(struct ll_conn *conn, enum llcp_tx_q_pause_data_mask resume_mask)
{
/* Remove the procedure that paused data */
conn->llcp.tx_q_pause_data_mask &= ~resume_mask;
/* Only resume the TX Q if we have removed all procedures that paused data */
if (conn->llcp.tx_q_pause_data_mask == 0) {
ull_tx_q_resume_data(&conn->tx_q);
}
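The pause/resume pattern above behaves like a per-procedure reference count kept in a bitmask: the queue is actually paused only when the first procedure sets a bit, and resumed only once every procedure that paused it has cleared its bit again. A toy model of that behaviour (hypothetical names, not the Zephyr API):

```python
class TxQueue:
    """Toy model of the mask-based pause/resume in llcp_tx_pause_data()."""

    def __init__(self):
        self.pause_mask = 0
        self.paused = False

    def pause_data(self, mask_bit):
        if self.pause_mask & mask_bit:
            return  # this procedure already holds a pause
        if self.pause_mask == 0:
            self.paused = True  # first pauser actually pauses the queue
        self.pause_mask |= mask_bit

    def resume_data(self, mask_bit):
        # Remove this procedure; resume only once no procedure holds a pause
        self.pause_mask &= ~mask_bit
        if self.pause_mask == 0:
            self.paused = False


q = TxQueue()
ENCRYPTION, TERMINATE = 0x01, 0x08
q.pause_data(ENCRYPTION)
q.pause_data(TERMINATE)
q.resume_data(ENCRYPTION)
assert q.paused          # terminate still holds a pause
q.resume_data(TERMINATE)
assert not q.paused
print("ok")
```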
@@ -494,9 +511,7 @@ void ull_llcp_init(struct ll_conn *conn)
#endif /* CONFIG_BT_CTLR_DF_CONN_CTE_RSP */
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
#if (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0)
conn->llcp.tx_buffer_alloc = 0;
#endif /* (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0) */
#endif /* LLCP_TX_CTRL_BUF_QUEUE_ENABLE */
conn->llcp.tx_q_pause_data_mask = 0;
@@ -506,15 +521,13 @@ void ull_llcp_init(struct ll_conn *conn)
void ull_cp_release_tx(struct ll_conn *conn, struct node_tx *tx)
{
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
#if (CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0)
if (conn->llcp.tx_buffer_alloc > CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM) {
common_tx_buffer_alloc--;
if (conn) {
LL_ASSERT(conn->llcp.tx_buffer_alloc > 0);
if (conn->llcp.tx_buffer_alloc > CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM) {
common_tx_buffer_alloc--;
}
conn->llcp.tx_buffer_alloc--;
}
conn->llcp.tx_buffer_alloc--;
#else /* CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0 */
ARG_UNUSED(conn);
common_tx_buffer_alloc--;
#endif /* CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM > 0 */
#else /* LLCP_TX_CTRL_BUF_QUEUE_ENABLE */
ARG_UNUSED(conn);
#endif /* LLCP_TX_CTRL_BUF_QUEUE_ENABLE */
@@ -1616,6 +1629,13 @@ uint16_t ctx_buffers_free(void)
return local_ctx_buffers_free() + remote_ctx_buffers_free();
}
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
uint8_t common_tx_buffer_alloc_count(void)
{
return common_tx_buffer_alloc;
}
#endif /* LLCP_TX_CTRL_BUF_QUEUE_ENABLE */
void test_int_mem_proc_ctx(void)
{
struct proc_ctx *ctx1;


@@ -25,6 +25,11 @@ void ull_llcp_init(struct ll_conn *conn);
*/
void ull_cp_state_set(struct ll_conn *conn, uint8_t state);
/*
* @brief Update 'global' tx buffer allowance
*/
void ull_cp_update_tx_buffer_queue(struct ll_conn *conn);
/**
*
*/


@@ -77,6 +77,7 @@ enum {
enum {
RP_COMMON_STATE_IDLE,
RP_COMMON_STATE_WAIT_RX,
RP_COMMON_STATE_POSTPONE_TERMINATE,
RP_COMMON_STATE_WAIT_TX,
RP_COMMON_STATE_WAIT_TX_ACK,
RP_COMMON_STATE_WAIT_NTF,
@@ -93,6 +94,7 @@ enum {
RP_COMMON_EVT_REQUEST,
};
static void lp_comm_ntf(struct ll_conn *conn, struct proc_ctx *ctx);
static void lp_comm_terminate_invalid_pdu(struct ll_conn *conn, struct proc_ctx *ctx);
@@ -781,8 +783,17 @@ void llcp_lp_comm_init_proc(struct proc_ctx *ctx)
void llcp_lp_comm_run(struct ll_conn *conn, struct proc_ctx *ctx, void *param)
{
lp_comm_execute_fsm(conn, ctx, LP_COMMON_EVT_RUN, param);
}
static void rp_comm_terminate(struct ll_conn *conn, struct proc_ctx *ctx)
{
llcp_rr_complete(conn);
ctx->state = RP_COMMON_STATE_IDLE;
/* Mark the connection for termination */
conn->llcp_terminate.reason_final = ctx->data.term.error_code;
}
/*
* LLCP Remote Procedure Common FSM
*/
@@ -821,6 +832,8 @@ static void rp_comm_rx_decode(struct ll_conn *conn, struct proc_ctx *ctx, struct
break;
case PDU_DATA_LLCTRL_TYPE_TERMINATE_IND:
llcp_pdu_decode_terminate_ind(ctx, pdu);
/* Make sure no data is tx'ed after RX of terminate ind */
llcp_tx_pause_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_TERMINATE);
break;
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
case PDU_DATA_LLCTRL_TYPE_LENGTH_REQ:
@@ -1022,8 +1035,10 @@ static void rp_comm_send_rsp(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t
} else {
/* Invalid behaviour
* A procedure already sent a LL_VERSION_IND and received a LL_VERSION_IND.
* For now we chose to ignore the 'out of order' PDU
* Ignore and complete the procedure.
*/
llcp_rr_complete(conn);
ctx->state = RP_COMMON_STATE_IDLE;
}
break;
@@ -1051,12 +1066,19 @@ static void rp_comm_send_rsp(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t
break;
#endif /* CONFIG_BT_CTLR_MIN_USED_CHAN && CONFIG_BT_CENTRAL */
case PROC_TERMINATE:
/* No response */
llcp_rr_complete(conn);
ctx->state = RP_COMMON_STATE_IDLE;
/* Mark the connection for termination */
conn->llcp_terminate.reason_final = ctx->data.term.error_code;
#if defined(CONFIG_BT_CENTRAL)
if (conn->lll.role == BT_HCI_ROLE_CENTRAL) {
/* No response, but postpone terminate until next event
* to ensure acking the reception of TERMINATE_IND
*/
ctx->state = RP_COMMON_STATE_POSTPONE_TERMINATE;
break;
}
#endif
#if defined(CONFIG_BT_PERIPHERAL)
/* Terminate right away */
rp_comm_terminate(conn, ctx);
#endif
break;
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
case PROC_DATA_LENGTH_UPDATE:
@@ -1102,6 +1124,26 @@ static void rp_comm_st_wait_rx(struct ll_conn *conn, struct proc_ctx *ctx, uint8
}
}
static void rp_comm_st_postpone_terminate(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t evt,
void *param)
{
switch (evt) {
case RP_COMMON_EVT_RUN:
LL_ASSERT(ctx->proc == PROC_TERMINATE);
/* Note: now we terminate, mimicking legacy LLCP behaviour
* A check should be added to ensure that the ack of the terminate_ind was
* indeed tx'ed and not scheduled out/postponed by LLL
*/
rp_comm_terminate(conn, ctx);
break;
default:
/* Ignore other evts */
break;
}
}
static void rp_comm_st_wait_tx(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t evt, void *param)
{
switch (evt) {
@@ -1181,6 +1223,9 @@ static void rp_comm_execute_fsm(struct ll_conn *conn, struct proc_ctx *ctx, uint
case RP_COMMON_STATE_WAIT_RX:
rp_comm_st_wait_rx(conn, ctx, evt, param);
break;
case RP_COMMON_STATE_POSTPONE_TERMINATE:
rp_comm_st_postpone_terminate(conn, ctx, evt, param);
break;
case RP_COMMON_STATE_WAIT_TX:
rp_comm_st_wait_tx(conn, ctx, evt, param);
break;


@@ -33,6 +33,7 @@ enum llcp_tx_q_pause_data_mask {
LLCP_TX_QUEUE_PAUSE_DATA_ENCRYPTION = 0x01,
LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE = 0x02,
LLCP_TX_QUEUE_PAUSE_DATA_DATA_LENGTH = 0x04,
LLCP_TX_QUEUE_PAUSE_DATA_TERMINATE = 0x08,
};
#if ((CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM <\
@@ -642,4 +643,5 @@ bool lr_is_idle(struct ll_conn *conn);
bool rr_is_disconnected(struct ll_conn *conn);
bool rr_is_idle(struct ll_conn *conn);
uint16_t ctx_buffers_free(void);
uint8_t common_tx_buffer_alloc_count(void);
#endif


@@ -175,10 +175,25 @@ static void pu_reset_timing_restrict(struct ll_conn *conn)
}
#if defined(CONFIG_BT_PERIPHERAL)
static inline bool phy_valid(uint8_t phy)
{
/* This is equivalent to:
 * at most one bit set, and no RFU bit set
 */
return (phy < 5 && phy != 3);
}
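The equivalence claimed in the comment above can be checked exhaustively for every uint8_t value. A quick sketch, assuming PHY bits 0-2 are 1M/2M/Coded and bits 3-7 are RFU:

```python
def phy_valid(phy):
    # mirror of the C expression: (phy < 5 && phy != 3)
    return phy < 5 and phy != 3


for v in range(256):
    # "valid" means: at most one bit set, and no RFU bit (bit 3..7) set
    spec = bin(v).count("1") <= 1 and v < 8
    assert phy_valid(v) == spec

print("equivalence holds for all 256 values")
```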
static uint8_t pu_check_update_ind(struct ll_conn *conn, struct proc_ctx *ctx)
{
uint8_t ret = 0;
/* Check if either phy selected is invalid */
if (!phy_valid(ctx->data.pu.c_to_p_phy) || !phy_valid(ctx->data.pu.p_to_c_phy)) {
/* more than one or any rfu bit selected in either phy */
ctx->data.pu.error = BT_HCI_ERR_UNSUPP_FEATURE_PARAM_VAL;
ret = 1;
}
/* Both tx and rx PHY unchanged */
if (!((ctx->data.pu.c_to_p_phy | ctx->data.pu.p_to_c_phy) & 0x07)) {
/* if no phy changes, quit procedure, and possibly signal host */
@@ -199,29 +214,41 @@ static uint8_t pu_check_update_ind(struct ll_conn *conn, struct proc_ctx *ctx)
static uint8_t pu_apply_phy_update(struct ll_conn *conn, struct proc_ctx *ctx)
{
struct lll_conn *lll = &conn->lll;
uint8_t phy_bitmask = PHY_1M;
const uint8_t old_tx = lll->phy_tx;
const uint8_t old_rx = lll->phy_rx;
#if defined(CONFIG_BT_CTLR_PHY_2M)
phy_bitmask |= PHY_2M;
#endif
#if defined(CONFIG_BT_CTLR_PHY_CODED)
phy_bitmask |= PHY_CODED;
#endif
const uint8_t p_to_c_phy = ctx->data.pu.p_to_c_phy & phy_bitmask;
const uint8_t c_to_p_phy = ctx->data.pu.c_to_p_phy & phy_bitmask;
if (0) {
#if defined(CONFIG_BT_PERIPHERAL)
} else if (lll->role == BT_HCI_ROLE_PERIPHERAL) {
if (ctx->data.pu.p_to_c_phy) {
lll->phy_tx = ctx->data.pu.p_to_c_phy;
if (p_to_c_phy) {
lll->phy_tx = p_to_c_phy;
}
if (ctx->data.pu.c_to_p_phy) {
lll->phy_rx = ctx->data.pu.c_to_p_phy;
if (c_to_p_phy) {
lll->phy_rx = c_to_p_phy;
}
#endif /* CONFIG_BT_PERIPHERAL */
#if defined(CONFIG_BT_CENTRAL)
} else if (lll->role == BT_HCI_ROLE_CENTRAL) {
if (ctx->data.pu.p_to_c_phy) {
lll->phy_rx = ctx->data.pu.p_to_c_phy;
if (p_to_c_phy) {
lll->phy_rx = p_to_c_phy;
}
if (ctx->data.pu.c_to_p_phy) {
lll->phy_tx = ctx->data.pu.c_to_p_phy;
if (c_to_p_phy) {
lll->phy_tx = c_to_p_phy;
}
#endif /* CONFIG_BT_CENTRAL */
}
return (ctx->data.pu.c_to_p_phy || ctx->data.pu.p_to_c_phy);
return ((old_tx != lll->phy_tx) || (old_rx != lll->phy_rx));
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
@@ -259,8 +286,8 @@ static uint8_t pu_update_eff_times(struct ll_conn *conn, struct proc_ctx *ctx)
pu_calc_eff_time(lll->dle.eff.max_rx_octets, lll->phy_rx, max_rx_time);
}
if ((eff_tx_time != lll->dle.eff.max_tx_time) ||
(eff_rx_time != lll->dle.eff.max_rx_time)) {
if ((eff_tx_time > lll->dle.eff.max_tx_time) ||
(eff_rx_time > lll->dle.eff.max_rx_time)) {
lll->dle.eff.max_tx_time = eff_tx_time;
lll->dle.eff.max_rx_time = eff_rx_time;
return 1U;
@@ -313,8 +340,9 @@ static void pu_prepare_instant(struct ll_conn *conn, struct proc_ctx *ctx)
/* Set instance only in case there is actual PHY change. Otherwise the instant should be
* set to 0.
*/
if (ctx->data.pu.c_to_p_phy != 0 || ctx->data.pu.p_to_c_phy != 0) {
ctx->data.pu.instant = ull_conn_event_counter(conn) + PHY_UPDATE_INSTANT_DELTA;
if (ctx->data.pu.c_to_p_phy != 0 || ctx->data.pu.p_to_c_phy != 0) {
ctx->data.pu.instant = ull_conn_event_counter(conn) + conn->lll.latency +
PHY_UPDATE_INSTANT_DELTA;
} else {
ctx->data.pu.instant = 0;
}
@@ -563,13 +591,14 @@ static void lp_pu_st_wait_tx_ack_phy_req(struct ll_conn *conn, struct proc_ctx *
conn, pu_select_phy_timing_restrict(conn, ctx->data.pu.tx));
ctx->state = LP_PU_STATE_WAIT_RX_PHY_UPDATE_IND;
ctx->rx_opcode = PDU_DATA_LLCTRL_TYPE_PHY_UPD_IND;
llcp_tx_resume_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
break;
#endif /* CONFIG_BT_PERIPHERAL */
default:
/* Unknown role */
LL_ASSERT(0);
}
llcp_tx_resume_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
break;
default:
/* Ignore other evts */
@@ -650,9 +679,14 @@ static void lp_pu_st_wait_rx_phy_update_ind(struct ll_conn *conn, struct proc_ct
ctx->state = LP_PU_STATE_WAIT_INSTANT;
} else {
llcp_rr_set_incompat(conn, INCOMPAT_NO_COLLISION);
if (ctx->data.pu.error != BT_HCI_ERR_SUCCESS) {
/* Mark the connection for termination */
conn->llcp_terminate.reason_final = ctx->data.pu.error;
}
ctx->data.pu.ntf_pu = ctx->data.pu.host_initiated;
lp_pu_complete(conn, ctx, evt, param);
}
llcp_tx_resume_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
break;
case LP_PU_EVT_REJECT:
llcp_rr_set_incompat(conn, INCOMPAT_NO_COLLISION);
@@ -660,6 +694,7 @@ static void lp_pu_st_wait_rx_phy_update_ind(struct ll_conn *conn, struct proc_ct
ctx->data.pu.error = ctx->reject_ext_ind.error_code;
ctx->data.pu.ntf_pu = 1;
lp_pu_complete(conn, ctx, evt, param);
llcp_tx_resume_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
default:
/* Ignore other evts */
break;
@@ -887,7 +922,8 @@ static void rp_pu_send_phy_update_ind(struct ll_conn *conn, struct proc_ctx *ctx
void *param)
{
if (llcp_rr_ispaused(conn) || !llcp_tx_alloc_peek(conn, ctx) ||
(llcp_rr_get_paused_cmd(conn) == PROC_PHY_UPDATE)) {
(llcp_rr_get_paused_cmd(conn) == PROC_PHY_UPDATE) ||
!ull_is_lll_tx_queue_empty(conn)) {
ctx->state = RP_PU_STATE_WAIT_TX_PHY_UPDATE_IND;
} else {
llcp_rr_set_paused_cmd(conn, PROC_CTE_REQ);
@@ -933,6 +969,7 @@ static void rp_pu_st_wait_rx_phy_req(struct ll_conn *conn, struct proc_ctx *ctx,
/* Combine with the 'Preferred' the phys in conn->phy_pref_?x */
pu_combine_phys(conn, ctx, conn->phy_pref_tx, conn->phy_pref_rx);
llcp_tx_pause_data(conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
switch (evt) {
case RP_PU_EVT_PHY_REQ:
switch (conn->lll.role) {
@@ -1045,8 +1082,12 @@ static void rp_pu_st_wait_rx_phy_update_ind(struct ll_conn *conn, struct proc_ct
*/
llcp_rr_prt_stop(conn);
ctx->state = LP_PU_STATE_WAIT_INSTANT;
ctx->state = RP_PU_STATE_WAIT_INSTANT;
} else {
if (ctx->data.pu.error == BT_HCI_ERR_INSTANT_PASSED) {
/* Mark the connection for termination */
conn->llcp_terminate.reason_final = BT_HCI_ERR_INSTANT_PASSED;
}
rp_pu_complete(conn, ctx, evt, param);
}
break;


@@ -2542,13 +2542,15 @@ static void le_read_buffer_size_complete(struct net_buf *buf)
BT_DBG("status 0x%02x", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!bt_dev.le.acl_mtu) {
uint16_t acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!acl_mtu || !rp->le_max_num) {
return;
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num,
bt_dev.le.acl_mtu);
bt_dev.le.acl_mtu = acl_mtu;
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->le_max_num, rp->le_max_num);
#endif /* CONFIG_BT_CONN */
@@ -2562,25 +2564,26 @@ static void read_buffer_size_v2_complete(struct net_buf *buf)
BT_DBG("status %u", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (!bt_dev.le.acl_mtu) {
return;
uint16_t acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (acl_mtu && rp->acl_max_num) {
bt_dev.le.acl_mtu = acl_mtu;
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num,
bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
#endif /* CONFIG_BT_CONN */
bt_dev.le.iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!bt_dev.le.iso_mtu) {
uint16_t iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!iso_mtu || !rp->iso_max_num) {
BT_ERR("ISO buffer size not set");
return;
}
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num,
bt_dev.le.iso_mtu);
bt_dev.le.iso_mtu = iso_mtu;
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num, bt_dev.le.iso_mtu);
k_sem_init(&bt_dev.le.iso_pkts, rp->iso_max_num, rp->iso_max_num);
#endif /* CONFIG_BT_ISO */
@@ -2850,6 +2853,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
read_buffer_size_v2_complete(rsp);
net_buf_unref(rsp);
@@ -2863,6 +2867,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
@@ -2906,7 +2911,9 @@ static int le_init(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
}


@@ -236,14 +236,16 @@ static bool start_http_client(void)
int protocol = IPPROTO_TCP;
#endif
(void)memset(&hints, 0, sizeof(hints));
if (IS_ENABLED(CONFIG_NET_IPV6)) {
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_STREAM;
} else if (IS_ENABLED(CONFIG_NET_IPV4)) {
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
}
hints.ai_socktype = SOCK_STREAM;
while (resolve_attempts--) {
ret = getaddrinfo(CONFIG_HAWKBIT_SERVER, CONFIG_HAWKBIT_PORT,
&hints, &addr);
@@ -409,6 +411,8 @@ static int hawkbit_find_cancelAction_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "cancelAction", href);
helper = strstr(href, "cancelAction/");
if (!helper) {
/* A badly formatted cancel base is a server error */
@@ -462,6 +466,8 @@ static int hawkbit_find_deployment_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "deploymentBase", href);
helper = strstr(href, "deploymentBase/");
if (!helper) {
/* A badly formatted deployment base is a server error */
@@ -570,17 +576,6 @@ static int hawkbit_parse_deployment(struct hawkbit_dep_res *res,
return 0;
}
static void hawkbit_dump_base(struct hawkbit_ctl_res *r)
{
LOG_DBG("config.polling.sleep=%s", log_strdup(r->config.polling.sleep));
LOG_DBG("_links.deploymentBase.href=%s",
log_strdup(r->_links.deploymentBase.href));
LOG_DBG("_links.configData.href=%s",
log_strdup(r->_links.configData.href));
LOG_DBG("_links.cancelAction.href=%s",
log_strdup(r->_links.cancelAction.href));
}
static void hawkbit_dump_deployment(struct hawkbit_dep_res *d)
{
struct hawkbit_dep_res_chunk *c = &d->deployment.chunks[0];
@@ -1089,9 +1084,9 @@ enum hawkbit_response hawkbit_probe(void)
if (hawkbit_results.base.config.polling.sleep) {
/* Update the sleep time. */
hawkbit_update_sleep(&hawkbit_results.base);
LOG_DBG("config.polling.sleep=%s", hawkbit_results.base.config.polling.sleep);
}
hawkbit_dump_base(&hawkbit_results.base);
if (hawkbit_results.base._links.cancelAction.href) {
ret = hawkbit_find_cancelAction_base(&hawkbit_results.base,
@@ -1118,6 +1113,8 @@ enum hawkbit_response hawkbit_probe(void)
}
if (hawkbit_results.base._links.configData.href) {
LOG_DBG("_links.%s.href=%s", "configData",
hawkbit_results.base._links.configData.href);
memset(hb_context.url_buffer, 0, sizeof(hb_context.url_buffer));
hb_context.dl.http_content_size = 0;
hb_context.url_buffer_size = URL_BUFFER_SIZE;


@@ -17,7 +17,7 @@ bool hawkbit_get_device_identity(char *id, int id_max_len)
}
memset(id, 0, id_max_len);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len - 1);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len);
return length > 0;
}
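The one-character change above tracks the behaviour change in bin2hex from commit f2affbd: the destination size passed in must now cover the two hex characters per byte plus the NUL terminator, so passing `id_max_len - 1` makes the call fail for a maximum-length ID. A toy model of the assumed new contract (a sketch, not the Zephyr implementation):

```python
def bin2hex(buf, hexlen):
    """Model: return hex-string length on success, 0 if the buffer is too small."""
    needed = 2 * len(buf) + 1  # two hex chars per byte, plus the NUL terminator
    if hexlen < needed:
        return 0
    return 2 * len(buf)


hwinfo_id = bytes(8)   # an 8-byte hardware ID
id_max_len = 17        # 16 hex chars + NUL

assert bin2hex(hwinfo_id, id_max_len - 1) == 0   # the old call now fails
assert bin2hex(hwinfo_id, id_max_len) == 16      # the fixed call succeeds
print("ok")
```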


@@ -97,8 +97,7 @@ static int bin2hex_str(uint8_t *bin, size_t bin_len, char *str, size_t str_buf_l
}
memset(str, 0, str_buf_len);
/* str_buf_len - 1 ensure space for \0 */
bin2hex(bin, bin_len, str, str_buf_len - 1);
bin2hex(bin, bin_len, str, str_buf_len);
return 0;
}


@@ -18,7 +18,7 @@ bool updatehub_get_device_identity(char *id, int id_max_len)
}
memset(id, 0, id_max_len);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len - 1);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len);
return length > 0;
}


@@ -1666,7 +1666,8 @@ static int context_sendto(struct net_context *context,
if (net_context_get_type(context) == SOCK_DGRAM) {
NET_ERR("Available payload buffer (%zu) is not enough for requested DGRAM (%zu)",
tmp_len, len);
return -ENOMEM;
ret = -ENOMEM;
goto fail;
}
len = tmp_len;
}


@@ -1890,10 +1890,7 @@ struct net_pkt *net_pkt_shallow_clone(struct net_pkt *pkt, k_timeout_t timeout)
clone_pkt->buffer = pkt->buffer;
buf = pkt->buffer;
while (buf) {
net_pkt_frag_ref(buf);
buf = buf->frags;
}
net_pkt_frag_ref(buf);
if (pkt->buffer) {
/* The link header pointers are only usable if there is


@@ -852,6 +852,7 @@ int net_route_mcast_forward_packet(struct net_pkt *pkt,
if (net_send_data(pkt_cpy) >= 0) {
++ret;
} else {
net_pkt_unref(pkt_cpy);
--err;
}
}


@@ -460,6 +460,11 @@ static ssize_t spair_write(void *obj, const void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {
@@ -646,6 +651,11 @@ static ssize_t spair_read(void *obj, void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {


@@ -86,7 +86,7 @@ bool __weak shell_mqtt_get_devid(char *id, int id_max_len)
}
(void)memset(id, 0, id_max_len);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len - 1);
length = bin2hex(hwinfo_id, (size_t)length, id, id_max_len);
return length > 0;
}


@@ -110,8 +110,6 @@ static inline void trigger_irq(int irq)
#define LOAPIC_ICR_IPI_TEST 0x00004000U
#endif
#define TRIGGER_IRQ_INT(vector) __asm__ volatile("int %0" : : "i" (vector) : "memory")
/*
* We can emulate the interrupt by sending the IPI to
* core itself by the LOAPIC for x86 platform.
@@ -133,6 +131,8 @@ static inline void trigger_irq(int irq)
*/
static inline void trigger_irq(int vector)
{
uint8_t i;
#ifdef CONFIG_X2APIC
x86_write_x2apic(LOAPIC_SELF_IPI, ((VECTOR_MASK & vector)));
#else
@@ -144,6 +144,14 @@ static inline void trigger_irq(int vector)
#endif
z_loapic_ipi(cpu_id, LOAPIC_ICR_IPI_TEST, vector);
#endif /* CONFIG_X2APIC */
/*
* add some nop operations here to cost some cycles to make sure
* the IPI interrupt is handled before we do our check.
*/
for (i = 0; i < 10; i++) {
arch_nop();
}
}
#elif defined(CONFIG_ARCH_POSIX)


@@ -196,6 +196,145 @@ void test_int_disconnect_rem(void)
ut_rx_q_is_empty();
}
#define SIZE 2
void test_int_pause_resume_data_path(void)
{
struct node_tx *node;
struct node_tx nodes[SIZE] = { 0 };
ull_cp_init();
ull_tx_q_init(&conn.tx_q);
/*** #1: Not paused when initialized ***/
/* Enqueue data nodes */
for (int i = 0U; i < SIZE; i++) {
ull_tx_q_enqueue_data(&conn.tx_q, &nodes[i]);
}
/* Dequeue data nodes */
for (int i = 0U; i < SIZE; i++) {
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, &nodes[i], NULL);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/*** #2: Single pause/resume ***/
/* Pause data path */
llcp_tx_pause_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Enqueue data nodes */
for (int i = 0U; i < SIZE; i++) {
ull_tx_q_enqueue_data(&conn.tx_q, &nodes[i]);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Resume data path */
llcp_tx_resume_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
/* Dequeue data nodes */
for (int i = 0U; i < SIZE; i++) {
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, &nodes[i], NULL);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/*** #3: Multiple pause/resume ***/
/* Pause data path */
llcp_tx_pause_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
llcp_tx_pause_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_DATA_LENGTH);
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Enqueue data nodes */
for (int i = 0U; i < SIZE; i++) {
ull_tx_q_enqueue_data(&conn.tx_q, &nodes[i]);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Resume data path */
llcp_tx_resume_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_DATA_LENGTH);
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Resume data path */
llcp_tx_resume_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
/* Dequeue data nodes */
for (int i = 0U; i < SIZE; i++) {
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, &nodes[i], NULL);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/*** #4: Asymmetric pause/resume ***/
/* Pause data path */
llcp_tx_pause_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Enqueue data nodes */
for (int i = 0U; i < SIZE; i++) {
ull_tx_q_enqueue_data(&conn.tx_q, &nodes[i]);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Resume data path (wrong mask) */
llcp_tx_resume_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_DATA_LENGTH);
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
/* Resume data path */
llcp_tx_resume_data(&conn, LLCP_TX_QUEUE_PAUSE_DATA_PHY_UPDATE);
/* Dequeue data nodes */
for (int i = 0U; i < SIZE; i++) {
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, &nodes[i], NULL);
}
/* Tx Queue shall be empty */
node = ull_tx_q_dequeue(&conn.tx_q);
zassert_equal_ptr(node, NULL, "");
}
void test_main(void)
{
ztest_test_suite(internal,
@@ -206,7 +345,8 @@ void test_main(void)
ztest_unit_test(test_int_local_pending_requests),
ztest_unit_test(test_int_remote_pending_requests),
ztest_unit_test(test_int_disconnect_loc),
ztest_unit_test(test_int_disconnect_rem));
ztest_unit_test(test_int_disconnect_rem),
ztest_unit_test(test_int_pause_resume_data_path));
ztest_test_suite(public, ztest_unit_test(test_api_init), ztest_unit_test(test_api_connect),
ztest_unit_test(test_api_disconnect));


@@ -154,10 +154,6 @@ void test_phy_update_central_loc_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 9,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = PHY_2M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct pdu_data_llctrl_reject_ext_ind reject_ext_ind = {
@@ -198,14 +194,14 @@ void test_phy_update_central_loc_collision(void)
/* TX Ack */
event_tx_ack(&conn, tx);
/* Check that data tx is not paused */
zassert_equal(conn.tx_q.pause_data, 0U, "Data tx is paused");
/* Check that data tx is paused */
zassert_equal(conn.tx_q.pause_data, 1U, "Data tx is paused");
/* Done */
event_done(&conn);
/* Check that data tx is not paused */
zassert_equal(conn.tx_q.pause_data, 0U, "Data tx is paused");
zassert_equal(conn.tx_q.pause_data, 1U, "Data tx is not paused");
/* Release Tx */
ull_cp_release_tx(&conn, tx);
@@ -306,7 +302,6 @@ void test_phy_update_central_loc_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -332,14 +327,6 @@ void test_phy_update_central_rem_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind_2 = { .instant = 14,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = 0 };
struct pdu_data_llctrl_length_rsp length_ntf_1 = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M)
};
struct pdu_data_llctrl_length_rsp length_ntf_2 = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
@@ -427,7 +414,6 @@ void test_phy_update_central_rem_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf_1);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -479,7 +465,6 @@ void test_phy_update_central_rem_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf_2);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -500,10 +485,6 @@ void test_phy_update_periph_loc_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 7,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = PHY_1M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M)
};
uint16_t instant;
struct pdu_data_llctrl_reject_ext_ind reject_ext_ind = {
@@ -608,7 +589,6 @@ void test_phy_update_periph_loc_collision(void)
/* There should be one host notification */
pu.status = BT_HCI_ERR_SUCCESS;
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -626,10 +606,6 @@ void test_phy_conn_update_central_loc_collision(void)
struct pdu_data *pdu;
uint16_t instant;
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
struct pdu_data_llctrl_reject_ext_ind reject_ext_ind = {
.reject_opcode = PDU_DATA_LLCTRL_TYPE_CONN_PARAM_REQ,
.error_code = BT_HCI_ERR_DIFF_TRANS_COLLISION
@@ -751,7 +727,6 @@ void test_phy_conn_update_central_loc_collision(void)
/* (A) There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();


@@ -1090,9 +1090,6 @@ void test_conn_update_central_loc_unsupp_w_feat_exch(void)
pdu = (struct pdu_data *)tx->pdu;
instant = sys_le16_to_cpu(pdu->llctrl.conn_update_ind.instant);
/* Release Tx */
ull_cp_release_tx(&conn, tx);
/* */
while (!is_instant_reached(&conn, instant)) {
/* Prepare */
@@ -2491,7 +2488,7 @@ void test_conn_update_periph_loc_collision_reject_2nd_cpr(void)
struct ll_conn conn_2nd;
struct ll_conn conn_3rd;
uint8_t err;
struct node_tx *tx;
struct node_tx *tx, *tx1;
struct node_rx_pdu *ntf;
uint16_t instant;
@@ -2532,7 +2529,7 @@ void test_conn_update_periph_loc_collision_reject_2nd_cpr(void)
event_prepare(&conn);
/* (A) Tx Queue should have one LL Control PDU */
lt_rx(LL_CONNECTION_PARAM_REQ, &conn, &tx, &conn_param_req);
lt_rx(LL_CONNECTION_PARAM_REQ, &conn, &tx1, &conn_param_req);
lt_rx_q_is_empty(&conn);
/* (B) Rx */
@@ -2611,7 +2608,7 @@ void test_conn_update_periph_loc_collision_reject_2nd_cpr(void)
/* Release Tx */
ull_cp_release_tx(&conn, tx);
ull_cp_release_tx(&conn, tx1);
/*******************/
@@ -3668,9 +3665,6 @@ void test_conn_update_central_loc_accept_no_param_req(void)
pdu = (struct pdu_data *)tx->pdu;
instant = sys_le16_to_cpu(pdu->llctrl.conn_update_ind.instant);
/* Release Tx */
ull_cp_release_tx(&conn, tx);
/* */
while (!is_instant_reached(&conn, instant)) {
/* Prepare */


@@ -883,7 +883,7 @@ void wait_for_phy_update_instant(uint8_t instant)
void check_phy_update_and_cte_req_complete(bool is_local, struct pdu_data_llctrl_cte_req *cte_req,
struct pdu_data_llctrl_phy_req *phy_req,
uint8_t ctx_num_at_end)
uint8_t ctx_num_at_end, bool dle_ntf)
{
struct pdu_data_llctrl_length_rsp length_ntf = {
PDU_PDU_MAX_OCTETS, PDU_DC_MAX_US(PDU_PDU_MAX_OCTETS, phy_req->tx_phys),
@@ -918,7 +918,9 @@ void check_phy_update_and_cte_req_complete(bool is_local, struct pdu_data_llctrl
/* There should be two host notifications, one pu and one dle */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
if (dle_ntf) {
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
}
/* Release Ntf */
ull_cp_release_ntf(ntf);
@@ -966,7 +968,7 @@ void check_phy_update_and_cte_req_complete(bool is_local, struct pdu_data_llctrl
*/
static void run_phy_update_central(bool is_local, struct pdu_data_llctrl_cte_req *cte_req,
struct pdu_data_llctrl_phy_req *phy_req, uint8_t events_at_start,
uint8_t ctx_num_at_end)
uint8_t ctx_num_at_end, bool dle_ntf)
{
struct pdu_data_llctrl_phy_req rsp = { .rx_phys = PHY_PREFER_ANY,
.tx_phys = PHY_PREFER_ANY };
@@ -1030,7 +1032,7 @@ static void run_phy_update_central(bool is_local, struct pdu_data_llctrl_cte_req
wait_for_phy_update_instant(instant);
check_phy_update_and_cte_req_complete(is_local, cte_req, phy_req, ctx_num_at_end);
check_phy_update_and_cte_req_complete(is_local, cte_req, phy_req, ctx_num_at_end, dle_ntf);
}
/**
@@ -1050,7 +1052,8 @@ static void run_phy_update_central(bool is_local, struct pdu_data_llctrl_cte_req
*/
static void run_phy_update_peripheral(bool is_local, struct pdu_data_llctrl_cte_req *cte_req,
struct pdu_data_llctrl_phy_req *phy_req,
uint8_t events_at_start, uint8_t ctx_num_at_end)
uint8_t events_at_start, uint8_t ctx_num_at_end,
bool dle_ntf)
{
struct pdu_data_llctrl_phy_req rsp = { .rx_phys = PHY_PREFER_ANY,
.tx_phys = PHY_PREFER_ANY };
@@ -1122,7 +1125,7 @@ static void run_phy_update_peripheral(bool is_local, struct pdu_data_llctrl_cte_
wait_for_phy_update_instant(instant);
check_phy_update_and_cte_req_complete(is_local, cte_req, phy_req, ctx_num_at_end);
check_phy_update_and_cte_req_complete(is_local, cte_req, phy_req, ctx_num_at_end, dle_ntf);
}
static void test_local_cte_req_wait_for_phy_update_complete_and_disable(uint8_t role)
@@ -1155,10 +1158,10 @@ static void test_local_cte_req_wait_for_phy_update_complete_and_disable(uint8_t
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt() - 1);
test_ctx_buffers_cnt() - 1, true);
} else {
run_phy_update_peripheral(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt() - 1);
test_ctx_buffers_cnt() - 1, true);
}
/* In this test CTE request is local procedure. Local procedures are handled after remote
@@ -1222,10 +1225,10 @@ static void test_local_cte_req_wait_for_phy_update_complete(uint8_t role)
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(true, &local_cte_req, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt() - 1);
test_ctx_buffers_cnt() - 1, false);
} else {
run_phy_update_peripheral(true, &local_cte_req, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt() - 1);
test_ctx_buffers_cnt() - 1, false);
}
/* PHY update was completed. Handle CTE request */
@@ -1280,10 +1283,10 @@ static void test_local_phy_update_wait_for_cte_req_complete(uint8_t role)
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
} else {
run_phy_update_peripheral(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
}
}
@@ -1370,10 +1373,10 @@ static void test_phy_update_wait_for_remote_cte_req_complete(uint8_t role)
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
} else {
run_phy_update_peripheral(true, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
}
}
@@ -1422,10 +1425,10 @@ static void test_cte_req_wait_for_remote_phy_update_complete_and_disable(uint8_t
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(false, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
} else {
run_phy_update_peripheral(false, NULL, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), true);
}
/* There is no special handling of CTE REQ completion. It is done when instant happens just
@@ -1478,10 +1481,10 @@ static void test_cte_req_wait_for_remote_phy_update_complete(uint8_t role)
if (role == BT_HCI_ROLE_CENTRAL) {
run_phy_update_central(false, &local_cte_req, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), false);
} else {
run_phy_update_peripheral(false, &local_cte_req, &phy_req, pu_event_counter(&conn),
test_ctx_buffers_cnt());
test_ctx_buffers_cnt(), false);
}
/* There is no special handling of CTE REQ completion here. It is done when instant happens


@@ -125,6 +125,9 @@ void test_phy_update_central_loc(void)
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
/* 'Trigger' DLE ntf on PHY update, as this forces change to eff tx/rx times */
conn.lll.dle.eff.max_rx_time = 0;
/* Role */
test_set_role(&conn, BT_HCI_ROLE_CENTRAL);
@@ -329,10 +332,6 @@ void test_phy_update_central_rem(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 7,
.c_to_p_phy = 0,
.p_to_c_phy = PHY_2M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M)
};
uint16_t instant;
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
@@ -407,7 +406,6 @@ void test_phy_update_central_rem(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -425,10 +423,6 @@ void test_phy_update_periph_loc(void)
struct node_tx *tx;
struct node_rx_pdu *ntf;
struct pdu_data_llctrl_phy_req req = { .rx_phys = PHY_2M, .tx_phys = PHY_2M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
@@ -501,7 +495,6 @@ void test_phy_update_periph_loc(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -523,10 +516,6 @@ void test_phy_update_periph_rem(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 7,
.c_to_p_phy = 0,
.p_to_c_phy = PHY_2M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
@@ -604,7 +593,6 @@ void test_phy_update_periph_rem(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -685,10 +673,6 @@ void test_phy_update_central_loc_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 9,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = PHY_2M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct pdu_data_llctrl_reject_ext_ind reject_ext_ind = {
@@ -730,13 +714,13 @@ void test_phy_update_central_loc_collision(void)
event_tx_ack(&conn, tx);
/* Check that data tx is not paused */
zassert_equal(conn.tx_q.pause_data, 0U, "Data tx is paused");
zassert_equal(conn.tx_q.pause_data, 1U, "Data tx is not paused");
/* Done */
event_done(&conn);
/* Check that data tx is not paused */
zassert_equal(conn.tx_q.pause_data, 0U, "Data tx is paused");
zassert_equal(conn.tx_q.pause_data, 1U, "Data tx is not paused");
/* Release Tx */
ull_cp_release_tx(&conn, tx);
@@ -837,7 +821,6 @@ void test_phy_update_central_loc_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -863,14 +846,6 @@ void test_phy_update_central_rem_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind_2 = { .instant = 14,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = 0 };
struct pdu_data_llctrl_length_rsp length_ntf_1 = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M)
};
struct pdu_data_llctrl_length_rsp length_ntf_2 = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M)
};
uint16_t instant;
struct node_rx_pu pu = { .status = BT_HCI_ERR_SUCCESS };
@@ -958,7 +933,6 @@ void test_phy_update_central_rem_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf_1);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -1010,7 +984,6 @@ void test_phy_update_central_rem_collision(void)
/* There should be one host notification */
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf_2);
ut_rx_q_is_empty();
/* Release Ntf */
@@ -1031,10 +1004,6 @@ void test_phy_update_periph_loc_collision(void)
struct pdu_data_llctrl_phy_upd_ind ind = { .instant = 7,
.c_to_p_phy = PHY_2M,
.p_to_c_phy = PHY_1M };
struct pdu_data_llctrl_length_rsp length_ntf = {
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_2M),
3 * PDU_DC_PAYLOAD_SIZE_MIN, PDU_DC_MAX_US(3 * PDU_DC_PAYLOAD_SIZE_MIN, PHY_1M)
};
uint16_t instant;
struct pdu_data_llctrl_reject_ext_ind reject_ext_ind = {
@@ -1139,7 +1108,6 @@ void test_phy_update_periph_loc_collision(void)
/* There should be one host notification */
pu.status = BT_HCI_ERR_SUCCESS;
ut_rx_node(NODE_PHY_UPDATE, &ntf, &pu);
ut_rx_pdu(LL_LENGTH_RSP, &ntf, &length_ntf);
ut_rx_q_is_empty();
/* Release Ntf */

View File

@@ -66,9 +66,16 @@ static void test_terminate_rem(uint8_t role)
/* Done */
event_done(&conn);
/* Prepare */
event_prepare(&conn);
/* Done */
event_done(&conn);
/* There should be no host notification */
ut_rx_q_is_empty();
zassert_equal(ctx_buffers_free(), test_ctx_buffers_cnt(),
"Free CTX buffers %d", ctx_buffers_free());
}


@@ -59,17 +59,26 @@ void test_tx_buffer_alloc(void)
ctxs[ctx_idx] = llcp_create_local_procedure(PROC_VERSION_EXCHANGE);
}
/* Init per conn tx_buffer_alloc count */
for (int j = 1; j < CONFIG_BT_CTLR_LLCP_CONN; j++) {
conn[j].llcp.tx_buffer_alloc = 0;
}
#if defined(LLCP_TX_CTRL_BUF_QUEUE_ENABLE)
/* Check alloc flow */
for (i = 0; i < CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM; i++) {
zassert_true(llcp_tx_alloc_peek(&conn[0], ctxs[0]), NULL);
tx[tx_alloc_idx] = llcp_tx_alloc(&conn[0], ctxs[0]);
zassert_equal(conn[0].llcp.tx_buffer_alloc, i + 1, NULL);
zassert_equal(common_tx_buffer_alloc_count(), 0, NULL);
zassert_not_null(tx[tx_alloc_idx], NULL);
tx_alloc_idx++;
}
for (i = 0; i < CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM; i++) {
zassert_true(llcp_tx_alloc_peek(&conn[0], ctxs[0]), NULL);
tx[tx_alloc_idx] = llcp_tx_alloc(&conn[0], ctxs[0]);
zassert_equal(conn[0].llcp.tx_buffer_alloc,
CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM + i + 1, NULL);
zassert_equal(common_tx_buffer_alloc_count(), i+1, NULL);
zassert_not_null(tx[tx_alloc_idx], NULL);
tx_alloc_idx++;
}
@@ -82,6 +91,9 @@ void test_tx_buffer_alloc(void)
zassert_true(llcp_tx_alloc_peek(&conn[j], ctxs[j]), NULL);
tx[tx_alloc_idx] = llcp_tx_alloc(&conn[j], ctxs[j]);
zassert_not_null(tx[tx_alloc_idx], NULL);
zassert_equal(common_tx_buffer_alloc_count(),
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM, NULL);
zassert_equal(conn[j].llcp.tx_buffer_alloc, i + 1, NULL);
tx_alloc_idx++;
}
@@ -90,6 +102,10 @@ void test_tx_buffer_alloc(void)
}
ull_cp_release_tx(&conn[0], tx[1]);
zassert_equal(common_tx_buffer_alloc_count(),
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM - 1, NULL);
zassert_equal(conn[0].llcp.tx_buffer_alloc, CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM +
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM - 1, NULL);
/* global pool is now 'open' again, but ctxs[1] is NOT next in line */
zassert_false(llcp_tx_alloc_peek(&conn[1], ctxs[1]), NULL);
@@ -97,9 +113,18 @@ void test_tx_buffer_alloc(void)
/* ... ctxs[0] is */
zassert_true(llcp_tx_alloc_peek(&conn[0], ctxs[0]), NULL);
tx[tx_alloc_idx] = llcp_tx_alloc(&conn[0], ctxs[0]);
zassert_equal(common_tx_buffer_alloc_count(), CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM,
NULL);
zassert_equal(conn[0].llcp.tx_buffer_alloc, CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM +
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM, NULL);
zassert_not_null(tx[tx_alloc_idx], NULL);
tx_alloc_idx++;
ull_cp_release_tx(&conn[0], tx[tx_alloc_idx - 1]);
zassert_equal(common_tx_buffer_alloc_count(),
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM - 1, NULL);
zassert_equal(conn[0].llcp.tx_buffer_alloc, CONFIG_BT_CTLR_LLCP_PER_CONN_TX_CTRL_BUF_NUM +
CONFIG_BT_CTLR_LLCP_COMMON_TX_CTRL_BUF_NUM - 1, NULL);
/* global pool does not allow as ctxs[2] is NOT next up */
zassert_false(llcp_tx_alloc_peek(&conn[2], ctxs[2]), NULL);


@@ -100,6 +100,28 @@ const struct zcan_frame test_ext_frame_2 = {
.data = {1, 2, 3, 4, 5, 6, 7, 8}
};
/**
* @brief Standard (11-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_std_rtr_frame_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.dlc = 0,
.data = {0}
};
/**
* @brief Extended (29-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_ext_rtr_frame_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.dlc = 0,
.data = {0}
};
/**
* @brief Standard (11-bit) CAN ID filter 1. This filter matches
* ``test_std_frame_1``.
@@ -196,6 +218,30 @@ const struct zcan_filter test_ext_masked_filter_2 = {
.id_mask = TEST_CAN_EXT_MASK
};
/**
* @brief Standard (11-bit) CAN ID RTR filter 1. This filter matches
* ``test_std_rtr_frame_1``.
*/
const struct zcan_filter test_std_rtr_filter_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.rtr_mask = 1,
.id_mask = CAN_STD_ID_MASK
};
/**
* @brief Extended (29-bit) CAN ID RTR filter 1. This filter matches
* ``test_ext_rtr_frame_1``.
*/
const struct zcan_filter test_ext_rtr_filter_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.rtr_mask = 1,
.id_mask = CAN_EXT_ID_MASK
};
/**
* @brief Standard (11-bit) CAN ID filter. This filter matches
* ``TEST_CAN_SOME_STD_ID``.
@@ -585,6 +631,55 @@ static void send_receive(const struct zcan_filter *filter1,
can_remove_rx_filter(can_dev, filter_id_2);
}
/**
* @brief Perform a send/receive test with a set of CAN ID filters and CAN frames, RTR and data
* frames.
*
* @param data_filter CAN data filter
* @param rtr_filter CAN RTR filter
* @param data_frame CAN data frame
* @param rtr_frame CAN RTR frame
*/
void send_receive_rtr(const struct zcan_filter *data_filter,
const struct zcan_filter *rtr_filter,
const struct zcan_frame *data_frame,
const struct zcan_frame *rtr_frame)
{
struct zcan_frame frame;
int filter_id;
int err;
filter_id = add_rx_msgq(can_dev, rtr_filter);
/* Verify that RTR filter does not match data frame */
send_test_frame(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "Data frame passed RTR filter");
/* Verify that RTR filter matches RTR frame */
send_test_frame(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
assert_frame_equal(&frame, rtr_frame, 0);
can_remove_rx_filter(can_dev, filter_id);
filter_id = add_rx_msgq(can_dev, data_filter);
/* Verify that data filter does not match RTR frame */
send_test_frame(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "RTR frame passed data filter");
/* Verify that data filter matches data frame */
send_test_frame(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
assert_frame_equal(&frame, data_frame, 0);
can_remove_rx_filter(can_dev, filter_id);
}
/**
* @brief Test getting the CAN core clock rate.
*/
@@ -875,6 +970,24 @@ void test_send_receive_msgq(void)
can_remove_rx_filter(can_dev, filter_id);
}
/**
* @brief Test send/receive with standard (11-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_std_id_rtr(void)
{
send_receive_rtr(&test_std_filter_1, &test_std_rtr_filter_1,
&test_std_frame_1, &test_std_rtr_frame_1);
}
/**
* @brief Test send/receive with extended (29-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_ext_id_rtr(void)
{
send_receive_rtr(&test_ext_filter_1, &test_ext_rtr_filter_1,
&test_ext_frame_1, &test_ext_rtr_frame_1);
}
/**
* @brief Test that non-matching CAN frames do not pass a filter.
*/
@@ -1024,6 +1137,8 @@ void test_main(void)
ztest_unit_test(test_send_receive_std_id_masked),
ztest_unit_test(test_send_receive_ext_id_masked),
ztest_user_unit_test(test_send_receive_msgq),
ztest_user_unit_test(test_send_receive_std_id_rtr),
ztest_user_unit_test(test_send_receive_ext_id_rtr),
ztest_user_unit_test(test_send_invalid_dlc),
ztest_unit_test(test_send_receive_wrong_id),
ztest_user_unit_test(test_recover),


@@ -70,7 +70,7 @@ void test_isr_dynamic(void)
*/
#if defined(CONFIG_X86)
#define IV_IRQS 32 /* start of vectors available for x86 IRQs */
#define TEST_IRQ_DYN_LINE 16
#define TEST_IRQ_DYN_LINE 25
#elif defined(CONFIG_ARCH_POSIX)
#define TEST_IRQ_DYN_LINE 5


@@ -86,7 +86,7 @@ void isr_handler(const void *param)
* Other arch will be add later.
*/
#if defined(CONFIG_X86)
#define TEST_IRQ_DYN_LINE 17
#define TEST_IRQ_DYN_LINE 26
#elif defined(CONFIG_ARCH_POSIX)
#define TEST_IRQ_DYN_LINE 5


@@ -9,6 +9,7 @@ CONFIG_NET_ARP=y
CONFIG_NET_UDP=y
CONFIG_NET_BUF_LOG=y
CONFIG_NET_LOG=y
CONFIG_NET_BUF_POOL_USAGE=y
CONFIG_ENTROPY_GENERATOR=y
CONFIG_TEST_RANDOM_GENERATOR=y


@@ -4,6 +4,8 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include "kernel.h"
#include "ztest_assert.h"
#include <zephyr/types.h>
#include <stddef.h>
#include <string.h>
@@ -1061,6 +1063,119 @@ void test_net_pkt_remove_tail(void)
net_pkt_unref(pkt);
}
void test_net_pkt_shallow_clone_noleak_buf(void)
{
const int bufs_to_allocate = 3;
const size_t pkt_size = CONFIG_NET_BUF_DATA_SIZE * bufs_to_allocate;
struct net_pkt *pkt, *shallow_pkt;
struct net_buf_pool *tx_data;
pkt = net_pkt_alloc_with_buffer(NULL, pkt_size,
AF_UNSPEC, 0, K_NO_WAIT);
zassert_true(pkt != NULL, "Pkt not allocated");
net_pkt_get_info(NULL, NULL, NULL, &tx_data);
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count - bufs_to_allocate,
"Incorrect net buf allocation");
shallow_pkt = net_pkt_shallow_clone(pkt, K_NO_WAIT);
zassert_true(shallow_pkt != NULL, "Pkt not allocated");
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count - bufs_to_allocate,
"Incorrect available net buf count");
net_pkt_unref(pkt);
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count - bufs_to_allocate,
"Incorrect available net buf count");
net_pkt_unref(shallow_pkt);
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count,
"Leak detected");
}
#define TEST_NET_PKT_SHALLOW_CLONE_APPEND_BUF(extra_frag_refcounts) \
void test_net_pkt_shallow_clone_append_buf_##extra_frag_refcounts(void) \
{ \
const int bufs_to_allocate = 3; \
const int bufs_frag = 2; \
\
zassert_true(bufs_frag + bufs_to_allocate < CONFIG_NET_BUF_DATA_SIZE, \
"Total bufs to allocate must be less than available space"); \
\
const size_t pkt_size = CONFIG_NET_BUF_DATA_SIZE * bufs_to_allocate; \
\
struct net_pkt *pkt, *shallow_pkt; \
struct net_buf *frag_head; \
struct net_buf *frag; \
struct net_buf_pool *tx_data; \
\
pkt = net_pkt_alloc_with_buffer(NULL, pkt_size, \
AF_UNSPEC, 0, K_NO_WAIT); \
zassert_true(pkt != NULL, "Pkt not allocated"); \
\
net_pkt_get_info(NULL, NULL, NULL, &tx_data); \
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count \
- bufs_to_allocate, "Incorrect net buf allocation"); \
\
shallow_pkt = net_pkt_shallow_clone(pkt, K_NO_WAIT); \
zassert_true(shallow_pkt != NULL, "Pkt not allocated"); \
\
/* allocate buffers for the frag */ \
for (int i = 0; i < bufs_frag; i++) { \
frag = net_buf_alloc_len(tx_data, CONFIG_NET_BUF_DATA_SIZE, K_NO_WAIT); \
zassert_true(frag != NULL, "Frag not allocated"); \
net_pkt_append_buffer(pkt, frag); \
if (i == 0) { \
frag_head = frag; \
} \
} \
\
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count \
- bufs_to_allocate - bufs_frag, "Incorrect net buf allocation"); \
\
/* Note: if the frag is appended to a net buf, then the net buf */ \
/* takes ownership of one ref count. Otherwise net_buf_unref() must */ \
/* be called on the frag to free the buffers. */ \
\
for (int i = 0; i < extra_frag_refcounts; i++) { \
frag_head = net_buf_ref(frag_head); \
} \
\
net_pkt_unref(pkt); \
\
/* we shouldn't have freed any buffers yet */ \
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count \
- bufs_to_allocate - bufs_frag, \
"Incorrect net buf allocation"); \
\
net_pkt_unref(shallow_pkt); \
\
if (extra_frag_refcounts == 0) { \
/* if no extra ref counts were added to the frag, then all the */ \
/* buffers should be freed at this point */ \
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count, \
"Leak detected"); \
} else { \
/* otherwise the bufs_frag buffers should still be allocated, and */ \
/* the frag could still be used at this point */ \
zassert_equal(atomic_get(&tx_data->avail_count), \
tx_data->buf_count - bufs_frag, "Leak detected"); \
} \
\
for (int i = 0; i < extra_frag_refcounts; i++) { \
net_buf_unref(frag_head); \
} \
\
/* all the buffers should be freed now */ \
zassert_equal(atomic_get(&tx_data->avail_count), tx_data->buf_count, \
"Leak detected"); \
}
TEST_NET_PKT_SHALLOW_CLONE_APPEND_BUF(0)
TEST_NET_PKT_SHALLOW_CLONE_APPEND_BUF(1)
TEST_NET_PKT_SHALLOW_CLONE_APPEND_BUF(2)
void test_main(void)
{
eth_if = net_if_get_default();
@@ -1077,7 +1192,11 @@ void test_main(void)
ztest_unit_test(test_net_pkt_headroom),
ztest_unit_test(test_net_pkt_headroom_copy),
ztest_unit_test(test_net_pkt_get_contiguous_len),
ztest_unit_test(test_net_pkt_remove_tail)
ztest_unit_test(test_net_pkt_remove_tail),
ztest_unit_test(test_net_pkt_shallow_clone_noleak_buf),
ztest_unit_test(test_net_pkt_shallow_clone_append_buf_0),
ztest_unit_test(test_net_pkt_shallow_clone_append_buf_1),
ztest_unit_test(test_net_pkt_shallow_clone_append_buf_2)
);
ztest_run_test_suite(net_pkt_tests);


@@ -103,6 +103,24 @@ static int test_init(const struct device *dev)
SYS_INIT(test_init, APPLICATION, CONFIG_APPLICATION_INIT_PRIORITY);
/* Check that global static object constructors are called. */
foo_class static_foo(12345678);
static void test_global_static_ctor(void)
{
zassert_equal(static_foo.get_foo(), 12345678, NULL);
}
/*
* Check that dynamic memory allocation (usually, the C library heap) is
* functional when the global static object constructors are called.
*/
foo_class *static_init_dynamic_foo = new foo_class(87654321);
static void test_global_static_ctor_dynmem(void)
{
zassert_equal(static_init_dynamic_foo->get_foo(), 87654321, NULL);
}
static void test_new_delete(void)
{
@@ -114,6 +132,8 @@ static void test_new_delete(void)
void test_main(void)
{
ztest_test_suite(cpp_tests,
ztest_unit_test(test_global_static_ctor),
ztest_unit_test(test_global_static_ctor_dynmem),
ztest_unit_test(test_new_delete)
);


@@ -1,6 +1,23 @@
tests:
cpp.main:
common:
integration_platforms:
- mps2_an385
- qemu_cortex_a53
tags: cpp
toolchain_exclude: xcc
tests:
cpp.main.minimal:
extra_configs:
- CONFIG_MINIMAL_LIBC=y
cpp.main.newlib:
filter: TOOLCHAIN_HAS_NEWLIB == 1
min_ram: 32
extra_configs:
- CONFIG_NEWLIB_LIBC=y
- CONFIG_NEWLIB_LIBC_NANO=n
cpp.main.newlib_nano:
filter: TOOLCHAIN_HAS_NEWLIB == 1 and CONFIG_HAS_NEWLIB_LIBC_NANO
min_ram: 24
extra_configs:
- CONFIG_NEWLIB_LIBC=y
- CONFIG_NEWLIB_LIBC_NANO=y


@@ -65,7 +65,7 @@ manifest:
groups:
- hal
- name: hal_espressif
revision: df85671c5d0405c0747c2939c8dfe808b7e4cf38
revision: db4a5893e45db100497fbe5262847280fb749dac
path: modules/hal/espressif
west-commands: west/west-commands.yml
groups: