Compare commits

...

154 Commits

Author SHA1 Message Date
Christopher Friedt
030fa9da45 release: Zephyr 2.7.5
Set version to 2.7.5

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:08:42 -04:00
Christopher Friedt
43370b89c3 release: minor corrections to security release notes
* remove reference to other github project issue
* complete incomplete sentence

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:07:16 -04:00
Flavio Ceolin
15fa28896a release: mbedTLS: Add vulnerabilities info
Add information about vulnerabilities fixed since mbedTLS 2.26.0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Flavio Ceolin
ce3eb90a83 release: security: Add rel notes for vulnerabilities
Add information about vulnerabilities fixed in 2.7.5 release.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Chris Friedt
ca24cd6c2d release: update v2.7.5 release notes
* add bugfixes to v2.7.5 release

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-27 05:32:14 -04:00
Flavio Ceolin
4fc4dc7b84 release: mbedTLS: Add rel notes for mbedTLS
Release notes for the latest mbedTLS update.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-26 11:10:27 +09:00
Krzysztof Chruscinski
fb24b62dc5 logging: Fix user space crash when runtime filtering is on
Logging module data (including filters) is not accessible from
user space. The macro for creating logs was creating a local
variable holding the filters before checking whether we are in the
user context. That variable was not used in the user-space case,
but creating it violated access rights, which resulted in a failure.

Remove the variable creation and use the filters directly in the
if clause, after checking the condition that we are not in the
user context. With this approach the data is accessed only in
kernel mode.

Cherry-picked with modifications from
4ee59e2cdb.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-05-18 00:27:35 +08:00
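A minimal sketch of the pattern described in the commit above, with stub declarations standing in for the real logging internals (all names here are illustrative, not the actual Zephyr macros):

```
#include <stdbool.h>

/* Stubs standing in for kernel internals (illustrative names). */
extern bool k_is_user_context(void);
struct log_module { unsigned int filters; };
extern void log_submit_from_kernel(const struct log_module *m);

/* Broken variant: the local copy touches kernel-only memory even
 * when running in user context, which faults under runtime filtering.
 */
#define LOG_BROKEN(m)                                                    \
	do {                                                             \
		unsigned int f = (m)->filters; /* faults in user mode */ \
		if (!k_is_user_context() && f != 0) {                    \
			log_submit_from_kernel(m);                       \
		}                                                        \
	} while (false)

/* Fixed variant: the filters are read only after the context check,
 * so the data is accessed only in kernel mode.
 */
#define LOG_FIXED(m)                                                     \
	do {                                                             \
		if (!k_is_user_context() && (m)->filters != 0) {         \
			log_submit_from_kernel(m);                       \
		}                                                        \
	} while (false)
```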
Chris Friedt
60e7a97328 release: create outline for v2.7.5 release notes
Create a template for v2.7.5 release notes.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-18 00:07:05 +08:00
Flavio Ceolin
a1aa463783 boards: mps2_an521_ns: Remove simulation capability
This board requires TF-M which is not supported by default in the
current Zephyr release. Just remove the simulation capability to
avoid CI failures.

See: https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
29c1e08cf7 west: tf-m: Remove tf-m from Zephyr LTS
Zephyr's mbedTLS was updated to 2.28.x, which is an LTS release and
addresses several vulnerabilities affecting 2.26 (the version that used
to be used on the Zephyr LTS).

Unfortunately this mbedTLS version is not compatible with TF-M, and
backporting the mbedTLS fixes was not a viable solution. Due to this
problem we are removing the TF-M module from Zephyr's LTS. One can
still add it to this manifest if needed, but this is no longer
"officially" supported.

More information in:
https://github.com/zephyrproject-rtos/zephyr/issues/56071
https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
190f09df52 samples: tfm: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Only build / run these TF-M samples when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
a166290f1a tfm: boards: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Enable BUILD_WITH_TFM only when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
21e0870106 crypto: Bump mbedTLS to 2.28.3
Bump mbedTLS to version 2.28.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
ebe3651f3d tests: mbedtls: Fix GCC warning about test_snprintf
Fix errors like:

inlined from ‘test_mbedtls’ at
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:172:6:
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:96:17: error:
‘test_snprintf’ reading 10 bytes from a region of size 1
[-Werror=stringop-overread]
   96 |                 test_snprintf(1, "", -1) != 0 ||
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~

This occurs in GCC >= 11 because `ret_buf` in some calls is a shorter literal.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Torsten Rasmussen
1b7c720c7f cmake: prefix local version of return variable
Fixes: #55490
Follow-up: #53124

Prefix local version of the return variable before calling
`zephyr_check_compiler_flag_hardcoded()`.

This ensures that there will never be any naming collision between named
return argument and the variable name used in later functions when
PARENT_SCOPE is used.

The issue #55490 provided description of situation where the double
de-referencing was not working correctly.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 599886a9d3)
2023-05-13 19:23:55 -04:00
Stephanos Ioannidis
58af1b51bd ci: Use organisation-level AWS secrets
This commit updates the CI workflows to use the `zephyrproject-rtos`
organisation-level AWS secrets instead of the repository-level secrets.

Using organisation-level secrets allows more centralised management of
the access keys used throughout the GitHub Actions CI infrastructure.

Note that the `AWS_*_ACCESS_KEY_ID` is now stored in plaintext as a
variable instead of a secret because it is equivalent to username and
needs to be identifiable for management and audit purposes.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-05-12 03:30:43 +09:00
Kumar Gala
70f2a4951a tests: posix: fs: disable CONFIG_EVENTFD
The test doesn't use eventfd so we can disable it to save some space.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit 70e921dbc7)
2023-05-10 19:48:15 -04:00
Kumar Gala
650d10805a posix: eventfd: depends on polling
Have the eventfd Kconfig select POLL, as the code utilizes the polling
API. Without this we get a link error for
tests/lib/fdtable/libraries.os.fdtable when building with arm-clang.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit f215e4494c)
2023-05-10 19:48:15 -04:00
Chris Friedt
25616b1021 tests: drivers: dma: loop: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 8c6c96715f)
2023-05-09 08:42:19 -04:00
Chris Friedt
f72519007c tests: drivers: dma: chan_link: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7f6d976916)
2023-05-09 08:42:19 -04:00
Chris Friedt
1b2a7ec251 tests: drivers: dma: chan_blen: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 5afcac5e14)
2023-05-09 08:42:19 -04:00
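The gist of the change in these three test commits, sketched against the Zephyr 2.7 DMA API (buffer names are illustrative):

```
#include <zephyr.h>
#include <drivers/dma.h>

static char tx_data[64];
static char rx_data[64];

/* With CONFIG_DMA_64BIT the dma_block_config addresses widen to
 * 64 bits, so go through uintptr_t to avoid truncation warnings.
 */
static void setup_block(struct dma_block_config *blk)
{
	blk->block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
	blk->source_address = (uint64_t)(uintptr_t)tx_data;
	blk->dest_address = (uint64_t)(uintptr_t)rx_data;
#else
	blk->source_address = (uint32_t)(uintptr_t)tx_data;
	blk->dest_address = (uint32_t)(uintptr_t)rx_data;
#endif
}
```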
Chris Friedt
9d2533fc92 tests: posix: ensure that min and max priority are schedulable
Verify that threads are actually schedulable for min and max
scheduler priority for both `SCHED_RR` (preemptive) and
`SCHED_FIFO` (cooperative).

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit ad71b78770)
2023-05-02 16:25:42 -04:00
Chris Friedt
e20b8f3f34 posix: sched: ensure min and max priority are schedulable
Previously, there was an off-by-one error for SCHED_RR.

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 2b2cbf8107)
2023-05-02 16:25:42 -04:00
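A hypothetical illustration of that kind of off-by-one (not the actual kernel code): a range check written with an exclusive upper bound rejects the maximum priority even though it is schedulable.

```
#include <stdbool.h>

#define PRIO_MIN 0
#define PRIO_MAX 10 /* example bounds, not Zephyr's */

static bool prio_valid_broken(int prio)
{
	return prio >= PRIO_MIN && prio < PRIO_MAX; /* max rejected */
}

static bool prio_valid_fixed(int prio)
{
	return prio >= PRIO_MIN && prio <= PRIO_MAX; /* inclusive */
}
```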
Christopher Friedt
199d5d5448 drivers: pcie_ep: iproc: compile-out unused function based on DT
Compile-out `iproc_pcie_pl330_dma_xfer()` if there are no active
DMA users in devicetree.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 9ad78eb60c)
2023-05-02 12:33:46 -04:00
Chris Friedt
5db2717f06 drivers: pcie_ep: iproc: ensure config and api are const
The `config` and `api` members of `struct device` are expected
to be `const`. This also improves reliability, as `config`
and `api` are stored in ROM rather than RAM; RAM has the
potential to be corrupted at runtime in the absence of an MMU.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7212792295)
2023-04-27 11:18:00 -04:00
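The convention the commit restores looks roughly like this (the struct names are illustrative stand-ins, not the actual iproc driver symbols):

```
struct device; /* opaque here; defined in Zephyr's device.h */

struct my_ep_config { unsigned long base; };
struct my_ep_api { int (*xfer)(const struct device *dev); };

/* const places both structures in ROM, out of reach of runtime
 * corruption on MMU-less targets.
 */
static const struct my_ep_config my_cfg = { .base = 0x18013000UL };
static const struct my_ep_api my_api = { .xfer = NULL };
```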
Tarun Karuturi
f3851326da drivers: pcie_ep: iproc: enable based on device tree specs
There are use cases for the pcie_ep driver where we don't
necessarily need the DMA functionality. Added ifdefs around
the DMA functionality so that it's only available if the
DMA engines are specified in the devicetree, similar to:

```
dmas = <&pl330 0>, <&pl330 1>;
dma-names = "txdma", "rxdma";
```

Signed-off-by: Tarun Karuturi <tkaruturi@meta.com>
Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 9d95f69a87)
2023-04-27 11:18:00 -04:00
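A sketch of the gating on the C side, assuming the driver's compatible string (the real symbol may differ); `DT_INST_NODE_HAS_PROP()` evaluates to 1 only when the instance carries the `dmas` property shown above:

```
#include <devicetree.h>

#define DT_DRV_COMPAT brcm_iproc_pcie_ep /* assumed compatible */

#if DT_INST_NODE_HAS_PROP(0, dmas)
/* Only compiled when the node specifies DMA engines. */
static int iproc_pcie_pl330_dma_xfer(void)
{
	/* ... program the pl330 channels ... */
	return 0;
}
#endif
```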
Stephanos Ioannidis
5a8d05b968 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-27 21:45:14 +09:00
Stephanos Ioannidis
eea42e38f3 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-15 15:59:20 +09:00
Chris Friedt
0388a90e7b posix: clock: fix seconds calculation
The previous method used to calculate seconds in `clock_gettime()`
seemed to have an inaccuracy that grew with time causing the
seconds to be off by an order of magnitude when ticks would roll
over.

This change fixes the method used to calculate seconds.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
4c62d76fb7 sys: time_units: Add Kconfig option for algorithm selection
Add the maximum timeout used for conversion to Kconfig. The option is
used to determine which conversion algorithm to use: faster but
overflowing earlier, or slower without early overflow.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit 50c7c7b1e4)
2023-04-07 06:29:11 -04:00
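The tradeoff in plain C, with example frequencies (not Zephyr's actual z_tmcvt implementation): multiplying first is one divide cheaper but overflows uint64_t much earlier, while dividing first stays exact and overflow-free for far larger tick counts.

```
#include <stdint.h>

#define TICKS_PER_SEC 32768ULL       /* example frequency */
#define NSEC_PER_SEC  1000000000ULL

static uint64_t ticks_to_ns_fast(uint64_t t)
{
	return t * NSEC_PER_SEC / TICKS_PER_SEC; /* overflows early */
}

static uint64_t ticks_to_ns_safe(uint64_t t)
{
	/* slower, but no early overflow of the intermediate product */
	return (t / TICKS_PER_SEC) * NSEC_PER_SEC +
	       (t % TICKS_PER_SEC) * NSEC_PER_SEC / TICKS_PER_SEC;
}
```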
Chris Friedt
6f8f9b5c7a tests: time_units: check for overflow in z_tmcvt intermediate
Prior to #41602, due to the ordering of operations (first mul,
then div), an intermediate value would overflow, resulting in
a time non-linearity.

This test ensures that time rolls-over properly.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 74c9c0e7a3)
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
afbc93287d lib: posix: clock: Prevent early overflows
The algorithm was converting uptime to nanoseconds, which can easily
lead to overflows. Changed the algorithm to use milliseconds, with
nanoseconds used only for the remainder.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-04-07 06:29:11 -04:00
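A minimal sketch of the fixed approach, assuming `k_uptime_get()` (which returns milliseconds) as the time source:

```
#include <stdint.h>
#include <time.h>

extern int64_t k_uptime_get(void); /* Zephyr uptime in ms */

static void uptime_to_timespec(struct timespec *ts)
{
	int64_t ms = k_uptime_get();

	/* seconds from ms; nanoseconds only for the sub-second
	 * remainder, so no early-overflowing nanosecond product
	 */
	ts->tv_sec = ms / 1000;
	ts->tv_nsec = (ms % 1000) * 1000000;
}
```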
Gerard Marull-Paretas
a28aa01a88 sys: time_units: add missing include
The header can't be fully used in standalone mode: toolchain.h has to be
included first, otherwise the ALWAYS_INLINE attribute is not defined.
Headers that can be directly included but are not self-contained should
be considered bad practice.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-07 06:29:11 -04:00
Stephanos Ioannidis
677a374255 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:40 +09:00
Stephanos Ioannidis
0389fa740b ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 03:05:01 +09:00
Torsten Rasmussen
b02d34b855 cmake: fix variable de-referencing in zephyr_check_compiler_x functions
Fixes: #53124

Fix de-referencing of check and exists function arguments by correctly
de-referencing the argument references using `${<var>}`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 04a27651ea)
2023-03-13 07:48:51 -04:00
Torsten Rasmussen
29e3a4865f cmake: dereference ${check} after zephyr_check_compiler_flag() call
Follow-up: #53124

The PR#53124 fixed an issue where the variable `check` was not properly
dereferenced into the correct variable name for return value storage.
This was corrected in 04a27651ea.

However, some code was passing a return argument as:
`zephyr_check_compiler_flag(... ${check})`
but checking the result like:
`if(${check})`
thus relying on a faulty behavior of code updating `check` and not the
`${check}` variable.

Fix this by updating to use `${${check}}` as that will point to the
correct return value.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 45b25e5508)
2023-03-10 13:32:37 -05:00
Robert Lubos
aaa6d280ce net: iface: Add NULL pointer check in net_if_ipv6_set_reachable_time
If the IPv6 context pointer was not set on an interface (for
instance due to IPv6 context shortage), processing an RA message could
lead to a crash (i.e. a NULL pointer dereference). Protect against this
by adding a NULL pointer check, similar to other functions in this area.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit c6c2098255)
2023-02-28 12:36:47 -05:00
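The shape of the guard, simplified (the real function lives in Zephyr's net_if code):

```
#include <stddef.h>

struct net_if_ipv6; /* full definition in Zephyr's net_if.h */

void net_if_ipv6_set_reachable_time(struct net_if_ipv6 *ipv6)
{
	if (ipv6 == NULL) {
		/* no IPv6 context attached (e.g. context shortage) */
		return;
	}
	/* ... compute and store the reachable time ... */
}
```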
Robert Lubos
e02a3377e5 net: shell: Validate pointer provided with net pkt command
The net_pkt pointer provided to net pkt commands was not validated in
any way. Therefore it was fairly easy to crash an application by
providing an invalid address.

This commit adds the pointer validation. It's checked whether the
pointer provided belongs to any of the net_pkt pools known to the net
stack, and whether the pointer offset within the slab actually points
to the beginning of a net_pkt structure.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit e540a98331)
2023-02-28 12:34:38 -05:00
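A hedged sketch of the two checks (names are illustrative, not the actual net shell helpers): the pointer must fall inside the pool's buffer and sit exactly on a block boundary.

```
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct pool_bounds {
	uintptr_t start;
	size_t block_size;
	unsigned int num_blocks;
};

static bool pkt_ptr_valid(const struct pool_bounds *p, uintptr_t ptr)
{
	uintptr_t end = p->start + (uintptr_t)p->block_size * p->num_blocks;

	if (ptr < p->start || ptr >= end) {
		return false; /* not in this pool at all */
	}
	/* must point at the beginning of a net_pkt structure */
	return ((ptr - p->start) % p->block_size) == 0;
}
```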
Gerard Marull-Paretas
76c30dfa55 ci: doc-build: fix PDF build
The new LaTeX Docker image (Debian based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the
system environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:21:51 +09:00
Théo Battrel
c3f512d606 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`; this could
lead to the value being uninitialized and to unknown behavior.

To fix that issue, the `rp->le_max_num` value is checked the same way
`bt_dev.le.acl_mtu` was already checked. The same thing has been done
for `rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 19:48:12 -05:00
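A simplified sketch of the added validation (types and names are stand-ins for the HCI response structure):

```
#include <errno.h>
#include <stdint.h>

struct le_read_buffer_size_rsp {
	uint16_t le_max_len;
	uint8_t le_max_num;
};

static int le_bufs_setup(const struct le_read_buffer_size_rsp *rp)
{
	/* reject a zero/uninitialized count before it reaches
	 * k_sem_init()
	 */
	if (rp->le_max_len == 0 || rp->le_max_num == 0) {
		return -EINVAL;
	}
	/* k_sem_init(&le_pkts_sem, rp->le_max_num, rp->le_max_num); */
	return 0;
}
```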
NingX Zhao
f882abfd13 tests: removing incorrect testcases of poll
These two test cases are fault injection test cases, designed to
exercise some negative branches and improve code coverage. However,
this branch shouldn't be tested: the spinlock is already locked
before the procedure reaches this point, so the injected fault
triggers an assert error, the process is rescheduled to the handler
function, and the current test case is terminated with the spinlock
never unlocked. This impacts the next test case in the same test
suite (the next test case will never get the spinlock).

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
(cherry picked from commit cb4a629bc8)
2023-02-07 12:14:38 -06:00
Lucas Dietrich
bc7300fea7 kernel: workq: Add internal function z_work_submit_to_queue()
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().

When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield while holding the spinlock of the object
that became available.

Fixes #45267

Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
(cherry picked from commit 9a848b3ad4)
2023-02-03 18:37:53 -05:00
Andy Ross
8da9a76464 kernel/workq: Cleanup bespoke reschedule point
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible.  Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.

Really, this is only a half fix.  Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule().  But that would take some surgery, so let's go with
the simpler cleanup first.

This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.

[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 8d94967ec4)
2023-02-03 18:37:53 -05:00
Lixin Guo
298b8ea788 kernel: work: remove unused if statement
The work == NULL condition is checked earlier, so there is no need to
check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
(cherry picked from commit d4826d874e)
2023-02-03 18:37:53 -05:00
Peter Mitsis
a9aaf048e8 kernel: Fixes sys_clock_tick_get()
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_tick_announce().
  1. The handler invokes k_uptime_ticks().
  2. k_uptime_ticks() invokes sys_clock_tick_get().
  3. sys_clock_tick_get() must call elapsed() and not
     sys_clock_elapsed() as we do not want to count any
     unannounced ticks that may have elapsed while
     processing the timer ISR.

Fixes #46378

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 71ef669ea4)
2023-02-03 18:37:53 -05:00
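In the 2.7 kernel's timeout code, `elapsed()` is a thin wrapper that returns 0 while an announce is in progress, which is exactly the property step 3 relies on. Roughly (simplified; the real code holds the timeout spinlock):

```
#include <stdint.h>

extern int32_t announce_remaining;       /* ticks being announced */
extern uint32_t sys_clock_elapsed(void); /* driver's unannounced ticks */
extern uint64_t curr_tick;               /* ticks announced so far */

/* Returns 0 while an announce is in progress, so ticks the timer ISR
 * is currently processing are not counted twice.
 */
static uint32_t elapsed(void)
{
	return announce_remaining == 0 ? sys_clock_elapsed() : 0;
}

uint64_t sys_clock_tick_get(void)
{
	/* locking elided for the sketch */
	return curr_tick + elapsed();
}
```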
Peter Mitsis
e2b81b48c4 kernel: fix race condition in sys_clock_announce()
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 3e2f30a7ef)
2023-02-03 18:37:53 -05:00
Andy Ross
45c41bc344 kernel/timeout: Cleanup/speedup parallel announce logic
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list).  The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).

Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit.  The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.

Fixes #44758

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 0b2ed3818d)
2023-02-03 18:37:53 -05:00
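A sketch of the scheme (locking and list-walking details elided): a core that finds an announce already in progress just adds its ticks and leaves, while the first core drains the whole list.

```
#include <stdint.h>

static int32_t announce_remaining; /* nonzero while a core iterates */

void sys_clock_announce(int32_t ticks)
{
	/* timeout lock held here */
	if (announce_remaining != 0) {
		/* another core owns the loop; just add our ticks */
		announce_remaining += ticks;
		return;
	}
	announce_remaining = ticks;
	/* ... walk the timeout list, dropping the lock around each
	 * callback, until announce_remaining drains to zero ...
	 */
}
```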
Andy Ross
f570a46719 kernel/timeout: Serialize handler callbacks on SMP
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).

When that happens, each call will independently enter the loop over
the timeout list.  The access is correctly synchronized, so the list
handling is correct.  But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus.  That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".

That's surprising and error-prone.  Don't do it.  Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.

(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver.  That hardware issue is still poorly understood, but this fix
is desirable regardless.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b1182bf83b)
2023-02-03 18:37:53 -05:00
Flavio Ceolin
675a349e1b kernel: Fix timeout issue with SYSTEM_CLOCK_SLOPPY_IDLE
We can't simply use CLAMP to set the next timeout because,
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number and CLAMP would then be called with
the higher boundary lower than the lower boundary.

Fixes #41422

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 47b7c2e931)
2023-02-03 18:37:53 -05:00
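The trap is easy to demonstrate with the usual CLAMP definition (as in Zephyr's util.h); with a negative upper bound, the macro returns the bound itself:

```
#include <stdio.h>

#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define CLAMP(val, low, high) (((val) <= (low)) ? (low) : MIN(val, high))

int main(void)
{
	int max_wait = -1; /* "wait forever" under SLOPPY_IDLE */

	/* prints -1 instead of the intended 100 */
	printf("%d\n", CLAMP(100, 1, max_wait));
	return 0;
}
```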
Andy Ross
16927a6cbb kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b4e9ef0691)
2023-02-03 18:37:53 -05:00
Andy Ross
ab353d6b7d kernel/sched: Refactor IPI signaling
Minor cleanup: we had a bunch of duplicated #if logic to send IPIs;
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 3267cd327e)
2023-02-03 18:37:53 -05:00
Mark Holden
951b055b7f debug: coredump: allow for coredump backends to be defined outside of tree
Move coredump_backend_api struct to public header so that custom backends
for coredump can be defined out of tree. Create simple backend in test
directory for verification.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit 7b2b283677)
2023-02-03 18:35:56 -05:00
Nicolas Pitre
8e256b3399 scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds
Let's consider this (simplified) compilation result of a debug build
using -O0 for riscv64:

|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
|                             unsigned int initial_count,
|                             unsigned int limit)
|{
|    80000ad0:   6105                    addi    sp,sp,32
|    80000ad2:   ec06                    sd      ra,24(sp)
|    80000ad4:   e42a                    sd      a0,8(sp)
|    80000ad6:   c22e                    sw      a1,4(sp)
|    80000ad8:   c032                    sw      a2,0(sp)
|        ret = arch_is_user_context();
|    80000ada:   b39ff0ef                jal     ra,80000612
|        if (z_syscall_trap()) {
|    80000ade:   c911                    beqz    a0,80000af2
|                return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
|                                    *(uintptr_t *)&initial_count,
|                                    *(uintptr_t *)&limit,
|                                    K_SYSCALL_K_SEM_INIT);
|    80000ae0:   6522                    ld      a0,8(sp)
|    80000ae2:   00413583                ld      a1,4(sp)
|    80000ae6:   6602                    ld      a2,0(sp)
|    80000ae8:   0b700693                li      a3,183
|    [...]

We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2)
being stored in memory with the `sw` (store word) instruction. Then,
according to the source code, the address of those values is casted
as a pointer to uintptr_t values, and that pointer is dereferenced to
get back those values with the `ld` (load double) instruction this time.

In other words, the assembly does exactly what the C code indicates.
This is wrong for three reasons:

- The top half of a1 and a2 will contain garbage due to the `ld` used
  to retrieve them. Whether or not the top bits will be cleared
  eventually depends on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big
  endian system.
- The load of a1 will cause a misaligned trap as it is 4-byte aligned
  while `ld` expects a 8-byte alignment.

The above code happens to work properly when compiling with
optimizations enabled as the compiler simplifies the cast and
dereference away, and register content is used as is in that case.
That doesn't make the code any more "correct" though.

The reason for taking the address of an argument and dereferencing it
as a uintptr_t pointer is most likely to work around the fact that the
compiler refuses to cast an aggregate value to an integer, even if that
aggregate value is in fact a simple structure wrapping an integer.

So let's fix this code by:

- Removing the pointer dereference roundtrip and associated casts. This
  gets rid of all the issues listed above.
- Using a union to perform the type transition which deals with
  aggregates perfectly well. The compiler does optimize things to the
  same assembly output in the end.

This also makes the compiler happier, as those pragmas to shut up
warnings are no longer needed. The same should hold for Coverity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 1db5c8b948)
2023-02-01 20:07:43 -05:00
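The union-based transition, reduced to a standalone sketch (the wrapper type is illustrative, not Zephyr's actual k_timeout_t):

```
#include <stdint.h>

typedef struct { uint32_t ticks; } wrapped_t; /* illustrative wrapper */

static inline uintptr_t marshal(wrapped_t v)
{
	/* no pointer-cast roundtrip, so no oversized or misaligned load */
	union { uintptr_t x; wrapped_t val; } u = { .x = 0 };

	u.val = v;
	return u.x;
}
```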
Nicolas Pitre
74f2760771 scripts: gen_syscalls: add missing --split-type case
With CONFIG_TIMEOUT_64BIT it is both k_timeout_t and k_ticks_t that
need to be split, otherwise many syscalls returning a number of ticks
are being truncated to 32 bits.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 2cdac33d39)
2023-02-01 20:07:43 -05:00
Nicolas Pitre
85e0912291 scripts: gen_syscalls: fix access validation size on extra params array
It was one below the entire array size.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit df80c77ed8)
2023-02-01 20:07:43 -05:00
Jim Shu
f2c582c75d gen_app_partitions: add .sdata/.sbss section into app_smem
Some architectures (e.g. RISC-V) have .sdata/.sbss sections for small
data/bss. Memory partitions should also manage the permissions of these
sections in libraries, so they should be put into app_smem.
(For example, in RISC-V, newlib's _impure_ptr is in the .sdata section
and __malloc_top_pad is in the .sbss section.)

Signed-off-by: Jim Shu <cwshu@andestech.com>
(cherry picked from commit 46eb3e5fce)
2023-02-01 20:07:43 -05:00
Robert Lubos
c908ee8133 net: context: Separate user data pointer from FIFO reserved space
Using the same memory as a user data pointer and as FIFO reserved space
could lead to a crash in certain circumstances, as those two use cases
were not completely separated.

The crash could happen for example, if an incoming TCP connection was
abruptly closed just after being established. As TCP uses the user data
to notify error condition to the upper layer, the user data pointer
could've been used while the newly allocated context could still be
waiting on the accept queue. This damaged the data area used by the FIFO
and eventually could lead to a crash.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 2ab11953e3)
2023-01-31 16:13:42 -05:00
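The k_fifo convention at the heart of the bug: the first word of any item placed on a k_fifo is reserved for the kernel's linked list, so it cannot double as a user-data pointer. Separating the two fields looks like:

```
/* Layout convention for items queued on a k_fifo. */
struct conn_ctx {
	void *fifo_reserved; /* overwritten while on the accept queue */
	void *user_data;     /* now separate: safe to set at any time */
	/* ... rest of the context ... */
};
```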
Nicolas Pitre
175e76b302 z_thread_mark_switched_*: use z_current_get() instead of k_current_get()
k_current_get() may rely on TLS which might not yet be initialized
when those tracing functions are called, resulting in a crash.

This is different from the main branch as in that case the implementation
was completely revamped and neither k_current_get() nor z_current_get()
are used anymore. This is a much simpler fix than a backport of that
code, similar to the implication in commit commit f07df42d49 ("kernel:
make k_current_get() work without syscall").

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-23 15:01:10 -05:00
Keith Packard
c520749a71 tests: Disable HW stack protection for some mpu tests
When active, z_libc_partition consumes an MPU region which leaves too
few for some MPU tests. Free up one by disabling HW stack protection.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 19c8956946)
2023-01-11 11:02:47 -05:00
David Leach
584f52d5be tests: mem_protect: ensure allocated objects are initialized
K_OBJ_MSGQ, K_OBJ_PIPE, and K_OBJ_STACK objects have pointers
to additional memory that can be allocated. k_obj_alloc()
returns these objects uninitialized, so when they are freed
there are random opportunities for freeing invalid memory
and causing random faults.

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit fdea2a628b)
2023-01-11 11:02:47 -05:00
David Leach
d05c3bdf36 tests: mem_protect: avoid allocating K_OBJ_MSGQ in userspace.
The K_OBJ_MSGQ object is uninitialized, so when thread cleanup occurs
after an expected fault for invalid access, the test case can randomly
fault again because the thread cleanup will sometimes attempt
to free an invalid buffer_start pointer in the msgq object.

Fixes #42705

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit a0737e687c)
2023-01-11 11:02:47 -05:00
Jim Shu
3ab0c9516f tests: mem_protect: enlarge heap size of RISCV64
Because the k_thread size on RISCV64 is near 512 bytes, a heap size of
(num_of_thread * 256) bytes is not enough. Enlarge the heap size on
RISCV64 to (num_of_thread * 1024) bytes, like x86_64 and ARM64.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
(cherry picked from commit e2d67d60ba)
2023-01-11 11:02:47 -05:00
Keith Packard
df6f0f477f tests/kernel/mem_protect: Check for thread_userspace_local_data
When using THREAD_LOCAL_STORAGE the thread_userspace_local_data stuff
isn't used, so these tests wouldn't build.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit b03b2e0403)
2023-01-11 11:02:47 -05:00
Nicolas Pitre
2dc30ca1fb tests: lifo_usage: make it less susceptible to SMP races
On SMP, and especially using qemu on a busy system, it is possible for
a thread with a later timeout to get ahead of another one with an
earlier timeout. The tight timeout value difference (10ms) makes it
possible albeit difficult to reproduce. The result is something like:

|START - test_timeout_threads_pend_on_lifo
| thread (q order: 2, t/o: 0, lifo 0x4001d350)
|
|    Assertion failed at main.c:140:
|test_multiple_threads_pending: (data->timeout_order not equal to ii)
| *** thread 2 woke up, expected 1

Let's make timeout values 10 times larger to make this unlikely race
even less likely.

While at it... The timeout field in struct timeout_order_data is some ms
value and not a number of ticks, so change the type accordingly.
And leverage k_cyc_to_ms_floor32() to simplify computation in
is_timeout_in_range().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit a1ce2fb990)
2023-01-11 11:02:47 -05:00
Daniel Leung
5cbda9f1c7 tests: kernel/smp: wait for threads to exits between tests
This adds a bunch of k_thread_join() calls to make sure threads spawned
for a test are no longer running when exiting that test. This
prevents interference between tests when some threads are still
running but assumed not to be.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit dbe3874079)
2023-01-11 11:02:47 -05:00
Carlo Caione
711506349d tests/kernel/smp: Add SMP switch torture test
Formalize and rework the issue reproducer for #40795 and add it to the
SMP test suite.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
(cherry picked from commit 8edf9817c0)
2023-01-11 11:02:47 -05:00
Ederson de Souza
572921a44a tests/kernel/fpu_sharing: Run test with MP_NUM_CPUS=1
This test uses k_yield() to "sync" between threads, so it's implicitly
supposed to run on a single CPU. Make it explicit, to avoid issues on
platforms with more cores.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>

FIXKFLOATDISABLE

(cherry picked from commit ab17f69a72)
2023-01-11 11:02:47 -05:00
Chris Friedt
7da64958f0 release: Zephyr 2.7.4
Set version to 2.7.4

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 20:33:40 -05:00
Chris Friedt
49e965fd63 release: update v2.7.4 release notes
* add bugfixes to v2.7.4 release
* partial CVE notes from security

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 19:17:19 -05:00
Flavio Ceolin
c09b95fafd net: tcp: Fix possible buffer underflow
Fix a possible underflow in TCP flags parsing.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit ac2e13b9a1)
2022-12-22 12:17:46 -05:00
Andy Ross
88f09f2eac kernel/sched: Fix SMP race on pend
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch).  This
process happens with the caller's lock held, so local interrupts are
masked.  But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!

Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also.  Now we hold the scheduler lock between pend and
the final context switch.

Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.

[1] z_swap() is a historical API implemented in per-arch assembly for
    older architectures (like ARM32!).  It was designed to be called
    with what at the time was a global IRQ lock, so it doesn't
    understand the idea of a separate scheduler lock.  When we finally
get all architectures on arch_switch() this design can be cleaned up
    quite a bit.

Signed-off-by: Andy Ross <andyross@google.com>
(cherry picked from commit c32f376e99)
2022-12-20 20:54:21 -05:00
Ian Oliver
568c09ce3a log_core: Add Kconfig symbol for init priority
Users may want to do some configuration after the kernel is up, but
before initializing the log_core. Making the log_core's init priority
configurable makes that possible.

Signed-off-by: Ian Oliver <io@amperecomputing.com>
(cherry picked from commit 1675d49b4c)
2022-12-20 15:23:12 -05:00
Chris Friedt
79f6c538c1 tests: posix: add tests for sleep() and usleep()
Previously, there was no test coverage for `sleep()` and
`usleep()`.

This change adds full test coverage.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 027b79ecc4)
2022-12-06 12:47:16 -05:00
Chris Friedt
3400e6d9db lib: posix: update usleep() to follow the POSIX spec
The original implementation of `usleep()` was not compliant
to the POSIX spec in 3 ways.
- calling thread may not be suspended (because `k_busy_wait()`
  was previously used for short durations)
- if `usecs` > 1000000, previously we did not return -1 or set
  `errno` to `EINVAL`
- if interrupted, previously we did not return -1 or set
  `errno` to `EINTR`

This change addresses those issues to make `usleep()` more
POSIX-compliant.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7b95428fa0)
2022-12-06 12:47:16 -05:00
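A minimal sketch of the compliant behaviour built on `k_usleep()` (which suspends the caller and returns the unslept time in microseconds); the parameter type is simplified here:

```
#include <errno.h>
#include <stdint.h>

extern int32_t k_usleep(int32_t us); /* returns unslept microseconds */

int usleep(unsigned long useconds)
{
	if (useconds > 1000000UL) {
		errno = EINVAL;
		return -1;
	}
	if (k_usleep((int32_t)useconds) != 0) {
		errno = EINTR; /* woken early */
		return -1;
	}
	return 0;
}
```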
Chris Friedt
fbea9e74c2 lib: posix: sleep() should report unslept time in seconds
In the case that `sleep()` is interrupted, the POSIX spec requires
it to return the number of "unslept" seconds (i.e. the number of
seconds requested minus the number of seconds actually slept).

Since `k_sleep()` already returns the amount of "unslept" time
in ms, we can simply use that.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit dcfcc6454b)
2022-12-06 12:47:16 -05:00
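A minimal sketch of that mapping, rounding the leftover milliseconds up so a partially slept second counts as unslept (the rounding policy is a choice here, not mandated by POSIX):

```
#include <zephyr.h>

unsigned int sleep(unsigned int seconds)
{
	/* k_sleep() returns the remaining ("unslept") time in ms */
	int32_t rem_ms = k_sleep(K_SECONDS(seconds));

	return (unsigned int)((rem_ms + 999) / 1000);
}
```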
Chris Friedt
4d929827ac tests: posix: clock: do not use usleep in a broken way
Using `usleep()` for >= 10000000 microseconds results
in an error, so this test was kind of defective, having
explicitly called `usleep()` for seconds.

Also, check the return values of `clock_gettime()`.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 23a1f0a672)
2022-12-06 12:47:16 -05:00
Jamie McCrae
37b3641f00 manifest: Update mcumgr revision
Updates mcumgr to resolve an issue with the state of a firmware
update not being reset if an error occurs or if the underlying
area is erased.

Fixes #52247
Backporting commit 4c48b4f21a

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 16:02:09 -05:00
Jamie McCrae
3d940f1d1b net: Synchronise user data size with mcumgr
Fixes an issue with 2 user data sizes being out of sync causing
CI failures.

Fixes #52591

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 15:26:32 -05:00
Martí Bolívar
f0d2a3e2fe python-devicetree: CI hotfix
Pin the types-PyYAML version to 6.0.7. Version 6.0.8 is causing CI
errors for other pull requests, so we need this in to get other PRs
moving.

Fixes: #46286

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d405a43b1 ci: Clone cached Zephyr repository with shared objects
In the new ephemeral Zephyr runners, the cached repository files are
located in a foreign file system and Git clone operation cannot create
hard-links to the cached repository objects, which forces the Git clone
operation to copy the objects from the cache file system to the runner
container file system.

This commit updates the CI workflows to instead perform a "shared
clone" of the cached repository, which allows the cloned repository to
utilise the object database of the cached repository.

While "shared clone" can be often dangerous because the source
repository objects can be deleted, in this case, the source repository
(i.e. cached repository) is mounted as read-only and immutable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2dbe845f21 ci: codecov: Clone cached Zephyr repository
This commit updates the codecov workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
4efa225daa ci: codecov: Use zephyr-runner
This commit updates the codecov workflow to use the new Kubernetes-
based zephyr-runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0ee1955d5b ci: clang: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
a0eb50be3e ci: clang: Clone cached Zephyr repository
This commit updates the clang workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2da9d7577f ci: clang: Use zephyr-runner
This commit updates the clang workflow to use the new Kubernetes-based
zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d5e2a071c1 ci: twister: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
633dd420d9 ci: twister: Clone cached Zephyr repository
This commit updates the twister workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
780b4e08cb ci: twister: Use zephyr-runner
This commit updates the twister workflow to use the new Kubernetes-
based zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
281185e49d ci: Use actions/cache@v3
This commit updates the CI workflows to use the latest "cache" action
v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
cb240b4f4c ci: Use actions/setup-python@v4
This commit updates the CI workflows to use the latest "setup-python"
action v4, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ff5ee88ac0 ci: Use actions/upload-artifact@v3
This commit updates the CI workflows to use the latest
"upload-artifact" action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3ef958116 ci: Use actions/checkout@v3
This commit updates the CI workflows to use the latest "checkout"
action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
727806f483 ci: twister: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ec6c9d3637 ci: release: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c5e88dbbda ci: codecov: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3e4d65dd1 ci: clang: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
16207ae32f ci: footprint-tracking: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
dbf2ca1b0a ci: footprint: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0e204784ee ci: codecov: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c44406e091 ci: clang: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
1da82633b2 ci: twister: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ad6636f09c ci: bluetooth-tests: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Anas Nashif
8f4b366c0f ci: update cancel-workflow-action action to 0.11.0
Update the action to use the latest release, which resolves a warning
about Node 12 being deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f8960f5ef ci: compliance: Use upload-artifact action v3
This commit updates the "Create a release" workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d9200eb55d ci: issue_count: Use upload-artifact action v3
This commit updates the issue count tracker workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ccdc1d3777 ci: doc-build: Use upload-artifact action v3
This commit updates the documentation build workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
197c4ddcbd ci: compliance: Use upload-artifact action v3
This commit updates the compliance check workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7287947535 ci: backport: Use Ubuntu 20.04 runner image
This commit updates the backport workflow to use the ubuntu-20.04
runner image because the ubuntu-18.04 image is deprecated and will
become unsupported by December 1, 2022.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
b0be164419 ci: west_cmds: Use specific version of runner image
This commit updates the "West Command Tests" workflow to use a specific
runner image version (ubuntu-20.04, macos-11, windows-2022) instead of
the latest version in order to prevent any potential breakages due to
the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
fab06842d5 ci: devicetree_checks: Use specific version of runner image
This commit updates the "Devicetree script tests" workflow to use a
specific runner image version (ubuntu-20.04, macos-11, windows-2022)
instead of the latest version in order to prevent any potential
breakages due to the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b4eafc54a ci: twister: Use Ubuntu 20.04 runner image
This commit updates the "Run tests with twister" workflow to use a
specific runner image version, ubuntu-20.04, instead of the latest
version in order to prevent any potential breakages due to the 'latest'
version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9becb117b2 ci: twister_tests: Use Ubuntu 20.04 runner image
This commit updates the Twister Testsuite workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
87ab3e4d16 ci: stale_issue: Use Ubuntu 20.04 runner image
This commit updates the stale issue workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b8305cc11 ci: release: Use Ubuntu 20.04 runner image
This commit updates the "Create a Release" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
728e5720cc ci: manifest: Use Ubuntu 20.04 runner image
This commit updates the manifest check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6c11685863 ci: license_check: Use Ubuntu 20.04 runner image
This commit updates the license check workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0f783a4ce0 ci: issue_count: Use Ubuntu 20.04 runner image
This commit updates the issue count tracker workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
02dba17a59 ci: footprint: Use Ubuntu 20.04 runner image
This commit updates the footprint delta workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
860e7307bc ci: footprint-tracking: Use Ubuntu 20.04 runner image
This commit updates the footprint tracking workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7b087b8ac5 ci: errno: Use Ubuntu 20.04 runner image
This commit updates the error number check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
15f39300c0 ci: doc: Use Ubuntu 20.04 runner image
This commit updates the documentation build and publish workflows to
use a specific runner image version, ubuntu-20.04, instead of the
latest version in order to prevent any potential breakages due to the
'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
f6f69516ac ci: daily_test_version: Use Ubuntu 20.04 runner image
This commit updates the daily test version workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f9dd18a87 ci: compliance: Use Ubuntu 20.04 runner image
This commit updates the compliance check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d8639b4a8 ci: coding_guidelines: Use Ubuntu 20.04 runner image
This commit updates the coding guidelines workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6e723ff755 ci: clang: Use Ubuntu 20.04 runner image
This commit updates the Clang workflow to use a specific runner image
version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
bff97ed4cc ci: backport_issue_check: Use Ubuntu 20.04 runner image
This commit updates the backport issue check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
97e2959452 ci: bluetooth-tests: Use Ubuntu 20.04 runner image
This commit updates the Bluetooth tests workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
91970658ec ci: issue_count: Fix stale reference to master branch
This commit fixes a stale reference to the 'master' branch when
downloading the issue report configuration file.

Note that the 'master' branch is no longer the default
branch -- 'main' is.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
fc2585af00 ci: doc-build: skip Kconfig docs build on pull requests
The documentation supports a special target named "html-fast" that skips
generation of all Kconfig pages. Instead, it creates a single dummy page
where a reference to all existing Kconfig options is placed. This means
that references are resolved, but content is not rendered. Since Kconfig
help is rendered as a literal, the chances of breaking the documentation
build due to Kconfig changes should be low. The change proposed in this
patch should speed up documentation build on pull requests while a
proper solution is found for the Kconfig docs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
f95edd3a85 ci: doc-build: use concurrency group to cancel in progress builds
Add the documentation build jobs to a concurrency group so that branch
force pushes will automatically cancel in progress jobs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
2b9ed76734 ci: doc-build: disable parallel build
When an error happens during the Sphinx build (e.g. due to a broken
reference), the process hangs on CI when run with both `-j auto`
(parallel build) and `-W` (warnings as errors) options. The root cause
of the issue is unknown, and does not seem to happen locally. Parallel
builds are experimental on Sphinx, so they have been disabled on CI for
now.  Because CI runner is a single core machine, the build time should
remain equal or similar. The option is still left as default, so local
builds will continue to benefit from parallelization.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
5a041bff3d ci: doc-build: set timeout to 30 minutes
Give the documentation build up to 30 minutes to finish. This should improve
the user experience when Sphinx hangs due to broken references, a
problem that needs some investigation.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Chris Friedt
f61664c6f8 tests: kernel: mutex: move race timeout test to mutex_api
Previously, this change was added to `mutex_error_case`.

That worked fine in `main`, but once the change was backported to
`v2.7-branch`, the test would fail because it *did not* cause a
failure. The reason for that was that the `mutex_error_case`
suite has `CONFIG_ZTEST_FATAL_HOOK=y`.

The newer ztest API allowed a separate suite to be used, letting
the test pass (although it did not really fit in with the rest
of the testsuite).

The solution is to simply merge it with the `mutex_api` suite
which uses non-inverted success logic.

This change will also have to be cherry-picked for the backport
in #49031.

Fixes #48056.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-11-18 17:33:10 -05:00
Qi Yang
e2f05e9328 kernel: mutex: fix races when lock timeout
Say threadA holds a mutex and threadB tries
to lock it with a timeout; a race would occur
if threadA unlocks that mutex after threadB
got unpended by sys_clock and before it gets
scheduled and calls k_spin_lock.

This patch fixes the issue by checking the
mutex's status again after k_spin_lock is called.

Fixes #48056

Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
(cherry picked from commit 89c4a074dc)
2022-11-18 17:33:10 -05:00
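For illustration, a minimal C sketch of that re-check; pend_on_mutex() is a
hypothetical stand-in for the kernel's internal wait path, not the actual
kernel code:

    #include <kernel.h>

    /* Hypothetical helper: blocks until the mutex is handed over or
     * the timeout expires, returning -EAGAIN on timeout.
     */
    extern int pend_on_mutex(struct k_mutex *mutex, k_timeout_t timeout);

    static struct k_spinlock lock;

    static int lock_with_timeout(struct k_mutex *mutex, k_timeout_t timeout)
    {
        int ret = pend_on_mutex(mutex, timeout);
        k_spinlock_key_t key = k_spin_lock(&lock);

        /* The owner may have released the mutex after the timeout
         * fired but before this thread re-acquired the spinlock, so
         * re-check the mutex state instead of trusting the timeout.
         */
        if (ret == -EAGAIN && mutex->owner == k_current_get()) {
            ret = 0; /* the lock was handed over after all */
        }
        k_spin_unlock(&lock, key);
        return ret;
    }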
Jamie McCrae
ea0b53b150 mgmt: mcumgr: Fix Bluetooth transport issues
This fixes issues with the Bluetooth SMP transport whereby deadlocks
could arise from connection references being held in long-lasting
mcumgr command processing functions.

Note: Heavily modified from original PR due to differences in MCUmgr
operation since the Zephyr 2.7 release.

Fixes #51846

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 2aca1242f7)
2022-11-17 18:41:57 -05:00
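A hedged sketch of the general pattern behind the fix; queue_request() is a
made-up placeholder and the real transport code differs:

    #include <bluetooth/conn.h>

    extern void queue_request(const void *buf, size_t len); /* placeholder */

    static void smp_bt_recv_sketch(struct bt_conn *conn,
                                   const void *buf, size_t len)
    {
        struct bt_conn *ref = bt_conn_ref(conn);

        /* Copy or queue what is needed, then drop the reference
         * *before* the long-lasting mcumgr handler runs, instead of
         * holding it across the whole operation.
         */
        queue_request(buf, len);
        bt_conn_unref(ref); /* released early: no deadlock window */
    }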
Stephanos Ioannidis
56664826b2 ci: doc: Publish pull request docs to builds.zephyrproject.io
This commit updates the CI documentation build workflow to upload the
HTML pull request documentation builds to the S3 builds.zephyrproject.io
bucket so that they are directly accessible from the web.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1dd92ec865)
2022-11-16 04:23:46 +09:00
Chris Friedt
bbb49dec38 net: sockets: socketpair: do not allow blocking IO in ISR context
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.

It is not acceptable to block in ISR context, so robustness here
means returning -1 to indicate an error and setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).

Fixes #25417

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit d832b04e96)
2022-11-15 12:12:40 -05:00
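A minimal sketch of that guard, with made-up names; the real check sits
inside the socketpair read/write paths:

    #include <kernel.h>
    #include <errno.h>
    #include <sys/types.h>

    static ssize_t spair_write_sketch(const void *buf, size_t len,
                                      bool would_block)
    {
        ARG_UNUSED(buf);

        if (k_is_in_isr() && would_block) {
            /* Never block in ISR context; fail fast instead. */
            errno = EAGAIN; /* same value as EWOULDBLOCK */
            return -1;
        }
        /* ... normal (possibly blocking) write path ... */
        return (ssize_t)len;
    }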
Stephanos Ioannidis
8211ebf759 ci: Limit workflow scope to v2.7-branch
This commit updates the CI workflows to limit their event trigger scope
to the v2.7-branch in order to prevent the workflows from running when
the backport branches are pushed.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-02 03:12:21 +09:00
Jay Vasanth
1f3121b6b2 soc arm: MEC172x soc.h - Include custom IRQn_Type
Fix for issue #41012 to allow the compiler to treat
IRQn_Type as wider than 8 bits. This ensures NVIC numbers
greater than 127 (required for the MEC172x device) work
correctly with the irq_enable() API.

Signed-off-by: Jay Vasanth <jay.vasanth@microchip.com>
(cherry picked from commit 4495f43dca)
2022-10-11 20:17:47 -04:00
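Roughly, the idea is an SoC-specific IRQn_Type whose enumerators exceed 127;
the names below are placeholders, not the real MEC172x list:

    typedef enum {
        Reset_IRQn     = -15, /* Cortex-M core exceptions ... */
        SysTick_IRQn   = -1,
        SOC_FIRST_IRQn = 0,
        /* ... */
        SOC_LAST_IRQn  = 180, /* > 127: no longer fits in int8_t */
    } IRQn_Type;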
Jamie McCrae
d7820faf7c drivers: counter: Update counter_set_channel_alarm documentation
Adds the -EBUSY return code, returned when an alarm is already
active, to the documentation of the counter_set_channel_alarm
function.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit c607599068)
2022-10-07 18:23:58 -04:00
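A usage sketch of the newly documented return value; device and alarm
configuration setup are elided:

    #include <drivers/counter.h>

    static int set_alarm_retry(const struct device *dev,
                               const struct counter_alarm_cfg *cfg)
    {
        int err = counter_set_channel_alarm(dev, 0, cfg);

        if (err == -EBUSY) {
            /* An alarm is already active on channel 0: cancel it,
             * then retry.
             */
            counter_cancel_channel_alarm(dev, 0);
            err = counter_set_channel_alarm(dev, 0, cfg);
        }
        return err;
    }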
Jordan Yates
5d29d52445 scripts: zspdx: fix writing custom license IDs
The builtin list function `.sort()` sorts the list in-place and returns
None. As this is an invalid type for iteration, use the builtin `sorted`
function, which returns a sorted copy of the list that we can iterate
over.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
(cherry picked from commit 06aae61019)
2022-10-04 09:09:14 -07:00
Yong Cong Sin
be11187e09 mgmt/hawkbit: Print hrefs only if there's an update
If there is no update from the server, the _links field will be NULL.
Check whether it is NULL before trying to LOG these strings.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2022-10-04 07:43:25 -07:00
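A simplified sketch of that guard; the struct layout is reduced to the
relevant field and printk stands in for the LOG calls:

    #include <sys/printk.h>

    struct hawkbit_links { const char *deployment_base; }; /* simplified */
    struct hawkbit_ctl_res { struct hawkbit_links *_links; };

    static void log_hrefs(const struct hawkbit_ctl_res *res)
    {
        /* _links is NULL when the server reports no pending update. */
        if (res->_links != NULL) {
            printk("deploymentBase: %s\n", res->_links->deployment_base);
        }
    }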
Ruud Derwig
9044091e21 ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for the initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
(cherry picked from commit 9bccb5cc4b)
2022-09-23 13:57:53 -04:00
Daniel Leung
170ba8dfcb soc: esp32: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change, as Z_THREAD_STACK_BUFFER
is simply an alias of Z_KERNEL_STACK_BUFFER when userspace is
disabled, and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit b820cde7a9)
2022-09-23 13:57:40 -04:00
Daniel Leung
e3f1b6fc54 soc: intel_adsp: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change, as Z_THREAD_STACK_BUFFER
is simply an alias of Z_KERNEL_STACK_BUFFER when userspace is
disabled, and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit 74df88d8f5)
2022-09-23 13:57:40 -04:00
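The three commits above all swap Z_THREAD_STACK_BUFFER for
Z_KERNEL_STACK_BUFFER. A toy sketch of why the distinction matters; the
reserved sizes here are made up and the real values are arch-specific:

    #define SK_KERNEL_RESERVED 0  /* kernel-only stack: no extra space */
    #define SK_THREAD_RESERVED 64 /* e.g. MPU guard when userspace is on */

    #define SK_KERNEL_STACK_BUFFER(sym) ((char *)(sym) + SK_KERNEL_RESERVED)
    #define SK_THREAD_STACK_BUFFER(sym) ((char *)(sym) + SK_THREAD_RESERVED)

    /* With userspace enabled, using the thread variant on a stack that
     * was declared as a kernel stack points 64 bytes past the true
     * base -- the kind of corruption fixed above.
     */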
Daniel Leung
7ac05528ca tests: coredump: skip acrn_ehl_crb
The coredump tests output quite a large amount of data into
the console. However, the ACRN console only has very limited
history (comparatively), such that twister is unable to
match the necessary strings to consider the tests passed.
So skip those tests on acrn_ehl_crb.

Fixes #40887

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit c1ac125068)
2022-09-20 17:06:00 +01:00
Martí Bolívar
64f411f0fb edtlib: remove python 3.5 workaround
Remove a yaml monkeypatch. It is no longer needed since we support
Python 3.6 or later on Zephyr v2.7 LTS and 3.8 or later on what will
become v3.2.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 7ef9c4b20e)
2022-09-18 08:32:31 -04:00
Anas Nashif
2e2dd96ae4 actions: west/devicetree: exclude python 3.6 on windows
This version of Python is no longer available. Exclude it for now to
unblock CI.

Fixes: #49139

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 773242cbb7)
2022-09-18 08:32:31 -04:00
Torsten Rasmussen
a311291294 cmake: kconfig: preserved quotes for Kconfig string values
Fixes: #49569

Kconfig requires quoted strings in its configuration files, like this:
> CONFIG_A_STRING="foo bar"

But CMake expects strings without additional quotes,
and therefore quotes are stripped when loading Kconfig config files
into CMake.

This is particularly important when the string in Kconfig is a path
to a file. In this case, not stripping the quotes leads to an error as
the file cannot be found.

When users pass a string to Kconfig through CMake, they are expected to
pass it so that quotes are correct as seen from Kconfig, that is:
> cmake -DCONFIG_A_STRING=\"foo bar\"

In CMake, those quotes are written as-is to the Kconfig extra config
file, and then removed in the CMake cache.
After Kconfig processing, the Kconfig settings are read back into CMake
but without quotes. Settings that were passed through the CMake cache,
for example using `-D`, are written back to the cache, but this time
without the quotes. This results in Kconfig errors on subsequent CMake
runs.

Instead of writing the Kconfig value setting back to the CMake cache,
introduce an internal shadow symbol in the cache prefixed with `CLI_`.
This allows the CMake cache to keep the value correctly formatted for
Kconfig extra config creation, while at the same time keep existing
behavior for CONFIG_ symbols read from Kconfig.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2022-08-31 06:26:03 -04:00
Gerard Marull-Paretas
5221787303 scripts: west_commands: runners: jlink: support pylink >= 0.14.2
It looks like the latest release, 0.14.2, changed the contents of
JLINK_SDK_NAME back to what it was before the 0.14.0 release. That
means the previous fix only applies to two releases: 0.14.0/0.14.1.

Fixes #49564

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit 2d712c6c55)
2022-08-31 06:24:41 -04:00
Gerard Marull-Paretas
63d0c7fcae scripts: west_commands: runners: jlink: support pylink >= 0.14
pylink 0.14.0 changed the class variable in which the JLink DLL
library name (libjlinkarm) is stored. This patch adds support for
the new pylink releases while keeping backwards compatibility.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit a57001347f)
2022-08-31 06:24:41 -04:00
Yong Cong Sin
8abef50e97 subsys/mgmt/hawkbit: Set ai_socktype if IPV4/IPV6
Follow the implementation of updatehub and set the
`ai_socktype` only for IPV4/IPV6.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit dd9d6bbb44)
2022-08-30 13:01:23 -04:00
Yong Cong Sin
0306e75a5f subsys/mgmt/hawkbit: Init the hints struct to a known value
Initialize the `hints` struct to a known value so that it won't
cause undefined behavior when used in `getaddrinfo()`.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 2ed88e998a)
2022-08-30 13:01:23 -04:00
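A minimal sketch of the fix; the family/socktype values are examples and
error handling is elided:

    #include <net/socket.h>
    #include <string.h>

    static int resolve_sketch(const char *host, struct zsock_addrinfo **res)
    {
        struct zsock_addrinfo hints;

        /* Zero every field first: getaddrinfo() reads more members
         * than the ones set explicitly below.
         */
        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;

        return zsock_getaddrinfo(host, "443", &hints, res);
    }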
125 changed files with 2058 additions and 671 deletions

View File

@@ -9,7 +9,7 @@ on:
jobs:
backport:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
name: Backport
steps:
- name: Backport

View File

@@ -8,11 +8,11 @@ on:
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
steps:
- name: Check out source code
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Install Python dependencies
run: |

View File

@@ -8,7 +8,7 @@ on:
jobs:
bluetooth-test-results:
name: "Publish Bluetooth Test Results"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.event.workflow_run.conclusion != 'skipped'
steps:

View File

@@ -10,17 +10,13 @@ on:
- "soc/posix/**"
- "arch/posix/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bluetooth-test-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
bluetooth-test-build:
runs-on: ubuntu-latest
needs: bluetooth-test-prep
bluetooth-test:
runs-on: ubuntu-20.04
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -38,7 +34,7 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: west setup
run: |
@@ -55,7 +51,7 @@ jobs:
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: bluetooth-test-results
path: |
@@ -64,7 +60,7 @@ jobs:
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: event
path: |

View File

@@ -2,22 +2,18 @@ name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
clang-build:
runs-on: zephyr_runner
needs: clang-build-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -30,12 +26,14 @@ jobs:
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -72,7 +70,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
@@ -80,8 +78,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -100,12 +98,12 @@ jobs:
# We can limit scope to just what has changed
if [ -s testplan.csv ]; then
echo "::set-output name=report_needed::1";
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --inline-logs -M -N -v --load-tests testplan.csv --retry-failed 2
else
# if nothing is run, skip reporting step
echo "::set-output name=report_needed::0";
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
@@ -114,7 +112,7 @@ jobs:
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml

View File

@@ -4,22 +4,18 @@ on:
schedule:
- cron: '25 */3 * * 1-5'
jobs:
codecov-prep:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov:
runs-on: zephyr_runner
needs: codecov-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -32,8 +28,14 @@ jobs:
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -54,7 +56,7 @@ jobs:
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -63,8 +65,8 @@ jobs:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -94,7 +96,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
@@ -108,7 +110,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Download Artifacts
@@ -144,8 +146,8 @@ jobs:
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
message("::set-output name=mergefiles::${MERGELIST}")
message("::set-output name=covfiles::${FILELIST}")
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
- name: Merge coverage files
run: |

View File

@@ -4,17 +4,17 @@ on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip

View File

@@ -4,11 +4,11 @@ on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -20,7 +20,7 @@ jobs:
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -28,13 +28,13 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
@@ -72,7 +72,7 @@ jobs:
./scripts/ci/check_compliance.py -m Codeowners -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: compliance.xml

View File

@@ -12,15 +12,15 @@ on:
jobs:
get_version:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
@@ -28,7 +28,7 @@ jobs:
pip3 install gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0

View File

@@ -6,10 +6,14 @@ name: Devicetree script tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
@@ -21,20 +25,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -42,7 +48,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -51,7 +57,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,10 +5,10 @@ name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
- cron: '0 */3 * * *'
push:
tags:
- v*
- v*
pull_request:
paths:
- 'doc/**'
@@ -34,18 +34,23 @@ env:
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen graphviz
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -69,38 +74,71 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W -j auto" make -C doc html
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
DOC_TARGET="html-fast"
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W" make -C doc ${DOC_TARGET}
- name: compress-docs
run: |
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
echo "${PR_NUM}" > pr_num
echo "::notice:: Documentation will be available shortly at: ${DOC_URL}"
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
path: pr_num
doc-build-pdf:
name: "Documentation Build (PDF)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
@@ -123,7 +161,7 @@ jobs:
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: pdf-output
path: doc/_build/latex/zephyr.pdf

.github/workflows/doc-publish-pr.yml (new file, 63 lines)
View File

@@ -0,0 +1,63 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish (Pull Request)
on:
workflow_run:
workflows: ["Documentation Build"]
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Load PR number
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
id: check-pr
uses: carpentries/actions/check-valid-pr@v0.8
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete

View File

@@ -2,23 +2,21 @@
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Publish Documentation
name: Documentation Publish
on:
workflow_run:
workflows: ["Documentation Build"]
branches:
- main
- v*
tags:
- v*
- main
- v*
types:
- completed
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:
@@ -34,8 +32,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3

View File

@@ -6,13 +6,13 @@ on:
jobs:
check-errno:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: zephyrprojectrtos/ci:v0.18.4
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Run errno.py
run: |

View File

@@ -13,19 +13,14 @@ on:
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-tracking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-tracking-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -44,7 +39,7 @@ jobs:
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -58,8 +53,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.FOOTPRINT_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.FOOTPRINT_AWS_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint

View File

@@ -2,19 +2,14 @@ name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-delta:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -25,16 +20,12 @@ jobs:
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0

View File

@@ -14,13 +14,13 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/master/.github/workflows/issues-report-config.json
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
@@ -34,7 +34,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}
@@ -43,8 +43,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Post Results

View File

@@ -7,6 +7,4 @@ jobs:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2.1.1
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'
- uses: actions/labeler@v4

View File

@@ -4,7 +4,7 @@ on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Scan code for licenses
steps:
- name: Checkout the code
@@ -15,7 +15,7 @@ jobs:
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts

View File

@@ -6,11 +6,11 @@ on:
jobs:
contribs:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}

View File

@@ -7,15 +7,16 @@ on:
jobs:
release:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
@@ -23,7 +24,7 @@ jobs:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx

View File

@@ -6,7 +6,7 @@ on:
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3

View File

@@ -2,29 +2,27 @@ name: Run tests with twister
on:
push:
branches:
- v2.7-branch
pull_request_target:
branches:
- v2.7-branch
schedule:
# Run at 00:00 on Saturday
- cron: '20 0 * * 6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
twister-build-cleanup:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
twister-build-prep:
runs-on: zephyr_runner
needs: twister-build-cleanup
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -38,14 +36,16 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -102,18 +102,18 @@ jobs:
else
size=0
fi
echo "::set-output name=subset::${subset}";
echo "::set-output name=size::${size}";
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr_runner
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -128,13 +128,14 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -173,7 +174,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -182,8 +183,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -220,7 +221,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -231,7 +232,7 @@ jobs:
twister-test-results:
name: "Publish Unit Tests Results"
needs: twister-build
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()

View File

@@ -5,12 +5,16 @@ name: Twister TestSuite
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
@@ -24,17 +28,17 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,11 +5,15 @@ name: Zephyr West Command Tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -22,20 +26,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -43,7 +49,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -52,7 +58,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -627,7 +627,7 @@ if(CONFIG_64BIT)
endif()
if(CONFIG_TIMEOUT_64BIT)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t --split-type k_ticks_t)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 3
PATCHLEVEL = 5
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -56,7 +56,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one slave cpu can read it per time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;

View File

@@ -27,6 +27,7 @@ endif # BOARD_BL5340_DVK_CPUAPP
config BUILD_WITH_TFM
default y if BOARD_BL5340_DVK_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -20,6 +20,7 @@ config BOARD
# force building with TF-M as the Secure Execution Environment.
config BUILD_WITH_TFM
default y if TRUSTED_EXECUTION_NONSECURE
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if GPIO

View File

@@ -4,7 +4,10 @@ type: mcu
arch: arm
ram: 4096
flash: 4096
simulation: qemu
# TFM is not supported by default in the Zephyr LTS release.
# Excluding this board's simulator to avoid CI failures.
#
#simulation: qemu
toolchain:
- gnuarmemb
- zephyr

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF5340DK_NRF5340_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF9160DK_NRF9160_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -518,7 +518,7 @@ function(zephyr_library_cc_option)
string(MAKE_C_IDENTIFIER check${option} check)
zephyr_check_compiler_flag(C ${option} ${check})
if(${check})
if(${${check}})
zephyr_library_compile_options(${option})
endif()
endforeach()
@@ -1003,9 +1003,9 @@ endfunction()
function(zephyr_check_compiler_flag lang option check)
# Check if the option is covered by any hardcoded check before doing
# an automated test.
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" check exists)
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" _${check} exists)
if(exists)
set(check ${check} PARENT_SCOPE)
set(${check} ${_${check}} PARENT_SCOPE)
return()
endif()
@@ -1110,11 +1110,11 @@ function(zephyr_check_compiler_flag_hardcoded lang option check exists)
# because they would produce a warning instead of an error during
# the test. Exclude them by toolchain-specific blocklist.
if((${lang} STREQUAL CXX) AND ("${option}" IN_LIST CXX_EXCLUDED_OPTIONS))
set(check 0 PARENT_SCOPE)
set(exists 1 PARENT_SCOPE)
set(${check} 0 PARENT_SCOPE)
set(${exists} 1 PARENT_SCOPE)
else()
# There does not exist a hardcoded check for this option.
set(exists 0 PARENT_SCOPE)
set(${exists} 0 PARENT_SCOPE)
endif()
endfunction(zephyr_check_compiler_flag_hardcoded)
@@ -1862,7 +1862,7 @@ function(check_set_linker_property)
zephyr_check_compiler_flag(C "" ${check})
set(CMAKE_REQUIRED_FLAGS ${SAVED_CMAKE_REQUIRED_FLAGS})
if(${check})
if(${${check}})
set_property(TARGET ${LINKER_PROPERTY_TARGET} ${APPEND} PROPERTY ${property} ${option})
endif()
endfunction()

View File

@@ -163,12 +163,23 @@ endforeach()
unset(EXTRA_KCONFIG_OPTIONS)
get_cmake_property(cache_variable_names CACHE_VARIABLES)
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if("${name}" MATCHES "^CLI_CONFIG_")
# Variable was set by user in earlier invocation, let's append to extra
# config unless a new value has been given.
string(REGEX REPLACE "^CLI_" "" org_name ${name})
if(NOT DEFINED ${org_name})
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${org_name}=${${name}}"
)
endif()
elseif("${name}" MATCHES "^CONFIG_")
# When a cache variable starts with 'CONFIG_', it is assumed to be
# a Kconfig symbol assignment from the CMake command line.
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${name}=${${name}}"
)
set(CLI_${name} "${${name}}")
list(APPEND cli_config_list ${name})
endif()
endforeach()
@@ -296,21 +307,20 @@ add_custom_target(config-twister DEPENDS ${DOTCONFIG})
# Remove the CLI Kconfig symbols from the namespace and
# CMakeCache.txt. If the symbols end up in DOTCONFIG they will be
# re-introduced to the namespace through 'import_kconfig'.
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
unset(${name})
unset(${name} CACHE)
endif()
foreach (name ${cli_config_list})
unset(${name})
unset(${name} CACHE)
endforeach()
# Parse the lines prefixed with CONFIG_ in the .config file from Kconfig
import_kconfig(CONFIG_ ${DOTCONFIG})
# Re-introduce the CLI Kconfig symbols that survived
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if(DEFINED ${name})
set(${name} ${${name}} CACHE STRING "")
endif()
# Cache the CLI Kconfig symbols that survived through Kconfig, prefixed with CLI_.
# Remove those who might have changed compared to earlier runs, if they no longer appears.
foreach (name ${cli_config_list})
if(DEFINED ${name})
set(CLI_${name} ${CLI_${name}} CACHE INTERNAL "")
else()
unset(CLI_${name} CACHE)
endif()
endforeach()

View File

@@ -2,6 +2,197 @@
.. _zephyr_2.7:
.. _zephyr_2.7.5:
Zephyr 2.7.5
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.4 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`41111` - utils: tmcvt: fix integer overflow after 6.4 days with ``gettimeofday()`` and ``z_tmcvt()``
* :github:`51663` - tests: kernel: increase coverage for kernel and mmu tests
* :github:`53124` - cmake: fix argument passing in ``zephyr_check_compiler_flag()`` cmake function
* :github:`53315` - net: tcp: fix possible underflow in ``tcp_flags()``.
* :github:`53981` - scripts: fixes for ``gen_syscalls`` and ``gen_app_partitions``
* :github:`53983` - init: correct early init time calls to ``k_current_get()`` when TLS is enabled
* :github:`54140` - net: fix BUS FAULT when running nmap towards echo_async sample
* :github:`54325` - coredump: support out-of-tree coredump backend definition
* :github:`54386` - kernel: correct SMP scheduling with more than 2 CPUs
* :github:`54527` - tests: kernel: remove faulty test from tests/kernel/poll
* :github:`55019` - bluetooth: initialize backport of #54905 failed
* :github:`55068` - net: ipv6: validate arguments in ``net_if_ipv6_set_reachable_time()``
* :github:`55069` - net: core: ``net pkt`` shell command missing input validation
* :github:`55323` - logging: fix userspace runtime filtering
* :github:`55490` - cxx: fix compile error in C++ project for bad flags ``-Wno-pointer-sign`` and ``-Werror=implicit-int``
* :github:`56071` - security: MbedTLS: update to v2.28.3
* :github:`56729` - posix: SCHED_RR valid thread priorities
* :github:`57210` - drivers: pcie: endpoint: pcie_ep_iproc: correct use of optional devicetree binding
* :github:`57419` - tests: dma: support 64-bit addressing in tests
* :github:`57710` - posix: support building eventfd on arm-clang
mbedTLS
*******
mbedTLS has been moved to the 2.28.x series (2.28.3, precisely). This is an
LTS release that will be supported with bug fixes and security fixes until
the end of 2024.
Detailed information can be found in:
https://github.com/Mbed-TLS/mbedtls/releases/tag/v2.28.3
https://github.com/zephyrproject-rtos/zephyr/issues/56071
This version is incompatible with TF-M and, because of this, TF-M is no
longer supported in the Zephyr LTS. If TF-M is required, it can be manually
added back by changing the mbedTLS revision in ``west.yaml`` to the previous
one (5765cb7f75a9973ae9232d438e361a9d7bbc49e7). This should be carefully
assessed by a security expert to ensure that the known vulnerabilities in
that version don't affect the product.
Vulnerabilities addressed in this update:
* MBEDTLS_AESNI_C, which is enabled by default, was silently ignored on
builds that couldn't compile the GCC-style assembly implementation
(most notably builds with Visual Studio), leaving them vulnerable to
timing side-channel attacks. There is now an intrinsics-based AES-NI
implementation as a fallback for when the assembly one cannot be used.
* Fix potential heap buffer overread and overwrite in DTLS if
MBEDTLS_SSL_DTLS_CONNECTION_ID is enabled and
MBEDTLS_SSL_CID_IN_LEN_MAX > 2 * MBEDTLS_SSL_CID_OUT_LEN_MAX.
* An adversary with access to precise enough information about memory
accesses (typically, an untrusted operating system attacking a secure
enclave) could recover an RSA private key after observing the victim
performing a single private-key operation if the window size used for the
exponentiation was 3 or smaller. Found and reported by Zili KOU,
Wenjian HE, Sharad Sinha, and Wei ZHANG. See "Cache Side-channel Attacks
and Defenses of the Sliding Window Algorithm in TEEs" - Design, Automation
and Test in Europe 2023.
* Zeroize dynamically-allocated buffers used by the PSA Crypto key storage
module before freeing them. These buffers contain secret key material, and
could thus potentially leak the key through freed heap.
* Fix a potential heap buffer overread in TLS 1.2 server-side when
MBEDTLS_USE_PSA_CRYPTO is enabled, an opaque key (created with
mbedtls_pk_setup_opaque()) is provisioned, and a static ECDH ciphersuite
is selected. This may result in an application crash or potentially an
information leak.
* Fix a buffer overread in DTLS ClientHello parsing in servers with
MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE enabled. An unauthenticated client
or a man-in-the-middle could cause a DTLS server to read up to 255 bytes
after the end of the SSL input buffer. The buffer overread only happens
when MBEDTLS_SSL_IN_CONTENT_LEN is less than a threshold that depends on
the exact configuration: 258 bytes if using mbedtls_ssl_cookie_check(),
and possibly up to 571 bytes with a custom cookie check function.
Reported by the Cybeats PSI Team.
* Zeroize several intermediate variables used to calculate the expected
value when verifying a MAC or AEAD tag. This hardens the library in
case the value leaks through a memory disclosure vulnerability. For
example, a memory disclosure vulnerability could have allowed a
man-in-the-middle to inject fake ciphertext into a DTLS connection.
* In psa_cipher_generate_iv() and psa_cipher_encrypt(), do not read back
from the output buffer. This fixes a potential policy bypass or decryption
oracle vulnerability if the output buffer is in memory that is shared with
an untrusted application.
* Fix a double-free that happened after mbedtls_ssl_set_session() or
mbedtls_ssl_get_session() failed with MBEDTLS_ERR_SSL_ALLOC_FAILED
(out of memory). After that, calling mbedtls_ssl_session_free()
and mbedtls_ssl_free() would cause an internal session buffer to
be free()'d twice.
* Fix a bias in the generation of finite-field Diffie-Hellman-Merkle (DHM)
private keys and of blinding values for DHM and elliptic curves (ECP)
computations.
* Fix a potential side channel vulnerability in ECDSA ephemeral key generation.
An adversary who is capable of very precise timing measurements could
learn partial information about the leading bits of the nonce used for the
signature, allowing the recovery of the private key after observing a
large number of signature operations. This completes a partial fix in
Mbed TLS 2.20.0.
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2023-0397: `Zephyr project bug tracker GHSA-wc2h-h868-q7hj
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-wc2h-h868-q7hj>`_
* CVE-2023-0779: `Zephyr project bug tracker GHSA-9xj8-6989-r549
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-9xj8-6989-r549>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.4:
Zephyr 2.7.4
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.3 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`25417` - net: socket: socketpair: check for ISR context
* :github:`41012` - irq_enable() doesn't support enabling NVIC IRQ numbers greater than 127
* :github:`44070` - west spdx TypeError: 'NoneType' object is not iterable
* :github:`46072` - subsys/hawkBit: Debug log error in hawkbit example "CONFIG_LOG_STRDUP_MAX_STRING"
* :github:`48056` - Possible null pointer dereference after k_mutex_lock times out
* :github:`49102` - hawkbit - dns name randomly not resolved
* :github:`49139` - can't run west or DT tests on windows / py 3.6
* :github:`49564` - Newer versions of pylink are not supported in latest zephyr 2.7 release
* :github:`49569` - Backport cmake string cache fix to v2.7 branch
* :github:`50221` - tests: debug: test case subsys/debug/coredump failed on acrn_ehl_crb on branch v2.7
* :github:`50467` - Possible memory corruption on ARC when userspace is enabled
* :github:`50468` - Incorrect Z_THREAD_STACK_BUFFER in arch_start_cpu for Xtensa
* :github:`50961` - drivers: counter: Update counter_set_channel_alarm documentation
* :github:`51714` - Bluetooth: Application with buffer that cannot unref it in disconnect handler leads to advertising issues
* :github:`51776` - POSIX API is not portable across arches
* :github:`52247` - mgmt: mcumgr: image upload, then image erase, then image upload does not restart upload from start
* :github:`52517` - lib: posix: sleep() does not return the number of seconds left if interrupted
* :github:`52518` - lib: posix: usleep() does not follow the POSIX spec
* :github:`52542` - lib: posix: make sleep() and usleep() standards-compliant
* :github:`52591` - mcumgr user data size out of sync with net buffer user data size
* :github:`52829` - kernel/sched: Fix SMP race on pend
* :github:`53088` - Unable to change initialization priority of logging subsys
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2022-2741: `Zephyr project bug tracker GHSA-hx5v-j59q-c3j8
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hx5v-j59q-c3j8>`_
* CVE-2022-1841: `Zephyr project bug tracker GHSA-5c3j-p8cr-2pgh
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-5c3j-p8cr-2pgh>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.3:
Zephyr 2.7.3

View File

@@ -467,7 +467,7 @@ err_out:
static struct iproc_pcie_ep_ctx iproc_pcie_ep_ctx_0;
static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
static const struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.id = 0,
.base = (struct iproc_pcie_reg *)DT_INST_REG_ADDR(0),
.reg_size = DT_INST_REG_SIZE(0),
@@ -475,19 +475,21 @@ static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.map_low_size = DT_INST_REG_SIZE_BY_NAME(0, map_lowmem),
.map_high_base = DT_INST_REG_ADDR_BY_NAME(0, map_highmem),
.map_high_size = DT_INST_REG_SIZE_BY_NAME(0, map_highmem),
#if DT_INST_NODE_HAS_PROP(0, dmas)
.pl330_dev = DEVICE_DT_GET(DT_INST_DMAS_CTLR_BY_IDX(0, 0)),
.pl330_tx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, txdma, channel),
.pl330_rx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, rxdma, channel),
#endif
};
static struct pcie_ep_driver_api iproc_pcie_ep_api = {
static const struct pcie_ep_driver_api iproc_pcie_ep_api = {
.conf_read = iproc_pcie_conf_read,
.conf_write = iproc_pcie_conf_write,
.map_addr = iproc_pcie_map_addr,
.unmap_addr = iproc_pcie_unmap_addr,
.raise_irq = iproc_pcie_raise_irq,
.register_reset_cb = iproc_pcie_register_reset_cb,
.dma_xfer = iproc_pcie_pl330_dma_xfer,
.dma_xfer = DT_INST_NODE_HAS_PROP(0, dmas) ? iproc_pcie_pl330_dma_xfer : NULL,
};
DEVICE_DT_INST_DEFINE(0, &iproc_pcie_ep_init, NULL,

View File

@@ -126,6 +126,31 @@ struct coredump_mem_hdr_t {
uintptr_t end;
} __packed;
typedef void (*coredump_backend_start_t)(void);
typedef void (*coredump_backend_end_t)(void);
typedef void (*coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct coredump_backend_api {
/* Signal to backend of the start of coredump. */
coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
coredump_backend_end_t end;
/* Raw buffer output */
coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
void coredump(unsigned int reason, const z_arch_esf_t *esf,
struct k_thread *thread);
void coredump_memory_dump(uintptr_t start_addr, uintptr_t end_addr);

View File

@@ -385,6 +385,7 @@ static inline int z_impl_counter_get_value(const struct device *dev,
* interrupts or requested channel).
* @retval -EINVAL if alarm settings are invalid.
* @retval -ETIME if absolute alarm was set too late.
* @retval -EBUSY if alarm is already active.
*/
__syscall int counter_set_channel_alarm(const struct device *dev,
uint8_t chan_id,

View File

@@ -162,6 +162,11 @@ struct z_kernel {
#if defined(CONFIG_THREAD_MONITOR)
struct k_thread *threads; /* singly linked list of ALL threads */
#endif
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
/* Need to signal an IPI at the next scheduling point */
bool pending_ipi;
#endif
};
typedef struct z_kernel _kernel_t;

View File

@@ -302,10 +302,8 @@ static inline char z_log_minimal_level_to_char(int level)
} \
\
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \
@@ -347,8 +345,6 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
\
if (IS_ENABLED(CONFIG_LOG_MINIMAL)) { \
Z_LOG_TO_PRINTK(_level, "%s", _str); \
@@ -357,7 +353,7 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \

View File

@@ -199,10 +199,11 @@ struct net_conn_handle;
* anyway. This saves 12 bytes / context in IPv6.
*/
__net_socket struct net_context {
/** User data.
*
* First member of the structure to let users either have user data
* associated with a context, or put contexts into a FIFO.
/** First member of the structure to allow contexts to be put into a FIFO.
*/
void *fifo_reserved;
/** User data associated with a context.
*/
void *user_data;

View File

@@ -1368,6 +1368,10 @@ uint32_t net_if_ipv6_calc_reachable_time(struct net_if_ipv6 *ipv6);
static inline void net_if_ipv6_set_reachable_time(struct net_if_ipv6 *ipv6)
{
#if defined(CONFIG_NET_NATIVE_IPV6)
if (ipv6 == NULL) {
return;
}
ipv6->reachable_time = net_if_ipv6_calc_reachable_time(ipv6);
#endif
}

View File

@@ -7,6 +7,9 @@
#ifndef ZEPHYR_INCLUDE_TIME_UNITS_H_
#define ZEPHYR_INCLUDE_TIME_UNITS_H_
#include <sys/util.h>
#include <toolchain.h>
#ifdef __cplusplus
extern "C" {
#endif
@@ -56,6 +59,21 @@ static TIME_CONSTEXPR inline int sys_clock_hw_cycles_per_sec(void)
#endif
}
/** @internal
 * Macro determines if the fast conversion algorithm can be used. It checks
 * whether the maximum timeout, represented in the source frequency domain
 * and multiplied by the target frequency, fits in 64 bits.
 *
 * @param from_hz Source frequency.
 * @param to_hz Target frequency.
 *
 * @retval true Use the faster algorithm.
 * @retval false Use the algorithm that prevents overflow of the intermediate value.
 */
#define Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz) \
((ceiling_fraction(CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS * 24ULL * 3600ULL * from_hz, \
UINT32_MAX) * to_hz) <= UINT32_MAX)
/* Time converter generator gadget. Selects from one of three
* conversion algorithms: ones that take advantage when the
* frequencies are an integer ratio (in either direction), or a full
@@ -123,8 +141,18 @@ static TIME_CONSTEXPR ALWAYS_INLINE uint64_t z_tmcvt(uint64_t t, uint32_t from_h
} else {
if (result32) {
return (uint32_t)((t * to_hz + off) / from_hz);
} else if (const_hz && Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz)) {
/* Faster algorithm, but the source is first multiplied by the target
 * frequency, and that product can overflow even though the final result
 * would not. The Kconfig option prevents use of this algorithm when
 * there is a risk of overflow.
 */
return ((t * to_hz + off) / from_hz);
} else {
return (t * to_hz + off) / from_hz;
/* Slower algorithm, but the input is divided before being multiplied,
 * which prevents overflow of the intermediate value.
 */
return (t / from_hz) * to_hz + ((t % from_hz) * to_hz + off) / from_hz;
}
}
}
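A worked instance of the cutoff may help (numbers are illustrative, assuming the default 365-day bound, not taken from the diff):
/* from_hz = 32768 (32 KiHz tick source):
 *   max_ticks = 365 * 24 * 3600 * 32768 ~= 1.03e12
 *   chunks    = ceiling_fraction(max_ticks, UINT32_MAX) = 241
 *
 * to_hz = 1000 (ticks -> ms): 241 * 1000 <= UINT32_MAX -> fast path
 * to_hz = 1e9 (ticks -> ns):  241 * 1e9  >  UINT32_MAX -> overflow-safe path
 */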

View File

@@ -613,6 +613,17 @@ config TIMEOUT_64BIT
availability of absolute timeout values (which require the
extra precision).
config SYS_CLOCK_MAX_TIMEOUT_DAYS
int "Max timeout (in days) used in conversions"
default 365
help
Value used by the time conversion static inline functions to determine
at compile time which algorithm to use. One algorithm is faster and takes
less code, but it may overflow if the multiplication of the source and
target frequencies exceeds 64 bits. The second algorithm prevents that.
The faster algorithm is selected for a conversion if the maximum timeout,
represented in the source frequency domain and multiplied by the target
frequency, fits in 64 bits.
config XIP
bool "Execute in place"
help
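An application whose longest timeout is, say, a week can tighten this bound so that more frequency pairs qualify for the fast path; a hypothetical prj.conf fragment, not part of the diff:
CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS=7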

View File

@@ -161,15 +161,21 @@ int z_impl_k_mutex_lock(struct k_mutex *mutex, k_timeout_t timeout)
key = k_spin_lock(&lock);
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
/*
* Check if mutex was unlocked after this thread was unpended.
* If so, skip adjusting owner's priority down.
*/
if (likely(mutex->owner != NULL)) {
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
LOG_DBG("adjusting prio down on mutex %p", mutex);
LOG_DBG("adjusting prio down on mutex %p", mutex);
resched = adjust_owner_prio(mutex, new_prio) || resched;
resched = adjust_owner_prio(mutex, new_prio) || resched;
}
if (resched) {
z_reschedule(&lock, key);

View File

@@ -576,6 +576,9 @@ static void triggered_work_expiration_handler(struct _timeout *timeout)
k_work_submit_to_queue(twork->workq, &twork->work);
}
extern int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work);
static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
{
struct z_poller *poller = event->poller;
@@ -587,7 +590,7 @@ static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
z_abort_timeout(&twork->timeout);
twork->poll_result = 0;
k_work_submit_to_queue(work_q, &twork->work);
z_work_submit_to_queue(work_q, &twork->work);
}
return 0;

View File

@@ -219,6 +219,25 @@ static ALWAYS_INLINE void dequeue_thread(void *pq,
}
}
static void signal_pending_ipi(void)
{
/* Synchronization note: you might think we need to lock these
* two steps, but an IPI is idempotent. It's OK if we do it
* twice. All we require is that if a CPU sees the flag true,
* it is guaranteed to send the IPI, and if a core sets
* pending_ipi, the IPI will be sent the next time through
* this code.
*/
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
if (_kernel.pending_ipi) {
_kernel.pending_ipi = false;
arch_sched_ipi();
}
}
#endif
}
#ifdef CONFIG_SMP
/* Called out of z_swap() when CONFIG_SMP. The current thread can
* never live in the run queue until we are inexorably on the context
@@ -231,6 +250,7 @@ void z_requeue_current(struct k_thread *curr)
if (z_is_thread_queued(curr)) {
_priq_run_add(&_kernel.ready_q.runq, curr);
}
signal_pending_ipi();
}
#endif
@@ -481,6 +501,15 @@ static bool thread_active_elsewhere(struct k_thread *thread)
return false;
}
static void flag_ipi(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
_kernel.pending_ipi = true;
}
#endif
}
static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
@@ -495,9 +524,7 @@ static void ready_thread(struct k_thread *thread)
queue_thread(&_kernel.ready_q.runq, thread);
update_cache(0);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
}
}
@@ -626,17 +653,13 @@ static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
}
}
static void pend(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
static void pend_locked(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
#ifdef CONFIG_KERNEL_COHERENCE
__ASSERT_NO_MSG(wait_q == NULL || arch_mem_coherent(wait_q));
#endif
LOCKED(&sched_spinlock) {
add_to_waitq_locked(thread, wait_q);
}
add_to_waitq_locked(thread, wait_q);
add_thread_timeout(thread, timeout);
}
@@ -644,7 +667,9 @@ void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
pend(thread, wait_q, timeout);
LOCKED(&sched_spinlock) {
pend_locked(thread, wait_q, timeout);
}
}
static inline void unpend_thread_no_timeout(struct k_thread *thread)
@@ -686,7 +711,12 @@ void z_thread_timeout(struct _timeout *timeout)
int z_pend_curr_irqlock(uint32_t key, _wait_q_t *wait_q, k_timeout_t timeout)
{
pend(_current, wait_q, timeout);
/* This is a legacy API for pre-switch architectures and isn't
* correctly synchronized for multi-cpu use
*/
__ASSERT_NO_MSG(!IS_ENABLED(CONFIG_SMP));
pend_locked(_current, wait_q, timeout);
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
@@ -709,8 +739,20 @@ int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
#endif
pend(_current, wait_q, timeout);
return z_swap(lock, key);
__ASSERT_NO_MSG(sizeof(sched_spinlock) == 0 || lock != &sched_spinlock);
/* We do a "lock swap" prior to calling z_swap(), such that
* the caller's lock gets released as desired. But we ensure
* that we hold the scheduler lock and leave local interrupts
* masked until we reach the context switch. z_swap() itself
* has similar code; the duplication is because it's a legacy
* API that doesn't expect to be called with scheduler lock
* held.
*/
(void) k_spin_lock(&sched_spinlock);
pend_locked(_current, wait_q, timeout);
k_spin_release(lock);
return z_swap(&sched_spinlock, key);
}
struct k_thread *z_unpend1_no_timeout(_wait_q_t *wait_q)
@@ -784,9 +826,7 @@ void z_thread_priority_set(struct k_thread *thread, int prio)
{
bool need_sched = z_set_prio(thread, prio);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (need_sched && _current->base.sched_locked == 0U) {
z_reschedule_unlocked();
@@ -826,6 +866,7 @@ void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
z_swap(lock, key);
} else {
k_spin_unlock(lock, key);
signal_pending_ipi();
}
}
@@ -835,6 +876,7 @@ void z_reschedule_irqlock(uint32_t key)
z_swap_irqlock(key);
} else {
irq_unlock(key);
signal_pending_ipi();
}
}
@@ -868,7 +910,16 @@ void k_sched_unlock(void)
struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
return next_up();
struct k_thread *ret = next_up();
if (ret == _current) {
/* When not swapping, have to signal IPIs here. In
* the context switch case it must happen later, after
* _current gets requeued.
*/
signal_pending_ipi();
}
return ret;
#else
return _kernel.ready_q.cache;
#endif
@@ -935,6 +986,7 @@ void *z_get_next_switch_handle(void *interrupted)
new_thread->switch_handle = NULL;
}
}
signal_pending_ipi();
return ret;
#else
_current->switch_handle = interrupted;
@@ -1331,9 +1383,7 @@ void z_impl_k_wakeup(k_tid_t thread)
z_mark_thread_as_not_suspended(thread);
z_ready_thread(thread);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (!arch_is_in_isr()) {
z_reschedule_unlocked();
@@ -1520,6 +1570,9 @@ void z_thread_abort(struct k_thread *thread)
/* It's running somewhere else, flag and poke */
thread->base.thread_state |= _THREAD_ABORTING;
/* We're going to spin, so need a true synchronous IPI
* here, not deferred!
*/
#ifdef CONFIG_SCHED_IPI_SUPPORTED
arch_sched_ipi();
#endif
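The thread running through this scheduler change: call sites that used to fire arch_sched_ipi() synchronously (ready_thread(), z_thread_priority_set(), k_wakeup()) now only flag_ipi(), and signal_pending_ipi() sends at most one IPI at the next scheduling point; the spinning abort path above is the one place that still needs the synchronous call, as its comment notes.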

View File

@@ -1011,7 +1011,7 @@ void z_thread_mark_switched_in(void)
#ifdef CONFIG_THREAD_RUNTIME_STATS
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
#ifdef CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS
thread->rt_stats.last_switched_in = timing_counter_get();
#else
@@ -1033,7 +1033,7 @@ void z_thread_mark_switched_out(void)
uint64_t diff;
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
if (unlikely(thread->rt_stats.last_switched_in == 0)) {
/* Has not run before */

View File

@@ -68,8 +68,14 @@ static int32_t next_timeout(void)
{
struct _timeout *to = first();
int32_t ticks_elapsed = elapsed();
int32_t ret = to == NULL ? MAX_WAIT
: CLAMP(to->dticks - ticks_elapsed, 0, MAX_WAIT);
int32_t ret;
if ((to == NULL) ||
((int64_t)(to->dticks - ticks_elapsed) > (int64_t)INT_MAX)) {
ret = MAX_WAIT;
} else {
ret = MAX(0, to->dticks - ticks_elapsed);
}
#ifdef CONFIG_TIMESLICING
if (_current_cpu->slice_ticks && _current_cpu->slice_ticks < ret) {
@@ -238,6 +244,18 @@ void sys_clock_announce(int32_t ticks)
k_spinlock_key_t key = k_spin_lock(&timeout_lock);
/* We release the lock around the callbacks below, so on SMP
* systems someone might be already running the loop. Don't
* race (which will cause parallel execution of "sequential"
* timeouts and confuse apps), just increment the tick count
* and return.
*/
if (IS_ENABLED(CONFIG_SMP) && (announce_remaining != 0)) {
announce_remaining += ticks;
k_spin_unlock(&timeout_lock, key);
return;
}
announce_remaining = ticks;
while (first() != NULL && first()->dticks <= announce_remaining) {
@@ -245,13 +263,13 @@ void sys_clock_announce(int32_t ticks)
int dt = t->dticks;
curr_tick += dt;
announce_remaining -= dt;
t->dticks = 0;
remove_timeout(t);
k_spin_unlock(&timeout_lock, key);
t->fn(t);
key = k_spin_lock(&timeout_lock);
announce_remaining -= dt;
}
if (first() != NULL) {
@@ -271,7 +289,7 @@ int64_t sys_clock_tick_get(void)
uint64_t t = 0U;
LOCKED(&timeout_lock) {
t = curr_tick + sys_clock_elapsed();
t = curr_tick + elapsed();
}
return t;
}

View File

@@ -355,26 +355,45 @@ static int submit_to_queue_locked(struct k_work *work,
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
/* Submit work to a queue but do not yield the current thread.
*
* Intended for internal use.
*
* See also submit_to_queue_locked().
*
* @param queue pointer to the queue.
* @param work the work structure to be submitted
*
* @retval see submit_to_queue_locked()
*/
int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
__ASSERT_NO_MSG(work != NULL);
k_spinlock_key_t key = k_spin_lock(&lock);
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = submit_to_queue_locked(work, &queue);
k_spin_unlock(&lock, key);
/* If we changed the queue contents (as indicated by a positive ret)
* the queue thread may now be ready, but we missed the reschedule
* point because the lock was held. If this is being invoked by a
* preemptible thread then yield.
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = z_work_submit_to_queue(queue, work);
/* submit_to_queue_locked() won't reschedule on its own
* (really it should, otherwise this process will result in
* spurious calls to z_swap() due to the race), so do it here
* if the queue state changed.
*/
if ((ret > 0) && (k_is_preempt_thread() != 0)) {
k_yield();
if (ret > 0) {
z_reschedule_unlocked();
}
SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work, submit_to_queue, queue, work, ret);
@@ -586,6 +605,7 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
struct k_work *work = NULL;
k_work_handler_t handler = NULL;
k_spinlock_key_t key = k_spin_lock(&lock);
bool yield;
/* Check for and prepare any new work. */
node = sys_slist_get(&queue->pending);
@@ -644,34 +664,30 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
k_spin_unlock(&lock, key);
if (work != NULL) {
bool yield;
__ASSERT_NO_MSG(handler != NULL);
handler(work);
__ASSERT_NO_MSG(handler != NULL);
handler(work);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
}
}

View File

@@ -112,6 +112,8 @@ config APP_LINK_WITH_POSIX_SUBSYS
config EVENTFD
bool "Enable support for eventfd"
depends on !ARCH_POSIX
select POLL
default y if POSIX_API
help
Enable support for event file descriptors, eventfd. An eventfd can
be used as an event wait/notify mechanism together with POSIX calls
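Since the fix here is the missing "select POLL", a minimal usage sketch may help; it assumes CONFIG_EVENTFD=y and CONFIG_POSIX_API=y, with header paths following this release's tree layout:
#include <sys/eventfd.h>   /* eventfd(), eventfd_read(), eventfd_write() */
#include <posix/poll.h>    /* poll(), which is why EVENTFD must select POLL */

void eventfd_demo(void)
{
	int efd = eventfd(0, 0);                   /* counter starts at 0 */
	struct pollfd pfd = { .fd = efd, .events = POLLIN };

	eventfd_write(efd, 1);                     /* signal one event */

	if (poll(&pfd, 1, 1000) > 0) {             /* readable within 1 s */
		eventfd_t val;

		eventfd_read(efd, &val);           /* consume the count */
	}
}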

View File

@@ -27,7 +27,6 @@ static struct k_spinlock rt_clock_base_lock;
*/
int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
{
uint64_t elapsed_nsecs;
struct timespec base;
k_spinlock_key_t key;
@@ -48,9 +47,13 @@ int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
return -1;
}
elapsed_nsecs = k_ticks_to_ns_floor64(k_uptime_ticks());
ts->tv_sec = (int32_t) (elapsed_nsecs / NSEC_PER_SEC);
ts->tv_nsec = (int32_t) (elapsed_nsecs % NSEC_PER_SEC);
uint64_t ticks = k_uptime_ticks();
uint64_t elapsed_secs = ticks / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
uint64_t nremainder = ticks - elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC;
ts->tv_sec = (time_t) elapsed_secs;
/* For ns, a 32-bit conversion can be used since it's smaller than 1 sec. */
ts->tv_nsec = (int32_t) k_ticks_to_ns_floor32(nremainder);
ts->tv_sec += base.tv_sec;
ts->tv_nsec += base.tv_nsec;

View File

@@ -15,12 +15,10 @@
#define PTHREAD_INIT_FLAGS PTHREAD_CANCEL_ENABLE
#define PTHREAD_CANCELED ((void *) -1)
#define LOWEST_POSIX_THREAD_PRIORITY 1
PTHREAD_MUTEX_DEFINE(pthread_key_lock);
static const pthread_attr_t init_pthread_attrs = {
.priority = LOWEST_POSIX_THREAD_PRIORITY,
.priority = 0,
.stack = NULL,
.stacksize = 0,
.flags = PTHREAD_INIT_FLAGS,
@@ -54,9 +52,11 @@ static uint32_t zephyr_to_posix_priority(int32_t z_prio, int *policy)
if (z_prio < 0) {
*policy = SCHED_FIFO;
prio = -1 * (z_prio + 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_COOP_PRIORITIES);
} else {
*policy = SCHED_RR;
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio - 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_PREEMPT_PRIORITIES);
}
return prio;
@@ -68,9 +68,11 @@ static int32_t posix_to_zephyr_priority(uint32_t priority, int policy)
if (policy == SCHED_FIFO) {
/* Zephyr COOP priority starts from -1 */
__ASSERT_NO_MSG(priority < CONFIG_NUM_COOP_PRIORITIES);
prio = -1 * (priority + 1);
} else {
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority);
__ASSERT_NO_MSG(priority < CONFIG_NUM_PREEMPT_PRIORITIES);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority - 1);
}
return prio;
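A concrete round trip shows the off-by-one (illustrative value, not from the diff): with CONFIG_NUM_PREEMPT_PRIORITIES=15, Zephyr preemptible priorities run 0..14. The old code mapped Zephyr 0 to POSIX 15 - 0 = 15, one past the 0..14 range the new assertions enforce, and POSIX 0 was only reachable from the nonexistent Zephyr priority 15. With the -1 correction, Zephyr 0 maps to 15 - 0 - 1 = 14 and back via 15 - 14 - 1 = 0, so both directions round-trip for every valid priority, matching the corrected sched_get_priority_max() of CONFIG_NUM_PREEMPT_PRIORITIES - 1.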

View File

@@ -7,13 +7,9 @@
#include <kernel.h>
#include <posix/posix_sched.h>
static bool valid_posix_policy(int policy)
static inline bool valid_posix_policy(int policy)
{
if (policy != SCHED_FIFO && policy != SCHED_RR) {
return false;
}
return true;
return policy == SCHED_FIFO || policy == SCHED_RR;
}
/**
@@ -23,25 +19,12 @@ static bool valid_posix_policy(int policy)
*/
int sched_get_priority_min(int policy)
{
if (valid_posix_policy(policy) == false) {
if (!valid_posix_policy(policy)) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
return 0;
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return 0;
}
}
errno = EINVAL;
return -1;
return 0;
}
/**
@@ -51,25 +34,10 @@ int sched_get_priority_min(int policy)
*/
int sched_get_priority_max(int policy)
{
if (valid_posix_policy(policy) == false) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
/* Posix COOP priority starts from 0
* whereas zephyr starts from -1
*/
return (CONFIG_NUM_COOP_PRIORITIES - 1);
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED) && policy == SCHED_FIFO) {
return CONFIG_NUM_COOP_PRIORITIES - 1;
} else if (IS_ENABLED(CONFIG_PREEMPT_ENABLED) && policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES - 1;
}
errno = EINVAL;

View File

@@ -4,6 +4,8 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <errno.h>
#include <kernel.h>
#include <posix/unistd.h>
@@ -14,8 +16,12 @@
*/
unsigned sleep(unsigned int seconds)
{
k_sleep(K_SECONDS(seconds));
return 0;
int rem;
rem = k_sleep(K_SECONDS(seconds));
__ASSERT_NO_MSG(rem >= 0);
return rem / MSEC_PER_SEC;
}
/**
* @brief Suspend execution for microsecond intervals.
@@ -24,10 +30,19 @@ unsigned sleep(unsigned int seconds)
*/
int usleep(useconds_t useconds)
{
if (useconds < USEC_PER_MSEC) {
k_busy_wait(useconds);
} else {
k_msleep(useconds / USEC_PER_MSEC);
int32_t rem;
if (useconds >= USEC_PER_SEC) {
errno = EINVAL;
return -1;
}
rem = k_usleep(useconds);
__ASSERT_NO_MSG(rem >= 0);
if (rem > 0) {
/* sleep was interrupted by a call to k_wakeup() */
errno = EINTR;
return -1;
}
return 0;
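A short sketch of the corrected semantics (thread setup elided; assumes CONFIG_POSIX_API=y):
#include <errno.h>
#include <kernel.h>
#include <posix/unistd.h>

void sleeper_thread(void)
{
	/* If another thread calls k_wakeup() on this one after ~2 s,
	 * sleep() now returns the ~8 s that were left instead of 0.
	 */
	unsigned int left = sleep(10);
	ARG_UNUSED(left);

	/* usleep() now rejects sleeps of a full second or more, per POSIX. */
	if (usleep(USEC_PER_SEC) == -1 && errno == EINVAL) {
		/* expected: argument out of range */
	}
}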

View File

@@ -24,6 +24,7 @@ config TFM_BOARD
menuconfig BUILD_WITH_TFM
bool "Build with TF-M as the Secure Execution Environment"
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
depends on TRUSTED_EXECUTION_NONSECURE
depends on TFM_BOARD != ""
depends on ARM_TRUSTZONE_M

View File

@@ -19,3 +19,6 @@ CONFIG_STATS_NAMES=n
# Disable Logging for footprint reduction
CONFIG_LOG=n
# Network settings
CONFIG_NET_BUF_USER_DATA_SIZE=8

View File

@@ -16,3 +16,6 @@ CONFIG_SYSTEM_WORKQUEUE_STACK_SIZE=2304
# Enable file system commands
CONFIG_MCUMGR_CMD_FS_MGMT=y
# Network settings
CONFIG_NET_BUF_USER_DATA_SIZE=8

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns nrf5340dk_nrf5340_cpuapp_ns
nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns v2m_musca_s1_ns stm32l562e_dk_ns
bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -5,6 +5,7 @@ common:
tags: psa
platform_allow: mps2_an521_ns v2m_musca_s1_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -22,3 +23,4 @@ common:
tests:
sample.tfm.protected_storage:
tags: tfm
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns
stm32l562e_dk_ns v2m_musca_s1_ns v2m_musca_b1_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -21,6 +22,7 @@ tests:
platform_allow: mps2_an521_ns
extra_configs:
- CONFIG_TFM_BL2=n
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -3,6 +3,7 @@ common:
platform_allow: mps2_an521_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -16,5 +17,7 @@ tests:
sample.tfm.psa_protected_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_PROTECTED_STORAGE=y"
timeout: 100
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
sample.tfm.psa_internal_trusted_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_INTERNAL_TRUSTED_STORAGE=y"
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -3,6 +3,7 @@ common:
platform_allow: lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -18,3 +19,4 @@ tests:
sample.tfm.tfm_regression:
extra_args: ""
timeout: 200
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -15,3 +15,5 @@ tests:
sample.kernel.memory_protection.shared_mem:
filter: CONFIG_ARCH_HAS_USERSPACE
platform_exclude: twr_ke18f
extra_configs:
- CONFIG_TEST_HW_STACK_PROTECTION=n

View File

@@ -2769,18 +2769,6 @@ class _BindingLoader(Loader):
# Add legacy '!include foo.yaml' handling
_BindingLoader.add_constructor("!include", _binding_include)
# Use OrderedDict instead of plain dict for YAML mappings, to preserve
# insertion order on Python 3.5 and earlier (plain dicts only preserve
# insertion order on Python 3.6+). This makes testing easier and avoids
# surprises.
#
# Adapted from
# https://stackoverflow.com/questions/5121931/in-python-how-can-you-load-yaml-mappings-as-ordereddicts.
# Hopefully this API stays stable.
_BindingLoader.add_constructor(
yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG,
lambda loader, node: OrderedDict(loader.construct_pairs(node)))
#
# "Default" binding for properties which are defined by the spec.
#

View File

@@ -5,7 +5,7 @@ envlist=py3
deps =
setuptools-scm
pytest
types-PyYAML
types-PyYAML==6.0.7
mypy
setenv =
TOXTEMPDIR={envtmpdir}

View File

@@ -58,7 +58,7 @@ data_template = """
"""
library_data_template = """
*{0}:*(.data .data.*)
*{0}:*(.data .data.* .sdata .sdata.*)
"""
bss_template = """
@@ -67,7 +67,7 @@ bss_template = """
"""
library_bss_template = """
*{0}:*(.bss .bss.* COMMON COMMON.*)
*{0}:*(.bss .bss.* .sbss .sbss.* COMMON COMMON.*)
"""
footer_template = """

View File

@@ -55,8 +55,8 @@ const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT] = {
};
"""
list_template = """
/* auto-generated by gen_syscalls.py, don't edit */
list_template = """/* auto-generated by gen_syscalls.py, don't edit */
#ifndef ZEPHYR_SYSCALL_LIST_H
#define ZEPHYR_SYSCALL_LIST_H
@@ -82,17 +82,6 @@ syscall_template = """
#include <linker/sections.h>
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
#pragma GCC diagnostic push
#endif
#ifdef __GNUC__
#pragma GCC diagnostic ignored "-Wstrict-aliasing"
#if !defined(__XCC__)
#pragma GCC diagnostic ignored "-Warray-bounds"
#endif
#endif
#ifdef __cplusplus
extern "C" {
#endif
@@ -103,10 +92,6 @@ extern "C" {
}
#endif
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
#pragma GCC diagnostic pop
#endif
#endif
#endif /* include guard */
"""
@@ -153,25 +138,13 @@ def need_split(argtype):
# Note: "lo" and "hi" are named in little endian conventions,
# but it doesn't matter as long as they are consistently
# generated.
def union_decl(type):
return "union { struct { uintptr_t lo, hi; } split; %s val; }" % type
def union_decl(type, split):
middle = "struct { uintptr_t lo, hi; } split" if split else "uintptr_t x"
return "union { %s; %s val; }" % (middle, type)
def wrapper_defs(func_name, func_type, args):
ret64 = need_split(func_type)
mrsh_args = [] # List of rvalue expressions for the marshalled invocation
split_args = []
nsplit = 0
for argtype, argname in args:
if need_split(argtype):
split_args.append((argtype, argname))
mrsh_args.append("parm%d.split.lo" % nsplit)
mrsh_args.append("parm%d.split.hi" % nsplit)
nsplit += 1
else:
mrsh_args.append("*(uintptr_t *)&" + argname)
if ret64:
mrsh_args.append("(uintptr_t)&ret64")
decl_arglist = ", ".join([" ".join(argrec) for argrec in args]) or "void"
@@ -184,10 +157,24 @@ def wrapper_defs(func_name, func_type, args):
wrap += ("\t" + "uint64_t ret64;\n") if ret64 else ""
wrap += "\t" + "if (z_syscall_trap()) {\n"
for parmnum, rec in enumerate(split_args):
(argtype, argname) = rec
wrap += "\t\t%s parm%d;\n" % (union_decl(argtype), parmnum)
wrap += "\t\t" + "parm%d.val = %s;\n" % (parmnum, argname)
valist_args = []
for argnum, (argtype, argname) in enumerate(args):
split = need_split(argtype)
wrap += "\t\t%s parm%d" % (union_decl(argtype, split), argnum)
if argtype != "va_list":
wrap += " = { .val = %s };\n" % argname
else:
# va_list objects are ... peculiar.
wrap += ";\n" + "\t\t" + "va_copy(parm%d.val, %s);\n" % (argnum, argname)
valist_args.append("parm%d.val" % argnum)
if split:
mrsh_args.append("parm%d.split.lo" % argnum)
mrsh_args.append("parm%d.split.hi" % argnum)
else:
mrsh_args.append("parm%d.x" % argnum)
if ret64:
mrsh_args.append("(uintptr_t)&ret64")
if len(mrsh_args) > 6:
wrap += "\t\t" + "uintptr_t more[] = {\n"
@@ -200,21 +187,23 @@ def wrapper_defs(func_name, func_type, args):
% (len(mrsh_args),
", ".join(mrsh_args + [syscall_id])))
# Coverity does not understand the syscall mechanism
# and will already complain when any function argument
# is not exactly the same size as uintptr_t. So tell
# Coverity to ignore this particular rule here.
wrap += "\t\t/* coverity[OVERRUN] */\n"
if ret64:
wrap += "\t\t" + "(void)%s;\n" % invoke
wrap += "\t\t" + "return (%s)ret64;\n" % func_type
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return (%s) ret64;\n" % func_type
elif func_type == "void":
wrap += "\t\t" + "%s;\n" % invoke
wrap += "\t\t" + "return;\n"
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return;\n"
elif valist_args:
invoke = "\t\t" + "%s retval = %s;\n" % (func_type, invoke)
retcode = "\t\t" + "return retval;\n"
else:
wrap += "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
invoke = "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
retcode = ""
wrap += invoke
for argname in valist_args:
wrap += "\t\t" + "va_end(%s);\n" % argname
wrap += retcode
wrap += "\t" + "}\n"
wrap += "#endif\n"
@@ -244,16 +233,11 @@ def marshall_defs(func_name, func_type, args):
mrsh_name = "z_mrsh_" + func_name
nmrsh = 0 # number of marshalled uintptr_t parameter
vrfy_parms = [] # list of (arg_num, mrsh_or_parm_num, bool_is_split)
split_parms = [] # list of a (arg_num, mrsh_num) for each split
for i, (argtype, _) in enumerate(args):
if need_split(argtype):
vrfy_parms.append((i, len(split_parms), True))
split_parms.append((i, nmrsh))
nmrsh += 2
else:
vrfy_parms.append((i, nmrsh, False))
nmrsh += 1
vrfy_parms = [] # list of (argtype, bool_is_split)
for (argtype, _) in args:
split = need_split(argtype)
vrfy_parms.append((argtype, split))
nmrsh += 2 if split else 1
# Final argument for a 64 bit return value?
if need_split(func_type):
@@ -275,25 +259,22 @@ def marshall_defs(func_name, func_type, args):
if nmrsh > 6:
mrsh += ("\tZ_OOPS(Z_SYSCALL_MEMORY_READ(more, "
+ str(nmrsh - 6) + " * sizeof(uintptr_t)));\n")
+ str(nmrsh - 5) + " * sizeof(uintptr_t)));\n")
for i, split_rec in enumerate(split_parms):
arg_num, mrsh_num = split_rec
arg_type = args[arg_num][0]
mrsh += "\t%s parm%d;\n" % (union_decl(arg_type), i)
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(mrsh_num,
nmrsh))
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(mrsh_num + 1,
nmrsh))
# Finally, invoke the verify function
out_args = []
for i, argn, is_split in vrfy_parms:
if is_split:
out_args.append("parm%d.val" % argn)
argnum = 0
for i, (argtype, split) in enumerate(vrfy_parms):
mrsh += "\t%s parm%d;\n" % (union_decl(argtype, split), i)
if split:
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
else:
out_args.append("*(%s*)&%s" % (args[i][0], mrsh_rval(argn, nmrsh)))
mrsh += "\t" + "parm%d.x = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
vrfy_call = "z_vrfy_%s(%s)\n" % (func_name, ", ".join(out_args))
# Finally, invoke the verify function
out_args = ", ".join(["parm%d.val" % i for i in range(len(args))])
vrfy_call = "z_vrfy_%s(%s)" % (func_name, out_args)
if func_type == "void":
mrsh += "\t" + "%s;\n" % vrfy_call
@@ -436,19 +417,10 @@ def main():
mrsh_fn = os.path.join(args.base_output, fn + "_mrsh.c")
with open(mrsh_fn, "w") as fp:
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic push\n")
fp.write("#endif\n")
fp.write("#ifdef __GNUC__\n")
fp.write("#pragma GCC diagnostic ignored \"-Wstrict-aliasing\"\n")
fp.write("#endif\n")
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n\n")
fp.write(mrsh_includes[fn] + "\n")
fp.write("\n")
fp.write(mrsh_defs[fn] + "\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic pop\n")
fp.write("#endif\n")
if __name__ == "__main__":
main()

View File

@@ -16,6 +16,7 @@ import tempfile
from runners.core import ZephyrBinaryRunner, RunnerCaps
try:
import pylink
from pylink.library import Library
MISSING_REQUIREMENTS = False
except ImportError:
@@ -141,16 +142,23 @@ class JLinkBinaryRunner(ZephyrBinaryRunner):
# to load the shared library distributed with the tools, which
# provides an API call for getting the version.
if not hasattr(self, '_jlink_version'):
# pylink 0.14.0/0.14.1 expose the JLink SDK DLL (libjlinkarm) in
# JLINK_SDK_STARTS_WITH, while other versions use JLINK_SDK_NAME
if pylink.__version__ in ('0.14.0', '0.14.1'):
sdk = Library.JLINK_SDK_STARTS_WITH
else:
sdk = Library.JLINK_SDK_NAME
plat = sys.platform
if plat.startswith('win32'):
libname = Library.get_appropriate_windows_sdk_name() + '.dll'
elif plat.startswith('linux'):
libname = Library.JLINK_SDK_NAME + '.so'
libname = sdk + '.so'
elif plat.startswith('darwin'):
libname = Library.JLINK_SDK_NAME + '.dylib'
libname = sdk + '.dylib'
else:
self.logger.warning(f'unknown platform {plat}; assuming UNIX')
libname = Library.JLINK_SDK_NAME + '.so'
libname = sdk + '.so'
lib = Library(dllpath=os.fspath(Path(self.commander).parent /
libname))

View File

@@ -125,7 +125,7 @@ Created: {datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")}
# write other license info, if any
if len(doc.customLicenseIDs) > 0:
for lic in list(doc.customLicenseIDs).sort():
for lic in sorted(list(doc.customLicenseIDs)):
writeOtherLicenseSPDX(f, lic)
# Open SPDX document file for writing, write the document, and calculate
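The bug fixed above is easy to miss: list.sort() sorts in place and returns None, so the old loop iterated over nothing. A two-line illustration:
>>> list({"b", "a"}).sort() is None   # in-place sort, returns None
True
>>> sorted({"b", "a"})                # returns a new sorted list
['a', 'b']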

View File

@@ -24,6 +24,224 @@
#define __FPU_PRESENT CONFIG_CPU_HAS_FPU
#define __MPU_PRESENT CONFIG_CPU_HAS_ARM_MPU
#define __CM4_REV 0x0201 /*!< Core Revision r2p1 */
#define __VTOR_PRESENT 1 /*!< Set to 1 if VTOR is present */
#define __NVIC_PRIO_BITS 3 /*!< Number of Bits used for Priority Levels */
#define __Vendor_SysTickConfig 0 /*!< 0 use default SysTick HW */
#define __FPU_DP 0 /*!< Set to 1 if FPU is double precision */
#define __ICACHE_PRESENT 0 /*!< Set to 1 if I-Cache is present */
#define __DCACHE_PRESENT 0 /*!< Set to 1 if D-Cache is present */
#define __DTCM_PRESENT 0 /*!< Set to 1 if DTCM is present */
/** @brief ARM Cortex-M4 NVIC Interrupt Numbers
* CM4 NVIC implements 16 internal interrupt sources. CMSIS macros use
* negative numbers [-15, -1]. Lower numerical value indicates higher
* priority.
* -15 = Reset Vector invoked on POR or any CPU reset.
* -14 = NMI
* -13 = Hard Fault. At POR or CPU reset all faults map to Hard Fault.
* -12 = Memory Management Fault. If enabled, Hard Faults caused by access
* violations, no address match, or MPU mismatch.
* -11 = Bus Fault. If enabled, pre-fetch and AHB access faults.
* -10 = Usage Fault. If enabled, undefined instructions, illegal state
* transitions (Thumb -> ARM mode), unaligned accesses, etc.
* -9 through -6 are not implemented (reserved).
* -5 = System call via SVC instruction.
* -4 = Debug Monitor.
* -3 = not implemented (reserved).
* -2 = PendSV for system service.
* -1 = SysTick NVIC system timer.
* Numbers >= 0 are external peripheral interrupts.
*/
typedef enum {
/* ========== ARM Cortex-M4 Specific Interrupt Numbers ============ */
Reset_IRQn = -15, /*!< POR/CPU Reset Vector */
NonMaskableInt_IRQn = -14, /*!< NMI */
HardFault_IRQn = -13, /*!< Hard Faults */
MemoryManagement_IRQn = -12, /*!< Memory Management faults */
BusFault_IRQn = -11, /*!< Bus Access faults */
UsageFault_IRQn = -10, /*!< Usage/instruction faults */
SVCall_IRQn = -5, /*!< SVC */
DebugMonitor_IRQn = -4, /*!< Debug Monitor */
PendSV_IRQn = -2, /*!< PendSV */
SysTick_IRQn = -1, /*!< SysTick */
/* ============== MEC172x Specific Interrupt Numbers ============ */
GIRQ08_IRQn = 0, /*!< GPIO 0140 - 0176 */
GIRQ09_IRQn = 1, /*!< GPIO 0100 - 0136 */
GIRQ10_IRQn = 2, /*!< GPIO 0040 - 0076 */
GIRQ11_IRQn = 3, /*!< GPIO 0000 - 0036 */
GIRQ12_IRQn = 4, /*!< GPIO 0200 - 0236 */
GIRQ13_IRQn = 5, /*!< SMBus Aggregated */
GIRQ14_IRQn = 6, /*!< DMA Aggregated */
GIRQ15_IRQn = 7,
GIRQ16_IRQn = 8,
GIRQ17_IRQn = 9,
GIRQ18_IRQn = 10,
GIRQ19_IRQn = 11,
GIRQ20_IRQn = 12,
GIRQ21_IRQn = 13,
/* GIRQ22(peripheral clock wake) is not connected to NVIC */
GIRQ23_IRQn = 14,
GIRQ24_IRQn = 15,
GIRQ25_IRQn = 16,
GIRQ26_IRQn = 17, /*!< GPIO 0240 - 0276 */
/* Reserved 18-19 */
/* GIRQ's 8 - 12, 24 - 26 no direct connections */
I2C_SMB_0_IRQn = 20, /*!< GIRQ13 b[0] */
I2C_SMB_1_IRQn = 21, /*!< GIRQ13 b[1] */
I2C_SMB_2_IRQn = 22, /*!< GIRQ13 b[2] */
I2C_SMB_3_IRQn = 23, /*!< GIRQ13 b[3] */
DMA0_IRQn = 24, /*!< GIRQ14 b[0] */
DMA1_IRQn = 25, /*!< GIRQ14 b[1] */
DMA2_IRQn = 26, /*!< GIRQ14 b[2] */
DMA3_IRQn = 27, /*!< GIRQ14 b[3] */
DMA4_IRQn = 28, /*!< GIRQ14 b[4] */
DMA5_IRQn = 29, /*!< GIRQ14 b[5] */
DMA6_IRQn = 30, /*!< GIRQ14 b[6] */
DMA7_IRQn = 31, /*!< GIRQ14 b[7] */
DMA8_IRQn = 32, /*!< GIRQ14 b[8] */
DMA9_IRQn = 33, /*!< GIRQ14 b[9] */
DMA10_IRQn = 34, /*!< GIRQ14 b[10] */
DMA11_IRQn = 35, /*!< GIRQ14 b[11] */
DMA12_IRQn = 36, /*!< GIRQ14 b[12] */
DMA13_IRQn = 37, /*!< GIRQ14 b[13] */
DMA14_IRQn = 38, /*!< GIRQ14 b[14] */
DMA15_IRQn = 39, /*!< GIRQ14 b[15] */
UART0_IRQn = 40, /*!< GIRQ15 b[0] */
UART1_IRQn = 41, /*!< GIRQ15 b[1] */
EMI0_IRQn = 42, /*!< GIRQ15 b[2] */
EMI1_IRQn = 43, /*!< GIRQ15 b[3] */
EMI2_IRQn = 44, /*!< GIRQ15 b[4] */
ACPI_EC0_IBF_IRQn = 45, /*!< GIRQ15 b[5] */
ACPI_EC0_OBE_IRQn = 46, /*!< GIRQ15 b[6] */
ACPI_EC1_IBF_IRQn = 47, /*!< GIRQ15 b[7] */
ACPI_EC1_OBE_IRQn = 48, /*!< GIRQ15 b[8] */
ACPI_EC2_IBF_IRQn = 49, /*!< GIRQ15 b[9] */
ACPI_EC2_OBE_IRQn = 50, /*!< GIRQ15 b[10] */
ACPI_EC3_IBF_IRQn = 51, /*!< GIRQ15 b[11] */
ACPI_EC3_OBE_IRQn = 52, /*!< GIRQ15 b[12] */
ACPI_EC4_IBF_IRQn = 53, /*!< GIRQ15 b[13] */
ACPI_EC4_OBE_IRQn = 54, /*!< GIRQ15 b[14] */
ACPI_PM1_CTL_IRQn = 55, /*!< GIRQ15 b[15] */
ACPI_PM1_EN_IRQn = 56, /*!< GIRQ15 b[16] */
ACPI_PM1_STS_IRQn = 57, /*!< GIRQ15 b[17] */
KBC_OBE_IRQn = 58, /*!< GIRQ15 b[18] */
KBC_IBF_IRQn = 59, /*!< GIRQ15 b[19] */
MBOX_IRQn = 60, /*!< GIRQ15 b[20] */
/* reserved 61 */
P80BD_0_IRQn = 62, /*!< GIRQ15 b[22] */
/* reserved 63-64 */
PKE_IRQn = 65, /*!< GIRQ16 b[0] */
/* reserved 66 */
RNG_IRQn = 67, /*!< GIRQ16 b[2] */
AESH_IRQn = 68, /*!< GIRQ16 b[3] */
/* reserved 69 */
PECI_IRQn = 70, /*!< GIRQ17 b[0] */
TACH_0_IRQn = 71, /*!< GIRQ17 b[1] */
TACH_1_IRQn = 72, /*!< GIRQ17 b[2] */
TACH_2_IRQn = 73, /*!< GIRQ17 b[3] */
RPMFAN_0_FAIL_IRQn = 74, /*!< GIRQ17 b[20] */
RPMFAN_0_STALL_IRQn = 75, /*!< GIRQ17 b[21] */
RPMFAN_1_FAIL_IRQn = 76, /*!< GIRQ17 b[22] */
RPMFAN_1_STALL_IRQn = 77, /*!< GIRQ17 b[23] */
ADC_SNGL_IRQn = 78, /*!< GIRQ17 b[8] */
ADC_RPT_IRQn = 79, /*!< GIRQ17 b[9] */
RCID_0_IRQn = 80, /*!< GIRQ17 b[10] */
RCID_1_IRQn = 81, /*!< GIRQ17 b[11] */
RCID_2_IRQn = 82, /*!< GIRQ17 b[12] */
LED_0_IRQn = 83, /*!< GIRQ17 b[13] */
LED_1_IRQn = 84, /*!< GIRQ17 b[14] */
LED_2_IRQn = 85, /*!< GIRQ17 b[15] */
LED_3_IRQn = 86, /*!< GIRQ17 b[16] */
PHOT_IRQn = 87, /*!< GIRQ17 b[17] */
/* reserved 88-89 */
SPIP_0_IRQn = 90, /*!< GIRQ18 b[0] */
QMSPI_0_IRQn = 91, /*!< GIRQ18 b[1] */
GPSPI_0_TXBE_IRQn = 92, /*!< GIRQ18 b[2] */
GPSPI_0_RXBF_IRQn = 93, /*!< GIRQ18 b[3] */
GPSPI_1_TXBE_IRQn = 94, /*!< GIRQ18 b[4] */
GPSPI_1_RXBF_IRQn = 95, /*!< GIRQ18 b[5] */
BCL_0_ERR_IRQn = 96, /*!< GIRQ18 b[7] */
BCL_0_BCLR_IRQn = 97, /*!< GIRQ18 b[6] */
/* reserved 98-99 */
PS2_0_ACT_IRQn = 100, /*!< GIRQ18 b[10] */
/* reserved 101-102 */
ESPI_PC_IRQn = 103, /*!< GIRQ19 b[0] */
ESPI_BM1_IRQn = 104, /*!< GIRQ19 b[1] */
ESPI_BM2_IRQn = 105, /*!< GIRQ19 b[2] */
ESPI_LTR_IRQn = 106, /*!< GIRQ19 b[3] */
ESPI_OOB_UP_IRQn = 107, /*!< GIRQ19 b[4] */
ESPI_OOB_DN_IRQn = 108, /*!< GIRQ19 b[5] */
ESPI_FLASH_IRQn = 109, /*!< GIRQ19 b[6] */
ESPI_RESET_IRQn = 110, /*!< GIRQ19 b[7] */
RTMR_IRQn = 111, /*!< GIRQ23 b[10] */
HTMR_0_IRQn = 112, /*!< GIRQ23 b[16] */
HTMR_1_IRQn = 113, /*!< GIRQ23 b[17] */
WK_IRQn = 114, /*!< GIRQ21 b[3] */
WKSUB_IRQn = 115, /*!< GIRQ21 b[4] */
WKSEC_IRQn = 116, /*!< GIRQ21 b[5] */
WKSUBSEC_IRQn = 117, /*!< GIRQ21 b[6] */
WKSYSPWR_IRQn = 118, /*!< GIRQ21 b[7] */
RTC_IRQn = 119, /*!< GIRQ21 b[8] */
RTC_ALARM_IRQn = 120, /*!< GIRQ21 b[9] */
VCI_OVRD_IN_IRQn = 121, /*!< GIRQ21 b[10] */
VCI_IN0_IRQn = 122, /*!< GIRQ21 b[11] */
VCI_IN1_IRQn = 123, /*!< GIRQ21 b[12] */
VCI_IN2_IRQn = 124, /*!< GIRQ21 b[13] */
VCI_IN3_IRQn = 125, /*!< GIRQ21 b[14] */
VCI_IN4_IRQn = 126, /*!< GIRQ21 b[15] */
/* reserved 127-128 */
PS2_0A_WAKE_IRQn = 129, /*!< GIRQ21 b[18] */
PS2_0B_WAKE_IRQn = 130, /*!< GIRQ21 b[19] */
/* reserved 131-134 */
KEYSCAN_IRQn = 135, /*!< GIRQ21 b[25] */
B16TMR_0_IRQn = 136, /*!< GIRQ23 b[0] */
B16TMR_1_IRQn = 137, /*!< GIRQ23 b[1] */
B16TMR_2_IRQn = 138, /*!< GIRQ23 b[2] */
B16TMR_3_IRQn = 139, /*!< GIRQ23 b[3] */
B32TMR_0_IRQn = 140, /*!< GIRQ23 b[4] */
B32TMR_1_IRQn = 141, /*!< GIRQ23 b[5] */
CTMR_0_IRQn = 142, /*!< GIRQ23 b[6] */
CTMR_1_IRQn = 143, /*!< GIRQ23 b[7] */
CTMR_2_IRQn = 144, /*!< GIRQ23 b[8] */
CTMR_3_IRQn = 145, /*!< GIRQ23 b[9] */
CCT_IRQn = 146, /*!< GIRQ18 b[20] */
CCT_CAP0_IRQn = 147, /*!< GIRQ18 b[21] */
CCT_CAP1_IRQn = 148, /*!< GIRQ18 b[22] */
CCT_CAP2_IRQn = 149, /*!< GIRQ18 b[23] */
CCT_CAP3_IRQn = 150, /*!< GIRQ18 b[24] */
CCT_CAP4_IRQn = 151, /*!< GIRQ18 b[25] */
CCT_CAP5_IRQn = 152, /*!< GIRQ18 b[26] */
CCT_CMP0_IRQn = 153, /*!< GIRQ18 b[27] */
CCT_CMP1_IRQn = 154, /*!< GIRQ18 b[28] */
EEPROMC_IRQn = 155, /*!< GIRQ18 b[13] */
ESPI_VWIRE_IRQn = 156, /*!< GIRQ19 b[8] */
/* reserved 157 */
I2C_SMB_4_IRQn = 158, /*!< GIRQ13 b[4] */
TACH_3_IRQn = 159, /*!< GIRQ17 b[4] */
/* reserved 160-165 */
SAF_DONE_IRQn = 166, /*!< GIRQ19 b[9] */
SAF_ERR_IRQn = 167, /*!< GIRQ19 b[10] */
/* reserved 168 */
SAF_CACHE_IRQn = 169, /*!< GIRQ19 b[11] */
/* reserved 170 */
WDT_0_IRQn = 171, /*!< GIRQ21 b[2] */
GLUE_IRQn = 172, /*!< GIRQ21 b[26] */
OTP_RDY_IRQn = 173, /*!< GIRQ20 b[3] */
CLK32K_MON_IRQn = 174, /*!< GIRQ20 b[9] */
ACPI_EC0_IRQn = 175, /* ACPI EC OBE and IBF combined into one */
ACPI_EC1_IRQn = 176, /* No GIRQ connection. Status in ACPI blocks */
ACPI_EC2_IRQn = 177, /* Code uses level bits and NVIC bits */
ACPI_EC3_IRQn = 178,
ACPI_EC4_IRQn = 179,
ACPI_PM1_IRQn = 180,
MAX_IRQn
} IRQn_Type;
#include <sys/util.h>
/* chip specific register defines */

View File

@@ -205,12 +205,12 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
sr.cpu = cpu_num;
sr.fn = fn;
sr.stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
sr.stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
sr.arg = arg;
sr.vecbase = vb;
sr.alive = &alive_flag;
appcpu_top = Z_THREAD_STACK_BUFFER(stack) + sz;
appcpu_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
start_rec = &sr;

View File

@@ -331,7 +331,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
start_rec.vecbase = vecbase;
start_rec.alive = 0;
z_mp_stack_top = Z_THREAD_STACK_BUFFER(stack) + sz;
z_mp_stack_top = Z_KERNEL_STACK_BUFFER(stack) + sz;
/* Pre-2.x cAVS delivers the IDC to ROM code, so unmask it */
CAVS_INTCTRL[cpu_num].l2.clear = CAVS_L2_IDC;

View File

@@ -2503,13 +2503,15 @@ static void le_read_buffer_size_complete(struct net_buf *buf)
BT_DBG("status 0x%02x", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!bt_dev.le.acl_mtu) {
uint16_t acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!acl_mtu || !rp->le_max_num) {
return;
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num,
bt_dev.le.acl_mtu);
bt_dev.le.acl_mtu = acl_mtu;
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->le_max_num, rp->le_max_num);
#endif /* CONFIG_BT_CONN */
@@ -2523,25 +2525,26 @@ static void read_buffer_size_v2_complete(struct net_buf *buf)
BT_DBG("status %u", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (!bt_dev.le.acl_mtu) {
return;
uint16_t acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (acl_mtu && rp->acl_max_num) {
bt_dev.le.acl_mtu = acl_mtu;
LOG_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num,
bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
#endif /* CONFIG_BT_CONN */
bt_dev.le.iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!bt_dev.le.iso_mtu) {
uint16_t iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!iso_mtu || !rp->iso_max_num) {
BT_ERR("ISO buffer size not set");
return;
}
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num,
bt_dev.le.iso_mtu);
bt_dev.le.iso_mtu = iso_mtu;
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num, bt_dev.le.iso_mtu);
k_sem_init(&bt_dev.le.iso_pkts, rp->iso_max_num, rp->iso_max_num);
#endif /* CONFIG_BT_ISO */
@@ -2810,6 +2813,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
read_buffer_size_v2_complete(rsp);
net_buf_unref(rsp);
@@ -2823,6 +2827,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
@@ -2866,7 +2871,9 @@ static int le_init(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
}

View File

@@ -28,6 +28,11 @@ config DEBUG_COREDUMP_BACKEND_FLASH_PARTITION
Core dump is saved to a flash partition with DTS alias
"coredump-partition".
config DEBUG_COREDUMP_BACKEND_OTHER
bool "Backend subsystem for coredump defined out of tree"
help
Core dump is done via a custom mechanism defined out of tree
endchoice
choice

View File

@@ -513,7 +513,7 @@ static int coredump_flash_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_flash_partition = {
struct coredump_backend_api coredump_backend_flash_partition = {
.start = coredump_flash_backend_start,
.end = coredump_flash_backend_end,
.buffer_output = coredump_flash_backend_buffer_output,

View File

@@ -116,7 +116,7 @@ static int coredump_logging_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_logging = {
struct coredump_backend_api coredump_backend_logging = {
.start = coredump_logging_backend_start,
.end = coredump_logging_backend_end,
.buffer_output = coredump_logging_backend_buffer_output,

View File

@@ -14,13 +14,17 @@
#include "coredump_internal.h"
#if defined(CONFIG_DEBUG_COREDUMP_BACKEND_LOGGING)
extern struct z_coredump_backend_api z_coredump_backend_logging;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_logging;
extern struct coredump_backend_api coredump_backend_logging;
static struct coredump_backend_api
*backend_api = &coredump_backend_logging;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_FLASH_PARTITION)
extern struct z_coredump_backend_api z_coredump_backend_flash_partition;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_flash_partition;
extern struct coredump_backend_api coredump_backend_flash_partition;
static struct coredump_backend_api
*backend_api = &coredump_backend_flash_partition;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_OTHER)
extern struct coredump_backend_api coredump_backend_other;
static struct coredump_backend_api
*backend_api = &coredump_backend_other;
#else
#error "Need to select a coredump backend"
#endif
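For the new BACKEND_OTHER case, the out-of-tree code only needs to define the coredump_backend_other symbol using the API struct shown earlier; a minimal sketch (the static helper names are hypothetical and their bodies are placeholders):
#include <errno.h>
#include <debug/coredump.h>

static void other_start(void) { /* open the custom sink */ }
static void other_end(void) { /* flush and close the sink */ }
static void other_buffer_output(uint8_t *buf, size_t buflen)
{
	/* push raw coredump bytes to custom storage or transport */
}
static int other_query(enum coredump_query_id query_id, void *arg)
{
	return -ENOTSUP;
}
static int other_cmd(enum coredump_cmd_id cmd_id, void *arg)
{
	return -ENOTSUP;
}

struct coredump_backend_api coredump_backend_other = {
	.start = other_start,
	.end = other_end,
	.buffer_output = other_buffer_output,
	.query = other_query,
	.cmd = other_cmd,
};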

View File

@@ -53,31 +53,6 @@ void z_coredump_start(void);
*/
void z_coredump_end(void);
typedef void (*z_coredump_backend_start_t)(void);
typedef void (*z_coredump_backend_end_t)(void);
typedef void (*z_coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct z_coredump_backend_api {
/* Signal to backend of the start of coredump. */
z_coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
z_coredump_backend_end_t end;
/* Raw buffer output */
z_coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
/**
* @endcond
*/

View File

@@ -10,6 +10,11 @@ menuconfig LOG
if LOG
config LOG_CORE_INIT_PRIORITY
int "Log Core Initialization Priority"
range 0 99
default 0
rsource "Kconfig.mode"
rsource "Kconfig.filtering"

View File

@@ -1276,4 +1276,4 @@ static int enable_logger(const struct device *arg)
return 0;
}
SYS_INIT(enable_logger, POST_KERNEL, 0);
SYS_INIT(enable_logger, POST_KERNEL, CONFIG_LOG_CORE_INIT_PRIORITY);
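With the new Kconfig knob, an application that needs another POST_KERNEL device ready before the logger starts can defer logging init; a hypothetical prj.conf fragment:
CONFIG_LOG_CORE_INIT_PRIORITY=50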

View File

@@ -239,14 +239,16 @@ static bool start_http_client(void)
int protocol = IPPROTO_TCP;
#endif
(void)memset(&hints, 0, sizeof(hints));
if (IS_ENABLED(CONFIG_NET_IPV6)) {
hints.ai_family = AF_INET6;
hints.ai_socktype = SOCK_STREAM;
} else if (IS_ENABLED(CONFIG_NET_IPV4)) {
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
}
hints.ai_socktype = SOCK_STREAM;
while (resolve_attempts--) {
ret = getaddrinfo(CONFIG_HAWKBIT_SERVER, CONFIG_HAWKBIT_PORT,
&hints, &addr);
@@ -412,6 +414,8 @@ static int hawkbit_find_cancelAction_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "cancelAction", href);
helper = strstr(href, "cancelAction/");
if (!helper) {
/* A badly formatted cancel base is a server error */
@@ -465,6 +469,8 @@ static int hawkbit_find_deployment_base(struct hawkbit_ctl_res *res,
return 0;
}
LOG_DBG("_links.%s.href=%s", "deploymentBase", href);
helper = strstr(href, "deploymentBase/");
if (!helper) {
/* A badly formatted deployment base is a server error */
@@ -573,17 +579,6 @@ static int hawkbit_parse_deployment(struct hawkbit_dep_res *res,
return 0;
}
static void hawkbit_dump_base(struct hawkbit_ctl_res *r)
{
LOG_DBG("config.polling.sleep=%s", log_strdup(r->config.polling.sleep));
LOG_DBG("_links.deploymentBase.href=%s",
log_strdup(r->_links.deploymentBase.href));
LOG_DBG("_links.configData.href=%s",
log_strdup(r->_links.configData.href));
LOG_DBG("_links.cancelAction.href=%s",
log_strdup(r->_links.cancelAction.href));
}
static void hawkbit_dump_deployment(struct hawkbit_dep_res *d)
{
struct hawkbit_dep_res_chunk *c = &d->deployment.chunks[0];
@@ -1098,9 +1093,9 @@ enum hawkbit_response hawkbit_probe(void)
if (hawkbit_results.base.config.polling.sleep) {
/* Update the sleep time. */
hawkbit_update_sleep(&hawkbit_results.base);
LOG_DBG("config.polling.sleep=%s", hawkbit_results.base.config.polling.sleep);
}
hawkbit_dump_base(&hawkbit_results.base);
if (hawkbit_results.base._links.cancelAction.href) {
ret = hawkbit_find_cancelAction_base(&hawkbit_results.base,
@@ -1127,6 +1122,8 @@ enum hawkbit_response hawkbit_probe(void)
}
if (hawkbit_results.base._links.configData.href) {
LOG_DBG("_links.%s.href=%s", "configData",
hawkbit_results.base._links.configData.href);
memset(hb_context.url_buffer, 0, sizeof(hb_context.url_buffer));
hb_context.dl.http_content_size = 0;
hb_context.url_buffer_size = URL_BUFFER_SIZE;

View File

@@ -425,13 +425,15 @@ config MCUMGR_BUF_USER_DATA_SIZE
int "Size of mcumgr buffer user data"
default 24 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV6
default 8 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV4
default 8 if MCUMGR_SMP_BT
default 4
help
The size, in bytes, of user data to allocate for each mcumgr buffer.
Different mcumgr transports impose different requirements for this
setting. A value of 4 is sufficient for UART, shell, and bluetooth.
For UDP, the userdata must be large enough to hold a IPv4/IPv6 address.
setting. A value of 4 is sufficient for UART and shell, a value of 8
is sufficient for Bluetooth. For UDP, the user data must be large
enough to hold an IPv4/IPv6 address.
Note that CONFIG_NET_BUF_USER_DATA_SIZE must be at least as big as
MCUMGR_BUF_USER_DATA_SIZE.
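For a Bluetooth SMP build the two options therefore pair up as follows (this fragment mirrors the new defaults rather than adding anything):
CONFIG_MCUMGR_BUF_USER_DATA_SIZE=8
CONFIG_NET_BUF_USER_DATA_SIZE=8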

View File

@@ -1,5 +1,6 @@
/*
* Copyright Runtime.io 2018. All rights reserved.
* Copyright (c) 2022 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -21,13 +22,28 @@
#include <mgmt/mcumgr/smp.h>
#include <logging/log.h>
LOG_MODULE_REGISTER(smp_bt, CONFIG_MCUMGR_LOG_LEVEL);
struct device;
struct smp_bt_user_data {
struct bt_conn *conn;
uint8_t id;
};
/* Verification of user data being able to fit */
BUILD_ASSERT(sizeof(struct smp_bt_user_data) <= CONFIG_MCUMGR_BUF_USER_DATA_SIZE,
"CONFIG_MCUMGR_BUF_USER_DATA_SIZE not large enough to fit Bluetooth user data");
struct conn_param_data {
struct bt_conn *conn;
uint8_t id;
};
static uint8_t next_id;
static struct zephyr_smp_transport smp_bt_transport;
static struct conn_param_data conn_data[CONFIG_BT_MAX_CONN];
/* SMP service.
* {8D53DC1D-1DB7-4CD3-868B-8A527460AA84}
@@ -41,6 +57,56 @@ static struct bt_uuid_128 smp_bt_svc_uuid = BT_UUID_INIT_128(
static struct bt_uuid_128 smp_bt_chr_uuid = BT_UUID_INIT_128(
BT_UUID_128_ENCODE(0xda2e7828, 0xfbce, 0x4e01, 0xae9e, 0x261174997c48));
/* Helper function that allocates conn_param_data for a conn. */
static struct conn_param_data *conn_param_data_alloc(struct bt_conn *conn)
{
for (size_t i = 0; i < ARRAY_SIZE(conn_data); i++) {
if (conn_data[i].conn == NULL) {
bool valid = false;
conn_data[i].conn = conn;
/* Generate an ID for this connection and reset semaphore */
while (!valid) {
valid = true;
conn_data[i].id = next_id;
++next_id;
if (next_id == 0) {
/* Avoid use of 0 (invalid ID) */
++next_id;
}
for (size_t l = 0; l < ARRAY_SIZE(conn_data); l++) {
if (l != i && conn_data[l].conn != NULL &&
conn_data[l].id == conn_data[i].id) {
valid = false;
break;
}
}
}
return &conn_data[i];
}
}
/* Conn data must exist. */
__ASSERT_NO_MSG(false);
return NULL;
}
/* Helper function that returns conn_param_data associated with a conn. */
static struct conn_param_data *conn_param_data_get(const struct bt_conn *conn)
{
for (size_t i = 0; i < ARRAY_SIZE(conn_data); i++) {
if (conn_data[i].conn == conn) {
return &conn_data[i];
}
}
return NULL;
}
/**
* Write handler for the SMP characteristic; processes an incoming SMP request.
*/
@@ -51,6 +117,12 @@ static ssize_t smp_bt_chr_write(struct bt_conn *conn,
{
struct smp_bt_user_data *ud;
struct net_buf *nb;
struct conn_param_data *cpd = conn_param_data_get(conn);
if (cpd == NULL) {
printk("cpd is null");
return len;
}
nb = mcumgr_buf_alloc();
if (!nb) {
@@ -59,7 +131,8 @@ static ssize_t smp_bt_chr_write(struct bt_conn *conn,
net_buf_add_mem(nb, buf, len);
ud = net_buf_user_data(nb);
ud->conn = bt_conn_ref(conn);
ud->conn = conn;
ud->id = cpd->id;
zephyr_smp_rx_req(&smp_bt_transport, nb);
@@ -113,7 +186,7 @@ static struct bt_conn *smp_bt_conn_from_pkt(const struct net_buf *nb)
return NULL;
}
return bt_conn_ref(ud->conn);
return ud->conn;
}
/**
@@ -131,7 +204,6 @@ static uint16_t smp_bt_get_mtu(const struct net_buf *nb)
}
mtu = bt_gatt_get_mtu(conn);
bt_conn_unref(conn);
/* Account for the three-byte notification header. */
return mtu - 3;
@@ -142,8 +214,8 @@ static void smp_bt_ud_free(void *ud)
struct smp_bt_user_data *user_data = ud;
if (user_data->conn) {
bt_conn_unref(user_data->conn);
user_data->conn = NULL;
user_data->id = 0;
}
}
@@ -153,7 +225,8 @@ static int smp_bt_ud_copy(struct net_buf *dst, const struct net_buf *src)
struct smp_bt_user_data *dst_ud = net_buf_user_data(dst);
if (src_ud->conn) {
dst_ud->conn = bt_conn_ref(src_ud->conn);
dst_ud->conn = src_ud->conn;
dst_ud->id = src_ud->id;
}
return 0;
@@ -165,17 +238,26 @@ static int smp_bt_ud_copy(struct net_buf *dst, const struct net_buf *src)
static int smp_bt_tx_pkt(struct zephyr_smp_transport *zst, struct net_buf *nb)
{
struct bt_conn *conn;
struct smp_bt_user_data *ud = net_buf_user_data(nb);
int rc;
conn = smp_bt_conn_from_pkt(nb);
if (conn == NULL) {
rc = -1;
} else {
rc = smp_bt_tx_rsp(conn, nb->data, nb->len);
bt_conn_unref(conn);
struct conn_param_data *cpd = conn_param_data_get(conn);
if (cpd == NULL) {
rc = -1;
} else if (cpd->id == 0 || cpd->id != ud->id) {
/* Connection has been lost or is a different device */
rc = -1;
} else {
rc = smp_bt_tx_rsp(conn, nb->data, nb->len);
}
}
smp_bt_ud_free(net_buf_user_data(nb));
smp_bt_ud_free(ud);
mcumgr_buf_free(nb);
return rc;
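The guard above is the heart of the fix: a response is sent only if the connection slot still carries the same ID that was recorded in the request's user data. A sketch of the predicate (hypothetical name):

#include <stdbool.h>
#include <stdint.h>

/* slot_id == 0: the connection dropped and the slot was cleared.
 * slot_id != pkt_id: the slot was reused by a newer connection.
 * Either way the response must be dropped, not sent to the wrong peer.
 */
static bool response_still_valid(uint8_t slot_id, uint8_t pkt_id)
{
	return (slot_id != 0) && (slot_id == pkt_id);
}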
@@ -191,10 +273,41 @@ int smp_bt_unregister(void)
return bt_gatt_service_unregister(&smp_bt_svc);
}
/* BT connected callback. */
static void connected(struct bt_conn *conn, uint8_t err)
{
if (err == 0) {
(void)conn_param_data_alloc(conn);
}
}
/* BT disconnected callback. */
static void disconnected(struct bt_conn *conn, uint8_t reason)
{
struct conn_param_data *cpd = conn_param_data_get(conn);
/* Clear cpd. */
if (cpd != NULL) {
cpd->id = 0;
cpd->conn = NULL;
} else {
LOG_ERR("Null cpd object for connection %p", (void *)conn);
}
}
static int smp_bt_init(const struct device *dev)
{
ARG_UNUSED(dev);
next_id = 1;
/* Register BT callbacks */
static struct bt_conn_cb conn_callbacks = {
.connected = connected,
.disconnected = disconnected,
};
bt_conn_cb_register(&conn_callbacks);
zephyr_smp_transport_init(&smp_bt_transport, smp_bt_tx_pkt,
smp_bt_get_mtu, smp_bt_ud_copy,
smp_bt_ud_free);

View File

@@ -15,7 +15,9 @@ if NET_BUF
config NET_BUF_USER_DATA_SIZE
int "Size of user_data available in every network buffer"
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_ISO
default 24 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV6
default 8 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV4
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_ISO || MCUMGR_SMP_BT
default 4
range 4 65535 if BT || NET_TCP2
range 0 65535
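These defaults exist because each transport keeps its bookkeeping in the buffer's user-data area; if the area is smaller than the transport's user-data struct, the BUILD_ASSERT in the SMP Bluetooth transport fails the build. A sketch of that relationship (hypothetical struct and size, plain C11):

#include <stdint.h>

struct smp_bt_user_data_sketch {
	void *conn;  /* borrowed connection pointer (no longer a ref) */
	uint8_t id;  /* per-connection ID captured at request time */
};

#define USER_DATA_SIZE 24 /* hypothetical; Kconfig picks the real size */

_Static_assert(sizeof(struct smp_bt_user_data_sketch) <= USER_DATA_SIZE,
	       "user data too small for Bluetooth transport bookkeeping");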

View File

@@ -4214,6 +4214,74 @@ wait_reply:
#endif
}
static bool is_pkt_part_of_slab(const struct k_mem_slab *slab, const char *ptr)
{
size_t last_offset = (slab->num_blocks - 1) * slab->block_size;
size_t ptr_offset;
/* Check if pointer fits into slab buffer area. */
if ((ptr < slab->buffer) || (ptr > slab->buffer + last_offset)) {
return false;
}
/* Check if pointer offset is correct. */
ptr_offset = ptr - slab->buffer;
if (ptr_offset % slab->block_size != 0) {
return false;
}
return true;
}
struct ctx_pkt_slab_info {
const void *ptr;
bool pkt_source_found;
};
static void check_context_pool(struct net_context *context, void *user_data)
{
#if defined(CONFIG_NET_CONTEXT_NET_PKT_POOL)
if (!net_context_is_used(context)) {
return;
}
if (context->tx_slab) {
struct ctx_pkt_slab_info *info = user_data;
struct k_mem_slab *slab = context->tx_slab();
if (is_pkt_part_of_slab(slab, info->ptr)) {
info->pkt_source_found = true;
}
}
#endif /* CONFIG_NET_CONTEXT_NET_PKT_POOL */
}
static bool is_pkt_ptr_valid(const void *ptr)
{
struct k_mem_slab *rx, *tx;
net_pkt_get_info(&rx, &tx, NULL, NULL);
if (is_pkt_part_of_slab(rx, ptr) || is_pkt_part_of_slab(tx, ptr)) {
return true;
}
if (IS_ENABLED(CONFIG_NET_CONTEXT_NET_PKT_POOL)) {
struct ctx_pkt_slab_info info;
info.ptr = ptr;
info.pkt_source_found = false;
net_context_foreach(check_context_pool, &info);
if (info.pkt_source_found) {
return true;
}
}
return false;
}
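A standalone model of the slab membership test above (plain C, hypothetical names): a pointer names a real block only if it lies inside the slab's buffer and starts exactly on a block boundary.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct slab_sketch {
	char *buffer;       /* first byte of the slab's storage */
	size_t block_size;  /* bytes per block */
	uint32_t num_blocks;
};

static bool ptr_is_slab_block(const struct slab_sketch *slab, const char *ptr)
{
	size_t last_offset = (slab->num_blocks - 1) * slab->block_size;

	/* Reject pointers outside [buffer, buffer + last_offset]. */
	if (ptr < slab->buffer || ptr > slab->buffer + last_offset) {
		return false;
	}
	/* Reject pointers that do not start a block. */
	return (size_t)(ptr - slab->buffer) % slab->block_size == 0;
}

The shell command uses the real version of this check to refuse arbitrary addresses before dereferencing them as a net_pkt.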
static struct net_pkt *get_net_pkt(const char *ptr_str)
{
uint8_t buf[sizeof(intptr_t)];
@@ -4289,6 +4357,14 @@ static int cmd_net_pkt(const struct shell *shell, size_t argc, char *argv[])
if (!pkt) {
PR_ERROR("Invalid ptr value (%s). "
"Example: 0x01020304\n", argv[1]);
return -ENOEXEC;
}
if (!is_pkt_ptr_valid(pkt)) {
PR_ERROR("Pointer is not recognized as net_pkt (%s).\n",
argv[1]);
return -ENOEXEC;
}

View File

@@ -231,7 +231,9 @@ static const char *tcp_flags(uint8_t flags)
len += snprintk(buf + len, BUF_SIZE - len, "URG,");
}
buf[len - 1] = '\0'; /* delete the last comma */
if (len > 0) {
buf[len - 1] = '\0'; /* delete the last comma */
}
}
#undef BUF_SIZE
return buf;
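The new guard matters when no flag bit is set: len stays 0 and the unguarded buf[len - 1] writes one byte before the buffer. A self-contained reproduction of the corrected pattern (plain C, hypothetical flag bits, buf assumed large enough):

#include <stdio.h>

#define FLAG_SYN 0x01
#define FLAG_ACK 0x02

static const char *flags_to_str(unsigned int flags, char *buf, size_t size)
{
	int len = 0;

	if (flags & FLAG_SYN) {
		len += snprintf(buf + len, size - len, "SYN,");
	}
	if (flags & FLAG_ACK) {
		len += snprintf(buf + len, size - len, "ACK,");
	}

	if (len > 0) {
		buf[len - 1] = '\0'; /* delete the last comma */
	} else {
		buf[0] = '\0';       /* no flags set: avoid buf[-1] */
	}
	return buf;
}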

View File

@@ -463,6 +463,11 @@ static ssize_t spair_write(void *obj, const void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {
@@ -652,6 +657,11 @@ static ssize_t spair_read(void *obj, void *buffer, size_t count)
}
if (will_block) {
if (k_is_in_isr()) {
errno = EAGAIN;
res = -1;
goto out;
}
for (int signaled = false, result = -1; !signaled;
result = -1) {

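Both spair_write() and spair_read() get the same fix: an operation that would block must fail fast with EAGAIN when invoked from an ISR, because waiting on the pair's signal in interrupt context would lock up. A sketch of the pattern (assuming Zephyr 2.7's k_is_in_isr() and POSIX errno):

#include <zephyr.h>      /* k_is_in_isr() */
#include <errno.h>
#include <stdbool.h>
#include <sys/types.h>   /* ssize_t */

static ssize_t io_op_sketch(bool will_block)
{
	if (will_block && k_is_in_isr()) {
		errno = EAGAIN; /* never wait in interrupt context */
		return -1;
	}
	/* ...thread context: safe to block until signaled... */
	return 0;
}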
View File

@@ -38,6 +38,7 @@
#define CONFIG_MP_NUM_CPUS 1
#define CONFIG_SYS_CLOCK_TICKS_PER_SEC 100
#define CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC 10000000
#define CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS 365
#define ARCH_STACK_PTR_ALIGN 8
/* FIXME: Properly integrate with Zephyr's arch specific code */
#define CONFIG_X86 1

View File

@@ -72,7 +72,7 @@
#include "mbedtls/memory_buffer_alloc.h"
#endif
static int test_snprintf(size_t n, const char ref_buf[10], int ref_ret)
static int test_snprintf(size_t n, const char *ref_buf, int ref_ret)
{
int ret;
char buf[10] = "xxxxxxxxx";

View File

@@ -79,8 +79,13 @@ static int test_task(uint32_t chan_id, uint32_t blen)
TC_PRINT("Starting the transfer\n");
(void)memset(rx_data, 0, sizeof(rx_data));
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data;
#endif
if (dma_config(dma, chan_id, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");

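Each of these DMA test fixes is the same change: the source_address and dest_address fields of struct dma_block_config follow CONFIG_DMA_64BIT, so casting a pointer through uint32_t truncates it on 64-bit targets. A hypothetical helper macro capturing the idiom (fragment; uintptr_t would express the same intent without the #ifdef):

#ifdef CONFIG_DMA_64BIT
#define DMA_ADDR(ptr) ((uint64_t)(ptr))
#else
#define DMA_ADDR(ptr) ((uint32_t)(ptr))
#endif

dma_block_cfg.source_address = DMA_ADDR(tx_data);
dma_block_cfg.dest_address   = DMA_ADDR(rx_data);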
View File

@@ -87,8 +87,13 @@ static int test_task(int minor, int major)
(void)memset(rx_data2, 0, sizeof(rx_data2));
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data2;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data2;
#endif
if (dma_config(dma, TEST_DMA_CHANNEL_1, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");
@@ -104,8 +109,13 @@ static int test_task(int minor, int major)
dma_cfg.linked_channel = TEST_DMA_CHANNEL_1;
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data;
#endif
if (dma_config(dma, TEST_DMA_CHANNEL_0, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");

View File

@@ -57,8 +57,13 @@ static void test_transfer(const struct device *dev, uint32_t id)
transfer_count++;
if (transfer_count < TRANSFER_LOOPS) {
dma_block_cfg.block_size = strlen(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data[transfer_count];
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data[transfer_count];
#endif
zassert_false(dma_config(dev, id, &dma_cfg),
"Not able to config transfer %d",

View File

@@ -4,3 +4,4 @@ CONFIG_FPU=y
CONFIG_FPU_SHARING=y
CONFIG_CBPRINTF_NANO=y
CONFIG_MAIN_STACK_SIZE=1024
CONFIG_MP_NUM_CPUS=1

View File

@@ -35,7 +35,7 @@ struct reply_packet {
struct timeout_order_data {
void *link_in_lifo;
struct k_lifo *klifo;
k_ticks_t timeout;
int32_t timeout;
int32_t timeout_order;
int32_t q_order;
};
@@ -43,23 +43,23 @@ struct timeout_order_data {
static struct k_lifo lifo_timeout[2];
struct timeout_order_data timeout_order_data[] = {
{0, &lifo_timeout[0], 20, 2, 0},
{0, &lifo_timeout[0], 40, 4, 1},
{0, &lifo_timeout[0], 0, 0, 2},
{0, &lifo_timeout[0], 10, 1, 3},
{0, &lifo_timeout[0], 30, 3, 4},
{0, &lifo_timeout[0], 200, 2, 0},
{0, &lifo_timeout[0], 400, 4, 1},
{0, &lifo_timeout[0], 0, 0, 2},
{0, &lifo_timeout[0], 100, 1, 3},
{0, &lifo_timeout[0], 300, 3, 4},
};
struct timeout_order_data timeout_order_data_mult_lifo[] = {
{0, &lifo_timeout[1], 0, 0, 0},
{0, &lifo_timeout[0], 30, 3, 1},
{0, &lifo_timeout[0], 50, 5, 2},
{0, &lifo_timeout[1], 80, 8, 3},
{0, &lifo_timeout[1], 70, 7, 4},
{0, &lifo_timeout[0], 10, 1, 5},
{0, &lifo_timeout[0], 60, 6, 6},
{0, &lifo_timeout[0], 20, 2, 7},
{0, &lifo_timeout[1], 40, 4, 8},
{0, &lifo_timeout[1], 0, 0, 0},
{0, &lifo_timeout[0], 300, 3, 1},
{0, &lifo_timeout[0], 500, 5, 2},
{0, &lifo_timeout[1], 800, 8, 3},
{0, &lifo_timeout[1], 700, 7, 4},
{0, &lifo_timeout[0], 100, 1, 5},
{0, &lifo_timeout[0], 600, 6, 6},
{0, &lifo_timeout[0], 200, 2, 7},
{0, &lifo_timeout[1], 400, 4, 8},
};
#define NUM_SCRATCH_LIFO_PACKETS 20
@@ -110,9 +110,7 @@ static bool is_timeout_in_range(uint32_t start_time, uint32_t timeout)
uint32_t stop_time, diff;
stop_time = k_cycle_get_32();
diff = (uint32_t)k_cyc_to_ns_floor64(stop_time -
start_time) / NSEC_PER_USEC;
diff = diff / USEC_PER_MSEC;
diff = k_cyc_to_ms_floor32(stop_time - start_time);
return timeout <= diff;
}
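The rewritten check converts elapsed cycles straight to milliseconds instead of going cycles to ns to us to ms in three steps, and the tenfold-larger timeouts keep the ordering measurable at CONFIG_SYS_CLOCK_TICKS_PER_SEC=100, where one tick is 10 ms. The measurement pattern (assuming Zephyr's cycle API):

#include <zephyr.h> /* k_cycle_get_32(), k_cyc_to_ms_floor32() */

static uint32_t elapsed_ms_since(uint32_t start_cycles)
{
	return k_cyc_to_ms_floor32(k_cycle_get_32() - start_cycles);
}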
@@ -266,7 +264,7 @@ static void test_timeout_empty_lifo(void)
uint32_t start_time, timeout;
timeout = 10U;
timeout = 100U;
start_time = k_cycle_get_32();

View File

@@ -80,14 +80,23 @@ void test_kobject_access_grant_error(void)
*/
void test_kobject_access_grant_error_user(void)
{
struct k_msgq *m;
struct k_queue *q;
m = k_object_alloc(K_OBJ_MSGQ);
k_object_access_grant(m, k_current_get());
/*
* Avoid K_OBJ_PIPE, K_OBJ_MSGQ, and K_OBJ_STACK here: k_object_alloc()
* returns an uninitialized kernel object, and those object types can own
* additional memory allocations that must be freed. During fault-handler
* cleanup, freeing such an uninitialized object can cause the random data
* it contains to be interpreted as allocations to free, triggering a
* secondary fault that fails the test.
*/
q = k_object_alloc(K_OBJ_QUEUE);
k_object_access_grant(q, k_current_get());
set_fault_valid(true);
/* a K_ERR_KERNEL_OOPS expected */
k_object_access_grant(m, NULL);
k_object_access_grant(q, NULL);
}
/**
@@ -1264,10 +1273,13 @@ void test_alloc_kobjects(void)
zassert_not_null(t, "alloc obj (0x%lx)\n", (uintptr_t)t);
p = k_object_alloc(K_OBJ_PIPE);
zassert_not_null(p, "alloc obj (0x%lx)\n", (uintptr_t)p);
k_pipe_init(p, NULL, 0);
s = k_object_alloc(K_OBJ_STACK);
zassert_not_null(s, "alloc obj (0x%lx)\n", (uintptr_t)s);
k_stack_init(s, NULL, 0);
m = k_object_alloc(K_OBJ_MSGQ);
zassert_not_null(m, "alloc obj (0x%lx)\n", (uintptr_t)m);
k_msgq_init(m, NULL, 0, 0);
q = k_object_alloc(K_OBJ_QUEUE);
zassert_not_null(q, "alloc obj (0x%lx)\n", (uintptr_t)q);
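The added init calls make the rule explicit: an object returned by k_object_alloc() is permission-tracked but otherwise uninitialized, so it must pass through its type's init function before any other kernel API touches it. A sketch (Zephyr object API; zero-size buffers just for illustration):

struct k_msgq *m = k_object_alloc(K_OBJ_MSGQ);

if (m != NULL) {
	k_msgq_init(m, NULL, 0, 0); /* real users pass storage and sizes */
}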

View File

@@ -131,7 +131,8 @@ static inline void set_fault_valid(bool valid)
#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
#if (defined(CONFIG_X86_64) || defined(CONFIG_ARM64) || \
(defined(CONFIG_RISCV) && defined(CONFIG_64BIT)))
#define TEST_HEAP_SIZE (2 << CONFIG_MAX_THREAD_BYTES) * 1024
#define MAX_OBJ 512
#else

View File

@@ -6,6 +6,7 @@ tests:
# To get clean results we need to disable this test until the bug is fixed and
# the fix is propagated to a new Zephyr-SDK release.
platform_exclude: twr_ke18f qemu_arc_hs qemu_arc_em
extra_args: CONFIG_TEST_HW_STACK_PROTECTION=n
tags: kernel security userspace ignore_faults
kernel.memory_protection.gap_filling.arc:
filter: CONFIG_ARCH_HAS_USERSPACE and CONFIG_MPU_REQUIRES_NON_OVERLAPPING_REGIONS

View File

@@ -902,6 +902,7 @@ void test_syscall_context(void)
check_syscall_context();
}
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
static void tls_leakage_user_part(void *p1, void *p2, void *p3)
{
char *tls_area = p1;
@@ -911,9 +912,11 @@ static void tls_leakage_user_part(void *p1, void *p2, void *p3)
"TLS data leakage to user mode");
}
}
#endif
void test_tls_leakage(void)
{
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* Tests two assertions:
*
* - That a user thread has full access to its TLS area
@@ -926,15 +929,21 @@ void test_tls_leakage(void)
k_thread_user_mode_enter(tls_leakage_user_part,
_current->userspace_local_data, NULL, NULL);
#else
ztest_test_skip();
#endif
}
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
void tls_entry(void *p1, void *p2, void *p3)
{
printk("tls_entry\n");
}
#endif
void test_tls_pointer(void)
{
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
k_thread_create(&test_thread, test_stack, STACKSIZE, tls_entry,
NULL, NULL, NULL, 1, K_USER, K_FOREVER);
@@ -958,6 +967,9 @@ void test_tls_pointer(void)
printk("tls area out of bounds\n");
ztest_test_fail();
}
#else
ztest_test_skip();
#endif
}
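The same guard shape fits any configuration-dependent test: keep the test function defined on every build so the test list stays stable, and call ztest_test_skip() when the feature is compiled out. A minimal sketch (ztest API):

void test_requires_feature(void)
{
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
	/* ...exercise the feature... */
#else
	ztest_test_skip(); /* reported as skipped, not failed */
#endif
}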

Some files were not shown because too many files have changed in this diff.