Compare commits

...

68 Commits

Author SHA1 Message Date
Christopher Friedt
030fa9da45 release: Zephyr 2.7.5
Set version to 2.7.5

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:08:42 -04:00
Christopher Friedt
43370b89c3 release: minor corrections to security release notes
* remove reference to other github project issue
* complete incomplete sentence

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:07:16 -04:00
Flavio Ceolin
15fa28896a release: mbedTLS: Add vulnerabilities info
Add information about vulnerabilities fixed since mbedTLS 2.26.0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Flavio Ceolin
ce3eb90a83 release: security: Add rel notes for vulnerabilities
Add information about vulnerabilities fixed in 2.7.5 release.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Chris Friedt
ca24cd6c2d release: update v2.7.5 release notes
* add bugfixes to v2.7.5 release

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-27 05:32:14 -04:00
Flavio Ceolin
4fc4dc7b84 release: mbedTLS: Add rel notes for mbedTLS
Release notes for the latest mbedTLS update.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-26 11:10:27 +09:00
Krzysztof Chruscinski
fb24b62dc5 logging: Fix user space crash when runtime filtering is on
Logging module data (including filters) is not accessible from user
space. The macro for creating logs created a local variable holding the
filters before checking whether we are in the user context. The variable
was not used in that case, but creating it violated access rights and
resulted in a failure.

Remove the variable creation and use the filters directly in the if
clause, but only after checking that we are not in the user context.
With this approach the data is accessed only in kernel mode.

Cherry-picked with modifications from
4ee59e2cdb.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-05-18 00:27:35 +08:00
Chris Friedt
60e7a97328 release: create outline for v2.7.5 release notes
Create a template for v2.7.5 release notes.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-18 00:07:05 +08:00
Flavio Ceolin
a1aa463783 boards: mps2_an521_ns: Remove simulation capability
This board requires TF-M which is not supported by default in the
current Zephyr release. Just remove the simulation capability to
avoid CI failures.

See: https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
29c1e08cf7 west: tf-m: Remove tf-m from Zephyr LTS
Zephyr's mbedTLS was updated to 2.28.x, which is an LTS release and
addresses several vulnerabilities affecting 2.26 (the version previously
used in the Zephyr LTS).

Unfortunately this mbedTLS version is not compatible with TF-M, and
backporting the mbedTLS fixes was not a viable solution. Because of this
we are removing the TF-M module from Zephyr's LTS. One can still add it
back to this manifest if needed, but it is no longer "officially"
supported.

More information in:
https://github.com/zephyrproject-rtos/zephyr/issues/56071
https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
190f09df52 samples: tfm: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Only build / run these TF-M samples when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
a166290f1a tfm: boards: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Enable BUILD_WITH_TFM only when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
21e0870106 crypto: Bump mbedTLS to 2.28.3
Bump mbedTLS to version 2.28.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
ebe3651f3d tests: mbedtls: Fix GCC warning about test_snprintf
Fix errors like:

inlined from ‘test_mbedtls’ at
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:172:6:
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:96:17: error:
‘test_snprintf’ reading 10 bytes from a region of size 1
[-Werror=stringop-overread]
   96 |                 test_snprintf(1, "", -1) != 0 ||
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~

These errors occur with GCC >= 11 because `ret_buf` in some calls is a shorter string literal.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Torsten Rasmussen
1b7c720c7f cmake: prefix local version of return variable
Fixes: #55490
Follow-up: #53124

Prefix local version of the return variable before calling
`zephyr_check_compiler_flag_hardcoded()`.

This ensures that there will never be any naming collision between named
return argument and the variable name used in later functions when
PARENT_SCOPE is used.

Issue #55490 describes a situation where the double de-referencing was
not working correctly.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 599886a9d3)
2023-05-13 19:23:55 -04:00
Stephanos Ioannidis
58af1b51bd ci: Use organisation-level AWS secrets
This commit updates the CI workflows to use the `zephyrproject-rtos`
organisation-level AWS secrets instead of the repository-level secrets.

Using organisation-level secrets allows more centralised management of
the access keys used throughout the GitHub Actions CI infrastructure.

Note that the `AWS_*_ACCESS_KEY_ID` is now stored in plaintext as a
variable instead of a secret because it is equivalent to a username and
needs to be identifiable for management and audit purposes.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-05-12 03:30:43 +09:00
Kumar Gala
70f2a4951a tests: posix: fs: disable CONFIG_EVENTFD
The test doesn't use eventfd so we can disable it to save some space.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit 70e921dbc7)
2023-05-10 19:48:15 -04:00
Kumar Gala
650d10805a posix: eventfd: depends on polling
Have the eventfd Kconfig select POLL, as the code utilizes the polling
API. Without this we get a link error for tests/lib/fdtable/libraries.os.fdtable
when building with arm-clang.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit f215e4494c)
2023-05-10 19:48:15 -04:00
Chris Friedt
25616b1021 tests: drivers: dma: loop: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 8c6c96715f)
2023-05-09 08:42:19 -04:00
Chris Friedt
f72519007c tests: drivers: dma: chan_link: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7f6d976916)
2023-05-09 08:42:19 -04:00
Chris Friedt
1b2a7ec251 tests: drivers: dma: chan_blen: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 5afcac5e14)
2023-05-09 08:42:19 -04:00
Chris Friedt
9d2533fc92 tests: posix: ensure that min and max priority are schedulable
Verify that threads are actually schedulable for min and max
scheduler priority for both `SCHED_RR` (preemptive) and
`SCHED_FIFO` (cooperative).

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit ad71b78770)
2023-05-02 16:25:42 -04:00
Chris Friedt
e20b8f3f34 posix: sched: ensure min and max priority are schedulable
Previously, there was an off-by-one error for SCHED_RR.

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 2b2cbf8107)
2023-05-02 16:25:42 -04:00
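To illustrate the class of bug being fixed here, a minimal, hypothetical sketch (not the actual Zephyr POSIX scheduler code): an exclusive upper bound in a priority validity check rejects the highest priority even though it is reported as schedulable.

```c
/* Hypothetical illustration of an off-by-one priority check and its fix;
 * names and bounds are illustrative, not the actual Zephyr code. */
#include <stdbool.h>

static bool prio_valid_buggy(int prio, int min, int max)
{
	return (prio >= min) && (prio < max);   /* bug: rejects prio == max */
}

static bool prio_valid_fixed(int prio, int min, int max)
{
	return (prio >= min) && (prio <= max);  /* max priority is schedulable */
}
```
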
Christopher Friedt
199d5d5448 drivers: pcie_ep: iproc: compile-out unused function based on DT
Compile-out `iproc_pcie_pl330_dma_xfer()` if there are no active
DMA users in devicetree.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 9ad78eb60c)
2023-05-02 12:33:46 -04:00
Chris Friedt
5db2717f06 drivers: pcie_ep: iproc: ensure config and api are const
The `config` and `api` members of `struct device` are expected
to be `const`. This also improves reliability, as `config`
and `api` are stored in ROM rather than RAM, which has the
potential to be corrupted at runtime in the absence of an MMU.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7212792295)
2023-04-27 11:18:00 -04:00
Tarun Karuturi
f3851326da drivers: pcie_ep: iproc: enable based on device tree specs
There are use cases for the pcie_ep driver where we don't
necessarily need the DMA functionality. Add ifdefs around
the DMA functionality so that it is only available if the
DMA engines are specified in the devicetree, similar to:

```
dmas = <&pl330 0>, <&pl330 1>;
dma-names = "txdma", "rxdma";
```

Signed-off-by: Tarun Karuturi <tkaruturi@meta.com>
Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 9d95f69a87)
2023-04-27 11:18:00 -04:00
Stephanos Ioannidis
5a8d05b968 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-27 21:45:14 +09:00
Stephanos Ioannidis
eea42e38f3 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-15 15:59:20 +09:00
Chris Friedt
0388a90e7b posix: clock: fix seconds calculation
The previous method used to calculate seconds in `clock_gettime()`
seemed to have an inaccuracy that grew with time causing the
seconds to be off by an order of magnitude when ticks would roll
over.

This change fixes the method used to calculate seconds.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
4c62d76fb7 sys: time_units: Add Kconfig option for algorithm selection
Add the maximum timeout used for conversion to Kconfig. The option is
used to determine which conversion algorithm to use: faster but
overflowing earlier, or slower but without early overflow.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit 50c7c7b1e4)
2023-04-07 06:29:11 -04:00
Chris Friedt
6f8f9b5c7a tests: time_units: check for overflow in z_tmcvt intermediate
Prior to #41602, due to the ordering of operations (first mul,
then div), an intermediate value would overflow, resulting in
a time non-linearity.

This test ensures that time rolls over properly.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 74c9c0e7a3)
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
afbc93287d lib: posix: clock: Prevent early overflows
The algorithm was converting uptime to nanoseconds, which can easily
lead to overflows. Change the algorithm to use milliseconds, and use
nanoseconds only for the remainder.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-04-07 06:29:11 -04:00
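A minimal sketch of the overflow-safe conversion described above (the helper name and constants are illustrative, not the actual POSIX clock code): seconds are derived directly from the millisecond uptime, and nanoseconds are computed only for the sub-second remainder, instead of first scaling the whole uptime to nanoseconds.

```c
#include <stdint.h>
#include <time.h>

#define MSEC_PER_SEC_LL  1000LL
#define NSEC_PER_MSEC_LL 1000000LL

/* Convert a millisecond uptime into a timespec without building an
 * intermediate nanosecond value that overflows much earlier. */
static void uptime_ms_to_timespec(int64_t uptime_ms, struct timespec *ts)
{
	ts->tv_sec  = uptime_ms / MSEC_PER_SEC_LL;
	ts->tv_nsec = (uptime_ms % MSEC_PER_SEC_LL) * NSEC_PER_MSEC_LL;
}
```
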
Gerard Marull-Paretas
a28aa01a88 sys: time_units: add missing include
The header can't be fully used in standalone mode: toolchain.h has to be
included first, otherwise the ALWAYS_INLINE attribute is not defined.
Headers that can be directly included and are not self-contained should
be considered a bad practice.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-07 06:29:11 -04:00
Stephanos Ioannidis
677a374255 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:40 +09:00
Stephanos Ioannidis
0389fa740b ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 03:05:01 +09:00
Torsten Rasmussen
b02d34b855 cmake: fix variable de-referencing in zephyr_check_compiler_x functions
Fixes: #53124

Fix de-referencing of check and exists function arguments by correctly
de-referencing the argument references using `${<var>}`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 04a27651ea)
2023-03-13 07:48:51 -04:00
Torsten Rasmussen
29e3a4865f cmake: dereference ${check} after zephyr_check_compiler_flag() call
Follow-up: #53124

The PR#53124 fixed an issue where the variable `check` was not properly
dereferenced into the correct variable name for return value storage.
This was corrected in 04a27651ea.

However, some code was passing a return argument as:
`zephyr_check_compiler_flag(... ${check})`
but checking the result like:
`if(${check})`
thus relying on a faulty behavior of code updating `check` and not the
`${check}` variable.

Fix this by updating to use `${${check}}` as that will point to the
correct return value.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 45b25e5508)
2023-03-10 13:32:37 -05:00
Robert Lubos
aaa6d280ce net: iface: Add NULL pointer check in net_if_ipv6_set_reachable_time
In case the IPv6 context pointer was not set on an interface (for
instance due to IPv6 context shortage), processing the RA message could
lead to a crash (i.e. a NULL pointer dereference). Protect against this
by adding a NULL pointer check, similar to other functions in this area.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit c6c2098255)
2023-02-28 12:36:47 -05:00
Robert Lubos
e02a3377e5 net: shell: Validate pointer provided with net pkt command
The net_pkt pointer provided to net pkt commands was not validated in
any way. Therefore it was fairly easy to crash an application by
providing an invalid address.

This commit adds the pointer validation. It's checked whether the
pointer provided belongs to any net_pkt pools known to the net stack,
and if the pointer offset within the slab actually points to the
beginning of the net_pkt structure.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit e540a98331)
2023-02-28 12:34:38 -05:00
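A self-contained sketch of the kind of check described above (the helper and its parameters are hypothetical, not the actual shell code): the pointer must fall inside the pool's slab buffer, and its offset must be a whole multiple of the block size, i.e. point at the start of a net_pkt slot.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Return true only if ptr points at the start of one of the pool's blocks. */
static bool ptr_in_slab_pool(const void *ptr, const char *pool_base,
			     size_t block_size, size_t num_blocks)
{
	uintptr_t p    = (uintptr_t)ptr;
	uintptr_t base = (uintptr_t)pool_base;
	uintptr_t end  = base + (block_size * num_blocks);

	if (p < base || p >= end) {
		return false;                     /* outside this pool */
	}

	return ((p - base) % block_size) == 0U;   /* must be a slot start */
}
```
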
Gerard Marull-Paretas
76c30dfa55 ci: doc-build: fix PDF build
New LaTeX Docker image (Debian based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the system
environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:21:51 +09:00
Théo Battrel
c3f512d606 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`; this could
lead to using an uninitialized value and to unknown behavior.
To fix that issue, the `rp->le_max_num` value is checked the same way
as `bt_dev.le.acl_mtu` was already checked. The same thing has been done
for `rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 19:48:12 -05:00
NingX Zhao
f882abfd13 tests: removing incorrect testcases of poll
These two test cases are both fault injection test cases, designed to
test some negative branches to improve code coverage. However, this
branch shouldn't be tested: the spinlock is locked before the procedure
runs here, so an assert error is triggered, the process is rescheduled
to the handler function, and the current test case is terminated, which
means the spinlock is never unlocked. This impacts the next test case
in the same test suite (the next test case will never get the
spinlock).

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
(cherry picked from commit cb4a629bc8)
2023-02-07 12:14:38 -06:00
Lucas Dietrich
bc7300fea7 kernel: workq: Add internal function z_work_submit_to_queue()
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().

When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield in the context of the spinlock of the
object that became available.

Fixes #45267

Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
(cherry picked from commit 9a848b3ad4)
2023-02-03 18:37:53 -05:00
Andy Ross
8da9a76464 kernel/workq: Cleanup bespoke reschedule point
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible.  Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.

Really, this is only a half fix.  Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule().  But that would take some surgery, so let's go with
the simpler cleanup first.

This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.

[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 8d94967ec4)
2023-02-03 18:37:53 -05:00
Lixin Guo
298b8ea788 kernel: work: remove unused if statement
The work == NULL condition is already checked earlier, so there is no
need to check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
(cherry picked from commit d4826d874e)
2023-02-03 18:37:53 -05:00
Peter Mitsis
a9aaf048e8 kernel: Fixes sys_clock_tick_get()
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_tick_announce().
  1. The handler invokes k_uptime_ticks().
  2. k_uptime_ticks() invokes sys_clock_tick_get().
  3. sys_clock_tick_get() must call elapsed() and not
     sys_clock_elapsed() as we do not want to count any
     unannounced ticks that may have elapsed while
     processing the timer ISR.

Fixes #46378

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 71ef669ea4)
2023-02-03 18:37:53 -05:00
Peter Mitsis
e2b81b48c4 kernel: fix race condition in sys_clock_announce()
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 3e2f30a7ef)
2023-02-03 18:37:53 -05:00
Andy Ross
45c41bc344 kernel/timeout: Cleanup/speedup parallel announce logic
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list).  The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).

Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit.  The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.

Fixes #44758

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 0b2ed3818d)
2023-02-03 18:37:53 -05:00
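A hedged sketch of the scheme described above (the lock and field names are illustrative stand-ins, and the actual list walk is elided): if another CPU is already running the announce loop, the caller just folds its ticks into the remaining count and returns; only the first CPU drains the timeout list.

```c
#include <kernel.h>

static struct k_spinlock timeout_lock;   /* illustrative stand-ins */
static int32_t announce_remaining;

void clock_announce_sketch(int32_t ticks)
{
	k_spinlock_key_t key = k_spin_lock(&timeout_lock);

	if (announce_remaining != 0) {
		/* Another core owns the loop: hand it our ticks and leave. */
		announce_remaining += ticks;
		k_spin_unlock(&timeout_lock, key);
		return;
	}

	announce_remaining = ticks;
	/* ... walk the timeout list, releasing the lock around callbacks ... */
	announce_remaining = 0;

	k_spin_unlock(&timeout_lock, key);
}
```
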
Andy Ross
f570a46719 kernel/timeout: Serialize handler callbacks on SMP
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).

When that happens, each call will independently enter the loop over
the timeout list.  The access is correctly synchronized, so the list
handling is correct.  But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus.  That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".

That's surprising and error-prone.  Don't do it.  Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.

(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver.  That hardware issue is still poorly understood, but this fix
is desirable regardless.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b1182bf83b)
2023-02-03 18:37:53 -05:00
Flavio Ceolin
675a349e1b kernel: Fix timeout issue with SYSTEM_CLOCK_SLOPPY_IDLE
We can't simply use CLAMP to set the next timeout because, when
CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is a negative number,
and CLAMP would then be called with the upper boundary lower than the
lower boundary.

Fixes #41422

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 47b7c2e931)
2023-02-03 18:37:53 -05:00
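A self-contained illustration of the failure mode (simplified local macros and hypothetical values, not the kernel's own definitions): with sloppy idle, MAX_WAIT is K_TICKS_FOREVER (-1), so clamping a pending 5-tick timeout between 0 and -1 collapses to -1, i.e. "wait forever", and the timer is never programmed. The fixed form falls back to MAX_WAIT only when nothing is pending.

```c
#include <stdio.h>

#define MIN(a, b)             ((a) < (b) ? (a) : (b))
#define MAX(a, b)             ((a) > (b) ? (a) : (b))
#define CLAMP(val, low, high) (((val) <= (low)) ? (low) : MIN((val), (high)))

int main(void)
{
	int max_wait = -1;  /* K_TICKS_FOREVER under SYSTEM_CLOCK_SLOPPY_IDLE */
	int pending = 5;    /* ticks until the next timeout */
	int has_timeout = 1;

	/* Broken: upper bound below lower bound collapses to -1 (forever). */
	printf("broken: %d\n", CLAMP(pending, 0, max_wait));

	/* Fixed shape: only use MAX_WAIT when there is nothing pending. */
	printf("fixed:  %d\n", has_timeout ? MAX(0, pending) : max_wait);

	return 0;
}
```
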
Andy Ross
16927a6cbb kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b4e9ef0691)
2023-02-03 18:37:53 -05:00
Andy Ross
ab353d6b7d kernel/sched: Refactor IPI signaling
Minor cleanup, we had a bunch of duplicated #if logic to send IPIs,
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 3267cd327e)
2023-02-03 18:37:53 -05:00
Mark Holden
951b055b7f debug: coredump: allow for coredump backends to be defined outside of tree
Move coredump_backend_api struct to public header so that custom backends
for coredump can be defined out of tree. Create simple backend in test
directory for verification.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit 7b2b283677)
2023-02-03 18:35:56 -05:00
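A hedged sketch of what an out-of-tree backend could look like against the now-public struct coredump_backend_api (function and variable names are illustrative; how the backend gets selected via Kconfig is not shown here).

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <debug/coredump.h>

static void my_backend_start(void)
{
	/* open the transport (flash partition, UART, ...) */
}

static void my_backend_end(void)
{
	/* flush and close the transport */
}

static void my_backend_buffer_output(uint8_t *buf, size_t buflen)
{
	/* push buflen bytes from buf out to the transport */
	(void)buf;
	(void)buflen;
}

static int my_backend_query(enum coredump_query_id query_id, void *arg)
{
	(void)query_id;
	(void)arg;
	return -ENOTSUP;
}

static int my_backend_cmd(enum coredump_cmd_id cmd_id, void *arg)
{
	(void)cmd_id;
	(void)arg;
	return -ENOTSUP;
}

struct coredump_backend_api my_coredump_backend = {
	.start = my_backend_start,
	.end = my_backend_end,
	.buffer_output = my_backend_buffer_output,
	.query = my_backend_query,
	.cmd = my_backend_cmd,
};
```
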
Nicolas Pitre
8e256b3399 scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds
Let's consider this (simplified) compilation result of a debug build
using -O0 for riscv64:

|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
|                             unsigned int initial_count,
|                             unsigned int limit)
|{
|    80000ad0:   6105                    addi    sp,sp,32
|    80000ad2:   ec06                    sd      ra,24(sp)
|    80000ad4:   e42a                    sd      a0,8(sp)
|    80000ad6:   c22e                    sw      a1,4(sp)
|    80000ad8:   c032                    sw      a2,0(sp)
|        ret = arch_is_user_context();
|    80000ada:   b39ff0ef                jal     ra,80000612
|        if (z_syscall_trap()) {
|    80000ade:   c911                    beqz    a0,80000af2
|                return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
|                                    *(uintptr_t *)&initial_count,
|                                    *(uintptr_t *)&limit,
|                                    K_SYSCALL_K_SEM_INIT);
|    80000ae0:   6522                    ld      a0,8(sp)
|    80000ae2:   00413583                ld      a1,4(sp)
|    80000ae6:   6602                    ld      a2,0(sp)
|    80000ae8:   0b700693                li      a3,183
|    [...]

We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2)
being stored in memory with the `sw` (store word) instruction. Then,
according to the source code, the address of those values is casted
as a pointer to uintptr_t values, and that pointer is dereferenced to
get back those values with the `ld` (load double) instruction this time.

In other words, the assembly does exactly what the C code indicates.
This is wrong for several reasons:

- The top half of a1 and a2 will contain garbage due to the `ld` used
  to retrieve them. Whether or not the top bits will be cleared
  eventually depends on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big
  endian system.
- The load of a1 will cause a misaligned trap as it is 4-byte aligned
  while `ld` expects a 8-byte alignment.

The above code happens to work properly when compiling with
optimizations enabled as the compiler simplifies the cast and
dereference away, and register content is used as is in that case.
That doesn't make the code any more "correct" though.

The reason for taking the address of an argument and dereference it as an
uintptr_t pointer is most likely done to work around the fact that the
compiler refuses to cast an aggregate value to an integer, even if that
aggregate value is in fact a simple structure wrapping an integer.

So let's fix this code by:

- Removing the pointer dereference roundtrip and associated casts. This
  gets rid of all the issues listed above.
- Using a union to perform the type transition which deals with
  aggregates perfectly well. The compiler does optimize things to the
  same assembly output in the end.

This also makes the compiler happier, as those pragmas to silence warnings
are no longer needed. The same should apply to Coverity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 1db5c8b948)
2023-02-01 20:07:43 -05:00
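A minimal sketch of the union-based "type transition" described above (the types and names are illustrative of the idea, not the generated marshalling code): the argument is placed into a union whose other member is a uintptr_t, instead of taking the argument's address and dereferencing it as a wider type.

```c
#include <stdint.h>

/* Stand-in for a small aggregate such as k_timeout_t wrapping an integer. */
struct small_wrapper {
	uint32_t ticks;
};

static inline uintptr_t wrapper_to_reg(struct small_wrapper w)
{
	/* Union "type transition": no out-of-bounds read, no misaligned
	 * load; the compiler optimizes this to a plain register move. */
	union {
		struct small_wrapper v;
		uintptr_t x;
	} u = { .x = 0 };

	u.v = w;
	return u.x;
}
```
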
Nicolas Pitre
74f2760771 scripts: gen_syscalls: add missing --split-type case
With CONFIG_TIMEOUT_64BIT it is both k_timeout_t and k_ticks_t that
need to be split, otherwise many syscalls returning a number of ticks
are being truncated to 32 bits.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 2cdac33d39)
2023-02-01 20:07:43 -05:00
Nicolas Pitre
85e0912291 scripts: gen_syscalls: fix access validation size on extra params array
It was one below the entire array size.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit df80c77ed8)
2023-02-01 20:07:43 -05:00
Jim Shu
f2c582c75d gen_app_partitions: add .sdata/.sbss section into app_smem
Some architectures (e.g. RISC-V) have .sdata/.sbss sections for small
data/bss. The memory partition should also manage the permissions of
these sections in libraries, so they should be put into app_smem.
(For example, newlib's _impure_ptr is in the .sdata section and
__malloc_top_pad is in the .sbss section on RISC-V.)

Signed-off-by: Jim Shu <cwshu@andestech.com>
(cherry picked from commit 46eb3e5fce)
2023-02-01 20:07:43 -05:00
Robert Lubos
c908ee8133 net: context: Separate user data pointer from FIFO reserved space
Using the same memory as a user data pointer and as FIFO reserved space
could lead to a crash in certain circumstances, since those two use cases
were not completely separate.

The crash could happen for example, if an incoming TCP connection was
abruptly closed just after being established. As TCP uses the user data
to notify error condition to the upper layer, the user data pointer
could've been used while the newly allocated context could still be
waiting on the accept queue. This damaged the data area used by the FIFO
and eventually could lead to a crash.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 2ab11953e3)
2023-01-31 16:13:42 -05:00
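Background for the change (illustrative struct, not the real net_context layout): Zephyr's k_fifo/k_queue store their link pointer in the first word of any item placed on the queue, so a queueable structure must reserve its first member for that purpose and keep user data in a separate field.

```c
/* Illustrative only: the queue overwrites the first word of a queued item,
 * so user data must live in its own member. */
struct queueable_item {
	void *fifo_reserved;   /* written by k_fifo_put()/k_fifo_get() */
	void *user_data;       /* never touched by the queue */
	/* ... payload ... */
};
```
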
Nicolas Pitre
175e76b302 z_thread_mark_switched_*: use z_current_get() instead of k_current_get()
k_current_get() may rely on TLS which might not yet be initialized
when those tracing functions are called, resulting in a crash.

This is different from the main branch as in that case the implementation
was completely revamped and neither k_current_get() nor z_current_get()
are used anymore. This is a much simpler fix than a backport of that
code, similar to the implication in commit f07df42d49 ("kernel:
make k_current_get() work without syscall").

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-23 15:01:10 -05:00
Keith Packard
c520749a71 tests: Disable HW stack protection for some mpu tests
When active, z_libc_partition consumes an MPU region which leaves too
few for some MPU tests. Free up one by disabling HW stack protection.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 19c8956946)
2023-01-11 11:02:47 -05:00
David Leach
584f52d5be tests: mem_protect: ensure allocated objects are initialized
K_OBJ_MSGQ, K_OBJ_PIPE, and K_OBJ_STACK objects have pointers
to additional memory that can be allocated. The k_obj_alloc()
returns these objects as uninitialized so when they are freed
there are random opportunities for freeing invalid memory
and causing random faults.

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit fdea2a628b)
2023-01-11 11:02:47 -05:00
David Leach
d05c3bdf36 tests: mem_protect: avoid allocating K_OBJ_MSGQ in userspace.
The K_OBJ_MSGQ object is uninitialized, so when the thread cleanup
occurs after an expected fault for invalid access, the test case can
randomly fault again because the thread cleanup will sometimes attempt
to free an invalid buffer_start pointer in the msgq object.

Fixes #42705

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit a0737e687c)
2023-01-11 11:02:47 -05:00
Jim Shu
3ab0c9516f tests: mem_protect: enlarge heap size of RISCV64
Because the k_thread size on RISCV64 is nearly 512 bytes, a heap size of
(num_of_thread * 256) bytes is not enough. Enlarge the heap size on
RISCV64 to (num_of_thread * 1024) bytes, like on x86_64 and ARM64.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
(cherry picked from commit e2d67d60ba)
2023-01-11 11:02:47 -05:00
Keith Packard
df6f0f477f tests/kernel/mem_protect: Check for thread_userspace_local_data
When using THREAD_LOCAL_STORAGE the thread_userspace_local_data stuff
isn't used, so these tests wouldn't build.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit b03b2e0403)
2023-01-11 11:02:47 -05:00
Nicolas Pitre
2dc30ca1fb tests: lifo_usage: make it less susceptible to SMP races
On SMP, and especially using qemu on a busy system, it is possible for
a thread with a later timeout to get ahead of another one with an
earlier timeout. The tight timeout value difference (10ms) makes it
possible albeit difficult to reproduce. The result is something like:

|START - test_timeout_threads_pend_on_lifo
| thread (q order: 2, t/o: 0, lifo 0x4001d350)
|
|    Assertion failed at main.c:140:
|test_multiple_threads_pending: (data->timeout_order not equal to ii)
| *** thread 2 woke up, expected 1

Let's make timeout values 10 times larger to make this unlikely race
even less likely.

While at it... The timeout field in struct timeout_order_data is some ms
value and not a number of ticks, so change the type accordingly.
And leverage k_cyc_to_ms_floor32() to simplify computation in
is_timeout_in_range().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit a1ce2fb990)
2023-01-11 11:02:47 -05:00
Daniel Leung
5cbda9f1c7 tests: kernel/smp: wait for threads to exits between tests
This adds a number of k_thread_join() calls to make sure threads spawned
for a test are no longer running before exiting that test. This prevents
interference between tests when some threads are still running while
assumed not to be.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit dbe3874079)
2023-01-11 11:02:47 -05:00
Carlo Caione
711506349d tests/kernel/smp: Add SMP switch torture test
Formalize and rework the issue reproducer for #40795 and add it to the
SMP test suite.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
(cherry picked from commit 8edf9817c0)
2023-01-11 11:02:47 -05:00
Ederson de Souza
572921a44a tests/kernel/fpu_sharing: Run test with MP_NUM_CPUS=1
This test uses k_yield() to "sync" between threads, so it's implicitly
supposed to run on a single CPU. Make it explicit, to avoid issues on
platforms with more cores.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>

FIXKFLOATDISABLE

(cherry picked from commit ab17f69a72)
2023-01-11 11:02:47 -05:00
85 changed files with 1097 additions and 417 deletions

View File

@@ -8,7 +8,7 @@ on:
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
steps:
- name: Check out source code

View File

@@ -78,8 +78,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -65,8 +65,8 @@ jobs:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -19,8 +19,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip

View File

@@ -125,7 +125,7 @@ jobs:
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v3
@@ -133,6 +133,12 @@ jobs:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip

View File

@@ -50,7 +50,7 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1

View File

@@ -32,8 +32,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3

View File

@@ -53,8 +53,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.FOOTPRINT_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.FOOTPRINT_AWS_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint

View File

@@ -43,8 +43,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Post Results

View File

@@ -7,6 +7,4 @@ jobs:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2.1.1
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'
- uses: actions/labeler@v4

View File

@@ -6,7 +6,7 @@ on:
jobs:
contribs:
runs-on: ubuntu-20.04
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code

View File

@@ -183,8 +183,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -627,7 +627,7 @@ if(CONFIG_64BIT)
endif()
if(CONFIG_TIMEOUT_64BIT)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t --split-type k_ticks_t)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 4
PATCHLEVEL = 5
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -27,6 +27,7 @@ endif # BOARD_BL5340_DVK_CPUAPP
config BUILD_WITH_TFM
default y if BOARD_BL5340_DVK_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -20,6 +20,7 @@ config BOARD
# force building with TF-M as the Secure Execution Environment.
config BUILD_WITH_TFM
default y if TRUSTED_EXECUTION_NONSECURE
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if GPIO

View File

@@ -4,7 +4,10 @@ type: mcu
arch: arm
ram: 4096
flash: 4096
simulation: qemu
# TFM is not supported by default in the Zephyr LTS release.
# Excluding this board's simulator to avoid CI failures.
#
#simulation: qemu
toolchain:
- gnuarmemb
- zephyr

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF5340DK_NRF5340_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF9160DK_NRF9160_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -518,7 +518,7 @@ function(zephyr_library_cc_option)
string(MAKE_C_IDENTIFIER check${option} check)
zephyr_check_compiler_flag(C ${option} ${check})
if(${check})
if(${${check}})
zephyr_library_compile_options(${option})
endif()
endforeach()
@@ -1003,9 +1003,9 @@ endfunction()
function(zephyr_check_compiler_flag lang option check)
# Check if the option is covered by any hardcoded check before doing
# an automated test.
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" check exists)
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" _${check} exists)
if(exists)
set(check ${check} PARENT_SCOPE)
set(${check} ${_${check}} PARENT_SCOPE)
return()
endif()
@@ -1110,11 +1110,11 @@ function(zephyr_check_compiler_flag_hardcoded lang option check exists)
# because they would produce a warning instead of an error during
# the test. Exclude them by toolchain-specific blocklist.
if((${lang} STREQUAL CXX) AND ("${option}" IN_LIST CXX_EXCLUDED_OPTIONS))
set(check 0 PARENT_SCOPE)
set(exists 1 PARENT_SCOPE)
set(${check} 0 PARENT_SCOPE)
set(${exists} 1 PARENT_SCOPE)
else()
# There does not exist a hardcoded check for this option.
set(exists 0 PARENT_SCOPE)
set(${exists} 0 PARENT_SCOPE)
endif()
endfunction(zephyr_check_compiler_flag_hardcoded)
@@ -1862,7 +1862,7 @@ function(check_set_linker_property)
zephyr_check_compiler_flag(C "" ${check})
set(CMAKE_REQUIRED_FLAGS ${SAVED_CMAKE_REQUIRED_FLAGS})
if(${check})
if(${${check}})
set_property(TARGET ${LINKER_PROPERTY_TARGET} ${APPEND} PROPERTY ${property} ${option})
endif()
endfunction()

View File

@@ -2,6 +2,143 @@
.. _zephyr_2.7:
.. _zephyr_2.7.5:
Zephyr 2.7.5
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.4 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`41111` - utils: tmcvt: fix integer overflow after 6.4 days with ``gettimeofday()`` and ``z_tmcvt()``
* :github:`51663` - tests: kernel: increase coverage for kernel and mmu tests
* :github:`53124` - cmake: fix argument passing in ``zephyr_check_compiler_flag()`` cmake function
* :github:`53315` - net: tcp: fix possible underflow in ``tcp_flags()``.
* :github:`53981` - scripts: fixes for ``gen_syscalls`` and ``gen_app_partitions``
* :github:`53983` - init: correct early init time calls to ``k_current_get()`` when TLS is enabled
* :github:`54140` - net: fix BUS FAULT when running nmap towards echo_async sample
* :github:`54325` - coredump: support out-of-tree coredump backend definition
* :github:`54386` - kernel: correct SMP scheduling with more than 2 CPUs
* :github:`54527` - tests: kernel: remove faulty test from tests/kernel/poll
* :github:`55019` - bluetooth: initialize backport of #54905 failed
* :github:`55068` - net: ipv6: validate arguments in ``net_if_ipv6_set_reachable_time()``
* :github:`55069` - net: core: ``net pkt`` shell command missing input validation
* :github:`55323` - logging: fix userspace runtime filtering
* :github:`55490` - cxx: fix compile error in C++ project for bad flags ``-Wno-pointer-sign`` and ``-Werror=implicit-int``
* :github:`56071` - security: MbedTLS: update to v2.28.3
* :github:`56729` - posix: SCHED_RR valid thread priorities
* :github:`57210` - drivers: pcie: endpoint: pcie_ep_iproc: correct use of optional devicetree binding
* :github:`57419` - tests: dma: support 64-bit addressing in tests
* :github:`57710` - posix: support building eventfd on arm-clang
mbedTLS
*******
Moving mbedTLS to the 2.28.x series (2.28.3 precisely). This is an LTS release
that will be supported with bug fixes and security fixes until the end of 2024.
Detailed information can be found in:
https://github.com/Mbed-TLS/mbedtls/releases/tag/v2.28.3
https://github.com/zephyrproject-rtos/zephyr/issues/56071
This version is incompatible with TF-M, and because of this TF-M is no longer
supported in the Zephyr LTS. If TF-M is required, it can be manually added back
by changing the mbedTLS revision in ``west.yaml`` to the previous one
(5765cb7f75a9973ae9232d438e361a9d7bbc49e7). This should be carefully assessed
by a security expert to ensure that the known vulnerabilities in that version
don't affect the product.
Vulnerabilities addressed in this update:
* MBEDTLS_AESNI_C, which is enabled by default, was silently ignored on
builds that couldn't compile the GCC-style assembly implementation
(most notably builds with Visual Studio), leaving them vulnerable to
timing side-channel attacks. There is now an intrinsics-based AES-NI
implementation as a fallback for when the assembly one cannot be used.
* Fix potential heap buffer overread and overwrite in DTLS if
MBEDTLS_SSL_DTLS_CONNECTION_ID is enabled and
MBEDTLS_SSL_CID_IN_LEN_MAX > 2 * MBEDTLS_SSL_CID_OUT_LEN_MAX.
* An adversary with access to precise enough information about memory
accesses (typically, an untrusted operating system attacking a secure
enclave) could recover an RSA private key after observing the victim
performing a single private-key operation if the window size used for the
exponentiation was 3 or smaller. Found and reported by Zili KOU,
Wenjian HE, Sharad Sinha, and Wei ZHANG. See "Cache Side-channel Attacks
and Defenses of the Sliding Window Algorithm in TEEs" - Design, Automation
and Test in Europe 2023.
* Zeroize dynamically-allocated buffers used by the PSA Crypto key storage
module before freeing them. These buffers contain secret key material, and
could thus potentially leak the key through freed heap.
* Fix a potential heap buffer overread in TLS 1.2 server-side when
MBEDTLS_USE_PSA_CRYPTO is enabled, an opaque key (created with
mbedtls_pk_setup_opaque()) is provisioned, and a static ECDH ciphersuite
is selected. This may result in an application crash or potentially an
information leak.
* Fix a buffer overread in DTLS ClientHello parsing in servers with
MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE enabled. An unauthenticated client
or a man-in-the-middle could cause a DTLS server to read up to 255 bytes
after the end of the SSL input buffer. The buffer overread only happens
when MBEDTLS_SSL_IN_CONTENT_LEN is less than a threshold that depends on
the exact configuration: 258 bytes if using mbedtls_ssl_cookie_check(),
and possibly up to 571 bytes with a custom cookie check function.
Reported by the Cybeats PSI Team.
* Zeroize several intermediate variables used to calculate the expected
value when verifying a MAC or AEAD tag. This hardens the library in
case the value leaks through a memory disclosure vulnerability. For
example, a memory disclosure vulnerability could have allowed a
man-in-the-middle to inject fake ciphertext into a DTLS connection.
* In psa_cipher_generate_iv() and psa_cipher_encrypt(), do not read back
from the output buffer. This fixes a potential policy bypass or decryption
oracle vulnerability if the output buffer is in memory that is shared with
an untrusted application.
* Fix a double-free that happened after mbedtls_ssl_set_session() or
mbedtls_ssl_get_session() failed with MBEDTLS_ERR_SSL_ALLOC_FAILED
(out of memory). After that, calling mbedtls_ssl_session_free()
and mbedtls_ssl_free() would cause an internal session buffer to
be free()'d twice.
* Fix a bias in the generation of finite-field Diffie-Hellman-Merkle (DHM)
private keys and of blinding values for DHM and elliptic curves (ECP)
computations.
* Fix a potential side channel vulnerability in ECDSA ephemeral key generation.
An adversary who is capable of very precise timing measurements could
learn partial information about the leading bits of the nonce used for the
signature, allowing the recovery of the private key after observing a
large number of signature operations. This completes a partial fix in
Mbed TLS 2.20.0.
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2023-0397: `Zephyr project bug tracker GHSA-wc2h-h868-q7hj
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-wc2h-h868-q7hj>`_
* CVE-2023-0779: `Zephyr project bug tracker GHSA-9xj8-6989-r549
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-9xj8-6989-r549>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.4:
Zephyr 2.7.4

View File

@@ -467,7 +467,7 @@ err_out:
static struct iproc_pcie_ep_ctx iproc_pcie_ep_ctx_0;
static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
static const struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.id = 0,
.base = (struct iproc_pcie_reg *)DT_INST_REG_ADDR(0),
.reg_size = DT_INST_REG_SIZE(0),
@@ -475,19 +475,21 @@ static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.map_low_size = DT_INST_REG_SIZE_BY_NAME(0, map_lowmem),
.map_high_base = DT_INST_REG_ADDR_BY_NAME(0, map_highmem),
.map_high_size = DT_INST_REG_SIZE_BY_NAME(0, map_highmem),
#if DT_INST_NODE_HAS_PROP(0, dmas)
.pl330_dev = DEVICE_DT_GET(DT_INST_DMAS_CTLR_BY_IDX(0, 0)),
.pl330_tx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, txdma, channel),
.pl330_rx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, rxdma, channel),
#endif
};
static struct pcie_ep_driver_api iproc_pcie_ep_api = {
static const struct pcie_ep_driver_api iproc_pcie_ep_api = {
.conf_read = iproc_pcie_conf_read,
.conf_write = iproc_pcie_conf_write,
.map_addr = iproc_pcie_map_addr,
.unmap_addr = iproc_pcie_unmap_addr,
.raise_irq = iproc_pcie_raise_irq,
.register_reset_cb = iproc_pcie_register_reset_cb,
.dma_xfer = iproc_pcie_pl330_dma_xfer,
.dma_xfer = DT_INST_NODE_HAS_PROP(0, dmas) ? iproc_pcie_pl330_dma_xfer : NULL,
};
DEVICE_DT_INST_DEFINE(0, &iproc_pcie_ep_init, NULL,

View File

@@ -126,6 +126,31 @@ struct coredump_mem_hdr_t {
uintptr_t end;
} __packed;
typedef void (*coredump_backend_start_t)(void);
typedef void (*coredump_backend_end_t)(void);
typedef void (*coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct coredump_backend_api {
/* Signal to backend of the start of coredump. */
coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
coredump_backend_end_t end;
/* Raw buffer output */
coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
void coredump(unsigned int reason, const z_arch_esf_t *esf,
struct k_thread *thread);
void coredump_memory_dump(uintptr_t start_addr, uintptr_t end_addr);

View File

@@ -162,6 +162,11 @@ struct z_kernel {
#if defined(CONFIG_THREAD_MONITOR)
struct k_thread *threads; /* singly linked list of ALL threads */
#endif
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
/* Need to signal an IPI at the next scheduling point */
bool pending_ipi;
#endif
};
typedef struct z_kernel _kernel_t;

View File

@@ -302,10 +302,8 @@ static inline char z_log_minimal_level_to_char(int level)
} \
\
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \
@@ -347,8 +345,6 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
\
if (IS_ENABLED(CONFIG_LOG_MINIMAL)) { \
Z_LOG_TO_PRINTK(_level, "%s", _str); \
@@ -357,7 +353,7 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \

View File

@@ -199,10 +199,11 @@ struct net_conn_handle;
* anyway. This saves 12 bytes / context in IPv6.
*/
__net_socket struct net_context {
/** User data.
*
* First member of the structure to let users either have user data
* associated with a context, or put contexts into a FIFO.
/** First member of the structure to allow to put contexts into a FIFO.
*/
void *fifo_reserved;
/** User data associated with a context.
*/
void *user_data;

View File

@@ -1368,6 +1368,10 @@ uint32_t net_if_ipv6_calc_reachable_time(struct net_if_ipv6 *ipv6);
static inline void net_if_ipv6_set_reachable_time(struct net_if_ipv6 *ipv6)
{
#if defined(CONFIG_NET_NATIVE_IPV6)
if (ipv6 == NULL) {
return;
}
ipv6->reachable_time = net_if_ipv6_calc_reachable_time(ipv6);
#endif
}

View File

@@ -7,6 +7,9 @@
#ifndef ZEPHYR_INCLUDE_TIME_UNITS_H_
#define ZEPHYR_INCLUDE_TIME_UNITS_H_
#include <sys/util.h>
#include <toolchain.h>
#ifdef __cplusplus
extern "C" {
#endif
@@ -56,6 +59,21 @@ static TIME_CONSTEXPR inline int sys_clock_hw_cycles_per_sec(void)
#endif
}
/** @internal
* Macro determines if fast conversion algorithm can be used. It checks if
* maximum timeout represented in source frequency domain and multiplied by
* target frequency fits in 64 bits.
*
* @param from_hz Source frequency.
* @param to_hz Target frequency.
*
* @retval true Use faster algorithm.
* @retval false Use algorithm preventing overflow of intermediate value.
*/
#define Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz) \
((ceiling_fraction(CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS * 24ULL * 3600ULL * from_hz, \
UINT32_MAX) * to_hz) <= UINT32_MAX)
/* Time converter generator gadget. Selects from one of three
* conversion algorithms: ones that take advantage when the
* frequencies are an integer ratio (in either direction), or a full
@@ -123,8 +141,18 @@ static TIME_CONSTEXPR ALWAYS_INLINE uint64_t z_tmcvt(uint64_t t, uint32_t from_h
} else {
if (result32) {
return (uint32_t)((t * to_hz + off) / from_hz);
} else if (const_hz && Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz)) {
/* Faster algorithm but source is first multiplied by target frequency
* and it can overflow even though final result would not overflow.
* Kconfig option shall prevent use of this algorithm when there is a
* risk of overflow.
*/
return ((t * to_hz + off) / from_hz);
} else {
return (t * to_hz + off) / from_hz;
/* Slower algorithm but input is first divided before being multiplied
* which prevents overflow of intermediate value.
*/
return (t / from_hz) * to_hz + ((t % from_hz) * to_hz + off) / from_hz;
}
}
}

View File

@@ -613,6 +613,17 @@ config TIMEOUT_64BIT
availability of absolute timeout values (which require the
extra precision).
config SYS_CLOCK_MAX_TIMEOUT_DAYS
int "Max timeout (in days) used in conversions"
default 365
help
Value is used in the time conversion static inline function to determine
at compile time which algorithm to use. One algorithm is faster, takes
less code but may overflow if multiplication of source and target
frequency exceeds 64 bits. Second algorithm prevents that. Faster
algorithm is selected for conversion if maximum timeout represented in
source frequency domain multiplied by target frequency fits in 64 bits.
config XIP
bool "Execute in place"
help

View File

@@ -576,6 +576,9 @@ static void triggered_work_expiration_handler(struct _timeout *timeout)
k_work_submit_to_queue(twork->workq, &twork->work);
}
extern int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work);
static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
{
struct z_poller *poller = event->poller;
@@ -587,7 +590,7 @@ static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
z_abort_timeout(&twork->timeout);
twork->poll_result = 0;
k_work_submit_to_queue(work_q, &twork->work);
z_work_submit_to_queue(work_q, &twork->work);
}
return 0;

View File

@@ -219,6 +219,25 @@ static ALWAYS_INLINE void dequeue_thread(void *pq,
}
}
static void signal_pending_ipi(void)
{
/* Synchronization note: you might think we need to lock these
* two steps, but an IPI is idempotent. It's OK if we do it
* twice. All we require is that if a CPU sees the flag true,
* it is guaranteed to send the IPI, and if a core sets
* pending_ipi, the IPI will be sent the next time through
* this code.
*/
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
if (_kernel.pending_ipi) {
_kernel.pending_ipi = false;
arch_sched_ipi();
}
}
#endif
}
#ifdef CONFIG_SMP
/* Called out of z_swap() when CONFIG_SMP. The current thread can
* never live in the run queue until we are inexorably on the context
@@ -231,6 +250,7 @@ void z_requeue_current(struct k_thread *curr)
if (z_is_thread_queued(curr)) {
_priq_run_add(&_kernel.ready_q.runq, curr);
}
signal_pending_ipi();
}
#endif
@@ -481,6 +501,15 @@ static bool thread_active_elsewhere(struct k_thread *thread)
return false;
}
static void flag_ipi(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
_kernel.pending_ipi = true;
}
#endif
}
static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
@@ -495,9 +524,7 @@ static void ready_thread(struct k_thread *thread)
queue_thread(&_kernel.ready_q.runq, thread);
update_cache(0);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
}
}
@@ -799,9 +826,7 @@ void z_thread_priority_set(struct k_thread *thread, int prio)
{
bool need_sched = z_set_prio(thread, prio);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (need_sched && _current->base.sched_locked == 0U) {
z_reschedule_unlocked();
@@ -841,6 +866,7 @@ void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
z_swap(lock, key);
} else {
k_spin_unlock(lock, key);
signal_pending_ipi();
}
}
@@ -850,6 +876,7 @@ void z_reschedule_irqlock(uint32_t key)
z_swap_irqlock(key);
} else {
irq_unlock(key);
signal_pending_ipi();
}
}
@@ -883,7 +910,16 @@ void k_sched_unlock(void)
struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
return next_up();
struct k_thread *ret = next_up();
if (ret == _current) {
/* When not swapping, have to signal IPIs here. In
* the context switch case it must happen later, after
* _current gets requeued.
*/
signal_pending_ipi();
}
return ret;
#else
return _kernel.ready_q.cache;
#endif
@@ -950,6 +986,7 @@ void *z_get_next_switch_handle(void *interrupted)
new_thread->switch_handle = NULL;
}
}
signal_pending_ipi();
return ret;
#else
_current->switch_handle = interrupted;
@@ -1346,9 +1383,7 @@ void z_impl_k_wakeup(k_tid_t thread)
z_mark_thread_as_not_suspended(thread);
z_ready_thread(thread);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (!arch_is_in_isr()) {
z_reschedule_unlocked();
@@ -1535,6 +1570,9 @@ void z_thread_abort(struct k_thread *thread)
/* It's running somewhere else, flag and poke */
thread->base.thread_state |= _THREAD_ABORTING;
/* We're going to spin, so need a true synchronous IPI
* here, not deferred!
*/
#ifdef CONFIG_SCHED_IPI_SUPPORTED
arch_sched_ipi();
#endif
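
The flag_ipi()/signal_pending_ipi() pair introduced above implements a deferred, idempotent IPI: hot paths that make a remote thread runnable only set a flag, and the flag is flushed into an actual arch_sched_ipi() at the next reschedule boundary. A minimal generic sketch of that pattern follows (plain C; send_ipi() is a hypothetical stand-in for the architecture hook, not a Zephyr API):

```
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool pending_ipi;

/* Hypothetical stand-in for the architecture's IPI hook. */
extern void send_ipi(void);

/* Cheap: called wherever a thread on another CPU was just made runnable. */
static void flag_ipi(void)
{
	atomic_store(&pending_ipi, true);
}

/* Flush point: called at reschedule boundaries. Sending twice is harmless
 * because an IPI is idempotent, so no lock is needed around the
 * exchange-and-send pair.
 */
static void signal_pending_ipi(void)
{
	if (atomic_exchange(&pending_ipi, false)) {
		send_ipi();
	}
}
```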

View File

@@ -1011,7 +1011,7 @@ void z_thread_mark_switched_in(void)
#ifdef CONFIG_THREAD_RUNTIME_STATS
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
#ifdef CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS
thread->rt_stats.last_switched_in = timing_counter_get();
#else
@@ -1033,7 +1033,7 @@ void z_thread_mark_switched_out(void)
uint64_t diff;
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
if (unlikely(thread->rt_stats.last_switched_in == 0)) {
/* Has not run before */

View File

@@ -68,8 +68,14 @@ static int32_t next_timeout(void)
{
struct _timeout *to = first();
int32_t ticks_elapsed = elapsed();
int32_t ret = to == NULL ? MAX_WAIT
: CLAMP(to->dticks - ticks_elapsed, 0, MAX_WAIT);
int32_t ret;
if ((to == NULL) ||
((int64_t)(to->dticks - ticks_elapsed) > (int64_t)INT_MAX)) {
ret = MAX_WAIT;
} else {
ret = MAX(0, to->dticks - ticks_elapsed);
}
#ifdef CONFIG_TIMESLICING
if (_current_cpu->slice_ticks && _current_cpu->slice_ticks < ret) {
@@ -238,6 +244,18 @@ void sys_clock_announce(int32_t ticks)
k_spinlock_key_t key = k_spin_lock(&timeout_lock);
/* We release the lock around the callbacks below, so on SMP
* systems someone might be already running the loop. Don't
* race (which will cause parallel execution of "sequential"
* timeouts and confuse apps), just increment the tick count
* and return.
*/
if (IS_ENABLED(CONFIG_SMP) && (announce_remaining != 0)) {
announce_remaining += ticks;
k_spin_unlock(&timeout_lock, key);
return;
}
announce_remaining = ticks;
while (first() != NULL && first()->dticks <= announce_remaining) {
@@ -245,13 +263,13 @@ void sys_clock_announce(int32_t ticks)
int dt = t->dticks;
curr_tick += dt;
announce_remaining -= dt;
t->dticks = 0;
remove_timeout(t);
k_spin_unlock(&timeout_lock, key);
t->fn(t);
key = k_spin_lock(&timeout_lock);
announce_remaining -= dt;
}
if (first() != NULL) {
@@ -271,7 +289,7 @@ int64_t sys_clock_tick_get(void)
uint64_t t = 0U;
LOCKED(&timeout_lock) {
t = curr_tick + sys_clock_elapsed();
t = curr_tick + elapsed();
}
return t;
}

View File

@@ -355,26 +355,45 @@ static int submit_to_queue_locked(struct k_work *work,
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
/* Submit work to a queue but do not yield the current thread.
*
* Intended for internal use.
*
* See also submit_to_queue_locked().
*
* @param queue pointer to the work queue to which the work item is submitted
* @param work the work structure to be submitted
*
* @retval see submit_to_queue_locked()
*/
int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
__ASSERT_NO_MSG(work != NULL);
k_spinlock_key_t key = k_spin_lock(&lock);
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = submit_to_queue_locked(work, &queue);
k_spin_unlock(&lock, key);
/* If we changed the queue contents (as indicated by a positive ret)
* the queue thread may now be ready, but we missed the reschedule
* point because the lock was held. If this is being invoked by a
* preemptible thread then yield.
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = z_work_submit_to_queue(queue, work);
/* submit_to_queue_locked() won't reschedule on its own
* (really it should, otherwise this process will result in
* spurious calls to z_swap() due to the race), so do it here
* if the queue state changed.
*/
if ((ret > 0) && (k_is_preempt_thread() != 0)) {
k_yield();
if (ret > 0) {
z_reschedule_unlocked();
}
SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work, submit_to_queue, queue, work, ret);
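
For context, a typical caller of the public k_work_submit_to_queue() path looks roughly like the sketch below. The queue, handler, and function names are made up for illustration; only the k_work structures and calls are the real API:

```
#include <zephyr.h>

/* Assumes a work queue that was started elsewhere with k_work_queue_start(). */
extern struct k_work_q my_wq;

static void my_handler(struct k_work *work)
{
	/* Runs in the work queue thread, not in the submitter's context. */
}

static struct k_work my_work;

void submit_example(void)
{
	k_work_init(&my_work, my_handler);

	/* A positive return value means the submission changed the queue
	 * state; with this patch k_work_submit_to_queue() itself performs
	 * the reschedule instead of yielding unconditionally.
	 */
	(void)k_work_submit_to_queue(&my_wq, &my_work);
}
```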
@@ -586,6 +605,7 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
struct k_work *work = NULL;
k_work_handler_t handler = NULL;
k_spinlock_key_t key = k_spin_lock(&lock);
bool yield;
/* Check for and prepare any new work. */
node = sys_slist_get(&queue->pending);
@@ -644,34 +664,30 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
k_spin_unlock(&lock, key);
if (work != NULL) {
bool yield;
__ASSERT_NO_MSG(handler != NULL);
handler(work);
__ASSERT_NO_MSG(handler != NULL);
handler(work);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
}
}

View File

@@ -112,6 +112,8 @@ config APP_LINK_WITH_POSIX_SUBSYS
config EVENTFD
bool "Enable support for eventfd"
depends on !ARCH_POSIX
select POLL
default y if POSIX_API
help
Enable support for event file descriptors, eventfd. An eventfd can
be used as an event wait/notify mechanism together with POSIX calls

View File

@@ -27,7 +27,6 @@ static struct k_spinlock rt_clock_base_lock;
*/
int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
{
uint64_t elapsed_nsecs;
struct timespec base;
k_spinlock_key_t key;
@@ -48,9 +47,13 @@ int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
return -1;
}
elapsed_nsecs = k_ticks_to_ns_floor64(k_uptime_ticks());
ts->tv_sec = (int32_t) (elapsed_nsecs / NSEC_PER_SEC);
ts->tv_nsec = (int32_t) (elapsed_nsecs % NSEC_PER_SEC);
uint64_t ticks = k_uptime_ticks();
uint64_t elapsed_secs = ticks / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
uint64_t nremainder = ticks - elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC;
ts->tv_sec = (time_t) elapsed_secs;
/* For the ns part a 32-bit conversion can be used since the remainder is smaller than 1 sec. */
ts->tv_nsec = (int32_t) k_ticks_to_ns_floor32(nremainder);
ts->tv_sec += base.tv_sec;
ts->tv_nsec += base.tv_nsec;
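
The reworked conversion avoids pushing the whole uptime through a single nanosecond value: the tick count is split into whole seconds plus a sub-second remainder, and only the remainder is converted to nanoseconds, which always fits comfortably. A standalone sketch of the same arithmetic (plain C; the 32768 tick rate is illustrative and stands in for CONFIG_SYS_CLOCK_TICKS_PER_SEC):

```
#include <stdint.h>
#include <time.h>

#define TICKS_PER_SEC 32768ULL	/* illustrative tick rate */

static void ticks_to_timespec(uint64_t ticks, struct timespec *ts)
{
	uint64_t secs = ticks / TICKS_PER_SEC;
	uint64_t rem  = ticks - secs * TICKS_PER_SEC;	/* < TICKS_PER_SEC */

	ts->tv_sec  = (time_t)secs;
	/* rem is less than one second of ticks, so this product stays far
	 * below 64 bits and the division loses no precision.
	 */
	ts->tv_nsec = (long)((rem * 1000000000ULL) / TICKS_PER_SEC);
}
```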

View File

@@ -15,12 +15,10 @@
#define PTHREAD_INIT_FLAGS PTHREAD_CANCEL_ENABLE
#define PTHREAD_CANCELED ((void *) -1)
#define LOWEST_POSIX_THREAD_PRIORITY 1
PTHREAD_MUTEX_DEFINE(pthread_key_lock);
static const pthread_attr_t init_pthread_attrs = {
.priority = LOWEST_POSIX_THREAD_PRIORITY,
.priority = 0,
.stack = NULL,
.stacksize = 0,
.flags = PTHREAD_INIT_FLAGS,
@@ -54,9 +52,11 @@ static uint32_t zephyr_to_posix_priority(int32_t z_prio, int *policy)
if (z_prio < 0) {
*policy = SCHED_FIFO;
prio = -1 * (z_prio + 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_COOP_PRIORITIES);
} else {
*policy = SCHED_RR;
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio - 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_PREEMPT_PRIORITIES);
}
return prio;
@@ -68,9 +68,11 @@ static int32_t posix_to_zephyr_priority(uint32_t priority, int policy)
if (policy == SCHED_FIFO) {
/* Zephyr COOP priority starts from -1 */
__ASSERT_NO_MSG(priority < CONFIG_NUM_COOP_PRIORITIES);
prio = -1 * (priority + 1);
} else {
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority);
__ASSERT_NO_MSG(priority < CONFIG_NUM_PREEMPT_PRIORITIES);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority - 1);
}
return prio;
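
With the off-by-one corrected, the two helpers above are exact inverses of each other and the mapped values stay inside the valid POSIX range. A worked example of the mapping, assuming for illustration CONFIG_NUM_PREEMPT_PRIORITIES=15 and CONFIG_NUM_COOP_PRIORITIES=16:

```
/* SCHED_RR (preemptible):   z = CONFIG_NUM_PREEMPT_PRIORITIES - p - 1
 *   POSIX 14 (sched_get_priority_max) -> Zephyr  0 (highest preemptible)
 *   POSIX  0 (sched_get_priority_min) -> Zephyr 14 (lowest preemptible)
 *
 * SCHED_FIFO (cooperative):  z = -(p + 1)
 *   POSIX 15 (sched_get_priority_max) -> Zephyr -16 (highest cooperative)
 *   POSIX  0 (sched_get_priority_min) -> Zephyr  -1 (lowest cooperative)
 *
 * The "- 1" keeps every mapped priority within [0, N-1], which is what
 * the new asserts and the reworked sched_get_priority_max() expect.
 */
```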

View File

@@ -7,13 +7,9 @@
#include <kernel.h>
#include <posix/posix_sched.h>
static bool valid_posix_policy(int policy)
static inline bool valid_posix_policy(int policy)
{
if (policy != SCHED_FIFO && policy != SCHED_RR) {
return false;
}
return true;
return policy == SCHED_FIFO || policy == SCHED_RR;
}
/**
@@ -23,25 +19,12 @@ static bool valid_posix_policy(int policy)
*/
int sched_get_priority_min(int policy)
{
if (valid_posix_policy(policy) == false) {
if (!valid_posix_policy(policy)) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
return 0;
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return 0;
}
}
errno = EINVAL;
return -1;
return 0;
}
/**
@@ -51,25 +34,10 @@ int sched_get_priority_min(int policy)
*/
int sched_get_priority_max(int policy)
{
if (valid_posix_policy(policy) == false) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
/* Posix COOP priority starts from 0
* whereas zephyr starts from -1
*/
return (CONFIG_NUM_COOP_PRIORITIES - 1);
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED) && policy == SCHED_FIFO) {
return CONFIG_NUM_COOP_PRIORITIES - 1;
} else if (IS_ENABLED(CONFIG_PREEMPT_ENABLED) && policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES - 1;
}
errno = EINVAL;

View File

@@ -24,6 +24,7 @@ config TFM_BOARD
menuconfig BUILD_WITH_TFM
bool "Build with TF-M as the Secure Execution Environment"
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
depends on TRUSTED_EXECUTION_NONSECURE
depends on TFM_BOARD != ""
depends on ARM_TRUSTZONE_M

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns nrf5340dk_nrf5340_cpuapp_ns
nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns v2m_musca_s1_ns stm32l562e_dk_ns
bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -5,6 +5,7 @@ common:
tags: psa
platform_allow: mps2_an521_ns v2m_musca_s1_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -22,3 +23,4 @@ common:
tests:
sample.tfm.protected_storage:
tags: tfm
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns
stm32l562e_dk_ns v2m_musca_s1_ns v2m_musca_b1_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -21,6 +22,7 @@ tests:
platform_allow: mps2_an521_ns
extra_configs:
- CONFIG_TFM_BL2=n
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -3,6 +3,7 @@ common:
platform_allow: mps2_an521_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -16,5 +17,7 @@ tests:
sample.tfm.psa_protected_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_PROTECTED_STORAGE=y"
timeout: 100
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
sample.tfm.psa_internal_trusted_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_INTERNAL_TRUSTED_STORAGE=y"
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -3,6 +3,7 @@ common:
platform_allow: lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -18,3 +19,4 @@ tests:
sample.tfm.tfm_regression:
extra_args: ""
timeout: 200
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -15,3 +15,5 @@ tests:
sample.kernel.memory_protection.shared_mem:
filter: CONFIG_ARCH_HAS_USERSPACE
platform_exclude: twr_ke18f
extra_configs:
- CONFIG_TEST_HW_STACK_PROTECTION=n

View File

@@ -58,7 +58,7 @@ data_template = """
"""
library_data_template = """
*{0}:*(.data .data.*)
*{0}:*(.data .data.* .sdata .sdata.*)
"""
bss_template = """
@@ -67,7 +67,7 @@ bss_template = """
"""
library_bss_template = """
*{0}:*(.bss .bss.* COMMON COMMON.*)
*{0}:*(.bss .bss.* .sbss .sbss.* COMMON COMMON.*)
"""
footer_template = """

View File

@@ -55,8 +55,8 @@ const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT] = {
};
"""
list_template = """
/* auto-generated by gen_syscalls.py, don't edit */
list_template = """/* auto-generated by gen_syscalls.py, don't edit */
#ifndef ZEPHYR_SYSCALL_LIST_H
#define ZEPHYR_SYSCALL_LIST_H
@@ -82,17 +82,6 @@ syscall_template = """
#include <linker/sections.h>
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
#pragma GCC diagnostic push
#endif
#ifdef __GNUC__
#pragma GCC diagnostic ignored "-Wstrict-aliasing"
#if !defined(__XCC__)
#pragma GCC diagnostic ignored "-Warray-bounds"
#endif
#endif
#ifdef __cplusplus
extern "C" {
#endif
@@ -103,10 +92,6 @@ extern "C" {
}
#endif
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
#pragma GCC diagnostic pop
#endif
#endif
#endif /* include guard */
"""
@@ -153,25 +138,13 @@ def need_split(argtype):
# Note: "lo" and "hi" are named in little endian conventions,
# but it doesn't matter as long as they are consistently
# generated.
def union_decl(type):
return "union { struct { uintptr_t lo, hi; } split; %s val; }" % type
def union_decl(type, split):
middle = "struct { uintptr_t lo, hi; } split" if split else "uintptr_t x"
return "union { %s; %s val; }" % (middle, type)
def wrapper_defs(func_name, func_type, args):
ret64 = need_split(func_type)
mrsh_args = [] # List of rvalue expressions for the marshalled invocation
split_args = []
nsplit = 0
for argtype, argname in args:
if need_split(argtype):
split_args.append((argtype, argname))
mrsh_args.append("parm%d.split.lo" % nsplit)
mrsh_args.append("parm%d.split.hi" % nsplit)
nsplit += 1
else:
mrsh_args.append("*(uintptr_t *)&" + argname)
if ret64:
mrsh_args.append("(uintptr_t)&ret64")
decl_arglist = ", ".join([" ".join(argrec) for argrec in args]) or "void"
@@ -184,10 +157,24 @@ def wrapper_defs(func_name, func_type, args):
wrap += ("\t" + "uint64_t ret64;\n") if ret64 else ""
wrap += "\t" + "if (z_syscall_trap()) {\n"
for parmnum, rec in enumerate(split_args):
(argtype, argname) = rec
wrap += "\t\t%s parm%d;\n" % (union_decl(argtype), parmnum)
wrap += "\t\t" + "parm%d.val = %s;\n" % (parmnum, argname)
valist_args = []
for argnum, (argtype, argname) in enumerate(args):
split = need_split(argtype)
wrap += "\t\t%s parm%d" % (union_decl(argtype, split), argnum)
if argtype != "va_list":
wrap += " = { .val = %s };\n" % argname
else:
# va_list objects are ... peculiar.
wrap += ";\n" + "\t\t" + "va_copy(parm%d.val, %s);\n" % (argnum, argname)
valist_args.append("parm%d.val" % argnum)
if split:
mrsh_args.append("parm%d.split.lo" % argnum)
mrsh_args.append("parm%d.split.hi" % argnum)
else:
mrsh_args.append("parm%d.x" % argnum)
if ret64:
mrsh_args.append("(uintptr_t)&ret64")
if len(mrsh_args) > 6:
wrap += "\t\t" + "uintptr_t more[] = {\n"
@@ -200,21 +187,23 @@ def wrapper_defs(func_name, func_type, args):
% (len(mrsh_args),
", ".join(mrsh_args + [syscall_id])))
# Coverity does not understand syscall mechanism
# and will already complain when any function argument
# is not of exact size as uintptr_t. So tell Coverity
# to ignore this particular rule here.
wrap += "\t\t/* coverity[OVERRUN] */\n"
if ret64:
wrap += "\t\t" + "(void)%s;\n" % invoke
wrap += "\t\t" + "return (%s)ret64;\n" % func_type
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return (%s) ret64;\n" % func_type
elif func_type == "void":
wrap += "\t\t" + "%s;\n" % invoke
wrap += "\t\t" + "return;\n"
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return;\n"
elif valist_args:
invoke = "\t\t" + "%s retval = %s;\n" % (func_type, invoke)
retcode = "\t\t" + "return retval;\n"
else:
wrap += "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
invoke = "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
retcode = ""
wrap += invoke
for argname in valist_args:
wrap += "\t\t" + "va_end(%s);\n" % argname
wrap += retcode
wrap += "\t" + "}\n"
wrap += "#endif\n"
@@ -244,16 +233,11 @@ def marshall_defs(func_name, func_type, args):
mrsh_name = "z_mrsh_" + func_name
nmrsh = 0 # number of marshalled uintptr_t parameter
vrfy_parms = [] # list of (arg_num, mrsh_or_parm_num, bool_is_split)
split_parms = [] # list of a (arg_num, mrsh_num) for each split
for i, (argtype, _) in enumerate(args):
if need_split(argtype):
vrfy_parms.append((i, len(split_parms), True))
split_parms.append((i, nmrsh))
nmrsh += 2
else:
vrfy_parms.append((i, nmrsh, False))
nmrsh += 1
vrfy_parms = [] # list of (argtype, bool_is_split)
for (argtype, _) in args:
split = need_split(argtype)
vrfy_parms.append((argtype, split))
nmrsh += 2 if split else 1
# Final argument for a 64 bit return value?
if need_split(func_type):
@@ -275,25 +259,22 @@ def marshall_defs(func_name, func_type, args):
if nmrsh > 6:
mrsh += ("\tZ_OOPS(Z_SYSCALL_MEMORY_READ(more, "
+ str(nmrsh - 6) + " * sizeof(uintptr_t)));\n")
+ str(nmrsh - 5) + " * sizeof(uintptr_t)));\n")
for i, split_rec in enumerate(split_parms):
arg_num, mrsh_num = split_rec
arg_type = args[arg_num][0]
mrsh += "\t%s parm%d;\n" % (union_decl(arg_type), i)
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(mrsh_num,
nmrsh))
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(mrsh_num + 1,
nmrsh))
# Finally, invoke the verify function
out_args = []
for i, argn, is_split in vrfy_parms:
if is_split:
out_args.append("parm%d.val" % argn)
argnum = 0
for i, (argtype, split) in enumerate(vrfy_parms):
mrsh += "\t%s parm%d;\n" % (union_decl(argtype, split), i)
if split:
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
else:
out_args.append("*(%s*)&%s" % (args[i][0], mrsh_rval(argn, nmrsh)))
mrsh += "\t" + "parm%d.x = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
vrfy_call = "z_vrfy_%s(%s)\n" % (func_name, ", ".join(out_args))
# Finally, invoke the verify function
out_args = ", ".join(["parm%d.val" % i for i in range(len(args))])
vrfy_call = "z_vrfy_%s(%s)" % (func_name, out_args)
if func_type == "void":
mrsh += "\t" + "%s;\n" % vrfy_call
@@ -436,19 +417,10 @@ def main():
mrsh_fn = os.path.join(args.base_output, fn + "_mrsh.c")
with open(mrsh_fn, "w") as fp:
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic push\n")
fp.write("#endif\n")
fp.write("#ifdef __GNUC__\n")
fp.write("#pragma GCC diagnostic ignored \"-Wstrict-aliasing\"\n")
fp.write("#endif\n")
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n\n")
fp.write(mrsh_includes[fn] + "\n")
fp.write("\n")
fp.write(mrsh_defs[fn] + "\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic pop\n")
fp.write("#endif\n")
if __name__ == "__main__":
main()
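
The core of union_decl() is the split-union view of a wide argument: the same storage is read either as one 64-bit value or as two register-sized words, which is how a uint64_t crosses a 32-bit syscall boundary. A standalone illustration of just that technique (plain C, not the generator's actual output):

```
#include <stdint.h>
#include <stdio.h>

/* One storage location, two views: a single 64-bit value (.val) or two
 * register-sized words (.split.lo / .split.hi).
 */
union arg64 {
	struct { uintptr_t lo, hi; } split;
	uint64_t val;
};

int main(void)
{
	union arg64 out = { .val = 0x1122334455667788ULL };
	union arg64 in;

	/* Caller side: pass out.split.lo and out.split.hi as two words. */
	in.split.lo = out.split.lo;
	in.split.hi = out.split.hi;

	/* Callee side: reassemble and use the full value. */
	printf("reassembled: 0x%llx\n", (unsigned long long)in.val);
	return 0;
}
```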

View File

@@ -2503,13 +2503,15 @@ static void le_read_buffer_size_complete(struct net_buf *buf)
BT_DBG("status 0x%02x", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!bt_dev.le.acl_mtu) {
uint16_t acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!acl_mtu || !rp->le_max_num) {
return;
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num,
bt_dev.le.acl_mtu);
bt_dev.le.acl_mtu = acl_mtu;
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->le_max_num, rp->le_max_num);
#endif /* CONFIG_BT_CONN */
@@ -2523,25 +2525,26 @@ static void read_buffer_size_v2_complete(struct net_buf *buf)
BT_DBG("status %u", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (!bt_dev.le.acl_mtu) {
return;
uint16_t acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (acl_mtu && rp->acl_max_num) {
bt_dev.le.acl_mtu = acl_mtu;
LOG_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num,
bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
#endif /* CONFIG_BT_CONN */
bt_dev.le.iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!bt_dev.le.iso_mtu) {
uint16_t iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!iso_mtu || !rp->iso_max_num) {
BT_ERR("ISO buffer size not set");
return;
}
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num,
bt_dev.le.iso_mtu);
bt_dev.le.iso_mtu = iso_mtu;
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num, bt_dev.le.iso_mtu);
k_sem_init(&bt_dev.le.iso_pkts, rp->iso_max_num, rp->iso_max_num);
#endif /* CONFIG_BT_ISO */
@@ -2810,6 +2813,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
read_buffer_size_v2_complete(rsp);
net_buf_unref(rsp);
@@ -2823,6 +2827,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
@@ -2866,7 +2871,9 @@ static int le_init(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
}

View File

@@ -28,6 +28,11 @@ config DEBUG_COREDUMP_BACKEND_FLASH_PARTITION
Core dump is saved to a flash partition with DTS alias
"coredump-partition".
config DEBUG_COREDUMP_BACKEND_OTHER
bool "Backend subsystem for coredump defined out of tree"
help
Core dump is done via a custom mechanism defined out of tree
endchoice
choice

View File

@@ -513,7 +513,7 @@ static int coredump_flash_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_flash_partition = {
struct coredump_backend_api coredump_backend_flash_partition = {
.start = coredump_flash_backend_start,
.end = coredump_flash_backend_end,
.buffer_output = coredump_flash_backend_buffer_output,

View File

@@ -116,7 +116,7 @@ static int coredump_logging_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_logging = {
struct coredump_backend_api coredump_backend_logging = {
.start = coredump_logging_backend_start,
.end = coredump_logging_backend_end,
.buffer_output = coredump_logging_backend_buffer_output,

View File

@@ -14,13 +14,17 @@
#include "coredump_internal.h"
#if defined(CONFIG_DEBUG_COREDUMP_BACKEND_LOGGING)
extern struct z_coredump_backend_api z_coredump_backend_logging;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_logging;
extern struct coredump_backend_api coredump_backend_logging;
static struct coredump_backend_api
*backend_api = &coredump_backend_logging;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_FLASH_PARTITION)
extern struct z_coredump_backend_api z_coredump_backend_flash_partition;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_flash_partition;
extern struct coredump_backend_api coredump_backend_flash_partition;
static struct coredump_backend_api
*backend_api = &coredump_backend_flash_partition;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_OTHER)
extern struct coredump_backend_api coredump_backend_other;
static struct coredump_backend_api
*backend_api = &coredump_backend_other;
#else
#error "Need to select a coredump backend"
#endif

View File

@@ -53,31 +53,6 @@ void z_coredump_start(void);
*/
void z_coredump_end(void);
typedef void (*z_coredump_backend_start_t)(void);
typedef void (*z_coredump_backend_end_t)(void);
typedef void (*z_coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct z_coredump_backend_api {
/* Signal to backend of the start of coredump. */
z_coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
z_coredump_backend_end_t end;
/* Raw buffer output */
z_coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
/**
* @endcond
*/

View File

@@ -4214,6 +4214,74 @@ wait_reply:
#endif
}
static bool is_pkt_part_of_slab(const struct k_mem_slab *slab, const char *ptr)
{
size_t last_offset = (slab->num_blocks - 1) * slab->block_size;
size_t ptr_offset;
/* Check if pointer fits into slab buffer area. */
if ((ptr < slab->buffer) || (ptr > slab->buffer + last_offset)) {
return false;
}
/* Check if pointer offset is correct. */
ptr_offset = ptr - slab->buffer;
if (ptr_offset % slab->block_size != 0) {
return false;
}
return true;
}
struct ctx_pkt_slab_info {
const void *ptr;
bool pkt_source_found;
};
static void check_context_pool(struct net_context *context, void *user_data)
{
#if defined(CONFIG_NET_CONTEXT_NET_PKT_POOL)
if (!net_context_is_used(context)) {
return;
}
if (context->tx_slab) {
struct ctx_pkt_slab_info *info = user_data;
struct k_mem_slab *slab = context->tx_slab();
if (is_pkt_part_of_slab(slab, info->ptr)) {
info->pkt_source_found = true;
}
}
#endif /* CONFIG_NET_CONTEXT_NET_PKT_POOL */
}
static bool is_pkt_ptr_valid(const void *ptr)
{
struct k_mem_slab *rx, *tx;
net_pkt_get_info(&rx, &tx, NULL, NULL);
if (is_pkt_part_of_slab(rx, ptr) || is_pkt_part_of_slab(tx, ptr)) {
return true;
}
if (IS_ENABLED(CONFIG_NET_CONTEXT_NET_PKT_POOL)) {
struct ctx_pkt_slab_info info;
info.ptr = ptr;
info.pkt_source_found = false;
net_context_foreach(check_context_pool, &info);
if (info.pkt_source_found) {
return true;
}
}
return false;
}
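
The membership test reduces to two checks: the pointer must fall inside the slab's buffer area, and its offset from the start of that buffer must be an exact multiple of the block size. A tiny standalone illustration with made-up sizes:

```
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustration only: a "slab" of 4 blocks, 64 bytes each. */
static bool ptr_is_block_start(const char *buffer, size_t block_size,
			       size_t num_blocks, const char *ptr)
{
	size_t last_offset = (num_blocks - 1) * block_size;

	if (ptr < buffer || ptr > buffer + last_offset) {
		return false;	/* outside the slab buffer area */
	}
	return ((size_t)(ptr - buffer) % block_size) == 0;
}

int main(void)
{
	static char buffer[4 * 64];

	printf("%d\n", ptr_is_block_start(buffer, 64, 4, buffer + 64));	/* 1: block start */
	printf("%d\n", ptr_is_block_start(buffer, 64, 4, buffer + 80));	/* 0: mid-block */
	return 0;
}
```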
static struct net_pkt *get_net_pkt(const char *ptr_str)
{
uint8_t buf[sizeof(intptr_t)];
@@ -4289,6 +4357,14 @@ static int cmd_net_pkt(const struct shell *shell, size_t argc, char *argv[])
if (!pkt) {
PR_ERROR("Invalid ptr value (%s). "
"Example: 0x01020304\n", argv[1]);
return -ENOEXEC;
}
if (!is_pkt_ptr_valid(pkt)) {
PR_ERROR("Pointer is not recognized as net_pkt (%s).\n",
argv[1]);
return -ENOEXEC;
}

View File

@@ -38,6 +38,7 @@
#define CONFIG_MP_NUM_CPUS 1
#define CONFIG_SYS_CLOCK_TICKS_PER_SEC 100
#define CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC 10000000
#define CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS 365
#define ARCH_STACK_PTR_ALIGN 8
/* FIXME: Properly integrate with Zephyr's arch specific code */
#define CONFIG_X86 1

View File

@@ -72,7 +72,7 @@
#include "mbedtls/memory_buffer_alloc.h"
#endif
static int test_snprintf(size_t n, const char ref_buf[10], int ref_ret)
static int test_snprintf(size_t n, const char *ref_buf, int ref_ret)
{
int ret;
char buf[10] = "xxxxxxxxx";

View File

@@ -79,8 +79,13 @@ static int test_task(uint32_t chan_id, uint32_t blen)
TC_PRINT("Starting the transfer\n");
(void)memset(rx_data, 0, sizeof(rx_data));
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data;
#endif
if (dma_config(dma, chan_id, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");

View File

@@ -87,8 +87,13 @@ static int test_task(int minor, int major)
(void)memset(rx_data2, 0, sizeof(rx_data2));
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data2;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data2;
#endif
if (dma_config(dma, TEST_DMA_CHANNEL_1, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");
@@ -104,8 +109,13 @@ static int test_task(int minor, int major)
dma_cfg.linked_channel = TEST_DMA_CHANNEL_1;
dma_block_cfg.block_size = sizeof(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data;
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data;
#endif
if (dma_config(dma, TEST_DMA_CHANNEL_0, &dma_cfg)) {
TC_PRINT("ERROR: transfer\n");

View File

@@ -57,8 +57,13 @@ static void test_transfer(const struct device *dev, uint32_t id)
transfer_count++;
if (transfer_count < TRANSFER_LOOPS) {
dma_block_cfg.block_size = strlen(tx_data);
#ifdef CONFIG_DMA_64BIT
dma_block_cfg.source_address = (uint64_t)tx_data;
dma_block_cfg.dest_address = (uint64_t)rx_data[transfer_count];
#else
dma_block_cfg.source_address = (uint32_t)tx_data;
dma_block_cfg.dest_address = (uint32_t)rx_data[transfer_count];
#endif
zassert_false(dma_config(dev, id, &dma_cfg),
"Not able to config transfer %d",

View File

@@ -4,3 +4,4 @@ CONFIG_FPU=y
CONFIG_FPU_SHARING=y
CONFIG_CBPRINTF_NANO=y
CONFIG_MAIN_STACK_SIZE=1024
CONFIG_MP_NUM_CPUS=1

View File

@@ -35,7 +35,7 @@ struct reply_packet {
struct timeout_order_data {
void *link_in_lifo;
struct k_lifo *klifo;
k_ticks_t timeout;
int32_t timeout;
int32_t timeout_order;
int32_t q_order;
};
@@ -43,23 +43,23 @@ struct timeout_order_data {
static struct k_lifo lifo_timeout[2];
struct timeout_order_data timeout_order_data[] = {
{0, &lifo_timeout[0], 20, 2, 0},
{0, &lifo_timeout[0], 40, 4, 1},
{0, &lifo_timeout[0], 0, 0, 2},
{0, &lifo_timeout[0], 10, 1, 3},
{0, &lifo_timeout[0], 30, 3, 4},
{0, &lifo_timeout[0], 200, 2, 0},
{0, &lifo_timeout[0], 400, 4, 1},
{0, &lifo_timeout[0], 0, 0, 2},
{0, &lifo_timeout[0], 100, 1, 3},
{0, &lifo_timeout[0], 300, 3, 4},
};
struct timeout_order_data timeout_order_data_mult_lifo[] = {
{0, &lifo_timeout[1], 0, 0, 0},
{0, &lifo_timeout[0], 30, 3, 1},
{0, &lifo_timeout[0], 50, 5, 2},
{0, &lifo_timeout[1], 80, 8, 3},
{0, &lifo_timeout[1], 70, 7, 4},
{0, &lifo_timeout[0], 10, 1, 5},
{0, &lifo_timeout[0], 60, 6, 6},
{0, &lifo_timeout[0], 20, 2, 7},
{0, &lifo_timeout[1], 40, 4, 8},
{0, &lifo_timeout[1], 0, 0, 0},
{0, &lifo_timeout[0], 300, 3, 1},
{0, &lifo_timeout[0], 500, 5, 2},
{0, &lifo_timeout[1], 800, 8, 3},
{0, &lifo_timeout[1], 700, 7, 4},
{0, &lifo_timeout[0], 100, 1, 5},
{0, &lifo_timeout[0], 600, 6, 6},
{0, &lifo_timeout[0], 200, 2, 7},
{0, &lifo_timeout[1], 400, 4, 8},
};
#define NUM_SCRATCH_LIFO_PACKETS 20
@@ -110,9 +110,7 @@ static bool is_timeout_in_range(uint32_t start_time, uint32_t timeout)
uint32_t stop_time, diff;
stop_time = k_cycle_get_32();
diff = (uint32_t)k_cyc_to_ns_floor64(stop_time -
start_time) / NSEC_PER_USEC;
diff = diff / USEC_PER_MSEC;
diff = k_cyc_to_ms_floor32(stop_time - start_time);
return timeout <= diff;
}
@@ -266,7 +264,7 @@ static void test_timeout_empty_lifo(void)
uint32_t start_time, timeout;
timeout = 10U;
timeout = 100U;
start_time = k_cycle_get_32();

View File

@@ -80,14 +80,23 @@ void test_kobject_access_grant_error(void)
*/
void test_kobject_access_grant_error_user(void)
{
struct k_msgq *m;
struct k_queue *q;
m = k_object_alloc(K_OBJ_MSGQ);
k_object_access_grant(m, k_current_get());
/*
* Avoid using K_OBJ_PIPE, K_OBJ_MSGQ, or K_OBJ_STACK because
* k_object_alloc() returns an uninitialized kernel object, and those
* object types can carry additional memory allocations that need to be
* freed. That becomes a problem during the fault handler clean up: while
* freeing the uninitialized object, the random data inside it can make
* the clean up attempt to free bogus pointers, which triggers a
* secondary fault and fails the test.
*/
q = k_object_alloc(K_OBJ_QUEUE);
k_object_access_grant(q, k_current_get());
set_fault_valid(true);
/* a K_ERR_KERNEL_OOPS expected */
k_object_access_grant(m, NULL);
k_object_access_grant(q, NULL);
}
/**
@@ -1264,10 +1273,13 @@ void test_alloc_kobjects(void)
zassert_not_null(t, "alloc obj (0x%lx)\n", (uintptr_t)t);
p = k_object_alloc(K_OBJ_PIPE);
zassert_not_null(p, "alloc obj (0x%lx)\n", (uintptr_t)p);
k_pipe_init(p, NULL, 0);
s = k_object_alloc(K_OBJ_STACK);
zassert_not_null(s, "alloc obj (0x%lx)\n", (uintptr_t)s);
k_stack_init(s, NULL, 0);
m = k_object_alloc(K_OBJ_MSGQ);
zassert_not_null(m, "alloc obj (0x%lx)\n", (uintptr_t)m);
k_msgq_init(m, NULL, 0, 0);
q = k_object_alloc(K_OBJ_QUEUE);
zassert_not_null(q, "alloc obj (0x%lx)\n", (uintptr_t)q);

View File

@@ -131,7 +131,8 @@ static inline void set_fault_valid(bool valid)
#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
#if (defined(CONFIG_X86_64) || defined(CONFIG_ARM64) || \
(defined(CONFIG_RISCV) && defined(CONFIG_64BIT)))
#define TEST_HEAP_SIZE (2 << CONFIG_MAX_THREAD_BYTES) * 1024
#define MAX_OBJ 512
#else

View File

@@ -6,6 +6,7 @@ tests:
# To get clean results we need to disable this test until the bug is fixed and fix
# gets propagated to new Zephyr-SDK.
platform_exclude: twr_ke18f qemu_arc_hs qemu_arc_em
extra_args: CONFIG_TEST_HW_STACK_PROTECTION=n
tags: kernel security userspace ignore_faults
kernel.memory_protection.gap_filling.arc:
filter: CONFIG_ARCH_HAS_USERSPACE and CONFIG_MPU_REQUIRES_NON_OVERLAPPING_REGIONS

View File

@@ -902,6 +902,7 @@ void test_syscall_context(void)
check_syscall_context();
}
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
static void tls_leakage_user_part(void *p1, void *p2, void *p3)
{
char *tls_area = p1;
@@ -911,9 +912,11 @@ static void tls_leakage_user_part(void *p1, void *p2, void *p3)
"TLS data leakage to user mode");
}
}
#endif
void test_tls_leakage(void)
{
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* Tests two assertions:
*
* - That a user thread has full access to its TLS area
@@ -926,15 +929,21 @@ void test_tls_leakage(void)
k_thread_user_mode_enter(tls_leakage_user_part,
_current->userspace_local_data, NULL, NULL);
#else
ztest_test_skip();
#endif
}
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
void tls_entry(void *p1, void *p2, void *p3)
{
printk("tls_entry\n");
}
#endif
void test_tls_pointer(void)
{
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
k_thread_create(&test_thread, test_stack, STACKSIZE, tls_entry,
NULL, NULL, NULL, 1, K_USER, K_FOREVER);
@@ -958,6 +967,9 @@ void test_tls_pointer(void)
printk("tls area out of bounds\n");
ztest_test_fail();
}
#else
ztest_test_skip();
#endif
}

View File

@@ -1,6 +1,7 @@
tests:
kernel.memory_protection.userspace:
filter: CONFIG_ARCH_HAS_USERSPACE
extra_args: CONFIG_TEST_HW_STACK_PROTECTION=n
tags: kernel security userspace ignore_faults
kernel.memory_protection.userspace.gap_filling.arc:
filter: CONFIG_ARCH_HAS_USERSPACE and CONFIG_MPU_REQUIRES_NON_OVERLAPPING_REGIONS

View File

@@ -14,8 +14,6 @@ extern void test_poll_multi(void);
extern void test_poll_threadstate(void);
extern void test_poll_grant_access(void);
extern void test_poll_fail_grant_access(void);
extern void test_poll_lower_prio(void);
extern void test_condition_met_type_err(void);
extern void test_detect_is_polling(void);
#ifdef CONFIG_USERSPACE
extern void test_k_poll_user_num_err(void);
@@ -74,10 +72,8 @@ void test_main(void)
ztest_1cpu_unit_test(test_poll_cancel_main_low_prio),
ztest_1cpu_unit_test(test_poll_cancel_main_high_prio),
ztest_unit_test(test_poll_multi),
ztest_1cpu_unit_test(test_poll_lower_prio),
ztest_1cpu_unit_test(test_poll_threadstate),
ztest_1cpu_unit_test(test_detect_is_polling),
ztest_1cpu_unit_test(test_condition_met_type_err),
ztest_user_unit_test(test_k_poll_user_num_err),
ztest_user_unit_test(test_k_poll_user_mem_err),
ztest_user_unit_test(test_k_poll_user_type_sem_err),

View File

@@ -10,107 +10,6 @@
static struct k_poll_signal signal_err;
#define STACK_SIZE (1024 + CONFIG_TEST_EXTRA_STACKSIZE)
static struct k_thread test_thread1;
static struct k_thread test_thread2;
K_THREAD_STACK_DEFINE(test_stack1, STACK_SIZE);
K_THREAD_STACK_DEFINE(test_stack2, STACK_SIZE);
/**
* @brief Test API k_poll with error events type in kernel mode
*
* @details Define a poll event and initialize by k_poll_event_init(), and using
* API k_poll with error events type as parameter check if a error will be met.
*
* @see k_poll()
*
* @ingroup kernel_poll_tests
*/
void test_condition_met_type_err(void)
{
struct k_poll_event event;
struct k_fifo fifo;
ztest_set_assert_valid(true);
k_fifo_init(&fifo);
k_poll_event_init(&event, K_POLL_TYPE_DATA_AVAILABLE, K_POLL_MODE_NOTIFY_ONLY, &fifo);
event.type = 5;
k_poll(&event, 1, K_NO_WAIT);
}
/* verify multiple pollers */
static K_SEM_DEFINE(multi_sem, 0, 1);
static K_SEM_DEFINE(multi_ready_sem, 1, 1);
static K_SEM_DEFINE(multi_reply, 0, 1);
static void thread_entry(void *p1, void *p2, void *p3)
{
struct k_poll_event event;
k_poll_event_init(&event, K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY, &multi_sem);
(void)k_poll(&event, 1, K_FOREVER);
k_sem_take(&multi_sem, K_FOREVER);
k_sem_give(&multi_reply);
}
/**
* @brief Test polling of multiple events by lower priority thread
*
* @details
* - Test the multiple semaphore events as waitable events in poll.
*
* @ingroup kernel_poll_tests
*
* @see K_POLL_EVENT_INITIALIZER(), k_poll(), k_poll_event_init()
*/
void test_poll_lower_prio(void)
{
int old_prio = k_thread_priority_get(k_current_get());
struct k_thread *p = &test_thread1;
const int main_low_prio = 10;
const int low_prio_than_main = 11;
int rc;
struct k_poll_event events[] = {
K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&multi_sem),
K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&multi_ready_sem),
};
k_thread_priority_set(k_current_get(), main_low_prio);
k_thread_create(&test_thread1, test_stack1,
K_THREAD_STACK_SIZEOF(test_stack1),
thread_entry, 0, 0, 0, low_prio_than_main,
K_INHERIT_PERMS, K_NO_WAIT);
k_thread_create(&test_thread2, test_stack2,
K_THREAD_STACK_SIZEOF(test_stack2),
thread_entry, 0, 0, 0, low_prio_than_main,
K_INHERIT_PERMS, K_NO_WAIT);
/* Set up the thread timeout value to check if what happened if dticks is invalid */
p->base.timeout.dticks = _EXPIRED;
/* Delay for some actions above */
k_sleep(K_MSEC(250));
(void)k_poll(events, ARRAY_SIZE(events), K_SECONDS(1));
k_sem_give(&multi_sem);
k_sem_give(&multi_sem);
rc = k_sem_take(&multi_reply, K_FOREVER);
zassert_equal(rc, 0, "");
/* Reset the initialized state */
k_thread_priority_set(k_current_get(), old_prio);
k_sleep(K_MSEC(250));
}
#ifdef CONFIG_USERSPACE
/**

View File

@@ -0,0 +1,4 @@
# Copyright (c) 2022 Carlo Caione <ccaione@baylibre.com>
# SPDX-License-Identifier: Apache-2.0
CONFIG_MP_NUM_CPUS=4

View File

@@ -0,0 +1,19 @@
/* Copyright 2022 Carlo Caione <ccaione@baylibre.com>
* SPDX-License-Identifier: Apache-2.0
*/
/ {
cpus {
cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <2>;
};
cpu@3 {
device_type = "cpu";
compatible = "arm,cortex-a53";
reg = <3>;
};
};
};

View File

@@ -1,3 +1,4 @@
CONFIG_ZTEST=y
CONFIG_SMP=y
CONFIG_TRACE_SCHED_IPI=y
CONFIG_POLL=y

View File

@@ -22,6 +22,7 @@
#define EQUAL_PRIORITY 1
#define TIME_SLICE_MS 500
#define THREAD_DELAY 1
#define SLEEP_MS_LONG 15000
struct k_thread t2;
K_THREAD_STACK_DEFINE(t2_stack, T2_STACK_SIZE);
@@ -52,6 +53,9 @@ static K_THREAD_STACK_ARRAY_DEFINE(tstack, THREADS_NUM, STACK_SIZE);
static volatile int thread_started[THREADS_NUM - 1];
static struct k_poll_signal tsignal[THREADS_NUM];
static struct k_poll_event tevent[THREADS_NUM];
static int curr_cpu(void)
{
unsigned int k = arch_irq_lock();
@@ -141,6 +145,7 @@ void test_smp_coop_threads(void)
}
k_thread_abort(tid);
k_thread_join(tid, K_FOREVER);
zassert_true(ok, "SMP test failed");
}
@@ -181,6 +186,7 @@ void test_cpu_id_threads(void)
k_sem_take(&cpuid_sema, K_FOREVER);
k_thread_abort(tid);
k_thread_join(tid, K_FOREVER);
}
static void thread_entry(void *p1, void *p2, void *p3)
@@ -242,6 +248,10 @@ static void abort_threads(int num)
for (int i = 0; i < num; i++) {
k_thread_abort(tinfo[i].tid);
}
for (int i = 0; i < num; i++) {
k_thread_join(tinfo[i].tid, K_FOREVER);
}
}
static void cleanup_resources(void)
@@ -548,6 +558,7 @@ void test_get_cpu(void)
k_busy_wait(DELAY_US);
k_thread_abort(thread_id);
k_thread_join(thread_id, K_FOREVER);
}
#ifdef CONFIG_TRACE_SCHED_IPI
@@ -948,6 +959,67 @@ void test_inc_concurrency(void)
"total count %d is wrong(M)", global_cnt);
}
/**
* @brief Torture test for context switching code
*
* @ingroup kernel_smp_tests
*
* @details Leverage the polling API to stress test the context switching code.
* This test will hammer all the CPUs with thread swapping requests.
*/
static void process_events(void *arg0, void *arg1, void *arg2)
{
uintptr_t id = (uintptr_t) arg0;
while (1) {
k_poll(&tevent[id], 1, K_FOREVER);
if (tevent[id].signal->result != 0x55) {
ztest_test_fail();
}
tevent[id].signal->signaled = 0;
tevent[id].state = K_POLL_STATE_NOT_READY;
k_poll_signal_reset(&tsignal[id]);
}
}
static void signal_raise(void *arg0, void *arg1, void *arg2)
{
while (1) {
for (uintptr_t i = 0; i < THREADS_NUM; i++) {
k_poll_signal_raise(&tsignal[i], 0x55);
}
}
}
void test_smp_switch_torture(void)
{
for (uintptr_t i = 0; i < THREADS_NUM; i++) {
k_poll_signal_init(&tsignal[i]);
k_poll_event_init(&tevent[i], K_POLL_TYPE_SIGNAL,
K_POLL_MODE_NOTIFY_ONLY, &tsignal[i]);
k_thread_create(&tthread[i], tstack[i], STACK_SIZE,
(k_thread_entry_t) process_events,
(void *) i, NULL, NULL, K_PRIO_PREEMPT(i + 1),
K_INHERIT_PERMS, K_NO_WAIT);
}
k_thread_create(&t2, t2_stack, T2_STACK_SIZE, signal_raise,
NULL, NULL, NULL, K_PRIO_COOP(2), 0, K_NO_WAIT);
k_sleep(K_MSEC(SLEEP_MS_LONG));
k_thread_abort(&t2);
k_thread_join(&t2, K_FOREVER);
for (uintptr_t i = 0; i < THREADS_NUM; i++) {
k_thread_abort(&tthread[i]);
k_thread_join(&tthread[i], K_FOREVER);
}
}
void test_main(void)
{
/* Sleep a bit to guarantee that both CPUs enter an idle
@@ -969,7 +1041,8 @@ void test_main(void)
ztest_unit_test(test_fatal_on_smp),
ztest_unit_test(test_workq_on_smp),
ztest_unit_test(test_smp_release_global_lock),
ztest_unit_test(test_inc_concurrency)
ztest_unit_test(test_inc_concurrency),
ztest_unit_test(test_smp_switch_torture)
);
ztest_run_test_suite(smp);
}

View File

@@ -39,6 +39,7 @@ extern void test_nanosleep_1_1(void);
extern void test_nanosleep_1_1001(void);
extern void test_sleep(void);
extern void test_usleep(void);
extern void test_sched_policy(void);
void test_main(void)
{
@@ -73,7 +74,8 @@ void test_main(void)
ztest_unit_test(test_nanosleep_1_1001),
ztest_unit_test(test_posix_pthread_create_negative),
ztest_unit_test(test_sleep),
ztest_unit_test(test_usleep)
ztest_unit_test(test_usleep),
ztest_unit_test(test_sched_policy)
);
ztest_run_test_suite(posix_apis);
}

View File

@@ -577,3 +577,132 @@ void test_pthread_descriptor_leak(void)
zassert_ok(pthread_join(pthread1, &unused), "unable to join thread %zu", i);
}
}
void test_sched_policy(void)
{
/*
* TODO:
* 1. assert that _POSIX_PRIORITY_SCHEDULING is defined
* 2. if _POSIX_SPORADIC_SERVER or _POSIX_THREAD_SPORADIC_SERVER are defined,
* also check SCHED_SPORADIC
* 3. SCHED_OTHER is mandatory (but may be equivalent to SCHED_FIFO or SCHED_RR,
* and is implementation defined)
*/
int pmin;
int pmax;
pthread_t th;
pthread_attr_t attr;
struct sched_param param;
static const int policies[] = {
SCHED_FIFO,
SCHED_RR,
SCHED_INVALID,
};
static const char *const policy_names[] = {
"SCHED_FIFO",
"SCHED_RR",
"SCHED_INVALID",
};
static const bool policy_enabled[] = {
IS_ENABLED(CONFIG_COOP_ENABLED),
IS_ENABLED(CONFIG_PREEMPT_ENABLED),
false,
};
static int nprio[] = {
CONFIG_NUM_COOP_PRIORITIES,
CONFIG_NUM_PREEMPT_PRIORITIES,
42,
};
const char *const prios[] = {"pmin", "pmax"};
BUILD_ASSERT(!(SCHED_INVALID == SCHED_FIFO || SCHED_INVALID == SCHED_RR),
"SCHED_INVALID is itself invalid");
for (int policy = 0; policy < ARRAY_SIZE(policies); ++policy) {
if (!policy_enabled[policy]) {
/* test degenerate cases */
errno = 0;
zassert_equal(-1, sched_get_priority_min(policies[policy]),
"expected sched_get_priority_min(%s) to fail",
policy_names[policy]);
zassert_equal(EINVAL, errno, "sched_get_priority_min(%s) did not set errno",
policy_names[policy]);
errno = 0;
zassert_equal(-1, sched_get_priority_max(policies[policy]),
"expected sched_get_priority_max(%s) to fail",
policy_names[policy]);
zassert_equal(EINVAL, errno, "sched_get_priority_max(%s) did not set errno",
policy_names[policy]);
continue;
}
/* get pmin and pmax for policies[policy] */
for (int i = 0; i < 2; ++i) {
errno = 0;
if (i == 0) {
pmin = sched_get_priority_min(policies[policy]);
param.sched_priority = pmin;
} else {
pmax = sched_get_priority_max(policies[policy]);
param.sched_priority = pmax;
}
zassert_not_equal(-1, param.sched_priority,
"sched_get_priority_%s(%s) failed: %d",
i == 0 ? "min" : "max", policy_names[policy], errno);
zassert_equal(0, errno, "sched_get_priority_%s(%s) set errno to %s",
i == 0 ? "min" : "max", policy_names[policy], errno);
}
/*
* IEEE 1003.1-2008 Section 2.8.4
* conforming implementations should provide a range of at least 32 priorities
*
* Note: we relax this requirement
*/
zassert_true(pmax > pmin, "pmax (%d) <= pmin (%d)", pmax, pmin,
"%s min/max inconsistency: pmin: %d pmax: %d", policy_names[policy],
pmin, pmax);
/*
* Getting into the weeds a bit (i.e. whitebox testing), Zephyr
* cooperative threads use [-CONFIG_NUM_COOP_PRIORITIES,-1] and
* preemptive threads use [0, CONFIG_NUM_PREEMPT_PRIORITIES - 1],
* where the more negative thread has the higher priority. Since we
* cannot map those directly (a return value of -1 indicates error),
* we simply map those to the positive space.
*/
zassert_equal(pmin, 0, "unexpected pmin for %s", policy_names[policy]);
zassert_equal(pmax, nprio[policy] - 1, "unexpected pmax for %s",
policy_names[policy]); /* test happy paths */
for (int i = 0; i < 2; ++i) {
/* create threads with min and max priority levels */
zassert_equal(0, pthread_attr_init(&attr),
"pthread_attr_init() failed for %s (%d) of %s", prios[i],
param.sched_priority, policy_names[policy]);
zassert_equal(0, pthread_attr_setschedpolicy(&attr, policies[policy]),
"pthread_attr_setschedpolicy() failed for %s (%d) of %s",
prios[i], param.sched_priority, policy_names[policy]);
zassert_equal(0, pthread_attr_setschedparam(&attr, &param),
"pthread_attr_setschedparam() failed for %s (%d) of %s",
prios[i], param.sched_priority, policy_names[policy]);
zassert_equal(0, pthread_attr_setstack(&attr, &stack_e[0][0], STACKS),
"pthread_attr_setstack() failed for %s (%d) of %s", prios[i],
param.sched_priority, policy_names[policy]);
zassert_equal(0, pthread_create(&th, &attr, create_thread1, NULL),
"pthread_create() failed for %s (%d) of %s", prios[i],
param.sched_priority, policy_names[policy]);
zassert_equal(0, pthread_join(th, NULL),
"pthread_join() failed for %s (%d) of %s", prios[i],
param.sched_priority, policy_names[policy]);
}
}
}

View File

@@ -7,3 +7,4 @@ CONFIG_POSIX_API=y
CONFIG_POSIX_FS=y
CONFIG_ZTEST=y
CONFIG_MAIN_STACK_SIZE=4096
CONFIG_EVENTFD=n

View File

@@ -6,3 +6,8 @@ find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(hello_world)
target_sources(app PRIVATE src/main.c)
zephyr_library_sources_ifdef(
CONFIG_DEBUG_COREDUMP_BACKEND_OTHER
src/coredump_backend_empty.c
)

View File

@@ -0,0 +1,6 @@
CONFIG_ZTEST=y
CONFIG_DEBUG_COREDUMP=y
CONFIG_DEBUG_COREDUMP_BACKEND_OTHER=y
CONFIG_MP_NUM_CPUS=1
CONFIG_DEBUG_COREDUMP_MEMORY_DUMP_MIN=y
CONFIG_DEBUG_COREDUMP_MEMORY_DUMP_LINKER_RAM=n

View File

@@ -0,0 +1,83 @@
/*
* Copyright Meta Platforms, Inc. and its affiliates.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <errno.h>
#include <debug/coredump.h>
static int error;
static bool is_valid;
static void coredump_empty_backend_start(void)
{
/* Reset error, is_valid */
error = 0;
is_valid = false;
}
static void coredump_empty_backend_end(void)
{
is_valid = true;
}
static void coredump_empty_backend_buffer_output(uint8_t *buf, size_t buflen)
{
/* no-op */
}
static int coredump_empty_backend_query(enum coredump_query_id query_id,
void *arg)
{
int ret;
switch (query_id) {
case COREDUMP_QUERY_GET_ERROR:
ret = error;
break;
case COREDUMP_QUERY_HAS_STORED_DUMP:
ret = 0;
if (is_valid) {
ret = 1;
}
break;
default:
ret = -ENOTSUP;
break;
}
return ret;
}
static int coredump_empty_backend_cmd(enum coredump_cmd_id cmd_id,
void *arg)
{
int ret;
switch (cmd_id) {
case COREDUMP_CMD_CLEAR_ERROR:
error = 0;
ret = 0;
break;
case COREDUMP_CMD_VERIFY_STORED_DUMP:
ret = 0;
if (is_valid) {
ret = 1;
}
break;
default:
ret = -ENOTSUP;
break;
}
return ret;
}
struct coredump_backend_api coredump_backend_other = {
.start = coredump_empty_backend_start,
.end = coredump_empty_backend_end,
.buffer_output = coredump_empty_backend_buffer_output,
.query = coredump_empty_backend_query,
.cmd = coredump_empty_backend_cmd,
};

View File

@@ -15,3 +15,8 @@ tests:
filter: CONFIG_ARCH_SUPPORTS_COREDUMP
extra_args: CONF_FILE=prj_flash_partition.conf
platform_allow: qemu_x86
coredump.backends.other:
tags: ignore_faults ignore_qemu_crash
filter: CONFIG_ARCH_SUPPORTS_COREDUMP
extra_args: CONF_FILE=prj_backend_other.conf
platform_exclude: acrn_ehl_crb

View File

@@ -0,0 +1,6 @@
# Copyright 2022 Meta
# SPDX-License-Identifier: Apache-2.0
project(time_units)
set(SOURCES main.c overflow.c)
find_package(ZephyrUnittest REQUIRED HINTS $ENV{ZEPHYR_BASE})

View File

@@ -0,0 +1,15 @@
/*
* Copyright 2022 Meta
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <ztest.h>
extern void test_z_tmcvt_for_overflow(void);
void test_main(void)
{
ztest_test_suite(test_time_units, ztest_unit_test(test_z_tmcvt_for_overflow));
ztest_run_test_suite(test_time_units);
}

View File

@@ -0,0 +1,59 @@
/*
* Copyright 2022 Meta
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <inttypes.h>
#include <limits.h>
#include <stdbool.h>
#include <stdint.h>
#include <ztest.h>
#include <sys/time_units.h>
/**
* @brief Test @ref z_tmcvt for robustness against intermediate value overflow.
*
* With input
* ```
* [t0, t1, t2] = [
* UINT64_MAX / to_hz - 1,
* UINT64_MAX / to_hz,
* UINT64_MAX / to_hz + 1,
* ]
* ```
*
* passed through @ref z_tmcvt, we expect a linear sequence:
* ```
* [
* 562949953369140,
* 562949953399658,
* 562949953430175,
* ]
* ```
*
* If an overflow occurs, we see something like the following:
* ```
* [
* 562949953369140,
* 562949953399658,
* 8863,
* ]
* ```
*/
void test_z_tmcvt_for_overflow(void)
{
const uint32_t from_hz = 32768UL;
const uint32_t to_hz = 1000000000UL;
zassert_equal(562949953369140ULL,
z_tmcvt(UINT64_MAX / to_hz - 1, from_hz, to_hz, true, false, false, false),
NULL);
zassert_equal(562949953399658ULL,
z_tmcvt(UINT64_MAX / to_hz, from_hz, to_hz, true, false, false, false),
NULL);
zassert_equal(562949953430175ULL,
z_tmcvt(UINT64_MAX / to_hz + 1, from_hz, to_hz, true, false, false, false),
NULL);
}

View File

@@ -0,0 +1 @@
CONFIG_ZTEST=y

View File

@@ -0,0 +1,5 @@
common:
tags: time_units
type: unit
tests:
utilities.time_units.z_tmcvt: {}

View File

@@ -153,7 +153,7 @@ manifest:
revision: 8e303c264fc21c2116dc612658003a22e933124d
path: modules/lib/lz4
- name: mbedtls
revision: 5765cb7f75a9973ae9232d438e361a9d7bbc49e7
revision: 066cfe13469a8c8bbab6048a3d98e87c7668dd97
path: modules/crypto/mbedtls
groups:
- crypto
@@ -210,11 +210,13 @@ manifest:
path: modules/debug/TraceRecorder
groups:
- debug
- name: trusted-firmware-m
path: modules/tee/tfm
revision: c74be3890c9d975976fde1b1a3b2f5742bec34c0
groups:
- tee
# TF-M is not compatible with mbedTLS 2.28. For more context:
# https://github.com/zephyrproject-rtos/zephyr/pull/54084
#- name: trusted-firmware-m
# path: modules/tee/tfm
# revision: c74be3890c9d975976fde1b1a3b2f5742bec34c0
# groups:
# - tee
self:
path: zephyr