Compare commits

...

110 Commits

Author SHA1 Message Date
Christopher Friedt
6ed0a3557f release: Zephyr 2.7.6
Set version to 2.7.6

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2024-03-01 17:28:52 -05:00
Christopher Friedt
a405f8b6b0 release: add v2.7.6 release notes
List bugfixes and CVEs in v2.7.6 release notes.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2024-03-01 14:24:35 -05:00
Benjamin Cabé
3683fe625b doc: fix broken Sphinx by updating to Sphinx 5.0.2
Sphinx 4.x is way past EOL and due to it not pinning its dependencies,
it's effectively broken. See
https://github.com/sphinx-doc/sphinx/issues/11890 The recommended fix,
although not ideal in the context of an LTS branch, is to update to
Sphinx 5.0.2, which should have minimal impact on how the rendered
documentation looks.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2024-03-01 08:29:26 -05:00
Flavio Ceolin
e9fcfa14e6 syscall: Fix static analysis complaints
Since K_SYSCALL_MEMORY can be called with signed/unsigned size types, if
we check whether size >= 0, static analysis will complain about it when
size is unsigned.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-01 00:54:04 -05:00
Flavio Ceolin
1d16757282 userspace: Additional checks in K_SYSCALL_MEMORY
This macro needed additional checks before invoking
arch_buffer_validate.

- size cannot be less than 0. Some functions invoke this macro
  using a signed type, which will be promoted to unsigned when
  invoking arch_buffer_validate, so we need an early check.
- We need to check for possible overflow, since a malicious user
  application could use a negative number that would be promoted
  to a big value, causing an integer overflow when adding it
  to the buffer address and leading to invalid checks (see the
  sketch below).
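For illustration, a minimal sketch of both checks in plain C, with
hypothetical names; the actual macro differs:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper mirroring the two checks described above. */
static inline bool syscall_memory_precheck(const void *buf, size_t size)
{
	if ((intptr_t)size < 0) {
		/* A negative signed size would be promoted to a huge
		 * unsigned value; reject it before it reaches
		 * arch_buffer_validate(). */
		return false;
	}
	if ((uintptr_t)buf + size < (uintptr_t)buf) {
		/* buf + size wrapped past the top of the address
		 * space, so any range check would be meaningless. */
		return false;
	}
	return true;
}
```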

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-01 00:54:04 -05:00
Peter Mitsis
eeefd07f68 include: util: Add Z_DETECT_POINTER_OVERFLOW()
The Z_DETECT_POINTER_OVERFLOW() macro is intended to detect whether
or not a buffer spans a region of memory that goes beyond the
highest possible address (thereby overflowing the pointer).
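A sketch of how such a wraparound check can be expressed (an assumed
form, not necessarily the verbatim macro):

```c
#include <stdint.h>

/* True if [addr, addr + size) would extend past UINTPTR_MAX. */
#define DETECT_POINTER_OVERFLOW(addr, size)          \
	(((size) != 0) &&                            \
	 ((UINTPTR_MAX - (uintptr_t)(addr)) <        \
	  ((uintptr_t)(size) - 1U)))
```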

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-03-01 00:54:04 -05:00
Flavio Ceolin
d013132f55 fs: fuse: Avoid possible buffer overflow
Checks path's size before copying it to local variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 3267bdc4b7)
2024-02-29 12:07:00 -05:00
Benedikt Schmidt
25398f36da shell: modules: do not use k_thread_foreach with shell callbacks
Always use k_thread_foreach_unlocked with callbacks which print
something out to the shell, as they might call arch_irq_unlock.
Fixes #66660.

Signed-off-by: Benedikt Schmidt <benedikt.schmidt@embedded-solutions.at>
(cherry picked from commit 4c731f27c6)
2024-02-27 13:29:34 -05:00
Maxim Adelman
d3c2a2457d kernel shell, stacks shell commands: iterate unlocked on SMP
call k_thread_foreach_unlocked to avoid assertions caused
by calling shell_print while holding a global lock

Signed-off-by: Maxim Adelman <imax@meta.com>
(cherry picked from commit ecf2cb5932)
2024-02-27 13:29:34 -05:00
Adrien Ricciardi
10086910f5 drivers: i2c: i2c_dw: Fixed integer overflow in i2c_dw_data_ask().
The controller can implement a reception FIFO as deep as 256 bytes.
However, the computation made by the driver code to determine how many
bytes can be asked is stored in a signed 8-bit variable called rx_empty.

If the reception FIFO depth is greater than or equal to 128 bytes and
the FIFO is currently empty, the rx_empty value will be 128 (or more),
which is interpreted as a negative value since the variable is signed.

Thus, the later code that checks whether the FIFO is full runs when it
should not, exiting the i2c_dw_data_ask() function too early.

This hangs the controller in an infinite loop of interrupt storm because
the interrupt flags are never cleared.

Storing rx_empty in a signed 32-bit variable instead of an 8-bit one
solves the issue and complies with the controller hardware
specification of a maximum FIFO depth of 256 bytes.

It has been agreed with upstream maintainers to change the type of the
variables tx_empty, rx_empty, cnt, rx_buffer_depth and tx_buffer_depth
to plain int, because plain int is handled most efficiently by the
CPU. Using 8-bit or 16-bit variables served no purpose here.
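For illustration, a standalone demonstration of the truncation (values
and names are hypothetical):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	unsigned int fifo_depth = 128; /* RX FIFO depth >= 128 bytes */
	unsigned int fifo_level = 0;   /* FIFO currently empty */

	/* 128 does not fit in int8_t; on common targets it becomes
	 * -128, so a "FIFO full" test such as (rx_empty <= 0)
	 * misfires. */
	int8_t rx_empty8 = (int8_t)(fifo_depth - fifo_level);
	int rx_empty = (int)(fifo_depth - fifo_level); /* stays 128 */

	printf("int8_t: %d, int: %d\n", rx_empty8, rx_empty);
	return 0;
}
```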

Signed-off-by: Adrien Ricciardi <aricciardi@baylibre.com>
(cherry picked from commit 4824e405cf)
2024-01-16 16:13:25 -05:00
Jukka Rissanen
01ad11252c tests: net: ipv6: Adjust the source address of test pkt
We would drop the received packet if the source address is our
address, so tweak the test and make the source address different.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 155e2149f2)
2024-01-09 12:55:32 -05:00
Jukka Rissanen
652b7f6f83 net: ipv6: Check that received src address is not mine
Drop received packet if the source address is the same as
the device address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 8d3d48e057)
2024-01-09 12:55:32 -05:00
Jukka Rissanen
32748c69b8 net: ipv4: Drop packet if source address is my address
If we receive a packet where the source address is our own
address, then we should drop it.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 19392a6d2b)
2024-01-03 13:13:51 -05:00
Jukka Rissanen
65104bc3cc net: ipv4: Check localhost for incoming packet
If we receive a packet from a non-localhost interface, then
drop it if either the source or destination address is a localhost
address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 6d41e68352)
2024-01-03 13:13:51 -05:00
Keith Packard
ce4c30fc21 toolchain: Replace GCC_VERSION, CLANG_VERSION and BUILD_ASSERT macros
GCC_VERSION is defined in a few modules, and those headers are often
included first, so replace the one used in zephyr with
TOOLCHAIN_GCC_VERSION. Do the same with CLANG_VERSION, replacing it with
TOOLCHAIN_CLANG_VERSION.

BUILD_ASSERT is also defined in include/toolchain/common.h, which might
get included before gcc.h. We want to use the gcc-specific one instead
of the general one.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit c58c76ef0a)
2023-12-19 04:31:03 -05:00
Nikolay Agishev
5d382fa560 toolchain: Move extra warning options to toolchain abstraction
Move the extra warning options from the generic twister script into
compiler-dependent config files.
The ARCMWDT compiler doesn't support extra warning options such as
"-Wl,--fatal-warnings". To avoid build failures, the
"disable_warnings_as_errors" flag would have to be passed to twister,
which allows all warning messages through and makes automated testing
useless.

Signed-off-by: Nikolay Agishev <agishev@synopsys.com>
(cherry picked from commit 0dec4cf927)
2023-12-19 04:31:03 -05:00
Jamie McCrae
e677cfd61d cmake: modules: dts: Fix board revision 0 overlay
Fixes an issue whereby, when a board revision is 0 and the overlay
file exists, the overlay would not be included.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2023-12-01 08:28:59 -05:00
Sven Ginka
db1ed25fad drivers: sam dma xdmac: implemented dma device get_status()
The SAM XDMAC driver did not yet implement the get_status()
function; this commit implements it. Fixes #62003

Signed-off-by: Sven Ginka <sven.ginka@gmail.com>
(cherry picked from commit bc695c6df5)
2023-11-16 20:31:29 -05:00
Joshua Crawford
e6f70e97c8 drivers: flash: spi_nor: select largest valid erase operation
The spi_nor erase op selection was based on the alignment of the end of
the region to be erased. This prevented larger erase operations from
being selected in many cases.

Closes #60904

Signed-off-by: Joshua Crawford <joshua.crawford@levno.com>
(cherry picked from commit ea2dd9fc65)
2023-11-16 20:31:16 -05:00
Abram Early
f89298cf0e drivers: can: mcan: Move RF0L and RF1L to line 1
The code is designed to handle RF0L and RF1L on
line 1, but they were being sent to line 0. Because
they weren't handled there, the interrupts were never
cleared, which locked up the chip.

Signed-off-by: Abram Early <abram.early@gmail.com>
2023-11-05 07:49:19 -05:00
Henrik Brix Andersen
2e98b1fd8c drivers: can: be consistent in filter_id checks when removing rx filters
Change the CAN controller driver implementations for the
can_remove_rx_filter() API call to be consistent in their
validation of the supplied filter_id.

Fixes: #64398

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2023-10-26 09:34:14 -04:00
Christopher Friedt
13072b4c7b logging: log_core: correct timeout of -1 ms to K_FOREVER
Many releases ago, specifying to block indefinitely in the log
processing thread would do just that.

However, a subtle bug was introduced such that specifying -1
for `CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS` would have the
exact opposite effect of what was intended.

As per Kconfig, a value of -1 should translate to a timeout of
`K_FOREVER`. However, conversion via `K_MSEC(-1)` results in
a `k_timeout_t` that is equal to `K_NO_WAIT` rather than the
intent which is `K_FOREVER`.

Add a dedicated check to ensure that a value of -1 is
correctly interpreted as `K_FOREVER` in `log_core.c`.
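A minimal sketch of the intended mapping (the helper name is
illustrative):

```c
#include <kernel.h> /* Zephyr 2.7.x include path */

/* -1 must be special-cased: K_MSEC(-1) yields a timeout equal to
 * K_NO_WAIT rather than K_FOREVER. */
static inline k_timeout_t log_block_timeout(int32_t timeout_ms)
{
	return (timeout_ms == -1) ? K_FOREVER : K_MSEC(timeout_ms);
}
```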

For reference, the blocking feature was described in #15196,
added in #16194, and it would appear that the regression
happened in c5f2cdef09.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 137097f5c3)
2023-10-25 22:39:44 -04:00
Jukka Rissanen
43c936a5dd net: socket: mgmt: Check buf size in recvfrom()
Return EMSGSIZE if trying to copy too much data into
user supplied buffer.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 0a16d5c7c3)
(cherry picked from commit b13b4eb38a50ee02d599332eb99752e814340487)
2023-10-12 21:20:39 -04:00
Robert Lubos
acc7cfaadf drivers: ieee802154_nrf5: Add payload length check on TX
In case the upper layer does not follow the convention and the net_pkt
provided to the nRF 15.4 driver has a payload larger than the maximum
payload size of an individual 15.4 frame, the driver would end up with
a buffer overflow.

Fix this by adding an extra payload_len check before attempting to copy
the payload to the internal buffer.
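A sketch of the guard's shape; the buffer name, bound, and errno value
are assumptions, not the driver's exact code:

```c
#include <errno.h>
#include <string.h>

#define FRAME_PAYLOAD_MAX 127 /* illustrative 15.4 PSDU bound */

static unsigned char tx_frame_buf[FRAME_PAYLOAD_MAX];

/* Refuse over-long payloads instead of overflowing the buffer. */
static int tx_copy_payload(const void *payload, size_t payload_len)
{
	if (payload_len > sizeof(tx_frame_buf)) {
		return -EMSGSIZE;
	}
	memcpy(tx_frame_buf, payload, payload_len);
	return 0;
}
```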

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
2023-10-02 14:55:05 -04:00
Fabio Baltieri
f9a56bcfd4 can: rework the table lookup code in can_dlc_to_bytes
Rework the can_dlc_to_bytes table lookup code in a way that allows the
compiler to bound the resulting output and fixes the build
warning:

zephyr/drivers/can/can_nxp_s32_canxl.c:757:9: warning:
'__builtin___memcpy_chk' forming offset [16, 71] is out of the bounds
[0, 16] of object 'frame' with type 'struct can_frame' [-Warray-bounds]
 757 | memcpy(frame->data, msg_data.data, can_dlc_to_bytes(frame->dlc));

where the compiler detects that frame->data is 8 bytes long but
can_dlc_to_bytes could return more than that.
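A sketch of a clamp-based lookup that makes the 64-byte bound evident
to the compiler (an assumed shape, not the verbatim Zephyr code):

```c
#include <stdint.h>

static inline uint8_t dlc_to_bytes(uint8_t dlc)
{
	static const uint8_t table[] = {0, 1, 2, 3, 4, 5, 6, 7, 8,
					12, 16, 20, 24, 32, 48, 64};

	/* Clamping the index proves the result is bounded by 64. */
	return table[dlc < sizeof(table) ? dlc : sizeof(table) - 1U];
}
```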

Can be reproduced with:

west build -p -b s32z270dc2_rtu1_r52 \
	-T samples/net/sockets/can/sample.net.sockets.can.one_socket

Suggested-by: Martin Jäger <martin@libre.solar>
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2023-09-19 02:32:36 -04:00
Carles Cufi
4a3b59d47b Bluetooth: controller: Check minimum sizes of adv PDUs
While the maximum sizes were already correctly checked by the code, the
minimum sizes of the PDUs were not. This meant that PDUs smaller than
the minimum required length (typically 6 bytes for AdvA) were
incorrectly forwarded up to the Host.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
(cherry picked from commit 3f0d7012a6)
2023-09-06 18:02:35 -04:00
Thomas Stranger
414d6c91a1 drivers: can: stm32: correct timing_max parameters
The timing_max parameters defined in the stm32 bxcan driver don't match the
register description in the reference manuals.
- sjw has only 2 bits, representing 1 to 4 tq.
- phase_seg1 and phase_seg2 max is one tq higher.

I have checked the following reference manuals and all match:
- RM0090: STM32F405, F415, F407, F417, F427, F437 AND F429
- RM0008: STM32F101, F102, F103, F105, F107 advanced arm-based mcus
- RM0351, RM0394: all STM32L4
- RM0091: all STM32F0 with CAN support

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit cec279b5b6)
2023-08-25 09:44:40 -04:00
Henrik Brix Andersen
2daec8c70c canbus: isotp: convert SF length check from ASSERT to runtime check
Convert the ISO-TP SF length check in send_sf() from __ASSERT() to a
runtime check.
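Conceptually (the names and errno value are assumptions):

```c
#include <errno.h>
#include <stddef.h>

/* Hedged sketch: the assertion becomes an error return, so the
 * check still fires in release builds. */
static int send_sf_check(size_t len, size_t sf_max_len)
{
	if (len > sf_max_len) {
		return -ENOSPC; /* was: __ASSERT(len <= sf_max_len, ...) */
	}
	return 0;
}
```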

Fixes: #61501

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 1b3d1e01de)
2023-08-25 09:44:26 -04:00
Grant Ramsay
229ca396aa canbus: isotp: Fix context buffer memory leaks
Ensure context buffers are freed when errors occur.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2023-08-08 19:01:38 -04:00
Andrzej Głąbek
06ae95e45c drivers: nrf_rtc_timer: Always set an initial timeout
In the tickless kernel mode, the nRF system timer does not schedule
any timeout on initialization. This can lead to a situation where,
for certain applications, no timeout is scheduled at all (for example,
when an application does not create any threads, exits `main()`
without any sleeping, and only handles interrupts) and in consequence
`sys_clock_announce()` is never called (the nRF system timer calls
this function only from the timeout handler). This in turn causes
uptime to be reported correctly only until the RTC used as the system
timer overflows (which happens after 512 seconds).

Fix this by setting a maximum allowed timeout when initializing
the nRF system timer for the tickless kernel mode.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2023-06-13 12:06:52 +02:00
Théo Battrel
f72d8ffe80 Net: Increase NET_BUF_USER_DATA_SIZE value to 8
Increase `NET_BUF_USER_DATA_SIZE` value to 8 if `BT_CONN` is enabled.
This is necessary because one of the backported commits adds one struct
member into `struct tx_meta`, and that will be stored in the buffer
user data.

On main, this Kconfig option has been deprecated, which is why this
change was not present in the original patch set.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
2023-06-06 12:42:08 -04:00
Pavel Vasilyev
f2eeeda113 bluetooth: mesh: Remove illegal use of NET_BUF_FRAG in friend.c
This commit removes illegal use of NET_BUF_FRAG in friend.c, which is an
internal flag.

Now `struct bt_mesh_friend_seg` keeps a pointer to the first received
segment of a segmented message. The remaining segments are added as
fragments using the net_buf API. The Friend Queue keeps only the head
of the fragments. When one segment (the current head of the fragments)
is removed from the Friend Queue, the next segment is added to the
queue. The head always has 2 references: one from allocation, another
from being added as the fragments head.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit 5d059117fd)
2023-06-06 12:42:08 -04:00
Carles Cufi
916d9ad13b net: buf: Simplify fragment handling
This patch reworks how fragments are handled in the net_buf
infrastructure.

In particular, it removes the union around the node and frags members
in the main net_buf structure. This is done so that both can be used at
the same time, at a cost of 4 bytes per net_buf instance.
This implies that the layout of net_buf instances changes whenever
being inserted into a queue (fifo or lifo) or a linked list (slist).

Until now, this is what happened when enqueueing a net_buf with frags
in a queue or linked list:

1.1 Before enqueueing:

 +--------+      +--------+      +--------+
 |#1  node|\     |#2  node|\     |#3  node|\
 |        | \    |        | \    |        | \
 | frags  |------| frags  |------| frags  |------NULL
 +--------+      +--------+      +--------+

net_buf #1 has 2 fragments, net_bufs #2 and #3. Both the node and frags
pointers (they are the same, since they are unioned) point to the next
fragment.

1.2 After enqueueing:

 +--------+     +--------+     +--------+     +--------+     +--------+
 |q/slist |-----|#1  node|-----|#2  node|-----|#3  node|-----|q/slist |
 |node    |     | *flag  | /   | *flag  | /   |        | /   |node    |
 |        |     | frags  |/    | frags  |/    | frags  |/    |        |
 +--------+     +--------+     +--------+     +--------+     +--------+

When enqueuing a net_buf (in this case #1) that contains fragments, the
current net_buf implementation actually enqueues all the fragments (in
this case #2 and #3) as actual queue/slist items, since node and frags
are one and the same in memory. This makes the enqueuing operation
expensive and makes it impossible to dequeue atomically. The `*flag`
notation here means that the `flags` member has been set to
`NET_BUF_FRAGS` in order to be able to reconstruct the frags pointers
when dequeuing.

After this patch, the layout changes considerably:

2.1 Before enqueueing:

 +--------+       +--------+       +--------+
 |#1  node|--NULL |#2  node|--NULL |#3  node|--NULL
 |        |       |        |       |        |
 | frags  |-------| frags  |-------| frags  |------NULL
 +--------+       +--------+       +--------+

This is very similar to 1.1, except that now node and frags are
different pointers, so node is just set to NULL.

2.2 After enqueueing:

 +--------+      +--------+      +--------+
 |q/slist |------|#1  node|------|q/slist |
 |node    |      |        |      |node    |
 |        |      | frags  |      |        |
 +--------+      +--------+      +--------+
                     |           +--------+       +--------+
                     |           |#2  node|--NULL |#3  node|--NULL
                     |           |        |       |        |
                     +-----------| frags  |-------| frags  |------NULL
                                 +--------+       +--------+

When enqueuing net_buf #1, we now only enqueue that very item, instead
of enqueuing the frags as well, since node and frags are now separate
pointers. This simplifies the operation and makes it atomic.
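In struct terms, a simplified sketch of the change (fields elided,
pointer types simplified):

```c
/* Before: node and frags shared storage, so queuing a buffer also
 * threaded its fragments into the queue. */
struct net_buf_before {
	union {
		void *node;                   /* queue/list linkage */
		struct net_buf_before *frags; /* fragment chain */
	};
	/* ... data, flags ... */
};

/* After: separate members (one extra pointer per buffer), so
 * enqueueing touches only 'node' and the fragment chain is intact. */
struct net_buf_after {
	void *node;                  /* queue/list linkage */
	struct net_buf_after *frags; /* fragment chain */
	/* ... data, flags ... */
};
```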

Resolves #52718.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
(cherry picked from commit 3d306c181f)
2023-06-06 12:42:08 -04:00
Jonathan Rico
a90a3cc493 Bluetooth: host: update l2cap stress test
Do these things:
- use the proper macros for reserving the SDU header
- make every L2CAP PDU fragment into 3 ACL packets
- set tx data and verify rx matches the pattern
- measure segment pool usage

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 8bc094610c)
2023-06-06 12:42:08 -04:00
Jonathan Rico
fb5845a072 Bluetooth: host: copy fragment data at the last minute
Only copy the data from the 'parent' buf into the segment buf if we
successfully acquired the HCI semaphore (i.e. there are available
controller buffers).

This way we avoid pulling and pushing back the data in the case where
there aren't any controller buffers available.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit ca51439cd1)
2023-06-06 12:42:08 -04:00
Jonathan Rico
c0d6fad199 Bluetooth: host: poll on CTLR buffers instead of host TX queue
When there are no buffers, it doesn't make sense to repeatedly try to
send the host TX queue.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit ef19c64f1b)
2023-06-06 12:42:08 -04:00
Jonathan Rico
60ae4f9351 Bluetooth: host: make HCI fragmentation async
Make the ACL fragmentation asynchronous, freeing the TX thread for
sending commands when the ACL buffers are full.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit c3e5fabbf1)
2023-06-06 12:42:08 -04:00
Jonathan Rico
6ebce3643e Bluetooth: host: l2cap: add alloc_seg callback
This callback allows use-cases where the SDU is much larger than the
l2cap MPS. The stack will then try to allocate using this callback if
specified, and fall back to using the buffer's pool (the previous
behavior).

This way one can define two buffer pools, one with a very large buffer
size, and one with a buffer size >= MPS, and the stack will allocate
from that instead.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 77e1a9dcad)
2023-06-06 12:42:08 -04:00
Jonathan Rico
80ab098d64 Bluetooth: host: l2cap: workaround SDU deadlock
See the code comments.

SDUs might enter a state where they will be blocked forever; as a
workaround, we nudge them when another SDU has been sent.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 8e207fefad)
2023-06-06 12:42:08 -04:00
Jonathan Rico
d00c98d585 Bluetooth: host: l2cap: don't send too many credits
There was an edge case where we were sending back too many credits; add
a check so we can't do that anymore.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 3c1ca93fe8)
2023-06-06 12:42:08 -04:00
Jonathan Rico
f290106952 Bluetooth: host: l2cap: release segment in all cases
See code comment

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 1c8fe67a52)
2023-06-06 12:42:08 -04:00
Jonathan Rico
2e0e5e27e8 Bluetooth: host: add l2cap stress-test
This test more or less reproduces #34600.

It has a central that connects to multiple peripherals, opens one l2cap
CoC channel per connection, and transmits a few SDUs largely exceeding
the MPS of the channel.

In this commit, the test doesn't pass, but when it passes (after the
subsequent commits), error and warning messages are expected from the
stack, as this is not the happy path.

We can later debate whether these particular error messages should
be downgraded to debug.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 7a6872d837)
2023-06-06 12:42:08 -04:00
Christopher Friedt
030fa9da45 release: Zephyr 2.7.5
Set version to 2.7.5

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:08:42 -04:00
Christopher Friedt
43370b89c3 release: minor corrections to security release notes
* remove reference to other github project issue
* complete incomplete sentence

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:07:16 -04:00
Flavio Ceolin
15fa28896a release: mbedTLS: Add vulnerabilities info
Add information about vulnerabilities fixed since mbedTLS 2.26.0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Flavio Ceolin
ce3eb90a83 release: security: Add rel notes for vulnerabilities
Add information about vulnerabilities fixed in 2.7.5 release.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Chris Friedt
ca24cd6c2d release: update v2.7.5 release notes
* add bugfixes to the v2.7.5 release

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-27 05:32:14 -04:00
Flavio Ceolin
4fc4dc7b84 release: mbedTLS: Add rel notes for mbedTLS
Release notes for the latest mbedTLS update.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-26 11:10:27 +09:00
Krzysztof Chruscinski
fb24b62dc5 logging: Fix user space crash when runtime filtering is on
Logging module data (including filters) is not accessible from
user space. The macro for creating logs was creating a local
variable with the filters before checking whether we are in the
user context. The variable was not used in that case, but creating
it violated access rights, which resulted in a failure.

Remove the variable creation and use the filters directly in the
if clause, but only after checking the condition that we are not
in the user context. With this approach the data is accessed only
in kernel mode.

Cherry-picked with modifications from
4ee59e2cdb.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-05-18 00:27:35 +08:00
Chris Friedt
60e7a97328 release: create outline for v2.7.5 release notes
Create a template for v2.7.5 release notes.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-18 00:07:05 +08:00
Flavio Ceolin
a1aa463783 boards: mps2_an521_ns: Remove simulation capability
This board requires TF-M which is not supported by default in the
current Zephyr release. Just remove the simulation capability to
avoid CI failures.

See: https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
29c1e08cf7 west: tf-m: Remove tf-m from Zephyr LTS
Zephyr's mbedTLS was updated to 2.28.x, which is an LTS release and
addresses several vulnerabilities affecting 2.26 (the version that
used to be used on the Zephyr LTS).

Unfortunately this mbedTLS version is not compatible with TF-M, and
backporting the mbedTLS fixes was not a viable solution. Due to this
problem we are removing the TF-M module from Zephyr's LTS. One can
still add it to this manifest if needed, but it is no longer
"officially" supported.

More information in:
https://github.com/zephyrproject-rtos/zephyr/issues/56071
https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
190f09df52 samples: tfm: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Only build / run these TF-M samples when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
a166290f1a tfm: boards: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Enable BUILD_WITH_TFM only when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
21e0870106 crypto: Bump mbedTLS to 2.28.3
Bump mbedTLS to version 2.28.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
ebe3651f3d tests: mbedtls: Fix GCC warning about test_snprintf
Fix errors like:

inlined from ‘test_mbedtls’ at
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:172:6:
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:96:17: error:
‘test_snprintf’ reading 10 bytes from a region of size 1
[-Werror=stringop-overread]
   96 |                 test_snprintf(1, "", -1) != 0 ||
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~

These occur with GCC >= 11 because `ret_buf` in some calls is a
shorter literal.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Torsten Rasmussen
1b7c720c7f cmake: prefix local version of return variable
Fixes: #55490
Follow-up: #53124

Prefix local version of the return variable before calling
`zephyr_check_compiler_flag_hardcoded()`.

This ensures that there will never be any naming collision between named
return argument and the variable name used in later functions when
PARENT_SCOPE is used.

Issue #55490 provided a description of a situation where the double
de-referencing was not working correctly.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 599886a9d3)
2023-05-13 19:23:55 -04:00
Stephanos Ioannidis
58af1b51bd ci: Use organisation-level AWS secrets
This commit updates the CI workflows to use the `zephyrproject-rtos`
organisation-level AWS secrets instead of the repository-level secrets.

Using organisation-level secrets allows more centralised management of
the access keys used throughout the GitHub Actions CI infrastructure.

Note that the `AWS_*_ACCESS_KEY_ID` is now stored in plaintext as a
variable instead of a secret because it is equivalent to username and
needs to be identifiable for management and audit purposes.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-05-12 03:30:43 +09:00
Kumar Gala
70f2a4951a tests: posix: fs: disable CONFIG_EVENTFD
The test doesn't use eventfd so we can disable it to save some space.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit 70e921dbc7)
2023-05-10 19:48:15 -04:00
Kumar Gala
650d10805a posix: eventfd: depends on polling
Have the eventfd Kconfig select POLL, as the code utilizes the polling
API. We get a link error for tests/lib/fdtable/libraries.os.fdtable
when building on arm-clang without this.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit f215e4494c)
2023-05-10 19:48:15 -04:00
Chris Friedt
25616b1021 tests: drivers: dma: loop: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 8c6c96715f)
2023-05-09 08:42:19 -04:00
Chris Friedt
f72519007c tests: drivers: dma: chan_link: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7f6d976916)
2023-05-09 08:42:19 -04:00
Chris Friedt
1b2a7ec251 tests: drivers: dma: chan_blen: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 5afcac5e14)
2023-05-09 08:42:19 -04:00
Chris Friedt
9d2533fc92 tests: posix: ensure that min and max priority are schedulable
Verify that threads are actually schedulable for min and max
scheduler priority for both `SCHED_RR` (preemptive) and
`SCHED_FIFO` (cooperative).

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit ad71b78770)
2023-05-02 16:25:42 -04:00
Chris Friedt
e20b8f3f34 posix: sched: ensure min and max priority are schedulable
Previously, there was an off-by-one error for SCHED_RR.

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 2b2cbf8107)
2023-05-02 16:25:42 -04:00
Christopher Friedt
199d5d5448 drivers: pcie_ep: iproc: compile-out unused function based on DT
Compile-out `iproc_pcie_pl330_dma_xfer()` if there are no active
DMA users in devicetree.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 9ad78eb60c)
2023-05-02 12:33:46 -04:00
Chris Friedt
5db2717f06 drivers: pcie_ep: iproc: ensure config and api are const
The `config` and `api` members of `struct device` are expected
to be `const`. This also improves reliability, as `config`
and `api` are stored in rom rather than ram, which has the
potential to be corrupted at runtime in the absence of an MMU.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7212792295)
2023-04-27 11:18:00 -04:00
Tarun Karuturi
f3851326da drivers: pcie_ep: iproc: enable based on device tree specs
There are use cases for the pcie_ep driver where we don't
necessarily need the dma functionality. Added ifdefs around
the dma functionality so that it's only available if we
specify the dma engines in the device tree, similar to

```
dmas = <&pl330 0>, <&pl330 1>;
dma-names = "txdma", "rxdma";
```

Signed-off-by: Tarun Karuturi <tkaruturi@meta.com>
Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 9d95f69a87)
2023-04-27 11:18:00 -04:00
Stephanos Ioannidis
5a8d05b968 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-27 21:45:14 +09:00
Stephanos Ioannidis
eea42e38f3 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-15 15:59:20 +09:00
Chris Friedt
0388a90e7b posix: clock: fix seconds calculation
The previous method used to calculate seconds in `clock_gettime()`
seemed to have an inaccuracy that grew with time, causing the
seconds to be off by an order of magnitude when ticks would roll
over.

This change fixes the method used to calculate seconds.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
4c62d76fb7 sys: time_units: Add Kconfig option for algorithm selection
Add the maximum timeout used for conversion to Kconfig. The option is
used to determine which conversion algorithm to use: faster but
overflowing earlier, or slower without early overflow.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit 50c7c7b1e4)
2023-04-07 06:29:11 -04:00
Chris Friedt
6f8f9b5c7a tests: time_units: check for overflow in z_tmcvt intermediate
Prior to #41602, due to the ordering of operations (first mul,
then div), an intermediate value would overflow, resulting in
a time non-linearity.

This test ensures that time rolls-over properly.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 74c9c0e7a3)
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
afbc93287d lib: posix: clock: Prevent early overflows
The algorithm was converting uptime to nanoseconds, which can easily
lead to overflows. Change the algorithm to use milliseconds, with
nanoseconds used for the remainder only.
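A simplified sketch of the described approach (names assumed):

```c
#include <stdint.h>
#include <time.h>

/* Seconds come from the 64-bit millisecond uptime; nanoseconds are
 * used for the sub-second remainder only, so nothing large is ever
 * expressed in nanoseconds. */
static void uptime_ms_to_timespec(int64_t uptime_ms, struct timespec *ts)
{
	ts->tv_sec = uptime_ms / 1000;
	ts->tv_nsec = (uptime_ms % 1000) * 1000000;
}
```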

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-04-07 06:29:11 -04:00
Gerard Marull-Paretas
a28aa01a88 sys: time_units: add missing include
The header can't be fully used in standalone mode: toolchain.h has to be
included first, otherwise the ALWAYS_INLINE attribute is not defined.
Headers that can be directly included and are not self-contained should
be considered a bad practice.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-07 06:29:11 -04:00
Stephanos Ioannidis
677a374255 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:40 +09:00
Stephanos Ioannidis
0389fa740b ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 03:05:01 +09:00
Torsten Rasmussen
b02d34b855 cmake: fix variable de-referencing in zephyr_check_compiler_x functions
Fixes: #53124

Fix de-referencing of check and exists function arguments by correctly
de-referencing the argument references using `${<var>}`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 04a27651ea)
2023-03-13 07:48:51 -04:00
Torsten Rasmussen
29e3a4865f cmake: dereference ${check} after zephyr_check_compiler_flag() call
Follow-up: #53124

The PR#53124 fixed an issue where the variable `check` was not properly
dereferenced into the correct variable name for return value storage.
This was corrected in 04a27651ea.

However, some code was passing a return argument as:
`zephyr_check_compiler_flag(... ${check})`
but checking the result like:
`if(${check})`
thus relying on a faulty behavior of code updating `check` and not the
`${check}` variable.

Fix this by updating to use `${${check}}` as that will point to the
correct return value.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 45b25e5508)
2023-03-10 13:32:37 -05:00
Robert Lubos
aaa6d280ce net: iface: Add NULL pointer check in net_if_ipv6_set_reachable_time
In case the IPv6 context pointer was not set on an interface (for
instance due to IPv6 context shortage), processing the RA message could
lead to a crash (i. e. NULL pointer dereference). Protect against this
by adding NULL pointer check, similarly to other functions in this area.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit c6c2098255)
2023-02-28 12:36:47 -05:00
Robert Lubos
e02a3377e5 net: shell: Validate pointer provided with net pkt command
The net_pkt pointer provided to net pkt commands was not validated in
any way. Therefore it was fairly easy to crash an application by
providing an invalid address.

This commit adds the pointer validation. It's checked whether the
pointer provided belongs to any net_pkt pools known to the net stack,
and if the pointer offset within the slab actually points to the
beginning of the net_pkt structure.
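A sketch of the slab-membership test (field access and names are
assumptions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A pointer is plausible only if it falls inside the slab's buffer
 * and lands exactly on a block boundary. */
static bool ptr_is_block_start(const char *buffer, size_t block_size,
			       uint32_t num_blocks, const void *ptr)
{
	uintptr_t base = (uintptr_t)buffer;
	uintptr_t p = (uintptr_t)ptr;

	if (p < base || p >= base + (uintptr_t)block_size * num_blocks) {
		return false;
	}
	return ((p - base) % block_size) == 0;
}
```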

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit e540a98331)
2023-02-28 12:34:38 -05:00
Gerard Marull-Paretas
76c30dfa55 ci: doc-build: fix PDF build
The new LaTeX Docker image (Debian based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the
system environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:21:51 +09:00
Théo Battrel
c3f512d606 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`, which could
lead to the value being uninitialized and to unknown behavior.

To fix that issue, the `rp->le_max_num` value is checked the same way
`bt_dev.le.acl_mtu` was already checked. The same thing has been done
for `rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 19:48:12 -05:00
NingX Zhao
f882abfd13 tests: removing incorrect testcases of poll
These two test cases both are fault injection test cases,
and there are designed for testing some negative branches
to improve code coverage. But I find that this branch
shouldn't be tested, because the spinlock will be locked
before a procedure performs here, and then it will trigger
an assert error and the process will be rescheduled to the
handler function, and terminated the current test case,
so spinlock will never be unlocked. And it will impact
the next test case in the same test suite(the next testcase
will be never get spinlock).

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
(cherry picked from commit cb4a629bc8)
2023-02-07 12:14:38 -06:00
Lucas Dietrich
bc7300fea7 kernel: workq: Add internal function z_work_submit_to_queue()
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to
yield, unlike the public function k_work_submit_to_queue().

When called from poll.c in the context of k_work_poll events, it
ensures that the thread does not yield while holding the spinlock of
the object that became available.

Fixes #45267

Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
(cherry picked from commit 9a848b3ad4)
2023-02-03 18:37:53 -05:00
Andy Ross
8da9a76464 kernel/workq: Cleanup bespoke reschedule point
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible.  Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.

Really, this is only a half fix.  Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule().  But that would take some surgery, so let's go with
the simpler cleanup first.

This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.

[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 8d94967ec4)
2023-02-03 18:37:53 -05:00
Lixin Guo
298b8ea788 kernel: work: remove unused if statement
The work == NULL condition is checked earlier, so there is no need to
check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
(cherry picked from commit d4826d874e)
2023-02-03 18:37:53 -05:00
Peter Mitsis
a9aaf048e8 kernel: Fixes sys_clock_tick_get()
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_clock_announce().
  1. The handler invokes k_uptime_ticks().
  2. k_uptime_ticks() invokes sys_clock_tick_get().
  3. sys_clock_tick_get() must call elapsed() and not
     sys_clock_elapsed(), as we do not want to count any
     unannounced ticks that may have elapsed while
     processing the timer ISR (see the sketch below).
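A condensed sketch of the distinction (declarations are illustrative;
the real code lives in kernel/timeout.c):

```c
#include <kernel.h>
#include <drivers/timer/system_timer.h> /* sys_clock_elapsed() */

static struct k_spinlock timeout_lock;
static uint64_t curr_tick;         /* ticks announced so far */
static int32_t announce_remaining; /* non-zero while announcing */

/* Raw driver ticks, but 0 while an announce is in progress: those
 * ticks will be folded into curr_tick by the announce loop itself. */
static int32_t elapsed(void)
{
	return announce_remaining == 0 ? (int32_t)sys_clock_elapsed() : 0;
}

static uint64_t tick_get_sketch(void)
{
	uint64_t t;
	k_spinlock_key_t key = k_spin_lock(&timeout_lock);

	t = curr_tick + (uint64_t)elapsed();
	k_spin_unlock(&timeout_lock, key);
	return t;
}
```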

Fixes #46378

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 71ef669ea4)
2023-02-03 18:37:53 -05:00
Peter Mitsis
e2b81b48c4 kernel: fix race condition in sys_clock_announce()
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 3e2f30a7ef)
2023-02-03 18:37:53 -05:00
Andy Ross
45c41bc344 kernel/timeout: Cleanup/speedup parallel announce logic
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list).  The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).

Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit.  The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.
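A hedged sketch of the described flow (simplified; timeslicing and the
list walk elided):

```c
#include <kernel.h>

static struct k_spinlock timeout_lock;
static int32_t announce_remaining;

void sys_clock_announce_sketch(int32_t ticks)
{
	k_spinlock_key_t key = k_spin_lock(&timeout_lock);

	if (announce_remaining != 0) {
		/* Another CPU already owns the announce loop: hand it
		 * our ticks and leave without spinning. */
		announce_remaining += ticks;
		k_spin_unlock(&timeout_lock, key);
		return;
	}

	announce_remaining = ticks;
	/* ... walk the timeout list, firing expired handlers with the
	 * lock dropped around each callback ... */
	announce_remaining = 0;
	k_spin_unlock(&timeout_lock, key);
}
```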

Fixes #44758

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 0b2ed3818d)
2023-02-03 18:37:53 -05:00
Andy Ross
f570a46719 kernel/timeout: Serialize handler callbacks on SMP
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).

When that happens, each call will independently enter the loop over
the timeout list.  The access is correctly synchronized, so the list
handling is correct.  But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus.  That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".

That's surprising and error-prone.  Don't do it.  Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.

(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver.  That hardware issue is still poorly understood, but this fix
is desirable regardless.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b1182bf83b)
2023-02-03 18:37:53 -05:00
Flavio Ceolin
675a349e1b kernel: Fix timeout issue with SYSTEM_CLOCK_SLOPPY_IDLE
We can't simply use CLAMP to set the next timeout because
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number, and CLAMP would then be called with
the upper boundary lower than the lower boundary (demonstrated below).
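A standalone demonstration of why CLAMP misbehaves when the upper
bound is negative (the macros are reimplemented locally for
illustration):

```c
#include <stdio.h>

#define MIN_(a, b) ((a) < (b) ? (a) : (b))
#define MAX_(a, b) ((a) > (b) ? (a) : (b))
#define CLAMP_(x, lo, hi) MIN_(MAX_((x), (lo)), (hi))

int main(void)
{
	int max_wait = -1; /* illustrative SLOPPY_IDLE upper bound */

	/* With hi < lo, CLAMP degenerates to hi: every timeout
	 * collapses to -1 instead of staying in range. */
	printf("CLAMP(100, 1, %d) = %d\n", max_wait,
	       CLAMP_(100, 1, max_wait));
	return 0;
}
```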

Fixes #41422

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 47b7c2e931)
2023-02-03 18:37:53 -05:00
Andy Ross
16927a6cbb kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b4e9ef0691)
2023-02-03 18:37:53 -05:00
Andy Ross
ab353d6b7d kernel/sched: Refactor IPI signaling
Minor cleanup, we had a bunch of duplicated #if logic to send IPIs,
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 3267cd327e)
2023-02-03 18:37:53 -05:00
Mark Holden
951b055b7f debug: coredump: allow for coredump backends to be defined outside of tree
Move coredump_backend_api struct to public header so that custom backends
for coredump can be defined out of tree. Create simple backend in test
directory for verification.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit 7b2b283677)
2023-02-03 18:35:56 -05:00
Nicolas Pitre
8e256b3399 scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds
Let's consider this (simplified) compilation result of a debug build
using -O0 for riscv64:

|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
|                             unsigned int initial_count,
|                             unsigned int limit)
|{
|    80000ad0:   6105                    addi    sp,sp,32
|    80000ad2:   ec06                    sd      ra,24(sp)
|    80000ad4:   e42a                    sd      a0,8(sp)
|    80000ad6:   c22e                    sw      a1,4(sp)
|    80000ad8:   c032                    sw      a2,0(sp)
|        ret = arch_is_user_context();
|    80000ada:   b39ff0ef                jal     ra,80000612
|        if (z_syscall_trap()) {
|    80000ade:   c911                    beqz    a0,80000af2
|                return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
|                                    *(uintptr_t *)&initial_count,
|                                    *(uintptr_t *)&limit,
|                                    K_SYSCALL_K_SEM_INIT);
|    80000ae0:   6522                    ld      a0,8(sp)
|    80000ae2:   00413583                ld      a1,4(sp)
|    80000ae6:   6602                    ld      a2,0(sp)
|    80000ae8:   0b700693                li      a3,183
|    [...]

We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2)
being stored in memory with the `sw` (store word) instruction. Then,
according to the source code, the address of those values is casted
as a pointer to uintptr_t values, and that pointer is dereferenced to
get back those values with the `ld` (load double) instruction this time.

In other words, the assembly does exactly what the C code indicates.
This is wrong for several reasons:

- The top half of a1 and a2 will contain garbage due to the `ld` used
  to retrieve them. Whether or not the top bits will be cleared
  eventually depends on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big
  endian system.
- The load of a1 will cause a misaligned trap as it is 4-byte aligned
  while `ld` expects a 8-byte alignment.

The above code happens to work properly when compiling with
optimizations enabled as the compiler simplifies the cast and
dereference away, and register content is used as is in that case.
That doesn't make the code any more "correct" though.

The reason for taking the address of an argument and dereferencing it
as a uintptr_t pointer is most likely to work around the fact that the
compiler refuses to cast an aggregate value to an integer, even if that
aggregate value is in fact a simple structure wrapping an integer.

So let's fix this code by:

- Removing the pointer dereference roundtrip and associated casts. This
  gets rid of all the issues listed above.
- Using a union to perform the type transition which deals with
aggregates perfectly well. The compiler does optimize things to the
same assembly output in the end (see the sketch below).
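A hedged sketch of the union-based transition (illustrative names, not
the generated code verbatim):

```c
#include <stdint.h>

static inline uintptr_t arg_to_reg(unsigned int arg)
{
	union {
		uintptr_t x;      /* what the syscall stub receives */
		unsigned int val; /* the actual 32-bit argument */
	} u = { .x = 0 };         /* upper bytes defined in practice */

	u.val = arg;
	return u.x; /* compilers reduce this to a register move */
}
```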

This also makes the compiler happier, as those pragmas to shut up
warnings are no longer needed. The same should apply to Coverity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 1db5c8b948)
2023-02-01 20:07:43 -05:00
Nicolas Pitre
74f2760771 scripts: gen_syscalls: add missing --split-type case
With CONFIG_TIMEOUT_64BIT it is both k_timeout_t and k_ticks_t that
need to be split, otherwise many syscalls returning a number of ticks
are being truncated to 32 bits.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 2cdac33d39)
2023-02-01 20:07:43 -05:00
Nicolas Pitre
85e0912291 scripts: gen_syscalls: fix access validation size on extra params array
It was one below the entire array size.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit df80c77ed8)
2023-02-01 20:07:43 -05:00
Jim Shu
f2c582c75d gen_app_partitions: add .sdata/.sbss section into app_smem
Some architectures (e.g. RISC-V) has .sdata/.sbss section for small
data/bss. Memory partition should also manage the permission of these
sections in library so they should be put into app_smem.
(For example, newlib _impure_ptr is in .sdata section and
__malloc_top_pad is in .sbss section in RISC-V.)

Signed-off-by: Jim Shu <cwshu@andestech.com>
(cherry picked from commit 46eb3e5fce)
2023-02-01 20:07:43 -05:00
Robert Lubos
c908ee8133 net: context: Separate user data pointer from FIFO reserved space
Using the same memory as a user data pointer and FIFO reserved space
could lead to a crash in certain circumstances, as those two use cases
were not completely separated.

The crash could happen for example, if an incoming TCP connection was
abruptly closed just after being established. As TCP uses the user data
to notify error conditions to the upper layer, the user data pointer
could've been used while the newly allocated context was still
waiting on the accept queue. This damaged the data area used by the
FIFO and could eventually lead to a crash.
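A hedged sketch of the separation (field names illustrative):

```c
/* Reserving a dedicated first word for the FIFO means queuing a
 * context can no longer clobber its user data pointer. */
struct net_context_sketch {
	void *fifo_reserved; /* owned by k_fifo while queued */
	void *user_data;     /* safe to use even while queued */
	/* ... remaining context state ... */
};
```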

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 2ab11953e3)
2023-01-31 16:13:42 -05:00
Nicolas Pitre
175e76b302 z_thread_mark_switched_*: use z_current_get() instead of k_current_get()
k_current_get() may rely on TLS which might not yet be initialized
when those tracing functions are called, resulting in a crash.

This is different from the main branch as in that case the implementation
was completely revamped and neither k_current_get() nor z_current_get()
are used anymore. This is a much simpler fix than a backport of that
code, similar to the implication in commit f07df42d49 ("kernel:
make k_current_get() work without syscall").

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-23 15:01:10 -05:00
Keith Packard
c520749a71 tests: Disable HW stack protection for some mpu tests
When active, z_libc_partition consumes an MPU region which leaves too
few for some MPU tests. Free up one by disabling HW stack protection.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 19c8956946)
2023-01-11 11:02:47 -05:00
David Leach
584f52d5be tests: mem_protect: ensure allocated objects are initialized
K_OBJ_MSGQ, K_OBJ_PIPE, and K_OBJ_STACK objects have pointers
to additional memory that can be allocated. k_obj_alloc()
returns these objects uninitialized, so when they are freed
there are random opportunities to free invalid memory
and cause random faults.

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit fdea2a628b)
2023-01-11 11:02:47 -05:00
David Leach
d05c3bdf36 tests: mem_protect: avoid allocating K_OBJ_MSGQ in userspace.
The K_OBJ_MSGQ object is uninitialized, so when the thread cleanup
occurs after an expected fault for invalid access, the test case can
randomly fault again because the cleanup of the thread will sometimes
attempt to free an invalid buffer_start pointer in the msgq object.

Fixes #42705

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit a0737e687c)
2023-01-11 11:02:47 -05:00
Jim Shu
3ab0c9516f tests: mem_protect: enlarge heap size of RISCV64
Because the k_thread size on RISCV64 is near 512 bytes, a heap size of
(num_of_thread * 256) bytes is not enough. Enlarge the heap size on
RISCV64 to (num_of_thread * 1024) bytes, like x86_64 and ARM64.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
(cherry picked from commit e2d67d60ba)
2023-01-11 11:02:47 -05:00
Keith Packard
df6f0f477f tests/kernel/mem_protect: Check for thread_userspace_local_data
When using THREAD_LOCAL_STORAGE the thread_userspace_local_data stuff
isn't used, so these tests wouldn't build.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit b03b2e0403)
2023-01-11 11:02:47 -05:00
Nicolas Pitre
2dc30ca1fb tests: lifo_usage: make it less susceptible to SMP races
On SMP, and especially using qemu on a busy system, it is possible for
a thread with a later timeout to get ahead of another one with an
earlier timeout. The tight timeout value difference (10ms) makes it
possible albeit difficult to reproduce. The result is something like:

|START - test_timeout_threads_pend_on_lifo
| thread (q order: 2, t/o: 0, lifo 0x4001d350)
|
|    Assertion failed at main.c:140:
|test_multiple_threads_pending: (data->timeout_order not equal to ii)
| *** thread 2 woke up, expected 1

Let's make timeout values 10 times larger to make this unlikely race
even less likely.

While at it... The timeout field in struct timeout_order_data is some ms
value and not a number of ticks, so change the type accordingly.
And leverage k_cyc_to_ms_floor32() to simplify computation in
is_timeout_in_range().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit a1ce2fb990)
2023-01-11 11:02:47 -05:00
Daniel Leung
5cbda9f1c7 tests: kernel/smp: wait for threads to exits between tests
This adds a bunch of k_thread_join() calls to make sure threads
spawned for a test are no longer running when exiting that test. This
prevents interference between tests if some threads are still running
when assumed not to be.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit dbe3874079)
2023-01-11 11:02:47 -05:00
Carlo Caione
711506349d tests/kernel/smp: Add SMP switch torture test
Formalize and rework the issue reproducer for #40795 and add it to the
SMP test suite.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
(cherry picked from commit 8edf9817c0)
2023-01-11 11:02:47 -05:00
Ederson de Souza
572921a44a tests/kernel/fpu_sharing: Run test with MP_NUM_CPUS=1
This test uses k_yield() to "sync" between threads, so it's implicitly
supposed to run on a single CPU. Make it explicit, to avoid issues on
platforms with more cores.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>

FIXKFLOATDISABLE

(cherry picked from commit ab17f69a72)
2023-01-11 11:02:47 -05:00
140 changed files with 2307 additions and 637 deletions

View File

@@ -8,7 +8,7 @@ on:
jobs:
backport:
name: Backport Issue Check
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
steps:
- name: Check out source code

View File

@@ -78,8 +78,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
-aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
+aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -65,8 +65,8 @@ jobs:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
-aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
+aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -19,8 +19,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
-aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
-aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
+aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip

View File

@@ -125,7 +125,7 @@ jobs:
- name: install-pkgs
run: |
apt-get update
-apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
+apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v3
@@ -133,6 +133,12 @@ jobs:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip

View File

@@ -50,7 +50,7 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
-aws-access-key-id: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
+aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1

View File

@@ -32,8 +32,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
-aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3

View File

@@ -53,8 +53,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
-aws-access-key-id: ${{ secrets.FOOTPRINT_AWS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.FOOTPRINT_AWS_ACCESS_KEY }}
+aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint

View File

@@ -43,8 +43,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
-aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
-aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
+aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Post Results

View File

@@ -7,6 +7,4 @@ jobs:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
-- uses: actions/labeler@v2.1.1
-with:
-repo-token: '${{ secrets.GITHUB_TOKEN }}'
+- uses: actions/labeler@v4

View File

@@ -6,7 +6,7 @@ on:
jobs:
contribs:
-runs-on: ubuntu-20.04
+runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code

View File

@@ -183,8 +183,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
-aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
-aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
+aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
+aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial

View File

@@ -162,6 +162,13 @@ zephyr_compile_options(${OPTIMIZATION_FLAG})
# @Intent: Obtain compiler specific flags related to C++ that are not influenced by kconfig
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler-cpp,required>>)
# Extra warnings options for twister run
if (CONFIG_COMPILER_WARNINGS_AS_ERRORS)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,warnings_as_errors>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,warnings_as_errors>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,warnings_as_errors>)
endif()
# @Intent: Obtain compiler specific flags for compiling under different ISO standards of C++
if(CONFIG_CPLUSPLUS)
# From kconfig choice, pick a single dialect.
@@ -627,7 +634,7 @@ if(CONFIG_64BIT)
endif()
if(CONFIG_TIMEOUT_64BIT)
-set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t)
+set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t --split-type k_ticks_t)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}

View File

@@ -305,9 +305,13 @@ config NO_OPTIMIZATIONS
help
Compiler optimizations will be set to -O0 independently of other
options.
endchoice
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config COMPILER_COLOR_DIAGNOSTICS
bool "Enable colored diagnostics"
default y

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
-PATCHLEVEL = 4
+PATCHLEVEL = 6
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -27,6 +27,7 @@ endif # BOARD_BL5340_DVK_CPUAPP
config BUILD_WITH_TFM
default y if BOARD_BL5340_DVK_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -20,6 +20,7 @@ config BOARD
# force building with TF-M as the Secure Execution Environment.
config BUILD_WITH_TFM
default y if TRUSTED_EXECUTION_NONSECURE
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if GPIO

View File

@@ -4,7 +4,10 @@ type: mcu
arch: arm
ram: 4096
flash: 4096
-simulation: qemu
+# TFM is not supported by default in the Zephyr LTS release.
+# Excluding this board's simulator to avoid CI failures.
+#
+#simulation: qemu
toolchain:
- gnuarmemb
- zephyr

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF5340DK_NRF5340_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF9160DK_NRF9160_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -132,6 +132,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp2a "")
set_property(TARGET compiler-cpp PROPERTY dialect_cpp20 "")
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b "")
# Flags to set extra warnings (ARCMWDT asm can't recognize --fatal-warnings. Skip it)
set_property(TARGET compiler PROPERTY warnings_as_errors -Werror)
set_property(TARGET asm PROPERTY warnings_as_errors -Werror)
# Disable exceptions flag in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions "-fno-exceptions")

View File

@@ -65,6 +65,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp2a)
set_property(TARGET compiler-cpp PROPERTY dialect_cpp20)
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b)
# Extra warnings options for twister run
set_property(TARGET compiler PROPERTY warnings_as_errors)
set_property(TARGET asm PROPERTY warnings_as_errors)
# Flag for disabling exceptions in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions)

View File

@@ -137,6 +137,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp20 "-std=c++20"
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b "-std=c++2b"
"-Wno-register" "-Wno-volatile")
# Flags to set extra warnings (ARCMWDT asm can't recognize --fatal-warnings. Skip it)
set_property(TARGET compiler PROPERTY warnings_as_errors -Werror)
set_property(TARGET asm PROPERTY warnings_as_errors -Werror)
# Disable exceptions flag in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions "-fno-exceptions")

View File

@@ -53,7 +53,7 @@ list(REMOVE_DUPLICATES
# Drop support for NOT CONFIG_HAS_DTS perhaps?
if(EXISTS ${DTS_SOURCE})
set(SUPPORTS_DTS 1)
-if(BOARD_REVISION AND EXISTS ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
+if(DEFINED BOARD_REVISION AND EXISTS ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
list(APPEND DTS_SOURCE ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
endif()
else()

View File

@@ -518,7 +518,7 @@ function(zephyr_library_cc_option)
string(MAKE_C_IDENTIFIER check${option} check)
zephyr_check_compiler_flag(C ${option} ${check})
-if(${check})
+if(${${check}})
zephyr_library_compile_options(${option})
endif()
endforeach()
@@ -1003,9 +1003,9 @@ endfunction()
function(zephyr_check_compiler_flag lang option check)
# Check if the option is covered by any hardcoded check before doing
# an automated test.
-zephyr_check_compiler_flag_hardcoded(${lang} "${option}" check exists)
+zephyr_check_compiler_flag_hardcoded(${lang} "${option}" _${check} exists)
if(exists)
-set(check ${check} PARENT_SCOPE)
+set(${check} ${_${check}} PARENT_SCOPE)
return()
endif()
@@ -1110,11 +1110,11 @@ function(zephyr_check_compiler_flag_hardcoded lang option check exists)
# because they would produce a warning instead of an error during
# the test. Exclude them by toolchain-specific blocklist.
if((${lang} STREQUAL CXX) AND ("${option}" IN_LIST CXX_EXCLUDED_OPTIONS))
-set(check 0 PARENT_SCOPE)
-set(exists 1 PARENT_SCOPE)
+set(${check} 0 PARENT_SCOPE)
+set(${exists} 1 PARENT_SCOPE)
else()
# There does not exist a hardcoded check for this option.
-set(exists 0 PARENT_SCOPE)
+set(${exists} 0 PARENT_SCOPE)
endif()
endfunction(zephyr_check_compiler_flag_hardcoded)
@@ -1862,7 +1862,7 @@ function(check_set_linker_property)
zephyr_check_compiler_flag(C "" ${check})
set(CMAKE_REQUIRED_FLAGS ${SAVED_CMAKE_REQUIRED_FLAGS})
-if(${check})
+if(${${check}})
set_property(TARGET ${LINKER_PROPERTY_TARGET} ${APPEND} PROPERTY ${property} ${option})
endif()
endfunction()

View File

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: Apache-2.0
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors -Wl,--fatal-warnings)

View File

@@ -3,6 +3,9 @@ if (NOT CONFIG_COVERAGE_GCOV)
set_property(TARGET linker PROPERTY coverage --coverage)
endif()
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY ld_extra_warning_options -Wl,--fatal-warnings)
# ld/clang linker flags for sanitizing.
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -fsanitize=address)

View File

@@ -7,6 +7,9 @@ if (NOT CONFIG_COVERAGE_GCOV)
set_property(TARGET linker PROPERTY coverage -lgcov)
endif()
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors -Wl,--fatal-warnings)
# ld/gcc linker flags for sanitizing.
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -lasan)
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -fsanitize=address)

View File

@@ -14,3 +14,6 @@ check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_undefined)
# If memory reporting is a post build command, please use
# cmake/bintools/bintools.cmake instead.
check_set_linker_property(TARGET linker PROPERTY memusage)
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors)

View File

@@ -253,8 +253,8 @@ graphviz_dot_args = [
# -- Linkcheck options ----------------------------------------------------
extlinks = {
-"jira": ("https://jira.zephyrproject.org/browse/%s", ""),
-"github": ("https://github.com/zephyrproject-rtos/zephyr/issues/%s", ""),
+"jira": ("https://jira.zephyrproject.org/browse/%s", "JIRA %s"),
+"github": ("https://github.com/zephyrproject-rtos/zephyr/issues/%s", "GitHub #%s"),
}
linkcheck_timeout = 30

View File

@@ -2,6 +2,207 @@
.. _zephyr_2.7:
.. _zephyr_2.7.6:
Zephyr 2.7.6
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.5 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`32145` - use ``k_thread_foreach_unlocked()`` with shell callbacks
* :github:`56604` - drivers: nrf: rtc: make uptime consistent for app booted from v3.x mcuboot
* :github:`25917` - bluetooth: fix deadlock with tx of acl data and hci commands
* :github:`47649` - bluetooth: release att notification buffer after reconnection
* :github:`43718` - bluetooth: bt_conn: ensure tx buffers can be allocated within timeout
* :github:`60707` - canbus: isotp: seal context buffer memory leaks
* :github:`60904` - drivers: spi_nor: make erase operation more opportunistic
* :github:`61451` - drivers: can: stm32: correct timing_max parameters
* :github:`61501` - canbus: isotp: convert SF length check from ``ASSERT`` to runtime check
* :github:`61544` - drivers: ieee802154_nrf5: add payload length check on TX
* :github:`61784` - bluetooth: controller: check minimum sizes of adv PDUs
* :github:`62003` - drivers: dma: sam: implement xdmac ``get_status()`` API
* :github:`62701` - can: rework the table lookup code in ``can_dlc_to_bytes()``
* :github:`63544` - drivers: can: mcan: move RF0L and RF1L to line 1
* :github:`63835` - net_mgmt: return ``EMSGSIZE`` if buffer passed to ``recvfrom()`` is too small
* :github:`63965` - logging: fix handling of ``CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS``
* :github:`64398` - drivers: can: be consistent in ``filter_id`` checks when removing rx filters
* :github:`65548` - cmake: modules: dts: fix board revision 0 overlay
* :github:`66500` - toolchain: support ``CONFIG_COMPILER_WARNINGS_AS_ERRORS``
* :github:`66888` - net: ipv6: drop received packets sent by the same interface
* :github:`67692` - i2c: dw: fix integer overflow in ``i2c_dw_data_ask()``
* :github:`69167` - fs: fuse: avoid possible buffer overflow
* :github:`69637` - userspace: additional checks in ``K_SYSCALL_MEMORY``
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2023-4263: `Zephyr project bug tracker GHSA-rf6q-rhhp-pqhf
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-rf6q-rhhp-pqhf>`_
* CVE-2023-4424: `Zephyr project bug tracker GHSA-j4qm-xgpf-qjw3
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-j4qm-xgpf-qjw3>`_
* CVE-2023-5779: `Zephyr project bug tracker GHSA-7cmj-963q-jj47
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-7cmj-963q-jj47>`_
* CVE-2023-6249: `Zephyr project bug tracker GHSA-32f5-3p9h-2rqc
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-32f5-3p9h-2rqc>`_
* CVE-2023-6881: `Zephyr project bug tracker GHSA-mh67-4h3q-p437
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-mh67-4h3q-p437>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.5:
Zephyr 2.7.5
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.4 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`41111` - utils: tmcvt: fix integer overflow after 6.4 days with ``gettimeofday()`` and ``z_tmcvt()``
* :github:`51663` - tests: kernel: increase coverage for kernel and mmu tests
* :github:`53124` - cmake: fix argument passing in ``zephyr_check_compiler_flag()`` cmake function
* :github:`53315` - net: tcp: fix possible underflow in ``tcp_flags()``.
* :github:`53981` - scripts: fixes for ``gen_syscalls`` and ``gen_app_partitions``
* :github:`53983` - init: correct early init time calls to ``k_current_get()`` when TLS is enabled
* :github:`54140` - net: fix BUS FAULT when running nmap towards echo_async sample
* :github:`54325` - coredump: support out-of-tree coredump backend definition
* :github:`54386` - kernel: correct SMP scheduling with more than 2 CPUs
* :github:`54527` - tests: kernel: remove faulty test from tests/kernel/poll
* :github:`55019` - bluetooth: initialize backport of #54905 failed
* :github:`55068` - net: ipv6: validate arguments in ``net_if_ipv6_set_reachable_time()``
* :github:`55069` - net: core: ``net pkt`` shell command missing input validation
* :github:`55323` - logging: fix userspace runtime filtering
* :github:`55490` - cxx: fix compile error in C++ project for bad flags ``-Wno-pointer-sign`` and ``-Werror=implicit-int``
* :github:`56071` - security: MbedTLS: update to v2.28.3
* :github:`56729` - posix: SCHED_RR valid thread priorities
* :github:`57210` - drivers: pcie: endpoint: pcie_ep_iproc: correct use of optional devicetree binding
* :github:`57419` - tests: dma: support 64-bit addressing in tests
* :github:`57710` - posix: support building eventfd on arm-clang
mbedTLS
*******
mbedTLS has been moved to the 2.28.x series (2.28.3 precisely). This is an LTS release
that will be supported with bug fixes and security fixes until the end of 2024.
Detailed information can be found in:
https://github.com/Mbed-TLS/mbedtls/releases/tag/v2.28.3
https://github.com/zephyrproject-rtos/zephyr/issues/56071
This version is incompatible with TF-M, and because of this TF-M is no longer
supported in the Zephyr LTS. If TF-M is required, it can be manually added back
by changing the mbedTLS revision in ``west.yaml`` to the previous one
(5765cb7f75a9973ae9232d438e361a9d7bbc49e7). This should be carefully assessed
by a security expert to ensure that the known vulnerabilities in that version
don't affect the product.
Vulnerabilities addressed in this update:
* MBEDTLS_AESNI_C, which is enabled by default, was silently ignored on
builds that couldn't compile the GCC-style assembly implementation
(most notably builds with Visual Studio), leaving them vulnerable to
timing side-channel attacks. There is now an intrinsics-based AES-NI
implementation as a fallback for when the assembly one cannot be used.
* Fix potential heap buffer overread and overwrite in DTLS if
MBEDTLS_SSL_DTLS_CONNECTION_ID is enabled and
MBEDTLS_SSL_CID_IN_LEN_MAX > 2 * MBEDTLS_SSL_CID_OUT_LEN_MAX.
* An adversary with access to precise enough information about memory
accesses (typically, an untrusted operating system attacking a secure
enclave) could recover an RSA private key after observing the victim
performing a single private-key operation if the window size used for the
exponentiation was 3 or smaller. Found and reported by Zili KOU,
Wenjian HE, Sharad Sinha, and Wei ZHANG. See "Cache Side-channel Attacks
and Defenses of the Sliding Window Algorithm in TEEs" - Design, Automation
and Test in Europe 2023.
* Zeroize dynamically-allocated buffers used by the PSA Crypto key storage
module before freeing them. These buffers contain secret key material, and
could thus potentially leak the key through freed heap.
* Fix a potential heap buffer overread in TLS 1.2 server-side when
MBEDTLS_USE_PSA_CRYPTO is enabled, an opaque key (created with
mbedtls_pk_setup_opaque()) is provisioned, and a static ECDH ciphersuite
is selected. This may result in an application crash or potentially an
information leak.
* Fix a buffer overread in DTLS ClientHello parsing in servers with
MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE enabled. An unauthenticated client
or a man-in-the-middle could cause a DTLS server to read up to 255 bytes
after the end of the SSL input buffer. The buffer overread only happens
when MBEDTLS_SSL_IN_CONTENT_LEN is less than a threshold that depends on
the exact configuration: 258 bytes if using mbedtls_ssl_cookie_check(),
and possibly up to 571 bytes with a custom cookie check function.
Reported by the Cybeats PSI Team.
* Zeroize several intermediate variables used to calculate the expected
value when verifying a MAC or AEAD tag. This hardens the library in
case the value leaks through a memory disclosure vulnerability. For
example, a memory disclosure vulnerability could have allowed a
man-in-the-middle to inject fake ciphertext into a DTLS connection.
* In psa_cipher_generate_iv() and psa_cipher_encrypt(), do not read back
from the output buffer. This fixes a potential policy bypass or decryption
oracle vulnerability if the output buffer is in memory that is shared with
an untrusted application.
* Fix a double-free that happened after mbedtls_ssl_set_session() or
mbedtls_ssl_get_session() failed with MBEDTLS_ERR_SSL_ALLOC_FAILED
(out of memory). After that, calling mbedtls_ssl_session_free()
and mbedtls_ssl_free() would cause an internal session buffer to
be free()'d twice.
* Fix a bias in the generation of finite-field Diffie-Hellman-Merkle (DHM)
private keys and of blinding values for DHM and elliptic curves (ECP)
computations.
* Fix a potential side channel vulnerability in ECDSA ephemeral key generation.
An adversary who is capable of very precise timing measurements could
learn partial information about the leading bits of the nonce used for the
signature, allowing the recovery of the private key after observing a
large number of signature operations. This completes a partial fix in
Mbed TLS 2.20.0.
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2023-0397: `Zephyr project bug tracker GHSA-wc2h-h868-q7hj
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-wc2h-h868-q7hj>`_
* CVE-2023-0779: `Zephyr project bug tracker GHSA-9xj8-6989-r549
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-9xj8-6989-r549>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.4:
Zephyr 2.7.4

View File

@@ -171,6 +171,11 @@ void can_loopback_detach(const struct device *dev, int filter_id)
{
struct can_loopback_data *data = DEV_DATA(dev);
if (filter_id < 0 || filter_id >= ARRAY_SIZE(data->filters)) {
LOG_ERR("filter ID %d out of bounds", filter_id);
return;
}
LOG_DBG("Detach filter ID: %d", filter_id);
k_mutex_lock(&data->mtx, K_FOREVER);
data->filters[filter_id].rx_cb = NULL;

View File

@@ -404,7 +404,8 @@ int can_mcan_init(const struct device *dev, const struct can_mcan_config *cfg,
#ifdef CONFIG_CAN_STM32FD
can->ils = CAN_MCAN_ILS_RXFIFO0 | CAN_MCAN_ILS_RXFIFO1;
#else
-can->ils = CAN_MCAN_ILS_RF0N | CAN_MCAN_ILS_RF1N;
+can->ils = CAN_MCAN_ILS_RF0N | CAN_MCAN_ILS_RF1N |
+CAN_MCAN_ILS_RF0L | CAN_MCAN_ILS_RF1L;
#endif
can->ile = CAN_MCAN_ILE_EINT0 | CAN_MCAN_ILE_EINT1;
/* Interrupt on every TX fifo element*/
@@ -894,11 +895,16 @@ int can_mcan_attach_isr(struct can_mcan_data *data,
void can_mcan_detach(struct can_mcan_data *data,
struct can_mcan_msg_sram *msg_ram, int filter_nr)
{
if (filter_nr < 0) {
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}
k_mutex_lock(&data->inst_mutex, K_FOREVER);
if (filter_nr >= NUM_STD_FILTER_DATA) {
filter_nr -= NUM_STD_FILTER_DATA;
if (filter_nr >= NUM_STD_FILTER_DATA) {
-LOG_ERR("Wrong filter id");
+LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}

View File

@@ -551,6 +551,11 @@ static void mcp2515_detach(const struct device *dev, int filter_nr)
{
struct mcp2515_data *dev_data = DEV_DATA(dev);
if (filter_nr < 0 || filter_nr >= CONFIG_CAN_MAX_FILTER) {
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}
k_mutex_lock(&dev_data->mutex, K_FOREVER);
dev_data->filter_usage &= ~BIT(filter_nr);
k_mutex_unlock(&dev_data->mutex);

View File

@@ -480,9 +480,8 @@ static void mcux_flexcan_detach(const struct device *dev, int filter_id)
const struct mcux_flexcan_config *config = dev->config;
struct mcux_flexcan_data *data = dev->data;
-if (filter_id >= MCUX_FLEXCAN_MAX_RX) {
-LOG_ERR("Detach: Filter id >= MAX_RX (%d >= %d)", filter_id,
-MCUX_FLEXCAN_MAX_RX);
+if (filter_id < 0 || filter_id >= MCUX_FLEXCAN_MAX_RX) {
+LOG_ERR("filter ID %d out of bounds", filter_id);
return;
}

View File

@@ -829,7 +829,8 @@ void can_rcar_detach(const struct device *dev, int filter_nr)
{
struct can_rcar_data *data = DEV_CAN_DATA(dev);
-if (filter_nr >= CONFIG_CAN_RCAR_MAX_FILTER) {
+if (filter_nr < 0 || filter_nr >= CONFIG_CAN_RCAR_MAX_FILTER) {
+LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}

View File

@@ -1113,10 +1113,10 @@ static const struct can_driver_api can_api_funcs = {
.prescaler = 0x01
},
.timing_max = {
-.sjw = 0x07,
+.sjw = 0x04,
.prop_seg = 0x00,
-.phase_seg1 = 0x0F,
-.phase_seg2 = 0x07,
+.phase_seg1 = 0x10,
+.phase_seg2 = 0x08,
.prescaler = 0x400
}
};

View File

@@ -354,11 +354,36 @@ static int sam_xdmac_initialize(const struct device *dev)
return 0;
}
static int sam_xdmac_get_status(const struct device *dev, uint32_t channel,
struct dma_status *status)
{
const struct sam_xdmac_dev_cfg *const dev_cfg = dev->config;
Xdmac * const xdmac = dev_cfg->regs;
uint32_t chan_cfg = xdmac->XDMAC_CHID[channel].XDMAC_CC;
uint32_t ublen = xdmac->XDMAC_CHID[channel].XDMAC_CUBC;
/* we need to check some of the XDMAC_CC registers to determine the DMA direction */
if ((chan_cfg & XDMAC_CC_TYPE_Msk) == 0) {
status->dir = MEMORY_TO_MEMORY;
} else if ((chan_cfg & XDMAC_CC_DSYNC_Msk) == XDMAC_CC_DSYNC_MEM2PER) {
status->dir = MEMORY_TO_PERIPHERAL;
} else {
status->dir = PERIPHERAL_TO_MEMORY;
}
status->busy = ((chan_cfg & XDMAC_CC_INITD_Msk) != 0) || (ublen > 0);
status->pending_length = ublen;
return 0;
}
static const struct dma_driver_api sam_xdmac_driver_api = {
.config = sam_xdmac_config,
.reload = sam_xdmac_transfer_reload,
.start = sam_xdmac_transfer_start,
.stop = sam_xdmac_transfer_stop,
.get_status = sam_xdmac_get_status,
};
/* DMA0 */

View File

@@ -658,7 +658,7 @@ static int spi_nor_erase(const struct device *dev, off_t addr, size_t size)
if ((etp->exp != 0)
&& SPI_NOR_IS_ALIGNED(addr, etp->exp)
&& SPI_NOR_IS_ALIGNED(size, etp->exp)
&& (size >= BIT(etp->exp))
&& ((bet == NULL)
|| (etp->exp > bet->exp))) {
bet = etp;

View File

@@ -43,10 +43,10 @@ static inline void i2c_dw_data_ask(const struct device *dev)
{
struct i2c_dw_dev_config * const dw = dev->data;
uint32_t data;
-uint8_t tx_empty;
-int8_t rx_empty;
-uint8_t cnt;
-uint8_t rx_buffer_depth, tx_buffer_depth;
+int tx_empty;
+int rx_empty;
+int cnt;
+int rx_buffer_depth, tx_buffer_depth;
union ic_comp_param_1_register ic_comp_param_1;
uint32_t reg_base = get_regs(dev);

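The hunk above widens the driver's FIFO bookkeeping variables from 8-bit types to int. A standalone sketch (not driver code) of the truncation an int8_t invites once the hardware reports a FIFO deeper than 127 bytes:

/* Illustrative only: what happens to a FIFO headroom of 128 in an int8_t. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t rx_buffer_depth = 128; /* FIFO depth reported by hardware */
	uint32_t rx_fifo_level = 0;     /* FIFO currently empty */

	/* Converting 128 to int8_t is implementation-defined and typically
	 * wraps to -128, so the computed "room left" goes negative. */
	int8_t rx_empty_8 = (int8_t)(rx_buffer_depth - rx_fifo_level);
	int rx_empty = (int)(rx_buffer_depth - rx_fifo_level); /* stays 128 */

	printf("int8_t: %d, int: %d\n", rx_empty_8, rx_empty);
	return 0;
}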
View File

@@ -494,6 +494,11 @@ static int nrf5_tx(const struct device *dev,
uint8_t *payload = frag->data;
bool ret = true;
if (payload_len > NRF5_PSDU_LENGTH) {
LOG_ERR("Payload too large: %d", payload_len);
return -EMSGSIZE;
}
LOG_DBG("%p (%u)", payload, payload_len);
nrf5_radio->tx_psdu[0] = payload_len + NRF5_FCS_LENGTH;

View File

@@ -467,7 +467,7 @@ err_out:
static struct iproc_pcie_ep_ctx iproc_pcie_ep_ctx_0;
-static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
+static const struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.id = 0,
.base = (struct iproc_pcie_reg *)DT_INST_REG_ADDR(0),
.reg_size = DT_INST_REG_SIZE(0),
@@ -475,19 +475,21 @@ static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.map_low_size = DT_INST_REG_SIZE_BY_NAME(0, map_lowmem),
.map_high_base = DT_INST_REG_ADDR_BY_NAME(0, map_highmem),
.map_high_size = DT_INST_REG_SIZE_BY_NAME(0, map_highmem),
+#if DT_INST_NODE_HAS_PROP(0, dmas)
.pl330_dev = DEVICE_DT_GET(DT_INST_DMAS_CTLR_BY_IDX(0, 0)),
.pl330_tx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, txdma, channel),
.pl330_rx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, rxdma, channel),
+#endif
};
-static struct pcie_ep_driver_api iproc_pcie_ep_api = {
+static const struct pcie_ep_driver_api iproc_pcie_ep_api = {
.conf_read = iproc_pcie_conf_read,
.conf_write = iproc_pcie_conf_write,
.map_addr = iproc_pcie_map_addr,
.unmap_addr = iproc_pcie_unmap_addr,
.raise_irq = iproc_pcie_raise_irq,
.register_reset_cb = iproc_pcie_register_reset_cb,
-.dma_xfer = iproc_pcie_pl330_dma_xfer,
+.dma_xfer = DT_INST_NODE_HAS_PROP(0, dmas) ? iproc_pcie_pl330_dma_xfer : NULL,
};
DEVICE_DT_INST_DEFINE(0, &iproc_pcie_ep_init, NULL,

View File

@@ -341,10 +341,11 @@ int sys_clock_driver_init(const struct device *dev)
alloc_mask = BIT_MASK(EXT_CHAN_COUNT) << 1;
}
-if (!IS_ENABLED(CONFIG_TICKLESS_KERNEL)) {
-compare_set(0, counter() + CYC_PER_TICK,
-sys_clock_timeout_handler, NULL);
-}
+uint32_t initial_timeout = IS_ENABLED(CONFIG_TICKLESS_KERNEL) ?
+MAX_CYCLES : CYC_PER_TICK;
+compare_set(0, counter() + initial_timeout,
+sys_clock_timeout_handler, NULL);
z_nrf_clock_control_lf_on(mode);

View File

@@ -266,6 +266,19 @@ struct bt_l2cap_chan_ops {
*/
void (*encrypt_change)(struct bt_l2cap_chan *chan, uint8_t hci_status);
/** @brief Channel alloc_seg callback
*
* If this callback is provided the channel will use it to allocate
* buffers to store segments. This avoids wasting big SDU buffers with
* potentially much smaller PDUs. If this callback is supplied, it must
* return a valid buffer.
*
* @param chan The channel requesting a buffer.
*
* @return Allocated buffer.
*/
struct net_buf *(*alloc_seg)(struct bt_l2cap_chan *chan);
/** @brief Channel alloc_buf callback
*
* If this callback is provided the channel will use it to allocate

View File

@@ -126,6 +126,31 @@ struct coredump_mem_hdr_t {
uintptr_t end;
} __packed;
typedef void (*coredump_backend_start_t)(void);
typedef void (*coredump_backend_end_t)(void);
typedef void (*coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct coredump_backend_api {
/* Signal to backend of the start of coredump. */
coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
coredump_backend_end_t end;
/* Raw buffer output */
coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
void coredump(unsigned int reason, const z_arch_esf_t *esf,
struct k_thread *thread);
void coredump_memory_dump(uintptr_t start_addr, uintptr_t end_addr);

View File

@@ -420,7 +420,7 @@ static inline uint8_t can_dlc_to_bytes(uint8_t dlc)
static const uint8_t dlc_table[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 12,
16, 20, 24, 32, 48, 64};
-return dlc > 0x0F ? 64 : dlc_table[dlc];
+return dlc_table[MIN(dlc, ARRAY_SIZE(dlc_table) - 1)];
}
/**

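For reference, the reworked lookup above behaves like this self-contained sketch (ARRAY_SIZE and MIN are redefined locally only so the example compiles on its own):

#include <assert.h>
#include <stdint.h>

#define ARRAY_SIZE(array) (sizeof(array) / sizeof((array)[0]))
#define MIN(a, b) (((a) < (b)) ? (a) : (b))

static uint8_t can_dlc_to_bytes(uint8_t dlc)
{
	static const uint8_t dlc_table[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 12,
					    16, 20, 24, 32, 48, 64};

	/* Clamping the index keeps any DLC > 15 at 64 bytes without a
	 * separate ternary on the magic constant 0x0F. */
	return dlc_table[MIN(dlc, ARRAY_SIZE(dlc_table) - 1)];
}

int main(void)
{
	assert(can_dlc_to_bytes(8) == 8);
	assert(can_dlc_to_bytes(9) == 12);    /* CAN FD DLCs are non-linear */
	assert(can_dlc_to_bytes(0x1F) == 64); /* out-of-range DLC clamps */
	return 0;
}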
View File

@@ -162,6 +162,11 @@ struct z_kernel {
#if defined(CONFIG_THREAD_MONITOR)
struct k_thread *threads; /* singly linked list of ALL threads */
#endif
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
/* Need to signal an IPI at the next scheduling point */
bool pending_ipi;
#endif
};
typedef struct z_kernel _kernel_t;

View File

@@ -302,10 +302,8 @@ static inline char z_log_minimal_level_to_char(int level)
} \
\
bool is_user_context = k_is_user_context(); \
-uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
-(_dsource)->filters : 0;\
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
-_level > Z_LOG_RUNTIME_FILTER(filters)) { \
+_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \
@@ -347,8 +345,6 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
bool is_user_context = k_is_user_context(); \
-uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
-(_dsource)->filters : 0;\
\
if (IS_ENABLED(CONFIG_LOG_MINIMAL)) { \
Z_LOG_TO_PRINTK(_level, "%s", _str); \
@@ -357,7 +353,7 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
-_level > Z_LOG_RUNTIME_FILTER(filters)) { \
+_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \

View File

@@ -889,15 +889,6 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
buf->len = state->len;
}
-/**
- * Flag indicating that the buffer has associated fragments. Only used
- * internally by the buffer handling code while the buffer is inside a
- * FIFO, meaning this never needs to be explicitly set or unset by the
- * net_buf API user. As long as the buffer is outside of a FIFO, i.e.
- * in practice always for the user for this API, the buf->frags pointer
- * should be used instead.
- */
-#define NET_BUF_FRAGS BIT(0)
/**
* Flag indicating that the buffer's associated data pointer, points to
* externally allocated memory. Therefore once ref goes down to zero, the
@@ -907,7 +898,7 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
* Reference count mechanism however will behave the same way, and ref
* count going to 0 will free the net_buf but no the data pointer in it.
*/
-#define NET_BUF_EXTERNAL_DATA BIT(1)
+#define NET_BUF_EXTERNAL_DATA BIT(0)
/**
* @brief Network buffer representation.
@@ -917,13 +908,11 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
* using the net_buf_alloc() API.
*/
struct net_buf {
-union {
-/** Allow placing the buffer into sys_slist_t */
-sys_snode_t node;
+/** Allow placing the buffer into sys_slist_t */
+sys_snode_t node;
-/** Fragments associated with this buffer. */
-struct net_buf *frags;
-};
+/** Fragments associated with this buffer. */
+struct net_buf *frags;
/** Reference count. */
uint8_t ref;

View File

@@ -199,10 +199,11 @@ struct net_conn_handle;
* anyway. This saves 12 bytes / context in IPv6.
*/
__net_socket struct net_context {
-/** User data.
- *
- * First member of the structure to let users either have user data
- * associated with a context, or put contexts into a FIFO.
+/** First member of the structure to allow to put contexts into a FIFO.
*/
void *fifo_reserved;
+/** User data associated with a context.
+ */
+void *user_data;

View File

@@ -1368,6 +1368,10 @@ uint32_t net_if_ipv6_calc_reachable_time(struct net_if_ipv6 *ipv6);
static inline void net_if_ipv6_set_reachable_time(struct net_if_ipv6 *ipv6)
{
#if defined(CONFIG_NET_NATIVE_IPV6)
if (ipv6 == NULL) {
return;
}
ipv6->reachable_time = net_if_ipv6_calc_reachable_time(ipv6);
#endif
}

View File

@@ -7,6 +7,9 @@
#ifndef ZEPHYR_INCLUDE_TIME_UNITS_H_
#define ZEPHYR_INCLUDE_TIME_UNITS_H_
#include <sys/util.h>
#include <toolchain.h>
#ifdef __cplusplus
extern "C" {
#endif
@@ -56,6 +59,21 @@ static TIME_CONSTEXPR inline int sys_clock_hw_cycles_per_sec(void)
#endif
}
/** @internal
* Macro determines if fast conversion algorithm can be used. It checks if
* maximum timeout represented in source frequency domain and multiplied by
* target frequency fits in 64 bits.
*
* @param from_hz Source frequency.
* @param to_hz Target frequency.
*
* @retval true Use faster algorithm.
* @retval false Use algorithm preventing overflow of intermediate value.
*/
#define Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz) \
((ceiling_fraction(CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS * 24ULL * 3600ULL * from_hz, \
UINT32_MAX) * to_hz) <= UINT32_MAX)
/* Time converter generator gadget. Selects from one of three
* conversion algorithms: ones that take advantage when the
* frequencies are an integer ratio (in either direction), or a full
@@ -123,8 +141,18 @@ static TIME_CONSTEXPR ALWAYS_INLINE uint64_t z_tmcvt(uint64_t t, uint32_t from_h
} else {
if (result32) {
return (uint32_t)((t * to_hz + off) / from_hz);
+} else if (const_hz && Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz)) {
+/* Faster algorithm but source is first multiplied by target frequency
+ * and it can overflow even though final result would not overflow.
+ * Kconfig option shall prevent use of this algorithm when there is a
+ * risk of overflow.
+ */
+return ((t * to_hz + off) / from_hz);
} else {
-return (t * to_hz + off) / from_hz;
+/* Slower algorithm but input is first divided before being multiplied
+ * which prevents overflow of intermediate value.
+ */
+return (t / from_hz) * to_hz + ((t % from_hz) * to_hz + off) / from_hz;
}
}
}

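The difference between the two branches above can be seen with a small standalone computation (illustrative frequencies; the kernel's rounding offset off is taken as zero here):

#include <assert.h>
#include <stdint.h>

int main(void)
{
	const uint64_t from_hz = 32768U;           /* e.g. a 32 KiHz RTC */
	const uint64_t to_hz = 1000000000U;        /* nanoseconds */
	const uint64_t t = UINT64_MAX / to_hz + 1; /* large tick count */

	/* Fast path: t * to_hz wraps modulo 2^64 before the division. */
	uint64_t fast = (t * to_hz) / from_hz;

	/* Safe path: divide first, then fold the remainder back in. */
	uint64_t safe = (t / from_hz) * to_hz
		      + ((t % from_hz) * to_hz) / from_hz;

	/* The wraparound is observable: the two results disagree. */
	assert(fast != safe);
	return 0;
}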
View File

@@ -25,6 +25,7 @@
#include <zephyr/types.h>
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
@@ -61,6 +62,23 @@ extern "C" {
/** @brief 0 if @p cond is true-ish; causes a compile error otherwise. */
#define ZERO_OR_COMPILE_ERROR(cond) ((int) sizeof(char[1 - 2 * !(cond)]) - 1)
/**
* @brief Determine if a buffer exceeds highest address
*
* This macro determines if a buffer identified by a starting address @a addr
* and length @a buflen spans a region of memory that goes beyond the highest
* possible address (thereby resulting in a pointer overflow).
*
* @param addr Buffer starting address
* @param buflen Length of the buffer
*
* @return true if pointer overflow detected, false otherwise
*/
#define Z_DETECT_POINTER_OVERFLOW(addr, buflen) \
(((buflen) != 0) && \
((UINTPTR_MAX - (uintptr_t)(addr)) <= ((uintptr_t)((buflen) - 1))))
#if defined(__cplusplus)
/* The built-in function used below for type checking in C is not

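A standalone illustration of the new macro's behavior (the macro body is copied from the hunk above so the sketch compiles on its own):

#include <assert.h>
#include <stdint.h>

#define Z_DETECT_POINTER_OVERFLOW(addr, buflen) \
	(((buflen) != 0) && \
	 ((UINTPTR_MAX - (uintptr_t)(addr)) <= ((uintptr_t)((buflen) - 1))))

int main(void)
{
	char buf[16];

	/* An ordinary buffer does not trip the check. */
	assert(!Z_DETECT_POINTER_OVERFLOW(buf, sizeof(buf)));

	/* A span that would run past UINTPTR_MAX does. */
	assert(Z_DETECT_POINTER_OVERFLOW((void *)(UINTPTR_MAX - 8), 64));

	/* Zero-length buffers are explicitly never an overflow. */
	assert(!Z_DETECT_POINTER_OVERFLOW((void *)UINTPTR_MAX, 0));
	return 0;
}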
View File

@@ -329,6 +329,22 @@ extern int z_user_string_copy(char *dst, const char *src, size_t maxlen);
*/
#define Z_SYSCALL_VERIFY(expr) Z_SYSCALL_VERIFY_MSG(expr, #expr)
/**
* @brief Macro to check if size is negative
*
* Z_SYSCALL_MEMORY can be called with signed/unsigned types
* and because of that if we check if size is greater or equal to
* zero, many static analyzers complain about no effect expression.
*
* @param ptr Memory area to examine
* @param size Size of the memory area
* @return true if size is valid, false otherwise
* @note This is an internal API. Do not use unless you are extending
* functionality in the Zephyr tree.
*/
#define Z_SYSCALL_MEMORY_SIZE_CHECK(ptr, size) \
(((uintptr_t)ptr + size) >= (uintptr_t)ptr)
/**
* @brief Runtime check that a user thread has read and/or write permission to
* a memory area
@@ -346,8 +362,10 @@ extern int z_user_string_copy(char *dst, const char *src, size_t maxlen);
* @return 0 on success, nonzero on failure
*/
#define Z_SYSCALL_MEMORY(ptr, size, write) \
-Z_SYSCALL_VERIFY_MSG(arch_buffer_validate((void *)ptr, size, write) \
-== 0, \
+Z_SYSCALL_VERIFY_MSG(Z_SYSCALL_MEMORY_SIZE_CHECK(ptr, size) \
+&& !Z_DETECT_POINTER_OVERFLOW(ptr, size) \
+&& (arch_buffer_validate((void *)ptr, size, write) \
+== 0), \
"Memory region %p (size %zu) %s access denied", \
(void *)(ptr), (size_t)(size), \
write ? "write" : "read")

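The point of the added size check can be reproduced outside the kernel (macro body copied from the hunk; buffer and size values are illustrative):

#include <assert.h>
#include <stdint.h>

#define Z_SYSCALL_MEMORY_SIZE_CHECK(ptr, size) \
	(((uintptr_t)ptr + size) >= (uintptr_t)ptr)

int main(void)
{
	char buf[16];
	int size = -1; /* a buggy or malicious signed size */

	/* In the addition, size is converted to uintptr_t, so -1 becomes
	 * UINTPTR_MAX and ptr + size wraps below ptr: rejected. */
	assert(!Z_SYSCALL_MEMORY_SIZE_CHECK(buf, size));

	/* Sane sizes pass. */
	assert(Z_SYSCALL_MEMORY_SIZE_CHECK(buf, sizeof(buf)));
	return 0;
}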
View File

@@ -48,8 +48,9 @@
#endif
#undef BUILD_ASSERT /* clear out common version */
/* C++11 has static_assert built in */
-#ifdef __cplusplus
+#if defined(__cplusplus) && (__cplusplus >= 201103L)
#define BUILD_ASSERT(EXPR, MSG...) static_assert(EXPR, "" MSG)
/*

View File

@@ -613,6 +613,17 @@ config TIMEOUT_64BIT
availability of absolute timeout values (which require the
extra precision).
config SYS_CLOCK_MAX_TIMEOUT_DAYS
int "Max timeout (in days) used in conversions"
default 365
help
Value is used in the time conversion static inline function to determine
at compile time which algorithm to use. One algorithm is faster, takes
less code but may overflow if multiplication of source and target
frequency exceeds 64 bits. Second algorithm prevents that. Faster
algorithm is selected for conversion if maximum timeout represented in
source frequency domain multiplied by target frequency fits in 64 bits.
config XIP
bool "Execute in place"
help

View File

@@ -576,6 +576,9 @@ static void triggered_work_expiration_handler(struct _timeout *timeout)
k_work_submit_to_queue(twork->workq, &twork->work);
}
extern int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work);
static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
{
struct z_poller *poller = event->poller;
@@ -587,7 +590,7 @@ static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
z_abort_timeout(&twork->timeout);
twork->poll_result = 0;
-k_work_submit_to_queue(work_q, &twork->work);
+z_work_submit_to_queue(work_q, &twork->work);
}
return 0;

View File

@@ -219,6 +219,25 @@ static ALWAYS_INLINE void dequeue_thread(void *pq,
}
}
static void signal_pending_ipi(void)
{
/* Synchronization note: you might think we need to lock these
* two steps, but an IPI is idempotent. It's OK if we do it
* twice. All we require is that if a CPU sees the flag true,
* it is guaranteed to send the IPI, and if a core sets
* pending_ipi, the IPI will be sent the next time through
* this code.
*/
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
if (_kernel.pending_ipi) {
_kernel.pending_ipi = false;
arch_sched_ipi();
}
}
#endif
}
#ifdef CONFIG_SMP
/* Called out of z_swap() when CONFIG_SMP. The current thread can
* never live in the run queue until we are inexorably on the context
@@ -231,6 +250,7 @@ void z_requeue_current(struct k_thread *curr)
if (z_is_thread_queued(curr)) {
_priq_run_add(&_kernel.ready_q.runq, curr);
}
signal_pending_ipi();
}
#endif
@@ -481,6 +501,15 @@ static bool thread_active_elsewhere(struct k_thread *thread)
return false;
}
static void flag_ipi(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
_kernel.pending_ipi = true;
}
#endif
}
static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
@@ -495,9 +524,7 @@ static void ready_thread(struct k_thread *thread)
queue_thread(&_kernel.ready_q.runq, thread);
update_cache(0);
-#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
-arch_sched_ipi();
-#endif
+flag_ipi();
}
}
@@ -799,9 +826,7 @@ void z_thread_priority_set(struct k_thread *thread, int prio)
{
bool need_sched = z_set_prio(thread, prio);
-#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
-arch_sched_ipi();
-#endif
+flag_ipi();
if (need_sched && _current->base.sched_locked == 0U) {
z_reschedule_unlocked();
@@ -841,6 +866,7 @@ void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
z_swap(lock, key);
} else {
k_spin_unlock(lock, key);
signal_pending_ipi();
}
}
@@ -850,6 +876,7 @@ void z_reschedule_irqlock(uint32_t key)
z_swap_irqlock(key);
} else {
irq_unlock(key);
signal_pending_ipi();
}
}
@@ -883,7 +910,16 @@ void k_sched_unlock(void)
struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
-return next_up();
+struct k_thread *ret = next_up();
+if (ret == _current) {
+/* When not swapping, have to signal IPIs here. In
+ * the context switch case it must happen later, after
+ * _current gets requeued.
+ */
+signal_pending_ipi();
+}
+return ret;
#else
return _kernel.ready_q.cache;
#endif
@@ -950,6 +986,7 @@ void *z_get_next_switch_handle(void *interrupted)
new_thread->switch_handle = NULL;
}
}
signal_pending_ipi();
return ret;
#else
_current->switch_handle = interrupted;
@@ -1346,9 +1383,7 @@ void z_impl_k_wakeup(k_tid_t thread)
z_mark_thread_as_not_suspended(thread);
z_ready_thread(thread);
-#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
-arch_sched_ipi();
-#endif
+flag_ipi();
if (!arch_is_in_isr()) {
z_reschedule_unlocked();
@@ -1535,6 +1570,9 @@ void z_thread_abort(struct k_thread *thread)
/* It's running somewhere else, flag and poke */
thread->base.thread_state |= _THREAD_ABORTING;
/* We're going to spin, so need a true synchronous IPI
* here, not deferred!
*/
#ifdef CONFIG_SCHED_IPI_SUPPORTED
arch_sched_ipi();
#endif

View File

@@ -1011,7 +1011,7 @@ void z_thread_mark_switched_in(void)
#ifdef CONFIG_THREAD_RUNTIME_STATS
struct k_thread *thread;
-thread = k_current_get();
+thread = z_current_get();
#ifdef CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS
thread->rt_stats.last_switched_in = timing_counter_get();
#else
@@ -1033,7 +1033,7 @@ void z_thread_mark_switched_out(void)
uint64_t diff;
struct k_thread *thread;
-thread = k_current_get();
+thread = z_current_get();
if (unlikely(thread->rt_stats.last_switched_in == 0)) {
/* Has not run before */

View File

@@ -68,8 +68,14 @@ static int32_t next_timeout(void)
{
struct _timeout *to = first();
int32_t ticks_elapsed = elapsed();
-int32_t ret = to == NULL ? MAX_WAIT
-: CLAMP(to->dticks - ticks_elapsed, 0, MAX_WAIT);
+int32_t ret;
+if ((to == NULL) ||
+((int64_t)(to->dticks - ticks_elapsed) > (int64_t)INT_MAX)) {
+ret = MAX_WAIT;
+} else {
+ret = MAX(0, to->dticks - ticks_elapsed);
+}
#ifdef CONFIG_TIMESLICING
if (_current_cpu->slice_ticks && _current_cpu->slice_ticks < ret) {
@@ -238,6 +244,18 @@ void sys_clock_announce(int32_t ticks)
k_spinlock_key_t key = k_spin_lock(&timeout_lock);
/* We release the lock around the callbacks below, so on SMP
* systems someone might be already running the loop. Don't
* race (which will cause parallel execution of "sequential"
* timeouts and confuse apps), just increment the tick count
* and return.
*/
if (IS_ENABLED(CONFIG_SMP) && (announce_remaining != 0)) {
announce_remaining += ticks;
k_spin_unlock(&timeout_lock, key);
return;
}
announce_remaining = ticks;
while (first() != NULL && first()->dticks <= announce_remaining) {
@@ -245,13 +263,13 @@ void sys_clock_announce(int32_t ticks)
int dt = t->dticks;
curr_tick += dt;
-announce_remaining -= dt;
t->dticks = 0;
remove_timeout(t);
k_spin_unlock(&timeout_lock, key);
t->fn(t);
key = k_spin_lock(&timeout_lock);
+announce_remaining -= dt;
}
if (first() != NULL) {
@@ -271,7 +289,7 @@ int64_t sys_clock_tick_get(void)
uint64_t t = 0U;
LOCKED(&timeout_lock) {
-t = curr_tick + sys_clock_elapsed();
+t = curr_tick + elapsed();
}
return t;
}

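The guard added to next_timeout() above boils down to comparing in 64 bits before narrowing; a standalone sketch with simplified names:

#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define MAX_WAIT INT_MAX

static int32_t next_wait(int64_t dticks, int32_t elapsed)
{
	/* With CONFIG_TIMEOUT_64BIT, dticks - elapsed can exceed
	 * INT32_MAX; clamping after a 32-bit truncation would produce a
	 * bogus (possibly negative) wait, so compare in 64 bits first. */
	if (dticks - elapsed > (int64_t)INT_MAX) {
		return MAX_WAIT;
	}
	int64_t d = dticks - elapsed;
	return (int32_t)(d > 0 ? d : 0);
}

int main(void)
{
	assert(next_wait(5, 2) == 3);
	assert(next_wait((int64_t)INT_MAX + 1000, 0) == MAX_WAIT);
	assert(next_wait(1, 5) == 0);
	return 0;
}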
View File

@@ -355,26 +355,45 @@ static int submit_to_queue_locked(struct k_work *work,
return ret;
}
-int k_work_submit_to_queue(struct k_work_q *queue,
-struct k_work *work)
+/* Submit work to a queue but do not yield the current thread.
+*
+* Intended for internal use.
+*
+* See also submit_to_queue_locked().
+*
+* @param queuep pointer to a queue reference.
+* @param work the work structure to be submitted
+*
+* @retval see submit_to_queue_locked()
+*/
+int z_work_submit_to_queue(struct k_work_q *queue,
+struct k_work *work)
{
__ASSERT_NO_MSG(work != NULL);
k_spinlock_key_t key = k_spin_lock(&lock);
-SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = submit_to_queue_locked(work, &queue);
k_spin_unlock(&lock, key);
-/* If we changed the queue contents (as indicated by a positive ret)
-* the queue thread may now be ready, but we missed the reschedule
-* point because the lock was held. If this is being invoked by a
-* preemptible thread then yield.
+return ret;
+}
+int k_work_submit_to_queue(struct k_work_q *queue,
+struct k_work *work)
+{
+SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
+int ret = z_work_submit_to_queue(queue, work);
+/* submit_to_queue_locked() won't reschedule on its own
+* (really it should, otherwise this process will result in
+* spurious calls to z_swap() due to the race), so do it here
+* if the queue state changed.
+*/
-if ((ret > 0) && (k_is_preempt_thread() != 0)) {
-k_yield();
+if (ret > 0) {
+z_reschedule_unlocked();
}
SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work, submit_to_queue, queue, work, ret);
@@ -586,6 +605,7 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
struct k_work *work = NULL;
k_work_handler_t handler = NULL;
k_spinlock_key_t key = k_spin_lock(&lock);
bool yield;
/* Check for and prepare any new work. */
node = sys_slist_get(&queue->pending);
@@ -644,34 +664,30 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
k_spin_unlock(&lock, key);
if (work != NULL) {
bool yield;
__ASSERT_NO_MSG(handler != NULL);
handler(work);
__ASSERT_NO_MSG(handler != NULL);
handler(work);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
}
}

View File

@@ -112,6 +112,8 @@ config APP_LINK_WITH_POSIX_SUBSYS
config EVENTFD
bool "Enable support for eventfd"
depends on !ARCH_POSIX
select POLL
default y if POSIX_API
help
Enable support for event file descriptors, eventfd. An eventfd can
be used as an event wait/notify mechanism together with POSIX calls

View File

@@ -27,7 +27,6 @@ static struct k_spinlock rt_clock_base_lock;
*/
int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
{
-uint64_t elapsed_nsecs;
struct timespec base;
k_spinlock_key_t key;
@@ -48,9 +47,13 @@ int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
return -1;
}
-elapsed_nsecs = k_ticks_to_ns_floor64(k_uptime_ticks());
-ts->tv_sec = (int32_t) (elapsed_nsecs / NSEC_PER_SEC);
-ts->tv_nsec = (int32_t) (elapsed_nsecs % NSEC_PER_SEC);
+uint64_t ticks = k_uptime_ticks();
+uint64_t elapsed_secs = ticks / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
+uint64_t nremainder = ticks - elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC;
+ts->tv_sec = (time_t) elapsed_secs;
+/* For ns 32 bit conversion can be used since its smaller than 1sec. */
+ts->tv_nsec = (int32_t) k_ticks_to_ns_floor32(nremainder);
ts->tv_sec += base.tv_sec;
ts->tv_nsec += base.tv_nsec;

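The rewritten conversion can be checked with plain integer arithmetic (illustrative tick rate, standalone sketch):

#include <assert.h>
#include <stdint.h>

#define TICKS_PER_SEC 32768ULL
#define NSEC_PER_SEC  1000000000ULL

int main(void)
{
	uint64_t ticks = 5ULL * TICKS_PER_SEC + TICKS_PER_SEC / 2; /* 5.5 s */

	uint64_t secs = ticks / TICKS_PER_SEC;
	uint64_t rem = ticks - secs * TICKS_PER_SEC;

	/* rem is less than one second of ticks, so converting it to ns
	 * cannot overflow, unlike multiplying the whole tick count. */
	uint64_t nsec = (rem * NSEC_PER_SEC) / TICKS_PER_SEC;

	assert(secs == 5 && nsec == NSEC_PER_SEC / 2);
	return 0;
}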
View File

@@ -15,12 +15,10 @@
#define PTHREAD_INIT_FLAGS PTHREAD_CANCEL_ENABLE
#define PTHREAD_CANCELED ((void *) -1)
-#define LOWEST_POSIX_THREAD_PRIORITY 1
PTHREAD_MUTEX_DEFINE(pthread_key_lock);
static const pthread_attr_t init_pthread_attrs = {
-.priority = LOWEST_POSIX_THREAD_PRIORITY,
+.priority = 0,
.stack = NULL,
.stacksize = 0,
.flags = PTHREAD_INIT_FLAGS,
@@ -54,9 +52,11 @@ static uint32_t zephyr_to_posix_priority(int32_t z_prio, int *policy)
if (z_prio < 0) {
*policy = SCHED_FIFO;
prio = -1 * (z_prio + 1);
+__ASSERT_NO_MSG(prio < CONFIG_NUM_COOP_PRIORITIES);
} else {
*policy = SCHED_RR;
-prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio);
+prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio - 1);
+__ASSERT_NO_MSG(prio < CONFIG_NUM_PREEMPT_PRIORITIES);
}
return prio;
@@ -68,9 +68,11 @@ static int32_t posix_to_zephyr_priority(uint32_t priority, int policy)
if (policy == SCHED_FIFO) {
/* Zephyr COOP priority starts from -1 */
+__ASSERT_NO_MSG(priority < CONFIG_NUM_COOP_PRIORITIES);
prio = -1 * (priority + 1);
} else {
-prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority);
+__ASSERT_NO_MSG(priority < CONFIG_NUM_PREEMPT_PRIORITIES);
+prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority - 1);
}
return prio;

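The effect of the corrected SCHED_RR mapping can be sketched with a hypothetical CONFIG_NUM_PREEMPT_PRIORITIES of 16 (standalone illustration, not the actual Zephyr code):

#include <assert.h>

#define NUM_PREEMPT_PRIORITIES 16 /* stand-in for the Kconfig value */

static int zephyr_to_posix(int z_prio)
{
	return NUM_PREEMPT_PRIORITIES - z_prio - 1;
}

static int posix_to_zephyr(int priority)
{
	return NUM_PREEMPT_PRIORITIES - priority - 1;
}

int main(void)
{
	/* Without the "- 1", Zephyr priority 0 mapped to POSIX 16, one
	 * past the valid range 0..15. With it, the mapping stays in
	 * range and round-trips cleanly. */
	for (int z = 0; z < NUM_PREEMPT_PRIORITIES; z++) {
		int p = zephyr_to_posix(z);
		assert(p >= 0 && p < NUM_PREEMPT_PRIORITIES);
		assert(posix_to_zephyr(p) == z);
	}
	return 0;
}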
View File

@@ -7,13 +7,9 @@
#include <kernel.h>
#include <posix/posix_sched.h>
-static bool valid_posix_policy(int policy)
+static inline bool valid_posix_policy(int policy)
{
-if (policy != SCHED_FIFO && policy != SCHED_RR) {
-return false;
-}
-return true;
+return policy == SCHED_FIFO || policy == SCHED_RR;
}
/**
@@ -23,25 +19,12 @@ static bool valid_posix_policy(int policy)
*/
int sched_get_priority_min(int policy)
{
-if (valid_posix_policy(policy) == false) {
+if (!valid_posix_policy(policy)) {
errno = EINVAL;
return -1;
}
-if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
-if (policy == SCHED_FIFO) {
-return 0;
-}
-}
-if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
-if (policy == SCHED_RR) {
-return 0;
-}
-}
-errno = EINVAL;
-return -1;
+return 0;
}
/**
@@ -51,25 +34,10 @@ int sched_get_priority_min(int policy)
*/
int sched_get_priority_max(int policy)
{
-if (valid_posix_policy(policy) == false) {
-errno = EINVAL;
-return -1;
-}
-if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
-if (policy == SCHED_FIFO) {
-/* Posix COOP priority starts from 0
- * whereas zephyr starts from -1
- */
-return (CONFIG_NUM_COOP_PRIORITIES - 1);
-}
-}
-if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
-if (policy == SCHED_RR) {
-return CONFIG_NUM_PREEMPT_PRIORITIES;
-}
+if (IS_ENABLED(CONFIG_COOP_ENABLED) && policy == SCHED_FIFO) {
+return CONFIG_NUM_COOP_PRIORITIES - 1;
+} else if (IS_ENABLED(CONFIG_PREEMPT_ENABLED) && policy == SCHED_RR) {
+return CONFIG_NUM_PREEMPT_PRIORITIES - 1;
+}
errno = EINVAL;

View File

@@ -24,6 +24,7 @@ config TFM_BOARD
menuconfig BUILD_WITH_TFM
bool "Build with TF-M as the Secure Execution Environment"
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
depends on TRUSTED_EXECUTION_NONSECURE
depends on TFM_BOARD != ""
depends on ARM_TRUSTZONE_M

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns nrf5340dk_nrf5340_cpuapp_ns
nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns v2m_musca_s1_ns stm32l562e_dk_ns
bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -5,6 +5,7 @@ common:
tags: psa
platform_allow: mps2_an521_ns v2m_musca_s1_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -22,3 +23,4 @@ common:
tests:
sample.tfm.protected_storage:
tags: tfm
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -8,6 +8,7 @@ tests:
platform_allow: mps2_an521_ns lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns nucleo_l552ze_q_ns
stm32l562e_dk_ns v2m_musca_s1_ns v2m_musca_b1_ns bl5340_dvk_cpuapp_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -21,6 +22,7 @@ tests:
platform_allow: mps2_an521_ns
extra_configs:
- CONFIG_TFM_BL2=n
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line

View File

@@ -3,6 +3,7 @@ common:
platform_allow: mps2_an521_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -16,5 +17,7 @@ tests:
sample.tfm.psa_protected_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_PROTECTED_STORAGE=y"
timeout: 100
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
sample.tfm.psa_internal_trusted_storage_test:
extra_args: "CONFIG_TFM_PSA_TEST_INTERNAL_TRUSTED_STORAGE=y"
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -3,6 +3,7 @@ common:
platform_allow: lpcxpresso55s69_ns
nrf5340dk_nrf5340_cpuapp_ns nrf9160dk_nrf9160_ns
v2m_musca_s1_ns
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
harness: console
harness_config:
type: multi_line
@@ -18,3 +19,4 @@ tests:
sample.tfm.tfm_regression:
extra_args: ""
timeout: 200
filter: CONFIG_ZEPHYR_TRUSTED_FIRMWARE_M_MODULE

View File

@@ -15,3 +15,5 @@ tests:
sample.kernel.memory_protection.shared_mem:
filter: CONFIG_ARCH_HAS_USERSPACE
platform_exclude: twr_ke18f
extra_configs:
- CONFIG_TEST_HW_STACK_PROTECTION=n

View File

@@ -58,7 +58,7 @@ data_template = """
"""
library_data_template = """
-*{0}:*(.data .data.*)
+*{0}:*(.data .data.* .sdata .sdata.*)
"""
bss_template = """
@@ -67,7 +67,7 @@ bss_template = """
"""
library_bss_template = """
-*{0}:*(.bss .bss.* COMMON COMMON.*)
+*{0}:*(.bss .bss.* .sbss .sbss.* COMMON COMMON.*)
"""
footer_template = """

View File

@@ -55,8 +55,8 @@ const _k_syscall_handler_t _k_syscall_table[K_SYSCALL_LIMIT] = {
};
"""
-list_template = """
-/* auto-generated by gen_syscalls.py, don't edit */
+list_template = """/* auto-generated by gen_syscalls.py, don't edit */
#ifndef ZEPHYR_SYSCALL_LIST_H
#define ZEPHYR_SYSCALL_LIST_H
@@ -82,17 +82,6 @@ syscall_template = """
#include <linker/sections.h>
-#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
-#pragma GCC diagnostic push
-#endif
-#ifdef __GNUC__
-#pragma GCC diagnostic ignored "-Wstrict-aliasing"
-#if !defined(__XCC__)
-#pragma GCC diagnostic ignored "-Warray-bounds"
-#endif
-#endif
#ifdef __cplusplus
extern "C" {
#endif
@@ -103,10 +92,6 @@ extern "C" {
}
#endif
-#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
-#pragma GCC diagnostic pop
-#endif
#endif
#endif /* include guard */
"""
@@ -153,25 +138,13 @@ def need_split(argtype):
# Note: "lo" and "hi" are named in little endian conventions,
# but it doesn't matter as long as they are consistently
# generated.
-def union_decl(type):
-return "union { struct { uintptr_t lo, hi; } split; %s val; }" % type
+def union_decl(type, split):
+middle = "struct { uintptr_t lo, hi; } split" if split else "uintptr_t x"
+return "union { %s; %s val; }" % (middle, type)
def wrapper_defs(func_name, func_type, args):
ret64 = need_split(func_type)
mrsh_args = [] # List of rvalue expressions for the marshalled invocation
-split_args = []
-nsplit = 0
-for argtype, argname in args:
-if need_split(argtype):
-split_args.append((argtype, argname))
-mrsh_args.append("parm%d.split.lo" % nsplit)
-mrsh_args.append("parm%d.split.hi" % nsplit)
-nsplit += 1
-else:
-mrsh_args.append("*(uintptr_t *)&" + argname)
-if ret64:
-mrsh_args.append("(uintptr_t)&ret64")
decl_arglist = ", ".join([" ".join(argrec) for argrec in args]) or "void"
@@ -184,10 +157,24 @@ def wrapper_defs(func_name, func_type, args):
wrap += ("\t" + "uint64_t ret64;\n") if ret64 else ""
wrap += "\t" + "if (z_syscall_trap()) {\n"
-for parmnum, rec in enumerate(split_args):
-(argtype, argname) = rec
-wrap += "\t\t%s parm%d;\n" % (union_decl(argtype), parmnum)
-wrap += "\t\t" + "parm%d.val = %s;\n" % (parmnum, argname)
+valist_args = []
+for argnum, (argtype, argname) in enumerate(args):
+split = need_split(argtype)
+wrap += "\t\t%s parm%d" % (union_decl(argtype, split), argnum)
+if argtype != "va_list":
+wrap += " = { .val = %s };\n" % argname
+else:
+# va_list objects are ... peculiar.
+wrap += ";\n" + "\t\t" + "va_copy(parm%d.val, %s);\n" % (argnum, argname)
+valist_args.append("parm%d.val" % argnum)
+if split:
+mrsh_args.append("parm%d.split.lo" % argnum)
+mrsh_args.append("parm%d.split.hi" % argnum)
+else:
+mrsh_args.append("parm%d.x" % argnum)
+if ret64:
+mrsh_args.append("(uintptr_t)&ret64")
if len(mrsh_args) > 6:
wrap += "\t\t" + "uintptr_t more[] = {\n"
@@ -200,21 +187,23 @@ def wrapper_defs(func_name, func_type, args):
% (len(mrsh_args),
", ".join(mrsh_args + [syscall_id])))
# Coverity does not understand the syscall mechanism
# and will already complain when any function argument
# is not exactly the size of uintptr_t. So tell Coverity
# to ignore this particular rule here.
wrap += "\t\t/* coverity[OVERRUN] */\n"
if ret64:
wrap += "\t\t" + "(void)%s;\n" % invoke
wrap += "\t\t" + "return (%s)ret64;\n" % func_type
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return (%s) ret64;\n" % func_type
elif func_type == "void":
wrap += "\t\t" + "%s;\n" % invoke
wrap += "\t\t" + "return;\n"
invoke = "\t\t" + "(void) %s;\n" % invoke
retcode = "\t\t" + "return;\n"
elif valist_args:
invoke = "\t\t" + "%s retval = %s;\n" % (func_type, invoke)
retcode = "\t\t" + "return retval;\n"
else:
wrap += "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
invoke = "\t\t" + "return (%s) %s;\n" % (func_type, invoke)
retcode = ""
wrap += invoke
for argname in valist_args:
wrap += "\t\t" + "va_end(%s);\n" % argname
wrap += retcode
wrap += "\t" + "}\n"
wrap += "#endif\n"
@@ -244,16 +233,11 @@ def marshall_defs(func_name, func_type, args):
mrsh_name = "z_mrsh_" + func_name
nmrsh = 0 # number of marshalled uintptr_t parameters
vrfy_parms = [] # list of (arg_num, mrsh_or_parm_num, bool_is_split)
split_parms = [] # list of a (arg_num, mrsh_num) for each split
for i, (argtype, _) in enumerate(args):
if need_split(argtype):
vrfy_parms.append((i, len(split_parms), True))
split_parms.append((i, nmrsh))
nmrsh += 2
else:
vrfy_parms.append((i, nmrsh, False))
nmrsh += 1
vrfy_parms = [] # list of (argtype, bool_is_split)
for (argtype, _) in args:
split = need_split(argtype)
vrfy_parms.append((argtype, split))
nmrsh += 2 if split else 1
# Final argument for a 64 bit return value?
if need_split(func_type):
@@ -275,25 +259,22 @@ def marshall_defs(func_name, func_type, args):
if nmrsh > 6:
mrsh += ("\tZ_OOPS(Z_SYSCALL_MEMORY_READ(more, "
+ str(nmrsh - 6) + " * sizeof(uintptr_t)));\n")
+ str(nmrsh - 5) + " * sizeof(uintptr_t)));\n")
for i, split_rec in enumerate(split_parms):
arg_num, mrsh_num = split_rec
arg_type = args[arg_num][0]
mrsh += "\t%s parm%d;\n" % (union_decl(arg_type), i)
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(mrsh_num,
nmrsh))
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(mrsh_num + 1,
nmrsh))
# Finally, invoke the verify function
out_args = []
for i, argn, is_split in vrfy_parms:
if is_split:
out_args.append("parm%d.val" % argn)
argnum = 0
for i, (argtype, split) in enumerate(vrfy_parms):
mrsh += "\t%s parm%d;\n" % (union_decl(argtype, split), i)
if split:
mrsh += "\t" + "parm%d.split.lo = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
mrsh += "\t" + "parm%d.split.hi = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
else:
out_args.append("*(%s*)&%s" % (args[i][0], mrsh_rval(argn, nmrsh)))
mrsh += "\t" + "parm%d.x = %s;\n" % (i, mrsh_rval(argnum, nmrsh))
argnum += 1
vrfy_call = "z_vrfy_%s(%s)\n" % (func_name, ", ".join(out_args))
# Finally, invoke the verify function
out_args = ", ".join(["parm%d.val" % i for i in range(len(args))])
vrfy_call = "z_vrfy_%s(%s)" % (func_name, out_args)
if func_type == "void":
mrsh += "\t" + "%s;\n" % vrfy_call
@@ -436,19 +417,10 @@ def main():
mrsh_fn = os.path.join(args.base_output, fn + "_mrsh.c")
with open(mrsh_fn, "w") as fp:
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic push\n")
fp.write("#endif\n")
fp.write("#ifdef __GNUC__\n")
fp.write("#pragma GCC diagnostic ignored \"-Wstrict-aliasing\"\n")
fp.write("#endif\n")
fp.write("/* auto-generated by gen_syscalls.py, don't edit */\n\n")
fp.write(mrsh_includes[fn] + "\n")
fp.write("\n")
fp.write(mrsh_defs[fn] + "\n")
fp.write("#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)\n")
fp.write("#pragma GCC diagnostic pop\n")
fp.write("#endif\n")
if __name__ == "__main__":
main()

View File

@@ -2017,21 +2017,17 @@ class CMake():
def run_cmake(self, args=[]):
if self.warnings_as_errors:
ldflags = "-Wl,--fatal-warnings"
cflags = "-Werror"
aflags = "-Wa,--fatal-warnings"
warnings_as_errors = 'y'
gen_defines_args = "--edtlib-Werror"
else:
ldflags = cflags = aflags = ""
warnings_as_errors = 'n'
gen_defines_args = ""
logger.debug("Running cmake on %s for %s" % (self.source_dir, self.platform.name))
cmake_args = [
f'-B{self.build_dir}',
f'-S{self.source_dir}',
f'-DEXTRA_CFLAGS="{cflags}"',
f'-DEXTRA_AFLAGS="{aflags}"',
f'-DEXTRA_LDFLAGS="{ldflags}"',
f'-DCONFIG_COMPILER_WARNINGS_AS_ERRORS={warnings_as_errors}',
f'-DEXTRA_GEN_DEFINES_ARGS={gen_defines_args}',
f'-G{self.generator}'
]

View File

@@ -1,7 +1,7 @@
# DOC: used to generate docs
breathe>=4.30
sphinx~=4.0
sphinx~=5.0.2
sphinx_rtd_theme~=1.0
sphinx-tabs
sphinxcontrib-svg2pdfconverter

View File

@@ -1181,6 +1181,7 @@ static inline int isr_rx_pdu(struct lll_scan *lll, struct pdu_adv *pdu_adv_rx,
/* Active scanner */
} else if (((pdu_adv_rx->type == PDU_ADV_TYPE_ADV_IND) ||
(pdu_adv_rx->type == PDU_ADV_TYPE_SCAN_IND)) &&
(pdu_adv_rx->len >= offsetof(struct pdu_adv_adv_ind, data)) &&
(pdu_adv_rx->len <= sizeof(struct pdu_adv_adv_ind)) &&
lll->type &&
#if defined(CONFIG_BT_CENTRAL)
@@ -1274,6 +1275,7 @@ static inline int isr_rx_pdu(struct lll_scan *lll, struct pdu_adv *pdu_adv_rx,
else if (((((pdu_adv_rx->type == PDU_ADV_TYPE_ADV_IND) ||
(pdu_adv_rx->type == PDU_ADV_TYPE_NONCONN_IND) ||
(pdu_adv_rx->type == PDU_ADV_TYPE_SCAN_IND)) &&
(pdu_adv_rx->len >= offsetof(struct pdu_adv_adv_ind, data)) &&
(pdu_adv_rx->len <= sizeof(struct pdu_adv_adv_ind))) ||
((pdu_adv_rx->type == PDU_ADV_TYPE_DIRECT_IND) &&
(pdu_adv_rx->len == sizeof(struct pdu_adv_direct_ind)) &&
@@ -1287,6 +1289,7 @@ static inline int isr_rx_pdu(struct lll_scan *lll, struct pdu_adv *pdu_adv_rx,
pdu_adv_rx, rl_idx)) ||
#endif /* CONFIG_BT_CTLR_ADV_EXT */
((pdu_adv_rx->type == PDU_ADV_TYPE_SCAN_RSP) &&
(pdu_adv_rx->len >= offsetof(struct pdu_adv_scan_rsp, data)) &&
(pdu_adv_rx->len <= sizeof(struct pdu_adv_scan_rsp)) &&
(lll->state != 0U) &&
isr_scan_rsp_adva_matches(pdu_adv_rx))) &&
@@ -1334,6 +1337,7 @@ static inline bool isr_scan_init_check(struct lll_scan *lll,
lll_scan_adva_check(lll, pdu->tx_addr, pdu->adv_ind.addr,
rl_idx)) &&
(((pdu->type == PDU_ADV_TYPE_ADV_IND) &&
(pdu->len >= offsetof(struct pdu_adv_adv_ind, data)) &&
(pdu->len <= sizeof(struct pdu_adv_adv_ind))) ||
((pdu->type == PDU_ADV_TYPE_DIRECT_IND) &&
(pdu->len == sizeof(struct pdu_adv_direct_ind)) &&

View File

@@ -41,6 +41,11 @@
struct tx_meta {
struct bt_conn_tx *tx;
/* This flag indicates if the current buffer has already been partially
* sent to the controller (i.e., the next fragments should be sent as
* continuations).
*/
bool is_cont;
};
#define tx_data(buf) ((struct tx_meta *)net_buf_user_data(buf))
@@ -396,6 +401,8 @@ int bt_conn_send_cb(struct bt_conn *conn, struct net_buf *buf,
tx_data(buf)->tx = NULL;
}
tx_data(buf)->is_cont = false;
net_buf_put(&conn->tx_queue, buf);
return 0;
}
@@ -464,25 +471,41 @@ static int send_iso(struct bt_conn *conn, struct net_buf *buf, uint8_t flags)
return bt_send(buf);
}
static bool send_frag(struct bt_conn *conn, struct net_buf *buf, uint8_t flags,
bool always_consume)
static inline uint16_t conn_mtu(struct bt_conn *conn)
{
#if defined(CONFIG_BT_BREDR)
if (conn->type == BT_CONN_TYPE_BR || !bt_dev.le.acl_mtu) {
return bt_dev.br.mtu;
}
#endif /* CONFIG_BT_BREDR */
#if defined(CONFIG_BT_ISO)
if (conn->type == BT_CONN_TYPE_ISO && bt_dev.le.iso_mtu) {
return bt_dev.le.iso_mtu;
}
#endif /* CONFIG_BT_ISO */
#if defined(CONFIG_BT_CONN)
return bt_dev.le.acl_mtu;
#else
return 0;
#endif /* CONFIG_BT_CONN */
}
static int do_send_frag(struct bt_conn *conn, struct net_buf *buf, uint8_t flags)
{
struct bt_conn_tx *tx = tx_data(buf)->tx;
uint32_t *pending_no_cb;
uint32_t *pending_no_cb = NULL;
unsigned int key;
int err = 0;
BT_DBG("conn %p buf %p len %u flags 0x%02x", conn, buf, buf->len,
flags);
/* Wait until the controller can accept ACL packets */
k_sem_take(bt_conn_get_pkts(conn), K_FOREVER);
/* Check for disconnection while waiting for pkts_sem */
if (conn->state != BT_CONN_CONNECTED) {
err = -ENOTCONN;
goto fail;
}
BT_DBG("conn %p buf %p len %u flags 0x%02x", conn, buf, buf->len,
flags);
/* Add to pending, it must be done before bt_buf_set_type */
key = irq_lock();
if (tx) {
@@ -520,46 +543,61 @@ static bool send_frag(struct bt_conn *conn, struct net_buf *buf, uint8_t flags,
(*pending_no_cb)--;
}
irq_unlock(key);
/* We don't want to end up in a situation where send_acl/iso
* returns the same error code as when we don't get a buffer in
* time.
*/
err = -EIO;
goto fail;
}
return true;
return 0;
fail:
/* If we get here, something has seriously gone wrong:
* We also need to destroy the `parent` buf.
*/
k_sem_give(bt_conn_get_pkts(conn));
if (tx) {
tx_free(tx);
}
if (always_consume) {
net_buf_unref(buf);
}
return false;
return err;
}
static inline uint16_t conn_mtu(struct bt_conn *conn)
static int send_frag(struct bt_conn *conn,
struct net_buf *buf, struct net_buf *frag,
uint8_t flags)
{
#if defined(CONFIG_BT_BREDR)
if (conn->type == BT_CONN_TYPE_BR || !bt_dev.le.acl_mtu) {
return bt_dev.br.mtu;
/* Check if the controller can accept ACL packets */
if (k_sem_take(bt_conn_get_pkts(conn), K_NO_WAIT)) {
BT_DBG("no controller bufs");
return -ENOBUFS;
}
#endif /* CONFIG_BT_BREDR */
#if defined(CONFIG_BT_ISO)
if (conn->type == BT_CONN_TYPE_ISO && bt_dev.le.iso_mtu) {
return bt_dev.le.iso_mtu;
/* Add the data to the buffer */
if (frag) {
uint16_t frag_len = MIN(conn_mtu(conn), net_buf_tailroom(frag));
net_buf_add_mem(frag, buf->data, frag_len);
net_buf_pull(buf, frag_len);
} else {
/* De-queue the buffer now that we know we can send it.
* Only applies if the buffer to be sent is the original buffer,
* and not one of its fragments.
* This buffer was fetched from the FIFO using a peek operation.
*/
buf = net_buf_get(&conn->tx_queue, K_NO_WAIT);
frag = buf;
}
#endif /* CONFIG_BT_ISO */
#if defined(CONFIG_BT_CONN)
return bt_dev.le.acl_mtu;
#else
return 0;
#endif /* CONFIG_BT_CONN */
return do_send_frag(conn, frag, flags);
}
static struct net_buf *create_frag(struct bt_conn *conn, struct net_buf *buf)
{
struct net_buf *frag;
uint16_t frag_len;
switch (conn->type) {
#if defined(CONFIG_BT_ISO)
@@ -583,52 +621,55 @@ static struct net_buf *create_frag(struct bt_conn *conn, struct net_buf *buf)
/* Fragments never have a TX completion callback */
tx_data(frag)->tx = NULL;
frag_len = MIN(conn_mtu(conn), net_buf_tailroom(frag));
net_buf_add_mem(frag, buf->data, frag_len);
net_buf_pull(buf, frag_len);
tx_data(frag)->is_cont = false;
return frag;
}
static bool send_buf(struct bt_conn *conn, struct net_buf *buf)
static int send_buf(struct bt_conn *conn, struct net_buf *buf)
{
struct net_buf *frag;
uint8_t flags;
int err;
BT_DBG("conn %p buf %p len %u", conn, buf, buf->len);
/* Send directly if the packet fits the ACL MTU */
if (buf->len <= conn_mtu(conn)) {
return send_frag(conn, buf, FRAG_SINGLE, false);
}
/* Create & enqueue first fragment */
frag = create_frag(conn, buf);
if (!frag) {
return false;
}
if (!send_frag(conn, frag, FRAG_START, true)) {
return false;
if (buf->len <= conn_mtu(conn) && !tx_data(buf)->is_cont) {
BT_DBG("send single");
return send_frag(conn, buf, NULL, FRAG_SINGLE);
}
BT_DBG("start fragmenting");
/*
* Send the fragments. For the last one simply use the original
* buffer (which works since we've used net_buf_pull on it.
* buffer (which works since we've used net_buf_pull on it).
*/
flags = FRAG_START;
if (tx_data(buf)->is_cont) {
flags = FRAG_CONT;
}
while (buf->len > conn_mtu(conn)) {
frag = create_frag(conn, buf);
if (!frag) {
return false;
return -ENOMEM;
}
if (!send_frag(conn, frag, FRAG_CONT, true)) {
return false;
err = send_frag(conn, buf, frag, flags);
if (err) {
BT_DBG("%p failed, mark as existing frag", buf);
tx_data(buf)->is_cont = flags != FRAG_START;
net_buf_unref(frag);
return err;
}
flags = FRAG_CONT;
}
return send_frag(conn, buf, FRAG_END, false);
BT_DBG("last frag");
tx_data(buf)->is_cont = true;
return send_frag(conn, buf, NULL, FRAG_END);
}
static struct k_poll_signal conn_change =
@@ -674,10 +715,26 @@ static int conn_prepare_events(struct bt_conn *conn,
BT_DBG("Adding conn %p to poll list", conn);
k_poll_event_init(&events[0],
K_POLL_TYPE_FIFO_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&conn->tx_queue);
bool buffers_available = k_sem_count_get(bt_conn_get_pkts(conn)) > 0;
bool packets_waiting = !k_fifo_is_empty(&conn->tx_queue);
if (packets_waiting && !buffers_available) {
/* Only resume sending when the controller has buffer space
* available for this connection.
*/
BT_DBG("wait on ctlr buffers");
k_poll_event_init(&events[0],
K_POLL_TYPE_SEM_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
bt_conn_get_pkts(conn));
} else {
/* Wait until there is more data to send. */
BT_DBG("wait on host fifo");
k_poll_event_init(&events[0],
K_POLL_TYPE_FIFO_DATA_AVAILABLE,
K_POLL_MODE_NOTIFY_ONLY,
&conn->tx_queue);
}
events[0].tag = BT_EVENT_CONN_TX_QUEUE;
return 0;
@@ -720,6 +777,7 @@ int bt_conn_prepare_events(struct k_poll_event events[])
void bt_conn_process_tx(struct bt_conn *conn)
{
struct net_buf *buf;
int err;
BT_DBG("conn %p", conn);
@@ -730,10 +788,26 @@ void bt_conn_process_tx(struct bt_conn *conn)
return;
}
/* Get next ACL packet for connection */
buf = net_buf_get(&conn->tx_queue, K_NO_WAIT);
/* Get next ACL packet for connection. The buffer will only get dequeued
* if there is a free controller buffer to put it in.
*
* Important: no operations should be done on `buf` until it is properly
* dequeued from the FIFO, using the `net_buf_get()` API.
*/
buf = k_fifo_peek_head(&conn->tx_queue);
BT_ASSERT(buf);
if (!send_buf(conn, buf)) {
/* Since we used `peek`, the queue still owns the reference to the
* buffer, so we need to take an explicit additional reference here.
*/
buf = net_buf_ref(buf);
err = send_buf(conn, buf);
net_buf_unref(buf);
if (err == -EIO) {
tx_data(buf)->tx = NULL;
/* destroy the buffer */
net_buf_unref(buf);
}
}

View File

@@ -2372,6 +2372,13 @@ static void process_events(struct k_poll_event *ev, int count)
switch (ev->state) {
case K_POLL_STATE_SIGNALED:
break;
case K_POLL_STATE_SEM_AVAILABLE:
/* After this fn is exec'd, `bt_conn_prepare_events()`
* will be called once again, and this time buffers will
* be available, so the FIFO will be added to the poll
* list instead of the ctlr buffers semaphore.
*/
break;
case K_POLL_STATE_FIFO_DATA_AVAILABLE:
if (ev->tag == BT_EVENT_CMD_TX) {
send_cmd();
@@ -2431,6 +2438,7 @@ static void hci_tx_thread(void *p1, void *p2, void *p3)
events[0].state = K_POLL_STATE_NOT_READY;
ev_count = 1;
/* This adds the FIFO per-connection */
if (IS_ENABLED(CONFIG_BT_CONN) || IS_ENABLED(CONFIG_BT_ISO)) {
ev_count += bt_conn_prepare_events(&events[1]);
}
@@ -2503,13 +2511,15 @@ static void le_read_buffer_size_complete(struct net_buf *buf)
BT_DBG("status 0x%02x", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!bt_dev.le.acl_mtu) {
uint16_t acl_mtu = sys_le16_to_cpu(rp->le_max_len);
if (!acl_mtu || !rp->le_max_num) {
return;
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num,
bt_dev.le.acl_mtu);
bt_dev.le.acl_mtu = acl_mtu;
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->le_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->le_max_num, rp->le_max_num);
#endif /* CONFIG_BT_CONN */
@@ -2523,25 +2533,26 @@ static void read_buffer_size_v2_complete(struct net_buf *buf)
BT_DBG("status %u", rp->status);
#if defined(CONFIG_BT_CONN)
bt_dev.le.acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (!bt_dev.le.acl_mtu) {
return;
uint16_t acl_mtu = sys_le16_to_cpu(rp->acl_max_len);
if (acl_mtu && rp->acl_max_num) {
bt_dev.le.acl_mtu = acl_mtu;
LOG_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num, bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
}
BT_DBG("ACL LE buffers: pkts %u mtu %u", rp->acl_max_num,
bt_dev.le.acl_mtu);
k_sem_init(&bt_dev.le.acl_pkts, rp->acl_max_num, rp->acl_max_num);
#endif /* CONFIG_BT_CONN */
bt_dev.le.iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!bt_dev.le.iso_mtu) {
uint16_t iso_mtu = sys_le16_to_cpu(rp->iso_max_len);
if (!iso_mtu || !rp->iso_max_num) {
BT_ERR("ISO buffer size not set");
return;
}
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num,
bt_dev.le.iso_mtu);
bt_dev.le.iso_mtu = iso_mtu;
BT_DBG("ISO buffers: pkts %u mtu %u", rp->iso_max_num, bt_dev.le.iso_mtu);
k_sem_init(&bt_dev.le.iso_pkts, rp->iso_max_num, rp->iso_max_num);
#endif /* CONFIG_BT_ISO */
@@ -2810,6 +2821,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
read_buffer_size_v2_complete(rsp);
net_buf_unref(rsp);
@@ -2823,6 +2835,7 @@ static int le_init_iso(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
@@ -2866,7 +2879,9 @@ static int le_init(void)
if (err) {
return err;
}
le_read_buffer_size_complete(rsp);
net_buf_unref(rsp);
}

View File

@@ -873,6 +873,12 @@ static void l2cap_chan_tx_process(struct k_work *work)
if (sent < 0) {
if (sent == -EAGAIN) {
ch->tx_buf = buf;
/* If we don't reschedule, and the app doesn't nudge l2cap (e.g. by
* sending another SDU), the channel will be stuck in limbo. To
* prevent this, we attempt to re-schedule the work item for every
* channel on every connection when an SDU has successfully been
* sent.
*/
} else {
net_buf_unref(buf);
}
@@ -1693,13 +1699,20 @@ static void le_disconn_rsp(struct bt_l2cap *l2cap, uint8_t ident,
bt_l2cap_chan_del(&chan->chan);
}
static inline struct net_buf *l2cap_alloc_seg(struct net_buf *buf)
static inline struct net_buf *l2cap_alloc_seg(struct net_buf *buf, struct bt_l2cap_le_chan *ch)
{
struct net_buf_pool *pool = net_buf_pool_get(buf->pool_id);
struct net_buf *seg;
/* Try to use original pool if possible */
seg = net_buf_alloc(pool, K_NO_WAIT);
/* Use the dedicated segment callback if registered */
if (ch->chan.ops->alloc_seg) {
seg = ch->chan.ops->alloc_seg(&ch->chan);
__ASSERT_NO_MSG(seg);
} else {
/* Try to use original pool if possible */
seg = net_buf_alloc(pool, K_NO_WAIT);
}
if (seg) {
net_buf_reserve(seg, BT_L2CAP_CHAN_SEND_RESERVE);
return seg;
@@ -1736,7 +1749,8 @@ static struct net_buf *l2cap_chan_create_seg(struct bt_l2cap_le_chan *ch,
}
segment:
seg = l2cap_alloc_seg(buf);
seg = l2cap_alloc_seg(buf, ch);
if (!seg) {
return NULL;
}
@@ -1767,6 +1781,17 @@ static void l2cap_chan_tx_resume(struct bt_l2cap_le_chan *ch)
k_work_submit(&ch->tx_work);
}
#if defined(CONFIG_BT_L2CAP_DYNAMIC_CHANNEL)
static void resume_all_channels(struct bt_conn *conn, void *data)
{
struct bt_l2cap_chan *chan;
SYS_SLIST_FOR_EACH_CONTAINER(&conn->channels, chan, node) {
l2cap_chan_tx_resume(BT_L2CAP_LE_CHAN(chan));
}
}
#endif
static void l2cap_chan_sdu_sent(struct bt_conn *conn, void *user_data)
{
uint16_t cid = POINTER_TO_UINT(user_data);
@@ -1784,7 +1809,15 @@ static void l2cap_chan_sdu_sent(struct bt_conn *conn, void *user_data)
chan->ops->sent(chan);
}
/* Resume the current channel */
l2cap_chan_tx_resume(BT_L2CAP_LE_CHAN(chan));
if (IS_ENABLED(CONFIG_BT_L2CAP_DYNAMIC_CHANNEL)) {
/* Resume all other channels in case one might be stuck.
* The current channel has already been given priority.
*/
bt_conn_foreach(BT_CONN_TYPE_LE, resume_all_channels, NULL);
}
}
static void l2cap_chan_seg_sent(struct bt_conn *conn, void *user_data)
@@ -1872,12 +1905,12 @@ static int l2cap_chan_le_send(struct bt_l2cap_le_chan *ch,
BT_WARN("Unable to send seg %d", err);
atomic_inc(&ch->tx.credits);
/* If the segment is not the original buffer release it since it
* won't be needed anymore.
/* The host takes ownership of the reference in seg when
* bt_l2cap_send_cb is successful. The call returned an error,
* so we must get rid of the reference that was taken in
* l2cap_chan_create_seg.
*/
if (seg != buf) {
net_buf_unref(seg);
}
net_buf_unref(seg);
if (err == -ENOBUFS) {
/* Restore state since segment could not be sent */
@@ -2142,12 +2175,19 @@ static void l2cap_chan_send_credits(struct bt_l2cap_le_chan *chan,
struct net_buf *buf, uint16_t credits)
{
struct bt_l2cap_le_credits *ev;
uint16_t old_credits;
/* Cap the number of credits given */
if (credits > chan->rx.init_credits) {
credits = chan->rx.init_credits;
}
/* Don't send back more than the initial amount. */
old_credits = atomic_get(&chan->rx.credits);
if (credits + old_credits > chan->rx.init_credits) {
credits = chan->rx.init_credits - old_credits;
}
buf = l2cap_create_le_sig_pdu(buf, BT_L2CAP_LE_CREDITS, get_ident(),
sizeof(*ev));
if (!buf) {
@@ -2180,6 +2220,8 @@ static void l2cap_chan_update_credits(struct bt_l2cap_le_chan *chan,
credits = ((chan->_sdu_len - net_buf_frags_len(buf)) +
(chan->rx.mps - 1)) / chan->rx.mps;
BT_DBG("cred %d old %d", credits, (int)old_credits);
if (credits < old_credits) {
return;
}
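A worked instance of the credit clamp above, with hypothetical numbers:

/* Suppose rx.init_credits = 10 and old_credits = 7. A request to
 * return 5 credits trips the check (5 + 7 > 10) and is clipped to
 * 10 - 7 = 3, so the peer can never hold more than the initial amount.
 */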

View File

@@ -124,13 +124,22 @@ static void purge_buffers(sys_slist_t *list)
buf = (void *)sys_slist_get_not_empty(list);
buf->frags = NULL;
buf->flags &= ~NET_BUF_FRAGS;
net_buf_unref(buf);
}
}
static void purge_seg_buffers(struct net_buf *buf)
{
/* A fragments head always has 2 references: one taken when it is
 * allocated, and one taken when it becomes the fragments head.
 */
net_buf_unref(buf);
do {
buf = net_buf_frag_del(NULL, buf);
} while (buf != NULL);
}
/* Intentionally start a little bit late into the ReceiveWindow when
* it's large enough. This may improve reliability with some platforms,
* like the PTS, where the receiver might not have sufficiently compensated
@@ -166,8 +175,10 @@ static void friend_clear(struct bt_mesh_friend *frnd)
for (i = 0; i < ARRAY_SIZE(frnd->seg); i++) {
struct bt_mesh_friend_seg *seg = &frnd->seg[i];
purge_buffers(&seg->queue);
seg->seg_count = 0U;
if (seg->buf) {
purge_seg_buffers(seg->buf);
seg->seg_count = 0U;
}
}
STRUCT_SECTION_FOREACH(bt_mesh_friend_cb, cb) {
@@ -1053,7 +1064,7 @@ init_friend:
static bool is_seg(struct bt_mesh_friend_seg *seg, uint16_t src, uint16_t seq_zero)
{
struct net_buf *buf = (void *)sys_slist_peek_head(&seg->queue);
struct net_buf *buf = seg->buf;
struct net_buf_simple_state state;
uint16_t buf_seq_zero;
uint16_t buf_src;
@@ -1086,7 +1097,7 @@ static struct bt_mesh_friend_seg *get_seg(struct bt_mesh_friend *frnd,
return seg;
}
if (!unassigned && !sys_slist_peek_head(&seg->queue)) {
if (!unassigned && !seg->buf) {
unassigned = seg;
}
}
@@ -1121,16 +1132,13 @@ static void enqueue_friend_pdu(struct bt_mesh_friend *frnd,
return;
}
net_buf_slist_put(&seg->queue, buf);
seg->buf = net_buf_frag_add(seg->buf, buf);
if (type == BT_MESH_FRIEND_PDU_COMPLETE) {
sys_slist_merge_slist(&frnd->queue, &seg->queue);
net_buf_slist_put(&frnd->queue, seg->buf);
frnd->queue_size += seg->seg_count;
seg->seg_count = 0U;
} else {
/* Mark the buffer as having more to come after it */
buf->flags |= NET_BUF_FRAGS;
}
}
@@ -1244,6 +1252,15 @@ static void friend_timeout(struct k_work *work)
return;
}
/* Put the next segment into the friend queue. */
if (frnd->last != net_buf_frag_last(frnd->last)) {
struct net_buf *next;
next = net_buf_frag_del(NULL, frnd->last);
net_buf_frag_add(NULL, next);
sys_slist_prepend(&frnd->queue, &next->node);
}
md = (uint8_t)(sys_slist_peek_head(&frnd->queue) != NULL);
update_overwrite(frnd->last, md);
@@ -1252,10 +1269,6 @@ static void friend_timeout(struct k_work *work)
return;
}
/* Clear the flag we use for segment tracking */
frnd->last->flags &= ~NET_BUF_FRAGS;
frnd->last->frags = NULL;
BT_DBG("Sending buf %p from Friend Queue of LPN 0x%04x",
frnd->last, frnd->lpn);
frnd->queue_size--;
@@ -1330,16 +1343,11 @@ int bt_mesh_friend_init(void)
for (i = 0; i < ARRAY_SIZE(bt_mesh.frnd); i++) {
struct bt_mesh_friend *frnd = &bt_mesh.frnd[i];
int j;
sys_slist_init(&frnd->queue);
k_work_init_delayable(&frnd->timer, friend_timeout);
k_work_init_delayable(&frnd->clear.timer, clear_timeout);
for (j = 0; j < ARRAY_SIZE(frnd->seg); j++) {
sys_slist_init(&frnd->seg[j].queue);
}
}
return 0;
@@ -1635,11 +1643,16 @@ static bool friend_queue_prepare_space(struct bt_mesh_friend *frnd, uint16_t add
frnd->queue_size--;
avail_space++;
pending_segments = (buf->flags & NET_BUF_FRAGS);
if (buf != net_buf_frag_last(buf)) {
struct net_buf *next;
/* Make sure old slist entry state doesn't remain */
buf->frags = NULL;
buf->flags &= ~NET_BUF_FRAGS;
next = net_buf_frag_del(NULL, buf);
net_buf_frag_add(NULL, next);
sys_slist_prepend(&frnd->queue, &next->node);
pending_segments = true;
}
net_buf_unref(buf);
}
@@ -1762,7 +1775,7 @@ void bt_mesh_friend_clear_incomplete(struct bt_mesh_subnet *sub, uint16_t src,
BT_WARN("Clearing incomplete segments for 0x%04x", src);
purge_buffers(&seg->queue);
purge_seg_buffers(seg->buf);
seg->seg_count = 0U;
break;
}

View File

@@ -67,7 +67,10 @@ struct bt_mesh_friend {
struct k_work_delayable timer;
struct bt_mesh_friend_seg {
sys_slist_t queue;
/* First received segment of a segmented message. The remaining
 * segments are added as net_buf fragments.
 */
struct net_buf *buf;
/* The target number of segments, i.e. not necessarily
* the current number of segments, in the queue. This is

View File

@@ -907,7 +907,11 @@ static inline int send_sf(struct isotp_send_ctx *ctx)
frame.data[index++] = ISOTP_PCI_TYPE_SF | len;
__ASSERT_NO_MSG(len <= ISOTP_CAN_DL - index);
if (len > ISOTP_CAN_DL - index) {
LOG_ERR("SF len does not fit DL");
return -ENOSPC;
}
memcpy(&frame.data[index], data, len);
#ifdef CONFIG_ISOTP_ENABLE_TX_PADDING
@@ -1202,6 +1206,7 @@ static int send(struct isotp_send_ctx *ctx, const struct device *can_dev,
ret = attach_fc_filter(ctx);
if (ret) {
LOG_ERR("Can't attach fc filter: %d", ret);
free_send_ctx(&ctx);
return ret;
}
@@ -1214,6 +1219,7 @@ static int send(struct isotp_send_ctx *ctx, const struct device *can_dev,
ret = send_sf(ctx);
ctx->state = ISOTP_TX_WAIT_FIN;
if (ret) {
free_send_ctx(&ctx);
return ret == CAN_TIMEOUT ?
ISOTP_N_TIMEOUT_A : ISOTP_N_ERROR;
}
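For context on the send_sf() bound above, the arithmetic under assumed classic-CAN values:

/* With ISOTP_CAN_DL == 8, index is 1 after the SF PCI byte is written
 * (2 when extended addressing prepends an address byte), so len may be
 * at most 7 (or 6) bytes. A larger len now fails with -ENOSPC instead
 * of overrunning frame.data[].
 */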

View File

@@ -28,6 +28,11 @@ config DEBUG_COREDUMP_BACKEND_FLASH_PARTITION
Core dump is saved to a flash partition with DTS alias
"coredump-partition".
config DEBUG_COREDUMP_BACKEND_OTHER
bool "Backend subsystem for coredump defined out of tree"
help
Core dump is done via a custom mechanism defined out of tree
endchoice
choice

View File

@@ -513,7 +513,7 @@ static int coredump_flash_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_flash_partition = {
struct coredump_backend_api coredump_backend_flash_partition = {
.start = coredump_flash_backend_start,
.end = coredump_flash_backend_end,
.buffer_output = coredump_flash_backend_buffer_output,

View File

@@ -116,7 +116,7 @@ static int coredump_logging_backend_cmd(enum coredump_cmd_id cmd_id,
}
struct z_coredump_backend_api z_coredump_backend_logging = {
struct coredump_backend_api coredump_backend_logging = {
.start = coredump_logging_backend_start,
.end = coredump_logging_backend_end,
.buffer_output = coredump_logging_backend_buffer_output,

View File

@@ -14,13 +14,17 @@
#include "coredump_internal.h"
#if defined(CONFIG_DEBUG_COREDUMP_BACKEND_LOGGING)
extern struct z_coredump_backend_api z_coredump_backend_logging;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_logging;
extern struct coredump_backend_api coredump_backend_logging;
static struct coredump_backend_api
*backend_api = &coredump_backend_logging;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_FLASH_PARTITION)
extern struct z_coredump_backend_api z_coredump_backend_flash_partition;
static struct z_coredump_backend_api
*backend_api = &z_coredump_backend_flash_partition;
extern struct coredump_backend_api coredump_backend_flash_partition;
static struct coredump_backend_api
*backend_api = &coredump_backend_flash_partition;
#elif defined(CONFIG_DEBUG_COREDUMP_BACKEND_OTHER)
extern struct coredump_backend_api coredump_backend_other;
static struct coredump_backend_api
*backend_api = &coredump_backend_other;
#else
#error "Need to select a coredump backend"
#endif
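A minimal sketch of an out-of-tree backend selected by CONFIG_DEBUG_COREDUMP_BACKEND_OTHER; the exported symbol coredump_backend_other is what the #elif branch above resolves, while the sink behavior here is purely an assumption:

#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <debug/coredump.h>

static void other_start(void)
{
	/* open/prepare the custom sink */
}

static void other_end(void)
{
	/* flush and close the custom sink */
}

static void other_buffer_output(uint8_t *buf, size_t buflen)
{
	/* stream buflen bytes of coredump data to the custom sink */
}

static int other_query(enum coredump_query_id query_id, void *arg)
{
	return -ENOTSUP; /* no queries supported in this sketch */
}

static int other_cmd(enum coredump_cmd_id cmd_id, void *arg)
{
	return -ENOTSUP; /* no commands supported in this sketch */
}

struct coredump_backend_api coredump_backend_other = {
	.start = other_start,
	.end = other_end,
	.buffer_output = other_buffer_output,
	.query = other_query,
	.cmd = other_cmd,
};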

View File

@@ -53,31 +53,6 @@ void z_coredump_start(void);
*/
void z_coredump_end(void);
typedef void (*z_coredump_backend_start_t)(void);
typedef void (*z_coredump_backend_end_t)(void);
typedef void (*z_coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct z_coredump_backend_api {
/* Signal to backend of the start of coredump. */
z_coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
z_coredump_backend_end_t end;
/* Raw buffer output */
z_coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
/**
* @endcond
*/

View File

@@ -65,8 +65,15 @@ static void release_file_handle(size_t handle)
static bool is_mount_point(const char *path)
{
char dir_path[PATH_MAX];
size_t len;
sprintf(dir_path, "%s", path);
len = strlen(path);
if (len >= sizeof(dir_path)) {
return false;
}
memcpy(dir_path, path, len);
dir_path[len] = '\0';
return strcmp(dirname(dir_path), "/") == 0;
}

View File

@@ -1190,8 +1190,11 @@ void z_log_msg2_init(void)
struct log_msg2 *z_log_msg2_alloc(uint32_t wlen)
{
return (struct log_msg2 *)mpsc_pbuf_alloc(&log_buffer, wlen,
K_MSEC(CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS));
return (struct log_msg2 *)mpsc_pbuf_alloc(
&log_buffer, wlen,
(CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS == -1)
? K_FOREVER
: K_MSEC(CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS));
}
void z_log_msg2_commit(struct log_msg2 *msg)

View File

@@ -17,7 +17,7 @@ config NET_BUF_USER_DATA_SIZE
int "Size of user_data available in every network buffer"
default 24 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV6
default 8 if MCUMGR_SMP_UDP && MCUMGR_SMP_UDP_IPV4
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_ISO || MCUMGR_SMP_BT
default 8 if ((BT || NET_TCP2) && 64BIT) || BT_CONN || BT_ISO
default 4
range 4 65535 if BT || NET_TCP2
range 0 65535

View File

@@ -405,7 +405,7 @@ struct net_buf *net_buf_get_debug(struct k_fifo *fifo, k_timeout_t timeout,
struct net_buf *net_buf_get(struct k_fifo *fifo, k_timeout_t timeout)
#endif
{
struct net_buf *buf, *frag;
struct net_buf *buf;
NET_BUF_DBG("%s():%d: fifo %p", func, line, fifo);
@@ -416,18 +416,6 @@ struct net_buf *net_buf_get(struct k_fifo *fifo, k_timeout_t timeout)
NET_BUF_DBG("%s():%d: buf %p fifo %p", func, line, buf, fifo);
/* Get any fragments belonging to this buffer */
for (frag = buf; (frag->flags & NET_BUF_FRAGS); frag = frag->frags) {
frag->frags = k_fifo_get(fifo, K_NO_WAIT);
__ASSERT_NO_MSG(frag->frags);
/* The fragments flag is only for FIFO-internal usage */
frag->flags &= ~NET_BUF_FRAGS;
}
/* Mark the end of the fragment list */
frag->frags = NULL;
return buf;
}
@@ -451,24 +439,19 @@ void net_buf_simple_reserve(struct net_buf_simple *buf, size_t reserve)
void net_buf_slist_put(sys_slist_t *list, struct net_buf *buf)
{
struct net_buf *tail;
unsigned int key;
__ASSERT_NO_MSG(list);
__ASSERT_NO_MSG(buf);
for (tail = buf; tail->frags; tail = tail->frags) {
tail->flags |= NET_BUF_FRAGS;
}
key = irq_lock();
sys_slist_append_list(list, &buf->node, &tail->node);
sys_slist_append(list, &buf->node);
irq_unlock(key);
}
struct net_buf *net_buf_slist_get(sys_slist_t *list)
{
struct net_buf *buf, *frag;
struct net_buf *buf;
unsigned int key;
__ASSERT_NO_MSG(list);
@@ -477,40 +460,15 @@ struct net_buf *net_buf_slist_get(sys_slist_t *list)
buf = (void *)sys_slist_get(list);
irq_unlock(key);
if (!buf) {
return NULL;
}
/* Get any fragments belonging to this buffer */
for (frag = buf; (frag->flags & NET_BUF_FRAGS); frag = frag->frags) {
key = irq_lock();
frag->frags = (void *)sys_slist_get(list);
irq_unlock(key);
__ASSERT_NO_MSG(frag->frags);
/* The fragments flag is only for list-internal usage */
frag->flags &= ~NET_BUF_FRAGS;
}
/* Mark the end of the fragment list */
frag->frags = NULL;
return buf;
}
void net_buf_put(struct k_fifo *fifo, struct net_buf *buf)
{
struct net_buf *tail;
__ASSERT_NO_MSG(fifo);
__ASSERT_NO_MSG(buf);
for (tail = buf; tail->frags; tail = tail->frags) {
tail->flags |= NET_BUF_FRAGS;
}
k_fifo_put_list(fifo, buf, tail);
k_fifo_put(fifo, buf);
}
#if defined(CONFIG_NET_BUF_LOG)
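A sketch of the simplified queueing model after this change, assuming a buffer pool and FIFO defined elsewhere (e.g. via NET_BUF_POOL_DEFINE and K_FIFO_DEFINE); error handling elided:

static void queue_chain_example(struct net_buf_pool *pool, struct k_fifo *fifo)
{
	struct net_buf *head = net_buf_alloc(pool, K_NO_WAIT);
	struct net_buf *frag = net_buf_alloc(pool, K_NO_WAIT);

	/* The chain stays linked through head->frags ... */
	head = net_buf_frag_add(head, frag);

	/* ... so it is queued and dequeued as a single FIFO element,
	 * with no NET_BUF_FRAGS flattening on either side.
	 */
	net_buf_put(fifo, head);
	head = net_buf_get(fifo, K_NO_WAIT);

	net_buf_unref(head); /* releases the head and its fragments */
}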

View File

@@ -210,7 +210,7 @@ int net_ipv4_parse_hdr_options(struct net_pkt *pkt,
}
#endif
enum net_verdict net_ipv4_input(struct net_pkt *pkt)
enum net_verdict net_ipv4_input(struct net_pkt *pkt, bool is_loopback)
{
NET_PKT_DATA_ACCESS_CONTIGUOUS_DEFINE(ipv4_access, struct net_ipv4_hdr);
NET_PKT_DATA_ACCESS_DEFINE(udp_access, struct net_udp_hdr);
@@ -266,6 +266,19 @@ enum net_verdict net_ipv4_input(struct net_pkt *pkt)
net_pkt_update_length(pkt, pkt_len);
}
if (!is_loopback) {
if (net_ipv4_is_addr_loopback(&hdr->dst) ||
net_ipv4_is_addr_loopback(&hdr->src)) {
NET_DBG("DROP: localhost packet");
goto drop;
}
if (net_ipv4_is_my_addr(&hdr->src)) {
NET_DBG("DROP: src addr is %s", "mine");
goto drop;
}
}
if (net_ipv4_is_addr_mcast(&hdr->src)) {
NET_DBG("DROP: src addr is %s", "mcast");
goto drop;

View File

@@ -488,6 +488,11 @@ enum net_verdict net_ipv6_input(struct net_pkt *pkt, bool is_loopback)
NET_DBG("DROP: invalid scope multicast packet");
goto drop;
}
if (net_ipv6_is_my_addr(&hdr->src)) {
NET_DBG("DROP: src addr is %s", "mine");
goto drop;
}
}
/* Check extension headers */

View File

@@ -123,7 +123,7 @@ static inline enum net_verdict process_data(struct net_pkt *pkt,
#endif
#if defined(CONFIG_NET_IPV4)
case 0x40:
return net_ipv4_input(pkt);
return net_ipv4_input(pkt, is_loopback);
#endif
}

View File

@@ -69,12 +69,14 @@ static inline const char *net_context_state(struct net_context *context)
#endif
#if defined(CONFIG_NET_NATIVE)
enum net_verdict net_ipv4_input(struct net_pkt *pkt);
enum net_verdict net_ipv4_input(struct net_pkt *pkt, bool is_loopback);
enum net_verdict net_ipv6_input(struct net_pkt *pkt, bool is_loopback);
#else
static inline enum net_verdict net_ipv4_input(struct net_pkt *pkt)
static inline enum net_verdict net_ipv4_input(struct net_pkt *pkt,
bool is_loopback)
{
ARG_UNUSED(pkt);
ARG_UNUSED(is_loopback);
return NET_CONTINUE;
}

View File

@@ -4214,6 +4214,74 @@ wait_reply:
#endif
}
static bool is_pkt_part_of_slab(const struct k_mem_slab *slab, const char *ptr)
{
size_t last_offset = (slab->num_blocks - 1) * slab->block_size;
size_t ptr_offset;
/* Check if pointer fits into slab buffer area. */
if ((ptr < slab->buffer) || (ptr > slab->buffer + last_offset)) {
return false;
}
/* Check if pointer offset is correct. */
ptr_offset = ptr - slab->buffer;
if (ptr_offset % slab->block_size != 0) {
return false;
}
return true;
}
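A quick numeric check of the membership test above, using a hypothetical slab layout:

/* Hypothetical slab: buffer = 0x1000, block_size = 64, num_blocks = 4,
 * so last_offset = (4 - 1) * 64 = 192 and the valid block heads are
 * 0x1000, 0x1040, 0x1080 and 0x10C0.
 *
 *   ptr 0x1040 -> offset 64,  64 % 64 == 0  -> part of the slab
 *   ptr 0x1060 -> offset 96,  96 % 64 == 32 -> rejected (mid-block)
 *   ptr 0x1100 -> offset 256, > last_offset -> rejected (out of range)
 */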
struct ctx_pkt_slab_info {
const void *ptr;
bool pkt_source_found;
};
static void check_context_pool(struct net_context *context, void *user_data)
{
#if defined(CONFIG_NET_CONTEXT_NET_PKT_POOL)
if (!net_context_is_used(context)) {
return;
}
if (context->tx_slab) {
struct ctx_pkt_slab_info *info = user_data;
struct k_mem_slab *slab = context->tx_slab();
if (is_pkt_part_of_slab(slab, info->ptr)) {
info->pkt_source_found = true;
}
}
#endif /* CONFIG_NET_CONTEXT_NET_PKT_POOL */
}
static bool is_pkt_ptr_valid(const void *ptr)
{
struct k_mem_slab *rx, *tx;
net_pkt_get_info(&rx, &tx, NULL, NULL);
if (is_pkt_part_of_slab(rx, ptr) || is_pkt_part_of_slab(tx, ptr)) {
return true;
}
if (IS_ENABLED(CONFIG_NET_CONTEXT_NET_PKT_POOL)) {
struct ctx_pkt_slab_info info;
info.ptr = ptr;
info.pkt_source_found = false;
net_context_foreach(check_context_pool, &info);
if (info.pkt_source_found) {
return true;
}
}
return false;
}
static struct net_pkt *get_net_pkt(const char *ptr_str)
{
uint8_t buf[sizeof(intptr_t)];
@@ -4289,6 +4357,14 @@ static int cmd_net_pkt(const struct shell *shell, size_t argc, char *argv[])
if (!pkt) {
PR_ERROR("Invalid ptr value (%s). "
"Example: 0x01020304\n", argv[1]);
return -ENOEXEC;
}
if (!is_pkt_ptr_valid(pkt)) {
PR_ERROR("Pointer is not recognized as net_pkt (%s).\n",
argv[1]);
return -ENOEXEC;
}

View File

@@ -205,8 +205,12 @@ again:
if (info) {
ret = info_len + sizeof(hdr);
ret = MIN(max_len, ret);
memcpy(&copy_to[sizeof(hdr)], info, ret);
if (ret > max_len) {
errno = EMSGSIZE;
return -1;
}
memcpy(&copy_to[sizeof(hdr)], info, info_len);
} else {
ret = 0;
}

View File

@@ -140,7 +140,13 @@ static int cmd_kernel_threads(const struct shell *shell,
shell_print(shell, "Scheduler: %u since last call", sys_clock_elapsed());
shell_print(shell, "Threads:");
k_thread_foreach(shell_tdata_dump, (void *)shell);
/*
* Use the unlocked version as the callback itself might call
* arch_irq_unlock.
*/
k_thread_foreach_unlocked(shell_tdata_dump, (void *)shell);
return 0;
}
@@ -184,7 +190,12 @@ static int cmd_kernel_stacks(const struct shell *shell,
ARG_UNUSED(argc);
ARG_UNUSED(argv);
k_thread_foreach(shell_stack_dump, (void *)shell);
/*
* Use the unlocked version as the callback itself might call
* arch_irq_unlock.
*/
k_thread_foreach_unlocked(shell_stack_dump, (void *)shell);
/* Placeholder logic for interrupt stack until we have better
* kernel support, including dumping arch-specific exception-related

View File

@@ -38,6 +38,7 @@
#define CONFIG_MP_NUM_CPUS 1
#define CONFIG_SYS_CLOCK_TICKS_PER_SEC 100
#define CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC 10000000
#define CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS 365
#define ARCH_STACK_PTR_ALIGN 8
/* FIXME: Properly integrate with Zephyr's arch specific code */
#define CONFIG_X86 1

Some files were not shown because too many files have changed in this diff.