Compare commits


237 Commits

Author SHA1 Message Date
Christopher Friedt
6ed0a3557f release: Zephyr 2.7.6
Set version to 2.7.6

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2024-03-01 17:28:52 -05:00
Christopher Friedt
a405f8b6b0 release: add v2.7.6 release notes
List bugfixes and CVEs in v2.7.6 release notes.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2024-03-01 14:24:35 -05:00
Benjamin Cabé
3683fe625b doc: fix broken Sphinx by updating to Sphinx 5.0.2
Sphinx 4.x is way past EOL and due to it not pinning its dependencies,
it's effectively broken. See
https://github.com/sphinx-doc/sphinx/issues/11890. The recommended fix,
although not ideal in the context of an LTS branch, is to update to
Sphinx 5.0.2, which should have minimal impact on how the rendered
documentation looks.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2024-03-01 08:29:26 -05:00
Flavio Ceolin
e9fcfa14e6 syscall: Fix static analysis complaints
Since K_SYSCALL_MEMORY can be called with signed or unsigned size types,
checking whether size >= 0 makes static analysis complain when
size is unsigned.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-01 00:54:04 -05:00
Flavio Ceolin
1d16757282 userspace: Additional checks in K_SYSCALL_MEMORY
This macro needed additional checks before invoking
arch_buffer_validate.

- size cannot be less than 0. Some functions invoke this macro
  with a signed type which will be promoted to unsigned when invoking
  arch_buffer_validate. We need to do an early check.
- We need to check for possible overflow, since a malicious user
  application could use a negative number that would be promoted
  to a big value, causing an integer overflow when adding it
  to the buffer address and leading to invalid checks.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-03-01 00:54:04 -05:00
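A minimal sketch of the two checks described above, with assumed names (syscall_memory_ok() is hypothetical; only arch_buffer_validate() comes from the commit):

```c
/* Hypothetical sketch of the added checks; not the exact K_SYSCALL_MEMORY macro. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Architecture hook that validates a user buffer; 0 means the range is allowed. */
extern int arch_buffer_validate(void *addr, size_t size, int write);

static inline bool syscall_memory_ok(void *ptr, intptr_t size, int write)
{
	/* Early check: a negative size would otherwise be promoted to a huge
	 * unsigned value when passed on to arch_buffer_validate().
	 */
	if (size < 0) {
		return false;
	}

	/* Overflow check: the buffer must not wrap past the top of the
	 * address space, otherwise the range check is meaningless.
	 */
	if ((uintptr_t)size > (UINTPTR_MAX - (uintptr_t)ptr)) {
		return false;
	}

	return arch_buffer_validate(ptr, (size_t)size, write) == 0;
}
```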
Peter Mitsis
eeefd07f68 include: util: Add Z_DETECT_POINTER_OVERFLOW()
The Z_DETECT_POINTER_OVERFLOW() macro is intended to detect whether
or not a buffer spans a region of memory that goes beyond the
highest possible address (thereby overflowing the pointer).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-03-01 00:54:04 -05:00
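The idea can be sketched as follows (a sketch of the concept, not necessarily the exact macro): a buffer of size bytes starting at addr overflows the pointer when size - 1 exceeds the distance from addr to the highest representable address.

```c
/* Sketch only; the real Z_DETECT_POINTER_OVERFLOW() may differ in detail. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True when the last byte of the buffer would lie beyond UINTPTR_MAX. */
#define DETECT_POINTER_OVERFLOW(addr, size)                               \
	(((size) != 0) &&                                                 \
	 (((uintptr_t)(size) - 1U) > (UINTPTR_MAX - (uintptr_t)(addr))))

static inline bool buffer_wraps(const void *addr, size_t size)
{
	return DETECT_POINTER_OVERFLOW(addr, size);
}
```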
Flavio Ceolin
d013132f55 fs: fuse: Avoid possible buffer overflow
Check the path's size before copying it to a local variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 3267bdc4b7)
2024-02-29 12:07:00 -05:00
Benedikt Schmidt
25398f36da shell: modules: do not use k_thread_foreach with shell callbacks
Always use k_thread_foreach_unlocked with callbacks which print
something out to the shell, as they might call arch_irq_unlock.
Fixes #66660.

Signed-off-by: Benedikt Schmidt <benedikt.schmidt@embedded-solutions.at>
(cherry picked from commit 4c731f27c6)
2024-02-27 13:29:34 -05:00
Maxim Adelman
d3c2a2457d kernel shell, stacks shell commands: iterate unlocked on SMP
call k_thread_foreach_unlocked to avoid assertions caused
by calling shell_print while holding a global lock

Signed-off-by: Maxim Adelman <imax@meta.com>
(cherry picked from commit ecf2cb5932)
2024-02-27 13:29:34 -05:00
Adrien Ricciardi
10086910f5 drivers: i2c: i2c_dw: Fixed integer overflow in i2c_dw_data_ask().
The controller can implement a reception FIFO as deep as 256 bytes.
However, the computation made by the driver code to determine how many
bytes can be asked is stored in a signed 8-bit variable called rx_empty.

If the reception FIFO depth is greater than or equal to 128 bytes and the
FIFO is currently empty, the rx_empty value will be 128 (or more), which
wraps to a negative value as the variable is signed.

Thus, the later code checking whether the FIFO is full will run when it
should not, and the i2c_dw_data_ask() function will exit too early.

This hangs the controller in an infinite loop of interrupt storm because
the interrupt flags are never cleared.

Storing rx_empty in a signed 32-bit variable instead of an 8-bit
one solves the issue and is compliant with the controller hardware
specification of a maximum FIFO depth of 256 bytes.

It has been agreed with upstream maintainers to change the type of the
variables tx_empty, rx_empty, cnt, rx_buffer_depth and tx_buffer_depth to
plain int because that is handled most efficiently by the CPU. Using 8-bit
or 16-bit variables had no benefit here.

Signed-off-by: Adrien Ricciardi <aricciardi@baylibre.com>
(cherry picked from commit 4824e405cf)
2024-01-16 16:13:25 -05:00
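A standalone illustration of the failure mode (hypothetical values, not the driver code): once the FIFO depth reaches 128, a signed 8-bit difference wraps negative while a plain int does not.

```c
/* Illustration: why storing the free-entry count in int8_t breaks for
 * FIFO depths >= 128 (typically wraps to -128 on two's-complement CPUs).
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t fifo_depth = 128;  /* hardware allows up to 256 */
	uint16_t fifo_level = 0;    /* FIFO currently empty */

	int8_t rx_empty_8 = (int8_t)(fifo_depth - fifo_level);
	int rx_empty_32 = (int)(fifo_depth - fifo_level);

	/* A "FIFO full" style check such as (rx_empty <= 0) then wrongly
	 * fires for the 8-bit variable even though the FIFO is empty.
	 */
	printf("int8_t: %d, int: %d\n", rx_empty_8, rx_empty_32);
	return 0;
}
```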
Jukka Rissanen
01ad11252c tests: net: ipv6: Adjust the source address of test pkt
We would drop the received packet if the source address is our own
address, so tweak the test to make the source address different.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 155e2149f2)
2024-01-09 12:55:32 -05:00
Jukka Rissanen
652b7f6f83 net: ipv6: Check that received src address is not mine
Drop received packet if the source address is the same as
the device address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 8d3d48e057)
2024-01-09 12:55:32 -05:00
Jukka Rissanen
32748c69b8 net: ipv4: Drop packet if source address is my address
If we receive a packet where the source address is our own
address, then we should drop it.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 19392a6d2b)
2024-01-03 13:13:51 -05:00
Jukka Rissanen
65104bc3cc net: ipv4: Check localhost for incoming packet
If we receive a packet from non localhost interface, then
drop it if either source or destination address is a localhost
address.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 6d41e68352)
2024-01-03 13:13:51 -05:00
Keith Packard
ce4c30fc21 toolchain: Replace GCC_VERSION, CLANG_VERSION and BUILD_ASSERT macros
GCC_VERSION is defined in a few modules, and those headers are often
included first, so replace the one used in zephyr with
TOOLCHAIN_GCC_VERSION. Do the same with CLANG_VERSION, replacing it with
TOOLCHAIN_CLANG_VERSION.

BUILD_ASSERT is also defined in include/toolchain/common.h, which might
get included before gcc.h. We want to use the gcc-specific one instead
of the general one.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit c58c76ef0a)
2023-12-19 04:31:03 -05:00
Nikolay Agishev
5d382fa560 toolchain: Move extra warning options to toolchain abstraction
Move the extra warning options from the generic twister script into
compiler-dependent config files.
The ARCMWDT compiler doesn't support extra warning options such as
"-Wl,--fatal-warnings". To avoid build failures, the
"disable_warnings_as_errors" flag would have to be passed to twister,
which allows all warning messages and makes automated testing useless.

Signed-off-by: Nikolay Agishev <agishev@synopsys.com>
(cherry picked from commit 0dec4cf927)
2023-12-19 04:31:03 -05:00
Jamie McCrae
e677cfd61d cmake: modules: dts: Fix board revision 0 overlay
Fixes an issue whereby, when a board revision is 0 and the overlay file
exists, the overlay would not be included

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2023-12-01 08:28:59 -05:00
Sven Ginka
db1ed25fad drivers: sam dma xdmac: implemented dma device get_status()
The sam xdmac driver did not implement the get_status() function.
With this commit the function is implemented. Fixes #62003

Signed-off-by: Sven Ginka <sven.ginka@gmail.com>
(cherry picked from commit bc695c6df5)
2023-11-16 20:31:29 -05:00
Joshua Crawford
e6f70e97c8 drivers: flash: spi_nor: select largest valid erase operation
The spi_nor erase op selection was based on the alignment of the end of
the region to be erased. This prevented larger erase operations from
being selected in many cases.

Closes #60904

Signed-off-by: Joshua Crawford <joshua.crawford@levno.com>
(cherry picked from commit ea2dd9fc65)
2023-11-16 20:31:16 -05:00
Abram Early
f89298cf0e drivers: can: mcan: Move RF0L and RF1L to line 1
The code is designed to handle RF0L and RF1L in
line 1, but they were being sent to line 0. Because
they were not handled there, the interrupts were never
cleared, which locked up the chip.

Signed-off-by: Abram Early <abram.early@gmail.com>
2023-11-05 07:49:19 -05:00
Henrik Brix Andersen
2e98b1fd8c drivers: can: be consistent in filter_id checks when removing rx filters
Change the CAN controller driver implementations for the
can_remove_rx_filter() API call to be consistent in their
validation of the supplied filter_id.

Fixes: #64398

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2023-10-26 09:34:14 -04:00
Christopher Friedt
13072b4c7b logging: log_core: correct timeout of -1 ms to K_FOREVER
Many releases ago, specifying to block indefinitely in the log
processing thread would do just that.

However, a subtle bug was introduced such that specifying -1
for `CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS` would have the
exact opposite effect than what was intended.

As per Kconfig, a value of -1 should translate to a timeout of
`K_FOREVER`. However, conversion via `K_MSEC(-1)` results in
a `k_timeout_t` that is equal to `K_NO_WAIT` rather than the
intent which is `K_FOREVER`.

Add a dedicated check to ensure that a value of -1 is
correctly interpreted as `K_FOREVER` in `log_core.c`.

For reference, the blocking feature was described in #15196,
added in #16194, and it would appear that the regression
happened in c5f2cdef09.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 137097f5c3)
2023-10-25 22:39:44 -04:00
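A minimal sketch of such a dedicated check (the function name is assumed; the Kconfig symbol is the one named in the commit):

```c
/* Sketch, not the exact log_core.c change: map -1 to K_FOREVER explicitly,
 * because K_MSEC(-1) produces a timeout equal to K_NO_WAIT.
 */
#include <kernel.h>  /* Zephyr 2.7-era include path */

static k_timeout_t log_block_timeout(void)
{
	if (CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS == -1) {
		return K_FOREVER;
	}

	return K_MSEC(CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS);
}
```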
Jukka Rissanen
43c936a5dd net: socket: mgmt: Check buf size in recvfrom()
Return EMSGSIZE if trying to copy too much data into
user supplied buffer.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit 0a16d5c7c3)
(cherry picked from commit b13b4eb38a50ee02d599332eb99752e814340487)
2023-10-12 21:20:39 -04:00
Robert Lubos
acc7cfaadf drivers: ieee802154_nrf5: Add payload length check on TX
In case the upper layer does not follow the convention and the net_pkt
provided to the nRF 15.4 driver has a payload larger than the maximum
payload size of an individual 15.4 frame, the driver would end up with
a buffer overflow.

Fix this by adding an extra payload_len check before attempting to copy
the payload to the internal buffer.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
2023-10-02 14:55:05 -04:00
Fabio Baltieri
f9a56bcfd4 can: rework the table lookup code in can_dlc_to_bytes
Rework the can_dlc_to_bytes table lookup code in a way that allows the
compiler to infer a bound on the resulting output, which fixes the build
warning:

zephyr/drivers/can/can_nxp_s32_canxl.c:757:9: warning:
'__builtin___memcpy_chk' forming offset [16, 71] is out of the bounds
[0, 16] of object 'frame' with type 'struct can_frame' [-Warray-bounds]
 757 | memcpy(frame->data, msg_data.data, can_dlc_to_bytes(frame->dlc));

where the compiler detects that frame->data is 8 bytes long but
can_dlc_to_bytes could return more than that.

Can be reproduced with:

west build -p -b s32z270dc2_rtu1_r52 \
	-T samples/net/sockets/can/sample.net.sockets.can.one_socket

Suggested-by: Martin Jäger <martin@libre.solar>
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2023-09-19 02:32:36 -04:00
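A sketch of a bounded lookup in that spirit (not necessarily the exact rework): clamping the index lets the compiler prove the result never exceeds 64.

```c
/* Sketch: DLC-to-length lookup with a clamped index, so the compiler can
 * bound the returned value (0..64) instead of assuming any uint8_t.
 */
#include <stdint.h>

static inline uint8_t dlc_to_bytes(uint8_t dlc)
{
	static const uint8_t dlc_table[16] = {
		0, 1, 2, 3, 4, 5, 6, 7, 8, 12, 16, 20, 24, 32, 48, 64
	};

	return dlc_table[dlc < 16U ? dlc : 15U];
}
```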
Carles Cufi
4a3b59d47b Bluetooth: controller: Check minimum sizes of adv PDUs
While the maximum sizes were already correctly checked by the code, the
minimum sizes of the PDUs were not. This meant that PDUs smaller than
the minimum required length (typically 6 bytes for AdvA) were
incorrectly forwarded up to the Host.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
(cherry picked from commit 3f0d7012a6)
2023-09-06 18:02:35 -04:00
Thomas Stranger
414d6c91a1 drivers: can: stm32: correct timing_max parameters
The timing_max parameters defined in the stm32 bxcan driver don't match the
register description in the reference manuals.
- sjw has only 2 bits, representing 1 to 4 tq.
- the phase_seg1 and phase_seg2 maximums are one tq higher.

I have checked the following reference manuals and all match:
- RM0090: STM32F405, F415, F407, F417, F427, F437 AND F429
- RM0008: STM32F101, F102, F103, F105, F107 advanced arm-based mcus
- RM0351, RM0394: all STM32L4
- RM0091: all STM32F0 with CAN support

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
(cherry picked from commit cec279b5b6)
2023-08-25 09:44:40 -04:00
Henrik Brix Andersen
2daec8c70c canbus: isotp: convert SF length check from ASSERT to runtime check
Convert the ISO-TP SF length check in send_sf() from __ASSERT() to a
runtime check.

Fixes: #61501

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 1b3d1e01de)
2023-08-25 09:44:26 -04:00
Grant Ramsay
229ca396aa canbus: isotp: Fix context buffer memory leaks
Ensure context buffers are freed when errors occur

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2023-08-08 19:01:38 -04:00
Andrzej Głąbek
06ae95e45c drivers: nrf_rtc_timer: Always set an initial timeout
In the tickless kernel mode, the nRF system timer does not schedule
any timeout on initialization. This can lead to a situation where,
for certain applications, no timeout is scheduled at all (for example,
when an application does not create any threads, it exits `main()`
without any sleeping and only handles interrupts) and in consequence
`sys_clock_announce()` is never called (the nRF system timer calls
this function only from the timeout handler). This in turn causes
uptime to be reported correctly only until the RTC used as the system
timer overflows (which happens after 512 seconds).

Fix this by setting a maximum allowed timeout when initializing
the nRF system timer for the tickless kernel mode.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2023-06-13 12:06:52 +02:00
Théo Battrel
f72d8ffe80 Net: Increase NET_BUF_USER_DATA_SIZE value to 8
Increase `NET_BUF_USER_DATA_SIZE` value to 8 if `BT_CONN` is enabled.
This is necessary because one of the backported commits adds one struct
member into `struct tx_meta`, and that will be stored in the buffer
user data.

On main, this Kconfig option has been deprecated, which is why this change
was not present in the original patch set.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
2023-06-06 12:42:08 -04:00
Pavel Vasilyev
f2eeeda113 bluetooth: mesh: Remove illegal use of NET_BUF_FRAG in friend.c
This commit removes illegal use of NET_BUF_FRAG in friend.c, which is an
internal flag.

Now `struct bt_mesh_friend_seg` keeps a pointer to the first received
segment of a segmented message. The remaining segments are added as
fragments using the net_buf API. The Friend Queue keeps only the head of
the fragments. When one segment (currently the head of the fragments) is
removed from the Friend Queue, the next segment is added to the queue.
The head always has 2 references: one when allocated, another when added
as the fragments head.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit 5d059117fd)
2023-06-06 12:42:08 -04:00
Carles Cufi
916d9ad13b net: buf: Simplify fragment handling
This patch reworks how fragments are handled in the net_buf
infrastructure.

In particular, it removes the union around the node and frags members
in the main net_buf structure. This is done so that both can be used at
the same time, at a cost of 4 bytes per net_buf instance.
This implies that the layout of net_buf instances changes whenever
being inserted into a queue (fifo or lifo) or a linked list (slist).

Until now, this is what happened when enqueueing a net_buf with frags
in a queue or linked list:

1.1 Before enqueueing:

 +--------+      +--------+      +--------+
 |#1  node|\     |#2  node|\     |#3  node|\
 |        | \    |        | \    |        | \
 | frags  |------| frags  |------| frags  |------NULL
 +--------+      +--------+      +--------+

net_buf #1 has 2 fragments, net_bufs #2 and #3. Both the node and frags
pointers (they are the same, since they are unioned) point to the next
fragment.

1.2 After enqueueing:

 +--------+     +--------+     +--------+     +--------+     +--------+
 |q/slist |-----|#1  node|-----|#2  node|-----|#3  node|-----|q/slist |
 |node    |     | *flag  | /   | *flag  | /   |        | /   |node    |
 |        |     | frags  |/    | frags  |/    | frags  |/    |        |
 +--------+     +--------+     +--------+     +--------+     +--------+

When enqueuing a net_buf (in this case #1) that contains fragments, the
current net_buf implementation actually enqueues all the fragments (in
this case #2 and #3) as actual queue/slist items, since node and frags
are one and the same in memory. This makes the enqueuing operation
expensive and it makes it impossible to atomically dequeue. The `*flag`
notation here means that the `flags` member has been set to
`NET_BUF_FRAGS` in order to be able to reconstruct the frags pointers
when dequeuing.

After this patch, the layout changes considerably:

2.1 Before enqueueing:

 +--------+       +--------+       +--------+
 |#1  node|--NULL |#2  node|--NULL |#3  node|--NULL
 |        |       |        |       |        |
 | frags  |-------| frags  |-------| frags  |------NULL
 +--------+       +--------+       +--------+

This is very similar to 1.1, except that now node and frags are
different pointers, so node is just set to NULL.

2.2 After enqueueing:

 +--------+      +--------+      +--------+
 |q/slist |------|#1  node|------|q/slist |
 |node    |      |        |      |node    |
 |        |      | frags  |      |        |
 +--------+      +--------+      +--------+
                     |           +--------+       +--------+
                     |           |#2  node|--NULL |#3  node|--NULL
                     |           |        |       |        |
                     +-----------| frags  |-------| frags  |------NULL
                                 +--------+       +--------+

When enqueuing net_buf #1, now we only enqueue that very item, instead
of enqueuing the frags as well, since now node and frags are separate
pointers. This simplifies the operation and makes it atomic.

Resolves #52718.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
(cherry picked from commit 3d306c181f)
2023-06-06 12:42:08 -04:00
Jonathan Rico
a90a3cc493 Bluetooth: host: update l2cap stress test
Do these things:
- use the proper macros for reserving the SDU header
- make every L2CAP PDU fragment into 3 ACL packets
- set tx data and verify rx matches the pattern
- measure segment pool usage

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 8bc094610c)
2023-06-06 12:42:08 -04:00
Jonathan Rico
fb5845a072 Bluetooth: host: copy fragment data at the last minute
Only copy the data from the 'parent' buf into the segment buf if we
successfully acquired the HCI semaphore (i.e. there are available
controller buffers).

This way we avoid pulling and pushing back the data in the case where
there aren't any controller buffers available.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit ca51439cd1)
2023-06-06 12:42:08 -04:00
Jonathan Rico
c0d6fad199 Bluetooth: host: poll on CTLR buffers instead of host TX queue
When there are no buffers, it doesn't make sense to repeatedly try to
send the host TX queue.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit ef19c64f1b)
2023-06-06 12:42:08 -04:00
Jonathan Rico
60ae4f9351 Bluetooth: host: make HCI fragmentation async
Make the ACL fragmentation asynchronous, freeing the TX thread for
sending commands when the ACL buffers are full.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit c3e5fabbf1)
2023-06-06 12:42:08 -04:00
Jonathan Rico
6ebce3643e Bluetooth: host: l2cap: add alloc_seg callback
This callback allows use-cases where the SDU is much larger than the
l2cap MPS. The stack will then try to allocate using this callback if
specified, and fall back on using the buffer's pool (previous
behavior).

This way one can define two buffer pools, one with a very large buffer
size, and one with a buffer size >= MPS, and the stack will allocate
from that instead.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 77e1a9dcad)
2023-06-06 12:42:08 -04:00
Jonathan Rico
80ab098d64 Bluetooth: host: l2cap: workaround SDU deadlock
See the code comments.

SDUs might enter a state where they will be blocked forever, as a
workaround, we nudge them when another SDU has been sent.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 8e207fefad)
2023-06-06 12:42:08 -04:00
Jonathan Rico
d00c98d585 Bluetooth: host: l2cap: don't send too many credits
There was an edge case where we were sending back too many credits; add
a check so we can't do that anymore.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 3c1ca93fe8)
2023-06-06 12:42:08 -04:00
Jonathan Rico
f290106952 Bluetooth: host: l2cap: release segment in all cases
See code comment

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 1c8fe67a52)
2023-06-06 12:42:08 -04:00
Jonathan Rico
2e0e5e27e8 Bluetooth: host: add l2cap stress-test
This test more or less reproduces #34600.

It has a central that connects to multiple peripherals, opens one l2cap
CoC channel per connection, and transmits a few SDUs largely exceeding
the MPS of the channel.

In this commit, the test doesn't pass, but when it passes (after the
subsequent commits), error and warning messages are expected from the
stack, as this is not the happy path.

We can later debate on whether these particular error messages should
be downgraded to debug.

Signed-off-by: Jonathan Rico <jonathan.rico@nordicsemi.no>
(cherry picked from commit 7a6872d837)
2023-06-06 12:42:08 -04:00
Christopher Friedt
030fa9da45 release: Zephyr 2.7.5
Set version to 2.7.5

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:08:42 -04:00
Christopher Friedt
43370b89c3 release: minor corrections to security release notes
* remove reference to other github project issue
* complete incomplete sentence

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-06-01 08:07:16 -04:00
Flavio Ceolin
15fa28896a release: mbedTLS: Add vulnerabilities info
Add information about vulnerabilities fixed since mbedTLS 2.26.0.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Flavio Ceolin
ce3eb90a83 release: security: Add rel notes for vulnerabilities
Add information about vulnerabilities fixed in 2.7.5 release.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 01:06:49 +09:00
Chris Friedt
ca24cd6c2d release: update v2.7.5 release notes
* add bugfixes to v2.7.5 release

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-27 05:32:14 -04:00
Flavio Ceolin
4fc4dc7b84 release: mbedTLS: Add rel notes for mbedTLS
Release notes for the latest mbedTLS update.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-26 11:10:27 +09:00
Krzysztof Chruscinski
fb24b62dc5 logging: Fix user space crash when runtime filtering is on
Logging module data (including filters) is not accessible from
user space. The macro for creating logs was creating a local
variable with filters before checking whether we are in the user
context. The variable was not used in that case, but creating it
violated access rights, which resulted in a failure.

Remove the variable creation and use the filters directly in the
if clause, but only after checking that we are not in the user
context. With this approach the data is accessed only in
kernel mode.

Cherry-picked with modifications from
4ee59e2cdb.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-05-18 00:27:35 +08:00
Chris Friedt
60e7a97328 release: create outline for v2.7.5 release notes
Create a template for v2.7.5 release notes.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-05-18 00:07:05 +08:00
Flavio Ceolin
a1aa463783 boards: mps2_an521_ns: Remove simulation capability
This board requires TF-M which is not supported by default in the
current Zephyr release. Just remove the simulation capability to
avoid CI failures.

See: https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
29c1e08cf7 west: tf-m: Remove tf-m from Zephyr LTS
Zephyr mbedTLS was updated to 2.28.x, which is an LTS release and
addresses several vulnerabilities affecting 2.26 (the version previously
used on Zephyr LTS).

Unfortunately this mbedTLS version is not compatible with TF-M, and
backporting mbedTLS fixes was not a viable solution. Due to this problem
we are removing the TF-M module from Zephyr's LTS. One can still add it
to this manifest if needed, but this is no longer "officially"
supported.

More information in:
https://github.com/zephyrproject-rtos/zephyr/issues/56071
https://github.com/zephyrproject-rtos/zephyr/pull/54084

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
190f09df52 samples: tfm: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Only build / run these TF-M samples when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
a166290f1a tfm: boards: Add ZEPHYR_TRUSTED_FIRMWARE_M_MODULE dependency
Enable BUILD_WITH_TFM only when TF-M module is available.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
21e0870106 crypto: Bump mbedTLS to 2.28.3
Bump mbedTLS to version 2.28.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Flavio Ceolin
ebe3651f3d tests: mbedtls: Fix GCC warning about test_snprintf
Fix errors like:

inlined from ‘test_mbedtls’ at
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:172:6:
zephyrproject/zephyr/tests/crypto/mbedtls/src/mbedtls.c:96:17: error:
‘test_snprintf’ reading 10 bytes from a region of size 1
[-Werror=stringop-overread]
   96 |                 test_snprintf(1, "", -1) != 0 ||
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~

This happens with GCC >= 11 because the `ret_buf` arguments in some calls are shorter literals.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-17 23:47:58 +08:00
Torsten Rasmussen
1b7c720c7f cmake: prefix local version of return variable
Fixes: #55490
Follow-up: #53124

Prefix local version of the return variable before calling
`zephyr_check_compiler_flag_hardcoded()`.

This ensures that there will never be any naming collision between named
return argument and the variable name used in later functions when
PARENT_SCOPE is used.

The issue #55490 provided description of situation where the double
de-referencing was not working correctly.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 599886a9d3)
2023-05-13 19:23:55 -04:00
Stephanos Ioannidis
58af1b51bd ci: Use organisation-level AWS secrets
This commit updates the CI workflows to use the `zephyrproject-rtos`
organisation-level AWS secrets instead of the repository-level secrets.

Using organisation-level secrets allows more centralised management of
the access keys used throughout the GitHub Actions CI infrastructure.

Note that the `AWS_*_ACCESS_KEY_ID` is now stored in plaintext as a
variable instead of a secret because it is equivalent to a username and
needs to be identifiable for management and audit purposes.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-05-12 03:30:43 +09:00
Kumar Gala
70f2a4951a tests: posix: fs: disable CONFIG_EVENTFD
The test doesn't use eventfd so we can disable it to save some space.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit 70e921dbc7)
2023-05-10 19:48:15 -04:00
Kumar Gala
650d10805a posix: eventfd: depends on polling
Have the eventfd Kconfig select POLL, as the code utilizes the polling
API. We get a link error for tests/lib/fdtable/libraries.os.fdtable
when building on arm-clang without this.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
(cherry picked from commit f215e4494c)
2023-05-10 19:48:15 -04:00
Chris Friedt
25616b1021 tests: drivers: dma: loop: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 8c6c96715f)
2023-05-09 08:42:19 -04:00
Chris Friedt
f72519007c tests: drivers: dma: chan_link: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7f6d976916)
2023-05-09 08:42:19 -04:00
Chris Friedt
1b2a7ec251 tests: drivers: dma: chan_blen: support 64-bit dma
The test does not appear to support 64-bit DMA
* mitigate compiler warning
* support 64-bit addressing mode with `CONFIG_DMA_64BIT`

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 5afcac5e14)
2023-05-09 08:42:19 -04:00
Chris Friedt
9d2533fc92 tests: posix: ensure that min and max priority are schedulable
Verify that threads are actually schedulable for min and max
scheduler priority for both `SCHED_RR` (preemptive) and
`SCHED_FIFO` (cooperative).

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit ad71b78770)
2023-05-02 16:25:42 -04:00
Chris Friedt
e20b8f3f34 posix: sched: ensure min and max priority are schedulable
Previously, there was an off-by-one error for SCHED_RR.

Fixes #56729

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 2b2cbf8107)
2023-05-02 16:25:42 -04:00
Christopher Friedt
199d5d5448 drivers: pcie_ep: iproc: compile-out unused function based on DT
Compile-out `iproc_pcie_pl330_dma_xfer()` if there are no active
DMA users in devicetree.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
(cherry picked from commit 9ad78eb60c)
2023-05-02 12:33:46 -04:00
Chris Friedt
5db2717f06 drivers: pcie_ep: iproc: ensure config and api are const
The `config` and `api` members of `struct device` are expected
to be `const`. This also improves reliability, as `config`
and `api` are stored in rom rather than ram, which has the
potential to be corrupted at runtime in the absence of an MMU.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7212792295)
2023-04-27 11:18:00 -04:00
Tarun Karuturi
f3851326da drivers: pcie_ep: iproc: enable based on device tree specs
There are use cases for the pcie_ep driver where we don't
necessarily need the DMA functionality. Added ifdefs around
the DMA functionality so that it's only available if we
specify the DMA engines in the device tree, similar to:

```
dmas = <&pl330 0>, <&pl330 1>;
dma-names = "txdma", "rxdma";
```

Signed-off-by: Tarun Karuturi <tkaruturi@meta.com>
Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 9d95f69a87)
2023-04-27 11:18:00 -04:00
Stephanos Ioannidis
5a8d05b968 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-27 21:45:14 +09:00
Stephanos Ioannidis
eea42e38f3 ci: labeler: Use actions/labeler@v4
This commit updates the labeler workflow to use the labeler action v4,
which is based on node.js 16 and @actions/core 1.10.0, in preparation
for the upcoming removal of the deprecated GitHub features.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2023-04-15 15:59:20 +09:00
Chris Friedt
0388a90e7b posix: clock: fix seconds calculation
The previous method used to calculate seconds in `clock_gettime()`
seemed to have an inaccuracy that grew with time causing the
seconds to be off by an order of magnitude when ticks would roll
over.

This change fixes the method used to calculate seconds.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2023-04-07 06:29:11 -04:00
Krzysztof Chruscinski
4c62d76fb7 sys: time_units: Add Kconfig option for algorithm selection
Add the maximum timeout used for conversion to Kconfig. The option is used
to determine which conversion algorithm to use: faster but overflowing
earlier, or slower without early overflow.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit 50c7c7b1e4)
2023-04-07 06:29:11 -04:00
Chris Friedt
6f8f9b5c7a tests: time_units: check for overflow in z_tmcvt intermediate
Prior to #41602, due to the ordering of operations (first mul,
then div), an intermediate value would overflow, resulting in
a time non-linearity.

This test ensures that time rolls-over properly.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 74c9c0e7a3)
2023-04-07 06:29:11 -04:00
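A standalone illustration of that kind of intermediate overflow (hypothetical 10 kHz tick rate, not the z_tmcvt code): multiplying before dividing in 32 bits wraps long before the final result would.

```c
/* Illustration: ticks-to-ms with mul-before-div in 32 bits overflows the
 * intermediate product, while widening to 64 bits gives the right answer.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t ticks = 5000000U;  /* ~8.3 minutes of uptime at 10 kHz */

	uint32_t ms_wrapped = (ticks * 1000U) / 10000U;                       /* 70503 */
	uint32_t ms_correct = (uint32_t)(((uint64_t)ticks * 1000U) / 10000U); /* 500000 */

	printf("%u vs %u\n", ms_wrapped, ms_correct);
	return 0;
}
```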
Krzysztof Chruscinski
afbc93287d lib: posix: clock: Prevent early overflows
The algorithm was converting uptime to nanoseconds, which can easily
lead to overflows. Change the algorithm to use milliseconds, with
nanoseconds used only for the remainder.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2023-04-07 06:29:11 -04:00
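A minimal sketch of a milliseconds-based conversion (helper and constant names from the Zephyr kernel API; not the exact patch):

```c
/* Sketch: build the timespec from uptime in milliseconds and use
 * nanoseconds only for the sub-second remainder, instead of converting
 * the whole uptime to nanoseconds (which overflows much earlier).
 */
#include <kernel.h>
#include <time.h>

static void uptime_to_timespec(struct timespec *ts)
{
	int64_t uptime_ms = k_uptime_get();

	ts->tv_sec = uptime_ms / MSEC_PER_SEC;
	ts->tv_nsec = (uptime_ms % MSEC_PER_SEC) * NSEC_PER_MSEC;
}
```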
Gerard Marull-Paretas
a28aa01a88 sys: time_units: add missing include
The header can't be fully used in standalone mode: toolchain.h has to be
included first, otherwise the ALWAYS_INLINE attribute is not defined.
Headers that can be directly included and are not self-contained should
be considered a bad practice.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-07 06:29:11 -04:00
Stephanos Ioannidis
677a374255 ci: backport_issue_check: Use ubuntu-22.04 virtual environment
This commit updates the pull request backport issue check workflow to
use the Ubuntu 22.04 virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cadd6e6fa4)
2023-03-22 03:15:40 +09:00
Stephanos Ioannidis
0389fa740b ci: manifest: Use ubuntu-22.04 virtual environment
This commit updates the manifest workflow to use the Ubuntu 22.04
virtual environment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit af6d77f7a7)
2023-03-22 03:05:01 +09:00
Torsten Rasmussen
b02d34b855 cmake: fix variable de-referencing in zephyr_check_compiler_x functions
Fixes: #53124

Fix de-referencing of check and exists function arguments by correctly
de-referencing the argument references using `${<var>}`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 04a27651ea)
2023-03-13 07:48:51 -04:00
Torsten Rasmussen
29e3a4865f cmake: dereference ${check} after zephyr_check_compiler_flag() call
Follow-up: #53124

The PR#53124 fixed an issue where the variable `check` was not properly
dereferenced into the correct variable name for return value storage.
This was corrected in 04a27651ea.

However, some code was passing a return argument as:
`zephyr_check_compiler_flag(... ${check})`
but checking the result like:
`if(${check})`
thus relying on a faulty behavior of code updating `check` and not the
`${check}` variable.

Fix this by updating to use `${${check}}` as that will point to the
correct return value.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 45b25e5508)
2023-03-10 13:32:37 -05:00
Robert Lubos
aaa6d280ce net: iface: Add NULL pointer check in net_if_ipv6_set_reachable_time
In case the IPv6 context pointer was not set on an interface (for
instance due to IPv6 context shortage), processing the RA message could
lead to a crash (i.e. a NULL pointer dereference). Protect against this
by adding a NULL pointer check, similar to other functions in this area.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit c6c2098255)
2023-02-28 12:36:47 -05:00
Robert Lubos
e02a3377e5 net: shell: Validate pointer provided with net pkt command
The net_pkt pointer provided to net pkt commands was not validated in
any way. Therefore it was fairly easy to crash an application by
providing an invalid address.

This commit adds the pointer validation. It's checked whether the
pointer provided belongs to any net_pkt pools known to the net stack,
and if the pointer offset within the slab actually points to the
beginning of the net_pkt structure.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit e540a98331)
2023-02-28 12:34:38 -05:00
Gerard Marull-Paretas
76c30dfa55 ci: doc-build: fix PDF build
The new LaTeX Docker image (Debian based) uses Python 3.11. On Debian
systems, this version does not allow installing packages into the system
environment using pip. Use a virtual environment instead.

Signed-off-by: Gerard Marull-Paretas <gerard@teslabs.com>
(cherry picked from commit e6d9ff2948)
2023-02-28 23:21:51 +09:00
Théo Battrel
c3f512d606 Bluetooth: Host: Check returned value by LE_READ_BUFFER_SIZE
`rp->le_max_num` was passed unchecked into `k_sem_init()`; this could
lead to the value being uninitialized and to unknown behavior.

To fix that issue, the `rp->le_max_num` value is checked the same way as
`bt_dev.le.acl_mtu` was already checked. The same thing has been done
for `rp->acl_max_num` and `rp->iso_max_num` in the
`read_buffer_size_v2_complete()` function.

Signed-off-by: Théo Battrel <theo.battrel@nordicsemi.no>
(cherry picked from commit ac3dec5212)
2023-02-24 19:48:12 -05:00
NingX Zhao
f882abfd13 tests: removing incorrect testcases of poll
These two test cases both are fault injection test cases,
and there are designed for testing some negative branches
to improve code coverage. But I find that this branch
shouldn't be tested, because the spinlock will be locked
before a procedure performs here, and then it will trigger
an assert error and the process will be rescheduled to the
handler function, and terminated the current test case,
so spinlock will never be unlocked. And it will impact
the next test case in the same test suite(the next testcase
will be never get spinlock).

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
(cherry picked from commit cb4a629bc8)
2023-02-07 12:14:38 -06:00
Lucas Dietrich
bc7300fea7 kernel: workq: Add internal function z_work_submit_to_queue()
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().

When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield in the context of the spinlock of the
object that became available.

Fixes #45267

Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
(cherry picked from commit 9a848b3ad4)
2023-02-03 18:37:53 -05:00
Andy Ross
8da9a76464 kernel/workq: Cleanup bespoke reschedule point
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible.  Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.

Really, this is only a half fix.  Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule().  But that would take some surgery, so let's go with
the simpler cleanup first.

This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.

[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 8d94967ec4)
2023-02-03 18:37:53 -05:00
Lixin Guo
298b8ea788 kernel: work: remove unused if statement
The work == NULL condition is checked earlier, so there is no need to
check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
(cherry picked from commit d4826d874e)
2023-02-03 18:37:53 -05:00
Peter Mitsis
a9aaf048e8 kernel: Fixes sys_clock_tick_get()
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_tick_announce().
  1. The handler invokes k_uptime_ticks().
  2. k_uptime_ticks() invokes sys_clock_tick_get().
  3. sys_clock_tick_get() must call elapsed() and not
     sys_clock_elapsed() as we do not want to count any
     unannounced ticks that may have elapsed while
     processing the timer ISR.

Fixes #46378

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 71ef669ea4)
2023-02-03 18:37:53 -05:00
Peter Mitsis
e2b81b48c4 kernel: fix race condition in sys_clock_announce()
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 3e2f30a7ef)
2023-02-03 18:37:53 -05:00
Andy Ross
45c41bc344 kernel/timeout: Cleanup/speedup parallel announce logic
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list).  The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).

Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit.  The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.

Fixes #44758

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 0b2ed3818d)
2023-02-03 18:37:53 -05:00
Andy Ross
f570a46719 kernel/timeout: Serialize handler callbacks on SMP
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).

When that happens, each call will independently enter the loop over
the timeout list.  The access is correctly synchronized, so the list
handling is correct.  But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus.  That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".

That's surprising and error-prone.  Don't do it.  Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.

(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver.  That hardware issue is still poorly understood, but this fix
is desirable regardless.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b1182bf83b)
2023-02-03 18:37:53 -05:00
Flavio Ceolin
675a349e1b kernel: Fix timeout issue with SYSTEM_CLOCK_SLOPPY_IDLE
We can't simply use CLAMP to set the next timeout because
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number and then CLAMP will be called with
the upper boundary lower than the lower boundary.

Fixes #41422

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 47b7c2e931)
2023-02-03 18:37:53 -05:00
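A standalone illustration of why that fails (self-contained macros, not the kernel's): a min/max-based clamp stops behaving like a clamp once the upper bound drops below the lower bound.

```c
/* Illustration only: with inverted bounds, the "clamped" value can end up
 * below the lower bound, which is what a negative MAX_WAIT provokes.
 */
#include <stdio.h>

#define MIN(a, b)             ((a) < (b) ? (a) : (b))
#define CLAMP(val, low, high) (((val) <= (low)) ? (low) : MIN(val, high))

int main(void)
{
	int next_timeout = 100;

	/* Upper bound (-1) below lower bound (1): result is -1, i.e. below
	 * the lower bound, so the clamp contract is broken.
	 */
	printf("%d\n", CLAMP(next_timeout, 1, -1));
	return 0;
}
```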
Andy Ross
16927a6cbb kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit b4e9ef0691)
2023-02-03 18:37:53 -05:00
Andy Ross
ab353d6b7d kernel/sched: Refactor IPI signaling
Minor cleanup, we had a bunch of duplicated #if logic to send IPIs,
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
(cherry picked from commit 3267cd327e)
2023-02-03 18:37:53 -05:00
Mark Holden
951b055b7f debug: coredump: allow for coredump backends to be defined outside of tree
Move coredump_backend_api struct to public header so that custom backends
for coredump can be defined out of tree. Create simple backend in test
directory for verification.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit 7b2b283677)
2023-02-03 18:35:56 -05:00
Nicolas Pitre
8e256b3399 scripts: gen_syscalls: fix argument marshalling with 64-bit debug builds
Let's consider this (simplified) compilation result of a debug build
using -O0 for riscv64:

|__pinned_func
|static inline int k_sem_init(struct k_sem * sem,
|                             unsigned int initial_count,
|                             unsigned int limit)
|{
|    80000ad0:   6105                    addi    sp,sp,32
|    80000ad2:   ec06                    sd      ra,24(sp)
|    80000ad4:   e42a                    sd      a0,8(sp)
|    80000ad6:   c22e                    sw      a1,4(sp)
|    80000ad8:   c032                    sw      a2,0(sp)
|        ret = arch_is_user_context();
|    80000ada:   b39ff0ef                jal     ra,80000612
|        if (z_syscall_trap()) {
|    80000ade:   c911                    beqz    a0,80000af2
|                return (int) arch_syscall_invoke3(*(uintptr_t *)&sem,
|                                    *(uintptr_t *)&initial_count,
|                                    *(uintptr_t *)&limit,
|                                    K_SYSCALL_K_SEM_INIT);
|    80000ae0:   6522                    ld      a0,8(sp)
|    80000ae2:   00413583                ld      a1,4(sp)
|    80000ae6:   6602                    ld      a2,0(sp)
|    80000ae8:   0b700693                li      a3,183
|    [...]

We clearly see the 32-bit values `initial_count` (a1) and `limit` (a2)
being stored in memory with the `sw` (store word) instruction. Then,
according to the source code, the address of those values is casted
as a pointer to uintptr_t values, and that pointer is dereferenced to
get back those values with the `ld` (load double) instruction this time.

In other words, the assembly does exactly what the C code indicates.
This is wrong for 3 reasons:

- The top half of a1 and a2 will contain garbage due to the `ld` used
  to retrieve them. Whether or not the top bits will be cleared
  eventually depends on the architecture and compiler.
- Regardless of the above, a1 and a2 would be plain wrong on a big
  endian system.
- The load of a1 will cause a misaligned trap as it is 4-byte aligned
  while `ld` expects an 8-byte alignment.

The above code happens to work properly when compiling with
optimizations enabled as the compiler simplifies the cast and
dereference away, and register content is used as is in that case.
That doesn't make the code any more "correct" though.

The reason for taking the address of an argument and dereference it as an
uintptr_t pointer is most likely done to work around the fact that the
compiler refuses to cast an aggregate value to an integer, even if that
aggregate value is in fact a simple structure wrapping an integer.

So let's fix this code by:

- Removing the pointer dereference roundtrip and associated casts. This
  gets rid of all the issues listed above.
- Using a union to perform the type transition which deals with
  aggregates perfectly well. The compiler does optimize things to the
  same assembly output in the end.

This also makes the compiler happier as those pragmas to shut up warnings
are no longer needed. It should be the same about coverity.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 1db5c8b948)
2023-02-01 20:07:43 -05:00
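A small standalone illustration of the union-based type transition described above (generic names, not the generated syscall stubs):

```c
/* Illustration: move a struct-wrapped value into a register-sized integer
 * through a union, rather than casting its address to (uintptr_t *) and
 * dereferencing it (which reads past the argument for sub-register types).
 */
#include <stdint.h>
#include <stdio.h>

struct wrapped {           /* stand-in for a struct-wrapped argument */
	uint32_t value;
};

static uintptr_t to_reg(struct wrapped arg)
{
	union {
		struct wrapped w;
		uintptr_t reg;
	} u = { .reg = 0 };

	u.w = arg;         /* only the bytes of the struct are meaningful */
	return u.reg;
}

static struct wrapped from_reg(uintptr_t reg)
{
	union {
		struct wrapped w;
		uintptr_t reg;
	} u = { .reg = reg };

	return u.w;        /* recover the wrapped value on the other side */
}

int main(void)
{
	struct wrapped in = { .value = 42 };
	struct wrapped out = from_reg(to_reg(in));

	printf("%u\n", out.value);  /* 42: the value round-trips intact */
	return 0;
}
```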
Nicolas Pitre
74f2760771 scripts: gen_syscalls: add missing --split-type case
With CONFIG_TIMEOUT_64BIT it is both k_timeout_t and k_ticks_t that
need to be split, otherwise many syscalls returning a number of ticks
are being truncated to 32 bits.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit 2cdac33d39)
2023-02-01 20:07:43 -05:00
Nicolas Pitre
85e0912291 scripts: gen_syscalls: fix access validation size on extra params array
It was one below the entire array size.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit df80c77ed8)
2023-02-01 20:07:43 -05:00
Jim Shu
f2c582c75d gen_app_partitions: add .sdata/.sbss section into app_smem
Some architectures (e.g. RISC-V) have .sdata/.sbss sections for small
data/bss. The memory partition should also manage the permissions of these
sections in libraries, so they should be put into app_smem.
(For example, newlib _impure_ptr is in .sdata section and
__malloc_top_pad is in .sbss section in RISC-V.)

Signed-off-by: Jim Shu <cwshu@andestech.com>
(cherry picked from commit 46eb3e5fce)
2023-02-01 20:07:43 -05:00
Robert Lubos
c908ee8133 net: context: Separate user data pointer from FIFO reserved space
Using the same memory as a user data pointer and FIFO reserved space
could lead to a crash in certain circumstances, as those two use cases
were not completely separate.

The crash could happen for example, if an incoming TCP connection was
abruptly closed just after being established. As TCP uses the user data
to notify error condition to the upper layer, the user data pointer
could've been used while the newly allocated context could still be
waiting on the accept queue. This damaged the data area used by the FIFO
and eventually could lead to a crash.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 2ab11953e3)
2023-01-31 16:13:42 -05:00
Nicolas Pitre
175e76b302 z_thread_mark_switched_*: use z_current_get() instead of k_current_get()
k_current_get() may rely on TLS which might not yet be initialized
when those tracing functions are called, resulting in a crash.

This is different from the main branch as in that case the implementation
was completely revamped and neither k_current_get() nor z_current_get()
are used anymore. This is a much simpler fix than a backport of that
code, similar to the implication in commit f07df42d49 ("kernel:
make k_current_get() work without syscall").

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-01-23 15:01:10 -05:00
Keith Packard
c520749a71 tests: Disable HW stack protection for some mpu tests
When active, z_libc_partition consumes an MPU region which leaves too
few for some MPU tests. Free up one by disabling HW stack protection.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit 19c8956946)
2023-01-11 11:02:47 -05:00
David Leach
584f52d5be tests: mem_protect: ensure allocated objects are initialized
K_OBJ_MSGQ, K_OBJ_PIPE, and K_OBJ_STACK objects have pointers
to additional memory that can be allocated. The k_obj_alloc()
returns these objects as uninitialized so when they are freed
there are random opportunities for freeing invalid memory
and causing random faults.

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit fdea2a628b)
2023-01-11 11:02:47 -05:00
David Leach
d05c3bdf36 tests: mem_protect: avoid allocating K_OBJ_MSGQ in userspace.
The K_OBJ_MSGQ object is uninitialized, so when the thread cleanup occurs
after an expected fault for invalid access, the test case can randomly
fault again because the cleanup of the thread will sometimes attempt
to free an invalid buffer_start pointer in the msgq object.

Fixes #42705

Signed-off-by: David Leach <david.leach@nxp.com>
(cherry picked from commit a0737e687c)
2023-01-11 11:02:47 -05:00
Jim Shu
3ab0c9516f tests: mem_protect: enlarge heap size of RISCV64
Because the k_thread size on RISCV64 is nearly 512 bytes, a heap size of
(num_of_thread * 256) bytes is not enough. Enlarge the heap size on
RISCV64 to (num_of_thread * 1024) bytes, like on x86_64 and ARM64.

Signed-off-by: Jim Shu <cwshu09@gmail.com>
(cherry picked from commit e2d67d60ba)
2023-01-11 11:02:47 -05:00
Keith Packard
df6f0f477f tests/kernel/mem_protect: Check for thread_userspace_local_data
When using THREAD_LOCAL_STORAGE the thread_userspace_local_data stuff
isn't used, so these tests wouldn't build.

Signed-off-by: Keith Packard <keithp@keithp.com>
(cherry picked from commit b03b2e0403)
2023-01-11 11:02:47 -05:00
Nicolas Pitre
2dc30ca1fb tests: lifo_usage: make it less susceptible to SMP races
On SMP, and especially using qemu on a busy system, it is possible for
a thread with a later timeout to get ahead of another one with an
earlier timeout. The tight timeout value difference (10ms) makes it
possible albeit difficult to reproduce. The result is something like:

|START - test_timeout_threads_pend_on_lifo
| thread (q order: 2, t/o: 0, lifo 0x4001d350)
|
|    Assertion failed at main.c:140:
|test_multiple_threads_pending: (data->timeout_order not equal to ii)
| *** thread 2 woke up, expected 1

Let's make timeout values 10 times larger to make this unlikely race
even less likely.

While at it... The timeout field in struct timeout_order_data is some ms
value and not a number of ticks, so change the type accordingly.
And leverage k_cyc_to_ms_floor32() to simplify computation in
is_timeout_in_range().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
(cherry picked from commit a1ce2fb990)
2023-01-11 11:02:47 -05:00
Daniel Leung
5cbda9f1c7 tests: kernel/smp: wait for threads to exits between tests
This adds a bunch of k_thread_join() calls to make sure threads spawned
for a test are no longer running when that test exits. This
prevents interference between tests if some threads are still
running when they are assumed not to be.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit dbe3874079)
2023-01-11 11:02:47 -05:00
Carlo Caione
711506349d tests/kernel/smp: Add SMP switch torture test
Formalize and rework the issue reproducer for #40795 and add it to the
SMP test suite.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
(cherry picked from commit 8edf9817c0)
2023-01-11 11:02:47 -05:00
Ederson de Souza
572921a44a tests/kernel/fpu_sharing: Run test with MP_NUM_CPUS=1
This test uses k_yield() to "sync" between threads, so it's implicitly
supposed to run on a single CPU. Make it explicit, to avoid issues on
platforms with more cores.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>

FIXKFLOATDISABLE

(cherry picked from commit ab17f69a72)
2023-01-11 11:02:47 -05:00
Chris Friedt
7da64958f0 release: Zephyr 2.7.4
Set version to 2.7.4

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 20:33:40 -05:00
Chris Friedt
49e965fd63 release: update v2.7.4 release notes
* add bugfixes to v2.7.4 release
* partial CVE notes from security

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-12-22 19:17:19 -05:00
Flavio Ceolin
c09b95fafd net: tcp: Fix possible buffer underflow
Fix possible underflow in tcp flags parse.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit ac2e13b9a1)
2022-12-22 12:17:46 -05:00
Andy Ross
88f09f2eac kernel/sched: Fix SMP race on pend
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch).  This
process happens with the caller's lock held, so local interrupts are
masked.  But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!

Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also.  Now we hold the scheduler lock between pend and
the final context switch.

Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.

[1] z_swap() is a historical API implemented in per-arch assembly for
    older architectures (like ARM32!).  It was designed to be called
    with what at the time was a global IRQ lock, so it doesn't
    understand the idea of a separate scheduler lock.  When we finally
get all architectures on arch_switch() this design can be cleaned up
    quite a bit.

Signed-off-by: Andy Ross <andyross@google.com>
(cherry picked from commit c32f376e99)
2022-12-20 20:54:21 -05:00
Ian Oliver
568c09ce3a log_core: Add Kconfig symbol for init priority
Users may want to do some configuration after the kernel is up, but
before initializing the log_core. Making the log_core's init priority
configurable makes that possible.

Signed-off-by: Ian Oliver <io@amperecomputing.com>
(cherry picked from commit 1675d49b4c)
2022-12-20 15:23:12 -05:00
Chris Friedt
79f6c538c1 tests: posix: add tests for sleep() and usleep()
Previously, there was no test coverage for `sleep()` and
`usleep()`.

This change adds full test coverage.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 027b79ecc4)
2022-12-06 12:47:16 -05:00
Chris Friedt
3400e6d9db lib: posix: update usleep() to follow the POSIX spec
The original implementation of `usleep()` was not compliant
to the POSIX spec in 3 ways.
- calling thread may not be suspended (because `k_busy_wait()`
  was previously used for short durations)
- if `usecs` > 1000000, previously we did not return -1 or set
  `errno` to `EINVAL`
- if interrupted, previously we did not return -1 or set
  `errno` to `EINTR`

This change addresses those issues to make `usleep()` more
POSIX-compliant.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 7b95428fa0)
2022-12-06 12:47:16 -05:00
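
From a caller's point of view, the compliant behaviour described above looks roughly like this; a sketch only, with illustrative values (the exact range limit follows the POSIX text rather than this example).

```c
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

void usleep_example(void)
{
	/* Out-of-range request: expect -1 with errno set to EINVAL. */
	if (usleep(2000000) == -1 && errno == EINVAL) {
		printf("usleep rejected an out-of-range argument\n");
	}

	/* A short request may be interrupted: expect -1 with errno == EINTR. */
	if (usleep(500) == -1 && errno == EINTR) {
		printf("usleep was interrupted before the delay elapsed\n");
	}
}
```
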
Chris Friedt
fbea9e74c2 lib: posix: sleep() should report unslept time in seconds
In the case that `sleep()` is interrupted, the POSIX spec requires
it to return the number of "unslept" seconds (i.e. the number of
seconds requested minus the number of seconds actually slept).

Since `k_sleep()` already returns the amount of "unslept" time
in ms, we can simply use that.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit dcfcc6454b)
2022-12-06 12:47:16 -05:00
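
A minimal sketch of the idea, assuming the wrapper below rather than the actual library code: derive the unslept seconds from the millisecond remainder that k_sleep() reports.

```c
#include <zephyr.h>

/* Illustrative only: round the leftover milliseconds up to whole seconds. */
static unsigned int sleep_sketch(unsigned int seconds)
{
	int32_t rem_ms = k_sleep(K_SECONDS(seconds));

	return (rem_ms <= 0) ? 0U : (unsigned int)((rem_ms + 999) / 1000);
}
```
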
Chris Friedt
4d929827ac tests: posix: clock: do not use usleep in a broken way
Using `usleep()` for >= 10000000 microseconds results
in an error, so this test was kind of defective, having
explicitly called `usleep()` for seconds.

Also, check the return values of `clock_gettime()`.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit 23a1f0a672)
2022-12-06 12:47:16 -05:00
Jamie McCrae
37b3641f00 manifest: Update mcumgr revision
Updates mcumgr to resolve an issue with the state of a firmware
update not being reset if an error occurs or if the underlying
area is erased.

Fixes #52247
Backporting commit 4c48b4f21a

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 16:02:09 -05:00
Jamie McCrae
3d940f1d1b net: Synchronise user data size with mcumgr
Fixes an issue with 2 user data sizes being out of sync causing
CI failures.

Fixes #52591

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-11-28 15:26:32 -05:00
Martí Bolívar
f0d2a3e2fe python-devicetree: CI hotfix
Pin the types-PyYAML version to 6.0.7. Version 6.0.8 is causing CI
errors for other pull requests, so we need this in to get other PRs
moving.

Fixes: #46286

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d405a43b1 ci: Clone cached Zephyr repository with shared objects
In the new ephemeral Zephyr runners, the cached repository files are
located in a foreign file system and the Git clone operation cannot create
hard-links to the cached repository objects, which forces the Git clone
operation to copy the objects from the cache file system to the runner
container file system.

This commit updates the CI workflows to instead perform a "shared
clone" of the cached repository, which allows the cloned repository to
utilise the object database of the cached repository.

While "shared clone" can be often dangerous because the source
repository objects can be deleted, in this case, the source repository
(i.e. cached repository) is mounted as read-only and immutable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2dbe845f21 ci: codecov: Clone cached Zephyr repository
This commit updates the codecov workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
4efa225daa ci: codecov: Use zephyr-runner
This commit updates the codecov workflow to use the new Kubernetes-
based zephyr-runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0ee1955d5b ci: clang: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
a0eb50be3e ci: clang: Clone cached Zephyr repository
This commit updates the clang workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
2da9d7577f ci: clang: Use zephyr-runner
This commit updates the clang workflow to use the new Kubernetes-based
zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d5e2a071c1 ci: twister: Remove obsolete clean-up steps
The repository clean-up steps are no longer necessary because the new
zephyr-runner is ephemeral and does not contain any files from the
previous runs.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
633dd420d9 ci: twister: Clone cached Zephyr repository
This commit updates the twister workflow to pre-clone the Zephyr
repository from the runner repository cache.

Note that the `origin` remote URL is reconfigured to that of the GitHub
Zephyr repository because the checkout action attempts to delete
everything and re-clone otherwise.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
780b4e08cb ci: twister: Use zephyr-runner
This commit updates the twister workflow to use the new Kubernetes-
based zephyr-runner.

Note that the repository cache directory path has been changed for the
new runner.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
281185e49d ci: Use actions/cache@v3
This commit updates the CI workflows to use the latest "cache" action
v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
cb240b4f4c ci: Use actions/setup-python@v4
This commit updates the CI workflows to use the latest "setup-python"
action v4, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ff5ee88ac0 ci: Use actions/upload-artifact@v3
This commit updates the CI workflows to use the latest
"upload-artifact" action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3ef958116 ci: Use actions/checkout@v3
This commit updates the CI workflows to use the latest "checkout"
action v3, which is based on Node.js 16.

Note that Node.js 12-based actions are now deprecated by GitHub and may
stop working in the near future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
727806f483 ci: twister: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ec6c9d3637 ci: release: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c5e88dbbda ci: codecov: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c3e4d65dd1 ci: clang: Use output parameter file
This commit updates the workflow to use the output parameter file
(`GITHUB_OUTPUT`) instead of the stdout-based output parameter setting,
which is now deprecated by GitHub and will be removed in the near
future.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
16207ae32f ci: footprint-tracking: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
dbf2ca1b0a ci: footprint: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0e204784ee ci: codecov: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
c44406e091 ci: clang: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
1da82633b2 ci: twister: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ad6636f09c ci: bluetooth-tests: Use "concurrency" to cancel previous runs
This commit adds a concurrency group to the workflow in order to ensure
that only one instance of the workflow runs for an event-ref
combination.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Anas Nashif
8f4b366c0f ci: update cancel-workflow-action action to 0.11.0
Update action to use latest release which resolves a warning Node 12
being deprecated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f8960f5ef ci: compliance: Use upload-artifact action v3
This commit updates the "Create a release" workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
d9200eb55d ci: issue_count: Use upload-artifact action v3
This commit updates the issue count tracker workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
ccdc1d3777 ci: doc-build: Use upload-artifact action v3
This commit updates the documentation build workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
197c4ddcbd ci: compliance: Use upload-artifact action v3
This commit updates the compliance check workflow to use a specific
upload-artifact action version, v3, instead of the latest master branch
in order to prevent any potential breakages due to the newer commits.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7287947535 ci: backport: Use Ubuntu 20.04 runner image
This commit updates the backport workflow to use the ubuntu-20.04
runner image because the ubuntu-18.04 image is deprecated and will
become unsupported by December 1, 2022.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
b0be164419 ci: west_cmds: Use specific version of runner image
This commit updates the "West Command Tests" workflow to use a specific
runner image version (ubuntu-20.04, macos-11, windows-2022) instead of
the latest version in order to prevent any potential breakages due to
the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
fab06842d5 ci: devicetree_checks: Use specific version of runner image
This commit updates the "Devicetree script tests" workflow to use a
specific runner image version (ubuntu-20.04, macos-11, windows-2022)
instead of the latest version in order to prevent any potential
breakages due to the 'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b4eafc54a ci: twister: Use Ubuntu 20.04 runner image
This commit updates the "Run tests with twister" workflow to use a
specific runner image version, ubuntu-20.04, instead of the latest
version in order to prevent any potential breakages due to the 'latest'
version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9becb117b2 ci: twister_tests: Use Ubuntu 20.04 runner image
This commit updates the Twister Testsuite workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
87ab3e4d16 ci: stale_issue: Use Ubuntu 20.04 runner image
This commit updates the stale issue workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
9b8305cc11 ci: release: Use Ubuntu 20.04 runner image
This commit updates the "Create a Release" workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
728e5720cc ci: manifest: Use Ubuntu 20.04 runner image
This commit updates the manifest check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6c11685863 ci: license_check: Use Ubuntu 20.04 runner image
This commit updates the license check workflow to use a specific runner
image version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
0f783a4ce0 ci: issue_count: Use Ubuntu 20.04 runner image
This commit updates the issue count tracker workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
02dba17a59 ci: footprint: Use Ubuntu 20.04 runner image
This commit updates the footprint delta workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
860e7307bc ci: footprint-tracking: Use Ubuntu 20.04 runner image
This commit updates the footprint tracking workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7b087b8ac5 ci: errno: Use Ubuntu 20.04 runner image
This commit updates the error number check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
15f39300c0 ci: doc: Use Ubuntu 20.04 runner image
This commit updates the documentation build and publish workflows to
use a specific runner image version, ubuntu-20.04, instead of the
latest version in order to prevent any potential breakages due to the
'latest' version change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
f6f69516ac ci: daily_test_version: Use Ubuntu 20.04 runner image
This commit updates the daily test version workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
5f9dd18a87 ci: compliance: Use Ubuntu 20.04 runner image
This commit updates the compliance check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
7d8639b4a8 ci: coding_guidelines: Use Ubuntu 20.04 runner image
This commit updates the coding guidelines workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
6e723ff755 ci: clang: Use Ubuntu 20.04 runner image
This commit updates the Clang workflow to use a specific runner image
version, ubuntu-20.04, instead of the latest version in order to
prevent any potential breakages due to the 'latest' version change by
GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
bff97ed4cc ci: backport_issue_check: Use Ubuntu 20.04 runner image
This commit updates the backport issue check workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
97e2959452 ci: bluetooth-tests: Use Ubuntu 20.04 runner image
This commit updates the Bluetooth tests workflow to use a specific
runner image version, ubuntu-20.04, instead of the latest version in
order to prevent any potential breakages due to the 'latest' version
change by GitHub.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Stephanos Ioannidis
91970658ec ci: issue_count: Fix stale reference to master branch
This commit fixes a stale reference to the 'master' branch when
downloading the issue report configuration file.

Note that the 'master' branch is no longer the default
branch -- 'main' is.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
fc2585af00 ci: doc-build: skip Kconfig docs build on pull requests
The documentation supports a special target named "html-fast" that skips
generation of all Kconfig pages. Instead, it creates a single dummy page
where a reference to all existing Kconfig options is placed. This means
that references are resolved, but content is not rendered. Since Kconfig
help is rendered as a literal, chances of breaking documentation
build due to Kconfig changes should be low. The change proposed in this
patch should speed up documentation build on pull requests while a
proper solution is found for the Kconfig docs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
f95edd3a85 ci: doc-build: use concurrency group to cancel in progress builds
Add the documentation build jobs to a concurrency group so that branch
force pushes will automatically cancel in progress jobs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
2b9ed76734 ci: doc-build: disable parallel build
When an error happens during the Sphinx build (e.g. due to a broken
reference), the process hangs on CI when run with both `-j auto`
(parallel build) and `-W` (warnings as errors) options. The root cause
of the issue is unknown, and does not seem to happen locally. Parallel
builds are experimental on Sphinx, so they have been disabled on CI for
now.  Because the CI runner is a single-core machine, the build time should
remain equal or similar. The option is still left as default, so local
builds will continue to benefit from parallelization.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Gerard Marull-Paretas
5a041bff3d ci: doc-build: set timeout to 30 minutes
Give documentation build up to 30 minutes to finish. This should improve
the user experience when Sphinx hangs due to broken references, a
problem that needs some investigation.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-27 16:18:59 +09:00
Chris Friedt
f61664c6f8 tests: kernel: mutex: move race timeout test to mutex_api
Previously, this change was added to `mutex_error_case`.

That worked fine in `main`, but once the change was backported to
`v2.7-branch`, the test would fail because it *did not* cause a
failure. The reason for that was that the `mutex_error_case`
suite has `CONFIG_ZTEST_FATAL_HOOK=y`.

With the newer ztest API, it allowed a separate suite to be used,
allowing the test to pass (although it did not really fit in with
the rest of the testsuite).

The solution is to simply merge it with the `mutex_api` suite
which uses non-inverted success logic.

This change will also have to be cherry-picked for the backport
in #49031.

Fixes #48056.

Signed-off-by: Chris Friedt <cfriedt@meta.com>
2022-11-18 17:33:10 -05:00
Qi Yang
e2f05e9328 kernel: mutex: fix races when lock timeout
Say threadA holds a mutex and threadB tries
to lock it with a timeout. A race would occur
if threadA unlocks that mutex after threadB
got unpended by sys_clock but before it gets
scheduled and calls k_spin_lock.

This patch fixes this issue by checking the
mutex's status again after k_spin_lock is called.

Fixes #48056

Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
(cherry picked from commit 89c4a074dc)
2022-11-18 17:33:10 -05:00
Jamie McCrae
ea0b53b150 mgmt: mcumgr: Fix Bluetooth transport issues
This fixes issues with the Bluetooth SMP transport whereby deadlocks
could arise from connection references being held in long-lasting
mcumgr command processing functions.

Note: Heavily modified from original PR due to differences in MCUmgr
operation since the Zephyr 2.7 release.

Fixes #51846

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 2aca1242f7)
2022-11-17 18:41:57 -05:00
Stephanos Ioannidis
56664826b2 ci: doc: Publish pull request docs to builds.zephyrproject.io
This commit updates the CI documentation build workflow to upload the
HTML pull request documentation builds to the S3 builds.zephyrproject.io
bucket so that they are directly accessible from the web.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1dd92ec865)
2022-11-16 04:23:46 +09:00
Chris Friedt
bbb49dec38 net: sockets: socketpair: do not allow blocking IO in ISR context
Using a socketpair for communication in an ISR is not a great
solution, but the implementation should be robust in that case
as well.

It is not acceptable to block in ISR context, so robustness here
means to return -1 to indicate an error, setting errno to `EAGAIN`
(which is synonymous with `EWOULDBLOCK`).

Fixes #25417

Signed-off-by: Chris Friedt <cfriedt@meta.com>
(cherry picked from commit d832b04e96)
2022-11-15 12:12:40 -05:00
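
A hedged sketch of the guard described above; the helper and semaphore names are made up for illustration, but the shape is: refuse to block in ISR context and report EAGAIN instead.

```c
#include <errno.h>
#include <zephyr.h>

/* Illustrative helper: block for a peer signal unless running in an ISR. */
static int wait_for_peer(struct k_sem *signal)
{
	if (k_is_in_isr()) {
		errno = EAGAIN;   /* same value as EWOULDBLOCK */
		return -1;
	}

	k_sem_take(signal, K_FOREVER);
	return 0;
}
```
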
Stephanos Ioannidis
8211ebf759 ci: Limit workflow scope to v2.7-branch
This commit updates the CI workflows to limit their event trigger scope
to the v2.7-branch in order to prevent the workflows from running when
the backport branches are pushed.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-11-02 03:12:21 +09:00
Jay Vasanth
1f3121b6b2 soc arm: MEC172x soc.h - Include custom IRQn_Type
Fix for issue #41012 to allow the compiler to treat
IRQn_Type as wider than 8 bits. This ensures NVIC numbers
greater than 127 (required for the MEC172x device) work
correctly with the irq_enable() API

Signed-off-by: Jay Vasanth <jay.vasanth@microchip.com>
(cherry picked from commit 4495f43dca)
2022-10-11 20:17:47 -04:00
Jamie McCrae
d7820faf7c drivers: counter: Update counter_set_channel_alarm documentation
Adds the -EBUSY return code to the documentation of the
counter_set_channel_alarm function which is returned when an alarm
is already active.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit c607599068)
2022-10-07 18:23:58 -04:00
Jordan Yates
5d29d52445 scripts: zspdx: fix writing custom license IDs
The builtin list function `.sort()` sorts the list in-place and returns
None. As this is an invalid type for iteration, use the builtin `sorted`
function, which returns a sorted copy of the list, which we can iterate
over.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
(cherry picked from commit 06aae61019)
2022-10-04 09:09:14 -07:00
Yong Cong Sin
be11187e09 mgmt/hawkbit: Print hrefs only if there's an update
If there is no update from the server, the _links field will be NULL.
Check if it is NULL before trying to LOG these strings.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2022-10-04 07:43:25 -07:00
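
The shape of the fix is a plain NULL guard; the structures below are placeholders standing in for the hawkbit response types, not the real definitions.

```c
#include <stddef.h>
#include <stdio.h>

struct links {
	const char *deployment_base;   /* placeholder href field */
};

struct ctl_response {
	struct links *_links;          /* NULL when no update is pending */
};

/* Illustrative: only log hrefs when the server actually offers an update. */
static void log_hrefs(const struct ctl_response *resp)
{
	if (resp->_links == NULL) {
		printf("no update pending; nothing to log\n");
		return;
	}

	printf("deploymentBase: %s\n", resp->_links->deployment_base);
}
```
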
Ruud Derwig
9044091e21 ARC: fix possible memory corruption with userspace
Use Z_KERNEL_STACK_BUFFER instead of
Z_THREAD_STACK_BUFFER for the initial stack.

Fixes #50467

Signed-off-by: Ruud Derwig <Ruud.Derwig@synopsys.com>
(cherry picked from commit 9bccb5cc4b)
2022-09-23 13:57:53 -04:00
Daniel Leung
170ba8dfcb soc: esp32: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit b820cde7a9)
2022-09-23 13:57:40 -04:00
Daniel Leung
e3f1b6fc54 soc: intel_adsp: use Z_KERNEL_STACK_BUFFER instead of...
...Z_THREAD_STACK_BUFFER.

This is currently a symbolic change as Z_THREAD_STACK_BUFFER
is simply an alias to Z_KERNEL_STACK_BUFFER without userspace,
and Xtensa does not support userspace at the moment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit 74df88d8f5)
2022-09-23 13:57:40 -04:00
Daniel Leung
7ac05528ca tests: coredump: skip acrn_ehl_crb
The coredump tests output quite a large amount of data into
the console. However, the ACRN console only has very limited
history (comparatively), such that twister is unable to
match the necessary strings to consider the tests passed.
So skip those tests on acrn_ehl_crb.

Fixes #40887

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
(cherry picked from commit c1ac125068)
2022-09-20 17:06:00 +01:00
Martí Bolívar
64f411f0fb edtlib: remove python 3.5 workaround
Remove a yaml monkeypatch. It is no longer needed since we support 3.6
or later on Zephyr v2.7 LTS and 3.8 or later on what will become v3.2.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 7ef9c4b20e)
2022-09-18 08:32:31 -04:00
Anas Nashif
2e2dd96ae4 actions: west/devicetree: exclude python 3.6 on windows
This version of python is not available anymore. Excluding for now to
unblock CI.

Fixes: #49139

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
(cherry picked from commit 773242cbb7)
2022-09-18 08:32:31 -04:00
Torsten Rasmussen
a311291294 cmake: kconfig: preserved quotes for Kconfig string values
Fixes: #49569

Kconfig requires quoted strings in its configuration files, like this:
> CONFIG_A_STRING="foo bar"

But CMake expects strings without additional quotes,
and therefore quotes are stripped when loading Kconfig config files
into CMake.

This is particularly important when the string in Kconfig is a path to a
file. In this case, not stripping the quotes leads to an error as the
file cannot be found.

When users pass a string to Kconfig through CMake, they are expected to
pass it so that the quotes are correct as seen from Kconfig, that is:
> cmake -DCONFIG_A_STRING=\"foo bar\"

In CMake, those quotes are written as-is to the Kconfig extra config file,
and then removed in the CMake cache.
After Kconfig processing, the Kconfig settings are read back into CMake,
but without quotes. Settings that were passed through the CMake cache,
for example using `-D`, are written back to the cache, but this time
without the quotes. This results in Kconfig errors on subsequent CMake
runs.

Instead of writing the Kconfig value setting back to the CMake cache,
introduce an internal shadow symbol in the cache prefixed with `CLI_`.
This allows the CMake cache to keep the value correctly formatted for
Kconfig extra config creation, while at the same time keeping the
existing behavior for CONFIG_ symbols read from Kconfig.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2022-08-31 06:26:03 -04:00
Gerard Marull-Paretas
5221787303 scripts: west_commands: runners: jlink: support pylink >= 0.14.2
It looks like the latest release, 0.14.2, changed the contents of
JLINK_SDK_NAME back to what it was before the 0.14.0 release. That means that the
previous fix is only applicable to a couple of releases: 0.14.0/0.14.1.

Fixes #49564

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit 2d712c6c55)
2022-08-31 06:24:41 -04:00
Gerard Marull-Paretas
63d0c7fcae scripts: west_commands: runners: jlink: support pylink >= 0.14
pylink 0.14.0 changed the class variable where JLink DLL library name
(libjlinkarm) is stored. This patch adds support for new pylink
libraries while keeping backwards compatibility.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
(cherry picked from commit a57001347f)
2022-08-31 06:24:41 -04:00
Yong Cong Sin
8abef50e97 subsys/mgmt/hawkbit: Set ai_socktype if IPV4/IPV6
Follows the implementation of updatehub and sets the
`ai_socktype` only if IPV4/IPV6 is enabled

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit dd9d6bbb44)
2022-08-30 13:01:23 -04:00
Yong Cong Sin
0306e75a5f subsys/mgmt/hawkbit: Init the hints struct to a known value
Initialize the `hints` struct to a known value so that it won't
cause undetermined behavior when used in `getaddrinfo()`.

Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 2ed88e998a)
2022-08-30 13:01:23 -04:00
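
Taken together, the two hawkbit fixes above amount to the pattern below: zero the hints first, then set the family and socket type only when IPv4 or IPv6 is enabled. The hostname and port are illustrative; the calls mirror the Zephyr 2.7 socket API.

```c
#include <zephyr.h>
#include <net/socket.h>

static int resolve_server(struct zsock_addrinfo **res)
{
	/* Zero-initialise so no field carries indeterminate garbage. */
	struct zsock_addrinfo hints = { 0 };

	if (IS_ENABLED(CONFIG_NET_IPV6)) {
		hints.ai_family = AF_INET6;
		hints.ai_socktype = SOCK_STREAM;
	} else if (IS_ENABLED(CONFIG_NET_IPV4)) {
		hints.ai_family = AF_INET;
		hints.ai_socktype = SOCK_STREAM;
	}

	return zsock_getaddrinfo("hawkbit.example.com", "8080", &hints, res);
}
```
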
Christopher Friedt
003de78ce0 release: Zephyr 2.7.3
Set version to 2.7.3

Fixes #49354

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 12:12:49 -04:00
Flavio Ceolin
9502d500b6 release: security: Notes for 2.7.3
Add security release notes for 2.7.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-08-22 10:54:58 -04:00
Christopher Friedt
2a88e08296 release: update v2.7.3 release notes
* add bugfixes to the v2.7.3 release
* awaiting CVE notes from security

Fixes #49310

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 10:54:58 -04:00
Jamie McCrae
e1ee34e55c drivers: sensor: sm351lt: Fix global thread triggering bug
This fixes a bug in the sm351lt driver whereby global triggering will
cause an MPU fault due to an unset pointer.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-08-03 19:05:58 -04:00
Szymon Janc
2ad1ef651b Bluetooth: host: Fix L2CAP reconfigure response with invalid CID
When an L2CAP_CREDIT_BASED_RECONFIGURE_REQ packet is received with
invalid parameters, the recipient shall send an
L2CAP_CREDIT_BASED_RECONFIGURE_RSP PDU with a non-zero Result field
and not change any MTU and MPS values.

This fixes incorrectly reconfiguring valid channels while responding with
the 0x003 (Reconfiguration failed - one or more Destination CIDs invalid)
result code.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 253070b76b)
2022-08-02 12:36:44 -04:00
Szymon Janc
089675af45 Bluetooth: host: Fix L2CAP reconfigure response with invalid MTU
TSE18813 clarified the IUT behavior: rejecting a reconfiguration which
would result in an MTU decrease is enough. There is no need to disconnect
the L2CAP channel(s).

This was affecting L2CAP/ECFC/BI-03-C qualification test case
(TCRL 2022-2).

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 266394dea4)
2022-08-02 12:36:44 -04:00
Andriy Gelman
03ff0d471e net: route: Fix pkt leak if net_send_data() fails
If the call to net_send_data() fails, for example if the forwarding
interface is down, then the pkt will leak. The reference taken by
net_pkt_shallow_clone() will never be released. Fix the problem
by dropping the reference count in the error path.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit a3cdb2102c)
2022-08-02 10:41:52 -04:00
Erwan Gouriou
cd96136bcb boards: nucleo_wb55rg: Fix documentation about BLE binary compatibility
Rather than stating version information that will get out of date
at each release, refer to the source of information located in the
hal_stm32 module.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit 6656607d02)
2022-07-25 05:54:23 -04:00
Henrik Brix Andersen
567fda57df tests: drivers: can: api: add test for RTR filter matching
Add test for CAN RX filtering of RTR frames.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
b14f356c96 drivers: can: loopback: check frame ID type and RTR bit in filters
Check the frame ID type and RTR bit when comparing loopback CAN frames
against installed RX filters.

Fixes: #47904

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
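
The matching rule described above boils down to a masked comparison: a frame and a filter agree on the RTR bit wherever rtr_mask selects it. The helper below is a sketch using field names from the Zephyr 2.7 CAN API, not the driver's actual code.

```c
#include <stdbool.h>
#include <drivers/can.h>

/* Illustrative: match only where rtr_mask says the RTR bits must agree. */
static bool rtr_matches(const struct zcan_frame *frame,
			const struct zcan_filter *filter)
{
	return ((frame->rtr ^ filter->rtr) & filter->rtr_mask) == 0U;
}
```
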
Henrik Brix Andersen
874d77bc75 drivers: can: mcux: flexcan: fix handling of RTR frames
When installing a RX filter, the driver uses "filter->rtr &
filter->rtr_mask" for setting the filter mask. It should just be using
filter->rtr_mask, otherwise filters for non-RTR frames will match RTR
frames as well.

When transmitting a RTR frame, the hardware automatically switches the
mailbox used for TX to RX in order to receive the reply. This, however,
does not match the Zephyr CAN driver model, where mailboxes are
dedicated to either RX or TX. Attempting to reuse the TX mailbox (which
was automatically switched to an RX mailbox by the hardware) fails on
the first call, after which the mailbox is reset and can be reused for
TX. To overcome this, the driver must abort the RX mailbox operation
when the hardware performs the TX to RX switch.

Fixes: #47902

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
ec0befb938 drivers: can: mcan: acknowledge all received frames
The Bosch M_CAN IP does not support RX filtering of the RTR bit, so the
driver handles this bit in software.

If a received frame matches a filter with RTR enabled, the RTR bit of
the frame must match that of the filter in order to be passed to the RX
callback function. If the RTR bits do not match the frame must be
dropped.

Improve the readability of the logic for determining if a frame
should be dropped and add a missing FIFO acknowledge write for dropped
frames.

Fixes: #47204

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Christopher Friedt
273e90a86f scripts: release: list_backports: use older python dict merge method
In Python versions >= 3.9, dicts can be merged with the `|` operator.

This is not the case for python versions < 3.9, and the simplest way
is to use `dict_c = {**dict_a, **dict_b}`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3783cf8353)
2022-07-19 01:30:16 +09:00
Christopher Friedt
59dc65a7b4 ci: backports: check if a backport PR has a valid issue
This is an automated check for the Backports project to
require one or more `Fixes #<issue>` items in the body
of the pull request.

Fixes #46164

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit aa4e437573)
2022-07-18 23:32:19 +09:00
Christopher Friedt
8ff8cafc18 scripts: release: list_backports.py
Created list_backports.py to examine prs applied to a backport
branch and extract associated issues. This is helpful for
adding to release notes.

The script may also be used to ensure that backported changes
also have one or more associated issues.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 57762ca12c)
2022-07-18 23:32:19 +09:00
Christopher Friedt
ba07347b60 scripts: release: use GITHUB_TOKEN and start_date in scripts
Updated bug_bash.py and list_issues.py to use the GITHUB_TOKEN
environment variable for consistency with other scripts.

Updated bug_bash.py to use `-s / --start-date` instead of
`-b / --begin-date`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3b3fc27860)
2022-07-18 23:32:19 +09:00
Christopher Friedt
e423902617 tests: posix: pthread: test for pthread descriptor leaks
Add a simple test to ensure that we can create and join a
single thread `CONFIG_MAX_PTHREAD_COUNT` * 2 times. If
there are leaks, then `pthread_create()` should
eventually return `EAGAIN`. If there are no leaks, then
the test should pass.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit d37350bc19)
2022-07-12 16:02:26 -04:00
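
The test idea can be sketched portably as a create/join loop; host-style POSIX is shown for brevity, not the Zephyr ztest code, and the loop bound stands in for CONFIG_MAX_PTHREAD_COUNT * 2. A descriptor leak would surface as pthread_create() failing with EAGAIN before the loop completes.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

static void *noop(void *arg)
{
	(void)arg;
	return NULL;
}

int main(void)
{
	/* 2 * 32 is a stand-in for CONFIG_MAX_PTHREAD_COUNT * 2. */
	for (int i = 0; i < 2 * 32; i++) {
		pthread_t th;

		assert(pthread_create(&th, NULL, noop, NULL) == 0);
		assert(pthread_join(th, NULL) == 0);
	}

	return 0;
}
```
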
Christopher Friedt
018f836c4d posix: pthread: consider PTHREAD_EXITED state in pthread_create
If a thread is joined using `pthread_join()`, then the
internal state would be set to `PTHREAD_EXITED`.

Previously, `pthread_create()` would only consider pthreads
with internal state `PTHREAD_TERMINATED` as candidates for new
threads. However, that causes a descriptor leak.

We should be able to reuse a single thread an infinite number
of times.

Here, we also consider threads with internal state
`PTHREAD_EXITED` as candidates in `pthread_create()`.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit da0398d198)
2022-07-12 16:02:26 -04:00
Stephanos Ioannidis
f4466c4760 tests: cpp: cxx: Add qemu_cortex_a53 as integration platform
This commit adds the `qemu_cortex_a53`, which is an MMU-based platform,
as an integration platform for the C++ subsystem tests.

This ensures that the `test_global_static_ctor_dynmem` test, which
verifies that the dynamic memory allocation service is functional
during the global static object constructor invocation, is tested on
an MMU-based platform, which may have a different libc heap
initialisation path.

In addition to the above, this increases the overall test coverage
ensuring that the C++ subsystem is functional on an MMU-based platform
in general.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 8e6322a21a)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
9a5cbe3568 tests: cpp: cxx: Test with various types of libc
This commit changes the C++ subsystem test, which previously was only
being run with the minimal libc, to be run with all the mainstream C
libraries (minimal libc, newlib, newlib-nano, picolibc).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 03f0693125)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
5b7b15fb2d tests: cpp: cxx: Add dynamic memory availability test for static init
This commit adds a test to verify that the dynamic memory allocation
service (the `new` operator) is available and functional when the C++
static global object constructors are invoked called during the system
initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit dc4895b876)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
e5a92a1fab tests: cpp: cxx: Add static global constructor invocation test
This commit adds a test to verify that the C++ static global object
constructors are invoked called during the system initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 6e0063af29)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
74f0b6443a lib: libc: newlib: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the newlib malloc heap
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 43e1c28a25)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
6c16b3492b lib: libc: minimal: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the minimal libc malloc
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit db0748c462)
2022-07-12 12:05:56 -04:00
Christopher Friedt
1831431bab lib: posix: semaphore: use consistent timebase in sem_timedwait
In the Zephyr implementation, `sem_timedwait()` uses a
potentially wildly different timebase for comparison via
`k_uptime_get()` (uptime in ms).

The standard specifies `CLOCK_REALTIME`. However, the real-time
clock can be modified to an arbitrary value via clock_settime()
and there is no guarantee that it will always reflect uptime.

This change ensures that `sem_timedwait()` uses a more
consistent timebase for comparison.

Fixes #46807

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
(cherry picked from commit 9d433c89a2)
2022-07-06 07:30:37 -04:00
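
One way to avoid mixing timebases is to measure "now" with the same CLOCK_REALTIME clock the caller used for the absolute deadline and wait for the difference. The helper below is an illustration of that idea, not the upstream sem_timedwait() code.

```c
#include <time.h>

/* Illustrative: milliseconds left until an absolute CLOCK_REALTIME deadline. */
static long remaining_ms(const struct timespec *abstime)
{
	struct timespec now;

	clock_gettime(CLOCK_REALTIME, &now);

	long ms = (abstime->tv_sec - now.tv_sec) * 1000L
		  + (abstime->tv_nsec - now.tv_nsec) / 1000000L;

	return (ms > 0) ? ms : 0;
}
```
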
Torsten Rasmussen
765f63c6b9 cmake: remove xtensa workaround in Zephyr toolchain code.
Remove xtensa specific workaround as this code is now present in Zephyr
SDK cmake code.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 92a1ca61eb)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
062306fc0b cmake: zephyr toolchain code cleanup
With the revert of commit 820d327b46,
some additional code can be cleaned up.

This removes the final left-overs from Zephyr SDK 0.11.1 support and
older.

It further aligns the message printing when including the Zephyr SDK
toolchain with that of other toolchains.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit fb3a113eb8)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
8fcf7f1d78 Revert "cmake: Zephyr sdk backward compatibility with 0.11.1 and 0.11.2"
This reverts commit 820d327b46.

Commit b973cdc9e8 updated the minimum
required Zephyr SDK version to 0.13.

Therefore revert commit 820d327b46 as
backward support for 0.11.1 and 0.11.2 is no longer required.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit e747fe73cd)
2022-06-28 12:37:43 -04:00
Vinayak Kariappa Chettimada
f06b3d922c Bluetooth: Controller: Fix PHY update for unsupported PHY
Fix PHY update procedure to handle unsupported PHY requested
by peer central device. PHY update complete will not be
generated to Host, connection is maintained on the old
PHY and the Controller will not respond to PDUs received on
the unsupported PHY.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 620a5524a5)
2022-06-28 11:39:37 -04:00
Francois Ramu
b75c012c55 drivers: spi: stm32 spi with dma must enable cs after periph
When using DMA to transfer over the SPI, the spi_stm32_cs_control call
is done after enabling the SPI. The same sequence applies
in the transceive_dma function as in the transceive function.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
2022-06-28 11:39:17 -04:00
Stephanos Ioannidis
1efe6de3fe drivers: i2c: Fix infinite recursion in driver unregister function
This commit fixes an infinite recursion in the
`z_vrfy_i2c_slave_driver_unregister` function.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 745b7d202e)
2022-06-21 13:00:55 -04:00
Pavel Vasilyev
39270ed4a0 Bluetooth: Mesh: Fix segmentation when sending proxy message
The previous check in the if-statement would never allow sending the last
segment if msg->len + 2 == MTU * x.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit 1efce43a00)
2022-06-21 12:17:16 -04:00
Pavel Vasilyev
81ffa550ee Bluetooth: Mesh: Check SegN when receiving Transaction Start PDU
When receiving a Transaction Start PDU, ensure that the number of segments
needed to send a Provisioning PDU of TotalLength size is equal to the SegN
value provided in the Transaction Start PDU.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit a63c515679)
2022-06-20 11:31:10 -04:00
Aleksandr Khromykh
8c2965e017 Bluetooth: Mesh: add check for rx buffer overflow in pb adv
There is a potential buffer overflow in pb adv.
If a Transaction Continuation PDU arrives before the
Transaction Start PDU, the last segment number is set to 0xff.
The current implementation has a strictly limited buffer size.
It is possible to receive a malformed frame with a wrong segment
number. All segments with number 2 and above will then be stored
in memory beyond the Rx buffer.

Signed-off-by: Aleksandr Khromykh <Aleksandr.Khromykh@nordicsemi.no>
(cherry picked from commit 6896075b62)
2022-06-20 11:25:30 -04:00
Alexander Wachter
7aa38b4ac8 drivers: can: m_can: fix alignment issues
Make sure that all accesses to the msg_sram
are 32-bit aligned.

Signed-off-by: Alexander Wachter <alexander@wachter.cloud>
2022-06-10 21:01:47 -07:00
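
The constraint described above, accessing message RAM only in 32-bit units, is the kind of thing a word-copy helper enforces. The sketch below is illustrative, not the driver's code.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative: copy whole words so every bus access is 32-bit aligned. */
static void copy_words(volatile uint32_t *dst, const uint32_t *src,
		       size_t n_words)
{
	for (size_t i = 0; i < n_words; i++) {
		dst[i] = src[i];
	}
}
```
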
Christopher Friedt
6dd320f791 release: update v2.7.2 release notes
* add backported bugfixes
* add backported fix for CVE-2021-3966

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2022-05-24 12:13:21 -04:00
Alexej Rempel
ecac165d36 logging: shell: fix shell stats null pointer dereference
If CONFIG_SHELL_STATS is disabled, shell->stats is NULL and
must not be dereferenced. Guard against it.

Fixes #44089

Signed-off-by: Alexej Rempel <Alexej.Rempel@de.eckerle-gruppe.com>
(cherry picked from commit 476d199752)
2022-05-24 12:12:22 -04:00
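
The fix is essentially a NULL guard; the types below are placeholders standing in for the shell structures, not the real definitions.

```c
#include <stddef.h>

struct shell_stats {
	unsigned int log_lost_cnt;     /* placeholder counter */
};

struct shell_ctx {
	struct shell_stats *stats;     /* NULL when CONFIG_SHELL_STATS=n */
};

/* Illustrative: skip the update entirely when stats are compiled out. */
static void count_lost_log(struct shell_ctx *sh)
{
	if (sh->stats == NULL) {
		return;
	}

	sh->stats->log_lost_cnt++;
}
```
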
Michał Narajowski
132d90d1bc tests/bluetooth/tester: Refactor Read UUID callback
ATT_READ_BY_TYPE_RSP returns Attribute Data List so handle it in the
application.

This affects GATT/CL/GAR/BV-03-C.

Signed-off-by: Michał Narajowski <michal.narajowski@codecoup.pl>
(cherry picked from commit 54cd46ac68)
2022-05-17 18:33:37 +02:00
Mark Holden
58356313ac coredump: adjust mem_region find in gdbstub
Adjust get_mem_region to not return a region when address == end
as there will be nothing to read there. Also, a subsequent region
may have that address as a start address and would be a more appropriate
selection.

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit d04ab82943)
2022-05-17 18:06:16 +02:00
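
The adjustment amounts to treating each region as a half-open interval, so an address equal to a region's end falls through to the next region. A minimal sketch with illustrative names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative: 'end' itself is not part of the region. */
static bool region_contains(uintptr_t start, uintptr_t end, uintptr_t addr)
{
	return (addr >= start) && (addr < end);
}
```
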
Piotr Pryga
99cfd3e4d7 Bluetooth: Controller: Fix per adv scheduling issue
The sw_switch implementation uses two parallel groups of
PPIs connecting radio and timer tasks and events.
The groups are used interchangeably: one is set up for
the following radio TX/RX event while the other is in use
(enabled).

A group should be disabled by the timer compare event that
starts the radio to TX/RX a PDU. The timer is responsible for
maintaining the TIFS/TMAFS. The disabled group collects
all PPIs required to maintain the TIFS/TMAFS. After
the time is reached, the radio is started and the group is
disabled. It will be enabled again by the software radio
switch during the next call.

If the group is not disabled, then it will work in parallel
with the other one. That causes issues in correctly maintaining
the instant when the radio should be started for the next TX/RX
event, e.g. the radio may be enabled too early.

In case PHY CODED was enabled and periodic advertising
included chained PDUs, which are transmitted back-to-back,
the group disable was missing. The missing case was the
sw_switch function being called with dir_curr and dir_next set
to SW_SWITCH_TX.

Signed-off-by: Piotr Pryga <piotr.pryga@nordicsemi.no>
2022-05-16 10:07:40 +02:00
Andrei Emeltchenko
780588bd33 edac: ibecc: Add support for EHL SKU13, SKU14, SKU15
Add support for missing EHL SKUs. The information about SKUs is
already public and available in Linux kernel:
https://github.com/torvalds/linux/blob/
38f80f42147ff658aff218edb0a88c37e58bf44f/drivers/edac/
igen6_edac.c#L197-L208

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
(cherry picked from commit f6069aa8fa)
2022-05-10 20:03:44 -04:00
213 changed files with 4203 additions and 1184 deletions

View File

@@ -9,7 +9,7 @@ on:
jobs:
backport:
runs-on: ubuntu-18.04
runs-on: ubuntu-20.04
name: Backport
steps:
- name: Backport

View File

@@ -0,0 +1,30 @@
name: Backport Issue Check
on:
pull_request_target:
branches:
- v*-branch
jobs:
backport:
name: Backport Issue Check
runs-on: ubuntu-22.04
steps:
- name: Check out source code
uses: actions/checkout@v3
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip
pip3 install -U pygithub
- name: Run backport issue checker
env:
GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
run: |
./scripts/release/list_backports.py \
-o ${{ github.event.repository.owner.login }} \
-r ${{ github.event.repository.name }} \
-b ${{ github.event.pull_request.base.ref }} \
-p ${{ github.event.pull_request.number }}

View File

@@ -8,7 +8,7 @@ on:
jobs:
bluetooth-test-results:
name: "Publish Bluetooth Test Results"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.event.workflow_run.conclusion != 'skipped'
steps:

View File

@@ -10,17 +10,13 @@ on:
- "soc/posix/**"
- "arch/posix/**"
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
bluetooth-test-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
bluetooth-test-build:
runs-on: ubuntu-latest
needs: bluetooth-test-prep
bluetooth-test:
runs-on: ubuntu-20.04
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -38,7 +34,7 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: west setup
run: |
@@ -55,7 +51,7 @@ jobs:
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: bluetooth-test-results
path: |
@@ -64,7 +60,7 @@ jobs:
- name: Upload Event Details
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: event
path: |

View File

@@ -2,22 +2,18 @@ name: Build with Clang/LLVM
on: pull_request_target
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
clang-build-prep:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
clang-build:
runs-on: zephyr_runner
needs: clang-build-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -30,12 +26,14 @@ jobs:
outputs:
report_needed: ${{ steps.twister.outputs.report_needed }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -72,7 +70,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: nashif/action-s3-cache@master
@@ -80,8 +78,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -100,12 +98,12 @@ jobs:
# We can limit scope to just what has changed
if [ -s testplan.csv ]; then
echo "::set-output name=report_needed::1";
echo "report_needed=1" >> $GITHUB_OUTPUT
# Full twister but with options based on changes
./scripts/twister --inline-logs -M -N -v --load-tests testplan.csv --retry-failed 2
else
# if nothing is run, skip reporting step
echo "::set-output name=report_needed::0";
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
@@ -114,7 +112,7 @@ jobs:
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.platform }})
path: twister-out/twister.xml

View File

@@ -4,22 +4,18 @@ on:
schedule:
- cron: '25 */3 * * 1-5'
jobs:
codecov-prep:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
codecov:
runs-on: zephyr_runner
needs: codecov-prep
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -32,8 +28,14 @@ jobs:
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
@@ -54,7 +56,7 @@ jobs:
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -63,8 +65,8 @@ jobs:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -94,7 +96,7 @@ jobs:
- name: Upload Coverage Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Coverage Data (Subset ${{ matrix.platform }})
path: coverage/reports/${{ matrix.platform }}.info
@@ -108,7 +110,7 @@ jobs:
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Download Artifacts
@@ -144,8 +146,8 @@ jobs:
set(MERGELIST "${MERGELIST} -a ${f}")
endif()
endforeach()
message("::set-output name=mergefiles::${MERGELIST}")
message("::set-output name=covfiles::${FILELIST}")
file(APPEND $ENV{GITHUB_OUTPUT} "mergefiles=${MERGELIST}\n")
file(APPEND $ENV{GITHUB_OUTPUT} "covfiles=${FILELIST}\n")
- name: Merge coverage files
run: |

View File

@@ -4,17 +4,17 @@ on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run coding guidelines checks on patch series (PR)
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip

View File

@@ -4,11 +4,11 @@ on: pull_request
jobs:
maintainer_check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Check MAINTAINERS file
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -20,7 +20,7 @@ jobs:
python3 ./scripts/get_maintainer.py path CMakeLists.txt
check_compliance:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
@@ -28,13 +28,13 @@ jobs:
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
@@ -72,7 +72,7 @@ jobs:
./scripts/ci/check_compliance.py -m Codeowners -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: compliance.xml

View File

@@ -12,15 +12,15 @@ on:
jobs:
get_version:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: install-pip
@@ -28,7 +28,7 @@ jobs:
pip3 install gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0

View File

@@ -6,10 +6,14 @@ name: Devicetree script tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
@@ -21,20 +25,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -42,7 +48,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -51,7 +57,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,10 +5,10 @@ name: Documentation Build
on:
schedule:
- cron: '0 */3 * * *'
- cron: '0 */3 * * *'
push:
tags:
- v*
- v*
pull_request:
paths:
- 'doc/**'
@@ -34,18 +34,23 @@ env:
jobs:
doc-build-html:
name: "Documentation Build (HTML)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 30
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen graphviz
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
@@ -69,38 +74,71 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W -j auto" make -C doc html
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
DOC_TARGET="html-fast"
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -W" make -C doc ${DOC_TARGET}
- name: compress-docs
run: |
tar cfJ html-output.tar.xz --directory=doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: html-output
path: html-output.tar.xz
- name: process-pr
if: github.event_name == 'pull_request'
run: |
REPO_NAME="${{ github.event.repository.name }}"
PR_NUM="${{ github.event.pull_request.number }}"
DOC_URL="https://builds.zephyrproject.io/${REPO_NAME}/pr/${PR_NUM}/docs/"
echo "${PR_NUM}" > pr_num
echo "::notice:: Documentation will be available shortly at: ${DOC_URL}"
- name: upload-pr-number
uses: actions/upload-artifact@v3
if: github.event_name == 'pull_request'
with:
name: pr_num
path: pr_num
doc-build-pdf:
name: "Documentation Build (PDF)"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container: texlive/texlive:latest
timeout-minutes: 30
concurrency:
group: doc-build-pdf-${{ github.ref }}
cancel-in-progress: true
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip ninja-build doxygen graphviz librsvg2-bin
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: pip-${{ hashFiles('scripts/requirements-doc.txt') }}
- name: setup-venv
run: |
python3 -m venv .venv
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: install-pip
run: |
pip3 install -U setuptools wheel pip
@@ -123,7 +161,7 @@ jobs:
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
with:
name: pdf-output
path: doc/_build/latex/zephyr.pdf

.github/workflows/doc-publish-pr.yml (new file, 63 lines)
View File

@@ -0,0 +1,63 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Documentation Publish (Pull Request)
on:
workflow_run:
workflows: ["Documentation Build"]
types:
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-20.04
if: |
github.event.workflow_run.event == 'pull_request' &&
github.event.workflow_run.conclusion == 'success' &&
github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download artifacts
uses: dawidd6/action-download-artifact@v2
with:
workflow: doc-build.yml
run_id: ${{ github.event.workflow_run.id }}
- name: Load PR number
run: |
echo "PR_NUM=$(<pr_num/pr_num)" >> $GITHUB_ENV
- name: Check PR number
id: check-pr
uses: carpentries/actions/check-valid-pr@v0.8
with:
pr: ${{ env.PR_NUM }}
sha: ${{ github.event.workflow_run.head_sha }}
- name: Validate PR number
if: steps.check-pr.outputs.VALID != 'true'
run: |
echo "ABORT: PR number validation failed!"
exit 1
- name: Uncompress HTML docs
run: |
tar xf html-output/html-output.tar.xz -C html-output
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ vars.AWS_BUILDS_ZEPHYR_PR_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_BUILDS_ZEPHYR_PR_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3
env:
HEAD_BRANCH: ${{ github.event.workflow_run.head_branch }}
run: |
aws s3 sync --quiet html-output/html \
s3://builds.zephyrproject.org/${{ github.event.repository.name }}/pr/${PR_NUM}/docs \
--delete

View File

@@ -2,23 +2,21 @@
# Copyright (c) 2021 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Publish Documentation
name: Documentation Publish
on:
workflow_run:
workflows: ["Documentation Build"]
branches:
- main
- v*
tags:
- v*
- main
- v*
types:
- completed
- completed
jobs:
doc-publish:
name: Publish Documentation
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: ${{ github.event.workflow_run.conclusion == 'success' }}
steps:
@@ -34,8 +32,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_DOCS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_DOCS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Upload to AWS S3

View File

@@ -6,13 +6,13 @@ on:
jobs:
check-errno:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
container:
image: zephyrprojectrtos/ci:v0.18.4
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Run errno.py
run: |

View File

@@ -13,19 +13,14 @@ on:
# same commit
- 'v*'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-tracking-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-tracking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-tracking-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -44,7 +39,7 @@ jobs:
sudo pip3 install -U setuptools wheel pip gitpython
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -58,8 +53,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.FOOTPRINT_AWS_KEY_ID }}
aws-secret-access-key: ${{ secrets.FOOTPRINT_AWS_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Record Footprint

View File

@@ -2,19 +2,14 @@ name: Footprint Delta
on: pull_request
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
footprint-cancel:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
footprint-delta:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
needs: footprint-cancel
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
@@ -25,16 +20,12 @@ jobs:
CLANG_ROOT_DIR: /usr/lib/llvm-12
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0

View File

@@ -14,13 +14,13 @@ env:
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/master/.github/workflows/issues-report-config.json
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/main/.github/workflows/issues-report-config.json
- name: install-packages
run: |
@@ -34,7 +34,7 @@ jobs:
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}
@@ -43,8 +43,8 @@ jobs:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-access-key-id: ${{ vars.AWS_TESTING_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_TESTING_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: Post Results

View File

@@ -7,6 +7,4 @@ jobs:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2.1.1
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'
- uses: actions/labeler@v4

View File

@@ -4,7 +4,7 @@ on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: Scan code for licenses
steps:
- name: Checkout the code
@@ -15,7 +15,7 @@ jobs:
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
uses: actions/upload-artifact@v3
with:
name: scancode
path: ./artifacts

View File

@@ -6,11 +6,11 @@ on:
jobs:
contribs:
runs-on: ubuntu-latest
runs-on: ubuntu-22.04
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}

View File

@@ -7,15 +7,16 @@ on:
jobs:
release:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
run: |
echo "VERSION=${GITHUB_REF#refs/tags/}" >> $GITHUB_OUTPUT
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
@@ -23,7 +24,7 @@ jobs:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@master
uses: actions/upload-artifact@v3
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx

View File

@@ -6,7 +6,7 @@ on:
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3

View File

@@ -2,29 +2,27 @@ name: Run tests with twister
on:
push:
branches:
- v2.7-branch
pull_request_target:
branches:
- v2.7-branch
schedule:
# Run at 00:00 on Saturday
- cron: '20 0 * * 6'
concurrency:
group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true
jobs:
twister-build-cleanup:
runs-on: ubuntu-latest
steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.6.0
with:
access_token: ${{ github.token }}
twister-build-prep:
runs-on: zephyr_runner
needs: twister-build-cleanup
runs-on: zephyr-runner-linux-x64-4xlarge
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
@@ -38,14 +36,16 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
if: github.event_name == 'pull_request_target'
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -102,18 +102,18 @@ jobs:
else
size=0
fi
echo "::set-output name=subset::${subset}";
echo "::set-output name=size::${size}";
echo "subset=${subset}" >> $GITHUB_OUTPUT
echo "size=${size}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr_runner
runs-on: zephyr-runner-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: zephyrprojectrtos/ci:v0.18.4
options: '--entrypoint /bin/bash'
volumes:
- /home/runners/zephyrproject:/github/cache/zephyrproject
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
@@ -128,13 +128,14 @@ jobs:
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Cleanup
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
# hotfix, until we have a better way to deal with existing data
rm -rf zephyr zephyr-testing
git clone --shared /github/cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -173,7 +174,7 @@ jobs:
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
message("::set-output name=repo::${repo2}")
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
@@ -182,8 +183,8 @@ jobs:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ secrets.CCACHE_S3_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.CCACHE_S3_SECRET_ACCESS_KEY }}
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
@@ -220,7 +221,7 @@ jobs:
- name: Upload Unit Test Results
if: always()
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: Unit Test Results (Subset ${{ matrix.subset }})
if-no-files-found: ignore
@@ -231,7 +232,7 @@ jobs:
twister-test-results:
name: "Publish Unit Tests Results"
needs: twister-build
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
# the build-and-test job might be skipped, we don't need to run this job then
if: success() || failure()

View File

@@ -5,12 +5,16 @@ name: Twister TestSuite
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
@@ -24,17 +28,17 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest]
os: [ubuntu-20.04]
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -5,11 +5,15 @@ name: Zephyr West Command Tests
on:
push:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
branches:
- v2.7-branch
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
@@ -22,20 +26,22 @@ jobs:
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-11, windows-2022]
exclude:
- os: macos-latest
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6
steps:
- name: checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
@@ -43,7 +49,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
@@ -52,7 +58,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
uses: actions/cache@v3
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}

View File

@@ -162,6 +162,13 @@ zephyr_compile_options(${OPTIMIZATION_FLAG})
# @Intent: Obtain compiler specific flags related to C++ that are not influenced by kconfig
zephyr_compile_options($<$<COMPILE_LANGUAGE:CXX>:$<TARGET_PROPERTY:compiler-cpp,required>>)
# Extra warnings options for twister run
if (CONFIG_COMPILER_WARNINGS_AS_ERRORS)
zephyr_compile_options($<$<COMPILE_LANGUAGE:C>:$<TARGET_PROPERTY:compiler,warnings_as_errors>>)
zephyr_compile_options($<$<COMPILE_LANGUAGE:ASM>:$<TARGET_PROPERTY:asm,warnings_as_errors>>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,warnings_as_errors>)
endif()
# @Intent: Obtain compiler specific flags for compiling under different ISO standards of C++
if(CONFIG_CPLUSPLUS)
# From kconfig choice, pick a single dialect.
@@ -627,7 +634,7 @@ if(CONFIG_64BIT)
endif()
if(CONFIG_TIMEOUT_64BIT)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t --split-type k_ticks_t)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}

View File

@@ -305,9 +305,13 @@ config NO_OPTIMIZATIONS
help
Compiler optimizations will be set to -O0 independently of other
options.
endchoice
config COMPILER_WARNINGS_AS_ERRORS
bool "Treat warnings as errors"
help
Turn on "warning as error" toolchain flags
config COMPILER_COLOR_DIAGNOSTICS
bool "Enable colored diganostics"
default y

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 2
PATCHLEVEL = 6
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -56,7 +56,7 @@ void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
* arc_cpu_wake_flag will protect arc_cpu_sp that
* only one slave cpu can read it per time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_sp = Z_KERNEL_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;

View File

@@ -27,6 +27,7 @@ endif # BOARD_BL5340_DVK_CPUAPP
config BUILD_WITH_TFM
default y if BOARD_BL5340_DVK_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -20,6 +20,7 @@ config BOARD
# force building with TF-M as the Secure Execution Environment.
config BUILD_WITH_TFM
default y if TRUSTED_EXECUTION_NONSECURE
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if GPIO

View File

@@ -4,7 +4,10 @@ type: mcu
arch: arm
ram: 4096
flash: 4096
simulation: qemu
# TFM is not supported by default in the Zephyr LTS release.
# Excluding this board's simulator to avoid CI failures.
#
#simulation: qemu
toolchain:
- gnuarmemb
- zephyr

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF5340DK_NRF5340_CPUAPP_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -13,6 +13,7 @@ config BOARD
config BUILD_WITH_TFM
default y if BOARD_NRF9160DK_NRF9160_NS
depends on ZEPHYR_TRUSTED_FIRMWARE_M_MODULE
if BUILD_WITH_TFM

View File

@@ -186,8 +186,9 @@ To operate bluetooth on Nucleo WB55RG, Cortex-M0 core should be flashed with
a valid STM32WB Coprocessor binaries (either 'Full stack' or 'HCI Layer').
These binaries are delivered in STM32WB Cube packages, under
Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x/
To date, interoperability and backward compatibility has been tested and is
guaranteed up to version 1.5 of STM32Cube package releases.
For compatibility information with the various versions of these binaries,
please check `modules/hal/stm32/lib/stm32wb/hci/README <https://github.com/zephyrproject-rtos/hal_stm32/blob/main/lib/stm32wb/hci/README>`__
in the hal_stm32 repo.
Connections and IOs
===================

View File

@@ -132,6 +132,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp2a "")
set_property(TARGET compiler-cpp PROPERTY dialect_cpp20 "")
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b "")
# Flags to set extra warnings (ARCMWDT asm can't recognize --fatal-warnings. Skip it)
set_property(TARGET compiler PROPERTY warnings_as_errors -Werror)
set_property(TARGET asm PROPERTY warnings_as_errors -Werror)
# Disable exceptions flag in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions "-fno-exceptions")

View File

@@ -65,6 +65,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp2a)
set_property(TARGET compiler-cpp PROPERTY dialect_cpp20)
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b)
# Extra warnings options for twister run
set_property(TARGET compiler PROPERTY warnings_as_errors)
set_property(TARGET asm PROPERTY warnings_as_errors)
# Flag for disabling exceptions in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions)

View File

@@ -137,6 +137,10 @@ set_property(TARGET compiler-cpp PROPERTY dialect_cpp20 "-std=c++20"
set_property(TARGET compiler-cpp PROPERTY dialect_cpp2b "-std=c++2b"
"-Wno-register" "-Wno-volatile")
# Flags to set extra warnings (ARCMWDT asm can't recognize --fatal-warnings. Skip it)
set_property(TARGET compiler PROPERTY warnings_as_errors -Werror)
set_property(TARGET asm PROPERTY warnings_as_errors -Werror)
# Disable exceptions flag in C++
set_property(TARGET compiler-cpp PROPERTY no_exceptions "-fno-exceptions")

View File

@@ -53,7 +53,7 @@ list(REMOVE_DUPLICATES
# Drop support for NOT CONFIG_HAS_DTS perhaps?
if(EXISTS ${DTS_SOURCE})
set(SUPPORTS_DTS 1)
if(BOARD_REVISION AND EXISTS ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
if(DEFINED BOARD_REVISION AND EXISTS ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
list(APPEND DTS_SOURCE ${BOARD_DIR}/${BOARD}_${BOARD_REVISION_STRING}.overlay)
endif()
else()

View File

@@ -518,7 +518,7 @@ function(zephyr_library_cc_option)
string(MAKE_C_IDENTIFIER check${option} check)
zephyr_check_compiler_flag(C ${option} ${check})
if(${check})
if(${${check}})
zephyr_library_compile_options(${option})
endif()
endforeach()
@@ -1003,9 +1003,9 @@ endfunction()
function(zephyr_check_compiler_flag lang option check)
# Check if the option is covered by any hardcoded check before doing
# an automated test.
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" check exists)
zephyr_check_compiler_flag_hardcoded(${lang} "${option}" _${check} exists)
if(exists)
set(check ${check} PARENT_SCOPE)
set(${check} ${_${check}} PARENT_SCOPE)
return()
endif()
@@ -1110,11 +1110,11 @@ function(zephyr_check_compiler_flag_hardcoded lang option check exists)
# because they would produce a warning instead of an error during
# the test. Exclude them by toolchain-specific blocklist.
if((${lang} STREQUAL CXX) AND ("${option}" IN_LIST CXX_EXCLUDED_OPTIONS))
set(check 0 PARENT_SCOPE)
set(exists 1 PARENT_SCOPE)
set(${check} 0 PARENT_SCOPE)
set(${exists} 1 PARENT_SCOPE)
else()
# There does not exist a hardcoded check for this option.
set(exists 0 PARENT_SCOPE)
set(${exists} 0 PARENT_SCOPE)
endif()
endfunction(zephyr_check_compiler_flag_hardcoded)
@@ -1862,7 +1862,7 @@ function(check_set_linker_property)
zephyr_check_compiler_flag(C "" ${check})
set(CMAKE_REQUIRED_FLAGS ${SAVED_CMAKE_REQUIRED_FLAGS})
if(${check})
if(${${check}})
set_property(TARGET ${LINKER_PROPERTY_TARGET} ${APPEND} PROPERTY ${property} ${option})
endif()
endfunction()

View File

@@ -1,6 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
if(ZEPHYR_SDK_HOST_TOOLS)
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
endif()
# dtc is an optional dependency
find_program(

View File

@@ -163,12 +163,23 @@ endforeach()
unset(EXTRA_KCONFIG_OPTIONS)
get_cmake_property(cache_variable_names CACHE_VARIABLES)
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if("${name}" MATCHES "^CLI_CONFIG_")
# Variable was set by user in earlier invocation, let's append to extra
# config unless a new value has been given.
string(REGEX REPLACE "^CLI_" "" org_name ${name})
if(NOT DEFINED ${org_name})
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${org_name}=${${name}}"
)
endif()
elseif("${name}" MATCHES "^CONFIG_")
# When a cache variable starts with 'CONFIG_', it is assumed to be
# a Kconfig symbol assignment from the CMake command line.
set(EXTRA_KCONFIG_OPTIONS
"${EXTRA_KCONFIG_OPTIONS}\n${name}=${${name}}"
)
set(CLI_${name} "${${name}}")
list(APPEND cli_config_list ${name})
endif()
endforeach()
@@ -296,21 +307,20 @@ add_custom_target(config-twister DEPENDS ${DOTCONFIG})
# Remove the CLI Kconfig symbols from the namespace and
# CMakeCache.txt. If the symbols end up in DOTCONFIG they will be
# re-introduced to the namespace through 'import_kconfig'.
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
unset(${name})
unset(${name} CACHE)
endif()
foreach (name ${cli_config_list})
unset(${name})
unset(${name} CACHE)
endforeach()
# Parse the lines prefixed with CONFIG_ in the .config file from Kconfig
import_kconfig(CONFIG_ ${DOTCONFIG})
# Re-introduce the CLI Kconfig symbols that survived
foreach (name ${cache_variable_names})
if("${name}" MATCHES "^CONFIG_")
if(DEFINED ${name})
set(${name} ${${name}} CACHE STRING "")
endif()
# Cache the CLI Kconfig symbols that survived through Kconfig, prefixed with CLI_.
# Remove those that might have changed compared to earlier runs, if they no longer appear.
foreach (name ${cli_config_list})
if(DEFINED ${name})
set(CLI_${name} ${CLI_${name}} CACHE INTERNAL "")
else()
unset(CLI_${name} CACHE)
endif()
endforeach()

View File

@@ -0,0 +1,4 @@
# SPDX-License-Identifier: Apache-2.0
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors -Wl,--fatal-warnings)

View File

@@ -3,6 +3,9 @@ if (NOT CONFIG_COVERAGE_GCOV)
set_property(TARGET linker PROPERTY coverage --coverage)
endif()
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY ld_extra_warning_options -Wl,--fatal-warnings)
# ld/clang linker flags for sanitizing.
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -fsanitize=address)

View File

@@ -7,6 +7,9 @@ if (NOT CONFIG_COVERAGE_GCOV)
set_property(TARGET linker PROPERTY coverage -lgcov)
endif()
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors -Wl,--fatal-warnings)
# ld/gcc linker flags for sanitizing.
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -lasan)
check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_address -fsanitize=address)

View File

@@ -14,3 +14,6 @@ check_set_linker_property(TARGET linker APPEND PROPERTY sanitize_undefined)
# If memory reporting is a post build command, please use
# cmake/bintools/bintools.cmake instead.
check_set_linker_property(TARGET linker PROPERTY memusage)
# Extra warnings options for twister run
set_property(TARGET linker PROPERTY warnings_as_errors)

View File

@@ -1,8 +0,0 @@
# Zephyr 0.11 SDK Toolchain
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
config TOOLCHAIN_ZEPHYR_0_11
def_bool y
select HAS_NEWLIB_LIBC_NANO if (ARC || ARM || RISCV)

View File

@@ -1,34 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(TOOLCHAIN_HOME ${ZEPHYR_SDK_INSTALL_DIR})
set(COMPILER gcc)
set(LINKER ld)
set(BINTOOLS gnu)
# Find some toolchain that is distributed with this particular SDK
file(GLOB toolchain_paths
LIST_DIRECTORIES true
${TOOLCHAIN_HOME}/xtensa/*/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-eabi
)
if(toolchain_paths)
list(GET toolchain_paths 0 some_toolchain_path)
get_filename_component(one_toolchain_root "${some_toolchain_path}" DIRECTORY)
get_filename_component(one_toolchain "${some_toolchain_path}" NAME)
set(CROSS_COMPILE_TARGET ${one_toolchain})
set(SYSROOT_TARGET ${one_toolchain})
endif()
if(NOT CROSS_COMPILE_TARGET)
message(FATAL_ERROR "Unable to find 'x86_64-zephyr-elf' or any other architecture in ${TOOLCHAIN_HOME}")
endif()
set(CROSS_COMPILE ${one_toolchain_root}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
set(SYSROOT_DIR ${one_toolchain_root}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(TOOLCHAIN_HAS_NEWLIB ON CACHE BOOL "True if toolchain supports newlib")

View File

@@ -1,12 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(HOST_TOOLS_HOME ${ZEPHYR_SDK_INSTALL_DIR}/sysroots/${TOOLCHAIN_ARCH}-pokysdk-linux)
# Path used for searching by the find_*() functions, with appropriate
# suffixes added. Ensures that the SDK's host tools will be found when
# we call, e.g. find_program(QEMU qemu-system-x86)
list(APPEND CMAKE_PREFIX_PATH ${HOST_TOOLS_HOME}/usr)
# TODO: Use find_* somehow for these as well?
set_ifndef(QEMU_BIOS ${HOST_TOOLS_HOME}/usr/share/qemu)
set_ifndef(OPENOCD_DEFAULT_PATH ${HOST_TOOLS_HOME}/usr/share/openocd/scripts)

View File

@@ -1,53 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(CROSS_COMPILE_TARGET_arm arm-zephyr-eabi)
set(CROSS_COMPILE_TARGET_arm64 aarch64-zephyr-elf)
set(CROSS_COMPILE_TARGET_nios2 nios2-zephyr-elf)
set(CROSS_COMPILE_TARGET_riscv riscv64-zephyr-elf)
set(CROSS_COMPILE_TARGET_mips mipsel-zephyr-elf)
set(CROSS_COMPILE_TARGET_xtensa xtensa-zephyr-elf)
set(CROSS_COMPILE_TARGET_arc arc-zephyr-elf)
set(CROSS_COMPILE_TARGET_x86 x86_64-zephyr-elf)
set(CROSS_COMPILE_TARGET_sparc sparc-zephyr-elf)
set(CROSS_COMPILE_TARGET ${CROSS_COMPILE_TARGET_${ARCH}})
set(SYSROOT_TARGET ${CROSS_COMPILE_TARGET})
if("${ARCH}" STREQUAL "xtensa")
# Xtensa GCC needs a different toolchain per SOC
if("${SOC_SERIES}" STREQUAL "cavs_v15")
set(SR_XT_TC_SOC intel_apl_adsp)
elseif("${SOC_SERIES}" STREQUAL "cavs_v18")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v20")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v25")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "baytrail_adsp")
set(SR_XT_TC_SOC intel_byt_adsp)
elseif("${SOC_SERIES}" STREQUAL "broadwell_adsp")
set(SR_XT_TC_SOC intel_bdw_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8")
set(SR_XT_TC_SOC nxp_imx_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8m")
set(SR_XT_TC_SOC nxp_imx8m_adsp)
else()
message(FATAL_ERROR "Not compiler set for SOC_SERIES ${SOC_SERIES}")
endif()
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
else()
# Non-Xtensa SDK toolchains follow a simpler convention
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
if("${ARCH}" STREQUAL "x86")
if(CONFIG_X86_64)
list(APPEND TOOLCHAIN_C_FLAGS -m64)
list(APPEND TOOLCHAIN_LD_FLAGS -m64)
else()
list(APPEND TOOLCHAIN_C_FLAGS -m32)
list(APPEND TOOLCHAIN_LD_FLAGS -m32)
endif()
endif()

View File

@@ -1,13 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/generic.cmake)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR})
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
endif()
message(STATUS "Found toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,55 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
# This is the minimum required version for Zephyr to work (Old style)
set(REQUIRED_SDK_VER 0.11.1)
cmake_host_system_information(RESULT TOOLCHAIN_ARCH QUERY OS_PLATFORM)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
if(NOT DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Until https://github.com/zephyrproject-rtos/zephyr/issues/4912 is
# resolved we use ZEPHYR_SDK_INSTALL_DIR to determine whether the user
# wants to use the Zephyr SDK or not.
return()
endif()
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
if(NOT DEFINED SDK_VERSION)
if(ZEPHYR_TOOLCHAIN_VARIANT AND ZEPHYR_SDK_INSTALL_DIR)
# Manual detection for Zephyr SDK 0.11.1 and 0.11.2 for backward compatibility.
set(sdk_version_path ${ZEPHYR_SDK_INSTALL_DIR}/sdk_version)
if(NOT (EXISTS ${sdk_version_path}))
message(FATAL_ERROR
"The file '${ZEPHYR_SDK_INSTALL_DIR}/sdk_version' was not found. \
Is ZEPHYR_SDK_INSTALL_DIR=${ZEPHYR_SDK_INSTALL_DIR} misconfigured?")
endif()
# Read version as published by the SDK
file(READ ${sdk_version_path} SDK_VERSION_PRE1)
# Remove any pre-release data, for example 0.10.0-beta4 -> 0.10.0
string(REGEX REPLACE "-.*" "" SDK_VERSION_PRE2 ${SDK_VERSION_PRE1})
# Strip any trailing spaces/newlines from the version string
string(STRIP ${SDK_VERSION_PRE2} SDK_VERSION)
string(REGEX MATCH "([0-9]*).([0-9]*)" SDK_MAJOR_MINOR ${SDK_VERSION})
string(REGEX MATCH "([0-9]+)\.([0-9]+)\.([0-9]+)" SDK_MAJOR_MINOR_MICRO ${SDK_VERSION})
#at least 0.0.0
if(NOT SDK_MAJOR_MINOR_MICRO)
message(FATAL_ERROR "sdk version: ${SDK_MAJOR_MINOR_MICRO} improper format.
Expected format: x.y.z
Check whether the Zephyr SDK was installed correctly.
")
endif()
endif()
endif()
message(STATUS "Using toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/host-tools.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
endif()
message(STATUS "Found host-tools: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,18 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
elseif(("${ARCH}" STREQUAL "sparc") AND (${SDK_VERSION} VERSION_LESS 0.12))
# SDK 0.11.3, 0.11.4 does not have SPARC target support.
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)
# Workaround, FIXME: Waiting for new SDK.
if("${ARCH}" STREQUAL "xtensa")
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
endif()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)

View File

@@ -90,10 +90,6 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
if (NOT Zephyr-sdk_CONSIDERED_VERSIONS)
set(error_msg "ZEPHYR_TOOLCHAIN_VARIANT not specified and no Zephyr SDK is installed.\n")
string(APPEND error_msg "Please set ZEPHYR_TOOLCHAIN_VARIANT to the toolchain to use or install the Zephyr SDK.")
if(NOT ZEPHYR_TOOLCHAIN_VARIANT AND NOT ZEPHYR_SDK_INSTALL_DIR)
set(error_note "Note: If you are using Zephyr SDK 0.11.1 or 0.11.2, remember to set ZEPHYR_SDK_INSTALL_DIR and ZEPHYR_TOOLCHAIN_VARIANT")
endif()
else()
# Note: When CMake minimum version becomes >= 3.17, change this loop into:
# foreach(version config IN ZIP_LISTS Zephyr-sdk_CONSIDERED_VERSIONS Zephyr-sdk_CONSIDERED_CONFIGS)
@@ -116,11 +112,17 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
message(FATAL_ERROR "${error_msg}
The Zephyr SDK can be downloaded from:
https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}/zephyr-sdk-${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}-setup.run
${error_note}
")
endif()
if(DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
# Use the Zephyr SDK host-tools.
set(ZEPHYR_SDK_HOST_TOOLS TRUE)
endif()
if(CMAKE_SCRIPT_MODE_FILE)
if("${FORMAT}" STREQUAL "json")
set(json "{\"ZEPHYR_TOOLCHAIN_VARIANT\" : \"${ZEPHYR_TOOLCHAIN_VARIANT}\", ")

View File

@@ -253,8 +253,8 @@ graphviz_dot_args = [
# -- Linkcheck options ----------------------------------------------------
extlinks = {
"jira": ("https://jira.zephyrproject.org/browse/%s", ""),
"github": ("https://github.com/zephyrproject-rtos/zephyr/issues/%s", ""),
"jira": ("https://jira.zephyrproject.org/browse/%s", "JIRA %s"),
"github": ("https://github.com/zephyrproject-rtos/zephyr/issues/%s", "GitHub #%s"),
}
linkcheck_timeout = 30

View File

@@ -2,24 +2,441 @@
.. _zephyr_2.7:
.. _zephyr_2.7.1:
.. _zephyr_2.7.6:
Zephyr 2.7.1
Zephyr 2.7.6
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.5 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`32145` - use ``k_thread_foreach_unlocked()`` with shell callbacks
* :github:`56604` - drivers: nrf: rtc: make uptime consistent for app booted from v3.x mcuboot
* :github:`25917` - bluetooth: fix deadlock with tx of acl data and hci commands
* :github:`47649` - bluetooth: release att notification buffer after reconnection
* :github:`43718` - bluetooth: bt_conn: ensure tx buffers can be allocated within timeout
* :github:`60707` - canbus: isotp: seal context buffer memory leaks
* :github:`60904` - drivers: spi_nor: make erase operation more opportunistic
* :github:`61451` - drivers: can: stm32: correct timing_max parameters
* :github:`61501` - canbus: isotp: convert SF length check from ``ASSERT`` to runtime check
* :github:`61544` - drivers: ieee802154_nrf5: add payload length check on TX
* :github:`61784` - bluetooth: controller: check minimum sizes of adv PDUs
* :github:`62003` - drivers: dma: sam: implement xdmac ``get_status()`` API
* :github:`62701` - can: rework the table lookup code in ``can_dlc_to_bytes()``
* :github:`63544` - drivers: can: mcan: move RF0L and RF1L to line 1
* :github:`63835` - net_mgmt: return ``EMSGSIZE`` if buffer passed to ``recvfrom()`` is too small
* :github:`63965` - logging: fix handling of ``CONFIG_LOG_BLOCK_IN_THREAD_TIMEOUT_MS``
* :github:`64398` - drivers: can: be consistent in ``filter_id`` checks when removing rx filters
* :github:`65548` - cmake: modules: dts: fix board revision 0 overlay
* :github:`66500` - toolchain: support ``CONFIG_COMPILER_WARNINGS_AS_ERRORS``
* :github:`66888` - net: ipv6: drop received packets sent by the same interface
* :github:`67692` - i2c: dw: fix integer overflow in ``i2c_dw_data_ask()``
* :github:`69167` - fs: fuse: avoid possible buffer overflow
* :github:`69637` - userspace: additional checks in ``K_SYSCALL_MEMORY``
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* (N/A)
* CVE-2023-4263 `Zephyr project bug tracker GHSA-rf6q-rhhp-pqhf
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-rf6q-rhhp-pqhf>`_
* CVE-2023-4424: `Zephyr project bug tracker GHSA-j4qm-xgpf-qjw3
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-j4qm-xgpf-qjw3>`_
* CVE-2023-5779 `Zephyr project bug tracker GHSA-7cmj-963q-jj47
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-7cmj-963q-jj47>`_
* CVE-2023-6249 `Zephyr project bug tracker GHSA-32f5-3p9h-2rqc
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-32f5-3p9h-2rqc>`_
* CVE-2023-6881 `Zephyr project bug tracker GHSA-mh67-4h3q-p437
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-mh67-4h3q-p437>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.5:
Zephyr 2.7.5
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.4 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`41111` - utils: tmcvt: fix integer overflow after 6.4 days with ``gettimeofday()`` and ``z_tmcvt()``
* :github:`51663` - tests: kernel: increase coverage for kernel and mmu tests
* :github:`53124` - cmake: fix argument passing in ``zephyr_check_compiler_flag()`` cmake function
* :github:`53315` - net: tcp: fix possible underflow in ``tcp_flags()``.
* :github:`53981` - scripts: fixes for ``gen_syscalls`` and ``gen_app_partitions``
* :github:`53983` - init: correct early init time calls to ``k_current_get()`` when TLS is enabled
* :github:`54140` - net: fix BUS FAULT when running nmap towards echo_async sample
* :github:`54325` - coredump: support out-of-tree coredump backend definition
* :github:`54386` - kernel: correct SMP scheduling with more than 2 CPUs
* :github:`54527` - tests: kernel: remove faulty test from tests/kernel/poll
* :github:`55019` - bluetooth: initialize backport of #54905 failed
* :github:`55068` - net: ipv6: validate arguments in ``net_if_ipv6_set_reachable_time()``
* :github:`55069` - net: core: ``net pkt`` shell command missing input validation
* :github:`55323` - logging: fix userspace runtime filtering
* :github:`55490` - cxx: fix compile error in C++ project for bad flags ``-Wno-pointer-sign`` and ``-Werror=implicit-int``
* :github:`56071` - security: MbedTLS: update to v2.28.3
* :github:`56729` - posix: SCHED_RR valid thread priorities
* :github:`57210` - drivers: pcie: endpoint: pcie_ep_iproc: correct use of optional devicetree binding
* :github:`57419` - tests: dma: support 64-bit addressing in tests
* :github:`57710` - posix: support building eventfd on arm-clang
mbedTLS
*******
mbedTLS has been moved to the 2.28.x series (2.28.3, specifically). This is an LTS release
that will be supported with bug fixes and security fixes until the end of 2024.
Detailed information can be found in:
https://github.com/Mbed-TLS/mbedtls/releases/tag/v2.28.3
https://github.com/zephyrproject-rtos/zephyr/issues/56071
This version is incompatible with TF-M, and because of this TF-M is no longer
supported in the Zephyr LTS. If TF-M is required, it can be manually added back by
changing the mbedTLS revision in ``west.yml`` to the previous one
(5765cb7f75a9973ae9232d438e361a9d7bbc49e7). This should be carefully assessed
by a security expert to ensure that the known vulnerabilities in that version
don't affect the product.
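As a minimal sketch of that change (the project name and path shown here are
assumptions about the manifest layout and should be verified against the actual
manifest), the entry in ``west.yml`` would look roughly as follows; running
``west update`` afterwards fetches the pinned revision::

   # Hypothetical excerpt of the Zephyr west manifest; only the revision
   # line changes to restore the previous mbedTLS version.
   manifest:
     projects:
       - name: mbedtls
         path: modules/crypto/mbedtls
         revision: 5765cb7f75a9973ae9232d438e361a9d7bbc49e7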
Vulnerabilities addressed in this update:
* MBEDTLS_AESNI_C, which is enabled by default, was silently ignored on
builds that couldn't compile the GCC-style assembly implementation
(most notably builds with Visual Studio), leaving them vulnerable to
timing side-channel attacks. There is now an intrinsics-based AES-NI
implementation as a fallback for when the assembly one cannot be used.
* Fix potential heap buffer overread and overwrite in DTLS if
MBEDTLS_SSL_DTLS_CONNECTION_ID is enabled and
MBEDTLS_SSL_CID_IN_LEN_MAX > 2 * MBEDTLS_SSL_CID_OUT_LEN_MAX.
* An adversary with access to precise enough information about memory
accesses (typically, an untrusted operating system attacking a secure
enclave) could recover an RSA private key after observing the victim
performing a single private-key operation if the window size used for the
exponentiation was 3 or smaller. Found and reported by Zili KOU,
Wenjian HE, Sharad Sinha, and Wei ZHANG. See "Cache Side-channel Attacks
and Defenses of the Sliding Window Algorithm in TEEs" - Design, Automation
and Test in Europe 2023.
* Zeroize dynamically-allocated buffers used by the PSA Crypto key storage
module before freeing them. These buffers contain secret key material, and
could thus potentially leak the key through freed heap.
* Fix a potential heap buffer overread in TLS 1.2 server-side when
MBEDTLS_USE_PSA_CRYPTO is enabled, an opaque key (created with
mbedtls_pk_setup_opaque()) is provisioned, and a static ECDH ciphersuite
is selected. This may result in an application crash or potentially an
information leak.
* Fix a buffer overread in DTLS ClientHello parsing in servers with
MBEDTLS_SSL_DTLS_CLIENT_PORT_REUSE enabled. An unauthenticated client
or a man-in-the-middle could cause a DTLS server to read up to 255 bytes
after the end of the SSL input buffer. The buffer overread only happens
when MBEDTLS_SSL_IN_CONTENT_LEN is less than a threshold that depends on
the exact configuration: 258 bytes if using mbedtls_ssl_cookie_check(),
and possibly up to 571 bytes with a custom cookie check function.
Reported by the Cybeats PSI Team.
* Zeroize several intermediate variables used to calculate the expected
value when verifying a MAC or AEAD tag. This hardens the library in
case the value leaks through a memory disclosure vulnerability. For
example, a memory disclosure vulnerability could have allowed a
man-in-the-middle to inject fake ciphertext into a DTLS connection.
* In psa_cipher_generate_iv() and psa_cipher_encrypt(), do not read back
from the output buffer. This fixes a potential policy bypass or decryption
oracle vulnerability if the output buffer is in memory that is shared with
an untrusted application.
* Fix a double-free that happened after mbedtls_ssl_set_session() or
mbedtls_ssl_get_session() failed with MBEDTLS_ERR_SSL_ALLOC_FAILED
(out of memory). After that, calling mbedtls_ssl_session_free()
and mbedtls_ssl_free() would cause an internal session buffer to
be free()'d twice.
* Fix a bias in the generation of finite-field Diffie-Hellman-Merkle (DHM)
private keys and of blinding values for DHM and elliptic curves (ECP)
computations.
* Fix a potential side channel vulnerability in ECDSA ephemeral key generation.
An adversary who is capable of very precise timing measurements could
learn partial information about the leading bits of the nonce used for the
signature, allowing the recovery of the private key after observing a
large number of signature operations. This completes a partial fix in
Mbed TLS 2.20.0.
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2023-0397: `Zephyr project bug tracker GHSA-wc2h-h868-q7hj
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-wc2h-h868-q7hj>`_
* CVE-2023-0779: `Zephyr project bug tracker GHSA-9xj8-6989-r549
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-9xj8-6989-r549>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.4:
Zephyr 2.7.4
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.3 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`25417` - net: socket: socketpair: check for ISR context
* :github:`41012` - irq_enable() doesn't support enabling NVIC IRQ numbers greater than 127
* :github:`44070` - west spdx TypeError: 'NoneType' object is not iterable
* :github:`46072` - subsys/hawkBit: Debug log error in hawkbit example "CONFIG_LOG_STRDUP_MAX_STRING"
* :github:`48056` - Possible null pointer dereference after k_mutex_lock times out
* :github:`49102` - hawkbit - dns name randomly not resolved
* :github:`49139` - can't run west or DT tests on windows / py 3.6
* :github:`49564` - Newer versions of pylink are not supported in latest zephyr 2.7 release
* :github:`49569` - Backport cmake string cache fix to v2.7 branch
* :github:`50221` - tests: debug: test case subsys/debug/coredump failed on acrn_ehl_crb on branch v2.7
* :github:`50467` - Possible memory corruption on ARC when userspace is enabled
* :github:`50468` - Incorrect Z_THREAD_STACK_BUFFER in arch_start_cpu for Xtensa
* :github:`50961` - drivers: counter: Update counter_set_channel_alarm documentation
* :github:`51714` - Bluetooth: Application with buffer that cannot unref it in disconnect handler leads to advertising issues
* :github:`51776` - POSIX API is not portable across arches
* :github:`52247` - mgmt: mcumgr: image upload, then image erase, then image upload does not restart upload from start
* :github:`52517` - lib: posix: sleep() does not return the number of seconds left if interrupted
* :github:`52518` - lib: posix: usleep() does not follow the POSIX spec
* :github:`52542` - lib: posix: make sleep() and usleep() standards-compliant
* :github:`52591` - mcumgr user data size out of sync with net buffer user data size
* :github:`52829` - kernel/sched: Fix SMP race on pend
* :github:`53088` - Unable to change initialization priority of logging subsys
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2022-2741: `Zephyr project bug tracker GHSA-hx5v-j59q-c3j8
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hx5v-j59q-c3j8>`_
* CVE-2022-1841: `Zephyr project bug tracker GHSA-5c3j-p8cr-2pgh
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-5c3j-p8cr-2pgh>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.3:
Zephyr 2.7.3
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.2 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`41074` - can_mcan_send sends corrupted CAN frames with a byte-by-byte memcpy implementation
* :github:`43479` - Bluetooth: Controller: Fix per adv scheduling issue
* :github:`43694` - drivers: spi: stm32 spi with dma must enable cs after periph
* :github:`44089` - logging: shell backend: null-deref when logs are dropped
* :github:`45341` - Add new EHL SKUs for IBECC
* :github:`45529` - GdbStub get_mem_region bugfix
* :github:`46621` - drivers: i2c: Infinite recursion in driver unregister function
* :github:`46698` - sm351 driver faults when using global thread
* :github:`46706` - add missing checks for segment number
* :github:`46757` - Bluetooth: Controller: Missing validation of unsupported PHY when performing PHY update
* :github:`46807` - lib: posix: semaphore: use consistent timebase in sem_timedwait
* :github:`46822` - L2CAP disconnected packet timing in ecred reconf function
* :github:`46994` - Incorrect Xtensa toolchain path resolution
* :github:`47356` - cpp: global static object initialisation may fail for MMU and MPU platforms
* :github:`47609` - posix: pthread: descriptor leak with pthread_join
* :github:`47955` - drivers: can: various RTR fixes
* :github:`48249` - boards: nucleo_wb55rg: documentation BLE binary compatibility issue
* :github:`48271` - net: Possible net_pkt leak in ipv6 multicast forwarding
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2022-2741: Under embargo until 2022-10-14
* CVE-2022-1042: `Zephyr project bug tracker GHSA-j7v7-w73r-mm5x
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-j7v7-w73r-mm5x>`_
* CVE-2022-1041: `Zephyr project bug tracker GHSA-p449-9hv9-pj38
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-p449-9hv9-pj38>`_
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.2:
Zephyr 2.7.2
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.1 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`23419` - posix: clock: No thread safety clock_getime / clock_settime
* :github:`30367` - TCP2 does not send our MSS to peer
* :github:`37389` - nucleo_g0b1re: Swapping image in mcuboot results in hard fault and softbricks the device
* :github:`38268` - Multiple defects in "Multi Producer Single Consumer Packet Buffer" library
* :github:`38576` - net shell: self-connecting to TCP might lead to a crash
* :github:`39184` - HawkBit hash mismatch
* :github:`39242` - net: sockets: Zephyr Fatal in dns_resolve_cb if dns request was attempted in offline state
* :github:`39399` - linker: Missing align __itcm_load_start / __dtcm_data_load_start linker symbols
* :github:`39608` - stm32: lpuart: 9600 baudrate doesn't work
* :github:`39609` - spi: slave: division by zero in timeout calculation
* :github:`39660` - poll() not notified when a TLS/TCP connection is closed without TLS close_notify
* :github:`39687` - sensor: qdec_nrfx: PM callback has incorrect signature
* :github:`39774` - modem: uart mux reading optimization never used
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`40163` - Use correct clock frequency for systick+DWT
* :github:`40464` - Dereferencing NULL with getsockname() on TI Simplelink Platform
* :github:`40578` - MODBUS RS-485 transceiver support broken on several platforms due to DE race condition
* :github:`40614` - poll: the code judgment condition is always true
* :github:`40640` - drivers: usb_dc_native_posix: segfault when using composite USB device
* :github:`40730` - More power supply modes on STM32H7XX
* :github:`40775` - stm32: multi-threading broken after #40173
* :github:`40795` - Timer signal thread execution loop break SMP on ARM64
* :github:`40925` - mesh_badge not working reel_board_v2
* :github:`40985` - net: icmpv6: Add support for Route Info option in Router Advertisement
* :github:`41026` - LoRa: sx126x: DIO1 interrupt left enabled in sleep mode
* :github:`41077` - console: gsm_mux: could not send more than 128 bytes of data on dlci
* :github:`41089` - power modes for STM32H7
* :github:`41095` - libc: newlib: 'gettimeofday' causes stack overflow on non-POSIX builds
* :github:`41237` - drivers: ieee802154_dw1000: use dedicated workqueue
* :github:`41240` - logging can get messed up when messages are dropped
* :github:`41284` - pthread_cond_wait return value incorrect
* :github:`41339` - stm32, Unable to read UART while checking from Framing error.
* :github:`41488` - Stall logging on nrf52840
* :github:`41499` - drivers: iwdg: stm32: WDT_OPT_PAUSE_HALTED_BY_DBG might not work
* :github:`41503` - including net/socket.h fails with redefinition of struct zsock_timeval (sometimes :-) )
* :github:`41529` - documentation: generate Doxygen tag file
* :github:`41536` - Backport STM32 SMPS Support to v2.7.0
* :github:`41582` - stm32h7: CSI as PLL source is broken
* :github:`41683` - http_client: Unreliable rsp->body_start pointer
* :github:`41915` - regression: Build fails after switching logging to V2
* :github:`41942` - k_delayable_work being used as k_work in work's handler
* :github:`41952` - Log timestamp overflows when using LOGv2
* :github:`42164` - tests/bluetooth/tester broken after switch to logging v2
* :github:`42271` - drivers: can: m_can: The can_set_bitrate() function doesn't work.
* :github:`42299` - spi: nRF HAL driver asserts when PM is used
* :github:`42373` - add k_spin_lock() to doxygen prior to v3.0 release
* :github:`42581` - include: drivers: clock_control: stm32 incorrect DT_PROP is used for xtpre
* :github:`42615` - Bluetooth: Controller: Missing ticks slot offset calculation in Periodic Advertising event scheduling
* :github:`42622` - pm: pm_device structure bigger than necessary when PM_DEVICE_RUNTIME not set
* :github:`42631` - Unable to identify owner of net_mgmt_lock easily
* :github:`42825` - MQTT client disconnection (EAGAIN) on publish with big payload
* :github:`42862` - Bluetooth: L2CAP: Security check on l2cap request is wrong
* :github:`43117` - Not possible to create more than one shield.
* :github:`43130` - STM32WL ADC idles / doesn't work
* :github:`43176` - net/icmpv4: client possible to ddos itself when there's an error for the broadcasted packet
* :github:`43177` - net: shell: errno not cleared before calling the strtol
* :github:`43178` - net: ip: route: log_strdup misuse
* :github:`43179` - net: tcp: forever loop in tcp_resend_data
* :github:`43180` - net: tcp: possible deadlock in tcp_conn_unref()
* :github:`43181` - net: sockets: net_pkt leak in accept
* :github:`43182` - net: arp: ARP retransmission source address selection
* :github:`43183` - net: mqtt: setsockopt leak on failure
* :github:`43184` - arm: Wrong macro used for z_interrupt_stacks declaration in stack.h
* :github:`43185` - arm: cortex-m: uninitialised ptr_esf in get_esf() in fault.c
* :github:`43470` - wifi: esp_at: race condition on mutex's leading to deadlock
* :github:`43490` - net: sockets: userspace accept() crashes with NULL addr/addrlen pointer
* :github:`43548` - gen_relocate_app truncates files on incremental builds
* :github:`43572` - stm32: wrong clock the LSI freq for stm32l0x mcus
* :github:`43580` - hl7800: tcp stack freezes on slow response from modem
* :github:`43807` - Test "cpp.libcxx.newlib.exception" failed on platforms which use zephyr.bin to run tests.
* :github:`43839` - Bluetooth: controller: missing NULL assign to df_cfg in ll_adv_set
* :github:`43853` - X86 MSI messages always get to BSP core (need a fix to be backported)
* :github:`43858` - mcumgr seems to lock up when it receives command for group that does not exist
* :github:`44107` - The SMP nsim boards are started incorrectly when launching on real HW
* :github:`44310` - net: gptp: type mismatch calculation error in gptp_mi
* :github:`44336` - nucleo_wb55rg: stm32cubeprogrammer runner is missing for twister tests
* :github:`44337` - twister: Miss sn option to stm32cubeprogrammer runner
* :github:`44352` - stm32l5x boards missing the openocd runner
* :github:`44497` - Add guide for disabling MSD on JLink OB devices and link to from smp_svr page
* :github:`44531` - bl654_usb without mcuboot maximum image size is not limited
* :github:`44886` - Unable to boot Zephyr on FVP_BaseR_AEMv8R
* :github:`44902` - x86: FPU registers are not initialised for userspace (eager FPU sharing)
* :github:`45869` - doc: update requirements
* :github:`45870` - drivers: virt_ivshmem: Allow multiple instances of ivShMem devices
* :github:`45871` - ci: split Bluetooth workflow
* :github:`45872` - ci: make git credentials non-persistent
* :github:`45873` - soc: esp32: use PYTHON_EXECUTABLE from build system
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.1:
Zephyr 2.7.1
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************

View File

@@ -11,6 +11,9 @@
#include "can_loopback.h"
#include <logging/log.h>
#include "can_utils.h"
LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
K_KERNEL_STACK_DEFINE(tx_thread_stack,
@@ -41,13 +44,6 @@ static void dispatch_frame(const struct zcan_frame *frame,
filter->rx_cb(&frame_tmp, filter->cb_arg);
}
static inline int check_filter_match(const struct zcan_frame *frame,
const struct zcan_filter *filter)
{
return ((filter->id & filter->id_mask) ==
(frame->id & filter->id_mask));
}
void tx_thread(void *data_arg, void *arg2, void *arg3)
{
ARG_UNUSED(arg2);
@@ -63,7 +59,7 @@ void tx_thread(void *data_arg, void *arg2, void *arg3)
for (int i = 0; i < CONFIG_CAN_MAX_FILTER; i++) {
filter = &data->filters[i];
if (filter->rx_cb &&
check_filter_match(&frame.frame, &filter->filter)) {
can_utils_filter_match(&frame.frame, &filter->filter) != 0) {
dispatch_frame(&frame.frame, filter);
}
}
@@ -175,6 +171,11 @@ void can_loopback_detach(const struct device *dev, int filter_id)
{
struct can_loopback_data *data = DEV_DATA(dev);
if (filter_id < 0 || filter_id >= ARRAY_SIZE(data->filters)) {
LOG_ERR("filter ID %d out of bounds", filter_id);
return;
}
LOG_DBG("Detach filter ID: %d", filter_id);
k_mutex_lock(&data->mtx, K_FOREVER);
data->filters[filter_id].rx_cb = NULL;

View File

@@ -23,6 +23,34 @@ LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
#define MCAN_MAX_DLC CAN_MAX_DLC
#endif
static void memcpy32_volatile(volatile void *dst_, const volatile void *src_,
size_t len)
{
volatile uint32_t *dst = dst_;
const volatile uint32_t *src = src_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst = *src;
++dst;
++src;
}
}
static void memset32_volatile(volatile void *dst_, uint32_t val, size_t len)
{
volatile uint32_t *dst = dst_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst++ = val;
}
}
static int can_exit_sleep_mode(struct can_mcan_reg *can)
{
uint32_t start_time;
@@ -376,7 +404,8 @@ int can_mcan_init(const struct device *dev, const struct can_mcan_config *cfg,
#ifdef CONFIG_CAN_STM32FD
can->ils = CAN_MCAN_ILS_RXFIFO0 | CAN_MCAN_ILS_RXFIFO1;
#else
can->ils = CAN_MCAN_ILS_RF0N | CAN_MCAN_ILS_RF1N;
can->ils = CAN_MCAN_ILS_RF0N | CAN_MCAN_ILS_RF1N |
CAN_MCAN_ILS_RF0L | CAN_MCAN_ILS_RF1L;
#endif
can->ile = CAN_MCAN_ILE_EINT0 | CAN_MCAN_ILE_EINT1;
/* Interrupt on every TX fifo element*/
@@ -389,12 +418,7 @@ int can_mcan_init(const struct device *dev, const struct can_mcan_config *cfg,
}
/* No memset because only aligned ptr are allowed */
for (uint32_t *ptr = (uint32_t *)msg_ram;
ptr < (uint32_t *)msg_ram +
sizeof(struct can_mcan_msg_sram) / sizeof(uint32_t);
ptr++) {
*ptr = 0;
}
memset32_volatile(msg_ram, 0, sizeof(struct can_mcan_msg_sram));
return 0;
}
@@ -486,15 +510,17 @@ static void can_mcan_get_message(struct can_mcan_data *data,
uint32_t get_idx, filt_idx;
struct zcan_frame frame;
can_rx_callback_t cb;
volatile uint32_t *src, *dst, *end;
int data_length;
void *cb_arg;
struct can_mcan_rx_fifo_hdr hdr;
bool rtr_filter_mask;
bool rtr_filter;
while ((*fifo_status_reg & CAN_MCAN_RXF0S_F0FL)) {
get_idx = (*fifo_status_reg & CAN_MCAN_RXF0S_F0GI) >>
CAN_MCAN_RXF0S_F0GI_POS;
hdr = fifo[get_idx].hdr;
memcpy32_volatile(&hdr, &fifo[get_idx].hdr,
sizeof(struct can_mcan_rx_fifo_hdr));
if (hdr.xtd) {
frame.id = hdr.ext_id;
@@ -514,24 +540,25 @@ static void can_mcan_get_message(struct can_mcan_data *data,
filt_idx = hdr.fidx;
/* Check if RTR must match */
if ((hdr.xtd && data->ext_filt_rtr_mask & (1U << filt_idx) &&
((data->ext_filt_rtr >> filt_idx) & 1U) != frame.rtr) ||
(data->std_filt_rtr_mask & (1U << filt_idx) &&
((data->std_filt_rtr >> filt_idx) & 1U) != frame.rtr)) {
if (hdr.xtd != 0) {
rtr_filter_mask = (data->ext_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->ext_filt_rtr & BIT(filt_idx)) != 0;
} else {
rtr_filter_mask = (data->std_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->std_filt_rtr & BIT(filt_idx)) != 0;
}
if (rtr_filter_mask && (rtr_filter != frame.rtr)) {
/* RTR bit does not match filter RTR mask and bit, drop frame */
*fifo_ack_reg = get_idx;
continue;
}
data_length = can_dlc_to_bytes(frame.dlc);
if (data_length <= sizeof(frame.data)) {
/* data needs to be written in 32 bit blocks!*/
for (src = fifo[get_idx].data_32,
dst = frame.data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(frame.data_32, fifo[get_idx].data_32,
ROUND_UP(data_length, sizeof(uint32_t)));
if (frame.id_type == CAN_STANDARD_IDENTIFIER) {
LOG_DBG("Frame on filter %d, ID: 0x%x",
@@ -647,8 +674,6 @@ int can_mcan_send(const struct can_mcan_config *cfg,
uint32_t put_idx;
int ret;
struct can_mcan_mm mm;
volatile uint32_t *dst, *end;
const uint32_t *src;
LOG_DBG("Sending %d bytes. Id: 0x%x, ID type: %s %s %s %s",
data_length, frame->id,
@@ -696,15 +721,9 @@ int can_mcan_send(const struct can_mcan_config *cfg,
tx_hdr.ext_id = frame->id;
}
msg_ram->tx_buffer[put_idx].hdr = tx_hdr;
for (src = frame->data_32,
dst = msg_ram->tx_buffer[put_idx].data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(&msg_ram->tx_buffer[put_idx].hdr, &tx_hdr, sizeof(tx_hdr));
memcpy32_volatile(msg_ram->tx_buffer[put_idx].data_32, frame->data_32,
ROUND_UP(data_length, 4));
data->tx_fin_cb[put_idx] = callback;
data->tx_fin_cb_arg[put_idx] = callback_arg;
@@ -761,7 +780,8 @@ int can_mcan_attach_std(struct can_mcan_data *data,
filter_element.sfce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->std_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->std_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_std_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -820,7 +840,8 @@ static int can_mcan_attach_ext(struct can_mcan_data *data,
filter_element.efce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->ext_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->ext_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_ext_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -874,21 +895,25 @@ int can_mcan_attach_isr(struct can_mcan_data *data,
void can_mcan_detach(struct can_mcan_data *data,
struct can_mcan_msg_sram *msg_ram, int filter_nr)
{
const struct can_mcan_ext_filter ext_filter = {0};
const struct can_mcan_std_filter std_filter = {0};
if (filter_nr < 0) {
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}
k_mutex_lock(&data->inst_mutex, K_FOREVER);
if (filter_nr >= NUM_STD_FILTER_DATA) {
filter_nr -= NUM_STD_FILTER_DATA;
if (filter_nr >= NUM_STD_FILTER_DATA) {
LOG_ERR("Wrong filter id");
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}
msg_ram->ext_filt[filter_nr] = ext_filter;
memset32_volatile(&msg_ram->ext_filt[filter_nr], 0,
sizeof(struct can_mcan_ext_filter));
data->rx_cb_ext[filter_nr] = NULL;
} else {
msg_ram->std_filt[filter_nr] = std_filter;
memset32_volatile(&msg_ram->std_filt[filter_nr], 0,
sizeof(struct can_mcan_std_filter));
data->rx_cb_std[filter_nr] = NULL;
}

View File

@@ -48,7 +48,7 @@ struct can_mcan_rx_fifo_hdr {
volatile uint32_t res : 2; /* Reserved */
volatile uint32_t fidx : 7; /* Filter Index */
volatile uint32_t anmf : 1; /* Accepted non-matching frame */
} __packed;
} __packed __aligned(4);
struct can_mcan_rx_fifo {
struct can_mcan_rx_fifo_hdr hdr;
@@ -56,7 +56,7 @@ struct can_mcan_rx_fifo {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
struct can_mcan_mm {
volatile uint8_t idx : 5;
@@ -84,7 +84,7 @@ struct can_mcan_tx_buffer_hdr {
volatile uint8_t res2 : 1; /* Reserved */
volatile uint8_t efc : 1; /* Event FIFO control (Store Tx events) */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
struct can_mcan_tx_buffer {
struct can_mcan_tx_buffer_hdr hdr;
@@ -92,7 +92,7 @@ struct can_mcan_tx_buffer {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_TE_TX 0x1 /* TX event */
#define CAN_MCAN_TE_TXC 0x2 /* TX event in spite of cancellation */
@@ -109,7 +109,7 @@ struct can_mcan_tx_event_fifo {
volatile uint8_t fdf : 1; /* FD Format */
volatile uint8_t et : 2; /* Event type */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_FCE_DISABLE 0x0
#define CAN_MCAN_FCE_FIFO0 0x1
@@ -130,7 +130,7 @@ struct can_mcan_std_filter {
volatile uint32_t id1 : 11;
volatile uint32_t sfce : 3; /* Filter config */
volatile uint32_t sft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_EFT_RANGE_XIDAM 0x0
#define CAN_MCAN_EFT_DUAL 0x1
@@ -143,7 +143,7 @@ struct can_mcan_ext_filter {
volatile uint32_t id2 : 29; /* ID2 for dual or range, mask otherwise */
volatile uint32_t res : 1;
volatile uint32_t eft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
struct can_mcan_msg_sram {
volatile struct can_mcan_std_filter std_filt[NUM_STD_FILTER_ELEMENTS];
@@ -153,7 +153,7 @@ struct can_mcan_msg_sram {
volatile struct can_mcan_rx_fifo rx_buffer[NUM_RX_BUF_ELEMENTS];
volatile struct can_mcan_tx_event_fifo tx_event_fifo[NUM_TX_BUF_ELEMENTS];
volatile struct can_mcan_tx_buffer tx_buffer[NUM_TX_BUF_ELEMENTS];
} __packed;
} __packed __aligned(4);
struct can_mcan_data {
struct k_mutex inst_mutex;

View File

@@ -551,6 +551,11 @@ static void mcp2515_detach(const struct device *dev, int filter_nr)
{
struct mcp2515_data *dev_data = DEV_DATA(dev);
if (filter_nr < 0 || filter_nr >= CONFIG_CAN_MAX_FILTER) {
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}
k_mutex_lock(&dev_data->mutex, K_FOREVER);
dev_data->filter_usage &= ~BIT(filter_nr);
k_mutex_unlock(&dev_data->mutex);

View File

@@ -253,13 +253,11 @@ static void mcux_flexcan_copy_zfilter_to_mbconfig(const struct zcan_filter *src,
if (src->id_type == CAN_STANDARD_IDENTIFIER) {
dest->format = kFLEXCAN_FrameFormatStandard;
dest->id = FLEXCAN_ID_STD(src->id);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask, src->rtr_mask, 1);
} else {
dest->format = kFLEXCAN_FrameFormatExtend;
dest->id = FLEXCAN_ID_EXT(src->id);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask, src->rtr_mask, 1);
}
if ((src->rtr & src->rtr_mask) == CAN_DATAFRAME) {
@@ -482,9 +480,8 @@ static void mcux_flexcan_detach(const struct device *dev, int filter_id)
const struct mcux_flexcan_config *config = dev->config;
struct mcux_flexcan_data *data = dev->data;
if (filter_id >= MCUX_FLEXCAN_MAX_RX) {
LOG_ERR("Detach: Filter id >= MAX_RX (%d >= %d)", filter_id,
MCUX_FLEXCAN_MAX_RX);
if (filter_id < 0 || filter_id >= MCUX_FLEXCAN_MAX_RX) {
LOG_ERR("filter ID %d out of bounds", filter_id);
return;
}
@@ -646,6 +643,7 @@ static inline void mcux_flexcan_transfer_rx_idle(const struct device *dev,
static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
{
struct mcux_flexcan_data *data = (struct mcux_flexcan_data *)userData;
const struct mcux_flexcan_config *config = data->dev->config;
switch (status) {
case kStatus_FLEXCAN_UnHandled:
@@ -654,6 +652,7 @@ static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
mcux_flexcan_transfer_error_status(data->dev, (uint64_t)result);
break;
case kStatus_FLEXCAN_TxSwitchToRx:
FLEXCAN_TransferAbortReceive(config->base, &data->handle, (uint64_t)result);
__fallthrough;
case kStatus_FLEXCAN_TxIdle:
/* The result field is a MB value which is limited to 32bit value */

View File

@@ -829,7 +829,8 @@ void can_rcar_detach(const struct device *dev, int filter_nr)
{
struct can_rcar_data *data = DEV_CAN_DATA(dev);
if (filter_nr >= CONFIG_CAN_RCAR_MAX_FILTER) {
if (filter_nr < 0 || filter_nr >= CONFIG_CAN_RCAR_MAX_FILTER) {
LOG_ERR("filter ID %d out of bounds", filter_nr);
return;
}

View File

@@ -1113,10 +1113,10 @@ static const struct can_driver_api can_api_funcs = {
.prescaler = 0x01
},
.timing_max = {
.sjw = 0x07,
.sjw = 0x04,
.prop_seg = 0x00,
.phase_seg1 = 0x0F,
.phase_seg2 = 0x07,
.phase_seg1 = 0x10,
.phase_seg2 = 0x08,
.prescaler = 0x400
}
};

View File

@@ -354,11 +354,36 @@ static int sam_xdmac_initialize(const struct device *dev)
return 0;
}
static int sam_xdmac_get_status(const struct device *dev, uint32_t channel,
struct dma_status *status)
{
const struct sam_xdmac_dev_cfg *const dev_cfg = dev->config;
Xdmac * const xdmac = dev_cfg->regs;
uint32_t chan_cfg = xdmac->XDMAC_CHID[channel].XDMAC_CC;
uint32_t ublen = xdmac->XDMAC_CHID[channel].XDMAC_CUBC;
/* we need to check some of the XDMAC_CC registers to determine the DMA direction */
if ((chan_cfg & XDMAC_CC_TYPE_Msk) == 0) {
status->dir = MEMORY_TO_MEMORY;
} else if ((chan_cfg & XDMAC_CC_DSYNC_Msk) == XDMAC_CC_DSYNC_MEM2PER) {
status->dir = MEMORY_TO_PERIPHERAL;
} else {
status->dir = PERIPHERAL_TO_MEMORY;
}
status->busy = ((chan_cfg & XDMAC_CC_INITD_Msk) != 0) || (ublen > 0);
status->pending_length = ublen;
return 0;
}
static const struct dma_driver_api sam_xdmac_driver_api = {
.config = sam_xdmac_config,
.reload = sam_xdmac_transfer_reload,
.start = sam_xdmac_transfer_start,
.stop = sam_xdmac_transfer_stop,
.get_status = sam_xdmac_get_status,
};
/* DMA0 */

View File

@@ -302,6 +302,12 @@ int edac_ibecc_init(const struct device *dev)
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU11):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU12):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU13):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU14):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU15):
break;
default:
LOG_ERR("PCI Probe failed");

View File

@@ -18,6 +18,9 @@
#define PCI_DEVICE_ID_SKU10 0x452e
#define PCI_DEVICE_ID_SKU11 0x4532
#define PCI_DEVICE_ID_SKU12 0x4518
#define PCI_DEVICE_ID_SKU13 0x451a
#define PCI_DEVICE_ID_SKU14 0x4534
#define PCI_DEVICE_ID_SKU15 0x4536
/* TODO: Move to correct place NMI registers */

View File

@@ -658,7 +658,7 @@ static int spi_nor_erase(const struct device *dev, off_t addr, size_t size)
if ((etp->exp != 0)
&& SPI_NOR_IS_ALIGNED(addr, etp->exp)
&& SPI_NOR_IS_ALIGNED(size, etp->exp)
&& (size >= BIT(etp->exp))
&& ((bet == NULL)
|| (etp->exp > bet->exp))) {
bet = etp;

View File

@@ -43,10 +43,10 @@ static inline void i2c_dw_data_ask(const struct device *dev)
{
struct i2c_dw_dev_config * const dw = dev->data;
uint32_t data;
uint8_t tx_empty;
int8_t rx_empty;
uint8_t cnt;
uint8_t rx_buffer_depth, tx_buffer_depth;
int tx_empty;
int rx_empty;
int cnt;
int rx_buffer_depth, tx_buffer_depth;
union ic_comp_param_1_register ic_comp_param_1;
uint32_t reg_base = get_regs(dev);

View File

@@ -71,7 +71,7 @@ static inline int z_vrfy_i2c_slave_driver_register(const struct device *dev)
static inline int z_vrfy_i2c_slave_driver_unregister(const struct device *dev)
{
Z_OOPS(Z_SYSCALL_OBJ(dev, K_OBJ_DRIVER_I2C));
return z_vrfy_i2c_slave_driver_unregister(dev);
return z_impl_i2c_slave_driver_unregister(dev);
}
#include <syscalls/i2c_slave_driver_unregister_mrsh.c>

View File

@@ -494,6 +494,11 @@ static int nrf5_tx(const struct device *dev,
uint8_t *payload = frag->data;
bool ret = true;
if (payload_len > NRF5_PSDU_LENGTH) {
LOG_ERR("Payload too large: %d", payload_len);
return -EMSGSIZE;
}
LOG_DBG("%p (%u)", payload, payload_len);
nrf5_radio->tx_psdu[0] = payload_len + NRF5_FCS_LENGTH;

View File

@@ -467,7 +467,7 @@ err_out:
static struct iproc_pcie_ep_ctx iproc_pcie_ep_ctx_0;
static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
static const struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.id = 0,
.base = (struct iproc_pcie_reg *)DT_INST_REG_ADDR(0),
.reg_size = DT_INST_REG_SIZE(0),
@@ -475,19 +475,21 @@ static struct iproc_pcie_ep_config iproc_pcie_ep_config_0 = {
.map_low_size = DT_INST_REG_SIZE_BY_NAME(0, map_lowmem),
.map_high_base = DT_INST_REG_ADDR_BY_NAME(0, map_highmem),
.map_high_size = DT_INST_REG_SIZE_BY_NAME(0, map_highmem),
#if DT_INST_NODE_HAS_PROP(0, dmas)
.pl330_dev = DEVICE_DT_GET(DT_INST_DMAS_CTLR_BY_IDX(0, 0)),
.pl330_tx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, txdma, channel),
.pl330_rx_chan_id = DT_INST_DMAS_CELL_BY_NAME(0, rxdma, channel),
#endif
};
static struct pcie_ep_driver_api iproc_pcie_ep_api = {
static const struct pcie_ep_driver_api iproc_pcie_ep_api = {
.conf_read = iproc_pcie_conf_read,
.conf_write = iproc_pcie_conf_write,
.map_addr = iproc_pcie_map_addr,
.unmap_addr = iproc_pcie_unmap_addr,
.raise_irq = iproc_pcie_raise_irq,
.register_reset_cb = iproc_pcie_register_reset_cb,
.dma_xfer = iproc_pcie_pl330_dma_xfer,
.dma_xfer = DT_INST_NODE_HAS_PROP(0, dmas) ? iproc_pcie_pl330_dma_xfer : NULL,
};
DEVICE_DT_INST_DEFINE(0, &iproc_pcie_ep_init, NULL,

View File

@@ -209,9 +209,9 @@ static int sm351lt_init(const struct device *dev)
}
#if defined(CONFIG_SM351LT_TRIGGER)
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
data->dev = dev;
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
k_sem_init(&data->gpio_sem, 0, K_SEM_MAX_LIMIT);
k_thread_create(&data->thread, data->thread_stack,

View File

@@ -718,11 +718,11 @@ static int transceive_dma(const struct device *dev,
/* Set buffers info */
spi_context_buffers_setup(&data->ctx, tx_bufs, rx_bufs, 1);
LL_SPI_Enable(spi);
/* This is turned off in spi_stm32_complete(). */
spi_stm32_cs_control(dev, true);
LL_SPI_Enable(spi);
while (data->ctx.rx_len > 0 || data->ctx.tx_len > 0) {
size_t dma_len;

View File

@@ -341,10 +341,11 @@ int sys_clock_driver_init(const struct device *dev)
alloc_mask = BIT_MASK(EXT_CHAN_COUNT) << 1;
}
if (!IS_ENABLED(CONFIG_TICKLESS_KERNEL)) {
compare_set(0, counter() + CYC_PER_TICK,
sys_clock_timeout_handler, NULL);
}
uint32_t initial_timeout = IS_ENABLED(CONFIG_TICKLESS_KERNEL) ?
MAX_CYCLES : CYC_PER_TICK;
compare_set(0, counter() + initial_timeout,
sys_clock_timeout_handler, NULL);
z_nrf_clock_control_lf_on(mode);

View File

@@ -266,6 +266,19 @@ struct bt_l2cap_chan_ops {
*/
void (*encrypt_change)(struct bt_l2cap_chan *chan, uint8_t hci_status);
/** @brief Channel alloc_seg callback
*
* If this callback is provided the channel will use it to allocate
* buffers to store segments. This avoids wasting big SDU buffers with
* potentially much smaller PDUs. If this callback is supplied, it must
* return a valid buffer.
*
* @param chan The channel requesting a buffer.
*
* @return Allocated buffer.
*/
struct net_buf *(*alloc_seg)(struct bt_l2cap_chan *chan);
/** @brief Channel alloc_buf callback
*
* If this callback is provided the channel will use it to allocate

View File

@@ -126,6 +126,31 @@ struct coredump_mem_hdr_t {
uintptr_t end;
} __packed;
typedef void (*coredump_backend_start_t)(void);
typedef void (*coredump_backend_end_t)(void);
typedef void (*coredump_backend_buffer_output_t)(uint8_t *buf, size_t buflen);
typedef int (*coredump_backend_query_t)(enum coredump_query_id query_id,
void *arg);
typedef int (*coredump_backend_cmd_t)(enum coredump_cmd_id cmd_id,
void *arg);
struct coredump_backend_api {
/* Signal to backend of the start of coredump. */
coredump_backend_start_t start;
/* Signal to backend of the end of coredump. */
coredump_backend_end_t end;
/* Raw buffer output */
coredump_backend_buffer_output_t buffer_output;
/* Perform query on backend */
coredump_backend_query_t query;
/* Perform command on backend */
coredump_backend_cmd_t cmd;
};
void coredump(unsigned int reason, const z_arch_esf_t *esf,
struct k_thread *thread);
void coredump_memory_dump(uintptr_t start_addr, uintptr_t end_addr);
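A minimal sketch of a backend filling in the coredump_backend_api table above; the names here are hypothetical and registration/selection of the backend is omitted:
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <debug/coredump.h> /* assumed to provide the types declared above */
static void example_backend_start(void) { /* open/reset the output channel */ }
static void example_backend_end(void) { /* flush and close the output channel */ }
static void example_backend_buffer_output(uint8_t *buf, size_t buflen)
{
	(void)buf;
	(void)buflen; /* a real backend would write buflen bytes from buf here */
}
static int example_backend_query(enum coredump_query_id query_id, void *arg)
{
	(void)query_id;
	(void)arg;
	return -ENOTSUP;
}
static int example_backend_cmd(enum coredump_cmd_id cmd_id, void *arg)
{
	(void)cmd_id;
	(void)arg;
	return -ENOTSUP;
}
/* Hypothetical backend instance wired to the API table. */
struct coredump_backend_api example_coredump_backend = {
	.start = example_backend_start,
	.end = example_backend_end,
	.buffer_output = example_backend_buffer_output,
	.query = example_backend_query,
	.cmd = example_backend_cmd,
};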

View File

@@ -420,7 +420,7 @@ static inline uint8_t can_dlc_to_bytes(uint8_t dlc)
static const uint8_t dlc_table[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 12,
16, 20, 24, 32, 48, 64};
return dlc > 0x0F ? 64 : dlc_table[dlc];
return dlc_table[MIN(dlc, ARRAY_SIZE(dlc_table) - 1)];
}
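For reference, the mapping implemented by the table lookup above follows the CAN FD DLC encoding; a few sample values (illustration only, not part of the change):
/*
 *   DLC 0..8 -> 0..8 data bytes (classic CAN range)
 *   DLC 9    -> 12 bytes
 *   DLC 12   -> 24 bytes
 *   DLC 15   -> 64 bytes (CAN FD maximum)
 *   DLC > 15 -> clamped by MIN() to the last table entry, i.e. 64 bytes
 */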
/**

View File

@@ -385,6 +385,7 @@ static inline int z_impl_counter_get_value(const struct device *dev,
* interrupts or requested channel).
* @retval -EINVAL if alarm settings are invalid.
* @retval -ETIME if absolute alarm was set too late.
* @retval -EBUSY if alarm is already active.
*/
__syscall int counter_set_channel_alarm(const struct device *dev,
uint8_t chan_id,

View File

@@ -162,6 +162,11 @@ struct z_kernel {
#if defined(CONFIG_THREAD_MONITOR)
struct k_thread *threads; /* singly linked list of ALL threads */
#endif
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
/* Need to signal an IPI at the next scheduling point */
bool pending_ipi;
#endif
};
typedef struct z_kernel _kernel_t;

View File

@@ -302,10 +302,8 @@ static inline char z_log_minimal_level_to_char(int level)
} \
\
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \
@@ -347,8 +345,6 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
bool is_user_context = k_is_user_context(); \
uint32_t filters = IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) ? \
(_dsource)->filters : 0;\
\
if (IS_ENABLED(CONFIG_LOG_MINIMAL)) { \
Z_LOG_TO_PRINTK(_level, "%s", _str); \
@@ -357,7 +353,7 @@ static inline char z_log_minimal_level_to_char(int level)
break; \
} \
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING) && !is_user_context && \
_level > Z_LOG_RUNTIME_FILTER(filters)) { \
_level > Z_LOG_RUNTIME_FILTER((_dsource)->filters)) { \
break; \
} \
if (IS_ENABLED(CONFIG_LOG2)) { \

View File

@@ -889,15 +889,6 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
buf->len = state->len;
}
/**
* Flag indicating that the buffer has associated fragments. Only used
* internally by the buffer handling code while the buffer is inside a
* FIFO, meaning this never needs to be explicitly set or unset by the
* net_buf API user. As long as the buffer is outside of a FIFO, i.e.
* in practice always for the user for this API, the buf->frags pointer
* should be used instead.
*/
#define NET_BUF_FRAGS BIT(0)
/**
* Flag indicating that the buffer's associated data pointer, points to
* externally allocated memory. Therefore once ref goes down to zero, the
@@ -907,7 +898,7 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
* Reference count mechanism however will behave the same way, and ref
* count going to 0 will free the net_buf but not the data pointer in it.
*/
#define NET_BUF_EXTERNAL_DATA BIT(1)
#define NET_BUF_EXTERNAL_DATA BIT(0)
/**
* @brief Network buffer representation.
@@ -917,13 +908,11 @@ static inline void net_buf_simple_restore(struct net_buf_simple *buf,
* using the net_buf_alloc() API.
*/
struct net_buf {
union {
/** Allow placing the buffer into sys_slist_t */
sys_snode_t node;
/** Allow placing the buffer into sys_slist_t */
sys_snode_t node;
/** Fragments associated with this buffer. */
struct net_buf *frags;
};
/** Fragments associated with this buffer. */
struct net_buf *frags;
/** Reference count. */
uint8_t ref;

View File

@@ -199,10 +199,11 @@ struct net_conn_handle;
* anyway. This saves 12 bytes / context in IPv6.
*/
__net_socket struct net_context {
/** User data.
*
* First member of the structure to let users either have user data
* associated with a context, or put contexts into a FIFO.
/** First member of the structure to allow to put contexts into a FIFO.
*/
void *fifo_reserved;
/** User data associated with a context.
*/
void *user_data;

View File

@@ -1368,6 +1368,10 @@ uint32_t net_if_ipv6_calc_reachable_time(struct net_if_ipv6 *ipv6);
static inline void net_if_ipv6_set_reachable_time(struct net_if_ipv6 *ipv6)
{
#if defined(CONFIG_NET_NATIVE_IPV6)
if (ipv6 == NULL) {
return;
}
ipv6->reachable_time = net_if_ipv6_calc_reachable_time(ipv6);
#endif
}

View File

@@ -7,6 +7,9 @@
#ifndef ZEPHYR_INCLUDE_TIME_UNITS_H_
#define ZEPHYR_INCLUDE_TIME_UNITS_H_
#include <sys/util.h>
#include <toolchain.h>
#ifdef __cplusplus
extern "C" {
#endif
@@ -56,6 +59,21 @@ static TIME_CONSTEXPR inline int sys_clock_hw_cycles_per_sec(void)
#endif
}
/** @internal
* Macro determines if fast conversion algorithm can be used. It checks if
* maximum timeout represented in source frequency domain and multiplied by
* target frequency fits in 64 bits.
*
* @param from_hz Source frequency.
* @param to_hz Target frequency.
*
* @retval true Use faster algorithm.
* @retval false Use algorithm preventing overflow of intermediate value.
*/
#define Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz) \
((ceiling_fraction(CONFIG_SYS_CLOCK_MAX_TIMEOUT_DAYS * 24ULL * 3600ULL * from_hz, \
UINT32_MAX) * to_hz) <= UINT32_MAX)
/* Time converter generator gadget. Selects from one of three
* conversion algorithms: ones that take advantage when the
* frequencies are an integer ratio (in either direction), or a full
@@ -123,8 +141,18 @@ static TIME_CONSTEXPR ALWAYS_INLINE uint64_t z_tmcvt(uint64_t t, uint32_t from_h
} else {
if (result32) {
return (uint32_t)((t * to_hz + off) / from_hz);
} else if (const_hz && Z_TMCVT_USE_FAST_ALGO(from_hz, to_hz)) {
/* Faster algorithm but source is first multiplied by target frequency
* and it can overflow even though final result would not overflow.
* Kconfig option shall prevent use of this algorithm when there is a
* risk of overflow.
*/
return ((t * to_hz + off) / from_hz);
} else {
return (t * to_hz + off) / from_hz;
/* Slower algorithm but input is first divided before being multiplied
* which prevents overflow of intermediate value.
*/
return (t / from_hz) * to_hz + ((t % from_hz) * to_hz + off) / from_hz;
}
}
}
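A small numeric sanity check of the two conversion paths above, assuming a 32768 Hz source clock converted to nanoseconds (to_hz = 1000000000) and ignoring the rounding offset off; the figures are illustrative only:
#include <stdint.h>
/* Multiply-first path: the 64-bit intermediate t * to_hz overflows once
 * t exceeds roughly 1.8e10 ticks, i.e. about 6.5 days of uptime at 32768 Hz.
 */
static uint64_t cvt_fast(uint64_t t, uint32_t from_hz, uint32_t to_hz)
{
	return (t * to_hz) / from_hz;
}
/* Divide-first path: only the sub-second remainder (always < from_hz) is
 * multiplied by to_hz, so the intermediate stays far below 2^64 at the cost
 * of an extra division and modulo.
 */
static uint64_t cvt_safe(uint64_t t, uint32_t from_hz, uint32_t to_hz)
{
	return (t / from_hz) * to_hz + ((t % from_hz) * to_hz) / from_hz;
}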

View File

@@ -25,6 +25,7 @@
#include <zephyr/types.h>
#include <stddef.h>
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
@@ -61,6 +62,23 @@ extern "C" {
/** @brief 0 if @p cond is true-ish; causes a compile error otherwise. */
#define ZERO_OR_COMPILE_ERROR(cond) ((int) sizeof(char[1 - 2 * !(cond)]) - 1)
/**
* @brief Determine if a buffer exceeds highest address
*
* This macro determines if a buffer identified by a starting address @a addr
* and length @a buflen spans a region of memory that goes beyond the highest
* possible address (thereby resulting in a pointer overflow).
*
* @param addr Buffer starting address
* @param buflen Length of the buffer
*
* @return true if pointer overflow detected, false otherwise
*/
#define Z_DETECT_POINTER_OVERFLOW(addr, buflen) \
(((buflen) != 0) && \
((UINTPTR_MAX - (uintptr_t)(addr)) <= ((uintptr_t)((buflen) - 1))))
#if defined(__cplusplus)
/* The built-in function used below for type checking in C is not
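To make the arithmetic behind the new Z_DETECT_POINTER_OVERFLOW() macro above concrete, here is a small hedged sketch restated as a plain C function, with made-up 32-bit example addresses (illustration only, not part of the change):
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
/* True when addr + buflen (the one-past-end pointer) would wrap past the
 * highest representable address; same expression as the macro above.
 */
static bool detect_pointer_overflow(uintptr_t addr, size_t buflen)
{
	return (buflen != 0) &&
	       ((UINTPTR_MAX - addr) <= (uintptr_t)(buflen - 1));
}
/* With 32-bit pointers (UINTPTR_MAX = 0xFFFFFFFF):
 *   addr = 0xFFFFFFF0, buflen = 15 -> end pointer 0xFFFFFFFF, no wrap -> false
 *   addr = 0xFFFFFFF0, buflen = 16 -> end pointer would be 0x100000000,
 *                                     which wraps -> true
 *   buflen = 0                     -> always false (an empty buffer cannot overflow)
 */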

View File

@@ -329,6 +329,22 @@ extern int z_user_string_copy(char *dst, const char *src, size_t maxlen);
*/
#define Z_SYSCALL_VERIFY(expr) Z_SYSCALL_VERIFY_MSG(expr, #expr)
/**
* @brief Macro to check if size is negative
*
* Z_SYSCALL_MEMORY can be called with signed/unsigned types
* and because of that if we check if size is greater or equal to
* zero, many static analyzers complain about no effect expression.
*
* @param ptr Memory area to examine
* @param size Size of the memory area
* @return true if size is valid, false otherwise
* @note This is an internal API. Do not use unless you are extending
* functionality in the Zephyr tree.
*/
#define Z_SYSCALL_MEMORY_SIZE_CHECK(ptr, size) \
(((uintptr_t)ptr + size) >= (uintptr_t)ptr)
/**
* @brief Runtime check that a user thread has read and/or write permission to
* a memory area
@@ -346,8 +362,10 @@ extern int z_user_string_copy(char *dst, const char *src, size_t maxlen);
* @return 0 on success, nonzero on failure
*/
#define Z_SYSCALL_MEMORY(ptr, size, write) \
Z_SYSCALL_VERIFY_MSG(arch_buffer_validate((void *)ptr, size, write) \
== 0, \
Z_SYSCALL_VERIFY_MSG(Z_SYSCALL_MEMORY_SIZE_CHECK(ptr, size) \
&& !Z_DETECT_POINTER_OVERFLOW(ptr, size) \
&& (arch_buffer_validate((void *)ptr, size, write) \
== 0), \
"Memory region %p (size %zu) %s access denied", \
(void *)(ptr), (size_t)(size), \
write ? "write" : "read")
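A rough illustration of what the added pre-checks catch before arch_buffer_validate() runs, assuming 32-bit pointers and a size that arrived from user space as a negative signed value (the numbers are made up):
#include <stdbool.h>
#include <stdint.h>
/* Same expression as Z_SYSCALL_MEMORY_SIZE_CHECK(): the sum must not wrap. */
static bool syscall_size_check(uintptr_t ptr, uintptr_t size)
{
	return (ptr + size) >= ptr;
}
/* Example: user code passes size = -4. Promoted to an unsigned type it
 * becomes 0xFFFFFFFC, so for ptr = 0x20001000 the sum wraps to 0x20000FFC,
 * which is below ptr; the check fails and the access is denied without
 * ever reaching arch_buffer_validate().
 */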

View File

@@ -48,8 +48,9 @@
#endif
#undef BUILD_ASSERT /* clear out common version */
/* C++11 has static_assert built in */
#ifdef __cplusplus
#if defined(__cplusplus) && (__cplusplus >= 201103L)
#define BUILD_ASSERT(EXPR, MSG...) static_assert(EXPR, "" MSG)
/*

View File

@@ -613,6 +613,17 @@ config TIMEOUT_64BIT
availability of absolute timeout values (which require the
extra precision).
config SYS_CLOCK_MAX_TIMEOUT_DAYS
int "Max timeout (in days) used in conversions"
default 365
help
Value is used in the time conversion static inline function to determine
at compile time which algorithm to use. One algorithm is faster, takes
less code but may overflow if multiplication of source and target
frequency exceeds 64 bits. Second algorithm prevents that. Faster
algorithm is selected for conversion if maximum timeout represented in
source frequency domain multiplied by target frequency fits in 64 bits.
config XIP
bool "Execute in place"
help

View File

@@ -161,15 +161,21 @@ int z_impl_k_mutex_lock(struct k_mutex *mutex, k_timeout_t timeout)
key = k_spin_lock(&lock);
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
/*
* Check if mutex was unlocked after this thread was unpended.
* If so, skip adjusting owner's priority down.
*/
if (likely(mutex->owner != NULL)) {
struct k_thread *waiter = z_waitq_head(&mutex->wait_q);
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
new_prio = (waiter != NULL) ?
new_prio_for_inheritance(waiter->base.prio, mutex->owner_orig_prio) :
mutex->owner_orig_prio;
LOG_DBG("adjusting prio down on mutex %p", mutex);
LOG_DBG("adjusting prio down on mutex %p", mutex);
resched = adjust_owner_prio(mutex, new_prio) || resched;
resched = adjust_owner_prio(mutex, new_prio) || resched;
}
if (resched) {
z_reschedule(&lock, key);

View File

@@ -576,6 +576,9 @@ static void triggered_work_expiration_handler(struct _timeout *timeout)
k_work_submit_to_queue(twork->workq, &twork->work);
}
extern int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work);
static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
{
struct z_poller *poller = event->poller;
@@ -587,7 +590,7 @@ static int signal_triggered_work(struct k_poll_event *event, uint32_t status)
z_abort_timeout(&twork->timeout);
twork->poll_result = 0;
k_work_submit_to_queue(work_q, &twork->work);
z_work_submit_to_queue(work_q, &twork->work);
}
return 0;

View File

@@ -219,6 +219,25 @@ static ALWAYS_INLINE void dequeue_thread(void *pq,
}
}
static void signal_pending_ipi(void)
{
/* Synchronization note: you might think we need to lock these
* two steps, but an IPI is idempotent. It's OK if we do it
* twice. All we require is that if a CPU sees the flag true,
* it is guaranteed to send the IPI, and if a core sets
* pending_ipi, the IPI will be sent the next time through
* this code.
*/
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
if (_kernel.pending_ipi) {
_kernel.pending_ipi = false;
arch_sched_ipi();
}
}
#endif
}
#ifdef CONFIG_SMP
/* Called out of z_swap() when CONFIG_SMP. The current thread can
* never live in the run queue until we are inexorably on the context
@@ -231,6 +250,7 @@ void z_requeue_current(struct k_thread *curr)
if (z_is_thread_queued(curr)) {
_priq_run_add(&_kernel.ready_q.runq, curr);
}
signal_pending_ipi();
}
#endif
@@ -481,6 +501,15 @@ static bool thread_active_elsewhere(struct k_thread *thread)
return false;
}
static void flag_ipi(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
if (CONFIG_MP_NUM_CPUS > 1) {
_kernel.pending_ipi = true;
}
#endif
}
static void ready_thread(struct k_thread *thread)
{
#ifdef CONFIG_KERNEL_COHERENCE
@@ -495,9 +524,7 @@ static void ready_thread(struct k_thread *thread)
queue_thread(&_kernel.ready_q.runq, thread);
update_cache(0);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
}
}
@@ -626,17 +653,13 @@ static void add_thread_timeout(struct k_thread *thread, k_timeout_t timeout)
}
}
static void pend(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
static void pend_locked(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
#ifdef CONFIG_KERNEL_COHERENCE
__ASSERT_NO_MSG(wait_q == NULL || arch_mem_coherent(wait_q));
#endif
LOCKED(&sched_spinlock) {
add_to_waitq_locked(thread, wait_q);
}
add_to_waitq_locked(thread, wait_q);
add_thread_timeout(thread, timeout);
}
@@ -644,7 +667,9 @@ void z_pend_thread(struct k_thread *thread, _wait_q_t *wait_q,
k_timeout_t timeout)
{
__ASSERT_NO_MSG(thread == _current || is_thread_dummy(thread));
pend(thread, wait_q, timeout);
LOCKED(&sched_spinlock) {
pend_locked(thread, wait_q, timeout);
}
}
static inline void unpend_thread_no_timeout(struct k_thread *thread)
@@ -686,7 +711,12 @@ void z_thread_timeout(struct _timeout *timeout)
int z_pend_curr_irqlock(uint32_t key, _wait_q_t *wait_q, k_timeout_t timeout)
{
pend(_current, wait_q, timeout);
/* This is a legacy API for pre-switch architectures and isn't
* correctly synchronized for multi-cpu use
*/
__ASSERT_NO_MSG(!IS_ENABLED(CONFIG_SMP));
pend_locked(_current, wait_q, timeout);
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
@@ -709,8 +739,20 @@ int z_pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
#if defined(CONFIG_TIMESLICING) && defined(CONFIG_SWAP_NONATOMIC)
pending_current = _current;
#endif
pend(_current, wait_q, timeout);
return z_swap(lock, key);
__ASSERT_NO_MSG(sizeof(sched_spinlock) == 0 || lock != &sched_spinlock);
/* We do a "lock swap" prior to calling z_swap(), such that
* the caller's lock gets released as desired. But we ensure
* that we hold the scheduler lock and leave local interrupts
* masked until we reach the context switch. z_swap() itself
* has similar code; the duplication is because it's a legacy
* API that doesn't expect to be called with scheduler lock
* held.
*/
(void) k_spin_lock(&sched_spinlock);
pend_locked(_current, wait_q, timeout);
k_spin_release(lock);
return z_swap(&sched_spinlock, key);
}
struct k_thread *z_unpend1_no_timeout(_wait_q_t *wait_q)
@@ -784,9 +826,7 @@ void z_thread_priority_set(struct k_thread *thread, int prio)
{
bool need_sched = z_set_prio(thread, prio);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (need_sched && _current->base.sched_locked == 0U) {
z_reschedule_unlocked();
@@ -826,6 +866,7 @@ void z_reschedule(struct k_spinlock *lock, k_spinlock_key_t key)
z_swap(lock, key);
} else {
k_spin_unlock(lock, key);
signal_pending_ipi();
}
}
@@ -835,6 +876,7 @@ void z_reschedule_irqlock(uint32_t key)
z_swap_irqlock(key);
} else {
irq_unlock(key);
signal_pending_ipi();
}
}
@@ -868,7 +910,16 @@ void k_sched_unlock(void)
struct k_thread *z_swap_next_thread(void)
{
#ifdef CONFIG_SMP
return next_up();
struct k_thread *ret = next_up();
if (ret == _current) {
/* When not swapping, have to signal IPIs here. In
* the context switch case it must happen later, after
* _current gets requeued.
*/
signal_pending_ipi();
}
return ret;
#else
return _kernel.ready_q.cache;
#endif
@@ -935,6 +986,7 @@ void *z_get_next_switch_handle(void *interrupted)
new_thread->switch_handle = NULL;
}
}
signal_pending_ipi();
return ret;
#else
_current->switch_handle = interrupted;
@@ -1331,9 +1383,7 @@ void z_impl_k_wakeup(k_tid_t thread)
z_mark_thread_as_not_suspended(thread);
z_ready_thread(thread);
#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
arch_sched_ipi();
#endif
flag_ipi();
if (!arch_is_in_isr()) {
z_reschedule_unlocked();
@@ -1520,6 +1570,9 @@ void z_thread_abort(struct k_thread *thread)
/* It's running somewhere else, flag and poke */
thread->base.thread_state |= _THREAD_ABORTING;
/* We're going to spin, so need a true synchronous IPI
* here, not deferred!
*/
#ifdef CONFIG_SCHED_IPI_SUPPORTED
arch_sched_ipi();
#endif

View File

@@ -1011,7 +1011,7 @@ void z_thread_mark_switched_in(void)
#ifdef CONFIG_THREAD_RUNTIME_STATS
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
#ifdef CONFIG_THREAD_RUNTIME_STATS_USE_TIMING_FUNCTIONS
thread->rt_stats.last_switched_in = timing_counter_get();
#else
@@ -1033,7 +1033,7 @@ void z_thread_mark_switched_out(void)
uint64_t diff;
struct k_thread *thread;
thread = k_current_get();
thread = z_current_get();
if (unlikely(thread->rt_stats.last_switched_in == 0)) {
/* Has not run before */

View File

@@ -68,8 +68,14 @@ static int32_t next_timeout(void)
{
struct _timeout *to = first();
int32_t ticks_elapsed = elapsed();
int32_t ret = to == NULL ? MAX_WAIT
: CLAMP(to->dticks - ticks_elapsed, 0, MAX_WAIT);
int32_t ret;
if ((to == NULL) ||
((int64_t)(to->dticks - ticks_elapsed) > (int64_t)INT_MAX)) {
ret = MAX_WAIT;
} else {
ret = MAX(0, to->dticks - ticks_elapsed);
}
#ifdef CONFIG_TIMESLICING
if (_current_cpu->slice_ticks && _current_cpu->slice_ticks < ret) {
@@ -238,6 +244,18 @@ void sys_clock_announce(int32_t ticks)
k_spinlock_key_t key = k_spin_lock(&timeout_lock);
/* We release the lock around the callbacks below, so on SMP
* systems someone might be already running the loop. Don't
* race (which will cause parallel execution of "sequential"
* timeouts and confuse apps), just increment the tick count
* and return.
*/
if (IS_ENABLED(CONFIG_SMP) && (announce_remaining != 0)) {
announce_remaining += ticks;
k_spin_unlock(&timeout_lock, key);
return;
}
announce_remaining = ticks;
while (first() != NULL && first()->dticks <= announce_remaining) {
@@ -245,13 +263,13 @@ void sys_clock_announce(int32_t ticks)
int dt = t->dticks;
curr_tick += dt;
announce_remaining -= dt;
t->dticks = 0;
remove_timeout(t);
k_spin_unlock(&timeout_lock, key);
t->fn(t);
key = k_spin_lock(&timeout_lock);
announce_remaining -= dt;
}
if (first() != NULL) {
@@ -271,7 +289,7 @@ int64_t sys_clock_tick_get(void)
uint64_t t = 0U;
LOCKED(&timeout_lock) {
t = curr_tick + sys_clock_elapsed();
t = curr_tick + elapsed();
}
return t;
}

View File

@@ -355,26 +355,45 @@ static int submit_to_queue_locked(struct k_work *work,
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
/* Submit work to a queue but do not yield the current thread.
*
* Intended for internal use.
*
* See also submit_to_queue_locked().
*
* @param queuep pointer to a queue reference.
* @param work the work structure to be submitted
*
* @retval see submit_to_queue_locked()
*/
int z_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
__ASSERT_NO_MSG(work != NULL);
k_spinlock_key_t key = k_spin_lock(&lock);
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = submit_to_queue_locked(work, &queue);
k_spin_unlock(&lock, key);
/* If we changed the queue contents (as indicated by a positive ret)
* the queue thread may now be ready, but we missed the reschedule
* point because the lock was held. If this is being invoked by a
* preemptible thread then yield.
return ret;
}
int k_work_submit_to_queue(struct k_work_q *queue,
struct k_work *work)
{
SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work, submit_to_queue, queue, work);
int ret = z_work_submit_to_queue(queue, work);
/* submit_to_queue_locked() won't reschedule on its own
* (really it should, otherwise this process will result in
* spurious calls to z_swap() due to the race), so do it here
* if the queue state changed.
*/
if ((ret > 0) && (k_is_preempt_thread() != 0)) {
k_yield();
if (ret > 0) {
z_reschedule_unlocked();
}
SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work, submit_to_queue, queue, work, ret);
@@ -586,6 +605,7 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
struct k_work *work = NULL;
k_work_handler_t handler = NULL;
k_spinlock_key_t key = k_spin_lock(&lock);
bool yield;
/* Check for and prepare any new work. */
node = sys_slist_get(&queue->pending);
@@ -644,34 +664,30 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
k_spin_unlock(&lock, key);
if (work != NULL) {
bool yield;
__ASSERT_NO_MSG(handler != NULL);
handler(work);
__ASSERT_NO_MSG(handler != NULL);
handler(work);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
/* Mark the work item as no longer running and deal
* with any cancellation issued while it was running.
* Clear the BUSY flag and optionally yield to prevent
* starving other threads.
*/
key = k_spin_lock(&lock);
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&work->flags, K_WORK_RUNNING_BIT);
if (flag_test(&work->flags, K_WORK_CANCELING_BIT)) {
finalize_cancel_locked(work);
}
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
flag_clear(&queue->flags, K_WORK_QUEUE_BUSY_BIT);
yield = !flag_test(&queue->flags, K_WORK_QUEUE_NO_YIELD_BIT);
k_spin_unlock(&lock, key);
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
/* Optionally yield to prevent the work queue from
* starving other threads.
*/
if (yield) {
k_yield();
}
}
}

View File

@@ -94,7 +94,7 @@ void free(void *ptr)
(void) sys_mutex_unlock(&z_malloc_heap_mutex);
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#else /* No malloc arena */
void *malloc(size_t size)
{

View File

@@ -133,7 +133,7 @@ static int malloc_prepare(const struct device *unused)
return 0;
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
/* Current offset from HEAP_BASE of unused memory */
LIBC_BSS static size_t heap_sz;

View File

@@ -19,6 +19,7 @@ config POSIX_API
config PTHREAD_IPC
bool "POSIX pthread IPC API"
default y if POSIX_API
depends on POSIX_CLOCK
help
This enables a mostly-standards-compliant implementation of
the pthread mutex, condition variable and barrier IPC
@@ -111,6 +112,8 @@ config APP_LINK_WITH_POSIX_SUBSYS
config EVENTFD
bool "Enable support for eventfd"
depends on !ARCH_POSIX
select POLL
default y if POSIX_API
help
Enable support for event file descriptors, eventfd. An eventfd can
be used as an event wait/notify mechanism together with POSIX calls

View File

@@ -27,7 +27,6 @@ static struct k_spinlock rt_clock_base_lock;
*/
int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
{
uint64_t elapsed_nsecs;
struct timespec base;
k_spinlock_key_t key;
@@ -48,9 +47,13 @@ int z_impl_clock_gettime(clockid_t clock_id, struct timespec *ts)
return -1;
}
elapsed_nsecs = k_ticks_to_ns_floor64(k_uptime_ticks());
ts->tv_sec = (int32_t) (elapsed_nsecs / NSEC_PER_SEC);
ts->tv_nsec = (int32_t) (elapsed_nsecs % NSEC_PER_SEC);
uint64_t ticks = k_uptime_ticks();
uint64_t elapsed_secs = ticks / CONFIG_SYS_CLOCK_TICKS_PER_SEC;
uint64_t nremainder = ticks - elapsed_secs * CONFIG_SYS_CLOCK_TICKS_PER_SEC;
ts->tv_sec = (time_t) elapsed_secs;
/* For ns 32 bit conversion can be used since its smaller than 1sec. */
ts->tv_nsec = (int32_t) k_ticks_to_ns_floor32(nremainder);
ts->tv_sec += base.tv_sec;
ts->tv_nsec += base.tv_nsec;
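One plausible reading of the reworked computation above, as a back-of-the-envelope check assuming a 32768 Hz tick rate (the figures are illustrative and not taken from the patch discussion):
/*
 * ticks           = k_uptime_ticks();      e.g. 2.0e10 ticks (~7 days of uptime)
 * whole seconds   = ticks / 32768          ~ 610,000 s, an exact integer division
 * remainder ticks = ticks % 32768          always < 32768, i.e. < 1 s
 *
 * Only the small remainder is converted to nanoseconds, so it comfortably
 * fits the 32-bit k_ticks_to_ns_floor32() conversion, whereas converting
 * the full 2.0e10-tick count in one multiply-first step would push the
 * intermediate (ticks * 1e9) past 2^64.
 */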

View File

@@ -15,12 +15,10 @@
#define PTHREAD_INIT_FLAGS PTHREAD_CANCEL_ENABLE
#define PTHREAD_CANCELED ((void *) -1)
#define LOWEST_POSIX_THREAD_PRIORITY 1
PTHREAD_MUTEX_DEFINE(pthread_key_lock);
static const pthread_attr_t init_pthread_attrs = {
.priority = LOWEST_POSIX_THREAD_PRIORITY,
.priority = 0,
.stack = NULL,
.stacksize = 0,
.flags = PTHREAD_INIT_FLAGS,
@@ -54,9 +52,11 @@ static uint32_t zephyr_to_posix_priority(int32_t z_prio, int *policy)
if (z_prio < 0) {
*policy = SCHED_FIFO;
prio = -1 * (z_prio + 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_COOP_PRIORITIES);
} else {
*policy = SCHED_RR;
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - z_prio - 1);
__ASSERT_NO_MSG(prio < CONFIG_NUM_PREEMPT_PRIORITIES);
}
return prio;
@@ -68,9 +68,11 @@ static int32_t posix_to_zephyr_priority(uint32_t priority, int policy)
if (policy == SCHED_FIFO) {
/* Zephyr COOP priority starts from -1 */
__ASSERT_NO_MSG(priority < CONFIG_NUM_COOP_PRIORITIES);
prio = -1 * (priority + 1);
} else {
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority);
__ASSERT_NO_MSG(priority < CONFIG_NUM_PREEMPT_PRIORITIES);
prio = (CONFIG_NUM_PREEMPT_PRIORITIES - priority - 1);
}
return prio;
@@ -150,7 +152,7 @@ int pthread_create(pthread_t *newthread, const pthread_attr_t *attr,
for (pthread_num = 0;
pthread_num < CONFIG_MAX_PTHREAD_COUNT; pthread_num++) {
thread = &posix_thread_pool[pthread_num];
if (thread->state == PTHREAD_TERMINATED) {
if (thread->state == PTHREAD_EXITED || thread->state == PTHREAD_TERMINATED) {
thread->state = PTHREAD_JOINABLE;
break;
}
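To see the effect of the corrected priority formulas above, here is a small round-trip check for the SCHED_RR direction with a hypothetical CONFIG_NUM_PREEMPT_PRIORITIES of 16 (values chosen purely for illustration):
#define NUM_PREEMPT 16 /* stands in for CONFIG_NUM_PREEMPT_PRIORITIES */
/* Mirrors the corrected "- 1" formulas above. */
static int rr_zephyr_to_posix(int z_prio) { return NUM_PREEMPT - z_prio - 1; }
static int rr_posix_to_zephyr(int posix_prio) { return NUM_PREEMPT - posix_prio - 1; }
/*
 * Zephyr prio 0 (highest preemptible) -> POSIX 15 -> back to Zephyr 0, and
 * 15 satisfies the new __ASSERT_NO_MSG(prio < CONFIG_NUM_PREEMPT_PRIORITIES).
 * The previous formula without "- 1" mapped Zephyr prio 0 to POSIX 16,
 * which that assertion would reject.
 */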

View File

@@ -7,13 +7,9 @@
#include <kernel.h>
#include <posix/posix_sched.h>
static bool valid_posix_policy(int policy)
static inline bool valid_posix_policy(int policy)
{
if (policy != SCHED_FIFO && policy != SCHED_RR) {
return false;
}
return true;
return policy == SCHED_FIFO || policy == SCHED_RR;
}
/**
@@ -23,25 +19,12 @@ static bool valid_posix_policy(int policy)
*/
int sched_get_priority_min(int policy)
{
if (valid_posix_policy(policy) == false) {
if (!valid_posix_policy(policy)) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
return 0;
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return 0;
}
}
errno = EINVAL;
return -1;
return 0;
}
/**
@@ -51,25 +34,10 @@ int sched_get_priority_min(int policy)
*/
int sched_get_priority_max(int policy)
{
if (valid_posix_policy(policy) == false) {
errno = EINVAL;
return -1;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED)) {
if (policy == SCHED_FIFO) {
/* Posix COOP priority starts from 0
* whereas zephyr starts from -1
*/
return (CONFIG_NUM_COOP_PRIORITIES - 1);
}
}
if (IS_ENABLED(CONFIG_PREEMPT_ENABLED)) {
if (policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES;
}
if (IS_ENABLED(CONFIG_COOP_ENABLED) && policy == SCHED_FIFO) {
return CONFIG_NUM_COOP_PRIORITIES - 1;
} else if (IS_ENABLED(CONFIG_PREEMPT_ENABLED) && policy == SCHED_RR) {
return CONFIG_NUM_PREEMPT_PRIORITIES - 1;
}
errno = EINVAL;

Some files were not shown because too many files have changed in this diff.