Compare commits

...

41 Commits

Author SHA1 Message Date
Christopher Friedt
003de78ce0 release: Zephyr 2.7.3
Set version to 2.7.3

Fixes #49354

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 12:12:49 -04:00
Flavio Ceolin
9502d500b6 release: security: Notes for 2.7.3
Add security release notes for 2.7.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-08-22 10:54:58 -04:00
Christopher Friedt
2a88e08296 release: update v2.7.3 release notes
* add bugfixes to v2.7.3 release
* awaiting CVE notes from security

Fixes #49310

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
2022-08-22 10:54:58 -04:00
Jamie McCrae
e1ee34e55c drivers: sensor: sm351lt: Fix global thread triggering bug
This fixes a bug in the sm351lt driver whereby global triggering will
cause an MPU fault due to an unset pointer.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2022-08-03 19:05:58 -04:00
Szymon Janc
2ad1ef651b Bluetooth: host: Fix L2CAP reconfigure response with invalid CID
When an L2CAP_CREDIT_BASED_RECONFIGURE_REQ packet is received with
invalid parameters, the recipient shall send an
L2CAP_CREDIT_BASED_RECONFIGURE_RSP PDU with a non-zero Result field
and not change any MTU and MPS values.

This fixes the case where valid channels were incorrectly reconfigured
while responding with the 0x003 (Reconfiguration failed - one or more
Destination CIDs invalid) result code.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 253070b76b)
2022-08-02 12:36:44 -04:00
Szymon Janc
089675af45 Bluetooth: host: Fix L2CAP reconfigure response with invalid MTU
TSE18813 clarified the IUT behavior: rejecting a reconfiguration that
would result in an MTU decrease is enough, and there is no need to
disconnect the L2CAP channel(s).

This was affecting L2CAP/ECFC/BI-03-C qualification test case
(TCRL 2022-2).

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
(cherry picked from commit 266394dea4)
2022-08-02 12:36:44 -04:00
Andriy Gelman
03ff0d471e net: route: Fix pkt leak if net_send_data() fails
If the call to net_send_data() fails, for example because the forwarding
interface is down, then the pkt will leak: the reference taken by
net_pkt_shallow_clone() will never be released. Fix the problem
by dropping the reference count in the error path.

Signed-off-by: Andriy Gelman <andriy.gelman@gmail.com>
(cherry picked from commit a3cdb2102c)
2022-08-02 10:41:52 -04:00
Erwan Gouriou
cd96136bcb boards: nucleo_wb55rg: Fix documentation about BLE binary compatibility
Rather than stating version information that will get out of date
at each release, refer to the source of information located in the
hal_stm32 module.

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
(cherry picked from commit 6656607d02)
2022-07-25 05:54:23 -04:00
Henrik Brix Andersen
567fda57df tests: drivers: can: api: add test for RTR filter matching
Add test for CAN RX filtering of RTR frames.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
b14f356c96 drivers: can: loopback: check frame ID type and RTR bit in filters
Check the frame ID type and RTR bit when comparing loopback CAN frames
against installed RX filters.

Fixes: #47904

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
874d77bc75 drivers: can: mcux: flexcan: fix handling of RTR frames
When installing an RX filter, the driver uses "filter->rtr &
filter->rtr_mask" for setting the filter mask. It should just be using
filter->rtr_mask, otherwise filters for non-RTR frames will match RTR
frames as well.

When transmitting an RTR frame, the hardware automatically switches the
mailbox used for TX to RX in order to receive the reply. This, however,
does not match the Zephyr CAN driver model, where mailboxes are
dedicated to either RX or TX. Attempting to reuse the TX mailbox (which
was automatically switched to an RX mailbox by the hardware) fails on
the first call, after which the mailbox is reset and can be reused for
TX. To overcome this, the driver must abort the RX mailbox operation
when the hardware performs the TX to RX switch.

Fixes: #47902

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Henrik Brix Andersen
ec0befb938 drivers: can: mcan: acknowledge all received frames
The Bosch M_CAN IP does not support RX filtering of the RTR bit, so the
driver handles this bit in software.

If a received frame matches a filter with RTR enabled, the RTR bit of
the frame must match that of the filter in order to be passed to the RX
callback function. If the RTR bits do not match, the frame must be
dropped.

Improve the readability of the logic for determining if a frame
should be dropped and add a missing FIFO acknowledge write for dropped
frames.

Fixes: #47204

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2022-07-19 12:07:14 -04:00
Christopher Friedt
273e90a86f scripts: release: list_backports: use older python dict merge method
In Python versions >= 3.9, dicts can be merged with the `|` operator.

This is not the case for Python versions < 3.9, where the simplest
alternative is `dict_c = {**dict_a, **dict_b}`.
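
As a minimal illustration (example values only, not code from list_backports.py), both merge styles produce the same result, and the unpacking form also runs on Python < 3.9:

# Merging two dicts: the `|` operator needs Python >= 3.9,
# while dict unpacking works on older versions as well.
dict_a = {"x": 1, "y": 2}
dict_b = {"y": 20, "z": 3}

merged_new = dict_a | dict_b        # Python >= 3.9 only
merged_old = {**dict_a, **dict_b}   # Python < 3.9 compatible

assert merged_new == merged_old == {"x": 1, "y": 20, "z": 3}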

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3783cf8353)
2022-07-19 01:30:16 +09:00
Christopher Friedt
59dc65a7b4 ci: backports: check if a backport PR has a valid issue
This is an automated check for the Backports project to
require one or more `Fixes #<issue>` items in the body
of the pull request.
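
A minimal sketch of how such a check could extract issue references from a PR body; the function name and regex below are illustrative assumptions, not necessarily what list_backports.py uses:

# Hypothetical sketch: collect "Fixes #<issue>" references from a PR body.
# The exact pattern used by the real checker may differ.
import re

def referenced_issues(pr_body: str) -> list[int]:
    # Matches e.g. "Fixes #46164" anywhere in the body text.
    return [int(n) for n in re.findall(r"Fixes\s+#(\d+)", pr_body)]

assert referenced_issues("Backport of X.\n\nFixes #46164\n") == [46164]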

Fixes #46164

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit aa4e437573)
2022-07-18 23:32:19 +09:00
Christopher Friedt
8ff8cafc18 scripts: release: list_backports.py
Created list_backports.py to examine PRs applied to a backport
branch and extract associated issues. This is helpful for
adding to release notes.

The script may also be used to ensure that backported changes
have one or more associated issues.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 57762ca12c)
2022-07-18 23:32:19 +09:00
Christopher Friedt
ba07347b60 scripts: release: use GITHUB_TOKEN and start_date in scripts
Updated bug_bash.py and list_issues.py to use the GITHUB_TOKEN
environment variable for consistency with other scripts.

Updated bug_bash.py to use `-s / --start-date` instead of
`-b / --begin-date`.

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit 3b3fc27860)
2022-07-18 23:32:19 +09:00
Christopher Friedt
e423902617 tests: posix: pthread: test for pthread descriptor leaks
Add a simple test to ensure that we can create and join a
single thread `CONFIG_MAX_PTHREAD_COUNT` * 2 times. If
there are leaks, then `pthread_create()` should
eventually return `EAGAIN`. If there are no leaks, then
the test should pass.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit d37350bc19)
2022-07-12 16:02:26 -04:00
Christopher Friedt
018f836c4d posix: pthread: consider PTHREAD_EXITED state in pthread_create
If a thread is joined using `pthread_join()`, then its
internal state is set to `PTHREAD_EXITED`.

Previously, `pthread_create()` would only consider pthreads
with internal state `PTHREAD_TERMINATED` as candidates for new
threads. However, that causes a descriptor leak.

We should be able to reuse a single thread an infinite number
of times.

Here, we also consider threads with internal state
`PTHREAD_EXITED` as candidates in `pthread_create()`.

Fixes #47609

Signed-off-by: Christopher Friedt <cfriedt@fb.com>
(cherry picked from commit da0398d198)
2022-07-12 16:02:26 -04:00
Stephanos Ioannidis
f4466c4760 tests: cpp: cxx: Add qemu_cortex_a53 as integration platform
This commit adds the `qemu_cortex_a53`, which is an MMU-based platform,
as an integration platform for the C++ subsystem tests.

This ensures that the `test_global_static_ctor_dynmem` test, which
verifies that the dynamic memory allocation service is functional
during the global static object constructor invocation, is tested on
an MMU-based platform, which may have a different libc heap
initialisation path.

In addition to the above, this increases the overall test coverage
ensuring that the C++ subsystem is functional on an MMU-based platform
in general.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 8e6322a21a)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
9a5cbe3568 tests: cpp: cxx: Test with various types of libc
This commit changes the C++ subsystem test, which previously was only
being run with the minimal libc, to be run with all the mainstream C
libraries (minimal libc, newlib, newlib-nano, picolibc).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 03f0693125)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
5b7b15fb2d tests: cpp: cxx: Add dynamic memory availability test for static init
This commit adds a test to verify that the dynamic memory allocation
service (the `new` operator) is available and functional when the C++
static global object constructors are invoked during system
initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit dc4895b876)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
e5a92a1fab tests: cpp: cxx: Add static global constructor invocation test
This commit adds a test to verify that the C++ static global object
constructors are invoked during system initialisation.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 6e0063af29)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
74f0b6443a lib: libc: newlib: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the newlib malloc heap
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 43e1c28a25)
2022-07-12 12:05:56 -04:00
Stephanos Ioannidis
6c16b3492b lib: libc: minimal: Initialise libc heap during POST_KERNEL phase
This commit changes the invocation of the minimal libc malloc
initialisation function such that it is executed during the POST_KERNEL
phase instead of the APPLICATION phase.

This is necessary in order to ensure that the application
initialisation functions (i.e. the functions called during the
APPLICATION phase) can make use of the libc heap.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit db0748c462)
2022-07-12 12:05:56 -04:00
Christopher Friedt
1831431bab lib: posix: semaphore: use consistent timebase in sem_timedwait
In the Zephyr implementation, `sem_timedwait()` uses a
potentially wildly different timebase for comparison via
`k_uptime_get()` (uptime in ms).

The standard specifies `CLOCK_REALTIME`. However, the real-time
clock can be modified to an arbitrary value via clock_settime()
and there is no guarantee that it will always reflect uptime.

This change ensures that `sem_timedwait()` uses a more
consistent timebase for comparison.

Fixes #46807

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
(cherry picked from commit 9d433c89a2)
2022-07-06 07:30:37 -04:00
Torsten Rasmussen
765f63c6b9 cmake: remove xtensa workaround in Zephyr toolchain code.
Remove the Xtensa-specific workaround as this code is now present in the
Zephyr SDK CMake code.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 92a1ca61eb)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
062306fc0b cmake: zephyr toolchain code cleanup
With the revert of commit 820d327b46,
some additional code can be cleaned up.

This removes the final left-overs from Zephyr SDK 0.11.1 support and
older.

It further aligns the message printing when including the Zephyr SDK
toolchain with that of other toolchains.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit fb3a113eb8)
2022-06-28 12:37:43 -04:00
Torsten Rasmussen
8fcf7f1d78 Revert "cmake: Zephyr sdk backward compatibility with 0.11.1 and 0.11.2"
This reverts commit 820d327b46.

Commit b973cdc9e8 updated the minimum
required Zephyr SDK version to 0.13.

Therefore revert commit 820d327b46 as
backward support for 0.11.1 and 0.11.2 is no longer required.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit e747fe73cd)
2022-06-28 12:37:43 -04:00
Vinayak Kariappa Chettimada
f06b3d922c Bluetooth: Controller: Fix PHY update for unsupported PHY
Fix the PHY update procedure to handle an unsupported PHY requested
by the peer central device. A PHY update complete event will not be
generated to the Host, the connection is maintained on the old
PHY, and the Controller will not respond to PDUs received on
the unsupported PHY.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 620a5524a5)
2022-06-28 11:39:37 -04:00
Francois Ramu
b75c012c55 drivers: spi: stm32 spi with dma must enable cs after periph
When using DMA to transfer over SPI, spi_stm32_cs_control is done
after enabling the SPI, so the same sequence applies in the
transceive_dma function as in the transceive function.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
2022-06-28 11:39:17 -04:00
Stephanos Ioannidis
1efe6de3fe drivers: i2c: Fix infinite recursion in driver unregister function
This commit fixes an infinite recursion in the
`z_vrfy_i2c_slave_driver_unregister` function.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 745b7d202e)
2022-06-21 13:00:55 -04:00
Pavel Vasilyev
39270ed4a0 Bluetooth: Mesh: Fix segmentation when sending proxy message
The previous check in the if-statement would never allow sending the last
segment if msg->len + 2 == MTU * x.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit 1efce43a00)
2022-06-21 12:17:16 -04:00
Pavel Vasilyev
81ffa550ee Bluetooth: Mesh: Check SegN when receiving Transaction Start PDU
When receiving a Transaction Start PDU, ensure that the number of segments
needed to send a Provisioning PDU of TotalLength size is equal to the SegN
value provided in the Transaction Start PDU.

Signed-off-by: Pavel Vasilyev <pavel.vasilyev@nordicsemi.no>
(cherry picked from commit a63c515679)
2022-06-20 11:31:10 -04:00
Aleksandr Khromykh
8c2965e017 Bluetooth: Mesh: add check for rx buffer overflow in pb adv
There is a potential buffer overflow in PB-ADV.
If a Transaction Continuation PDU comes before the
Transaction Start PDU, the last segment number is set to 0xff.
The current implementation has a strictly limited buffer size,
and it is possible to receive a malformed frame with a wrong segment
number. All segments with number 2 and above will then be stored
in memory beyond the RX buffer.

Signed-off-by: Aleksandr Khromykh <Aleksandr.Khromykh@nordicsemi.no>
(cherry picked from commit 6896075b62)
2022-06-20 11:25:30 -04:00
Alexander Wachter
7aa38b4ac8 drivers: can: m_can: fix alignment issues
Make sure that all accesses to the msg_sram
are 32-bit aligned.

Signed-off-by: Alexander Wachter <alexander@wachter.cloud>
2022-06-10 21:01:47 -07:00
Christopher Friedt
6dd320f791 release: update v2.7.2 release notes
* add backported bugfixes
* add backported fix for CVE-2021-3966

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2022-05-24 12:13:21 -04:00
Alexej Rempel
ecac165d36 logging: shell: fix shell stats null pointer dereference
If CONFIG_SHELL_STATS is disabled, shell->stats is NULL and
must not be dereferenced. Guard against it.

Fixes #44089

Signed-off-by: Alexej Rempel <Alexej.Rempel@de.eckerle-gruppe.com>
(cherry picked from commit 476d199752)
2022-05-24 12:12:22 -04:00
Michał Narajowski
132d90d1bc tests/bluetooth/tester: Refactor Read UUID callback
ATT_READ_BY_TYPE_RSP returns an Attribute Data List, so handle it in the
application.

This affects GATT/CL/GAR/BV-03-C.

Signed-off-by: Michał Narajowski <michal.narajowski@codecoup.pl>
(cherry picked from commit 54cd46ac68)
2022-05-17 18:33:37 +02:00
Mark Holden
58356313ac coredump: adjust mem_region find in gdbstub
Adjust get_mem_region to not return a region when address == end,
as there will be nothing to read there. Also, a subsequent region
may have that address as its start address and would be a more appropriate
selection.
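
As a small self-contained illustration of the boundary condition (the region values are hypothetical, and the real get_mem_region is a method iterating over self.mem_regions):

# Two adjacent regions: addr == end of the first region contains no data,
# but it is the start of the next region, so a half-open check picks
# the correct one.
regions = [
    {"start": 0x1000, "end": 0x2000},
    {"start": 0x2000, "end": 0x3000},
]

def get_mem_region(addr):
    for r in regions:
        if r["start"] <= addr < r["end"]:   # half-open interval, as in the fix
            return r
    return None

assert get_mem_region(0x2000) == regions[1]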

Signed-off-by: Mark Holden <mholden@fb.com>
(cherry picked from commit d04ab82943)
2022-05-17 18:06:16 +02:00
Piotr Pryga
99cfd3e4d7 Bluetooth: Controller: Fix per adv scheduling issue
The sw_switch implementation uses two parallel groups of
PPIs connecting radio and timer tasks and events.
The groups are used interchangeably: one is set up for the
following radio TX/RX event while the other is in use
(enabled).

A group should be disabled by the timer compare event that
starts the Radio to TX/RX a PDU. The timer is responsible for
maintaining TIFS/TMAFS, and the disabled group collects
all PPIs required to maintain the TIFS/TMAFS. Once
the time is reached, the Radio is started and the group is
disabled. It will be enabled again by the software radio
switch during the next call.

If the group is not disabled, it will work in parallel
with the other one. That causes issues in correctly maintaining
the instant when the radio should be started for the next TX/RX
event, e.g. the radio may be enabled too early.

In case PHY CODED was enabled and the periodic advertising
included chained PDUs, which are transmitted back-to-back,
the group disable was missing. The missing case was the
sw_switch function being called with dir_curr and dir_next both
set to SW_SWITCH_TX.

Signed-off-by: Piotr Pryga <piotr.pryga@nordicsemi.no>
2022-05-16 10:07:40 +02:00
Andrei Emeltchenko
780588bd33 edac: ibecc: Add support for EHL SKU13, SKU14, SKU15
Add support for missing EHL SKUs. The information about the SKUs is
already public and available in the Linux kernel:
https://github.com/torvalds/linux/blob/38f80f42147ff658aff218edb0a88c37e58bf44f/drivers/edac/igen6_edac.c#L197-L208

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
(cherry picked from commit f6069aa8fa)
2022-05-10 20:03:44 -04:00
46 changed files with 940 additions and 298 deletions

View File

@@ -0,0 +1,30 @@
name: Backport Issue Check

on:
  pull_request_target:
    branches:
      - v*-branch

jobs:
  backport:
    name: Backport Issue Check
    runs-on: ubuntu-latest
    steps:
      - name: Check out source code
        uses: actions/checkout@v2
      - name: Install Python dependencies
        run: |
          sudo pip3 install -U setuptools wheel pip
          pip3 install -U pygithub
      - name: Run backport issue checker
        env:
          GITHUB_TOKEN: ${{ secrets.ZB_GITHUB_TOKEN }}
        run: |
          ./scripts/release/list_backports.py \
            -o ${{ github.event.repository.owner.login }} \
            -r ${{ github.event.repository.name }} \
            -b ${{ github.event.pull_request.base.ref }} \
            -p ${{ github.event.pull_request.number }}

View File

@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 2
PATCHLEVEL = 3
VERSION_TWEAK = 0
EXTRAVERSION =

View File

@@ -186,8 +186,9 @@ To operate bluetooth on Nucleo WB55RG, Cortex-M0 core should be flashed with
a valid STM32WB Coprocessor binaries (either 'Full stack' or 'HCI Layer').
These binaries are delivered in STM32WB Cube packages, under
Projects/STM32WB_Copro_Wireless_Binaries/STM32WB5x/
To date, interoperability and backward compatibility has been tested and is
guaranteed up to version 1.5 of STM32Cube package releases.
For compatibility information with the various versions of these binaries,
please check `modules/hal/stm32/lib/stm32wb/hci/README <https://github.com/zephyrproject-rtos/hal_stm32/blob/main/lib/stm32wb/hci/README>`__
in the hal_stm32 repo.
Connections and IOs
===================

View File

@@ -1,6 +1,8 @@
# SPDX-License-Identifier: Apache-2.0
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
if(ZEPHYR_SDK_HOST_TOOLS)
include(${ZEPHYR_BASE}/cmake/toolchain/zephyr/host-tools.cmake)
endif()
# dtc is an optional dependency
find_program(

View File

@@ -1,8 +0,0 @@
# Zephyr 0.11 SDK Toolchain
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
config TOOLCHAIN_ZEPHYR_0_11
def_bool y
select HAS_NEWLIB_LIBC_NANO if (ARC || ARM || RISCV)

View File

@@ -1,34 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(TOOLCHAIN_HOME ${ZEPHYR_SDK_INSTALL_DIR})
set(COMPILER gcc)
set(LINKER ld)
set(BINTOOLS gnu)
# Find some toolchain that is distributed with this particular SDK
file(GLOB toolchain_paths
LIST_DIRECTORIES true
${TOOLCHAIN_HOME}/xtensa/*/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-elf
${TOOLCHAIN_HOME}/*-zephyr-eabi
)
if(toolchain_paths)
list(GET toolchain_paths 0 some_toolchain_path)
get_filename_component(one_toolchain_root "${some_toolchain_path}" DIRECTORY)
get_filename_component(one_toolchain "${some_toolchain_path}" NAME)
set(CROSS_COMPILE_TARGET ${one_toolchain})
set(SYSROOT_TARGET ${one_toolchain})
endif()
if(NOT CROSS_COMPILE_TARGET)
message(FATAL_ERROR "Unable to find 'x86_64-zephyr-elf' or any other architecture in ${TOOLCHAIN_HOME}")
endif()
set(CROSS_COMPILE ${one_toolchain_root}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
set(SYSROOT_DIR ${one_toolchain_root}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(TOOLCHAIN_HAS_NEWLIB ON CACHE BOOL "True if toolchain supports newlib")

View File

@@ -1,12 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(HOST_TOOLS_HOME ${ZEPHYR_SDK_INSTALL_DIR}/sysroots/${TOOLCHAIN_ARCH}-pokysdk-linux)
# Path used for searching by the find_*() functions, with appropriate
# suffixes added. Ensures that the SDK's host tools will be found when
# we call, e.g. find_program(QEMU qemu-system-x86)
list(APPEND CMAKE_PREFIX_PATH ${HOST_TOOLS_HOME}/usr)
# TODO: Use find_* somehow for these as well?
set_ifndef(QEMU_BIOS ${HOST_TOOLS_HOME}/usr/share/qemu)
set_ifndef(OPENOCD_DEFAULT_PATH ${HOST_TOOLS_HOME}/usr/share/openocd/scripts)

View File

@@ -1,53 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(CROSS_COMPILE_TARGET_arm arm-zephyr-eabi)
set(CROSS_COMPILE_TARGET_arm64 aarch64-zephyr-elf)
set(CROSS_COMPILE_TARGET_nios2 nios2-zephyr-elf)
set(CROSS_COMPILE_TARGET_riscv riscv64-zephyr-elf)
set(CROSS_COMPILE_TARGET_mips mipsel-zephyr-elf)
set(CROSS_COMPILE_TARGET_xtensa xtensa-zephyr-elf)
set(CROSS_COMPILE_TARGET_arc arc-zephyr-elf)
set(CROSS_COMPILE_TARGET_x86 x86_64-zephyr-elf)
set(CROSS_COMPILE_TARGET_sparc sparc-zephyr-elf)
set(CROSS_COMPILE_TARGET ${CROSS_COMPILE_TARGET_${ARCH}})
set(SYSROOT_TARGET ${CROSS_COMPILE_TARGET})
if("${ARCH}" STREQUAL "xtensa")
# Xtensa GCC needs a different toolchain per SOC
if("${SOC_SERIES}" STREQUAL "cavs_v15")
set(SR_XT_TC_SOC intel_apl_adsp)
elseif("${SOC_SERIES}" STREQUAL "cavs_v18")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v20")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "cavs_v25")
set(SR_XT_TC_SOC intel_s1000)
elseif("${SOC_SERIES}" STREQUAL "baytrail_adsp")
set(SR_XT_TC_SOC intel_byt_adsp)
elseif("${SOC_SERIES}" STREQUAL "broadwell_adsp")
set(SR_XT_TC_SOC intel_bdw_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8")
set(SR_XT_TC_SOC nxp_imx_adsp)
elseif("${SOC_SERIES}" STREQUAL "imx8m")
set(SR_XT_TC_SOC nxp_imx8m_adsp)
else()
message(FATAL_ERROR "Not compiler set for SOC_SERIES ${SOC_SERIES}")
endif()
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SR_XT_TC_SOC}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
else()
# Non-Xtensa SDK toolchains follow a simpler convention
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/${SYSROOT_TARGET}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
if("${ARCH}" STREQUAL "x86")
if(CONFIG_X86_64)
list(APPEND TOOLCHAIN_C_FLAGS -m64)
list(APPEND TOOLCHAIN_LD_FLAGS -m64)
else()
list(APPEND TOOLCHAIN_C_FLAGS -m32)
list(APPEND TOOLCHAIN_LD_FLAGS -m32)
endif()
endif()

View File

@@ -1,13 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/generic.cmake)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR})
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/generic.cmake)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
set(TOOLCHAIN_KCONFIG_DIR ${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr)
endif()
message(STATUS "Found toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,55 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
# This is the minimum required version for Zephyr to work (Old style)
set(REQUIRED_SDK_VER 0.11.1)
cmake_host_system_information(RESULT TOOLCHAIN_ARCH QUERY OS_PLATFORM)
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
if(NOT DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Until https://github.com/zephyrproject-rtos/zephyr/issues/4912 is
# resolved we use ZEPHYR_SDK_INSTALL_DIR to determine whether the user
# wants to use the Zephyr SDK or not.
return()
endif()
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
if(NOT DEFINED SDK_VERSION)
if(ZEPHYR_TOOLCHAIN_VARIANT AND ZEPHYR_SDK_INSTALL_DIR)
# Manual detection for Zephyr SDK 0.11.1 and 0.11.2 for backward compatibility.
set(sdk_version_path ${ZEPHYR_SDK_INSTALL_DIR}/sdk_version)
if(NOT (EXISTS ${sdk_version_path}))
message(FATAL_ERROR
"The file '${ZEPHYR_SDK_INSTALL_DIR}/sdk_version' was not found. \
Is ZEPHYR_SDK_INSTALL_DIR=${ZEPHYR_SDK_INSTALL_DIR} misconfigured?")
endif()
# Read version as published by the SDK
file(READ ${sdk_version_path} SDK_VERSION_PRE1)
# Remove any pre-release data, for example 0.10.0-beta4 -> 0.10.0
string(REGEX REPLACE "-.*" "" SDK_VERSION_PRE2 ${SDK_VERSION_PRE1})
# Strip any trailing spaces/newlines from the version string
string(STRIP ${SDK_VERSION_PRE2} SDK_VERSION)
string(REGEX MATCH "([0-9]*).([0-9]*)" SDK_MAJOR_MINOR ${SDK_VERSION})
string(REGEX MATCH "([0-9]+)\.([0-9]+)\.([0-9]+)" SDK_MAJOR_MINOR_MICRO ${SDK_VERSION})
#at least 0.0.0
if(NOT SDK_MAJOR_MINOR_MICRO)
message(FATAL_ERROR "sdk version: ${SDK_MAJOR_MINOR_MICRO} improper format.
Expected format: x.y.z
Check whether the Zephyr SDK was installed correctly.
")
endif()
endif()
endif()
message(STATUS "Using toolchain: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/host-tools.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/host-tools.cmake)
endif()
message(STATUS "Found host-tools: zephyr ${SDK_VERSION} (${ZEPHYR_SDK_INSTALL_DIR})")

View File

@@ -1,18 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
if(${SDK_VERSION} VERSION_LESS_EQUAL 0.11.2)
# For backward compatibility with 0.11.1 and 0.11.2
# we need to source files from Zephyr repo
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
elseif(("${ARCH}" STREQUAL "sparc") AND (${SDK_VERSION} VERSION_LESS 0.12))
# SDK 0.11.3, 0.11.4 does not have SPARC target support.
include(${CMAKE_CURRENT_LIST_DIR}/${SDK_MAJOR_MINOR}/target.cmake)
else()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)
# Workaround, FIXME: Waiting for new SDK.
if("${ARCH}" STREQUAL "xtensa")
set(SYSROOT_DIR ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${SYSROOT_TARGET})
set(CROSS_COMPILE ${TOOLCHAIN_HOME}/xtensa/${SOC_TOOLCHAIN_NAME}/${CROSS_COMPILE_TARGET}/bin/${CROSS_COMPILE_TARGET}-)
endif()
endif()
include(${ZEPHYR_SDK_INSTALL_DIR}/cmake/zephyr/target.cmake)

View File

@@ -90,10 +90,6 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
if (NOT Zephyr-sdk_CONSIDERED_VERSIONS)
set(error_msg "ZEPHYR_TOOLCHAIN_VARIANT not specified and no Zephyr SDK is installed.\n")
string(APPEND error_msg "Please set ZEPHYR_TOOLCHAIN_VARIANT to the toolchain to use or install the Zephyr SDK.")
if(NOT ZEPHYR_TOOLCHAIN_VARIANT AND NOT ZEPHYR_SDK_INSTALL_DIR)
set(error_note "Note: If you are using Zephyr SDK 0.11.1 or 0.11.2, remember to set ZEPHYR_SDK_INSTALL_DIR and ZEPHYR_TOOLCHAIN_VARIANT")
endif()
else()
# Note: When CMake mimimun version becomes >= 3.17, change this loop into:
# foreach(version config IN ZIP_LISTS Zephyr-sdk_CONSIDERED_VERSIONS Zephyr-sdk_CONSIDERED_CONFIGS)
@@ -116,11 +112,17 @@ if(NOT DEFINED ZEPHYR_TOOLCHAIN_VARIANT)
message(FATAL_ERROR "${error_msg}
The Zephyr SDK can be downloaded from:
https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}/zephyr-sdk-${TOOLCHAIN_ZEPHYR_MINIMUM_REQUIRED_VERSION}-setup.run
${error_note}
")
endif()
if(DEFINED ZEPHYR_SDK_INSTALL_DIR)
# Cache the Zephyr SDK install dir.
set(ZEPHYR_SDK_INSTALL_DIR ${ZEPHYR_SDK_INSTALL_DIR} CACHE PATH "Zephyr SDK install directory")
# Use the Zephyr SDK host-tools.
set(ZEPHYR_SDK_HOST_TOOLS TRUE)
endif()
if(CMAKE_SCRIPT_MODE_FILE)
if("${FORMAT}" STREQUAL "json")
set(json "{\"ZEPHYR_TOOLCHAIN_VARIANT\" : \"${ZEPHYR_TOOLCHAIN_VARIANT}\", ")

View File

@@ -2,24 +2,186 @@
.. _zephyr_2.7:
.. _zephyr_2.7.1:
.. _zephyr_2.7.3:
Zephyr 2.7.1
Zephyr 2.7.3
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.2 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`41074` - can_mcan_send sends corrupted CAN frames with a byte-by-byte memcpy implementation
* :github:`43479` - Bluetooth: Controller: Fix per adv scheduling issue
* :github:`43694` - drivers: spi: stm32 spi with dma must enable cs after periph
* :github:`44089` - logging: shell backend: null-deref when logs are dropped
* :github:`45341` - Add new EHL SKUs for IBECC
* :github:`45529` - GdbStub get_mem_region bugfix
* :github:`46621` - drivers: i2c: Infinite recursion in driver unregister function
* :github:`46698` - sm351 driver faults when using global thread
* :github:`46706` - add missing checks for segment number
* :github:`46757` - Bluetooth: Controller: Missing validation of unsupported PHY when performing PHY update
* :github:`46807` - lib: posix: semaphore: use consistent timebase in sem_timedwait
* :github:`46822` - L2CAP disconnected packet timing in ecred reconf function
* :github:`46994` - Incorrect Xtensa toolchain path resolution
* :github:`47356` - cpp: global static object initialisation may fail for MMU and MPU platforms
* :github:`47609` - posix: pthread: descriptor leak with pthread_join
* :github:`47955` - drivers: can: various RTR fixes
* :github:`48249` - boards: nucleo_wb55rg: documentation BLE binary compatibility issue
* :github:`48271` - net: Possible net_pkt leak in ipv6 multicast forwarding
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* (N/A)
* CVE-2022-2741: Under embargo until 2022-10-14
* CVE-2022-1042: `Zephyr project bug tracker GHSA-j7v7-w73r-mm5x
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-j7v7-w73r-mm5x>`_
* CVE-2022-1041: `Zephyr project bug tracker GHSA-p449-9hv9-pj38
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-p449-9hv9-pj38>`_
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.2:
Zephyr 2.7.2
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************
These GitHub issues were addressed since the previous 2.7.1 tagged
release:
.. comment List derived from GitHub Issue query: ...
* :github:`issuenumber` - issue title
* :github:`23419` - posix: clock: No thread safety clock_getime / clock_settime
* :github:`30367` - TCP2 does not send our MSS to peer
* :github:`37389` - nucleo_g0b1re: Swapping image in mcuboot results in hard fault and softbricks the device
* :github:`38268` - Multiple defects in "Multi Producer Single Consumer Packet Buffer" library
* :github:`38576` - net shell: self-connecting to TCP might lead to a crash
* :github:`39184` - HawkBit hash mismatch
* :github:`39242` - net: sockets: Zephyr Fatal in dns_resolve_cb if dns request was attempted in offline state
* :github:`39399` - linker: Missing align __itcm_load_start / __dtcm_data_load_start linker symbols
* :github:`39608` - stm32: lpuart: 9600 baudrate doesn't work
* :github:`39609` - spi: slave: division by zero in timeout calculation
* :github:`39660` - poll() not notified when a TLS/TCP connection is closed without TLS close_notify
* :github:`39687` - sensor: qdec_nrfx: PM callback has incorrect signature
* :github:`39774` - modem: uart mux reading optimization never used
* :github:`39882` - Bluetooth Host qualification on 2.7 branch
* :github:`40163` - Use correct clock frequency for systick+DWT
* :github:`40464` - Dereferencing NULL with getsockname() on TI Simplelink Platform
* :github:`40578` - MODBUS RS-485 transceiver support broken on several platforms due to DE race condition
* :github:`40614` - poll: the code judgment condition is always true
* :github:`40640` - drivers: usb_dc_native_posix: segfault when using composite USB device
* :github:`40730` - More power supply modes on STM32H7XX
* :github:`40775` - stm32: multi-threading broken after #40173
* :github:`40795` - Timer signal thread execution loop break SMP on ARM64
* :github:`40925` - mesh_badge not working reel_board_v2
* :github:`40985` - net: icmpv6: Add support for Route Info option in Router Advertisement
* :github:`41026` - LoRa: sx126x: DIO1 interrupt left enabled in sleep mode
* :github:`41077` - console: gsm_mux: could not send more than 128 bytes of data on dlci
* :github:`41089` - power modes for STM32H7
* :github:`41095` - libc: newlib: 'gettimeofday' causes stack overflow on non-POSIX builds
* :github:`41237` - drivers: ieee802154_dw1000: use dedicated workqueue
* :github:`41240` - logging can get messed up when messages are dropped
* :github:`41284` - pthread_cond_wait return value incorrect
* :github:`41339` - stm32, Unable to read UART while checking from Framing error.
* :github:`41488` - Stall logging on nrf52840
* :github:`41499` - drivers: iwdg: stm32: WDT_OPT_PAUSE_HALTED_BY_DBG might not work
* :github:`41503` - including net/socket.h fails with redefinition of struct zsock_timeval (sometimes :-) )
* :github:`41529` - documentation: generate Doxygen tag file
* :github:`41536` - Backport STM32 SMPS Support to v2.7.0
* :github:`41582` - stm32h7: CSI as PLL source is broken
* :github:`41683` - http_client: Unreliable rsp->body_start pointer
* :github:`41915` - regression: Build fails after switching logging to V2
* :github:`41942` - k_delayable_work being used as k_work in work's handler
* :github:`41952` - Log timestamp overflows when using LOGv2
* :github:`42164` - tests/bluetooth/tester broken after switch to logging v2
* :github:`42271` - drivers: can: m_can: The can_set_bitrate() function doesn't work.
* :github:`42299` - spi: nRF HAL driver asserts when PM is used
* :github:`42373` - add k_spin_lock() to doxygen prior to v3.0 release
* :github:`42581` - include: drivers: clock_control: stm32 incorrect DT_PROP is used for xtpre
* :github:`42615` - Bluetooth: Controller: Missing ticks slot offset calculation in Periodic Advertising event scheduling
* :github:`42622` - pm: pm_device structure bigger than nessecary when PM_DEVICE_RUNTIME not set
* :github:`42631` - Unable to identify owner of net_mgmt_lock easily
* :github:`42825` - MQTT client disconnection (EAGAIN) on publish with big payload
* :github:`42862` - Bluetooth: L2CAP: Security check on l2cap request is wrong
* :github:`43117` - Not possible to create more than one shield.
* :github:`43130` - STM32WL ADC idles / doesn't work
* :github:`43176` - net/icmpv4: client possible to ddos itself when there's an error for the broadcasted packet
* :github:`43177` - net: shell: errno not cleared before calling the strtol
* :github:`43178` - net: ip: route: log_strdup misuse
* :github:`43179` - net: tcp: forever loop in tcp_resend_data
* :github:`43180` - net: tcp: possible deadlock in tcp_conn_unref()
* :github:`43181` - net: sockets: net_pkt leak in accept
* :github:`43182` - net: arp: ARP retransmission source address selection
* :github:`43183` - net: mqtt: setsockopt leak on failure
* :github:`43184` - arm: Wrong macro used for z_interrupt_stacks declaration in stack.h
* :github:`43185` - arm: cortex-m: uninitialised ptr_esf in get_esf() in fault.c
* :github:`43470` - wifi: esp_at: race condition on mutex's leading to deadlock
* :github:`43490` - net: sockets: userspace accept() crashes with NULL addr/addrlen pointer
* :github:`43548` - gen_relocate_app truncates files on incremental builds
* :github:`43572` - stm32: wrong clock the LSI freq for stm32l0x mcus
* :github:`43580` - hl7800: tcp stack freezes on slow response from modem
* :github:`43807` - Test "cpp.libcxx.newlib.exception" failed on platforms which use zephyr.bin to run tests.
* :github:`43839` - Bluetooth: controller: missing NULL assign to df_cfg in ll_adv_set
* :github:`43853` - X86 MSI messages always get to BSP core (need a fix to be backported)
* :github:`43858` - mcumgr seems to lock up when it receives command for group that does not exist
* :github:`44107` - The SMP nsim boards are started incorrectly when launching on real HW
* :github:`44310` - net: gptp: type mismatch calculation error in gptp_mi
* :github:`44336` - nucleo_wb55rg: stm32cubeprogrammer runner is missing for twister tests
* :github:`44337` - twister: Miss sn option to stm32cubeprogrgammer runner
* :github:`44352` - stm32l5x boards missing the openocd runner
* :github:`44497` - Add guide for disabling MSD on JLink OB devices and link to from smp_svr page
* :github:`44531` - bl654_usb without mcuboot maximum image size is not limited
* :github:`44886` - Unable to boot Zephyr on FVP_BaseR_AEMv8R
* :github:`44902` - x86: FPU registers are not initialised for userspace (eager FPU sharing)
* :github:`45869` - doc: update requirements
* :github:`45870` - drivers: virt_ivshmem: Allow multiple instances of ivShMem devices
* :github:`45871` - ci: split Bluetooth workflow
* :github:`45872` - ci: make git credentials non-persistent
* :github:`45873` - soc: esp32: use PYTHON_EXECUTABLE from build system
Security Vulnerability Related
******************************
The following security vulnerabilities (CVEs) were addressed in this
release:
* CVE-2021-3966: `Zephyr project bug tracker GHSA-hfxq-3w6x-fv2m
<https://github.com/zephyrproject-rtos/zephyr/security/advisories/GHSA-hfxq-3w6x-fv2m>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
.. _zephyr_2.7.1:
Zephyr 2.7.1
####################
This is an LTS maintenance release with fixes.
Issues Fixed
************

View File

@@ -11,6 +11,9 @@
#include "can_loopback.h"
#include <logging/log.h>
#include "can_utils.h"
LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
K_KERNEL_STACK_DEFINE(tx_thread_stack,
@@ -41,13 +44,6 @@ static void dispatch_frame(const struct zcan_frame *frame,
filter->rx_cb(&frame_tmp, filter->cb_arg);
}
static inline int check_filter_match(const struct zcan_frame *frame,
const struct zcan_filter *filter)
{
return ((filter->id & filter->id_mask) ==
(frame->id & filter->id_mask));
}
void tx_thread(void *data_arg, void *arg2, void *arg3)
{
ARG_UNUSED(arg2);
@@ -63,7 +59,7 @@ void tx_thread(void *data_arg, void *arg2, void *arg3)
for (int i = 0; i < CONFIG_CAN_MAX_FILTER; i++) {
filter = &data->filters[i];
if (filter->rx_cb &&
check_filter_match(&frame.frame, &filter->filter)) {
can_utils_filter_match(&frame.frame, &filter->filter) != 0) {
dispatch_frame(&frame.frame, filter);
}
}

View File

@@ -23,6 +23,34 @@ LOG_MODULE_DECLARE(can_driver, CONFIG_CAN_LOG_LEVEL);
#define MCAN_MAX_DLC CAN_MAX_DLC
#endif
static void memcpy32_volatile(volatile void *dst_, const volatile void *src_,
size_t len)
{
volatile uint32_t *dst = dst_;
const volatile uint32_t *src = src_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst = *src;
++dst;
++src;
}
}
static void memset32_volatile(volatile void *dst_, uint32_t val, size_t len)
{
volatile uint32_t *dst = dst_;
__ASSERT(len % 4 == 0, "len must be a multiple of 4!");
len /= sizeof(uint32_t);
while (len--) {
*dst++ = val;
}
}
static int can_exit_sleep_mode(struct can_mcan_reg *can)
{
uint32_t start_time;
@@ -389,12 +417,7 @@ int can_mcan_init(const struct device *dev, const struct can_mcan_config *cfg,
}
/* No memset because only aligned ptr are allowed */
for (uint32_t *ptr = (uint32_t *)msg_ram;
ptr < (uint32_t *)msg_ram +
sizeof(struct can_mcan_msg_sram) / sizeof(uint32_t);
ptr++) {
*ptr = 0;
}
memset32_volatile(msg_ram, 0, sizeof(struct can_mcan_msg_sram));
return 0;
}
@@ -486,15 +509,17 @@ static void can_mcan_get_message(struct can_mcan_data *data,
uint32_t get_idx, filt_idx;
struct zcan_frame frame;
can_rx_callback_t cb;
volatile uint32_t *src, *dst, *end;
int data_length;
void *cb_arg;
struct can_mcan_rx_fifo_hdr hdr;
bool rtr_filter_mask;
bool rtr_filter;
while ((*fifo_status_reg & CAN_MCAN_RXF0S_F0FL)) {
get_idx = (*fifo_status_reg & CAN_MCAN_RXF0S_F0GI) >>
CAN_MCAN_RXF0S_F0GI_POS;
hdr = fifo[get_idx].hdr;
memcpy32_volatile(&hdr, &fifo[get_idx].hdr,
sizeof(struct can_mcan_rx_fifo_hdr));
if (hdr.xtd) {
frame.id = hdr.ext_id;
@@ -514,24 +539,25 @@ static void can_mcan_get_message(struct can_mcan_data *data,
filt_idx = hdr.fidx;
/* Check if RTR must match */
if ((hdr.xtd && data->ext_filt_rtr_mask & (1U << filt_idx) &&
((data->ext_filt_rtr >> filt_idx) & 1U) != frame.rtr) ||
(data->std_filt_rtr_mask & (1U << filt_idx) &&
((data->std_filt_rtr >> filt_idx) & 1U) != frame.rtr)) {
if (hdr.xtd != 0) {
rtr_filter_mask = (data->ext_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->ext_filt_rtr & BIT(filt_idx)) != 0;
} else {
rtr_filter_mask = (data->std_filt_rtr_mask & BIT(filt_idx)) != 0;
rtr_filter = (data->std_filt_rtr & BIT(filt_idx)) != 0;
}
if (rtr_filter_mask && (rtr_filter != frame.rtr)) {
/* RTR bit does not match filter RTR mask and bit, drop frame */
*fifo_ack_reg = get_idx;
continue;
}
data_length = can_dlc_to_bytes(frame.dlc);
if (data_length <= sizeof(frame.data)) {
/* data needs to be written in 32 bit blocks!*/
for (src = fifo[get_idx].data_32,
dst = frame.data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(frame.data_32, fifo[get_idx].data_32,
ROUND_UP(data_length, sizeof(uint32_t)));
if (frame.id_type == CAN_STANDARD_IDENTIFIER) {
LOG_DBG("Frame on filter %d, ID: 0x%x",
@@ -647,8 +673,6 @@ int can_mcan_send(const struct can_mcan_config *cfg,
uint32_t put_idx;
int ret;
struct can_mcan_mm mm;
volatile uint32_t *dst, *end;
const uint32_t *src;
LOG_DBG("Sending %d bytes. Id: 0x%x, ID type: %s %s %s %s",
data_length, frame->id,
@@ -696,15 +720,9 @@ int can_mcan_send(const struct can_mcan_config *cfg,
tx_hdr.ext_id = frame->id;
}
msg_ram->tx_buffer[put_idx].hdr = tx_hdr;
for (src = frame->data_32,
dst = msg_ram->tx_buffer[put_idx].data_32,
end = dst + CAN_DIV_CEIL(data_length, sizeof(uint32_t));
dst < end;
src++, dst++) {
*dst = *src;
}
memcpy32_volatile(&msg_ram->tx_buffer[put_idx].hdr, &tx_hdr, sizeof(tx_hdr));
memcpy32_volatile(msg_ram->tx_buffer[put_idx].data_32, frame->data_32,
ROUND_UP(data_length, 4));
data->tx_fin_cb[put_idx] = callback;
data->tx_fin_cb_arg[put_idx] = callback_arg;
@@ -761,7 +779,8 @@ int can_mcan_attach_std(struct can_mcan_data *data,
filter_element.sfce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->std_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->std_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_std_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -820,7 +839,8 @@ static int can_mcan_attach_ext(struct can_mcan_data *data,
filter_element.efce = filter_nr & 0x01 ? CAN_MCAN_FCE_FIFO1 :
CAN_MCAN_FCE_FIFO0;
msg_ram->ext_filt[filter_nr] = filter_element;
memcpy32_volatile(&msg_ram->ext_filt[filter_nr], &filter_element,
sizeof(struct can_mcan_ext_filter));
k_mutex_unlock(&data->inst_mutex);
@@ -874,9 +894,6 @@ int can_mcan_attach_isr(struct can_mcan_data *data,
void can_mcan_detach(struct can_mcan_data *data,
struct can_mcan_msg_sram *msg_ram, int filter_nr)
{
const struct can_mcan_ext_filter ext_filter = {0};
const struct can_mcan_std_filter std_filter = {0};
k_mutex_lock(&data->inst_mutex, K_FOREVER);
if (filter_nr >= NUM_STD_FILTER_DATA) {
filter_nr -= NUM_STD_FILTER_DATA;
@@ -885,10 +902,12 @@ void can_mcan_detach(struct can_mcan_data *data,
return;
}
msg_ram->ext_filt[filter_nr] = ext_filter;
memset32_volatile(&msg_ram->ext_filt[filter_nr], 0,
sizeof(struct can_mcan_ext_filter));
data->rx_cb_ext[filter_nr] = NULL;
} else {
msg_ram->std_filt[filter_nr] = std_filter;
memset32_volatile(&msg_ram->std_filt[filter_nr], 0,
sizeof(struct can_mcan_std_filter));
data->rx_cb_std[filter_nr] = NULL;
}

View File

@@ -48,7 +48,7 @@ struct can_mcan_rx_fifo_hdr {
volatile uint32_t res : 2; /* Reserved */
volatile uint32_t fidx : 7; /* Filter Index */
volatile uint32_t anmf : 1; /* Accepted non-matching frame */
} __packed;
} __packed __aligned(4);
struct can_mcan_rx_fifo {
struct can_mcan_rx_fifo_hdr hdr;
@@ -56,7 +56,7 @@ struct can_mcan_rx_fifo {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
struct can_mcan_mm {
volatile uint8_t idx : 5;
@@ -84,7 +84,7 @@ struct can_mcan_tx_buffer_hdr {
volatile uint8_t res2 : 1; /* Reserved */
volatile uint8_t efc : 1; /* Event FIFO control (Store Tx events) */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
struct can_mcan_tx_buffer {
struct can_mcan_tx_buffer_hdr hdr;
@@ -92,7 +92,7 @@ struct can_mcan_tx_buffer {
volatile uint8_t data[64];
volatile uint32_t data_32[16];
};
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_TE_TX 0x1 /* TX event */
#define CAN_MCAN_TE_TXC 0x2 /* TX event in spite of cancellation */
@@ -109,7 +109,7 @@ struct can_mcan_tx_event_fifo {
volatile uint8_t fdf : 1; /* FD Format */
volatile uint8_t et : 2; /* Event type */
struct can_mcan_mm mm; /* Message marker */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_FCE_DISABLE 0x0
#define CAN_MCAN_FCE_FIFO0 0x1
@@ -130,7 +130,7 @@ struct can_mcan_std_filter {
volatile uint32_t id1 : 11;
volatile uint32_t sfce : 3; /* Filter config */
volatile uint32_t sft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
#define CAN_MCAN_EFT_RANGE_XIDAM 0x0
#define CAN_MCAN_EFT_DUAL 0x1
@@ -143,7 +143,7 @@ struct can_mcan_ext_filter {
volatile uint32_t id2 : 29; /* ID2 for dual or range, mask otherwise */
volatile uint32_t res : 1;
volatile uint32_t eft : 2; /* Filter type */
} __packed;
} __packed __aligned(4);
struct can_mcan_msg_sram {
volatile struct can_mcan_std_filter std_filt[NUM_STD_FILTER_ELEMENTS];
@@ -153,7 +153,7 @@ struct can_mcan_msg_sram {
volatile struct can_mcan_rx_fifo rx_buffer[NUM_RX_BUF_ELEMENTS];
volatile struct can_mcan_tx_event_fifo tx_event_fifo[NUM_TX_BUF_ELEMENTS];
volatile struct can_mcan_tx_buffer tx_buffer[NUM_TX_BUF_ELEMENTS];
} __packed;
} __packed __aligned(4);
struct can_mcan_data {
struct k_mutex inst_mutex;

View File

@@ -253,13 +253,11 @@ static void mcux_flexcan_copy_zfilter_to_mbconfig(const struct zcan_filter *src,
if (src->id_type == CAN_STANDARD_IDENTIFIER) {
dest->format = kFLEXCAN_FrameFormatStandard;
dest->id = FLEXCAN_ID_STD(src->id);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_STD_MASK(src->id_mask, src->rtr_mask, 1);
} else {
dest->format = kFLEXCAN_FrameFormatExtend;
dest->id = FLEXCAN_ID_EXT(src->id);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask,
src->rtr & src->rtr_mask, 1);
*mask = FLEXCAN_RX_MB_EXT_MASK(src->id_mask, src->rtr_mask, 1);
}
if ((src->rtr & src->rtr_mask) == CAN_DATAFRAME) {
@@ -646,6 +644,7 @@ static inline void mcux_flexcan_transfer_rx_idle(const struct device *dev,
static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
{
struct mcux_flexcan_data *data = (struct mcux_flexcan_data *)userData;
const struct mcux_flexcan_config *config = data->dev->config;
switch (status) {
case kStatus_FLEXCAN_UnHandled:
@@ -654,6 +653,7 @@ static FLEXCAN_CALLBACK(mcux_flexcan_transfer_callback)
mcux_flexcan_transfer_error_status(data->dev, (uint64_t)result);
break;
case kStatus_FLEXCAN_TxSwitchToRx:
FLEXCAN_TransferAbortReceive(config->base, &data->handle, (uint64_t)result);
__fallthrough;
case kStatus_FLEXCAN_TxIdle:
/* The result field is a MB value which is limited to 32bit value */

View File

@@ -302,6 +302,12 @@ int edac_ibecc_init(const struct device *dev)
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU11):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU12):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU13):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU14):
__fallthrough;
case PCIE_ID(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SKU15):
break;
default:
LOG_ERR("PCI Probe failed");

View File

@@ -18,6 +18,9 @@
#define PCI_DEVICE_ID_SKU10 0x452e
#define PCI_DEVICE_ID_SKU11 0x4532
#define PCI_DEVICE_ID_SKU12 0x4518
#define PCI_DEVICE_ID_SKU13 0x451a
#define PCI_DEVICE_ID_SKU14 0x4534
#define PCI_DEVICE_ID_SKU15 0x4536
/* TODO: Move to correct place NMI registers */

View File

@@ -71,7 +71,7 @@ static inline int z_vrfy_i2c_slave_driver_register(const struct device *dev)
static inline int z_vrfy_i2c_slave_driver_unregister(const struct device *dev)
{
Z_OOPS(Z_SYSCALL_OBJ(dev, K_OBJ_DRIVER_I2C));
return z_vrfy_i2c_slave_driver_unregister(dev);
return z_impl_i2c_slave_driver_unregister(dev);
}
#include <syscalls/i2c_slave_driver_unregister_mrsh.c>

View File

@@ -209,9 +209,9 @@ static int sm351lt_init(const struct device *dev)
}
#if defined(CONFIG_SM351LT_TRIGGER)
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
data->dev = dev;
#if defined(CONFIG_SM351LT_TRIGGER_OWN_THREAD)
k_sem_init(&data->gpio_sem, 0, K_SEM_MAX_LIMIT);
k_thread_create(&data->thread, data->thread_stack,

View File

@@ -718,11 +718,11 @@ static int transceive_dma(const struct device *dev,
/* Set buffers info */
spi_context_buffers_setup(&data->ctx, tx_bufs, rx_bufs, 1);
LL_SPI_Enable(spi);
/* This is turned off in spi_stm32_complete(). */
spi_stm32_cs_control(dev, true);
LL_SPI_Enable(spi);
while (data->ctx.rx_len > 0 || data->ctx.tx_len > 0) {
size_t dma_len;

View File

@@ -94,7 +94,7 @@ void free(void *ptr)
(void) sys_mutex_unlock(&z_malloc_heap_mutex);
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#else /* No malloc arena */
void *malloc(size_t size)
{

View File

@@ -133,7 +133,7 @@ static int malloc_prepare(const struct device *unused)
return 0;
}
SYS_INIT(malloc_prepare, APPLICATION, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
SYS_INIT(malloc_prepare, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
/* Current offset from HEAP_BASE of unused memory */
LIBC_BSS static size_t heap_sz;

View File

@@ -19,6 +19,7 @@ config POSIX_API
config PTHREAD_IPC
bool "POSIX pthread IPC API"
default y if POSIX_API
depends on POSIX_CLOCK
help
This enables a mostly-standards-compliant implementation of
the pthread mutex, condition variable and barrier IPC

View File

@@ -150,7 +150,7 @@ int pthread_create(pthread_t *newthread, const pthread_attr_t *attr,
for (pthread_num = 0;
pthread_num < CONFIG_MAX_PTHREAD_COUNT; pthread_num++) {
thread = &posix_thread_pool[pthread_num];
if (thread->state == PTHREAD_TERMINATED) {
if (thread->state == PTHREAD_EXITED || thread->state == PTHREAD_TERMINATED) {
thread->state = PTHREAD_JOINABLE;
break;
}

View File

@@ -91,6 +91,7 @@ int sem_post(sem_t *semaphore)
int sem_timedwait(sem_t *semaphore, struct timespec *abstime)
{
int32_t timeout;
struct timespec current;
int64_t current_ms, abstime_ms;
__ASSERT(abstime, "abstime pointer NULL");
@@ -100,8 +101,12 @@ int sem_timedwait(sem_t *semaphore, struct timespec *abstime)
return -1;
}
current_ms = (int64_t)k_uptime_get();
if (clock_gettime(CLOCK_REALTIME, &current) < 0) {
return -1;
}
abstime_ms = (int64_t)_ts_to_ms(abstime);
current_ms = (int64_t)_ts_to_ms(&current);
if (abstime_ms <= current_ms) {
timeout = 0;

View File

@@ -12,4 +12,5 @@ CONFIG_TEST_RANDOM_GENERATOR=y
# Use Portable threads
CONFIG_PTHREAD_IPC=y
CONFIG_POSIX_CLOCK=y
CONFIG_NET_SOCKETS_POSIX_NAMES=y

View File

@@ -122,7 +122,7 @@ class GdbStub(abc.ABC):
def get_mem_region(addr):
for r in self.mem_regions:
if r['start'] <= addr <= r['end']:
if r['start'] <= addr < r['end']:
return r
return None

View File

@@ -26,17 +26,17 @@ def parse_args():
parser.add_argument('-a', '--all', dest='all',
help='Show all bugs squashed', action='store_true')
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token', metavar='FILE')
parser.add_argument('-b', '--begin', dest='begin', help='begin date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type, required=True)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='date', type=valid_date_type, required=True)
metavar='END_DATE', type=valid_date_type, required=True)
args = parser.parse_args()
if args.end < args.begin:
if args.end < args.start:
raise ValueError(
'end date {} is before begin date {}'.format(args.end, args.begin))
'end date {} is before start date {}'.format(args.end, args.start))
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
@@ -53,12 +53,12 @@ def parse_args():
class BugBashTally(object):
def __init__(self, gh, begin_date, end_date):
def __init__(self, gh, start_date, end_date):
"""Create a BugBashTally object with the provided Github object,
begin datetime object, and end datetime object"""
start datetime object, and end datetime object"""
self._gh = gh
self._repo = gh.get_repo('zephyrproject-rtos/zephyr')
self._begin_date = begin_date
self._start_date = start_date
self._end_date = end_date
self._issues = []
@@ -122,12 +122,12 @@ class BugBashTally(object):
cutoff = self._end_date + timedelta(1)
issues = self._repo.get_issues(state='closed', labels=[
'bug'], since=self._begin_date)
'bug'], since=self._start_date)
for i in issues:
# the PyGithub API and v3 REST API do not facilitate 'until'
# or 'end date' :-/
if i.closed_at < self._begin_date or i.closed_at > cutoff:
if i.closed_at < self._start_date or i.closed_at > cutoff:
continue
ipr = i.pull_request
@@ -167,7 +167,7 @@ def print_top_ten(top_ten):
def main():
args = parse_args()
bbt = BugBashTally(Github(args.token), args.begin, args.end)
bbt = BugBashTally(Github(args.token), args.start, args.end)
if args.all:
# print one issue per line
issues = bbt.get_issues()

341
scripts/release/list_backports.py Executable file
View File

@@ -0,0 +1,341 @@
#!/usr/bin/env python3
# Copyright (c) 2022, Meta
#
# SPDX-License-Identifier: Apache-2.0
"""Query issues in a release branch
This script searches for issues referenced via pull-requests in a release
branch in order to simplify tracking changes such as automated backports,
manual backports, security fixes, and stability fixes.
A formatted report is printed to standard output either in JSON or
reStructuredText.
Since an issue is required for all changes to release branches, merged PRs
must have at least one instance of the phrase "Fixes #1234" in the body. This
script will throw an error if a PR has been made without an associated issue.
Usage:
./scripts/release/list_backports.py \
-t ~/.ghtoken \
-b v2.7-branch \
-s 2021-12-15 -e 2022-04-22 \
-P 45074 -P 45868 -P 44918 -P 41234 -P 41174 \
-j | jq . | tee /tmp/backports.json
GITHUB_TOKEN="<secret>" \
./scripts/release/list_backports.py \
-b v3.0-branch \
-p 43381 \
-j | jq . | tee /tmp/backports.json
"""
import argparse
from datetime import datetime, timedelta
import io
import json
import logging
import os
import re
import sys
# Requires PyGithub
from github import Github
# https://gist.github.com/monkut/e60eea811ef085a6540f
def valid_date_type(arg_date_str):
"""custom argparse *date* type for user dates values given from the
command line"""
try:
return datetime.strptime(arg_date_str, "%Y-%m-%d")
except ValueError:
msg = "Given Date ({0}) not valid! Expected format, YYYY-MM-DD!".format(arg_date_str)
raise argparse.ArgumentTypeError(msg)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-t', '--token', dest='tokenfile',
help='File containing GitHub token (alternatively, use GITHUB_TOKEN env variable)', metavar='FILE')
parser.add_argument('-b', '--base', dest='base',
help='branch (base) for PRs (e.g. v2.7-branch)', metavar='BRANCH', required=True)
parser.add_argument('-j', '--json', dest='json', action='store_true',
help='print output in JSON rather than RST')
parser.add_argument('-s', '--start', dest='start', help='start date (YYYY-mm-dd)',
metavar='START_DATE', type=valid_date_type)
parser.add_argument('-e', '--end', dest='end', help='end date (YYYY-mm-dd)',
metavar='END_DATE', type=valid_date_type)
parser.add_argument("-o", "--org", default="zephyrproject-rtos",
help="Github organisation")
parser.add_argument('-p', '--include-pull', dest='includes',
help='include pull request (can be specified multiple times)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument('-P', '--exclude-pull', dest='excludes',
help='exclude pull request (can be specified multiple times, helpful for version bumps and release notes)',
metavar='PR', type=int, action='append', default=[])
parser.add_argument("-r", "--repo", default="zephyr",
help="Github repository")
args = parser.parse_args()
if args.includes:
if getattr(args, 'start'):
logging.error(
'the --start argument should not be used with --include-pull')
return None
if getattr(args, 'end'):
logging.error(
'the --end argument should not be used with --include-pull')
return None
else:
if not getattr(args, 'start'):
logging.error(
'if --include-pull PR is not used, --start START_DATE is required')
return None
if not getattr(args, 'end'):
setattr(args, 'end', datetime.now())
if args.end < args.start:
logging.error(
f'end date {args.end} is before start date {args.start}')
return None
if args.tokenfile:
with open(args.tokenfile, 'r') as file:
token = file.read()
token = token.strip()
else:
if 'GITHUB_TOKEN' not in os.environ:
raise ValueError('No credentials specified')
token = os.environ['GITHUB_TOKEN']
setattr(args, 'token', token)
return args
class Backport(object):
def __init__(self, repo, base, pulls):
self._base = base
self._repo = repo
self._issues = []
self._pulls = pulls
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
@staticmethod
def by_date_range(repo, base, start_date, end_date, excludes):
"""Create a Backport object with the provided repo,
base, start datetime object, and end datetime objects, and
list of excluded PRs"""
pulls = []
unfiltered_pulls = repo.get_pulls(
base=base, state='closed')
for p in unfiltered_pulls:
if not p.merged:
# only consider merged backports
continue
if p.closed_at < start_date or p.closed_at >= end_date + timedelta(1):
# only concerned with PRs within time window
continue
if p.number in excludes:
# skip PRs that have been explicitly excluded
continue
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def by_included_prs(repo, base, includes):
"""Create a Backport object with the provided repo,
base, and list of included PRs"""
pulls = []
for i in includes:
try:
p = repo.get_pull(i)
except Exception:
p = None
if not p:
logging.error(f'{i} is not a valid pull request')
return None
if p.base.ref != base:
logging.error(
f'{i} is not a valid pull request for base {base} ({p.base.label})')
return None
pulls.append(p)
# paginated_list.sort() does not exist
pulls = sorted(pulls, key=lambda x: x.number)
return Backport(repo, base, pulls)
@staticmethod
def sanitize_title(title):
# TODO: sanitize titles such that they are suitable for both JSON and ReStructured Text
# could also automatically fix titles like "Automated backport of PR #1234"
return title
def print(self):
for i in self.get_issues():
title = Backport.sanitize_title(i.title)
# * :github:`38972` - logging: Cleaning references to tracing in logging
print(f'* :github:`{i.number}` - {title}')
def print_json(self):
issue_objects = []
for i in self.get_issues():
obj = {}
obj['id'] = i.number
obj['title'] = Backport.sanitize_title(i.title)
obj['url'] = f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{i.number}'
issue_objects.append(obj)
print(json.dumps(issue_objects))
def get_pulls(self):
return self._pulls
def get_issues(self):
"""Return GitHub issues fixed in the provided date window"""
if self._issues:
return self._issues
issue_map = {}
self._pulls_without_an_issue = []
self._pulls_with_invalid_issues = {}
for p in self._pulls:
# check for issues in this pr
issues_for_this_pr = {}
with io.StringIO(p.body) as buf:
for line in buf.readlines():
line = line.strip()
match = re.search(r"^Fixes[:]?\s*#([1-9][0-9]*).*", line)
if not match:
match = re.search(
rf"^Fixes[:]?\s*https://github\.com/{self._repo.organization.login}/{self._repo.name}/issues/([1-9][0-9]*).*", line)
if not match:
continue
issue_number = int(match[1])
issue = self._repo.get_issue(issue_number)
if not issue:
if p.number not in self._pulls_with_invalid_issues:
self._pulls_with_invalid_issues[p.number] = [
issue_number]
else:
self._pulls_with_invalid_issues[p.number].append(
issue_number)
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} references invalid issue number {issue_number}')
continue
issues_for_this_pr[issue_number] = issue
# report prs missing issues later
if len(issues_for_this_pr) == 0:
logging.error(
f'https://github.com/{self._repo.organization.login}/{self._repo.name}/pull/{p.number} does not have an associated issue')
self._pulls_without_an_issue.append(p)
continue
# FIXME: when we have upgraded to python3.9+, use "issue_map | issues_for_this_pr"
issue_map = {**issue_map, **issues_for_this_pr}
issues = list(issue_map.values())
# paginated_list.sort() does not exist
issues = sorted(issues, key=lambda x: x.number)
self._issues = issues
return self._issues
def get_pulls_without_issues(self):
if self._pulls_without_an_issue:
return self._pulls_without_an_issue
self.get_issues()
return self._pulls_without_an_issue
def get_pulls_with_invalid_issues(self):
if self._pulls_with_invalid_issues:
return self._pulls_with_invalid_issues
self.get_issues()
return self._pulls_with_invalid_issues
def main():
args = parse_args()
if not args:
return os.EX_DATAERR
try:
gh = Github(args.token)
except Exception:
logging.error('failed to authenticate with GitHub')
return os.EX_DATAERR
try:
repo = gh.get_repo(args.org + '/' + args.repo)
except Exception:
logging.error('failed to obtain Github repository')
return os.EX_DATAERR
bp = None
if args.includes:
bp = Backport.by_included_prs(repo, args.base, set(args.includes))
else:
bp = Backport.by_date_range(repo, args.base,
args.start, args.end, set(args.excludes))
if not bp:
return os.EX_DATAERR
pulls_with_invalid_issues = bp.get_pulls_with_invalid_issues()
if pulls_with_invalid_issues:
logging.error('The following PRs link to invalid issues:')
for (pr_number, lst) in pulls_with_invalid_issues.items():
logging.error(
f'\nhttps://github.com/{repo.organization.login}/{repo.name}/pull/{pr_number}: {lst}')
return os.EX_DATAERR
pulls_without_issues = bp.get_pulls_without_issues()
if pulls_without_issues:
logging.error(
'Please ensure the body of each PR to a release branch contains "Fixes #1234"')
logging.error('The following PRs are lacking associated issues:')
for p in pulls_without_issues:
logging.error(
f'https://github.com/{repo.organization.login}/{repo.name}/pull/{p.number}')
return os.EX_DATAERR
if args.json:
bp.print_json()
else:
bp.print()
return os.EX_OK
if __name__ == '__main__':
sys.exit(main())
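As the module docstring states, every merged backport PR must reference its issue with "Fixes #1234" (or the full issue URL) in the body; get_issues() enforces this with the two regular expressions above. A small standalone sketch of that matching rule, with a made-up PR body and the script's default org/repo hard-coded for illustration:

    import re

    ORG, REPO = "zephyrproject-rtos", "zephyr"  # the script's -o/-r defaults

    def referenced_issues(pr_body):
        """Collect issue numbers referenced as 'Fixes #N' or 'Fixes <issue URL>'."""
        issues = set()
        for line in pr_body.splitlines():
            line = line.strip()
            match = re.search(r"^Fixes[:]?\s*#([1-9][0-9]*).*", line)
            if not match:
                match = re.search(
                    rf"^Fixes[:]?\s*https://github\.com/{ORG}/{REPO}/issues/([1-9][0-9]*).*",
                    line)
            if match:
                issues.add(int(match[1]))
        return sorted(issues)

    body = "Backport of a bug fix.\nFixes #1234\nFixes: https://github.com/zephyrproject-rtos/zephyr/issues/4321\n"
    print(referenced_issues(body))                  # [1234, 4321]
    print(referenced_issues("no issue reference"))  # [] -> reported as a PR without an issue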

View File

@@ -147,10 +147,10 @@ def parse_args():
def main():
parse_args()
token = os.environ.get('GH_TOKEN', None)
token = os.environ.get('GITHUB_TOKEN', None)
if not token:
sys.exit("""Github token not set in environment,
set the env. variable GH_TOKEN please and retry.""")
set the env. variable GITHUB_TOKEN please and retry.""")
i = Issues(args.org, args.repo, token)
@@ -213,5 +213,6 @@ set the env. variable GH_TOKEN please and retry.""")
f.write("* :github:`{}` - {}\n".format(
item['number'], item['title']))
if __name__ == '__main__':
main()

View File

@@ -571,7 +571,7 @@ void sw_switch(uint8_t dir_curr, uint8_t dir_next, uint8_t phy_curr, uint8_t fla
hal_radio_sw_switch_coded_tx_config_set(ppi_en, ppi_dis,
cc_s2, sw_tifs_toggle);
} else if (!dir_curr) {
} else {
/* Switching to TX after RX on LE 1M/2M PHY */
hal_radio_sw_switch_coded_config_clear(ppi_en,

View File

@@ -4169,6 +4169,7 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
struct lll_conn *lll = &conn->lll;
struct node_rx_pdu *rx;
uint8_t old_tx, old_rx;
uint8_t phy_bitmask;
/* Acquire additional rx node for Data length notification as
* a peripheral.
@@ -4198,6 +4199,15 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
conn->llcp_ack = conn->llcp_req;
}
/* supported PHYs mask */
phy_bitmask = PHY_1M;
if (IS_ENABLED(CONFIG_BT_CTLR_PHY_2M)) {
phy_bitmask |= PHY_2M;
}
if (IS_ENABLED(CONFIG_BT_CTLR_PHY_CODED)) {
phy_bitmask |= PHY_CODED;
}
/* apply new phy */
old_tx = lll->phy_tx;
old_rx = lll->phy_rx;
@@ -4211,7 +4221,10 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
if (conn->llcp.phy_upd_ind.tx) {
lll->phy_tx = conn->llcp.phy_upd_ind.tx;
if (conn->llcp.phy_upd_ind.tx & phy_bitmask) {
lll->phy_tx = conn->llcp.phy_upd_ind.tx &
phy_bitmask;
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
eff_tx_time = calc_eff_time(lll->max_tx_octets,
@@ -4221,7 +4234,10 @@ static inline void event_phy_upd_ind_prep(struct ll_conn *conn,
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
}
if (conn->llcp.phy_upd_ind.rx) {
lll->phy_rx = conn->llcp.phy_upd_ind.rx;
if (conn->llcp.phy_upd_ind.rx & phy_bitmask) {
lll->phy_rx = conn->llcp.phy_upd_ind.rx &
phy_bitmask;
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
eff_rx_time =
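This hunk first builds a bitmask of the PHYs the controller was built with (1M always, 2M and Coded only when the corresponding CONFIG_BT_CTLR_PHY_* options are enabled) and then applies a peer-requested PHY only if it intersects that mask. A rough Python model of that clamping; the bit values below follow the usual 1M=0x01, 2M=0x02, Coded=0x04 convention and are assumptions for illustration:

    PHY_1M, PHY_2M, PHY_CODED = 0x01, 0x02, 0x04

    def apply_phy_update(current_phy, requested_phy, has_2m, has_coded):
        """Ignore a requested PHY that the local controller does not support."""
        phy_bitmask = PHY_1M
        if has_2m:
            phy_bitmask |= PHY_2M
        if has_coded:
            phy_bitmask |= PHY_CODED
        if requested_phy & phy_bitmask:
            return requested_phy & phy_bitmask
        return current_phy  # nothing supported was requested: keep the old PHY

    # A peer asking for Coded PHY on a 1M/2M-only build no longer ends up in lll->phy_tx:
    print(apply_phy_update(PHY_1M, PHY_CODED, has_2m=True, has_coded=False))  # 1
    print(apply_phy_update(PHY_1M, PHY_2M, has_2m=True, has_coded=False))     # 2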

View File

@@ -1295,18 +1295,13 @@ static void le_ecred_reconf_req(struct bt_l2cap *l2cap, uint8_t ident,
chan = bt_l2cap_le_lookup_tx_cid(conn, scid);
if (!chan) {
result = BT_L2CAP_RECONF_INVALID_CID;
continue;
goto response;
}
/* If the MTU value is decreased for any of the included
* channels, then the receiver shall disconnect all
* included channels.
*/
if (BT_L2CAP_LE_CHAN(chan)->tx.mtu > mtu) {
BT_ERR("chan %p decreased MTU %u -> %u", chan,
BT_L2CAP_LE_CHAN(chan)->tx.mtu, mtu);
result = BT_L2CAP_RECONF_INVALID_MTU;
bt_l2cap_chan_disconnect(chan);
goto response;
}
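The removed comment and the BT_L2CAP_RECONF_INVALID_MTU path above capture the behavioural change: an MTU decrease is now simply rejected, and the affected channels are neither reconfigured nor disconnected. A toy Python model of that decision, with channels reduced to dictionaries (the names and result strings are illustrative only):

    def handle_reconfigure(channels, requested_mtu):
        """Validate every channel first; reject an MTU decrease without touching anything."""
        for chan in channels:
            if chan is None:
                return "invalid_cid"          # unknown CID: reject, keep current values
            if chan["tx_mtu"] > requested_mtu:
                return "invalid_mtu"          # MTU may only grow: reject, do not disconnect
        for chan in channels:
            chan["tx_mtu"] = requested_mtu    # applied only when every channel passed
        return "success"

    chans = [{"tx_mtu": 100}, {"tx_mtu": 80}]
    print(handle_reconfigure(chans, 64), chans)   # invalid_mtu [{'tx_mtu': 100}, {'tx_mtu': 80}]
    print(handle_reconfigure(chans, 128), chans)  # success [{'tx_mtu': 128}, {'tx_mtu': 128}]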

View File

@@ -29,6 +29,7 @@
#define START_PAYLOAD_MAX 20
#define CONT_PAYLOAD_MAX 23
#define RX_BUFFER_MAX 65
#define START_LAST_SEG(gpc) (gpc >> 2)
#define CONT_SEG_INDEX(gpc) (gpc >> 2)
@@ -38,7 +39,8 @@
#define LINK_ACK 0x01
#define LINK_CLOSE 0x02
#define XACT_SEG_DATA(_seg) (&link.rx.buf->data[20 + ((_seg - 1) * 23)])
#define XACT_SEG_OFFSET(_seg) (20 + ((_seg - 1) * 23))
#define XACT_SEG_DATA(_seg) (&link.rx.buf->data[XACT_SEG_OFFSET(_seg)])
#define XACT_SEG_RECV(_seg) (link.rx.seg &= ~(1 << (_seg)))
#define XACT_ID_MAX 0x7f
@@ -116,7 +118,7 @@ struct prov_rx {
uint8_t gpc;
};
NET_BUF_SIMPLE_DEFINE_STATIC(rx_buf, 65);
NET_BUF_SIMPLE_DEFINE_STATIC(rx_buf, RX_BUFFER_MAX);
static struct pb_adv link = { .rx = { .buf = &rx_buf } };
@@ -147,7 +149,7 @@ static struct bt_mesh_send_cb buf_sent_cb = {
.end = buf_sent,
};
static uint8_t last_seg(uint8_t len)
static uint8_t last_seg(uint16_t len)
{
if (len <= START_PAYLOAD_MAX) {
return 0;
@@ -383,6 +385,11 @@ static void gen_prov_cont(struct prov_rx *rx, struct net_buf_simple *buf)
return;
}
if (XACT_SEG_OFFSET(seg) + buf->len > RX_BUFFER_MAX) {
BT_WARN("Rx buffer overflow. Malformed generic prov frame?");
return;
}
memcpy(XACT_SEG_DATA(seg), buf->data, buf->len);
XACT_SEG_RECV(seg);
@@ -475,6 +482,13 @@ static void gen_prov_start(struct prov_rx *rx, struct net_buf_simple *buf)
return;
}
if (START_LAST_SEG(rx->gpc) != last_seg(link.rx.buf->len)) {
BT_ERR("Invalid SegN (%u, calculated %u)", START_LAST_SEG(rx->gpc),
last_seg(link.rx.buf->len));
prov_failed(PROV_ERR_NVAL_FMT);
return;
}
prov_clear_tx();
link.rx.last_seg = START_LAST_SEG(rx->gpc);
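The added checks lean on the transaction layout defined by the macros above: a start PDU carries at most START_PAYLOAD_MAX (20) bytes, every continuation segment at most CONT_PAYLOAD_MAX (23) bytes, and reassembly happens in an RX_BUFFER_MAX (65) byte buffer. A small Python sketch of that arithmetic, showing why the new XACT_SEG_OFFSET bound check is needed (last_seg() is modelled here as a ceiling division over the continuation segments):

    START_PAYLOAD_MAX = 20
    CONT_PAYLOAD_MAX = 23
    RX_BUFFER_MAX = 65

    def last_seg(total_len):
        """Index of the final segment for a payload of total_len bytes."""
        if total_len <= START_PAYLOAD_MAX:
            return 0
        remaining = total_len - START_PAYLOAD_MAX
        return -(-remaining // CONT_PAYLOAD_MAX)  # ceiling division

    def seg_offset(seg):
        """XACT_SEG_OFFSET(): where continuation segment `seg` lands in the rx buffer."""
        return START_PAYLOAD_MAX + (seg - 1) * CONT_PAYLOAD_MAX

    print(last_seg(65))                       # 2: 20 + 23 + 22 bytes fill the buffer
    print(seg_offset(2))                      # 43
    print(seg_offset(3) + 1 > RX_BUFFER_MAX)  # True: even 1 byte in segment 3 overflows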

View File

@@ -222,7 +222,7 @@ int bt_mesh_proxy_msg_send(struct bt_mesh_proxy_role *role, uint8_t type,
net_buf_simple_pull(msg, mtu);
while (msg->len) {
if (msg->len + 1 < mtu) {
if (msg->len + 1 <= mtu) {
net_buf_simple_push_u8(msg, PDU_HDR(SAR_LAST, type));
err = role->cb.send(conn, msg->data, msg->len, end, user_data);
if (err) {
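The '<' to '<=' change fixes a boundary case in the proxy SAR loop: every PDU spends one byte on the SAR/type header, so when the remaining payload plus that byte fits the MTU exactly, the data can go out as the last segment; the old check sent it as a continuation and the message was never terminated. A rough Python model of just that tagging decision (buffer handling and the actual GATT writes are omitted):

    def segment_tags(remaining, mtu, fixed=True):
        """Tag the PDUs emitted by the while (msg->len) loop for `remaining` payload bytes."""
        pdus = []
        while remaining:
            fits_as_last = (remaining + 1 <= mtu) if fixed else (remaining + 1 < mtu)
            if fits_as_last:
                pdus.append(("LAST", remaining))
                remaining = 0
            else:
                pdus.append(("CONT", mtu - 1))  # mtu bytes on air, minus the 1-byte header
                remaining -= mtu - 1
        return pdus

    print(segment_tags(22, 23, fixed=False))  # [('CONT', 22)] -> no LAST segment ever sent
    print(segment_tags(22, 23, fixed=True))   # [('LAST', 22)]
    print(segment_tags(50, 23, fixed=True))   # [('CONT', 22), ('CONT', 22), ('LAST', 6)]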

View File

@@ -691,6 +691,7 @@ int net_route_mcast_forward_packet(struct net_pkt *pkt,
if (net_send_data(pkt_cpy) >= 0) {
++ret;
} else {
net_pkt_unref(pkt_cpy);
--err;
}
}

View File

@@ -319,7 +319,9 @@ static void dropped(const struct log_backend *const backend, uint32_t cnt)
const struct shell *shell = (const struct shell *)backend->cb->ctx;
const struct shell_log_backend *log_backend = shell->log_backend;
atomic_add(&shell->stats->log_lost_cnt, cnt);
if (IS_ENABLED(CONFIG_SHELL_STATS)) {
atomic_add(&shell->stats->log_lost_cnt, cnt);
}
atomic_add(&log_backend->control_block->dropped_cnt, cnt);
}

View File

@@ -575,6 +575,12 @@ struct gatt_read_rp {
uint8_t data[];
} __packed;
struct gatt_char_value {
uint16_t handle;
uint8_t data_len;
uint8_t data[0];
} __packed;
#define GATT_READ_UUID 0x12
struct gatt_read_uuid_cmd {
uint8_t address_type;
@@ -586,8 +592,8 @@ struct gatt_read_uuid_cmd {
} __packed;
struct gatt_read_uuid_rp {
uint8_t att_response;
uint16_t data_length;
uint8_t data[];
uint8_t values_count;
struct gatt_char_value values[0];
} __packed;
#define GATT_READ_LONG 0x13

View File

@@ -1395,6 +1395,51 @@ static uint8_t read_cb(struct bt_conn *conn, uint8_t err,
return BT_GATT_ITER_CONTINUE;
}
static uint8_t read_uuid_cb(struct bt_conn *conn, uint8_t err,
struct bt_gatt_read_params *params, const void *data,
uint16_t length)
{
struct gatt_read_uuid_rp *rp = (void *)gatt_buf.buf;
struct gatt_char_value value;
/* Respond to the Lower Tester with ATT Error received */
if (err) {
rp->att_response = err;
}
/* read complete */
if (!data) {
tester_send(BTP_SERVICE_ID_GATT, btp_opcode, CONTROLLER_INDEX,
gatt_buf.buf, gatt_buf.len);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
value.handle = params->by_uuid.start_handle;
value.data_len = length;
if (!gatt_buf_add(&value, sizeof(struct gatt_char_value))) {
tester_rsp(BTP_SERVICE_ID_GATT, btp_opcode,
CONTROLLER_INDEX, BTP_STATUS_FAILED);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
if (!gatt_buf_add(data, length)) {
tester_rsp(BTP_SERVICE_ID_GATT, btp_opcode,
CONTROLLER_INDEX, BTP_STATUS_FAILED);
read_destroy(params);
return BT_GATT_ITER_STOP;
}
rp->values_count++;
return BT_GATT_ITER_CONTINUE;
}
static void read_data(uint8_t *data, uint16_t len)
{
const struct gatt_read_cmd *cmd = (void *) data;
@@ -1448,7 +1493,7 @@ static void read_uuid(uint8_t *data, uint16_t len)
goto fail;
}
if (!gatt_buf_reserve(sizeof(struct gatt_read_rp))) {
if (!gatt_buf_reserve(sizeof(struct gatt_read_uuid_rp))) {
goto fail;
}
@@ -1456,7 +1501,7 @@ static void read_uuid(uint8_t *data, uint16_t len)
read_params.handle_count = 0;
read_params.by_uuid.start_handle = sys_le16_to_cpu(cmd->start_handle);
read_params.by_uuid.end_handle = sys_le16_to_cpu(cmd->end_handle);
read_params.func = read_cb;
read_params.func = read_uuid_cb;
btp_opcode = GATT_READ_UUID;

View File

@@ -89,6 +89,28 @@ const struct zcan_frame test_ext_msg_2 = {
.data = {1, 2, 3, 4, 5, 6, 7, 8}
};
/**
* @brief Standard (11-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_std_rtr_msg_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.dlc = 0,
.data = {0}
};
/**
* @brief Extended (29-bit) CAN ID RTR frame 1.
*/
const struct zcan_frame test_ext_rtr_msg_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.dlc = 0,
.data = {0}
};
const struct zcan_filter test_std_filter_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_DATAFRAME,
@@ -154,6 +176,30 @@ const struct zcan_filter test_ext_masked_filter_2 = {
.id_mask = TEST_CAN_EXT_MASK
};
/**
* @brief Standard (11-bit) CAN ID RTR filter 1. This filter matches
* ``test_std_rtr_msg_1``.
*/
const struct zcan_filter test_std_rtr_filter_1 = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_STD_ID_1,
.rtr_mask = 1,
.id_mask = CAN_STD_ID_MASK
};
/**
* @brief Extended (29-bit) CAN ID RTR filter 1. This filter matches
* ``test_ext_rtr_msg_1``.
*/
const struct zcan_filter test_ext_rtr_filter_1 = {
.id_type = CAN_EXTENDED_IDENTIFIER,
.rtr = CAN_REMOTEREQUEST,
.id = TEST_CAN_EXT_ID_1,
.rtr_mask = 1,
.id_mask = CAN_EXT_ID_MASK
};
const struct zcan_filter test_std_some_filter = {
.id_type = CAN_STANDARD_IDENTIFIER,
.rtr = CAN_DATAFRAME,
@@ -517,6 +563,55 @@ static void send_receive(const struct zcan_filter *filter1,
can_detach(can_dev, filter_id_1);
}
/**
* @brief Perform a send/receive test with a set of CAN ID filters and CAN frames, RTR and data
* frames.
*
* @param data_filter CAN data filter
* @param rtr_filter CAN RTR filter
* @param data_frame CAN data frame
* @param rtr_frame CAN RTR frame
*/
void send_receive_rtr(const struct zcan_filter *data_filter,
const struct zcan_filter *rtr_filter,
const struct zcan_frame *data_frame,
const struct zcan_frame *rtr_frame)
{
struct zcan_frame frame;
int filter_id;
int err;
filter_id = attach_msgq(can_dev, rtr_filter);
/* Verify that RTR filter does not match data frame */
send_test_msg(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "Data frame passed RTR filter");
/* Verify that RTR filter matches RTR frame */
send_test_msg(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
check_msg(&frame, rtr_frame, 0);
can_detach(can_dev, filter_id);
filter_id = attach_msgq(can_dev, data_filter);
/* Verify that data filter does not match RTR frame */
send_test_msg(can_dev, rtr_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, -EAGAIN, "RTR frame passed data filter");
/* Verify that data filter matches data frame */
send_test_msg(can_dev, data_frame);
err = k_msgq_get(&can_msgq, &frame, TEST_RECEIVE_TIMEOUT);
zassert_equal(err, 0, "receive timeout");
check_msg(&frame, data_frame, 0);
can_detach(can_dev, filter_id);
}
/*
* Set driver to loopback mode
* The driver stays in loopback mode after that.
@@ -691,6 +786,24 @@ void test_send_receive_buffer(void)
can_detach(can_dev, filter_id);
}
/**
* @brief Test send/receive with standard (11-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_std_id_rtr(void)
{
send_receive_rtr(&test_std_filter_1, &test_std_rtr_filter_1,
&test_std_msg_1, &test_std_rtr_msg_1);
}
/**
* @brief Test send/receive with extended (29-bit) CAN IDs and remote transmission request (RTR).
*/
void test_send_receive_ext_id_rtr(void)
{
send_receive_rtr(&test_ext_filter_1, &test_ext_rtr_filter_1,
&test_ext_msg_1, &test_ext_rtr_msg_1);
}
/*
* Attach to a filter that should not pass the message and send a message
* with a different id.
@@ -746,6 +859,8 @@ void test_main(void)
ztest_unit_test(test_send_receive_ext),
ztest_unit_test(test_send_receive_std_masked),
ztest_unit_test(test_send_receive_ext_masked),
ztest_user_unit_test(test_send_receive_std_id_rtr),
ztest_user_unit_test(test_send_receive_ext_id_rtr),
ztest_unit_test(test_send_receive_buffer),
ztest_unit_test(test_send_receive_wrong_id));
ztest_run_test_suite(can_driver);
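These test cases exercise exactly the rule the loopback fix introduces: a filter only matches when it agrees with the frame on ID type, on the masked ID bits and on the masked RTR bit. A small Python model of that comparison, with the zcan fields reduced to plain integers (the ID 0x555 and the 0/1 constant values are illustrative assumptions):

    CAN_DATAFRAME, CAN_REMOTEREQUEST = 0, 1
    CAN_STANDARD_IDENTIFIER, CAN_EXTENDED_IDENTIFIER = 0, 1
    CAN_STD_ID_MASK = 0x7FF

    def frame_matches_filter(frame, flt):
        """Loopback-style check after the fix: ID type, masked ID and masked RTR must agree."""
        if frame["id_type"] != flt["id_type"]:
            return False
        if (frame["id"] ^ flt["id"]) & flt["id_mask"]:
            return False
        if (frame["rtr"] ^ flt["rtr"]) & flt["rtr_mask"]:
            return False
        return True

    rtr_filter = {"id_type": CAN_STANDARD_IDENTIFIER, "id": 0x555,
                  "id_mask": CAN_STD_ID_MASK, "rtr": CAN_REMOTEREQUEST, "rtr_mask": 1}
    rtr_frame = {"id_type": CAN_STANDARD_IDENTIFIER, "id": 0x555, "rtr": CAN_REMOTEREQUEST}
    data_frame = {"id_type": CAN_STANDARD_IDENTIFIER, "id": 0x555, "rtr": CAN_DATAFRAME}

    print(frame_matches_filter(rtr_frame, rtr_filter))   # True
    print(frame_matches_filter(data_frame, rtr_filter))  # False: the bug let this one through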

View File

@@ -21,6 +21,7 @@ extern void test_posix_pthread_create_negative(void);
extern void test_posix_pthread_termination(void);
extern void test_posix_multiple_threads_single_key(void);
extern void test_posix_single_thread_multiple_keys(void);
extern void test_pthread_descriptor_leak(void);
extern void test_nanosleep_NULL_NULL(void);
extern void test_nanosleep_NULL_notNULL(void);
extern void test_nanosleep_notNULL_NULL(void);
@@ -45,6 +46,7 @@ void test_main(void)
ztest_unit_test(test_posix_pthread_termination),
ztest_unit_test(test_posix_multiple_threads_single_key),
ztest_unit_test(test_posix_single_thread_multiple_keys),
ztest_unit_test(test_pthread_descriptor_leak),
ztest_unit_test(test_posix_clock),
ztest_unit_test(test_posix_semaphore),
ztest_unit_test(test_posix_normal_mutex),

View File

@@ -560,3 +560,20 @@ void test_posix_pthread_create_negative(void)
ret = pthread_create(&pthread1, &attr1, create_thread1, (void *)1);
zassert_equal(ret, EINVAL, "create thread with 0 size");
}
void test_pthread_descriptor_leak(void)
{
void *unused;
pthread_t pthread1;
pthread_attr_t attr;
zassert_ok(pthread_attr_init(&attr), NULL);
zassert_ok(pthread_attr_setstack(&attr, &stack_e[0][0], STACKS), NULL);
/* If we are leaking descriptors, then this loop will never complete */
for (size_t i = 0; i < CONFIG_MAX_PTHREAD_COUNT * 2; ++i) {
zassert_ok(pthread_create(&pthread1, &attr, create_thread1, NULL),
"unable to create thread %zu", i);
zassert_ok(pthread_join(pthread1, &unused), "unable to join thread %zu", i);
}
}
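The comment spells out the test's reasoning: create and join a thread 2 * CONFIG_MAX_PTHREAD_COUNT times, so if pthread_join() ever fails to return a descriptor to the pool, pthread_create() runs out of slots before the loop finishes. A toy Python model of that exhaustion argument (the pool size of 4 is an arbitrary placeholder, not the real Kconfig value):

    MAX_PTHREAD_COUNT = 4  # placeholder for CONFIG_MAX_PTHREAD_COUNT

    class DescriptorPool:
        def __init__(self, size, leaky=False):
            self.free = size
            self.leaky = leaky
        def create(self):
            if self.free == 0:
                raise RuntimeError("no free pthread descriptors")
            self.free -= 1
        def join(self):
            if not self.leaky:
                self.free += 1  # a correct join returns the slot to the pool

    def run(leaky):
        pool = DescriptorPool(MAX_PTHREAD_COUNT, leaky)
        for _ in range(MAX_PTHREAD_COUNT * 2):
            pool.create()
            pool.join()
        return "completed"

    print(run(leaky=False))      # completed
    try:
        run(leaky=True)
    except RuntimeError as err:
        print(err)               # no free pthread descriptors: the leak the test guards against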

View File

@@ -57,6 +57,24 @@ static int test_init(const struct device *dev)
SYS_INIT(test_init, APPLICATION, CONFIG_APPLICATION_INIT_PRIORITY);
/* Check that global static object constructors are called. */
foo_class static_foo(12345678);
static void test_global_static_ctor(void)
{
zassert_equal(static_foo.get_foo(), 12345678, NULL);
}
/*
* Check that dynamic memory allocation (usually, the C library heap) is
* functional when the global static object constructors are called.
*/
foo_class *static_init_dynamic_foo = new foo_class(87654321);
static void test_global_static_ctor_dynmem(void)
{
zassert_equal(static_init_dynamic_foo->get_foo(), 87654321, NULL);
}
static void test_new_delete(void)
{
@@ -68,6 +86,8 @@ static void test_new_delete(void)
void test_main(void)
{
ztest_test_suite(cpp_tests,
ztest_unit_test(test_global_static_ctor),
ztest_unit_test(test_global_static_ctor_dynmem),
ztest_unit_test(test_new_delete)
);

View File

@@ -1,5 +1,22 @@
-tests:
-  cpp.main:
-    tags: cpp
-    integration_platforms:
-      - mps2_an385
+common:
+  tags: cpp
+  integration_platforms:
+    - mps2_an385
+    - qemu_cortex_a53
+tests:
+  cpp.main.minimal:
+    extra_configs:
+      - CONFIG_MINIMAL_LIBC=y
+  cpp.main.newlib:
+    filter: TOOLCHAIN_HAS_NEWLIB == 1
+    min_ram: 32
+    extra_configs:
+      - CONFIG_NEWLIB_LIBC=y
+      - CONFIG_NEWLIB_LIBC_NANO=n
+  cpp.main.newlib_nano:
+    filter: TOOLCHAIN_HAS_NEWLIB == 1 and CONFIG_HAS_NEWLIB_LIBC_NANO
+    min_ram: 24
+    extra_configs:
+      - CONFIG_NEWLIB_LIBC=y
+      - CONFIG_NEWLIB_LIBC_NANO=y