Compare commits

...

129 Commits

Author SHA1 Message Date
Thomas Stranger
091f5956b7 github: workflows: disable workflows
Disable workflows because the branch is no longer maintained.

Signed-off-by: Thomas Stranger <thomas.stranger@outlook.com>
2025-11-24 13:09:08 -05:00
Henrik Brix Andersen
c49ce6725d ci: build samples/cpp/hello_world as part of the multiplatform test
Build the C++ version of the Hello, World sample as part of the
multiplatform (build) test in CI.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit cd543887f4)
2024-11-06 06:44:42 +01:00
Henrik Brix Andersen
1040cd6023 SDK_VERSION: Use Zephyr SDK 0.16.9
This commit updates ZEPHYR_SDK to point to the Zephyr SDK 0.16.9 release.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
2024-11-06 06:44:42 +01:00
Dominik Ermel
03bc6e0d70 storage/stream_flash: Fix range check in stream_flash_erase_page
Added a check so that stream_flash_erase_page verifies that the requested
offset is actually within the stream flash designated area.

Fixes #79800

Signed-off-by: Dominik Ermel <dominik.ermel@nordicsemi.no>
(cherry picked from commit 8714c172ed)
2024-11-06 06:44:30 +01:00
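
A minimal sketch of the kind of range check described above, with
hypothetical context fields ('offset' and 'size' marking the designated
area); the actual struct stream_flash_ctx layout may differ:

    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical context: 'offset' is the start of the designated
     * stream flash area and 'size' is its length. */
    struct sf_area {
        size_t offset;
        size_t size;
    };

    static int erase_page_checked(const struct sf_area *area, size_t off)
    {
        /* Reject erase requests that fall outside the designated area. */
        if (off < area->offset || off >= area->offset + area->size) {
            return -ERANGE;
        }

        /* ... perform the actual page erase here ... */
        return 0;
    }
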
Torsten Rasmussen
6aeb7a2b96 cmake: limit Zephyr SDK to 0.16.x series for Zephyr 3.6 LTS
Copy of 6da35a811a

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-10-29 22:11:21 +01:00
Torsten Rasmussen
6e7db36176 cmake: support range for find_package(Zephyr-sdk)
Fixes: #80200

CMake `find_package(<package> <version>)` supports the use of version ranges,
like `1.0.0...4.0.0`.

Update the FindZephyr-sdk.cmake module to support this.
This allows looking up the Zephyr SDK with an upper boundary, for example
`find_package(Zephyr-sdk 0.16...<0.17)`.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2024-10-29 22:11:21 +01:00
Abram Early
ee4c40c507 modbus: reset wait semaphore before tx
A response that arrives after its request has timed out would increment the
semaphore and leave it signalled until the next request is made. That next
request's k_sem_take would then return immediately, even before its response
arrives, and the same problem would repeat when the actual response shows up.
The wait semaphore therefore just needs to be reset before transmitting.

Signed-off-by: Abram Early <abram.early@gmail.com>
(cherry picked from commit 583f4956dc)
2024-10-22 20:41:04 +02:00
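
A minimal sketch of the pattern described above using the Zephyr semaphore
API; the context struct and field names are illustrative, not the actual
modbus driver structures:

    #include <zephyr/kernel.h>

    struct client_ctx {
        struct k_sem wait_sem;   /* signalled when a response arrives,
                                  * initialized elsewhere with k_sem_init() */
    };

    static int send_request(struct client_ctx *ctx, k_timeout_t timeout)
    {
        /* Drop any count left over from a late response to a previous,
         * already timed-out request, so the next k_sem_take() really
         * waits for the new response. */
        k_sem_reset(&ctx->wait_sem);

        /* ... transmit the request frame here ... */

        return k_sem_take(&ctx->wait_sem, timeout);
    }
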
Ibe Van de Veire
7f478311e6 test: net: igmp: Add extra IGMPv3 testcase
Added extra testcases for the IGMPv3 protocol. The IGMP driver is
supposed to send an IGMPv3 report when joining a group.

Signed-off-by: Ibe Van de Veire <ibe.vandeveire@basalte.be>
(cherry picked from commit e6dd4cda89)
2024-10-22 09:30:41 +02:00
Ibe Van de Veire
d2ba008b0a net: ip: igmp: Add igmp.h for definitions
Add an igmp.h file to declare definitions for IGMP that are not meant to be
included by the application but can be used in e.g. tests.

Signed-off-by: Ibe Van de Veire <ibe.vandeveire@basalte.be>
(cherry picked from commit ba9eca3181)
2024-10-22 09:30:41 +02:00
Ibe Van de Veire
8e2419fce4 net: ip: igmp: Remove too strict length check
According to RFC 2236 section 2.5, the IGMP message may be longer than 8
bytes. The rest of the bytes should be ignored.

Signed-off-by: Ibe Van de Veire <ibe.vandeveire@basalte.be>
(cherry picked from commit c646dd37e5)
2024-10-22 09:30:41 +02:00
Ibe Van de Veire
73372783ce net: ip: igmp: Fix wrong header length
The header length of the net IP packet was calculated using only the
net_pkt_ip_hdr_len function. The correct header length should be
calculated by adding net_pkt_ip_hdr_len and net_pkt_ipv4_opts_len. This
resulted in an incorrect IGMP version type in the case of an IGMPv2
message (when IGMPv3 was enabled). The IGMP message was not parsed
correctly and was therefore dropped.

Signed-off-by: Ibe Van de Veire <ibe.vandeveire@basalte.be>
(cherry picked from commit f852c12360)
2024-10-22 09:30:41 +02:00
Daekeun Kang
6937dabf26 net: fix handle unaligned memory access in net_context_bind()
This commit addresses an issue in net_context_bind() where unaligned
memory access was not properly handled when checking for INADDR_ANY.
The problem primarily affected MCUs like ARMv6 that don't support
unaligned memory access.

- Use UNALIGNED_GET() to safely access the sin_addr.s_addr field
- Ensures correct behavior on architectures with alignment restrictions

This fix improves compatibility and prevents potential crashes or
unexpected behavior on affected platforms.

Signed-off-by: Daekeun Kang <dkkang@huconn.com>
(cherry picked from commit b24c5201a0)
2024-10-02 10:06:12 +02:00
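
For illustration, a hedged sketch of reading a possibly unaligned 32-bit
address field with Zephyr's UNALIGNED_GET() instead of dereferencing it
directly; the helper function and its surroundings are illustrative:

    #include <stdbool.h>
    #include <zephyr/toolchain.h>
    #include <zephyr/net/net_ip.h>

    static bool is_any_addr(const struct sockaddr_in *addr4)
    {
        /* sin_addr.s_addr may not be 4-byte aligned when the sockaddr was
         * byte-copied into a packed buffer; UNALIGNED_GET() performs a
         * safe load on architectures without unaligned access support. */
        return UNALIGNED_GET(&addr4->sin_addr.s_addr) == INADDR_ANY;
    }
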
Jamie McCrae
25d2970266 doc: services: device_mgmt: mcumgr: Correct license for tool
Corrects an incorrect license for a tool

Fixes #78927

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2024-09-30 17:08:57 -05:00
Lyle Zhu
097ed18842 Bluetooth: SDP: Fix stack override issue
Check the remaining space of the local variable
`filter` to avoid a stack override issue.

Signed-off-by: Lyle Zhu <lyle.zhu@nxp.com>
(cherry picked from commit f3a1cf2782)
2024-09-26 11:31:34 +02:00
Lyle Zhu
f5f9872138 Bluetooth: RFCOMM: check the validity of received frame
Check whether the received frame is complete by
comparing the length of the received data with
the frame data.

Signed-off-by: Lyle Zhu <lyle.zhu@nxp.com>
(cherry picked from commit 37d62c6a16)
2024-09-26 11:16:26 +02:00
Emil Gydesen
2c2540ee46 zephyr: Add zero-len check for utf8_trunc
The function did not check whether the provided string had a zero
length before starting to truncate, which meant that last_byte_p
could possibly have pointed to the value before the string.

Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
2024-09-18 12:51:33 +02:00
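
A hedged sketch of the zero-length guard described above, written as a
standalone helper rather than the actual utf8_trunc() implementation:

    #include <string.h>

    /* Truncate a UTF-8 string in place to at most 'max_len' bytes without
     * splitting a multi-byte sequence. The explicit zero-length check
     * mirrors the fix: without it, a "last byte" pointer could end up
     * pointing before the start of the buffer. */
    static char *utf8_truncate_sketch(char *str, size_t max_len)
    {
        size_t len = strlen(str);

        if (len == 0 || len <= max_len) {
            return str;
        }

        char *last = str + max_len;

        /* Step back over UTF-8 continuation bytes (10xxxxxx). */
        while (last > str && ((unsigned char)*last & 0xC0) == 0x80) {
            last--;
        }
        *last = '\0';

        return str;
    }
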
Eunkyu Lee
a0c8a437c1 Bluetooth: Classic: Add length check in bluetooth classic
Added length checks for user input in `sdp_client_receive` and
`l2cap_br_info_rsp`.

Signed-off-by: Eunkyu Lee <mochaccino.00.00@gmail.com>
(cherry picked from commit 88881257ab)
2024-09-06 10:05:42 -05:00
Eunkyu Lee
ecfc6e1e89 Bluetooth: Host: Add missing buffer length check
Modified to check the length of the remaining data in the buffer
before processing the next report. The length check was missing
in the cont routine.

Signed-off-by: Eunkyu Lee <mochaccino.00.00@gmail.com>
(cherry picked from commit e491f220d8)
2024-09-06 10:05:19 -05:00
Emil Gydesen
6567d6e370 Bluetooth: ASCS: Validate num_ases in CP requests
Add validation of the number of ASEs in control point
write requests.

This validates that the number of ASEs
in the control point is not greater than the total number
of ASEs we support.

This also validates that the GATT MTU is large enough to
hold all the responses from the write since those can only be
sent as notifications and never be truncated.

Finally this validates and updates the size of the buffer used to
hold the responses, which may be an optimization for some builds.

Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
(cherry picked from commit 7b0784c1f6)
2024-09-06 10:04:50 -05:00
Emil Gydesen
9ab5868c91 Bluetooth: OTS: Add len validation in olcp_ind_handler
Verify the length of the indication before we pull from the
buffer.

Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
(cherry picked from commit 044f8aaeb3)
2024-09-06 10:04:31 -05:00
Emil Gydesen
5afc636af9 Bluetooth: BAP: Add check for num_subgroups in parse_recv_state
In the parse_recv_state we did not verify that we can handle all
the subgroups before we started parsing them.

Signed-off-by: Emil Gydesen <emil.gydesen@nordicsemi.no>
(cherry picked from commit edbe34eaf2)
2024-09-06 10:04:09 -05:00
Anas Nashif
6874cf56ab ci: rerun issue check on PR edit
Re-run the issue check when a PR is updated, i.e. when someone adds
`Fixes ...` to the PR body.

This is mostly for release branches and has no effect on main branch.

Also, add concurrency check in the workflow.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
(cherry picked from commit 4077249cc7)
2024-08-29 22:09:03 +02:00
Benjamin Cabé
cc23222d25 doc: fix issue with keydown/keyup being ignored
Ensure the initial search menu visibility is properly set so that key
events don't get trapped, preventing arrow key navigation on the
document.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit b3776ca97c)
2024-08-26 17:01:45 -05:00
Aleksander Wasaznik
d5a127ad2f Bluetooth: host: sched-lock bt_recv()
`bt_recv` is invoked from the BT long work queue, which is preemptible.
The host uses cooperative scheduling to ensure thread safety.

Signed-off-by: Aleksander Wasaznik <aleksander.wasaznik@nordicsemi.no>
2024-08-14 13:27:50 -05:00
Georges Oates_Larsen
c684dabf6a net: net_if: fix net_if_send_data for offloaded ifaces
Some offloaded ifaces have an L2, but lack support for
net_l2->send. This edge case is not handled by
net_if_send_data, resulting in a NULL dereference under
rare circumstances.

This patch expands the offloaded iface guard in
net_if_send_data to handle this edge case.

Signed-off-by: Georges Oates_Larsen <georges.larsen@nordicsemi.no>
(cherry picked from commit 1c79445059)
2024-08-13 22:03:24 +02:00
Francois Ramu
7c5e43aea6 doc: release 3.6: no CACHE_MANAGEMENT by default on stm32h7/f7
Mention that the stm32h7 and stm32f7 series do not have
CONFIG_CACHE_MANAGEMENT enabled by default. The application must
explicitly set CONFIG_CACHE_MANAGEMENT=y to activate
the caches (ICache, DCache) on those stm32 series.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
(cherry picked from commit 1c59fa5231)
2024-08-05 22:12:18 +02:00
Flavio Ceolin
850601b055 tests: dynamic_thread_stack: Check thread stack permission
Check whether a user thread is capable of freeing another thread's stack.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit 95dde52b1e)
2024-08-01 21:36:06 +02:00
Flavio Ceolin
28107c9b54 userspace: dynamic: Fix k_thread_stack_free verification
The k_thread_stack_free syscall was not checking whether the caller
had permission to the given stack object.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
(cherry picked from commit c12f0507b6)
2024-08-01 21:36:06 +02:00
Jukka Rissanen
c9473ca717 net: context: Check null pointer in V6ONLY getsockopt
Make sure we are not accessing NULL pointer when checking
if the IPv4 mapping to IPv6 is enabled.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit cf552905f4)
2024-08-01 10:12:54 +02:00
Vinayak Kariappa Chettimada
2cddfcef89 Bluetooth: Controller: Add explicit LLCP error code check
Add unit tests to cover explicit LLCP error code check and
cover the same in the Controller implementation.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit d6f2bc9669)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Vinayak Kariappa Chettimada
90225887db Bluetooth: Controller: Use BT_HCI_ERR_UNSPECIFIED as needed
A Host shall consider any error code that it does not
explicitly understand equivalent to the error code
Unspecified Error (0x1F).

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 78466c8f52)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Vinayak Kariappa Chettimada
e31a6ef555 Bluetooth: Controller: Refactor BT_CTLR_LE_ENC implementation
Refactor reused function in BT_CTLR_LE_ENC feature.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit fe205a598e)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Vinayak Kariappa Chettimada
9d03b8f1d3 Bluetooth: Controller: Fix missing conn update ind PDU validation
Fix missing validation of Connection Update Ind PDU. Ignore
invalid connection update parameters and force a silent
local connection termination.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
(cherry picked from commit 4b6d3f1e16)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Erik Brockhoff
ae70d76bee Bluetooth: controller: minor cleanup and a fix-up re. LLCP
Only perform retention if not already done.
Ensure 'sched' is performed on phy ntf even if dle is not.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 9d8059b6e5)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Erik Brockhoff
d1a10bc004 Bluetooth: controller: fix node_rx retention mechanism
Ensure that in LLCP the reference to node_rx is cleared when
retention is NOT used, to avoid corruption of the node_rx when it
is later re-allocated.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit 806a4fcf92)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Erik Brockhoff
a3ca8d09f2 Bluetooth: controller: fixing rx node leak on CPR reject of parallel CPR
In case a CPR is initiated but rejected because a CPR is active on
another connection, rx nodes are leaked because the retained node is
not properly released.

Signed-off-by: Erik Brockhoff <erbr@oticon.com>
(cherry picked from commit edef1b7cf4)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
Morten Priess
e8cc179b6e Bluetooth: controller: Prevent invalid compiler code reordering
In ull_disable, it is imperative that the callback is set up before the
second reference counter check, otherwise it may happen that an LLL done
event has already passed by the time the disable callback and semaphore
are assigned.

This causes the HCI thread to wait until timeout and assert after
ull_ticker_stop_with_mark.

For certain compilers, due to compiler optimizations, it can be seen
from the assembler code that the callback is assigned after the second
reference counter check.

Adding memory barriers ensures the code is ordered in the
expected sequence.

Signed-off-by: Morten Priess <mtpr@oticon.com>
(cherry picked from commit 7f82b6a219)
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2024-07-21 00:34:05 +02:00
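
A hedged illustration of the general pattern (not the actual ull_disable()
code): a memory barrier between assigning the callback and re-checking the
reference counter keeps the compiler from hoisting the check above the
assignment. The context struct is illustrative, and barrier_dmem_fence_full()
is one of several barrier APIs that could be used here.

    #include <zephyr/kernel.h>
    #include <zephyr/sys/barrier.h>

    struct op_ctx {
        void (*done_cb)(void *param);   /* illustrative fields only */
        void *param;
        volatile int refcount;
    };

    static void disable_sketch(struct op_ctx *ctx,
                               void (*cb)(void *), void *param)
    {
        ctx->done_cb = cb;
        ctx->param = param;

        /* Full memory barrier: the callback must be visible before the
         * second refcount check, otherwise a done event racing with this
         * code could be missed and the caller would wait forever. */
        barrier_dmem_fence_full();

        if (ctx->refcount == 0) {
            /* Operation already completed; invoke the callback directly. */
            cb(param);
        }
    }
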
Jordan Yates
a025a65d1f net: lib: dhcpv4: goto INIT on IF down, not RENEWING
When the interface goes down, the safest thing to do is to return to
the INIT state, as there is no guarantee that any state is preserved
upon the interface coming back up again.

This is particularly the case with WiFi.

Signed-off-by: Jordan Yates <jordan@embeint.com>
(cherry picked from commit 0f56974c9d)
2024-07-15 10:04:46 -05:00
Robert Lubos
ace6e447cf net: mqtt: Fix possible socket leak with websocket transport
In case the underlying TCP/TLS connection is already down, the
websocket_disconnect() call is expected to fail, as it involves
communication. Therefore, mqtt_client_websocket_disconnect() should not
quit early in such cases, as that could lead to an underlying socket leak.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 4625fa713f)
2024-07-09 18:58:50 +02:00
Jared Kangas
4769dbb650 drivers: adc: lmp90xxx: fix checksum mismatch return value
During channel reads, zero is returned on CRC mismatches: the returned
error variable is not written to after a previous non-zero check. Return
-EIO to mirror other drivers' checksum validation behaviors.

Signed-off-by: Jared Kangas <kangas.jd@gmail.com>
(cherry picked from commit 8ec3c045f8)
2024-07-05 08:32:16 +02:00
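
A minimal hedged sketch of the convention referenced above: surface a
checksum mismatch as -EIO instead of leaving a previously cleared error
code untouched (function and parameter names are illustrative):

    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    static int read_channel_checked(const uint8_t *buf, size_t len,
                                    uint8_t received_crc,
                                    uint8_t (*calc_crc)(const uint8_t *, size_t))
    {
        if (calc_crc(buf, len) != received_crc) {
            /* Report the corrupted transfer to the caller instead of
             * returning 0 as if the sample were valid. */
            return -EIO;
        }

        return 0;
    }
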
Jamie McCrae
b27d3a61b7 cmake: modules: extensions: Fix dts watch file processing
Fixes and simplifies the handling of how the dts watch file is
processed

Signed-off-by: Grzegorz Swiderski <grzegorz.swiderski@nordicsemi.no>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit 11c1f3de61)
2024-07-01 13:43:12 +02:00
Jamie McCrae
388717ecf1 cmake: modules: extension: Fix dts file watch
Fixes an issue in the code that processes the compiler's output file to
determine which files should be watched: the compiler can combine
multiple files into a single line instead of putting them each on a
separate line if the file paths are short. Account for this and split
such lines into multiple elements.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit f4cfb8cd96)
2024-07-01 13:43:12 +02:00
Sven Ginka
6866bbbf05 soc: atmel: sam: Add invalidate d-cache at z_arm_platform_init
Before this fix, the SoC was unable to boot properly;
startup jumped directly into z_arm_usage_fault().
Fixes zephyrproject-rtos#73485

Signed-off-by: Sven Ginka <sven.ginka@gmail.com>
(cherry picked from commit c3d7b1c978)
2024-06-25 22:32:07 +02:00
Johan Hedberg
fee892052a Bluetooth: Host: Avoid processing "no change" encryption changes
If the new encryption state is the same as the old one, there's no point in
doing additional processing or callbacks. Simply log a warning and ignore
the HCI event in such a case.

Signed-off-by: Johan Hedberg <johan.hedberg@silabs.com>
(cherry picked from commit bf363d7c3e)
2024-06-25 17:43:37 +02:00
Sean Nyekjaer
8cd7337d69 dts: arm: st: mp1: fix exti interrupt numbering
Align interrupt numbering with RM0436 for STM32MP157.
This allows EXTI interrupts for lines 6, 7, 8, 9, 10 and 11.

Fixes: ff231fa20a ("dts: stm32: Populate new properties for exti nodes")
Signed-off-by: Sean Nyekjaer <sean@geanix.com>
(cherry picked from commit ede866440d)
2024-06-05 16:06:37 +02:00
Abderrahmane Jarmouni
999c9a3b36 drivers: counter: stm32_rtc: fix clk disable for WBAX
clock_control_on() was called instead of clock_control_off().

Signed-off-by: Abderrahmane Jarmouni <abderrahmane.jarmouni-ext@st.com>
(cherry picked from commit c09f1ec91a)
2024-06-05 16:06:21 +02:00
Henrik Brix Andersen
1c606f5528 drivers: can: shell: print raw DLC when sending frame, not bytes
Print the raw DLC when enqueuing a CAN frame for sending, not the
corresponding number of bytes.

Fixes: #73309

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 6a070ee165)
2024-05-29 14:53:17 +02:00
Henrik Brix Andersen
a5d72fec6e drivers: can: shell: fully initialize frame before sending
Zero-initialize the CAN frame before filling in data to ensure all data
bytes are initialized.

Fixes: #73309

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit fb4f67b775)
2024-05-29 14:53:17 +02:00
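
For illustration, a hedged sketch of fully initializing a Zephyr
struct can_frame before filling in selected fields, so no data bytes are
left uninitialized (the helper itself is illustrative):

    #include <string.h>
    #include <zephyr/drivers/can.h>

    static void build_frame(struct can_frame *frame, uint32_t id,
                            const uint8_t *data, uint8_t dlc)
    {
        /* Zero the whole frame first; any data bytes beyond the payload
         * and any unset flags are then well defined. */
        memset(frame, 0, sizeof(*frame));

        frame->id = id;
        frame->dlc = dlc;
        memcpy(frame->data, data, can_dlc_to_bytes(dlc));
    }
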
Gerson Fernando Budke
899e9e5320 mgmt: updatehub: Fix mark for update
This fixes compatibility with recent bootutils API.

Fixes #69297

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
(cherry picked from commit 94cd46d6ef)
2024-05-29 14:53:08 +02:00
Gerson Fernando Budke
73d6a10f54 mgmt: updatehub: Fix json arrays
After the changes introduced by #50816, UpdateHub could no longer decode
the JSON object. This introduces the missing parsing definitions to allow
the JSON parser to understand the correct UpdateHub probe object.

Fixes #69297

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
(cherry picked from commit 5fb62ca960)
2024-05-29 14:53:08 +02:00
Henrik Brix Andersen
c5156632a5 drivers: sensor: nxp: kinetis: temp: fix memset() length
Use the correct buffer size when calling memset().

Fixes: #73093

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 59402fd82e)
2024-05-23 12:43:06 +02:00
Henrik Brix Andersen
4fe22b0f4e drivers: sensors: nxp: kinetis: temp: select CONFIG_ADC
The NXP Kinetis temperature sensor depends on CONFIG_ADC. Make the driver
Kconfig select CONFIG_ADC to get better CI coverage (enabling the driver
when CONFIG_SENSOR is enabled without depending on CONFIG_ADC=y).

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 0a49b788a4)
2024-05-23 12:43:06 +02:00
Aleksander Wasaznik
f743d124b8 Bluetooth: Check buffer length in GATT rsp functions
Add length checks local to the parsing function. This removes the need
for a separate data validation step.

Signed-off-by: Aleksander Wasaznik <aleksander.wasaznik@nordicsemi.no>
(cherry picked from commit 3eeb8f8d18)
2024-05-05 21:25:26 +02:00
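
A hedged sketch of the kind of local length check described above, using
the Zephyr net_buf API; the field being parsed is illustrative:

    #include <errno.h>
    #include <zephyr/net/buf.h>

    static int parse_rsp_sketch(struct net_buf_simple *buf)
    {
        /* Verify that enough bytes remain before pulling each field,
         * instead of relying on a separate validation pass. */
        if (buf->len < sizeof(uint16_t)) {
            return -EINVAL;
        }

        uint16_t handle = net_buf_simple_pull_le16(buf);

        (void)handle;   /* parsing of the remaining fields omitted */

        return 0;
    }
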
Alberto Escolar Piedras
a9fbf481f3 tests bsim edtt: Kill stuck processes in the same way as other tests
This test keeps its own partial way of running tests.
Let's have it kill stuck processes in the same way as
the rest (sending another kill 5 seconds later, and printing
a message about what happened).

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit 693ae8635a)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
29976389f4 tests/bsim/bt l2cap/stress: Increase runtime timeout
This test has been seen failing in the new runners
due to a (realtime) timeout.
Let's double the timeout so it does not.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit f4972347d4)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
e946ab2cc0 tests/bsim broadcast_audio: Increase realtime timeout
This test has been seen failing on older, slower computers
due to timeouts; let's increase the timeout so we don't
break in those cases.
Note this timeout is just a safety net to eventually kill
hung simulations even if nobody presses Ctrl+C.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit 0888882262)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
89e601ec2d tests/bsim audio mcs_mcc: Increase realtime timeout
This test has been seen failing on older, slower computers
due to timeouts; let's increase the timeout so we don't
break in those cases.
Note this timeout is just a safety net to eventually kill
hung simulations even if nobody presses Ctrl+C.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit e9c8856165)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
09b84c709a tests/bsim mesh: Increase realtime timeout
One of these tests has been seen failing on older, slower
computers due to timeouts; let's increase the timeout so
we don't break in those cases.
Note this timeout is just a safety net to eventually kill
hung simulations even if nobody presses Ctrl+C.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit d592455cc0)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
313a062cbe tests bsim/bt/ll: Split build script in 2
The CIS tests build quite a few apps on their own.
Let's split them into their own sub-script,
so we don't parallelize too many builds,
and users have more granularity if they only
want to build a subset.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit 940d53e839)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Alberto Escolar Piedras
00b117f430 tests/bsim/bt/host: Split build script in 6
There are quite a few BT host test images being built.
Today these are all built in parallel, causing quite a
high load.

Let's split them into separate sub-scripts,
so we don't parallelize too many builds,
and users have more granularity if they only
want to build a subset.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit 65d49cd434)
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-05 21:25:26 +02:00
Takuya Sasaki
4c45c3e1b2 net: context: Fix the ICMP error on raw
This commit applies the fix for the issues detected in UDP to recv_raw()
as well. Please refer to the previous commit log for details.

Signed-off-by: Takuya Sasaki <takuya.sasaki@spacecubics.com>
(cherry picked from commit 7d1edd1fcb)
2024-05-01 10:14:51 -05:00
Takuya Sasaki
4c54123941 net: context: Fix the ICMP error on udp
When receiving a UDP packet, net_conn_input() searches for a
matching connection within `conn_used`.

However, when receiving UDP packets simultaneously from multiple
clients, we may encounter a situation where the connection that was
supposed to be bound cannot be found within `conn_used`, and an ICMP
error is raised.

This is because, within recv_udp(), to avoid the failure of
bind_default(), we temporarily remove it from `conn_used` using
net_conn_unregister().

If the context already has a connection handler, it means it's
already registered. In that case, all we have to do is 1) update
the callback registered in the net_context and 2) update the
user_data and remote address and port using net_conn_update().

Fixes #70020

Signed-off-by: Takuya Sasaki <takuya.sasaki@spacecubics.com>
(cherry picked from commit 4f802e1197)
2024-05-01 10:14:51 -05:00
Takuya Sasaki
bc1af7a997 net: conn: Add internal function for update connection
This commit adds a new internal function for updating the callback,
user data, remote address, and port of a registered connection
handle.

Signed-off-by: Takuya Sasaki <takuya.sasaki@spacecubics.com>
(cherry picked from commit 46ca624be4)
2024-05-01 10:14:51 -05:00
Takuya Sasaki
17c98d3316 net: conn: Add static function for changing remote
This commit adds a new static function for changing the remote
address and port of a connection, and replaces the corresponding
handling of the remote address and port in net_conn_register().

Signed-off-by: Takuya Sasaki <takuya.sasaki@spacecubics.com>
(cherry picked from commit ef18518e91)
2024-05-01 10:14:51 -05:00
Takuya Sasaki
efdad10cc6 net: conn: Move net_conn_change_callback() to static
net_conn_change_callback() is not currently being called by
anyone, so this commit makes it a static function and replaces
the callback-change handling in net_conn_register().

Signed-off-by: Takuya Sasaki <takuya.sasaki@spacecubics.com>
(cherry picked from commit 49c6da51ce)
2024-05-01 10:14:51 -05:00
Torsten Rasmussen
ed1f5b7a20 cmake: fix issue with parsing version file located in /VERSION
Fixes: #71384

A VERSION file placed in `/` or `<drive>:\` was accidentally being
picked up during `find_package(Zephyr)`.

This happened because Zephyr loads the VERSION file to determine if it
is the correct Zephyr to use.

During the initial phase of find_package(), APPLICATION_SOURCE_DIR is
not defined, causing a version file to be picked up from `/VERSION`
instead of `<app>/VERSION`. `/VERSION` is outside any Zephyr repo, west
workspace, or the application itself and therefore should not be picked
up accidentally.

Fix this by checking that APPLICATION_SOURCE_DIR is defined, and only
when defined, look for a VERSION file there.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
(cherry picked from commit 4959a0241e)
2024-05-01 10:12:08 -05:00
Grzegorz Swiderski
df2c9863dc cmake: configuration_files: Add missing FILE_SUFFIX handling
for Kconfig fragments in the `boards` application subdirectory.

Signed-off-by: Grzegorz Swiderski <grzegorz.swiderski@nordicsemi.no>
(cherry picked from commit 3cff550180)
2024-04-26 14:10:03 -04:00
Jamie McCrae
d731d424fb cmake: modules: extensions: Rename prefixes in functions
Renames prefixes in some functions to avoid clashes with global
variables that have the same name

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2024-04-26 14:10:03 -04:00
Jamie McCrae
d5b083f98a cmake: modules: extensions: Fix missing board revision overlays
Fixes an issue whereby board overlays for board revisions were not
included when configuring applications

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2024-04-26 14:10:03 -04:00
Stephanos Ioannidis
597ab8302d tests: kernel: thread_runtime_stats: Relax precision test for QEMU
This commit relaxes the idle event statistics test precision requirement
for emulated QEMU targets because the cycle counts may be inaccurate when
the host CPU is overloaded (e.g. when running tests with twister) and a
high failure rate is observed for this test in the CI.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 2b2dd01c38)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
18d4edcffb testsuite: ztest: Increase ZTEST_TEST_DELAY_MS to 5000
This commit increases the default value of `ZTEST_TEST_DELAY_MS` from 3000
to 5000 milliseconds because the current value of 3000 ms may not be
sufficient for hosts with lower CPU clock frequencies (e.g. the new Zephyr
CI runners with cost-effective processors).

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1bf751074d)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
05da0d6823 ci: bsim-tests: Use zephyr-runner v2
This commit updates the bsim-tests workflow to use the new zephyr-runner v2
CI runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9f9a6c547b)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
49fbbf0494 ci: clang: Prioritise remote Redis cache storage
This commit updates the clang workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 95e7eb31e6)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
e1f1922a33 ci: clang: Use Redis remote storage for ccache
This commit updates the clang workflow to, when available, use Redis remote
storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 4a2884c652)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
914ac45778 ci: clang: Store ccache data in node cache
This commit updates the clang workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit cd83f0724b)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
1f9fde3dc2 ci: clang: Use zephyr-runner v2
This commit updates the clang workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 64ca699fc8)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
e25ff072ee ci: codecov: Set twister timeout multiplier to 2
This commit sets the codecov workflow twister timeout multiplier to 2,
which effectively increases the default test timeout from 60 to 120
seconds, because the new cost-effective Zephyr runners may take longer to
execute tests and the default timeout is not sufficient for some tests to
complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 550bb4e4a6)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
4f8ccc3122 ci: codecov: Prioritise remote Redis cache storage
This commit updates the codecov workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit a636c52b6a)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
1cef4cba1a ci: codecov: Add --specs to ccache ignore option list
This commit adds the compiler `--specs=*` flag to the ccache ignore option
list because ccache is unable to resolve the toolchain-provided specs file
path and will consider source files to be uncacheable if it is unable to
read the specified specs file.

Note that adding `--specs=*` to the ignore option list is not a problem
because it is unlikely for the content of the toolchain libc spec file to
change without the compiler executable itself changing.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit ab9f6b456b)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
ce208dbe6d ci: codecov: Use Redis remote storage for ccache
This commit updates the codecov workflow to, when available, use Redis
remote storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit b57f1b5a15)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
ee7617ccad ci: codecov: Store ccache data in node cache
This commit updates the codecov workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 36b0b101d4)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
27f2c02f8d ci: codecov: Use zephyr-runner v2
This commit updates the codecov workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 354e290a23)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
f5eaee5d13 ci: codecov: Run on all zephyrproject-rtos organisation repositories
This commit updates the codecov workflow to run on all forks under the
zephyrproject-rtos organisation.

The purpose of this is mainly to simplify the process of testing of this
workflow under the zephyr-testing repository.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit c1bd5a613f)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
902fb9b4b0 ci: footprint-tracking: Use zephyr-runner v2
This commit updates the footprint-tracking workflow to use the new
zephyr-runner v2 CI runner deployment.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9838633c0e)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
1a45a764a6 ci: doc-build: Use zephyr-runner v2
This commit updates the doc-build workflow to use the new zephyr-runner v2
CI runner deployment.

It also installs additional system packages that are not available by
default in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 2819c3526a)
2024-04-24 20:11:32 +09:00
Anas Nashif
7b47e9cd7f ci: twister: increase matrix size for push jobs
Increase matrix size to 20 from 15 on push events.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
(cherry picked from commit 9970724652)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
8aa0204a74 ci: twister: Set build job timeout to 24 hours
This commit increases the twister build job timeout from the default value
of 6 hours to 24 hours because scheduled (weekly) build runs take longer
than 6 hours to complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 7df7e834d9)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
d9fdd258b1 ci: twister: Set twister timeout multiplier to 2
This commit sets the twister timeout multiplier to 2, which effectively
increases the default test timeout from 60 to 120 seconds, because the new
cost-effective Zephyr runners may take longer to execute tests and the
default timeout is not sufficient for some tests to complete.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit de68ea7ce0)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
47ed96c9f3 ci: twister: Prioritise remote Redis cache storage
This commit updates the twister workflow such that ccache only uses remote
Redis cache storage when available.

The purpose of this is to reduce the individual runner's local disk IOPS
requirement, thereby reducing the overall load on the SAN.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 527435d642)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
1660ff0bb5 ci: twister: Add --specs to ccache ignore option list
This commit adds the compiler `--specs=*` flag to the ccache ignore option
list because ccache is unable to resolve the toolchain-provided specs file
path and will consider source files to be uncacheable if it is unable to
read the specified specs file.

Note that adding `--specs=*` to the ignore option list is not a problem
because it is unlikely for the content of the toolchain libc spec file to
change without the compiler executable itself changing.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit d3f9f391ad)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
6eb54b420d ci: twister: Use Redis remote storage for ccache
This commit updates the twister workflow to, when available, use Redis
remote storage backend for the ccache compilation cache data.

The Redis cache server is hosted in the Kubernetes cluster in which the
zephyr-runner pods run -- the Redis remote storage backend will be ignored
if the server is unavailable.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 3823f1f0db)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
d8f9eb7c93 ci: twister: Store ccache data in node cache
This commit updates the twister workflow to store ccache data in the
zephyr-runner v2 node cache.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 1be3aacad3)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
5a75516c9f ci: twister: Print cloud service information
This commit updates the twister workflow jobs that run on the zephyr-runner
v2 to print the underlying cloud service information in the logs to help
trace and debug potential runner issues.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 9a9bebb9f9)
2024-04-24 20:11:32 +09:00
Stephanos Ioannidis
8d6c22bb5c ci: twister: Use zephyr-runner v2
This commit updates the twister workflow to use the new zephyr-runner v2 CI
runner deployment.

It also updates the workflow to use the `ci-repo-cache` Docker image, which
includes the Zephyr repository cache, because the node level repository
cache is no longer available in the zephyr-runner v2.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
(cherry picked from commit 7c19bc70bf)
2024-04-24 20:11:32 +09:00
Celina Sophie Kalus
33b431c841 tests: posix: eventfd: Add ioctl F_SETFL test
Add a test to protect against future regressions in the ioctl F_SETFL
operation of eventfd. Flags are set and unset, and the validity of the
file descriptor is checked by reading and writing.

Signed-off-by: Celina Sophie Kalus <hello@celinakalus.de>
(cherry picked from commit 325f22a16f)
2024-04-23 10:11:26 +02:00
Celina Sophie Kalus
02aa8a246d posix: eventfd: Fix unsetting internal flags in ioctl
Commit e6eb0a705b ("posix: eventfd: revise locking, signaling, and
allocation") introduced a regression where the internal flags of an
event file descriptor would be erased when calling the F_SETFL ioctl
operation.

This includes the flag EFD_IN_USE_INTERNAL which determines whether
this file descriptor has been opened, thus effectively closing the
eventfd whenever one tries to change a flag.

Signed-off-by: Celina Sophie Kalus <hello@celinakalus.de>
(cherry picked from commit 5bd86eaddb)
2024-04-23 10:11:26 +02:00
Marco Widmer
1ef57ab277 pm: runtime: fix race when waiting for suspended event
To wait for the asynchronous suspending work item to complete, a
combination of semaphores and events is used. First, the semaphore is
released, then the events are cleared (through the boolean argument to
k_event_wait), then events are awaited.

However, if the event flag happens to be set by the work handler in the
short time between k_sem_give and k_event_wait, it is then cleared by
k_event_wait and k_event_wait blocks forever waiting for the event.

Make sure that we clear the event flag before releasing the semaphore.

Signed-off-by: Marco Widmer <marco.widmer@bytesatwork.ch>
(cherry picked from commit d83c63ecce)
2024-04-21 03:27:40 -07:00
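
A hedged sketch of the corrected ordering with Zephyr's k_event and k_sem
APIs (object and bit names are illustrative): clear the event before
releasing the semaphore, then wait without the auto-reset flag so a flag
set in between is not lost.

    #include <zephyr/kernel.h>

    #define SUSPENDED_EVT BIT(0)

    K_EVENT_DEFINE(pm_evt);
    K_SEM_DEFINE(pm_sem, 0, 1);

    static void wait_suspended(void)
    {
        /* Clear stale flags first; if the work handler sets the flag right
         * after k_sem_give(), k_event_wait() must not wipe it out. */
        k_event_clear(&pm_evt, SUSPENDED_EVT);

        k_sem_give(&pm_sem);

        /* reset=false: do not clear pending events on entry. */
        k_event_wait(&pm_evt, SUSPENDED_EVT, false, K_FOREVER);
    }
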
Francois Ramu
d0af8d80de samples: drivers: spi flash testcase check the erased data
After erasing the sector, compare the data read back with the expected
0xFF pattern to decide whether erasing was successful, instead of relying
on the return code of the flash_erase function.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
(cherry picked from commit 325ae4d32a)
2024-04-12 07:49:25 -05:00
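
A hedged sketch of verifying an erased sector with the Zephyr flash API by
reading it back and comparing against 0xFF; the chunk size and helper are
illustrative:

    #include <errno.h>
    #include <zephyr/drivers/flash.h>
    #include <zephyr/sys/util.h>

    static int verify_erased(const struct device *flash_dev,
                             off_t offset, size_t len)
    {
        uint8_t buf[32];

        while (len > 0) {
            size_t chunk = MIN(len, sizeof(buf));
            int rc = flash_read(flash_dev, offset, buf, chunk);

            if (rc != 0) {
                return rc;
            }

            for (size_t i = 0; i < chunk; i++) {
                if (buf[i] != 0xFF) {
                    return -EIO;   /* sector not actually erased */
                }
            }

            offset += chunk;
            len -= chunk;
        }

        return 0;
    }
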
Francois Ramu
6d9dee38f4 drivers: flash: stm32 ospi flash driver the correct erase instruction
Use the most suitable instruction for the sector erase command.
It can be 0x20 or 0x21, or 0x21DE in octo-DTR mode.
The value is given by the SFDP table and is filled into the erase_types
table during the SFDP discovery process.

Signed-off-by: Francois Ramu <francois.ramu@st.com>
(cherry picked from commit 01684df03a)
2024-04-12 07:49:25 -05:00
Henrik Brix Andersen
719b338055 drivers: can: mcan: enable transmitter delay compensation when possible
Enable Transmitter Delay Compensation whenever the data phase timing
parameters allow it.

Fixes: #70447

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit bfad7bc00e)
2024-04-05 08:42:26 -05:00
Henrik Brix Andersen
d17fedb5f8 drivers: can: add utility macro for calculating TDC offset
Add a utility macro for calculating the Transmitter Delay Compensation
(TDC) Offset using the sample point and CAN core clock prescaler specified
by a set of data phase timing parameters.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit a3631264d1)
2024-04-05 08:42:26 -05:00
Henrik Brix Andersen
1b610be502 dts: bindings: can: can-fd-controller: remove tx-delay-comp-offset prop
Remove the unused "tx-delay-comp-offset" property from the base CAN FD
controller devicetree binding.

Having a static Transmitter Delay Compensation (TDC) offset is useless.
The offset needs to match the data phase timing parameters in order to
properly configure the second sample point when transmitting CAN FD frames
with BRS enabled.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit 744f20d005)
2024-04-05 08:42:26 -05:00
Henrik Brix Andersen
89356355d8 drivers: can: mcan: remove broken transmitter delay compensation support
Remove broken support for Transmitter Delay Compensation from the Bosch
M_CAN backend driver.

Even if this was enabled via Kconfig, the TDC bit in the DBTP register set
during driver initialization is overwritten in can_mcan_set_timing_data(),
turning TDC off.

Signed-off-by: Henrik Brix Andersen <hebad@vestas.com>
(cherry picked from commit ec75dc2232)
2024-04-05 08:42:26 -05:00
Reto Schneider
c719aedb0d doc: releases: Fix GH link in migration guide v3.6
Putting a space between :github: and the PR number causes the rendered
text to not link to the issue.

Signed-off-by: Reto Schneider <reto.schneider@husqvarnagroup.com>
(cherry picked from commit 5a49b7c90d)
2024-04-03 09:28:55 -05:00
Krzysztof Chruściński
8380437a8c shell: rtt: Add detection of host presence
If the host is not reading RTT data (because there is no PC connection
or the RTT reading application is not running on the host), the thread
gets stuck continuously trying to write to RTT. All threads with equal or
lower priority are then blocked. Add detection of that case: if the
host has not read data for a configurable period, data is dropped until
the host accepts new data.

A similar solution is used in the RTT logging backend.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit 04a74ce107)
2024-04-03 09:25:06 -05:00
Flavio Ceolin
b6391efe58 intel_adsp/ace: power: Lock interruption when power gate fails
In case the core is not power gated, waiti will restore intlevel. In
this case we lock interrupts after it.

In the bug scenario, the host starts streaming and, via SOF APIs, keeps a
lock to prevent Zephyr from entering PM_STATE_RUNTIME_IDLE. During the
test case, the host removes this block and core0 is allowed to enter the
IDLE state.

When core0 enters the power gated state, interrupts are left enabled (so
the core can be woken up when something happens). This leaves a race
where a suitably timed interrupt will actually block entry to the power
gated state and k_cpu_idle() in power_gate_entry() will return. This is
rare, but happens often enough that the relatively short test plan run on
SOF pull requests will trigger this case.

Fixes #69807

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
(cherry picked from commit 07426a800c)
2024-03-29 09:22:24 +01:00
Fin Maaß
c3ff6548f3 mgmt: hawkbit: remove hb_context.status_buffer_size
remove hb_context.status_buffer_size and replace it with
sizeof(hb_context.status_buffer), because hb_context.status_buffer_size
is never set.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
(cherry picked from commit 1bea938c9f)
2024-03-29 09:22:10 +01:00
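
An illustrative, non-authoritative sketch of the general pattern: sizing an
operation from the buffer itself, since a separately tracked size field can
silently stay at zero (the struct here is hypothetical):

    #include <stdio.h>

    struct hb_ctx_sketch {
        char status_buffer[200];
        /* no separate status_buffer_size field to forget to set;
         * sizeof() on the array cannot go stale */
    };

    static void format_status(struct hb_ctx_sketch *ctx, int progress)
    {
        snprintf(ctx->status_buffer, sizeof(ctx->status_buffer),
                 "progress=%d", progress);
    }
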
Fabio Baltieri
83466c2ac1 input: gpio_keys: fix suspend race condition
Change the suspend/resume code to ensure that the interrupts are disabled
before changing the pin configuration. The current sequence has been
reported to cause spurious readouts on some platforms. This takes the
existing code and duplicates it for the suspend and resume cases, but swaps
the interrupt disable and pin configure steps for the suspend case.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
(cherry picked from commit d0a8c4158c)
2024-03-18 23:55:26 +01:00
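
A hedged sketch of the suspend-side ordering described above, using the
Zephyr GPIO API (the gpio_dt_spec and the chosen disconnect configuration
are illustrative): the interrupt is disabled before the pin is
reconfigured.

    #include <zephyr/drivers/gpio.h>

    static int suspend_button(const struct gpio_dt_spec *button)
    {
        /* Disable the interrupt first so reconfiguring the pin cannot
         * produce a spurious edge or readout. */
        int rc = gpio_pin_interrupt_configure_dt(button, GPIO_INT_DISABLE);

        if (rc != 0) {
            return rc;
        }

        return gpio_pin_configure_dt(button, GPIO_DISCONNECTED);
    }
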
Daniel DeGrasse
78aaf1d527 drivers: mipi_dbi: mipi_dbi_spi: do not take spinlock
Taking a spinlock will result in interrupts being blocked in the MIPI
DBI driver, which is not desired behavior while issuing SPI transfers,
since the driver may use interrupts to drive the transfer.

Fixes #68815

Signed-off-by: Daniel DeGrasse <daniel.degrasse@nxp.com>
(cherry picked from commit 5b6fadc10d)
2024-03-18 22:11:30 +01:00
Sylvio Alves
b3e4eef8b5 hal_espressif: update to include bugfixes
Added longjmp patch.
Fixes build warning in phy driver.
Fixes runtime missing rom function.
Fixes missing mcuboot assertion implementation.
Added function to retrieve uart port num

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
(cherry picked from commit f7eac8ab8d)
2024-03-17 12:55:25 +01:00
Yasushi SHOJI
a25087f9e5 drivers: sensor: ams_as5600: Fix calculation of fractional part
commit 98903d48c3 upstream.

The original calculation has two bugs. One is that the calculated value
is wrong, and the other is that the value is not in one-millionth parts.

What the original calculation does is compute a scaled position value by
multiplying the raw sensor value (`dev_data->position`) by
`AS5600_FULL_ANGLE`, which represents a full rotation in degrees. It then
subtracts the product of the whole number of pulses (`val->val1`) and
`AS5600_PULSES_PER_REV` from this scaled position value.

    ((int32_t)dev_data->position * AS5600_FULL_ANGLE)
    - (val->val1 * AS5600_PULSES_PER_REV);

What you actually need is to extract the fractional part of the value by
taking the modulo of AS5600_PULSES_PER_REV from the scaled value of the
position.

   (((int32_t)dev_data->position * AS5600_FULL_ANGLE)
   % AS5600_PULSES_PER_REV)

Then convert the value to one-millionth part.

   * (AS5600_MILLION_UNIT / AS5600_PULSES_PER_REV);

Signed-off-by: Yasushi SHOJI <yashi@spacecubics.com>
2024-03-17 12:54:44 +01:00
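
Putting the two steps above together, the fractional part in one-millionth
units would be computed roughly as below; the macro values are assumptions
based on the message (360 degrees per revolution, a 12-bit position, and a
10^6 scale), not the exact driver definitions.

    #include <stdint.h>

    #define AS5600_FULL_ANGLE     360      /* degrees per revolution */
    #define AS5600_PULSES_PER_REV 4096     /* assumed 12-bit position */
    #define AS5600_MILLION_UNIT   1000000

    /* Fractional part of the angle in one-millionth degrees, combining the
     * modulo and scaling steps described in the message. */
    static int32_t as5600_frac_sketch(uint16_t position)
    {
        return (((int32_t)position * AS5600_FULL_ANGLE) % AS5600_PULSES_PER_REV)
               * (AS5600_MILLION_UNIT / AS5600_PULSES_PER_REV);
    }
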
Alberto Escolar Piedras
492eeeac31 drivers/counter nrfx: Fix with DT instance not matching device instance
478530ec0a introduced a bug where, if the DT index used while iterating
the DT structure initialization does not match the actual peripheral
instance, or if the device instance string is not just a simple integer
but a more complex string like "00" or "02", either the wrong peripheral
address would be used or the file would fail to compile.

Let's fix this by reverting that change and instead, for simulation,
converting the hardcoded DT/real HW address to the valid simulation
address on the fly.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit f9d5e8458c)
2024-03-14 09:47:45 +01:00
Alberto Escolar Piedras
72113c183b manifest: Update nrf hw models to latest
* Update the HW models module to
3925b7030736f25f45ceedc3621219125a2d4685

Including the following:
* 3925b70: Add new API to convert real peripheral addr to simulated one
* 319e3eb: nhw_convert_periph_base_addr: Fix include for nrf5340

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
(cherry picked from commit f6435e012e)
2024-03-14 09:47:45 +01:00
Peter Mitsis
cd21e04ec5 kernel: Update k_wakeup()
This commit does two things to k_wakeup():

1. It locks the scheduler before marking the thread as not suspended.
As the clearing of the _THREAD_SUSPENDED bit is not atomic, this
helps ensure that neither another thread nor an ISR interrupts this
action (which could result in a corrupted thread_state).

2. The call to flag_ipi() has been removed as it is already being
made within ready_thread().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
(cherry picked from commit 51ae993c12)
2024-03-12 16:02:04 +01:00
Jamie McCrae
fe92cf677d scripts: snippets: Fix path output on windows
Fixes an issue with paths being output Windows-style with backslashes,
which causes problems with certain escape sequences when cmake interprets
them. Replace these paths with POSIX paths so that they are not treated
as possible escape sequences.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
(cherry picked from commit b680a6ec72)
2024-03-12 14:56:55 +01:00
Guillaume Gautier
517880c664 samples: boards: stm32: pm: s2ram: enable standby mode
Add Standby mode to the overlay since here we want to test Suspend to RAM.

Signed-off-by: Guillaume Gautier <guillaume.gautier-ext@st.com>
(cherry picked from commit cfa7e38378)
2024-03-11 15:09:06 +01:00
Guillaume Gautier
b0bf5d3e09 dts: arm: st: wba: do not enable standby pm mode
Do not enable Standby low-power mode by default since the associated
Kconfig PM_S2RAM is disabled by default. Otherwise we could enter an
unsupported low-power state.

Signed-off-by: Guillaume Gautier <guillaume.gautier-ext@st.com>
(cherry picked from commit f1bfbb0627)
2024-03-11 15:09:06 +01:00
Yuval Peress
be48697fa2 it82xx2: Add missing ISRs for gpioj
Without this we can't take advantage of pins 6 & 7.

Fixes #69503

Signed-off-by: Yuval Peress <peress@google.com>
(cherry picked from commit 375aa90c09)
2024-03-08 19:01:46 +01:00
Robert Lubos
658a52be20 tests: net: sockets: tls: Add test verifying send() after close
Add test case verifying that send() returns an error when called after
TLS session has been closed.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit b920793e9b)
2024-03-08 19:01:30 +01:00
Robert Lubos
0383b7fb1c net: sockets: tls: Return an error on send() after session is closed
It was an oversight to return 0 on a TLS send() call after detecting
that the TLS session has been closed by the peer; such behavior is only
valid for recv(). Instead, an error should be returned.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit dc52b20705)
2024-03-08 19:01:30 +01:00
Maochen Wang
169020a824 net: zperf: Fix TOS option not working in zperf
When the zperf command is called with the '-S' option, which maps to
IP_TOS for IPv4 and IPV6_TCLASS for IPv6, an error is printed and the
setting does not work. The socket option handling was changed by
commit 77e522a5a243 ('net: context: Refactor option setters'), but the
callers of the option setters were not changed. This caused the IP_TOS
or IPV6_TCLASS option to fail to be set. The fix is to use uint8_t to
store the value of the -S option.

Signed-off-by: Maochen Wang <maochen.wang@nxp.com>
(cherry picked from commit e7444dcf42)
2024-03-08 19:01:10 +01:00
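
For context, a hedged sketch of setting the TOS/TCLASS option through the
Zephyr socket API with the value stored in a uint8_t, as the fix describes
(socket creation and error handling trimmed; the helper is illustrative):

    #include <stdbool.h>
    #include <stdint.h>
    #include <zephyr/net/socket.h>

    static int set_tos_sketch(int sock, bool ipv6, uint8_t tos)
    {
        if (ipv6) {
            return zsock_setsockopt(sock, IPPROTO_IPV6, IPV6_TCLASS,
                                    &tos, sizeof(tos));
        }

        return zsock_setsockopt(sock, IPPROTO_IP, IP_TOS,
                                &tos, sizeof(tos));
    }
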
Laurentiu Mihalcea
da2a9af2f2 tests: devicetree: check L2 interrupt encoding
Add some assert statements meant to check if L2 interrupts
are encoded right when dealing with nodes that consume interrupts
from multiple aggregators. For this to work, also add another
interrupt controller node which extends a different L1 interrupt
from `test_intc`.

Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
(cherry picked from commit 8fec5f361c)
2024-03-08 09:34:09 +01:00
Laurentiu Mihalcea
634b41780e devicetree.h: switch to DT_IRQ_INTC_BY_IDX for L2 and L3 INTID encodings
Using `DT_IRQ_INTC()` to fetch the interrupt controller associated
with a node works well for nodes which consume interrupts from a
single aggregator. However, when specifying multiple (and different)
interrupt aggregators via the `interrupts-extended` property,
the L2 and L3 interrupts will no longer be encoded properly. This
is because `DT_IRQ_INTC(node_id)` uses `DT_IRQ_INTC_BY_IDX(node_id, 0)`
so all the interrupts will use the first aggregator as their parent.
To fix this, switch from using `DT_IRQ_INTC()` to `DT_IRQ_INTC_BY_IDX()`.

Signed-off-by: Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>
(cherry picked from commit 3799e0f498)
2024-03-08 09:34:09 +01:00
Marcin Gasiorek
a8af8ae2eb net: ip: Fix for improper offset return by net_pkt_find_offset()
The original packet's link-layer destination and source addresses can be
stored in separately allocated memory. This allocated memory can be
placed just after the pkt data buffers.
When `net_pkt_find_offset()` uses the condition
`if (buf->data <= ptr && ptr <= (buf->data + buf->len)) {`
the offset can be set outside the packet's buffer and the function
returns an incorrect offset instead of an error code.
Finally, the offset is used to set the ll address in the cloned packet,
which can have unexpected behavior (e.g. a crash when the cursor is set
to empty memory).

Signed-off-by: Marcin Gasiorek <marcin.gasiorek@nordicsemi.no>
(cherry picked from commit fb99f65fe9)
2024-03-07 09:15:30 +01:00
Mathieu Choplain
f9576a7713 drivers: can: add missing argument to LOG_ERR call
PR #64399 introduced checks for out-of-bounds filter IDs
in CAN drivers, along with logging of said IDs; however,
the call to LOG_ERR in the native POSIX/Linux driver is
missing the 'filter_id' argument.

This commit adds the missing argument to ensure proper
data is printed when the LOG_ERR call is performed.

Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
(cherry picked from commit 6ff47b15db)
2024-03-05 18:38:24 +01:00
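
Illustrative only: a format string with a %d conversion needs a matching
argument, along the lines of the following sketch (module name and bounds
are hypothetical):

    #include <errno.h>
    #include <zephyr/logging/log.h>

    LOG_MODULE_REGISTER(can_filter_sketch, CONFIG_LOG_DEFAULT_LEVEL);

    static int check_filter(int filter_id, int max_filters)
    {
        if (filter_id < 0 || filter_id >= max_filters) {
            /* One %d in the format string, so exactly one argument. */
            LOG_ERR("filter ID %d out of bounds", filter_id);
            return -EINVAL;
        }

        return 0;
    }
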
Sylvio Alves
b90726cd10 drivers: dac: esp32: fix clock control subsys argument
Currently cfg->clock_subsys is passed as an address and is
causing a driver assertion.

Fixes #69198

Signed-off-by: Sylvio Alves <sylvio.alves@espressif.com>
(cherry picked from commit a79c54dc43)
2024-03-05 18:38:07 +01:00
Jukka Rissanen
3a7d43abba net: sockets: Do not start service thread if too little resources
If CONFIG_NET_SOCKETS_POLL_MAX is smaller than what is needed
for the socket service API to work properly, then we should not
start the service thread, as the service API cannot work and might
cause a memory overwrite in the ctx.events[] array.

Fixes #69233

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit ea189d5aee)
2024-03-05 18:37:51 +01:00
Pisit Sawangvonganan
8710af1838 wifi: shell: removed NULL check to net_mgmt callback
Since PR, PR_SHELL, PR_ERROR, PR_INFO, and PR_WARNING already have
an embedded `sh` NULL check, we can remove the change from PR #68809.

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
(cherry picked from commit b6f51edd6c)
2024-03-05 18:37:24 +01:00
Pisit Sawangvonganan
720b33ebf3 net: shell: ensure the shell sh is valid before call shell_printf
It is possible that the `sh` was not set before use.
This change adds a NULL check for `sh` in the following macros:
PR, PR_SHELL, PR_ERROR, PR_INFO, and PR_WARNING.
In case `sh` is NULL, the above macros will call `printk` instead.

Fixes #68793

Signed-off-by: Pisit Sawangvonganan <pisit@ndrsolution.com>
(cherry picked from commit 7b8a9e1818)
2024-03-05 18:37:24 +01:00
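
A hedged sketch of such a macro-level fallback (not the actual net shell
macros): print through the shell when the handle is available, otherwise
fall back to printk().

    #include <zephyr/shell/shell.h>
    #include <zephyr/sys/printk.h>

    #define PR_SKETCH(sh, fmt, ...)                                       \
        do {                                                              \
            if ((sh) != NULL) {                                           \
                shell_fprintf(sh, SHELL_NORMAL, fmt, ##__VA_ARGS__);      \
            } else {                                                      \
                printk(fmt, ##__VA_ARGS__);                               \
            }                                                             \
        } while (0)
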
147 changed files with 2576 additions and 704 deletions


@@ -2,12 +2,20 @@ name: Backport Issue Check
on:
pull_request_target:
types:
- edited
- opened
- reopened
- synchronize
branches:
- v*-branch
jobs:
backport:
name: Backport Issue Check
concurrency:
group: backport-issue-check-${{ github.ref }}
cancel-in-progress: true
runs-on: ubuntu-22.04
if: github.repository == 'zephyrproject-rtos/zephyr'

View File

@@ -30,12 +30,11 @@ concurrency:
jobs:
bsim-test:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
BSIM_OUT_PATH: /opt/bsim/
@@ -55,10 +54,16 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
@@ -78,7 +83,7 @@ jobs:
west init -l . || true
west config manifest.group-filter -- +ci
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV

View File

@@ -9,17 +9,19 @@ concurrency:
jobs:
clang-build:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["native_sim"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
LLVM_TOOLCHAIN_PATH: /usr/lib/llvm-16
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
@@ -34,10 +36,16 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
@@ -63,7 +71,7 @@ jobs:
# So first retry to update, if that does not work, remove all modules
# and start over. (Workaround until we implement more robust module
# west caching).
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west2.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
@@ -74,31 +82,12 @@ jobs:
gcc --version
ls -la
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
- name: Set up ccache
run: |
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-clang-${{ matrix.platform }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- name: Run Tests with Twister
id: twister
@@ -119,10 +108,10 @@ jobs:
echo "report_needed=0" >> $GITHUB_OUTPUT
fi
- name: ccache stats post
- name: Print ccache stats
if: always()
run: |
ccache -s
ccache -p
ccache -s -vv
- name: Upload Unit Test Results
if: always() && steps.twister.outputs.report_needed != 0

View File

@@ -10,17 +10,22 @@ concurrency:
jobs:
codecov:
if: github.repository == 'zephyrproject-rtos/zephyr'
runs-on: zephyr-runner-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
platform: ["mps2_an385", "native_sim", "qemu_x86", "unit_testing"]
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '--specs=*'
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -30,6 +35,12 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
@@ -37,7 +48,7 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: checkout
@@ -58,30 +69,12 @@ jobs:
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Prepare ccache keys
id: ccache_cache_prop
shell: cmake -P {0}
- name: Set up ccache
run: |
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
with:
key: ${{ steps.ccache_cache_prop.outputs.repo }}-${{github.event_name}}-${{matrix.platform}}-codecov-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- name: Run Tests with Twister (Push)
continue-on-error: true
@@ -90,12 +83,14 @@ jobs:
export ZEPHYR_TOOLCHAIN_VARIANT=zephyr
mkdir -p coverage/reports
pip3 install gcovr==6.0
./scripts/twister -i --force-color -N -v --filter runnable -p ${{ matrix.platform }} --coverage -T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano
./scripts/twister -i --force-color -N -v --filter runnable -p ${{ matrix.platform }} \
--coverage -T tests --coverage-tool gcovr -xCONFIG_TEST_EXTRA_STACK_SIZE=4096 -e nano \
--timeout-multiplier 2
- name: ccache stats post
- name: Print ccache stats
if: always()
run: |
ccache -s
ccache -p
ccache -s -vv
- name: Rename coverage files
if: always()

View File

@@ -36,13 +36,29 @@ jobs:
doc-build-html:
name: "Documentation Build (HTML)"
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
timeout-minutes: 45
concurrency:
group: doc-build-html-${{ github.ref }}
cancel-in-progress: true
steps:
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install -y wget python3-pip git ninja-build graphviz lcov
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
sudo tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz -C /opt
echo "/opt/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
echo "${HOME}/.local/bin" >> $GITHUB_PATH
- name: checkout
uses: actions/checkout@v4
with:
@@ -61,14 +77,6 @@ jobs:
git rebase origin/${BASE_REF}
git log --graph --oneline HEAD...${PR_HEAD}
- name: install-pkgs
run: |
sudo apt-get update
sudo apt-get install -y ninja-build graphviz lcov
wget --no-verbose "https://github.com/doxygen/doxygen/releases/download/Release_${DOXYGEN_VERSION//./_}/doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz"
tar xf doxygen-${DOXYGEN_VERSION}.linux.bin.tar.gz
echo "${PWD}/doxygen-${DOXYGEN_VERSION}/bin" >> $GITHUB_PATH
- name: cache-pip
uses: actions/cache@v4
with:
@@ -153,7 +161,8 @@ jobs:
if: |
github.event_name != 'pull_request' &&
github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container: texlive/texlive:latest
timeout-minutes: 60
concurrency:
@@ -165,6 +174,12 @@ jobs:
run: |
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: checkout
uses: actions/checkout@v4

View File

@@ -22,10 +22,11 @@ concurrency:
jobs:
footprint-tracking:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false
@@ -40,6 +41,12 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH

View File

@@ -68,7 +68,7 @@ jobs:
elif [ "${{ runner.os }}" = "Windows" ]; then
EXTRA_TWISTER_FLAGS="-P native_sim --short-build-path -O/tmp/twister-out"
fi
./scripts/twister --force-color --inline-logs -T samples/hello_world -v $EXTRA_TWISTER_FLAGS
./scripts/twister --force-color --inline-logs -T samples/hello_world -T samples/cpp/hello_world -v $EXTRA_TWISTER_FLAGS
- name: Upload artifacts
if: failure()
@@ -77,3 +77,4 @@ jobs:
if-no-files-found: ignore
path:
zephyr/twister-out/*/samples/hello_world/sample.basic.helloworld/build.log
zephyr/twister-out/*/samples/cpp/hello_world/sample.cpp.helloworld/build.log

View File

@@ -22,19 +22,18 @@ concurrency:
jobs:
twister-build-prep:
if: github.repository_owner == 'zephyrproject-rtos'
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
outputs:
subset: ${{ steps.output-services.outputs.subset }}
size: ${{ steps.output-services.outputs.size }}
fullrun: ${{ steps.output-services.outputs.fullrun }}
env:
MATRIX_SIZE: 10
PUSH_MATRIX_SIZE: 15
PUSH_MATRIX_SIZE: 20
DAILY_MATRIX_SIZE: 80
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
@@ -50,11 +49,17 @@ jobs:
# GitHub comes up with a fundamental fix for this problem.
git config --global --add safe.directory ${GITHUB_WORKSPACE}
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Clone cached Zephyr repository
if: github.event_name == 'pull_request_target'
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
@@ -76,7 +81,7 @@ jobs:
west init -l . || true
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
@@ -119,28 +124,39 @@ jobs:
echo "fullrun=${TWISTER_FULL}" >> $GITHUB_OUTPUT
twister-build:
runs-on: zephyr-runner-linux-x64-4xlarge
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.7
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.7.20240313
options: '--entrypoint /bin/bash'
volumes:
- /repo-cache/zephyrproject:/github/cache/zephyrproject
strategy:
fail-fast: false
matrix:
subset: ${{fromJSON(needs.twister-build-prep.outputs.subset)}}
timeout-minutes: 1440
env:
CCACHE_DIR: /node-cache/ccache-zephyr
CCACHE_REMOTE_STORAGE: "redis://cache-*.keydb-cache.svc.cluster.local|shards=1,2,3"
CCACHE_REMOTE_ONLY: "true"
# `--specs` is ignored because ccache is unable to resolve the toolchain specs file path.
CCACHE_IGNOREOPTIONS: '--specs=*'
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 '
TWISTER_COMMON: ' --force-color --inline-logs -v -N -M --retry-failed 3 --timeout-multiplier 2 '
DAILY_OPTIONS: ' -M --build-only --all --show-footprint'
PR_OPTIONS: ' --clobber-output --integration'
PUSH_OPTIONS: ' --clobber-output -M --show-footprint'
COMMIT_RANGE: ${{ github.event.pull_request.base.sha }}..${{ github.event.pull_request.head.sha }}
BASE_REF: ${{ github.base_ref }}
steps:
- name: Print cloud service information
run: |
echo "ZEPHYR_RUNNER_CLOUD_PROVIDER = ${ZEPHYR_RUNNER_CLOUD_PROVIDER}"
echo "ZEPHYR_RUNNER_CLOUD_NODE = ${ZEPHYR_RUNNER_CLOUD_NODE}"
echo "ZEPHYR_RUNNER_CLOUD_POD = ${ZEPHYR_RUNNER_CLOUD_POD}"
- name: Apply container owner mismatch workaround
run: |
# FIXME: The owner UID of the GITHUB_WORKSPACE directory may not
@@ -152,7 +168,7 @@ jobs:
- name: Clone cached Zephyr repository
continue-on-error: true
run: |
git clone --shared /github/cache/zephyrproject/zephyr .
git clone --shared /repo-cache/zephyrproject/zephyr .
git remote set-url origin ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
- name: Checkout
@@ -176,15 +192,9 @@ jobs:
west init -l . || true
west config manifest.group-filter -- +ci,+optional
west config --global update.narrow true
west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /github/cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /github/cache/zephyrproject)
west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || west update --path-cache /repo-cache/zephyrproject 2>&1 1> west.update.log || ( rm -rf ../modules ../bootloader ../tools && west update --path-cache /repo-cache/zephyrproject)
west forall -c 'git reset --hard HEAD'
# Hotfix until we have kitware ninja in the docker image.
# Needed for full functionality of the job server functionality in twister which only works with
# kitware supplied ninja version.
wget -c https://github.com/Kitware/ninja/releases/download/v1.11.1.g95dee.kitware.jobserver-1/ninja-1.11.1.g95dee.kitware.jobserver-1_x86_64-linux-gnu.tar.gz -O - | tar xz --strip-components=1
sudo cp ninja /usr/local/bin
echo "ZEPHYR_SDK_INSTALL_DIR=/opt/toolchains/zephyr-sdk-$( cat SDK_VERSION )" >> $GITHUB_ENV
- name: Check Environment
@@ -196,32 +206,12 @@ jobs:
echo "github.base_ref: ${{ github.base_ref }}"
echo "github.ref_name: ${{ github.ref_name }}"
- name: Prepare ccache timestamp/data
id: ccache_cache_timestamp
shell: cmake -P {0}
- name: Set up ccache
run: |
string(TIMESTAMP current_date "%Y-%m-%d-%H;%M;%S" UTC)
string(REPLACE "/" "_" repo ${{github.repository}})
string(REPLACE "-" "_" repo2 ${repo})
file(APPEND $ENV{GITHUB_OUTPUT} "repo=${repo2}\n")
- name: use cache
id: cache-ccache
uses: zephyrproject-rtos/action-s3-cache@v1.2.0
continue-on-error: true
with:
key: ${{ steps.ccache_cache_timestamp.outputs.repo }}-${{ github.ref_name }}-${{github.event_name}}-${{ matrix.subset }}-ccache
path: /github/home/.cache/ccache
aws-s3-bucket: ccache.zephyrproject.org
aws-access-key-id: ${{ vars.AWS_CCACHE_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CCACHE_SECRET_ACCESS_KEY }}
aws-region: us-east-2
- name: ccache stats initial
run: |
mkdir -p /github/home/.cache
test -d github/home/.cache/ccache && rm -rf /github/home/.cache/ccache && mv github/home/.cache/ccache /github/home/.cache/ccache
ccache -M 10G -s
mkdir -p ${CCACHE_DIR}
ccache -M 10G
ccache -p
ccache -z -s -vv
- if: github.event_name == 'push'
name: Run Tests with Twister (Push)
@@ -264,10 +254,10 @@ jobs:
fi
fi
- name: ccache stats post
- name: Print ccache stats
if: always()
run: |
ccache -p
ccache -s
ccache -s -vv
- name: Upload Unit Test Results
if: always()

View File

@@ -1 +1 @@
0.16.5
0.16.9

View File

@@ -50,7 +50,7 @@ endif()
find_package(Deprecated COMPONENTS CROSS_COMPILE)
find_package(Zephyr-sdk 0.16)
find_package(Zephyr-sdk 0.16...<0.17)
# gperf is an optional dependency
find_program(GPERF gperf)

View File

@@ -54,7 +54,7 @@ if(("zephyr" STREQUAL ${ZEPHYR_TOOLCHAIN_VARIANT}) OR
# To support Zephyr SDK tools (DTC, and other tools) with 3rd party toolchains
# then we keep track of current toolchain variant.
set(ZEPHYR_CURRENT_TOOLCHAIN_VARIANT ${ZEPHYR_TOOLCHAIN_VARIANT})
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION}
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION_COMPLETE}
REQUIRED QUIET CONFIG HINTS ${ZEPHYR_SDK_INSTALL_DIR}
)
if(DEFINED ZEPHYR_CURRENT_TOOLCHAIN_VARIANT)
@@ -82,10 +82,20 @@ if(("zephyr" STREQUAL ${ZEPHYR_TOOLCHAIN_VARIANT}) OR
list(REMOVE_DUPLICATES Zephyr-sdk_CONSIDERED_VERSIONS)
list(SORT Zephyr-sdk_CONSIDERED_VERSIONS COMPARE NATURAL ORDER DESCENDING)
if("${Zephyr-sdk_FIND_VERSION_RANGE_MAX}" STREQUAL "INCLUDE")
set(upper_bound _EQUAL)
endif()
if(NOT DEFINED Zephyr-sdk_FIND_VERSION_RANGE)
# Range not given, max out to ensure max version is not in effect.
set(Zephyr-sdk_FIND_VERSION_MAX 99999999)
endif()
# Loop over each found Zephyr SDK version until one is found that is compatible.
foreach(zephyr_sdk_candidate ${Zephyr-sdk_CONSIDERED_VERSIONS})
if("${zephyr_sdk_candidate}" VERSION_GREATER_EQUAL "${Zephyr-sdk_FIND_VERSION}")
if("${zephyr_sdk_candidate}" VERSION_GREATER_EQUAL "${Zephyr-sdk_FIND_VERSION}"
AND "${zephyr_sdk_candidate}" VERSION_LESS${upper_bound} "${Zephyr-sdk_FIND_VERSION_MAX}"
)
# Find the path for the current version being checked and get the directory
# of the Zephyr SDK so it can be checked.
list(FIND zephyr_sdk_found_versions ${zephyr_sdk_candidate} zephyr_sdk_current_index)
@@ -93,7 +103,7 @@ if(("zephyr" STREQUAL ${ZEPHYR_TOOLCHAIN_VARIANT}) OR
get_filename_component(zephyr_sdk_current_check_path ${zephyr_sdk_current_check_path} DIRECTORY)
# Then see if this version is compatible.
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION} QUIET CONFIG PATHS ${zephyr_sdk_current_check_path} NO_DEFAULT_PATH)
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION_COMPLETE} QUIET CONFIG PATHS ${zephyr_sdk_current_check_path} NO_DEFAULT_PATH)
if (${Zephyr-sdk_FOUND})
# A compatible version of the Zephyr SDK has been found which is the highest
@@ -106,7 +116,7 @@ if(("zephyr" STREQUAL ${ZEPHYR_TOOLCHAIN_VARIANT}) OR
if (NOT ${Zephyr-sdk_FOUND})
# This means no compatible Zephyr SDK versions were found, set the version
# back to the minimum version so that it is displayed in the error text.
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION} REQUIRED CONFIG PATHS ${zephyr_sdk_search_paths})
find_package(Zephyr-sdk ${Zephyr-sdk_FIND_VERSION_COMPLETE} REQUIRED CONFIG PATHS ${zephyr_sdk_search_paths})
endif()
endif()

View File

@@ -45,7 +45,7 @@ endif()
zephyr_get(CONF_FILE SYSBUILD LOCAL)
if(NOT DEFINED CONF_FILE)
zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR} KCONF CONF_FILE NAMES "prj.conf" SUFFIX ${FILE_SUFFIX} REQUIRED)
zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR}/boards KCONF CONF_FILE)
zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR}/boards KCONF CONF_FILE SUFFIX ${FILE_SUFFIX})
else()
string(CONFIGURE "${CONF_FILE}" CONF_FILE_EXPANDED)
string(REPLACE " " ";" CONF_FILE_AS_LIST "${CONF_FILE_EXPANDED}")
@@ -77,8 +77,12 @@ zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR}/boards DTS APP_BOARD_DTS SUFFIX
zephyr_get(DTC_OVERLAY_FILE SYSBUILD LOCAL)
if(NOT DEFINED DTC_OVERLAY_FILE)
zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR} DTS DTC_OVERLAY_FILE
NAMES "${APP_BOARD_DTS};${BOARD}.overlay;app.overlay" SUFFIX ${FILE_SUFFIX})
if(DEFINED APP_BOARD_DTS)
set(DTC_OVERLAY_FILE ${APP_BOARD_DTS})
else()
zephyr_file(CONF_FILES ${APPLICATION_CONFIG_DIR} DTS DTC_OVERLAY_FILE
NAMES "${BOARD}.overlay;app.overlay" SUFFIX ${FILE_SUFFIX})
endif()
endif()
set(DTC_OVERLAY_FILE ${DTC_OVERLAY_FILE} CACHE STRING "If desired, you can \

View File

@@ -2220,16 +2220,12 @@ function(toolchain_parse_make_rule input_file include_files)
# the element separator, so let's get the pure `;` back.
string(REPLACE "\;" ";" input_as_list ${input})
# Pop the first line and treat it specially
list(POP_FRONT input_as_list first_input_line)
string(FIND ${first_input_line} ": " index)
math(EXPR j "${index} + 2")
string(SUBSTRING ${first_input_line} ${j} -1 first_include_file)
# The file might also contain multiple files on one line if one or both of
# the file paths are short, split these up into multiple elements using regex
string(REGEX REPLACE "([^ ])[ ]([^ ])" "\\1;\\2" input_as_list "${input_as_list}")
# Remove whitespace before and after filename and convert to CMake path.
string(STRIP "${first_include_file}" first_include_file)
file(TO_CMAKE_PATH "${first_include_file}" first_include_file)
set(result "${first_include_file}")
# Pop the first item containing "empty_file.o:"
list(POP_FRONT input_as_list first_input_line)
# Remove whitespace before and after filename and convert to CMake path.
foreach(file ${input_as_list})
@@ -2481,76 +2477,76 @@ Please provide one of following: APPLICATION_ROOT, CONF_FILES")
set(multi_args CONF_FILES NAMES)
endif()
cmake_parse_arguments(FILE "${options}" "${single_args}" "${multi_args}" ${ARGN})
if(FILE_UNPARSED_ARGUMENTS)
message(FATAL_ERROR "zephyr_file(${ARGV0} <val> ...) given unknown arguments: ${FILE_UNPARSED_ARGUMENTS}")
cmake_parse_arguments(ZFILE "${options}" "${single_args}" "${multi_args}" ${ARGN})
if(ZFILE_UNPARSED_ARGUMENTS)
message(FATAL_ERROR "zephyr_file(${ARGV0} <val> ...) given unknown arguments: ${ZFILE_UNPARSED_ARGUMENTS}")
endif()
if(FILE_APPLICATION_ROOT)
if(ZFILE_APPLICATION_ROOT)
# Note: user can do: `-D<var>=<relative-path>` and app can at same
# time specify `list(APPEND <var> <abs-path>)`
# Thus need to check and update only CACHED variables (-D<var>).
set(CACHED_PATH $CACHE{${FILE_APPLICATION_ROOT}})
set(CACHED_PATH $CACHE{${ZFILE_APPLICATION_ROOT}})
foreach(path ${CACHED_PATH})
# The cached variable is relative path, i.e. provided by `-D<var>` or
# `set(<var> CACHE)`, so let's update current scope variable to absolute
# path from `APPLICATION_SOURCE_DIR`.
if(NOT IS_ABSOLUTE ${path})
set(abs_path ${APPLICATION_SOURCE_DIR}/${path})
list(FIND ${FILE_APPLICATION_ROOT} ${path} index)
list(FIND ${ZFILE_APPLICATION_ROOT} ${path} index)
if(NOT ${index} LESS 0)
list(REMOVE_AT ${FILE_APPLICATION_ROOT} ${index})
list(INSERT ${FILE_APPLICATION_ROOT} ${index} ${abs_path})
list(REMOVE_AT ${ZFILE_APPLICATION_ROOT} ${index})
list(INSERT ${ZFILE_APPLICATION_ROOT} ${index} ${abs_path})
endif()
endif()
endforeach()
# Now all cached relative paths have been updated.
# Let's check if anyone uses relative path as scoped variable, and fail
foreach(path ${${FILE_APPLICATION_ROOT}})
foreach(path ${${ZFILE_APPLICATION_ROOT}})
if(NOT IS_ABSOLUTE ${path})
message(FATAL_ERROR
"Relative path encountered in scoped variable: ${FILE_APPLICATION_ROOT}, value=${path}\n \
Please adjust any `set(${FILE_APPLICATION_ROOT} ${path})` or `list(APPEND ${FILE_APPLICATION_ROOT} ${path})`\n \
"Relative path encountered in scoped variable: ${ZFILE_APPLICATION_ROOT}, value=${path}\n \
Please adjust any `set(${ZFILE_APPLICATION_ROOT} ${path})` or `list(APPEND ${ZFILE_APPLICATION_ROOT} ${path})`\n \
to absolute path using `\${CMAKE_CURRENT_SOURCE_DIR}/${path}` or similar. \n \
Relative paths are only allowed with `-D${ARGV1}=<path>`")
endif()
endforeach()
# This updates the provided argument in parent scope (callers scope)
set(${FILE_APPLICATION_ROOT} ${${FILE_APPLICATION_ROOT}} PARENT_SCOPE)
set(${ZFILE_APPLICATION_ROOT} ${${ZFILE_APPLICATION_ROOT}} PARENT_SCOPE)
endif()
if(FILE_CONF_FILES)
if(DEFINED FILE_BOARD_REVISION AND NOT FILE_BOARD)
if(ZFILE_CONF_FILES)
if(DEFINED ZFILE_BOARD_REVISION AND NOT ZFILE_BOARD)
message(FATAL_ERROR
"zephyr_file(${ARGV0} <path> BOARD_REVISION ${FILE_BOARD_REVISION} ...)"
"zephyr_file(${ARGV0} <path> BOARD_REVISION ${ZFILE_BOARD_REVISION} ...)"
" given without BOARD argument, please specify BOARD"
)
endif()
if(NOT DEFINED FILE_BOARD)
if(NOT DEFINED ZFILE_BOARD)
# Defaulting to system wide settings when BOARD is not given as argument
set(FILE_BOARD ${BOARD})
set(ZFILE_BOARD ${BOARD})
if(DEFINED BOARD_REVISION)
set(FILE_BOARD_REVISION ${BOARD_REVISION})
set(ZFILE_BOARD_REVISION ${BOARD_REVISION})
endif()
endif()
if(FILE_NAMES)
set(dts_filename_list ${FILE_NAMES})
set(kconf_filename_list ${FILE_NAMES})
if(ZFILE_NAMES)
set(dts_filename_list ${ZFILE_NAMES})
set(kconf_filename_list ${ZFILE_NAMES})
else()
zephyr_build_string(filename
BOARD ${FILE_BOARD}
BUILD ${FILE_BUILD}
BOARD ${ZFILE_BOARD}
BUILD ${ZFILE_BUILD}
)
set(filename_list ${filename})
zephyr_build_string(filename
BOARD ${FILE_BOARD}
BOARD_REVISION ${FILE_BOARD_REVISION}
BUILD ${FILE_BUILD}
BOARD ${ZFILE_BOARD}
BOARD_REVISION ${ZFILE_BOARD_REVISION}
BUILD ${ZFILE_BUILD}
)
list(APPEND filename_list ${filename})
list(REMOVE_DUPLICATES filename_list)
@@ -2561,24 +2557,24 @@ Relative paths are only allowed with `-D${ARGV1}=<path>`")
list(TRANSFORM kconf_filename_list APPEND ".conf")
endif()
if(FILE_DTS)
foreach(path ${FILE_CONF_FILES})
if(ZFILE_DTS)
foreach(path ${ZFILE_CONF_FILES})
foreach(filename ${dts_filename_list})
if(NOT IS_ABSOLUTE ${filename})
set(test_file ${path}/${filename})
else()
set(test_file ${filename})
endif()
zephyr_file_suffix(test_file SUFFIX ${FILE_SUFFIX})
zephyr_file_suffix(test_file SUFFIX ${ZFILE_SUFFIX})
if(EXISTS ${test_file})
list(APPEND ${FILE_DTS} ${test_file})
list(APPEND ${ZFILE_DTS} ${test_file})
if(DEFINED FILE_BUILD)
if(DEFINED ZFILE_BUILD)
set(deprecated_file_found y)
endif()
if(FILE_NAMES)
if(ZFILE_NAMES)
break()
endif()
endif()
@@ -2586,31 +2582,32 @@ Relative paths are only allowed with `-D${ARGV1}=<path>`")
endforeach()
# This updates the provided list in parent scope (callers scope)
set(${FILE_DTS} ${${FILE_DTS}} PARENT_SCOPE)
set(${ZFILE_DTS} ${${ZFILE_DTS}} PARENT_SCOPE)
if(NOT ${FILE_DTS})
if(NOT ${ZFILE_DTS})
set(not_found ${dts_filename_list})
endif()
endif()
if(FILE_KCONF)
foreach(path ${FILE_CONF_FILES})
if(ZFILE_KCONF)
foreach(path ${ZFILE_CONF_FILES})
foreach(filename ${kconf_filename_list})
if(NOT IS_ABSOLUTE ${filename})
set(test_file ${path}/${filename})
else()
set(test_file ${filename})
endif()
zephyr_file_suffix(test_file SUFFIX ${FILE_SUFFIX})
zephyr_file_suffix(test_file SUFFIX ${ZFILE_SUFFIX})
if(EXISTS ${test_file})
list(APPEND ${FILE_KCONF} ${test_file})
list(APPEND ${ZFILE_KCONF} ${test_file})
if(DEFINED FILE_BUILD)
if(DEFINED ZFILE_BUILD)
set(deprecated_file_found y)
endif()
if(FILE_NAMES)
if(ZFILE_NAMES)
break()
endif()
endif()
@@ -2618,16 +2615,16 @@ Relative paths are only allowed with `-D${ARGV1}=<path>`")
endforeach()
# This updates the provided list in parent scope (callers scope)
set(${FILE_KCONF} ${${FILE_KCONF}} PARENT_SCOPE)
set(${ZFILE_KCONF} ${${ZFILE_KCONF}} PARENT_SCOPE)
if(NOT ${FILE_KCONF})
if(NOT ${ZFILE_KCONF})
set(not_found ${kconf_filename_list})
endif()
endif()
if(FILE_REQUIRED AND DEFINED not_found)
if(ZFILE_REQUIRED AND DEFINED not_found)
message(FATAL_ERROR
"No ${not_found} file(s) was found in the ${FILE_CONF_FILES} folder(s), "
"No ${not_found} file(s) was found in the ${ZFILE_CONF_FILES} folder(s), "
"please read the Zephyr documentation on application development."
)
endif()
@@ -2691,9 +2688,9 @@ endfunction()
#
function(zephyr_file_suffix filename)
set(single_args SUFFIX)
cmake_parse_arguments(FILE "" "${single_args}" "" ${ARGN})
cmake_parse_arguments(SFILE "" "${single_args}" "" ${ARGN})
if(NOT DEFINED FILE_SUFFIX OR NOT DEFINED ${filename})
if(NOT DEFINED SFILE_SUFFIX OR NOT DEFINED ${filename})
# If the file suffix variable is not known then there is nothing to do, return early
return()
endif()
@@ -2709,7 +2706,7 @@ function(zephyr_file_suffix filename)
# Search for the full stop so we know where to add the file suffix before the file extension
cmake_path(GET file EXTENSION file_ext)
cmake_path(REMOVE_EXTENSION file OUTPUT_VARIABLE new_filename)
cmake_path(APPEND_STRING new_filename "_${FILE_SUFFIX}${file_ext}")
cmake_path(APPEND_STRING new_filename "_${SFILE_SUFFIX}${file_ext}")
# Use the filename with the suffix if it exists, if not then fall back to the default
if(EXISTS "${new_filename}")

View File

@@ -36,8 +36,12 @@
# The final load of `version.cmake` will setup correct build version values.
if(NOT DEFINED VERSION_FILE AND NOT DEFINED VERSION_TYPE)
set(VERSION_FILE ${ZEPHYR_BASE}/VERSION ${APPLICATION_SOURCE_DIR}/VERSION)
set(VERSION_TYPE KERNEL APP)
set(VERSION_FILE ${ZEPHYR_BASE}/VERSION)
set(VERSION_TYPE KERNEL)
if(DEFINED APPLICATION_SOURCE_DIR)
list(APPEND VERSION_FILE ${APPLICATION_SOURCE_DIR}/VERSION)
list(APPEND VERSION_TYPE APP)
endif()
endif()
foreach(type file IN ZIP_LISTS VERSION_TYPE VERSION_FILE)

View File

@@ -57,6 +57,7 @@
.getElementById("search-se-settings-icon")
.setAttribute("aria-expanded", visible ? "true" : "false");
};
setSearchEngineSettingsMenuVisibility(false);
window.toggleSearchEngineSettingsMenu = function () {
isVisible = searchMenu.style.display === "block";

View File

@@ -46,6 +46,9 @@ Kernel
:kconfig:option:`CONFIG_HEAP_MEM_POOL_IGNORE_MIN` option has been introduced (which is
disabled by default).
* STM32H7 and STM32F7 should now activate the caches (Icache and Dcache) by explicitly setting
``CONFIG_CACHE_MANAGEMENT`` to ``y``.
Boards
******
@@ -638,7 +641,7 @@ Shell
* :kconfig:option:`CONFIG_SHELL_BACKEND_SERIAL_API` now does not automatically default to
:kconfig:option:`CONFIG_SHELL_BACKEND_SERIAL_API_ASYNC` when
:kconfig:option:`CONFIG_UART_ASYNC_API` is enabled, :kconfig:option:`CONFIG_SHELL_ASYNC_API`
also has to be enabled in order to use the asynchronous serial shell (:github: `68475`).
also has to be enabled in order to use the asynchronous serial shell (:github:`68475`).
ZBus
====

View File

@@ -48,26 +48,26 @@ project.
.. table:: Tools and Libraries for MCUmgr
:align: center
+--------------------------------------------------------------------------------+-------------------------------------------+--------------------------+--------------------------------------------------+---------------+------------+---------+
| Name | OS support | Transports | Groups | Type | Language | License |
| +---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+ | | |
| | Windows | Linux | mac | Mobile | Embedded | Serial | Bluetooth | UDP | OS | IMG | Stat | Settings | FS | Shell | Zephyr | | | |
+================================================================================+=========+=======+=====+========+==========+========+===========+=====+====+=====+======+==========+====+=======+========+===============+============+=========+
| `AuTerm <https://github.com/thedjnK/AuTerm/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Application | C++ (Qt) | GPLv3 |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+---------+
| `mcumgr-client <https://github.com/vouch-opensource/mcumgr-client/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Application | Rust | BSD |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+---------+
| `mcumgr-web <https://github.com/boogie/mcumgr-web/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Web page | Javascript | MIT |
| | | | | | | | | | | | | | | | | (chrome only) | | |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+---------+
| nRF Connect Device Manager: |br| | | | | | | | | | | | | | | | | | | |
| `Android | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Library and | Java, | Apache |
| <https://github.com/NordicSemiconductor/Android-nRF-Connect-Device-Manager/>`_ | | | | | | | | | | | | | | | | application | Kotlin, | |
| and `iOS | | | | | | | | | | | | | | | | | Swift | |
| <https://github.com/NordicSemiconductor/IOS-nRF-Connect-Device-Manager>`_ | | | | | | | | | | | | | | | | | | |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+---------+
| Zephyr MCUmgr client (in-tree) | ✕ | ✓ | ✕ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Library | C | Apache |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+---------+
+--------------------------------------------------------------------------------+-------------------------------------------+--------------------------+--------------------------------------------------+---------------+------------+------------+
| Name | OS support | Transports | Groups | Type | Language | License |
| +---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+ | | |
| | Windows | Linux | mac | Mobile | Embedded | Serial | Bluetooth | UDP | OS | IMG | Stat | Settings | FS | Shell | Zephyr | | | |
+================================================================================+=========+=======+=====+========+==========+========+===========+=====+====+=====+======+==========+====+=======+========+===============+============+============+
| `AuTerm <https://github.com/thedjnK/AuTerm/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Application | C++ (Qt) | GPL-3.0 |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+------------+
| `mcumgr-client <https://github.com/vouch-opensource/mcumgr-client/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Application | Rust | Apache-2.0 |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+------------+
| `mcumgr-web <https://github.com/boogie/mcumgr-web/>`_ | ✓ | ✓ | ✓ | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Web page | Javascript | MIT |
| | | | | | | | | | | | | | | | | (chrome only) | | |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+------------+
| nRF Connect Device Manager: |br| | | | | | | | | | | | | | | | | | | |
| `Android | ✕ | ✕ | ✕ | ✓ | ✕ | ✕ | ✓ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | Library and | Java, | Apache-2.0 |
| <https://github.com/NordicSemiconductor/Android-nRF-Connect-Device-Manager/>`_ | | | | | | | | | | | | | | | | application | Kotlin, | |
| and `iOS | | | | | | | | | | | | | | | | | Swift | |
| <https://github.com/NordicSemiconductor/IOS-nRF-Connect-Device-Manager>`_ | | | | | | | | | | | | | | | | | | |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+------------+
| Zephyr MCUmgr client (in-tree) | ✕ | ✓ | ✕ | ✕ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✕ | ✕ | ✕ | ✕ | ✕ | Library | C | Apache-2.0 |
+--------------------------------------------------------------------------------+---------+-------+-----+--------+----------+--------+-----------+-----+----+-----+------+----------+----+-------+--------+---------------+------------+------------+
.. only:: latex

View File

@@ -639,7 +639,7 @@ static int lmp90xxx_adc_read_channel(const struct device *dev,
if (buf[3] != crc) {
LOG_ERR("CRC mismatch (0x%02x vs. 0x%02x)", buf[3],
crc);
return err;
return -EIO;
}
}

View File

@@ -229,19 +229,7 @@ static void bt_ipc_rx(const uint8_t *data, size_t len)
if (buf) {
LOG_DBG("Calling bt_recv(%p)", buf);
/* The IPC service does not guarantee that the handler thread
* is cooperative. In particular, the OpenAMP implementation is
* preemtible by default. OTOH, the HCI driver interface requires
* that the bt_recv() function is called from a cooperative
* thread.
*
* Calling `k_sched lock()` has the effect of making the current
* thread cooperative.
*/
k_sched_lock();
bt_recv(buf);
k_sched_unlock();
LOG_HEXDUMP_DBG(buf->data, buf->len, "RX buf payload:");
}

View File

@@ -1,4 +1,4 @@
# Bosch m_can configuration options
# Bosch M_CAN configuration options
# Copyright (c) 2020 Alexander Wachter
# SPDX-License-Identifier: Apache-2.0
@@ -6,17 +6,4 @@
config CAN_MCAN
bool
help
Enable Bosch m_can driver.
This driver supports the Bosch m_can IP. This IP is built into the
STM32G4, STM32G0, STM32H7, and the Microchip SAM controllers with
CAN FD.
if CAN_MCAN
config CAN_DELAY_COMP
bool "Transceiver delay compensation"
default y
help
Enable the automatic transceiver delay compensation.
endif #CAN_MCAN
Enable the Bosch M_CAN CAN IP module driver backend.

View File

@@ -225,8 +225,10 @@ unlock:
#ifdef CONFIG_CAN_FD_MODE
int can_mcan_set_timing_data(const struct device *dev, const struct can_timing *timing_data)
{
const uint8_t tdco_max = FIELD_GET(CAN_MCAN_TDCR_TDCO, CAN_MCAN_TDCR_TDCO);
struct can_mcan_data *data = dev->data;
uint32_t dbtp = 0U;
uint8_t tdco;
int err;
if (data->common.started) {
@@ -240,6 +242,23 @@ int can_mcan_set_timing_data(const struct device *dev, const struct can_timing *
FIELD_PREP(CAN_MCAN_DBTP_DTSEG2, timing_data->phase_seg2 - 1UL) |
FIELD_PREP(CAN_MCAN_DBTP_DBRP, timing_data->prescaler - 1UL);
if (timing_data->prescaler == 1U || timing_data->prescaler == 2U) {
/* TDC can only be enabled if DBRP = { 0, 1 } */
dbtp |= CAN_MCAN_DBTP_TDC;
/* Set TDC offset for correct location of the Secondary Sample Point (SSP) */
tdco = CAN_CALC_TDCO(timing_data, 0U, tdco_max);
LOG_DBG("TDC enabled, using TDCO %u", tdco);
err = can_mcan_write_reg(dev, CAN_MCAN_TDCR, FIELD_PREP(CAN_MCAN_TDCR_TDCO, tdco));
if (err != 0) {
goto unlock;
}
} else {
LOG_DBG("TDC cannot be enabled, prescaler value %u too high",
timing_data->prescaler);
}
err = can_mcan_write_reg(dev, CAN_MCAN_DBTP, dbtp);
if (err != 0) {
goto unlock;
@@ -1416,32 +1435,6 @@ int can_mcan_init(const struct device *dev)
return err;
}
#if defined(CONFIG_CAN_DELAY_COMP) && defined(CONFIG_CAN_FD_MODE)
err = can_mcan_read_reg(dev, CAN_MCAN_DBTP, &reg);
if (err != 0) {
return err;
}
reg |= CAN_MCAN_DBTP_TDC;
err = can_mcan_write_reg(dev, CAN_MCAN_DBTP, reg);
if (err != 0) {
return err;
}
err = can_mcan_read_reg(dev, CAN_MCAN_TDCR, &reg);
if (err != 0) {
return err;
}
reg |= FIELD_PREP(CAN_MCAN_TDCR_TDCO, config->tx_delay_comp_offset);
err = can_mcan_write_reg(dev, CAN_MCAN_TDCR, reg);
if (err != 0) {
return err;
}
#endif /* defined(CONFIG_CAN_DELAY_COMP) && defined(CONFIG_CAN_FD_MODE) */
err = can_mcan_read_reg(dev, CAN_MCAN_GFC, &reg);
if (err != 0) {
return err;

View File

@@ -240,7 +240,7 @@ static void can_native_linux_remove_rx_filter(const struct device *dev, int filt
struct can_native_linux_data *data = dev->data;
if (filter_id < 0 || filter_id >= ARRAY_SIZE(data->filters)) {
LOG_ERR("filter ID %d out of bounds");
LOG_ERR("filter ID %d out of bounds", filter_id);
return;
}

View File

@@ -676,7 +676,7 @@ static int cmd_can_send(const struct shell *sh, size_t argc, char **argv)
const struct device *dev = device_get_binding(argv[1]);
static unsigned int frame_counter;
unsigned int frame_no;
struct can_frame frame;
struct can_frame frame = { 0 };
uint32_t max_id;
int argidx = 2;
uint32_t val;
@@ -781,7 +781,7 @@ static int cmd_can_send(const struct shell *sh, size_t argc, char **argv)
(frame.flags & CAN_FRAME_RTR) != 0 ? 1 : 0,
(frame.flags & CAN_FRAME_FDF) != 0 ? 1 : 0,
(frame.flags & CAN_FRAME_BRS) != 0 ? 1 : 0,
can_dlc_to_bytes(frame.dlc));
frame.dlc);
err = can_send(dev, &frame, K_NO_WAIT, can_shell_tx_callback, UINT_TO_POINTER(frame_no));
if (err != 0) {

View File

@@ -191,7 +191,7 @@ static int rtc_stm32_start(const struct device *dev)
/* Enable RTC bus clock */
if (clock_control_on(clk, (clock_control_subsys_t) &cfg->pclken[0]) != 0) {
LOG_ERR("clock op failed\n");
LOG_ERR("RTC clock enabling failed\n");
return -EIO;
}
#else
@@ -212,9 +212,9 @@ static int rtc_stm32_stop(const struct device *dev)
const struct device *const clk = DEVICE_DT_GET(STM32_CLOCK_CONTROL_NODE);
const struct rtc_stm32_config *cfg = dev->config;
/* Enable RTC bus clock */
if (clock_control_on(clk, (clock_control_subsys_t) &cfg->pclken[0]) != 0) {
LOG_ERR("clock op failed\n");
/* Disable RTC bus clock */
if (clock_control_off(clk, (clock_control_subsys_t) &cfg->pclken[0]) != 0) {
LOG_ERR("RTC clock disabling failed\n");
return -EIO;
}
#else

View File

@@ -28,6 +28,12 @@ LOG_MODULE_REGISTER(LOG_MODULE_NAME, LOG_LEVEL);
#define COUNTER_OVERFLOW_SHORT NRF_TIMER_SHORT_COMPARE0_CLEAR_MASK
#define COUNTER_READ_CC NRF_TIMER_CC_CHANNEL1
#if defined(CONFIG_SOC_SERIES_BSIM_NRFXX)
#define MAYBE_CONST_CONFIG
#else
#define MAYBE_CONST_CONFIG const
#endif
struct counter_nrfx_data {
counter_top_callback_t top_cb;
void *top_user_data;
@@ -283,7 +289,16 @@ static uint32_t get_pending_int(const struct device *dev)
static int init_timer(const struct device *dev,
const struct counter_timer_config *config)
{
const struct counter_nrfx_config *nrfx_config = dev->config;
MAYBE_CONST_CONFIG struct counter_nrfx_config *nrfx_config =
(MAYBE_CONST_CONFIG struct counter_nrfx_config *)dev->config;
#if defined(CONFIG_SOC_SERIES_BSIM_NRFXX)
/* For simulated devices we need to convert the hardcoded DT address from the real
* peripheral into the correct one for simulation
*/
nrfx_config->timer = nhw_convert_periph_base_addr(nrfx_config->timer);
#endif
NRF_TIMER_Type *reg = nrfx_config->timer;
nrf_timer_bit_width_set(reg, config->bit_width);
@@ -430,7 +445,7 @@ static const struct counter_driver_api counter_nrfx_driver_api = {
static struct counter_nrfx_ch_data \
counter##idx##_ch_data[CC_TO_ID(DT_INST_PROP(idx, cc_num))]; \
LOG_INSTANCE_REGISTER(LOG_MODULE_NAME, idx, CONFIG_COUNTER_LOG_LEVEL); \
static const struct counter_nrfx_config nrfx_counter_##idx##_config = { \
static MAYBE_CONST_CONFIG struct counter_nrfx_config nrfx_counter_##idx##_config = { \
.info = { \
.max_top_value = (uint32_t)BIT64_MASK(DT_INST_PROP(idx, max_bit_width)),\
.freq = TIMER_CLOCK((NRF_TIMER_Type *)DT_INST_REG_ADDR(idx)) / \
@@ -439,7 +454,7 @@ static const struct counter_driver_api counter_nrfx_driver_api = {
.channels = CC_TO_ID(DT_INST_PROP(idx, cc_num)), \
}, \
.ch_data = counter##idx##_ch_data, \
.timer = (NRF_TIMER_Type *)_CONCAT(NRF_TIMER, idx), \
.timer = (NRF_TIMER_Type *)DT_INST_REG_ADDR(idx), \
LOG_INSTANCE_PTR_INIT(log, LOG_MODULE_NAME, idx) \
}; \
DEVICE_DT_INST_DEFINE(idx, \

View File

@@ -66,7 +66,7 @@ static int dac_esp32_init(const struct device *dev)
}
if (clock_control_on(cfg->clock_dev,
(clock_control_subsys_t) &cfg->clock_subsys) != 0) {
(clock_control_subsys_t) cfg->clock_subsys) != 0) {
LOG_ERR("DAC clock setup failed (%d)", -EIO);
return -EIO;
}

View File

@@ -1092,11 +1092,11 @@ static int flash_stm32_ospi_erase(const struct device *dev, off_t addr,
? SPI_NOR_CMD_SE_4B
: SPI_NOR_CMD_SE;
}
/* Avoid using wrong erase type,
* if zero entries are found in erase_types
*/
bet = NULL;
}
/* Avoid using wrong erase type,
* if zero entries are found in erase_types
*/
bet = NULL;
}
LOG_INF("Sector/Block Erase addr 0x%x, asize 0x%x amode 0x%x instr 0x%x",
cmd_erase.Address, cmd_erase.AddressSize,

View File

@@ -207,51 +207,59 @@ static int gpio_keys_pm_action(const struct device *dev,
const struct gpio_keys_config *cfg = dev->config;
struct gpio_keys_data *data = dev->data;
struct gpio_keys_pin_data *pin_data = cfg->pin_data;
gpio_flags_t gpio_flags;
gpio_flags_t int_flags;
int ret;
switch (action) {
case PM_DEVICE_ACTION_SUSPEND:
gpio_flags = GPIO_DISCONNECTED;
int_flags = GPIO_INT_DISABLE;
atomic_set(&data->suspended, 1);
break;
for (int i = 0; i < cfg->num_keys; i++) {
const struct gpio_dt_spec *gpio = &cfg->pin_cfg[i].spec;
if (!cfg->polling_mode) {
ret = gpio_pin_interrupt_configure_dt(gpio, GPIO_INT_DISABLE);
if (ret < 0) {
LOG_ERR("interrupt configuration failed: %d", ret);
return ret;
}
}
ret = gpio_pin_configure_dt(gpio, GPIO_DISCONNECTED);
if (ret != 0) {
LOG_ERR("Pin %d configuration failed: %d", i, ret);
return ret;
}
}
return 0;
case PM_DEVICE_ACTION_RESUME:
gpio_flags = GPIO_INPUT;
int_flags = GPIO_INT_EDGE_BOTH;
atomic_set(&data->suspended, 0);
break;
for (int i = 0; i < cfg->num_keys; i++) {
const struct gpio_dt_spec *gpio = &cfg->pin_cfg[i].spec;
ret = gpio_pin_configure_dt(gpio, GPIO_INPUT);
if (ret != 0) {
LOG_ERR("Pin %d configuration failed: %d", i, ret);
return ret;
}
if (cfg->polling_mode) {
k_work_reschedule(&pin_data[0].work,
K_MSEC(cfg->debounce_interval_ms));
} else {
ret = gpio_pin_interrupt_configure_dt(gpio, GPIO_INT_EDGE_BOTH);
if (ret < 0) {
LOG_ERR("interrupt configuration failed: %d", ret);
return ret;
}
}
}
return 0;
default:
return -ENOTSUP;
}
for (int i = 0; i < cfg->num_keys; i++) {
const struct gpio_dt_spec *gpio = &cfg->pin_cfg[i].spec;
ret = gpio_pin_configure_dt(gpio, gpio_flags);
if (ret != 0) {
LOG_ERR("Pin %d configuration failed: %d", i, ret);
return ret;
}
if (cfg->polling_mode) {
continue;
}
ret = gpio_pin_interrupt_configure_dt(gpio, int_flags);
if (ret < 0) {
LOG_ERR("interrupt configuration failed: %d", ret);
return ret;
}
}
if (action == PM_DEVICE_ACTION_RESUME && cfg->polling_mode) {
k_work_reschedule(&pin_data[0].work,
K_MSEC(cfg->debounce_interval_ms));
}
return 0;
}
#endif

View File

@@ -25,7 +25,7 @@ struct mipi_dbi_spi_config {
struct mipi_dbi_spi_data {
/* Used for 3 wire mode */
uint16_t spi_byte;
struct k_spinlock lock;
struct k_mutex lock;
};
/* Expands to 1 if the node does not have the `write-only` property */
@@ -58,7 +58,11 @@ static int mipi_dbi_spi_write_helper(const struct device *dev,
.count = 1,
};
int ret = 0;
k_spinlock_key_t spinlock_key = k_spin_lock(&data->lock);
ret = k_mutex_lock(&data->lock, K_FOREVER);
if (ret < 0) {
return ret;
}
if (dbi_config->mode == MIPI_DBI_MODE_SPI_3WIRE &&
IS_ENABLED(CONFIG_MIPI_DBI_SPI_3WIRE)) {
@@ -124,7 +128,7 @@ static int mipi_dbi_spi_write_helper(const struct device *dev,
ret = -ENOTSUP;
}
out:
k_spin_unlock(&data->lock, spinlock_key);
k_mutex_unlock(&data->lock);
return ret;
}
@@ -164,9 +168,12 @@ static int mipi_dbi_spi_command_read(const struct device *dev,
.count = 1,
};
int ret = 0;
k_spinlock_key_t spinlock_key = k_spin_lock(&data->lock);
struct spi_config tmp_config;
ret = k_mutex_lock(&data->lock, K_FOREVER);
if (ret < 0) {
return ret;
}
memcpy(&tmp_config, &dbi_config->config, sizeof(tmp_config));
if (dbi_config->mode == MIPI_DBI_MODE_SPI_3WIRE &&
IS_ENABLED(CONFIG_MIPI_DBI_SPI_3WIRE)) {
@@ -231,7 +238,7 @@ static int mipi_dbi_spi_command_read(const struct device *dev,
}
out:
spi_release(config->spi_dev, &tmp_config);
k_spin_unlock(&data->lock, spinlock_key);
k_mutex_unlock(&data->lock);
return ret;
}
@@ -262,6 +269,7 @@ static int mipi_dbi_spi_reset(const struct device *dev, uint32_t delay)
static int mipi_dbi_spi_init(const struct device *dev)
{
const struct mipi_dbi_spi_config *config = dev->config;
struct mipi_dbi_spi_data *data = dev->data;
int ret;
if (!device_is_ready(config->spi_dev)) {
@@ -291,6 +299,8 @@ static int mipi_dbi_spi_init(const struct device *dev)
}
}
k_mutex_init(&data->lock);
return 0;
}

View File

@@ -19,6 +19,7 @@ LOG_MODULE_REGISTER(ams_as5600, CONFIG_SENSOR_LOG_LEVEL);
#define AS5600_ANGLE_REGISTER_H 0x0E
#define AS5600_FULL_ANGLE 360
#define AS5600_PULSES_PER_REV 4096
#define AS5600_MILLION_UNIT 1000000
struct as5600_dev_cfg {
struct i2c_dt_spec i2c_port;
@@ -60,8 +61,8 @@ static int as5600_get(const struct device *dev, enum sensor_channel chan,
val->val1 = ((int32_t)dev_data->position * AS5600_FULL_ANGLE) /
AS5600_PULSES_PER_REV;
val->val2 = ((int32_t)dev_data->position * AS5600_FULL_ANGLE) -
(val->val1 * AS5600_PULSES_PER_REV);
val->val2 = (((int32_t)dev_data->position * AS5600_FULL_ANGLE) %
AS5600_PULSES_PER_REV) * (AS5600_MILLION_UNIT / AS5600_PULSES_PER_REV);
} else {
return -ENOTSUP;
}

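A quick worked example of the corrected fractional-part arithmetic in the hunk above, using the same constants; the standalone program below is only illustrative. Note that MILLION_UNIT / PULSES_PER_REV truncates to 244 in integer math, so the printed fraction is a close approximation of 3 * 360 / 4096 ≈ 0.2637 degrees, with val2 carrying millionths as in the sensor_value convention.

#include <stdint.h>
#include <stdio.h>

#define FULL_ANGLE      360
#define PULSES_PER_REV  4096
#define MILLION_UNIT    1000000

int main(void)
{
	/* Example raw position of 3 counts out of 4096 (~0.2637 degrees). */
	int32_t position = 3;

	int32_t val1 = (position * FULL_ANGLE) / PULSES_PER_REV;
	int32_t val2 = ((position * FULL_ANGLE) % PULSES_PER_REV) *
		       (MILLION_UNIT / PULSES_PER_REV);

	/* Prints "0.263520 deg": the integer part in val1, the fractional
	 * part in millionths in val2.
	 */
	printf("%d.%06d deg\n", (int)val1, (int)val2);
	return 0;
}
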
View File

@@ -7,7 +7,8 @@ config TEMP_KINETIS
bool "NXP Kinetis Temperature Sensor"
default y
depends on DT_HAS_NXP_KINETIS_TEMPERATURE_ENABLED
depends on (ADC && SOC_FAMILY_KINETIS)
depends on SOC_FAMILY_KINETIS
select ADC
help
Enable driver for NXP Kinetis temperature sensor.

View File

@@ -159,7 +159,7 @@ static int temp_kinetis_init(const struct device *dev)
},
};
memset(&data->buffer, 0, ARRAY_SIZE(data->buffer));
memset(&data->buffer, 0, sizeof(data->buffer));
if (!device_is_ready(config->adc)) {
LOG_ERR("ADC device is not ready");

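The memset() change above swaps an element count for a byte count; the standalone sketch below shows why that matters for any buffer whose elements are wider than one byte. The uint16_t buffer is a hypothetical stand-in, since the driver's actual element type is not visible in this hunk.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
	uint16_t buffer[4] = { 0xffff, 0xffff, 0xffff, 0xffff };

	/* ARRAY_SIZE() is an element count (4), but memset() expects a byte
	 * count, so only the first half of the buffer is cleared here.
	 */
	memset(buffer, 0, ARRAY_SIZE(buffer));
	printf("after ARRAY_SIZE: %04x %04x %04x %04x\n",
	       (unsigned)buffer[0], (unsigned)buffer[1],
	       (unsigned)buffer[2], (unsigned)buffer[3]);

	/* sizeof() gives the size in bytes (8) and clears everything. */
	memset(buffer, 0, sizeof(buffer));
	printf("after sizeof:     %04x %04x %04x %04x\n",
	       (unsigned)buffer[0], (unsigned)buffer[1],
	       (unsigned)buffer[2], (unsigned)buffer[3]);

	return 0;
}
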
View File

@@ -61,8 +61,8 @@
reg = <0x5000d000 0x400>;
num-lines = <16>;
interrupts = <6 0>, <7 0>, <8 0>, <9 0>,
<10 0>, <23 0>, <40 0>, <42 0>,
<64 0>, <65 0>, <66 0>, <67 0>,
<10 0>, <23 0>, <64 0>, <65 0>,
<66 0>, <67 0>, <40 0>, <42 0>,
<76 0>, <77 0>, <121 0>, <127 0>;
interrupt-names = "line0", "line1", "line2", "line3",
"line4", "line5", "line6", "line7",

View File

@@ -33,7 +33,8 @@
device_type = "cpu";
compatible = "arm,cortex-m33";
reg = <0>;
cpu-power-states = <&stop0 &stop1 &standby>;
/* Do not add &standby here since CONFIG_PM_S2RAM is disabled by default */
cpu-power-states = <&stop0 &stop1>;
#address-cells = <1>;
#size-cells = <1>;

View File

@@ -47,6 +47,3 @@ properties:
description: |
Initial time quanta of phase buffer 2 segment for the data phase (ISO11898-1:2015). Deprecated
in favor of setting advanced timing parameters from the application.
tx-delay-comp-offset:
type: int
default: 0

View File

@@ -275,7 +275,7 @@
0x00f01639 1 /* GPOTR */
0x00f01651 1 /* P18SCR */
0x00f016a8 8>; /* GPCR */
ngpios = <6>;
ngpios = <8>;
gpio-controller;
interrupts = <IT8XXX2_IRQ_WU128 IRQ_TYPE_LEVEL_HIGH
IT8XXX2_IRQ_WU129 IRQ_TYPE_LEVEL_HIGH
@@ -283,14 +283,14 @@
IT8XXX2_IRQ_WU131 IRQ_TYPE_LEVEL_HIGH
IT8XXX2_IRQ_WU132 IRQ_TYPE_LEVEL_HIGH
IT8XXX2_IRQ_WU133 IRQ_TYPE_LEVEL_HIGH
NO_FUNC 0
NO_FUNC 0>;
IT8XXX2_IRQ_WU134 IRQ_TYPE_LEVEL_HIGH
IT8XXX2_IRQ_WU135 IRQ_TYPE_LEVEL_HIGH>;
interrupt-parent = <&intc>;
wuc-base = <0xf01b34 0xf01b34 0xf01b34 0xf01b34
0xf01b34 0xf01b34 NO_FUNC NO_FUNC >;
0xf01b34 0xf01b34 0xf01b34 0xf01b34 >;
wuc-mask = <BIT(0) BIT(1) BIT(2) BIT(3)
BIT(4) BIT(5) 0 0 >;
has-volt-sel = <1 1 1 1 1 1 0 0>;
BIT(4) BIT(5) BIT(6) BIT(7)>;
has-volt-sel = <1 1 1 1 1 1 1 1>;
#gpio-cells = <2>;
};
@@ -996,4 +996,3 @@
};
};
};

View File

@@ -2549,12 +2549,13 @@
#define DT_IRQN_L1_INTERNAL(node_id, idx) DT_IRQ_BY_IDX(node_id, idx, irq)
/* DT helper macro to encode a node's IRQN to level 2 according to the multi-level scheme */
#define DT_IRQN_L2_INTERNAL(node_id, idx) \
(IRQ_TO_L2(DT_IRQN_L1_INTERNAL(node_id, idx)) | DT_IRQ(DT_IRQ_INTC(node_id), irq))
(IRQ_TO_L2(DT_IRQN_L1_INTERNAL(node_id, idx)) | \
DT_IRQ(DT_IRQ_INTC_BY_IDX(node_id, idx), irq))
/* DT helper macro to encode a node's IRQN to level 3 according to the multi-level scheme */
#define DT_IRQN_L3_INTERNAL(node_id, idx) \
(IRQ_TO_L3(DT_IRQN_L1_INTERNAL(node_id, idx)) | \
IRQ_TO_L2(DT_IRQ(DT_IRQ_INTC(node_id), irq)) | \
DT_IRQ(DT_IRQ_INTC(DT_IRQ_INTC(node_id)), irq))
IRQ_TO_L2(DT_IRQ(DT_IRQ_INTC_BY_IDX(node_id, idx), irq)) | \
DT_IRQ(DT_IRQ_INTC(DT_IRQ_INTC_BY_IDX(node_id, idx)), irq))
/* DT helper macro for the macros above */
#define DT_IRQN_LVL_INTERNAL(node_id, idx, level) DT_CAT3(DT_IRQN_L, level, _INTERNAL)(node_id, idx)

View File

@@ -89,8 +89,6 @@ static inline uint8_t bt_hci_evt_get_flags(uint8_t evt)
* for so-called high priority HCI events, which should instead be delivered to
* the host stack through bt_recv_prio().
*
* @note This function must only be called from a cooperative thread.
*
* @param buf Network buffer containing data from the controller.
*
* @return 0 on success or negative error number on failure.

View File

@@ -317,6 +317,23 @@ typedef void (*can_state_change_callback_t)(const struct device *dev,
* For internal driver use only, skip these in public documentation.
*/
/**
* @brief Calculate Transmitter Delay Compensation Offset from data phase timing parameters.
*
* Calculates the TDC Offset in minimum time quanta (mtq) using the sample point and CAN core clock
* prescaler specified by a set of data phase timing parameters.
*
* The result is clamped to the minimum/maximum supported TDC Offset values provided.
*
* @param _timing_data Pointer to data phase timing parameters.
* @param _tdco_min Minimum supported TDC Offset value in mtq.
* @param _tdco_max Maximum supported TDC Offset value in mtq.
* @return Calculated TDC Offset value in mtq.
*/
#define CAN_CALC_TDCO(_timing_data, _tdco_min, _tdco_max) \
CLAMP((1U + _timing_data->prop_seg + _timing_data->phase_seg1) * _timing_data->prescaler, \
_tdco_min, _tdco_max)
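As a quick numeric check of the formula above: with hypothetical data-phase timing of prop_seg = 0, phase_seg1 = 15 and prescaler = 2, the unclamped offset is (1 + 0 + 15) * 2 = 32 mtq, which controller limits of, say, 0..127 mtq leave unchanged. A minimal standalone sketch in plain C (the timing struct and clamp helper are local stand-ins, not the Zephyr types):
/* Hypothetical, self-contained illustration of the TDCO calculation above. */
#include <stdint.h>
#include <stdio.h>
struct timing_data {            /* stand-in for the data phase timing parameters */
	uint16_t prop_seg;
	uint16_t phase_seg1;
	uint16_t prescaler;
};
static uint16_t clamp_u16(uint16_t v, uint16_t lo, uint16_t hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}
int main(void)
{
	const struct timing_data td = { .prop_seg = 0, .phase_seg1 = 15, .prescaler = 2 };
	/* (1 + prop_seg + phase_seg1) * prescaler, clamped to the controller limits */
	uint16_t tdco = clamp_u16((1U + td.prop_seg + td.phase_seg1) * td.prescaler, 0U, 127U);
	printf("TDCO = %u mtq\n", tdco); /* prints 32 */
	return 0;
}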
/**
* @brief Common CAN controller driver configuration.
*

View File

@@ -1240,12 +1240,9 @@ struct can_mcan_config {
uint16_t sjw;
uint16_t prop_ts1;
uint16_t ts2;
#ifdef CONFIG_CAN_FD_MODE
uint8_t sjw_data;
uint8_t prop_ts1_data;
uint8_t ts2_data;
uint8_t tx_delay_comp_offset;
#endif
const void *custom;
};
@@ -1313,7 +1310,6 @@ struct can_mcan_config {
.prop_ts1_data = DT_PROP_OR(node_id, prop_seg_data, 0) + \
DT_PROP_OR(node_id, phase_seg1_data, 0), \
.ts2_data = DT_PROP_OR(node_id, phase_seg2_data, 0), \
.tx_delay_comp_offset = DT_PROP(node_id, tx_delay_comp_offset), \
.custom = _custom, \
}
#else /* CONFIG_CAN_FD_MODE */

View File

@@ -141,7 +141,8 @@ int stream_flash_erase_page(struct stream_flash_ctx *ctx, off_t off);
* @param settings_key key to use with the settings module for loading
* the stream write progress
*
* @return non-negative on success, negative errno code on fail
* @return non-negative on success, -ERANGE when @p off is outside the area
* designated for the stream, or a negative errno code on failure
*/
int stream_flash_progress_load(struct stream_flash_ctx *ctx,
const char *settings_key);
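A minimal caller-side sketch of the updated return contract, assuming the -ERANGE case covers offsets outside the area designated for the stream (the hunk context names stream_flash_erase_page, so that call is used here for illustration):
#include <zephyr/storage/stream_flash.h>
#include <zephyr/sys/printk.h>
#include <errno.h>
/* Sketch only: ctx is assumed to be an already-initialized stream_flash_ctx. */
static int erase_page_checked(struct stream_flash_ctx *ctx, off_t off)
{
	int rc = stream_flash_erase_page(ctx, off);
	if (rc == -ERANGE) {
		/* Requested offset lies outside the stream's designated area */
		printk("erase offset out of range\n");
	} else if (rc < 0) {
		printk("erase failed: %d\n", rc);
	}
	return rc;
}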

View File

@@ -166,6 +166,15 @@ int z_impl_k_thread_stack_free(k_thread_stack_t *stack)
#ifdef CONFIG_USERSPACE
static inline int z_vrfy_k_thread_stack_free(k_thread_stack_t *stack)
{
/* The thread stack object must not be in an initialized state.
*
* Thread stack objects are initialized when the thread is created
* and de-initialized when the thread is destroyed. Since we can't
* free a stack that is in use, we have to check that the caller
* has access to the object but that it is not in use anymore.
*/
K_OOPS(K_SYSCALL_OBJ_NEVER_INIT(stack, K_OBJ_THREAD_STACK_ELEMENT));
return z_impl_k_thread_stack_free(stack);
}
#include <syscalls/k_thread_stack_free_mrsh.c>

View File

@@ -1653,13 +1653,18 @@ void z_impl_k_wakeup(k_tid_t thread)
}
}
k_spinlock_key_t key = k_spin_lock(&sched_spinlock);
z_mark_thread_as_not_suspended(thread);
z_ready_thread(thread);
flag_ipi();
if (!thread_active_elsewhere(thread)) {
ready_thread(thread);
}
if (!arch_is_in_isr()) {
z_reschedule_unlocked();
if (arch_is_in_isr()) {
k_spin_unlock(&sched_spinlock, key);
} else {
z_reschedule(&sched_spinlock, key);
}
}

View File

@@ -247,7 +247,9 @@ static int eventfd_ioctl_op(void *obj, unsigned int request, va_list args)
errno = EINVAL;
ret = -1;
} else {
efd->flags = flags;
int prev_flags = efd->flags & ~EFD_FLAGS_SET_INTERNAL;
efd->flags = flags | prev_flags;
ret = 0;
}
} break;

View File

@@ -16,7 +16,14 @@
char *utf8_trunc(char *utf8_str)
{
char *last_byte_p = utf8_str + strlen(utf8_str) - 1;
const size_t len = strlen(utf8_str);
if (len == 0U) {
/* no-op */
return utf8_str;
}
char *last_byte_p = utf8_str + len - 1U;
uint8_t bytes_truncated;
char seq_start_byte;
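For reference, a small hedged usage example of the function being fixed here (behavior inferred from the snippet above: utf8_trunc() drops a trailing, incomplete UTF-8 sequence, and the new guard makes an empty string a no-op):
#include <stdio.h>
#include <string.h>
/* Assumed declaration, as provided by Zephyr's utility headers. */
extern char *utf8_trunc(char *utf8_str);
int main(void)
{
	char truncated[] = { 'o', 'k', (char)0xC3, '\0' }; /* 0xC3 starts a 2-byte sequence */
	char empty[] = "";
	utf8_trunc(truncated); /* expected to drop the dangling 0xC3 -> "ok" */
	utf8_trunc(empty);     /* with the added length guard: returned as-is */
	printf("'%s' (len %zu)\n", truncated, strlen(truncated));
	return 0;
}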

View File

@@ -7,6 +7,10 @@
/ {
/* Change min residency time to ease power consumption measurement */
cpus {
cpu0: cpu@0 {
cpu-power-states = <&stop0 &stop1 &standby>;
};
power-states {
stop0: state0 {
min-residency-us = <500000>;

View File

@@ -30,6 +30,8 @@
#define SPI_FLASH_MULTI_SECTOR_TEST
#endif
const uint8_t erased[] = { 0xff, 0xff, 0xff, 0xff };
void single_sector_test(const struct device *flash_dev)
{
const uint8_t expected[] = { 0x55, 0xaa, 0x66, 0x99 };
@@ -53,9 +55,20 @@ void single_sector_test(const struct device *flash_dev)
if (rc != 0) {
printf("Flash erase failed! %d\n", rc);
} else {
/* Check erased pattern */
memset(buf, 0, len);
rc = flash_read(flash_dev, SPI_FLASH_TEST_REGION_OFFSET, buf, len);
if (rc != 0) {
printf("Flash read failed! %d\n", rc);
return;
}
if (memcmp(erased, buf, len) != 0) {
printf("Flash erase failed at offset 0x%x got 0x%x\n",
SPI_FLASH_TEST_REGION_OFFSET, *(uint32_t *)buf);
return;
}
printf("Flash erase succeeded!\n");
}
printf("\nTest 2: Flash write\n");
printf("Attempting to write %zu bytes\n", len);
@@ -98,7 +111,7 @@ void multi_sector_test(const struct device *flash_dev)
uint8_t buf[sizeof(expected)];
int rc;
printf("\nPerform test on multiple consequtive sectors");
printf("\nPerform test on multiple consecutive sectors");
/* Write protection needs to be disabled before each write or
* erase, since the flash component turns on write protection
@@ -125,9 +138,9 @@ void multi_sector_test(const struct device *flash_dev)
printf("Flash read failed! %d\n", rc);
return;
}
if (buf[0] != 0xff) {
if (memcmp(erased, buf, len) != 0) {
printf("Flash erase failed at offset 0x%x got 0x%x\n",
offs, buf[0]);
offs, *(uint32_t *)buf);
return;
}
offs += SPI_FLASH_SECTOR_SIZE;

View File

@@ -23,6 +23,7 @@
.github/workflows/stale-workflow-queue-cleanup.yml
.github/workflows/greet_first_time_contributor.yml
.github/workflows/issues-report-config.json
.github/workflows/backport_issue_check.yml
CODEOWNERS
MAINTAINERS.yml
LICENSE

View File

@@ -56,7 +56,7 @@ class Snippet:
path = pathobj.parent / value
if not path.is_file():
_err(f'snippet file {pathobj}: {variable}: file not found: {path}')
return f'"{path}"'
return f'"{path.as_posix()}"'
if variable in ('DTS_EXTRA_CPPFLAGS'):
return f'"{value}"'
_err(f'unknown append variable: {variable}')

View File

@@ -122,6 +122,7 @@ void z_arm_platform_init(void)
* sys_cache*-functions can enable them, if requested by the
* configuration.
*/
SCB_InvalidateDCache();
SCB_DisableDCache();
/*

View File

@@ -119,6 +119,7 @@ void z_arm_platform_init(void)
* sys_cache*-functions can enable them, if requested by the
* configuration.
*/
SCB_InvalidateDCache();
SCB_DisableDCache();
/*

View File

@@ -176,6 +176,13 @@ void power_gate_entry(uint32_t core_id)
soc_cpus_active[core_id] = false;
sys_cache_data_flush_range(soc_cpus_active, sizeof(soc_cpus_active));
k_cpu_idle();
/* It is unlikely that we get here, but if we do, interrupts need
* to be locked again.
*
* @note Zephyr checks PS.INTLEVEL to determine whether interrupts are locked.
*/
(void)arch_irq_lock();
z_xt_ints_off(0xffffffff);
}

View File

@@ -42,6 +42,8 @@ LOG_MODULE_REGISTER(bt_ascs, CONFIG_BT_ASCS_LOG_LEVEL);
(CONFIG_BT_ASCS_ASE_SNK_COUNT + \
CONFIG_BT_ASCS_ASE_SRC_COUNT)
#define NTF_HEADER_SIZE (3) /* opcode (1) + handle (2) */
BUILD_ASSERT(CONFIG_BT_ASCS_MAX_ACTIVE_ASES <= MAX(MAX_ASES_SESSIONS,
CONFIG_BT_ISO_MAX_CHAN),
"Max active ASEs are set to more than actual number of ASEs or ISOs");
@@ -84,8 +86,9 @@ static struct bt_ascs_ase {
* writing
*/
BUILD_ASSERT(
BT_ATT_BUF_SIZE - 3 >= ASE_BUF_SIZE ||
DIV_ROUND_UP(ASE_BUF_SIZE, (BT_ATT_BUF_SIZE - 3)) <= CONFIG_BT_ATT_PREPARE_COUNT,
(BT_ATT_BUF_SIZE - NTF_HEADER_SIZE) >= ASE_BUF_SIZE ||
DIV_ROUND_UP(ASE_BUF_SIZE, (BT_ATT_BUF_SIZE - NTF_HEADER_SIZE)) <=
CONFIG_BT_ATT_PREPARE_COUNT,
"CONFIG_BT_ATT_PREPARE_COUNT not large enough to cover the maximum supported ASCS value");
/* It is mandatory to support long writes in ASCS unconditionally, and thus
@@ -174,9 +177,19 @@ static void ase_free(struct bt_ascs_ase *ase)
(void)k_work_cancel_delayable(&ase->state_transition_work);
}
static uint16_t get_max_ntf_size(struct bt_conn *conn)
{
const uint16_t mtu = conn == NULL ? 0 : bt_gatt_get_mtu(conn);
if (mtu > NTF_HEADER_SIZE) {
return mtu - NTF_HEADER_SIZE;
}
return 0U;
}
static int ase_state_notify(struct bt_ascs_ase *ase)
{
const uint8_t att_ntf_header_size = 3; /* opcode (1) + handle (2) */
struct bt_conn *conn = ase->conn;
struct bt_conn_info conn_info;
uint16_t max_ntf_size;
@@ -202,7 +215,7 @@ static int ase_state_notify(struct bt_ascs_ase *ase)
ascs_ep_get_status(&ase->ep, &ase_buf);
max_ntf_size = bt_gatt_get_mtu(conn) - att_ntf_header_size;
max_ntf_size = get_max_ntf_size(conn);
ntf_size = MIN(max_ntf_size, ase_buf.len);
if (ntf_size < ase_buf.len) {
@@ -1079,15 +1092,23 @@ static void ascs_ase_cfg_changed(const struct bt_gatt_attr *attr,
LOG_DBG("attr %p value 0x%04x", attr, value);
}
NET_BUF_SIMPLE_DEFINE_STATIC(rsp_buf, CONFIG_BT_L2CAP_TX_MTU);
#define CP_RSP_BUF_SIZE \
(sizeof(struct bt_ascs_cp_rsp) + (ASE_COUNT * sizeof(struct bt_ascs_cp_ase_rsp)))
/* Ensure that the cp_rsp_buf can fit in any notification
* (sizeof buffer - header for notification)
*/
BUILD_ASSERT(BT_ATT_BUF_SIZE - NTF_HEADER_SIZE >= CP_RSP_BUF_SIZE,
"BT_ATT_BUF_SIZE not large enough to hold responses for all ASEs");
NET_BUF_SIMPLE_DEFINE_STATIC(cp_rsp_buf, CP_RSP_BUF_SIZE);
static void ascs_cp_rsp_init(uint8_t op)
{
struct bt_ascs_cp_rsp *rsp;
net_buf_simple_reset(&rsp_buf);
net_buf_simple_reset(&cp_rsp_buf);
rsp = net_buf_simple_add(&rsp_buf, sizeof(*rsp));
rsp = net_buf_simple_add(&cp_rsp_buf, sizeof(*rsp));
rsp->op = op;
rsp->num_ase = 0;
}
@@ -1095,13 +1116,13 @@ static void ascs_cp_rsp_init(uint8_t op)
/* Add response to an opcode/ASE ID */
static void ascs_cp_rsp_add(uint8_t id, uint8_t code, uint8_t reason)
{
struct bt_ascs_cp_rsp *rsp = (void *)rsp_buf.__buf;
struct bt_ascs_cp_rsp *rsp = (void *)cp_rsp_buf.__buf;
struct bt_ascs_cp_ase_rsp *ase_rsp;
LOG_DBG("id 0x%02x code %s (0x%02x) reason %s (0x%02x)", id,
bt_ascs_rsp_str(code), code, bt_ascs_reason_str(reason), reason);
if (rsp->num_ase == 0xff) {
if (rsp->num_ase == BT_ASCS_UNSUPP_OR_LENGTH_ERR_NUM_ASE) {
return;
}
@@ -1118,7 +1139,7 @@ static void ascs_cp_rsp_add(uint8_t id, uint8_t code, uint8_t reason)
break;
}
ase_rsp = net_buf_simple_add(&rsp_buf, sizeof(*ase_rsp));
ase_rsp = net_buf_simple_add(&cp_rsp_buf, sizeof(*ase_rsp));
ase_rsp->id = id;
ase_rsp->code = code;
ase_rsp->reason = reason;
@@ -1727,7 +1748,42 @@ int bt_ascs_config_ase(struct bt_conn *conn, struct bt_bap_stream *stream,
return 0;
}
static bool is_valid_config_len(struct net_buf_simple *buf)
static uint16_t get_max_ase_rsp_for_conn(struct bt_conn *conn)
{
const uint16_t max_ntf_size = get_max_ntf_size(conn);
const size_t rsp_hdr_size = sizeof(struct bt_ascs_cp_rsp);
if (max_ntf_size > rsp_hdr_size) {
return (max_ntf_size - rsp_hdr_size) / sizeof(struct bt_ascs_cp_ase_rsp);
}
return 0U;
}
static bool is_valid_num_ases(struct bt_conn *conn, uint8_t num_ases)
{
const uint16_t max_ase_rsp = get_max_ase_rsp_for_conn(conn);
if (num_ases < 1U) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
return false;
} else if (num_ases > ASE_COUNT) {
/* If the request is for more ASEs than we have, we just reject the request */
LOG_DBG("Number_of_ASEs parameter value (%u) is greater than %d", num_ases,
ASE_COUNT);
return false;
} else if (num_ases > max_ase_rsp) {
/* If the request is for more ASEs than we can respond to, we reject the request */
LOG_DBG("Number_of_ASEs parameter value (%u) is greater than what we can respond "
"to (%u) based on the MTU",
num_ases, max_ase_rsp);
return false;
}
return true;
}
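A rough worked example of the capacity computed by get_max_ase_rsp_for_conn() above, with hedged sizes (assuming the control point response header is 2 octets, opcode plus Number_of_ASEs, and each ASE entry is 3 octets, ASE_ID plus Response_Code plus Reason): at the default ATT MTU of 23, get_max_ntf_size() yields 23 - 3 = 20, so (20 - 2) / 3 = 6 ASEs fit in one notification.
#include <stdio.h>
int main(void)
{
	const unsigned int att_mtu = 23U;      /* default ATT MTU */
	const unsigned int ntf_header = 3U;    /* opcode (1) + handle (2), per NTF_HEADER_SIZE */
	const unsigned int rsp_header = 2U;    /* assumed: opcode (1) + Number_of_ASEs (1) */
	const unsigned int ase_rsp_size = 3U;  /* assumed: ASE_ID + Response_Code + Reason */
	unsigned int max_ntf = att_mtu - ntf_header;                       /* 20 */
	unsigned int max_ase_rsp = (max_ntf - rsp_header) / ase_rsp_size;  /* 6 */
	printf("max ASE responses per notification: %u\n", max_ase_rsp);
	return 0;
}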
static bool is_valid_config_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_config_op *op;
struct net_buf_simple_state state;
@@ -1740,8 +1796,7 @@ static bool is_valid_config_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -1777,7 +1832,7 @@ static ssize_t ascs_config(struct bt_conn *conn, struct net_buf_simple *buf)
const struct bt_ascs_config_op *req;
const struct bt_ascs_config *cfg;
if (!is_valid_config_len(buf)) {
if (!is_valid_config_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -1945,7 +2000,7 @@ static void ase_qos(struct bt_ascs_ase *ase, uint8_t cig_id, uint8_t cis_id,
*rsp = BT_BAP_ASCS_RSP(BT_BAP_ASCS_RSP_CODE_SUCCESS, BT_BAP_ASCS_REASON_NONE);
}
static bool is_valid_qos_len(struct net_buf_simple *buf)
static bool is_valid_qos_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_qos_op *op;
struct net_buf_simple_state state;
@@ -1959,8 +2014,7 @@ static bool is_valid_qos_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -1986,7 +2040,7 @@ static ssize_t ascs_qos(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_qos_op *req;
if (!is_valid_qos_len(buf)) {
if (!is_valid_qos_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2327,7 +2381,7 @@ static int ase_enable(struct bt_ascs_ase *ase, struct bt_ascs_metadata *meta)
return 0;
}
static bool is_valid_enable_len(struct net_buf_simple *buf)
static bool is_valid_enable_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_enable_op *op;
struct net_buf_simple_state state;
@@ -2340,8 +2394,7 @@ static bool is_valid_enable_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -2378,7 +2431,7 @@ static ssize_t ascs_enable(struct bt_conn *conn, struct net_buf_simple *buf)
struct bt_ascs_metadata *meta;
int i;
if (!is_valid_enable_len(buf)) {
if (!is_valid_enable_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2472,7 +2525,7 @@ static void ase_start(struct bt_ascs_ase *ase)
ascs_cp_rsp_success(ASE_ID(ase));
}
static bool is_valid_start_len(struct net_buf_simple *buf)
static bool is_valid_start_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_start_op *op;
struct net_buf_simple_state state;
@@ -2485,8 +2538,7 @@ static bool is_valid_start_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -2505,7 +2557,7 @@ static ssize_t ascs_start(struct bt_conn *conn, struct net_buf_simple *buf)
const struct bt_ascs_start_op *req;
int i;
if (!is_valid_start_len(buf)) {
if (!is_valid_start_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2555,7 +2607,7 @@ static ssize_t ascs_start(struct bt_conn *conn, struct net_buf_simple *buf)
return buf->size;
}
static bool is_valid_disable_len(struct net_buf_simple *buf)
static bool is_valid_disable_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_disable_op *op;
struct net_buf_simple_state state;
@@ -2568,8 +2620,7 @@ static bool is_valid_disable_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -2587,7 +2638,7 @@ static ssize_t ascs_disable(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_disable_op *req;
if (!is_valid_disable_len(buf)) {
if (!is_valid_disable_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2685,7 +2736,7 @@ static void ase_stop(struct bt_ascs_ase *ase)
ascs_cp_rsp_success(ASE_ID(ase));
}
static bool is_valid_stop_len(struct net_buf_simple *buf)
static bool is_valid_stop_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_stop_op *op;
struct net_buf_simple_state state;
@@ -2698,7 +2749,7 @@ static bool is_valid_stop_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
if (op->num_ases < 1U) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
return false;
}
@@ -2718,7 +2769,7 @@ static ssize_t ascs_stop(struct bt_conn *conn, struct net_buf_simple *buf)
const struct bt_ascs_start_op *req;
int i;
if (!is_valid_stop_len(buf)) {
if (!is_valid_stop_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2768,7 +2819,7 @@ static ssize_t ascs_stop(struct bt_conn *conn, struct net_buf_simple *buf)
return buf->size;
}
static bool is_valid_metadata_len(struct net_buf_simple *buf)
static bool is_valid_metadata_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_metadata_op *op;
struct net_buf_simple_state state;
@@ -2781,8 +2832,7 @@ static bool is_valid_metadata_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -2819,7 +2869,7 @@ static ssize_t ascs_metadata(struct bt_conn *conn, struct net_buf_simple *buf)
struct bt_ascs_metadata *meta;
int i;
if (!is_valid_metadata_len(buf)) {
if (!is_valid_metadata_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -2862,7 +2912,7 @@ static ssize_t ascs_metadata(struct bt_conn *conn, struct net_buf_simple *buf)
return buf->size;
}
static bool is_valid_release_len(struct net_buf_simple *buf)
static bool is_valid_release_len(struct bt_conn *conn, struct net_buf_simple *buf)
{
const struct bt_ascs_release_op *op;
struct net_buf_simple_state state;
@@ -2875,8 +2925,7 @@ static bool is_valid_release_len(struct net_buf_simple *buf)
}
op = net_buf_simple_pull_mem(buf, sizeof(*op));
if (op->num_ases < 1) {
LOG_WRN("Number_of_ASEs parameter value is less than 1");
if (!is_valid_num_ases(conn, op->num_ases)) {
return false;
}
@@ -2895,7 +2944,7 @@ static ssize_t ascs_release(struct bt_conn *conn, struct net_buf_simple *buf)
const struct bt_ascs_release_op *req;
int i;
if (!is_valid_release_len(buf)) {
if (!is_valid_release_len(conn, buf)) {
return BT_GATT_ERR(BT_ATT_ERR_INVALID_ATTRIBUTE_LEN);
}
@@ -3003,7 +3052,7 @@ static ssize_t ascs_cp_write(struct bt_conn *conn,
}
respond:
control_point_notify(conn, rsp_buf.data, rsp_buf.len);
control_point_notify(conn, cp_rsp_buf.data, cp_rsp_buf.len);
return len;
}

View File

@@ -23,6 +23,7 @@
#include <zephyr/sys/check.h>
#include <zephyr/logging/log.h>
#include <sys/errno.h>
LOG_MODULE_REGISTER(bt_bap_broadcast_assistant, CONFIG_BT_BAP_BROADCAST_ASSISTANT_LOG_LEVEL);
@@ -143,6 +144,13 @@ static int parse_recv_state(const void *data, uint16_t length,
}
recv_state->num_subgroups = net_buf_simple_pull_u8(&buf);
if (recv_state->num_subgroups > CONFIG_BT_BAP_BASS_MAX_SUBGROUPS) {
LOG_DBG("Cannot parse %u subgroups (max %d)", recv_state->num_subgroups,
CONFIG_BT_BAP_BASS_MAX_SUBGROUPS);
return -ENOMEM;
}
for (int i = 0; i < recv_state->num_subgroups; i++) {
struct bt_bap_bass_subgroup *subgroup = &recv_state->subgroups[i];
uint8_t *metadata;

View File

@@ -8516,7 +8516,7 @@ static void le_ltk_request(struct pdu_data *pdu_data, uint16_t handle,
}
static void encrypt_change(uint8_t err, uint16_t handle,
struct net_buf *buf)
struct net_buf *buf, bool encryption_on)
{
struct bt_hci_evt_encrypt_change *ep;
@@ -8527,9 +8527,9 @@ static void encrypt_change(uint8_t err, uint16_t handle,
hci_evt_create(buf, BT_HCI_EVT_ENCRYPT_CHANGE, sizeof(*ep));
ep = net_buf_add(buf, sizeof(*ep));
ep->status = err;
ep->status = err ? err : (encryption_on ? err : BT_HCI_ERR_UNSPECIFIED);
ep->handle = sys_cpu_to_le16(handle);
ep->encrypt = !err ? 1 : 0;
ep->encrypt = encryption_on ? 1 : 0;
}
#endif /* CONFIG_BT_CTLR_LE_ENC */
@@ -8671,7 +8671,7 @@ static void encode_data_ctrl(struct node_rx_pdu *node_rx,
break;
case PDU_DATA_LLCTRL_TYPE_START_ENC_RSP:
encrypt_change(0x00, handle, buf);
encrypt_change(0x00, handle, buf, true);
break;
#endif /* CONFIG_BT_CTLR_LE_ENC */
@@ -8688,7 +8688,7 @@ static void encode_data_ctrl(struct node_rx_pdu *node_rx,
#if defined(CONFIG_BT_CTLR_LE_ENC)
case PDU_DATA_LLCTRL_TYPE_REJECT_IND:
encrypt_change(pdu_data->llctrl.reject_ind.error_code, handle,
buf);
buf, false);
break;
#endif /* CONFIG_BT_CTLR_LE_ENC */

View File

@@ -1954,12 +1954,15 @@ int ull_disable(void *lll)
if (!ull_ref_get(hdr)) {
return -EALREADY;
}
cpu_dmb(); /* Ensure synchronized data access */
k_sem_init(&sem, 0, 1);
hdr->disabled_param = &sem;
hdr->disabled_cb = disabled_cb;
cpu_dmb(); /* Ensure synchronized data access */
/* ULL_HIGH can run after we have called `ull_ref_get` and it can
* decrement the ref count. Hence, handle this race condition by
* ensuring that `disabled_cb` has been set while the ref count is still

View File

@@ -284,11 +284,26 @@ void llcp_rx_node_retain(struct proc_ctx *ctx)
{
LL_ASSERT(ctx->node_ref.rx);
/* Mark RX node to NOT release */
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RETAIN;
/* Only retain if not already retained */
if (ctx->node_ref.rx->hdr.type != NODE_RX_TYPE_RETAIN) {
/* Mark RX node to NOT release */
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RETAIN;
/* store link element reference to use once this node is moved up */
ctx->node_ref.rx->hdr.link = ctx->node_ref.link;
/* store link element reference to use once this node is moved up */
ctx->node_ref.rx->hdr.link = ctx->node_ref.link;
}
}
void llcp_rx_node_release(struct proc_ctx *ctx)
{
LL_ASSERT(ctx->node_ref.rx);
/* Only release if retained */
if (ctx->node_ref.rx->hdr.type == NODE_RX_TYPE_RETAIN) {
/* Mark RX node to release and release */
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RELEASE;
ll_rx_put_sched(ctx->node_ref.rx->hdr.link, ctx->node_ref.rx);
}
}
void llcp_nodes_release(struct ll_conn *conn, struct proc_ctx *ctx)
@@ -296,12 +311,14 @@ void llcp_nodes_release(struct ll_conn *conn, struct proc_ctx *ctx)
if (ctx->node_ref.rx && ctx->node_ref.rx->hdr.type == NODE_RX_TYPE_RETAIN) {
/* RX node retained, so release */
ctx->node_ref.rx->hdr.link->mem = conn->llcp.rx_node_release;
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RELEASE;
conn->llcp.rx_node_release = ctx->node_ref.rx;
}
#if defined(CONFIG_BT_CTLR_PHY) && defined(CONFIG_BT_CTLR_DATA_LENGTH)
if (ctx->proc == PROC_PHY_UPDATE && ctx->data.pu.ntf_dle_node) {
/* RX node retained, so release */
ctx->data.pu.ntf_dle_node->hdr.link->mem = conn->llcp.rx_node_release;
ctx->data.pu.ntf_dle_node->hdr.type = NODE_RX_TYPE_RELEASE;
conn->llcp.rx_node_release = ctx->data.pu.ntf_dle_node;
}
#endif
@@ -706,9 +723,6 @@ void ull_cp_release_nodes(struct ll_conn *conn)
hdr = &rx->hdr;
rx = hdr->link->mem;
/* Mark for buffer for release */
hdr->type = NODE_RX_TYPE_RELEASE;
/* enqueue rx node towards Thread */
ll_rx_put(hdr->link, hdr);
}

View File

@@ -1139,6 +1139,7 @@ static void rp_comm_ntf(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t gene
/* Allocate ntf node */
ntf = ctx->node_ref.rx;
ctx->node_ref.rx = NULL;
LL_ASSERT(ntf);
/* This should be an 'old' RX node, so put/sched when done */

View File

@@ -196,6 +196,22 @@ static bool cu_check_conn_parameters(struct ll_conn *conn, struct proc_ctx *ctx)
}
#endif /* CONFIG_BT_CTLR_CONN_PARAM_REQ */
static bool cu_check_conn_ind_parameters(struct ll_conn *conn, struct proc_ctx *ctx)
{
const uint16_t interval_max = ctx->data.cu.interval_max; /* unit 1.25ms */
const uint16_t timeout = ctx->data.cu.timeout; /* unit 10ms */
const uint16_t latency = ctx->data.cu.latency;
/* Valid conn_update_ind parameters */
return (interval_max >= CONN_INTERVAL_MIN(conn)) &&
(interval_max <= CONN_UPDATE_CONN_INTV_4SEC) &&
(latency <= CONN_UPDATE_LATENCY_MAX) &&
(timeout >= CONN_UPDATE_TIMEOUT_100MS) &&
(timeout <= CONN_UPDATE_TIMEOUT_32SEC) &&
((timeout * 4U) > /* *4U re. conn events is equivalent to *2U re. ms */
((latency + 1U) * interval_max));
}
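The final `(timeout * 4U) > ((latency + 1U) * interval_max)` term is the supervision-timeout rule expressed directly in the fields' native units (timeout in 10 ms steps, interval_max in 1.25 ms steps). A short derivation, assuming the usual requirement that the timeout exceed twice the effective connection interval:
10\,\mathrm{ms}\cdot timeout \;>\; 2\cdot(latency+1)\cdot 1.25\,\mathrm{ms}\cdot interval\_max
\;\Longleftrightarrow\;
4\cdot timeout \;>\; (latency+1)\cdot interval\_max .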
static void cu_prepare_update_ind(struct ll_conn *conn, struct proc_ctx *ctx)
{
ctx->data.cu.win_size = 1U;
@@ -585,8 +601,20 @@ static void lp_cu_st_wait_rx_conn_update_ind(struct ll_conn *conn, struct proc_c
switch (evt) {
case LP_CU_EVT_CONN_UPDATE_IND:
llcp_pdu_decode_conn_update_ind(ctx, param);
/* Invalid PDU, mark the connection for termination */
if (!cu_check_conn_ind_parameters(conn, ctx)) {
llcp_rr_set_incompat(conn, INCOMPAT_NO_COLLISION);
conn->llcp_terminate.reason_final = BT_HCI_ERR_INVALID_LL_PARAM;
lp_cu_complete(conn, ctx);
break;
}
llcp_rr_set_incompat(conn, INCOMPAT_RESERVED);
/* Keep RX node to use for NTF */
llcp_rx_node_retain(ctx);
ctx->state = LP_CU_STATE_WAIT_INSTANT;
break;
case LP_CU_EVT_UNKNOWN:
@@ -633,8 +661,7 @@ static void lp_cu_check_instant(struct ll_conn *conn, struct proc_ctx *ctx, uint
lp_cu_ntf_complete(conn, ctx, evt, param);
} else {
/* Release RX node kept for NTF */
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RELEASE;
ll_rx_put_sched(ctx->node_ref.rx->hdr.link, ctx->node_ref.rx);
llcp_rx_node_release(ctx);
ctx->node_ref.rx = NULL;
lp_cu_complete(conn, ctx);
@@ -973,11 +1000,18 @@ static void rp_cu_st_wait_conn_param_req_available(struct ll_conn *conn, struct
case RP_CU_EVT_RUN:
if (cpr_active_is_set(conn)) {
ctx->state = RP_CU_STATE_WAIT_CONN_PARAM_REQ_AVAILABLE;
if (!llcp_rr_ispaused(conn) && llcp_tx_alloc_peek(conn, ctx)) {
/* We're good to reject immediately */
ctx->data.cu.rejected_opcode = PDU_DATA_LLCTRL_TYPE_CONN_PARAM_REQ;
ctx->data.cu.error = BT_HCI_ERR_UNSUPP_LL_PARAM_VAL;
rp_cu_send_reject_ext_ind(conn, ctx, evt, param);
/* Possibly retained rx node to be released as we won't need it */
llcp_rx_node_release(ctx);
ctx->node_ref.rx = NULL;
break;
}
/* In case we have to defer NTF */
llcp_rx_node_retain(ctx);
@@ -992,6 +1026,9 @@ static void rp_cu_st_wait_conn_param_req_available(struct ll_conn *conn, struct
rp_cu_conn_param_req_ntf(conn, ctx);
ctx->state = RP_CU_STATE_WAIT_CONN_PARAM_REQ_REPLY;
} else {
/* Possibly retained rx node to be released as we won't need it */
llcp_rx_node_release(ctx);
ctx->node_ref.rx = NULL;
#if defined(CONFIG_BT_CTLR_USER_CPR_ANCHOR_POINT_MOVE)
/* Handle APM as a vendor specific user extension */
if (conn->lll.role == BT_HCI_ROLE_PERIPHERAL &&
@@ -1177,8 +1214,7 @@ static void rp_cu_check_instant(struct ll_conn *conn, struct proc_ctx *ctx, uint
cu_ntf(conn, ctx);
} else {
/* Release RX node kept for NTF */
ctx->node_ref.rx->hdr.type = NODE_RX_TYPE_RELEASE;
ll_rx_put_sched(ctx->node_ref.rx->hdr.link, ctx->node_ref.rx);
llcp_rx_node_release(ctx);
ctx->node_ref.rx = NULL;
}
rp_cu_complete(conn, ctx);
@@ -1198,19 +1234,27 @@ static void rp_cu_st_wait_rx_conn_update_ind(struct ll_conn *conn, struct proc_c
case BT_HCI_ROLE_PERIPHERAL:
llcp_pdu_decode_conn_update_ind(ctx, param);
if (is_instant_not_passed(ctx->data.cu.instant,
ull_conn_event_counter(conn))) {
/* Valid PDU */
if (cu_check_conn_ind_parameters(conn, ctx)) {
if (is_instant_not_passed(ctx->data.cu.instant,
ull_conn_event_counter(conn))) {
/* Keep RX node to use for NTF */
llcp_rx_node_retain(ctx);
llcp_rx_node_retain(ctx);
ctx->state = RP_CU_STATE_WAIT_INSTANT;
/* In case we only just received it in time */
rp_cu_check_instant(conn, ctx, evt, param);
break;
}
ctx->state = RP_CU_STATE_WAIT_INSTANT;
/* In case we only just received it in time */
rp_cu_check_instant(conn, ctx, evt, param);
} else {
conn->llcp_terminate.reason_final = BT_HCI_ERR_INSTANT_PASSED;
llcp_rr_complete(conn);
ctx->state = RP_CU_STATE_IDLE;
} else {
conn->llcp_terminate.reason_final = BT_HCI_ERR_INVALID_LL_PARAM;
}
llcp_rr_complete(conn);
ctx->state = RP_CU_STATE_IDLE;
break;
default:
/* Unknown role */

View File

@@ -225,6 +225,7 @@ static void lp_enc_ntf(struct ll_conn *conn, struct proc_ctx *ctx)
/* Piggy-back on RX node */
ntf = ctx->node_ref.rx;
ctx->node_ref.rx = NULL;
LL_ASSERT(ntf);
ntf->hdr.type = NODE_RX_TYPE_DC_PDU;
@@ -381,19 +382,29 @@ static void lp_enc_store_s(struct ll_conn *conn, struct proc_ctx *ctx, struct pd
static inline uint8_t reject_error_code(struct pdu_data *pdu)
{
uint8_t error;
if (pdu->llctrl.opcode == PDU_DATA_LLCTRL_TYPE_REJECT_IND) {
return pdu->llctrl.reject_ind.error_code;
error = pdu->llctrl.reject_ind.error_code;
#if defined(CONFIG_BT_CTLR_EXT_REJ_IND)
} else if (pdu->llctrl.opcode == PDU_DATA_LLCTRL_TYPE_REJECT_EXT_IND) {
return pdu->llctrl.reject_ext_ind.error_code;
error = pdu->llctrl.reject_ext_ind.error_code;
#endif /* CONFIG_BT_CTLR_EXT_REJ_IND */
} else {
/* Called with an invalid PDU */
LL_ASSERT(0);
/* Keep compiler happy */
return BT_HCI_ERR_UNSPECIFIED;
error = BT_HCI_ERR_UNSPECIFIED;
}
/* Check expected error code from the peer */
if (error != BT_HCI_ERR_PIN_OR_KEY_MISSING &&
error != BT_HCI_ERR_UNSUPP_REMOTE_FEATURE) {
error = BT_HCI_ERR_UNSPECIFIED;
}
return error;
}
static void lp_enc_st_wait_rx_enc_rsp(struct ll_conn *conn, struct proc_ctx *ctx, uint8_t evt,

View File

@@ -413,6 +413,7 @@ void llcp_ntf_set_pending(struct ll_conn *conn);
void llcp_ntf_clear_pending(struct ll_conn *conn);
bool llcp_ntf_pending(struct ll_conn *conn);
void llcp_rx_node_retain(struct proc_ctx *ctx);
void llcp_rx_node_release(struct proc_ctx *ctx);
/*
* ULL -> LLL Interface

View File

@@ -81,6 +81,12 @@ void llcp_lr_check_done(struct ll_conn *conn, struct proc_ctx *ctx)
ctx_header = llcp_lr_peek(conn);
LL_ASSERT(ctx_header == ctx);
/* If we have a node rx it must not be marked RETAIN as
* the memory referenced would leak
*/
LL_ASSERT(ctx->node_ref.rx == NULL ||
ctx->node_ref.rx->hdr.type != NODE_RX_TYPE_RETAIN);
lr_dequeue(conn);
llcp_proc_ctx_release(ctx);
@@ -312,6 +318,11 @@ void llcp_lr_rx(struct ll_conn *conn, struct proc_ctx *ctx, memq_link_t *link,
break;
}
/* If rx node was not retained clear reference */
if (ctx->node_ref.rx && ctx->node_ref.rx->hdr.type != NODE_RX_TYPE_RETAIN) {
ctx->node_ref.rx = NULL;
}
llcp_lr_check_done(conn, ctx);
}

View File

@@ -433,6 +433,7 @@ static void pu_ntf(struct ll_conn *conn, struct proc_ctx *ctx)
/* Piggy-back on stored RX node */
ntf = ctx->node_ref.rx;
ctx->node_ref.rx = NULL;
LL_ASSERT(ntf);
if (ctx->data.pu.ntf_pu) {
@@ -449,15 +450,9 @@ static void pu_ntf(struct ll_conn *conn, struct proc_ctx *ctx)
}
/* Enqueue notification towards LL */
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)
/* only 'put' as the 'sched' is handled when handling DLE ntf */
ll_rx_put(ntf->hdr.link, ntf);
#else
ll_rx_put_sched(ntf->hdr.link, ntf);
#endif /* CONFIG_BT_CTLR_DATA_LENGTH */
ctx->data.pu.ntf_pu = 0;
ctx->node_ref.rx = NULL;
}
#if defined(CONFIG_BT_CTLR_DATA_LENGTH)

View File

@@ -118,6 +118,12 @@ void llcp_rr_check_done(struct ll_conn *conn, struct proc_ctx *ctx)
ctx_header = llcp_rr_peek(conn);
LL_ASSERT(ctx_header == ctx);
/* If we have a node rx it must not be marked RETAIN as
* the memory referenced would leak
*/
LL_ASSERT(ctx->node_ref.rx == NULL ||
ctx->node_ref.rx->hdr.type != NODE_RX_TYPE_RETAIN);
rr_dequeue(conn);
llcp_proc_ctx_release(ctx);
@@ -307,6 +313,12 @@ void llcp_rr_rx(struct ll_conn *conn, struct proc_ctx *ctx, memq_link_t *link,
LL_ASSERT(0);
break;
}
/* If rx node was not retained clear reference */
if (ctx->node_ref.rx && ctx->node_ref.rx->hdr.type != NODE_RX_TYPE_RETAIN) {
ctx->node_ref.rx = NULL;
}
llcp_rr_check_done(conn, ctx);
}

View File

@@ -3910,7 +3910,7 @@ static uint16_t parse_include(struct bt_conn *conn, const void *pdu,
struct bt_gatt_discover_params *params,
uint16_t length)
{
const struct bt_att_read_type_rsp *rsp = pdu;
const struct bt_att_read_type_rsp *rsp;
uint16_t handle = 0U;
struct bt_gatt_include value;
union {
@@ -3919,6 +3919,13 @@ static uint16_t parse_include(struct bt_conn *conn, const void *pdu,
struct bt_uuid_128 u128;
} u;
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto done;
}
rsp = pdu;
/* Data can be either in UUID16 or UUID128 */
switch (rsp->len) {
case 8: /* UUID16 */
@@ -4003,7 +4010,7 @@ static uint16_t parse_characteristic(struct bt_conn *conn, const void *pdu,
struct bt_gatt_discover_params *params,
uint16_t length)
{
const struct bt_att_read_type_rsp *rsp = pdu;
const struct bt_att_read_type_rsp *rsp;
uint16_t handle = 0U;
union {
struct bt_uuid uuid;
@@ -4011,6 +4018,13 @@ static uint16_t parse_characteristic(struct bt_conn *conn, const void *pdu,
struct bt_uuid_128 u128;
} u;
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto done;
}
rsp = pdu;
/* Data can be either in UUID16 or UUID128 */
switch (rsp->len) {
case 7: /* UUID16 */
@@ -4084,7 +4098,7 @@ static uint16_t parse_read_std_char_desc(struct bt_conn *conn, const void *pdu,
struct bt_gatt_discover_params *params,
uint16_t length)
{
const struct bt_att_read_type_rsp *rsp = pdu;
const struct bt_att_read_type_rsp *rsp;
uint16_t handle = 0U;
uint16_t uuid_val;
@@ -4094,6 +4108,13 @@ static uint16_t parse_read_std_char_desc(struct bt_conn *conn, const void *pdu,
uuid_val = BT_UUID_16(params->uuid)->val;
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto done;
}
rsp = pdu;
/* Parse characteristics found */
for (length--, pdu = rsp->data; length >= rsp->len;
length -= rsp->len, pdu = (const uint8_t *)pdu + rsp->len) {
@@ -4103,9 +4124,16 @@ static uint16_t parse_read_std_char_desc(struct bt_conn *conn, const void *pdu,
struct bt_gatt_cep cep;
struct bt_gatt_scc scc;
} value;
const struct bt_att_data *data = pdu;
const struct bt_att_data *data;
struct bt_gatt_attr attr;
if (length < sizeof(*data)) {
LOG_WRN("Parse err dat");
goto done;
}
data = pdu;
handle = sys_le16_to_cpu(data->handle);
/* Handle 0 is invalid */
if (!handle) {
@@ -4114,17 +4142,39 @@ static uint16_t parse_read_std_char_desc(struct bt_conn *conn, const void *pdu,
switch (uuid_val) {
case BT_UUID_GATT_CEP_VAL:
if (length < sizeof(*data) + sizeof(uint16_t)) {
LOG_WRN("Parse err cep");
goto done;
}
value.cep.properties = sys_get_le16(data->value);
break;
case BT_UUID_GATT_CCC_VAL:
if (length < sizeof(*data) + sizeof(uint16_t)) {
LOG_WRN("Parse err ccc");
goto done;
}
value.ccc.flags = sys_get_le16(data->value);
break;
case BT_UUID_GATT_SCC_VAL:
if (length < sizeof(*data) + sizeof(uint16_t)) {
LOG_WRN("Parse err scc");
goto done;
}
value.scc.flags = sys_get_le16(data->value);
break;
case BT_UUID_GATT_CPF_VAL:
{
struct gatt_cpf *cpf = (struct gatt_cpf *)data->value;
struct gatt_cpf *cpf;
if (length < sizeof(*data) + sizeof(*cpf)) {
LOG_WRN("Parse err cpf");
goto done;
}
cpf = (void *)data->value;
value.cpf.format = cpf->format;
value.cpf.exponent = cpf->exponent;
@@ -4227,7 +4277,7 @@ static uint16_t parse_service(struct bt_conn *conn, const void *pdu,
struct bt_gatt_discover_params *params,
uint16_t length)
{
const struct bt_att_read_group_rsp *rsp = pdu;
const struct bt_att_read_group_rsp *rsp;
uint16_t start_handle, end_handle = 0U;
union {
struct bt_uuid uuid;
@@ -4235,6 +4285,13 @@ static uint16_t parse_service(struct bt_conn *conn, const void *pdu,
struct bt_uuid_128 u128;
} u;
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto done;
}
rsp = pdu;
/* Data can be either in UUID16 or UUID128 */
switch (rsp->len) {
case 6: /* UUID16 */
@@ -4365,7 +4422,7 @@ static void gatt_find_info_rsp(struct bt_conn *conn, int err,
const void *pdu, uint16_t length,
void *user_data)
{
const struct bt_att_find_info_rsp *rsp = pdu;
const struct bt_att_find_info_rsp *rsp;
struct bt_gatt_discover_params *params = user_data;
uint16_t handle = 0U;
uint16_t len;
@@ -4387,6 +4444,13 @@ static void gatt_find_info_rsp(struct bt_conn *conn, int err,
goto done;
}
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto done;
}
rsp = pdu;
/* Data can be either in UUID16 or UUID128 */
switch (rsp->format) {
case BT_ATT_INFO_16:
@@ -4992,7 +5056,7 @@ static void gatt_prepare_write_rsp(struct bt_conn *conn, int err,
void *user_data)
{
struct bt_gatt_write_params *params = user_data;
const struct bt_att_prepare_write_rsp *rsp = pdu;
const struct bt_att_prepare_write_rsp *rsp;
size_t len;
bool data_valid;
@@ -5004,6 +5068,13 @@ static void gatt_prepare_write_rsp(struct bt_conn *conn, int err,
return;
}
if (length < sizeof(*rsp)) {
LOG_WRN("Parse err");
goto fail;
}
rsp = pdu;
len = length - sizeof(*rsp);
if (len > params->length) {
LOG_ERR("Incorrect length, canceling write");

View File

@@ -2062,6 +2062,12 @@ static void hci_encrypt_change(struct net_buf *buf)
return;
}
if (conn->encrypt == evt->encrypt) {
LOG_WRN("No change to encryption state (encrypt 0x%02x)", evt->encrypt);
bt_conn_unref(conn);
return;
}
conn->encrypt = evt->encrypt;
#if defined(CONFIG_BT_SMP)
@@ -3848,7 +3854,7 @@ static void rx_queue_put(struct net_buf *buf)
}
#endif /* !CONFIG_BT_RECV_BLOCKING */
int bt_recv(struct net_buf *buf)
static int bt_recv_unsafe(struct net_buf *buf)
{
bt_monitor_send(bt_monitor_opcode(buf), buf->data, buf->len);
@@ -3899,6 +3905,17 @@ int bt_recv(struct net_buf *buf)
}
}
int bt_recv(struct net_buf *buf)
{
int err;
k_sched_lock();
err = bt_recv_unsafe(buf);
k_sched_unlock();
return err;
}
int bt_recv_prio(struct net_buf *buf)
{
bt_monitor_send(bt_monitor_opcode(buf), buf->data, buf->len);

View File

@@ -392,6 +392,11 @@ static int l2cap_br_info_rsp(struct bt_l2cap_br *l2cap, uint8_t ident,
switch (type) {
case BT_L2CAP_INFO_FEAT_MASK:
if (buf->len < sizeof(uint32_t)) {
LOG_ERR("Invalid remote info feat mask");
err = -EINVAL;
break;
}
l2cap->info_feat_mask = net_buf_pull_le32(buf);
LOG_DBG("remote info mask 0x%08x", l2cap->info_feat_mask);
@@ -402,6 +407,11 @@ static int l2cap_br_info_rsp(struct bt_l2cap_br *l2cap, uint8_t ident,
l2cap_br_get_info(l2cap, BT_L2CAP_INFO_FIXED_CHAN);
return 0;
case BT_L2CAP_INFO_FIXED_CHAN:
if (buf->len < sizeof(uint8_t)) {
LOG_ERR("Invalid remote info fixed chan");
err = -EINVAL;
break;
}
l2cap->info_fixed_chan = net_buf_pull_u8(buf);
LOG_DBG("remote fixed channel mask 0x%02x", l2cap->info_fixed_chan);

View File

@@ -1454,10 +1454,13 @@ static int rfcomm_recv(struct bt_l2cap_chan *chan, struct net_buf *buf)
{
struct bt_rfcomm_session *session = RFCOMM_SESSION(chan);
struct bt_rfcomm_hdr *hdr = (void *)buf->data;
struct bt_rfcomm_hdr_ext *hdr_ext = (void *)buf->data;
uint8_t dlci, frame_type, fcs, fcs_len;
uint16_t msg_len;
uint16_t hdr_len;
/* Need to consider FCS also */
if (buf->len < (sizeof(*hdr) + 1)) {
if (buf->len < (sizeof(*hdr) + sizeof(fcs))) {
LOG_ERR("Too small RFCOMM Frame");
return 0;
}
@@ -1467,19 +1470,28 @@ static int rfcomm_recv(struct bt_l2cap_chan *chan, struct net_buf *buf)
LOG_DBG("session %p dlci %x type %x", session, dlci, frame_type);
fcs_len = (frame_type == BT_RFCOMM_UIH) ? BT_RFCOMM_FCS_LEN_UIH :
BT_RFCOMM_FCS_LEN_NON_UIH;
fcs = *(net_buf_tail(buf) - 1);
if (BT_RFCOMM_LEN_EXTENDED(hdr->length)) {
msg_len = BT_RFCOMM_GET_LEN_EXTENDED(hdr_ext->hdr.length, hdr_ext->second_length);
hdr_len = sizeof(*hdr_ext);
} else {
msg_len = BT_RFCOMM_GET_LEN(hdr->length);
hdr_len = sizeof(*hdr);
}
if (buf->len < (hdr_len + msg_len + sizeof(fcs))) {
LOG_ERR("Too small RFCOMM information (%d < %d)", buf->len,
hdr_len + msg_len + sizeof(fcs));
return 0;
}
fcs_len = (frame_type == BT_RFCOMM_UIH) ? BT_RFCOMM_FCS_LEN_UIH : hdr_len;
fcs = *(net_buf_tail(buf) - sizeof(fcs));
if (!rfcomm_check_fcs(fcs_len, buf->data, fcs)) {
LOG_ERR("FCS check failed");
return 0;
}
if (BT_RFCOMM_LEN_EXTENDED(hdr->length)) {
net_buf_pull(buf, sizeof(*hdr) + 1);
} else {
net_buf_pull(buf, sizeof(*hdr));
}
net_buf_pull(buf, hdr_len);
switch (frame_type) {
case BT_RFCOMM_SABM:
@@ -1489,8 +1501,7 @@ static int rfcomm_recv(struct bt_l2cap_chan *chan, struct net_buf *buf)
if (!dlci) {
rfcomm_handle_msg(session, buf);
} else {
rfcomm_handle_data(session, buf, dlci,
BT_RFCOMM_GET_PF(hdr->control));
rfcomm_handle_data(session, buf, dlci, BT_RFCOMM_GET_PF(hdr->control));
}
break;
case BT_RFCOMM_DISC:

View File

@@ -49,6 +49,11 @@ struct bt_rfcomm_hdr {
uint8_t length;
} __packed;
struct bt_rfcomm_hdr_ext {
struct bt_rfcomm_hdr hdr;
uint8_t second_length;
} __packed;
#define BT_RFCOMM_SABM 0x2f
#define BT_RFCOMM_UA 0x63
#define BT_RFCOMM_UIH 0xef
@@ -137,13 +142,14 @@ struct bt_rfcomm_rpn {
sizeof(struct bt_rfcomm_hdr) + 1 + (mtu) + \
BT_RFCOMM_FCS_SIZE)
#define BT_RFCOMM_GET_DLCI(addr) (((addr) & 0xfc) >> 2)
#define BT_RFCOMM_GET_FRAME_TYPE(ctrl) ((ctrl) & 0xef)
#define BT_RFCOMM_GET_MSG_TYPE(type) (((type) & 0xfc) >> 2)
#define BT_RFCOMM_GET_MSG_CR(type) (((type) & 0x02) >> 1)
#define BT_RFCOMM_GET_LEN(len) (((len) & 0xfe) >> 1)
#define BT_RFCOMM_GET_CHANNEL(dlci) ((dlci) >> 1)
#define BT_RFCOMM_GET_PF(ctrl) (((ctrl) & 0x10) >> 4)
#define BT_RFCOMM_GET_DLCI(addr) (((addr) & 0xfc) >> 2)
#define BT_RFCOMM_GET_FRAME_TYPE(ctrl) ((ctrl) & 0xef)
#define BT_RFCOMM_GET_MSG_TYPE(type) (((type) & 0xfc) >> 2)
#define BT_RFCOMM_GET_MSG_CR(type) (((type) & 0x02) >> 1)
#define BT_RFCOMM_GET_LEN(len) (((len) & 0xfe) >> 1)
#define BT_RFCOMM_GET_LEN_EXTENDED(first, second) ((((first) & 0xfe) >> 1) | ((second) << 7))
#define BT_RFCOMM_GET_CHANNEL(dlci) ((dlci) >> 1)
#define BT_RFCOMM_GET_PF(ctrl) (((ctrl) & 0x10) >> 4)
#define BT_RFCOMM_SET_ADDR(dlci, cr) ((((dlci) & 0x3f) << 2) | \
((cr) << 1) | 0x01)

View File

@@ -602,6 +602,24 @@ void bt_hci_le_adv_ext_report(struct net_buf *buf)
is_report_complete = data_status == BT_HCI_LE_ADV_EVT_TYPE_DATA_STATUS_COMPLETE;
more_to_come = data_status == BT_HCI_LE_ADV_EVT_TYPE_DATA_STATUS_PARTIAL;
if (evt->length > buf->len) {
LOG_WRN("Adv report corrupted (wants %u out of %u)", evt->length, buf->len);
net_buf_reset(buf);
if (evt_type & BT_HCI_LE_ADV_EVT_TYPE_LEGACY) {
return;
}
/* Start discarding irrespective of the `more_to_come` flag. We
* assume we may have lost a partial adv report in the truncated
* data.
*/
reassembling_advertiser.state = FRAG_ADV_DISCARDING;
return;
}
if (evt_type & BT_HCI_LE_ADV_EVT_TYPE_LEGACY) {
/* Legacy advertising reports are complete.
* Create event immediately.

View File

@@ -115,7 +115,7 @@ struct select_attrs_data {
uint16_t max_att_len;
uint16_t att_list_len;
uint8_t cont_state_size;
uint8_t num_filters;
size_t num_filters;
bool new_service;
};
@@ -814,7 +814,7 @@ static uint8_t select_attrs(struct bt_sdp_attribute *attr, uint8_t att_idx,
struct select_attrs_data *sad = user_data;
uint16_t att_id_lower, att_id_upper, att_id_cur, space;
uint32_t attr_size, seq_size;
uint8_t idx_filter;
size_t idx_filter;
for (idx_filter = 0U; idx_filter < sad->num_filters; idx_filter++) {
@@ -939,7 +939,7 @@ static uint8_t select_attrs(struct bt_sdp_attribute *attr, uint8_t att_idx,
* @return len Length of the attribute list created
*/
static uint16_t create_attr_list(struct bt_sdp *sdp, struct bt_sdp_record *record,
uint32_t *filter, uint8_t num_filters,
uint32_t *filter, size_t num_filters,
uint16_t max_att_len, uint8_t cont_state_size,
uint8_t next_att, struct search_state *state,
struct net_buf *rsp_buf)
@@ -978,12 +978,13 @@ static uint16_t create_attr_list(struct bt_sdp *sdp, struct bt_sdp_record *recor
* IDs, the lower 2 bytes contain the ID and the upper 2 bytes are set to
* 0xFFFF. For attribute ranges, the lower 2 bytes indicate the start ID and
* the upper 2 bytes indicate the end ID
* @param max_filters Max element slots of filter to be filled in
* @param num_filters No. of filter elements filled in (to be returned)
*
* @return 0 for success, or relevant error code
*/
static uint16_t get_att_search_list(struct net_buf *buf, uint32_t *filter,
uint8_t *num_filters)
size_t max_filters, size_t *num_filters)
{
struct bt_sdp_data_elem data_elem;
uint16_t res;
@@ -998,6 +999,11 @@ static uint16_t get_att_search_list(struct net_buf *buf, uint32_t *filter,
size = data_elem.data_size;
while (size) {
if (*num_filters >= max_filters) {
LOG_WRN("Exceeded maximum array length %u of %p", max_filters, filter);
return 0;
}
res = parse_data_elem(buf, &data_elem);
if (res) {
return res;
@@ -1075,7 +1081,8 @@ static uint16_t sdp_svc_att_req(struct bt_sdp *sdp, struct net_buf *buf,
struct net_buf *rsp_buf;
uint32_t svc_rec_hdl;
uint16_t max_att_len, res, att_list_len;
uint8_t num_filters, cont_state_size, next_att = 0U;
size_t num_filters;
uint8_t cont_state_size, next_att = 0U;
if (buf->len < 6) {
LOG_WRN("Malformed packet");
@@ -1086,7 +1093,7 @@ static uint16_t sdp_svc_att_req(struct bt_sdp *sdp, struct net_buf *buf,
max_att_len = net_buf_pull_be16(buf);
/* Set up the filters */
res = get_att_search_list(buf, filter, &num_filters);
res = get_att_search_list(buf, filter, ARRAY_SIZE(filter), &num_filters);
if (res) {
/* Error in parsing */
return res;
@@ -1191,7 +1198,8 @@ static uint16_t sdp_svc_search_att_req(struct bt_sdp *sdp, struct net_buf *buf,
struct bt_sdp_att_rsp *rsp;
struct bt_sdp_data_elem_seq *seq = NULL;
uint16_t max_att_len, res, att_list_len = 0U;
uint8_t num_filters, cont_state_size, next_svc = 0U, next_att = 0U;
size_t num_filters;
uint8_t cont_state_size, next_svc = 0U, next_att = 0U;
bool dry_run = false;
res = find_services(buf, matching_recs);
@@ -1207,7 +1215,7 @@ static uint16_t sdp_svc_search_att_req(struct bt_sdp *sdp, struct net_buf *buf,
max_att_len = net_buf_pull_be16(buf);
/* Set up the filters */
res = get_att_search_list(buf, filter, &num_filters);
res = get_att_search_list(buf, filter, ARRAY_SIZE(filter), &num_filters);
if (res) {
/* Error in parsing */
@@ -1750,6 +1758,11 @@ static int sdp_client_receive(struct bt_l2cap_chan *chan, struct net_buf *buf)
switch (hdr->op_code) {
case BT_SDP_SVC_SEARCH_ATTR_RSP:
/* Check the buffer len for the length field */
if (buf->len < sizeof(uint16_t)) {
LOG_ERR("Invalid frame payload length");
return 0;
}
/* Get number of attributes in this frame. */
frame_len = net_buf_pull_be16(buf);
/* Check valid buf len for attribute list and cont state */

View File

@@ -297,6 +297,11 @@ static void olcp_ind_handler(struct bt_conn *conn,
enum bt_gatt_ots_olcp_proc_type op_code;
struct net_buf_simple net_buf;
if (length < sizeof(op_code)) {
LOG_DBG("Invalid indication length: %u", length);
return;
}
net_buf_simple_init_with_data(&net_buf, (void *)data, length);
op_code = net_buf_simple_pull_u8(&net_buf);
@@ -304,6 +309,12 @@ static void olcp_ind_handler(struct bt_conn *conn,
LOG_DBG("OLCP indication");
if (op_code == BT_GATT_OTS_OLCP_PROC_RESP) {
if (net_buf.len < (sizeof(uint8_t) + sizeof(uint8_t))) {
LOG_DBG("Invalid indication length for op_code %u: %u", op_code,
net_buf.len);
return;
}
enum bt_gatt_ots_olcp_proc_type req_opcode =
net_buf_simple_pull_u8(&net_buf);
enum bt_gatt_ots_olcp_res_code result_code =
@@ -366,6 +377,11 @@ static void oacp_ind_handler(struct bt_conn *conn,
uint32_t checksum;
struct net_buf_simple net_buf;
if (length < sizeof(op_code)) {
LOG_DBG("Invalid indication length: %u", length);
return;
}
net_buf_simple_init_with_data(&net_buf, (void *)data, length);
op_code = net_buf_simple_pull_u8(&net_buf);

View File

@@ -81,7 +81,6 @@ static struct hawkbit_context {
uint8_t *response_data;
int32_t json_action_id;
size_t url_buffer_size;
size_t status_buffer_size;
struct hawkbit_download dl;
struct http_request http_req;
struct flash_img_context flash_ctx;
@@ -851,7 +850,7 @@ static bool send_request(enum http_method method, enum hawkbit_http_request type
ret = json_obj_encode_buf(json_cfg_descr, ARRAY_SIZE(json_cfg_descr), &cfg,
hb_context.status_buffer,
hb_context.status_buffer_size - 1);
sizeof(hb_context.status_buffer));
if (ret) {
LOG_ERR("Can't encode the JSON script (HAWKBIT_CONFIG_DEVICE): %d", ret);
return false;
@@ -881,7 +880,7 @@ static bool send_request(enum http_method method, enum hawkbit_http_request type
ret = json_obj_encode_buf(json_close_descr, ARRAY_SIZE(json_close_descr), &close,
hb_context.status_buffer,
hb_context.status_buffer_size - 1);
sizeof(hb_context.status_buffer));
if (ret) {
LOG_ERR("Can't encode the JSON script (HAWKBIT_CLOSE): %d", ret);
return false;
@@ -928,7 +927,7 @@ static bool send_request(enum http_method method, enum hawkbit_http_request type
ret = json_obj_encode_buf(json_dep_fbk_descr, ARRAY_SIZE(json_dep_fbk_descr),
&feedback, hb_context.status_buffer,
hb_context.status_buffer_size - 1);
sizeof(hb_context.status_buffer));
if (ret) {
LOG_ERR("Can't encode the JSON script (HAWKBIT_REPORT): %d", ret);
return ret;

View File

@@ -847,19 +847,31 @@ enum updatehub_response z_impl_updatehub_probe(void)
recv_probe_sh_string_descr,
ARRAY_SIZE(recv_probe_sh_string_descr),
&metadata_any_boards) < 0) {
LOG_ERR("Could not parse json");
LOG_ERR("Could not parse the json");
ctx.code_status = UPDATEHUB_METADATA_ERROR;
goto cleanup;
}
LOG_DBG("elements: %d", metadata_any_boards.objects_len);
for (size_t i = 0; i < metadata_any_boards.objects_len; ++i) {
LOG_DBG("obj[%d].elements: %d",
i, metadata_any_boards.objects[i].objects_len);
for (size_t j = 0; j < metadata_any_boards.objects[i].objects_len; ++j) {
struct resp_probe_objects *obj =
&metadata_any_boards.objects[i].objects[j].objects;
LOG_DBG("\tMode: %s\n\tSHA: %s\n\tSize: %d",
obj->mode, obj->sha256sum, obj->size);
}
}
if (metadata_any_boards.objects_len != 2) {
LOG_ERR("Could not parse json");
LOG_ERR("Object length of type 'any metadata' is incorrect");
ctx.code_status = UPDATEHUB_METADATA_ERROR;
goto cleanup;
}
sha256size = strlen(
metadata_any_boards.objects[1].objects.sha256sum) + 1;
metadata_any_boards.objects[1].objects[0].objects.sha256sum) + 1;
if (sha256size != SHA256_HEX_DIGEST_SIZE) {
LOG_ERR("SHA256 size is invalid");
@@ -868,14 +880,26 @@ enum updatehub_response z_impl_updatehub_probe(void)
}
memcpy(update_info.sha256sum_image,
metadata_any_boards.objects[1].objects.sha256sum,
metadata_any_boards.objects[1].objects[0].objects.sha256sum,
SHA256_HEX_DIGEST_SIZE);
update_info.image_size = metadata_any_boards.objects[1].objects.size;
update_info.image_size = metadata_any_boards.objects[1].objects[0].objects.size;
LOG_DBG("metadata_any: %s",
update_info.sha256sum_image);
} else {
LOG_DBG("elements: %d\n", metadata_some_boards.objects_len);
for (size_t i = 0; i < metadata_some_boards.objects_len; ++i) {
LOG_DBG("obj[%d].elements: %d\n",
i, metadata_some_boards.objects[i].objects_len);
for (size_t j = 0; j < metadata_some_boards.objects[i].objects_len; ++j) {
struct resp_probe_objects *obj =
&metadata_some_boards.objects[i].objects[j].objects;
LOG_DBG("\tMode: %s\n\tSHA: %s\n\tSize: %d\n",
obj->mode, obj->sha256sum, obj->size);
}
}
if (metadata_some_boards.objects_len != 2) {
LOG_ERR("Could not parse json");
LOG_ERR("Object length of type 'some metadata' is incorrect");
ctx.code_status = UPDATEHUB_METADATA_ERROR;
goto cleanup;
}
@@ -888,7 +912,7 @@ enum updatehub_response z_impl_updatehub_probe(void)
}
sha256size = strlen(
metadata_some_boards.objects[1].objects.sha256sum) + 1;
metadata_some_boards.objects[1].objects[0].objects.sha256sum) + 1;
if (sha256size != SHA256_HEX_DIGEST_SIZE) {
LOG_ERR("SHA256 size is invalid");
@@ -897,10 +921,10 @@ enum updatehub_response z_impl_updatehub_probe(void)
}
memcpy(update_info.sha256sum_image,
metadata_some_boards.objects[1].objects.sha256sum,
metadata_some_boards.objects[1].objects[0].objects.sha256sum,
SHA256_HEX_DIGEST_SIZE);
update_info.image_size =
metadata_some_boards.objects[1].objects.size;
metadata_some_boards.objects[1].objects[0].objects.size;
LOG_DBG("metadata_some: %s",
update_info.sha256sum_image);
}
@@ -942,7 +966,7 @@ enum updatehub_response z_impl_updatehub_update(void)
if (updatehub_storage_mark_partition_to_upgrade(&ctx.storage_ctx,
UPDATEHUB_SLOT_PARTITION_1)) {
LOG_ERR("Could not reporting downloaded state");
LOG_ERR("Could not mark partition to upgrade");
ctx.code_status = UPDATEHUB_INSTALL_ERROR;
goto error;
}

View File

@@ -97,15 +97,20 @@ struct resp_probe_objects_array {
struct resp_probe_objects objects;
};
struct resp_probe_objects_array_array {
struct resp_probe_objects_array objects[4];
size_t objects_len;
};
struct resp_probe_any_boards {
struct resp_probe_objects_array objects[2];
struct resp_probe_objects_array_array objects[2];
size_t objects_len;
const char *product;
const char *supported_hardware;
};
struct resp_probe_some_boards {
struct resp_probe_objects_array objects[2];
struct resp_probe_objects_array_array objects[2];
size_t objects_len;
const char *product;
const char *supported_hardware[CONFIG_UPDATEHUB_SUPPORTED_HARDWARE_MAX];
@@ -148,6 +153,13 @@ static const struct json_obj_descr recv_probe_objects_descr_array[] = {
objects, recv_probe_objects_descr),
};
static const struct json_obj_descr recv_probe_objects_descr_array_array[] = {
JSON_OBJ_DESCR_ARRAY_ARRAY(struct resp_probe_objects_array_array,
objects, 4, objects_len,
recv_probe_objects_descr_array,
ARRAY_SIZE(recv_probe_objects_descr_array)),
};
static const struct json_obj_descr recv_probe_sh_string_descr[] = {
JSON_OBJ_DESCR_PRIM(struct resp_probe_any_boards,
product, JSON_TOK_STRING),
@@ -156,8 +168,8 @@ static const struct json_obj_descr recv_probe_sh_string_descr[] = {
JSON_TOK_STRING),
JSON_OBJ_DESCR_ARRAY_ARRAY(struct resp_probe_any_boards,
objects, 2, objects_len,
recv_probe_objects_descr_array,
ARRAY_SIZE(recv_probe_objects_descr_array)),
recv_probe_objects_descr_array_array,
ARRAY_SIZE(recv_probe_objects_descr_array_array)),
};
static const struct json_obj_descr recv_probe_sh_array_descr[] = {
@@ -169,8 +181,8 @@ static const struct json_obj_descr recv_probe_sh_array_descr[] = {
supported_hardware_len, JSON_TOK_STRING),
JSON_OBJ_DESCR_ARRAY_ARRAY(struct resp_probe_some_boards,
objects, 2, objects_len,
recv_probe_objects_descr_array,
ARRAY_SIZE(recv_probe_objects_descr_array)),
recv_probe_objects_descr_array_array,
ARRAY_SIZE(recv_probe_objects_descr_array_array)),
};
static const struct json_obj_descr device_identity_descr[] = {

Some files were not shown because too many files have changed in this diff.