Compare commits

...

41 Commits

Author SHA1 Message Date
Pieter De Gendt
9d018107e9 net: lib: coap: Return an error on removing a non-existing observer
If we're parsing a CoAP request with an observe option of '1', but there is
no matching observer, return an error instead of returning zero.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
(cherry picked from commit d5931bae93)
2025-05-30 08:28:56 +00:00
Krzysztof Chruściński
c08390df71 drivers: serial: nrfx_uarte: Fix use of PM_DEVICE_ISR_SAFE
PM_DEVICE_ISR_SAFE shall not be used with the non-asynchronous API,
because RX is disabled in the suspend action and that takes a relatively
long time. With PM_DEVICE_ISR_SAFE it is done with interrupts
disabled. RX is not used at all if the disable-rx property is set, and in
that case PM_DEVICE_ISR_SAFE can be used.

Added a macro that determines whether ISR-safe mode can be used.

Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
(cherry picked from commit e0f5241b28)
2025-05-23 11:44:33 -07:00
Luis Ubieda
d4374ed2cf sensor: adxl345: Disable Sensor Streaming by default
Have the application enable this feature explicitly, so that
simple applications do not need to disable it to get the
expected behavior.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
0f9a0f4d00 sensor: adxl345: Only enable FIFO Stream when Sensor Stream is enabled
Otherwise, with its default configuration (25 Hz, 32-level FIFO),
individual samples could be up to 1 second old.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
80b607dd8f sensor: adxl345: Fix decoder for non-streaming mode
The following fixes have been applied to this decoder:
- The Q-scale factor was fixed, both for full-scale and non-full-scale
modes.
- The data type decoded is struct sensor_three_axis_data, as it
should be for the read/decode API.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Luis Ubieda
e465e6168e sensor: adxl345: Add get_size_info API
Used by the sensor shell to retrieve values; without it the shell
crashes.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
2025-05-09 14:06:28 -07:00
Dong Wang
0fb81dec80 logging: Assign IDs to log backends early during log_core_init
This commit moves the assignment of backend IDs from the 'z_log_init'
function to the earlier 'log_core_init' function. This ensures that
backend IDs are assigned before they are used in the 'log_backend_enable'
function, preventing incorrect settings of log dynamic filters.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
(cherry picked from commit 3fac83151d)
2025-05-09 11:59:59 -07:00
Benjamin Cabé
1cdd63e5ba doc: Update Graphviz font configuration
Customize Graphviz dot rendering to use same font stack as our Sphinx
theme.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit c0f76d9363)
2025-05-09 11:57:23 -07:00
Dong Wang
64451449ac logging: Ensure atomic update of log filter slot in LOG_FILTER_SLOT_SET
Replaced the read-modify-write sequence with a single read and write
operation, preventing the intermediate value from being wrongly used to
filter out logs of another, higher-priority thread that preempts the
current thread.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
(cherry picked from commit 872f363696)
2025-05-09 10:45:48 -07:00
Chekhov Ma
79696d420e drivers: gpio: adp5585: fix wrong reg during pin configure
ADP5585_GPO_OUT_MODE_A was used when configuring the initial output
during pin configuration, causing pins configured HIGH to be incorrectly
configured as open-drain. Replacing the register with ADP5585_GPO_DATA_OUT
fixes the issue.

Signed-off-by: Chekhov Ma <chekhov.ma@nxp.com>
(cherry picked from commit c87900aa40)
2025-05-09 10:36:07 -07:00
Benjamin Cabé
d5d5a7c7db doc: make graphviz diagrams look good in dark theme
Add filter to invert colors in dark mode.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
(cherry picked from commit fd919b5160)
2025-05-09 10:34:47 -07:00
Jukka Rissanen
215737c229 net: if: Release the interface lock early when starting IPv4 ACD
In order to avoid any mutex deadlocks between iface->lock and the
TX lock, release the interface lock before calling a function
that will acquire the TX lock. See the previous commit for a similar
issue in RS timer handling. Here we create a separate list of ACD
addresses to be started when the network interface comes up,
processed without iface->lock held.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
60190a93f4 net: if: Release the interface lock early when starting IPv6 DAD
In order to avoid any mutex deadlocks between iface->lock and
TX lock, release the interface lock before calling a function
that will acquire TX lock. See previous commit for similar issue
in RS timer handling.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
f749a05200 net: if: Release the interface lock early in IPv6 RS timeout handler
The net_if.c:rs_timeout() function sends a new IPv6 router solicitation
message to the network by calling net_if_start_rs(). That function
then acquires iface->lock and calls net_ipv6_start_rs(), which tries
to send the RS message and acquires the TX send lock.
During this RS send, we might receive TCP data that could try to
send an ack to the peer. This in turn also causes the TX lock
to be acquired. Depending on timing, the lock ordering between the
rx thread and the system workq might differ, which could lead to
deadlock.
Fix this issue by releasing iface->lock before starting the
RS sending process. net_if_start_rs() does not really need to
hold the interface lock for long, as it is the only one sending
the RS message.

Fixes #86499

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
3af878f68e net: if: Documentation missing for IPv4 ACD timeout variable
Add documentation for ACD timeout variable as it was missing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Jukka Rissanen
05dd94d1dd net: if: Documentation missing for IPv6 DAD start time variable
Add documentation for DAD start time variable as it was missing.

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
2025-05-09 10:34:15 -07:00
Sudan Landge
0a8f9bb61c modules: tf-m: update to 2.1.2
Update TF-M to 2.1.2 from version 2.1.1.
This is required to use MbedTLS 3.6.3.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2025-05-09 10:27:22 -07:00
Sudan Landge
bbe7ab00de modules: mbedtls: update to 3.6.3
Update Mbed TLS to 3.6.3 as it has CVE fixes.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2025-05-09 10:27:22 -07:00
Fabio Baltieri
7ff8f6696c ci: update a few workflows to use Python 3.12
The HTML doc build workflow and PR assigner recently started failing to
install Python packages, breaking all backports. This works when using
Python 3.12, which is already done in main in bacb99da6d.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2025-05-09 11:19:59 +01:00
Alex An
83e99be7d1 drivers: retained_mem: Fix using multiple nRF retained memory regions
Fixes an error when using multiple nRF retained memory regions caused by
a missing separating comma.

Signed-off-by: Alex An <aza0337@auburn.edu>
(cherry picked from commit fbb75862a1)
2025-05-07 11:02:27 -07:00
Luis Ubieda
94c7905e01 spi_rtio: fix transactions for default handler
This patch fixes transaction op items not being performed within a single
SPI transfer. This is common for Write + Read commands, which depend on
CS being kept asserted until the end; otherwise the context is lost.
A similar fix was applied to i2c_rtio_default in #79890.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit b5ca5b7b77)
2025-05-07 08:14:55 -07:00
Mathieu Choplain
43c79ba72c dts: st: stm32u5: restore correct clocks on multi-bit devices
During the transition to the STM32_CLOCK macro (in 57723cf), the `clocks`
property of peripherals requiring more than one bit to be set was
mistakenly modified. Commit 2c3294b079
partially fixed these errors, but some nodes for the U5 series are still
wrong.

Restore `clocks` on affected devices in corresponding STM32U5 DTSI.

Fixes: 57723cf405
Signed-off-by: Mathieu Choplain <mathieu.choplain@st.com>
(cherry picked from commit 37bdc38ec6)
2025-03-20 14:25:06 -07:00
Luis Ubieda
7412674a3c rtio: workq: Select Early P4WQ threads init
Otherwise the RTIO Workqueue is not functional for devices during init.

An example of this issue is devices using SPI transfers with default
spi_rtio.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit 2ce2794987)
2025-02-26 09:38:00 -06:00
Luis Ubieda
0c413db7eb p4wq: Add Kconfig to perform early init on threads
This makes them functional for devices during init. The default
behavior keeps late initialization, as before.

Signed-off-by: Luis Ubieda <luisf@croxel.com>
(cherry picked from commit ec45b29ea3)
2025-02-26 09:38:00 -06:00
Simon Guinot
af81a8ba32 drivers: led: lp50xx: check the number of LED colors
The current code assumes (especially in the lp50xx_set_color function)
that the number of LED colors defined in DT is not greater than 3. But
since this is not checked, it is not necessarily the case.

This patch consolidates the initialization of the lp50xx LED driver by
checking the number of colors for each LED found in DT.

Signed-off-by: Simon Guinot <simon.guinot@sequanux.org>
(cherry picked from commit ffbb8f981a)
2025-02-26 09:31:58 -06:00
Robert Lubos
546386cd1f net: if: Setup DAD timer regardless of DAD query result
On rare occasions when sending the DAD NS packet fails, we should still
set up the DAD timer, unless we implement some more advanced
retry mechanism. If we don't, the IPv6 address added to the
interface will never become usable in such cases.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 008a7ca202)
2025-02-26 09:30:50 -06:00
Robert Lubos
aa48023434 net: if: Clear neighbor cache when removing IPv6 addr with active DAD
DAD creates an entry in the neighbor cache for the queried (own)
address. In case the address is removed from the interface while DAD is
still incomplete, we need to remove the corresponding cache entry (just
like in case of DAD timeout) to avoid stale entries in the cache.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit a09fd8e97f)
2025-02-26 09:30:50 -06:00
Robert Lubos
3410679046 tests: net: conn_mgr: Use valid LL address in tests
Using an L2 address of length 1 (an invalid/unsupported one) confused the
IPv6 layer during LL address generation: since that length was not valid,
the address was not initialized properly and part of it was left
semi-random. This could result, for example, in filling up the neighbor
tables.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 3089a5d116)
2025-02-26 09:30:50 -06:00
Derek Snell
7eab725dfb soc: nxp: rw: Update system core clock frequency
After updating main_clk, the frequency tracked in the MCUXpresso SDK
HAL framework needs to be updated for other drivers.

Signed-off-by: Derek Snell <derek.snell@nxp.com>
(cherry picked from commit 793e44afdd)
2025-02-26 09:28:45 -06:00
Yong Cong Sin
a50476ca00 logging: log_cmds: init uninitialized backend on log_go()
For backends that do not autostart themselves, initialize
& enable them on `log backend <log_backend_*> go`, so
that they function properly.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit f840a35be3)
2025-02-20 13:03:28 -06:00
Yong Cong Sin
ae73df5621 logging: init backend id regardless of autostart
The `id` is basically a compile-time constant. Setting it every
time the backend is enabled is unnecessary. Instead, set it in
`z_log_init()` regardless of whether or not the backend needs to be
`autostart`ed.

Fixes an issue where `filter_get`/`filter_set`
accessed the wrong index and displayed the wrong log level when the
user accesses the status of an uninitialized backend via:
`log backend <uninitialized_backend> status`.

Also fixes an issue when the user tries to list the backends via:
`log list_backends`, where all uninitialized backends have
ID = 0.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
(cherry picked from commit 8dd9d924fe)
2025-02-20 13:03:28 -06:00
Jordan Yates
f61d53d5b5 bluetooth: increment BT_BUF_CMD_TX_COUNT
The extended advertising start procedure can consume both command
buffers in a single API call, resulting in `bt_le_create_conn_cancel`
being unable to claim a buffer to terminate the connection request.

Increase the command count if both extended advertising and Bluetooth
central are enabled in an application.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2025-02-20 10:56:17 -08:00
Jordan Yates
790bd7c44b bluetooth: host: hci_core: add missing NULL check
Add check that the command buffer claimed in `bt_le_create_conn_cancel`
is not `NULL`. Fixes a fault caused by providing the `NULL` buffer to
`bt_hci_cmd_state_set_init`.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2025-02-20 10:56:17 -08:00
Florian Weber
c2ad663314 rtio: workq: bugfix of memory allocation
This commit fixes a bug where the memory of the work request was
freed in the work handler while it was still needed by the p4wq.
This is fixed by using the new done_handler in the p4wq
to free that memory.

Signed-off-by: Florian Weber <Florian.Weber@live.de>
(cherry picked from commit b55b9ae72b)
2025-02-20 10:55:30 -08:00
Florian Weber
f0c05671da lib: os: p4wq: add done handler
Add an optional handler to the p4wq to give the submitting code
(e.g. rtio workq) the possibility to execute code after the work has
been successfully executed.

Signed-off-by: Florian Weber <Florian.Weber@live.de>
(cherry picked from commit 093b29fdb5)
2025-02-20 10:55:30 -08:00
Jukka Rissanen
b40cacb67d net: Update IP address refcount properly when address already exists
If an IP address already exists when an attempt is made to add it to the
network interface, just return it, but update the ref count if it was
not updated. This can happen if the address was added and then removed
while, for example, an active connection was still using it and keeping
the ref count > 0. In this case we must update the ref count so that the
IP address is not removed when the connection is closed.

Fixes #85380

Signed-off-by: Jukka Rissanen <jukka.rissanen@nordicsemi.no>
(cherry picked from commit ae05221762)
2025-02-20 10:52:59 -08:00
Erwan Gouriou
c1f3c446bc dts: stm32h7: Fix ltdc reset lines
LTDC reset lines were off by one. Fix them.

Signed-off-by: Erwan Gouriou <erwan.gouriou@st.com>
(cherry picked from commit 6f55a32da8)
2025-02-20 10:46:32 -08:00
Robert Lubos
cd85e0ef82 net: ipv6: Fix Neighbor Advertisement processing w/o TLLA option
According to RFC 4861, ch. 7.2.5:

 "If the Override flag is set, or the supplied link-layer address
  is the same as that in the cache, or no Target Link-Layer Address
  option was supplied, the received advertisement MUST update the
  Neighbor Cache entry as follows

  ...

  If the Solicited flag is set, the state of the entry MUST be
  set to REACHABLE"

This indicates that the Target Link-Layer Address option does not need to
be present in the received solicited Neighbor Advertisement to confirm
reachability. Therefore, remove the `tllao_offset` variable check from the
if condition responsible for updating the cache entry. No further changes
in the logic are required, because if the TLLA option is missing,
`lladdr_changed` will be set to false, so no LL address will be updated.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 02c153c8b1)
2025-02-20 10:44:32 -08:00
Robert Lubos
e47c933b29 net: ipv6: Send Neighbor Solicitations in PROBE state as unicast
According to RFC 4861, ch. 7.3.3:

 "Upon entering the PROBE state, a node sends a unicast Neighbor
  Solicitation message to the neighbor using the cached link-layer
  address."

Zephyr's implementation was not compliant with this behavior: instead of
sending a unicast probe for reachability confirmation, it was sending a
multicast packet. This commit fixes it.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit 8cd213e846)
2025-02-20 10:44:32 -08:00
Robert Lubos
c4cb9bd1af net: ipv6: Fix neighbor registration based on received RA message
When a Router Advertisement with the Source Link-Layer Address option is
received, the host should register a new neighbor marked as STALE
(RFC 4861, ch. 6.3.4). This behavior was broken, however, because
we always added a new neighbor in the INCOMPLETE state before processing
the SLLA option. As a result, the entry was not updated to the STALE
state, and a redundant Neighbor Solicitation was sent.

Fix this by moving the code responsible for adding neighbor in
INCOMPLETE state after options processing, and only as a fallback
behavior if the SLLA option was not present.

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
(cherry picked from commit fce53922ef)
2025-02-20 10:44:32 -08:00
Lingao Meng
f001cf6de9 Bluetooth: Mesh: Fix Assert in bt_mesh_adv_unref when messages to a proxy
Fixes: https://github.com/zephyrproject-rtos/zephyr/issues/83904

The fix is to define a separate variable for each proxy FIFO.

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
(cherry picked from commit 6371080406)
2025-02-13 09:44:08 -08:00
47 changed files with 609 additions and 169 deletions

View File

@@ -22,6 +22,11 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: Install Python dependencies
run: |
sudo pip3 install -U setuptools wheel pip

View File

@@ -70,6 +70,11 @@ jobs:
cancel-in-progress: true
steps:
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: install-pkgs
run: |
sudo apt-get update
@@ -202,6 +207,11 @@ jobs:
- name: checkout
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: 3.12
- name: install-pkgs
run: |
apt-get update

View File

@@ -921,6 +921,10 @@ dark-mode-toggle::part(toggleLabel){
font-size: unset;
}
div.graphviz > object {
filter: var(--graphviz-filter);
}
/* Home page grid display */
.grid {
list-style-type: none !important;

View File

@@ -93,4 +93,6 @@
--btn-neutral-background-color: #404040;
--btn-neutral-hover-background-color: #505050;
--footer-color: #aaa;
--graphviz-filter: invert(0.9) brightness(1.2);
}

View File

@@ -91,4 +91,6 @@
--btn-neutral-background-color: #f3f6f6;
--btn-neutral-hover-background-color: #e5ebeb;
--footer-color: #808080;
--graphviz-filter: none;
}

View File

@@ -325,6 +325,9 @@ graphviz_dot_args = [
"-Ncolor=gray60",
"-Nfontcolor=gray25",
"-Ecolor=gray60",
"-Gfontname=system-ui,-apple-system,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif",
"-Nfontname=system-ui,-apple-system,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif",
"-Efontname=SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier New,Courier,monospace",
]
# -- Options for sphinx_copybutton ----------------------------------------

View File

@@ -2,6 +2,48 @@
.. _zephyr_4.0:
.. _zephyr_4.0.1:
Zephyr 4.0.1
############
This is an LTS maintenance release with fixes.
Security Vulnerability Related
******************************
The following CVEs are addressed by this release:
* :cve:`2025-27809` `TLS clients may unwittingly skip server authentication
<https://mbed-tls.readthedocs.io/en/latest/security-advisories/mbedtls-security-advisory-2025-03-1/>`_
* :cve:`2025-27810` `Potential authentication bypass in TLS handshake
<https://mbed-tls.readthedocs.io/en/latest/security-advisories/mbedtls-security-advisory-2025-03-2/>`_
More detailed information can be found in:
https://docs.zephyrproject.org/latest/security/vulnerabilities.html
Issues fixed
************
These GitHub issues were addressed since the previous 4.0.0 tagged release:
Mbed TLS
********
Mbed TLS was updated to version 3.6.3 (from 3.6.2). The release notes can be found at:
https://github.com/Mbed-TLS/mbedtls/releases/tag/mbedtls-3.6.3
Mbed TLS 3.6 is an LTS release that will be supported
with security and bug fixes until at least March 2027.
Trusted Firmware-M (TF-M)
*************************
TF-M was updated to version 2.1.2 (from 2.1.1). The release notes can be found at:
https://trustedfirmware-m.readthedocs.io/en/tf-mv2.1.2/releases/2.1.2.html
.. _zephyr_4.0.0:
Zephyr 4.0.0
############

View File

@@ -143,14 +143,14 @@ static int gpio_adp5585_config(const struct device *dev, gpio_pin_t pin, gpio_fl
data->output |= BIT(pin);
}
if (bank == 0) {
/* reg_value for ADP5585_GPO_OUT_MODE */
/* reg_value for ADP5585_GPO_DATA_OUT */
reg_value = (uint8_t)data->output;
} else {
/* reg_value for ADP5585_GPO_OUT_MODE */
/* reg_value for ADP5585_GPO_DATA_OUT */
reg_value = (uint8_t)(data->output >> 8);
}
ret = i2c_reg_write_byte_dt(&parent_cfg->i2c_bus,
ADP5585_GPO_OUT_MODE_A + bank,
ADP5585_GPO_DATA_OUT_A + bank,
reg_value);
if (ret != 0) {
goto out;

View File

@@ -153,7 +153,8 @@ static int lp50xx_set_color(const struct device *dev, uint32_t led,
{
const struct lp50xx_config *config = dev->config;
const struct led_info *led_info = lp50xx_led_to_info(config, led);
uint8_t buf[4];
uint8_t buf[LP50XX_COLORS_PER_LED + 1];
uint8_t i;
if (!led_info) {
return -ENODEV;
@@ -170,11 +171,11 @@ static int lp50xx_set_color(const struct device *dev, uint32_t led,
buf[0] = LP50XX_OUT0_COLOR(config->num_modules);
buf[0] += LP50XX_COLORS_PER_LED * led_info->index;
buf[1] = color[0];
buf[2] = color[1];
buf[3] = color[2];
for (i = 0; i < led_info->num_colors; i++) {
buf[1 + i] = color[i];
}
return i2c_write_dt(&config->bus, buf, sizeof(buf));
return i2c_write_dt(&config->bus, buf, led_info->num_colors + 1);
}
static int lp50xx_write_channels(const struct device *dev,
@@ -266,6 +267,7 @@ static int lp50xx_enable(const struct device *dev, bool enable)
static int lp50xx_init(const struct device *dev)
{
const struct lp50xx_config *config = dev->config;
uint8_t led;
int err;
if (!i2c_is_ready_dt(&config->bus)) {
@@ -273,6 +275,7 @@ static int lp50xx_init(const struct device *dev)
return -ENODEV;
}
/* Check LED configuration found in DT */
if (config->num_leds > config->max_leds) {
LOG_ERR("%s: invalid number of LEDs %d (max %d)",
dev->name,
@@ -280,6 +283,16 @@ static int lp50xx_init(const struct device *dev)
config->max_leds);
return -EINVAL;
}
for (led = 0; led < config->num_leds; led++) {
const struct led_info *led_info =
lp50xx_led_to_info(config, led);
if (led_info->num_colors > LP50XX_COLORS_PER_LED) {
LOG_ERR("%s: LED %d: invalid number of colors (max %d)",
dev->name, led, LP50XX_COLORS_PER_LED);
return -EINVAL;
}
}
/* Configure GPIO if present */
if (config->gpio_enable.port != NULL) {

View File

@@ -11,7 +11,7 @@
#define _BUILD_MEM_REGION(node_id) \
{.dt_addr = DT_REG_ADDR(DT_PARENT(node_id)),\
.dt_size = DT_REG_SIZE(DT_PARENT(node_id))}
.dt_size = DT_REG_SIZE(DT_PARENT(node_id))},
struct ret_mem_region {
uintptr_t dt_addr;

View File

@@ -37,7 +37,6 @@ endchoice
config ADXL345_STREAM
bool "Use FIFO to stream data"
select ADXL345_TRIGGER
default y
depends on SPI_RTIO
depends on SENSOR_ASYNC_API
help

View File

@@ -281,6 +281,7 @@ int adxl345_read_sample(const struct device *dev,
{
int16_t raw_x, raw_y, raw_z;
uint8_t axis_data[6], status1;
struct adxl345_dev_data *data = dev->data;
if (!IS_ENABLED(CONFIG_ADXL345_TRIGGER)) {
do {
@@ -303,6 +304,9 @@ int adxl345_read_sample(const struct device *dev,
sample->y = raw_y;
sample->z = raw_z;
sample->selected_range = data->selected_range;
sample->is_full_res = data->is_full_res;
return 0;
}
@@ -453,11 +457,13 @@ static int adxl345_init(const struct device *dev)
return -ENODEV;
}
#if CONFIG_ADXL345_STREAM
rc = adxl345_reg_write_byte(dev, ADXL345_FIFO_CTL_REG, ADXL345_FIFO_STREAM_MODE);
if (rc < 0) {
LOG_ERR("FIFO enable failed\n");
return -EIO;
}
#endif
rc = adxl345_reg_write_byte(dev, ADXL345_DATA_FORMAT_REG, ADXL345_RANGE_8G);
if (rc < 0) {

View File

@@ -202,6 +202,7 @@ struct adxl345_sample {
uint8_t res: 7;
#endif /* CONFIG_ADXL345_STREAM */
uint8_t selected_range;
bool is_full_res;
int16_t x;
int16_t y;
int16_t z;

View File

@@ -6,17 +6,42 @@
#include "adxl345.h"
#ifdef CONFIG_ADXL345_STREAM
/** The q-scale factor will always be the same, as the nominal LSB/g
* changes at the same rate the selected shift parameter per range:
*
* - At 2G: 256 LSB/g, 10-bits resolution.
* - At 4g: 128 LSB/g, 10-bits resolution.
* - At 8g: 64 LSB/g, 10-bits resolution.
* - At 16g 32 LSB/g, 10-bits resolution.
*/
static const uint32_t qscale_factor_no_full_res[] = {
/* (1.0 / Resolution-LSB-per-g * (2^31 / 2^5) * SENSOR_G / 1000000 */
[ADXL345_RANGE_2G] = UINT32_C(2570754),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^6) * SENSOR_G / 1000000 */
[ADXL345_RANGE_4G] = UINT32_C(2570754),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^7) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_8G] = UINT32_C(2570754),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^8) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_16G] = UINT32_C(2570754),
};
#define SENSOR_SCALING_FACTOR (SENSOR_G / (16 * 1000 / 100))
static const uint32_t accel_period_ns[] = {
[ADXL345_ODR_12HZ] = UINT32_C(1000000000) / 12,
[ADXL345_ODR_25HZ] = UINT32_C(1000000000) / 25,
[ADXL345_ODR_50HZ] = UINT32_C(1000000000) / 50,
[ADXL345_ODR_100HZ] = UINT32_C(1000000000) / 100,
[ADXL345_ODR_200HZ] = UINT32_C(1000000000) / 200,
[ADXL345_ODR_400HZ] = UINT32_C(1000000000) / 400,
/** Sensitivities based on Range:
*
* - At 2G: 256 LSB/g, 10-bits resolution.
* - At 4g: 256 LSB/g, 11-bits resolution.
* - At 8g: 256 LSB/g, 12-bits resolution.
* - At 16g 256 LSB/g, 13-bits resolution.
*/
static const uint32_t qscale_factor_full_res[] = {
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^5) * SENSOR_G / 1000000 */
[ADXL345_RANGE_2G] = UINT32_C(2570754),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^6) * SENSOR_G / 1000000 */
[ADXL345_RANGE_4G] = UINT32_C(1285377),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^7) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_8G] = UINT32_C(642688),
/* (1.0 / Resolution-LSB-per-g) * (2^31 / 2^8) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_16G] = UINT32_C(321344),
};
static const uint32_t range_to_shift[] = {
@@ -26,30 +51,6 @@ static const uint32_t range_to_shift[] = {
[ADXL345_RANGE_16G] = 8,
};
/* (1 / sensitivity) * (pow(2,31) / pow(2,shift)) * (unit_scaler) */
static const uint32_t qscale_factor_no_full_res[] = {
/* (1.0 / ADXL362_ACCEL_2G_LSB_PER_G) * (2^31 / 2^5) * SENSOR_G / 1000000 */
[ADXL345_RANGE_2G] = UINT32_C(2569011),
/* (1.0 / ADXL362_ACCEL_4G_LSB_PER_G) * (2^31 / 2^6) * SENSOR_G / 1000000 */
[ADXL345_RANGE_4G] = UINT32_C(642253),
/* (1.0 / ADXL362_ACCEL_8G_LSB_PER_G) * (2^31 / 2^7) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_8G] = UINT32_C(160563),
/* (1.0 / ADXL362_ACCEL_8G_LSB_PER_G) * (2^31 / 2^8) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_16G] = UINT32_C(40141),
};
/* (1 / sensitivity) * (pow(2,31) / pow(2,shift)) * (unit_scaler) */
static const uint32_t qscale_factor_full_res[] = {
/* (1.0 / ADXL362_ACCEL_2G_LSB_PER_G) * (2^31 / 2^5) * SENSOR_G / 1000000 */
[ADXL345_RANGE_2G] = UINT32_C(2569011),
/* (1.0 / ADXL362_ACCEL_4G_LSB_PER_G) * (2^31 / 2^6) * SENSOR_G / 1000000 */
[ADXL345_RANGE_4G] = UINT32_C(1284506),
/* (1.0 / ADXL362_ACCEL_8G_LSB_PER_G) * (2^31 / 2^7) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_8G] = UINT32_C(642253),
/* (1.0 / ADXL362_ACCEL_8G_LSB_PER_G) * (2^31 / 2^8) ) * SENSOR_G / 1000000 */
[ADXL345_RANGE_16G] = UINT32_C(321126),
};
static inline void adxl345_accel_convert_q31(q31_t *out, int16_t sample, int32_t range,
uint8_t is_full_res)
{
@@ -76,15 +77,28 @@ static inline void adxl345_accel_convert_q31(q31_t *out, int16_t sample, int32_t
}
break;
}
*out = sample * qscale_factor_full_res[range];
} else {
if (sample & BIT(9)) {
sample |= ADXL345_COMPLEMENT;
}
*out = sample * qscale_factor_no_full_res[range];
}
*out = sample * qscale_factor_no_full_res[range];
}
#ifdef CONFIG_ADXL345_STREAM
#define SENSOR_SCALING_FACTOR (SENSOR_G / (16 * 1000 / 100))
static const uint32_t accel_period_ns[] = {
[ADXL345_ODR_12HZ] = UINT32_C(1000000000) / 12,
[ADXL345_ODR_25HZ] = UINT32_C(1000000000) / 25,
[ADXL345_ODR_50HZ] = UINT32_C(1000000000) / 50,
[ADXL345_ODR_100HZ] = UINT32_C(1000000000) / 100,
[ADXL345_ODR_200HZ] = UINT32_C(1000000000) / 200,
[ADXL345_ODR_400HZ] = UINT32_C(1000000000) / 400,
};
static int adxl345_decode_stream(const uint8_t *buffer, struct sensor_chan_spec chan_spec,
uint32_t *fit, uint16_t max_count, void *data_out)
{
@@ -208,7 +222,12 @@ static int adxl345_decode_sample(const struct adxl345_sample *data,
struct sensor_chan_spec chan_spec, uint32_t *fit,
uint16_t max_count, void *data_out)
{
struct sensor_value *out = (struct sensor_value *)data_out;
struct sensor_three_axis_data *out = (struct sensor_three_axis_data *)data_out;
memset(out, 0, sizeof(struct sensor_three_axis_data));
out->header.base_timestamp_ns = k_ticks_to_ns_floor64(k_uptime_ticks());
out->header.reading_count = 1;
out->shift = range_to_shift[data->selected_range];
if (*fit > 0) {
return -ENOTSUP;
@@ -216,9 +235,12 @@ static int adxl345_decode_sample(const struct adxl345_sample *data,
switch (chan_spec.chan_type) {
case SENSOR_CHAN_ACCEL_XYZ:
adxl345_accel_convert(out++, data->x);
adxl345_accel_convert(out++, data->y);
adxl345_accel_convert(out, data->z);
adxl345_accel_convert_q31(&out->readings->x, data->x, data->selected_range,
data->is_full_res);
adxl345_accel_convert_q31(&out->readings->y, data->y, data->selected_range,
data->is_full_res);
adxl345_accel_convert_q31(&out->readings->z, data->z, data->selected_range,
data->is_full_res);
break;
default:
return -ENOTSUP;
@@ -226,7 +248,7 @@ static int adxl345_decode_sample(const struct adxl345_sample *data,
*fit = 1;
return 0;
return 1;
}
static int adxl345_decoder_decode(const uint8_t *buffer, struct sensor_chan_spec chan_spec,
@@ -259,10 +281,33 @@ static bool adxl345_decoder_has_trigger(const uint8_t *buffer, enum sensor_trigg
}
}
static int adxl345_get_size_info(struct sensor_chan_spec channel, size_t *base_size,
size_t *frame_size)
{
__ASSERT_NO_MSG(base_size != NULL);
__ASSERT_NO_MSG(frame_size != NULL);
if (channel.chan_type >= SENSOR_CHAN_ALL) {
return -ENOTSUP;
}
switch (channel.chan_type) {
case SENSOR_CHAN_ACCEL_XYZ:
*base_size = sizeof(struct sensor_three_axis_data);
*frame_size = sizeof(struct sensor_three_axis_sample_data);
return 0;
default:
break;
}
return -ENOTSUP;
}
SENSOR_DECODER_API_DT_DEFINE() = {
.get_frame_count = adxl345_decoder_get_frame_count,
.decode = adxl345_decoder_decode,
.has_trigger = adxl345_decoder_has_trigger,
.get_size_info = adxl345_get_size_info,
};
int adxl345_get_decoder(const struct device *dev, const struct sensor_decoder_api **decoder)

View File

@@ -2339,6 +2339,18 @@ static int uarte_instance_init(const struct device *dev,
: UART_CFG_FLOW_CTRL_NONE, \
}
/* Macro determines if PM actions are interrupt safe. They are in case of
* asynchronous API (except for instance in fast power domain) and non-asynchronous
* API if RX is disabled. Macro must resolve to a literal 1 or 0.
*/
#define UARTE_PM_ISR_SAFE(idx) \
COND_CODE_1(INSTANCE_IS_FAST_PD(_, /*empty*/, idx, _), \
(0), \
(COND_CODE_1(CONFIG_UART_##idx##_ASYNC, \
(PM_DEVICE_ISR_SAFE), \
(COND_CODE_1(UARTE_PROP(idx, disable_rx), \
(PM_DEVICE_ISR_SAFE), (0)))))) \
#define UART_NRF_UARTE_DEVICE(idx) \
NRF_DT_CHECK_NODE_HAS_PINCTRL_SLEEP(UARTE(idx)); \
UARTE_INT_DRIVEN(idx); \
@@ -2405,7 +2417,7 @@ static int uarte_instance_init(const struct device *dev,
} \
\
PM_DEVICE_DT_DEFINE(UARTE(idx), uarte_nrfx_pm_action, \
PM_DEVICE_ISR_SAFE); \
UARTE_PM_ISR_SAFE(idx)); \
\
DEVICE_DT_DEFINE(UARTE(idx), \
uarte_##idx##_init, \

View File

@@ -37,6 +37,24 @@ config SPI_RTIO
This option enables the RTIO API calls. RTIO support is
experimental as the API itself is unstable.
if SPI_RTIO
config SPI_RTIO_FALLBACK_MSGS
int "Number of available spi_buf structs for the default handler to use"
default 4
help
When RTIO is used with a driver that does not yet implement the submit API
natively, the submissions are converted back to struct spi_buf values that
are passed to the blocking SPI transceive call. This requires a number of
msgs to be available on the stack to hold the converted submissions, and
MISRA rules dictate that this number be known in advance.
In all likelihood the default of 4 works for everyone. If you use RTIO
with a driver that does not implement submit natively and you get an error
about not having enough SPI msgs, this is the Kconfig option to raise.
endif # SPI_RTIO
config SPI_SLAVE
bool "Slave support [EXPERIMENTAL]"
select EXPERIMENTAL


@@ -22,6 +22,7 @@ static void spi_rtio_iodev_default_submit_sync(struct rtio_iodev_sqe *iodev_sqe)
{
struct spi_dt_spec *dt_spec = iodev_sqe->sqe.iodev->data;
const struct device *dev = dt_spec->bus;
uint8_t num_msgs = 0;
int err = 0;
LOG_DBG("Sync RTIO work item for: %p", (void *)dev);
@@ -33,67 +34,103 @@ static void spi_rtio_iodev_default_submit_sync(struct rtio_iodev_sqe *iodev_sqe)
struct rtio_iodev_sqe *txn_head = iodev_sqe;
struct rtio_iodev_sqe *txn_curr = iodev_sqe;
/* We allocate the spi_buf's on the stack, to do so
* the count of messages needs to be determined to
* ensure we don't go over the statically sized array.
*/
do {
switch (txn_curr->sqe.op) {
case RTIO_OP_RX:
case RTIO_OP_TX:
case RTIO_OP_TINY_TX:
case RTIO_OP_TXRX:
num_msgs++;
break;
default:
LOG_ERR("Invalid op code %d for submission %p", txn_curr->sqe.op,
(void *)&txn_curr->sqe);
err = -EIO;
break;
}
txn_curr = rtio_txn_next(txn_curr);
} while (err == 0 && txn_curr != NULL);
if (err != 0) {
rtio_iodev_sqe_err(txn_head, err);
return;
}
/* Allocate msgs on the stack; MISRA doesn't like VLAs, so we need a statically
 * sized array here. It's unlikely a transaction has more than 4 SPI messages;
 * typically there are only 2: one to write a register address and another to
 * read/write the register data into an array.
 */
if (num_msgs > CONFIG_SPI_RTIO_FALLBACK_MSGS) {
LOG_ERR("At most CONFIG_SPI_RTIO_FALLBACK_MSGS"
" submissions in a transaction are"
" allowed in the default handler");
rtio_iodev_sqe_err(txn_head, -ENOMEM);
return;
}
struct spi_buf tx_bufs[CONFIG_SPI_RTIO_FALLBACK_MSGS];
struct spi_buf rx_bufs[CONFIG_SPI_RTIO_FALLBACK_MSGS];
struct spi_buf_set tx_buf_set = {
.buffers = tx_bufs,
.count = num_msgs,
};
struct spi_buf_set rx_buf_set = {
.buffers = rx_bufs,
.count = num_msgs,
};
txn_curr = txn_head;
for (size_t i = 0 ; i < num_msgs ; i++) {
struct rtio_sqe *sqe = &txn_curr->sqe;
struct spi_buf tx_buf = {0};
struct spi_buf_set tx_buf_set = {
.buffers = &tx_buf,
};
struct spi_buf rx_buf = {0};
struct spi_buf_set rx_buf_set = {
.buffers = &rx_buf,
};
LOG_DBG("Preparing transfer: %p", txn_curr);
switch (sqe->op) {
case RTIO_OP_RX:
rx_buf.buf = sqe->rx.buf;
rx_buf.len = sqe->rx.buf_len;
rx_buf_set.count = 1;
rx_bufs[i].buf = sqe->rx.buf;
rx_bufs[i].len = sqe->rx.buf_len;
tx_bufs[i].buf = NULL;
tx_bufs[i].len = sqe->rx.buf_len;
break;
case RTIO_OP_TX:
tx_buf.buf = (uint8_t *)sqe->tx.buf;
tx_buf.len = sqe->tx.buf_len;
tx_buf_set.count = 1;
rx_bufs[i].buf = NULL;
rx_bufs[i].len = sqe->tx.buf_len;
tx_bufs[i].buf = (uint8_t *)sqe->tx.buf;
tx_bufs[i].len = sqe->tx.buf_len;
break;
case RTIO_OP_TINY_TX:
tx_buf.buf = (uint8_t *)sqe->tiny_tx.buf;
tx_buf.len = sqe->tiny_tx.buf_len;
tx_buf_set.count = 1;
rx_bufs[i].buf = NULL;
rx_bufs[i].len = sqe->tiny_tx.buf_len;
tx_bufs[i].buf = (uint8_t *)sqe->tiny_tx.buf;
tx_bufs[i].len = sqe->tiny_tx.buf_len;
break;
case RTIO_OP_TXRX:
rx_buf.buf = sqe->txrx.rx_buf;
rx_buf.len = sqe->txrx.buf_len;
tx_buf.buf = (uint8_t *)sqe->txrx.tx_buf;
tx_buf.len = sqe->txrx.buf_len;
rx_buf_set.count = 1;
tx_buf_set.count = 1;
rx_bufs[i].buf = sqe->txrx.rx_buf;
rx_bufs[i].len = sqe->txrx.buf_len;
tx_bufs[i].buf = (uint8_t *)sqe->txrx.tx_buf;
tx_bufs[i].len = sqe->txrx.buf_len;
break;
default:
LOG_ERR("Invalid op code %d for submission %p\n", sqe->op, (void *)sqe);
err = -EIO;
break;
}
if (!err) {
struct spi_buf_set *tx_buf_ptr = tx_buf_set.count > 0 ? &tx_buf_set : NULL;
struct spi_buf_set *rx_buf_ptr = rx_buf_set.count > 0 ? &rx_buf_set : NULL;
txn_curr = rtio_txn_next(txn_curr);
}
err = spi_transceive_dt(dt_spec, tx_buf_ptr, rx_buf_ptr);
if (err == 0) {
__ASSERT_NO_MSG(num_msgs > 0);
err = spi_transceive_dt(dt_spec, &tx_buf_set, &rx_buf_set);
}
/* NULL if this submission is not a transaction */
txn_curr = rtio_txn_next(txn_curr);
}
} while (err >= 0 && txn_curr != NULL);
if (err < 0) {
LOG_ERR("Transfer failed: %d", err);
if (err != 0) {
rtio_iodev_sqe_err(txn_head, err);
} else {
LOG_DBG("Transfer OK: %d", err);
rtio_iodev_sqe_ok(txn_head, err);
rtio_iodev_sqe_ok(txn_head, 0);
}
}


@@ -88,7 +88,7 @@
interrupts = <88 0>, <89 0>;
interrupt-names = "ltdc", "ltdc_er";
clocks = <&rcc STM32_CLOCK(APB3, 3U)>;
resets = <&rctl STM32_RESET(APB3, 4U)>;
resets = <&rctl STM32_RESET(APB3, 3U)>;
status = "disabled";
};


@@ -49,7 +49,7 @@
interrupts = <88 0>, <89 0>;
interrupt-names = "ltdc", "ltdc_er";
clocks = <&rcc STM32_CLOCK(APB3, 3U)>;
resets = <&rctl STM32_RESET(APB3, 4U)>;
resets = <&rctl STM32_RESET(APB3, 3U)>;
status = "disabled";
};


@@ -42,7 +42,7 @@
interrupts = <88 0>, <89 0>;
interrupt-names = "ltdc", "ltdc_er";
clocks = <&rcc STM32_CLOCK(APB3, 3U)>;
resets = <&rctl STM32_RESET(APB3, 4U)>;
resets = <&rctl STM32_RESET(APB3, 3U)>;
status = "disabled";
};


@@ -52,7 +52,7 @@
interrupts = <88 0>, <89 0>;
interrupt-names = "ltdc", "ltdc_er";
clocks = <&rcc STM32_CLOCK(APB3, 3U)>;
resets = <&rctl STM32_RESET(APB3, 4U)>;
resets = <&rctl STM32_RESET(APB3, 3U)>;
status = "disabled";
};


@@ -304,7 +304,7 @@
compatible = "zephyr,memory-region", "st,stm32-backup-sram";
reg = <0x40036400 DT_SIZE_K(2)>;
/* BKPSRAMEN and RAMCFGEN clock enable */
clocks = <&rcc STM32_CLOCK(AHB1, 28U)>;
clocks = <&rcc STM32_CLOCK_BUS_AHB1 ((1 << 28) | (1 << 17))>;
zephyr,memory-region = "BACKUP_SRAM";
status = "disabled";
};


@@ -106,7 +106,8 @@
num-bidir-endpoints = <9>;
ram-size = <4096>;
maximum-speed = "high-speed";
clocks = <&rcc STM32_CLOCK(AHB2, 15U)>,
/* Enable OTG_HS PHY and peripheral clocks (OTGHSPHYEN | OTGEN) */
clocks = <&rcc STM32_CLOCK_BUS_AHB2 ((1 << 15) | (1 << 14))>,
<&rcc STM32_SRC_HSI48 ICKLK_SEL(0)>;
phys = <&otghs_phy>;
status = "disabled";


@@ -83,6 +83,7 @@ struct log_backend_control_block {
void *ctx;
uint8_t id;
bool active;
bool initialized;
/* Initialization level. */
uint8_t level;
@@ -140,6 +141,7 @@ static inline void log_backend_init(const struct log_backend *const backend)
if (backend->api->init) {
backend->api->init(backend);
}
backend->cb->initialized = true;
}
/**


@@ -400,12 +400,12 @@ static inline char z_log_minimal_level_to_char(int level)
#define LOG_FILTER_SLOT_GET(_filters, _id) \
((*(_filters) >> LOG_FILTER_SLOT_SHIFT(_id)) & LOG_FILTER_SLOT_MASK)
#define LOG_FILTER_SLOT_SET(_filters, _id, _filter) \
do { \
*(_filters) &= ~(LOG_FILTER_SLOT_MASK << \
LOG_FILTER_SLOT_SHIFT(_id)); \
*(_filters) |= ((_filter) & LOG_FILTER_SLOT_MASK) << \
LOG_FILTER_SLOT_SHIFT(_id); \
#define LOG_FILTER_SLOT_SET(_filters, _id, _filter) \
do { \
uint32_t others = *(_filters) & ~(LOG_FILTER_SLOT_MASK << \
LOG_FILTER_SLOT_SHIFT(_id)); \
*(_filters) = others | (((_filter) & LOG_FILTER_SLOT_MASK) << \
LOG_FILTER_SLOT_SHIFT(_id)); \
} while (false)
#define LOG_FILTER_AGGR_SLOT_IDX 0


@@ -94,6 +94,11 @@ struct net_if_addr {
struct {
/** Duplicate address detection (DAD) timer */
sys_snode_t dad_node;
/** DAD needed list node */
sys_snode_t dad_need_node;
/** DAD start time */
uint32_t dad_start;
/** How many times we have done DAD */
@@ -104,6 +109,11 @@ struct net_if_addr {
struct {
/** Address conflict detection (ACD) timer. */
sys_snode_t acd_node;
/** ACD needed list node */
sys_snode_t acd_need_node;
/** ACD timeout value. */
k_timepoint_t acd_timeout;
/** ACD probe/announcement counter. */
@@ -134,7 +144,10 @@ struct net_if_addr {
*/
uint8_t is_temporary : 1;
uint8_t _unused : 4;
/** Was this address added or not */
uint8_t is_added : 1;
uint8_t _unused : 3;
};
/**
@@ -1183,9 +1196,10 @@ int net_if_set_link_addr_locked(struct net_if *iface,
extern int net_if_addr_unref_debug(struct net_if *iface,
sa_family_t family,
const void *addr,
struct net_if_addr **ifaddr,
const char *caller, int line);
#define net_if_addr_unref(iface, family, addr) \
net_if_addr_unref_debug(iface, family, addr, __func__, __LINE__)
#define net_if_addr_unref(iface, family, addr, ifaddr) \
net_if_addr_unref_debug(iface, family, addr, ifaddr, __func__, __LINE__)
extern struct net_if_addr *net_if_addr_ref_debug(struct net_if *iface,
sa_family_t family,
@@ -1197,7 +1211,8 @@ extern struct net_if_addr *net_if_addr_ref_debug(struct net_if *iface,
#else
extern int net_if_addr_unref(struct net_if *iface,
sa_family_t family,
const void *addr);
const void *addr,
struct net_if_addr **ifaddr);
extern struct net_if_addr *net_if_addr_ref(struct net_if *iface,
sa_family_t family,
const void *addr);


@@ -18,6 +18,13 @@ struct k_p4wq_work;
*/
typedef void (*k_p4wq_handler_t)(struct k_p4wq_work *work);
/**
* Optional P4 Queue done callback.
* Called once the work item's memory is no longer used by the p4wq.
* If unused, it must be set to NULL.
*/
typedef void (*k_p4wq_done_handler_t)(struct k_p4wq_work *work);
/**
* @brief P4 Queue Work Item
*
@@ -74,6 +81,11 @@ struct k_p4wq {
/* K_P4WQ_* flags above */
uint32_t flags;
/* Done handler, called every time after work has been successfully executed
 * and the k_p4wq_work item is no longer needed by the p4wq.
 */
k_p4wq_done_handler_t done_handler;
};
struct k_p4wq_initparam {
@@ -83,6 +95,7 @@ struct k_p4wq_initparam {
struct k_thread *threads;
struct z_thread_stack_element *stacks;
uint32_t flags;
k_p4wq_done_handler_t done_handler;
};
/**
@@ -95,8 +108,9 @@ struct k_p4wq_initparam {
* @param name Symbol name of the struct k_p4wq that will be defined
* @param n_threads Number of threads in the work queue pool
* @param stack_sz Requested stack size of each thread, in bytes
* @param dn_handler Function pointer to handler of type k_p4wq_done_handler_t
*/
#define K_P4WQ_DEFINE(name, n_threads, stack_sz) \
#define K_P4WQ_DEFINE_WITH_DONE_HANDLER(name, n_threads, stack_sz, dn_handler) \
static K_THREAD_STACK_ARRAY_DEFINE(_p4stacks_##name, \
n_threads, stack_sz); \
static struct k_thread _p4threads_##name[n_threads]; \
@@ -109,8 +123,19 @@ struct k_p4wq_initparam {
.stacks = &(_p4stacks_##name[0][0]), \
.queue = &name, \
.flags = 0, \
.done_handler = dn_handler, \
}
/**
* @brief Statically initialize a P4 Work Queue
*
* Same as K_P4WQ_DEFINE_WITH_DONE_HANDLER, but without the optional
* handler that is called every time work has been executed and is no
* longer used by the p4wq.
*/
#define K_P4WQ_DEFINE(name, n_threads, stack_sz) \
K_P4WQ_DEFINE_WITH_DONE_HANDLER(name, n_threads, stack_sz, NULL)
/**
* @brief Statically initialize an array of P4 Work Queues
*
@@ -122,8 +147,9 @@ struct k_p4wq_initparam {
* @param n_threads Number of threads and work queues
* @param stack_sz Requested stack size of each thread, in bytes
* @param flg Flags
* @param dn_handler Function pointer to handler of type k_p4wq_done_handler_t
*/
#define K_P4WQ_ARRAY_DEFINE(name, n_threads, stack_sz, flg) \
#define K_P4WQ_ARRAY_DEFINE_WITH_DONE_HANDLER(name, n_threads, stack_sz, flg, dn_handler) \
static K_THREAD_STACK_ARRAY_DEFINE(_p4stacks_##name, \
n_threads, stack_sz); \
static struct k_thread _p4threads_##name[n_threads]; \
@@ -136,8 +162,19 @@ struct k_p4wq_initparam {
.stacks = &(_p4stacks_##name[0][0]), \
.queue = name, \
.flags = K_P4WQ_QUEUE_PER_THREAD | flg, \
.done_handler = dn_handler, \
}
/**
* @brief Statically initialize an array of P4 Work Queues
*
* Same as K_P4WQ_ARRAY_DEFINE_WITH_DONE_HANDLER, but without the optional
* handler that is called every time work has been executed and is no
* longer used by the p4wq.
*/
#define K_P4WQ_ARRAY_DEFINE(name, n_threads, stack_sz, flg) \
K_P4WQ_ARRAY_DEFINE_WITH_DONE_HANDLER(name, n_threads, stack_sz, flg, NULL)
/**
* @brief Initialize P4 Queue
*


@@ -106,6 +106,16 @@ config MPSC_CLEAR_ALLOCATED
When enabled packet space is zeroed before returning from allocation.
endif
if SCHED_DEADLINE
config P4WQ_INIT_STAGE_EARLY
bool "Early initialization of P4WQ threads"
help
Initialize P4WQ threads early so that the P4WQ can be used during the
device initialization sequence.
endif
config REBOOT
bool "Reboot functionality"
help


@@ -104,7 +104,14 @@ static FUNC_NORETURN void p4wq_loop(void *p0, void *p1, void *p2)
if (!thread_was_requeued(_current)) {
sys_dlist_remove(&w->dlnode);
w->thread = NULL;
k_sem_give(&w->done_sem);
if (queue->done_handler) {
k_spin_unlock(&queue->lock, k);
queue->done_handler(w);
k = k_spin_lock(&queue->lock);
} else {
k_sem_give(&w->done_sem);
}
}
} else {
z_pend_curr(&queue->lock, k, &queue->waitq, K_FOREVER);
@@ -152,6 +159,7 @@ static int static_init(void)
if (!i || (pp->flags & K_P4WQ_QUEUE_PER_THREAD)) {
k_p4wq_init(q);
q->done_handler = pp->done_handler;
}
q->flags = pp->flags;
@@ -215,7 +223,11 @@ void k_p4wq_enable_static_thread(struct k_p4wq *queue, struct k_thread *thread,
* so they can initialize in parallel instead of serially on the main
* CPU.
*/
#if defined(CONFIG_P4WQ_INIT_STAGE_EARLY)
SYS_INIT(static_init, POST_KERNEL, 1);
#else
SYS_INIT(static_init, APPLICATION, 99);
#endif
void k_p4wq_submit(struct k_p4wq *queue, struct k_p4wq_work *item)
{
@@ -301,7 +313,14 @@ bool k_p4wq_cancel(struct k_p4wq *queue, struct k_p4wq_work *item)
if (ret) {
rb_remove(&queue->queue, &item->rbnode);
k_sem_give(&item->done_sem);
if (queue->done_handler) {
k_spin_unlock(&queue->lock, k);
queue->done_handler(item);
k = k_spin_lock(&queue->lock);
} else {
k_sem_give(&item->done_sem);
}
}
k_spin_unlock(&queue->lock, k);


@@ -132,6 +132,8 @@ __ramfunc void clock_init(void)
CLOCK_SetClkDiv(kCLOCK_DivSystickClk, 1U);
CLOCK_AttachClk(kSYSTICK_DIV_to_SYSTICK_CLK);
SystemCoreClockUpdate();
/* Set PLL FRG clock to 20MHz. */
CLOCK_SetClkDiv(kCLOCK_DivPllFrgClk, 13U);


@@ -46,7 +46,7 @@ manifest:
groups:
- optional
- name: tf-m-tests
revision: 502ea90105ee18f20c78f710e2ba2ded0fc0756e
revision: c712761dd5391bf3f38033643d28a736cae89a19
path: modules/tee/tf-m/tf-m-tests
remote: upstream
groups:


@@ -173,6 +173,7 @@ config BT_BUF_CMD_TX_SIZE
config BT_BUF_CMD_TX_COUNT
int "Number of HCI command buffers"
default 3 if BT_EXT_ADV && BT_CENTRAL
default 2
range 2 64
help


@@ -891,6 +891,9 @@ int bt_le_create_conn_cancel(void)
struct bt_hci_cmd_state_set state;
buf = bt_hci_cmd_create(BT_HCI_OP_LE_CREATE_CONN_CANCEL, 0);
if (!buf) {
return -ENOBUFS;
}
bt_hci_cmd_state_set_init(buf, &state, bt_dev.flags,
BT_DEV_INITIATING, false);


@@ -58,7 +58,11 @@ struct bt_mesh_adv_ctx {
};
struct bt_mesh_adv {
sys_snode_t node;
void *adv_bearer;
#if defined(CONFIG_BT_MESH_GATT)
void *gatt_bearer[CONFIG_BT_MAX_CONN];
#endif
struct bt_mesh_adv_ctx ctx;


@@ -56,6 +56,24 @@ static struct bt_mesh_proxy_role roles[CONFIG_BT_MAX_CONN];
static int conn_count;
static void proxy_queue_put(struct bt_mesh_proxy_role *role, struct bt_mesh_adv *adv)
{
k_fifo_put(&role->pending, &(adv->gatt_bearer[bt_conn_index(role->conn)]));
}
static struct bt_mesh_adv *proxy_queue_get(struct bt_mesh_proxy_role *role)
{
void *gatt_bearer;
gatt_bearer = k_fifo_get(&role->pending, K_NO_WAIT);
if (!gatt_bearer) {
return NULL;
}
return CONTAINER_OF(gatt_bearer, struct bt_mesh_adv,
gatt_bearer[bt_conn_index(role->conn)]);
}
static void proxy_sar_timeout(struct k_work *work)
{
struct bt_mesh_proxy_role *role;
@@ -66,7 +84,7 @@ static void proxy_sar_timeout(struct k_work *work)
role = CONTAINER_OF(dwork, struct bt_mesh_proxy_role, sar_timer);
while (!k_fifo_is_empty(&role->pending)) {
struct bt_mesh_adv *adv = k_fifo_get(&role->pending, K_NO_WAIT);
struct bt_mesh_adv *adv = proxy_queue_get(role);
__ASSERT_NO_MSG(adv);
@@ -243,7 +261,7 @@ int bt_mesh_proxy_relay_send(struct bt_conn *conn, struct bt_mesh_adv *adv)
{
struct bt_mesh_proxy_role *role = &roles[bt_conn_index(conn)];
k_fifo_put(&role->pending, bt_mesh_adv_ref(adv));
proxy_queue_put(role, bt_mesh_adv_ref(adv));
bt_mesh_wq_submit(&role->work);
@@ -259,7 +277,7 @@ static void proxy_msg_send_pending(struct k_work *work)
return;
}
adv = k_fifo_get(&role->pending, K_NO_WAIT);
adv = proxy_queue_get(role);
if (!adv) {
return;
}


@@ -341,7 +341,19 @@ static int log_go(const struct shell *sh,
char **argv)
{
if (backend || !IS_ENABLED(CONFIG_LOG_FRONTEND)) {
log_backend_activate(backend, backend->cb->ctx);
if (!backend->cb->initialized) {
log_backend_init(backend);
while (log_backend_is_ready(backend) != 0) {
if (IS_ENABLED(CONFIG_MULTITHREADING)) {
k_msleep(10);
}
}
if (log_backend_is_ready(backend) == 0) {
log_backend_enable(backend, backend->cb->ctx, CONFIG_LOG_MAX_LEVEL);
}
} else {
log_backend_activate(backend, backend->cb->ctx);
}
return 0;
}


@@ -286,6 +286,14 @@ void log_core_init(void)
if (IS_ENABLED(CONFIG_LOG_RUNTIME_FILTERING)) {
z_log_runtime_filters_init();
}
STRUCT_SECTION_FOREACH(log_backend, backend) {
uint32_t id;
/* As first slot in filtering mask is reserved, backend ID has offset.*/
id = LOG_FILTER_FIRST_BACKEND_SLOT_IDX;
id += backend - log_backend_get(0);
log_backend_id_set(backend, id);
}
}
static uint32_t activate_foreach_backend(uint32_t mask)
@@ -328,8 +336,8 @@ static uint32_t z_log_init(bool blocking, bool can_sleep)
int backend_index = 0;
/* Activate autostart backends */
STRUCT_SECTION_FOREACH(log_backend, backend) {
/* Activate autostart backends */
if (backend->autostart) {
log_backend_init(backend);


@@ -560,12 +560,6 @@ void log_backend_enable(struct log_backend const *const backend,
void *ctx,
uint32_t level)
{
/* As first slot in filtering mask is reserved, backend ID has offset.*/
uint32_t id = LOG_FILTER_FIRST_BACKEND_SLOT_IDX;
id += backend - log_backend_get(0);
log_backend_id_set(backend, id);
backend->cb->level = level;
backend_filter_set(backend, level);
log_backend_activate(backend, ctx);


@@ -1552,7 +1552,7 @@ static void ipv6_nd_reachable_timeout(struct k_work *work)
data->ns_count);
ret = net_ipv6_send_ns(nbr->iface, NULL, NULL,
NULL, &data->addr,
&data->addr, &data->addr,
false);
if (ret < 0) {
NET_DBG("Cannot send NS (%d)", ret);
@@ -1730,7 +1730,7 @@ static inline bool handle_na_neighbor(struct net_pkt *pkt,
if (na_hdr->flags & NET_ICMPV6_NA_FLAG_OVERRIDE ||
(!(na_hdr->flags & NET_ICMPV6_NA_FLAG_OVERRIDE) &&
tllao_offset && !lladdr_changed)) {
!lladdr_changed)) {
if (lladdr_changed) {
dbg_update_neighbor_lladdr_raw(
@@ -2562,12 +2562,6 @@ static int handle_ra_input(struct net_icmp_ctx *ctx,
nd_opt_hdr = (struct net_icmpv6_nd_opt_hdr *)
net_pkt_get_data(pkt, &nd_access);
/* Add neighbor cache entry using link local address, regardless of link layer address
* presence in Router Advertisement.
*/
nbr = net_ipv6_nbr_add(net_pkt_iface(pkt), (struct in6_addr *)NET_IPV6_HDR(pkt)->src, NULL,
true, NET_IPV6_NBR_STATE_INCOMPLETE);
while (nd_opt_hdr) {
net_pkt_acknowledge_data(pkt, &nd_access);
@@ -2668,6 +2662,15 @@ static int handle_ra_input(struct net_icmp_ctx *ctx,
net_pkt_get_data(pkt, &nd_access);
}
if (nbr == NULL) {
/* Add neighbor cache entry using link local address, regardless
* of link layer address presence in Router Advertisement.
*/
nbr = net_ipv6_nbr_add(net_pkt_iface(pkt),
(struct in6_addr *)NET_IPV6_HDR(pkt)->src,
NULL, true, NET_IPV6_NBR_STATE_INCOMPLETE);
}
router = net_if_ipv6_router_lookup(net_pkt_iface(pkt),
(struct in6_addr *)ip_hdr->src);
if (router) {


@@ -1295,21 +1295,25 @@ void net_if_ipv6_start_dad(struct net_if *iface,
ifaddr->dad_count = 1U;
if (!net_ipv6_start_dad(iface, ifaddr)) {
ifaddr->dad_start = k_uptime_get_32();
ifaddr->ifindex = net_if_get_by_iface(iface);
if (net_ipv6_start_dad(iface, ifaddr) != 0) {
NET_ERR("Interface %p failed to send DAD query for %s",
iface,
net_sprint_ipv6_addr(&ifaddr->address.in6_addr));
}
k_mutex_lock(&lock, K_FOREVER);
sys_slist_find_and_remove(&active_dad_timers,
&ifaddr->dad_node);
sys_slist_append(&active_dad_timers, &ifaddr->dad_node);
k_mutex_unlock(&lock);
ifaddr->dad_start = k_uptime_get_32();
ifaddr->ifindex = net_if_get_by_iface(iface);
/* FUTURE: use schedule, not reschedule. */
if (!k_work_delayable_remaining_get(&dad_timer)) {
k_work_reschedule(&dad_timer,
K_MSEC(DAD_TIMEOUT));
}
k_mutex_lock(&lock, K_FOREVER);
sys_slist_find_and_remove(&active_dad_timers,
&ifaddr->dad_node);
sys_slist_append(&active_dad_timers, &ifaddr->dad_node);
k_mutex_unlock(&lock);
/* FUTURE: use schedule, not reschedule. */
if (!k_work_delayable_remaining_get(&dad_timer)) {
k_work_reschedule(&dad_timer,
K_MSEC(DAD_TIMEOUT));
}
} else {
NET_DBG("Interface %p is down, starting DAD for %s later.",
@@ -1320,8 +1324,9 @@ void net_if_ipv6_start_dad(struct net_if *iface,
void net_if_start_dad(struct net_if *iface)
{
struct net_if_addr *ifaddr;
struct net_if_addr *ifaddr, *next;
struct net_if_ipv6 *ipv6;
sys_slist_t dad_needed;
struct in6_addr addr = { };
int ret;
@@ -1353,6 +1358,8 @@ void net_if_start_dad(struct net_if *iface)
/* Start DAD for all the addresses that were added earlier when
* the interface was down.
*/
sys_slist_init(&dad_needed);
ARRAY_FOR_EACH(ipv6->unicast, i) {
if (!ipv6->unicast[i].is_used ||
ipv6->unicast[i].address.family != AF_INET6 ||
@@ -1362,9 +1369,21 @@ void net_if_start_dad(struct net_if *iface)
continue;
}
net_if_ipv6_start_dad(iface, &ipv6->unicast[i]);
sys_slist_prepend(&dad_needed, &ipv6->unicast[i].dad_need_node);
}
net_if_unlock(iface);
/* Start DAD for all the addresses without holding the iface lock
* to avoid any possible mutex deadlock issues.
*/
SYS_SLIST_FOR_EACH_CONTAINER_SAFE(&dad_needed,
ifaddr, next, dad_need_node) {
net_if_ipv6_start_dad(iface, ifaddr);
}
return;
out:
net_if_unlock(iface);
}
@@ -1411,7 +1430,10 @@ void net_if_ipv6_dad_failed(struct net_if *iface, const struct in6_addr *addr)
net_if_ipv6_addr_rm(iface, addr);
if (IS_ENABLED(CONFIG_NET_IPV6_PE) && iface->pe_enabled) {
net_if_unlock(iface);
net_ipv6_pe_start(iface, addr, timeout, preferred_lifetime);
return;
}
out:
@@ -1515,6 +1537,8 @@ void net_if_start_rs(struct net_if *iface)
goto out;
}
net_if_unlock(iface);
NET_DBG("Starting ND/RS for iface %p", iface);
if (!net_ipv6_start_rs(iface)) {
@@ -1530,6 +1554,7 @@ void net_if_start_rs(struct net_if *iface)
}
}
return;
out:
net_if_unlock(iface);
}
@@ -1907,6 +1932,7 @@ static inline void net_if_addr_init(struct net_if_addr *ifaddr,
uint32_t vlifetime)
{
ifaddr->is_used = true;
ifaddr->is_added = true;
ifaddr->is_temporary = false;
ifaddr->address.family = AF_INET6;
ifaddr->addr_type = addr_type;
@@ -1936,6 +1962,7 @@ struct net_if_addr *net_if_ipv6_addr_add(struct net_if *iface,
{
struct net_if_addr *ifaddr = NULL;
struct net_if_ipv6 *ipv6;
bool do_dad = false;
net_if_lock(iface);
@@ -1945,6 +1972,17 @@ struct net_if_addr *net_if_ipv6_addr_add(struct net_if *iface,
ifaddr = ipv6_addr_find(iface, addr);
if (ifaddr) {
/* Address already exists, just return it but update ref count
* if it was not updated. This could happen if the address was
* added and then removed but for example an active connection
* was still using it. In this case we must update the ref count
* so that the address is not removed if the connection is closed.
*/
if (!ifaddr->is_added) {
atomic_inc(&ifaddr->atomic_ref);
ifaddr->is_added = true;
}
goto out;
}
@@ -1976,8 +2014,7 @@ struct net_if_addr *net_if_ipv6_addr_add(struct net_if *iface,
*/
join_mcast_nodes(iface,
&ipv6->unicast[i].address.in6_addr);
net_if_ipv6_start_dad(iface, &ipv6->unicast[i]);
do_dad = true;
} else {
/* If DAD is not done for point-to-point links, then
* the address is usable immediately.
@@ -1991,9 +2028,17 @@ struct net_if_addr *net_if_ipv6_addr_add(struct net_if *iface,
sizeof(struct in6_addr));
ifaddr = &ipv6->unicast[i];
goto out;
break;
}
net_if_unlock(iface);
if (ifaddr != NULL && do_dad) {
net_if_ipv6_start_dad(iface, ifaddr);
}
return ifaddr;
out:
net_if_unlock(iface);
@@ -2002,6 +2047,7 @@ out:
bool net_if_ipv6_addr_rm(struct net_if *iface, const struct in6_addr *addr)
{
struct net_if_addr *ifaddr;
struct net_if_ipv6 *ipv6;
bool result = true;
int ret;
@@ -2016,11 +2062,12 @@ bool net_if_ipv6_addr_rm(struct net_if *iface, const struct in6_addr *addr)
goto out;
}
ret = net_if_addr_unref(iface, AF_INET6, addr);
ret = net_if_addr_unref(iface, AF_INET6, addr, &ifaddr);
if (ret > 0) {
NET_DBG("Address %s still in use (ref %d)",
net_sprint_ipv6_addr(addr), ret);
result = false;
ifaddr->is_added = false;
goto out;
} else if (ret < 0) {
NET_DBG("Address %s not found (%d)",
@@ -4161,7 +4208,9 @@ void net_if_ipv4_start_acd(struct net_if *iface, struct net_if_addr *ifaddr)
void net_if_start_acd(struct net_if *iface)
{
struct net_if_addr *ifaddr, *next;
struct net_if_ipv4 *ipv4;
sys_slist_t acd_needed;
int ret;
net_if_lock(iface);
@@ -4183,6 +4232,11 @@ void net_if_start_acd(struct net_if *iface)
ipv4->conflict_cnt = 0;
/* Start ACD for all the addresses that were added earlier when
* the interface was down.
*/
sys_slist_init(&acd_needed);
/* Start ACD for all the addresses that were added earlier when
* the interface was down.
*/
@@ -4194,9 +4248,21 @@ void net_if_start_acd(struct net_if *iface)
continue;
}
net_if_ipv4_start_acd(iface, &ipv4->unicast[i].ipv4);
sys_slist_prepend(&acd_needed, &ipv4->unicast[i].ipv4.acd_need_node);
}
net_if_unlock(iface);
/* Start ACD for all the addresses without holding the iface lock
* to avoid any possible mutex deadlock issues.
*/
SYS_SLIST_FOR_EACH_CONTAINER_SAFE(&acd_needed,
ifaddr, next, acd_need_node) {
net_if_ipv4_start_acd(iface, ifaddr);
}
return;
out:
net_if_unlock(iface);
}
@@ -4229,6 +4295,17 @@ struct net_if_addr *net_if_ipv4_addr_add(struct net_if *iface,
ifaddr = ipv4_addr_find(iface, addr);
if (ifaddr) {
/* TODO: should set addr_type/vlifetime */
/* Address already exists, just return it but update ref count
* if it was not updated. This could happen if the address was
* added and then removed but for example an active connection
* was still using it. In this case we must update the ref count
* so that the address is not removed if the connection is closed.
*/
if (!ifaddr->is_added) {
atomic_inc(&ifaddr->atomic_ref);
ifaddr->is_added = true;
}
goto out;
}
@@ -4251,6 +4328,7 @@ struct net_if_addr *net_if_ipv4_addr_add(struct net_if *iface,
if (ifaddr) {
ifaddr->is_used = true;
ifaddr->is_added = true;
ifaddr->address.family = AF_INET;
ifaddr->address.in_addr.s4_addr32[0] =
addr->s4_addr32[0];
@@ -4276,7 +4354,8 @@ struct net_if_addr *net_if_ipv4_addr_add(struct net_if *iface,
if (!(l2_flags_get(iface) & NET_L2_POINT_TO_POINT) &&
!net_ipv4_is_addr_loopback(addr)) {
net_if_ipv4_start_acd(iface, ifaddr);
/* ACD is started after the lock is released. */
;
} else {
ifaddr->addr_state = NET_ADDR_PREFERRED;
}
@@ -4284,7 +4363,12 @@ struct net_if_addr *net_if_ipv4_addr_add(struct net_if *iface,
net_mgmt_event_notify_with_info(NET_EVENT_IPV4_ADDR_ADD, iface,
&ifaddr->address.in_addr,
sizeof(struct in_addr));
goto out;
net_if_unlock(iface);
net_if_ipv4_start_acd(iface, ifaddr);
return ifaddr;
}
out:
@@ -4295,6 +4379,7 @@ out:
bool net_if_ipv4_addr_rm(struct net_if *iface, const struct in_addr *addr)
{
struct net_if_addr *ifaddr;
struct net_if_ipv4 *ipv4;
bool result = true;
int ret;
@@ -4309,11 +4394,12 @@ bool net_if_ipv4_addr_rm(struct net_if *iface, const struct in_addr *addr)
goto out;
}
ret = net_if_addr_unref(iface, AF_INET, addr);
ret = net_if_addr_unref(iface, AF_INET, addr, &ifaddr);
if (ret > 0) {
NET_DBG("Address %s still in use (ref %d)",
net_sprint_ipv4_addr(addr), ret);
result = false;
ifaddr->is_added = false;
goto out;
} else if (ret < 0) {
NET_DBG("Address %s not found (%d)",
@@ -4932,8 +5018,13 @@ static void remove_ipv6_ifaddr(struct net_if *iface,
#if defined(CONFIG_NET_IPV6_DAD)
if (!net_if_flag_is_set(iface, NET_IF_IPV6_NO_ND)) {
k_mutex_lock(&lock, K_FOREVER);
sys_slist_find_and_remove(&active_dad_timers,
&ifaddr->dad_node);
if (sys_slist_find_and_remove(&active_dad_timers,
&ifaddr->dad_node)) {
/* An address with an active DAD timer would still have a
 * stale entry in the neighbor cache.
*/
net_ipv6_nbr_rm(iface, &ifaddr->address.in6_addr);
}
k_mutex_unlock(&lock);
}
#endif
@@ -5048,11 +5139,13 @@ struct net_if_addr *net_if_addr_ref(struct net_if *iface,
int net_if_addr_unref_debug(struct net_if *iface,
sa_family_t family,
const void *addr,
struct net_if_addr **ret_ifaddr,
const char *caller, int line)
#else
int net_if_addr_unref(struct net_if *iface,
sa_family_t family,
const void *addr)
const void *addr,
struct net_if_addr **ret_ifaddr)
#endif /* NET_LOG_LEVEL >= LOG_LEVEL_DBG */
{
struct net_if_addr *ifaddr;
@@ -5106,6 +5199,10 @@ int net_if_addr_unref(struct net_if *iface,
#endif
if (ref > 1) {
if (ret_ifaddr) {
*ret_ifaddr = ifaddr;
}
return ref - 1;
}


@@ -783,7 +783,8 @@ static void tcp_conn_release(struct k_work *work)
net_if_addr_unref(conn->iface, conn->src.sa.sa_family,
conn->src.sa.sa_family == AF_INET ?
(const void *)&conn->src.sin.sin_addr :
(const void *)&conn->src.sin6.sin6_addr);
(const void *)&conn->src.sin6.sin6_addr,
NULL);
}
conn->context->tcp = NULL;


@@ -658,6 +658,12 @@ int coap_resource_parse_observe(struct coap_resource *resource, const struct coa
ret = coap_service_remove_observer(service, resource, addr, token, tkl);
if (ret < 0) {
LOG_WRN("Failed to remove observer (%d)", ret);
goto unlock;
}
if (ret == 0) {
/* Observer not found */
ret = -ENOENT;
}
}


@@ -4,6 +4,7 @@
config RTIO_WORKQ
bool "RTIO Work-queues service to process Sync operations"
select SCHED_DEADLINE
select P4WQ_INIT_STAGE_EARLY
select RTIO_CONSUME_SEM
help
Enable RTIO Work-queues to allow processing synchronous operations


@@ -11,15 +11,24 @@
#define RTIO_WORKQ_PRIO_HIGH RTIO_WORKQ_PRIO_MED - 1
#define RTIO_WORKQ_PRIO_LOW RTIO_WORKQ_PRIO_MED + 1
K_P4WQ_DEFINE(rtio_workq,
CONFIG_RTIO_WORKQ_THREADS_POOL,
CONFIG_RTIO_WORKQ_STACK_SIZE);
K_MEM_SLAB_DEFINE_STATIC(rtio_work_items_slab,
sizeof(struct rtio_work_req),
CONFIG_RTIO_WORKQ_POOL_ITEMS,
4);
static void rtio_work_req_done_handler(struct k_p4wq_work *work)
{
struct rtio_work_req *req = CONTAINER_OF(work,
struct rtio_work_req,
work);
k_mem_slab_free(&rtio_work_items_slab, req);
}
K_P4WQ_DEFINE_WITH_DONE_HANDLER(rtio_workq,
CONFIG_RTIO_WORKQ_THREADS_POOL,
CONFIG_RTIO_WORKQ_STACK_SIZE,
rtio_work_req_done_handler);
static void rtio_work_handler(struct k_p4wq_work *work)
{
struct rtio_work_req *req = CONTAINER_OF(work,
@@ -28,8 +37,6 @@ static void rtio_work_handler(struct k_p4wq_work *work)
struct rtio_iodev_sqe *iodev_sqe = req->iodev_sqe;
req->handler(iodev_sqe);
k_mem_slab_free(&rtio_work_items_slab, req);
}
struct rtio_work_req *rtio_work_req_alloc(void)


@@ -16,7 +16,7 @@
static void test_iface_init(struct net_if *iface)
{
/* Fake link layer address is needed to silence assertions inside the net core */
static uint8_t fake_lladdr[] = { 0x01 };
static uint8_t fake_lladdr[] = { 0x00, 0x00, 0x5E, 0x00, 0x53, 0x01 };
net_if_set_link_addr(iface, fake_lladdr, sizeof(fake_lladdr), NET_LINK_DUMMY);


@@ -15,7 +15,7 @@
static void test_iface_init(struct net_if *iface)
{
/* Fake link layer address is needed to silence assertions inside the net core */
static uint8_t fake_lladdr[] = { 0x01 };
static uint8_t fake_lladdr[] = { 0x00, 0x00, 0x5E, 0x00, 0x53, 0x01 };
net_if_set_link_addr(iface, fake_lladdr, sizeof(fake_lladdr), NET_LINK_DUMMY);


@@ -280,7 +280,7 @@ manifest:
revision: 2b498e6f36d6b82ae1da12c8b7742e318624ecf5
path: modules/lib/gui/lvgl
- name: mbedtls
revision: a78176c6ff0733ba08018cba4447bd3f20de7978
revision: 5f889934359deccf421554c7045a8381ef75298f
path: modules/crypto/mbedtls
groups:
- crypto
@@ -327,7 +327,7 @@ manifest:
groups:
- crypto
- name: trusted-firmware-m
revision: 8134106ef9cb3df60e8bd22b172532558e936bd2
revision: e2288c13ee0abc16163186523897e7910b03dd31
path: modules/tee/tf-m/trusted-firmware-m
groups:
- tee