Compare commits


38 Commits

Author SHA1 Message Date
Johan Hedberg
2bd062c258 release: bump version to Zephyr 2.2.1
Update version to 2.2.1

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-29 11:50:11 +03:00
Johan Hedberg
d2160cbf4a doc: releases: Add release notes for Zephyr 2.2.1
Add the release notes for Zephyr 2.2.1.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-29 11:50:11 +03:00
David Brown
9a3aa3c9c3 updatehub: Require peer verification with DTLS
DTLS without peer verification offers no security whatsoever (and is
arguably worse than not using DTLS in the first place).

Change the verification option to require peer verification. To
use this, it may be necessary to install and use a root certificate.

Signed-off-by: David Brown <david.brown@linaro.org>
2020-05-28 22:28:36 +03:00
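
A minimal sketch (not the actual updatehub code) of selecting required peer verification on a Zephyr DTLS socket; SOL_TLS and TLS_PEER_VERIFY are from Zephyr's TLS socket API, and the numeric value 2 corresponds to "required":

/* Sketch: require peer verification on the DTLS socket. */
int verify = 2; /* TLS peer verification: 2 == required */

if (zsock_setsockopt(sock, SOL_TLS, TLS_PEER_VERIFY, &verify, sizeof(verify)) < 0) {
        /* fail hard instead of silently continuing without verification */
        return -errno;
}
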
François Delawarde
137ebbc43f bluetooth: host: fix wrong bt/cf settings loading
This commit fixes the loading of bt/cf settings into memory. Previously,
only the data was loaded and not the address.

Signed-off-by: François Delawarde <fnde@demant.com>
2020-05-27 12:37:37 +03:00
Andries Kruithof
a520506076 Bluetooth: controller: split: Update feature exchange to BTCore V5.0
The existing feature exchange procedure does not give the proper
response as specified in the BT Core Specification 5.0.
The old behaviour was that the feature response returned the logical AND
of the features of both peers.
The behaviour implemented here is that the feature response returns the
feature set of the peer, except for octet 0, which is the logical AND of
the supported features.
Tested by using the bt shell with different feature sets on the two peers.

This fixes #25483

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-05-27 12:37:04 +03:00
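
A rough illustration of the corrected response construction (the identifiers are illustrative, not the controller's internal names): only octet 0 is ANDed with the local feature set, while the remaining octets echo the peer's features.

/* Illustrative only: build the feature-response payload. */
uint8_t rsp[8];

memcpy(rsp, peer_features, sizeof(rsp));      /* peer's full feature set   */
rsp[0] = peer_features[0] & own_features[0];  /* octet 0: logical AND only */
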
Vinayak Kariappa Chettimada
2758c33e39 Bluetooth: controller: split: Fix slave latency cancel race
Fix missing transmit buffer demultiplexing before checking if
slave latency needs to be maintained or cancelled.

This bug was detected when a new transmit buffer was enqueued
overlapping with the on-air radio transmission of an empty PDU
preceding the handling of the radio event done.

Symptoms of this bug are data transmission latencies of up to
(slave latency plus one) times the connection interval.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-05-22 16:47:57 +03:00
Morten Priess
2fa7a7e977 Bluetooth: controller: Added support for vendor ticker nodes
With BT_CTLR_USER_TICKER_ID_RANGE it is possible for vendors to add a
number of ticker nodes for proprietary purposes. The feature depends on
BT_CTLR_USER_EXT.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2020-05-18 13:05:17 +03:00
Johan Hedberg
dbfc2ebc6b Bluetooth: Fix NULL pointer dereference when bt_send() fails
The last parameter to hci_cmd_done() is expected to be a valid net_buf
since the function immediately tries to dereference it. Fix this by
passing the appropriate buffer reference to the function.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-06 11:56:34 +03:00
Trond Einar Snekvik
82083a90cc Bluetooth: Mesh: net_key_status only pull one key idx
Fixes a bug where the config client's net_key_status handler would attempt
to pull two key indexes from a message which only holds one.

Fixes #24601.

Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
2020-04-28 21:31:18 +03:00
Gerson Fernando Budke
341681f312 lib: updatehub: Improve probe security
Improve buffer overflow protection in probe_cb. This ensures that the socket
buffer has a fixed length and that the content received over CoAP fits
properly in the metadata buffer. It then ensures that the metadata content
is a valid string shorter than the metadata buffer size.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
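
A hedged sketch of the checks described above; the buffer and variable names are placeholders, not the actual probe_cb code:

/* Placeholder names; reject content that does not fit the metadata buffer. */
if (coap_payload_len >= sizeof(metadata)) {
        return -EINVAL;
}
memcpy(metadata, coap_payload, coap_payload_len);
metadata[coap_payload_len] = '\0';  /* guarantee a bounded, valid string */
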
Gerson Fernando Budke
3f4221db6f lib: updatehub: Refact to use bin2hex
Use bin2hex instead of an inline conversion.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
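
For illustration, the helper from Zephyr's sys/util.h can replace a hand-rolled loop roughly like this (the digest length macro and variable names are assumptions, not the exact updatehub code):

/* Sketch: convert a binary SHA-256 digest to a hex string using bin2hex(). */
char hash_hex[TC_SHA256_DIGEST_SIZE * 2U + 1U];

if (bin2hex(hash_bin, TC_SHA256_DIGEST_SIZE, hash_hex, sizeof(hash_hex)) == 0U) {
        return -EINVAL;  /* output buffer too small */
}
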
Gerson Fernando Budke
4d949eebef lib: updatehub: Fix variable-size string copy
A malformed JSON payload that is received from an UpdateHub server
may trigger memory corruption in the Zephyr OS. This could result
in a denial of service in the best case, or code execution in the
worst case.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
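
A minimal sketch of the bounded-copy pattern implied by the fix; the field names are hypothetical:

/* Hypothetical names; copy a JSON string value without trusting its length. */
size_t len = strlen(json_value);

if (len >= sizeof(dest)) {
        return -ENOMEM;  /* reject an oversized value instead of overflowing */
}
memcpy(dest, json_value, len + 1U);  /* include the terminating NUL */
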
Otavio Salvador
8e8ba2396c CODEOWNERS: Update UpdateHub owners
This replaces @chtavares592 with @nandojve, as he will be contributing to it
from now on.

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit a3d6b627b7)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
0e51f8bd79 lib: updatehub: Add missing do upgrade request call
After a successful image download, UpdateHub needs to inform MCUboot that
it must test the new image and then, on success, confirm this new image.
This adds the missing upgrade request call and fixes the upgrade flow.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit d1e2d345fb)
2020-04-22 14:37:28 +03:00
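
The missing step corresponds roughly to Zephyr's MCUboot DFU helpers; a hedged sketch of the request (the surrounding updatehub flow is simplified away):

/* Sketch: ask MCUboot to boot the downloaded image in test mode on next reset. */
#include <dfu/mcuboot.h>

int ret = boot_request_upgrade(BOOT_UPGRADE_TEST);

if (ret < 0) {
        return ret;  /* do not reboot if the upgrade request failed */
}
/* Once the new image has booted and proven itself, it confirms itself,
 * e.g. via boot_write_img_confirmed(). */
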
Gerson Fernando Budke
3e91120f2b lib: updatehub: Fix download block error
The current version aborts the update when the last transfer block is found.
Now the system checks the total size only at the end of the CoAP block
transfer, and installs the image if the download is ok.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 1128eab3f2)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
fb244f7542 lib: updatehub: Extract sha256 final method
Extract the sha256 finalization into its own method.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 1fe1b0eec6)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
f7ac49efe6 lib: updatehub: Fix buffer sizes
MAX_PAYLOAD_SIZE must reflect the size of COAP_BLOCK_x, because the block
size represents the maximum payload size; the current value creates
inconsistencies for the CoAP library. Likewise, MAX_DOWNLOAD_DATA must
allocate sufficient space for MAX_PAYLOAD_SIZE plus the CoAP header, etc.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 5f5919a900)
2020-04-22 14:37:28 +03:00
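
Roughly, the sizing relationship described is as follows (the values and the overhead constant are illustrative, not the exact updatehub definitions):

/* Illustrative only: the payload is bounded by the CoAP block size in use,
 * and the download buffer must also hold the CoAP header/option overhead. */
#define MAX_PAYLOAD_SIZE   1024  /* must match the COAP_BLOCK_1024 block size */
#define COAP_OVERHEAD      32    /* assumed margin for header and options     */
#define MAX_DOWNLOAD_DATA  (MAX_PAYLOAD_SIZE + COAP_OVERHEAD)
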
Gerson Fernando Budke
fce4c9e317 lib: updatehub: Fix build warnings
Fix all build warnings.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 92f9cd9f85)
2020-04-22 14:37:28 +03:00
Flavio Ceolin
385f88242e net: coap: Fix possible overflow
Fix possible integer overflow when parsing a CoAP packet.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-04-22 13:02:46 +03:00
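
The class of bug being fixed is an unchecked length addition while parsing; a hedged sketch using a u16 overflow helper (introduced by the commit listed just below), with illustrative variable names:

/* Illustrative: advance an offset only if the addition cannot wrap around. */
uint16_t new_offset;

if (u16_add_overflow(offset, opt_len, &new_offset) || new_offset > pkt_len) {
        return -EINVAL;  /* malformed packet: would overflow or overrun */
}
offset = new_offset;
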
Flavio Ceolin
4e8e72bd4c math: extras: Add overflow functions to u16
Add overflow-checking addition and multiplication functions for u16_t.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-04-22 13:02:46 +03:00
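
A sketch of what such helpers typically look like; Zephyr's actual implementation in sys/math_extras.h may differ (for example, by using compiler builtins):

/* Sketch: return true when the u16 operation would overflow. */
#include <stdbool.h>
#include <stdint.h>

static inline bool u16_add_overflow(uint16_t a, uint16_t b, uint16_t *result)
{
        uint16_t c = (uint16_t)(a + b);

        *result = c;
        return c < a;  /* wrapped around */
}

static inline bool u16_mul_overflow(uint16_t a, uint16_t b, uint16_t *result)
{
        uint32_t c = (uint32_t)a * (uint32_t)b;

        *result = (uint16_t)c;
        return c > UINT16_MAX;
}
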
Vinayak Kariappa Chettimada
eee10e9aff Bluetooth: controller: split: Fix DLE duplicate requests
Fix implementation to handle back-to-back and duplicate
LENGTH_REQ PDU reception.

Relates to #23707.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:50 +03:00
Vinayak Kariappa Chettimada
b7e7153b95 Bluetooth: controller: split: Simplify DLE state checks
Simplify the Data Length Update Procedure state check when
processing incoming LENGTH_REQ/RSP PDUs.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:50 +03:00
Vinayak Kariappa Chettimada
525500123a Bluetooth: controller: split: Validate chan map and hop value
Add validation of the channel map and hop increment value
received in the CONNECT_IND PDU.

A zero bit count leads to a controller assert or a divide-by-zero
fault.

The hop increment shall be between 5 and 16 per the BT specification.

Relates to #23705.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:29 +03:00
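
A hedged sketch of the validation described; the code below is illustrative and not the controller's internal implementation:

/* Illustrative validation of CONNECT_IND channel map and hop increment. */
uint8_t chan_bits = 0U;

for (int i = 0; i < 5; i++) {
        chan_bits += (uint8_t)__builtin_popcount(chan_map[i]);
}
if (chan_bits == 0U) {
        return false;  /* zero used channels would later assert/divide by zero */
}
if ((hop < 5U) || (hop > 16U)) {
        return false;  /* hop increment outside the range required by the spec */
}
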
Vinayak Kariappa Chettimada
ff39065ed4 Bluetooth: controller: Fix ticker ticks_current value
Update the ticks_current value on the last stopped ticker
instance, so that when a new ticker instance is started
the anchor tick calculation uses the correct current tick
with respect to the supplied anchor ticks.

Fixes #23805.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:03 +03:00
Vinayak Kariappa Chettimada
1b253c2dba Bluetooth: controller: split: Fix slave latency during conn update
Fix regression in cancelling slave latency during Connection
Update Procedure.

Slave latency should not be applied between the ack of a
Connection Update Indication PDU and the instant.
When caching was introduced, the implementation missed this
consideration.

Relates to #23813.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 12:59:43 +03:00
Vinayak Kariappa Chettimada
37c46e3cf5 Bluetooth: controller: split: Fix regression in handling invalid pkt seq
Fix regression in handling invalid packet sequence in the
first packet in a connection.

This relates to commit 62c1e1a52b ("Bluetooth: controller:
split: Fix assert on invalid packet sequence")

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 12:59:20 +03:00
Andries Kruithof
c5f9a2d2bc [Backport-v2.2-branch] Bluetooth: controller: legacy: DLE timing
Fixes #23482.

There is a bug in the calculation of time for the DLE procedure:
the preamble size is incorrect for the 2M PHY.

This is for the legacy code; see PR #23570 for the split controller.

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-04-22 12:58:52 +03:00
Joakim Andersson
7e4bcff791 Bluetooth: SMP: Fix bond lost on pairing failure.
Fix an issue where established bonding information in the peripheral
is deleted when the central does not have the bond information.
This could be because the central has removed the bond information,
because this is in fact not the central but someone spoofing its
identity, or because of an accidental RPA match.

This is a regression from: a3e89e84a8

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-04-22 12:58:19 +03:00
Vinayak Kariappa Chettimada
ee4006e42e Bluetooth: controller: nRF: Revert use of ticker compat mode as default
Ticker is now fixed to avoid catch-up of periodic timeouts under
large ISR latencies.

Revert commit a749e28d98 ("Bluetooth: controller: split:
nRF: Use ticker compat mode as default").

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
bec2a497b9 Bluetooth: controller: Consider must_expire while avoiding catchup
If must_expire is set for a ticker instance, then ticker
expiry shall still perform catchup.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
2295af0f05 Bluetooth: controller: Fix ticks_slot_previous calculation
Fix ticks_slot_previous calculation in the ticker_job when
tickers are skipped.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
8e12b42af2 Bluetooth: controller: Fix ticker state on skipped
Reset ticker state in ticker_job for ticker instances that
have been skipped in the ticker_worker.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
a3c2a72c6c Bluetooth: controller: Remove lazy handling in ticker_worker
Removed lazy handling from ticker_worker as it is now taken
care of in ticker_job.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
c2e829ba8c Bluetooth: controller: Fix ticker catch up on ISR latency
Fix ticker to avoid catch-up of periodic timeouts in case of
large ISR latencies, such as in flash erase scenarios.

Relates to #23815.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Andries Kruithof
b83d9a5eea Bluetooth: controller: split: correct timing calculation in PKT_US
A bug in PKT_US resulted in wrong calculations for the 2M PHY.
This fixes the bug, verified on EBQ.
It also adds some defines for improved readability.

Fixes #23482

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-04-12 11:54:29 +03:00
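
For context, packet air time differs between the PHYs mainly in the preamble length and bit duration; a simplified, hedged version of such a calculation (ignoring the MIC):

/* Simplified sketch of BLE packet air time in microseconds.
 * 1M PHY: 1 us per bit, 1-octet preamble; 2M PHY: 0.5 us per bit, 2-octet preamble. */
#include <stdbool.h>
#include <stdint.h>

static inline uint32_t pkt_us(uint16_t payload_octets, bool phy_2m)
{
        uint32_t octets = (phy_2m ? 2U : 1U) /* preamble */ + 4U /* access addr */ +
                          2U /* header */ + payload_octets + 3U /* CRC */;
        uint32_t bits = octets * 8U;

        return phy_2m ? (bits / 2U) : bits;
}
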
Carles Cufi
a69cdb31ca ci: do not use latest sphinx/breathe releases for docs
The Sphinx 3.0.0 release recently broke the doc build; lock the Sphinx
version to something compatible while we upgrade dependencies.

breathe v4.15.0 was just released and fixes some compatibility issues
with the latest Sphinx. The combination works, but with many warnings that
we still need to either fix or whitelist. This is temporary until we are
able to use those new versions.

Backport of #24109 and #24166

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-04-08 17:14:25 +02:00
Dan Erichsen
d39cb42d09 bluetooth: host: Do not send unwanted SC indicate
Fixes #23485

When we create a GATT table dynamically, we also create a hash
identifying this table. This hash can be stored in persistent memory and
we can thus determine after recreating the GATT table whether the
services have changed or not from before the reboot.

When these hashes are identical, it implies that the table has not
changed, so a service changed indication should not be sent to
any bonded clients. The method for achieving this was to remove the
gatt_sc.work entry from the work queue. This work queue entry was meant
to send an indication to the clients once the table had been allocated.
If the final entry then caused the hashes to match, the indication
would be cancelled.

On unit testing this behaviour in simulation and in practice, we found
that the indication was sent nonetheless. The issue was traced to the
SERVICE_RANGE_CHANGED flag, which is set when the services are changed
and is cleared when the indications are being sent out.

It was the job of the work queue entry to clear this flag. As the
entry was never serviced, the flag was never cleared, so when
sc_commit() is called at the end of the process, it believes that there
is a new service change pending and therefore starts the job over,
creating a redundant indication to the clients.

This commit fixes the issue by clearing the flag when the work entry
is removed due to a hash match. This has been unit tested in a live
environment and in a simulation environment, and sanitycheck has been
run on it.

Signed-off-by: Dan Erichsen <daee@demant.com>
2020-03-17 18:16:57 +01:00
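
The fix amounts to clearing the pending flag when the queued indication work is cancelled; roughly (the flag and work item names follow the commit text above, and the surrounding code is a sketch, not the exact host implementation):

/* Sketch: on a DB hash match, cancel the queued SC indication and clear the
 * pending flag so sc_commit() does not restart the job. */
if (k_delayed_work_cancel(&gatt_sc.work) == 0) {
        atomic_clear_bit(gatt_sc.flags, SERVICE_RANGE_CHANGED);
}
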
Wolfgang Puffitsch
9d83a7cd27 Bluetooth: controller: split: Fix response to unexpected LL_FEATURE_RSP
Fix response to unexpected LL_FEATURE_RSP for the case that
BT_CTLR_SLAVE_FEAT_REQ is disabled. Fixes LL/PAC/SLA/BV-01 for such a
configuration.

Signed-off-by: Wolfgang Puffitsch <wopu@demant.com>
2020-03-17 18:16:47 +01:00
11625 changed files with 278663 additions and 822674 deletions


@@ -1,32 +0,0 @@
steps:
- command:
- .buildkite/run.sh
env:
ZEPHYR_TOOLCHAIN_VARIANT: "zephyr"
ZEPHYR_SDK_INSTALL_DIR: "/opt/sdk/zephyr-sdk-0.12.2"
parallelism: 400
timeout_in_minutes: 210
retry:
manual: true
plugins:
- docker#v3.5.0:
image: "zephyrprojectrtos/ci:v0.11.13"
propagate-environment: true
volumes:
- "/var/lib/buildkite-agent/git-mirrors:/var/lib/buildkite-agent/git-mirrors"
- "/var/lib/buildkite-agent/zephyr-module-cache:/var/lib/buildkite-agent/zephyr-module-cache"
- "/var/lib/buildkite-agent/zephyr-ccache:/root/.ccache"
workdir: "/workdir/zephyr"
agents:
- "queue=default"
- wait: ~
continue_on_failure: true
- plugins:
- junit-annotate#v1.7.0:
artifacts: twister-*.xml
notify:
- email: "builds+int+399+7809482394022958124@lists.zephyrproject.org"
if: build.state != "passed"


@@ -1,8 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
# report disk usage:
echo "--- $0 disk usage"
df -h


@@ -1,44 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
# Save off where we started so we can go back there
WORKDIR=${PWD}
echo "--- $0 disk usage"
df -h
du -hs /var/lib/buildkite-agent/*
docker images -a
docker system df -v
if [ -n "${BUILDKITE_PULL_REQUEST_BASE_BRANCH}" ]; then
git fetch -v origin ${BUILDKITE_PULL_REQUEST_BASE_BRANCH}
git checkout FETCH_HEAD
git config --local user.email "builds@zephyrproject.org"
git config --local user.name "Zephyr CI"
git merge --no-edit "${BUILDKITE_COMMIT}" || {
local merge_result=$?
echo "Merge failed: ${merge_result}"
git merge --abort
exit $merge_result
}
fi
mkdir -p /var/lib/buildkite-agent/zephyr-ccache/
# create cache dirs, no-op if they already exist
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/modules
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/tools
mkdir -p /var/lib/buildkite-agent/zephyr-module-cache/bootloader
# Clean cache - if it already exists
cd /var/lib/buildkite-agent/zephyr-module-cache
find -type f -not -path "*/.git/*" -not -name ".git" -delete
# Remove any stale locks
find -name index.lock -delete
# return from where we started so we can find pipeline files from
# git repo
cd ${WORKDIR}


@@ -1,28 +0,0 @@
steps:
- command:
- .buildkite/run.sh
env:
ZEPHYR_TOOLCHAIN_VARIANT: "zephyr"
ZEPHYR_SDK_INSTALL_DIR: "/opt/sdk/zephyr-sdk-0.12.2"
parallelism: 20
timeout_in_minutes: 180
retry:
manual: true
plugins:
- docker#v3.5.0:
image: "zephyrprojectrtos/ci:v0.11.13"
propagate-environment: true
volumes:
- "/var/lib/buildkite-agent/git-mirrors:/var/lib/buildkite-agent/git-mirrors"
- "/var/lib/buildkite-agent/zephyr-module-cache:/var/lib/buildkite-agent/zephyr-module-cache"
- "/var/lib/buildkite-agent/zephyr-ccache:/root/.ccache"
workdir: "/workdir/zephyr"
agents:
- "queue=default"
- wait: ~
continue_on_failure: true
- plugins:
- junit-annotate#v1.7.0:
artifacts: twister-*.xml


@@ -1,75 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Linaro Limited
#
# SPDX-License-Identifier: Apache-2.0
set -eE
function cleanup()
{
# Rename twister junit xml for use with junit-annotate-buildkite-plugin
# create dummy file if twister did nothing
if [ ! -f twister-out/twister.xml ]; then
touch twister-out/twister.xml
fi
mv twister-out/twister.xml twister-${BUILDKITE_JOB_ID}.xml
buildkite-agent artifact upload twister-${BUILDKITE_JOB_ID}.xml
# Upload test_file to get list of tests that are build/run
if [ -f test_file.txt ]; then
buildkite-agent artifact upload test_file.txt
fi
# ccache stats
echo "--- ccache stats at finish"
ccache -s
# disk usage
echo "--- disk usage at finish"
df -h
}
trap cleanup ERR
echo "--- run $0"
git log -n 5 --oneline --decorate --abbrev=12
# Setup module cache
cd /workdir
ln -s /var/lib/buildkite-agent/zephyr-module-cache/modules
ln -s /var/lib/buildkite-agent/zephyr-module-cache/tools
ln -s /var/lib/buildkite-agent/zephyr-module-cache/bootloader
cd /workdir/zephyr
export JOB_NUM=$((${BUILDKITE_PARALLEL_JOB}+1))
# ccache stats
echo ""
echo "--- ccache stats at start"
ccache -s
if [ -n "${DAILY_BUILD}" ]; then
TWISTER_OPTIONS=" --inline-logs -N --build-only --all --retry-failed 3 -v "
echo "--- DAILY BUILD"
west init -l .
west update 1> west.update.log || west update 1> west.update-2.log
west forall -c 'git reset --hard HEAD'
source zephyr-env.sh
./scripts/twister --subset ${JOB_NUM}/${BUILDKITE_PARALLEL_JOB_COUNT} ${TWISTER_OPTIONS}
else
if [ -n "${BUILDKITE_PULL_REQUEST_BASE_BRANCH}" ]; then
./scripts/ci/run_ci.sh -c -b ${BUILDKITE_PULL_REQUEST_BASE_BRANCH} -r origin \
-m ${JOB_NUM} -M ${BUILDKITE_PARALLEL_JOB_COUNT} -p ${BUILDKITE_PULL_REQUEST}
else
./scripts/ci/run_ci.sh -c -b ${BUILDKITE_BRANCH} -r origin \
-m ${JOB_NUM} -M ${BUILDKITE_PARALLEL_JOB_COUNT};
fi
fi
TWISTER_EXIT_STATUS=$?
cleanup
exit ${TWISTER_EXIT_STATUS}


@@ -1,7 +1,7 @@
--emacs
--summary-file
--show-types
--max-line-length=100
--max-line-length=80
--min-conf-desc-length=1
--typedefsfile=scripts/checkpatch/typedefsfile
@@ -10,21 +10,10 @@
--ignore SPLIT_STRING
--ignore VOLATILE
--ignore CONFIG_EXPERIMENTAL
--ignore PREFER_KERNEL_TYPES
--ignore PREFER_SECTION
--ignore AVOID_EXTERNS
--ignore NETWORKING_BLOCK_COMMENT_STYLE
--ignore DATE_TIME
--ignore MINMAX
--ignore CONST_STRUCT
--ignore FILE_PATH_CHANGES
--ignore SPDX_LICENSE_TAG
--ignore C99_COMMENT_TOLERANCE
--ignore REPEATED_WORD
--ignore UNDOCUMENTED_DT_STRING
--ignore DT_SPLIT_BINDING_PATCH
--ignore DT_SCHEMA_BINDING_PATCH
--ignore TRAILING_SEMICOLON
--ignore COMPLEX_MACRO
--ignore MULTISTATEMENT_MACRO_USE_DO_WHILE
--exclude ext


@@ -65,12 +65,18 @@ ExperimentalAutoDetectBinPacking: false
#FixNamespaceComments: false # Unknown to clang-format-4.0
# Taken from:
# git grep -h '^#define [^[:space:]]*FOR_EACH[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*FOR_EACH[^[:space:]]*\)(.*$, - '\1'," \
# git grep -h '^#define [^[:space:]]*for_each[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*for_each[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
ForEachMacros:
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'for_each_linux_bus'
- 'for_each_linux_driver'
- 'metal_bitmap_for_each_clear_bit'
- 'metal_bitmap_for_each_set_bit'
- 'metal_for_each_page_size_down'
- 'metal_for_each_page_size_up'
- 'metal_list_for_each'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
@@ -85,11 +91,11 @@ ForEachMacros:
- 'SYS_SLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SLIST_FOR_EACH_NODE'
- 'SYS_SLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
- 'Z_GENLIST_FOR_EACH_CONTAINER'
- 'Z_GENLIST_FOR_EACH_CONTAINER_SAFE'
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
#IncludeBlocks: Preserve # Unknown to clang-format-5.0
IncludeCategories:


@@ -11,11 +11,6 @@ insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 80
# Assembly
[*.S]
indent_style = tab
indent_size = 8
# C
[*.{c,h}]
indent_style = tab
@@ -32,7 +27,7 @@ indent_style = tab
indent_size = 8
# YAML
[*.{yml,yaml}]
[*.yml]
indent_style = space
indent_size = 2
@@ -60,16 +55,3 @@ indent_size = 2
# Makefile
[Makefile]
indent_style = tab
# Device tree
[*.{dts,dtsi,overlay}]
indent_style = tab
indent_size = 8
# Git commit messages
[COMMIT_EDITMSG]
max_line_length = 72
# Kconfig
[Kconfig*]
indent_style=tab

.gitattributes (vendored): 4 changed lines

@@ -3,7 +3,3 @@
.gitattributes export-ignore
.gitignore export-ignore
.mailmap export-ignore
# Tell linguist that generated test pattern files should not be included in the
# language statistics.
*.pat linguist-generated=true


@@ -24,11 +24,10 @@ A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Logs and console output**
If applicable, add console logs or other types of debug information
e.g Wireshark capture or Logic analyzer capture (upload in zip archive).
copy-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue. (if unable to obtain text log, add a screenshot)
**Screenshots or console output**
If applicable, add a screenshot (drag-and-drop an image), or console logs
(cut-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue.
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)


@@ -2,7 +2,7 @@
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
labels: enhancement
assignees: ''
---


@@ -1,19 +0,0 @@
---
name: Hardware Support
about: Suggest adding hardware support
title: ''
labels: Hardware Support
assignees: ''
---
**Is this request related to a missing driver support for a particular hardware platform, SoC or board? Please describe.**
Describe in details the hardware support being requested and why this support benefits Zephyr.
**Describe why you are asking for this support?**
Describe why you are asking for this support instead of contributing it directly to the tree
If this is a new board or SoC, please state whether you are willing to maintain the Zephyr support for it if it is included in the main tree
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the hardware here.

.github/labeler.yml (vendored): 161 changed lines

@@ -1,161 +0,0 @@
"Release Notes":
- "doc/releases/**/*"
"area: Modem":
- "drivers/modem/**/*"
"area: PWM":
- "drivers/pwm/**/*"
"area: Watchdog":
- "drivers/watchdog/**/*"
"area: Sensors":
- "drivers/sensors/**/*"
"area: ADC":
- "drivers/adc/**/*"
"area: Counter":
- "drivers/counter/**/*"
"area: CAN":
- "include/drivers/can.h"
- "include/canbus/*/**"
- "drivers/can/**/*"
- "subsys/canbus/*/**"
"area: EEPROM":
- "include/drivers/eeprom.h"
- "drivers/eeprom/**/*"
"area: Timer":
- "drivers/timer/**/*"
"area: I2S":
- "drivers/i2s/**/*"
"area: C Library":
- "lib/libc/**/*"
"area: Devicetree":
- "dts/**/*"
- "**/*.dts"
- "**/*.dtsi"
- "include/devicetree.h"
- "include/devicetree/*"
- "doc/guides/dts/**/*"
"area: Devicetree Binding":
- "include/dt-bindings/**/*"
- "dts/bindings/**/*"
"area: Devicetree Tooling":
- "scripts/dts/**/*"
"area: I2C":
- "drivers/i2c/**/*"
"area: SPI":
- "drivers/spi/**/*"
"area: Boards":
- "boards/**/*"
"area: POSIX":
- "lib/posix/**/*"
"area: native port":
- "arch/posix/**/*"
- "include/arch/posix/**/*"
- "soc/posix/**/*"
- "**/*native_posix*"
"area: X86":
- "arch/x86/**/*"
- "include/arch/x86/**/*"
"area: ARM":
- "arch/arm/**/*"
- "include/arch/arm/**/*"
"area: ARM_64":
- "arch/arm/core/aarch64/**/*"
- "include/arch/arm/aarch64/**/*"
"area: NIOS2":
- "arch/nios2/**/*"
- "include/arch/nios2/**/*"
"area: Xtensa":
- "arch/xtensa/**/*"
- "include/arch/xtensa/**/*"
"area: RISCv32/64":
- "arch/risv/**/*"
- "include/arch/riscv/**/*"
"area: ARC":
- "arch/arc/**/*"
- "include/arch/arc/**/*"
"area: Networking":
- "subsys/net/**/*"
- "samples/net/**/*"
- "tests/net/**/*"
- "include/net/**/*"
- "include/drivers/ieee802154/**/*"
- "drivers/ethernet/**/*"
- "drivers/ieee802154/**/*"
- "drivers/wifi/**/*"
- "drivers/ptp_clock/**/*"
- "drivers/net/**/*"
"area: Logging":
- "subsys/logging/**/*"
"area: Shell":
- "subsys/shell/**/*"
"area: Console":
- "subsys/console/**/*"
"area: Test Framework":
- "subsys/testsuite/**/*"
"area: Settings":
- "subsys/settings/**/*"
"area: File System":
- "subsys/fs/**/*"
"area: Storage":
- "subsys/storage/**/*"
"area: Bluetooth":
- "subsys/bluetooth/**/*"
- "**/*bluetooth*"
"area: Bluetooth Mesh":
- "subsys/bluetooth/mesh/**/*"
"area: Bluetooth Controller":
- "subsys/bluetooth/controller/**/*"
"area: Bluetooth Host":
- "subsys/bluetooth/host/**/*"
- "subsys/bluetooth/services/**/*"
"area: API":
- "include/**/*"
"area: Samples":
- "samples/**/*"
"area: Tests":
- "tests/**/*"
"area: Kernel":
- "kernel/**/*"
- "tests/kernel/**/*"
"area: Documentation":
- "**/*.rst"
- "**/*.md"
"area: Build System":
- "cmake/**/*"
- "CmakeLists.txt"
"area: Kconfig":
- "scripts/kconfig/**/*"
- "Kconfig"
- "Kconfig.zephyr"
"area: Twister":
- "scripts/twister"
- "scripts/pylib/twister/**/*"
"area: Modules":
- "west.yml"
- "modules/**/*"
"area: Shields":
- "boards/shields/**"
- "samples/shields/**"
"platform: NXP":
- "boards/arm/frdm*/**"
- "boards/arm/hexiwear*/**"
- "boards/arm/lpcxpresso*/**"
- "boards/arm/*imx*/**"
- "drivers/**/*imx*"
- "drivers/**/*mcux*"
- "dts/arm/nxp/*/*"
- "dts/bindings/**/nxp*"
- "soc/arm/nxp*/**"
"platform: STM32":
- "boards/arm/nucleo_*/**"
- "boards/arm/*stm32*/**"
- "drivers/**/*stm32*"
- "dts/arm/st/*/*"
- "include/drivers/*/*stm32*"
- "soc/arm/st_stm32/**"
"platform: SiLabs":
- "boards/arm/efr32_*/**/*"
- "boards/arm/efm32_*/**/*"
- "drivers/**/*gecko*"
- "dts/arm/silabs/**/*"
- "dts/bindings/**/silabs,gecko*"
- "soc/arm/silabs_exx32/**/*"


@@ -1,16 +0,0 @@
name: Backport
on:
pull_request_target:
types:
- closed
- labeled
jobs:
backport:
runs-on: ubuntu-18.04
name: Backport
steps:
- name: Backport
uses: zephyrproject-rtos/action-backport@v1.1.99
with:
github_token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,84 +0,0 @@
name: Compliance
on: pull_request
jobs:
compliance_job:
runs-on: ubuntu-latest
name: Run compliance checks on patch series (PR)
steps:
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
- name: Checkout the code
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: cache-pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: Install python dependencies
run: |
pip3 install setuptools
pip3 install wheel
pip3 install python-magic junitparser==1.6.3 gitlint pylint pykwalify
pip3 install west
- name: west setup
env:
BASE_REF: ${{ github.base_ref }}
run: |
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
git remote -v
git rebase origin/${BASE_REF}
# debug
git log --pretty=oneline | head -n 10
west init -l . || true
west update
- name: Run Compliance Tests
continue-on-error: true
id: compliance
env:
BASE_REF: ${{ github.base_ref }}
run: |
export ZEPHYR_BASE=$PWD
# debug
ls -la
git log --pretty=oneline | head -n 10
./scripts/ci/check_compliance.py -m Codeowners -m Devicetree -m Gitlint -m Identity -m Nits -m pylint -m checkpatch -m Kconfig -c origin/${BASE_REF}..
- name: upload-results
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: compliance.xml
path: compliance.xml
- name: check-warns
run: |
if [[ ! -s "compliance.xml" ]]; then
exit 1;
fi
for file in Nits.txt checkpatch.txt Identity.txt Gitlint.txt pylint.txt Devicetree.txt Kconfig.txt Codeowners.txt; do
if [[ -s $file ]]; then
errors=$(cat $file)
errors="${errors//'%'/'%25'}"
errors="${errors//$'\n'/'%0A'}"
errors="${errors//$'\r'/'%0D'}"
echo "::error file=${file}::$errors"
exit=1
fi
done
if [ ${exit} == 1 ]; then
exit 1;
fi


@@ -1,14 +0,0 @@
name: Conflict Finder
on:
push:
branches-ignore:
- '**'
jobs:
conflict:
runs-on: ubuntu-latest
steps:
- uses: mschilde/auto-label-merge-conflicts@master
with:
CONFLICT_LABEL_NAME: "has conflicts"
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,38 +0,0 @@
# Copyright (c) 2020 Intel Corp.
# SPDX-License-Identifier: Apache-2.0
name: Publish commit for daily testing
on:
schedule:
- cron: '50 22 * * *'
push:
branches:
- refs/tags/*
jobs:
get_version:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: install-pip
run: |
pip3 install gitpython
- name: checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Upload to AWS S3
run: |
python3 scripts/ci/version_mgr.py --update .
aws s3 cp versions.json s3://testing.zephyrproject.org/daily_tests/versions.json


@@ -1,64 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# Copyright (c) 2020 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0
name: Devicetree script tests
on:
push:
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
pull_request:
paths:
- 'scripts/dts/**'
- '.github/workflows/devicetree_checks.yml'
jobs:
devicetree-checks:
name: Devicetree script tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
steps:
- name: checkout
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install python dependencies
run: |
pip3 install wheel
pip3 install pytest pyyaml
- name: run pytest
working-directory: scripts/dts
run: |
python -m pytest testdtlib.py testedtlib.py


@@ -3,31 +3,16 @@
name: Documentation GitHub Workflow
on:
pull_request:
paths:
- 'Makefile'
- 'doc/**'
- '**.rst'
- 'include/**'
- 'kernel/include/kernel_arch_interface.h'
- 'lib/libc/**'
- 'subsys/testsuite/ztest/include/**'
- 'tests/**'
- '.known-issues/doc/**'
- '**/Kconfig*'
- 'west.yml'
- '.github/workflows/doc-build.yml'
on: [pull_request]
jobs:
doc-build:
name: "Documentation Build"
build:
runs-on: ubuntu-latest
steps:
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "::add-path::$HOME/.local/bin"
- name: checkout
uses: actions/checkout@v2
@@ -44,10 +29,11 @@ jobs:
- name: install-pip
run: |
pip3 install setuptools wheel
pip3 install -r scripts/requirements-base.txt
pip3 install -r scripts/requirements-doc.txt
pip3 install west==0.9.0
pip3 install setuptools
pip3 install 'breathe>=4.9.1,<4.15.0' 'docutils>=0.14' \
'sphinx>=1.7.5,<3.0' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |


@@ -9,20 +9,16 @@ on:
- cron: '50 22 * * *'
push:
tags:
# only publish v* tags, do not care about zephyr-v* which point to the
# same commit
- 'v*'
- '*'
jobs:
doc-publish:
name: Publish Documentation
build:
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- name: Update PATH for west
run: |
echo "$HOME/.local/bin" >> $GITHUB_PATH
echo "::add-path::$HOME/.local/bin"
- name: Determine tag
id: tag
@@ -65,8 +61,10 @@ jobs:
- name: install-pip
run: |
pip3 install setuptools
pip3 install -r scripts/requirements-base.txt
pip3 install -r scripts/requirements-doc.txt
pip3 install 'breathe>=4.9.1,<4.15.0' 'docutils>=0.14' \
'sphinx>=1.7.5,<3.0' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |
@@ -102,15 +100,13 @@ jobs:
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/latest --delete
echo "success sync of latest docs"
else
# we want just the version, without the leading 'v'
DOC_RELEASE=${RELEASE:1}
DOC_RELEASE=${RELEASE}.0
echo "publish release docs: ${DOC_RELEASE}"
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/${DOC_RELEASE}
echo "success sync of rel docs"
fi
if [ -d doc/_build/doxygen/html ]; then
API_RELEASE=${RELEASE:1}
echo "publish doxygen to apidoc/${API_RELEASE}"
aws s3 sync --quiet doc/_build/doxygen/html s3://docs.zephyrproject.org/apidoc/${API_RELEASE} --delete
echo "publish doxygen"
aws s3 sync --quiet doc/_build/doxygen/html s3://docs.zephyrproject.org/apidoc/${RELEASE} --delete
echo "success publish of doxygen"
fi


@@ -1,53 +0,0 @@
name: Issue Tracker
on:
issues:
types: [opened, labeled, closed]
env:
OUTPUT_FILE_NAME: IssuesReport.md
COMMITTER_EMAIL: actions@github.com
COMMITTER_NAME: github-actions
COMMITTER_USERNAME: github-actions
jobs:
track-issues:
name: "Collect Issue Stats"
runs-on: ubuntu-latest
steps:
- name: Download configuration file
run: |
wget -q https://raw.githubusercontent.com/$GITHUB_REPOSITORY/master/.github/workflows/issues-report-config.json
- name: install-packages
run: |
sudo apt-get install discount
- uses: brcrista/summarize-issues@v3
with:
title: 'Issues Report for ${{ github.repository }}'
configPath: 'issues-report-config.json'
outputPath: ${{ env.OUTPUT_FILE_NAME }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: upload-stats
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: ${{ env.OUTPUT_FILE_NAME }}
path: ${{ env.OUTPUT_FILE_NAME }}
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID_TESTING }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY_TESTING }}
aws-region: us-east-1
- name: Post Results
run: |
mkd2html IssuesReport.md IssuesReport.html
aws s3 cp --quiet IssuesReport.html s3://testing.zephyrproject.org/issues/$GITHUB_REPOSITORY/index.html


@@ -1,37 +0,0 @@
[
{
"section": "High Priority Bugs",
"labels": ["bug", "priority: high"],
"threshold": 0
},
{
"section": "Medium Priority Bugs",
"labels": ["bug", "priority: medium"],
"threshold": 20
},
{
"section": "Low Priority Bugs",
"labels": ["bug", "priority: low"],
"threshold": 100
},
{
"section": "Enhancements",
"labels": ["Enhancement"],
"threshold": 500
},
{
"section": "Features",
"labels": ["Feature"],
"threshold": 100
},
{
"section": "Questions",
"labels": ["question"],
"threshold": 100
},
{
"section": "Static Analysis",
"labels": ["Coverity"],
"threshold": 100
}
]


@@ -1,12 +0,0 @@
name: 'Pull Request Labeler'
on:
- pull_request_target
jobs:
labeler:
name: Pull Request Labeler
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v2.1.1
with:
repo-token: '${{ secrets.GITHUB_TOKEN }}'


@@ -1,28 +0,0 @@
name: Manifest
on:
pull_request_target:
paths:
- 'west.yml'
jobs:
contribs:
runs-on: ubuntu-latest
name: Manifest
steps:
- name: Checkout the code
uses: actions/checkout@v2
with:
path: zephyrproject/zephyr
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- name: Manifest
uses: zephyrproject-rtos/action-manifest@main
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
manifest-path: 'west.yml'
checkout-path: 'zephyrproject/zephyr'
label-prefix: 'manifest-'
verbosity-level: '1'
labels: 'manifest, west'
dnm-labels: 'DNM'


@@ -1,61 +0,0 @@
name: Create a Release
on:
push:
tags:
- 'v*'
jobs:
release:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Get the version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF#refs/tags/}
- name: REUSE Compliance Check
uses: fsfe/reuse-action@v1
with:
args: spdx -o zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: upload-results
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
path: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
- name: Get Diff since last tag
run: |
oldtag=$(git describe --abbrev=0 ${{ github.ref }}^)
echo "Changes since ${oldtag}:" > release-notes.txt
echo "" >> release-notes.txt
echo "" >> release-notes.txt
git shortlog ${oldtag}..${{ github.ref }} >> release-notes.txt
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Zephyr ${{ github.ref }}
body_path: release-notes.txt
draft: true
prerelease: true
- name: Upload Release Assets
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
asset_name: zephyr-${{ steps.get_version.outputs.VERSION }}.spdx
asset_content_type: text/plain


@@ -1,23 +0,0 @@
name: "Close stale pull requests/issues"
on:
schedule:
- cron: "16 00 * * *"
jobs:
stale:
name: Find Stale issues and PRs
runs-on: ubuntu-latest
if: github.repository == 'zephyrproject-rtos/zephyr'
steps:
- uses: actions/stale@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-pr-message: 'This pull request has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this pull request will automatically be closed in 14 days. Note, that you can always re-open a closed pull request at any time.'
stale-issue-message: 'This issue has been marked as stale because it has been open (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like to have the label removed otherwise this issue will automatically be closed in 14 days. Note, that you can always re-open a closed issue at any time.'
days-before-stale: 60
days-before-close: 14
stale-issue-label: 'Stale'
stale-pr-label: 'Stale'
exempt-pr-labels: 'Blocked,In progress'
exempt-issue-labels: 'In progress,Enhancement,Feature,Feature Request,RFC,Meta'
operations-per-run: 400


@@ -1,52 +0,0 @@
# Copyright (c) 2020 Intel Corporation.
# SPDX-License-Identifier: Apache-2.0
name: Twister TestSuite
on:
push:
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
pull_request:
paths:
- 'scripts/pylib/twister/**'
- 'scripts/twister'
- 'scripts/tests/twister/**'
- '.github/workflows/twister_tests.yml'
jobs:
twister-tests:
name: Twister Unit Tests
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest]
steps:
- name: checkout
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install-packages
run: |
pip3 install pytest colorama pyyaml ply mock
- name: Run pytest
env:
ZEPHYR_BASE: ./
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
run: |
echo "Run twister tests"
PYTHONPATH=./scripts/tests pytest ./scripts/tests/twister


@@ -8,16 +8,13 @@ on:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
pull_request:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
- '.github/workflows/west_cmds.yml'
jobs:
west-commnads:
name: West Command Tests
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
@@ -57,13 +54,12 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip3 install wheel
pip3 install pytest west pyelftools canopen progress mypy intelhex psutil
pip3 install pytest west pyelftools
- name: run pytest-win
if: runner.os == 'Windows'
run: |
python ./scripts/west_commands/run_tests.py
cmd /C "set PYTHONPATH=.\scripts\west_commands && pytest ./scripts/west_commands/tests/"
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |
./scripts/west_commands/run_tests.py
PYTHONPATH=./scripts/west_commands pytest ./scripts/west_commands/tests/

.gitignore (vendored): 2 changed lines

@@ -9,7 +9,6 @@
*~
build*/
!doc/guides/build
!tests/drivers/build_all
cscope.*
.dir
@@ -32,7 +31,6 @@ doc/samples
doc/latex
doc/themes/zephyr-docs-theme
sanity-out*
twister-out*
bsim_bt_out
scripts/grub
doc/reference/kconfig/*.rst


@@ -0,0 +1,68 @@
#
# Bluetooth unnamed struct definition
#
# FIXME: all these should match the relative filename
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^[ \t]*$
^[ \t]*\^$
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^[ \t]*$
^[ \t]*\^$
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]$
^.*bt_conn_info.__unnamed__.*$
^[- \t]*\^$
#
# bt_gatt_discover_params unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_gatt_discover_params.__unnamed__.*
^[- \t]*\^
#
# Bluetooth GATT unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_gatt_read_params.__unnamed__.*
^[- \t]*\^
#
# Bluetooth mesh unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model.__unnamed__.*
^[- \t]*\^
#
# Bluetooth mesh pub struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]mesh[/\\]access.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^


@@ -0,0 +1,15 @@
#
# Display
#
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]display_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*mb_image.__unnamed__
^[- \t]*\^


@@ -1,39 +1,6 @@
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]audio[/\\]dmic.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]ieee802154.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]sockets.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]uuid.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]sdp.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]rfcomm.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]hci_raw.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]hci_drivers.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]gatt.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]gap.rst):(?P<lineno>[0-9]+): WARNING: Duplicate C declaration, also defined at (.*)\.$
^Declaration is \'.*\'\.$
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)


@@ -1,14 +1,15 @@
# multiple section 'index'
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]custom-doxygen[/\\]mainpage.md):[0-9]+: warning: multiple use of section label 'index' for main page, \(first occurrence: .*$
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]gatt.rst):(?P<lineno>[0-9]+): WARNING: Error in declarator$
^[ \t]*If.*:$
^[ \t]*Invalid C declaration: .* \[error at [0-9]+\]$
^[ \t]*ATOMIC_DEFINE.*$
^[- \t]*\^$
^[ \t]*If.*:$
^[ \t]*Error in declarator or parameters$
^[ \t]*Invalid C declaration: .* \[error at [0-9]+\]$
^[ \t]*ATOMIC_DEFINE.*$
^[- \t]*\^$
^[ \t]*$
# Display
#
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]misc_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*json_obj_descr.__unnamed__
^[- \t]*\^


@@ -0,0 +1,70 @@
#
# Networking
#
#
# include/net/net_ip.h warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*in[_6]+addr.in[46]_u
^[- \t]*\^
#
# include/net/net_mgmt.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_mgmt_event_callback.__unnamed__
^[- \t]*\^
#
# include/net/buf.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_buf.__unnamed__
^[- \t]*\^
#
# include/net/ieee802154.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*ieee802154_req_params.__unnamed__
^[- \t]*\^
#
# include/net/net_context.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_context.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_context.options
^[- \t]*\^
#
# include/net/net_stats.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_stats.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_stats_tc.[a-z]+
^[- \t]*\^
#
# stray duplicate definition warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)


@@ -0,0 +1,31 @@
#
# UART unnamed struct definition
#doc/api/peripherals/uart.rst
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_device_config.__unnamed__.*
^[- \t]*\^
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_event.data
^[- \t]*\^

.shippable.yml: 75 changed lines (new file)

@@ -0,0 +1,75 @@
language: c
compiler: gcc
env:
global:
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.11.2
- ZEPHYR_TOOLCHAIN_VARIANT=zephyr
- MATRIX_BUILDS="5"
matrix:
- MATRIX_BUILD="1"
- MATRIX_BUILD="2"
- MATRIX_BUILD="3"
- MATRIX_BUILD="4"
- MATRIX_BUILD="5"
build:
cache: false
cache_dir_list:
- ${SHIPPABLE_BUILD_DIR}/ccache
pre_ci_boot:
image_name: zephyrprojectrtos/ci
image_tag: v0.11.4
pull: true
options: "-e HOME=/home/buildslave --privileged=true --tty --net=bridge --user buildslave"
ci:
- export CCACHE_DIR=${SHIPPABLE_BUILD_DIR}/ccache/.ccache
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -c -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -c -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
- ccache -s
on_failure:
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -f -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -f -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
on_success:
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -s -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -s -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
branches:
only:
- master
- v*-branch
- topic-*
integrations:
notifications:
- integrationName: slack_integration
type: slack
recipients:
- "#ci"
branches:
only:
- master
on_success: never
on_failure: never
- integrationName: email
type: email
recipients:
- builds@zephyrproject.org
branches:
only:
- master
on_success: never
on_failure: always
on_pull_request: never

File diff suppressed because it is too large


@@ -16,63 +16,49 @@
/.known-issues/ @nashif
/.github/ @nashif
/.github/workflows/ @galak @nashif
/.buildkite/ @galak
/MAINTAINERS.yml @ioannisg @MaureenHelm
/arch/arc/ @abrodkin @ruuddw
/arch/arc/ @vonhust @ruuddw
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/aarch32/cortex_m/cmse/ @ioannisg
/arch/arm/core/aarch64/ @carlocaione
/arch/arm/include/aarch32/cortex_m/cmse.h @ioannisg
/arch/arm/include/aarch64/ @carlocaione
/arch/arm/core/aarch32/cortex_a_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/common/ @ioannisg @andyross
/soc/arc/snps_*/ @abrodkin @ruuddw
/soc/nios2/ @nashif
/arch/arm/core/aarch32/cortex_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/common/ @andrewboie @ioannisg @andyross
/soc/arc/snps_*/ @vonhust @ruuddw
/soc/nios2/ @nashif @wentongwu
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/atmel_sam/common/*_sam4l_*.c @nandojve
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4l/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/same70/ @nandojve
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/cypress/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/nxp*/ @MaureenHelm
/soc/arm/nordic_nrf/ @ioannisg
/soc/arm/nuvoton/ @ssekar15
/soc/arm/nuvoton_npcx/ @MulinChao
/soc/arm/qemu_cortex_a53/ @carlocaione
/soc/arm/quicklogic_eos_s3/ @kowalewskijan @kgugala
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/*/power.c @FRASTM
/soc/arm/st_stm32/stm32f4/ @rsalveti @idlethread
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/xtensa/intel_s1000/ @sathishkuttan @dcpleung
/arch/x86/ @jhedberg @nashif @jenmwms @aasthagr
/arch/nios2/ @nashif
/arch/posix/ @aescolar @daor-oti
/arch/riscv/ @kgugala @pgielda
/soc/posix/ @aescolar @daor-oti
/soc/riscv/ @kgugala @pgielda
/arch/x86/ @andrewboie
/arch/nios2/ @andrewboie @wentongwu
/arch/posix/ @aescolar
/arch/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/posix/ @aescolar
/soc/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/riscv/openisa*/ @MaureenHelm
/soc/x86/ @dcpleung @nashif @jenmwms @aasthagr
/arch/xtensa/ @dcpleung @andyross @nashif
/soc/xtensa/ @dcpleung @andyross @nashif
/arch/sparc/ @martin-aberg
/soc/sparc/ @martin-aberg
/boards/arc/ @abrodkin @ruuddw
/soc/x86/ @andrewboie
/arch/xtensa/ @andrewboie @dcpleung @andyross
/soc/xtensa/ @andrewboie @dcpleung @andyross
/boards/arc/ @vonhust @ruuddw
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_carbon/ @rsalveti @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
@@ -82,174 +68,117 @@
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/cy8* @nandojve
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @MaureenHelm
/boards/arm/frdm*/doc/ @MaureenHelm @MeganHansen
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @MaureenHelm
/boards/arm/hexiwear*/doc/ @MaureenHelm @MeganHansen
/boards/arm/ip_k66f/ @parthitce
/boards/arm/lpcxpresso*/ @MaureenHelm
/boards/arm/lpcxpresso*/doc/ @MaureenHelm @MeganHansen
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimxrt*/ @MaureenHelm
/boards/arm/mimxrt*/doc/ @MaureenHelm @MeganHansen
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao
/boards/arm/nrf*/ @carlescufi @lemrey @ioannisg
/boards/arm/nucleo*/ @erwango @ABOSTM @FRASTM
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/nucleo*/ @erwango
/boards/arm/nucleo_f401re/ @rsalveti @idlethread
/boards/arm/qemu_cortex_a53/ @carlocaione
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg
/boards/arm/quick_feather/ @kowalewskijan @kgugala
/boards/arm/rak5010_nrf52840/ @gpaquet85
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4l_ek/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_e70_xplained/ @nandojve
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango @ABOSTM @FRASTM
/boards/arm/stm32*_disco/ @erwango
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango @ABOSTM @FRASTM
/boards/common/ @mbolivar-nordic
/boards/deprecated.cmake @tejlmand
/boards/nios2/ @nashif
/boards/nios2/altera_max10/ @nashif
/boards/arm/stm32*_eval/ @erwango
/boards/common/ @mbolivar
/boards/nios2/ @wentongwu
/boards/nios2/altera_max10/ @wentongwu
/boards/arm/stm32_min_dev/ @cbsiddharth
/boards/posix/ @aescolar @daor-oti
/boards/posix/nrf52_bsim/ @aescolar @wopu-ot
/boards/riscv/ @kgugala @pgielda
/boards/posix/ @aescolar
/boards/riscv/ @kgugala @pgielda @nategraff-sifive
/boards/riscv/rv32m1_vega/ @MaureenHelm
/boards/shields/ @erwango
/boards/shields/atmel_rf2xx/ @nandojve
/boards/shields/esp_8266/ @nandojve
/boards/shields/inventek_eswifi/ @nandojve
/boards/x86/ @dcpleung @nashif @jenmwms @aasthagr
/boards/x86/ @andrewboie @nashif
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/intel_s1000_crb/ @sathishkuttan @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
/boards/sparc/ @martin-aberg
# All cmake related files
/cmake/ @tejlmand @nashif
/cmake/*/arcmwdt/ @abrodkin @evgeniy-paltsev @tejlmand
/CMakeLists.txt @tejlmand @nashif
/cmake/ @SebastianBoe @nashif
/CMakeLists.txt @SebastianBoe @nashif
/doc/ @dbkinder
/doc/guides/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/CMakeLists.txt @carlescufi
/doc/scripts/ @carlescufi
/doc/guides/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/guides/dts/ @galak @mbolivar-nordic
/doc/reference/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/devicetree/ @galak @mbolivar-nordic
/doc/reference/resource_management/ @pabigot
/doc/reference/kernel/other/resource_mgmt.rst @pabigot
/doc/reference/networking/can* @alexanderwachter
/doc/security/ @ceolin @d3zd3z
/drivers/debug/ @nashif
/drivers/*/*sam4l* @nandojve
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/*/*mcux* @MaureenHelm
/drivers/*/*stm32* @erwango @ABOSTM @FRASTM
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao
/drivers/*/*stm32* @erwango
/drivers/*/*native_posix* @aescolar
/drivers/adc/ @anangl
/drivers/adc/adc_stm32.c @cybertale
/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/drivers/can/ @alexanderwachter
/drivers/can/*mcp2515* @karstenkoenig
/drivers/clock_control/*nrf* @nordic-krch
/drivers/clock_control/*esp32* @extremegtx
/drivers/counter/ @nordic-krch
/drivers/console/ipm_console.c @finikorg
/drivers/console/semihost_console.c @luozhongyao
/drivers/counter/counter_cmos.c @dcpleung
/drivers/counter/maxim_ds3231.c @pabigot
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/console/*mux* @jukkar
/drivers/counter/counter_cmos.c @andrewboie
/drivers/display/ @vanwinkeljan
/drivers/display/display_framebuf.c @dcpleung
/drivers/dac/ @martinjaeger
/drivers/display/display_framebuf.c @andrewboie
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale @lowlander
/drivers/dma/*pl330* @raveenp
/drivers/dma/*iproc_pax* @raveenp
/drivers/ec_host_cmd_periph/ @jettr
/drivers/edac/ @finikorg
/drivers/dma/dma_stm32* @cybertale
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*rv32m1* @MaureenHelm
/drivers/entropy/*gecko* @chrta
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/espi/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ps2/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/kscan/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ethernet/ @jukkar @tbursztyka @pfalcon
/drivers/ethernet/*stm32* @Nukersson @lochej
/drivers/ethernet/*w5500* @parthitce
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/flash/ @nashif @nvlsianpu
/drivers/flash/*nrf* @nvlsianpu
/drivers/flash/*spi_nor* @pabigot
/drivers/flash/*stm32* @superna9999
/drivers/gpio/ @mnkp @pabigot
/drivers/gpio/*ht16k33* @henrikbrixandersen
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*sx1509b* @pabigot
/drivers/gpio/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2c/i2c_common.c @sjg20
/drivers/i2c/i2c_emul.c @sjg20
/drivers/i2c/i2c_ite_it8xxx2.c @GTLin08
/drivers/i2c/i2c_shell.c @nashif
/drivers/i2c/Kconfig.i2c_emul @sjg20
/drivers/i2c/Kconfig.it8xxx2 @GTLin08
/drivers/i2c/slave/*eeprom* @henrikbrixandersen
/drivers/i2s/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/ieee802154/ @jukkar @tbursztyka
/drivers/ieee802154/ieee802154_rf2xx* @jukkar @tbursztyka @nandojve
/drivers/ieee802154/ieee802154_cc13xx* @bwitherspoon @cfriedt
/drivers/interrupt_controller/ @dcpleung @nashif
/drivers/interrupt_controller/ @andrewboie
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/*/intc_vexriscv_litex.c @mateusz-holenko @kgugala @pgielda
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic @ioannisg
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic @ioannisg
/drivers/ipm/ipm_intel_adsp.c @finikorg
/drivers/ipm/ipm_cavs_idc* @dcpleung
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic @ioannisg
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/kscan/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar-nordic
/drivers/led_strip/ @mbolivar
/drivers/lora/ @Mani-Sadhasivam
/drivers/memc/ @gmarull
/drivers/modem/*gsm* @jukkar
/drivers/modem/hl7800.c @rerickson1
/drivers/modem/Kconfig.hl7800 @rerickson1
/drivers/pcie/ @dcpleung @nashif @jhedberg
/drivers/peci/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/modem/ @mike-scott
/drivers/pcie/ @andrewboie
/drivers/pinmux/stm32/ @rsalveti @idlethread
/drivers/pinmux/*hsdk* @iriszzw
/drivers/pinmux/*it8xxx2* @ite
/drivers/psci/ @carlocaione
/drivers/ps2/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/*sam0* @nzmichaelh
/drivers/pwm/*stm32* @gmarull
/drivers/pwm/*xlnx* @henrikbrixandersen
/drivers/pwm/pwm_capture.c @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/regulator/ @pabigot
/drivers/pwm/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
@@ -257,372 +186,281 @@
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/mpr/ @sven-hm
/drivers/sensor/st*/ @avisconti
/drivers/serial/uart_altera_jtag_hal.c @nashif
/drivers/serial/*ns16550* @dcpleung @nashif @jenmwms @aasthagr
/drivers/serial/*nrfx* @Mierunski @anangl
/drivers/serial/uart_altera_jtag_hal.c @wentongwu
/drivers/serial/*ns16550* @andrewboie
/drivers/serial/Kconfig.litex @mateusz-holenko @kgugala @pgielda
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.mcux_iuart @Mani-Sadhasivam
/drivers/serial/uart_mcux_iuart.c @Mani-Sadhasivam
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/uart_xlnx_uartlite.c @henrikbrixandersen
/drivers/serial/*xmc4xxx* @parthitce
/drivers/serial/*nuvoton* @ssekar15
/drivers/serial/*apbuart* @martin-aberg
/drivers/net/ @jukkar @tbursztyka
/drivers/ptp_clock/ @jukkar
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/spi/ @tbursztyka
/drivers/spi/spi_ll_stm32.* @superna9999
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/timer/apic_timer.c @dcpleung @nashif
/drivers/timer/apic_timer.c @andrewboie
/drivers/timer/arm_arch_timer.c @carlocaione
/drivers/timer/cortex_m_systick.c @ioannisg
/drivers/timer/altera_avalon_timer_hal.c @nashif
/drivers/timer/riscv_machine_timer.c @kgugala @pgielda
/drivers/timer/ite_it8xxx2_timer.c @ite
/drivers/timer/xlnx_psttc_timer* @wjliang @stephanosio
/drivers/timer/altera_avalon_timer_hal.c @wentongwu
/drivers/timer/riscv_machine_timer.c @nategraff-sifive @kgugala @pgielda
/drivers/timer/litex_timer.c @mateusz-holenko @kgugala @pgielda
/drivers/timer/xlnx_psttc_timer.c @wjliang
/drivers/timer/cc13x2_cc26x2_rtc_timer.c @vanti
/drivers/timer/cavs_timer.c @dcpleung
/drivers/timer/stm32_lptim_timer.c @FRASTM
/drivers/timer/leon_gptimer.c @martin-aberg
/drivers/usb/ @jfischer-no @finikorg
/drivers/usb/ @jfischer-phytec-iot @finikorg
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/video/ @loicpoulain
/drivers/i2c/i2c_ll_stm32* @ydamigos
/drivers/i2c/i2c_ll_stm32* @ldts @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/*/*xec* @franciscomunoz @albertofloyd @scottwcpg
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/*sifive* @katsuster
/drivers/watchdog/wdt_handlers.c @dcpleung @nashif
/drivers/watchdog/wdt_handlers.c @andrewboie
/drivers/wifi/ @jukkar @tbursztyka @pfalcon
/drivers/wifi/esp/ @mniestroj
/drivers/wifi/eswifi/ @loicpoulain @nandojve
/drivers/wifi/winc1500/ @kludentwo
/drivers/virtualization/ @tbursztyka
/dts/arc/ @abrodkin @ruuddw @iriszzw
/drivers/wifi/eswifi/ @loicpoulain
/dts/arc/ @vonhust @ruuddw @iriszzw
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/sam4l* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/same70* @nandojve
/dts/arm/atmel/samv71* @nandojve
/dts/arm/atmel/ @galak
/dts/arm/broadcom/ @sbranden
/dts/arm/infineon/ @parthitce
/dts/arm/qemu-virt/ @carlocaione
/dts/arm/quicklogic/ @wtatarski @kowalewskijan @kgugala
/dts/arm/st/ @erwango
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @ioannisg @carlescufi
/dts/arm/nuvoton/ @ssekar15
/dts/arm/nuvoton/npcx/ @MulinChao
/dts/arm/nxp/ @MaureenHelm
/dts/arm/microchip/ @franciscomunoz @albertofloyd @scottwcpg
/dts/arm/silabs/efm32_pg_1b.dtsi @rdmeneze
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efm32_jg_pg* @chrta
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efm32jg12b* @chrta
/dts/arm/silabs/efm32pg12b* @chrta
/dts/arm/silabs/efm32pg1b* @rdmeneze
/dts/arm/silabs/efr32mg21* @l-alfred
/dts/riscv/ @kgugala @pgielda
/dts/riscv/it8xxx2.dtsi @ite
/dts/riscv/microsemi-miv.dtsi @galak
/dts/riscv/rv32m1* @MaureenHelm
/dts/riscv/riscv32-fe310.dtsi @nategraff-sifive
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/armv8-a.dtsi @carlocaione
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/x86/ @jhedberg
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/sparc/ @martin-aberg
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter
/dts/bindings/i2c/zephyr*i2c-emul.yaml @sjg20
/dts/bindings/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/modem/*hl7800.yaml @rerickson1
/dts/bindings/serial/ns16550.yaml @dcpleung @nashif
/dts/bindings/wifi/*esp.yaml @mniestroj
/dts/bindings/*/*npcx* @MulinChao
/dts/bindings/iio/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/serial/ns16550.yaml @andrewboie
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @MaureenHelm
/dts/bindings/*/openisa* @MaureenHelm
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda @nategraff-sifive
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/bindings/psci/* @carlocaione
/dts/posix/ @aescolar @vanwinkeljan @daor-oti
/dts/posix/ @aescolar @vanwinkeljan
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/st* @avisconti
/ext/hal/cmsis/ @MaureenHelm @galak @stephanosio
/ext/lib/crypto/tinycrypt/ @ceolin
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/drivers/*/*litex* @mateusz-holenko @kgugala @pgielda
/include/drivers/adc.h @anangl
/include/drivers/can.h @alexanderwachter
/include/drivers/counter.h @nordic-krch
/include/drivers/dac.h @martinjaeger
/include/drivers/display.h @vanwinkeljan
/include/drivers/espi.h @albertofloyd @franciscomunoz @scottwcpg
/include/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/include/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @nvlsianpu
/include/drivers/i2c_emul.h @sjg20
/include/drivers/led/ht16k33.h @henrikbrixandersen
/include/drivers/interrupt_controller/ @dcpleung @nashif
/include/drivers/interrupt_controller/ @andrewboie
/include/drivers/interrupt_controller/gic.h @stephanosio
/include/drivers/modem/hl7800.h @rerickson1
/include/drivers/pcie/ @dcpleung
/include/drivers/pcie/ @andrewboie
/include/drivers/hwinfo.h @alexanderwachter
/include/drivers/led.h @Mani-Sadhasivam
/include/drivers/led_strip.h @mbolivar-nordic
/include/drivers/led_strip.h @mbolivar
/include/drivers/sensor.h @MaureenHelm
/include/drivers/spi.h @tbursztyka
/include/drivers/lora.h @Mani-Sadhasivam
/include/drivers/peci.h @albertofloyd @franciscomunoz @scottwcpg
/include/drivers/psci.h @carlocaione
/include/app_memory/ @dcpleung
/include/arch/arc/ @abrodkin @ruuddw
/include/arch/arc/arch.h @abrodkin @ruuddw
/include/arch/arc/v2/irq.h @abrodkin @ruuddw
/include/app_memory/ @andrewboie
/include/arch/arc/ @vonhust @ruuddw
/include/arch/arc/arch.h @andrewboie
/include/arch/arc/v2/irq.h @andrewboie
/include/arch/arm/aarch32/ @MaureenHelm @galak @ioannisg
/include/arch/arm/aarch32/cortex_a_r/ @stephanosio
/include/arch/arm/aarch32/cortex_r/ @stephanosio
/include/arch/arm/aarch64/ @carlocaione
/include/arch/arm/arm-smccc.h @carlocaione
/include/arch/arm/aarch32/irq.h @carlocaione
/include/arch/nios2/ @nashif
/include/arch/nios2/arch.h @nashif
/include/arch/posix/ @aescolar @daor-oti
/include/arch/riscv/ @kgugala @pgielda
/include/arch/x86/ @jhedberg @dcpleung
/include/arch/common/ @andyross @nashif
/include/arch/xtensa/ @andyross @dcpleung
/include/arch/sparc/ @martin-aberg
/include/sys/atomic.h @andyross
/include/arch/arm/aarch32/irq.h @andrewboie
/include/arch/nios2/ @andrewboie
/include/arch/nios2/arch.h @andrewboie
/include/arch/posix/ @aescolar
/include/arch/riscv/ @nategraff-sifive @kgugala @pgielda
/include/arch/x86/ @andrewboie @wentongwu
/include/arch/common/ @andrewboie @andyross @nashif
/include/arch/xtensa/ @andrewboie
/include/sys/atomic.h @andrewboie @andyross
/include/bluetooth/ @joerchan @jhedberg @Vudentz
/include/cache.h @carlocaione @andyross
/include/cache.h @andrewboie @andyross
/include/canbus/ @alexanderwachter
/include/tracing/ @nashif
/include/tracing/ @wentongwu @nashif
/include/debug/ @nashif
/include/debug/coredump.h @dcpleung
/include/debug/gdbstub.h @ceolin
/include/device.h @tbursztyka @nashif
/include/devicetree.h @galak
/include/device.h @wentongwu @nashif
/include/display/ @vanwinkeljan
/include/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/dt-bindings/dma/stm32_dma.h @cybertale
/include/dt-bindings/pcie/ @dcpleung
/include/dt-bindings/pcie/ @andrewboie
/include/dt-bindings/usb/usb.h @galak @finikorg
/include/emul.h @sjg20
/include/fs/ @nashif @nvlsianpu @de-nordic @pabigot
/include/init.h @nashif @andyross
/include/irq.h @dcpleung @nashif @andyross
/include/irq_offload.h @dcpleung @nashif @andyross
/include/kernel.h @dcpleung @nashif @andyross
/include/kernel_version.h @dcpleung @nashif @andyross
/include/linker/app_smem*.ld @dcpleung @nashif
/include/linker/ @dcpleung @nashif @andyross
/include/fs/ @nashif @wentongwu
/include/init.h @andrewboie @andyross
/include/irq.h @andrewboie @andyross
/include/irq_offload.h @andrewboie @andyross
/include/kernel.h @andrewboie @andyross
/include/kernel_version.h @andrewboie @andyross
/include/linker/app_smem*.ld @andrewboie
/include/linker/ @andrewboie @andyross
/include/logging/ @nordic-krch
/include/lorawan/lorawan.h @Mani-Sadhasivam
/include/mgmt/osdp.h @cbsiddharth
/include/net/ @jukkar @tbursztyka @pfalcon
/include/net/buf.h @jukkar @jhedberg @tbursztyka @pfalcon
/include/net/coap*.h @jukkar @rlubos
/include/net/lwm2m*.h @jukkar @rlubos
/include/net/mqtt.h @jukkar @rlubos
/include/posix/ @pfalcon
/include/power/power.h @pabigot @nashif @ceolin
/include/power/power.h @wentongwu @nashif
/include/ptp_clock.h @jukkar
/include/shared_irq.h @dcpleung @nashif @andyross
/include/shared_irq.h @andrewboie @andyross
/include/shell/ @jakub-uC @nordic-krch
/include/sw_isr_table.h @dcpleung @nashif @andyross
/include/sys_clock.h @dcpleung @nashif @andyross
/include/sys/sys_io.h @dcpleung @nashif @andyross
/include/sys/kobject.h @dcpleung @nashif
/include/toolchain.h @dcpleung @andyross @nashif
/include/toolchain/ @dcpleung @nashif @andyross
/include/zephyr.h @dcpleung @nashif @andyross
/kernel/ @dcpleung @nashif @andyross
/lib/fnmatch/ @carlescufi
/include/sw_isr_table.h @andrewboie @andyross
/include/sys_clock.h @andrewboie @andyross
/include/sys/sys_io.h @andrewboie @andyross
/include/toolchain.h @andrewboie @andyross @nashif
/include/toolchain/ @andrewboie @andyross
/include/zephyr.h @andrewboie @andyross
/kernel/ @andrewboie @andyross
/lib/gui/ @vanwinkeljan
/lib/open-amp/ @arnopo
/lib/os/ @dcpleung @nashif @andyross
/lib/os/ @andrewboie @andyross
/lib/posix/ @pfalcon
/lib/cmsis_rtos_v2/ @nashif
/lib/cmsis_rtos_v1/ @nashif
/lib/libc/ @nashif
/lib/libc/ @nashif @andrewboie
/modules/ @nashif
/modules/Kconfig.tfm @ioannisg @microbuilder
/kernel/device.c @andyross @nashif
/kernel/idle.c @andyross @nashif
/kernel/device.c @andrewboie @andyross @nashif
/kernel/idle.c @andrewboie @andyross @nashif
/samples/ @nashif
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/boards/*microbit* @jhe
/samples/basic/servo_motor/*microbit* @jhe
/lib/updatehub/ @nandojve @otavio
/samples/bluetooth/ @jhedberg @Vudentz @joerchan
/samples/boards/intel_s1000_crb/ @sathishkuttan @dcpleung @nashif
/samples/display/ @vanwinkeljan
/samples/drivers/can/ @alexanderwachter
/samples/drivers/clock_control_litex/ @mateusz-holenko @kgugala @pgielda
/samples/drivers/CAN/ @alexanderwachter
/samples/drivers/display/ @vanwinkeljan
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/drivers/counter/maxim_ds3231/ @pabigot
/samples/lorawan/ @Mani-Sadhasivam
/samples/net/ @jukkar @tbursztyka @pfalcon
/samples/net/cloud/tagoio_http_post/ @nandojve
/samples/net/dns_resolve/ @jukkar @tbursztyka @pfalcon
/samples/net/lwm2m_client/ @rlubos
/samples/net/lwm2m_client/ @mike-scott
/samples/net/mqtt_publisher/ @jukkar @tbursztyka
/samples/net/sockets/coap_*/ @rlubos
/samples/net/sockets/coap_*/ @rveerama1
/samples/net/sockets/ @jukkar @tbursztyka @pfalcon
/samples/net/*civetweb* @Nukersson
/samples/net/updatehub/ @nandojve @otavio
/samples/sensor/ @MaureenHelm
/samples/shields/ @avisconti
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/shell/ @jakub-uC @nordic-krch
/samples/subsys/mgmt/mcumgr/smp_svr/ @aunsbjerg @nvlsianpu
/samples/subsys/mgmt/updatehub/ @nandojve @otavio
/samples/subsys/mgmt/osdp/ @cbsiddharth
/samples/subsys/usb/ @jfischer-no @finikorg
/samples/subsys/power/ @nashif @pabigot @ceolin
/samples/tfm_integration/ @ioannisg @microbuilder
/samples/userspace/ @dcpleung @nashif
/samples/subsys/usb/ @jfischer-phytec-iot @finikorg
/samples/subsys/power/ @wentongwu @pabigot
/samples/userspace/ @andrewboie
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/coredump/ @dcpleung
/scripts/kconfig/ @ulfalizer
/scripts/pylib/twister/expr_parser.py @nashif
/scripts/schemas/twister/ @nashif
/scripts/gen_app_partitions.py @dcpleung @nashif
/scripts/get_maintainer.py @nashif
/scripts/dts/ @mbolivar-nordic @galak
/scripts/elf_helper.py @andrewboie
/scripts/sanity_chk/expr_parser.py @nashif
/scripts/gen_app_partitions.py @andrewboie
/scripts/dts/ @ulfalizer @galak
/scripts/release/ @nashif
/scripts/ci/ @nashif
/arch/x86/gen_gdt.py @dcpleung @nashif
/arch/x86/gen_idt.py @dcpleung @nashif
/scripts/gen_kobject_list.py @dcpleung @nashif
/scripts/gen_syscalls.py @dcpleung @nashif
/scripts/list_boards.py @mbolivar-nordic
/scripts/net/ @jukkar
/scripts/process_gperf.py @dcpleung @nashif
/scripts/gen_relocate_app.py @dcpleung
/scripts/requirements*.txt @mbolivar-nordic @galak @nashif
/scripts/tests/twister/ @aasthagr
/scripts/tests/build/test_subfolder_list.py @rmstoi
/scripts/tracing/ @nashif
/scripts/pylib/twister/ @nashif
/scripts/twister @nashif
/arch/x86/gen_gdt.py @andrewboie
/arch/x86/gen_idt.py @andrewboie
/scripts/gen_kobject_list.py @andrewboie
/scripts/gen_priv_stacks.py @andrewboie @agross-oss @ioannisg
/scripts/gen_syscalls.py @andrewboie
/scripts/net/ @jukkar @pfl
/scripts/process_gperf.py @andrewboie
/scripts/gen_relocate_app.py @wentongwu
/scripts/tracing/ @wentongwu
/scripts/sanity_chk/ @nashif
/scripts/sanitycheck @nashif
/scripts/series-push-hook.sh @erwango
/scripts/west_commands/ @mbolivar-nordic
/scripts/west-commands.yml @mbolivar-nordic
/scripts/west_commands/ @mbolivar
/scripts/west-commands.yml @mbolivar
/scripts/zephyr_module.py @tejlmand
/scripts/user_wordsize.py @cfriedt
/scripts/valgrind.supp @aescolar @daor-oti
/share/zephyr-package/ @tejlmand
/share/zephyrunittest-package/ @tejlmand
/scripts/valgrind.supp @aescolar
/subsys/bluetooth/ @joerchan @jhedberg @Vudentz
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot @kruithofa
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot
/subsys/bluetooth/mesh/ @jhedberg @trond-snekvik @joerchan @Vudentz
/subsys/canbus/ @alexanderwachter
/subsys/cpp/ @pabigot @vanwinkeljan
/subsys/debug/ @nashif
/subsys/debug/coredump/ @dcpleung
/subsys/debug/gdbstub/ @ceolin
/subsys/debug/gdbstub.c @ceolin
/subsys/dfu/ @nvlsianpu
/subsys/tracing/ @nashif
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar @daor-oti
/subsys/demand_paging/ @dcpleung @nashif
/subsys/tracing/ @nashif @wentongwu
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar
/subsys/disk/disk_access_spi_sdhc.c @JunYangNXP
/subsys/disk/disk_access_sdhc.h @JunYangNXP
/subsys/disk/disk_access_usdhc.c @JunYangNXP
/subsys/disk/disk_access_stm32_sdmmc.c @anthonybrandon
/subsys/emul/ @sjg20
/subsys/fb/ @jfischer-no
/subsys/fb/ @jfischer-phytec-iot
/subsys/fs/ @nashif
/subsys/fs/fcb/ @nvlsianpu
/subsys/fs/fuse_fs_access.c @vanwinkeljan
/subsys/fs/littlefs_fs.c @pabigot
/subsys/fs/nvs/ @Laczen
/subsys/ipc/ @ioannisg
/subsys/logging/ @nordic-krch
/subsys/logging/log_backend_net.c @nordic-krch @jukkar
/subsys/lorawan/ @Mani-Sadhasivam
/subsys/mgmt/ec_host_cmd/ @jettr
/subsys/mgmt/mcumgr/ @carlescufi @nvlsianpu
/subsys/mgmt/hawkbit/ @Navin-Sankar
/subsys/mgmt/mcumgr/smp_udp.c @aunsbjerg
/subsys/mgmt/updatehub/ @nandojve @otavio
/subsys/mgmt/osdp/ @cbsiddharth
/subsys/mgmt/ @carlescufi @nvlsianpu
/subsys/net/buf.c @jukkar @jhedberg @tbursztyka @pfalcon
/subsys/net/ip/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/dns/ @jukkar @tbursztyka @pfalcon @cfriedt
/subsys/net/lib/lwm2m/ @rlubos
/subsys/net/lib/dns/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/lwm2m/ @mike-scott
/subsys/net/lib/config/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/mqtt/ @jukkar @tbursztyka @rlubos
/subsys/net/lib/coap/ @rlubos
/subsys/net/lib/sockets/socketpair.c @cfriedt
/subsys/net/lib/openthread/ @rlubos
/subsys/net/lib/coap/ @rveerama1
/subsys/net/lib/sockets/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @jukkar @tbursztyka
/subsys/net/l2/canbus/ @alexanderwachter @jukkar
/subsys/net/*/openthread/ @rlubos
/subsys/power/ @nashif @pabigot @ceolin
/subsys/power/ @wentongwu @pabigot
/subsys/random/ @dleach02
/subsys/settings/ @nvlsianpu
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/stats/ @nvlsianpu
/subsys/storage/ @nvlsianpu
/subsys/testsuite/ @nashif
/subsys/timing/ @nashif @dcpleung
/subsys/usb/ @jfischer-no @finikorg
/subsys/usb/class/dfu/usb_dfu.c @nvlsianpu
/subsys/usb/ @jfischer-phytec-iot @finikorg
/tests/ @nashif
/tests/application_development/libcxx/ @pabigot
/tests/arch/arm/ @ioannisg @stephanosio
/tests/benchmarks/cmsis_dsp/ @stephanosio
/tests/boards/native_posix/ @aescolar @daor-oti
/tests/arch/arm/ @ioannisg
/tests/boards/native_posix/ @aescolar
/tests/boards/intel_s1000_crb/ @dcpleung @sathishkuttan
/tests/bluetooth/ @joerchan @jhedberg @Vudentz
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar @wopu-ot
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar
/tests/posix/ @pfalcon
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin
/tests/drivers/can/ @alexanderwachter
/tests/drivers/counter/ @nordic-krch
/tests/drivers/counter/maxim_ds3231_api/ @pabigot
/tests/drivers/eeprom/ @henrikbrixandersen @sjg20
/tests/drivers/flash_simulator/ @nvlsianpu
/tests/drivers/gpio/ @mnkp @pabigot
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @Mierunski
/tests/kernel/ @dcpleung @andyross @nashif
/tests/kernel/ @andrewboie @andyross @nashif
/tests/lib/ @nashif
/tests/lib/cmsis_dsp/ @stephanosio
/tests/net/ @jukkar @tbursztyka @pfalcon
/tests/net/buf/ @jukkar @jhedberg @tbursztyka @pfalcon
/tests/net/lib/ @jukkar @tbursztyka @pfalcon
/tests/net/lib/http_header_fields/ @jukkar @tbursztyka
/tests/net/lib/mqtt_packet/ @jukkar @tbursztyka
/tests/net/lib/coap/ @rlubos
/tests/net/socket/socketpair/ @cfriedt
/tests/net/lib/coap/ @rveerama1
/tests/net/socket/ @jukkar @tbursztyka @pfalcon
/tests/subsys/debug/coredump/ @dcpleung
/tests/subsys/fs/ @nashif @nvlsianpu @de-nordic @pabigot
/tests/subsys/fs/ @nashif @wentongwu
/tests/subsys/settings/ @nvlsianpu
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
/doc/reference/kernel/ @andyross @nashif
*posix*.rst @aescolar @daor-oti
*posix*.rst @aescolar


@@ -4,6 +4,12 @@
# Copyright (c) 2016 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
menu "Modules"
source "$(CMAKE_BINARY_DIR)/Kconfig.modules"
source "modules/Kconfig"
endmenu
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
@@ -14,24 +20,19 @@
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
source "$(KCONFIG_BINARY_DIR)/Kconfig.shield.defconfig"
source "boards/shields/*/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
source "$(KCONFIG_BINARY_DIR)/Kconfig.soc.defconfig"
menu "Modules"
source "modules/Kconfig"
endmenu
source "$(SOC_DIR)/$(ARCH)/*/Kconfig.defconfig"
source "boards/Kconfig"
source "soc/Kconfig"
source "$(SOC_DIR)/Kconfig"
source "arch/Kconfig"
source "kernel/Kconfig"
source "dts/Kconfig"
source "drivers/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
source "ext/Kconfig"
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig"
@@ -118,15 +119,15 @@ config FLASH_LOAD_SIZE
endif # HAS_FLASH_LOAD_OFFSET
config ROM_START_OFFSET
config TEXT_SECTION_OFFSET
hex
prompt "ROM start offset" if !BOOTLOADER_MCUBOOT
prompt "TEXT section offset" if !BOOTLOADER_MCUBOOT
default 0x200 if BOOTLOADER_MCUBOOT
default 0
help
If the application is built for chain-loading by a bootloader this
variable is required to be set to a value that leaves sufficient
space between the beginning of the image and the start of the first
space between the beginning of the image and the start of the .text
section to store an image header or any other metadata.
In the particular case of the MCUboot bootloader this reserves enough
space to store the image header, which should also meet vector table
@@ -153,6 +154,27 @@ config CUSTOM_LINKER_SCRIPT
linker script and avoid having to change the script provided by
Zephyr.
config CUSTOM_RODATA_LD
bool "(DEPRECATED) Include custom-rodata.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the rodata section.
config CUSTOM_RWDATA_LD
bool "(DEPRECATED) Include custom-rwdata.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the data section.
config CUSTOM_SECTIONS_LD
bool "(DEPRECATED) Include custom-sections.ld"
help
Note: This is deprecated, use Cmake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
arbitrary sections.
config KERNEL_ENTRY
string "Kernel entry symbol"
default "__start"
@@ -171,12 +193,6 @@ endmenu
menu "Compiler Options"
config CODING_GUIDELINE_CHECK
bool "Enforce coding guideline rules"
help
Use available compiler flags to check coding guideline rules during
the build.
config NATIVE_APPLICATION
bool "Build as a native host application"
help
@@ -273,14 +289,6 @@ config OUTPUT_DISASSEMBLY
help
Create an .lst file with the assembly listing of the firmware.
config OUTPUT_DISASSEMBLE_ALL
bool "Disassemble all sections with source. Fill zeros."
default n
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain the complete disassembly of the firmware,
not just the sections expected to contain instructions, including zeros
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -295,60 +303,39 @@ config OUTPUT_PRINT_MEMORY_USAGE
ram_report and
https://sourceware.org/binutils/docs/ld/MEMORY.html
config CLEANUP_INTERMEDIATE_FILES
bool "Remove all intermediate files"
help
Delete intermediate files to save space and clean up clutter resulting
from the build process.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
help
Build an Intel HEX binary zephyr/zephyr.hex in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in HEX format. This will build a zephyr.hex file needed
by some platforms.
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
help
Build a "raw" binary zephyr/zephyr.bin in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
config BUILD_OUTPUT_EFI
bool "Build as an EFI application"
default n
depends on X86_64
help
Build as an EFI application.
This works by creating a "zephyr.efi" EFI binary containing a zephyr
image extracted from a built zephyr.elf file. EFI applications are
relocatable, and cannot be placed at specific locations in memory.
Instead, the stub code will copy the embedded zephyr sections to the
appropriate locations at startup, clear any zero-filled (BSS, etc...)
areas, then jump into the 64 bit entry point.
Build a binary in BIN format. This will build a zephyr.bin file needed
by some platforms.
config BUILD_OUTPUT_EXE
bool "Build a binary in ELF format with .exe extension"
help
Build an ELF binary that can run in the host system at
zephyr/zephyr.exe in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in ELF format that can run in the host system. This
will build a zephyr.exe file.
config BUILD_OUTPUT_S19
bool "Build a binary in S19 format"
help
Build an S19 binary zephyr/zephyr.s19 in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a binary in S19 format. This will build a zephyr.s19 file needed
by some platforms.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_STRIPPED
bool "Build a stripped binary"
help
Build a stripped binary zephyr/zephyr.strip in the build directory.
The name of this file can be customized with CONFIG_KERNEL_BIN_NAME.
Build a stripped binary. This will build a zephyr.stripped file needed
by some platforms.
config APPLICATION_DEFINED_SYSCALL
bool "Scan application folder for any syscall definition"
@@ -362,13 +349,6 @@ config MAKEFILE_EXPORTS
Generates a file with build information that can be read by
third party Makefile-based build systems.
config DEPRECATED_ZEPHYR_INT_TYPES
bool "Allow the use of the deprecated zephyr integer types"
help
Allows the use of the deprecated Zephyr integer typedefs defined in
Zephyr 2.3 and previous versions. These types are:
u8_t, u16_t, u32_t, u64_t, s8_t, s16_t, s32_t, and s64_t.
endmenu
endmenu
@@ -395,16 +375,9 @@ config BOOTLOADER_SRAM_SIZE
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
config MCUBOOT
bool
help
Hidden option used to indicate that the current image is MCUBoot
config BOOTLOADER_MCUBOOT
bool "MCUboot bootloader support"
select USE_DT_CODE_PARTITION
imply INIT_ARCH_HW_AT_BOOT if ARCH_SUPPORTS_ARCH_HW_INIT
depends on !MCUBOOT
help
This option signifies that the target uses MCUboot as a bootloader,
or in other words that the image is to be chain-loaded by MCUboot.
@@ -412,65 +385,10 @@ config BOOTLOADER_MCUBOOT
order for the image generated to be bootable using the MCUboot open
source bootloader. Currently this includes:
* Setting ROM_START_OFFSET to a default value that allows space
* Setting TEXT_SECTION_OFFSET to a default value that allows space
for the MCUboot image header
* Activating SW_VECTOR_RELAY_CLIENT on Cortex-M0
(or Armv8-M baseline) targets with no built-in vector relocation
mechanisms
By default, this option instructs Zephyr to initialize the core
architecture HW registers during boot, when this is supported by
the application. This removes the need for MCUboot to reset
the core registers' state itself.
if BOOTLOADER_MCUBOOT
config MCUBOOT_SIGNATURE_KEY_FILE
string "Path to the mcuboot signing key file"
default ""
help
The file contains a key pair whose public half is verified
by your target's MCUboot image. The file is in PEM format.
If set to a non-empty value, the build system tries to
sign the final binaries using a 'west sign -t imgtool' command.
The signed binaries are placed in the build directory
at zephyr/zephyr.signed.bin and zephyr/zephyr.signed.hex.
The file names can be customized with CONFIG_KERNEL_BIN_NAME.
The existence of bin and hex files depends on CONFIG_BUILD_OUTPUT_BIN
and CONFIG_BUILD_OUTPUT_HEX.
This option should contain an absolute path to the same file
as the BOOT_SIGNATURE_KEY_FILE option in your MCUboot
.config. (The MCUboot config option is used for the MCUboot
bootloader image; this option is for your application which
is to be loaded by MCUboot. The MCUboot config option can be
a relative path from the MCUboot repository root; this option's
behavior is undefined for relative paths.)
If left empty, you must sign the Zephyr binaries manually.
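A hedged sketch of the signing step described above: when the key file option is non-empty, the build system runs a 'west sign -t imgtool' command roughly equivalent to the call below. The build directory and key path are illustrative, and this is not the literal build-system implementation.

# Hedged sketch; directory and key path are examples only.
import subprocess

def sign_build(build_dir, key_file):
    subprocess.run(
        ["west", "sign", "-t", "imgtool", "-d", build_dir,
         "--", "--key", key_file],
        check=True)

# sign_build("build", "bootloader/mcuboot/root-rsa-2048.pem")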
config MCUBOOT_EXTRA_IMGTOOL_ARGS
string "Extra arguments to pass to imgtool"
default ""
help
If CONFIG_MCUBOOT_SIGNATURE_KEY_FILE is a non-empty string,
you can use this option to pass extra options to imgtool.
For example, you could set this to "--version 1.2".
config MCUBOOT_GENERATE_CONFIRMED_IMAGE
bool "Also generate a padded, confirmed image"
help
The signed, padded, and confirmed binaries are placed in the build
directory at zephyr/zephyr.signed.confirmed.bin and
zephyr/zephyr.signed.confirmed.hex.
The file names can be customized with CONFIG_KERNEL_BIN_NAME.
The existence of bin and hex files depends on CONFIG_BUILD_OUTPUT_BIN
and CONFIG_BUILD_OUTPUT_HEX.
endif # BOOTLOADER_MCUBOOT
* Activating SW_VECTOR_RELAY on Cortex-M0 (or Armv8-M baseline)
targets with no built-in vector relocation mechanisms
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
@@ -480,40 +398,21 @@ config BOOTLOADER_ESP_IDF
inside the build folder.
At flash time, the bootloader will be flashed with the zephyr image
config BOOTLOADER_BOSSA
bool "BOSSA bootloader support"
select USE_DT_CODE_PARTITION
depends on SOC_FAMILY_SAM0
config BOOTLOADER_KEXEC
bool "Boot using Linux kexec() system call"
depends on X86
help
Signifies that the target uses a BOSSA compatible bootloader. If CDC
ACM USB support is also enabled then the board will reboot into the
bootloader automatically when bossac is run.
This option signifies that Linux boots the kernel using kexec system call
and utility. This method is used to boot the kernel over the network.
config BOOTLOADER_BOSSA_DEVICE_NAME
string "BOSSA CDC ACM device name"
depends on BOOTLOADER_BOSSA && CDC_ACM_DTE_RATE_CALLBACK_SUPPORT
default "CDC_ACM_0"
config BOOTLOADER_CONTEXT_RESTORE
bool "Boot loader has context restore support"
default y
depends on SYS_POWER_DEEP_SLEEP_STATES && BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
help
Sets the CDC ACM port to watch for reboot commands.
choice
prompt "BOSSA bootloader variant"
depends on BOOTLOADER_BOSSA
config BOOTLOADER_BOSSA_ARDUINO
bool "Arduino"
help
Select the Arduino variant of the BOSSA bootloader. Uses 0x07738135
as the magic value to enter the bootloader.
config BOOTLOADER_BOSSA_ADAFRUIT_UF2
bool "Adafruit UF2"
help
Select the Adafruit UF2 variant of the BOSSA bootloader. Uses
0xf01669ef as the magic value to enter the bootloader.
endchoice
This option signifies that the target has a bootloader
that restores CPU context upon resuming from deep sleep
power state.
config REBOOT
bool "Reboot functionality"

File diff suppressed because it is too large.


@@ -2,29 +2,24 @@
# Top level makefile for documentation build
#
ifndef ZEPHYR_BASE
$(error The ZEPHYR_BASE environment variable must be set)
endif
BUILDDIR ?= doc/_build
DOC_TAG ?= development
SPHINXOPTS ?= -q
KCONFIG_TURBO_MODE ?= 0
# Documentation targets
# ---------------------------------------------------------------------------
.PHONY: clean htmldocs htmldocs-fast pdfdocs doxygen
htmldocs-fast:
${MAKE} htmldocs KCONFIG_TURBO_MODE=1
htmldocs pdfdocs doxygen: configure
cmake --build ${BUILDDIR} -- $@ # -v # VERBOSE=1
# Run CMake every time cause it's quick and re-configures TURBO_MODE if
# needed
.PHONY: configure
configure:
cmake -GNinja -B${BUILDDIR} -Sdoc/ -DDOC_TAG=${DOC_TAG} \
-DSPHINXOPTS=${SPHINXOPTS} \
-DKCONFIG_TURBO_MODE=${KCONFIG_TURBO_MODE}
clean:
rm -rf ${BUILDDIR}
htmldocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
htmldocs-fast:
mkdir -p ${BUILDDIR} && cmake -GNinja -DKCONFIG_TURBO_MODE=1 -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
pdfdocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} pdfdocs


@@ -8,9 +8,8 @@
<a href="https://bestpractices.coreinfrastructure.org/projects/74"><img
src="https://bestpractices.coreinfrastructure.org/projects/74/badge"></a>
<a href="https://buildkite.com/zephyr/zephyr">
<img
src="https://badge.buildkite.com/f5bd0dc88306cee17c9b38e78d11bb74a6291e3f40e7d13f31.svg?branch=master"></a>
src="https://api.shippable.com/projects/58ffb2b8baa5e307002e1d79/badge?branch=master">
The Zephyr Project is a scalable real-time operating system (RTOS) supporting
@@ -53,7 +52,7 @@ Here's a quick summary of resources to help you find your way around:
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://github.com/zephyrproject-rtos/zephyr/releases
* **Releases**: https://zephyrproject.org/developers/#downloads
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,


@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 5
PATCHLEVEL = 0
VERSION_MINOR = 2
PATCHLEVEL = 1
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -4,7 +4,7 @@ add_definitions(-D__ZEPHYR_SUPERVISOR__)
include_directories(
${ZEPHYR_BASE}/kernel/include
${ARCH_DIR}/${ARCH}/include
${ZEPHYR_BASE}/arch/${ARCH}/include
)
add_subdirectory(common)


@@ -20,48 +20,21 @@ config ARC
bool
select ARCH_IS_SET
select HAS_DTS
imply XIP
select ARCH_HAS_THREAD_LOCAL_STORAGE
help
ARC architecture
config ARM
bool
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP if CPU_CORTEX_M
select HAS_DTS
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
select ARCH_HAS_THREAD_LOCAL_STORAGE if ARM64 || CPU_CORTEX_R || CPU_CORTEX_M
help
ARM architecture
config SPARC
bool
select ARCH_IS_SET
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select BIG_ENDIAN
select ATOMIC_OPERATIONS_BUILTIN if SPARC_CASA
select ATOMIC_OPERATIONS_C if !SPARC_CASA
select ARCH_HAS_THREAD_LOCAL_STORAGE
help
SPARC architecture
config X86
bool
select ARCH_IS_SET
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
select ARCH_SUPPORTS_COREDUMP
select CPU_HAS_MMU
select ARCH_MEM_DOMAIN_DATA if USERSPACE && !X86_COMMON_PAGE_TABLE
select ARCH_MEM_DOMAIN_SYNCHRONOUS_API if USERSPACE
select ARCH_HAS_GDBSTUB if !X86_64
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_DEMAND_PAGING
help
x86 architecture
@@ -70,8 +43,6 @@ config NIOS2
select ARCH_IS_SET
select ATOMIC_OPERATIONS_C
select HAS_DTS
imply XIP
select ARCH_HAS_TIMING_FUNCTIONS
help
Nios II Gen 2 architecture
@@ -79,8 +50,6 @@ config RISCV
bool
select ARCH_IS_SET
select HAS_DTS
select ARCH_HAS_THREAD_LOCAL_STORAGE
imply XIP
help
RISCV architecture
@@ -90,13 +59,13 @@ config XTENSA
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select XTENSA_HAL if "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "xcc"
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
select HAS_DTS
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
@@ -229,8 +198,6 @@ config USERSPACE
bool "User mode threads"
depends on ARCH_HAS_USERSPACE
depends on RUNTIME_ERROR_CHECKS
depends on SRAM_REGION_PERMISSIONS
select THREAD_STACK_INFO
help
When enabled, threads may be created or dropped down to user mode,
which has significantly restricted permissions and must interact
@@ -251,30 +218,25 @@ config PRIVILEGED_STACK_SIZE
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
this region will be inaccessible from user mode. During system calls,
this region will be utilized by the system call. This value must be
a multiple of the minimum stack alignment.
this region will be utilized by the system call.
config PRIVILEGED_STACK_TEXT_AREA
int "Privileged stacks text area"
default 512 if COVERAGE_GCOV
default 256
depends on ARCH_HAS_USERSPACE
help
Stack text area size for privileged stacks.
config KOBJECT_TEXT_AREA
int "Size if kobject text area"
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 512 if STACK_CANARIES && RISCV
default 256
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config GEN_PRIV_STACKS
bool
help
Selected if the architecture requires that privilege elevation stacks
be allocated in a separate memory area. This is typical of arches
whose MPUs require regions to be power-of-two aligned/sized.
FIXME: This should be removed and replaced with checks against
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT, but both ARM and ARC
changes will be necessary for this.
config STACK_GROWS_UP
bool "Stack grows towards higher memory addresses"
help
@@ -389,44 +351,14 @@ config IRQ_OFFLOAD
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
config EXTRA_EXCEPTION_INFO
bool "Collect extra exception info"
depends on ARCH_HAS_EXTRA_EXCEPTION_INFO
help
This option enables the collection of extra information, such as
register state, when a fault occurs. This information can be useful
to collect for post-mortem analysis and debug of issues.
endmenu # Interrupt configuration
config INIT_ARCH_HW_AT_BOOT
bool "Initialize internal architecture state at boot"
depends on ARCH_SUPPORTS_ARCH_HW_INIT
help
This option instructs Zephyr to force the initialization
of the internal architectural state (for example ARCH-level
HW registers and system control blocks) during boot to
the reset values as specified by the corresponding
architecture manual. The option is useful when the Zephyr
firmware image is chain-loaded, for example, by a debugger
or a bootloader, and we need to guarantee that the internal
states of the architecture core blocks are restored to the
reset values (as specified by the architecture).
Note: the functionality is architecture-specific. For the
implementation details refer to each architecture where
this feature is supported.
endmenu
#
# Architecture Capabilities
#
config ARCH_HAS_TIMING_FUNCTIONS
bool
config ARCH_HAS_TRUSTED_EXECUTION
bool
@@ -448,28 +380,6 @@ config ARCH_HAS_RAMFUNC_SUPPORT
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
config ARCH_SUPPORTS_COREDUMP
bool
config ARCH_SUPPORTS_ARCH_HW_INIT
bool
config ARCH_HAS_EXTRA_EXCEPTION_INFO
bool
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_COHERENCE
bool
help
When selected, the architecture supports the
arch_mem_coherent() API and can link into incoherent/cached
memory using the ".cached" linker section.
config ARCH_HAS_THREAD_LOCAL_STORAGE
bool
#
# Other architecture related options
#
@@ -477,6 +387,53 @@ config ARCH_HAS_THREAD_LOCAL_STORAGE
config ARCH_HAS_THREAD_ABORT
bool
#
# Hidden PM feature configs which are to be selected by
# individual SoC.
#
config HAS_SYS_POWER_STATE_SLEEP_1
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_1
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_2
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_3
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_3
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_1
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_1
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_2
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_3
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_3
configuration option.
config BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
bool
help
This option signifies that the target has options of bootloaders
that support context restore upon resume from deep sleep
#
# Hidden CPU family configs
#
@@ -500,174 +457,18 @@ config CPU_HAS_FPU
This option is enabled when the CPU has hardware floating point
unit.
config CPU_HAS_FPU_DOUBLE_PRECISION
bool
select CPU_HAS_FPU
help
When enabled, this indicates that the CPU has a double floating point
precision unit.
config CPU_HAS_MPU
bool
help
This option is enabled when the CPU has a Memory Protection Unit (MPU).
config CPU_HAS_MMU
config MEMORY_PROTECTION
bool
help
This hidden option is selected when the CPU has a Memory Management Unit
(MMU).
This option is enabled when Memory Protection features are supported.
Memory protection support is currently available on ARC, ARM, and x86
architectures.
config ARCH_HAS_DEMAND_PAGING
bool
help
This hidden configuration should be selected by the architecture if
demand paging is supported.
config ARCH_HAS_RESERVED_PAGE_FRAMES
bool
help
This hidden configuration should be selected by the architecture if
certain RAM page frames need to be marked as reserved and never used for
memory mappings. The architecture will need to implement
arch_reserved_pages_update().
config ARCH_MAPS_ALL_RAM
bool
help
This hidden option is selected by the architecture to inform the kernel
that all RAM is mapped at boot, and not just the bounds of the Zephyr image.
If RAM starts at 0x0, the first page must remain un-mapped to catch NULL
pointer dereferences. With this enabled, the kernel will not assume that
virtual memory addresses past the kernel image are available for mappings,
but instead takes an entire RAM mapping into account.
This is typically set by architectures which need direct access to all memory.
It is the architecture's responsibility to mark reserved memory regions
as such in arch_reserved_pages_update().
Although the kernel will not disturb this RAM mapping by re-mapping the associated
virtual addresses elsewhere, this is limited to only management of the
virtual address space. The kernel's page frame ontology will not consider
this mapping at all; non-kernel pages will be considered free (unless marked
as reserved) and Z_PAGE_FRAME_MAPPED will not be set.
menuconfig MMU
bool "Enable MMU features"
depends on CPU_HAS_MMU
help
This option is enabled when the CPU's memory management unit is active
and the arch_mem_map() API is available.
if MMU
config MMU_PAGE_SIZE
hex "Size of smallest granularity MMU page"
default 0x1000
help
Size of memory pages. Varies per MMU but 4K is common. For MMUs that
support multiple page sizes, put the smallest one here.
config KERNEL_VM_BASE
hex "Virtual address space base address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
Define the base of the kernel's address space.
By default, this is the same as the DT_CHOSEN_Z_SRAM physical base SRAM
address from DTS, in which case RAM will be identity-mapped. Some
architectures may require RAM to be mapped in this way; they may have
just one RAM region and doing this makes linking much simpler, as
at least when the kernel boots all virtual RAM addresses are the same
as their physical address (demand paging at runtime may later modify
this for non-pinned page frames).
Otherwise, if RAM isn't identity-mapped:
1. It is the architecture's responsibility to transition the
instruction pointer to virtual addresses at early boot before
entering the kernel at z_cstart().
2. The underlying architecture may impose constraints on the bounds of
the kernel's address space, such as not overlapping physical RAM
regions if RAM is not identity-mapped, or the virtual and physical
base addresses being aligned to some common value (which allows
double-linking of paging structures to make the instruction pointer
transition simpler).
Zephyr does not implement a split address space and if multiple
page tables are in use, they all have the same virtual-to-physical
mappings (with potentially different permissions).
config KERNEL_VM_OFFSET
hex "Kernel offset within address space"
default 0
help
Offset that the kernel image begins within its address space,
if this is not the same offset from the beginning of RAM.
Some care may need to be taken in selecting this value. In certain
build-time cases, or when a physical address cannot be looked up
in page tables, the equation:
virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
SRAM_BASE_ADDRESS)
Will be used to convert between physical and virtual addresses for
memory that is mapped at boot.
This is uncommon and is only necessary if the beginning of VM and
physical memory have dissimilar alignment.
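As a worked example of the conversion quoted above (the addresses are illustrative and not taken from any real board):

# Illustrative numbers only.
KERNEL_VM_BASE    = 0x80000000   # virtual base of the kernel address space
KERNEL_VM_OFFSET  = 0x0          # kernel image starts at the base of that space
SRAM_BASE_ADDRESS = 0x20000000   # physical base of SRAM

def phys_to_virt(phys):
    return phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) - SRAM_BASE_ADDRESS)

assert phys_to_virt(0x20001000) == 0x80001000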
config KERNEL_VM_SIZE
hex "Size of kernel address space in bytes"
default 0x800000
help
Size of the kernel's address space. Constraining this helps control
how much total memory can be used for page tables.
The difference between KERNEL_VM_BASE and KERNEL_VM_SIZE indicates the
size of the virtual region for runtime memory mappings. This is needed
for mapping driver MMIO regions, as well as special RAM mapping use-cases
such as VDSO pages, memory-mapped thread stacks, and anonymous memory
mappings. The kernel itself will be mapped in here as well at boot.
Systems with very large amounts of memory (such as 512M or more)
will want to use a 64-bit build of Zephyr, there are no plans to
implement a notion of "high" memory in Zephyr to work around physical
RAM size larger than the defined bounds of the virtual address space.
config DEMAND_PAGING
bool "Enable demand paging [EXPERIMENTAL]"
depends on ARCH_HAS_DEMAND_PAGING
help
Enable demand paging. Requires architecture support in how the kernel
is linked and the implementation of an eviction algorithm and a
backing store for evicted pages.
if DEMAND_PAGING
config DEMAND_PAGING_ALLOW_IRQ
bool "Allow interrupts during page-ins/outs"
help
Allow interrupts to be serviced while pages are being evicted or
retrieved from the backing store. This is much better for system
latency, but any code running in interrupt context that page faults
will cause a kernel panic. Such code must work with exclusively pinned
code and data pages.
The scheduler is still disabled during this operation.
If this option is disabled, the page fault servicing logic
runs with interrupts disabled for the entire operation. However,
ISRs may also page fault.
endif # DEMAND_PAGING
endif # MMU
menuconfig MPU
bool "Enable MPU features"
depends on CPU_HAS_MPU
help
This option, when enabled, indicates to the core kernel that an MPU
is enabled.
if MPU
config MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT
bool
help
@@ -705,137 +506,23 @@ config MPU_GAP_FILLING
documentation for more information on how this option is
used.
endif # MPU
config SRAM_REGION_PERMISSIONS
bool "Assign appropriate permissions to kernel areas in SRAM"
depends on MMU || MPU
default y
help
This option indicates that memory protection hardware
is present, enabled, and regions have been configured at boot for memory
ranges within the kernel image.
If this option is turned on, certain areas of the kernel image will
have the following access policies applied for all threads, including
supervisor threads:
1) All program text will have read-only, executable memory permissions
2) All read-only data will have read-only permission, and execution
disabled if the hardware supports it.
3) All other RAM addresses will have read-write permission, and
execution disabled if the hardware supports it.
Options such as USERSPACE or HW_STACK_PROTECTION may impose
additional policies on the memory map, which may be global
or local to the currently running thread.
This option may consume additional memory to satisfy memory protection
hardware alignment constraints.
If this option is disabled, the entire kernel will have default memory
access permissions set, typically read/write/execute. It may be desirable
to turn this off on MMU systems which are using the MMU for demand
paging, do not need memory protection, and would rather not use up
RAM for the alignment between regions.
menu "Floating Point Options"
config FPU
bool "Enable floating point unit (FPU)"
menuconfig FLOAT
bool "Floating point"
depends on CPU_HAS_FPU
depends on ARC || ARM || RISCV || SPARC || X86
depends on ARM || X86 || ARC
help
This option enables the hardware Floating Point Unit (FPU), in order to
support using the floating point registers and instructions.
This option allows threads to use the floating point registers.
By default, only a single thread may use the registers.
When this option is enabled, by default, threads may use the floating
point registers only in an exclusive manner, and this usually means that
only one thread may perform floating point operations.
Disabling this option means that any thread that uses a
floating point register will get a fatal exception.
If it is necessary for multiple threads to perform concurrent floating
point operations, the "FPU register sharing" option must be enabled to
preserve the floating point registers across context switches.
Note that this option cannot be selected for the platforms that do not
include a hardware floating point unit; the floating point support for
those platforms is dependent on the availability of the toolchain-
provided software floating point library.
config FPU_SHARING
bool "FPU register sharing"
depends on FPU && MULTITHREADING
default y if ARM && ARMV7_M_ARMV8_M_FP
config FP_SHARING
bool "Floating point register sharing"
depends on FLOAT
help
This option enables preservation of the hardware floating point registers
across context switches to allow multiple threads to perform concurrent
floating point operations.
Note that on Cortex-M processors with the floating point extension we
enable by default the FPU register sharing mode, as some GCC compilers
may activate a floating point context by generating FP instructions for
any thread, and that context must be preserved when switching such
threads in and out. The developers can still disable the FP sharing
mode in their application projects, and switch to Unshared FP registers
mode, if it is guaranteed that the image code does not generate FP
instructions outside the single thread context that is allowed to do so.
endmenu
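As an illustrative aside (not part of this change), the effect of FP register sharing can be shown with the generic thread API; K_FP_REGS is the standard thread option for tagging FP users on architectures that require it, while the thread name and workload here are made up:

#include <zephyr.h>

#define FP_STACK_SIZE 1024

K_THREAD_STACK_DEFINE(fp_stack, FP_STACK_SIZE);
static struct k_thread fp_thread;

static void fp_entry(void *p1, void *p2, void *p3)
{
	volatile float acc = 0.0f;

	ARG_UNUSED(p1);
	ARG_UNUSED(p2);
	ARG_UNUSED(p3);

	for (int i = 0; i < 100; i++) {
		acc += 0.5f; /* touches FP registers */
	}
}

void start_fp_thread(void)
{
	/* With FPU_SHARING (or the older FP_SHARING) enabled, this
	 * thread's FP context is preserved across context switches;
	 * without it, only one thread may safely use the FP registers.
	 */
	k_thread_create(&fp_thread, fp_stack, FP_STACK_SIZE,
			fp_entry, NULL, NULL, NULL,
			K_PRIO_PREEMPT(1), K_FP_REGS, K_NO_WAIT);
}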
menu "Cache Options"
config CACHE_MANAGEMENT
bool "Enable cache management features"
help
This links in the cache management functions (for d-cache and i-cache
where possible).
config DCACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
depends on CACHE_MANAGEMENT
help
This option enables querying some architecture-specific hardware for
finding the d-cache line size at the expense of taking more memory and
code and a slightly increased boot time.
If the CPU's d-cache line size is known in advance, disable this option and
manually enter the value for DCACHE_LINE_SIZE or set it in the DT
using the 'd-cache-line-size' property.
config DCACHE_LINE_SIZE
int "d-cache line size" if !DCACHE_LINE_SIZE_DETECT
depends on CACHE_MANAGEMENT
default 0
help
Size in bytes of a CPU d-cache line. If this is set to 0 the value is
obtained from the 'd-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting DCACHE_LINE_SIZE_DETECT.
config ICACHE_LINE_SIZE_DETECT
bool "Detect i-cache line size at runtime"
depends on CACHE_MANAGEMENT
help
This option enables querying some architecture-specific hardware for
finding the i-cache line size at the expense of taking more memory and
code and a slightly increased boot time.
If the CPU's i-cache line size is known in advance, disable this option and
manually enter the value for ICACHE_LINE_SIZE or set it in the DT
using the 'i-cache-line-size' property.
config ICACHE_LINE_SIZE
int "i-cache line size" if !ICACHE_LINE_SIZE_DETECT
depends on CACHE_MANAGEMENT
default 0
help
Size in bytes of a CPU i-cache line. If this is set to 0 the value is
obtained from the 'i-cache-line-size' DT property instead if present.
Detect automatically at runtime by selecting ICACHE_LINE_SIZE_DETECT.
endmenu
This option allows multiple threads to use the floating point
registers.
config ARCH
string


@@ -12,8 +12,4 @@ zephyr_cc_option(-fno-delete-null-pointer-checks)
zephyr_cc_option_ifdef(CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS -munaligned-access)
# Instruct compiler to use register R26 as thread pointer
# for thread local storage.
zephyr_cc_option_ifdef(CONFIG_THREAD_LOCAL_STORAGE -mtp-regno=26)
add_subdirectory(core)


@@ -23,10 +23,7 @@ config CPU_ARCEM
config CPU_ARCHS
bool "ARC HS cores"
select CPU_ARCV2
# FIXME: ATOMIC_OPERATIONS_BUILTIN still has some problem in arcmwdt
# toolchain, so choosing ATOMIC_OPERATIONS_C instead.
select ATOMIC_OPERATIONS_C if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "arcmwdt"
select ATOMIC_OPERATIONS_BUILTIN if "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "arcmwdt"
select ATOMIC_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
@@ -156,7 +153,7 @@ config ARC_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
select ARC_STACK_CHECKING if ARC_HAS_STACK_CHECKING
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU && ARC_MPU_VER !=2)
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU)
select THREAD_STACK_INFO
help
This option enables either:
@@ -192,6 +189,9 @@ config FAULT_DUMP
0: Off.
config XIP
default y if !UART_NSIM
config GEN_ISR_TABLES
default y
@@ -211,7 +211,7 @@ config CODE_DENSITY
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y if FPU
default y if FLOAT
help
Depending on the configuration, CPU can contain accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPR so
@@ -282,8 +282,34 @@ source "arch/arc/core/mpu/Kconfig"
endmenu
config DCACHE_LINE_SIZE
config CACHE_LINE_SIZE_DETECT
bool "Detect d-cache line size at runtime"
help
This option enables querying the d-cache build register for finding
the d-cache line size at the expense of taking more memory and code
and a slightly increased boot time.
If the CPU's d-cache line size is known in advance, disable this
option and manually enter the value for CACHE_LINE_SIZE.
config CACHE_LINE_SIZE
int "Cache line size" if !CACHE_LINE_SIZE_DETECT
default 32
help
Size in bytes of a CPU d-cache line.
Detect automatically at runtime by selecting CACHE_LINE_SIZE_DETECT.
config ARCH_CACHE_FLUSH_DETECT
bool
config CACHE_FLUSHING
bool "Enable d-cache flushing mechanism"
help
This links in the sys_cache_flush() function, which provides a
way to flush multiple lines of the d-cache.
If the d-cache is present, set this to y.
If the d-cache is NOT present, set this to n.
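For orientation, a hedged sketch of how sys_cache_flush() is typically used (the buffer name and scenario are invented, and the header providing the prototype has varied across releases):

#include <zephyr.h>
#include <cache.h>
#include <string.h>

static uint8_t dma_buf[256]; /* hypothetical buffer shared with a bus master */

void publish_buffer(void)
{
	/* Fill the buffer through the (write-back) d-cache. */
	memset(dma_buf, 0xA5, sizeof(dma_buf));

	/* Flush the dirty lines so the data is visible in main memory.
	 * The implementation iterates over whole d-cache lines, so no
	 * particular alignment of the address or size is required.
	 */
	sys_cache_flush((vaddr_t)dma_buf, sizeof(dma_buf));
}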
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"


@@ -2,11 +2,6 @@
zephyr_library()
if(CONFIG_COVERAGE)
zephyr_compile_options($<TARGET_PROPERTY:compiler,coverage>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,coverage>)
endif()
zephyr_library_sources(
thread.c
thread_entry_wrapper.S
@@ -24,17 +19,15 @@ zephyr_library_sources(
vector_table.c
)
zephyr_library_sources_ifdef(CONFIG_CACHE_MANAGEMENT cache.c)
zephyr_library_sources_ifdef(CONFIG_CACHE_FLUSHING cache.c)
zephyr_library_sources_ifdef(CONFIG_ARC_FIRQ fast_irq.S)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_if_kconfig(irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)


@@ -24,7 +24,7 @@ static struct k_spinlock arc_connect_spinlock;
/* Generate an inter-core interrupt to the target core */
void z_arc_connect_ici_generate(uint32_t core)
void z_arc_connect_ici_generate(u32_t core)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_IRQ, core);
@@ -32,7 +32,7 @@ void z_arc_connect_ici_generate(uint32_t core)
}
/* Acknowledge the inter-core interrupt raised by core */
void z_arc_connect_ici_ack(uint32_t core)
void z_arc_connect_ici_ack(u32_t core)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_ACK, core);
@@ -40,9 +40,9 @@ void z_arc_connect_ici_ack(uint32_t core)
}
/* Read inter-core interrupt status */
uint32_t z_arc_connect_ici_read_status(uint32_t core)
u32_t z_arc_connect_ici_read_status(u32_t core)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_READ_STATUS, core);
@@ -53,9 +53,9 @@ uint32_t z_arc_connect_ici_read_status(uint32_t core)
}
/* Check the source of inter-core interrupt */
uint32_t z_arc_connect_ici_check_src(void)
u32_t z_arc_connect_ici_check_src(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
@@ -68,7 +68,7 @@ uint32_t z_arc_connect_ici_check_src(void)
/* Clear the inter-core interrupt */
void z_arc_connect_ici_clear(void)
{
uint32_t cpu, c;
u32_t cpu, c;
LOCKED(&arc_connect_spinlock) {
@@ -89,7 +89,7 @@ void z_arc_connect_ici_clear(void)
}
/* Reset the cores in core_mask */
void z_arc_connect_debug_reset(uint32_t core_mask)
void z_arc_connect_debug_reset(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RESET,
@@ -98,7 +98,7 @@ void z_arc_connect_debug_reset(uint32_t core_mask)
}
/* Halt the cores in core_mask */
void z_arc_connect_debug_halt(uint32_t core_mask)
void z_arc_connect_debug_halt(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_HALT,
@@ -107,7 +107,7 @@ void z_arc_connect_debug_halt(uint32_t core_mask)
}
/* Run the cores in core_mask */
void z_arc_connect_debug_run(uint32_t core_mask)
void z_arc_connect_debug_run(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RUN,
@@ -116,7 +116,7 @@ void z_arc_connect_debug_run(uint32_t core_mask)
}
/* Set core mask */
void z_arc_connect_debug_mask_set(uint32_t core_mask, uint32_t mask)
void z_arc_connect_debug_mask_set(u32_t core_mask, u32_t mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_MASK,
@@ -125,9 +125,9 @@ void z_arc_connect_debug_mask_set(uint32_t core_mask, uint32_t mask)
}
/* Read core mask */
uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
u32_t z_arc_connect_debug_mask_read(u32_t core_mask)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_READ_MASK,
@@ -141,7 +141,7 @@ uint32_t z_arc_connect_debug_mask_read(uint32_t core_mask)
/*
* Select cores that should be halted if the core issuing the command is halted
*/
void z_arc_connect_debug_select_set(uint32_t core_mask)
void z_arc_connect_debug_select_set(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_SELECT,
@@ -150,9 +150,9 @@ void z_arc_connect_debug_select_set(uint32_t core_mask)
}
/* Read the select value */
uint32_t z_arc_connect_debug_select_read(void)
u32_t z_arc_connect_debug_select_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_SELECT, 0);
@@ -163,9 +163,9 @@ uint32_t z_arc_connect_debug_select_read(void)
}
/* Read the status, halt or run of all cores in the system */
uint32_t z_arc_connect_debug_en_read(void)
u32_t z_arc_connect_debug_en_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_EN, 0);
@@ -176,9 +176,9 @@ uint32_t z_arc_connect_debug_en_read(void)
}
/* Read the last command sent */
uint32_t z_arc_connect_debug_cmd_read(void)
u32_t z_arc_connect_debug_cmd_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CMD, 0);
@@ -189,9 +189,9 @@ uint32_t z_arc_connect_debug_cmd_read(void)
}
/* Read the value of internal MCD_CORE register */
uint32_t z_arc_connect_debug_core_read(void)
u32_t z_arc_connect_debug_core_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CORE, 0);
@@ -210,11 +210,11 @@ void z_arc_connect_gfrc_clear(void)
}
/* Read total 64 bits of global free running counter */
uint64_t z_arc_connect_gfrc_read(void)
u64_t z_arc_connect_gfrc_read(void)
{
uint32_t low;
uint32_t high;
uint32_t key;
u32_t low;
u32_t high;
u32_t key;
/*
* each core has its own arc connect interface, i.e.,
@@ -233,7 +233,7 @@ uint64_t z_arc_connect_gfrc_read(void)
arch_irq_unlock(key);
return (((uint64_t)high) << 32) | low;
return (((u64_t)high) << 32) | low;
}
/* Enable global free running counter */
@@ -253,7 +253,7 @@ void z_arc_connect_gfrc_disable(void)
}
/* Disable global free running counter */
void z_arc_connect_gfrc_core_set(uint32_t core_mask)
void z_arc_connect_gfrc_core_set(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_GFRC_SET_CORE,
@@ -262,9 +262,9 @@ void z_arc_connect_gfrc_core_set(uint32_t core_mask)
}
/* Set the relevant cores to halt global free running counter */
uint32_t z_arc_connect_gfrc_halt_read(void)
u32_t z_arc_connect_gfrc_halt_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HALT, 0);
@@ -275,9 +275,9 @@ uint32_t z_arc_connect_gfrc_halt_read(void)
}
/* Read the internal CORE register */
uint32_t z_arc_connect_gfrc_core_read(void)
u32_t z_arc_connect_gfrc_core_read(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_CORE, 0);
@@ -304,9 +304,9 @@ void z_arc_connect_idu_disable(void)
}
/* Read enable status of interrupt distribute unit */
uint32_t z_arc_connect_idu_read_enable(void)
u32_t z_arc_connect_idu_read_enable(void)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_ENABLE, 0);
@@ -320,8 +320,8 @@ uint32_t z_arc_connect_idu_read_enable(void)
* Set the triggering mode and distribution mode for the specified common
* interrupt
*/
void z_arc_connect_idu_set_mode(uint32_t irq_num,
uint16_t trigger_mode, uint16_t distri_mode)
void z_arc_connect_idu_set_mode(u32_t irq_num,
u16_t trigger_mode, u16_t distri_mode)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
@@ -330,9 +330,9 @@ void z_arc_connect_idu_set_mode(uint32_t irq_num,
}
/* Read the internal MODE register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
u32_t z_arc_connect_idu_read_mode(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MODE, irq_num);
@@ -346,7 +346,7 @@ uint32_t z_arc_connect_idu_read_mode(uint32_t irq_num)
* Set the target cores to receive the specified common interrupt
* when it is triggered
*/
void z_arc_connect_idu_set_dest(uint32_t irq_num, uint32_t core_mask)
void z_arc_connect_idu_set_dest(u32_t irq_num, u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_DEST,
@@ -355,9 +355,9 @@ void z_arc_connect_idu_set_dest(uint32_t irq_num, uint32_t core_mask)
}
/* Read the internal DEST register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
u32_t z_arc_connect_idu_read_dest(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_DEST, irq_num);
@@ -368,7 +368,7 @@ uint32_t z_arc_connect_idu_read_dest(uint32_t irq_num)
}
/* Assert the specified common interrupt */
void z_arc_connect_idu_gen_cirq(uint32_t irq_num)
void z_arc_connect_idu_gen_cirq(u32_t irq_num)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_GEN_CIRQ, irq_num);
@@ -376,7 +376,7 @@ void z_arc_connect_idu_gen_cirq(uint32_t irq_num)
}
/* Acknowledge the specified common interrupt */
void z_arc_connect_idu_ack_cirq(uint32_t irq_num)
void z_arc_connect_idu_ack_cirq(u32_t irq_num)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ACK_CIRQ, irq_num);
@@ -384,9 +384,9 @@ void z_arc_connect_idu_ack_cirq(uint32_t irq_num)
}
/* Read the internal STATUS register of the specified common interrupt */
uint32_t z_arc_connect_idu_check_status(uint32_t irq_num)
u32_t z_arc_connect_idu_check_status(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_STATUS, irq_num);
@@ -397,9 +397,9 @@ uint32_t z_arc_connect_idu_check_status(uint32_t irq_num)
}
/* Read the internal SOURCE register of the specified common interrupt */
uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
u32_t z_arc_connect_idu_check_source(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_SOURCE, irq_num);
@@ -410,7 +410,7 @@ uint32_t z_arc_connect_idu_check_source(uint32_t irq_num)
}
/* Mask or unmask the specified common interrupt */
void z_arc_connect_idu_set_mask(uint32_t irq_num, uint32_t mask)
void z_arc_connect_idu_set_mask(u32_t irq_num, u32_t mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MASK,
@@ -419,9 +419,9 @@ void z_arc_connect_idu_set_mask(uint32_t irq_num, uint32_t mask)
}
/* Read the internal MASK register of the specified common interrupt */
uint32_t z_arc_connect_idu_read_mask(uint32_t irq_num)
u32_t z_arc_connect_idu_read_mask(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MASK, irq_num);
@@ -435,9 +435,9 @@ uint32_t z_arc_connect_idu_read_mask(uint32_t irq_num)
* Check if it is the first-acknowledging core to the common interrupt
* if IDU is programmed in the first-acknowledged mode
*/
uint32_t z_arc_connect_idu_check_first(uint32_t irq_num)
u32_t z_arc_connect_idu_check_first(u32_t irq_num)
{
uint32_t ret = 0;
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_FIRST, irq_num);


@@ -35,7 +35,7 @@ volatile struct {
* master core that it has woken up
*
*/
volatile uint32_t arc_cpu_wake_flag;
volatile u32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
@@ -87,7 +87,7 @@ void z_arc_slave_start(int cpu_num)
#ifdef CONFIG_SMP
static void sched_ipi_handler(const void *unused)
static void sched_ipi_handler(void *unused)
{
ARG_UNUSED(unused);
@@ -95,10 +95,41 @@ static void sched_ipi_handler(const void *unused)
z_sched_ipi();
}
/**
* @brief Check whether a thread switch is needed in ISR context
*
* @details u64_t is used so that the compiler returns the value in
* registers (r0, r1): r0 holds the new thread and r1 the old thread.
* If r0 == 0, no thread switch is required.
*/
u64_t z_arc_smp_switch_in_isr(void)
{
u64_t ret = 0;
u32_t new_thread;
u32_t old_thread;
old_thread = (u32_t)_current;
new_thread = (u32_t)z_get_next_ready_thread();
if (new_thread != old_thread) {
#ifdef CONFIG_TIMESLICING
z_reset_time_slice();
#endif
_current_cpu->swap_ok = 0;
((struct k_thread *)new_thread)->base.cpu =
arch_curr_cpu()->id;
_current_cpu->current = (struct k_thread *) new_thread;
ret = new_thread | ((u64_t)(old_thread) << 32);
}
return ret;
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
uint32_t i;
u32_t i;
/* broadcast sched_ipi request to other cores
* if the target is current core, hardware will ignore it
@@ -108,12 +139,15 @@ void arch_sched_ipi(void)
}
}
static int arc_smp_init(const struct device *dev)
static int arc_smp_init(struct device *dev)
{
ARG_UNUSED(dev);
struct arc_connect_bcr bcr;
/* necessary master core init */
_kernel.cpus[0].id = 0;
_kernel.cpus[0].irq_stack = Z_THREAD_STACK_BUFFER(_interrupt_stack)
+ CONFIG_ISR_STACK_SIZE;
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
@@ -131,17 +165,6 @@ static int arc_smp_init(const struct device *dev)
return -ENODEV;
}
if (bcr.dbg) {
/* configure inter-core debug unit if available */
uint32_t core_mask = (1 << CONFIG_MP_NUM_CPUS) - 1;
z_arc_connect_debug_select_set(core_mask);
/* Debugger halt cores at conditions */
z_arc_connect_debug_mask_set(core_mask, (ARC_CONNECT_CMD_DEBUG_MASK_SH
| ARC_CONNECT_CMD_DEBUG_MASK_BH | ARC_CONNECT_CMD_DEBUG_MASK_AH
| ARC_CONNECT_CMD_DEBUG_MASK_H));
}
if (bcr.gfrc) {
/* global free running count init */
z_arc_connect_gfrc_enable();


@@ -25,8 +25,14 @@
#include <init.h>
#include <stdbool.h>
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
size_t sys_cache_line_size;
#if (CONFIG_CACHE_LINE_SIZE == 0) && !defined(CONFIG_CACHE_LINE_SIZE_DETECT)
#error Cannot use this implementation with a cache line size of 0
#endif
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
#define DCACHE_LINE_SIZE sys_cache_line_size
#else
#define DCACHE_LINE_SIZE CONFIG_CACHE_LINE_SIZE
#endif
#define DC_CTRL_DC_ENABLE 0x0 /* enable d-cache */
@@ -49,40 +55,54 @@ static bool dcache_available(void)
return (val == 0) ? false : true;
}
static void dcache_dc_ctrl(uint32_t dcache_en_mask)
static void dcache_dc_ctrl(u32_t dcache_en_mask)
{
if (dcache_available()) {
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, dcache_en_mask);
}
}
void arch_dcache_enable(void)
static void dcache_enable(void)
{
dcache_dc_ctrl(DC_CTRL_DC_ENABLE);
}
static void arch_dcache_flush(void *start_addr_ptr, size_t size)
/**
*
* @brief Flush multiple d-cache lines to memory
*
* No alignment is required for either <start_addr> or <size>, but since
* dcache_flush_mlines() iterates on the d-cache lines, a cache line
* alignment for both is optimal.
*
* The d-cache line size is specified either via the CONFIG_CACHE_LINE_SIZE
* kconfig option or it is detected at runtime.
*
* @param start_addr the pointer to start the multi-line flush
* @param size the number of bytes that are to be flushed
*
* @return N/A
*/
static void dcache_flush_mlines(u32_t start_addr, u32_t size)
{
size_t line_size = sys_dcache_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
u32_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
if (!dcache_available() || (size == 0U)) {
return;
}
end_addr = start_addr + size;
end_addr = start_addr + size - 1;
start_addr &= (u32_t)(~(DCACHE_LINE_SIZE - 1));
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* --enter critical section-- */
key = irq_lock(); /* --enter critical section-- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
/* wait for flush completion */
do {
if ((z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) &
@@ -90,59 +110,42 @@ static void arch_dcache_flush(void *start_addr_ptr, size_t size)
break;
}
} while (1);
start_addr += line_size;
} while (start_addr < end_addr);
start_addr += DCACHE_LINE_SIZE;
} while (start_addr <= end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
irq_unlock(key); /* --exit critical section-- */
}
static void arch_dcache_invd(void *start_addr_ptr, size_t size)
/**
*
* @brief Flush d-cache lines to main memory
*
* No alignment is required for either <virt> or <size>, but since
* sys_cache_flush() iterates on the d-cache lines, a d-cache line alignment for
* both is optimal.
*
* The d-cache line size is specified either via the CONFIG_CACHE_LINE_SIZE
* kconfig option or it is detected at runtime.
*
* @param start_addr the pointer to start the multi-line flush
* @param size the number of bytes that are to be flushed
*
* @return N/A
*/
void sys_cache_flush(vaddr_t start_addr, size_t size)
{
size_t line_size = sys_dcache_line_size_get();
uintptr_t start_addr = (uintptr_t)start_addr_ptr;
uintptr_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U) || line_size == 0U) {
return;
}
end_addr = start_addr + size;
start_addr = ROUND_DOWN(start_addr, line_size);
key = arch_irq_lock(); /* -enter critical section- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDL, start_addr);
__builtin_arc_nop();
__builtin_arc_nop();
__builtin_arc_nop();
start_addr += line_size;
} while (start_addr < end_addr);
irq_unlock(key); /* -exit critical section- */
dcache_flush_mlines((u32_t)start_addr, (u32_t)size);
}
int arch_dcache_range(void *addr, size_t size, int op)
{
if (op == K_CACHE_INVD) {
/*
* TODO: On invalidate we can contextually flush by setting the
* DC_CTRL_INVALID_FLUSH bit
*/
arch_dcache_invd(addr, size);
} else if (op == K_CACHE_WB) {
arch_dcache_flush(addr, size);
} else {
return -ENOTSUP;
}
return 0;
}
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
size_t sys_cache_line_size;
static void init_dcache_line_size(void)
{
uint32_t val;
u32_t val;
val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
__ASSERT((val&0xff) != 0U, "d-cache is not present");
@@ -150,20 +153,15 @@ static void init_dcache_line_size(void)
val *= 16U;
sys_cache_line_size = (size_t) val;
}
size_t arch_dcache_line_size_get(void)
{
return sys_cache_line_size;
}
#endif
static int init_dcache(const struct device *unused)
static int init_dcache(struct device *unused)
{
ARG_UNUSED(unused);
arch_dcache_enable();
dcache_enable();
#if defined(CONFIG_DCACHE_LINE_SIZE_DETECT)
#if defined(CONFIG_CACHE_LINE_SIZE_DETECT)
init_dcache_line_size();
#endif


@@ -22,7 +22,7 @@ GTEXT(arch_cpu_atomic_idle)
GDATA(z_arc_cpu_sleep_mode)
SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
.align 4
.balign 4
.word 0
/*
@@ -43,7 +43,19 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
ld r1, [z_arc_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
/*
* It has been found (in nsim_hs_smp) that when a cpu is
* sleeping it does not respond to an inter-processor interrupt
* even though the interrupt is pending and interrupts are enabled.
* The loop below is a workaround.
*/
#if !defined(CONFIG_SOC_NSIM) && !defined(CONFIG_SMP)
sleep r1
#else
seti r1
_z_arc_idle_loop:
b _z_arc_idle_loop
#endif
j_s [blink]
nop


@@ -126,7 +126,7 @@ firq_nest_1:
firq_nest:
#endif
push_s r0
j _isr_demux
j @_isr_demux
@@ -149,24 +149,30 @@ SECTION_FUNC(TEXT, _firq_exit)
_check_nest_int_by_irq_act r0, r1
jne _firq_no_switch
jne _firq_no_reschedule
/* sp is struct k_thread **old of z_arc_switch_in_isr
* which is a wrapper of z_get_next_switch_handle.
* r0 contains the first thread in the ready queue; it is
* compared against _current (r2) to decide whether a swap is needed.
*/
_get_next_switch_handle
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
cmp r0, r2
bne _firq_switch
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
brne r0, 0, _firq_reschedule
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* fall to no switch */
/* Check if the current thread (in r2) is the cached thread */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
brne r0, r2, _firq_reschedule
#endif
/* fall to no rescheduling */
.align 4
_firq_no_switch:
/* restore interrupted context' sp */
.balign 4
_firq_no_reschedule:
pop sp
/*
* Keeping this code block close to those that use it allows using brxx
* instruction instead of a pair of cmp and bxx
@@ -176,20 +182,20 @@ _firq_no_switch:
#endif
rtie
.align 4
_firq_switch:
/* restore interrupted context' sp */
.balign 4
_firq_reschedule:
pop sp
#if CONFIG_RGF_NUM_BANKS != 1
#ifdef CONFIG_SMP
/*
* save r0, r2 in irq stack for a while, as they will be changed by register
* save r0, r1 in irq stack for a while, as they will be changed by register
* bank switch
*/
_get_curr_cpu_irq_stack r1
st r0, [r1, -4]
st r2, [r1, -8]
_get_curr_cpu_irq_stack r2
st r0, [r2, -4]
st r1, [r2, -8]
#endif
/*
* We know there is no interrupted interrupt of lower priority at this
* point, so when switching back to register bank 0, it will contain the
@@ -237,56 +243,101 @@ _firq_create_irq_stack_frame:
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET] /* ilink into pc */
#ifdef CONFIG_SMP
/*
* load r0, r2 from irq stack
* load r0, r1 from irq stack
*/
_get_curr_cpu_irq_stack r1
ld r0, [r1, -4]
ld r2, [r1, -8]
_get_curr_cpu_irq_stack r2
ld r0, [r2, -4]
ld r1, [r2, -8]
#endif
/* r2 is old thread */
_irq_store_old_thread_callee_regs
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of interrupted thread, will be
* restored when thread switched back
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
#ifdef CONFIG_SMP
mov_s r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
#endif
_save_callee_saved_regs
st _CAUSE_FIRQ, [r2, _thread_offset_to_relinquish_cause]
/* mov new thread (r0) to r2 */
#ifdef CONFIG_SMP
mov_s r2, r0
#else
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
mov r2, r0
_load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
breq r3, _CAUSE_RIRQ, _firq_switch_from_rirq
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* see comments in regular_irq.S
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _firq_return_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _firq_switch_from_firq
breq r3, _CAUSE_FIRQ, _firq_return_from_firq
nop_s
/* fall through */
.align 4
_firq_switch_from_coop:
_set_misc_regs_irq_switch_from_coop
.balign 4
_firq_return_from_coop:
/* pc into ilink */
pop_s r0
mov ilink, r0
mov_s ilink, r0
pop_s r0 /* status32 into r0 */
sr r0, [_ARC_V2_STATUS32_P0]
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#endif
rtie
.align 4
_firq_switch_from_rirq:
_firq_switch_from_firq:
.balign 4
_firq_return_from_rirq:
_firq_return_from_firq:
_set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
_pop_irq_stack_frame
@@ -294,12 +345,5 @@ _firq_switch_from_firq:
sr ilink, [_ARC_V2_STATUS32_P0]
ld ilink, [sp, -8] /* pc into ilink */
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#endif
/* LP registers are already restored, just switch back to bank 0 */
rtie


@@ -16,45 +16,21 @@
#include <offsets_short.h>
#include <arch/cpu.h>
#include <logging/log.h>
#include <kernel_arch_data.h>
#include <arch/arc/v2/exc.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
static void dump_arc_esf(const z_arch_esf_t *esf)
{
LOG_ERR(" r0: 0x%08x r1: 0x%08x r2: 0x%08x r3: 0x%08x",
esf->r0, esf->r1, esf->r2, esf->r3);
LOG_ERR(" r4: 0x%08x r5: 0x%08x r6: 0x%08x r7: 0x%08x",
esf->r4, esf->r5, esf->r6, esf->r7);
LOG_ERR(" r8: 0x%08x r9: 0x%08x r10: 0x%08x r11: 0x%08x",
esf->r8, esf->r9, esf->r10, esf->r11);
LOG_ERR("r12: 0x%08x r13: 0x%08x pc: 0x%08x",
esf->r12, esf->r13, esf->pc);
LOG_ERR(" blink: 0x%08x status32: 0x%08x", esf->blink, esf->status32);
LOG_ERR("lp_end: 0x%08x lp_start: 0x%08x lp_count: 0x%08x",
esf->lp_end, esf->lp_start, esf->lp_count);
}
#endif
LOG_MODULE_DECLARE(os);
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
if (esf != NULL) {
dump_arc_esf(esf);
if (reason == K_ERR_CPU_EXCEPTION) {
LOG_ERR("Faulting instruction address = 0x%lx",
z_arc_v2_aux_reg_read(_ARC_V2_ERET));
}
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
z_fatal_error(reason, esf);
}
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
{
/* TODO: convert ssf_ptr contents into an esf, they are not the same */
ARG_UNUSED(ssf_ptr);
z_arc_fatal_error(K_ERR_KERNEL_OOPS, NULL);
z_arc_fatal_error(K_ERR_KERNEL_OOPS, ssf_ptr);
CODE_UNREACHABLE;
}


@@ -20,7 +20,7 @@
#include <kernel_structs.h>
#include <exc_handle.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
LOG_MODULE_DECLARE(os);
#ifdef CONFIG_USERSPACE
Z_EXC_DECLARE(z_arc_user_string_nlen);
@@ -31,74 +31,95 @@ static const struct z_exc_handle exceptions[] = {
#endif
#if defined(CONFIG_MPU_STACK_GUARD)
#define IS_MPU_GUARD_VIOLATION(guard_start, fault_addr, stack_ptr) \
((fault_addr >= guard_start) && \
(fault_addr < (guard_start + STACK_GUARD_SIZE)) && \
(stack_ptr <= (guard_start + STACK_GUARD_SIZE)))
/**
* @brief Assess occurrence of current thread's stack corruption
*
* This function performs an assessment whether a memory fault (on a given
* memory address) is the result of a stack overflow of the current thread.
* This function performs an assessment whether a memory fault (on a
* given memory address) is the result of stack memory corruption of
* the current thread.
*
* When called, we know at this point that we received an ARC
* protection violation, with any cause code, with the protection access
* error either "MPU" or "Secure MPU". In other words, an MPU fault of
* some kind. Need to determine whether this is a general MPU access
* exception or the specific case of a stack overflow.
* Thread stack corruption for supervisor threads or user threads in
* privilege mode (when User Space is supported) is reported upon an
* attempt to access the stack guard area (if MPU Stack Guard feature
* is supported). Additionally the current thread stack pointer
* must be pointing inside or below the guard area.
*
* Thread stack corruption for user threads in user mode is reported,
* if the current stack pointer is pointing below the start of the current
* thread's stack.
*
* Notes:
* - we assume a fully descending stack,
* - we assume a stacking error has occurred,
* - the function shall be called when handling MPU privilege violation
*
* If stack corruption is detected, the function returns the lowest
* allowed address where the Stack Pointer can safely point to, to
* prevent from errors when un-stacking the corrupted stack frame
* upon exception return.
*
* @param fault_addr memory address on which memory access violation
* has been reported.
* @param sp stack pointer when exception comes out
* @retval True if this appears to be a stack overflow
* @retval False if this does not appear to be a stack overflow
*
* @return The lowest allowed stack frame pointer, if error is a
* thread stack corruption, otherwise return 0.
*/
static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
static u32_t z_check_thread_stack_fail(const u32_t fault_addr, u32_t sp)
{
const struct k_thread *thread = _current;
uint32_t guard_end, guard_start;
if (!thread) {
/* TODO: Under what circumstances could we get here ? */
return false;
return 0;
}
#ifdef CONFIG_USERSPACE
if ((thread->base.user_options & K_USER) != 0) {
if ((z_arc_v2_aux_reg_read(_ARC_V2_ERSTATUS) &
_ARC_V2_STATUS32_U) != 0) {
/* Normal user mode context. There is no specific
* "guard" installed in this case, instead what's
* happening is that the stack pointer is crashing
* into the privilege mode stack buffer which
* immediately precedes it.
*/
guard_end = thread->stack_info.start;
guard_start = (uint32_t)thread->stack_obj;
#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
/* User thread */
if (z_arc_v2_aux_reg_read(_ARC_V2_ERSTATUS)
& _ARC_V2_STATUS32_U) {
/* Thread's user stack corruption */
#ifdef CONFIG_ARC_HAS_SECURE
sp = z_arc_v2_aux_reg_read(_ARC_V2_SEC_U_SP);
#else
sp = z_arc_v2_aux_reg_read(_ARC_V2_USER_SP);
#endif
if (sp <= (u32_t)thread->stack_obj) {
return (u32_t)thread->stack_obj;
}
} else {
/* Special case: handling a syscall on privilege stack.
* There is guard memory reserved immediately before
* it.
*/
guard_end = thread->arch.priv_stack_start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
/* User thread in privilege mode */
if (IS_MPU_GUARD_VIOLATION(
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
fault_addr, sp)) {
/* Thread's privilege stack corruption */
return thread->arch.priv_stack_start;
}
}
} else
#endif /* CONFIG_USERSPACE */
{
} else {
/* Supervisor thread */
guard_end = thread->stack_info.start;
guard_start = guard_end - Z_ARC_STACK_GUARD_SIZE;
if (IS_MPU_GUARD_VIOLATION((u32_t)thread->stack_obj,
fault_addr, sp)) {
/* Supervisor thread stack corruption */
return (u32_t)thread->stack_obj + STACK_GUARD_SIZE;
}
}
/* Treat any MPU exception within the guard region as a stack
* overflow, since some instructions
* (like enter_s {r13-r26, fp, blink}) push a collection of
* registers onto the stack. In this situation, fault_addr
* will be less than guard_end, but sp will be greater than guard_end.
*/
if (fault_addr < guard_end && fault_addr >= guard_start) {
return true;
#else /* CONFIG_USERSPACE */
if (IS_MPU_GUARD_VIOLATION(thread->stack_info.start,
fault_addr, sp)) {
/* Thread stack corruption */
return thread->stack_info.start + STACK_GUARD_SIZE;
}
#endif /* CONFIG_USERSPACE */
return false;
return 0;
}
#endif
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
@@ -108,7 +129,7 @@ static bool z_check_thread_stack_fail(const uint32_t fault_addr, uint32_t sp)
* These codes and parameters do not have associated names in
* the technical manual, just switch on the values in Table 6-5
*/
static const char *get_protv_access_err(uint32_t parameter)
static const char *get_protv_access_err(u32_t parameter)
{
switch (parameter) {
case 0x1:
@@ -130,7 +151,7 @@ static const char *get_protv_access_err(uint32_t parameter)
}
}
static void dump_protv_exception(uint32_t cause, uint32_t parameter)
static void dump_protv_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
@@ -164,7 +185,7 @@ static void dump_protv_exception(uint32_t cause, uint32_t parameter)
}
}
static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
static void dump_machine_check_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
@@ -212,7 +233,7 @@ static void dump_machine_check_exception(uint32_t cause, uint32_t parameter)
}
}
static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
static void dump_privilege_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
@@ -268,7 +289,7 @@ static void dump_privilege_exception(uint32_t cause, uint32_t parameter)
}
}
static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parameter)
static void dump_exception_info(u32_t vector, u32_t cause, u32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
LOG_ERR("interrupt %u", vector);
@@ -342,19 +363,19 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
void _Fault(z_arch_esf_t *esf, u32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
uint32_t ecr = z_arc_v2_aux_reg_read(_ARC_V2_ECR);
u32_t vector, cause, parameter;
u32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
u32_t ecr = z_arc_v2_aux_reg_read(_ARC_V2_ECR);
#ifdef CONFIG_USERSPACE
for (int i = 0; i < ARRAY_SIZE(exceptions); i++) {
uint32_t start = (uint32_t)exceptions[i].start;
uint32_t end = (uint32_t)exceptions[i].end;
u32_t start = (u32_t)exceptions[i].start;
u32_t end = (u32_t)exceptions[i].end;
if (esf->pc >= start && esf->pc < end) {
esf->pc = (uint32_t)(exceptions[i].fixup);
esf->pc = (u32_t)(exceptions[i].fixup);
return;
}
}


@@ -19,6 +19,7 @@
#include <syscall.h>
GTEXT(_Fault)
GTEXT(z_do_kernel_oops)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -37,17 +38,6 @@ GTEXT(__ev_maligned)
GTEXT(z_irq_do_offload);
#endif
.macro _save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
.endm
/*
* The exception handling will use top part of interrupt stack to
@@ -84,7 +74,7 @@ _exc_entry:
* and exception is raised, then here it's guaranteed that
* exception handling has necessary stack to use
*/
mov ilink, sp
mov_s ilink, sp
_get_curr_cpu_irq_stack sp
sub sp, sp, (CONFIG_ISR_STACK_SIZE - CONFIG_ARC_EXCEPTION_STACK_SIZE)
@@ -99,12 +89,20 @@ _exc_entry:
*/
_create_irq_stack_frame
_save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* sp is parameter of _Fault */
mov_s r0, sp
/* ilink is the thread's original sp */
mov r1, ilink
mov_s r1, ilink
jl _Fault
_exc_return:
@@ -116,11 +114,21 @@ _exc_return:
* exception comes out: thread context, irq context, or nested irq context?
*/
_get_next_switch_handle
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
breq r0, 0, _exc_return_from_exc
mov_s r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_exc
mov_s r2, r0
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/*
@@ -143,7 +151,7 @@ _exc_return:
/* save r2 in ilink because of the possible following reg
* bank switch
*/
mov ilink, r2
mov_s ilink, r2
#endif
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~(_ARC_V2_STATUS32_AE | _ARC_V2_STATUS32_RB(7)))
@@ -176,21 +184,18 @@ _exc_return:
mov r2, ilink
#endif
/* Assumption: r2 has next thread */
b _rirq_newthread_switch
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
_exc_return_from_exc:
/* exception handler may change return address.
* reload it
*/
ld_s r0, [sp, ___isf_t_pc_OFFSET]
sr r0, [_ARC_V2_ERET]
_pop_irq_stack_frame
mov sp, ilink
mov_s sp, ilink
rtie
/* separate entry for trap which may be used by irq_offload, USERSPACE */
SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
/* get the id of trap_s */
lr ilink, [_ARC_V2_ECR]
@@ -199,7 +204,7 @@ SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
cmp ilink, _TRAP_S_CALL_SYSTEM_CALL
bne _do_non_syscall_trap
/* do sys_call */
mov ilink, K_SYSCALL_LIMIT
mov_s ilink, K_SYSCALL_LIMIT
cmp r6, ilink
blo valid_syscall_id
@@ -213,12 +218,15 @@ valid_syscall_id:
*/
_create_irq_stack_frame
_save_exc_regs_into_stack
/* exc return and do sys call in kernel mode,
* so need to clear U bit, r0 is already loaded
* with ERSTATUS in _save_exc_regs_into_stack
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0, [_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
bclr r0, r0, _ARC_V2_STATUS32_U_BIT
sr r0, [_ARC_V2_ERSTATUS]
@@ -241,7 +249,15 @@ _do_non_syscall_trap:
/* save caller saved registers */
_create_irq_stack_frame
_save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
@@ -260,8 +276,6 @@ exc_nest_handle:
_dec_int_nest_counter r0, r1
_pop_irq_stack_frame
/* ERSTATUS, ERET are not changed, so ok to rtie */
rtie
#endif /* CONFIG_IRQ_OFFLOAD */
b _exc_entry


@@ -32,10 +32,10 @@
*/
#if defined(CONFIG_ARC_FIRQ_STACK)
#if defined(CONFIG_SMP)
K_KERNEL_STACK_ARRAY_DEFINE(_firq_interrupt_stack, CONFIG_MP_NUM_CPUS,
K_THREAD_STACK_ARRAY_DEFINE(_firq_interrupt_stack, CONFIG_MP_NUM_CPUS,
CONFIG_ARC_FIRQ_STACK_SIZE);
#else
K_KERNEL_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
K_THREAD_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
#endif
/*
@@ -46,40 +46,40 @@ K_KERNEL_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = Z_KERNEL_STACK_BUFFER(
char *firq_sp = Z_THREAD_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = Z_KERNEL_STACK_BUFFER(_firq_interrupt_stack) +
char *firq_sp = Z_THREAD_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif
/* z_arc_firq_stack_set must be called with interrupts disabled, as
* it can be called not only in the init phase but also in other places
*/
unsigned int key = arch_irq_lock();
unsigned int key = irq_lock();
__asm__ volatile (
/* only ilink will not be banked, so use ilink as channel
* between 2 banks
*/
"mov %%ilink, %0\n\t"
"lr %0, [%1]\n\t"
"or %0, %0, %2\n\t"
"kflag %0\n\t"
"mov %%sp, %%ilink\n\t"
"mov ilink, %0 \n\t"
"lr %0, [%1] \n\t"
"or %0, %0, %2 \n\t"
"kflag %0 \n\t"
"mov sp, ilink \n\t"
/* switch back to bank0, use ilink to avoid the pollution of
* bank1's gp regs.
*/
"lr %%ilink, [%1]\n\t"
"and %%ilink, %%ilink, %3\n\t"
"kflag %%ilink\n\t"
"lr ilink, [%1] \n\t"
"and ilink, ilink, %3 \n\t"
"kflag ilink \n\t"
:
: "r"(firq_sp), "i"(_ARC_V2_STATUS32),
"i"(_ARC_V2_STATUS32_RB(1)),
"i"(~_ARC_V2_STATUS32_RB(7))
);
arch_irq_unlock(key);
irq_unlock(key);
}
#endif
@@ -95,7 +95,10 @@ void z_arc_firq_stack_set(void)
void arch_irq_enable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_enable(irq);
irq_unlock(key);
}
/*
@@ -109,7 +112,10 @@ void arch_irq_enable(unsigned int irq)
void arch_irq_disable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_disable(irq);
irq_unlock(key);
}
/**
@@ -137,10 +143,12 @@ int arch_irq_is_enabled(unsigned int irq)
* @return N/A
*/
void z_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
ARG_UNUSED(flags);
unsigned int key = irq_lock();
__ASSERT(prio < CONFIG_NUM_IRQ_PRIO_LEVELS,
"invalid priority %d for irq %d", prio, irq);
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocated to secure world
@@ -154,6 +162,7 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
ARC_N_IRQ_START_LEVEL : prio;
#endif
z_arc_v2_irq_unit_prio_set(irq, prio);
irq_unlock(key);
}
/*
@@ -165,7 +174,7 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
* @return N/A
*/
void z_irq_spurious(const void *unused)
void z_irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
@@ -173,8 +182,8 @@ void z_irq_spurious(const void *unused)
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(const void *parameter),
const void *parameter, uint32_t flags)
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_irq_priority_set(irq, priority, flags);


@@ -12,7 +12,7 @@
#include <irq_offload.h>
static irq_offload_routine_t offload_routine;
static const void *offload_param;
static void *offload_param;
/* Called by trap_s exception handler */
void z_irq_do_offload(void)
@@ -20,7 +20,7 @@ void z_irq_do_offload(void)
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, const void *parameter)
void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
{
offload_routine = routine;
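For context, irq_offload() is the public wrapper around this trap_s-based mechanism and runs a routine synchronously in interrupt context; here is a hedged sketch (the handler and flag names are invented for illustration):

#include <zephyr.h>
#include <irq_offload.h>
#include <stdint.h>

static volatile int isr_ran;

/* Executed in interrupt context via the offload trap. */
static void offload_handler(void *param)
{
	isr_ran = (int)(uintptr_t)param;
}

void run_in_isr_context(void)
{
	/* Commonly used in tests that must exercise interrupt-context
	 * code paths without wiring up a hardware interrupt source.
	 */
	irq_offload(offload_handler, (void *)1);
}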


@@ -24,8 +24,8 @@
GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
#if defined(CONFIG_PM)
GTEXT(z_pm_save_idle_exit)
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
GTEXT(z_sys_power_save_idle_exit)
#endif
/*
@@ -35,35 +35,37 @@ _rirq_enter/_firq_enter: they are jump points.
The flow is the following:
ISR -> _isr_wrapper -- + -> _rirq_enter -> _isr_demux -> ISR -> _rirq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
Context switch explanation:
The context switch code is spread in these files:
isr_wrapper.s, switch.s, swap_macros.h, fast_irq.s, regular_irq.s
isr_wrapper.s, switch.s, swap_macros.s, fast_irq.s, regular_irq.s
IRQ stack frame layout:
high address
high address
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
low address
low address
The context switch code adopts this standard so that it is easier to follow:
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
- r1 contains _kernel ASAP and is not overwritten over the lifespan of
the functions.
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
Not loading _kernel into r0 allows loading _kernel without stomping on
the parameter in r0 in arch_switch().
@@ -98,49 +100,47 @@ done upfront, and the rest is done when needed:
o RIRQ
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread's stack.
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread control structure (TCS).
o FIRQ
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is
the case, the FIRQ does not take a scheduling decision and leaves it
the RIRQ to handle. This limits the amount of code that has to run at
interrupt-level.
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is the case,
the FIRQ does not take a scheduling decision and leaves it the RIRQ to
handle. This limits the amount of code that has to run at interrupt-level.
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt
and let the RIRQ do the context switch. At entry, one register is
needed in order to have code to save other registers. r0 is saved
first in the stack and restored later
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt and
let the RIRQ do the context switch. At entry, one register is needed in order
to have code to save other registers. r0 is saved first in a global called
saved_r0.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own
stack. GPRs are banked, loop registers are saved in unused callee saved
regs upon interrupt entry. If returning to a thread, loop registers are
restored and the CPU switches back to bank 0 for the GPRs. If a context
switch is needed, at this point only are all the registers saved.
First, a stack frame with the same layout as the automatic RIRQ one is
created and then the callee-saved GPRs are saved in the stack.
status32_p0 and ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to
using bank0 of registers, since that is where the registers containing
the exiting thread are saved. Care must be taken not to touch any
register before saving them: the only one usable at that point is the
stack pointer.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own stack.
GPRs are banked, loop registers are saved in unused callee saved regs upon
interrupt entry. If returning to a thread, loop registers are restored and the
CPU switches back to bank 0 for the GPRs. If a context switch is
needed, at this point only are all the registers saved. First, a
stack frame with the same layout as the automatic RIRQ one is created
and then the callee-saved GPRs are saved in the TCS. status32_p0 and
ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to using
bank0 of registers, since that is where the registers containing the exiting
thread are saved. Care must be taken not to touch any register before saving
them: the only one usable at that point is the stack pointer.
o coop
When a coop context switch is done, the callee-saved registers are
saved in the stack. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
When a coop context switch is done, the callee-saved registers are
saved in the TCS. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
For restoring the contexts, there are six cases. In all cases, the
callee-saved registers of the incoming thread have to be restored. Then, there
@@ -148,85 +148,86 @@ are specifics for each case:
From coop:
o to coop
o to coop
Do a normal function call return.
Do a normal function call return.
o to any irq
o to any irq
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be
restored, then we jump to the interrupted instruction.
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be restored,
then we jump to the interrupted instruction.
From FIRQ:
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from
bank0, and now has to load the incoming one into bank0.
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from bank0,
and now has to load the incoming one
into bank0.
o to coop
o to coop
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
o to any irq
o to any irq
The IRQ has saved the caller-saved registers in a stack frame, which
must be popped, and status32 and pc loaded in status32_p0 and ilink.
The IRQ has saved the caller-saved registers in a stack frame, which must be
popped, and status32 and pc loaded in status32_p0 and ilink.
From RIRQ:
o to coop
o to coop
The interrupt return mechanism in the processor expects a stack frame,
but the outgoing context did not create one. A fake one is created
here, with only the relevant values filled in: pc, status32.
The interrupt return mechanism in the processor expects a stack frame, but
the outgoing context did not create one. A fake one is created here, with
only the relevant values filled in: pc, status32.
There is a discrepancy between the ABI from the ARCv2 docs,
including the way the processor pushes GPRs in pairs in the IRQ stack
frame, and the ABI GCC uses. r13 should be a callee-saved register,
but GCC treats it as caller-saved. This means that the processor pushes
it in the stack frame along with r12, but the compiler does not save it
before entering a function. So, it is saved as part of the callee-saved
registers, and restored there, but the processor restores it _a second
time_ when popping the IRQ stack frame. Thus, the correct value must
also be put in the fake stack frame when returning to a thread that
context switched out cooperatively.
There is a discrepancy between the ABI from the ARCv2 docs, including the
way the processor pushes GPRs in pairs in the IRQ stack frame, and the ABI
GCC uses. r13 should be a callee-saved register, but GCC treats it as
caller-saved. This means that the processor pushes it in the stack frame
along with r12, but the compiler does not save it before entering a
function. So, it is saved as part of the callee-saved registers, and
restored there, but the processor restores it _a second time_ when popping
the IRQ stack frame. Thus, the correct value must also be put in the fake
stack frame when returning to a thread that context switched out
cooperatively.
o to any irq
o to any irq
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
*/
SECTION_FUNC(TEXT, _isr_wrapper)
#ifdef CONFIG_ARC_FIRQ
#if defined(CONFIG_ARC_FIRQ)
#if CONFIG_RGF_NUM_BANKS == 1
/* free r0 here, use r0 to check whether irq is firq.
* for rirq, as sp will not change and r0 already saved, this action
* in fact is useless
* in fact is an action like nop.
* for firq, r0 will be restored later
*/
push r0
st_s r0, [sp]
#endif
lr r0, [_ARC_V2_AUX_IRQ_ACT]
ffs r0, r0
cmp r0, 0
#if CONFIG_RGF_NUM_BANKS == 1
bnz rirq_path
pop r0
ld_s r0, [sp]
/* 1-register bank FIRQ handling must save registers on stack */
_create_irq_stack_frame
lr r0, [_ARC_V2_STATUS32_P0]
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET]
lr r0, [_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
mov_s r3, _firq_exit
mov_s r2, _firq_enter
j_s [r2]
rirq_path:
add sp, sp, 4
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
j_s [r2]
@@ -243,23 +244,32 @@ rirq_path:
j_s [r2]
#endif
/* r0, r1, and r3 will be used in exit_tickless_idle macro */
#if defined(CONFIG_TRACING)
GTEXT(sys_trace_isr_enter)
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
.macro exit_tickless_idle
#if defined(CONFIG_PM)
clri r0 /* do not interrupt exiting tickless idle operations */
push_s r1
push_s r0
mov_s r1, _kernel
ld_s r3, [r1, _kernel_offset_to_idle] /* requested idle duration */
breq r3, 0, _skip_pm_save_idle_exit
ld_s r0, [r1, _kernel_offset_to_idle] /* requested idle duration */
breq r0, 0, _skip_sys_power_save_idle_exit
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
push_s blink
jl z_pm_save_idle_exit
jl z_sys_power_save_idle_exit
pop_s blink
_skip_pm_save_idle_exit:
_skip_sys_power_save_idle_exit:
pop_s r0
pop_s r1
seti r0
#endif
.endm
#else
#define exit_tickless_idle
#endif
/* when getting here, r3 contains the interrupt exit stub to call */
SECTION_FUNC(TEXT, _isr_demux)
@@ -275,10 +285,11 @@ SECTION_FUNC(TEXT, _isr_demux)
push r59
#endif
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_enter
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
#endif
/* cannot be done before this point because we must be able to run C */
/* r0 is available to be stomped here, and exit_tickless_idle uses it */
exit_tickless_idle
lr r0, [_ARC_V2_ICAUSE]
@@ -294,13 +305,16 @@ irq_hint_handled:
add3 r0, r1, r0 /* table entries are 8-bytes wide */
ld_s r1, [r0, 4] /* ISR into r1 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s r1
bl read_timer_end_of_isr
pop_s r1
pop_s r0
#endif
jl_s.d [r1]
ld_s r0, [r0] /* delay slot: ISR parameter into r0 */
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_exit
#endif
#ifdef CONFIG_ARC_HAS_ACCL_REGS
pop r59
pop r58
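As a reading aid for the demux sequence shown above (ICAUSE lookup, 8-byte table entries, delay-slot argument load), here is a rough C model of what the dispatch does. The layout mirrors Zephyr's software ISR table (argument pointer followed by handler), but the declarations below are an illustrative assumption, not part of this diff.

struct isr_entry_model {
	void *arg;                 /* loaded into r0 in the delay slot */
	void (*isr)(void *arg);    /* called via "jl_s.d [r1]"         */
};

extern struct isr_entry_model sw_isr_table_model[];

static void isr_demux_model(unsigned int icause)
{
	struct isr_entry_model *entry = &sw_isr_table_model[icause];

	entry->isr(entry->arg);
}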


@@ -2,5 +2,5 @@
zephyr_library()
zephyr_library_sources_ifdef(CONFIG_ARC_CORE_MPU arc_core_mpu.c)
zephyr_library_sources_ifdef(CONFIG_ARC_MPU arc_mpu.c)
zephyr_library_sources_if_kconfig(arc_core_mpu.c)
zephyr_library_sources_if_kconfig(arc_mpu.c)


@@ -18,21 +18,16 @@ config ARC_CORE_MPU
config MPU_STACK_GUARD
bool "Thread Stack Guards"
depends on ARC_CORE_MPU && ARC_MPU_VER !=2
depends on ARC_CORE_MPU
help
Enable thread stack guards via MPU. ARC supports built-in stack protection.
If your core supports that, it is preferred over MPU stack guard.
For ARC_MPU_VER == 2, it requires 2048 extra bytes and strict start address
alignment, which wastes a large amount of memory, so it is not supported.
If your core supports that, it is preferred over MPU stack guard
config ARC_MPU
bool "ARC MPU Support"
select MPU
select SRAM_REGION_PERMISSIONS
select ARC_CORE_MPU
select THREAD_STACK_INFO
select GEN_PRIV_STACKS if ARC_MPU_VER = 2
select MEMORY_PROTECTION
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if ARC_MPU_VER = 2
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if ARC_MPU_VER = 3
help
Target has ARC MPU (currently only works for EMSK 2.2/2.3 ARCEM7D)


@@ -32,6 +32,64 @@ int arch_mem_domain_max_partitions_get(void)
return arc_core_mpu_get_max_domain_partition_regions();
}
/*
* Reset MPU region for a single memory partition
*/
void arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_remove_mem_partition(domain, partition_id);
arc_core_mpu_enable();
}
/*
* Configure MPU memory domain
*/
void arch_mem_domain_thread_add(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_configure_mem_domain(thread);
arc_core_mpu_enable();
}
/*
* Destroy MPU regions for the mem domain
*/
void arch_mem_domain_destroy(struct k_mem_domain *domain)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_remove_mem_domain(domain);
arc_core_mpu_enable();
}
void arch_mem_domain_partition_add(struct k_mem_domain *domain,
u32_t partition_id)
{
/* No-op on this architecture */
}
void arch_mem_domain_thread_remove(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arch_mem_domain_destroy(thread->mem_domain_info.mem_domain);
}
/*
* Validate the given buffer is user accessible or not
*/
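The hooks in this file are driven by the kernel memory-domain API. A minimal sketch of how an application ends up exercising them is given below; the partition and domain calls follow the Zephyr 2.x API from memory and should be treated as assumptions (check kernel.h for the exact signatures).

#include <kernel.h>

static u8_t app_buf[1024] __aligned(1024);

K_MEM_PARTITION_DEFINE(app_part, app_buf, sizeof(app_buf),
		       K_MEM_PARTITION_P_RW_U_RW);

static struct k_mem_partition *app_parts[] = { &app_part };
static struct k_mem_domain app_domain;

void app_domain_setup(k_tid_t tid)
{
	/* building the domain and attaching it to the current thread ends up
	 * in arch_mem_domain_thread_add(), which programs the MPU above
	 */
	k_mem_domain_init(&app_domain, ARRAY_SIZE(app_parts), app_parts);
	k_mem_domain_add_thread(&app_domain, tid);
}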


@@ -21,20 +21,20 @@ LOG_MODULE_REGISTER(mpu);
* @brief Get the number of supported MPU regions
*
*/
static inline uint8_t get_num_regions(void)
static inline u8_t get_num_regions(void)
{
uint32_t num = z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
u32_t num = z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
num = (num & 0xFF00U) >> 8U;
return (uint8_t)num;
return (u8_t)num;
}
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct parameter set.
*/
static inline uint32_t get_region_attr_by_type(uint32_t type)
static inline u32_t get_region_attr_by_type(u32_t type)
{
switch (type) {
case THREAD_STACK_USER_REGION:


@@ -16,28 +16,32 @@
#define AUX_MPU_RDP_ATTR_MASK (0x1FC)
#define AUX_MPU_RDP_SIZE_MASK (0xE03)
#define _ARC_V2_MPU_EN (0x409)
#define _ARC_V2_MPU_RDB0 (0x422)
#define _ARC_V2_MPU_RDP0 (0x423)
/* For MPU version 2, the minimum protection region size is 2048 bytes */
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 11
/**
* This internal function initializes a MPU region
*/
static inline void _region_init(uint32_t index, uint32_t region_addr, uint32_t size,
uint32_t region_attr)
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
u8_t bits = find_msb_set(size) - 1;
index = index * 2U;
if (bits < ARC_FEATURE_MPU_ALIGNMENT_BITS) {
bits = ARC_FEATURE_MPU_ALIGNMENT_BITS;
}
if ((1 << bits) < size) {
bits++;
}
if (size > 0) {
uint8_t bits = find_msb_set(size) - 1;
if (bits < ARC_FEATURE_MPU_ALIGNMENT_BITS) {
bits = ARC_FEATURE_MPU_ALIGNMENT_BITS;
}
if ((1 << bits) < size) {
bits++;
}
region_attr &= ~(AUX_MPU_RDP_SIZE_MASK);
region_attr |= AUX_MPU_RDP_REGION_SIZE(bits);
region_addr |= AUX_MPU_RDB_VALID_MASK;
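To make the size encoding above concrete, here is a small standalone sketch of the same computation, using a plain GCC builtin instead of find_msb_set() and the 2048-byte MPUv2 minimum noted in the comment. Worked values: 2048 -> 11, 3000 -> 12, 4096 -> 12.

/* illustrative only; assumes size > 0 */
static unsigned int mpuv2_size_bits(unsigned int size)
{
	unsigned int bits = 31U - (unsigned int)__builtin_clz(size);

	if (bits < 11U) {           /* ARC_FEATURE_MPU_ALIGNMENT_BITS */
		bits = 11U;         /* clamp to the 2048-byte minimum */
	}
	if ((1U << bits) < size) {  /* round up to the next power of two */
		bits++;
	}
	return bits;
}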
@@ -53,7 +57,7 @@ static inline void _region_init(uint32_t index, uint32_t region_addr, uint32_t s
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct region index.
*/
static inline int get_region_index_by_type(uint32_t type)
static inline int get_region_index_by_type(u32_t type)
{
/*
* The new MPU regions are allocated per type after the statically
@@ -71,12 +75,18 @@ static inline int get_region_index_by_type(uint32_t type)
- THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return get_num_regions() - mpu_config.num_regions - type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return get_num_regions() - mpu_config.num_regions - type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not supported.
* since stack guard is not enabled.
*/
return get_num_regions() - mpu_config.num_regions - type + 1;
#endif
default:
__ASSERT(0, "Unsupported type");
return -EINVAL;
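A small sketch of the index calculation above with assumed numbers (a 16-region MPU and 4 statically configured regions; the type values are symbolic): the dynamic, per-type regions are counted down from get_num_regions() - mpu_config.num_regions.

/* illustrative only: e.g. num_regions = 16, num_static = 4 gives
 * type 1 -> index 11, type 2 -> index 10, type 3 -> index 9, ...
 */
static inline int region_index_example(unsigned int type,
				       unsigned int num_regions,
				       unsigned int num_static)
{
	return (int)(num_regions - num_static - type);
}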
@@ -86,7 +96,7 @@ static inline int get_region_index_by_type(uint32_t type)
/**
* This internal function checks if region is enabled or not
*/
static inline bool _is_enabled_region(uint32_t r_index)
static inline bool _is_enabled_region(u32_t r_index)
{
return ((z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + r_index * 2U)
& AUX_MPU_RDB_VALID_MASK) == AUX_MPU_RDB_VALID_MASK);
@@ -95,11 +105,11 @@ static inline bool _is_enabled_region(uint32_t r_index)
/**
* This internal function checks if the given buffer is in the region
*/
static inline bool _is_in_region(uint32_t r_index, uint32_t start, uint32_t size)
static inline bool _is_in_region(u32_t r_index, u32_t start, u32_t size)
{
uint32_t r_addr_start;
uint32_t r_addr_end;
uint32_t r_size_lshift;
u32_t r_addr_start;
u32_t r_addr_end;
u32_t r_size_lshift;
r_addr_start = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + r_index * 2U)
& (~AUX_MPU_RDB_VALID_MASK);
@@ -118,9 +128,9 @@ static inline bool _is_in_region(uint32_t r_index, uint32_t start, uint32_t size
/**
* This internal function checks if the region is user accessible or not
*/
static inline bool _is_user_accessible_region(uint32_t r_index, int write)
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
uint32_t r_ap;
u32_t r_ap;
r_ap = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDP0 + r_index * 2U);
@@ -142,10 +152,10 @@ static inline bool _is_user_accessible_region(uint32_t r_index, int write)
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(uint8_t type, uint32_t base, uint32_t size)
static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
{
int32_t region_index = get_region_index_by_type(type);
uint32_t region_attr = get_region_attr_by_type(type);
s32_t region_index = get_region_index_by_type(type);
u32_t region_attr = get_region_attr_by_type(type);
LOG_DBG("Region info: 0x%x 0x%x", base, size);
@@ -191,13 +201,52 @@ void arc_core_mpu_disable(void)
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_MPU_STACK_GUARD)
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0) {
/* the areas before and after the user stack of the thread are
* kernel only. These areas can be used as stack guards.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
} else {
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
}
#else
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
#endif
#endif
#if defined(CONFIG_USERSPACE)
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
if (_mpu_configure(THREAD_STACK_USER_REGION,
(uint32_t)thread->stack_info.start,
thread->stack_info.size) < 0) {
(u32_t)thread->stack_obj, thread->stack_info.size) < 0) {
LOG_ERR("user thread %p's stack failed", thread);
return;
}
@@ -214,9 +263,9 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(uint32_t region_attr)
void arc_core_mpu_default(u32_t region_attr)
{
uint32_t val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) &
u32_t val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) &
(~AUX_MPU_RDP_ATTR_MASK);
region_attr &= AUX_MPU_RDP_ATTR_MASK;
@@ -231,8 +280,8 @@ void arc_core_mpu_default(uint32_t region_attr)
* @param base base address
* @param region_attr region attribute
*/
int arc_core_mpu_region(uint32_t index, uint32_t base, uint32_t size,
uint32_t region_attr)
int arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
u32_t region_attr)
{
if (index >= get_num_regions()) {
return -EINVAL;
@@ -256,7 +305,7 @@ void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
int region_index =
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
uint32_t num_partitions;
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = NULL;
@@ -313,7 +362,7 @@ void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
* @param partition_id memory partition id
*/
void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
uint32_t part_id)
u32_t part_id)
{
ARG_UNUSED(domain);
@@ -348,7 +397,7 @@ int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
*/
for (r_index = 0; r_index < get_num_regions(); r_index++) {
if (!_is_enabled_region(r_index) ||
!_is_in_region(r_index, (uint32_t)addr, size)) {
!_is_in_region(r_index, (u32_t)addr, size)) {
continue;
}
@@ -370,12 +419,12 @@ int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static int arc_mpu_init(const struct device *arg)
static int arc_mpu_init(struct device *arg)
{
ARG_UNUSED(arg);
uint32_t num_regions;
uint32_t i;
u32_t num_regions;
u32_t i;
num_regions = get_num_regions();


@@ -12,29 +12,22 @@
#define AUX_MPU_RPER_ATTR_MASK (0x1FF)
#define _ARC_V2_MPU_EN (0x409)
/* aux regs added in MPU version 3 */
#define _ARC_V2_MPU_INDEX (0x448) /* MPU index */
#define _ARC_V2_MPU_RSTART (0x449) /* MPU region start address */
#define _ARC_V2_MPU_REND (0x44A) /* MPU region end address */
#define _ARC_V2_MPU_RPER (0x44B) /* MPU region permission register */
#define _ARC_V2_MPU_PROBE (0x44C) /* MPU probe register */
/* For MPU version 3, the minimum protection region size is 32 bytes */
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 5
#define CALC_REGION_END_ADDR(start, size) \
(start + size - (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS))
/* ARC MPU version 3 does not support MPU region overlap in hardware,
* so if we want to allocate an MPU region dynamically (e.g. a thread stack
* or a memory domain carved out of a background region), a dynamic region
* splitting approach is needed; please see the comments in
* _dynamic_region_allocate_and_init.
* However, this approach has an impact on thread switch performance.
* As a trade-off, the default MPU region can be used as the background
* region to avoid the dynamic region splitting. This gives more privilege
* to code running in kernel mode, which can then access memory not covered
* by an explicit MPU entry. Since memory protection is mainly used to
* isolate malicious code in user mode, it makes sense to get better thread
* switch performance through the default MPU region.
* CONFIG_MPU_GAP_FILLING is used to turn this on/off.
*
*/
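An illustration of the split described above, with made-up addresses: inserting a 4 KiB user region into a large kernel background region turns one MPU entry into three, which is why MPU_REGION_NUM_FOR_THREAD reserves an extra entry for the split.

struct region_example {
	u32_t start;
	u32_t end;
	const char *what;
};

static const struct region_example before_split[] = {
	{ 0x00000000, 0x00080000, "kernel background" },
};

static const struct region_example after_split[] = {
	{ 0x00000000, 0x00010000, "kernel background (shrunk)"   },
	{ 0x00010000, 0x00011000, "user stack (inserted)"        },
	{ 0x00011000, 0x00080000, "kernel background (new tail)" },
};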
#if defined(CONFIG_MPU_GAP_FILLING)
#if defined(CONFIG_USERSPACE) && defined(CONFIG_MPU_STACK_GUARD)
/* 1 for stack guard , 1 for user thread, 1 for split */
#define MPU_REGION_NUM_FOR_THREAD 3
@@ -45,21 +38,22 @@
#define MPU_REGION_NUM_FOR_THREAD 0
#endif
#define MPU_DYNAMIC_REGION_AREAS_NUM 2
/**
* @brief internal structure holding information of
* memory areas where dynamic MPU programming is allowed.
*/
struct dynamic_region_info {
uint8_t index;
uint32_t base;
uint32_t size;
uint32_t attr;
u8_t index;
u32_t base;
u32_t size;
u32_t attr;
};
static uint8_t dynamic_regions_num;
static uint8_t dynamic_region_index;
#define MPU_DYNAMIC_REGION_AREAS_NUM 2
static u8_t static_regions_num;
static u8_t dynamic_regions_num;
static u8_t dynamic_region_index;
/**
* Global array, holding the MPU region index of
@@ -67,43 +61,40 @@ static uint8_t dynamic_region_index;
* regions may be configured.
*/
static struct dynamic_region_info dyn_reg_info[MPU_DYNAMIC_REGION_AREAS_NUM];
#endif /* CONFIG_MPU_GAP_FILLING */
static uint8_t static_regions_num;
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* \todo through secure service to access mpu */
static inline void _region_init(uint32_t index, uint32_t region_addr, uint32_t size,
uint32_t region_attr)
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
}
static inline void _region_set_attr(uint32_t index, uint32_t attr)
static inline void _region_set_attr(u32_t index, u32_t attr)
{
}
static inline uint32_t _region_get_attr(uint32_t index)
static inline u32_t _region_get_attr(u32_t index)
{
return 0;
}
static inline uint32_t _region_get_start(uint32_t index)
static inline u32_t _region_get_start(u32_t index)
{
return 0;
}
static inline void _region_set_start(uint32_t index, uint32_t start)
static inline void _region_set_start(u32_t index, u32_t start)
{
}
static inline uint32_t _region_get_end(uint32_t index)
static inline u32_t _region_get_end(u32_t index)
{
return 0;
}
static inline void _region_set_end(uint32_t index, uint32_t end)
static inline void _region_set_end(u32_t index, u32_t end)
{
}
@@ -111,7 +102,7 @@ static inline void _region_set_end(uint32_t index, uint32_t end)
* This internal function probes the given addr's MPU index. If the addr is
* not covered by the MPU, an error is returned
*/
static inline int _mpu_probe(uint32_t addr)
static inline int _mpu_probe(u32_t addr)
{
return -EINVAL;
}
@@ -119,7 +110,7 @@ static inline int _mpu_probe(uint32_t addr)
/**
* This internal function checks if MPU region is enabled or not
*/
static inline bool _is_enabled_region(uint32_t r_index)
static inline bool _is_enabled_region(u32_t r_index)
{
return false;
}
@@ -127,14 +118,14 @@ static inline bool _is_enabled_region(uint32_t r_index)
/**
* This internal function checks if the region is user accessible or not
*/
static inline bool _is_user_accessible_region(uint32_t r_index, int write)
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
return false;
}
#else /* CONFIG_ARC_NORMAL_FIRMWARE */
/* the following functions are prepared for SECURE_FIRMWARE */
static inline void _region_init(uint32_t index, uint32_t region_addr, uint32_t size,
uint32_t region_attr)
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
if (size < (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS)) {
size = (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS);
@@ -152,34 +143,34 @@ static inline void _region_init(uint32_t index, uint32_t region_addr, uint32_t s
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, region_attr);
}
static inline void _region_set_attr(uint32_t index, uint32_t attr)
static inline void _region_set_attr(u32_t index, u32_t attr)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, attr |
AUX_MPU_RPER_VALID_MASK);
}
static inline uint32_t _region_get_attr(uint32_t index)
static inline u32_t _region_get_attr(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
return z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER);
}
static inline uint32_t _region_get_start(uint32_t index)
static inline u32_t _region_get_start(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
return z_arc_v2_aux_reg_read(_ARC_V2_MPU_RSTART);
}
static inline void _region_set_start(uint32_t index, uint32_t start)
static inline void _region_set_start(u32_t index, u32_t start)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RSTART, start);
}
static inline uint32_t _region_get_end(uint32_t index)
static inline u32_t _region_get_end(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
@@ -187,7 +178,7 @@ static inline uint32_t _region_get_end(uint32_t index)
(1 << ARC_FEATURE_MPU_ALIGNMENT_BITS);
}
static inline void _region_set_end(uint32_t index, uint32_t end)
static inline void _region_set_end(u32_t index, u32_t end)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_REND, end -
@@ -198,9 +189,9 @@ static inline void _region_set_end(uint32_t index, uint32_t end)
* This internal function probes the given addr's MPU index. If the addr is
* not covered by the MPU, an error is returned
*/
static inline int _mpu_probe(uint32_t addr)
static inline int _mpu_probe(u32_t addr)
{
uint32_t val;
u32_t val;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_PROBE, addr);
val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_INDEX);
@@ -216,7 +207,7 @@ static inline int _mpu_probe(uint32_t addr)
/**
* This internal function checks if MPU region is enabled or not
*/
static inline bool _is_enabled_region(uint32_t r_index)
static inline bool _is_enabled_region(u32_t r_index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
return ((z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER) &
@@ -226,9 +217,9 @@ static inline bool _is_enabled_region(uint32_t r_index)
/**
* This internal function checks if the region is user accessible or not
*/
static inline bool _is_user_accessible_region(uint32_t r_index, int write)
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
uint32_t r_ap;
u32_t r_ap;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
r_ap = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER);
@@ -245,22 +236,6 @@ static inline bool _is_user_accessible_region(uint32_t r_index, int write)
#endif /* CONFIG_ARC_NORMAL_FIRMWARE */
/**
* This internal function checks the area given by (start, size)
* and returns the index if the area matches one MPU entry
*/
static inline int _get_region_index(uint32_t start, uint32_t size)
{
int index = _mpu_probe(start);
if (index > 0 && index == _mpu_probe(start + size - 1)) {
return index;
}
return -EINVAL;
}
#if defined(CONFIG_MPU_GAP_FILLING)
/**
* This internal function allocates a dynamic MPU region and returns
* the index or error
@@ -275,6 +250,21 @@ static inline int _dynamic_region_allocate_index(void)
return dynamic_region_index++;
}
/**
* This internal function checks the area given by (start, size)
* and returns the index if the area matches one MPU entry
*/
static inline int _get_region_index(u32_t start, u32_t size)
{
int index = _mpu_probe(start);
if (index > 0 && index == _mpu_probe(start + size - 1)) {
return index;
}
return -EINVAL;
}
/* @brief allocate and init a dynamic MPU region
*
* This internal function performs the allocation and initialization of
@@ -285,8 +275,8 @@ static inline int _dynamic_region_allocate_index(void)
* @param attr region attribute
* @return <0 failure, >0 allocated dynamic region index
*/
static int _dynamic_region_allocate_and_init(uint32_t base, uint32_t size,
uint32_t attr)
static int _dynamic_region_allocate_and_init(u32_t base, u32_t size,
u32_t attr)
{
int u_region_index = _get_region_index(base, size);
int region_index;
@@ -311,10 +301,10 @@ static int _dynamic_region_allocate_and_init(uint32_t base, uint32_t size,
* region, possibly splitting the underlying region into two.
*/
uint32_t u_region_start = _region_get_start(u_region_index);
uint32_t u_region_end = _region_get_end(u_region_index);
uint32_t u_region_attr = _region_get_attr(u_region_index);
uint32_t end = base + size;
u32_t u_region_start = _region_get_start(u_region_index);
u32_t u_region_end = _region_get_end(u_region_index);
u32_t u_region_attr = _region_get_attr(u_region_index);
u32_t end = base + size;
if ((base == u_region_start) && (end == u_region_end)) {
@@ -387,8 +377,8 @@ static int _dynamic_region_allocate_and_init(uint32_t base, uint32_t size,
*/
static void _mpu_reset_dynamic_regions(void)
{
uint32_t i;
uint32_t num_regions = get_num_regions();
u32_t i;
u32_t num_regions = get_num_regions();
for (i = static_regions_num; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
@@ -413,75 +403,12 @@ static void _mpu_reset_dynamic_regions(void)
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(uint8_t type, uint32_t base, uint32_t size)
static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
{
uint32_t region_attr = get_region_attr_by_type(type);
u32_t region_attr = get_region_attr_by_type(type);
return _dynamic_region_allocate_and_init(base, size, region_attr);
}
#else
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct region index.
*/
static inline int get_region_index_by_type(uint32_t type)
{
/*
* The new MPU regions are allocated per type after the statically
* configured regions. The type is one-indexed rather than
* zero-indexed.
*
* For ARC MPU v2, the smaller index has higher priority, so the
* index is allocated in reverse order. Static regions start from
* the biggest index, then thread related regions.
*
*/
switch (type) {
case THREAD_STACK_USER_REGION:
return static_regions_num + THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return static_regions_num + type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return static_regions_num + type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not enabled.
*/
return static_regions_num + type - 1;
#endif
default:
__ASSERT(0, "Unsupported type");
return -EINVAL;
}
}
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(uint8_t type, uint32_t base, uint32_t size)
{
int region_index = get_region_index_by_type(type);
uint32_t region_attr = get_region_attr_by_type(type);
LOG_DBG("Region info: 0x%x 0x%x", base, size);
if (region_attr == 0U || region_index < 0) {
return -EINVAL;
}
_region_init(region_index, base, size, region_attr);
return 0;
}
#endif
/* ARC Core MPU Driver API Implementation for ARC MPUv3 */
@@ -492,9 +419,9 @@ void arc_core_mpu_enable(void)
{
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* the default region:
* secure:0x8000, SID:0x10000, KW:0x100, KR:0x80
* normal:0x000, SID:0x10000, KW:0x100, KR:0x80, KE:0x40
*/
#define MPU_ENABLE_ATTR 0x18180
#define MPU_ENABLE_ATTR 0x101c0
#else
#define MPU_ENABLE_ATTR 0
#endif
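As a quick arithmetic check of the two MPU_ENABLE_ATTR candidates above, assuming the bit values listed in the comment (SID 0x10000, secure 0x8000, KW 0x100, KR 0x80, KE 0x40):

enum {
	/* "secure" line of the comment: SID | secure | KW | KR */
	MPU_EN_ATTR_SECURE_EXAMPLE = 0x10000 | 0x8000 | 0x100 | 0x80, /* 0x18180 */
	/* "normal" line of the comment: SID | KW | KR | KE     */
	MPU_EN_ATTR_NORMAL_EXAMPLE = 0x10000 | 0x100 | 0x80 | 0x40,   /* 0x101c0 */
};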
@@ -520,7 +447,6 @@ void arc_core_mpu_disable(void)
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_MPU_GAP_FILLING)
/* the mpu entries of ARC MPUv3 are divided into 2 parts:
* static entries: global mpu entries, not changed in context switch
* dynamic entries: MPU entries changed in context switch and
@@ -533,50 +459,60 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
* entries
*/
_mpu_reset_dynamic_regions();
#endif
#if defined(CONFIG_MPU_STACK_GUARD)
uint32_t guard_start;
/* Set location of guard area when the thread is running in
* supervisor mode. For a supervisor thread, this is just low
* memory in the stack buffer. For a user thread, it only runs
* in supervisor mode when handling a system call on the privilege
* elevation stack.
*/
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0U) {
guard_start = thread->arch.priv_stack_start;
} else
#endif
{
guard_start = thread->stack_info.start;
/* the areas before and after the user stack of the thread are
* kernel only. These areas can be used as stack guards.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
} else {
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
}
guard_start -= Z_ARC_STACK_GUARD_SIZE;
if (_mpu_configure(THREAD_STACK_GUARD_REGION, guard_start,
Z_ARC_STACK_GUARD_SIZE) < 0) {
#else
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
#endif /* CONFIG_MPU_STACK_GUARD */
#endif
#endif
#if defined(CONFIG_USERSPACE)
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = thread->mem_domain_info.mem_domain;
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
if (_mpu_configure(THREAD_STACK_USER_REGION,
(uint32_t)thread->stack_info.start,
thread->stack_info.size) < 0) {
(u32_t)thread->stack_obj, thread->stack_info.size) < 0) {
LOG_ERR("thread %p's stack failed", thread);
return;
}
}
#if defined(CONFIG_MPU_GAP_FILLING)
uint32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = thread->mem_domain_info.mem_domain;
/* configure thread's memory domain */
if (mem_domain) {
LOG_DBG("configure thread %p's domain: %p",
@@ -588,7 +524,7 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
pparts = NULL;
}
for (uint32_t i = 0; i < num_partitions; i++) {
for (u32_t i = 0; i < num_partitions; i++) {
if (pparts->size) {
if (_dynamic_region_allocate_and_init(pparts->start,
pparts->size, pparts->attr) < 0) {
@@ -600,9 +536,6 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
}
pparts++;
}
#else
arc_core_mpu_configure_mem_domain(thread);
#endif
#endif
}
@@ -611,7 +544,7 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(uint32_t region_attr)
void arc_core_mpu_default(u32_t region_attr)
{
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* \todo through secure service to access mpu */
@@ -628,8 +561,8 @@ void arc_core_mpu_default(uint32_t region_attr)
* @param size region size
* @param region_attr region attribute
*/
int arc_core_mpu_region(uint32_t index, uint32_t base, uint32_t size,
uint32_t region_attr)
int arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
u32_t region_attr)
{
if (index >= get_num_regions()) {
return -EINVAL;
@@ -648,56 +581,10 @@ int arc_core_mpu_region(uint32_t index, uint32_t base, uint32_t size,
*
* @param thread the thread which has memory domain
*/
#if defined(CONFIG_MPU_GAP_FILLING)
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
arc_core_mpu_configure_thread(thread);
}
#else
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
uint32_t region_index;
uint32_t num_partitions;
uint32_t num_regions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = NULL;
if (thread) {
mem_domain = thread->mem_domain_info.mem_domain;
}
if (mem_domain) {
LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
LOG_DBG("disable domain partition regions");
num_partitions = 0U;
pparts = NULL;
}
num_regions = get_num_regions();
region_index = get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
while (num_partitions && region_index < num_regions) {
if (pparts->size > 0) {
LOG_DBG("set region 0x%x 0x%lx 0x%x",
region_index, pparts->start, pparts->size);
_region_init(region_index, pparts->start,
pparts->size, pparts->attr);
region_index++;
}
pparts++;
num_partitions--;
}
while (region_index < num_regions) {
/* clear the left mpu entries */
_region_init(region_index, 0, 0, 0);
region_index++;
}
}
#endif
/**
* @brief remove MPU regions for the memory partitions of the memory domain
@@ -706,7 +593,7 @@ void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
*/
void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
{
uint32_t num_partitions;
u32_t num_partitions;
struct k_mem_partition *pparts;
int index;
@@ -720,17 +607,13 @@ void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
pparts = NULL;
}
for (uint32_t i = 0; i < num_partitions; i++) {
for (u32_t i = 0; i < num_partitions; i++) {
if (pparts->size) {
index = _get_region_index(pparts->start,
pparts->size);
if (index > 0) {
#if defined(CONFIG_MPU_GAP_FILLING)
_region_set_attr(index,
REGION_KERNEL_RAM_ATTR);
#else
_region_init(index, 0, 0, 0);
#endif
REGION_KERNEL_RAM_ATTR);
}
}
pparts++;
@@ -743,7 +626,7 @@ void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
* @param partition_id memory partition id
*/
void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
uint32_t partition_id)
u32_t partition_id)
{
struct k_mem_partition *partition = &domain->partitions[partition_id];
@@ -755,11 +638,7 @@ void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
}
LOG_DBG("remove region 0x%x", region_index);
#if defined(CONFIG_MPU_GAP_FILLING)
_region_set_attr(region_index, REGION_KERNEL_RAM_ATTR);
#else
_region_init(region_index, 0, 0, 0);
#endif
}
/**
@@ -767,13 +646,8 @@ void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
#if defined(CONFIG_MPU_GAP_FILLING)
/* consider the worst case: each partition requires split */
return (get_num_regions() - MPU_REGION_NUM_FOR_THREAD) / 2;
#else
return get_num_regions() -
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) - 1;
#endif
}
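Worked example for the gap-filling branch above, with assumed numbers: on a 16-region MPU with MPU_REGION_NUM_FOR_THREAD == 3 (stack guard, user stack and split entry), each partition may need a split of its own, so at most (16 - 3) / 2 = 6 domain partitions can be programmed at once.

enum {
	EXAMPLE_NUM_REGIONS    = 16,  /* assumed hardware region count      */
	EXAMPLE_THREAD_REGIONS = 3,   /* guard + user stack + split (above) */
	EXAMPLE_MAX_PARTITIONS =
		(EXAMPLE_NUM_REGIONS - EXAMPLE_THREAD_REGIONS) / 2, /* == 6 */
};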
/**
@@ -789,9 +663,9 @@ int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
* we can stop the iteration immediately once we find the
* matched region that grants permission or denies access.
*/
r_index = _mpu_probe((uint32_t)addr);
r_index = _mpu_probe((u32_t)addr);
/* match and the area is in one region */
if (r_index >= 0 && r_index == _mpu_probe((uint32_t)addr + (size - 1))) {
if (r_index >= 0 && r_index == _mpu_probe((u32_t)addr + (size - 1))) {
if (_is_user_accessible_region(r_index, write)) {
r_index = 0;
} else {
@@ -814,11 +688,11 @@ int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static int arc_mpu_init(const struct device *arg)
static int arc_mpu_init(struct device *arg)
{
ARG_UNUSED(arg);
uint32_t num_regions;
uint32_t i;
u32_t num_regions;
u32_t i;
num_regions = get_num_regions();
@@ -830,18 +704,11 @@ static int arc_mpu_init(const struct device *arg)
return -EINVAL;
}
static_regions_num = 0;
/* Disable MPU */
arc_core_mpu_disable();
for (i = 0U; i < mpu_config.num_regions; i++) {
/* skip empty region */
if (mpu_config.mpu_regions[i].size == 0) {
continue;
}
#if defined(CONFIG_MPU_GAP_FILLING)
_region_init(static_regions_num,
_region_init(i,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
@@ -865,22 +732,11 @@ static int arc_mpu_init(const struct device *arg)
dynamic_regions_num++;
}
static_regions_num++;
#else
/* dynamic region will be covered by default mpu setting
* no need to configure
*/
if (!(mpu_config.mpu_regions[i].attr & REGION_DYNAMIC)) {
_region_init(static_regions_num,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
static_regions_num++;
}
#endif
}
for (i = static_regions_num; i < num_regions; i++) {
static_regions_num = mpu_config.num_regions;
for (; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
}
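For context on the mpu_config table iterated above: SoC or board code provides the static region list. The struct and macro names in this sketch (arc_mpu_region, arc_mpu_config, MPU_REGION_ENTRY) are assumptions based on the ARC MPU headers, not part of this diff; REGION_KERNEL_RAM_ATTR is the attribute already used elsewhere in this file.

static const struct arc_mpu_region mpu_regions[] = {
	/* hypothetical single RAM region */
	MPU_REGION_ENTRY("RAM", 0x80000000, 0x20000, REGION_KERNEL_RAM_ATTR),
};

const struct arc_mpu_config mpu_config = {
	.num_regions = ARRAY_SIZE(mpu_regions),
	.mpu_regions = mpu_regions,
};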


@@ -37,11 +37,6 @@ GEN_OFFSET_SYM(_thread_arch_t, u_stack_top);
#endif
#endif
#ifdef CONFIG_USERSPACE
GEN_OFFSET_SYM(_thread_arch_t, priv_stack_start);
#endif
/* ARCv2-specific IRQ stack frame structure member offsets */
GEN_OFFSET_SYM(_isf_t, r0);
GEN_OFFSET_SYM(_isf_t, r1);
@@ -104,7 +99,7 @@ GEN_OFFSET_SYM(_callee_saved_stack_t, r30);
GEN_OFFSET_SYM(_callee_saved_stack_t, r58);
GEN_OFFSET_SYM(_callee_saved_stack_t, r59);
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_status);
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_ctrl);
#ifdef CONFIG_FP_FPU_DA


@@ -46,7 +46,7 @@ static void disable_icache(void)
return; /* skip if i-cache is not present */
}
z_arc_v2_aux_reg_write(_ARC_V2_IC_IVIC, 0);
__builtin_arc_nop();
__asm__ __volatile__ ("nop");
z_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, 1);
}


@@ -23,7 +23,18 @@
GTEXT(_rirq_enter)
GTEXT(_rirq_exit)
GTEXT(_rirq_newthread_switch)
GTEXT(_rirq_common_interrupt_swap)
#if 0 /* TODO: when FIRQ is not present, all would be regular */
#define NUM_REGULAR_IRQ_PRIO_LEVELS CONFIG_NUM_IRQ_PRIO_LEVELS
#else
#define NUM_REGULAR_IRQ_PRIO_LEVELS (CONFIG_NUM_IRQ_PRIO_LEVELS-1)
#endif
/* note: the above define assumes that prio 0 IRQ is for FIRQ, and
* that all others are regular interrupts.
* TODO: Revisit this if FIRQ becomes configurable.
*/
/*
@@ -203,16 +214,23 @@ will be corrupted.
SECTION_FUNC(TEXT, _rirq_enter)
/* the ISR will be handled on a separate interrupt stack,
* so stack checking must be disabled, or an exception will
* be raised
*/
_disable_stack_checking r2
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r2, [_ARC_V2_SEC_STAT]
bclr r2, r2, _ARC_V2_SEC_STAT_SSC_BIT
sflag r2
#else
/* disable stack checking */
lr r2, [_ARC_V2_STATUS32]
bclr r2, r2, _ARC_V2_STATUS32_SC_BIT
kflag r2
#endif
#endif
clri
/* check whether irq stack is used, if
* not switch to isr stack
*/
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
bne.d rirq_nest
@@ -242,17 +260,41 @@ SECTION_FUNC(TEXT, _rirq_exit)
_check_nest_int_by_irq_act r0, r1
jne _rirq_no_switch
jne _rirq_no_reschedule
/* sp is struct k_thread **old of z_arc_switch_in_isr
* which is a wrapper of z_get_next_switch_handle.
* r0 contains the 1st thread in the ready queue. It is compared
* with _current (r2) to decide whether a switch is needed.
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
cmp_s r0, 0
beq _rirq_no_reschedule
mov_s r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/*
* Both (a)reschedule and (b)non-reschedule cases need to load the
* current thread's stack, but don't have to use it until the decision
* is taken: load the delay slots with the 'load stack pointer'
* instruction.
*
* a) needs to load it to save outgoing context.
* b) needs to load it to restore the interrupted context.
*/
_get_next_switch_handle
cmp r0, r2
beq _rirq_no_switch
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
cmp_s r0, r2
beq _rirq_no_reschedule
/* cached thread to run is in r0, fall through */
#endif
.balign 4
_rirq_reschedule:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to remember SEC_STAT.IRM bit */
@@ -260,34 +302,81 @@ SECTION_FUNC(TEXT, _rirq_exit)
push_s r3
#endif
/* r2 is old thread */
_irq_store_old_thread_callee_regs
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of interrupted thread
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
/* _save_callee_saved_regs expects outgoing thread in r2 */
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
/* mov new thread (r0) to r2 */
mov r2, r0
#ifdef CONFIG_SMP
mov_s r2, r0
#else
/* incoming thread is in r0: it becomes the new 'current' */
mov_s r2, r0
st_s r2, [r1, _kernel_offset_to_current]
#endif
/* _rirq_newthread_switch required by exception handling */
.align 4
_rirq_newthread_switch:
.balign 4
_rirq_common_interrupt_swap:
/* r2 contains pointer to new thread */
_load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
breq r3, _CAUSE_RIRQ, _rirq_switch_from_rirq
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* when USERSPACE is enabled, according to the ARCv2 ISA, SP will be switched
* if the interrupt is taken in user mode, and this is recorded in bit 31
* (the U bit) of IRQ_ACT. When the interrupt exits, SP will be switched back
* according to the U bit.
*
* For the case where the context switch happens in the interrupt, the
* target sp must be the thread's kernel stack, so no hardware sp switch is
* needed and the U bit should be cleared.
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _rirq_return_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _rirq_switch_from_firq
breq r3, _CAUSE_FIRQ, _rirq_return_from_firq
nop_s
/* fall through */
.align 4
_rirq_switch_from_coop:
.balign 4
_rirq_return_from_coop:
/* for a cooperative switch, it's not in irq, so
* need to set some regs for irq return
*/
_set_misc_regs_irq_switch_from_coop
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* must return to secure mode, so set IRM bit to 1 */
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_IRM_BIT
sflag r0
#endif
/*
* See verbose explanation of
@@ -309,29 +398,27 @@ _rirq_switch_from_coop:
*/
st_s r13, [sp, ___isf_t_r13_OFFSET]
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#endif
/* stack now has the IRQ stack frame layout, pointing to sp */
/* rtie will pop the rest from the stack */
rtie
.align 4
_rirq_switch_from_firq:
_rirq_switch_from_rirq:
_set_misc_regs_irq_switch_from_irq
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
.balign 4
_rirq_return_from_firq:
_rirq_return_from_rirq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
_rirq_no_switch:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
_rirq_no_reschedule:
rtie


@@ -16,14 +16,14 @@
#include <arch/cpu.h>
#include <swap_macros.h>
GDATA(z_interrupt_stacks)
GDATA(_interrupt_stack)
GDATA(z_main_stack)
GDATA(_VectorTable)
/* use one of the available interrupt stacks during init */
#define INIT_STACK z_interrupt_stacks
#define INIT_STACK _interrupt_stack
#define INIT_STACK_SIZE CONFIG_ISR_STACK_SIZE
GTEXT(__reset)
@@ -121,40 +121,12 @@ invalidate_dcache:
done_cache_invalidate:
/*
* Init ARC internal architecture state
* Force to initialize internal architecture state to reset values
* For scenarios where board hardware is not re-initialized between tests,
* some settings need to be restored to their default initial states as a
* substitute for the normal hardware reset sequence.
*/
#ifdef CONFIG_INIT_ARCH_HW_AT_BOOT
/* Set MPU (v3) registers to default */
#if CONFIG_ARC_MPU_VER == 3
/* Set default reset value to _ARC_V2_MPU_EN register */
#define ARC_MPU_EN_RESET_VALUE 0x400181C0
mov_s r1, ARC_MPU_EN_RESET_VALUE
sr r1, [_ARC_V2_MPU_EN]
/* Get MPU region numbers */
lr r3, [_ARC_V2_MPU_BUILD]
lsr_s r3, r3, 8
and r3, r3, 0xff
mov_s r1, 0
mov_s r2, 0
/* Set all MPU regions by iterating index */
mpu_regions_reset:
brge r2, r3, done_mpu_regions_reset
sr r2, [_ARC_V2_MPU_INDEX]
sr r1, [_ARC_V2_MPU_RSTART]
sr r1, [_ARC_V2_MPU_REND]
sr r1, [_ARC_V2_MPU_RPER]
add_s r2, r2, 1
b_s mpu_regions_reset
done_mpu_regions_reset:
#endif
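For readers less comfortable with the assembly, a rough C model of the MPU reset sequence above (the real code runs before any C environment exists, which is why it is written in assembly; the aux-register helpers are the same ones used by the MPU drivers earlier in this compare):

static void mpu_v3_regions_reset_model(void)
{
	u32_t num = (z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD) >> 8) & 0xff;

	z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN, 0x400181C0); /* reset value */

	for (u32_t i = 0; i < num; i++) {
		z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, i);
		z_arc_v2_aux_reg_write(_ARC_V2_MPU_RSTART, 0);
		z_arc_v2_aux_reg_write(_ARC_V2_MPU_REND, 0);
		z_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, 0);
	}
}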
#if defined(CONFIG_SYS_POWER_DEEP_SLEEP_STATES) && \
!defined(CONFIG_BOOTLOADER_CONTEXT_RESTORE)
jl @_sys_resume_from_deep_sleep
#endif
#if defined(CONFIG_SMP) || CONFIG_MP_NUM_CPUS > 1
#if CONFIG_MP_NUM_CPUS > 1
_get_cpu_id r0
breq r0, 0, _master_core_startup
@@ -162,9 +134,6 @@ done_mpu_regions_reset:
* Non-masters wait for master core (core 0) to boot enough
*/
_slave_core_wait:
#if CONFIG_MP_NUM_CPUS == 1
kflag 1
#endif
ld r1, [arc_cpu_wake_flag]
brne r0, r1, _slave_core_wait
@@ -173,9 +142,7 @@ _slave_core_wait:
st 0, [arc_cpu_wake_flag]
#if defined(CONFIG_ARC_FIRQ_STACK)
push r0
jl z_arc_firq_stack_set
pop r0
#endif
j z_arc_slave_start
@@ -191,7 +158,7 @@ _master_core_startup:
mov_s sp, z_main_stack
add sp, sp, CONFIG_MAIN_STACK_SIZE
mov_s r0, z_interrupt_stacks
mov_s r0, _interrupt_stack
mov_s r1, 0xaa
mov_s r2, CONFIG_ISR_STACK_SIZE
jl memset
@@ -205,4 +172,4 @@ _master_core_startup:
jl z_arc_firq_stack_set
#endif
j _PrepC
j @_PrepC


@@ -19,9 +19,9 @@ static void _default_sjli_entry(void);
* \todo: how to let the user install a customized sjli entry easily, e.g.
* through macros or with the help of compiler?
*/
const static uint32_t _sjli_vector_table[CONFIG_SJLI_TABLE_SIZE] = {
[0] = (uint32_t)_arc_do_secure_call,
[1 ... (CONFIG_SJLI_TABLE_SIZE - 1)] = (uint32_t)_default_sjli_entry,
const static u32_t _sjli_vector_table[CONFIG_SJLI_TABLE_SIZE] = {
[0] = (u32_t)_arc_do_secure_call,
[1 ... (CONFIG_SJLI_TABLE_SIZE - 1)] = (u32_t)_default_sjli_entry,
};
/*
@@ -48,7 +48,7 @@ static void sjli_table_init(void)
/*
* @brief initialization of secureshield-related functions.
*/
static int arc_secureshield_init(const struct device *arg)
static int arc_secureshield_init(struct device *arg)
{
sjli_table_init();


@@ -22,7 +22,7 @@
* a secure service to access secure aux regs. A check should be done
* to decide whether the access is valid.
*/
static int32_t arc_s_aux_read(uint32_t aux_reg)
static s32_t arc_s_aux_read(u32_t aux_reg)
{
return -1;
}
@@ -37,7 +37,7 @@ static int32_t arc_s_aux_read(uint32_t aux_reg)
* a secure service to access secure aux regs. A check should be done
* to decide whether the access is valid.
*/
static int32_t arc_s_aux_write(uint32_t aux_reg, uint32_t val)
static s32_t arc_s_aux_write(u32_t aux_reg, u32_t val)
{
if (aux_reg == _ARC_V2_AUX_IRQ_ACT) {
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocated to secure world
@@ -64,7 +64,7 @@ static int32_t arc_s_aux_write(uint32_t aux_reg, uint32_t val)
* apply for one. Necessary checks should be done to decide whether the
* request is valid
*/
static int32_t arc_s_irq_alloc(uint32_t intno)
static s32_t arc_s_irq_alloc(u32_t intno)
{
z_arc_v2_irq_uinit_secure_set(intno, 0);
return 0;


@@ -22,7 +22,7 @@
#include <v2/irq.h>
#include <swap_macros.h>
GTEXT(z_arc_switch)
GTEXT(arch_switch)
/**
*
@@ -52,16 +52,30 @@ GTEXT(z_arc_switch)
*
*/
SECTION_FUNC(TEXT, z_arc_switch)
SECTION_FUNC(TEXT, arch_switch)
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s r1
push_s blink
bl read_timer_start_of_swap
pop_s blink
pop_s r1
pop_s r0
#endif
/*
* r0 = new_thread->switch_handle = switch_to thread,
* r1 = &old_thread->switch_handle
* get old_thread from r1
* r1 = &old_thread->switch_handle = &switch_from thread
*/
sub r2, r1, ___thread_t_switch_handle_OFFSET
ld_s r2, [r1]
/*
* r2 may be dummy_thread in z_cstart, dummy_thread->switch_handle
* must be 0
*/
breq r2, 0, _switch_to_target_thread
st _CAUSE_COOP, [r2, _thread_offset_to_relinquish_cause]
@@ -83,16 +97,39 @@ SECTION_FUNC(TEXT, z_arc_switch)
push_s blink
_store_old_thread_callee_regs
_save_callee_saved_regs
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking here, as sp will be changed to target
* thread's sp
*/
_disable_stack_checking r3
#if defined(CONFIG_ARC_HAS_SECURE) && defined(CONFIG_ARC_SECURE_FIRMWARE)
bclr r3, r3, _ARC_V2_SEC_STAT_SSC_BIT
sflag r3
#else
bclr r3, r3, _ARC_V2_STATUS32_SC_BIT
kflag r3
#endif
#endif
_switch_to_target_thread:
mov_s r2, r0
_load_new_thread_callee_regs
/* entering here, r2 contains the new current thread */
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
bl configure_mpu_thread
pop_s r2
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _switch_return_from_rirq
nop_s
@@ -101,7 +138,7 @@ SECTION_FUNC(TEXT, z_arc_switch)
/* fall through to _switch_return_from_coop */
.align 4
.balign 4
_switch_return_from_coop:
pop_s blink /* pc into blink */
@@ -114,28 +151,35 @@ _switch_return_from_coop:
pop_s r3 /* status32 into r3 */
kflag r3 /* write status32 */
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
push_s blink
bl z_thread_mark_switched_in
pop_s blink
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking
#endif
return_loc:
j_s [blink]
.align 4
.balign 4
_switch_return_from_rirq:
_switch_return_from_firq:
_set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
/* use lowest interrupt priority to simulate
* an interrupt return in order to load the remaining regs of the new
* thread
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
lr r3, [_ARC_V2_AUX_IRQ_ACT]
/* use lowest interrupt priority */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
or r3, r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
@@ -150,11 +194,15 @@ _switch_return_from_firq:
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_INSTRUMENT_THREAD_SWITCHING
rtie
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking:
push_s blink
bl z_thread_mark_switched_in
bl read_timer_end_of_swap
pop_s blink
#endif
rtie
b return_loc
#endif /* CONFIG_EXECUTION_BENCHMARKING */
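A hedged C-level view of the contract this routine implements, per the register comment near the top of the function: the scheduler passes the incoming thread's switch handle plus a pointer to the outgoing thread's switch_handle field, and the outgoing k_thread is recovered by subtracting the field offset (the "sub r2, r1, ___thread_t_switch_handle_OFFSET" shown above). Names follow the kernel's use-switch interface from memory and are not taken from this diff.

#include <kernel.h>

extern void arch_switch(void *switch_to, void **switched_from);

static void switch_call_model(struct k_thread *new_thread,
			      struct k_thread *old_thread)
{
	arch_switch(new_thread->switch_handle,
		    &old_thread->switch_handle);
}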


@@ -22,199 +22,254 @@
/* initial stack frame */
struct init_stack_frame {
uint32_t pc;
u32_t pc;
#ifdef CONFIG_ARC_HAS_SECURE
uint32_t sec_stat;
u32_t sec_stat;
#endif
uint32_t status32;
uint32_t r3;
uint32_t r2;
uint32_t r1;
uint32_t r0;
u32_t status32;
u32_t r3;
u32_t r2;
u32_t r1;
u32_t r0;
};
#ifdef CONFIG_USERSPACE
struct user_init_stack_frame {
struct init_stack_frame iframe;
uint32_t user_sp;
};
static bool is_user(struct k_thread *thread)
{
return (thread->base.user_options & K_USER) != 0;
}
#endif
/* Set all stack-related architecture variables for the provided thread */
static void setup_stack_vars(struct k_thread *thread)
{
#ifdef CONFIG_USERSPACE
if (is_user(thread)) {
#ifdef CONFIG_GEN_PRIV_STACKS
thread->arch.priv_stack_start =
(uint32_t)z_priv_stack_find(thread->stack_obj);
#else
thread->arch.priv_stack_start = (uint32_t)(thread->stack_obj);
#endif /* CONFIG_GEN_PRIV_STACKS */
thread->arch.priv_stack_start += Z_ARC_STACK_GUARD_SIZE;
} else {
thread->arch.priv_stack_start = 0;
}
#endif /* CONFIG_USERSPACE */
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_USERSPACE
if (is_user(thread)) {
thread->arch.k_stack_top = thread->arch.priv_stack_start;
thread->arch.k_stack_base = (thread->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE);
thread->arch.u_stack_top = thread->stack_info.start;
thread->arch.u_stack_base = (thread->stack_info.start +
thread->stack_info.size);
} else
#endif /* CONFIG_USERSPACE */
{
thread->arch.k_stack_top = (uint32_t)thread->stack_info.start;
thread->arch.k_stack_base = (uint32_t)(thread->stack_info.start +
thread->stack_info.size);
#ifdef CONFIG_USERSPACE
thread->arch.u_stack_top = 0;
thread->arch.u_stack_base = 0;
#endif /* CONFIG_USERSPACE */
}
#endif /* CONFIG_ARC_STACK_CHECKING */
}
/* Get the initial stack frame pointer from the thread's stack buffer. */
static struct init_stack_frame *get_iframe(struct k_thread *thread,
char *stack_ptr)
{
#ifdef CONFIG_USERSPACE
if (is_user(thread)) {
/* Initial stack frame for a user thread is slightly larger;
* we land in z_user_thread_entry_wrapper on the privilege
* stack, and pop off an additional value for the user
* stack pointer.
*/
struct user_init_stack_frame *uframe;
uframe = Z_STACK_PTR_TO_FRAME(struct user_init_stack_frame,
thread->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE);
uframe->user_sp = (uint32_t)stack_ptr;
return &uframe->iframe;
}
#endif
return Z_STACK_PTR_TO_FRAME(struct init_stack_frame, stack_ptr);
}
/*
* Pre-populate values in the registers inside _callee_saved_stack struct
* so these registers have pre-defined values when new thread begins
* execution. For example, setting up the thread pointer for thread local
* storage here so the thread starts with thread pointer already set up.
*/
static inline void arch_setup_callee_saved_regs(struct k_thread *thread,
uintptr_t stack_ptr)
{
_callee_saved_stack_t *regs = UINT_TO_POINTER(stack_ptr);
ARG_UNUSED(regs);
#ifdef CONFIG_THREAD_LOCAL_STORAGE
/* R26 is used for thread pointer */
regs->r26 = thread->tls;
#endif
}
/*
* @brief Initialize a new thread from its stack space
*
* The thread control structure is put at the lower address of the stack. An
* initial context, to be "restored" by __return_from_coop(), is put at
* the other end of the stack, and thus reusable by the stack when not
* needed anymore.
*
* The initial context is a basic stack frame that contains arguments for
* z_thread_entry(), a return address that points at z_thread_entry(),
* and the status register.
*
* <options> is currently unused.
*
* @param pStackmem the pointer to aligned stack memory
* @param stackSize the stack size in bytes
* @param pEntry thread entry point routine
* @param parameter1 first param to entry point
* @param parameter2 second param to entry point
* @param parameter3 third param to entry point
* @param priority thread priority
* @param options thread options: K_ESSENTIAL
*
* @return N/A
*/
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
char *stack_ptr, k_thread_entry_t entry,
void *p1, void *p2, void *p3)
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
{
struct init_stack_frame *iframe;
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
Z_ASSERT_VALID_PRIO(priority, pEntry);
setup_stack_vars(thread);
/* Set up initial stack frame */
iframe = get_iframe(thread, stack_ptr);
char *stackEnd;
char *stackAdjEnd;
struct init_stack_frame *pInitCtx;
#ifdef CONFIG_USERSPACE
/* enable US bit, US is read as zero in user mode. This will allow user
size_t stackAdjSize;
size_t offset = 0;
/* adjust stack and stack size */
#if CONFIG_ARC_MPU_VER == 2
stackAdjSize = Z_ARC_MPUV2_SIZE_ALIGN(stackSize);
#elif CONFIG_ARC_MPU_VER == 3
stackAdjSize = STACK_SIZE_ALIGN(stackSize);
#endif
stackEnd = pStackMem + stackAdjSize;
#ifdef CONFIG_STACK_POINTER_RANDOM
offset = stackAdjSize - stackSize;
#endif
if (options & K_USER) {
thread->arch.priv_stack_start =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd +
ARCH_THREAD_STACK_RESERVED);
/* reserve 4 bytes for the start of user sp */
stackAdjEnd -= 4;
(*(u32_t *)stackAdjEnd) = STACK_ROUND_DOWN(
(u32_t)stackEnd - offset);
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
thread->userspace_local_data =
(struct _thread_userspace_local_data *)
STACK_ROUND_DOWN(stackEnd -
sizeof(*thread->userspace_local_data) - offset);
/* update the start of user sp */
(*(u32_t *)stackAdjEnd) = (u32_t) thread->userspace_local_data;
#endif
} else {
/* for kernel thread, the privilege stack is merged into thread stack */
/* if MPU_STACK_GUARD is enabled, reserve the stack area
* |---------------------| |----------------|
* | user stack | | stack guard |
* |---------------------| to |----------------|
* | stack guard | | kernel thread |
* |---------------------| | stack |
* | privilege stack | | |
* ---------------------------------------------
*/
pStackMem += STACK_GUARD_SIZE;
stackAdjSize = stackAdjSize + CONFIG_PRIVILEGED_STACK_SIZE;
stackEnd += ARCH_THREAD_STACK_RESERVED;
thread->arch.priv_stack_start = 0;
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd
- sizeof(*thread->userspace_local_data) - offset);
thread->userspace_local_data =
(struct _thread_userspace_local_data *)stackAdjEnd;
#else
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd - offset);
#endif
}
z_new_thread_init(thread, pStackMem, stackAdjSize, priority, options);
/* carve the thread entry struct from the "base" of
the privileged stack */
pInitCtx = (struct init_stack_frame *)(
stackAdjEnd - sizeof(struct init_stack_frame));
/* fill init context */
pInitCtx->status32 = 0U;
if (options & K_USER) {
pInitCtx->pc = ((u32_t)z_user_thread_entry_wrapper);
} else {
pInitCtx->pc = ((u32_t)z_thread_entry_wrapper);
}
/*
* enable US bit, US is read as zero in user mode. This will allow user
* mode sleep instructions, and it enables a form of denial-of-service
* attack by putting the processor in sleep mode, but since interrupt
* level/mask can't be set from user space that's not worse than
* executing a loop without yielding.
*/
iframe->status32 = _ARC_V2_STATUS32_US;
if (is_user(thread)) {
iframe->pc = (uint32_t)z_user_thread_entry_wrapper;
} else {
iframe->pc = (uint32_t)z_thread_entry_wrapper;
}
#else
iframe->status32 = 0;
iframe->pc = ((uint32_t)z_thread_entry_wrapper);
#endif /* CONFIG_USERSPACE */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
iframe->sec_stat = z_arc_v2_aux_reg_read(_ARC_V2_SEC_STAT);
#endif
iframe->r0 = (uint32_t)entry;
iframe->r1 = (uint32_t)p1;
iframe->r2 = (uint32_t)p2;
iframe->r3 = (uint32_t)p3;
pInitCtx->status32 |= _ARC_V2_STATUS32_US;
#else /* For no USERSPACE feature */
pStackMem += ARCH_THREAD_STACK_RESERVED;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize, priority, options);
stackAdjEnd = stackEnd;
pInitCtx = (struct init_stack_frame *)(
STACK_ROUND_DOWN(stackAdjEnd) -
sizeof(struct init_stack_frame));
pInitCtx->status32 = 0U;
pInitCtx->pc = ((u32_t)z_thread_entry_wrapper);
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
pInitCtx->sec_stat = z_arc_v2_aux_reg_read(_ARC_V2_SEC_STAT);
#endif
pInitCtx->r0 = (u32_t)pEntry;
pInitCtx->r1 = (u32_t)parameter1;
pInitCtx->r2 = (u32_t)parameter2;
pInitCtx->r3 = (u32_t)parameter3;
/* stack check configuration */
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
iframe->sec_stat |= _ARC_V2_SEC_STAT_SSC;
pInitCtx->sec_stat |= _ARC_V2_SEC_STAT_SSC;
#else
iframe->status32 |= _ARC_V2_STATUS32_SC;
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
#endif /* CONFIG_ARC_STACK_CHECKING */
#ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
iframe->status32 |= _ARC_V2_STATUS32_AD;
pInitCtx->status32 |= _ARC_V2_STATUS32_SC;
#endif
/* Set required thread members */
#ifdef CONFIG_USERSPACE
if (options & K_USER) {
thread->arch.u_stack_top = (u32_t)pStackMem;
thread->arch.u_stack_base = (u32_t)stackEnd;
thread->arch.k_stack_top =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
thread->arch.k_stack_base = (u32_t)
(stackEnd + ARCH_THREAD_STACK_RESERVED);
} else {
thread->arch.k_stack_top = (u32_t)pStackMem;
thread->arch.k_stack_base = (u32_t)stackEnd;
thread->arch.u_stack_top = 0;
thread->arch.u_stack_base = 0;
}
#else
thread->arch.k_stack_top = (u32_t) pStackMem;
thread->arch.k_stack_base = (u32_t) stackEnd;
#endif
#endif
#ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
pInitCtx->status32 |= _ARC_V2_STATUS32_AD;
#endif
thread->switch_handle = thread;
thread->arch.relinquish_cause = _CAUSE_COOP;
thread->callee_saved.sp =
(uint32_t)iframe - ___callee_saved_stack_t_SIZEOF;
arch_setup_callee_saved_regs(thread, thread->callee_saved.sp);
(u32_t)pInitCtx - ___callee_saved_stack_t_SIZEOF;
/* initial values in all other regs/k_thread entries are irrelevant */
}
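To make the role of the pre-filled frame concrete, here is a small host-side sketch (demo_* names are illustrative only, and the real wrapper is assembly, not C): arch_new_thread() stores the entry function in r0 and its three parameters in r1..r3, and the first "return" into the entry wrapper simply picks them up and calls the entry.
#include <stdio.h>
typedef void (*demo_entry_t)(void *p1, void *p2, void *p3);
/* Illustrative frame: the slots mirror r0..r3 in the init frame above. */
struct demo_frame {
	demo_entry_t entry;	/* r0 */
	void *p1, *p2, *p3;	/* r1..r3 */
};
/* Plays the role of z_thread_entry_wrapper: pick up the arguments that
 * arch_new_thread() stashed in the frame and call the thread entry.
 */
static void demo_entry_wrapper(const struct demo_frame *frame)
{
	frame->entry(frame->p1, frame->p2, frame->p3);
}
static void demo_hello(void *p1, void *p2, void *p3)
{
	printf("thread entry called with %s %s %s\n",
	       (char *)p1, (char *)p2, (char *)p3);
}
int main(void)
{
	struct demo_frame frame = { demo_hello, "a", "b", "c" };
	demo_entry_wrapper(&frame);
	return 0;
}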
void *z_arch_get_next_switch_handle(struct k_thread **old_thread)
{
*old_thread = _current;
return z_get_next_switch_handle(*old_thread);
}
#ifdef CONFIG_USERSPACE
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
setup_stack_vars(_current);
/*
* adjust the thread stack layout
* |----------------| |---------------------|
* | stack guard | | user stack |
* |----------------| to |---------------------|
* | kernel thread | | stack guard |
* | stack | |---------------------|
* | | | privilege stack |
* ---------------------------------------------
*/
_current->stack_info.start = (u32_t)_current->stack_obj;
_current->stack_info.size -= CONFIG_PRIVILEGED_STACK_SIZE;
_current->arch.priv_stack_start =
(u32_t)(_current->stack_info.start +
_current->stack_info.size + STACK_GUARD_SIZE);
#ifdef CONFIG_ARC_STACK_CHECKING
_current->arch.k_stack_top = _current->arch.priv_stack_start;
_current->arch.k_stack_base = _current->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE;
_current->arch.u_stack_top = _current->stack_info.start;
_current->arch.u_stack_base = _current->stack_info.start +
_current->stack_info.size;
#endif
/* possible optimization: no need to load mem domain anymore */
/* need to lock the CPU here? */
configure_mpu_thread(_current);
z_arc_userspace_enter(user_entry, p1, p2, p3,
(uint32_t)_current->stack_info.start,
(_current->stack_info.size -
_current->stack_info.delta), _current);
(u32_t)_current->stack_obj,
_current->stack_info.size);
CODE_UNREACHABLE;
}
#endif
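The pointer arithmetic in arch_user_mode_enter() above is easier to follow with concrete numbers. The sketch below uses made-up sizes (the DEMO_* constants stand in for CONFIG_PRIVILEGED_STACK_SIZE, STACK_GUARD_SIZE and the thread's stack_info.size; they are not real Zephyr values): the user stack keeps the start of the stack object but gives up the privileged-stack bytes, and the privilege stack begins after the shrunken user stack plus the guard.
#include <stdint.h>
#include <stdio.h>
#define DEMO_STACK_INFO_SIZE  2048u
#define DEMO_PRIV_STACK_SIZE   512u
#define DEMO_STACK_GUARD_SIZE   64u
int main(void)
{
	uint32_t stack_start = 0x20000000u;	/* pretend stack_info.start */
	/* Mirrors the assignments in arch_user_mode_enter() above. */
	uint32_t user_size  = DEMO_STACK_INFO_SIZE - DEMO_PRIV_STACK_SIZE;
	uint32_t priv_start = stack_start + user_size + DEMO_STACK_GUARD_SIZE;
	printf("user stack:      0x%08x..0x%08x\n",
	       (unsigned int)stack_start, (unsigned int)(stack_start + user_size));
	printf("privilege stack: 0x%08x..0x%08x\n",
	       (unsigned int)priv_start,
	       (unsigned int)(priv_start + DEMO_PRIV_STACK_SIZE));
	return 0;
}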
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
{
unsigned int key;
@@ -247,4 +302,4 @@ int arch_float_enable(struct k_thread *thread)
return 0;
}
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */

View File

@@ -23,17 +23,17 @@
*
* @return 64-bit time stamp value
*/
uint64_t z_tsc_read(void)
u64_t z_tsc_read(void)
{
unsigned int key;
uint64_t t;
uint32_t count;
u64_t t;
u32_t count;
key = arch_irq_lock();
t = (uint64_t)z_tick_get();
key = irq_lock();
t = (u64_t)z_tick_get();
count = z_arc_v2_aux_reg_read(_ARC_V2_TMR0_COUNT);
arch_irq_unlock(key);
irq_unlock(key);
t *= k_ticks_to_cyc_floor64(1);
t += (uint64_t)count;
t += (u64_t)count;
return t;
}
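In words, z_tsc_read() composes a 64-bit timestamp from two sources: the whole elapsed ticks converted to cycles, plus the cycles accumulated so far in the current tick from the timer COUNT register, both sampled under an IRQ lock so they stay consistent. A host-side sketch with a made-up cycles-per-tick value (DEMO_CYC_PER_TICK stands in for k_ticks_to_cyc_floor64(1)):
#include <stdint.h>
#include <stdio.h>
#define DEMO_CYC_PER_TICK 10000ull	/* stands in for k_ticks_to_cyc_floor64(1) */
static uint64_t demo_tsc_read(uint64_t ticks, uint32_t timer_count)
{
	/* Mirrors z_tsc_read(): whole elapsed ticks converted to cycles plus
	 * the cycles accumulated in the current, partial tick.
	 */
	return ticks * DEMO_CYC_PER_TICK + timer_count;
}
int main(void)
{
	printf("timestamp = %llu cycles\n",
	       (unsigned long long)demo_tsc_read(1234u, 5678u));
	return 0;
}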

View File

@@ -1,42 +0,0 @@
/*
* Copyright (c) 2020 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_structs.h>
#include <kernel_internal.h>
#include <kernel_tls.h>
#include <sys/util.h>
size_t arch_tls_stack_setup(struct k_thread *new_thread, char *stack_ptr)
{
/*
* TLS area for ARC has some data fields following by
* thread data and bss. These fields are supposed to be
* used by toolchain and OS TLS code to aid in locating
* the TLS data/bss. Zephyr currently has no use for
* this so we can simply skip these. However, since GCC
* is generating code assuming these fields are there,
* we simply skip them when setting the TLS pointer.
*/
/*
* Since we are populating things backwards,
* setup the TLS data/bss area first.
*/
stack_ptr -= z_tls_data_size();
z_tls_copy(stack_ptr);
/* Skip two pointers due to toolchain */
stack_ptr -= sizeof(uintptr_t) * 2;
/*
* Set thread TLS pointer which is used in
* context switch to point to TLS area.
*/
new_thread->tls = POINTER_TO_UINT(stack_ptr);
return (z_tls_data_size() + (sizeof(uintptr_t) * 2));
}
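A numeric walk-through of the layout that arch_tls_stack_setup() builds, with a made-up TLS image size (DEMO_TLS_DATA_SIZE stands in for z_tls_data_size()): the data/bss image is carved first, then two pointer-sized slots are skipped because the toolchain expects them below the thread pointer, and the returned value is exactly the number of stack bytes consumed.
#include <stdint.h>
#include <stdio.h>
#define DEMO_TLS_DATA_SIZE 96u	/* stands in for z_tls_data_size() */
int main(void)
{
	uintptr_t stack_ptr = 0x20002000u;	/* pretend incoming stack pointer */
	/* Mirrors arch_tls_stack_setup(): TLS data/bss first, then skip two
	 * pointer-sized toolchain slots, and report the total reservation.
	 */
	uintptr_t after_tls = stack_ptr - DEMO_TLS_DATA_SIZE;
	uintptr_t tls_ptr   = after_tls - 2u * sizeof(uintptr_t);
	size_t reserved     = DEMO_TLS_DATA_SIZE + 2u * sizeof(uintptr_t);
	printf("thread->tls = 0x%lx, reserved %zu bytes\n",
	       (unsigned long)tls_ptr, reserved);
	return 0;
}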

View File

@@ -93,15 +93,22 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
/*
* In ARCv2, the U bit can only be set through exception return
*/
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking as the stack should be initialized */
_disable_stack_checking blink
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr blink, [_ARC_V2_SEC_STAT]
bclr blink, blink, _ARC_V2_SEC_STAT_SSC_BIT
sflag blink
#else
lr blink, [_ARC_V2_STATUS32]
bclr blink, blink, _ARC_V2_STATUS32_SC_BIT
kflag blink
#endif
#endif
/* the end of user stack in r5 */
add r5, r4, r5
/* get start of privilege stack, r6 points to current thread */
ld blink, [r6, _thread_offset_to_priv_stack_start]
add blink, blink, CONFIG_PRIVILEGED_STACK_SIZE
/* start of privilege stack */
add blink, r5, CONFIG_PRIVILEGED_STACK_SIZE+STACK_GUARD_SIZE
mov_s sp, r5
push_s r0
@@ -111,9 +118,6 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
mov r5, sp /* skip r0, r1, r2, r3 */
/* to avoid the leakage of kernel info, the thread stack needs to be
* re-initialized
*/
#ifdef CONFIG_INIT_STACKS
mov_s r0, 0xaaaaaaaa
#else
@@ -124,22 +128,23 @@ _clear_user_stack:
cmp r4, r5
jlt _clear_user_stack
/* reload the stack checking regs as the original kernel stack
* becomes user stack
*/
#ifdef CONFIG_ARC_STACK_CHECKING
/* current thread in r6, SMP case is also considered */
mov r2, r6
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_stack_check_regs
_enable_stack_checking r0
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_SSC_BIT
sflag r0
#else
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_SC_BIT
kflag r0
#endif
#endif
/* the following codes are used to switch from kernel mode
* to user mode by fake exception, because U bit can only be set
* by exception
*/
_arc_go_to_user_space:
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_U_BIT
@@ -151,7 +156,7 @@ _arc_go_to_user_space:
/* fake exception return */
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_AE_BIT
bclr r0, r0, _ARC_V2_STATUS32_AE_BIT
kflag r0
/* when exception returns from kernel to user, sp and _ARC_V2_USER_SP
@@ -174,10 +179,15 @@ _arc_go_to_user_space:
clear_scratch_regs
mov fp, 0
mov r29, 0
mov r30, 0
mov blink, 0
mov_s fp, 0
mov_s r29, 0
mov_s r30, 0
mov_s blink, 0
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking_userspace
return_loc_userspace_enter:
#endif /* CONFIG_EXECUTION_BENCHMARKING */
rtie
@@ -214,8 +224,8 @@ SECTION_FUNC(TEXT, _arc_do_syscall)
/* save return value */
st_s r0, [sp, ___isf_t_r0_OFFSET]
mov r29, 0
mov r30, 0
mov_s r29, 0
mov_s r30, 0
/* through fake exception return, go back to the caller */
lr r0, [_ARC_V2_STATUS32]
@@ -288,3 +298,20 @@ inc_len:
/* increment length measurement, loop again */
add_s r0, r0, 1
b_s strlen_loop
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking_userspace:
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_save_callee_saved_regs
push_s blink
bl read_timer_end_of_userspace_enter
pop_s blink
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_callee_saved_regs
b return_loc_userspace_enter
#endif

View File

@@ -28,39 +28,39 @@
#include "vector_table.h"
struct vector_table {
uint32_t reset;
uint32_t memory_error;
uint32_t instruction_error;
uint32_t ev_machine_check;
uint32_t ev_tlb_miss_i;
uint32_t ev_tlb_miss_d;
uint32_t ev_prot_v;
uint32_t ev_privilege_v;
uint32_t ev_swi;
uint32_t ev_trap;
uint32_t ev_extension;
uint32_t ev_div_zero;
uint32_t ev_dc_error;
uint32_t ev_maligned;
uint32_t unused_1;
uint32_t unused_2;
u32_t reset;
u32_t memory_error;
u32_t instruction_error;
u32_t ev_machine_check;
u32_t ev_tlb_miss_i;
u32_t ev_tlb_miss_d;
u32_t ev_prot_v;
u32_t ev_privilege_v;
u32_t ev_swi;
u32_t ev_trap;
u32_t ev_extension;
u32_t ev_div_zero;
u32_t ev_dc_error;
u32_t ev_maligned;
u32_t unused_1;
u32_t unused_2;
};
struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
(uint32_t)__reset,
(uint32_t)__memory_error,
(uint32_t)__instruction_error,
(uint32_t)__ev_machine_check,
(uint32_t)__ev_tlb_miss_i,
(uint32_t)__ev_tlb_miss_d,
(uint32_t)__ev_prot_v,
(uint32_t)__ev_privilege_v,
(uint32_t)__ev_swi,
(uint32_t)__ev_trap,
(uint32_t)__ev_extension,
(uint32_t)__ev_div_zero,
(uint32_t)__ev_dc_error,
(uint32_t)__ev_maligned,
(u32_t)__reset,
(u32_t)__memory_error,
(u32_t)__instruction_error,
(u32_t)__ev_machine_check,
(u32_t)__ev_tlb_miss_i,
(u32_t)__ev_tlb_miss_d,
(u32_t)__ev_prot_v,
(u32_t)__ev_privilege_v,
(u32_t)__ev_swi,
(u32_t)__ev_trap,
(u32_t)__ev_extension,
(u32_t)__ev_div_zero,
(u32_t)__ev_dc_error,
(u32_t)__ev_maligned,
0,
0
};
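The table above is nothing more than an ordered list of handler addresses that the linker pins to a fixed location via the .exc_vector_table section. A stripped-down host-side analogue (two invented handlers and an invented section name; real hardware fetches these entries itself instead of calling through the struct):
#include <stdio.h>
/* Two stand-in handlers; on real hardware these would be the reset and
 * memory-error entry points named in the table above.
 */
static void demo_reset(void)        { puts("reset handler"); }
static void demo_memory_error(void) { puts("memory error handler"); }
/* A miniature vector table: an ordered list of handler addresses that the
 * linker places in a dedicated section (the section name here is invented;
 * the real table uses Z_GENERIC_SECTION(.exc_vector_table)).
 */
struct demo_vector_table {
	void (*reset)(void);
	void (*memory_error)(void);
};
static const struct demo_vector_table demo_vt
	__attribute__((section(".demo_vector_table"), used)) = {
	demo_reset,
	demo_memory_error,
};
int main(void)
{
	demo_vt.reset();		/* hardware would branch here on reset */
	demo_vt.memory_error();
	return 0;
}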

View File

@@ -8,4 +8,4 @@
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(_IRQ_VECTOR_TABLE_SECTION_SYMS))
KEEP(*(IRQ_VECTOR_TABLE))

View File

@@ -37,70 +37,70 @@ extern "C" {
#ifdef CONFIG_ARC_HAS_SECURE
struct _irq_stack_frame {
uint32_t lp_end;
uint32_t lp_start;
uint32_t lp_count;
u32_t lp_end;
u32_t lp_start;
u32_t lp_count;
#ifdef CONFIG_CODE_DENSITY
/*
* Currently unsupported. This is where those registers are
* automatically pushed on the stack by the CPU when taking a regular
* IRQ.
*/
uint32_t ei_base;
uint32_t ldi_base;
uint32_t jli_base;
u32_t ei_base;
u32_t ldi_base;
u32_t jli_base;
#endif
uint32_t r0;
uint32_t r1;
uint32_t r2;
uint32_t r3;
uint32_t r4;
uint32_t r5;
uint32_t r6;
uint32_t r7;
uint32_t r8;
uint32_t r9;
uint32_t r10;
uint32_t r11;
uint32_t r12;
uint32_t r13;
uint32_t blink;
uint32_t pc;
uint32_t sec_stat;
uint32_t status32;
u32_t r0;
u32_t r1;
u32_t r2;
u32_t r3;
u32_t r4;
u32_t r5;
u32_t r6;
u32_t r7;
u32_t r8;
u32_t r9;
u32_t r10;
u32_t r11;
u32_t r12;
u32_t r13;
u32_t blink;
u32_t pc;
u32_t sec_stat;
u32_t status32;
};
#else
struct _irq_stack_frame {
uint32_t r0;
uint32_t r1;
uint32_t r2;
uint32_t r3;
uint32_t r4;
uint32_t r5;
uint32_t r6;
uint32_t r7;
uint32_t r8;
uint32_t r9;
uint32_t r10;
uint32_t r11;
uint32_t r12;
uint32_t r13;
uint32_t blink;
uint32_t lp_end;
uint32_t lp_start;
uint32_t lp_count;
u32_t r0;
u32_t r1;
u32_t r2;
u32_t r3;
u32_t r4;
u32_t r5;
u32_t r6;
u32_t r7;
u32_t r8;
u32_t r9;
u32_t r10;
u32_t r11;
u32_t r12;
u32_t r13;
u32_t blink;
u32_t lp_end;
u32_t lp_start;
u32_t lp_count;
#ifdef CONFIG_CODE_DENSITY
/*
* Currently unsupported. This is where those registers are
* automatically pushed on the stack by the CPU when taking a regular
* IRQ.
*/
uint32_t ei_base;
uint32_t ldi_base;
uint32_t jli_base;
u32_t ei_base;
u32_t ldi_base;
u32_t jli_base;
#endif
uint32_t pc;
uint32_t status32;
u32_t pc;
u32_t status32;
};
#endif
@@ -110,47 +110,47 @@ typedef struct _irq_stack_frame _isf_t;
/* callee-saved registers pushed on the stack, not in k_thread */
struct _callee_saved_stack {
uint32_t r13;
uint32_t r14;
uint32_t r15;
uint32_t r16;
uint32_t r17;
uint32_t r18;
uint32_t r19;
uint32_t r20;
uint32_t r21;
uint32_t r22;
uint32_t r23;
uint32_t r24;
uint32_t r25;
uint32_t r26;
uint32_t fp; /* r27 */
u32_t r13;
u32_t r14;
u32_t r15;
u32_t r16;
u32_t r17;
u32_t r18;
u32_t r19;
u32_t r20;
u32_t r21;
u32_t r22;
u32_t r23;
u32_t r24;
u32_t r25;
u32_t r26;
u32_t fp; /* r27 */
#ifdef CONFIG_USERSPACE
#ifdef CONFIG_ARC_HAS_SECURE
uint32_t user_sp;
uint32_t kernel_sp;
u32_t user_sp;
u32_t kernel_sp;
#else
uint32_t user_sp;
u32_t user_sp;
#endif
#endif
/* r28 is the stack pointer and saved separately */
/* r29 is ILINK and does not need to be saved */
uint32_t r30;
u32_t r30;
#ifdef CONFIG_ARC_HAS_ACCL_REGS
uint32_t r58;
uint32_t r59;
u32_t r58;
u32_t r59;
#endif
#ifdef CONFIG_FPU_SHARING
uint32_t fpu_status;
uint32_t fpu_ctrl;
#ifdef CONFIG_FP_SHARING
u32_t fpu_status;
u32_t fpu_ctrl;
#ifdef CONFIG_FP_FPU_DA
uint32_t dpfp2h;
uint32_t dpfp2l;
uint32_t dpfp1h;
uint32_t dpfp1l;
u32_t dpfp2h;
u32_t dpfp2l;
u32_t dpfp1h;
u32_t dpfp1l;
#endif
#endif
@@ -168,4 +168,12 @@ typedef struct _callee_saved_stack _callee_saved_stack_t;
#endif /* _ASMLANGUAGE */
/* stacks */
#define STACK_ALIGN_SIZE 4
#define STACK_ROUND_UP(x) ROUND_UP(x, STACK_ALIGN_SIZE)
#define STACK_ROUND_DOWN(x) ROUND_DOWN(x, STACK_ALIGN_SIZE)
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_DATA_H_ */
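STACK_ROUND_UP()/STACK_ROUND_DOWN() simply align a value to STACK_ALIGN_SIZE (4 bytes here). A quick host-side sketch with bit-mask stand-ins for Zephyr's ROUND_UP()/ROUND_DOWN() (valid only because the alignment is a power of two):
#include <stdio.h>
/* Bit-mask stand-ins for Zephyr's ROUND_UP()/ROUND_DOWN(); they require the
 * alignment to be a power of two, which holds for STACK_ALIGN_SIZE (4).
 */
#define DEMO_ROUND_UP(x, align)   (((x) + (align) - 1u) & ~((align) - 1u))
#define DEMO_ROUND_DOWN(x, align) ((x) & ~((align) - 1u))
int main(void)
{
	unsigned int sp = 0x20001003u;
	printf("STACK_ROUND_UP(0x%08x)   = 0x%08x\n", sp, DEMO_ROUND_UP(sp, 4u));
	printf("STACK_ROUND_DOWN(0x%08x) = 0x%08x\n", sp, DEMO_ROUND_DOWN(sp, 4u));
	return 0;
}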

View File

@@ -36,6 +36,8 @@ extern "C" {
static ALWAYS_INLINE void arch_kernel_init(void)
{
z_irq_setup();
_current_cpu->irq_stack =
Z_THREAD_STACK_BUFFER(_interrupt_stack) + CONFIG_ISR_STACK_SIZE;
}
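The irq_stack assignment above simply records the top of the interrupt stack buffer, since ARC stacks grow downwards: the ISR entry path loads this value when it switches away from the interrupted thread's stack. A trivial host-side sketch with a made-up size (DEMO_ISR_STACK_SIZE stands in for CONFIG_ISR_STACK_SIZE):
#include <stdio.h>
#define DEMO_ISR_STACK_SIZE 1024u	/* stands in for CONFIG_ISR_STACK_SIZE */
static char demo_interrupt_stack[DEMO_ISR_STACK_SIZE];	/* _interrupt_stack stand-in */
int main(void)
{
	/* Mirrors the irq_stack assignment above: on a descending stack the
	 * value to load on interrupt entry is the buffer start plus its size,
	 * i.e. the highest address of the buffer.
	 */
	char *irq_stack = demo_interrupt_stack + DEMO_ISR_STACK_SIZE;
	printf("IRQ stack spans %p..%p\n",
	       (void *)demo_interrupt_stack, (void *)irq_stack);
	return 0;
}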
@@ -48,7 +50,7 @@ static ALWAYS_INLINE void arch_kernel_init(void)
*/
static ALWAYS_INLINE int Z_INTERRUPT_CAUSE(void)
{
uint32_t irq_num = z_arc_v2_aux_reg_read(_ARC_V2_ICAUSE);
u32_t irq_num = z_arc_v2_aux_reg_read(_ARC_V2_ICAUSE);
return irq_num;
}
@@ -62,20 +64,14 @@ extern void z_thread_entry_wrapper(void);
extern void z_user_thread_entry_wrapper(void);
extern void z_arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
void *p2, void *p3, uint32_t stack, uint32_t size,
struct k_thread *thread);
void *p2, void *p3, u32_t stack, u32_t size);
extern void arch_switch(void *switch_to, void **switched_from);
extern void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
extern void arch_sched_ipi(void);
extern void z_arc_switch(void *switch_to, void **switched_from);
static inline void arch_switch(void *switch_to, void **switched_from)
{
z_arc_switch(switch_to, switched_from);
}
#ifdef __cplusplus
}
#endif

View File

@@ -32,9 +32,6 @@
#define _thread_offset_to_u_stack_top \
(___thread_t_arch_OFFSET + ___thread_arch_t_u_stack_top_OFFSET)
#define _thread_offset_to_priv_stack_start \
(___thread_t_arch_OFFSET + ___thread_arch_t_priv_stack_start_OFFSET)
#define _thread_offset_to_sp \
(___thread_t_callee_saved_OFFSET + ___callee_saved_t_sp_OFFSET)

View File

@@ -13,11 +13,10 @@
#include <offsets_short.h>
#include <toolchain.h>
#include <arch/cpu.h>
#include <arch/arc/tool-compat.h>
#ifdef _ASMLANGUAGE
/* save callee regs of current thread in r2 */
/* entering this macro, current is in r2 */
.macro _save_callee_saved_regs
sub_s sp, sp, ___callee_saved_stack_t_SIZEOF
@@ -64,7 +63,7 @@
st r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 1f
@@ -90,7 +89,7 @@
st sp, [r2, _thread_offset_to_sp]
.endm
/* load the callee regs of thread (in r2)*/
/* entering this macro, current is in r2 */
.macro _load_callee_saved_regs
/* restore stack pointer from struct k_thread */
ld sp, [r2, _thread_offset_to_sp]
@@ -100,7 +99,7 @@
ld r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 2f
@@ -163,7 +162,6 @@
.endm
/* discard callee regs */
.macro _discard_callee_saved_regs
add_s sp, sp, ___callee_saved_stack_t_SIZEOF
.endm
@@ -267,7 +265,7 @@
.endm
/*
* To use this macro, r2 should have the value of thread struct pointer to
* To use this macor, r2 should have the value of thread struct pointer to
* _kernel.current. r3 is a scratch reg.
*/
.macro _load_stack_check_regs
@@ -299,225 +297,84 @@
/* check and increase the interrupt nest counter
* after increase, check whether nest counter == 1
* the result will be EQ bit of status32
* two temp regs are needed
*/
.macro _check_and_inc_int_nest_counter, reg1, reg2
.macro _check_and_inc_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
_get_cpu_id MACRO_ARG(reg1)
ld.as MACRO_ARG(reg1), [_curr_cpu, MACRO_ARG(reg1)]
ld MACRO_ARG(reg2), [MACRO_ARG(reg1), ___cpu_t_nested_OFFSET]
_get_cpu_id \reg1
ld.as \reg1, [@_curr_cpu, \reg1]
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov MACRO_ARG(reg1), _kernel
ld MACRO_ARG(reg2), [MACRO_ARG(reg1), _kernel_offset_to_nested]
mov \reg1, _kernel
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
add MACRO_ARG(reg2), MACRO_ARG(reg2), 1
add \reg2, \reg2, 1
#ifdef CONFIG_SMP
st MACRO_ARG(reg2), [MACRO_ARG(reg1), ___cpu_t_nested_OFFSET]
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st MACRO_ARG(reg2), [MACRO_ARG(reg1), _kernel_offset_to_nested]
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
cmp MACRO_ARG(reg2), 1
cmp \reg2, 1
.endm
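Together with the matching decrement macro just below, the nest counter implements a simple protocol: bump the per-CPU counter on interrupt entry, and if it just became 1 this is the outermost interrupt, which is exactly the case where the handler must switch onto the dedicated IRQ stack. A C sketch of the same logic (demo_* names only, single-CPU case):
#include <stdbool.h>
#include <stdio.h>
static unsigned int demo_nested;	/* plays the role of the per-CPU "nested" count */
/* Mirrors _check_and_inc_int_nest_counter: bump the counter and report
 * whether this is the outermost interrupt (counter just became 1), the case
 * where the handler has to switch onto the dedicated IRQ stack.
 */
static bool demo_irq_enter(void)
{
	return ++demo_nested == 1u;
}
/* Mirrors _dec_int_nest_counter on the way out. */
static void demo_irq_exit(void)
{
	--demo_nested;
}
int main(void)
{
	printf("outermost IRQ -> switch to IRQ stack: %d\n", demo_irq_enter());	/* 1 */
	printf("nested IRQ    -> switch to IRQ stack: %d\n", demo_irq_enter());	/* 0 */
	demo_irq_exit();
	demo_irq_exit();
	return 0;
}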
/* decrease interrupt stack nest counter
* the counter > 0, interrupt stack is used, or
* not used
*/
.macro _dec_int_nest_counter, reg1, reg2
/* decrease interrupt nest counter */
.macro _dec_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
_get_cpu_id MACRO_ARG(reg1)
ld.as MACRO_ARG(reg1), [_curr_cpu, MACRO_ARG(reg1)]
ld MACRO_ARG(reg2), [MACRO_ARG(reg1), ___cpu_t_nested_OFFSET]
_get_cpu_id \reg1
ld.as \reg1, [@_curr_cpu, \reg1]
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov MACRO_ARG(reg1), _kernel
ld MACRO_ARG(reg2), [MACRO_ARG(reg1), _kernel_offset_to_nested]
mov \reg1, _kernel
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
sub MACRO_ARG(reg2), MACRO_ARG(reg2), 1
sub \reg2, \reg2, 1
#ifdef CONFIG_SMP
st MACRO_ARG(reg2), [MACRO_ARG(reg1), ___cpu_t_nested_OFFSET]
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st MACRO_ARG(reg2), [MACRO_ARG(reg1), _kernel_offset_to_nested]
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
.endm
/* If multiple bits in IRQ_ACT are set, i.e. last set bit != first set bit, it's
* a nested interrupt. The result will be in the EQ bit of status32
* need two temp reg to do this
*/
.macro _check_nest_int_by_irq_act, reg1, reg2
lr MACRO_ARG(reg1), [_ARC_V2_AUX_IRQ_ACT]
.macro _check_nest_int_by_irq_act reg1, reg2
lr \reg1, [_ARC_V2_AUX_IRQ_ACT]
#ifdef CONFIG_ARC_SECURE_FIRMWARE
and MACRO_ARG(reg1), MACRO_ARG(reg1), ((1 << ARC_N_IRQ_START_LEVEL) - 1)
and \reg1, \reg1, ((1 << ARC_N_IRQ_START_LEVEL) - 1)
#else
and MACRO_ARG(reg1), MACRO_ARG(reg1), 0xffff
and \reg1, \reg1, 0xffff
#endif
ffs MACRO_ARG(reg2), MACRO_ARG(reg1)
fls MACRO_ARG(reg1), MACRO_ARG(reg1)
cmp MACRO_ARG(reg1), MACRO_ARG(reg2)
ffs \reg2, \reg1
fls \reg1, \reg1
cmp \reg1, \reg2
.endm
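The ffs/fls trick above is worth spelling out: after masking IRQ_ACT down to the active-priority bits, the positions of the lowest and highest set bits are compared; if they differ, more than one priority level is active, i.e. the CPU is handling a nested interrupt. A host-side sketch using compiler builtins in place of the ARC instructions (the 0xffff mask matches the non-secure branch of the macro):
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
/* Highest set bit position (1-based), a stand-in for the ARC "fls" insn. */
static int demo_fls(uint32_t v)
{
	return v ? 32 - __builtin_clz(v) : 0;
}
/* Mirrors _check_nest_int_by_irq_act: mask the IRQ_ACT "active" bits, then
 * compare the positions of the first and last set bits.  If they differ,
 * more than one interrupt priority level is active, i.e. we are nested.
 */
static bool demo_in_nested_irq(uint32_t irq_act)
{
	uint32_t active = irq_act & 0xffffu;
	return __builtin_ffs((int)active) != demo_fls(active);
}
int main(void)
{
	printf("one level active  -> nested = %d\n", demo_in_nested_irq(0x0004u)); /* 0 */
	printf("two levels active -> nested = %d\n", demo_in_nested_irq(0x0005u)); /* 1 */
	return 0;
}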
/* macro to get id of current cpu
* the result will be in reg (a reg)
*/
.macro _get_cpu_id, reg
lr MACRO_ARG(reg), [_ARC_V2_IDENTITY]
xbfu MACRO_ARG(reg), MACRO_ARG(reg), 0xe8
.macro _get_cpu_id reg
lr \reg, [_ARC_V2_IDENTITY]
xbfu \reg, \reg, 0xe8
.endm
/* macro to get the interrupt stack of current cpu
* the result will be in irq_sp (a reg)
*/
.macro _get_curr_cpu_irq_stack, irq_sp
.macro _get_curr_cpu_irq_stack irq_sp
#ifdef CONFIG_SMP
_get_cpu_id MACRO_ARG(irq_sp)
ld.as MACRO_ARG(irq_sp), [_curr_cpu, MACRO_ARG(irq_sp)]
ld MACRO_ARG(irq_sp), [MACRO_ARG(irq_sp), ___cpu_t_irq_stack_OFFSET]
_get_cpu_id \irq_sp
ld.as \irq_sp, [@_curr_cpu, \irq_sp]
ld \irq_sp, [\irq_sp, ___cpu_t_irq_stack_OFFSET]
#else
mov MACRO_ARG(irq_sp), _kernel
ld MACRO_ARG(irq_sp), [MACRO_ARG(irq_sp), _kernel_offset_to_irq_stack]
mov \irq_sp, _kernel
ld \irq_sp, [\irq_sp, _kernel_offset_to_irq_stack]
#endif
.endm
/* macro to push aux reg through reg */
.macro PUSHAX, reg, aux
lr MACRO_ARG(reg), [MACRO_ARG(aux)]
st.a MACRO_ARG(reg), [sp, -4]
.macro PUSHAX reg aux
lr \reg, [\aux]
st.a \reg, [sp, -4]
.endm
/* macro to pop aux reg through reg */
.macro POPAX, reg, aux
ld.ab MACRO_ARG(reg), [sp, 4]
sr MACRO_ARG(reg), [MACRO_ARG(aux)]
.endm
/* macro to store old thread call regs */
.macro _store_old_thread_callee_regs
_save_callee_saved_regs
#ifdef CONFIG_SMP
/* save old thread into switch handle which is required by
* wait_for_switch
*/
st r2, [r2, ___thread_t_switch_handle_OFFSET]
#endif
.endm
/* macro to store old thread call regs in interrupt*/
.macro _irq_store_old_thread_callee_regs
#if defined(CONFIG_USERSPACE)
/*
* when USERSPACE is enabled, according to ARCv2 ISA, SP will be switched
* if interrupt comes out in user mode, and will be recorded in bit 31
* (U bit) of IRQ_ACT. when interrupt exits, SP will be switched back
* according to U bit.
*
* need to remember the user/kernel status of interrupted thread, will be
* restored when thread switched back
*
*/
lr r1, [_ARC_V2_AUX_IRQ_ACT]
and r3, r1, 0x80000000
push_s r3
bclr r1, r1, 31
sr r1, [_ARC_V2_AUX_IRQ_ACT]
#endif
_store_old_thread_callee_regs
.endm
/* macro to load new thread callee regs */
.macro _load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
bl configure_mpu_thread
pop_s r2
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
.endm
/* when switch to thread caused by coop, some status regs need to set */
.macro _set_misc_regs_irq_switch_from_coop
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* must return to secure mode, so set IRM bit to 1 */
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_IRM_BIT
sflag r0
#endif
.endm
/* when switch to thread caused by irq, some status regs need to set */
.macro _set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
.endm
/* macro to get next switch handle in assembly */
.macro _get_next_switch_handle
push_s r2
mov r0, sp
bl z_arch_get_next_switch_handle
pop_s r2
.endm
/* macro to disable stack checking in assembly, need a GPR
* to do this
*/
.macro _disable_stack_checking, reg
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr MACRO_ARG(reg), [_ARC_V2_SEC_STAT]
bclr MACRO_ARG(reg), MACRO_ARG(reg), _ARC_V2_SEC_STAT_SSC_BIT
sflag MACRO_ARG(reg)
#else
lr MACRO_ARG(reg), [_ARC_V2_STATUS32]
bclr MACRO_ARG(reg), MACRO_ARG(reg), _ARC_V2_STATUS32_SC_BIT
kflag MACRO_ARG(reg)
#endif
#endif
.endm
/* macro to enable stack checking in assembly, need a GPR
* to do this
*/
.macro _enable_stack_checking, reg
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr MACRO_ARG(reg), [_ARC_V2_SEC_STAT]
bset MACRO_ARG(reg), MACRO_ARG(reg), _ARC_V2_SEC_STAT_SSC_BIT
sflag MACRO_ARG(reg)
#else
lr MACRO_ARG(reg), [_ARC_V2_STATUS32]
bset MACRO_ARG(reg), MACRO_ARG(reg), _ARC_V2_STATUS32_SC_BIT
kflag MACRO_ARG(reg)
#endif
#endif
.macro POPAX reg aux
ld.ab \reg, [sp, 4]
sr \reg, [\aux]
.endm
#endif /* _ASMLANGUAGE */

View File

@@ -36,11 +36,11 @@ extern "C" {
*/
static ALWAYS_INLINE void z_icache_setup(void)
{
uint32_t icache_config = (
u32_t icache_config = (
IC_CACHE_DIRECT | /* direct mapping (one-way assoc.) */
IC_CACHE_ENABLE /* i-cache enabled */
);
uint32_t val;
u32_t val;
val = z_arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val &= 0xff;

View File

@@ -46,6 +46,9 @@ extern "C" {
#define _ARC_V2_INIT_IRQ_LOCK_KEY (0x10 | _ARC_V2_DEF_IRQ_LEVEL)
#ifndef _ASMLANGUAGE
extern K_THREAD_STACK_DEFINE(_interrupt_stack, CONFIG_ISR_STACK_SIZE);
/*
* z_irq_setup
*
@@ -53,7 +56,7 @@ extern "C" {
*/
static ALWAYS_INLINE void z_irq_setup(void)
{
uint32_t aux_irq_ctrl_value = (
u32_t aux_irq_ctrl_value = (
_ARC_V2_AUX_IRQ_CTRL_LOOP_REGS | /* save lp_xxx registers */
#ifdef CONFIG_CODE_DENSITY
_ARC_V2_AUX_IRQ_CTRL_LP | /* save code density registers */

View File

@@ -1,11 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_ARM64)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littleaarch64)
add_subdirectory(core/aarch64)
include(aarch64.cmake)
else()
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearm)
add_subdirectory(core/aarch32)
include(aarch32.cmake)
endif()

View File

@@ -6,6 +6,9 @@
menu "ARM Options"
depends on ARM
rsource "core/aarch32/Kconfig"
rsource "core/aarch64/Kconfig"
config ARCH
default "arm"
@@ -13,37 +16,4 @@ config ARM64
bool
select 64BIT
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config ARM_CUSTOM_INTERRUPT_CONTROLLER
bool
depends on !CPU_CORTEX_M
help
This option indicates that the ARM CPU is connected to a custom (i.e.
non-GIC) interrupt controller.
A number of Cortex-A and Cortex-R cores (Cortex-A5, Cortex-R4/5, ...)
allow interfacing to a custom external interrupt controller and this
option must be selected when such cores are connected to an interrupt
controller that is not the ARM Generic Interrupt Controller (GIC).
When this option is selected, the architecture interrupt control
functions are mapped to the SoC interrupt control interface, which is
implemented at the SoC level.
N.B. This option is only applicable to the Cortex-A and Cortex-R
family cores. The Cortex-M family cores are always equipped with
the ARM Nested Vectored Interrupt Controller (NVIC).
if !ARM64
rsource "core/aarch32/Kconfig"
endif
if ARM64
rsource "core/aarch64/Kconfig"
endif
endmenu

arch/arm/aarch32.cmake (new file, 28 lines)
View File

@@ -0,0 +1,28 @@
# SPDX-License-Identifier: Apache-2.0
set(ARCH_FOR_cortex-m0 armv6s-m )
set(ARCH_FOR_cortex-m0plus armv6s-m )
set(ARCH_FOR_cortex-m3 armv7-m )
set(ARCH_FOR_cortex-m4 armv7e-m )
set(ARCH_FOR_cortex-m23 armv8-m.base )
set(ARCH_FOR_cortex-m33 armv8-m.main+dsp)
set(ARCH_FOR_cortex-m33+nodsp armv8-m.main )
set(ARCH_FOR_cortex-r4 armv7-r )
if(ARCH_FOR_${GCC_M_CPU})
set(ARCH_FLAG -march=${ARCH_FOR_${GCC_M_CPU}})
endif()
zephyr_compile_options(
-mabi=aapcs
${ARCH_FLAG}
)
zephyr_ld_options(
-mabi=aapcs
${ARCH_FLAG}
)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-little${ARCH}) # BFD format
add_subdirectory(core/aarch32)

arch/arm/aarch64.cmake (new file, 5 lines)
View File

@@ -0,0 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littleaarch64) # BFD format
add_subdirectory(core/aarch64)

View File

@@ -3,16 +3,17 @@
zephyr_library()
if (CONFIG_COVERAGE)
zephyr_compile_options($<TARGET_PROPERTY:compiler,coverage>)
zephyr_link_libraries($<TARGET_PROPERTY:linker,coverage>)
toolchain_cc_coverage()
endif ()
zephyr_library_sources(
exc_exit.S
swap.c
swap_helper.S
irq_manage.c
thread.c
cpu_idle.S
fault_s.S
fatal.c
nmi.c
nmi_on_reset.S
@@ -22,8 +23,7 @@ zephyr_library_sources(
zephyr_library_sources_ifdef(CONFIG_GEN_SW_ISR_TABLE isr_wrapper.S)
zephyr_library_sources_ifdef(CONFIG_CPLUSPLUS __aeabi_atexit.c)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_SW_VECTOR_RELAY irq_relay.S)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE ../common/tls.c)
zephyr_library_sources_ifdef(CONFIG_CPU_CORTEX_M0 irq_relay.S)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M cortex_m)
@@ -32,6 +32,6 @@ add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M_HAS_CMSE cortex_m/cmse)
add_subdirectory_ifdef(CONFIG_ARM_SECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_ARM_NONSECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_a_r)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_r)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)

View File

@@ -3,6 +3,13 @@
# Copyright (c) 2015 Wind River Systems, Inc.
# SPDX-License-Identifier: Apache-2.0
if !ARM64
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_M
bool
select CPU_CORTEX
@@ -17,10 +24,6 @@ config CPU_CORTEX_M
select ARCH_HAS_RAMFUNC_SUPPORT
select ARCH_HAS_NESTED_EXCEPTION_DETECTION
select SWAP_NONATOMIC
select ARCH_HAS_EXTRA_EXCEPTION_INFO
select ARCH_HAS_TIMING_FUNCTIONS if CPU_CORTEX_M_HAS_DWT
select ARCH_SUPPORTS_ARCH_HW_INIT
imply XIP
help
This option signifies the use of a CPU of the Cortex-M family.
@@ -73,38 +76,6 @@ config ISA_ARM
processor start-up. Much of its functionality was subsumed into T32 with
the introduction of Thumb-2 technology.
config ASSEMBLER_ISA_THUMB2
bool
default y if ISA_THUMB2 && !ISA_ARM
depends on !ISA_ARM
help
This helper symbol specifies the default target instruction set for
the assembler.
When only the Thumb-2 ISA is supported (i.e. on Cortex-M cores), the
assembler must use the Thumb-2 instruction set.
When both the Thumb-2 and ARM ISAs are supported (i.e. on Cortex-A
and Cortex-R cores), the assembler must use the ARM instruction set
because the architecture assembly code makes use of the ARM
instructions.
config COMPILER_ISA_THUMB2
bool "Compile C/C++ functions using Thumb-2 instruction set"
depends on ISA_THUMB2
default y
help
This option configures the compiler to compile all C/C++ functions
using the Thumb-2 instruction set.
N.B. The scope of this symbol is not necessarily limited to the C and
C++ languages; in fact, this symbol refers to all forms of
"compiled" code.
When an additional natively-compiled language support is added
in the future, this symbol shall also specify the Thumb-2
instruction set for that language.
config NUM_IRQS
int
@@ -214,10 +185,74 @@ config ARM_NONSECURE_FIRMWARE
resources of the Cortex-M MCU, and, therefore, it shall avoid
accessing them.
menu "ARM TrustZone Options"
depends on ARM_SECURE_FIRMWARE || ARM_NONSECURE_FIRMWARE
comment "Secure firmware"
depends on ARM_SECURE_FIRMWARE
comment "Non-secure firmware"
depends on !ARM_SECURE_FIRMWARE
config ARM_SECURE_BUSFAULT_HARDFAULT_NMI
bool "BusFault, HardFault, and NMI target Secure state"
depends on ARM_SECURE_FIRMWARE
help
Force NMI, HardFault, and BusFault (in Mainline ARMv8-M)
exceptions as Secure exceptions.
config ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
bool "Secure Firmware has Secure Entry functions"
depends on ARM_SECURE_FIRMWARE
help
Option indicates that ARM Secure Firmware contains
Secure Entry functions that may be called from
Non-Secure state. Secure Entry functions must be
located in Non-Secure Callable memory regions.
config ARM_NSC_REGION_BASE_ADDRESS
hex "ARM Non-Secure Callable Region base address"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
default 0
help
Start address of Non-Secure Callable section.
Notes:
- The default value (i.e. when the user does not configure
the option explicitly) instructs the linker script to
place the Non-Secure Callable section, automatically,
inside the .text area.
- Certain requirements/restrictions may apply regarding
the size and the alignment of the starting address for
a Non-Secure Callable section, depending on the available
security attribution unit (SAU or IDAU) for a given SOC.
config ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
bool "Non-Secure Firmware uses Secure Entry functions"
depends on ARM_NONSECURE_FIRMWARE
help
Option indicates that ARM Non-Secure Firmware uses Secure
Entry functions provided by the Secure Firmware. The Secure
Firmware must be configured to provide these functions.
config ARM_ENTRY_VENEERS_LIB_NAME
string "Entry Veneers symbol file"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS \
|| ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
default "libentryveneers.a"
help
Library file to find the symbol table for the entry veneers.
The library will typically come from building the Secure
Firmware that contains secure entry functions, and allows
the Non-Secure Firmware to call into the Secure Firmware.
endmenu
choice
prompt "Floating point ABI"
default FP_HARDABI
depends on FPU
depends on FLOAT
config FP_HARDABI
bool "Floating point Hard ABI"
@@ -235,4 +270,10 @@ config FP_SOFTABI
endchoice
rsource "cortex_m/Kconfig"
rsource "cortex_a_r/Kconfig"
rsource "cortex_r/Kconfig"
rsource "cortex_m/mpu/Kconfig"
rsource "cortex_m/tz/Kconfig"
endif # !ARM64

View File

@@ -1,15 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources(
vector_table.S
reset.S
exc.S
exc_exit.S
fault.c
irq_init.c
reboot.c
stacks.c
tcm.c
)

View File

@@ -1,101 +0,0 @@
# ARM Cortex-R platform configuration options
# Copyright (c) 2018 Marvell
# Copyright (c) 2018 Lexmark International, Inc.
# SPDX-License-Identifier: Apache-2.0
# NOTE: We have the specific core implementations first and outside of the
# if CPU_CORTEX_R block so that SoCs can select which core they are using
# without having to select all the options related to that core. Everything
# else is captured inside the if CPU_CORTEX_R block so they are not exposed
# if one selects a different ARM Cortex Family (Cortex-A or Cortex-M)
config CPU_CORTEX_R4
bool
select CPU_CORTEX_R
select ARMV7_R
select ARMV7_R_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-R4 CPU
config CPU_CORTEX_R5
bool
select CPU_CORTEX_R
select ARMV7_R
select ARMV7_R_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-R5 CPU
config CPU_CORTEX_R7
bool
select CPU_CORTEX_R
select ARMV7_R
select ARMV7_R_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-R7 CPU
if CPU_CORTEX_R
config ARMV7_R
bool
select ATOMIC_OPERATIONS_BUILTIN
select ISA_ARM
select ISA_THUMB2
help
This option signifies the use of an ARMv7-R processor
implementation.
From https://developer.arm.com/products/architecture/cpu-architecture/r-profile:
The Armv7-R architecture implements a traditional Arm architecture with
multiple modes and supports a Protected Memory System Architecture
(PMSA) based on a Memory Protection Unit (MPU). It supports the Arm (32)
and Thumb (T32) instruction sets.
config ARMV7_R_FP
bool
depends on ARMV7_R
help
This option signifies the use of an ARMv7-R processor
implementation supporting the Floating-Point Extension.
config ARMV7_EXCEPTION_STACK_SIZE
int "Undefined Instruction and Abort stack size (in bytes)"
default 256
help
This option specifies the size of the stack used by the undefined
instruction and data abort exception handlers.
config ARMV7_FIQ_STACK_SIZE
int "FIQ stack size (in bytes)"
default 256
help
This option specifies the size of the stack used by the FIQ handler.
config ARMV7_SVC_STACK_SIZE
int "SVC stack size (in bytes)"
default 512
help
This option specifies the size of the stack used by the SVC handler.
config ARMV7_SYS_STACK_SIZE
int "SYS stack size (in bytes)"
default 1024
help
This option specifies the size of the stack used by the system mode.
config RUNTIME_NMI
default y
config GEN_ISR_TABLES
default y
config GEN_IRQ_VECTOR_TABLE
default n
config DISABLE_TCM_ECC
bool "Disable ECC on TCM"
help
This option disables ECC checks on Tightly Coupled Memory.
endif # CPU_CORTEX_R

View File

@@ -1,148 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Exception handlers for ARM Cortex-A and Cortex-R
*
* This file implements the exception handlers (undefined instruction, prefetch
* abort and data abort) for ARM Cortex-A and Cortex-R processors.
*
* All exception handlers save the exception stack frame into the exception
* mode stack rather than the system mode stack, in order to ensure predictable
* exception behaviour (i.e. an arbitrary thread stack overflow cannot break
* exception handling and thereby cause total system failure).
*
* In case the exception is due to a fatal (unrecoverable) fault, the fault
* handler is responsible for invoking the architecture fatal exception handler
* (z_arm_fatal_error) which invokes the kernel fatal exception handler
* (z_fatal_error) that either locks up the system or aborts the current thread
* depending on the application exception handler implementation.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_fault_undef_instruction)
GTEXT(z_arm_fault_prefetch)
GTEXT(z_arm_fault_data)
GTEXT(z_arm_undef_instruction)
GTEXT(z_arm_prefetch_abort)
GTEXT(z_arm_data_abort)
/**
* @brief Undefined instruction exception handler
*
* An undefined instruction (UNDEF) exception is generated when an undefined
* instruction, or a VFP instruction when the VFP is not enabled, is
* encountered.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_undef_instruction)
/*
* The undefined instruction address is offset by 2 if the previous
* mode is Thumb; otherwise, it is offset by 4.
*/
push {r0}
mrs r0, spsr
tst r0, #T_BIT
subeq lr, #4 /* ARM (!T_BIT) */
subne lr, #2 /* Thumb (T_BIT) */
pop {r0}
/*
* Store r0-r3, r12, lr, lr_und and spsr_und into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_UND!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_undef_instruction
/* Exit exception */
b z_arm_exc_exit
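The lr adjustment at the top of this handler is the only subtle part: on an undefined-instruction exception the banked return address points past the offending instruction, so it must be backed up by 2 in Thumb state or 4 in ARM state, with the state taken from the saved SPSR. A small C sketch of that computation (assuming the conventional CPSR layout where the T flag is bit 5):
#include <stdint.h>
#include <stdio.h>
#define DEMO_T_BIT (1u << 5)	/* Thumb state flag in the CPSR/SPSR */
/* Mirrors the entry of z_arm_undef_instruction above: back up the banked
 * return address by 2 for Thumb state or 4 for ARM state to recover the
 * address of the undefined instruction itself.
 */
static uint32_t demo_undef_fault_address(uint32_t lr_und, uint32_t spsr)
{
	return lr_und - ((spsr & DEMO_T_BIT) ? 2u : 4u);
}
int main(void)
{
	printf("ARM state:   fault at 0x%08x\n",
	       (unsigned int)demo_undef_fault_address(0x1004u, 0x0u));
	printf("Thumb state: fault at 0x%08x\n",
	       (unsigned int)demo_undef_fault_address(0x1004u, DEMO_T_BIT));
	return 0;
}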
/**
* @brief Prefetch abort exception handler
*
* A prefetch abort (PABT) exception is generated when the processor marks the
* prefetched instruction as invalid and the instruction is executed.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_prefetch_abort)
/*
* The faulting instruction address is always offset by 4 for the
* prefetch abort exceptions.
*/
sub lr, #4
/*
* Store r0-r3, r12, lr, lr_abt and spsr_abt into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_ABT!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_prefetch
/* Exit exception */
b z_arm_exc_exit
/**
* @brief Data abort exception handler
*
* A data abort (DABT) exception is generated when an error occurs on a data
* memory access. This exception can be either synchronous or asynchronous,
* depending on the type of fault that caused it.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_data_abort)
/*
* The faulting instruction address is always offset by 8 for the data
* abort exceptions.
*/
sub lr, #8
/*
* Store r0-r3, r12, lr, lr_abt and spsr_abt into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_ABT!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_data
/* Exit exception */
b z_arm_exc_exit

View File

@@ -1,174 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
* Copyright (c) 2016 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-A and Cortex-R exception/interrupt exit API
*
* Provides functions for performing kernel handling when exiting exceptions,
* or interrupts that are installed directly in the vector table (i.e. that are
* not wrapped around by _isr_wrapper()).
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_exc_exit)
GTEXT(z_arm_int_exit)
GTEXT(z_arm_pendsv)
GDATA(_kernel)
/**
* @brief Kernel housekeeping when exiting interrupt handler installed directly
* in the vector table
*
* Kernel allows installing interrupt handlers (ISRs) directly into the vector
* table to get the lowest interrupt latency possible. This allows the ISR to
* be invoked directly without going through a software interrupt table.
* However, upon exiting the ISR, some kernel work must still be performed,
* namely possible context switching. While ISRs connected in the software
* interrupt table do this automatically via a wrapper, ISRs connected directly
* in the vector table must invoke z_arm_int_exit() as the *very last* action
* before returning.
*
* e.g.
*
* void myISR(void)
* {
* printk("in %s\n", __FUNCTION__);
* doStuff();
* z_arm_int_exit();
* }
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_int_exit)
#ifdef CONFIG_PREEMPT_ENABLED
/* Do not context switch if exiting a nested interrupt */
ldr r3, =_kernel
ldr r0, [r3, #_kernel_offset_to_nested]
cmp r0, #1
bhi __EXIT_INT
ldr r1, [r3, #_kernel_offset_to_current]
ldr r0, [r3, #_kernel_offset_to_ready_q_cache]
cmp r0, r1
blne z_arm_pendsv
__EXIT_INT:
#endif /* CONFIG_PREEMPT_ENABLED */
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif /* CONFIG_STACK_SENTINEL */
/* Disable nested interrupts while exiting */
cpsid i
/* Decrement interrupt nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Restore previous stack pointer */
pop {r2, r3}
add sp, sp, r3
/*
* Restore lr_svc stored into the SVC mode stack by the mode entry
* function. This ensures that the return address of the interrupted
* context is preserved in case of interrupt nesting.
*/
pop {lr}
/*
* Restore r0-r3, r12 and lr_irq stored into the process stack by the
* mode entry function. These registers are saved by _isr_wrapper for
* IRQ mode and z_arm_svc for SVC mode.
*
* r0-r3 are either the values from the thread before it was switched
* out or they are the args to _new_thread for a new thread.
*/
cps #MODE_SYS
pop {r0-r3, r12, lr}
rfeia sp!
/**
* @brief Kernel housekeeping when exiting exception handler
*
* The exception exit routine performs appropriate housekeeping tasks depending
* on the mode of exit:
*
* If exiting a nested or non-fatal exception, the exit routine restores the
* saved exception stack frame to resume the excepted context.
*
* If exiting a non-nested fatal exception, the exit routine, assuming that the
* current faulting thread is aborted, discards the saved exception stack
* frame containing the aborted thread context and switches to the next
* scheduled thread.
*
* void z_arm_exc_exit(bool fatal)
*
* @param fatal True if exiting from a fatal fault; otherwise, false
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_exc_exit)
/* Do not context switch if exiting a nested exception */
ldr r3, =_kernel
ldr r1, [r3, #_kernel_offset_to_nested]
cmp r1, #1
bhi __EXIT_EXC
/* If the fault is not fatal, return to the current thread context */
cmp r0, #0
beq __EXIT_EXC
/*
* If the fault is fatal, the current thread must have been aborted by
* the exception handler. Clean up the exception stack frame and switch
* to the next scheduled thread.
*/
/* Clean up exception stack frame */
add sp, #32
/*
* Switch in the next scheduled thread.
*
* Note that z_arm_pendsv must be called in the SVC mode because it
* switches to the SVC mode during context switch and returns to the
* caller using lr_svc.
*/
cps #MODE_SVC
bl z_arm_pendsv
/* Decrement exception nesting count */
ldr r3, =_kernel
ldr r0, [r3, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r3, #_kernel_offset_to_nested]
/* Return to the switched thread */
cps #MODE_SYS
pop {r0-r3, r12, lr}
rfeia sp!
__EXIT_EXC:
/* Decrement exception nesting count */
ldr r0, [r3, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r3, #_kernel_offset_to_nested]
/*
* Restore r0-r3, r12, lr, lr_und and spsr_und from the exception stack
* and return to the current thread.
*/
ldmia sp, {r0-r3, r12, lr}^
add sp, #24
rfeia sp!

View File

@@ -1,165 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_internal.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#define FAULT_DUMP_VERBOSE (CONFIG_FAULT_DUMP == 2)
#if FAULT_DUMP_VERBOSE
static const char *get_dbgdscr_moe_string(uint32_t moe)
{
switch (moe) {
case DBGDSCR_MOE_HALT_REQUEST:
return "Halt Request";
case DBGDSCR_MOE_BREAKPOINT:
return "Breakpoint";
case DBGDSCR_MOE_ASYNC_WATCHPOINT:
return "Asynchronous Watchpoint";
case DBGDSCR_MOE_BKPT_INSTRUCTION:
return "BKPT Instruction";
case DBGDSCR_MOE_EXT_DEBUG_REQUEST:
return "External Debug Request";
case DBGDSCR_MOE_VECTOR_CATCH:
return "Vector Catch";
case DBGDSCR_MOE_OS_UNLOCK_CATCH:
return "OS Unlock Catch";
case DBGDSCR_MOE_SYNC_WATCHPOINT:
return "Synchronous Watchpoint";
default:
return "Unknown";
}
}
static void dump_debug_event(void)
{
/* Read and parse debug mode of entry */
uint32_t dbgdscr = __get_DBGDSCR();
uint32_t moe = (dbgdscr & DBGDSCR_MOE_Msk) >> DBGDSCR_MOE_Pos;
/* Print debug event information */
LOG_ERR("Debug Event (%s)", get_dbgdscr_moe_string(moe));
}
static void dump_fault(uint32_t status, uint32_t addr)
{
/*
* Dump fault status and, if applicable, status-specific information.
* Note that the fault address is only displayed for the synchronous
* faults because it is unpredictable for asynchronous faults.
*/
switch (status) {
case FSR_FS_ALIGNMENT_FAULT:
LOG_ERR("Alignment Fault @ 0x%08x", addr);
break;
case FSR_FS_BACKGROUND_FAULT:
LOG_ERR("Background Fault @ 0x%08x", addr);
break;
case FSR_FS_PERMISSION_FAULT:
LOG_ERR("Permission Fault @ 0x%08x", addr);
break;
case FSR_FS_SYNC_EXTERNAL_ABORT:
LOG_ERR("Synchronous External Abort @ 0x%08x", addr);
break;
case FSR_FS_ASYNC_EXTERNAL_ABORT:
LOG_ERR("Asynchronous External Abort");
break;
case FSR_FS_SYNC_PARITY_ERROR:
LOG_ERR("Synchronous Parity/ECC Error @ 0x%08x", addr);
break;
case FSR_FS_ASYNC_PARITY_ERROR:
LOG_ERR("Asynchronous Parity/ECC Error");
break;
case FSR_FS_DEBUG_EVENT:
dump_debug_event();
break;
default:
LOG_ERR("Unknown (%u)", status);
}
}
#endif
/**
* @brief Undefined instruction fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_undef_instruction(z_arch_esf_t *esf)
{
/* Print fault information */
LOG_ERR("***** UNDEFINED INSTRUCTION ABORT *****");
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All undefined instructions are treated as fatal for now */
return true;
}
/**
* @brief Prefetch abort fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_prefetch(z_arch_esf_t *esf)
{
/* Read and parse Instruction Fault Status Register (IFSR) */
uint32_t ifsr = __get_IFSR();
uint32_t fs = ((ifsr & IFSR_FS1_Msk) >> 6) | (ifsr & IFSR_FS0_Msk);
/* Read Instruction Fault Address Register (IFAR) */
uint32_t ifar = __get_IFAR();
/* Print fault information*/
LOG_ERR("***** PREFETCH ABORT *****");
if (FAULT_DUMP_VERBOSE) {
dump_fault(fs, ifar);
}
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All prefetch aborts are treated as fatal for now */
return true;
}
/**
* @brief Data abort fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_data(z_arch_esf_t *esf)
{
/* Read and parse Data Fault Status Register (DFSR) */
uint32_t dfsr = __get_DFSR();
uint32_t fs = ((dfsr & DFSR_FS1_Msk) >> 6) | (dfsr & DFSR_FS0_Msk);
/* Read Data Fault Address Register (DFAR) */
uint32_t dfar = __get_DFAR();
/* Print fault information*/
LOG_ERR("***** DATA ABORT *****");
if (FAULT_DUMP_VERBOSE) {
dump_fault(fs, dfar);
}
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All data aborts are treated as fatal for now */
return true;
}
/**
* @brief Initialisation of fault handling
*/
void z_arm_fault_init(void)
{
/* Nothing to do for now */
}

View File

@@ -1,30 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-R interrupt initialization
*/
#include <arch/cpu.h>
#include <drivers/interrupt_controller/gic.h>
/**
*
* @brief Initialize interrupts
*
* @return N/A
*/
void z_arm_interrupt_init(void)
{
/*
* Initialise interrupt controller.
*/
#ifdef CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER
/* Invoke SoC-specific interrupt controller initialisation */
z_soc_irq_init();
#endif
}

View File

@@ -1,28 +0,0 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <aarch32/cortex_a_r/stack.h>
#include <string.h>
#include <kernel_internal.h>
K_KERNEL_STACK_DEFINE(z_arm_fiq_stack, CONFIG_ARMV7_FIQ_STACK_SIZE);
K_KERNEL_STACK_DEFINE(z_arm_abort_stack, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
K_KERNEL_STACK_DEFINE(z_arm_undef_stack, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
K_KERNEL_STACK_DEFINE(z_arm_svc_stack, CONFIG_ARMV7_SVC_STACK_SIZE);
K_KERNEL_STACK_DEFINE(z_arm_sys_stack, CONFIG_ARMV7_SYS_STACK_SIZE);
#if defined(CONFIG_INIT_STACKS)
void z_arm_init_stacks(void)
{
memset(z_arm_fiq_stack, 0xAA, CONFIG_ARMV7_FIQ_STACK_SIZE);
memset(z_arm_svc_stack, 0xAA, CONFIG_ARMV7_SVC_STACK_SIZE);
memset(z_arm_abort_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(z_arm_undef_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(Z_KERNEL_STACK_BUFFER(z_interrupt_stacks[0]), 0xAA,
K_KERNEL_STACK_SIZEOF(z_interrupt_stacks[0]));
}
#endif
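Filling the stacks with 0xAA is not just decoration: it is what makes stack usage measurable afterwards, because the unused depth of a descending stack is the run of untouched 0xAA bytes at its low end. A host-side sketch of that high-water-mark scan (demo_* names and made-up sizes, not Zephyr APIs):
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#define DEMO_STACK_SIZE 256u
/* Mirrors the 0xAA fill in z_arm_init_stacks(), then shows what the pattern
 * buys you: the unused depth of a descending stack is the run of untouched
 * 0xAA bytes at its low end.
 */
static size_t demo_stack_unused(const uint8_t *stack, size_t size)
{
	size_t unused = 0;
	while (unused < size && stack[unused] == 0xAA) {
		unused++;
	}
	return unused;
}
int main(void)
{
	uint8_t stack[DEMO_STACK_SIZE];
	memset(stack, 0xAA, sizeof(stack));
	memset(stack + DEMO_STACK_SIZE - 64, 0x00, 64);	/* pretend 64 bytes were used */
	size_t unused = demo_stack_unused(stack, sizeof(stack));
	printf("unused: %zu bytes, high-water mark: %zu bytes\n",
	       unused, (size_t)DEMO_STACK_SIZE - unused);
	return 0;
}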

View File

@@ -1,17 +0,0 @@
/*
* Copyright (c) 2020, Antmicro
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr.h>
#include <arch/arm/aarch32/cortex_a_r/cmsis.h>
void z_arm_tcm_disable_ecc(void)
{
uint32_t actlr;
actlr = __get_ACTLR();
actlr &= ~(ACTLR_ATCMPCEN_Msk | ACTLR_B0TCMPCEN_Msk |
ACTLR_B1TCMPCEN_Msk);
__set_ACTLR(actlr);
}

View File

@@ -5,44 +5,19 @@ zephyr_library()
zephyr_library_sources(
vector_table.S
reset.S
fault_s.S
fault.c
exc_exit.S
scb.c
irq_init.c
thread_abort.c
)
zephyr_library_sources_ifdef(CONFIG_DEBUG_COREDUMP coredump.c)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE __aeabi_read_tp.S)
if(CONFIG_CORTEX_M_DWT)
if (CONFIG_TIMING_FUNCTIONS)
zephyr_library_sources(timing.c)
endif()
endif()
if (CONFIG_SW_VECTOR_RELAY)
if (CONFIG_CPU_CORTEX_M_HAS_VTOR)
set(relay_vector_table_sort_key relay_vectors)
else()
# Using 0x0 prefix will result in placing the relay vector table section
# at the beginning of ROM_START (i.e before other sections in ROM_START);
# required for CPUs without VTOR, which need to have the exception vector
# table starting at a fixed address at the beginning of ROM.
set(relay_vector_table_sort_key 0x0relay_vectors)
endif()
zephyr_linker_sources(
zephyr_linker_sources_ifdef(CONFIG_SW_VECTOR_RELAY
ROM_START
SORT_KEY ${relay_vector_table_sort_key}
SORT_KEY 0x0relay_vectors
relay_vector_table.ld
)
endif()
if (CONFIG_SW_VECTOR_RELAY OR CONFIG_SW_VECTOR_RELAY_CLIENT)
zephyr_linker_sources(
zephyr_linker_sources_ifdef(CONFIG_SW_VECTOR_RELAY
RAM_SECTIONS
vt_pointer_section.ld
)
endif()

Some files were not shown because too many files have changed in this diff.