Compare commits


38 Commits

Author SHA1 Message Date
Johan Hedberg
2bd062c258 release: bump version to Zephyr 2.2.1
Update version to 2.2.1

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-29 11:50:11 +03:00
Johan Hedberg
d2160cbf4a doc: releases: Add release notes for Zephyr 2.2.1
Add the release notes for Zephyr 2.2.1.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-29 11:50:11 +03:00
David Brown
9a3aa3c9c3 updatehub: Require peer verification with DTLS
DTLS without peer verification offers no security whatsoever (and is
arguably worse than not using DTLS in the first place).

Change the verification option to require peer verification. To use
this, it may be necessary to install and use a root certificate.
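
A minimal sketch of what requiring peer verification looks like with
Zephyr's TLS socket options (assumes CONFIG_NET_SOCKETS_SOCKOPT_TLS is
enabled; CA_TAG is a hypothetical credential tag for an
application-provided root certificate):

#include <net/socket.h>
#include <net/tls_credentials.h>

#define CA_TAG 1 /* hypothetical tag for the registered root certificate */

static int open_verified_dtls(void)
{
        int sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_DTLS_1_2);
        int verify = TLS_PEER_VERIFY_REQUIRED;
        sec_tag_t tags[] = { CA_TAG };

        if (sock < 0) {
                return sock;
        }
        /* Require the peer to present a certificate chaining to CA_TAG. */
        setsockopt(sock, SOL_TLS, TLS_PEER_VERIFY, &verify, sizeof(verify));
        setsockopt(sock, SOL_TLS, TLS_SEC_TAG_LIST, tags, sizeof(tags));

        return sock;
}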

Signed-off-by: David Brown <david.brown@linaro.org>
2020-05-28 22:28:36 +03:00
François Delawarde
137ebbc43f bluetooth: host: fix wrong bt/cf settings loading
This commit fixes the loading of bt/cf settings into memory. Only the
data was loaded, not the address.

Signed-off-by: François Delawarde <fnde@demant.com>
2020-05-27 12:37:37 +03:00
Andries Kruithof
a520506076 Bluetooth: controller: split: Update feature exchange to BTCore V5.0
The existing feature exchange procedure does not give the proper
response as specified in the BT core spec 5.0.
The old behaviour is that the feature response returns the logical AND
of the features of both peers.
The behaviour implemented here is that the feature response returns the
feature set of the peer, except for octet 0, which is the logical AND
of the supported features.
Tested by using the bt shell with different feature sets
on the two peers.
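
As a rough illustration of that rule (not the controller's actual
code), the responder reports its own 8-octet feature set and ANDs only
octet 0 with the initiator's:

#include <stdint.h>
#include <string.h>

#define FEATURES_LEN 8

static void build_feature_rsp(uint8_t rsp[FEATURES_LEN],
                              const uint8_t local[FEATURES_LEN],
                              const uint8_t peer[FEATURES_LEN])
{
        /* Octets 1..7: the local feature set, unmodified. */
        memcpy(rsp, local, FEATURES_LEN);
        /* Octet 0: logical AND of the features supported by both sides. */
        rsp[0] = local[0] & peer[0];
}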

This fixes #25483

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-05-27 12:37:04 +03:00
Vinayak Kariappa Chettimada
2758c33e39 Bluetooth: controller: split: Fix slave latency cancel race
Fix missing transmit buffer demultiplexing before checking if
slave latency needs to be maintained or cancelled.

This bug was detected when a new transmit buffer was enqueued
overlapping with the on-air radio transmission of an empty PDU
preceding the handling of the radio event done.

The symptom of this bug is a data transmission latency of up to
(slave latency plus one) times the connection interval.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-05-22 16:47:57 +03:00
Morten Priess
2fa7a7e977 Bluetooth: controller: Added support for vendor ticker nodes
With BT_CTLR_USER_TICKER_ID_RANGE it is possible for vendors to add a
number of ticker nodes for proprietary purposes. The feature depends on
BT_CTLR_USER_EXT.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2020-05-18 13:05:17 +03:00
Johan Hedberg
dbfc2ebc6b Bluetooth: Fix NULL pointer dereference when bt_send() fails
The last parameter to hci_cmd_done() is expected to be a valid net_buf
since the function immediately tries to dereference it. Fix this by
passing the appropriate buffer reference to the function.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2020-05-06 11:56:34 +03:00
Trond Einar Snekvik
82083a90cc Bluetooth: Mesh: net_key_status only pull one key idx
Fixes a bug where the config client's net_key_status handler would
attempt to pull two key indexes from a message which only holds one.
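
For context, mesh key indexes are 12-bit values, so a status message
carrying one index holds a single little-endian half-word; roughly
(an illustrative sketch, not the host's exact parsing code):

#include <zephyr/types.h>
#include <net/buf.h>

static u16_t net_key_idx_pull(struct net_buf_simple *buf)
{
        /* One 16-bit field; the upper four bits are reserved. */
        return net_buf_simple_pull_le16(buf) & 0xfff;
}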

Fixes #24601.

Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
2020-04-28 21:31:18 +03:00
Gerson Fernando Budke
341681f312 lib: updatehub: Improve probe security
Improve buffer overflow protection in probe_cb. This ensures that the
socket buffer has a fixed length and that the content received over
CoAP fits properly in the metadata buffer. After that, it ensures that
the metadata content is a valid string with a length smaller than the
metadata buffer size.
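
A minimal sketch of the kind of check this adds (metadata,
metadata_size and recv_len are illustrative names, not the actual
updatehub symbols):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool copy_metadata(char *metadata, size_t metadata_size,
                          const char *payload, size_t recv_len)
{
        /* Reject payloads that cannot fit with a terminating NUL. */
        if (recv_len >= metadata_size) {
                return false;
        }
        memcpy(metadata, payload, recv_len);
        metadata[recv_len] = '\0';
        /* Valid string: no embedded NUL shortening the content. */
        return strlen(metadata) == recv_len;
}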

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
3f4221db6f lib: updatehub: Refact to use bin2hex
Use bin2hex instead of an inline conversion.
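
For reference, bin2hex() lives in sys/util.h and returns the number of
hex characters written (0 on error); a sketch of the replacement
(sha256_to_hex is an illustrative wrapper, not the updatehub code):

#include <zephyr/types.h>
#include <sys/util.h>
#include <errno.h>

static int sha256_to_hex(const u8_t bin[32], char hex[65])
{
        /* 32 binary bytes become 64 hex chars plus the terminator. */
        if (bin2hex(bin, 32, hex, 65) != 64) {
                return -EINVAL;
        }
        return 0;
}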

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
4d949eebef lib: updatehub: Fix variable-size string copy
A malformed JSON payload that is received from an UpdateHub server
may trigger memory corruption in the Zephyr OS. This could result
in a denial of service in the best case, or code execution in the
worst case.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
2020-04-22 14:37:28 +03:00
Otavio Salvador
8e8ba2396c CODEOWNERS: Update UpdateHub owners
This replaces @chtavares592 with @nandojve, as he will be contributing
to it from now on.

Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit a3d6b627b7)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
0e51f8bd79 lib: updatehub: Add missing do upgrade request call
After a successful image download, UpdateHub needs to inform MCUboot
that it must test the new image and then, on success, commit it. This
adds the missing upgrade request call and fixes the upgrade flow.
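
The calls in question are Zephyr's MCUboot helpers; schematically
(a sketch, error handling elided):

#include <dfu/mcuboot.h>

static int mark_image_for_test(void)
{
        /* After a successful download: boot the new image once, in
         * test mode, so a failed boot falls back to the old image.
         */
        return boot_request_upgrade(BOOT_UPGRADE_TEST);
}

static int confirm_image(void)
{
        /* Once the new image has booted and passed its checks: keep it. */
        return boot_write_img_confirmed();
}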

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit d1e2d345fb)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
3e91120f2b lib: updatehub: Fix download block error
The current version aborts the update when the last transfer block is
found. Now, the system checks the total size only at the end of the
CoAP block transfer and installs the image if the download is OK.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 1128eab3f2)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
fb244f7542 lib: updatehub: Extract sha256 final method
Extract the sha256 finalization into its own method.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 1fe1b0eec6)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
f7ac49efe6 lib: updatehub: Fix buffer sizes
MAX_PAYLOAD_SIZE must reflect the size of COAP_BLOCK_x. This is
necessary because the BLOCK size represents the maximum payload size;
the current value creates inconsistencies for the CoAP library. In the
same way, MAX_DOWNLOAD_DATA must allocate sufficient space for
MAX_PAYLOAD_SIZE plus the space needed for the CoAP header, etc.
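
Schematically, the relationship being enforced is (values here are
illustrative, not the actual updatehub numbers):

#define MAX_PAYLOAD_SIZE   1024 /* must match the chosen COAP_BLOCK_x */
#define COAP_HEADER_MARGIN 32   /* assumed worst-case header overhead */
#define MAX_DOWNLOAD_DATA  (MAX_PAYLOAD_SIZE + COAP_HEADER_MARGIN)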

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 5f5919a900)
2020-04-22 14:37:28 +03:00
Gerson Fernando Budke
fce4c9e317 lib: updatehub: Fix build warnings
Fix all build warnings.

Signed-off-by: Gerson Fernando Budke <gerson.budke@ossystems.com.br>
Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
(cherry picked from commit 92f9cd9f85)
2020-04-22 14:37:28 +03:00
Flavio Ceolin
385f88242e net: coap: Fix possible overflow
Fix possible integer overflow when parsing a CoAP packet.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-04-22 13:02:46 +03:00
Flavio Ceolin
4e8e72bd4c math: extras: Add overflow functions to u16
Add overflow-checking addition and multiplication functions for u16_t.
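
In the style of the existing u32 helpers in sys/math_extras.h, the new
functions look roughly like this (a sketch; the upstream implementation
may differ):

#include <zephyr/types.h>
#include <stdint.h>
#include <stdbool.h>

static inline bool u16_add_overflow(u16_t a, u16_t b, u16_t *result)
{
        u16_t c = (u16_t)(a + b);

        *result = c;
        return c < a; /* true if the sum wrapped around */
}

static inline bool u16_mul_overflow(u16_t a, u16_t b, u16_t *result)
{
        u32_t c = (u32_t)a * (u32_t)b;

        *result = (u16_t)c;
        return c > UINT16_MAX; /* true if the product does not fit */
}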

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-04-22 13:02:46 +03:00
Vinayak Kariappa Chettimada
eee10e9aff Bluetooth: controller: split: Fix DLE duplicate requests
Fix implementation to handle back-to-back and duplicate
LENGTH_REQ PDU reception.

Relates to #23707.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:50 +03:00
Vinayak Kariappa Chettimada
b7e7153b95 Bluetooth: controller: split: Simplify DLE state checks
Simplify the Data Length Update Procedure state check when
processing incoming LENGTH_REQ/RSP PDUs.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:50 +03:00
Vinayak Kariappa Chettimada
525500123a Bluetooth: controller: split: Validate chan map and hop value
Add validation of the channel map and hop increment value
received in the CONNECT_IND PDU.

A zero bit count leads to a controller assert or a divide-by-zero
fault.

The hop increment shall be between 5 and 16 per the BT specification.
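
The checks amount to something like the following sketch, where
chan_map is the 37-bit channel map and hop the hop increment from the
PDU (illustrative, not the controller's actual code):

#include <stdbool.h>
#include <stdint.h>

static bool connect_ind_params_valid(const uint8_t chan_map[5], uint8_t hop)
{
        uint8_t bits = 0;

        /* Count used data channels; bits 37..39 are reserved. */
        for (int i = 0; i < 5; i++) {
                uint8_t octet = (i == 4) ? (chan_map[i] & 0x1f) : chan_map[i];

                bits += (uint8_t)__builtin_popcount(octet);
        }
        if (bits == 0) {
                return false; /* would later divide by zero */
        }
        /* Hop increment must be in the range 5..16. */
        return hop >= 5 && hop <= 16;
}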

Relates to #23705.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:29 +03:00
Vinayak Kariappa Chettimada
ff39065ed4 Bluetooth: controller: Fix ticker ticks_current value
Update the ticks_current value on the last stopped ticker
instance, so that when a new ticker instance is started
the anchor ticks calculation uses the correct current tick
with respect to the supplied anchor ticks.

Fixes #23805.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 13:00:03 +03:00
Vinayak Kariappa Chettimada
1b253c2dba Bluetooth: controller: split: Fix slave latency during conn update
Fix regression in cancelling slave latency during Connection
Update Procedure.

Slave latency should not be applied between the ack of a
Connection Update Indication PDU and the instant.
When caching was introduced, the implementation missed this
consideration.

Relates to #23813.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 12:59:43 +03:00
Vinayak Kariappa Chettimada
37c46e3cf5 Bluetooth: controller: split: Fix regression in handling invalid pkt seq
Fix regression in handling invalid packet sequence in the
first packet in a connection.

This relates to commit 62c1e1a52b ("Bluetooth: controller:
split: Fix assert on invalid packet sequence").

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-22 12:59:20 +03:00
Andries Kruithof
c5f9a2d2bc [Backport-v2.2-branch] Bluetooth: controller: legacy: DLE timing
Fixes #23482.

There is a bug in the calculation of time for the DLE procedure:
the preamble size is incorrect for the 2M PHY.

This is for the legacy code; see PR #23570 for the split controller.

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-04-22 12:58:52 +03:00
Joakim Andersson
7e4bcff791 Bluetooth: SMP: Fix bond lost on pairing failure.
Fix an issue where established bonding information in the peripheral
is deleted when the central does not have the bond information.
This could be because the central has removed the bond information,
because this is in fact not the central but someone spoofing its
identity, or because of an accidental RPA match.

This is a regression from: a3e89e84a8

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2020-04-22 12:58:19 +03:00
Vinayak Kariappa Chettimada
ee4006e42e Bluetooth: controller: nRF: Revert use of ticker compat mode as default
The ticker is now fixed to avoid catch-up of periodic timeouts under
large ISR latencies.

Revert commit a749e28d98 ("Bluetooth: controller: split:
nRF: Use ticker compat mode as default").

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
bec2a497b9 Bluetooth: controller: Consider must_expire while avoiding catchup
If must_expire is set for a ticker instance, then the ticker
expiry shall still perform catch-up.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
2295af0f05 Bluetooth: controller: Fix ticks_slot_previous calculation
Fix ticks_slot_previous calculation in the ticker_job when
tickers are skipped.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
8e12b42af2 Bluetooth: controller: Fix ticker state on skipped
Reset ticker state in ticker_job for ticker instances that
have been skipped in the ticker_worker.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
a3c2a72c6c Bluetooth: controller: Remove lazy handling in ticker_worker
Removed lazy handling from ticker_worker, as it is now taken care of
in ticker_job.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Vinayak Kariappa Chettimada
c2e829ba8c Bluetooth: controller: Fix ticker catch up on ISR latency
Fix ticker to avoid catch-up of periodic timeouts in case of
large ISR latencies, such as in flash erase scenarios.

Relates to #23815.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2020-04-20 21:46:38 +03:00
Andries Kruithof
b83d9a5eea Bluetooth: controller: split: correct timing calculation in PKT_US
A bug in PKT_US resulted in wrong calculations for the 2M PHY.
This fixes the bug, verified on EBQ.
It also adds some defines for improved readability.

Fixes #23482

Signed-off-by: Andries Kruithof <Andries.Kruithof@nordicsemi.no>
2020-04-12 11:54:29 +03:00
Carles Cufi
a69cdb31ca ci: do not use latest sphinx/breathe releases for docs
The recent Sphinx 3.0.0 release broke the doc build; lock the Sphinx
version to something compatible while we upgrade dependencies.

breathe v4.15.0 was just released and fixes some compatibility issues
with the latest Sphinx. The combo works, however with many warnings
that we still need to either fix or whitelist. This is temporary until
we are able to use those new versions.

Backport of #24109 and #24166

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2020-04-08 17:14:25 +02:00
Dan Erichsen
d39cb42d09 bluetooth: host: Do not send unwanted SC indicate
Fixes #23485

When we create a GATT table dynamically, we also create a hash
identifying this table. This hash can be stored in persistent memory and
we can thus determine after recreating the GATT table whether the
services have changed or not from before the reboot.

When these hashes are identical, it implies that the table has not
changed, and therefore a service changed indication should not be sent
to any bonded clients. The method for achieving this was to remove the
gatt_sc.work entry from the work queue. This work queue entry was meant
to send an indication to the clients when the table had been allocated.
If the final entry then caused the hashes to match, the indication
would be cancelled.

On unit testing this behaviour in simulation and in practice, we found
that the indication was sent nonetheless. The issue was traced to the
SERVICE_RANGE_CHANGED flag, which is set when the services are changed
and cleared when the indications are sent out.

It was the job of the work queue entry to clear this flag. As the
entry was never serviced, the flag was never cleared, so when
sc_commit() was called at the end of the process, it believed that a
new service change was pending and started the job over, creating a
redundant indication to the clients.

This commit fixes the issue by clearing the flag when the work entry
is removed due to a hash match. This has been unit tested in a live
environment and in a simulation environment, and sanitycheck has been
run on it.
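
Schematically, the fix looks like this (sc_work, sc_flags and
SC_RANGE_CHANGED are illustrative names, not the exact host symbols):

#include <kernel.h>
#include <sys/atomic.h>

extern struct k_delayed_work sc_work; /* assumed: pending SC indication */
extern atomic_t sc_flags;             /* assumed: service-changed flags */
#define SC_RANGE_CHANGED 0            /* assumed bit index */

static void on_db_hash_match(void)
{
        /* The work item would have cleared the flag when sending the
         * indication; since it is cancelled here, clear the flag too,
         * so sc_commit() does not restart the job.
         */
        k_delayed_work_cancel(&sc_work);
        atomic_clear_bit(&sc_flags, SC_RANGE_CHANGED);
}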

Signed-off-by: Dan Erichsen <daee@demant.com>
2020-03-17 18:16:57 +01:00
Wolfgang Puffitsch
9d83a7cd27 Bluetooth: controller: split: Fix response to unexpected LL_FEATURE_RSP
Fix response to unexpected LL_FEATURE_RSP for the case that
BT_CTLR_SLAVE_FEAT_REQ is disabled. Fixes LL/PAC/SLA/BV-01 for such a
configuration.

Signed-off-by: Wolfgang Puffitsch <wopu@demant.com>
2020-03-17 18:16:47 +01:00
5884 changed files with 147638 additions and 395551 deletions


@@ -16,5 +16,4 @@
--ignore MINMAX
--ignore CONST_STRUCT
--ignore FILE_PATH_CHANGES
--ignore SPDX_LICENSE_TAG
--exclude ext


@@ -65,12 +65,18 @@ ExperimentalAutoDetectBinPacking: false
#FixNamespaceComments: false # Unknown to clang-format-4.0
# Taken from:
# git grep -h '^#define [^[:space:]]*FOR_EACH[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*FOR_EACH[^[:space:]]*\)(.*$, - '\1'," \
# git grep -h '^#define [^[:space:]]*for_each[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*for_each[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
ForEachMacros:
- 'FOR_EACH'
- 'FOR_EACH_FIXED_ARG'
- 'for_each_linux_bus'
- 'for_each_linux_driver'
- 'metal_bitmap_for_each_clear_bit'
- 'metal_bitmap_for_each_set_bit'
- 'metal_for_each_page_size_down'
- 'metal_for_each_page_size_up'
- 'metal_list_for_each'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
@@ -85,11 +91,11 @@ ForEachMacros:
- 'SYS_SLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SLIST_FOR_EACH_NODE'
- 'SYS_SLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
- 'Z_GENLIST_FOR_EACH_CONTAINER'
- 'Z_GENLIST_FOR_EACH_CONTAINER_SAFE'
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
#IncludeBlocks: Preserve # Unknown to clang-format-5.0
IncludeCategories:


@@ -11,11 +11,6 @@ insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 80
# Assembly
[*.S]
indent_style = tab
indent_size = 8
# C
[*.{c,h}]
indent_style = tab
@@ -32,7 +27,7 @@ indent_style = tab
indent_size = 8
# YAML
[*.{yml,yaml}]
[*.yml]
indent_style = space
indent_size = 2
@@ -60,8 +55,3 @@ indent_size = 2
# Makefile
[Makefile]
indent_style = tab
# Device tree
[*.{dts,dtsi,overlay}]
indent_style = tab
indent_size = 8

.gitattributes

@@ -3,7 +3,3 @@
.gitattributes export-ignore
.gitignore export-ignore
.mailmap export-ignore
# Tell linguist that generated test pattern files should not be included in the
# language statistics.
*.pat linguist-generated=true


@@ -2,7 +2,7 @@
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: Enhancement
labels: enhancement
assignees: ''
---


@@ -33,7 +33,7 @@ jobs:
pip3 install 'breathe>=4.9.1,<4.15.0' 'docutils>=0.14' \
'sphinx>=1.7.5,<3.0' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools canopen progress
pip3 install pyelftools
- name: west setup
run: |


@@ -54,7 +54,7 @@ jobs:
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip3 install pytest west pyelftools canopen progress
pip3 install pytest west pyelftools
- name: run pytest-win
if: runner.os == 'Windows'
run: |

.gitignore

@@ -9,7 +9,6 @@
*~
build*/
!doc/guides/build
!tests/drivers/build_all
cscope.*
.dir


@@ -1,25 +0,0 @@
#
# crypto unnamed struct definition
#doc/reference/crypto/index.rst
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]crypto[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*cipher_ctx.mode_params.*
^[- \t]*\^
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]crypto[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*cipher_ctx.key.*
^[- \t]*\^


@@ -4,7 +4,7 @@ compiler: gcc
env:
global:
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.11.3
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.11.2
- ZEPHYR_TOOLCHAIN_VARIANT=zephyr
- MATRIX_BUILDS="5"
matrix:
@@ -20,7 +20,7 @@ build:
- ${SHIPPABLE_BUILD_DIR}/ccache
pre_ci_boot:
image_name: zephyrprojectrtos/ci
image_tag: v0.11.8
image_tag: v0.11.4
pull: true
options: "-e HOME=/home/buildslave --privileged=true --tty --net=bridge --user buildslave"


@@ -54,7 +54,7 @@ set(SYSCALL_LIST_H_TARGET syscall_list_h_target)
set(DRIVER_VALIDATION_H_TARGET driver_validation_h_target)
set(KOBJ_TYPES_H_TARGET kobj_types_h_target)
set(LINKER_SCRIPT_TARGET linker_script_target)
set(PARSE_SYSCALLS_TARGET parse_syscalls_target)
define_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT BRIEF_DOCS " " FULL_DOCS " ")
set_property( GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-little${ARCH}) # BFD format
@@ -351,6 +351,9 @@ endif()
if(CONFIG_USERSPACE)
set(APP_SMEM_ALIGNED_DEP app_smem_aligned_linker)
set(APP_SMEM_UNALIGNED_DEP app_smem_unaligned_linker)
if(CONFIG_ARM)
set(PRIV_STACK_DEP priv_stacks_prebuilt)
endif()
endif()
get_property(TOPT GLOBAL PROPERTY TOPT)
@@ -410,7 +413,6 @@ set_ifndef(DTS_CAT_OF_FIXUP_FILES ${ZEPHYR_BINARY_DIR}/include/generated/devicet
# Concatenate the fixups into a single header file for easy
# #include'ing
file(WRITE ${DTS_CAT_OF_FIXUP_FILES} "/* May only be included by devicetree.h */\n\n")
set(DISCOVERED_FIXUP_FILES)
foreach(fixup_file
${DTS_BOARD_FIXUP_FILE}
${DTS_SOC_FIXUP_FILE}
@@ -420,14 +422,9 @@ foreach(fixup_file
if(EXISTS ${fixup_file})
file(READ ${fixup_file} contents)
file(APPEND ${DTS_CAT_OF_FIXUP_FILES} "${contents}")
string(APPEND DISCOVERED_FIXUP_FILES "- ${fixup_file}\n")
endif()
endforeach()
if (DISCOVERED_FIXUP_FILES)
message(WARNING "One or more dts_fixup.h files detected:\n${DISCOVERED_FIXUP_FILES}Use of these files is deprecated; use the devicetree.h API instead.")
endif()
# Unfortunately, the order in which CMakeLists.txt code is processed
# matters so we need to be careful about how we order the processing
# of subdirectories. One example is "Compiler flags added late in the
@@ -458,6 +455,7 @@ else()
endif()
add_subdirectory(boards)
add_subdirectory(ext)
add_subdirectory(subsys)
add_subdirectory(drivers)
@@ -478,7 +476,6 @@ if(EXISTS ${CMAKE_BINARY_DIR}/zephyr_modules.txt)
string(TOUPPER ${module_name} MODULE_NAME_UPPER)
set(ZEPHYR_${MODULE_NAME_UPPER}_MODULE_DIR ${module_path})
set(ZEPHYR_${MODULE_NAME_UPPER}_MODULE_DIR ${module_path} PARENT_SCOPE)
endforeach()
foreach(module_name ${module_names})
@@ -495,9 +492,8 @@ if(EXISTS ${CMAKE_BINARY_DIR}/zephyr_modules.txt)
set(ZEPHYR_CURRENT_MODULE_DIR)
endif()
set(syscall_list_h ${CMAKE_CURRENT_BINARY_DIR}/include/generated/syscall_list.h)
set(syscalls_json ${CMAKE_CURRENT_BINARY_DIR}/misc/generated/syscalls.json)
set(struct_tags_json ${CMAKE_CURRENT_BINARY_DIR}/misc/generated/struct_tags.json)
set(syscall_list_h ${CMAKE_CURRENT_BINARY_DIR}/include/generated/syscall_list.h)
set(syscalls_json ${CMAKE_CURRENT_BINARY_DIR}/misc/generated/syscalls.json)
# The syscalls subdirs txt file is constructed by python containing a list of folders to use for
# dependency handling, including empty folders.
@@ -589,22 +585,16 @@ endforeach()
add_custom_command(
OUTPUT
${syscalls_json}
${struct_tags_json}
COMMAND
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/parse_syscalls.py
--include ${ZEPHYR_BASE}/include # Read files from this dir
--include ${ZEPHYR_BASE}/drivers # For net sockets
--include ${ZEPHYR_BASE}/subsys/net # More net sockets
${parse_syscalls_include_args} # Read files from these dirs also
--json-file ${syscalls_json} # Write this file
--tag-struct-file ${struct_tags_json} # Write subsystem list to this file
--json-file ${syscalls_json} # Write this file
DEPENDS ${syscalls_subdirs_trigger} ${PARSE_SYSCALLS_HEADER_DEPENDS}
)
add_custom_target(${SYSCALL_LIST_H_TARGET} DEPENDS ${syscall_list_h})
add_custom_target(${PARSE_SYSCALLS_TARGET}
DEPENDS ${syscalls_json} ${struct_tags_json})
# 64-bit systems do not require special handling of 64-bit system call
# parameters or return values, indicate this to the system call boilerplate
@@ -613,10 +603,6 @@ if(CONFIG_64BIT)
set(SYSCALL_LONG_REGISTERS_ARG --long-registers)
endif()
if(CONFIG_TIMEOUT_64BIT)
set(SYSCALL_SPLIT_TIMEOUT_ARG --split-type k_timeout_t)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}
# Also, some files are written to include/generated/syscalls/
COMMAND
@@ -627,15 +613,10 @@ add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}
--syscall-dispatch include/generated/syscall_dispatch.c # Write this file
--syscall-list ${syscall_list_h}
${SYSCALL_LONG_REGISTERS_ARG}
${SYSCALL_SPLIT_TIMEOUT_ARG}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
DEPENDS ${PARSE_SYSCALLS_TARGET}
${syscalls_json}
DEPENDS ${syscalls_json}
)
# This is passed into all calls to the gen_kobject_list.py script.
set(gen_kobject_list_include_args --include ${struct_tags_json})
set(DRV_VALIDATION ${PROJECT_BINARY_DIR}/include/generated/driver-validation.h)
add_custom_command(
OUTPUT ${DRV_VALIDATION}
@@ -643,17 +624,13 @@ add_custom_command(
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/gen_kobject_list.py
--validation-output ${DRV_VALIDATION}
${gen_kobject_list_include_args}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
${ZEPHYR_BASE}/scripts/gen_kobject_list.py
${PARSE_SYSCALLS_TARGET}
${struct_tags_json}
DEPENDS ${ZEPHYR_BASE}/scripts/gen_kobject_list.py
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(${DRIVER_VALIDATION_H_TARGET} DEPENDS ${DRV_VALIDATION})
include(${ZEPHYR_BASE}/cmake/kobj.cmake)
include($ENV{ZEPHYR_BASE}/cmake/kobj.cmake)
gen_kobj(KOBJ_INCLUDE_PATH)
# Add a pseudo-target that is up-to-date when all generated headers
@@ -718,6 +695,7 @@ endif() # CONFIG_CODE_DATA_RELOCATION
configure_linker_script(
linker.cmd
""
${PRIV_STACK_DEP}
${APP_SMEM_ALIGNED_DEP}
${CODE_RELOCATION_DEP}
zephyr_generated_headers
@@ -797,7 +775,147 @@ if(CONFIG_USERSPACE)
get_property(compile_definitions_interface TARGET zephyr_interface
PROPERTY INTERFACE_COMPILE_DEFINITIONS)
endif()
# Warning most of this gperf code is duplicated below for
# gen_kobject_list.py / output_lib
if(CONFIG_ARM AND CONFIG_USERSPACE)
set(GEN_PRIV_STACKS $ENV{ZEPHYR_BASE}/scripts/gen_priv_stacks.py)
set(PROCESS_PRIV_STACKS_GPERF $ENV{ZEPHYR_BASE}/scripts/process_gperf.py)
set(PRIV_STACKS priv_stacks_hash.gperf)
set(PRIV_STACKS_OUTPUT_SRC_PRE priv_stacks_hash_preprocessed.c)
set(PRIV_STACKS_OUTPUT_SRC priv_stacks_hash.c)
set(PRIV_STACKS_OUTPUT_OBJ priv_stacks_hash.c.obj)
set(PRIV_STACKS_OUTPUT_OBJ_RENAMED priv_stacks_hash_renamed.o)
# Essentially what we are doing here is extracting some information
# out of the nearly finished elf file, generating the source code
# for a hash table based on that information, and then compiling and
# linking the hash table back into a now even more nearly finished
# elf file.
# Use the script GEN_PRIV_STACKS to scan the kernel binary's
# (${ZEPHYR_PREBUILT_EXECUTABLE}) DWARF information to produce a table of kernel
# objects (PRIV_STACKS) which we will then pass to gperf
add_custom_command(
OUTPUT ${PRIV_STACKS}
COMMAND
${PYTHON_EXECUTABLE}
${GEN_PRIV_STACKS}
--kernel $<TARGET_FILE:priv_stacks_prebuilt>
--output ${PRIV_STACKS}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS priv_stacks_prebuilt
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(priv_stacks DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS})
if(${GPERF} STREQUAL GPERF-NOTFOUND)
message(FATAL_ERROR "Unable to find gperf")
endif()
# Use gperf to generate C code (PRIV_STACKS_OUTPUT_SRC_PRE) which implements a
# perfect hashtable based on PRIV_STACKS
add_custom_command(
OUTPUT ${PRIV_STACKS_OUTPUT_SRC_PRE}
COMMAND
${GPERF} -C
--output-file ${PRIV_STACKS_OUTPUT_SRC_PRE}
${PRIV_STACKS}
DEPENDS priv_stacks ${PRIV_STACKS}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(priv_stacks_output_src_pre DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_SRC_PRE})
# For our purposes the code/data generated by gperf is not optimal.
#
# The script PROCESS_GPERF creates a new c file OUTPUT_SRC based on
# OUTPUT_SRC_PRE to greatly reduce the amount of code/data generated
# since we know we are always working with pointer values
add_custom_command(
OUTPUT ${PRIV_STACKS_OUTPUT_SRC}
COMMAND
${PYTHON_EXECUTABLE}
${PROCESS_PRIV_STACKS_GPERF}
-i ${PRIV_STACKS_OUTPUT_SRC_PRE}
-o ${PRIV_STACKS_OUTPUT_SRC}
-p "struct _k_priv_stack_map"
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS priv_stacks_output_src_pre ${PRIV_STACKS_OUTPUT_SRC_PRE}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(priv_stacks_output_src DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_SRC})
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_SRC}
PROPERTIES COMPILE_DEFINITIONS "${compile_definitions_interface}")
set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_SRC}
PROPERTIES COMPILE_FLAGS
"${NO_COVERAGE_FLAGS} -fno-function-sections -fno-data-sections ")
# We need precise control of where generated text/data ends up in the final
# kernel image. Disable function/data sections and use objcopy to move
# generated data into special section names
add_library(priv_stacks_output_lib STATIC
${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_SRC}
)
# Turn off -ffunction-sections, etc.
# NB: Using a library instead of target_compile_options(priv_stacks_output_lib
# [...]) because a library's options have precedence
add_library(priv_stacks_output_lib_interface INTERFACE)
foreach(incl ${include_dir_in_interface})
target_include_directories(priv_stacks_output_lib_interface INTERFACE ${incl})
endforeach()
foreach(incl ${sys_include_dir_in_interface})
target_include_directories(priv_stacks_output_lib_interface SYSTEM INTERFACE ${incl})
endforeach()
target_link_libraries(priv_stacks_output_lib priv_stacks_output_lib_interface)
set(PRIV_STACKS_OUTPUT_OBJ_PATH ${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/priv_stacks_output_lib.dir/${PRIV_STACKS_OUTPUT_OBJ})
set(obj_copy_cmd "")
set(obj_copy_sections_rename
.bss=.priv_stacks.noinit
.data=.priv_stacks.data
.text=.priv_stacks.text
.rodata=.priv_stacks.rodata
)
bintools_objcopy(
RESULT_CMD_LIST obj_copy_cmd
SECTION_RENAME ${obj_copy_sections_rename}
FILE_INPUT ${PRIV_STACKS_OUTPUT_OBJ_PATH}
FILE_OUTPUT ${PRIV_STACKS_OUTPUT_OBJ_RENAMED}
)
add_custom_command(
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_OBJ_RENAMED}
${obj_copy_cmd}
DEPENDS priv_stacks_output_lib
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(priv_stacks_output_obj_renamed DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_OBJ_RENAMED})
add_library(priv_stacks_output_obj_renamed_lib STATIC IMPORTED GLOBAL)
set_property(
TARGET priv_stacks_output_obj_renamed_lib
PROPERTY
IMPORTED_LOCATION ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS_OUTPUT_OBJ_RENAMED}
)
add_dependencies(
priv_stacks_output_obj_renamed_lib
priv_stacks_output_obj_renamed
)
set_property(GLOBAL APPEND PROPERTY GENERATED_KERNEL_OBJECT_FILES priv_stacks_output_obj_renamed_lib)
endif()
# Warning: most of this gperf code is duplicated above for
# gen_priv_stacks.py / priv_stacks_output_lib
if(CONFIG_USERSPACE)
set(GEN_KOBJ_LIST ${ZEPHYR_BASE}/scripts/gen_kobject_list.py)
set(PROCESS_GPERF ${ZEPHYR_BASE}/scripts/process_gperf.py)
@@ -823,10 +941,8 @@ if(CONFIG_USERSPACE)
${GEN_KOBJ_LIST}
--kernel $<TARGET_FILE:${ZEPHYR_PREBUILT_EXECUTABLE}>
--gperf-output ${OBJ_LIST}
${gen_kobject_list_include_args}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
${ZEPHYR_PREBUILT_EXECUTABLE}
DEPENDS ${ZEPHYR_PREBUILT_EXECUTABLE}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
)
add_custom_target(obj_list DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${OBJ_LIST})
@@ -856,7 +972,7 @@ if(CONFIG_USERSPACE)
${PROCESS_GPERF}
-i ${OUTPUT_SRC_PRE}
-o ${OUTPUT_SRC}
-p "struct z_object"
-p "struct _k_object"
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS output_src_pre ${OUTPUT_SRC_PRE}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
@@ -1038,6 +1154,42 @@ if(CONFIG_USERSPACE)
)
endif()
if(CONFIG_USERSPACE AND CONFIG_ARM)
configure_linker_script(
linker_priv_stacks.cmd
""
${CODE_RELOCATION_DEP}
${APP_SMEM_ALIGNED_DEP}
${APP_SMEM_ALIGNED_LD}
zephyr_generated_headers
)
add_custom_target(
linker_priv_stacks_script
DEPENDS
linker_priv_stacks.cmd
)
set_property(TARGET
linker_priv_stacks_script
PROPERTY INCLUDE_DIRECTORIES
${ZEPHYR_INCLUDE_DIRS}
)
set(PRIV_STACK_LIB priv_stacks_output_obj_renamed_lib)
add_executable( priv_stacks_prebuilt misc/empty_file.c)
toolchain_ld_link_elf(
TARGET_ELF priv_stacks_prebuilt
OUTPUT_MAP ${PROJECT_BINARY_DIR}/priv_stacks_prebuilt.map
LIBRARIES_PRE_SCRIPT ""
LINKER_SCRIPT ${PROJECT_BINARY_DIR}/linker_priv_stacks.cmd
LIBRARIES_POST_SCRIPT ""
DEPENDENCIES ${CODE_RELOCATION_DEP}
)
set_property(TARGET priv_stacks_prebuilt PROPERTY LINK_DEPENDS ${PROJECT_BINARY_DIR}/linker_priv_stacks.cmd)
add_dependencies( priv_stacks_prebuilt linker_priv_stacks_script ${OFFSETS_LIB})
endif()
# FIXME: Is there any way to get rid of empty_file.c?
add_executable( ${ZEPHYR_PREBUILT_EXECUTABLE} misc/empty_file.c)
toolchain_ld_link_elf(
@@ -1045,10 +1197,11 @@ toolchain_ld_link_elf(
OUTPUT_MAP ${PROJECT_BINARY_DIR}/${ZEPHYR_PREBUILT_EXECUTABLE}.map
LIBRARIES_PRE_SCRIPT ""
LINKER_SCRIPT ${PROJECT_BINARY_DIR}/linker.cmd
LIBRARIES_POST_SCRIPT ${PRIV_STACK_LIB}
DEPENDENCIES ${CODE_RELOCATION_DEP}
)
set_property(TARGET ${ZEPHYR_PREBUILT_EXECUTABLE} PROPERTY LINK_DEPENDS ${PROJECT_BINARY_DIR}/linker.cmd)
add_dependencies( ${ZEPHYR_PREBUILT_EXECUTABLE} ${LINKER_SCRIPT_TARGET} ${OFFSETS_LIB})
add_dependencies( ${ZEPHYR_PREBUILT_EXECUTABLE} ${PRIV_STACK_DEP} ${LINKER_SCRIPT_TARGET} ${OFFSETS_LIB})
set(generated_kernel_files ${GKSF} ${GKOF})
@@ -1063,6 +1216,7 @@ else()
configure_linker_script(
linker_pass_final.cmd
"-DLINKER_PASS2"
${PRIV_STACK_DEP}
${CODE_RELOCATION_DEP}
${ZEPHYR_PREBUILT_EXECUTABLE}
zephyr_generated_headers
@@ -1090,7 +1244,7 @@ else()
DEPENDENCIES ${CODE_RELOCATION_DEP}
)
set_property(TARGET ${ZEPHYR_FINAL_EXECUTABLE} PROPERTY LINK_DEPENDS ${PROJECT_BINARY_DIR}/linker_pass_final.cmd)
add_dependencies( ${ZEPHYR_FINAL_EXECUTABLE} ${LINKER_PASS_FINAL_SCRIPT_TARGET})
add_dependencies( ${ZEPHYR_FINAL_EXECUTABLE} ${PRIV_STACK_DEP} ${LINKER_PASS_FINAL_SCRIPT_TARGET})
# Use the pass2 elf as the final elf
set(logical_target_for_zephyr_elf ${ZEPHYR_FINAL_EXECUTABLE})
@@ -1222,15 +1376,10 @@ endif()
if(CONFIG_OUTPUT_DISASSEMBLY)
set(out_disassembly_cmd "")
set(out_disassembly_byprod "")
if(CONFIG_OUTPUT_DISASSEMBLE_ALL)
set(disassembly_type DISASSEMBLE_ALL)
else()
set(disassembly_type DISASSEMBLE_SOURCE)
endif()
bintools_objdump(
RESULT_CMD_LIST out_disassembly_cmd
RESULT_BYPROD_LIST out_disassembly_byprod
${disassembly_type}
DISASSEMBLE_SOURCE
FILE_INPUT ${KERNEL_ELF_NAME}
FILE_OUTPUT ${KERNEL_LST_NAME}
)


@@ -22,7 +22,7 @@
/arch/arm/core/aarch64/ @carlocaione
/arch/arm/include/aarch32/cortex_m/cmse.h @ioannisg
/arch/arm/include/aarch64/ @carlocaione
/arch/arm/core/aarch32/cortex_a_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/arm/core/aarch32/cortex_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/common/ @andrewboie @ioannisg @andyross
/soc/arc/snps_*/ @vonhust @ruuddw
/soc/nios2/ @nashif @wentongwu
@@ -31,15 +31,13 @@
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/same70/ @nandojve
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/nxp*/ @MaureenHelm
/soc/arm/nordic_nrf/ @ioannisg
/soc/arm/qemu_cortex_a53/ @carlocaione
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/stm32f4/ @idlethread
/soc/arm/st_stm32/stm32f4/ @rsalveti @idlethread
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
@@ -48,9 +46,9 @@
/soc/xtensa/intel_s1000/ @sathishkuttan @dcpleung
/arch/x86/ @andrewboie
/arch/nios2/ @andrewboie @wentongwu
/arch/posix/ @aescolar @daor-oti
/arch/posix/ @aescolar
/arch/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/posix/ @aescolar @daor-oti
/soc/posix/ @aescolar
/soc/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/riscv/openisa*/ @MaureenHelm
/soc/x86/ @andrewboie
@@ -60,7 +58,7 @@
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @idlethread
/boards/arm/96b_carbon/ @rsalveti @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
@@ -76,7 +74,6 @@
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @MaureenHelm
/boards/arm/hexiwear*/doc/ @MaureenHelm @MeganHansen
/boards/arm/ip_k66f/ @parthitce
/boards/arm/lpcxpresso*/ @MaureenHelm
/boards/arm/lpcxpresso*/doc/ @MaureenHelm @MeganHansen
/boards/arm/mimxrt*/ @MaureenHelm
@@ -85,14 +82,12 @@
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/nrf*/ @carlescufi @lemrey @ioannisg
/boards/arm/nucleo*/ @erwango
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nucleo_f401re/ @rsalveti @idlethread
/boards/arm/qemu_cortex_a53/ @carlocaione
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg
/boards/arm/xmc45_relax_kit/ @parthitce
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_e70_xplained/ @nandojve
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
@@ -102,13 +97,11 @@
/boards/arm/stm32*_disco/ @erwango
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango
/boards/common/ @mbolivar-nordic
/boards/deprecated.cmake @tejlmand
/boards/common/ @mbolivar
/boards/nios2/ @wentongwu
/boards/nios2/altera_max10/ @wentongwu
/boards/arm/stm32_min_dev/ @cbsiddharth
/boards/posix/ @aescolar @daor-oti
/boards/posix/nrf52_bsim/ @aescolar @wopu-ot
/boards/posix/ @aescolar
/boards/riscv/ @kgugala @pgielda @nategraff-sifive
/boards/riscv/rv32m1_vega/ @MaureenHelm
/boards/shields/ @erwango
@@ -117,23 +110,21 @@
/boards/xtensa/intel_s1000_crb/ @sathishkuttan @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
# All cmake related files
/cmake/ @tejlmand @nashif
/CMakeLists.txt @tejlmand @nashif
/cmake/ @SebastianBoe @nashif
/CMakeLists.txt @SebastianBoe @nashif
/doc/ @dbkinder
/doc/guides/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/CMakeLists.txt @carlescufi
/doc/scripts/ @carlescufi
/doc/guides/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/guides/dts/ @galak @mbolivar-nordic
/doc/reference/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/devicetree/ @galak @mbolivar-nordic
/doc/reference/resource_management/ @pabigot
/doc/reference/kernel/other/resource_mgmt.rst @pabigot
/doc/reference/networking/can* @alexanderwachter
/drivers/debug/ @nashif
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*mcux* @MaureenHelm
/drivers/*/*stm32* @erwango
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*native_posix* @aescolar
/drivers/adc/ @anangl
/drivers/adc/adc_stm32.c @cybertale
/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
@@ -141,25 +132,18 @@
/drivers/can/*mcp2515* @karstenkoenig
/drivers/clock_control/*nrf* @nordic-krch
/drivers/counter/ @nordic-krch
/drivers/console/semihost_console.c @luozhongyao
/drivers/counter/counter_cmos.c @andrewboie
/drivers/counter/maxim_ds3231.c @pabigot
/drivers/crypto/*nrf_ecb* @maciekfabia @anangl
/drivers/console/*mux* @jukkar
/drivers/display/ @vanwinkeljan
/drivers/display/display_framebuf.c @andrewboie
/drivers/dac/ @martinjaeger
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*rv32m1* @MaureenHelm
/drivers/entropy/*gecko* @chrta
/drivers/espi/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ps2/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/kscan/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/peci/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ethernet/ @jukkar @tbursztyka @pfalcon
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/flash/ @nashif @nvlsianpu
@@ -184,16 +168,15 @@
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic @ioannisg
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic @ioannisg
/drivers/ipm/ipm_cavs_idc* @dcpleung
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic @ioannisg
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar-nordic
/drivers/led_strip/ @mbolivar
/drivers/lora/ @Mani-Sadhasivam
/drivers/modem/ @mike-scott
/drivers/pcie/ @andrewboie
/drivers/pinmux/stm32/ @idlethread
/drivers/pinmux/stm32/ @rsalveti @idlethread
/drivers/pinmux/*hsdk* @iriszzw
/drivers/pwm/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/sensor/ @MaureenHelm
@@ -206,14 +189,12 @@
/drivers/sensor/st*/ @avisconti
/drivers/serial/uart_altera_jtag_hal.c @wentongwu
/drivers/serial/*ns16550* @andrewboie
/drivers/serial/*nrfx* @Mierunski @anangl
/drivers/serial/Kconfig.litex @mateusz-holenko @kgugala @pgielda
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/serial/*xmc4xxx* @parthitce
/drivers/net/ @jukkar @tbursztyka
/drivers/ptp_clock/ @jukkar
/drivers/pwm/*rv32m1* @henrikbrixandersen
@@ -227,9 +208,8 @@
/drivers/timer/altera_avalon_timer_hal.c @wentongwu
/drivers/timer/riscv_machine_timer.c @nategraff-sifive @kgugala @pgielda
/drivers/timer/litex_timer.c @mateusz-holenko @kgugala @pgielda
/drivers/timer/xlnx_psttc_timer* @wjliang @stephanosio
/drivers/timer/xlnx_psttc_timer.c @wjliang
/drivers/timer/cc13x2_cc26x2_rtc_timer.c @vanti
/drivers/timer/cavs_timer.c @dcpleung
/drivers/usb/ @jfischer-phytec-iot @finikorg
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/video/ @loicpoulain
@@ -246,11 +226,8 @@
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/same70* @nandojve
/dts/arm/atmel/samv71* @nandojve
/dts/arm/atmel/ @galak
/dts/arm/broadcom/ @sbranden
/dts/arm/infineon/ @parthitce
/dts/arm/qemu-virt/ @carlocaione
/dts/arm/st/ @erwango
/dts/arm/ti/cc13?2* @bwitherspoon
@@ -261,7 +238,6 @@
/dts/arm/microchip/ @franciscomunoz @albertofloyd @scottwcpg
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efm32_jg_pg* @chrta
/dts/arm/silabs/efr32bg13p* @mnkp
/dts/arm/silabs/efm32jg12b* @chrta
/dts/arm/silabs/efm32pg12b* @chrta
/dts/riscv/microsemi-miv.dtsi @galak
@@ -285,14 +261,15 @@
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda @nategraff-sifive
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/posix/ @aescolar @vanwinkeljan @daor-oti
/dts/posix/ @aescolar @vanwinkeljan
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/st* @avisconti
/ext/hal/cmsis/ @MaureenHelm @galak @stephanosio
/ext/lib/crypto/tinycrypt/ @ceolin
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/drivers/adc.h @anangl
/include/drivers/can.h @alexanderwachter
/include/drivers/counter.h @nordic-krch
/include/drivers/dac.h @martinjaeger
/include/drivers/display.h @vanwinkeljan
/include/drivers/espi.h @albertofloyd @franciscomunoz @scottwcpg
/include/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
@@ -303,22 +280,21 @@
/include/drivers/pcie/ @andrewboie
/include/drivers/hwinfo.h @alexanderwachter
/include/drivers/led.h @Mani-Sadhasivam
/include/drivers/led_strip.h @mbolivar-nordic
/include/drivers/led_strip.h @mbolivar
/include/drivers/sensor.h @MaureenHelm
/include/drivers/spi.h @tbursztyka
/include/drivers/lora.h @Mani-Sadhasivam
/include/drivers/peci.h @albertofloyd @franciscomunoz @scottwcpg
/include/app_memory/ @andrewboie
/include/arch/arc/ @vonhust @ruuddw
/include/arch/arc/arch.h @andrewboie
/include/arch/arc/v2/irq.h @andrewboie
/include/arch/arm/aarch32/ @MaureenHelm @galak @ioannisg
/include/arch/arm/aarch32/cortex_a_r/ @stephanosio
/include/arch/arm/aarch32/cortex_r/ @stephanosio
/include/arch/arm/aarch64/ @carlocaione
/include/arch/arm/aarch32/irq.h @andrewboie
/include/arch/nios2/ @andrewboie
/include/arch/nios2/arch.h @andrewboie
/include/arch/posix/ @aescolar @daor-oti
/include/arch/posix/ @aescolar
/include/arch/riscv/ @nategraff-sifive @kgugala @pgielda
/include/arch/x86/ @andrewboie @wentongwu
/include/arch/common/ @andrewboie @andyross @nashif
@@ -330,7 +306,6 @@
/include/tracing/ @wentongwu @nashif
/include/debug/ @nashif
/include/device.h @wentongwu @nashif
/include/devicetree.h @galak
/include/display/ @vanwinkeljan
/include/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
@@ -360,9 +335,7 @@
/include/toolchain/ @andrewboie @andyross
/include/zephyr.h @andrewboie @andyross
/kernel/ @andrewboie @andyross
/lib/fnmatch/ @carlescufi
/lib/gui/ @vanwinkeljan
/lib/open-amp/ @arnopo
/lib/os/ @andrewboie @andyross
/lib/posix/ @pfalcon
/lib/cmsis_rtos_v2/ @nashif
@@ -373,19 +346,18 @@
/kernel/idle.c @andrewboie @andyross @nashif
/samples/ @nashif
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/boards/*microbit* @jhe
/samples/basic/servo_motor/*microbit* @jhe
/lib/updatehub/ @nandojve @otavio
/samples/bluetooth/ @jhedberg @Vudentz @joerchan
/samples/boards/intel_s1000_crb/ @sathishkuttan @dcpleung @nashif
/samples/display/ @vanwinkeljan
/samples/drivers/can/ @alexanderwachter
/samples/drivers/CAN/ @alexanderwachter
/samples/drivers/display/ @vanwinkeljan
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/drivers/counter/maxim_ds3231/ @pabigot
/samples/net/ @jukkar @tbursztyka @pfalcon
/samples/net/dns_resolve/ @jukkar @tbursztyka @pfalcon
/samples/net/lwm2m_client/ @rlubos
/samples/net/lwm2m_client/ @mike-scott
/samples/net/mqtt_publisher/ @jukkar @tbursztyka
/samples/net/sockets/coap_*/ @rveerama1
/samples/net/sockets/ @jukkar @tbursztyka @pfalcon
@@ -394,51 +366,44 @@
/samples/shields/ @avisconti
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/shell/ @jakub-uC @nordic-krch
/samples/subsys/mgmt/mcumgr/smp_svr/ @aunsbjerg @nvlsianpu
/samples/subsys/usb/ @jfischer-phytec-iot @finikorg
/samples/subsys/power/ @wentongwu @pabigot
/samples/userspace/ @andrewboie
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/kconfig/ @ulfalizer
/scripts/elf_helper.py @andrewboie
/scripts/sanity_chk/expr_parser.py @nashif
/scripts/gen_app_partitions.py @andrewboie
/scripts/dts/ @ulfalizer @galak
/scripts/release/ @nashif
/scripts/ci/ @nashif
/arch/x86/gen_gdt.py @andrewboie
/arch/x86/gen_idt.py @andrewboie
/scripts/gen_kobject_list.py @andrewboie
/scripts/gen_priv_stacks.py @andrewboie @agross-oss @ioannisg
/scripts/gen_syscalls.py @andrewboie
/scripts/net/ @jukkar @pfl
/scripts/process_gperf.py @andrewboie
/scripts/gen_relocate_app.py @wentongwu
/scripts/requirements*.txt @mbolivar @galak @nashif
/scripts/tests/sanitycheck/ @aasthagr
/scripts/tracing/ @wentongwu
/scripts/sanity_chk/ @nashif
/scripts/sanitycheck @nashif
/scripts/series-push-hook.sh @erwango
/scripts/west_commands/ @mbolivar-nordic
/scripts/west-commands.yml @mbolivar-nordic
/scripts/west_commands/ @mbolivar
/scripts/west-commands.yml @mbolivar
/scripts/zephyr_module.py @tejlmand
/scripts/user_wordsize.py @cfriedt
/scripts/valgrind.supp @aescolar @daor-oti
/share/zephyr-package/ @tejlmand
/share/zephyrunittest-package/ @tejlmand
/scripts/valgrind.supp @aescolar
/subsys/bluetooth/ @joerchan @jhedberg @Vudentz
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot
/subsys/bluetooth/mesh/ @jhedberg @trond-snekvik @joerchan @Vudentz
/subsys/canbus/ @alexanderwachter
/subsys/cpp/ @pabigot @vanwinkeljan
/subsys/debug/ @nashif
/subsys/dfu/ @nvlsianpu
/subsys/tracing/ @nashif @wentongwu
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar @daor-oti
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar
/subsys/disk/disk_access_spi_sdhc.c @JunYangNXP
/subsys/disk/disk_access_sdhc.h @JunYangNXP
/subsys/disk/disk_access_usdhc.c @JunYangNXP
/subsys/disk/disk_access_stm32_sdmmc.c @anthonybrandon
/subsys/fb/ @jfischer-phytec-iot
/subsys/fs/ @nashif
/subsys/fs/fcb/ @nvlsianpu
@@ -448,21 +413,19 @@
/subsys/logging/ @nordic-krch
/subsys/logging/log_backend_net.c @nordic-krch @jukkar
/subsys/mgmt/ @carlescufi @nvlsianpu
/subsys/mgmt/smp_udp.c @aunsbjerg
/subsys/net/buf.c @jukkar @jhedberg @tbursztyka @pfalcon
/subsys/net/ip/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/dns/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/lwm2m/ @rlubos
/subsys/net/lib/lwm2m/ @mike-scott
/subsys/net/lib/config/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/mqtt/ @jukkar @tbursztyka @rlubos
/subsys/net/lib/openthread/ @rlubos
/subsys/net/lib/coap/ @rveerama1
/subsys/net/lib/sockets/socketpair.c @cfriedt
/subsys/net/lib/sockets/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @jukkar @tbursztyka
/subsys/net/l2/canbus/ @alexanderwachter @jukkar
/subsys/net/*/openthread/ @rlubos
/subsys/power/ @wentongwu @pabigot
/subsys/random/ @dleach02
/subsys/settings/ @nvlsianpu
@@ -472,18 +435,15 @@
/subsys/usb/ @jfischer-phytec-iot @finikorg
/tests/ @nashif
/tests/application_development/libcxx/ @pabigot
/tests/arch/arm/ @ioannisg @stephanosio
/tests/benchmarks/cmsis_dsp/ @stephanosio
/tests/boards/native_posix/ @aescolar @daor-oti
/tests/arch/arm/ @ioannisg
/tests/boards/native_posix/ @aescolar
/tests/boards/intel_s1000_crb/ @dcpleung @sathishkuttan
/tests/bluetooth/ @joerchan @jhedberg @Vudentz
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar @wopu-ot
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar
/tests/posix/ @pfalcon
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin
/tests/drivers/can/ @alexanderwachter
/tests/drivers/counter/ @nordic-krch
/tests/drivers/counter/maxim_ds3231_api/ @pabigot
/tests/drivers/flash_simulator/ @nvlsianpu
/tests/drivers/gpio/ @mnkp @pabigot
/tests/drivers/hwinfo/ @alexanderwachter
@@ -491,18 +451,16 @@
/tests/drivers/uart/uart_async_api/ @Mierunski
/tests/kernel/ @andrewboie @andyross @nashif
/tests/lib/ @nashif
/tests/lib/cmsis_dsp/ @stephanosio
/tests/net/ @jukkar @tbursztyka @pfalcon
/tests/net/buf/ @jukkar @jhedberg @tbursztyka @pfalcon
/tests/net/lib/ @jukkar @tbursztyka @pfalcon
/tests/net/lib/http_header_fields/ @jukkar @tbursztyka
/tests/net/lib/mqtt_packet/ @jukkar @tbursztyka
/tests/net/lib/coap/ @rveerama1
/tests/net/socket/socketpair/ @cfriedt
/tests/net/socket/ @jukkar @tbursztyka @pfalcon
/tests/subsys/fs/ @nashif @wentongwu
/tests/subsys/settings/ @nvlsianpu
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
*posix*.rst @aescolar @daor-oti
*posix*.rst @aescolar


@@ -32,6 +32,7 @@ source "dts/Kconfig"
source "drivers/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
source "ext/Kconfig"
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig"
@@ -288,14 +289,6 @@ config OUTPUT_DISASSEMBLY
help
Create an .lst file with the assembly listing of the firmware.
config OUTPUT_DISASSEMBLE_ALL
bool "Disassemble all sections with source. Fill zeros."
default n
depends on OUTPUT_DISASSEMBLY
help
The .lst file will contain complete disassembly of the firmware
not just those expected to contain instructions including zeros
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
@@ -356,13 +349,6 @@ config MAKEFILE_EXPORTS
Generates a file with build information that can be read by
third party Makefile-based build systems.
config LEGACY_DEVICETREE_MACROS
bool "Allow use of legacy devicetree macros"
help
Allows use of legacy devicetree macros which were used in
Zephyr 2.2 and previous versions, rather than the devicetree.h
API introduced during the Zephyr 2.3 development cycle.
endmenu
endmenu


@@ -2,6 +2,10 @@
# Top level makefile for documentation build
#
ifndef ZEPHYR_BASE
$(error The ZEPHYR_BASE environment variable must be set)
endif
BUILDDIR ?= doc/_build
DOC_TAG ?= development
SPHINXOPTS ?= -q
@@ -19,6 +23,3 @@ htmldocs-fast:
pdfdocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} pdfdocs
doxygen:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} doxygen


@@ -52,7 +52,7 @@ Here's a quick summary of resources to help you find your way around:
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://github.com/zephyrproject-rtos/zephyr/releases
* **Releases**: https://zephyrproject.org/developers/#downloads
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,


@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 3
PATCHLEVEL = 0
VERSION_MINOR = 2
PATCHLEVEL = 1
VERSION_TWEAK = 0
EXTRAVERSION = rc2
EXTRAVERSION =


@@ -27,9 +27,6 @@ config ARM
bool
select ARCH_IS_SET
select HAS_DTS
# FIXME: current state of the code for all ARM requires this, but
# is really only necessary for Cortex-M with ARM MPU!
select GEN_PRIV_STACKS
help
ARM architecture
@@ -38,7 +35,6 @@ config X86
select ARCH_IS_SET
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN if !X86_64
help
x86 architecture
@@ -63,13 +59,13 @@ config XTENSA
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select XTENSA_HAL if "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "xcc"
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
select HAS_DTS
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
@@ -222,8 +218,15 @@ config PRIVILEGED_STACK_SIZE
This option sets the privileged stack region size that will be used
in addition to the user mode thread stack. During normal execution,
this region will be inaccessible from user mode. During system calls,
this region will be utilized by the system call. This value must be
a multiple of the minimum stack alignment.
this region will be utilized by the system call.
config PRIVILEGED_STACK_TEXT_AREA
int "Privileged stacks text area"
default 512 if COVERAGE_GCOV
default 256
depends on ARCH_HAS_USERSPACE
help
Stack text area size for privileged stacks.
config KOBJECT_TEXT_AREA
int "Size if kobject text area"
@@ -234,17 +237,6 @@ config KOBJECT_TEXT_AREA
help
Size of kernel object text area. Used in linker script.
config GEN_PRIV_STACKS
bool
help
Selected if the architecture requires that privilege elevation stacks
be allocated in a separate memory area. This is typical of arches
whose MPUs require regions to be power-of-two aligned/sized.
FIXME: This should be removed and replaced with checks against
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT, but both ARM and ARC
changes will be necessary for this.
config STACK_GROWS_UP
bool "Stack grows towards higher memory addresses"
help
@@ -465,13 +457,6 @@ config CPU_HAS_FPU
This option is enabled when the CPU has hardware floating point
unit.
config CPU_HAS_FPU_DOUBLE_PRECISION
bool
select CPU_HAS_FPU
help
When enabled, this indicates that the CPU has a double floating point
precision unit.
config CPU_HAS_MPU
bool
help
@@ -521,38 +506,23 @@ config MPU_GAP_FILLING
documentation for more information on how this option is
used.
menu "Floating Point Options"
config FPU
bool "Enable floating point unit (FPU)"
menuconfig FLOAT
bool "Floating point"
depends on CPU_HAS_FPU
depends on ARC || ARM || RISCV || X86
depends on ARM || X86 || ARC
help
This option enables the hardware Floating Point Unit (FPU), in order to
support using the floating point registers and instructions.
This option allows threads to use the floating point registers.
By default, only a single thread may use the registers.
When this option is enabled, by default, threads may use the floating
point registers only in an exclusive manner, and this usually means that
only one thread may perform floating point operations.
Disabling this option means that any thread that uses a
floating point register will get a fatal exception.
If it is necessary for multiple threads to perform concurrent floating
point operations, the "FPU register sharing" option must be enabled to
preserve the floating point registers across context switches.
Note that this option cannot be selected for the platforms that do not
include a hardware floating point unit; the floating point support for
those platforms is dependent on the availability of the toolchain-
provided software floating point library.
config FPU_SHARING
bool "FPU register sharing"
depends on FPU
config FP_SHARING
bool "Floating point register sharing"
depends on FLOAT
help
This option enables preservation of the hardware floating point registers
across context switches to allow multiple threads to perform concurrent
floating point operations.
endmenu
This option allows multiple threads to use the floating point
registers.
config ARCH
string


@@ -153,7 +153,7 @@ config ARC_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
select ARC_STACK_CHECKING if ARC_HAS_STACK_CHECKING
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU && ARC_MPU_VER !=2)
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU)
select THREAD_STACK_INFO
help
This option enables either:
@@ -211,7 +211,7 @@ config CODE_DENSITY
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y if FPU
default y if FLOAT
help
Depending on the configuration, CPU can contain accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPR so

View File

@@ -95,6 +95,37 @@ static void sched_ipi_handler(void *unused)
z_sched_ipi();
}
/**
* @brief Check whether a thread switch is needed in ISR context
*
* @details u64_t is used so the compiler returns the value in the
* (r0, r1) register pair: r0 holds the new thread and r1 the old
* thread. If r0 == 0, no thread switch is needed.
*/
u64_t z_arc_smp_switch_in_isr(void)
{
u64_t ret = 0;
u32_t new_thread;
u32_t old_thread;
old_thread = (u32_t)_current;
new_thread = (u32_t)z_get_next_ready_thread();
if (new_thread != old_thread) {
#ifdef CONFIG_TIMESLICING
z_reset_time_slice();
#endif
_current_cpu->swap_ok = 0;
((struct k_thread *)new_thread)->base.cpu =
arch_curr_cpu()->id;
_current_cpu->current = (struct k_thread *) new_thread;
ret = new_thread | ((u64_t)(old_thread) << 32);
}
return ret;
}
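A minimal sketch of how a C-level caller could unpack the packed return value; the real consumer is the assembly in the interrupt exit paths, which reads r0 and r1 directly:

/* Illustrative only: on this 32-bit little-endian target the low word
 * (r0) carries the new thread and the high word (r1) the old thread.
 */
u64_t packed = z_arc_smp_switch_in_isr();
struct k_thread *new_thread = (struct k_thread *)(u32_t)packed;
struct k_thread *old_thread = (struct k_thread *)(u32_t)(packed >> 32);

if (new_thread != NULL) {
        /* a switch is needed: resume new_thread, retire old_thread */
}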
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
@@ -114,6 +145,9 @@ static int arc_smp_init(struct device *dev)
struct arc_connect_bcr bcr;
/* necessary master core init */
_kernel.cpus[0].id = 0;
_kernel.cpus[0].irq_stack = Z_THREAD_STACK_BUFFER(_interrupt_stack)
+ CONFIG_ISR_STACK_SIZE;
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);

View File

@@ -96,7 +96,7 @@ static void dcache_flush_mlines(u32_t start_addr, u32_t size)
end_addr = start_addr + size - 1;
start_addr &= (u32_t)(~(DCACHE_LINE_SIZE - 1));
key = arch_irq_lock(); /* --enter critical section-- */
key = irq_lock(); /* --enter critical section-- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLDL, start_addr);
@@ -113,7 +113,7 @@ static void dcache_flush_mlines(u32_t start_addr, u32_t size)
start_addr += DCACHE_LINE_SIZE;
} while (start_addr <= end_addr);
arch_irq_unlock(key); /* --exit critical section-- */
irq_unlock(key); /* --exit critical section-- */
}
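The loop above rounds the start address down to a cache-line boundary and then walks the range line by line. A hedged C sketch of the same arithmetic, assuming a 32-byte line purely for illustration:

#define LINE_SIZE 32    /* illustrative; the driver uses DCACHE_LINE_SIZE */

static void flush_range_sketch(u32_t start, u32_t size)
{
        u32_t end = start + size - 1;

        start &= ~(u32_t)(LINE_SIZE - 1);       /* align down to a line */
        while (start <= end) {
                /* flush the line containing 'start' here */
                start += LINE_SIZE;
        }
}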

View File

@@ -47,15 +47,14 @@ SECTION_FUNC(TEXT, arch_cpu_idle)
* It's found that (in nsim_hs_smp), when cpu
* is sleeping, no response to inter-processor interrupt
* although it's pending and interrupts are enabled.
* (SNPS JIRA issue P10019563-41294 has been filed to track this.)
* Here is a workaround:
*/
#if defined(CONFIG_SOC_NSIM) && defined(CONFIG_SMP)
#if !defined(CONFIG_SOC_NSIM) && !defined(CONFIG_SMP)
sleep r1
#else
seti r1
_z_arc_idle_loop:
b _z_arc_idle_loop
#else
sleep r1
#endif
j_s [blink]
nop

View File

@@ -149,25 +149,30 @@ SECTION_FUNC(TEXT, _firq_exit)
_check_nest_int_by_irq_act r0, r1
jne _firq_no_switch
jne _firq_no_reschedule
/* sp is the struct k_thread **old argument of z_arc_switch_in_isr,
* which is a wrapper around z_get_next_switch_handle.
* r0 contains the first thread in the ready queue; if it equals
* _current (r2), no switch is needed, otherwise switch.
*/
_get_next_switch_handle
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
/* restore interrupted context' sp */
pop sp
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
brne r0, 0, _firq_reschedule
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
cmp r0, r2
bne _firq_switch
/* fall to no switch */
/* Check if the current thread (in r2) is the cached thread */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
brne r0, r2, _firq_reschedule
#endif
/* fall to no rescheduling */
.balign 4
_firq_no_switch:
_firq_no_reschedule:
pop sp
/*
* Keeping this code block close to those that use it allows using brxx
* instruction instead of a pair of cmp and bxx
@@ -178,17 +183,19 @@ _firq_no_switch:
rtie
.balign 4
_firq_switch:
_firq_reschedule:
pop sp
#if CONFIG_RGF_NUM_BANKS != 1
#ifdef CONFIG_SMP
/*
* save r0, r2 in irq stack for a while, as they will be changed by register
* save r0, r1 in irq stack for a while, as they will be changed by register
* bank switch
*/
_get_curr_cpu_irq_stack r1
st r0, [r1, -4]
st r2, [r1, -8]
_get_curr_cpu_irq_stack r2
st r0, [r2, -4]
st r1, [r2, -8]
#endif
/*
* We know there is no interrupted interrupt of lower priority at this
* point, so when switching back to register bank 0, it will contain the
@@ -236,35 +243,79 @@ _firq_create_irq_stack_frame:
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET] /* ilink into pc */
#ifdef CONFIG_SMP
/*
* load r0, r2 from irq stack
* load r0, r1 from irq stack
*/
_get_curr_cpu_irq_stack r1
ld r0, [r1, -4]
ld r2, [r1, -8]
_get_curr_cpu_irq_stack r2
ld r0, [r2, -4]
ld r1, [r2, -8]
#endif
/* r2 is old thread */
_irq_store_old_thread_callee_regs
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of the interrupted thread; it
* will be restored when the thread is switched back in
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
#ifdef CONFIG_SMP
mov_s r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
#endif
_save_callee_saved_regs
st _CAUSE_FIRQ, [r2, _thread_offset_to_relinquish_cause]
/* mov new thread (r0) to r2 */
#ifdef CONFIG_SMP
mov_s r2, r0
#else
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
mov r2, r0
_load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
breq r3, _CAUSE_RIRQ, _firq_switch_from_rirq
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* see comments in regular_irq.S
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _firq_return_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _firq_switch_from_firq
breq r3, _CAUSE_FIRQ, _firq_return_from_firq
nop_s
/* fall through */
.balign 4
_firq_switch_from_coop:
_set_misc_regs_irq_switch_from_coop
_firq_return_from_coop:
/* pc into ilink */
pop_s r0
mov_s ilink, r0
@@ -275,10 +326,18 @@ _firq_switch_from_coop:
rtie
.balign 4
_firq_switch_from_rirq:
_firq_switch_from_firq:
_firq_return_from_rirq:
_firq_return_from_firq:
_set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
_pop_irq_stack_frame
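The exit paths above branch on the relinquish cause recorded in the incoming thread, i.e. how that thread was switched out last time, to pick the matching restore sequence. A rough C rendering of that dispatch, purely for orientation (the authoritative logic is the assembly above; the field and return_from_* names are placeholders):

switch (thread->relinquish_cause) {
case _CAUSE_COOP:       /* thread left via arch_switch() */
        return_from_coop();
        break;
case _CAUSE_RIRQ:       /* thread was preempted by a regular interrupt */
        return_from_rirq();
        break;
case _CAUSE_FIRQ:       /* thread was preempted by a fast interrupt */
        return_from_firq();
        break;
}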

View File

@@ -19,6 +19,7 @@
#include <syscall.h>
GTEXT(_Fault)
GTEXT(z_do_kernel_oops)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -37,17 +38,6 @@ GTEXT(__ev_maligned)
GTEXT(z_irq_do_offload);
#endif
.macro _save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
.endm
/*
* The exception handling will use top part of interrupt stack to
@@ -99,7 +89,15 @@ _exc_entry:
*/
_create_irq_stack_frame
_save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* sp is parameter of _Fault */
mov_s r0, sp
@@ -116,11 +114,21 @@ _exc_return:
* exception comes out: thread context, irq context, or nested irq context?
*/
_get_next_switch_handle
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
breq r0, 0, _exc_return_from_exc
mov_s r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_exc
mov_s r2, r0
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/*
@@ -176,13 +184,10 @@ _exc_return:
mov r2, ilink
#endif
/* Assumption: r2 has next thread */
b _rirq_newthread_switch
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
_exc_return_from_exc:
/* exception handler may change return address.
* reload it
*/
ld_s r0, [sp, ___isf_t_pc_OFFSET]
sr r0, [_ARC_V2_ERET]
@@ -190,7 +195,7 @@ _exc_return_from_exc:
mov_s sp, ilink
rtie
/* separate entry for traps, which may be used by irq_offload and USERSPACE */
SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
/* get the id of trap_s */
lr ilink, [_ARC_V2_ECR]
@@ -213,12 +218,15 @@ valid_syscall_id:
*/
_create_irq_stack_frame
_save_exc_regs_into_stack
/* return from the exception and run the syscall in kernel mode,
* so the U bit needs to be cleared; r0 is already loaded
* with ERSTATUS by _save_exc_regs_into_stack
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0, [_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
bclr r0, r0, _ARC_V2_STATUS32_U_BIT
sr r0, [_ARC_V2_ERSTATUS]
@@ -241,7 +249,15 @@ _do_non_syscall_trap:
/* save caller saved registers */
_create_irq_stack_frame
_save_exc_regs_into_stack
#ifdef CONFIG_ARC_HAS_SECURE
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
@@ -260,8 +276,6 @@ exc_nest_handle:
_dec_int_nest_counter r0, r1
_pop_irq_stack_frame
/* ERSTATUS, ERET are not changed, so ok to rtie */
rtie
#endif /* CONFIG_IRQ_OFFLOAD */
b _exc_entry

View File

@@ -57,7 +57,7 @@ void z_arc_firq_stack_set(void)
/* z_arc_firq_stack_set must be called with interrupts disabled, as
* it can be called not only in the init phase but also in other places
*/
unsigned int key = arch_irq_lock();
unsigned int key = irq_lock();
__asm__ volatile (
/* only ilink will not be banked, so use ilink as channel
@@ -79,7 +79,7 @@ void z_arc_firq_stack_set(void)
"i"(_ARC_V2_STATUS32_RB(1)),
"i"(~_ARC_V2_STATUS32_RB(7))
);
arch_irq_unlock(key);
irq_unlock(key);
}
#endif
@@ -95,7 +95,10 @@ void z_arc_firq_stack_set(void)
void arch_irq_enable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_enable(irq);
irq_unlock(key);
}
/*
@@ -109,7 +112,10 @@ void arch_irq_enable(unsigned int irq)
void arch_irq_disable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_disable(irq);
irq_unlock(key);
}
/**
@@ -141,6 +147,8 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
ARG_UNUSED(flags);
unsigned int key = irq_lock();
__ASSERT(prio < CONFIG_NUM_IRQ_PRIO_LEVELS,
"invalid priority %d for irq %d", prio, irq);
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocated to secure world
@@ -154,6 +162,7 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
ARC_N_IRQ_START_LEVEL : prio;
#endif
z_arc_v2_irq_unit_prio_set(irq, prio);
irq_unlock(key);
}
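Both functions wrap the interrupt-unit access in the canonical Zephyr critical-section pattern, which looks like this in application code:

unsigned int key = irq_lock();

/* ... work that must not be preempted by interrupts ... */

irq_unlock(key);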
/*

View File

@@ -35,35 +35,37 @@ _rirq_enter/_firq_enter: they are jump points.
The flow is the following:
ISR -> _isr_wrapper -- + -> _rirq_enter -> _isr_demux -> ISR -> _rirq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
|
+ -> _firq_enter -> _isr_demux -> ISR -> _firq_exit
Context switch explanation:
The context switch code is spread in these files:
isr_wrapper.s, switch.s, swap_macros.h, fast_irq.s, regular_irq.s
isr_wrapper.s, switch.s, swap_macros.s, fast_irq.s, regular_irq.s
IRQ stack frame layout:
high address
high address
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
status32
pc
lp_count
lp_start
lp_end
blink
r13
...
sp -> r0
low address
low address
The context switch code adopts this standard so that it is easier to follow:
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
- r1 contains _kernel ASAP and is not overwritten over the lifespan of
the functions.
- r2 contains _kernel.current ASAP, and the incoming thread when we
transition from outgoing thread to incoming thread
Not loading _kernel into r0 allows loading _kernel without stomping on
the parameter in r0 in arch_switch().
@@ -98,49 +100,47 @@ done upfront, and the rest is done when needed:
o RIRQ
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread's stack.
All needed registers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
the kernel. If a context switch is required, the callee-saved GPRs
are then saved in the thread control structure (TCS).
o FIRQ
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is
the case, the FIRQ does not take a scheduling decision and leaves it
the RIRQ to handle. This limits the amount of code that has to run at
interrupt-level.
First, a FIRQ can be interrupting a lower-priority RIRQ: if this is the case,
the FIRQ does not take a scheduling decision and leaves it the RIRQ to
handle. This limits the amount of code that has to run at interrupt-level.
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt
and let the RIRQ do the context switch. At entry, one register is
needed in order to have code to save other registers. r0 is saved
first in the stack and restored later
CONFIG_RGF_NUM_BANKS==1 case:
Registers are saved on the stack frame just as they are for RIRQ.
Context switch can happen just as it does in the RIRQ case, however,
if the FIRQ interrupted a RIRQ, the FIRQ will return from interrupt and
let the RIRQ do the context switch. At entry, one register is needed in order
to have code to save other registers. r0 is saved first in a global called
saved_r0.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own
stack. GPRs are banked, loop registers are saved in unused callee saved
regs upon interrupt entry. If returning to a thread, loop registers are
restored and the CPU switches back to bank 0 for the GPRs. If a context
switch is needed, at this point only are all the registers saved.
First, a stack frame with the same layout as the automatic RIRQ one is
created and then the callee-saved GPRs are saved in the stack.
status32_p0 and ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to
using bank0 of registers, since that is where the registers containing
the exiting thread are saved. Care must be taken not to touch any
register before saving them: the only one usable at that point is the
stack pointer.
CONFIG_RGF_NUM_BANKS!=1 case:
During early initialization, the sp in the 2nd register bank is made to
refer to _firq_stack. This allows for the FIRQ handler to use its own stack.
GPRs are banked, loop registers are saved in unused callee saved regs upon
interrupt entry. If returning to a thread, loop registers are restored and the
CPU switches back to bank 0 for the GPRs. If a context switch is
needed, at this point only are all the registers saved. First, a
stack frame with the same layout as the automatic RIRQ one is created
and then the callee-saved GPRs are saved in the TCS. status32_p0 and
ilink are saved in this case, not status32 and pc.
To create the stack frame, the FIRQ handling code must first go back to using
bank0 of registers, since that is where the registers containing the exiting
thread are saved. Care must be taken not to touch any register before saving
them: the only one usable at that point is the stack pointer.
o coop
When a coop context switch is done, the callee-saved registers are
saved in the stack. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
When a coop context switch is done, the callee-saved registers are
saved in the TCS. The other GPRs do not need to be saved, since the
compiler has already placed them on the stack.
For restoring the contexts, there are six cases. In all cases, the
callee-saved registers of the incoming thread have to be restored. Then, there
@@ -148,56 +148,57 @@ are specifics for each case:
From coop:
o to coop
o to coop
Do a normal function call return.
Do a normal function call return.
o to any irq
o to any irq
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be
restored, then we jump to the interrupted instruction.
The incoming interrupted thread has an IRQ stack frame containing the
caller-saved registers that has to be popped. status32 has to be restored,
then we jump to the interrupted instruction.
From FIRQ:
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from
bank0, and now has to load the incoming one into bank0.
When CONFIG_RGF_NUM_BANKS==1, context switch is done as it is for RIRQ.
When CONFIG_RGF_NUM_BANKS!=1, the processor is put back to using bank0,
not bank1 anymore, because it had to save the outgoing context from bank0,
and now has to load the incoming one
into bank0.
o to coop
o to coop
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
o to any irq
o to any irq
The IRQ has saved the caller-saved registers in a stack frame, which
must be popped, and status32 and pc loaded in status32_p0 and ilink.
The IRQ has saved the caller-saved registers in a stack frame, which must be
popped, and status32 and pc loaded in status32_p0 and ilink.
From RIRQ:
o to coop
o to coop
The interrupt return mechanism in the processor expects a stack frame,
but the outgoing context did not create one. A fake one is created
here, with only the relevant values filled in: pc, status32.
The interrupt return mechanism in the processor expects a stack frame, but
the outgoing context did not create one. A fake one is created here, with
only the relevant values filled in: pc, status32.
There is a discrepancy between the ABI from the ARCv2 docs,
including the way the processor pushes GPRs in pairs in the IRQ stack
frame, and the ABI GCC uses. r13 should be a callee-saved register,
but GCC treats it as caller-saved. This means that the processor pushes
it in the stack frame along with r12, but the compiler does not save it
before entering a function. So, it is saved as part of the callee-saved
registers, and restored there, but the processor restores it _a second
time_ when popping the IRQ stack frame. Thus, the correct value must
also be put in the fake stack frame when returning to a thread that
context switched out cooperatively.
There is a discrepancy between the ABI from the ARCv2 docs, including the
way the processor pushes GPRs in pairs in the IRQ stack frame, and the ABI
GCC uses. r13 should be a callee-saved register, but GCC treats it as
caller-saved. This means that the processor pushes it in the stack frame
along with r12, but the compiler does not save it before entering a
function. So, it is saved as part of the callee-saved registers, and
restored there, but the processor restores it _a second time_ when popping
the IRQ stack frame. Thus, the correct value must also be put in the fake
stack frame when returning to a thread that context switched out
cooperatively.
o to any irq
o to any irq
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
Both types of IRQs already have an IRQ stack frame: simply return from
interrupt.
*/
SECTION_FUNC(TEXT, _isr_wrapper)
@@ -243,9 +244,15 @@ rirq_path:
j_s [r2]
#endif
#if defined(CONFIG_TRACING)
GTEXT(sys_trace_isr_enter)
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
.macro exit_tickless_idle
clri r0 /* do not interrupt exiting tickless idle operations */
push_s r1
push_s r0
mov_s r1, _kernel
ld_s r0, [r1, _kernel_offset_to_idle] /* requested idle duration */
breq r0, 0, _skip_sys_power_save_idle_exit
@@ -256,6 +263,8 @@ rirq_path:
pop_s blink
_skip_sys_power_save_idle_exit:
pop_s r0
pop_s r1
seti r0
.endm
#else
@@ -279,10 +288,6 @@ SECTION_FUNC(TEXT, _isr_demux)
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
#endif
#if defined(CONFIG_TRACING)
bl sys_trace_isr_enter
#endif
/* cannot be done before this point because we must be able to run C */
/* r0 is available to be stomped here, and exit_tickless_idle uses it */
exit_tickless_idle
@@ -310,10 +315,6 @@ irq_hint_handled:
jl_s.d [r1]
ld_s r0, [r0] /* delay slot: ISR parameter into r0 */
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_exit
#endif
#ifdef CONFIG_ARC_HAS_ACCL_REGS
pop r59
pop r58

View File

@@ -18,20 +18,16 @@ config ARC_CORE_MPU
config MPU_STACK_GUARD
bool "Thread Stack Guards"
depends on ARC_CORE_MPU && ARC_MPU_VER !=2
depends on ARC_CORE_MPU
help
Enable thread stack guards via MPU. ARC supports built-in stack protection.
If your core supports that, it is preferred over MPU stack guard.
For ARC_MPU_VER == 2, it requires 2048 extra bytes and strict start-address
alignment, which wastes a great deal of memory, so it is not supported there.
If your core supports that, it is preferred over MPU stack guard
config ARC_MPU
bool "ARC MPU Support"
select ARC_CORE_MPU
select THREAD_STACK_INFO
select MEMORY_PROTECTION
select GEN_PRIV_STACKS if ARC_MPU_VER = 2
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if ARC_MPU_VER = 2
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if ARC_MPU_VER = 3
help
Target has ARC MPU (currently only works for EMSK 2.2/2.3 ARCEM7D)

View File

@@ -75,12 +75,18 @@ static inline int get_region_index_by_type(u32_t type)
- THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return get_num_regions() - mpu_config.num_regions - type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return get_num_regions() - mpu_config.num_regions - type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not supported.
* since stack guard is not enabled.
*/
return get_num_regions() - mpu_config.num_regions - type + 1;
#endif
default:
__ASSERT(0, "Unsupported type");
return -EINVAL;
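A worked example of the reverse allocation above, with illustrative numbers (the real enum values live in the MPU driver headers): with get_num_regions() == 16, mpu_config.num_regions == 6 and THREAD_STACK_REGION == 1, the thread stack region lands at index 16 - 6 - 1 = 9, the first free slot below the statically configured entries, leaving the lower (higher-priority) indexes for the other dynamic region types.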
@@ -195,6 +201,46 @@ void arc_core_mpu_disable(void)
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_MPU_STACK_GUARD)
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0) {
/* the areas before and after the thread's user stack are
* kernel-only. These areas can be used as stack guards.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
} else {
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
}
#else
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
#endif
#endif
#if defined(CONFIG_USERSPACE)
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {

View File

@@ -28,23 +28,6 @@
#define CALC_REGION_END_ADDR(start, size) \
(start + size - (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS))
/* ARC MPU version 3 does not support MPU region overlap in hardware,
* so if we want to allocate MPU regions dynamically (e.g. thread stack,
* memory domain) from a background region, a dynamic region splitting
* approach is needed; please see the comments in
* _dynamic_region_allocate_and_init.
* However, this approach has an impact on thread switch performance.
* As a trade-off, we can use the default MPU region as the background
* region to avoid the dynamic region splitting. This gives more
* privilege to code in kernel mode, which can access memory regions not
* covered by an explicit MPU entry. Considering that memory protection
* is mainly used to isolate malicious code in user mode, it makes sense
* to get better thread switch performance through the default MPU
* region.
* CONFIG_MPU_GAP_FILLING is used to turn this on/off.
*/
#if defined(CONFIG_MPU_GAP_FILLING)
#if defined(CONFIG_USERSPACE) && defined(CONFIG_MPU_STACK_GUARD)
/* 1 for stack guard , 1 for user thread, 1 for split */
#define MPU_REGION_NUM_FOR_THREAD 3
@@ -55,8 +38,6 @@
#define MPU_REGION_NUM_FOR_THREAD 0
#endif
#define MPU_DYNAMIC_REGION_AREAS_NUM 2
/**
* @brief internal structure holding information of
* memory areas where dynamic MPU programming is allowed.
@@ -68,6 +49,9 @@ struct dynamic_region_info {
u32_t attr;
};
#define MPU_DYNAMIC_REGION_AREAS_NUM 2
static u8_t static_regions_num;
static u8_t dynamic_regions_num;
static u8_t dynamic_region_index;
@@ -77,9 +61,6 @@ static u8_t dynamic_region_index;
* regions may be configured.
*/
static struct dynamic_region_info dyn_reg_info[MPU_DYNAMIC_REGION_AREAS_NUM];
#endif /* CONFIG_MPU_GAP_FILLING */
static u8_t static_regions_num;
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* \todo through secure service to access mpu */
@@ -255,6 +236,20 @@ static inline bool _is_user_accessible_region(u32_t r_index, int write)
#endif /* CONFIG_ARC_NORMAL_FIRMWARE */
/**
* This internal function allocates a dynamic MPU region and returns
* the index or error
*/
static inline int _dynamic_region_allocate_index(void)
{
if (dynamic_region_index >= get_num_regions()) {
LOG_ERR("no enough mpu entries %d", dynamic_region_index);
return -EINVAL;
}
return dynamic_region_index++;
}
/**
* This internal function checks the area given by (start, size)
* and returns the index if the area match one MPU entry
@@ -270,21 +265,6 @@ static inline int _get_region_index(u32_t start, u32_t size)
return -EINVAL;
}
#if defined(CONFIG_MPU_GAP_FILLING)
/**
* This internal function allocates a dynamic MPU region and returns
* the index or error
*/
static inline int _dynamic_region_allocate_index(void)
{
if (dynamic_region_index >= get_num_regions()) {
LOG_ERR("no enough mpu entries %d", dynamic_region_index);
return -EINVAL;
}
return dynamic_region_index++;
}
/* @brief allocate and init a dynamic MPU region
*
* This internal function performs the allocation and initialization of
@@ -429,69 +409,6 @@ static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
return _dynamic_region_allocate_and_init(base, size, region_attr);
}
#else
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct region index.
*/
static inline int get_region_index_by_type(u32_t type)
{
/*
* The new MPU regions are allocated per type after the statically
* configured regions. The type is one-indexed rather than
* zero-indexed.
*
* For ARC MPU v2, the smaller index has higher priority, so the
* index is allocated in reverse order. Static regions start from
* the biggest index, then thread related regions.
*
*/
switch (type) {
case THREAD_STACK_USER_REGION:
return static_regions_num + THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return static_regions_num + type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return static_regions_num + type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not enabled.
*/
return static_regions_num + type - 1;
#endif
default:
__ASSERT(0, "Unsupported type");
return -EINVAL;
}
}
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
{
int region_index = get_region_index_by_type(type);
u32_t region_attr = get_region_attr_by_type(type);
LOG_DBG("Region info: 0x%x 0x%x", base, size);
if (region_attr == 0U || region_index < 0) {
return -EINVAL;
}
_region_init(region_index, base, size, region_attr);
return 0;
}
#endif
/* ARC Core MPU Driver API Implementation for ARC MPUv3 */
@@ -502,9 +419,9 @@ void arc_core_mpu_enable(void)
{
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* the default region:
* secure: 0x8000, SID: 0x10000, KW: 0x100, KR: 0x80
* normal: 0x000, SID: 0x10000, KW: 0x100, KR: 0x80, KE: 0x40
*/
#define MPU_ENABLE_ATTR 0x18180
#define MPU_ENABLE_ATTR 0x101c0
#else
#define MPU_ENABLE_ATTR 0
#endif
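The enable attribute is just the OR of the flag values named in the comment. Checking the arithmetic with illustrative defines taken from that comment (the names are made up for the sketch):

#define SID_FLAG        0x10000
#define KW_FLAG         0x100
#define KR_FLAG         0x80
#define KE_FLAG         0x40

/* normal: 0x10000 | 0x100 | 0x80 | 0x40 == 0x101c0 (the new define);
 * secure adds 0x8000: 0x8000 | 0x10000 | 0x100 | 0x80 == 0x18180.
 */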
@@ -530,7 +447,6 @@ void arc_core_mpu_disable(void)
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_MPU_GAP_FILLING)
/* the mpu entries of ARC MPUv3 are divided into 2 parts:
* static entries: global mpu entries, not changed in context switch
* dynamic entries: MPU entries changed in context switch and
@@ -543,7 +459,6 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
* entries
*/
_mpu_reset_dynamic_regions();
#endif
#if defined(CONFIG_MPU_STACK_GUARD)
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0U) {
@@ -584,6 +499,10 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
#endif
#if defined(CONFIG_USERSPACE)
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = thread->mem_domain_info.mem_domain;
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
@@ -594,11 +513,6 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
}
}
#if defined(CONFIG_MPU_GAP_FILLING)
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = thread->mem_domain_info.mem_domain;
/* configure thread's memory domain */
if (mem_domain) {
LOG_DBG("configure thread %p's domain: %p",
@@ -622,9 +536,6 @@ void arc_core_mpu_configure_thread(struct k_thread *thread)
}
pparts++;
}
#else
arc_core_mpu_configure_mem_domain(thread);
#endif
#endif
}
@@ -670,56 +581,10 @@ int arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
*
* @param thread the thread which has memory domain
*/
#if defined(CONFIG_MPU_GAP_FILLING)
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
arc_core_mpu_configure_thread(thread);
}
#else
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
u32_t region_index;
u32_t num_partitions;
u32_t num_regions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = NULL;
if (thread) {
mem_domain = thread->mem_domain_info.mem_domain;
}
if (mem_domain) {
LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
LOG_DBG("disable domain partition regions");
num_partitions = 0U;
pparts = NULL;
}
num_regions = get_num_regions();
region_index = get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
while (num_partitions && region_index < num_regions) {
if (pparts->size > 0) {
LOG_DBG("set region 0x%x 0x%lx 0x%x",
region_index, pparts->start, pparts->size);
_region_init(region_index, pparts->start,
pparts->size, pparts->attr);
region_index++;
}
pparts++;
num_partitions--;
}
while (region_index < num_regions) {
/* clear the remaining mpu entries */
_region_init(region_index, 0, 0, 0);
region_index++;
}
}
#endif
/**
* @brief remove MPU regions for the memory partitions of the memory domain
@@ -747,12 +612,8 @@ void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
index = _get_region_index(pparts->start,
pparts->size);
if (index > 0) {
#if defined(CONFIG_MPU_GAP_FILLING)
_region_set_attr(index,
REGION_KERNEL_RAM_ATTR);
#else
_region_init(index, 0, 0, 0);
#endif
REGION_KERNEL_RAM_ATTR);
}
}
pparts++;
@@ -777,11 +638,7 @@ void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
}
LOG_DBG("remove region 0x%x", region_index);
#if defined(CONFIG_MPU_GAP_FILLING)
_region_set_attr(region_index, REGION_KERNEL_RAM_ATTR);
#else
_region_init(region_index, 0, 0, 0);
#endif
}
/**
@@ -789,13 +646,8 @@ void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
#if defined(CONFIG_MPU_GAP_FILLING)
/* consider the worst case: each partition requires split */
return (get_num_regions() - MPU_REGION_NUM_FOR_THREAD) / 2;
#else
return get_num_regions() -
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) - 1;
#endif
}
/**
@@ -852,18 +704,11 @@ static int arc_mpu_init(struct device *arg)
return -EINVAL;
}
static_regions_num = 0;
/* Disable MPU */
arc_core_mpu_disable();
for (i = 0U; i < mpu_config.num_regions; i++) {
/* skip empty region */
if (mpu_config.mpu_regions[i].size == 0) {
continue;
}
#if defined(CONFIG_MPU_GAP_FILLING)
_region_init(static_regions_num,
_region_init(i,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
@@ -887,22 +732,11 @@ static int arc_mpu_init(struct device *arg)
dynamic_regions_num++;
}
static_regions_num++;
#else
/* dynamic region will be covered by default mpu setting
* no need to configure
*/
if (!(mpu_config.mpu_regions[i].attr & REGION_DYNAMIC)) {
_region_init(static_regions_num,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
static_regions_num++;
}
#endif
}
for (i = static_regions_num; i < num_regions; i++) {
static_regions_num = mpu_config.num_regions;
for (; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
}

View File

@@ -37,11 +37,6 @@ GEN_OFFSET_SYM(_thread_arch_t, u_stack_top);
#endif
#endif
#ifdef CONFIG_USERSPACE
GEN_OFFSET_SYM(_thread_arch_t, priv_stack_start);
#endif
/* ARCv2-specific IRQ stack frame structure member offsets */
GEN_OFFSET_SYM(_isf_t, r0);
GEN_OFFSET_SYM(_isf_t, r1);
@@ -104,7 +99,7 @@ GEN_OFFSET_SYM(_callee_saved_stack_t, r30);
GEN_OFFSET_SYM(_callee_saved_stack_t, r58);
GEN_OFFSET_SYM(_callee_saved_stack_t, r59);
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_status);
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_ctrl);
#ifdef CONFIG_FP_FPU_DA

View File

@@ -23,7 +23,18 @@
GTEXT(_rirq_enter)
GTEXT(_rirq_exit)
GTEXT(_rirq_newthread_switch)
GTEXT(_rirq_common_interrupt_swap)
#if 0 /* TODO: when FIRQ is not present, all would be regular */
#define NUM_REGULAR_IRQ_PRIO_LEVELS CONFIG_NUM_IRQ_PRIO_LEVELS
#else
#define NUM_REGULAR_IRQ_PRIO_LEVELS (CONFIG_NUM_IRQ_PRIO_LEVELS-1)
#endif
/* note: the above define assumes that prio 0 IRQ is for FIRQ, and
* that all others are regular interrupts.
* TODO: Revisit this if FIRQ becomes configurable.
*/
/*
@@ -203,16 +214,23 @@ will be corrupted.
SECTION_FUNC(TEXT, _rirq_enter)
/* the ISR will be handled on a separate interrupt stack,
* so stack checking must be disabled, or an exception will
* be raised
*/
_disable_stack_checking r2
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r2, [_ARC_V2_SEC_STAT]
bclr r2, r2, _ARC_V2_SEC_STAT_SSC_BIT
sflag r2
#else
/* disable stack checking */
lr r2, [_ARC_V2_STATUS32]
bclr r2, r2, _ARC_V2_STATUS32_SC_BIT
kflag r2
#endif
#endif
clri
/* check whether the irq stack is in use; if
* not, switch to the isr stack
*/
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
bne.d rirq_nest
@@ -242,17 +260,41 @@ SECTION_FUNC(TEXT, _rirq_exit)
_check_nest_int_by_irq_act r0, r1
jne _rirq_no_switch
jne _rirq_no_reschedule
/* sp is struct k_thread **old of z_arc_switch_in_isr
* which is a wrapper of z_get_next_switch_handle.
* r0 contains the 1st thread in ready queue. if
* it equals _current(r2) ,then do swap, or no swap.
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
cmp_s r0, 0
beq _rirq_no_reschedule
mov_s r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/*
* Both (a)reschedule and (b)non-reschedule cases need to load the
* current thread's stack, but don't have to use it until the decision
* is taken: load the delay slots with the 'load stack pointer'
* instruction.
*
* a) needs to load it to save outgoing context.
* b) needs to load it to restore the interrupted context.
*/
_get_next_switch_handle
cmp r0, r2
beq _rirq_no_switch
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
cmp_s r0, r2
beq _rirq_no_reschedule
/* cached thread to run is in r0, fall through */
#endif
.balign 4
_rirq_reschedule:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to remember SEC_STAT.IRM bit */
@@ -260,34 +302,81 @@ SECTION_FUNC(TEXT, _rirq_exit)
push_s r3
#endif
/* r2 is old thread */
_irq_store_old_thread_callee_regs
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of the interrupted thread
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
/* _save_callee_saved_regs expects outgoing thread in r2 */
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
/* mov new thread (r0) to r2 */
mov r2, r0
#ifdef CONFIG_SMP
mov_s r2, r0
#else
/* incoming thread is in r0: it becomes the new 'current' */
mov_s r2, r0
st_s r2, [r1, _kernel_offset_to_current]
#endif
/* _rirq_newthread_switch required by exception handling */
.balign 4
_rirq_newthread_switch:
_rirq_common_interrupt_swap:
/* r2 contains pointer to new thread */
_load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
breq r3, _CAUSE_RIRQ, _rirq_switch_from_rirq
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* when USERSPACE is enabled, according to ARCv2 ISA, SP will be switched
* if interrupt comes out in user mode, and will be recorded in bit 31
* (U bit) of IRQ_ACT. when interrupt exits, SP will be switched back
* according to U bit.
*
* If a context switch happens inside the interrupt, the target sp must be
* the thread's kernel stack, and no hardware sp switch is needed, so the
* U bit should be cleared.
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _rirq_return_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _rirq_switch_from_firq
breq r3, _CAUSE_FIRQ, _rirq_return_from_firq
nop_s
/* fall through */
.balign 4
_rirq_switch_from_coop:
_rirq_return_from_coop:
/* a cooperative switch does not happen inside an irq, so
* some regs need to be set up for the irq return
*/
_set_misc_regs_irq_switch_from_coop
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* must return to secure mode, so set IRM bit to 1 */
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_IRM_BIT
sflag r0
#endif
/*
* See verbose explanation of
@@ -314,10 +403,22 @@ _rirq_switch_from_coop:
rtie
.balign 4
_rirq_switch_from_firq:
_rirq_switch_from_rirq:
_rirq_return_from_firq:
_rirq_return_from_rirq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
_rirq_no_reschedule:
_set_misc_regs_irq_switch_from_irq
_rirq_no_switch:
rtie
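The USERSPACE blocks above save bit 31 (the U bit) of AUX_IRQ_ACT on the stack at switch-out and OR it back in at switch-in. A C rendering of that bookkeeping, with hypothetical accessors standing in for the lr/sr instructions:

u32_t act = read_aux_irq_act();         /* hypothetical accessor */
u32_t u_bit = act & 0x80000000;         /* remember user/kernel state */

/* ... thread is switched out and later switched back in ... */

act = read_aux_irq_act();
write_aux_irq_act(act | u_bit);         /* restore interrupted state */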

View File

@@ -16,14 +16,14 @@
#include <arch/cpu.h>
#include <swap_macros.h>
GDATA(z_interrupt_stacks)
GDATA(_interrupt_stack)
GDATA(z_main_stack)
GDATA(_VectorTable)
/* use one of the available interrupt stacks during init */
#define INIT_STACK z_interrupt_stacks
#define INIT_STACK _interrupt_stack
#define INIT_STACK_SIZE CONFIG_ISR_STACK_SIZE
GTEXT(__reset)
@@ -126,7 +126,7 @@ done_cache_invalidate:
jl @_sys_resume_from_deep_sleep
#endif
#if defined(CONFIG_SMP) || CONFIG_MP_NUM_CPUS > 1
#if CONFIG_MP_NUM_CPUS > 1
_get_cpu_id r0
breq r0, 0, _master_core_startup
@@ -134,9 +134,6 @@ done_cache_invalidate:
* Non-masters wait for master core (core 0) to boot enough
*/
_slave_core_wait:
#if CONFIG_MP_NUM_CPUS == 1
kflag 1
#endif
ld r1, [arc_cpu_wake_flag]
brne r0, r1, _slave_core_wait
@@ -145,9 +142,7 @@ _slave_core_wait:
st 0, [arc_cpu_wake_flag]
#if defined(CONFIG_ARC_FIRQ_STACK)
push r0
jl z_arc_firq_stack_set
pop r0
#endif
j z_arc_slave_start
@@ -163,7 +158,7 @@ _master_core_startup:
mov_s sp, z_main_stack
add sp, sp, CONFIG_MAIN_STACK_SIZE
mov_s r0, z_interrupt_stacks
mov_s r0, _interrupt_stack
mov_s r1, 0xaa
mov_s r2, CONFIG_ISR_STACK_SIZE
jl memset

View File

@@ -67,12 +67,15 @@ SECTION_FUNC(TEXT, arch_switch)
#endif
/*
* r0 = new_thread->switch_handle = switch_to thread,
* r1 = &old_thread->switch_handle
* get old_thread from r1
* r1 = &old_thread->switch_handle = &switch_from thread
*/
sub r2, r1, ___thread_t_switch_handle_OFFSET
ld_s r2, [r1]
/*
* r2 may be dummy_thread in z_cstart, dummy_thread->switch_handle
* must be 0
*/
breq r2, 0, _switch_to_target_thread
st _CAUSE_COOP, [r2, _thread_offset_to_relinquish_cause]
@@ -94,16 +97,39 @@ SECTION_FUNC(TEXT, arch_switch)
push_s blink
_store_old_thread_callee_regs
_save_callee_saved_regs
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking here, as sp will be changed to the target
* thread's sp
*/
_disable_stack_checking r3
#if defined(CONFIG_ARC_HAS_SECURE) && defined(CONFIG_ARC_SECURE_FIRMWARE)
bclr r3, r3, _ARC_V2_SEC_STAT_SSC_BIT
sflag r3
#else
bclr r3, r3, _ARC_V2_STATUS32_SC_BIT
kflag r3
#endif
#endif
_switch_to_target_thread:
mov_s r2, r0
_load_new_thread_callee_regs
/* entering here, r2 contains the new current thread */
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
bl configure_mpu_thread
pop_s r2
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _switch_return_from_rirq
nop_s
@@ -126,12 +152,9 @@ _switch_return_from_coop:
kflag r3 /* write status32 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s blink
bl read_timer_end_of_swap
pop_s blink
#endif /* CONFIG_EXECUTION_BENCHMARKING */
b _capture_value_for_benchmarking
#endif
return_loc:
j_s [blink]
@@ -139,14 +162,24 @@ _switch_return_from_coop:
_switch_return_from_rirq:
_switch_return_from_firq:
_set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
/* use the lowest interrupt priority to simulate
* an interrupt return that loads the remaining regs of the new
* thread
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
lr r3, [_ARC_V2_AUX_IRQ_ACT]
/* use lowest interrupt priority */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
or r3, r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
@@ -162,3 +195,14 @@ _switch_return_from_firq:
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
rtie
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking:
push_s blink
bl read_timer_end_of_swap
pop_s blink
b return_loc
#endif /* CONFIG_EXECUTION_BENCHMARKING */

View File

@@ -64,9 +64,10 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
int priority, unsigned int options)
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
Z_ASSERT_VALID_PRIO(priority, pEntry);
char *stackEnd;
char *priv_stack_end;
char *stackAdjEnd;
struct init_stack_frame *pInitCtx;
#ifdef CONFIG_USERSPACE
@@ -74,9 +75,12 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackAdjSize;
size_t offset = 0;
stackAdjSize = Z_ARC_MPU_SIZE_ALIGN(stackSize);
/* adjust stack and stack size */
#if CONFIG_ARC_MPU_VER == 2
stackAdjSize = Z_ARC_MPUV2_SIZE_ALIGN(stackSize);
#elif CONFIG_ARC_MPU_VER == 3
stackAdjSize = STACK_SIZE_ALIGN(stackSize);
#endif
stackEnd = pStackMem + stackAdjSize;
#ifdef CONFIG_STACK_POINTER_RANDOM
@@ -84,57 +88,61 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#endif
if (options & K_USER) {
#ifdef CONFIG_GEN_PRIV_STACKS
thread->arch.priv_stack_start =
(u32_t)z_priv_stack_find(thread->stack_obj);
#else
thread->arch.priv_stack_start =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
#endif
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(
thread->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd +
ARCH_THREAD_STACK_RESERVED);
/* reserve 4 bytes for the start of user sp */
priv_stack_end -= 4;
(*(u32_t *)priv_stack_end) = Z_STACK_PTR_ALIGN(
stackAdjEnd -= 4;
(*(u32_t *)stackAdjEnd) = STACK_ROUND_DOWN(
(u32_t)stackEnd - offset);
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
thread->userspace_local_data =
(struct _thread_userspace_local_data *)
Z_STACK_PTR_ALIGN(stackEnd -
STACK_ROUND_DOWN(stackEnd -
sizeof(*thread->userspace_local_data) - offset);
/* update the start of user sp */
(*(u32_t *)priv_stack_end) =
(u32_t) thread->userspace_local_data;
(*(u32_t *)stackAdjEnd) = (u32_t) thread->userspace_local_data;
#endif
} else {
/* for kernel thread, the privilege stack is merged into thread stack */
/* if MPU_STACK_GUARD is enabled, reserve the stack area
* |---------------------| |----------------|
* | user stack | | stack guard |
* |---------------------| to |----------------|
* | stack guard | | kernel thread |
* |---------------------| | stack |
* | privilege stack | | |
* ---------------------------------------------
*/
pStackMem += STACK_GUARD_SIZE;
stackEnd += STACK_GUARD_SIZE;
stackAdjSize = stackAdjSize + CONFIG_PRIVILEGED_STACK_SIZE;
stackEnd += ARCH_THREAD_STACK_RESERVED;
thread->arch.priv_stack_start = 0;
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(stackEnd
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd
- sizeof(*thread->userspace_local_data) - offset);
thread->userspace_local_data =
(struct _thread_userspace_local_data *)priv_stack_end;
(struct _thread_userspace_local_data *)stackAdjEnd;
#else
priv_stack_end = (char *)Z_STACK_PTR_ALIGN(stackEnd - offset);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd - offset);
#endif
}
z_new_thread_init(thread, pStackMem, stackAdjSize);
z_new_thread_init(thread, pStackMem, stackAdjSize, priority, options);
/* carve the thread entry struct from the "base" of
the privileged stack */
pInitCtx = (struct init_stack_frame *)(
priv_stack_end - sizeof(struct init_stack_frame));
stackAdjEnd - sizeof(struct init_stack_frame));
/* fill init context */
pInitCtx->status32 = 0U;
@@ -153,15 +161,15 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
*/
pInitCtx->status32 |= _ARC_V2_STATUS32_US;
#else /* For no USERSPACE feature */
pStackMem += STACK_GUARD_SIZE;
pStackMem += ARCH_THREAD_STACK_RESERVED;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize);
z_new_thread_init(thread, pStackMem, stackSize, priority, options);
priv_stack_end = stackEnd;
stackAdjEnd = stackEnd;
pInitCtx = (struct init_stack_frame *)(
Z_STACK_PTR_ALIGN(priv_stack_end) -
STACK_ROUND_DOWN(stackAdjEnd) -
sizeof(struct init_stack_frame));
pInitCtx->status32 = 0U;
@@ -189,9 +197,9 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
thread->arch.u_stack_top = (u32_t)pStackMem;
thread->arch.u_stack_base = (u32_t)stackEnd;
thread->arch.k_stack_top =
(u32_t)(thread->arch.priv_stack_start);
(u32_t)(stackEnd + STACK_GUARD_SIZE);
thread->arch.k_stack_base = (u32_t)
(thread->arch.priv_stack_start + CONFIG_PRIVILEGED_STACK_SIZE);
(stackEnd + ARCH_THREAD_STACK_RESERVED);
} else {
thread->arch.k_stack_top = (u32_t)pStackMem;
thread->arch.k_stack_base = (u32_t)stackEnd;
@@ -216,12 +224,6 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
/* initial values in all other regs/k_thread entries are irrelevant */
}
void *z_arch_get_next_switch_handle(struct k_thread **old_thread)
{
*old_thread = _current;
return z_get_next_switch_handle(*old_thread);
}
#ifdef CONFIG_USERSPACE
@@ -229,17 +231,22 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
/*
* adjust the thread stack layout
* |----------------| |---------------------|
* | stack guard | | user stack |
* |----------------| to |---------------------|
* | kernel thread | | stack guard |
* | stack | |---------------------|
* | | | privilege stack |
* ---------------------------------------------
*/
_current->stack_info.start = (u32_t)_current->stack_obj;
#ifdef CONFIG_GEN_PRIV_STACKS
_current->arch.priv_stack_start =
(u32_t)z_priv_stack_find(_current->stack_obj);
#else
_current->stack_info.size -= CONFIG_PRIVILEGED_STACK_SIZE;
_current->arch.priv_stack_start =
(u32_t)(_current->stack_info.start +
_current->stack_info.size + STACK_GUARD_SIZE);
#endif
#ifdef CONFIG_ARC_STACK_CHECKING
_current->arch.k_stack_top = _current->arch.priv_stack_start;
@@ -255,14 +262,14 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
configure_mpu_thread(_current);
z_arc_userspace_enter(user_entry, p1, p2, p3,
(u32_t)_current->stack_obj,
_current->stack_info.size, _current);
(u32_t)_current->stack_obj,
_current->stack_info.size);
CODE_UNREACHABLE;
}
#endif
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
{
unsigned int key;
@@ -295,4 +302,4 @@ int arch_float_enable(struct k_thread *thread)
return 0;
}
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
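A hedged usage example for the public wrapper around arch_float_disable(): a thread that has finished its FP work can drop its FP context so its registers no longer need to be preserved:

void worker(void *p1, void *p2, void *p3)
{
        /* ... floating point work ... */

        if (k_float_disable(k_current_get()) != 0) {
                /* unsupported or failed in this configuration */
        }
}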

View File

@@ -29,10 +29,10 @@ u64_t z_tsc_read(void)
u64_t t;
u32_t count;
key = arch_irq_lock();
key = irq_lock();
t = (u64_t)z_tick_get();
count = z_arc_v2_aux_reg_read(_ARC_V2_TMR0_COUNT);
arch_irq_unlock(key);
irq_unlock(key);
t *= k_ticks_to_cyc_floor64(1);
t += (u64_t)count;
return t;
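A worked example of the arithmetic above, with illustrative numbers: if z_tick_get() returns 1000 ticks, one tick is 10000 cycles (k_ticks_to_cyc_floor64(1)), and the TMR0 count reads 1234, then t = 1000 * 10000 + 1234 = 10,001,234 cycles.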

View File

@@ -93,15 +93,22 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
/*
* In ARCv2, the U bit can only be set through exception return
*/
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking as the stack should be initialized */
_disable_stack_checking blink
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr blink, [_ARC_V2_SEC_STAT]
bclr blink, blink, _ARC_V2_SEC_STAT_SSC_BIT
sflag blink
#else
lr blink, [_ARC_V2_STATUS32]
bclr blink, blink, _ARC_V2_STATUS32_SC_BIT
kflag blink
#endif
#endif
/* the end of user stack in r5 */
add r5, r4, r5
/* get start of privilege stack, r6 points to current thread */
ld blink, [r6, _thread_offset_to_priv_stack_start]
add blink, blink, CONFIG_PRIVILEGED_STACK_SIZE
/* start of privilege stack */
add blink, r5, CONFIG_PRIVILEGED_STACK_SIZE+STACK_GUARD_SIZE
mov_s sp, r5
push_s r0
@@ -111,9 +118,6 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
mov r5, sp /* skip r0, r1, r2, r3 */
/* to avoid the leakage of kernel info, the thread stack needs to be
* re-initialized
*/
#ifdef CONFIG_INIT_STACKS
mov_s r0, 0xaaaaaaaa
#else
@@ -124,22 +128,23 @@ _clear_user_stack:
cmp r4, r5
jlt _clear_user_stack
/* reload the stack checking regs as the original kernel stack
* becomes user stack
*/
#ifdef CONFIG_ARC_STACK_CHECKING
/* current thread in r6, SMP case is also considered */
mov r2, r6
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_stack_check_regs
_enable_stack_checking r0
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_SSC_BIT
sflag r0
#else
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_SC_BIT
kflag r0
#endif
#endif
/* the following code switches from kernel mode
* to user mode via a fake exception, because the U bit can only be set
* by an exception
*/
_arc_go_to_user_space:
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_U_BIT
@@ -151,7 +156,7 @@ _arc_go_to_user_space:
/* fake exception return */
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_AE_BIT
bclr r0, r0, _ARC_V2_STATUS32_AE_BIT
kflag r0
/* when exception returns from kernel to user, sp and _ARC_V2_USER_SP
@@ -180,8 +185,9 @@ _arc_go_to_user_space:
mov_s blink, 0
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_end_of_userspace_enter
#endif
b _capture_value_for_benchmarking_userspace
return_loc_userspace_enter:
#endif /* CONFIG_EXECUTION_BENCHMARKING */
rtie
@@ -292,3 +298,20 @@ inc_len:
/* increment length measurement, loop again */
add_s r0, r0, 1
b_s strlen_loop
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking_userspace:
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_save_callee_saved_regs
push_s blink
bl read_timer_end_of_userspace_enter
pop_s blink
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_callee_saved_regs
b return_loc_userspace_enter
#endif

View File

@@ -8,4 +8,4 @@
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(_IRQ_VECTOR_TABLE_SECTION_NAME))
KEEP(*(IRQ_VECTOR_TABLE))

View File

@@ -143,7 +143,7 @@ struct _callee_saved_stack {
u32_t r59;
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
u32_t fpu_status;
u32_t fpu_ctrl;
#ifdef CONFIG_FP_FPU_DA
@@ -168,4 +168,12 @@ typedef struct _callee_saved_stack _callee_saved_stack_t;
#endif /* _ASMLANGUAGE */
/* stacks */
#define STACK_ALIGN_SIZE 4
#define STACK_ROUND_UP(x) ROUND_UP(x, STACK_ALIGN_SIZE)
#define STACK_ROUND_DOWN(x) ROUND_DOWN(x, STACK_ALIGN_SIZE)
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_DATA_H_ */

View File

@@ -36,6 +36,8 @@ extern "C" {
static ALWAYS_INLINE void arch_kernel_init(void)
{
z_irq_setup();
_current_cpu->irq_stack =
Z_THREAD_STACK_BUFFER(_interrupt_stack) + CONFIG_ISR_STACK_SIZE;
}
@@ -62,8 +64,7 @@ extern void z_thread_entry_wrapper(void);
extern void z_user_thread_entry_wrapper(void);
extern void z_arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
void *p2, void *p3, u32_t stack, u32_t size,
struct k_thread *thread);
void *p2, void *p3, u32_t stack, u32_t size);
extern void arch_switch(void *switch_to, void **switched_from);
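A sketch of a call matching the six-argument prototype above, mirroring the arch_user_mode_enter() call site shown earlier in this compare:

z_arc_userspace_enter(user_entry, p1, p2, p3,
                      (u32_t)_current->stack_obj,
                      _current->stack_info.size);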

View File

@@ -32,9 +32,6 @@
#define _thread_offset_to_u_stack_top \
(___thread_t_arch_OFFSET + ___thread_arch_t_u_stack_top_OFFSET)
#define _thread_offset_to_priv_stack_start \
(___thread_t_arch_OFFSET + ___thread_arch_t_priv_stack_start_OFFSET)
#define _thread_offset_to_sp \
(___thread_t_callee_saved_OFFSET + ___callee_saved_t_sp_OFFSET)

View File

@@ -16,7 +16,7 @@
#ifdef _ASMLANGUAGE
/* save callee regs of current thread in r2 */
/* entering this macro, current is in r2 */
.macro _save_callee_saved_regs
sub_s sp, sp, ___callee_saved_stack_t_SIZEOF
@@ -63,7 +63,7 @@
st r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 1f
@@ -89,7 +89,7 @@
st sp, [r2, _thread_offset_to_sp]
.endm
/* load the callee regs of thread (in r2)*/
/* entering this macro, current is in r2 */
.macro _load_callee_saved_regs
/* restore stack pointer from struct k_thread */
ld sp, [r2, _thread_offset_to_sp]
@@ -99,7 +99,7 @@
ld r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 2f
@@ -162,7 +162,6 @@
.endm
/* discard callee regs */
.macro _discard_callee_saved_regs
add_s sp, sp, ___callee_saved_stack_t_SIZEOF
.endm
@@ -266,7 +265,7 @@
.endm
/*
* To use this macro, r2 should have the value of thread struct pointer to
* To use this macor, r2 should have the value of thread struct pointer to
* _kernel.current. r3 is a scratch reg.
*/
.macro _load_stack_check_regs
@@ -298,7 +297,6 @@
/* Check and increase the interrupt nest counter;
* after the increase, check whether the nest counter == 1.
* The result will be in the EQ bit of status32.
* Two temp regs are needed.
*/
.macro _check_and_inc_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
@@ -307,21 +305,18 @@
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov \reg1, _kernel
ld \reg2, [\reg1, _kernel_offset_to_nested]
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
add \reg2, \reg2, 1
#ifdef CONFIG_SMP
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st \reg2, [\reg1, _kernel_offset_to_nested]
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
cmp \reg2, 1
.endm
/* decrease interrupt stack nest counter
* the counter > 0, interrupt stack is used, or
* not used
*/
/* decrease interrupt nest counter */
.macro _dec_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
_get_cpu_id \reg1
@@ -329,19 +324,18 @@
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov \reg1, _kernel
ld \reg2, [\reg1, _kernel_offset_to_nested]
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
sub \reg2, \reg2, 1
#ifdef CONFIG_SMP
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st \reg2, [\reg1, _kernel_offset_to_nested]
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
.endm
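For readers less fluent in ARC assembly, the two nest-counter macros above map roughly onto the following C sketch. The field names are inferred from the _OFFSET symbols used in the macros (___cpu_t_nested_OFFSET, ___kernel_t_nested_OFFSET); treat this as an illustration, not the kernel's actual code:

	#include <zephyr/types.h>
	#include <stdbool.h>

	static inline bool check_and_inc_int_nest_counter(void)
	{
	#ifdef CONFIG_SMP
		u32_t *nested = &_current_cpu->nested;	/* per-CPU counter */
	#else
		u32_t *nested = &_kernel.nested;	/* single global counter */
	#endif
		(*nested)++;
		return *nested == 1;	/* the asm leaves this in the EQ bit of status32 */
	}

	static inline void dec_int_nest_counter(void)
	{
	#ifdef CONFIG_SMP
		_current_cpu->nested--;
	#else
		_kernel.nested--;
	#endif
	}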
/* If multiple bits in IRQ_ACT are set, i.e. last bit != first bit, it's
* in a nested interrupt. The result will be in the EQ bit of status32;
* two temp regs are needed to do this.
*/
.macro _check_nest_int_by_irq_act reg1, reg2
lr \reg1, [_ARC_V2_AUX_IRQ_ACT]
@@ -355,18 +349,11 @@
cmp \reg1, \reg2
.endm
/* macro to get id of current cpu
* the result will be in reg (a reg)
*/
.macro _get_cpu_id reg
lr \reg, [_ARC_V2_IDENTITY]
xbfu \reg, \reg, 0xe8
.endm
/* macro to get the interrupt stack of current cpu
* the result will be in irq_sp (a reg)
*/
.macro _get_curr_cpu_irq_stack irq_sp
#ifdef CONFIG_SMP
_get_cpu_id \irq_sp
@@ -390,135 +377,6 @@
sr \reg, [\aux]
.endm
/* macro to store old thread call regs */
.macro _store_old_thread_callee_regs
_save_callee_saved_regs
#ifdef CONFIG_SMP
/* save old thread into switch handle which is required by
* wait_for_switch
*/
st r2, [r2, ___thread_t_switch_handle_OFFSET]
#endif
.endm
/* macro to store old thread call regs in interrupt*/
.macro _irq_store_old_thread_callee_regs
#if defined(CONFIG_USERSPACE)
/*
* When USERSPACE is enabled, the ARCv2 ISA switches SP if an interrupt
* arrives in user mode, and records this in bit 31 (U bit) of IRQ_ACT.
* When the interrupt exits, SP is switched back according to the U bit.
*
* The user/kernel status of the interrupted thread must be remembered;
* it is restored when the thread is switched back in.
*
*/
lr r1, [_ARC_V2_AUX_IRQ_ACT]
and r3, r1, 0x80000000
push_s r3
bclr r1, r1, 31
sr r1, [_ARC_V2_AUX_IRQ_ACT]
#endif
_store_old_thread_callee_regs
.endm
/* macro to load new thread callee regs */
.macro _load_new_thread_callee_regs
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/*
* _load_callee_saved_regs expects incoming thread in r2.
* _load_callee_saved_regs restores the stack pointer.
*/
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
bl configure_mpu_thread
pop_s r2
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
.endm
/* when switching to a thread due to a coop swap, some status regs need to be set */
.macro _set_misc_regs_irq_switch_from_coop
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* must return to secure mode, so set IRM bit to 1 */
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_IRM_BIT
sflag r0
#endif
.endm
/* when switching to a thread due to an irq, some status regs need to be set */
.macro _set_misc_regs_irq_switch_from_irq
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
.endm
/* macro to get next switch handle in assembly */
.macro _get_next_switch_handle
push_s r2
mov r0, sp
bl z_arch_get_next_switch_handle
pop_s r2
.endm
/* macro to disable stack checking in assembly, need a GPR
* to do this
*/
.macro _disable_stack_checking reg
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr \reg, [_ARC_V2_SEC_STAT]
bclr \reg, \reg, _ARC_V2_SEC_STAT_SSC_BIT
sflag \reg
#else
lr \reg, [_ARC_V2_STATUS32]
bclr \reg, \reg, _ARC_V2_STATUS32_SC_BIT
kflag \reg
#endif
#endif
.endm
/* macro to enable stack checking in assembly, need a GPR
* to do this
*/
.macro _enable_stack_checking reg
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr \reg, [_ARC_V2_SEC_STAT]
bset \reg, \reg, _ARC_V2_SEC_STAT_SSC_BIT
sflag \reg
#else
lr \reg, [_ARC_V2_STATUS32]
bset \reg, \reg, _ARC_V2_STATUS32_SC_BIT
kflag \reg
#endif
#endif
.endm
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_SWAP_MACROS_H_ */

View File

@@ -46,6 +46,9 @@ extern "C" {
#define _ARC_V2_INIT_IRQ_LOCK_KEY (0x10 | _ARC_V2_DEF_IRQ_LEVEL)
#ifndef _ASMLANGUAGE
extern K_THREAD_STACK_DEFINE(_interrupt_stack, CONFIG_ISR_STACK_SIZE);
/*
* z_irq_setup
*

View File

@@ -1,11 +1,7 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_ARM64)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littleaarch64)
add_subdirectory(core/aarch64)
include(aarch64.cmake)
else()
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearm)
add_subdirectory(core/aarch32)
include(aarch32.cmake)
endif()

View File

@@ -6,6 +6,9 @@
menu "ARM Options"
depends on ARM
rsource "core/aarch32/Kconfig"
rsource "core/aarch64/Kconfig"
config ARCH
default "arm"
@@ -13,37 +16,4 @@ config ARM64
bool
select 64BIT
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config ARM_CUSTOM_INTERRUPT_CONTROLLER
bool
depends on !CPU_CORTEX_M
help
This option indicates that the ARM CPU is connected to a custom (i.e.
non-GIC) interrupt controller.
A number of Cortex-A and Cortex-R cores (Cortex-A5, Cortex-R4/5, ...)
allow interfacing to a custom external interrupt controller and this
option must be selected when such cores are connected to an interrupt
controller that is not the ARM Generic Interrupt Controller (GIC).
When this option is selected, the architecture interrupt control
functions are mapped to the SoC interrupt control interface, which is
implemented at the SoC level.
N.B. This option is only applicable to the Cortex-A and Cortex-R
family cores. The Cortex-M family cores are always equipped with
the ARM Nested Vectored Interrupt Controller (NVIC).
if !ARM64
rsource "core/aarch32/Kconfig"
endif
if ARM64
rsource "core/aarch64/Kconfig"
endif
endmenu
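The ARM_CUSTOM_INTERRUPT_CONTROLLER help text above implies a set of SoC-level hooks that the architecture code calls in place of the GIC driver; z_soc_irq_init(), z_soc_irq_get_active() and z_soc_irq_eoi() all appear in the assembly later in this diff. A minimal sketch of such a shim, with entirely hypothetical register addresses standing in for a real controller:

	#include <zephyr/types.h>
	#include <sys/util.h>

	/* Hypothetical interrupt controller registers (placeholders only). */
	#define MY_INTC_ENABLE_SET (*(volatile u32_t *)0x40001000)
	#define MY_INTC_ENABLE_CLR (*(volatile u32_t *)0x40001004)
	#define MY_INTC_ACTIVE     (*(volatile u32_t *)0x40001008)
	#define MY_INTC_EOI        (*(volatile u32_t *)0x4000100c)

	void z_soc_irq_init(void)
	{
		MY_INTC_ENABLE_CLR = 0xffffffff;	/* mask all lines at boot */
	}

	void z_soc_irq_enable(unsigned int irq)
	{
		MY_INTC_ENABLE_SET = BIT(irq);
	}

	void z_soc_irq_disable(unsigned int irq)
	{
		MY_INTC_ENABLE_CLR = BIT(irq);
	}

	unsigned int z_soc_irq_get_active(void)
	{
		return MY_INTC_ACTIVE;	/* acknowledge and return the active line */
	}

	void z_soc_irq_eoi(unsigned int irq)
	{
		MY_INTC_EOI = irq;	/* signal end-of-interrupt */
	}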

arch/arm/aarch32.cmake Normal file
View File

@@ -0,0 +1,28 @@
# SPDX-License-Identifier: Apache-2.0
set(ARCH_FOR_cortex-m0 armv6s-m )
set(ARCH_FOR_cortex-m0plus armv6s-m )
set(ARCH_FOR_cortex-m3 armv7-m )
set(ARCH_FOR_cortex-m4 armv7e-m )
set(ARCH_FOR_cortex-m23 armv8-m.base )
set(ARCH_FOR_cortex-m33 armv8-m.main+dsp)
set(ARCH_FOR_cortex-m33+nodsp armv8-m.main )
set(ARCH_FOR_cortex-r4 armv7-r )
if(ARCH_FOR_${GCC_M_CPU})
set(ARCH_FLAG -march=${ARCH_FOR_${GCC_M_CPU}})
endif()
zephyr_compile_options(
-mabi=aapcs
${ARCH_FLAG}
)
zephyr_ld_options(
-mabi=aapcs
${ARCH_FLAG}
)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-little${ARCH}) # BFD format
add_subdirectory(core/aarch32)

arch/arm/aarch64.cmake Normal file
View File

@@ -0,0 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littleaarch64) # BFD format
add_subdirectory(core/aarch64)

View File

@@ -7,11 +7,13 @@ if (CONFIG_COVERAGE)
endif ()
zephyr_library_sources(
exc_exit.S
swap.c
swap_helper.S
irq_manage.c
thread.c
cpu_idle.S
fault_s.S
fatal.c
nmi.c
nmi_on_reset.S
@@ -30,6 +32,6 @@ add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M_HAS_CMSE cortex_m/cmse)
add_subdirectory_ifdef(CONFIG_ARM_SECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_ARM_NONSECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_a_r)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_r)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)

View File

@@ -3,6 +3,13 @@
# Copyright (c) 2015 Wind River Systems, Inc.
# SPDX-License-Identifier: Apache-2.0
if !ARM64
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_M
bool
select CPU_CORTEX
@@ -25,7 +32,6 @@ config CPU_CORTEX_R
select CPU_CORTEX
select HAS_CMSIS_CORE
select HAS_FLASH_LOAD_OFFSET
select ARCH_HAS_THREAD_ABORT
help
This option signifies the use of a CPU of the Cortex-R family.
@@ -70,38 +76,6 @@ config ISA_ARM
processor start-up. Much of its functionality was subsumed into T32 with
the introduction of Thumb-2 technology.
config ASSEMBLER_ISA_THUMB2
bool
default y if ISA_THUMB2 && !ISA_ARM
depends on !ISA_ARM
help
This helper symbol specifies the default target instruction set for
the assembler.
When only the Thumb-2 ISA is supported (i.e. on Cortex-M cores), the
assembler must use the Thumb-2 instruction set.
When both the Thumb-2 and ARM ISAs are supported (i.e. on Cortex-A
and Cortex-R cores), the assembler must use the ARM instruction set
because the architecture assembly code makes use of the ARM
instructions.
config COMPILER_ISA_THUMB2
bool "Compile C/C++ functions using Thumb-2 instruction set"
depends on ISA_THUMB2
default y
help
This option configures the compiler to compile all C/C++ functions
using the Thumb-2 instruction set.
N.B. The scope of this symbol is not necessarily limited to the C and
C++ languages; in fact, this symbol refers to all forms of
"compiled" code.
When an additional natively-compiled language support is added
in the future, this symbol shall also specify the Thumb-2
instruction set for that language.
config NUM_IRQS
int
@@ -211,10 +185,74 @@ config ARM_NONSECURE_FIRMWARE
resources of the Cortex-M MCU, and, therefore, it shall avoid
accessing them.
menu "ARM TrustZone Options"
depends on ARM_SECURE_FIRMWARE || ARM_NONSECURE_FIRMWARE
comment "Secure firmware"
depends on ARM_SECURE_FIRMWARE
comment "Non-secure firmware"
depends on !ARM_SECURE_FIRMWARE
config ARM_SECURE_BUSFAULT_HARDFAULT_NMI
bool "BusFault, HardFault, and NMI target Secure state"
depends on ARM_SECURE_FIRMWARE
help
Force NMI, HardFault, and BusFault (in Mainline ARMv8-M)
exceptions as Secure exceptions.
config ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
bool "Secure Firmware has Secure Entry functions"
depends on ARM_SECURE_FIRMWARE
help
Option indicates that ARM Secure Firmware contains
Secure Entry functions that may be called from
Non-Secure state. Secure Entry functions must be
located in Non-Secure Callable memory regions.
config ARM_NSC_REGION_BASE_ADDRESS
hex "ARM Non-Secure Callable Region base address"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
default 0
help
Start address of Non-Secure Callable section.
Notes:
- The default value (i.e. when the user does not configure
the option explicitly) instructs the linker script to
place the Non-Secure Callable section, automatically,
inside the .text area.
- Certain requirements/restrictions may apply regarding
the size and the alignment of the starting address for
a Non-Secure Callable section, depending on the available
security attribution unit (SAU or IDAU) for a given SOC.
config ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
bool "Non-Secure Firmware uses Secure Entry functions"
depends on ARM_NONSECURE_FIRMWARE
help
Option indicates that ARM Non-Secure Firmware uses Secure
Entry functions provided by the Secure Firmware. The Secure
Firmware must be configured to provide these functions.
config ARM_ENTRY_VENEERS_LIB_NAME
string "Entry Veneers symbol file"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS \
|| ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
default "libentryveneers.a"
help
Library file to find the symbol table for the entry veneers.
The library will typically come from building the Secure
Firmware that contains secure entry functions, and allows
the Non-Secure Firmware to call into the Secure Firmware.
endmenu
choice
prompt "Floating point ABI"
default FP_HARDABI
depends on FPU
depends on FLOAT
config FP_HARDABI
bool "Floating point Hard ABI"
@@ -232,4 +270,10 @@ config FP_SOFTABI
endchoice
rsource "cortex_m/Kconfig"
rsource "cortex_a_r/Kconfig"
rsource "cortex_r/Kconfig"
rsource "cortex_m/mpu/Kconfig"
rsource "cortex_m/tz/Kconfig"
endif # !ARM64

View File

@@ -1,148 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Exception handlers for ARM Cortex-A and Cortex-R
*
* This file implements the exception handlers (undefined instruction, prefetch
* abort and data abort) for ARM Cortex-A and Cortex-R processors.
*
* All exception handlers save the exception stack frame into the exception
* mode stack rather than the system mode stack, in order to ensure predictable
* exception behaviour (i.e. an arbitrary thread stack overflow cannot cause
* exception handling and thereby subsequent total system failure).
*
* In case the exception is due to a fatal (unrecoverable) fault, the fault
* handler is responsible for invoking the architecture fatal exception handler
* (z_arm_fatal_error) which invokes the kernel fatal exception handler
* (z_fatal_error) that either locks up the system or aborts the current thread
* depending on the application exception handler implementation.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_fault_undef_instruction)
GTEXT(z_arm_fault_prefetch)
GTEXT(z_arm_fault_data)
GTEXT(z_arm_undef_instruction)
GTEXT(z_arm_prefetch_abort)
GTEXT(z_arm_data_abort)
/**
* @brief Undefined instruction exception handler
*
* An undefined instruction (UNDEF) exception is generated when an undefined
* instruction, or a VFP instruction when the VFP is not enabled, is
* encountered.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_undef_instruction)
/*
* The undefined instruction address is offset by 2 if the previous
* mode is Thumb; otherwise, it is offset by 4.
*/
push {r0}
mrs r0, spsr
tst r0, #T_BIT
subeq lr, #4 /* ARM (!T_BIT) */
subne lr, #2 /* Thumb (T_BIT) */
pop {r0}
/*
* Store r0-r3, r12, lr, lr_und and spsr_und into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_UND!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_undef_instruction
/* Exit exception */
b z_arm_exc_exit
/**
* @brief Prefetch abort exception handler
*
* A prefetch abort (PABT) exception is generated when the processor marks the
* prefetched instruction as invalid and the instruction is executed.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_prefetch_abort)
/*
* The faulting instruction address is always offset by 4 for the
* prefetch abort exceptions.
*/
sub lr, #4
/*
* Store r0-r3, r12, lr, lr_abt and spsr_abt into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_ABT!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_prefetch
/* Exit exception */
b z_arm_exc_exit
/**
* @brief Data abort exception handler
*
* A data abort (DABT) exception is generated when an error occurs on a data
* memory access. This exception can be either synchronous or asynchronous,
* depending on the type of fault that caused it.
*/
SECTION_SUBSEC_FUNC(TEXT, __exc, z_arm_data_abort)
/*
* The faulting instruction address is always offset by 8 for the data
* abort exceptions.
*/
sub lr, #8
/*
* Store r0-r3, r12, lr, lr_abt and spsr_abt into the stack to
* construct an exception stack frame.
*/
srsdb sp, #MODE_ABT!
stmfd sp, {r0-r3, r12, lr}^
sub sp, #24
/* Increment exception nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Invoke fault handler */
mov r0, sp
bl z_arm_fault_data
/* Exit exception */
b z_arm_exc_exit

View File

@@ -1,159 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
* Copyright (c) 2016 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-A and Cortex-R exception/interrupt exit API
*
* Provides functions for performing kernel handling when exiting exceptions,
* or interrupts that are installed directly in the vector table (i.e. that are
* not wrapped around by _isr_wrapper()).
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_exc_exit)
GTEXT(z_arm_int_exit)
GTEXT(z_arm_pendsv)
GDATA(_kernel)
/**
* @brief Kernel housekeeping when exiting interrupt handler installed directly
* in the vector table
*
* Kernel allows installing interrupt handlers (ISRs) directly into the vector
* table to get the lowest interrupt latency possible. This allows the ISR to
* be invoked directly without going through a software interrupt table.
* However, upon exiting the ISR, some kernel work must still be performed,
* namely possible context switching. While ISRs connected in the software
* interrupt table do this automatically via a wrapper, ISRs connected directly
* in the vector table must invoke z_arm_int_exit() as the *very last* action
* before returning.
*
* e.g.
*
* void myISR(void)
* {
* printk("in %s\n", __FUNCTION__);
* doStuff();
* z_arm_int_exit();
* }
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_int_exit)
#ifdef CONFIG_PREEMPT_ENABLED
/* Do not context switch if exiting a nested interrupt */
ldr r3, =_kernel
ldr r0, [r3, #_kernel_offset_to_nested]
cmp r0, #1
bhi __EXIT_INT
ldr r1, [r3, #_kernel_offset_to_current]
ldr r0, [r3, #_kernel_offset_to_ready_q_cache]
cmp r0, r1
blne z_arm_pendsv
__EXIT_INT:
#endif /* CONFIG_PREEMPT_ENABLED */
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif /* CONFIG_STACK_SENTINEL */
/* Disable nested interrupts while exiting */
cpsid i
/* Decrement interrupt nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
/* Restore previous stack pointer */
pop {r2, r3}
add sp, sp, r3
/*
* Restore r0-r3, r12 and lr_irq stored into the process stack by the
* mode entry function. These registers are saved by _isr_wrapper for
* IRQ mode and z_arm_svc for SVC mode.
*
* r0-r3 are either the values from the thread before it was switched
* out or they are the args to _new_thread for a new thread.
*/
cps #MODE_SYS
pop {r0-r3, r12, lr}
rfeia sp!
/**
* @brief Kernel housekeeping when exiting exception handler
*
* The exception exit routine performs appropriate housekeeping tasks depending
* on the mode of exit:
*
* If exiting a nested or non-fatal exception, the exit routine restores the
* saved exception stack frame to resume the excepted context.
*
* If exiting a non-nested fatal exception, the exit routine, assuming that the
* current faulting thread is aborted, discards the saved exception stack
* frame containing the aborted thread context and switches to the next
* scheduled thread.
*
* void z_arm_exc_exit(bool fatal)
*
* @param fatal True if exiting from a fatal fault; otherwise, false
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_exc_exit)
/* Do not context switch if exiting a nested exception */
ldr r3, =_kernel
ldr r1, [r3, #_kernel_offset_to_nested]
cmp r1, #1
bhi __EXIT_EXC
/* If the fault is not fatal, return to the current thread context */
cmp r0, #0
beq __EXIT_EXC
/*
* If the fault is fatal, the current thread must have been aborted by
* the exception handler. Clean up the exception stack frame and switch
* to the next scheduled thread.
*/
/* Clean up exception stack frame */
add sp, #32
/* Switch in the next scheduled thread */
bl z_arm_pendsv
/* Decrement exception nesting count */
ldr r0, [r3, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r3, #_kernel_offset_to_nested]
/* Return to the switched thread */
cps #MODE_SYS
pop {r0-r3, r12, lr}
rfeia sp!
__EXIT_EXC:
/* Decrement exception nesting count */
ldr r0, [r3, #_kernel_offset_to_nested]
sub r0, r0, #1
str r0, [r3, #_kernel_offset_to_nested]
/*
* Restore r0-r3, r12, lr, lr_und and spsr_und from the exception stack
* and return to the current thread.
*/
ldmia sp, {r0-r3, r12, lr}^
add sp, #24
rfeia sp!

View File

@@ -1,165 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_internal.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
#define FAULT_DUMP_VERBOSE (CONFIG_FAULT_DUMP == 2)
#if FAULT_DUMP_VERBOSE
static const char *get_dbgdscr_moe_string(u32_t moe)
{
switch (moe) {
case DBGDSCR_MOE_HALT_REQUEST:
return "Halt Request";
case DBGDSCR_MOE_BREAKPOINT:
return "Breakpoint";
case DBGDSCR_MOE_ASYNC_WATCHPOINT:
return "Asynchronous Watchpoint";
case DBGDSCR_MOE_BKPT_INSTRUCTION:
return "BKPT Instruction";
case DBGDSCR_MOE_EXT_DEBUG_REQUEST:
return "External Debug Request";
case DBGDSCR_MOE_VECTOR_CATCH:
return "Vector Catch";
case DBGDSCR_MOE_OS_UNLOCK_CATCH:
return "OS Unlock Catch";
case DBGDSCR_MOE_SYNC_WATCHPOINT:
return "Synchronous Watchpoint";
default:
return "Unknown";
}
}
static void dump_debug_event(void)
{
/* Read and parse debug mode of entry */
u32_t dbgdscr = __get_DBGDSCR();
u32_t moe = (dbgdscr & DBGDSCR_MOE_Msk) >> DBGDSCR_MOE_Pos;
/* Print debug event information */
LOG_ERR("Debug Event (%s)", get_dbgdscr_moe_string(moe));
}
static void dump_fault(u32_t status, u32_t addr)
{
/*
* Dump fault status and, if applicable, status-specific information.
* Note that the fault address is only displayed for the synchronous
* faults because it is unpredictable for asynchronous faults.
*/
switch (status) {
case FSR_FS_ALIGNMENT_FAULT:
LOG_ERR("Alignment Fault @ 0x%08x", addr);
break;
case FSR_FS_BACKGROUND_FAULT:
LOG_ERR("Background Fault @ 0x%08x", addr);
break;
case FSR_FS_PERMISSION_FAULT:
LOG_ERR("Permission Fault @ 0x%08x", addr);
break;
case FSR_FS_SYNC_EXTERNAL_ABORT:
LOG_ERR("Synchronous External Abort @ 0x%08x", addr);
break;
case FSR_FS_ASYNC_EXTERNAL_ABORT:
LOG_ERR("Asynchronous External Abort");
break;
case FSR_FS_SYNC_PARITY_ERROR:
LOG_ERR("Synchronous Parity/ECC Error @ 0x%08x", addr);
break;
case FSR_FS_ASYNC_PARITY_ERROR:
LOG_ERR("Asynchronous Parity/ECC Error");
break;
case FSR_FS_DEBUG_EVENT:
dump_debug_event();
break;
default:
LOG_ERR("Unknown (%u)", status);
}
}
#endif
/**
* @brief Undefined instruction fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_undef_instruction(z_arch_esf_t *esf)
{
/* Print fault information */
LOG_ERR("***** UNDEFINED INSTRUCTION ABORT *****");
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All undefined instructions are treated as fatal for now */
return true;
}
/**
* @brief Prefetch abort fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_prefetch(z_arch_esf_t *esf)
{
/* Read and parse Instruction Fault Status Register (IFSR) */
u32_t ifsr = __get_IFSR();
u32_t fs = ((ifsr & IFSR_FS1_Msk) >> 6) | (ifsr & IFSR_FS0_Msk);
/* Read Instruction Fault Address Register (IFAR) */
u32_t ifar = __get_IFAR();
/* Print fault information */
LOG_ERR("***** PREFETCH ABORT *****");
if (FAULT_DUMP_VERBOSE) {
dump_fault(fs, ifar);
}
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All prefetch aborts are treated as fatal for now */
return true;
}
/**
* @brief Data abort fault handler
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_data(z_arch_esf_t *esf)
{
/* Read and parse Data Fault Status Register (DFSR) */
u32_t dfsr = __get_DFSR();
u32_t fs = ((dfsr & DFSR_FS1_Msk) >> 6) | (dfsr & DFSR_FS0_Msk);
/* Read Data Fault Address Register (DFAR) */
u32_t dfar = __get_DFAR();
/* Print fault information */
LOG_ERR("***** DATA ABORT *****");
if (FAULT_DUMP_VERBOSE) {
dump_fault(fs, dfar);
}
/* Invoke kernel fatal exception handler */
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
/* All data aborts are treated as fatal for now */
return true;
}
/**
* @brief Initialisation of fault handling
*/
void z_arm_fault_init(void)
{
/* Nothing to do for now */
}

View File

@@ -1,33 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-R interrupt initialization
*/
#include <arch/cpu.h>
#include <drivers/interrupt_controller/gic.h>
/**
*
* @brief Initialize interrupts
*
* @return N/A
*/
void z_arm_interrupt_init(void)
{
/*
* Initialise interrupt controller.
*/
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
/* Initialise the Generic Interrupt Controller (GIC) driver */
arm_gic_init();
#else
/* Invoke SoC-specific interrupt controller initialisation */
z_soc_irq_init();
#endif
}

View File

@@ -1,46 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
* Copyright (c) 2016 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-A and Cortex-R k_thread_abort() routine
*
* The ARM Cortex-A and Cortex-R architectures provide their own
* k_thread_abort() to deal with different CPU modes when a thread aborts.
*/
#include <kernel.h>
#include <kswap.h>
extern void z_thread_single_abort(struct k_thread *thread);
void z_impl_k_thread_abort(k_tid_t thread)
{
__ASSERT(!(thread->base.user_options & K_ESSENTIAL),
"essential thread aborted");
z_thread_single_abort(thread);
z_thread_monitor_exit(thread);
/*
* Swap context if and only if the thread is not aborted inside an
* interrupt/exception handler; it is not necessary to swap context
* inside an interrupt/exception handler because the handler swaps
* context when exiting.
*/
if (!arch_is_in_isr()) {
if (thread == _current) {
/* Direct use of swap: reschedule doesn't have a test
* for "is _current dead" and we don't want one for
* performance reasons.
*/
z_swap_unlocked();
} else {
z_reschedule_unlocked();
}
}
}

View File

@@ -5,9 +5,7 @@ zephyr_library()
zephyr_library_sources(
vector_table.S
reset.S
fault_s.S
fault.c
exc_exit.S
scb.c
irq_init.c
thread_abort.c

View File

@@ -279,7 +279,4 @@ config SW_VECTOR_RELAY
endmenu
rsource "mpu/Kconfig"
rsource "tz/Kconfig"
endif # CPU_CORTEX_M

View File

@@ -1,6 +1,5 @@
/*
* Copyright (c) 2014 Wind River Systems, Inc.
* Copyright (c) 2020 Nordic Semiconductor ASA.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -266,14 +265,6 @@ static u32_t mem_manage_fault(z_arch_esf_t *esf, int from_hard_fault,
* process the error further if the stack frame is on
* PSP. For always-banked MemManage Fault, this is
* equivalent to inspecting the RETTOBASE flag.
*
* Note:
* It is possible that MMFAR address is not written by the
* Cortex-M core; this occurs when the stacking error is
* not accompanied by a data access violation error (i.e.
* when stack overflows due to the exception entry frame
* stacking): z_check_thread_stack_fail() shall be able to
* handle the case of 'mmfar' holding the -EINVAL value.
*/
if (SCB->ICSR & SCB_ICSR_RETTOBASE_Msk) {
u32_t min_stack_ptr = z_check_thread_stack_fail(mmfar,

View File

@@ -24,7 +24,7 @@
* @return N/A
*/
void z_arm_interrupt_init(void)
void z_arm_int_lib_init(void)
{
int irq = 0;

View File

@@ -63,7 +63,7 @@ config MPU_STACK_GUARD
config MPU_STACK_GUARD_MIN_SIZE_FLOAT
int
depends on MPU_STACK_GUARD
depends on FPU_SHARING
depends on FP_SHARING
default 128
help
Minimum size (and alignment when applicable) of an ARM MPU
@@ -72,7 +72,7 @@ config MPU_STACK_GUARD_MIN_SIZE_FLOAT
128, to accommodate the length of a Cortex-M exception stack
frame when the floating point context is active. The FP context
is only stacked in sharing FP registers mode, therefore, the
option is applicable only when FPU_SHARING is selected.
option is applicable only when FP_SHARING is selected.
config MPU_ALLOW_FLASH_WRITE
bool "Add MPU access to write to flash"

View File

@@ -215,7 +215,7 @@ void z_arm_configure_dynamic_mpu_regions(struct k_thread *thread)
u32_t guard_start;
u32_t guard_size = MPU_GUARD_ALIGN_AND_SIZE;
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
if ((thread->base.user_options & K_FP_REGS) != 0) {
guard_size = MPU_GUARD_ALIGN_AND_SIZE_FLOAT;
}

View File

@@ -15,22 +15,6 @@
#include <logging/log.h>
LOG_MODULE_DECLARE(mpu);
#if defined(CONFIG_ARMV8_M_BASELINE) || defined(CONFIG_ARMV8_M_MAINLINE)
/* The order here is on purpose since ARMv8-M SoCs may define
* CONFIG_ARMV6_M_ARMV8_M_BASELINE or CONFIG_ARMV7_M_ARMV8_M_MAINLINE
* so we want to check for ARMv8-M first.
*/
#define MPU_NODEID DT_INST(0, arm_armv8m_mpu)
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
#define MPU_NODEID DT_INST(0, arm_armv7m_mpu)
#elif defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
#define MPU_NODEID DT_INST(0, arm_armv6m_mpu)
#endif
#if DT_NODE_HAS_PROP(MPU_NODEID, arm_num_mpu_regions)
#define NUM_MPU_REGIONS DT_PROP(MPU_NODEID, arm_num_mpu_regions)
#endif
/*
* Global status variable holding the number of HW MPU region indices, which
* have been reserved by the MPU driver to program the static (fixed) memory
@@ -50,9 +34,9 @@ static inline u8_t get_num_regions(void)
* have a fixed number of 8 MPU regions.
*/
return 8;
#elif defined(NUM_MPU_REGIONS)
#elif defined(DT_NUM_MPU_REGIONS)
/* Retrieve the number of regions from DTS configuration. */
return NUM_MPU_REGIONS;
return DT_NUM_MPU_REGIONS;
#else
u32_t type = MPU->TYPE;
@@ -343,10 +327,10 @@ static int arm_mpu_init(struct device *arg)
__ASSERT(
(MPU->TYPE & MPU_TYPE_DREGION_Msk) >> MPU_TYPE_DREGION_Pos == 8,
"Invalid number of MPU regions\n");
#elif defined(NUM_MPU_REGIONS)
#elif defined(DT_NUM_MPU_REGIONS)
__ASSERT(
(MPU->TYPE & MPU_TYPE_DREGION_Msk) >> MPU_TYPE_DREGION_Pos ==
NUM_MPU_REGIONS,
DT_NUM_MPU_REGIONS,
"Invalid number of MPU regions\n");
#endif /* CORTEX_M0PLUS || CPU_CORTEX_M3 || CPU_CORTEX_M4 */
return 0;

View File

@@ -20,7 +20,7 @@ _ASM_FILE_PROLOGUE
GTEXT(z_arm_reset)
GTEXT(memset)
GDATA(z_interrupt_stacks)
GDATA(_interrupt_stack)
#if defined(CONFIG_PLATFORM_SPECIFIC_INIT)
GTEXT(z_platform_init)
#endif
@@ -78,7 +78,7 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
#endif
#ifdef CONFIG_INIT_STACKS
ldr r0, =z_interrupt_stacks
ldr r0, =_interrupt_stack
ldr r1, =0xaa
ldr r2, =CONFIG_ISR_STACK_SIZE
bl memset
@@ -86,9 +86,9 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
/*
* Set PSP and use it to boot without using MSP, so that it
* gets set to z_interrupt_stacks during initialization.
* gets set to _interrupt_stack during initialization.
*/
ldr r0, =z_interrupt_stacks
ldr r0, =_interrupt_stack
ldr r1, =CONFIG_ISR_STACK_SIZE
adds r0, r0, r1
msr PSP, r0

View File

@@ -9,70 +9,3 @@ config ARM_TRUSTZONE_M
depends on ARMV8_M_SE
help
Platform has support for ARM TrustZone-M.
if ARM_TRUSTZONE_M
menu "ARM TrustZone-M Options"
depends on ARM_SECURE_FIRMWARE || ARM_NONSECURE_FIRMWARE
comment "Secure firmware"
depends on ARM_SECURE_FIRMWARE
comment "Non-secure firmware"
depends on !ARM_SECURE_FIRMWARE
config ARM_SECURE_BUSFAULT_HARDFAULT_NMI
bool "BusFault, HardFault, and NMI target Secure state"
depends on ARM_SECURE_FIRMWARE
help
Force NMI, HardFault, and BusFault (in Mainline ARMv8-M)
exceptions as Secure exceptions.
config ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
bool "Secure Firmware has Secure Entry functions"
depends on ARM_SECURE_FIRMWARE
help
Option indicates that ARM Secure Firmware contains
Secure Entry functions that may be called from
Non-Secure state. Secure Entry functions must be
located in Non-Secure Callable memory regions.
config ARM_NSC_REGION_BASE_ADDRESS
hex "ARM Non-Secure Callable Region base address"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
default 0
help
Start address of Non-Secure Callable section.
Notes:
- The default value (i.e. when the user does not configure
the option explicitly) instructs the linker script to
place the Non-Secure Callable section, automatically,
inside the .text area.
- Certain requirements/restrictions may apply regarding
the size and the alignment of the starting address for
a Non-Secure Callable section, depending on the available
security attribution unit (SAU or IDAU) for a given SOC.
config ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
bool "Non-Secure Firmware uses Secure Entry functions"
depends on ARM_NONSECURE_FIRMWARE
help
Option indicates that ARM Non-Secure Firmware uses Secure
Entry functions provided by the Secure Firmware. The Secure
Firmware must be configured to provide these functions.
config ARM_ENTRY_VENEERS_LIB_NAME
string "Entry Veneers symbol file"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS \
|| ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
default "libentryveneers.a"
help
Library file to find the symbol table for the entry veneers.
The library will typically come from building the Secure
Firmware that contains secure entry functions, and allows
the Non-Secure Firmware to call into the Secure Firmware.
endmenu
endif # ARM_TRUSTZONE_M

View File

@@ -4,23 +4,18 @@
* SPDX-License-Identifier: Apache-2.0
*/
/* nRF-specific defines. */
#ifdef CONFIG_CPU_HAS_NRF_IDAU
/* This SOC needs the NSC region to be at the end of an SPU region. */
#define NSC_ALIGN \
. = ALIGN(CONFIG_NRF_SPU_FLASH_REGION_SIZE) \
- (1 << LOG2CEIL(__sg_size))
#define NSC_ALIGN_END . = ALIGN(CONFIG_NRF_SPU_FLASH_REGION_SIZE)
#endif /* CONFIG_CPU_HAS_NRF_IDAU */
#if CONFIG_ARM_NSC_REGION_BASE_ADDRESS != 0
#define NSC_ALIGN . = ABSOLUTE(CONFIG_ARM_NSC_REGION_BASE_ADDRESS)
#elif !defined(NSC_ALIGN)
#elif defined(CONFIG_CPU_HAS_NRF_IDAU)
/* The nRF9160 needs the NSC region to be at the end of a 32 kB region. */
#define NSC_ALIGN . = ALIGN(0x8000) - (1 << LOG2CEIL(__sg_size))
#else
#define NSC_ALIGN . = ALIGN(4)
#endif /* CONFIG_ARM_NSC_REGION_BASE_ADDRESS */
#endif
#ifndef NSC_ALIGN_END
#ifdef CONFIG_CPU_HAS_NRF_IDAU
#define NSC_ALIGN_END . = ALIGN(0x8000)
#else
#define NSC_ALIGN_END . = ALIGN(4)
#endif
@@ -36,14 +31,11 @@ __sg_size = __sg_end - __sg_start;
NSC_ALIGN_END;
__nsc_size = . - __sg_start;
/* nRF-specific ASSERT. */
#ifdef CONFIG_CPU_HAS_NRF_IDAU
#define NRF_SG_START (__sg_start % CONFIG_NRF_SPU_FLASH_REGION_SIZE)
#define NRF_SG_SIZE (CONFIG_NRF_SPU_FLASH_REGION_SIZE - NRF_SG_START)
ASSERT((__sg_size == 0)
|| (((1 << LOG2CEIL(NRF_SG_SIZE)) == NRF_SG_SIZE) /* Pow of 2 */
&& (NRF_SG_SIZE >= 32)
&& (NRF_SG_SIZE <= 4096)),
ASSERT(1 << LOG2CEIL(0x8000 - (__sg_start % 0x8000))
== (0x8000 - (__sg_start % 0x8000))
&& (0x8000 - (__sg_start % 0x8000)) >= 32
&& (0x8000 - (__sg_start % 0x8000)) <= 4096,
"The Non-Secure Callable region size must be a power of 2 \
between 32 and 4096 bytes.")
#endif
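To make the ASSERT arithmetic concrete: the distance from __sg_start to the end of its 32 kB region must itself be a power of two between 32 and 4096 bytes. A hedged C re-statement of the same check (LOG2CEIL is a linker-script operator; the bit trick below is an illustrative equivalent):

	#include <stdbool.h>
	#include <stdint.h>

	/* Illustrative equivalent of the linker ASSERT above. */
	static bool nsc_size_ok(uint32_t sg_start)
	{
		uint32_t size = 0x8000u - (sg_start % 0x8000u);

		/* power of two, and within [32, 4096] bytes */
		return (size & (size - 1)) == 0 && size >= 32 && size <= 4096;
	}

	/* Example: sg_start == 0x7f00 gives size == 0x100 (256 bytes), which
	 * passes; sg_start == 0x7e80 gives size == 0x180 (384 bytes), which
	 * is not a power of two and would fail the link.
	 */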

View File

@@ -1,6 +1,5 @@
/*
* Copyright (c) 2013-2015 Wind River Systems, Inc.
* Copyright (c) 2020 Nordic Semiconductor ASA.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -39,48 +38,37 @@ SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)
.word z_arm_hard_fault
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word 0
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_svc
.word 0
.word z_arm_reserved
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
.word z_arm_mpu_fault
.word z_arm_bus_fault
.word z_arm_usage_fault
#if defined(CONFIG_ARMV8_M_SE)
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
.word z_arm_secure_fault
#else
.word z_arm_exc_spurious
.word z_arm_reserved
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
#else
.word 0
#endif /* CONFIG_ARMV8_M_SE */
.word 0
.word 0
.word 0
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_svc
.word z_arm_debug_monitor
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
.word 0
.word z_arm_reserved
.word z_arm_pendsv
#if defined(CONFIG_CPU_CORTEX_M_HAS_SYSTICK)
#if defined(CONFIG_SYS_CLOCK_EXISTS)
/* Install z_clock_isr even if CORTEX_M_SYSTICK is not set
* (e.g. to support out-of-tree SysTick-based timer drivers).
*/
.word z_clock_isr
#else
.word z_arm_exc_spurious
#endif /* CONFIG_SYS_CLOCK_EXISTS */
#else
.word 0
#endif /* CONFIG_CPU_CORTEX_M_HAS_SYSTICK */
.word z_arm_reserved
#endif

View File

@@ -48,7 +48,7 @@ GTEXT(z_arm_debug_monitor)
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
GTEXT(z_arm_pendsv)
GTEXT(z_arm_exc_spurious)
GTEXT(z_arm_reserved)
GTEXT(z_arm_prep_c)
GTEXT(_isr_wrapper)

View File

@@ -5,11 +5,7 @@ zephyr_library()
zephyr_library_sources(
vector_table.S
reset.S
exc.S
exc_exit.S
fault.c
irq_init.c
reboot.c
stacks.c
thread_abort.c
)

View File

@@ -33,7 +33,6 @@ config ARMV7_R
bool
select ATOMIC_OPERATIONS_BUILTIN
select ISA_ARM
select ISA_THUMB2
help
This option signifies the use of an ARMv7-R processor
implementation.

View File

@@ -0,0 +1,29 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_internal.h>
#include <kernel_structs.h>
/**
*
* @brief Fault handler
*
* This routine is called when fatal error conditions are detected by hardware
* and is responsible only for reporting the error. Once reported, it then
* invokes the user provided routine _SysFatalErrorHandler() which is
* responsible for implementing the error handling policy.
*
* This is a stub for more exception handling code to be added later.
*/
void z_arm_fault(z_arch_esf_t *esf, u32_t exc_return)
{
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
}
void z_arm_fault_init(void)
{
}

View File

@@ -21,7 +21,7 @@
_ASM_FILE_PROLOGUE
GTEXT(z_arm_reset)
GDATA(z_interrupt_stacks)
GDATA(_interrupt_stack)
GDATA(z_arm_svc_stack)
GDATA(z_arm_sys_stack)
GDATA(z_arm_fiq_stack)
@@ -111,7 +111,7 @@ SECTION_SUBSEC_FUNC(TEXT, _reset_section, __start)
mov r13, #0 /* r13_sys */
mov r14, #0 /* r14_sys */
#if defined(CONFIG_FPU)
#if defined(CONFIG_FLOAT)
/*
* Initialise FPU registers to a defined state.
*/
@@ -142,7 +142,7 @@ SECTION_SUBSEC_FUNC(TEXT, _reset_section, __start)
fmdrr d13, r1, r1
fmdrr d14, r1, r1
fmdrr d15, r1, r1
#endif /* CONFIG_FPU */
#endif /* CONFIG_FLOAT */
#endif /* CONFIG_CPU_HAS_DCLS */
@@ -156,7 +156,7 @@ SECTION_SUBSEC_FUNC(TEXT, _reset_section, __start)
/* IRQ mode stack */
msr CPSR_c, #(MODE_IRQ | I_BIT | F_BIT)
ldr sp, =(z_interrupt_stacks + CONFIG_ISR_STACK_SIZE)
ldr sp, =(_interrupt_stack + CONFIG_ISR_STACK_SIZE)
/* ABT mode stack */
msr CPSR_c, #(MODE_ABT | I_BIT | F_BIT)

View File

@@ -5,9 +5,8 @@
*/
#include <kernel.h>
#include <aarch32/cortex_a_r/stack.h>
#include <aarch32/cortex_r/stack.h>
#include <string.h>
#include <kernel_internal.h>
K_THREAD_STACK_DEFINE(z_arm_fiq_stack, CONFIG_ARMV7_FIQ_STACK_SIZE);
K_THREAD_STACK_DEFINE(z_arm_abort_stack, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
@@ -22,7 +21,6 @@ void z_arm_init_stacks(void)
memset(z_arm_svc_stack, 0xAA, CONFIG_ARMV7_SVC_STACK_SIZE);
memset(z_arm_abort_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(z_arm_undef_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(Z_THREAD_STACK_BUFFER(z_interrupt_stacks[0]), 0xAA,
CONFIG_ISR_STACK_SIZE);
memset(&_interrupt_stack, 0xAA, CONFIG_ISR_STACK_SIZE);
}
#endif

View File

@@ -20,12 +20,12 @@ GTEXT(arch_cpu_idle)
GTEXT(arch_cpu_atomic_idle)
#if defined(CONFIG_CPU_CORTEX_M)
#define _SCB_SCR 0xE000ED10
#define _SCB_SCR 0xE000ED10
#define _SCB_SCR_SEVONPEND (1 << 4)
#define _SCB_SCR_SLEEPDEEP (1 << 2)
#define _SCB_SCR_SLEEPONEXIT (1 << 1)
#define _SCR_INIT_BITS _SCB_SCR_SEVONPEND
#define _SCB_SCR_SEVONPEND (1 << 4)
#define _SCB_SCR_SLEEPDEEP (1 << 2)
#define _SCB_SCR_SLEEPONEXIT (1 << 1)
#define _SCR_INIT_BITS _SCB_SCR_SEVONPEND
#endif
/**
@@ -44,80 +44,48 @@ GTEXT(arch_cpu_atomic_idle)
SECTION_FUNC(TEXT, z_arm_cpu_idle_init)
#if defined(CONFIG_CPU_CORTEX_M)
ldr r1, =_SCB_SCR
movs.n r2, #_SCR_INIT_BITS
str r2, [r1]
ldr r1, =_SCB_SCR
movs.n r2, #_SCR_INIT_BITS
str r2, [r1]
#endif
bx lr
bx lr
SECTION_FUNC(TEXT, arch_cpu_idle)
#ifdef CONFIG_TRACING
push {r0, lr}
bl sys_trace_idle
push {r0, lr}
bl sys_trace_idle
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_TRACING */
#if defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/*
* PRIMASK is always cleared on ARMv7-M and ARMv8-M Mainline (not used
* for interrupt locking), and configuring BASEPRI to the lowest
* priority to ensure wake-up will cause interrupts to be serviced
* before entering low power state.
*
* Set PRIMASK before configuring BASEPRI to prevent interruption
* before wake-up.
*/
cpsid i
/*
* Set wake-up interrupt priority to the lowest and synchronise to
* ensure that this is visible to the WFI instruction.
*/
eors.n r0, r0
msr BASEPRI, r0
isb
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
cpsie i
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* clear BASEPRI so wfi is awakened by incoming interrupts */
eors.n r0, r0
msr BASEPRI, r0
#else
/*
* For all the other ARM architectures that do not implement BASEPRI,
* PRIMASK is used as the interrupt locking mechanism, and it is not
* necessary to set PRIMASK here, as PRIMASK would have already been
* set by the caller as part of interrupt locking if necessary
* (i.e. if the caller sets _kernel.idle).
*/
#endif /* CONFIG_ARMV7_M_ARMV8_M_MAINLINE */
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
/*
* Wait for all memory transactions to complete before entering low
* power state.
*/
dsb
/* Enter low power state */
wfi
/*
* Clear PRIMASK and flush instruction buffer to immediately service
* the wake-up interrupt.
*/
cpsie i
isb
bx lr
bx lr
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
push {r0, lr}
bl sys_trace_idle
push {r0, lr}
bl sys_trace_idle
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_TRACING */
@@ -125,39 +93,37 @@ SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
* Lock PRIMASK while sleeping: wfe will still get interrupted by
* incoming interrupts but the CPU will not service them right away.
*/
cpsid i
cpsid i
/*
* No need to set SEVONPEND, it's set once in z_arm_cpu_idle_init()
* and never touched again.
* No need to set SEVONPEND, it's set once in z_CpuIdleInit() and never
* touched again.
*/
/* r0: interrupt mask from caller */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
/* No BASEPRI, call wfe directly
* (SEVONPEND is set in z_arm_cpu_idle_init())
*/
/* No BASEPRI, call wfe directly (SEVONPEND set in z_CpuIdleInit()) */
wfe
cmp r0, #0
bne _irq_disabled
cpsie i
cmp r0, #0
bne _irq_disabled
cpsie i
_irq_disabled:
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* r1: zero, for setting BASEPRI (needs a register) */
eors.n r1, r1
eors.n r1, r1
/* unlock BASEPRI so wfe gets interrupted by incoming interrupts */
msr BASEPRI, r1
msr BASEPRI, r1
wfe
msr BASEPRI, r0
cpsie i
msr BASEPRI, r0
cpsie i
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
bx lr
bx lr
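For context, these routines back the kernel's k_cpu_idle()/k_cpu_atomic_idle() APIs. A rough usage sketch of the atomic variant, which exists to close the race between checking a wake-up condition and going to sleep (a sketch only; the real idle loop lives in the kernel):

	#include <kernel.h>

	void wait_for_work(volatile bool *work_pending)
	{
		while (!*work_pending) {
			unsigned int key = irq_lock();

			if (*work_pending) {
				irq_unlock(key);
				break;
			}
			/* Atomically re-enable interrupts and wait: a wake-up
			 * arriving after the check above is not lost.
			 */
			k_cpu_atomic_idle(key);
		}
	}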

View File

@@ -6,7 +6,8 @@
/**
* @file
* @brief ARM Cortex-M exception/interrupt exit API
* @brief ARM Cortex-M and Cortex-R exception/interrupt exit API
*
*
* Provides functions for performing kernel handling when exiting exceptions or
* interrupts that are installed directly in the vector table (i.e. that are not
@@ -23,11 +24,14 @@ _ASM_FILE_PROLOGUE
GTEXT(z_arm_exc_exit)
GTEXT(z_arm_int_exit)
GDATA(_kernel)
#if defined(CONFIG_CPU_CORTEX_R)
GTEXT(z_arm_pendsv)
#endif
/**
*
* @brief Kernel housekeeping when exiting interrupt handler installed
* directly in vector table
* directly in vector table
*
* Kernel allows installing interrupt handlers (ISRs) directly into the vector
* table to get the lowest interrupt latency possible. This allows the ISR to
@@ -41,11 +45,11 @@ GDATA(_kernel)
* e.g.
*
* void myISR(void)
* {
* printk("in %s\n", __FUNCTION__);
* doStuff();
* z_arm_int_exit();
* }
* {
* printk("in %s\n", __FUNCTION__);
* doStuff();
* z_arm_int_exit();
* }
*
* @return N/A
*/
@@ -59,7 +63,7 @@ SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_int_exit)
/**
*
* @brief Kernel housekeeping when exiting exception handler installed
* directly in vector table
* directly in vector table
*
* See z_arm_int_exit().
*
@@ -67,19 +71,28 @@ SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_int_exit)
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_exc_exit)
#if defined(CONFIG_CPU_CORTEX_R)
/* r0 contains the caller mode */
push {r0, lr}
#endif
#ifdef CONFIG_PREEMPT_ENABLED
ldr r3, =_kernel
ldr r0, =_kernel
ldr r1, [r3, #_kernel_offset_to_current]
ldr r0, [r3, #_kernel_offset_to_ready_q_cache]
cmp r0, r1
beq _EXIT_EXC
ldr r1, [r0, #_kernel_offset_to_current]
/* context switch required, pend the PendSV exception */
ldr r1, =_SCS_ICSR
ldr r2, =_SCS_ICSR_PENDSV
str r2, [r1]
ldr r0, [r0, #_kernel_offset_to_ready_q_cache]
cmp r0, r1
beq _EXIT_EXC
#if defined(CONFIG_CPU_CORTEX_M)
/* context switch required, pend the PendSV exception */
ldr r1, =_SCS_ICSR
ldr r2, =_SCS_ICSR_PENDSV
str r2, [r1]
#elif defined(CONFIG_CPU_CORTEX_R)
bl z_arm_pendsv
#endif
_ExcExitWithGdbStub:
@@ -87,14 +100,50 @@ _EXIT_EXC:
#endif /* CONFIG_PREEMPT_ENABLED */
#ifdef CONFIG_STACK_SENTINEL
push {r0, lr}
bl z_check_stack_sentinel
#if defined(CONFIG_CPU_CORTEX_M)
push {r0, lr}
bl z_check_stack_sentinel
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#else
bl z_check_stack_sentinel
#endif /* CONFIG_CPU_CORTEX_M */
#endif /* CONFIG_STACK_SENTINEL */
bx lr
#if defined(CONFIG_CPU_CORTEX_M)
bx lr
#elif defined(CONFIG_CPU_CORTEX_R)
/* Restore the caller mode to r0 */
pop {r0, lr}
/*
* Restore r0-r3, r12 and lr stored into the process stack by the mode
* entry function. These registers are saved by _isr_wrapper for IRQ mode
* and z_arm_svc for SVC mode.
*
* r0-r3 are either the values from the thread before it was switched out
* or they are the args to _new_thread for a new thread.
*/
push {r4, r5}
cmp r0, #RET_FROM_SVC
cps #MODE_SYS
ldmia sp!, {r0-r5}
beq _svc_exit
cps #MODE_IRQ
b _exc_exit
_svc_exit:
cps #MODE_SVC
_exc_exit:
mov r12, r4
mov lr, r5
pop {r4, r5}
movs pc, lr
#endif

View File

@@ -23,7 +23,7 @@ static void esf_dump(const z_arch_esf_t *esf)
LOG_ERR("r3/a4: 0x%08x r12/ip: 0x%08x r14/lr: 0x%08x",
esf->basic.a4, esf->basic.ip, esf->basic.lr);
LOG_ERR(" xpsr: 0x%08x", esf->basic.xpsr);
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
for (int i = 0; i < 16; i += 4) {
LOG_ERR("s[%2d]: 0x%08x s[%2d]: 0x%08x"
" s[%2d]: 0x%08x s[%2d]: 0x%08x",

View File

@@ -7,9 +7,9 @@
/**
* @file
* @brief Fault handlers for ARM Cortex-M
* @brief Fault handlers for ARM Cortex-M and Cortex-R
*
* Fault handlers for ARM Cortex-M processors.
* Fault handlers for ARM Cortex-M and Cortex-R processors.
*/
#include <toolchain.h>
@@ -30,25 +30,32 @@ GTEXT(z_arm_usage_fault)
GTEXT(z_arm_secure_fault)
#endif /* CONFIG_ARM_SECURE_FIRMWARE*/
GTEXT(z_arm_debug_monitor)
#elif defined(CONFIG_ARMV7_R)
GTEXT(z_arm_undef_instruction)
GTEXT(z_arm_prefetch_abort)
GTEXT(z_arm_data_abort)
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
GTEXT(z_arm_exc_spurious)
GTEXT(z_arm_reserved)
/**
*
* @brief Fault handler installed in the fault vectors
* @brief Fault handler installed in the fault and reserved vectors
*
* Entry point for the HardFault, MemManageFault, BusFault, UsageFault,
* SecureFault and Debug Monitor exceptions.
* SecureFault, Debug Monitor, and reserved exceptions.
*
* The function supplies the values of
* For Cortex-M: the function supplies the values of
* - the MSP
* - the PSP
* - the EXC_RETURN value
* as parameters to the z_arm_fault() C function that will perform the
* rest of the fault handling (i.e. z_arm_fault(MSP, PSP, EXC_RETURN)).
*
* For Cortex-R: the function simply invokes z_arm_fault() with currently
* unused arguments.
*
* Provides these symbols:
*
* z_arm_hard_fault
@@ -57,7 +64,7 @@ GTEXT(z_arm_exc_spurious)
* z_arm_usage_fault
* z_arm_secure_fault
* z_arm_debug_monitor
* z_arm_exc_spurious
* z_arm_reserved
*/
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_hard_fault)
@@ -71,19 +78,40 @@ SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_usage_fault)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_secure_fault)
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_debug_monitor)
#elif defined(CONFIG_ARMV7_R)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_undef_instruction)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_prefetch_abort)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_data_abort)
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_exc_spurious)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_reserved)
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) || \
defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
mrs r0, MSP
mrs r1, PSP
mov r2, lr /* EXC_RETURN */
push {r0, lr}
#elif defined(CONFIG_ARMV7_R)
/*
* Pass null for the esf to z_arm_fault for now. A future PR will add
* better exception debug for Cortex-R that subsumes what esf
* provides.
*/
mov r0, #0
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE || CONFIG_ARMV7_M_ARMV8_M_MAINLINE */
bl z_arm_fault
#if defined(CONFIG_CPU_CORTEX_M)
pop {r0, pc}
#elif defined(CONFIG_CPU_CORTEX_R)
pop {r0, lr}
subs pc, lr, #8
#endif
.end

View File

@@ -18,8 +18,9 @@
#include <arch/cpu.h>
#if defined(CONFIG_CPU_CORTEX_M)
#include <arch/arm/aarch32/cortex_m/cmsis.h>
#elif defined(CONFIG_CPU_CORTEX_A) || defined(CONFIG_CPU_CORTEX_R)
#include <drivers/interrupt_controller/gic.h>
#elif defined(CONFIG_CPU_CORTEX_R)
#include <device.h>
#include <irq_nextlevel.h>
#endif
#include <sys/__assert.h>
#include <toolchain.h>
@@ -88,39 +89,33 @@ void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
* affecting performance (can still be useful on systems with a
* reduced set of priorities, like Cortex-M0/M0+).
*/
__ASSERT(prio <= (BIT(NUM_IRQ_PRIO_BITS) - 1),
__ASSERT(prio <= (BIT(DT_NUM_IRQ_PRIO_BITS) - 1),
"invalid priority %d! values must be less than %lu\n",
prio - _IRQ_PRIO_OFFSET,
BIT(NUM_IRQ_PRIO_BITS) - (_IRQ_PRIO_OFFSET));
BIT(DT_NUM_IRQ_PRIO_BITS) - (_IRQ_PRIO_OFFSET));
NVIC_SetPriority((IRQn_Type)irq, prio);
}
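The priority validated above originates from the value an application passes when connecting an interrupt. A short usage sketch; the IRQ line, priority and handler below are illustrative only:

	#include <kernel.h>

	#define MY_IRQ      12	/* hypothetical IRQ line */
	#define MY_IRQ_PRIO 2	/* must fit in the available priority bits */

	static void my_isr(void *arg)
	{
		ARG_UNUSED(arg);
		/* service the device */
	}

	void install_my_irq(void)
	{
		IRQ_CONNECT(MY_IRQ, MY_IRQ_PRIO, my_isr, NULL, 0);
		irq_enable(MY_IRQ);
	}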
#elif defined(CONFIG_CPU_CORTEX_A) || defined(CONFIG_CPU_CORTEX_R)
/*
* For Cortex-A and Cortex-R cores, the default interrupt controller is the ARM
* Generic Interrupt Controller (GIC) and therefore the architecture interrupt
* control functions are mapped to the GIC driver interface.
*
* When a custom interrupt controller is used (i.e.
* CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER is enabled), the architecture
* interrupt control functions are mapped to the SoC layer in
* `include/arch/arm/aarch32/irq.h`.
*/
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
#elif defined(CONFIG_CPU_CORTEX_R)
void arch_irq_enable(unsigned int irq)
{
arm_gic_irq_enable(irq);
struct device *dev = _sw_isr_table[0].arg;
irq_enable_next_level(dev, (irq >> 8) - 1);
}
void arch_irq_disable(unsigned int irq)
{
arm_gic_irq_disable(irq);
struct device *dev = _sw_isr_table[0].arg;
irq_disable_next_level(dev, (irq >> 8) - 1);
}
int arch_irq_is_enabled(unsigned int irq)
{
return arm_gic_irq_is_enabled(irq);
struct device *dev = _sw_isr_table[0].arg;
return irq_is_enabled_next_level(dev);
}
/**
@@ -137,28 +132,31 @@ int arch_irq_is_enabled(unsigned int irq)
*/
void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
arm_gic_irq_set_priority(irq, prio, flags);
struct device *dev = _sw_isr_table[0].arg;
if (irq == 0)
return;
irq_set_priority_next_level(dev, (irq >> 8) - 1, prio, flags);
}
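The `(irq >> 8) - 1` decode above implies a two-level IRQ numbering scheme in which the second-level line is stored, biased by one, in the upper byte while the low byte identifies the parent aggregator. A small encoder consistent with that decode (an illustration of the scheme, not Zephyr's actual helper):

	/* Encode a level-2 interrupt number: the child line is biased by 1
	 * so that a zero upper byte can mean "no level-2 component".
	 */
	static inline unsigned int encode_level2_irq(unsigned int line,
						     unsigned int parent_irq)
	{
		return ((line + 1) << 8) | parent_irq;
	}

	/* Decoding, as done in the handlers above: line = (irq >> 8) - 1 */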
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
#endif /* CONFIG_CPU_CORTEX_M */
void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
#endif
/**
*
* @brief Spurious interrupt handler
*
* Installed in all _sw_isr_table slots at boot time. Throws an error if
* Installed in all dynamic interrupt slots at boot time. Throws an error if
* called.
*
* See z_arm_reserved().
*
* @return N/A
*/
void z_irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_arm_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
z_arm_reserved();
}
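Since z_irq_spurious() occupies every software ISR table slot until a real handler is installed, a slot can be replaced at run time with the dynamic interrupt API. A hedged sketch (requires CONFIG_DYNAMIC_INTERRUPTS; the line and priority values are illustrative):

	#include <kernel.h>

	static void my_dynamic_isr(void *arg)
	{
		ARG_UNUSED(arg);
		/* service the device */
	}

	void install_dynamic_handler(void)
	{
		irq_connect_dynamic(15, 3, my_dynamic_isr, NULL, 0);
		irq_enable(15);
	}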
#ifdef CONFIG_SYS_POWER_MANAGEMENT

View File

@@ -1,6 +1,5 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -46,39 +45,20 @@ SECTION_FUNC(TEXT, _isr_wrapper)
push {r0,lr} /* r0, lr are now the first items on the stack */
#elif defined(CONFIG_CPU_CORTEX_R)
/*
* Save away r0-r3, r12 and lr_irq for the previous context to the
* process stack since they are clobbered here. Also, save away lr
* and spsr_irq since we may swap processes and return to a different
* thread.
* Save away r0-r3 from previous context to the process stack since
* they are clobbered here. Also, save away lr since we may swap
* processes and return to a different thread.
*/
sub lr, lr, #4
srsdb #MODE_SYS!
push {r4, r5}
mov r4, r12
sub r5, lr, #4
cps #MODE_SYS
push {r0-r3, r12, lr}
stmdb sp!, {r0-r5}
cps #MODE_IRQ
/*
* Use SVC mode stack for predictable interrupt behaviour; running ISRs
* in the SYS/USR mode stack (i.e. interrupted thread stack) leaves the
* ISR stack usage at the mercy of the interrupted thread and this can
* be prone to stack overflows if any of the ISRs and/or preemptible
* threads have high stack usage.
*
* When userspace is enabled, this also prevents leaking privileged
* information to the user mode.
*/
cps #MODE_SVC
/* Align stack at double-word boundary */
and r3, sp, #4
sub sp, sp, r3
push {r2, r3}
/* Increment interrupt nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
#endif /* CONFIG_CPU_CORTEX_M */
pop {r4, r5}
#endif
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
@@ -98,13 +78,12 @@ SECTION_FUNC(TEXT, _isr_wrapper)
* is called with interrupts disabled.
*/
/*
* FIXME: Remove the Cortex-M conditional compilation checks for `cpsid i`
* and `cpsie i` after the Cortex-R port is updated to support
* interrupt nesting. For more details, refer to the issue #21758.
*/
#if defined(CONFIG_CPU_CORTEX_M)
/*
* Disable interrupts to prevent nesting while exiting idle state. This
* is only necessary for the Cortex-M because it is the only ARM
* architecture variant that automatically enables interrupts when
* entering an ISR.
*/
cpsid i /* PRIMASK = 1 */
#endif
@@ -147,6 +126,7 @@ _idle_state_cleared:
#if defined(CONFIG_CPU_CORTEX_M)
mrs r0, IPSR /* get exception number */
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
ldr r1, =16
subs r0, r1 /* get IRQ number */
@@ -154,56 +134,34 @@ _idle_state_cleared:
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
sub r0, r0, #16 /* get IRQ number */
lsl r0, r0, #3 /* table is 8-byte wide */
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#elif defined(CONFIG_CPU_CORTEX_R)
/* Get active IRQ number from the interrupt controller */
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
bl arm_gic_get_active
#else
bl z_soc_irq_get_active
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
push {r0, r1}
lsl r0, r0, #3 /* table is 8-byte wide */
#elif defined(CONFIG_ARMV7_R)
/*
* Cortex-R only has one IRQ line so the main handler will be at
* offset 0 of the table.
*/
mov r0, #0
#else
#error Unknown ARM architecture
#endif /* CONFIG_CPU_CORTEX_M */
#if !defined(CONFIG_CPU_CORTEX_M)
/*
* Enable interrupts to allow nesting.
*
* Note that interrupts are disabled up to this point on the ARM
* architecture variants other than the Cortex-M. It is also important
* to note that most interrupt controllers require that the nested
* interrupts are handled after the active interrupt is acknowledged;
* this is done through the `get_active` interrupt controller
* interface function.
*/
cpsie i
#endif /* !CONFIG_CPU_CORTEX_M */
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
ldr r1, =_sw_isr_table
add r1, r1, r0 /* table entry: ISRs must have their MSB set to stay
* in thumb mode */
ldm r1!,{r0,r3} /* arg in r0, ISR in r3 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push {r0, r3} /* Save r0 and r3 into stack */
stm sp!,{r0-r3} /* Save r0 to r3 into stack */
push {r0, lr}
bl read_timer_end_of_isr
pop {r0, r3} /* Restore r0 and r3 regs */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r3}
mov lr,r3
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
ldm sp!,{r0-r3} /* Restore r0 to r3 regs */
#endif /* CONFIG_EXECUTION_BENCHMARKING */
blx r3 /* call ISR */
#if defined(CONFIG_CPU_CORTEX_R)
/* Signal end-of-interrupt */
pop {r0, r1}
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
bl arm_gic_eoi
#else
bl z_soc_irq_eoi
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
#endif /* CONFIG_CPU_CORTEX_R */
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_exit
#endif
@@ -215,7 +173,7 @@ _idle_state_cleared:
pop {r0, lr}
#elif defined(CONFIG_ARMV7_R)
/*
* r0 and lr_irq were saved on the process stack since a swap could
* r0,lr were saved on the process stack since a swap could
* happen. exc_exit will handle getting those values back
* from the process stack to return to the correct location
* so there is no need to do anything here.
@@ -224,6 +182,10 @@ _idle_state_cleared:
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#if defined(CONFIG_CPU_CORTEX_R)
mov r0, #RET_FROM_IRQ
#endif
/* Use 'bx' instead of 'b' because 'bx' can jump further, and use
* 'bx' instead of 'blx' because exception return is done in
* z_arm_int_exit() */
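The table lookup above reads more clearly in C. A minimal sketch, assuming the {arg, isr} entry layout implied by the "table is 8-byte wide" shift; dispatch_irq and struct isr_table_entry are illustrative names, not kernel symbols:

/* One _sw_isr_table entry: two 32-bit words, matching the lsl #3 index */
struct isr_table_entry {
        void *arg;              /* parameter, loaded into r0 */
        void (*isr)(void *arg); /* handler, called via blx r3 */
};

extern struct isr_table_entry _sw_isr_table[];

static inline void dispatch_irq(unsigned int exc_num)
{
        /* On Cortex-M, external IRQs start at exception number 16 */
        struct isr_table_entry *e = &_sw_isr_table[exc_num - 16U];

        e->isr(e->arg);
}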


@@ -91,5 +91,5 @@ void z_NmiHandlerSet(void (*pHandler)(void))
void z_arm_nmi(void)
{
handler();
z_arm_int_exit();
z_arm_exc_exit();
}


@@ -21,7 +21,7 @@
#include <linker/linker-defs.h>
#if defined(CONFIG_ARMV7_R)
#include <aarch32/cortex_a_r/stack.h>
#include <aarch32/cortex_r/stack.h>
#endif
#if defined(__GNUC__)
@@ -86,7 +86,7 @@ static inline void z_arm_floating_point_init(void)
*/
SCB->CPACR &= (~(CPACR_CP10_Msk | CPACR_CP11_Msk));
#if defined(CONFIG_FPU)
#if defined(CONFIG_FLOAT)
/*
* Enable CP10 and CP11 Co-Processors to enable access to floating
* point registers.
@@ -102,7 +102,7 @@ static inline void z_arm_floating_point_init(void)
* Upon reset, the FPU Context Control Register is 0xC0000000
* (both Automatic and Lazy state preservation is enabled).
*/
#if !defined(CONFIG_FPU_SHARING)
#if !defined(CONFIG_FP_SHARING)
/* Default mode is Unshared FP registers mode. We disable the
* automatic stacking of FP registers (automatic setting of
* FPCA bit in the CONTROL register), upon exception entries,
@@ -125,7 +125,7 @@ static inline void z_arm_floating_point_init(void)
* out during context-switch.
*/
FPU->FPCCR = FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk;
#endif /* CONFIG_FPU_SHARING */
#endif /* CONFIG_FP_SHARING */
/* Make the side-effects of modifying the FPCCR be realized
* immediately.
@@ -143,7 +143,7 @@ static inline void z_arm_floating_point_init(void)
* of floating point instructions.
*/
#endif /* CONFIG_FPU */
#endif /* CONFIG_FLOAT */
/*
* Upon reset, the CONTROL.FPCA bit is, normally, cleared. However,
@@ -154,7 +154,7 @@ static inline void z_arm_floating_point_init(void)
* In Sharing FP Registers mode CONTROL.FPCA is cleared before switching
* to main, so it may be skipped here (saving few boot cycles).
*/
#if !defined(CONFIG_FPU) || !defined(CONFIG_FPU_SHARING)
#if !defined(CONFIG_FLOAT) || !defined(CONFIG_FP_SHARING)
__set_CONTROL(__get_CONTROL() & (~(CONTROL_FPCA_Msk)));
#endif
}
@@ -180,7 +180,7 @@ void z_arm_prep_c(void)
#if defined(CONFIG_ARMV7_R) && defined(CONFIG_INIT_STACKS)
z_arm_init_stacks();
#endif
z_arm_interrupt_init();
z_arm_int_lib_init();
z_cstart();
CODE_UNREACHABLE;
}
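The floating-point set-up in this file boils down to a few register writes. A hedged, CMSIS-style sketch of the unshared-registers path described above (assumes the CMSIS core header for SCB, FPU, __DSB and __ISB; the CPACR masks mirror the ones used in the diff):

#define CPACR_CP10_Msk (0x3UL << 20) /* full access to CP10 */
#define CPACR_CP11_Msk (0x3UL << 22) /* full access to CP11 */

static void fp_init_unshared(void)
{
        /* Enable CP10/CP11 so FP instructions no longer fault */
        SCB->CPACR |= (CPACR_CP10_Msk | CPACR_CP11_Msk);

        /* Unshared FP registers mode: clear ASPEN/LSPEN so exception
         * entry never stacks the FP registers automatically.
         */
        FPU->FPCCR &= ~(FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk);

        __DSB();
        __ISB(); /* realize the FPCCR side-effects immediately */
}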


@@ -1,7 +1,6 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
* Copyright (c) 2017-2019 Nordic Semiconductor ASA.
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -89,7 +88,7 @@ SECTION_FUNC(TEXT, z_arm_pendsv)
stmea r0!, {r3-r7}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
stmia r0, {v1-v8, ip}
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
/* Assess whether switched-out thread had been using the FP registers. */
ldr r0, =0x10 /* EXC_RETURN.F_Type Mask */
tst lr, r0 /* EXC_RETURN & EXC_RETURN.F_Type_Msk */
@@ -108,12 +107,11 @@ out_fp_active:
out_fp_endif:
str r0, [r2, #_thread_offset_to_mode]
#endif /* CONFIG_FPU_SHARING */
#endif /* CONFIG_FP_SHARING */
#elif defined(CONFIG_ARMV7_R)
/* Store rest of process context */
cps #MODE_SYS
stm r0, {r4-r11, sp}
cps #MODE_SVC
mrs r12, SPSR
stm r0, {r4-r12,sp,lr}^
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
@@ -248,7 +246,7 @@ _thread_irq_disabled:
/* restore BASEPRI for the incoming thread */
msr BASEPRI, r0
#ifdef CONFIG_FPU_SHARING
#ifdef CONFIG_FP_SHARING
/* Assess whether switched-in thread had been using the FP registers. */
ldr r0, [r2, #_thread_offset_to_mode]
tst r0, #0x04 /* thread.arch.mode & CONTROL.FPCA Msk */
@@ -317,10 +315,9 @@ _thread_irq_disabled:
ldr r0, =_thread_offset_to_callee_saved
add r0, r2
/* restore r4-r11 and sp for incoming thread */
cps #MODE_SYS
ldm r0, {r4-r11, sp}
cps #MODE_SVC
/* restore r4-r12 for incoming thread, plus system sp and lr */
ldm r0, {r4-r12,sp,lr}^
msr SPSR_fsxc, r12
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
@@ -588,21 +585,15 @@ SECTION_FUNC(TEXT, z_arm_svc)
* Save r12 and the lr as we could be swapping in another process and
* returning to a different location.
*/
srsdb #MODE_SYS!
push {r4, r5}
mov r4, r12
mov r5, lr
cps #MODE_SYS
push {r0-r3, r12, lr}
stmdb sp!, {r0-r5}
cps #MODE_SVC
/* Align stack at double-word boundary */
and r3, sp, #4
sub sp, sp, r3
push {r2, r3}
/* Increment interrupt nesting count */
ldr r2, =_kernel
ldr r0, [r2, #_kernel_offset_to_nested]
add r0, r0, #1
str r0, [r2, #_kernel_offset_to_nested]
pop {r4, r5}
/* Get SVC number */
mrs r0, spsr
@@ -634,6 +625,7 @@ demux:
blx z_irq_do_offload /* call C routine which executes the offload */
/* exception return is done in z_arm_int_exit() */
mov r0, #RET_FROM_SVC
b z_arm_int_exit
#endif
@@ -641,6 +633,7 @@ _context_switch:
/* handler mode exit, to PendSV */
bl z_arm_pendsv
mov r0, #RET_FROM_SVC
b z_arm_int_exit
_oops:
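The FP bookkeeping in this handler hinges on a single bit of the EXC_RETURN value held in lr. A small sketch of the 0x10 test performed above (the function name is illustrative):

#include <stdbool.h>
#include <stdint.h>

#define EXC_RETURN_FTYPE_Msk 0x10UL /* 0 = extended (FP) frame stacked */

/* Was the switched-out thread using the FP registers? */
static inline bool fp_context_active(uint32_t exc_return)
{
        return (exc_return & EXC_RETURN_FTYPE_Msk) == 0U;
}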


@@ -16,6 +16,10 @@
#include <ksched.h>
#include <wait_q.h>
#ifdef CONFIG_USERSPACE
extern u8_t *z_priv_stack_find(void *obj);
#endif
/* An initial context, to be "restored" by z_arm_pendsv(), is put at the other
* end of the stack, and thus reusable by the stack when not needed anymore.
*
@@ -38,15 +42,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
/* Offset between the top of stack and the high end of stack area. */
u32_t top_of_stack_offset = 0U;
#if defined(CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT) \
&& defined(CONFIG_USERSPACE)
/* This is required to work-around the case where the thread
* is created without using K_THREAD_STACK_SIZEOF() macro in
* k_thread_create(). If K_THREAD_STACK_SIZEOF() is used, the
* Guard size has already been taken out of stackSize.
*/
stackSize -= MPU_GUARD_ALIGN_AND_SIZE;
#endif
Z_ASSERT_VALID_PRIO(priority, pEntry);
#if defined(CONFIG_USERSPACE)
/* Truncate the stack size to align with the MPU region granularity.
@@ -58,7 +54,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* Reserve space on top of stack for local data. */
u32_t p_local_data = Z_STACK_PTR_ALIGN(pStackMem + stackSize
u32_t p_local_data = STACK_ROUND_DOWN(pStackMem + stackSize
- sizeof(*thread->userspace_local_data));
thread->userspace_local_data =
@@ -71,7 +67,17 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#endif /* CONFIG_THREAD_USERSPACE_LOCAL_DATA */
#endif /* CONFIG_USERSPACE */
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING) \
#if defined(CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT) \
&& defined(CONFIG_USERSPACE)
/* This is required to work-around the case where the thread
* is created without using K_THREAD_STACK_SIZEOF() macro in
* k_thread_create(). If K_THREAD_STACK_SIZEOF() is used, the
* Guard size has already been taken out of stackSize.
*/
stackSize -= MPU_GUARD_ALIGN_AND_SIZE;
#endif
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING) \
&& defined(CONFIG_MPU_STACK_GUARD)
/* For a thread which intends to use the FP services, it is required to
* allocate a wider MPU guard region, to always successfully detect an
@@ -92,7 +98,8 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
struct __esf *pInitCtx;
z_new_thread_init(thread, pStackMem, stackSize);
z_new_thread_init(thread, pStackMem, stackSize, priority,
options);
/* Carve the thread entry struct from the "base" of the stack
*
@@ -100,7 +107,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
* stack frame (state context), because no FP operations have been
* performed yet for this thread.
*/
pInitCtx = (struct __esf *)(Z_STACK_PTR_ALIGN(stackEnd -
pInitCtx = (struct __esf *)(STACK_ROUND_DOWN(stackEnd -
(char *)top_of_stack_offset - sizeof(struct __basic_sf)));
#if defined(CONFIG_USERSPACE)
@@ -122,22 +129,18 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
pInitCtx->basic.a2 = (u32_t)parameter1;
pInitCtx->basic.a3 = (u32_t)parameter2;
pInitCtx->basic.a4 = (u32_t)parameter3;
#if defined(CONFIG_CPU_CORTEX_M)
pInitCtx->basic.xpsr =
0x01000000UL; /* clear all, thumb bit is 1, even if RO */
#else
pInitCtx->basic.xpsr = A_BIT | MODE_SYS;
#if defined(CONFIG_COMPILER_ISA_THUMB2)
pInitCtx->basic.xpsr |= T_BIT;
#endif /* CONFIG_COMPILER_ISA_THUMB2 */
#endif /* CONFIG_CPU_CORTEX_M */
thread->callee_saved.psp = (u32_t)pInitCtx;
#if defined(CONFIG_CPU_CORTEX_R)
pInitCtx->basic.lr = (u32_t)pInitCtx->basic.pc;
thread->callee_saved.spsr = A_BIT | T_BIT | MODE_SYS;
thread->callee_saved.lr = (u32_t)pInitCtx->basic.pc;
#endif
thread->arch.basepri = 0;
#if defined(CONFIG_USERSPACE) || defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_USERSPACE) || defined(CONFIG_FP_SHARING)
thread->arch.mode = 0;
#if defined(CONFIG_USERSPACE)
thread->arch.priv_stack_start = 0;
@@ -166,13 +169,13 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
* privileged stack. Adjust the available (writable) stack
* buffer area accordingly.
*/
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
_current->arch.priv_stack_start +=
(_current->base.user_options & K_FP_REGS) ?
MPU_GUARD_ALIGN_AND_SIZE_FLOAT : MPU_GUARD_ALIGN_AND_SIZE;
#else
_current->arch.priv_stack_start += MPU_GUARD_ALIGN_AND_SIZE;
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
#endif /* CONFIG_MPU_STACK_GUARD */
z_arm_userspace_enter(user_entry, p1, p2, p3,
@@ -234,7 +237,7 @@ void configure_builtin_stack_guard(struct k_thread *thread)
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
#define IS_MPU_GUARD_VIOLATION(guard_start, guard_len, fault_addr, stack_ptr) \
((fault_addr != -EINVAL) ? \
((fault_addr == -EINVAL) ? \
((fault_addr >= guard_start) && \
(fault_addr < (guard_start + guard_len)) && \
(stack_ptr < (guard_start + guard_len))) \
@@ -285,12 +288,12 @@ u32_t z_check_thread_stack_fail(const u32_t fault_addr, const u32_t psp)
return 0;
}
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
u32_t guard_len = (thread->base.user_options & K_FP_REGS) ?
MPU_GUARD_ALIGN_AND_SIZE_FLOAT : MPU_GUARD_ALIGN_AND_SIZE;
#else
u32_t guard_len = MPU_GUARD_ALIGN_AND_SIZE;
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
@@ -333,7 +336,7 @@ u32_t z_check_thread_stack_fail(const u32_t fault_addr, const u32_t psp)
}
#endif /* CONFIG_MPU_STACK_GUARD || CONFIG_USERSPACE */
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
{
if (thread != _current) {
@@ -365,25 +368,25 @@ int arch_float_disable(struct k_thread *thread)
return 0;
}
#endif /* CONFIG_FPU && CONFIG_FPU_SHARING */
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
void arch_switch_to_main_thread(struct k_thread *main_thread,
k_thread_stack_t *main_stack,
size_t main_stack_size,
k_thread_entry_t _main)
{
#if defined(CONFIG_FPU)
#if defined(CONFIG_FLOAT)
/* Initialize the Floating Point Status and Control Register when in
* Unshared FP Registers mode (In Shared FP Registers mode, FPSCR is
* initialized at thread creation for threads that make use of the FP).
*/
__set_FPSCR(0);
#if defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FP_SHARING)
/* In Sharing mode clearing FPSCR may set the CONTROL.FPCA flag. */
__set_CONTROL(__get_CONTROL() & (~(CONTROL_FPCA_Msk)));
__ISB();
#endif /* CONFIG_FPU_SHARING */
#endif /* CONFIG_FPU */
#endif /* CONFIG_FP_SHARING */
#endif /* CONFIG_FLOAT */
#ifdef CONFIG_ARM_MPU
/* Configure static memory map. This will program MPU regions,
@@ -401,7 +404,7 @@ void arch_switch_to_main_thread(struct k_thread *main_thread,
start_of_main_stack =
Z_THREAD_STACK_BUFFER(main_stack) + main_stack_size;
start_of_main_stack = (char *)Z_STACK_PTR_ALIGN(start_of_main_stack);
start_of_main_stack = (char *)STACK_ROUND_DOWN(start_of_main_stack);
_current = main_thread;
#ifdef CONFIG_TRACING
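The IS_MPU_GUARD_VIOLATION() macro shown above packs the whole check into one expression; spelled out as a C sketch (same logic, illustrative function name):

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* A fault is a guard violation if the reported fault address falls
 * inside [guard_start, guard_start + guard_len) with the stack pointer
 * inside that window too; with no fault address (-EINVAL), fall back
 * to checking the stack pointer alone.
 */
static bool is_mpu_guard_violation(uint32_t guard_start, uint32_t guard_len,
                                   int32_t fault_addr, uint32_t stack_ptr)
{
        if (fault_addr != -EINVAL) {
                return ((uint32_t)fault_addr >= guard_start) &&
                       ((uint32_t)fault_addr < guard_start + guard_len) &&
                       (stack_ptr < guard_start + guard_len);
        }

        return stack_ptr < guard_start + guard_len;
}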


@@ -8,7 +8,7 @@ _vector_start = .;
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(_IRQ_VECTOR_TABLE_SECTION_NAME))
KEEP(*(IRQ_VECTOR_TABLE))
KEEP(*(.vectors))


@@ -9,7 +9,6 @@ endif ()
zephyr_library_sources(
cpu_idle.S
fatal.c
irq_init.c
irq_manage.c
prep_c.c
reset.S


@@ -3,6 +3,13 @@
# Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
# SPDX-License-Identifier: Apache-2.0
if ARM64
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_A
bool
select CPU_CORTEX
@@ -17,13 +24,6 @@ config CPU_CORTEX_A53
help
This option signifies the use of a Cortex-A53 CPU
config CPU_CORTEX_A72
bool
select CPU_CORTEX_A
select ARMV8_A
help
This option signifies the use of a Cortex-A72 CPU
config SWITCH_TO_EL1
bool "Switch to EL1 at boot"
default y
@@ -49,6 +49,9 @@ config TEST_EXTRA_STACKSIZE
config SYSTEM_WORKQUEUE_STACK_SIZE
default 4096
config OFFLOAD_WORKQUEUE_STACK_SIZE
default 4096
config CMSIS_THREAD_MAX_STACK_SIZE
default 4096
@@ -164,6 +167,8 @@ config ARM64_PA_BITS
default 42 if ARM64_PA_BITS_42
default 48 if ARM64_PA_BITS_48
endif # ARM_MMU
endif #ARM_MMU
endif # CPU_CORTEX_A
endif # ARM64


@@ -13,8 +13,6 @@
#include <linker/sections.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(arch_cpu_idle)
SECTION_FUNC(TEXT, arch_cpu_idle)
#ifdef CONFIG_TRACING


@@ -1,34 +0,0 @@
/*
* Copyright (c) 2020 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM64 Cortex-A interrupt initialisation
*/
#include <arch/cpu.h>
#include <drivers/interrupt_controller/gic.h>
/**
* @brief Initialise interrupts
*
* This function invokes the ARM Generic Interrupt Controller (GIC) driver to
* initialise the interrupt system on the SoCs that use the GIC as the primary
* interrupt controller.
*
* When a custom interrupt controller is used, however, the SoC layer function
* is invoked for SoC-specific interrupt system initialisation.
*/
void z_arm64_interrupt_init(void)
{
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
/* Initialise the Generic Interrupt Controller (GIC) driver */
arm_gic_init();
#else
/* Invoke SoC-specific interrupt controller initialisation */
z_soc_irq_init();
#endif
}


@@ -11,58 +11,46 @@
#include <kernel.h>
#include <arch/cpu.h>
#include <device.h>
#include <tracing/tracing.h>
#include <irq.h>
#include <irq_nextlevel.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
#include <drivers/interrupt_controller/gic.h>
void z_arm64_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
/*
* The default interrupt controller for AArch64 is the ARM Generic Interrupt
* Controller (GIC) and therefore the architecture interrupt control functions
* are mapped to the GIC driver interface.
*
* When a custom interrupt controller is used (i.e.
* CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER is enabled), the architecture
* interrupt control functions are mapped to the SoC layer in
* `include/arch/arm/aarch64/irq.h`.
*/
void arch_irq_enable(unsigned int irq)
{
arm_gic_irq_enable(irq);
struct device *dev = _sw_isr_table[0].arg;
irq_enable_next_level(dev, (irq >> 8) - 1);
}
void arch_irq_disable(unsigned int irq)
{
arm_gic_irq_disable(irq);
struct device *dev = _sw_isr_table[0].arg;
irq_disable_next_level(dev, (irq >> 8) - 1);
}
int arch_irq_is_enabled(unsigned int irq)
{
return arm_gic_irq_is_enabled(irq);
struct device *dev = _sw_isr_table[0].arg;
return irq_is_enabled_next_level(dev);
}
void z_arm64_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
arm_gic_irq_set_priority(irq, prio, flags);
}
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
struct device *dev = _sw_isr_table[0].arg;
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_arm64_irq_priority_set(irq, priority, flags);
return irq;
if (irq == 0)
return;
irq_set_priority_next_level(dev, (irq >> 8) - 1, prio, flags);
}
#endif
void z_irq_spurious(void *unused)
{


@@ -14,9 +14,6 @@
#include <offsets_short.h>
#include <arch/cpu.h>
#include <sw_isr_table.h>
#include "macro_priv.inc"
_ASM_FILE_PROLOGUE
GDATA(_sw_isr_table)
@@ -33,7 +30,27 @@ GDATA(_sw_isr_table)
GTEXT(_isr_wrapper)
SECTION_FUNC(TEXT, _isr_wrapper)
z_arm64_enter_exc
/*
* Save x0-x18 (and x30) on the process stack because they can be
* clobbered by the ISR and/or context switch.
*
* Two things can happen:
*
* - No context-switch: in this case x19-x28 are callee-saved register
* so we can be sure they are not going to be clobbered by ISR.
* - Context-switch: the callee-saved registers are saved by
* z_arm64_pendsv() in the kernel structure.
*/
stp x0, x1, [sp, #-16]!
stp x2, x3, [sp, #-16]!
stp x4, x5, [sp, #-16]!
stp x6, x7, [sp, #-16]!
stp x8, x9, [sp, #-16]!
stp x10, x11, [sp, #-16]!
stp x12, x13, [sp, #-16]!
stp x14, x15, [sp, #-16]!
stp x16, x17, [sp, #-16]!
stp x18, x30, [sp, #-16]!
/* ++(_kernel->nested) to be checked by arch_is_in_isr() */
ldr x1, =_kernel
@@ -45,35 +62,10 @@ SECTION_FUNC(TEXT, _isr_wrapper)
bl sys_trace_isr_enter
#endif
/* Get active IRQ number from the interrupt controller */
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
bl arm_gic_get_active
#else
bl z_soc_irq_get_active
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
stp x0, x1, [sp, #-16]!
lsl x0, x0, #4 /* table is 16-byte wide */
/* Retrieve the interrupt service routine */
/* Cortex-A has one IRQ line so the main handler will be at offset 0 */
ldr x1, =_sw_isr_table
add x1, x1, x0
ldp x0, x3, [x1] /* arg in x0, ISR in x3 */
/*
* Call the ISR. Unmask and mask again the IRQs to support nested
* exception handlers
*/
msr daifclr, #(DAIFSET_IRQ)
blr x3
msr daifset, #(DAIFSET_IRQ)
/* Signal end-of-interrupt */
ldp x0, x1, [sp], #16
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
bl arm_gic_eoi
#else
bl z_soc_irq_eoi
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
#ifdef CONFIG_TRACING
bl sys_trace_isr_exit
@@ -85,9 +77,6 @@ SECTION_FUNC(TEXT, _isr_wrapper)
sub x2, x2, #1
str x2, [x1, #_kernel_offset_to_nested]
cmp x2, #0
bne exit
/* Check if we need to context switch */
ldr x2, [x1, #_kernel_offset_to_current]
ldr x3, [x1, #_kernel_offset_to_ready_q_cache]
@@ -95,18 +84,18 @@ SECTION_FUNC(TEXT, _isr_wrapper)
beq exit
/* Switch thread */
bl z_arm64_context_switch
bl z_arm64_pendsv
/* We return here in two cases:
*
* - The ISR was taken and no context switch was performed.
* - A context-switch was performed during the ISR in the past and now
* the thread has been switched in again and we return here from the
* ret in z_arm64_context_switch() because x30 was saved and restored.
* ret in z_arm64_pendsv() because x30 was saved and restored.
*/
exit:
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
#endif
z_arm64_exit_exc
b z_arm64_exit_exc
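The unmask/mask pair around blr x3 is what enables nesting: interrupts are open only while the handler body runs, after the active IRQ has been acknowledged and before end-of-interrupt. A hedged C sketch of that sequence (arm_gic_get_active()/arm_gic_eoi() as called above; the entry struct mirrors the 16-byte-wide table, and the daifclr/daifset immediate assumes the IRQ mask bit at position 1):

extern unsigned int arm_gic_get_active(void);
extern void arm_gic_eoi(unsigned int irq);

struct isr_table_entry {
        void *arg;              /* 16-byte entries on AArch64 */
        void (*isr)(void *arg);
};
extern struct isr_table_entry _sw_isr_table[];

static void isr_dispatch_nested(void)
{
        /* Acknowledge first: the GIC must know the active IRQ before
         * nested interrupts may be taken.
         */
        unsigned int irq = arm_gic_get_active();
        struct isr_table_entry *e = &_sw_isr_table[irq];

        __asm__ volatile("msr daifclr, #2"); /* unmask IRQs, allow nesting */
        e->isr(e->arg);
        __asm__ volatile("msr daifset, #2"); /* mask IRQs again */

        arm_gic_eoi(irq); /* signal end-of-interrupt */
}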


@@ -0,0 +1,24 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _MACRO_H_
#define _MACRO_H_
#ifdef _ASMLANGUAGE
.macro switch_el, xreg, el3_label, el2_label, el1_label
mrs \xreg, CurrentEL
cmp \xreg, 0xc
beq \el3_label
cmp \xreg, 0x8
beq \el2_label
cmp \xreg, 0x4
beq \el1_label
.endm
#endif /* _ASMLANGUAGE */
#endif /* _MACRO_H_ */
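The 0xc/0x8/0x4 immediates compared against CurrentEL are just the exception level placed in bits [3:2]. The same decode in C (a sketch; current_el() is an illustrative wrapper):

#include <stdint.h>

static inline unsigned int current_el(void)
{
        uint64_t el;

        __asm__ volatile("mrs %0, CurrentEL" : "=r" (el));
        return (unsigned int)((el >> 2) & 0x3); /* 3 = EL3, 2 = EL2, 1 = EL1 */
}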


@@ -1,127 +0,0 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef _MACRO_PRIV_INC_
#define _MACRO_PRIV_INC_
#ifdef _ASMLANGUAGE
/**
* @brief Save volatile registers
*
* Save the volatile registers and x30 on the process stack. This is
* needed if the thread is switched out because they can be clobbered by the
* ISR and/or context switch.
*
* @return N/A
*/
.macro z_arm64_enter_exc
/*
* Two things can happen:
*
* - No context-switch: in this case x19-x28 are callee-saved register
* so we can be sure they are not going to be clobbered by ISR.
* - Context-switch: the callee-saved registers are saved by
* z_arm64_pendsv() in the kernel structure.
*/
stp x0, x1, [sp, #-16]!
stp x2, x3, [sp, #-16]!
stp x4, x5, [sp, #-16]!
stp x6, x7, [sp, #-16]!
stp x8, x9, [sp, #-16]!
stp x10, x11, [sp, #-16]!
stp x12, x13, [sp, #-16]!
stp x14, x15, [sp, #-16]!
stp x16, x17, [sp, #-16]!
stp x18, x30, [sp, #-16]!
/*
* Store SPSR_ELn and ELR_ELn. This is needed to support nested
* exception handlers
*/
switch_el x3, 3f, 2f, 1f
3:
mrs x0, spsr_el3
mrs x1, elr_el3
b 0f
2:
mrs x0, spsr_el2
mrs x1, elr_el2
b 0f
1:
mrs x0, spsr_el1
mrs x1, elr_el1
0:
stp x0, x1, [sp, #-16]!
.endm
/**
* @brief Restore volatile registers and x30
*
* This is the common exit point for z_arm64_pendsv() and _isr_wrapper(). We
* restore the registers saved on the process stack including X30. The return
* address used by eret (in ELR_ELn) is either restored by z_arm64_pendsv() if
* a context-switch happened or not touched at all by the ISR if there was no
* context-switch.
*
* @return N/A
*/
.macro z_arm64_exit_exc
/*
* Restore SPSR_ELn and ELR_ELn. This is needed to support nested
* exception handlers
*/
ldp x0, x1, [sp], #16
switch_el x3, 3f, 2f, 1f
3:
msr spsr_el3, x0
msr elr_el3, x1
b 0f
2:
msr spsr_el2, x0
msr elr_el2, x1
b 0f
1:
msr spsr_el1, x0
msr elr_el1, x1
0:
/*
* In x30 we can have:
*
* - The address of irq_unlock() in swap.c when swapping in a thread
* that was cooperatively swapped out (used by ret in
* z_arm64_call_svc())
* - A previous generic value if the thread that we are swapping in was
* swapped out preemptively by the ISR.
*/
ldp x18, x30, [sp], #16
ldp x16, x17, [sp], #16
ldp x14, x15, [sp], #16
ldp x12, x13, [sp], #16
ldp x10, x11, [sp], #16
ldp x8, x9, [sp], #16
ldp x6, x7, [sp], #16
ldp x4, x5, [sp], #16
ldp x2, x3, [sp], #16
ldp x0, x1, [sp], #16
/*
* In general in the ELR_ELn register we can find:
*
* - The address of ret in z_arm64_call_svc() in case of arch_swap()
* (see swap.c)
* - The address of the next instruction at the time of the IRQ when the
* thread was switched out.
* - The address of z_thread_entry() for new threads (see thread.c).
*/
eret
.endm
#endif /* _ASMLANGUAGE */
#endif /* _MACRO_PRIV_INC_ */
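These macros exist because SPSR_ELn and ELR_ELn are single banked registers, not part of any hardware-pushed frame: a nested exception silently overwrites them. A sketch of the invariant they enforce, written for EL1 only (illustrative helper names):

#include <stdint.h>

struct exc_state {
        uint64_t spsr;
        uint64_t elr;
};

static inline void exc_state_save(struct exc_state *s)
{
        __asm__ volatile("mrs %0, spsr_el1" : "=r" (s->spsr));
        __asm__ volatile("mrs %0, elr_el1"  : "=r" (s->elr));
        /* only now is it safe to unmask IRQs and allow nesting */
}

static inline void exc_state_restore(const struct exc_state *s)
{
        /* IRQs must be masked again before reaching this point */
        __asm__ volatile("msr spsr_el1, %0" : : "r" (s->spsr));
        __asm__ volatile("msr elr_el1, %0"  : : "r" (s->elr));
}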


@@ -28,7 +28,6 @@ extern FUNC_NORETURN void z_cstart(void);
void z_arm64_prep_c(void)
{
z_bss_zero();
z_arm64_interrupt_init();
z_cstart();
CODE_UNREACHABLE;


@@ -15,18 +15,7 @@
#include <linker/sections.h>
#include <arch/cpu.h>
#include "vector_table.h"
#include "macro_priv.inc"
_ASM_FILE_PROLOGUE
/*
* Platform may do platform specific init at EL3.
* The function implementation must preserve callee saved registers as per
* AArch64 ABI PCS.
*/
WTEXT(z_arch_el3_plat_init)
SECTION_FUNC(TEXT,z_arch_el3_plat_init)
ret
#include "macro.h"
/**
*
@@ -53,63 +42,18 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__reset)
GTEXT(__start)
SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
/* Setup vector table */
adr x19, _vector_table
#ifdef CONFIG_SWITCH_TO_EL1
switch_el x1, 3f, 2f, 1f
3:
/* Initialize VBAR */
msr vbar_el3, x19
isb
/* Initialize sctlr_el3 to reset value */
mov_imm x1, SCTLR_EL3_RES1
mrs x0, sctlr_el3
orr x0, x0, x1
msr sctlr_el3, x0
isb
/* SError, IRQ and FIQ routing enablement in EL3 */
mrs x0, scr_el3
orr x0, x0, #(SCR_EL3_IRQ | SCR_EL3_FIQ | SCR_EL3_EA)
msr scr_el3, x0
/*
* Disable access traps to EL3 for CPACR, Trace, FP, ASIMD,
* SVE from lower EL.
*/
mov_imm x0, CPTR_EL3_RES_VAL
mov_imm x1, (CPTR_EL3_TTA | CPTR_EL3_TFP | CPTR_EL3_TCPAC)
bic x0, x0, x1
orr x0, x0, #(CPTR_EL3_EZ)
msr cptr_el3, x0
isb
/* Platform specific configurations needed in EL3 */
bl z_arch_el3_plat_init
#ifdef CONFIG_SWITCH_TO_EL1
/*
* Zephyr entry happened in EL3. Do EL3 specific init before
* dropping to lower EL.
*/
/* Enable access control configuration from lower EL */
mrs x0, actlr_el3
orr x0, x0, #(ACTLR_EL3_L2ACTLR | ACTLR_EL3_L2ECTLR \
| ACTLR_EL3_L2CTLR)
orr x0, x0, #(ACTLR_EL3_CPUACTLR | ACTLR_EL3_CPUECTLR)
msr actlr_el3, x0
/* Initialize sctlr_el1 to reset value */
mov_imm x0, SCTLR_EL1_RES1
msr sctlr_el1, x0
/* Disable MMU and async exceptions routing to EL1 */
msr sctlr_el1, xzr
/* Disable EA/IRQ/FIQ routing to EL3 and set EL1 to AArch64 */
mov x0, xzr
orr x0, x0, #(SCR_EL3_RW)
msr scr_el3, x0
/* On eret return to secure EL1h with DAIF masked */
/* On eret return to EL1 with DAIF masked */
mov x0, xzr
orr x0, x0, #(DAIF_MASK)
orr x0, x0, #(SPSR_EL3_TO_EL1)
@@ -119,21 +63,39 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
adr x0, 1f
msr elr_el3, x0
eret
2:
/* Boot from EL2 not supported */
bl .
1:
#endif
/* Setup vector table */
adr x0, _vector_table
switch_el x1, 3f, 2f, 1f
3:
/* Initialize VBAR */
msr vbar_el3, x0
/* SError, IRQ and FIQ routing enablement in EL3 */
mrs x0, scr_el3
orr x0, x0, #(SCR_EL3_IRQ | SCR_EL3_FIQ | SCR_EL3_EA)
msr scr_el3, x0
/* Disable access trapping in EL3 for NEON/FP */
msr cptr_el3, xzr
/*
* Enable the instruction cache, stack pointer and data access
* alignment checks.
* alignment checks and disable speculative loads.
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el3
orr x0, x0, x1
msr sctlr_el3, x0
isb
b 0f
2:
/* Initialize VBAR */
msr vbar_el2, x19
msr vbar_el2, x0
/* SError, IRQ and FIQ routing enablement in EL2 */
mrs x0, hcr_el2
@@ -145,30 +107,29 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
/*
* Enable the instruction cache, stack pointer and data access
* alignment checks.
* alignment checks and disable speculative loads.
*/
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el2
orr x0, x0, x1
msr sctlr_el2, x0
b 0f
1:
/* Initialize VBAR */
msr vbar_el1, x19
msr vbar_el1, x0
/* Disable access trapping in EL1 for NEON/FP */
mov x0, #(CPACR_EL1_FPEN_NOTRAP)
msr cpacr_el1, x0
mov x1, #(CPACR_EL1_FPEN_NOTRAP)
msr cpacr_el1, x1
/*
* Enable the instruction cache and el1 stack alignment check.
* Enable the instruction cache, stack pointer and data access
* alignment checks and disable speculative loads.
*/
mov x1, #(SCTLR_I_BIT | SCTLR_SA_BIT)
mov x1, #(SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT)
mrs x0, sctlr_el1
orr x0, x0, x1
msr sctlr_el1, x0
0:
isb
@@ -177,7 +138,7 @@ SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
/* Switch to SP_ELn and setup the stack */
msr spsel, #1
ldr x0, =(z_interrupt_stacks)
ldr x0, =(_interrupt_stack)
add x0, x0, #(CONFIG_ISR_STACK_SIZE)
mov sp, x0
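Each EL branch above performs the same read-modify-write on its SCTLR_ELn. For EL1, in C-like form (a sketch; the masks mirror the SCTLR_I_BIT/SCTLR_A_BIT/SCTLR_SA_BIT constants used in the assembly):

#include <stdint.h>

#define SCTLR_A_BIT  (1UL << 1)  /* data access alignment checking */
#define SCTLR_SA_BIT (1UL << 3)  /* SP alignment checking */
#define SCTLR_I_BIT  (1UL << 12) /* instruction cache enable */

static inline void sctlr_el1_enable_checks(void)
{
        uint64_t sctlr;

        __asm__ volatile("mrs %0, sctlr_el1" : "=r" (sctlr));
        sctlr |= SCTLR_I_BIT | SCTLR_A_BIT | SCTLR_SA_BIT;
        __asm__ volatile("msr sctlr_el1, %0" : : "r" (sctlr));
        __asm__ volatile("isb");
}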


@@ -17,22 +17,28 @@
#include <offsets_short.h>
#include <arch/cpu.h>
#include <syscall.h>
#include "macro_priv.inc"
_ASM_FILE_PROLOGUE
#include "macro.h"
GDATA(_kernel)
GDATA(_k_neg_eagain)
/**
* @brief Routine to handle context switches
* @brief PendSV exception handler, handling context switches
*
* This function is directly called either by _isr_wrapper() in case of
* preemption, or z_arm64_svc() in case of cooperative switching.
* The PendSV exception is the only execution context in the system that can
* perform context switching. When an execution context finds out it has to
* switch contexts, it pends the PendSV exception.
*
* When PendSV is pended, the decision that a context switch must happen has
* already been taken. In other words, when z_arm64_pendsv() runs, we *know* we
* have to swap *something*.
*
* For Cortex-A, PendSV exception is not supported by the architecture and this
* function is directly called either by _isr_wrapper() in case of preemption,
* or z_arm64_svc() in case of cooperative switching.
*/
GTEXT(z_arm64_context_switch)
SECTION_FUNC(TEXT, z_arm64_context_switch)
GTEXT(z_arm64_pendsv)
SECTION_FUNC(TEXT, z_arm64_pendsv)
#ifdef CONFIG_TRACING
stp xzr, x30, [sp, #-16]!
bl sys_trace_thread_switched_in
@@ -46,7 +52,7 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
ldr x0, =_thread_offset_to_callee_saved
add x0, x0, x2
/* Store rest of process context including x30 */
/* Store rest of process context including x30, SPSR_ELn and ELR_ELn */
stp x19, x20, [x0], #16
stp x21, x22, [x0], #16
stp x23, x24, [x0], #16
@@ -54,6 +60,21 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
stp x27, x28, [x0], #16
stp x29, x30, [x0], #16
switch_el x3, 3f, 2f, 1f
3:
mrs x4, spsr_el3
mrs x5, elr_el3
b 0f
2:
mrs x4, spsr_el2
mrs x5, elr_el2
b 0f
1:
mrs x4, spsr_el1
mrs x5, elr_el1
0:
stp x4, x5, [x0], #16
/* Save the current SP */
mov x6, sp
str x6, [x0]
@@ -62,11 +83,15 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
ldr x2, [x1, #_kernel_offset_to_ready_q_cache]
str x2, [x1, #_kernel_offset_to_current]
/* load _kernel into x1 and current k_thread into x2 */
ldr x1, =_kernel
ldr x2, [x1, #_kernel_offset_to_current]
/* addr of callee-saved regs in thread in x0 */
ldr x0, =_thread_offset_to_callee_saved
add x0, x0, x2
/* Restore x19-x29 plus x30 */
/* Restore x19-x29 plus x30, SPSR_ELn and ELR_ELn */
ldp x19, x20, [x0], #16
ldp x21, x22, [x0], #16
ldp x23, x24, [x0], #16
@@ -74,6 +99,21 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
ldp x27, x28, [x0], #16
ldp x29, x30, [x0], #16
ldp x4, x5, [x0], #16
switch_el x3, 3f, 2f, 1f
3:
msr spsr_el3, x4
msr elr_el3, x5
b 0f
2:
msr spsr_el2, x4
msr elr_el2, x5
b 0f
1:
msr spsr_el1, x4
msr elr_el1, x5
0:
ldr x6, [x0]
mov sp, x6
@@ -86,14 +126,14 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
/* We restored x30 from the process stack. There are three possible
* cases:
*
* - We return to z_arm64_svc() when swapping in a thread that was
* - We return to z_arm64_svc() when swapping in a thread that was
* swapped out by z_arm64_svc() before jumping into
* z_arm64_exit_exc()
* - We return to _isr_wrapper() when swapping in a thread that was
* - We return to _isr_wrapper() when swapping in a thread that was
* swapped out by _isr_wrapper() before jumping into
* z_arm64_exit_exc()
* - We return (jump) into z_thread_entry_wrapper() for new threads
* (see thread.c)
* (see thread.c)
*/
ret
@@ -106,24 +146,6 @@ SECTION_FUNC(TEXT, z_arm64_context_switch)
GTEXT(z_thread_entry_wrapper)
SECTION_FUNC(TEXT, z_thread_entry_wrapper)
/*
* Restore SPSR_ELn and ELR_ELn saved in the temporary stack by
* arch_new_thread()
*/
ldp x0, x1, [sp], #16
switch_el x3, 3f, 2f, 1f
3:
msr spsr_el3, x0
msr elr_el3, x1
b 0f
2:
msr spsr_el2, x0
msr elr_el2, x1
b 0f
1:
msr spsr_el1, x0
msr elr_el1, x1
0:
/*
* z_thread_entry_wrapper is called for every new thread upon the return
* of arch_swap() or ISR. Its address, as well as its input function
@@ -149,10 +171,22 @@ SECTION_FUNC(TEXT, z_thread_entry_wrapper)
*
* @return N/A
*/
GTEXT(z_arm64_svc)
SECTION_FUNC(TEXT, z_arm64_svc)
z_arm64_enter_exc
/*
* Save the volatile registers and x30 on the process stack. This is
* needed if the thread is switched out.
*/
stp x0, x1, [sp, #-16]!
stp x2, x3, [sp, #-16]!
stp x4, x5, [sp, #-16]!
stp x6, x7, [sp, #-16]!
stp x8, x9, [sp, #-16]!
stp x10, x11, [sp, #-16]!
stp x12, x13, [sp, #-16]!
stp x14, x15, [sp, #-16]!
stp x16, x17, [sp, #-16]!
stp x18, x30, [sp, #-16]!
switch_el x3, 3f, 2f, 1f
3:
@@ -197,24 +231,62 @@ offload:
b inv
context_switch:
/* Check if we need to context switch */
ldr x1, =_kernel
ldr x2, [x1, #_kernel_offset_to_current]
ldr x3, [x1, #_kernel_offset_to_ready_q_cache]
cmp x2, x3
beq exit
/* Switch thread */
bl z_arm64_context_switch
bl z_arm64_pendsv
exit:
z_arm64_exit_exc
b z_arm64_exit_exc
inv:
mov x1, sp
mov x0, #0 /* K_ERR_CPU_EXCEPTION */
b z_arm64_fatal_error
/**
* @brief Restore volatile registers and x30
*
* This is the common exit point for z_arm64_pendsv() and _isr_wrapper(). We
* restore the registers saved on the process stack including X30. The return
* address used by eret (in ELR_ELn) is either restored by z_arm64_pendsv() if
* a context-switch happened or not touched at all by the ISR if there was no
* context-switch.
*
* @return N/A
*/
GTEXT(z_arm64_exit_exc)
SECTION_FUNC(TEXT, z_arm64_exit_exc)
/*
* In x30 we can have:
*
* - The address of irq_unlock() in swap.c when swapping in a thread
* that was cooperatively swapped out (used by ret in
* z_arm64_call_svc())
* - A previous generic value if the thread that we are swapping in was
* swapped out preemptively by the ISR.
*/
ldp x18, x30, [sp], #16
ldp x16, x17, [sp], #16
ldp x14, x15, [sp], #16
ldp x12, x13, [sp], #16
ldp x10, x11, [sp], #16
ldp x8, x9, [sp], #16
ldp x6, x7, [sp], #16
ldp x4, x5, [sp], #16
ldp x2, x3, [sp], #16
ldp x0, x1, [sp], #16
/*
* In general in the ELR_ELn register we can find:
*
* - The address of ret in z_arm64_call_svc() in case of arch_swap()
* (see swap.c)
* - The address of the next instruction at the time of the IRQ when the
* thread was switched out.
* - The address of z_thread_entry() for new threads (see thread.c).
*/
eret
GTEXT(z_arm64_call_svc)
SECTION_FUNC(TEXT, z_arm64_call_svc)
svc #_SVC_CALL_CONTEXT_SWITCH
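Read together, the store and load sequences above imply this per-thread context layout (a sketch only; the authoritative struct and its offsets live in the arch headers and offsets.c):

#include <stdint.h>

struct callee_saved_sketch {
        uint64_t x19, x20, x21, x22, x23, x24;
        uint64_t x25, x26, x27, x28, x29;
        uint64_t x30;  /* return address consumed by ret after the switch */
        uint64_t spsr; /* SPSR_ELn, re-installed before eret */
        uint64_t elr;  /* ELR_ELn, re-installed before eret */
        uint64_t sp;   /* saved process stack pointer */
};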


@@ -21,9 +21,8 @@
* @brief Initialize a new thread from its stack space
*
* The control structure (thread) is put at the lower address of the stack. An
* initial context, to be "restored" by z_arm64_context_switch(), is put at the
* other end of the stack, and thus reusable by the stack when not needed
* anymore.
* initial context, to be "restored" by z_arm64_pendsv(), is put at the other
* end of the stack, and thus reusable by the stack when not needed anymore.
*
* <options> is currently unused.
*
@@ -41,25 +40,6 @@
void z_thread_entry_wrapper(k_thread_entry_t k, void *p1, void *p2, void *p3);
struct init_stack_frame {
/* top of the stack / most recently pushed */
/* SPSR_ELn and ELR_ELn */
u64_t spsr;
u64_t elr;
/*
* Used by z_thread_entry_wrapper, which pulls these off the stack and
* into argument registers before calling z_thread_entry()
*/
u64_t entry_point;
u64_t arg1;
u64_t arg2;
u64_t arg3;
/* least recently pushed */
};
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
@@ -67,39 +47,36 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
char *stackEnd;
struct init_stack_frame *pInitCtx;
struct __esf *pInitCtx;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize);
z_new_thread_init(thread, pStackMem, stackSize, priority, options);
pInitCtx = (struct init_stack_frame *)(Z_STACK_PTR_ALIGN(stackEnd -
sizeof(struct init_stack_frame)));
pInitCtx = (struct __esf *)(STACK_ROUND_DOWN(stackEnd -
sizeof(struct __basic_sf)));
pInitCtx->entry_point = (u64_t)pEntry;
pInitCtx->arg1 = (u64_t)parameter1;
pInitCtx->arg2 = (u64_t)parameter2;
pInitCtx->arg3 = (u64_t)parameter3;
/*
* - ELR_ELn: to be used by eret in z_thread_entry_wrapper() to return
* to z_thread_entry() with pEntry in x0(entry_point) and the parameters
* already in place in x1(arg1), x2(arg2), x3(arg3).
* - SPSR_ELn: to enable IRQs (we are masking debug exceptions, SError
* interrupts and FIQs).
*/
pInitCtx->elr = (u64_t)z_thread_entry;
pInitCtx->spsr = SPSR_MODE_EL1H | DAIF_FIQ;
pInitCtx->basic.regs[0] = (u64_t)pEntry;
pInitCtx->basic.regs[1] = (u64_t)parameter1;
pInitCtx->basic.regs[2] = (u64_t)parameter2;
pInitCtx->basic.regs[3] = (u64_t)parameter3;
/*
* We are saving:
*
* - SP: to pop out pEntry and parameters when going through
* z_thread_entry_wrapper().
* - x30: to be used by ret in z_arm64_context_switch() when the new
* task is first scheduled.
* - x30: to be used by ret in z_arm64_pendsv() when the new task is
* first scheduled.
* - ELR_EL1: to be used by eret in z_thread_entry_wrapper() to return
* to z_thread_entry() with pEntry in x0 and the parameters already
* in place in x1, x2, x3.
* - SPSR_EL1: to enable IRQs (we are masking debug exceptions, SError
* interrupts and FIQs).
*/
thread->callee_saved.sp = (u64_t)pInitCtx;
thread->callee_saved.x30 = (u64_t)z_thread_entry_wrapper;
thread->callee_saved.elr = (u64_t)z_thread_entry;
thread->callee_saved.spsr = SPSR_MODE_EL1H | DAIF_FIQ;
}
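The hand-off for a brand-new thread can be summarized in C. A sketch under the layout built above, where regs[0..3] of the initial frame hold pEntry and its three parameters (z_thread_entry is the real kernel entry point; the other names are illustrative):

#include <stdint.h>

typedef void (*k_thread_entry_t)(void *p1, void *p2, void *p3);

extern void z_thread_entry(k_thread_entry_t entry,
                           void *p1, void *p2, void *p3);

/* The four slots the initial stack frame carries for the wrapper */
struct init_frame_sketch {
        uint64_t entry_point; /* pEntry */
        uint64_t arg1, arg2, arg3;
};

/* What z_thread_entry_wrapper() does after the first switch-in:
 * pop the frame and forward its contents as arguments (x0-x3).
 */
static void thread_entry_wrapper_sketch(struct init_frame_sketch *f)
{
        z_thread_entry((k_thread_entry_t)f->entry_point,
                       (void *)f->arg1, (void *)f->arg2, (void *)f->arg3);
}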


@@ -11,11 +11,7 @@
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include "vector_table.h"
#include "macro_priv.inc"
_ASM_FILE_PROLOGUE
/*
* Four types of exceptions:
@@ -90,7 +86,16 @@ SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)
/* Current EL with SPx / SError */
.align 7
z_arm64_enter_exc
stp x0, x1, [sp, #-16]!
stp x2, x3, [sp, #-16]!
stp x4, x5, [sp, #-16]!
stp x6, x7, [sp, #-16]!
stp x8, x9, [sp, #-16]!
stp x10, x11, [sp, #-16]!
stp x12, x13, [sp, #-16]!
stp x14, x15, [sp, #-16]!
stp x16, x17, [sp, #-16]!
stp x18, x30, [sp, #-16]!
mov x1, sp
mov x0, #0 /* K_ERR_CPU_EXCEPTION */


@@ -29,14 +29,14 @@
GEN_OFFSET_SYM(_thread_arch_t, basepri);
GEN_OFFSET_SYM(_thread_arch_t, swap_return_value);
#if defined(CONFIG_USERSPACE) || defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_USERSPACE) || defined(CONFIG_FP_SHARING)
GEN_OFFSET_SYM(_thread_arch_t, mode);
#if defined(CONFIG_USERSPACE)
GEN_OFFSET_SYM(_thread_arch_t, priv_stack_start);
#endif
#endif
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
GEN_OFFSET_SYM(_thread_arch_t, preempt_float);
#endif
@@ -49,7 +49,7 @@ GEN_OFFSET_SYM(_basic_sf_t, lr);
GEN_OFFSET_SYM(_basic_sf_t, pc);
GEN_OFFSET_SYM(_basic_sf_t, xpsr);
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
GEN_OFFSET_SYM(_esf_t, s);
GEN_OFFSET_SYM(_esf_t, fpscr);
#endif
@@ -65,6 +65,10 @@ GEN_OFFSET_SYM(_callee_saved_t, v6);
GEN_OFFSET_SYM(_callee_saved_t, v7);
GEN_OFFSET_SYM(_callee_saved_t, v8);
GEN_OFFSET_SYM(_callee_saved_t, psp);
#if defined(CONFIG_CPU_CORTEX_R)
GEN_OFFSET_SYM(_callee_saved_t, spsr);
GEN_OFFSET_SYM(_callee_saved_t, lr);
#endif
/* size of the entire preempt registers structure */
@@ -82,7 +86,7 @@ GEN_ABSOLUTE_SYM(___thread_stack_info_t_SIZEOF,
* point registers.
*/
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
GEN_ABSOLUTE_SYM(_K_THREAD_NO_FLOAT_SIZEOF, sizeof(struct k_thread) -
sizeof(struct _preempt_float));
#else


@@ -1,71 +0,0 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Exception/interrupt context helpers for Cortex-A and Cortex-R CPUs
*
* Exception/interrupt context helpers.
*/
#ifndef ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_EXC_H_
#define ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_EXC_H_
#include <arch/cpu.h>
#ifdef _ASMLANGUAGE
/* nothing */
#else
#include <irq_offload.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifdef CONFIG_IRQ_OFFLOAD
extern volatile irq_offload_routine_t offload_routine;
#endif
/* We are in an ISR context if the interrupt nesting count is non-zero */
static ALWAYS_INLINE bool arch_is_in_isr(void)
{
return (_kernel.cpus[0].nested != 0);
}
/**
* @brief Setup system exceptions
*
* Enable fault exceptions.
*
* @return N/A
*/
static ALWAYS_INLINE void z_arm_exc_setup(void)
{
}
/**
* @brief Clear Fault exceptions
*
* Clear out exceptions for Mem, Bus, Usage and Hard Faults
*
* @return N/A
*/
static ALWAYS_INLINE void z_arm_clear_faults(void)
{
}
extern void z_arm_cortex_r_svc(void);
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_EXC_H_ */


@@ -1,47 +0,0 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Stack helpers for Cortex-A and Cortex-R CPUs
*
* Stack helper functions.
*/
#ifndef ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_STACK_H_
#define ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_STACK_H_
#ifdef __cplusplus
extern "C" {
#endif
#ifdef _ASMLANGUAGE
/* nothing */
#else
extern void z_arm_init_stacks(void);
/**
*
* @brief Setup interrupt stack
*
* On Cortex-R, the interrupt stack is set up by reset.S
*
* @return N/A
*/
static ALWAYS_INLINE void z_arm_interrupt_stack_setup(void)
{
}
#endif /* _ASMLANGUAGE */
#ifdef __cplusplus
}
#endif
#endif /* ZEPHYR_ARCH_ARM_INCLUDE_AARCH32_CORTEX_A_R_STACK_H_ */


@@ -26,8 +26,7 @@
extern "C" {
#endif
extern K_THREAD_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
CONFIG_ISR_STACK_SIZE);
extern K_THREAD_STACK_DEFINE(_interrupt_stack, CONFIG_ISR_STACK_SIZE);
/**
*
@@ -40,13 +39,13 @@ extern K_THREAD_STACK_ARRAY_DEFINE(z_interrupt_stacks, CONFIG_MP_NUM_CPUS,
*/
static ALWAYS_INLINE void z_arm_interrupt_stack_setup(void)
{
u32_t msp = (u32_t)(Z_THREAD_STACK_BUFFER(z_interrupt_stacks[0])) +
K_THREAD_STACK_SIZEOF(z_interrupt_stacks[0]);
u32_t msp = (u32_t)(Z_THREAD_STACK_BUFFER(_interrupt_stack)) +
K_THREAD_STACK_SIZEOF(_interrupt_stack);
__set_MSP(msp);
#if defined(CONFIG_BUILTIN_STACK_GUARD)
#if defined(CONFIG_CPU_CORTEX_M_HAS_SPLIM)
__set_MSPLIM((u32_t)z_interrupt_stacks[0]);
__set_MSPLIM((u32_t)_interrupt_stack);
#else
#error "Built-in MSP limit checks not supported by HW"
#endif

Some files were not shown because too many files have changed in this diff Show More