Compare commits


15 Commits

Author SHA1 Message Date
Johan Hedberg
8746c3fd01 Bluetooth: Mesh: Fix matching for "All Proxies" group address
The bt_mesh_fixed_group_match() function is intended to match the
various well-known group addresses; however, it was never updated when
Proxy support was added.

Fixes #19015

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-12-19 13:09:38 -05:00
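
For context, a minimal C sketch of the matching described above. The
group addresses are the well-known values from the Mesh specification;
the helper names are hypothetical, not Zephyr's actual API:

#include <stdbool.h>
#include <stdint.h>

/* Well-known fixed group addresses from the Mesh specification. */
#define ADDR_ALL_PROXIES 0xfffc
#define ADDR_FRIENDS     0xfffd
#define ADDR_RELAYS      0xfffe
#define ADDR_ALL_NODES   0xffff

/* Hypothetical feature queries standing in for the real helpers. */
static bool proxy_enabled(void)  { return true; }
static bool friend_enabled(void) { return false; }
static bool relay_enabled(void)  { return false; }

static bool fixed_group_match(uint16_t addr)
{
    switch (addr) {
    case ADDR_ALL_NODES:
        return true;
    case ADDR_ALL_PROXIES:
        return proxy_enabled(); /* the case the fix adds */
    case ADDR_FRIENDS:
        return friend_enabled();
    case ADDR_RELAYS:
        return relay_enabled();
    default:
        return false;
    }
}
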
Mariusz Skamra
a4aaf8bbf4 Bluetooth: ATT: Fix responding to unknown ATT command
The host shall ignore an unknown ATT PDU that has the Command Flag set.
This fixes a regression introduced in 3b271b8455.

Fixes: GATT/SR/UNS/BI-02-C
Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-12-19 12:37:04 -05:00
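
A minimal sketch of the restored rule, with the opcode bit and error
code taken from the Core specification and a hypothetical
error-response helper:

#include <stdint.h>
#include <stdio.h>

#define ATT_CMD_FLAG         0x40 /* bit 6 of the ATT opcode */
#define ATT_ERR_REQ_NOT_SUPP 0x06 /* "Request Not Supported" */

static void send_error_rsp(uint8_t opcode, uint8_t err)
{
    printf("error rsp: req 0x%02x err 0x%02x\n", opcode, err);
}

static void handle_unknown_pdu(uint8_t opcode)
{
    if (opcode & ATT_CMD_FLAG) {
        return; /* unknown ATT Commands shall be ignored */
    }
    /* Unknown ATT Requests get an Error Response. */
    send_error_rsp(opcode, ATT_ERR_REQ_NOT_SUPP);
}
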
Luiz Augusto von Dentz
f8ba0cccb7 Bluetooth: GATT: Fix not storing SC changes
CCC storage is no longer declared separately, so checking whether
ccc->cfg matches sc_ccc_cfg no longer works. Use the cfg_changed
callback instead and match against sc_ccc_cfg_changed.

Fixes #19267

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-12-19 11:01:57 -05:00
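
A sketch of the new matching strategy under a simplified struct shape:
with shared CCC storage the cfg pointer is no longer unique, so the
Service Changed CCC is identified by its callback instead:

#include <stdbool.h>
#include <stdint.h>

struct gatt_ccc {
    void *cfg;                         /* shared storage now */
    void (*cfg_changed)(uint16_t val); /* unique per CCC */
};

static void sc_ccc_cfg_changed(uint16_t val) { (void)val; }

static bool is_sc_ccc(const struct gatt_ccc *ccc)
{
    /* "ccc->cfg == sc_ccc_cfg" no longer works; match the callback. */
    return ccc->cfg_changed == sc_ccc_cfg_changed;
}
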
Vinayak Kariappa Chettimada
1b31b7eb79 Bluetooth: controller: split: Fix to reject invalid enable command
Fix to reject invalid advertise and scan enable commands.

Fixes BT HCI.TS.5.1.1 tests:
HCI/DDI/BI-06-C
HCI/DDI/BI-07-C

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-12-19 11:01:37 -05:00
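
A sketch of the parameter check; 0x12 is the standard "Invalid HCI
Command Parameters" error code, while the handler shape is an
assumption:

#include <stdint.h>

#define HCI_ERR_INVALID_PARAM 0x12

static uint8_t le_set_enable(uint8_t enable)
{
    if (enable > 0x01) {
        /* Only 0x00 (disable) and 0x01 (enable) are valid. */
        return HCI_ERR_INVALID_PARAM;
    }
    /* ... apply the new advertise/scan state ... */
    return 0x00; /* success */
}
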
Mariusz Skamra
0bab5dcec1 Bluetooth: tester: Adapt to BTP Get Attribute Value API change
Adapt the gatt_get_attribute_value_cmd handler to recent API changes.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-12-19 11:01:20 -05:00
Johan Hedberg
feef3a64ca Bluetooth: Mesh: Fix Clear Procedure start timestamp initialization
The start timestamp was supposed to signify the starting point of the
clear procedure. The code was incorrectly initializing it to the *end*
point of the procedure.

Fixes #19263

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-12-19 11:01:05 -05:00
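
A sketch of the off-by-a-duration bug, with a hypothetical clock
helper standing in for the kernel uptime API:

#include <stdint.h>

#define CLEAR_TIMEOUT_MS 5000

static int64_t uptime_ms(void) { return 0; } /* stand-in clock */

static int64_t clear_start;

static void clear_procedure_begin(void)
{
    /* Correct: record the *start* point of the procedure. The bug
     * initialized this to uptime_ms() + CLEAR_TIMEOUT_MS, i.e. the
     * end point, so elapsed-time checks were off by the timeout. */
    clear_start = uptime_ms();
}
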
Yannis Damigos
3bfd36efd1 i2c_ll_stm32_v2: Send STOP manually after NACK
In master transmitter mode AutoEndMode is always disabled, so we need
to send STOP manually if a NACK is received.

Fixes #19059

Signed-off-by: Yannis Damigos <giannis.damigos@gmail.com>
2019-12-19 11:00:44 -05:00
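
A sketch against the STM32 I2C v2 register layout (bit positions per
the reference manual; register block abbreviated):

#include <stdint.h>

#define I2C_ISR_NACKF  (1u << 4)  /* NACK received flag */
#define I2C_ICR_NACKCF (1u << 4)  /* NACK flag clear */
#define I2C_CR2_STOP   (1u << 14) /* generate STOP */

struct i2c_v2 {
    volatile uint32_t cr2;
    volatile uint32_t isr;
    volatile uint32_t icr;
};

static void on_event(struct i2c_v2 *i2c)
{
    if (i2c->isr & I2C_ISR_NACKF) {
        /* AutoEndMode is disabled in master transmitter mode, so
         * the STOP condition must be requested by software. */
        i2c->cr2 |= I2C_CR2_STOP;
        i2c->icr = I2C_ICR_NACKCF;
    }
}
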
Joakim Andersson
2f507cd9ff Bluetooth: ATT: Fix disconnected ATT not releasing buffers
Fix a bug in ATT reset handling where queued notification buffers were
not released when the connection was terminated.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 11:00:16 -05:00
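
A sketch of the cleanup, with stand-ins for Zephyr's queue and net_buf
APIs:

#include <stddef.h>

struct net_buf;

static struct net_buf *tx_queue_get(void) { return NULL; } /* stand-in */
static void buf_unref(struct net_buf *buf) { (void)buf; }  /* stand-in */

static void att_reset(void)
{
    struct net_buf *buf;

    /* On disconnect, drain and free every queued notification
     * buffer so the buffer pool is not leaked. */
    while ((buf = tx_queue_get()) != NULL) {
        buf_unref(buf);
    }
}
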
Joakim Andersson
5ccd5c753d Bluetooth: host: Fix whitelist for non-central bluetooth applications
Fix a compilation issue when using the whitelist in Bluetooth
applications that do not have CONFIG_BT_CENTRAL defined. These
functions are useful even for the broadcaster and observer roles.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:59:54 -05:00
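
A hedged sketch of the likely shape of the fix; the Kconfig symbols
are real, but whether the change widened an #if guard exactly like
this is an assumption:

/* Before: compiled only under CONFIG_BT_CENTRAL. After: also
 * available to broadcaster and observer builds. Name illustrative. */
#if defined(CONFIG_BT_CENTRAL) || defined(CONFIG_BT_BROADCASTER) || \
    defined(CONFIG_BT_OBSERVER)
int bt_le_whitelist_add_sketch(const void *addr);
#endif
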
Joakim Andersson
d515cf298a Bluetooth: GATT: Fix bt_gatt_attr_next failing for static handles
Fix a bug in bt_gatt_attr_next when given a statically allocated
attribute. Its handle is then 0, and the function would always return
the attribute with handle 1.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:59:37 -05:00
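
A sketch of the corrected lookup; the table layout and names are
illustrative:

#include <stddef.h>
#include <stdint.h>

#define STATIC_TBL_LEN 4

struct attr {
    uint16_t handle; /* 0 for statically allocated attributes */
};

static struct attr static_tbl[STATIC_TBL_LEN];

static struct attr *attr_next(struct attr *a)
{
    if (a->handle == 0) {
        /* Static attribute: step to the next array element instead
         * of looking up handle 0 + 1 == 1 in the dynamic database. */
        size_t idx = (size_t)(a - static_tbl);
        return (idx + 1 < STATIC_TBL_LEN) ? &static_tbl[idx + 1] : NULL;
    }
    /* Dynamic attribute: look up a->handle + 1 (omitted here). */
    return NULL;
}
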
Lingao Meng
1ebe758115 Bluetooth: Mesh: Fixed Provision Random buffer size
Fixed a minor issue: the buffer was missing a byte for the opcode.

Fixes: #19767

Signed-off-by: Lingao Meng <mengabc1086@gmail.com>
2019-12-19 10:58:59 -05:00
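
A sketch of the sizing fix; per the Mesh provisioning PDU format, a
one-byte opcode precedes the 16-byte Random value:

#include <stdint.h>

#define PROV_OPCODE_LEN 1
#define PROV_RANDOM_LEN 16

/* Sizing the buffer at PROV_RANDOM_LEN alone drops the opcode byte. */
static uint8_t prov_random_buf[PROV_OPCODE_LEN + PROV_RANDOM_LEN];
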
Joakim Andersson
7304c52ecc Bluetooth: GATT: Fix gatt buffer leak for write commands and notify
Fix a GATT buffer leak: when bt_att_send returns an error, the
allocated buffer is never freed. Discovered in a case where the link
was disconnected during the function call, so when GATT checked, the
link was still connected, but when ATT checked, it was already
disconnected.

Fixes: #19889

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-12-19 10:58:40 -05:00
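
A sketch of the error path, assuming the caller still owns the buffer
on failure; the helper names are stand-ins:

struct net_buf;

static int att_send(struct net_buf *buf) { (void)buf; return -1; }
static void buf_unref(struct net_buf *buf) { (void)buf; }

static int gatt_send(struct net_buf *buf)
{
    int err = att_send(buf);

    if (err) {
        /* E.g. the link dropped between GATT's connection check and
         * ATT's: free the buffer instead of leaking it. */
        buf_unref(buf);
    }
    return err;
}
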
Erwan Gouriou
d4a1812e9e dt: stm32f0: Fix clock bus for SPI1 and few timers
There is no APB2 bus on the stm32f0 series. What appears as APB2 in
the CMSIS files is actually the second group of APB (a.k.a. APB1_2).
Fix the nodes that use this wrong reference across the series.

Fixes #20310

Signed-off-by: Erwan Gouriou <erwan.gouriou@linaro.org>
2019-12-19 10:58:20 -05:00
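
A hedged C sketch of what the clock consumer boils down to; the struct
shape follows Zephyr's STM32 clock control, while the bus constants
and enable bit here are illustrative:

#include <stdint.h>

/* Illustrative bus identifiers; on stm32f0 the "APB2" peripherals
 * actually sit on the second APB group, APB1_2. */
#define CLOCK_BUS_APB1   0
#define CLOCK_BUS_APB1_2 1

struct stm32_pclken {
    uint32_t bus;
    uint32_t enr;
};

/* SPI1 must reference APB1_2; there is no APB2 bus on this series. */
static const struct stm32_pclken spi1_pclken = {
    .bus = CLOCK_BUS_APB1_2,
    .enr = (1u << 12), /* illustrative enable bit */
};
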
Pavlo Hamov
cc004036f6 drivers: i2c: stm32_Slave: Fix addr flag handling
The F1 workaround was used in the main ADDR handler code. Add a
compile-time switch depending on the SoC family, so the workaround no
longer affects the F2/F4 families.

Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
2019-12-19 10:57:43 -05:00
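
A sketch of the compile-time switch; CONFIG_SOC_SERIES_STM32F1X is the
real Kconfig symbol for the F1 series, while the handler shape is
illustrative:

static void f1_addr_workaround(void) { /* F1-only ADDR quirk */ }

static void addr_isr(void)
{
#if defined(CONFIG_SOC_SERIES_STM32F1X)
    /* Compiled in only for F1, so F2/F4 are unaffected. */
    f1_addr_workaround();
#endif
    /* common ADDR flag handling ... */
}
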
Anas Nashif
ca3eb0eb31 doc: link-roles: convert bytes to string
Get the correct branch name as a string instead of bytes.

Fixes #19165

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-17 04:37:48 +08:00
8256 changed files with 174679 additions and 337145 deletions


@@ -1,3 +1,4 @@
--mailback
--emacs
--summary-file
--show-types


@@ -9,7 +9,6 @@ charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 80
# C
[*.{c,h}]


@@ -1,16 +0,0 @@
license:
main: apache-2.0
report_missing: true
category: Permissive
copyright:
check: true
exclude:
extensions:
- yml
- yaml
- html
- rst
- conf
- cfg
langs:
- HTML


@@ -1,65 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Documentation GitHub Workflow
on: [pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Update PATH for west
run: |
echo "::add-path::$HOME/.local/bin"
- name: checkout
uses: actions/checkout@v2
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen
- name: cache-pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: install-pip
run: |
pip3 install setuptools
pip3 install 'breathe>=4.9.1' 'docutils>=0.14' \
'sphinx>=1.7.5' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |
west init -l . || true
- name: build-docs
run: |
source zephyr-env.sh
make htmldocs
tar cvf htmldocs.tar --directory=./doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: htmldocs.tar
path: htmldocs.tar
- name: check-warns
run: |
if [ -s doc/_build/doc.warnings ]; then
docwarn=$(cat doc/_build/doc.warnings)
docwarn="${docwarn//'%'/'%25'}"
docwarn="${docwarn//$'\n'/'%0A'}"
docwarn="${docwarn//$'\r'/'%0D'}"
# We treat doc warnings as errors
echo "::error file=doc.warnings::$docwarn"
exit 1
fi


@@ -1,112 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Doc build for Release or Daily
# Either a daily based on schedule/cron or only on tag push
on:
schedule:
- cron: '50 22 * * *'
push:
tags:
- '*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Update PATH for west
run: |
echo "::add-path::$HOME/.local/bin"
- name: Determine tag
id: tag
run: |
# We expect to get here either due to a schedule event in which
# case we are doing a daily build of the docs, or because a new
# tag was pushed, in which case we are building docs for a release
if [ ${GITHUB_EVENT_NAME} == "schedule" ]; then
echo ::set-output name=TYPE::daily;
echo ::set-output name=RELEASE::latest;
elif [ ${GITHUB_EVENT_NAME} == "push" ]; then
# If push due to a tag GITHUB_REF will look like refs/tags/TAG-FOO
# chop of 'refs/tags' so RELEASE=TAG-FOO
echo ::set-output name=TYPE::release;
echo ::set-output name=RELEASE::${GITHUB_REF/refs\/tags\//};
else
exit 1
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: checkout
uses: actions/checkout@v2
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen
- name: cache-pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: install-pip
run: |
pip3 install setuptools
pip3 install 'breathe>=4.9.1' 'docutils>=0.14' \
'sphinx>=1.7.5' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |
west init -l . || true
- name: build-docs
env:
DOC_TAG: ${{ steps.tag.outputs.TYPE }}
run: |
source zephyr-env.sh
make DOC_TAG=${DOC_TAG} htmldocs
- name: check-warns
run: |
if [ -s doc/_build/doc.warnings ]; then
docwarn=$(cat doc/_build/doc.warnings)
docwarn="${docwarn//'%'/'%25'}"
docwarn="${docwarn//$'\n'/'%0A'}"
docwarn="${docwarn//$'\r'/'%0D'}"
# We treat doc warnings as errors
echo "::error file=doc.warnings::$docwarn"
exit 1
fi
- name: Upload to AWS S3
env:
RELEASE: ${{ steps.tag.outputs.RELEASE }}
run: |
echo "DOC_RELEASE=[$RELEASE]"
if [ "$RELEASE" == "latest" ]; then
export
echo "publish latest docs"
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/latest --delete
echo "success sync of latest docs"
else
DOC_RELEASE=${RELEASE}.0
echo "publish release docs: ${DOC_RELEASE}"
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/${DOC_RELEASE}
echo "success sync of rel docs"
fi
if [ -d doc/_build/doxygen/html ]; then
echo "publish doxygen"
aws s3 sync --quiet doc/_build/doxygen/html s3://docs.zephyrproject.org/apidoc/${RELEASE} --delete
echo "success publish of doxygen"
fi


@@ -1,32 +0,0 @@
name: Scancode
on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@v1
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@v3
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
with:
name: scancode
path: ./artifacts
- name: Verify
run: |
if [ -s ./artifacts/report.txt ]; then
report=$(cat ./artifacts/report.txt)
report="${report//'%'/'%25'}"
report="${report//$'\n'/'%0A'}"
report="${report//$'\r'/'%0D'}"
echo "::error file=./artifacts/report.txt::$report"
exit 1
fi


@@ -1,65 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Zephyr West Command Tests
on:
push:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
pull_request:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
steps:
- name: checkout
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip3 install pytest west pyelftools
- name: run pytest-win
if: runner.os == 'Windows'
run: |
cmd /C "set PYTHONPATH=.\scripts\west_commands && pytest ./scripts/west_commands/tests/"
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |
PYTHONPATH=./scripts/west_commands pytest ./scripts/west_commands/tests/

.gitignore

@@ -8,7 +8,6 @@
*.swo
*~
build*/
!doc/guides/build
cscope.*
.dir


@@ -48,21 +48,3 @@
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model.__unnamed__.*
^[- \t]*\^
#
# Bluetooth mesh pub struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]mesh[/\\]access.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^


@@ -1,6 +1,6 @@
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.


@@ -67,4 +67,4 @@
#
# stray duplicate definition warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.


@@ -28,5 +28,3 @@ Jun Li <jun.r.li@intel.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>


@@ -4,7 +4,7 @@ compiler: gcc
env:
global:
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.11.2
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.10.3
- ZEPHYR_TOOLCHAIN_VARIANT=zephyr
- MATRIX_BUILDS="5"
matrix:
@@ -20,7 +20,7 @@ build:
- ${SHIPPABLE_BUILD_DIR}/ccache
pre_ci_boot:
image_name: zephyrprojectrtos/ci
image_tag: v0.11.4
image_tag: v0.9
pull: true
options: "-e HOME=/home/buildslave --privileged=true --tty --net=bridge --user buildslave"


@@ -35,10 +35,12 @@ endif()
# For Zephyr more specifically this breaks (at least)
# -fmacro-prefix-map=${ZEPHYR_BASE}=
project(Zephyr-Kernel VERSION ${PROJECT_VERSION})
enable_language(C CXX ASM)
# Verify that the toolchain can compile a dummy file, if it is not we
# won't be able to test for compatibility with certain C flags.
zephyr_check_compiler_flag(C "" toolchain_is_ok)
check_c_compiler_flag("" toolchain_is_ok)
assert(toolchain_is_ok "The toolchain is unable to build a dummy C file. See CMakeError.log.")
# In some cases the "final" things are not used at all and "_prebuilt"
@@ -50,6 +52,7 @@ set(ZEPHYR_FINAL_EXECUTABLE zephyr_final)
# Set some phony targets to collect dependencies
set(OFFSETS_H_TARGET offsets_h)
set(SYSCALL_MACROS_H_TARGET syscall_macros_h_target)
set(SYSCALL_LIST_H_TARGET syscall_list_h_target)
set(DRIVER_VALIDATION_H_TARGET driver_validation_h_target)
set(KOBJ_TYPES_H_TARGET kobj_types_h_target)
@@ -72,7 +75,10 @@ add_library(zephyr_interface INTERFACE)
zephyr_library_named(zephyr)
zephyr_include_directories(
kernel/include
${ARCH_DIR}/${ARCH}/include
include
include/drivers
${PROJECT_BINARY_DIR}/include/generated
${USERINCLUDE}
${STDINCLUDE}
@@ -142,13 +148,6 @@ else()
assert(0 "Unreachable code. Expected optimization level to have been chosen. See Kconfig.zephyr")
endif()
if(NOT CONFIG_ARCH_IS_SET)
message(WARNING "\
None of the CONFIG_<arch> (e.g. CONFIG_X86) symbols are set. \
Select one of them from the SOC_SERIES_* symbol or, lacking that, from the \
SOC_* symbol.")
endif()
# Apply the final optimization flag(s)
zephyr_compile_options(${OPTIMIZATION_FLAG})
@@ -334,14 +333,12 @@ zephyr_cc_option_ifdef(CONFIG_STACK_USAGE -fstack-usage)
# in binaries, makes failure logs more deterministic and most
# importantly makes builds more deterministic
# If several match then the last one wins. This matters for instances
# like tests/ and samples/: they're inside all of them! Then let's
# strip as little as possible.
# If both match then the last one wins. This matters for tests/ and
# samples/ inside *both* CMAKE_SOURCE_DIR and ZEPHYR_BASE: for them
# let's strip the shortest prefix.
zephyr_cc_option(-fmacro-prefix-map=${CMAKE_SOURCE_DIR}=CMAKE_SOURCE_DIR)
zephyr_cc_option(-fmacro-prefix-map=${ZEPHYR_BASE}=ZEPHYR_BASE)
if(WEST_TOPDIR)
zephyr_cc_option(-fmacro-prefix-map=${WEST_TOPDIR}=WEST_TOPDIR)
endif()
# TODO: -fmacro-prefix-map=modules/etc. "build/zephyr_modules.txt" might help.
# TODO: Archiver arguments
# ar_option(D)
@@ -408,11 +405,11 @@ set_ifndef( DTS_BOARD_FIXUP_FILE ${BOARD_DIR}
set_ifndef( DTS_SOC_FIXUP_FILE ${SOC_DIR}/${ARCH}/${SOC_PATH}/dts_fixup.h)
set( DTS_APP_FIXUP_FILE ${APPLICATION_SOURCE_DIR}/dts_fixup.h)
set_ifndef(DTS_CAT_OF_FIXUP_FILES ${ZEPHYR_BINARY_DIR}/include/generated/devicetree_fixups.h)
set_ifndef(DTS_CAT_OF_FIXUP_FILES ${ZEPHYR_BINARY_DIR}/include/generated/generated_dts_board_fixups.h)
# Concatenate the fixups into a single header file for easy
# #include'ing
file(WRITE ${DTS_CAT_OF_FIXUP_FILES} "/* May only be included by devicetree.h */\n\n")
file(WRITE ${DTS_CAT_OF_FIXUP_FILES} "/* May only be included by generated_dts_board.h */\n\n")
foreach(fixup_file
${DTS_BOARD_FIXUP_FILE}
${DTS_SOC_FIXUP_FILE}
@@ -463,7 +460,6 @@ add_subdirectory(drivers)
if(EXISTS ${CMAKE_BINARY_DIR}/zephyr_modules.txt)
file(STRINGS ${CMAKE_BINARY_DIR}/zephyr_modules.txt ZEPHYR_MODULES_TXT
ENCODING UTF-8)
set(module_names)
foreach(module ${ZEPHYR_MODULES_TXT})
# Match "<name>":"<path>" for each line of file, each corresponding to
@@ -471,27 +467,28 @@ if(EXISTS ${CMAKE_BINARY_DIR}/zephyr_modules.txt)
# lazy regexes (it supports greedy only).
string(REGEX REPLACE "\"(.*)\":\".*\"" "\\1" module_name ${module})
string(REGEX REPLACE "\".*\":\"(.*)\"" "\\1" module_path ${module})
list(APPEND module_names ${module_name})
string(TOUPPER ${module_name} MODULE_NAME_UPPER)
set(ZEPHYR_${MODULE_NAME_UPPER}_MODULE_DIR ${module_path})
endforeach()
foreach(module_name ${module_names})
message("Including module: ${module_name} in path: ${module_path}")
# Note the second, binary_dir parameter requires the added
# subdirectory to have its own, local cmake target(s). If not then
# this binary_dir is created but stays empty. Object files land in
# the main binary dir instead.
# https://cmake.org/pipermail/cmake/2019-June/069547.html
string(TOUPPER ${module_name} MODULE_NAME_UPPER)
set(ZEPHYR_CURRENT_MODULE_DIR ${ZEPHYR_${MODULE_NAME_UPPER}_MODULE_DIR})
add_subdirectory(${ZEPHYR_CURRENT_MODULE_DIR} ${CMAKE_BINARY_DIR}/modules/${module_name})
add_subdirectory(${module_path} ${CMAKE_BINARY_DIR}/modules/${module_name})
endforeach()
# Done processing modules, clear ZEPHYR_CURRENT_MODULE_DIR.
set(ZEPHYR_CURRENT_MODULE_DIR)
endif()
set(syscall_macros_h ${ZEPHYR_BINARY_DIR}/include/generated/syscall_macros.h)
add_custom_target(${SYSCALL_MACROS_H_TARGET} DEPENDS ${syscall_macros_h})
add_custom_command( OUTPUT ${syscall_macros_h}
COMMAND
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/gen_syscall_header.py
> ${syscall_macros_h}
DEPENDS ${ZEPHYR_BASE}/scripts/gen_syscall_header.py
)
set(syscall_list_h ${CMAKE_CURRENT_BINARY_DIR}/include/generated/syscall_list.h)
set(syscalls_json ${CMAKE_CURRENT_BINARY_DIR}/misc/generated/syscalls.json)
@@ -567,21 +564,12 @@ else()
set_property(DIRECTORY APPEND PROPERTY CMAKE_CONFIGURE_DEPENDS ${syscalls_subdirs_txt})
endif()
# syscall declarations are searched for in the SYSCALL_INCLUDE_DIRS
# SYSCALL_INCLUDE_DIRECTORY will include the directories that needs to be
# searched for syscall declarations if CONFIG_APPLICATION_DEFINED_SYSCALL is set
if(CONFIG_APPLICATION_DEFINED_SYSCALL)
list(APPEND SYSCALL_INCLUDE_DIRS ${APPLICATION_SOURCE_DIR})
set(SYSCALL_INCLUDE_DIRECTORY --include ${APPLICATION_SOURCE_DIR})
endif()
if(CONFIG_ZTEST)
list(APPEND SYSCALL_INCLUDE_DIRS ${ZEPHYR_BASE}/subsys/testsuite/ztest/include)
endif()
foreach(d ${SYSCALL_INCLUDE_DIRS})
list(APPEND parse_syscalls_include_args
--include ${d}
)
endforeach()
add_custom_command(
OUTPUT
${syscalls_json}
@@ -589,20 +577,12 @@ add_custom_command(
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/parse_syscalls.py
--include ${ZEPHYR_BASE}/include # Read files from this dir
${parse_syscalls_include_args} # Read files from these dirs also
${SYSCALL_INCLUDE_DIRECTORY}
--json-file ${syscalls_json} # Write this file
DEPENDS ${syscalls_subdirs_trigger} ${PARSE_SYSCALLS_HEADER_DEPENDS}
)
add_custom_target(${SYSCALL_LIST_H_TARGET} DEPENDS ${syscall_list_h})
# 64-bit systems do not require special handling of 64-bit system call
# parameters or return values, indicate this to the system call boilerplate
# generation script.
if(CONFIG_64BIT)
set(SYSCALL_LONG_REGISTERS_ARG --long-registers)
endif()
add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}
# Also, some files are written to include/generated/syscalls/
COMMAND
@@ -612,7 +592,6 @@ add_custom_command(OUTPUT include/generated/syscall_dispatch.c ${syscall_list_h}
--base-output include/generated/syscalls # Write to this dir
--syscall-dispatch include/generated/syscall_dispatch.c # Write this file
--syscall-list ${syscall_list_h}
${SYSCALL_LONG_REGISTERS_ARG}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
DEPENDS ${syscalls_json}
)
@@ -633,14 +612,6 @@ add_custom_target(${DRIVER_VALIDATION_H_TARGET} DEPENDS ${DRV_VALIDATION})
include($ENV{ZEPHYR_BASE}/cmake/kobj.cmake)
gen_kobj(KOBJ_INCLUDE_PATH)
# Add a pseudo-target that is up-to-date when all generated headers
# are up-to-date.
add_custom_target(zephyr_generated_headers)
add_dependencies(zephyr_generated_headers
offsets_h
)
# Generate offsets.c.obj from offsets.c
# Generate offsets.h from offsets.c.obj
@@ -650,13 +621,10 @@ set(OFFSETS_C_PATH ${ARCH_DIR}/${ARCH}/core/offsets/offsets.c)
set(OFFSETS_H_PATH ${PROJECT_BINARY_DIR}/include/generated/offsets.h)
add_library( ${OFFSETS_LIB} OBJECT ${OFFSETS_C_PATH})
target_include_directories(${OFFSETS_LIB} PRIVATE
kernel/include
${ARCH_DIR}/${ARCH}/include
)
target_link_libraries(${OFFSETS_LIB} zephyr_interface)
add_dependencies( ${OFFSETS_LIB}
${SYSCALL_LIST_H_TARGET}
${SYSCALL_MACROS_H_TARGET}
${DRIVER_VALIDATION_H_TARGET}
${KOBJ_TYPES_H_TARGET}
)
@@ -683,7 +651,7 @@ get_property(ZEPHYR_LIBS_PROPERTY GLOBAL PROPERTY ZEPHYR_LIBS)
foreach(zephyr_lib ${ZEPHYR_LIBS_PROPERTY})
# TODO: Could this become an INTERFACE property of zephyr_interface?
add_dependencies(${zephyr_lib} zephyr_generated_headers)
add_dependencies(${zephyr_lib} ${OFFSETS_H_TARGET})
endforeach()
get_property(OUTPUT_FORMAT GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT)
@@ -698,7 +666,7 @@ configure_linker_script(
${PRIV_STACK_DEP}
${APP_SMEM_ALIGNED_DEP}
${CODE_RELOCATION_DEP}
zephyr_generated_headers
${OFFSETS_H_TARGET}
)
add_custom_target(
@@ -812,10 +780,6 @@ if(CONFIG_ARM AND CONFIG_USERSPACE)
)
add_custom_target(priv_stacks DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/${PRIV_STACKS})
if(${GPERF} STREQUAL GPERF-NOTFOUND)
message(FATAL_ERROR "Unable to find gperf")
endif()
# Use gperf to generate C code (PRIV_STACKS_OUTPUT_SRC_PRE) which implements a
# perfect hashtable based on PRIV_STACKS
add_custom_command(
@@ -1080,8 +1044,8 @@ if(CONFIG_USERSPACE)
if(CONFIG_NEWLIB_LIBC)
set(NEWLIB_PART -l libc.a z_libc_partition)
endif()
if(CONFIG_NEWLIB_LIBC_NANO)
set(NEWLIB_PART -l libc_nano.a z_libc_partition)
if(CONFIG_MBEDTLS)
set(MBEDTLS_PART -l lib..__modules__crypto__mbedtls.a k_mbedtls_partition)
endif()
add_custom_command(
@@ -1090,14 +1054,12 @@ if(CONFIG_USERSPACE)
${ZEPHYR_BASE}/scripts/gen_app_partitions.py
-d ${OBJ_FILE_DIR}
-o ${APP_SMEM_UNALIGNED_LD}
${NEWLIB_PART}
$<TARGET_PROPERTY:zephyr_property_target,COMPILE_OPTIONS>
${NEWLIB_PART} ${MBEDTLS_PART}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
kernel
${ZEPHYR_LIBS_PROPERTY}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}/
COMMAND_EXPAND_LISTS
COMMENT "Generating app_smem_unaligned linker section"
)
@@ -1107,7 +1069,7 @@ if(CONFIG_USERSPACE)
${CODE_RELOCATION_DEP}
${APP_SMEM_UNALIGNED_DEP}
${APP_SMEM_UNALIGNED_LD}
zephyr_generated_headers
${OFFSETS_H_TARGET}
)
add_custom_target(
@@ -1141,15 +1103,13 @@ if(CONFIG_USERSPACE)
${ZEPHYR_BASE}/scripts/gen_app_partitions.py
-e $<TARGET_FILE:app_smem_unaligned_prebuilt>
-o ${APP_SMEM_ALIGNED_LD}
${NEWLIB_PART}
$<TARGET_PROPERTY:zephyr_property_target,COMPILE_OPTIONS>
${NEWLIB_PART} ${MBEDTLS_PART}
$<$<BOOL:${CMAKE_VERBOSE_MAKEFILE}>:--verbose>
DEPENDS
kernel
${ZEPHYR_LIBS_PROPERTY}
app_smem_unaligned_prebuilt
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}/
COMMAND_EXPAND_LISTS
COMMENT "Generating app_smem_aligned linker section"
)
endif()
@@ -1161,7 +1121,7 @@ if(CONFIG_USERSPACE AND CONFIG_ARM)
${CODE_RELOCATION_DEP}
${APP_SMEM_ALIGNED_DEP}
${APP_SMEM_ALIGNED_LD}
zephyr_generated_headers
${OFFSETS_H_TARGET}
)
add_custom_target(
@@ -1219,7 +1179,7 @@ else()
${PRIV_STACK_DEP}
${CODE_RELOCATION_DEP}
${ZEPHYR_PREBUILT_EXECUTABLE}
zephyr_generated_headers
${OFFSETS_H_TARGET}
)
set(LINKER_PASS_FINAL_SCRIPT_TARGET linker_pass_final_script_target)
@@ -1501,8 +1461,6 @@ if(HEX_FILES_TO_MERGE)
add_custom_target(mergehex ALL DEPENDS ${MERGED_HEX_NAME})
list(APPEND FLASH_DEPS mergehex)
message(VERBOSE "Merging hex files: ${HEX_FILES_TO_MERGE}")
endif()
if(EMU_PLATFORM)
@@ -1521,48 +1479,24 @@ add_subdirectory(cmake/flash)
add_subdirectory(cmake/usage)
add_subdirectory(cmake/reports)
add_subdirectory_ifdef(
CONFIG_MAKEFILE_EXPORTS
cmake/makefile_exports
)
if(NOT CONFIG_TEST)
if(CONFIG_ASSERT AND (NOT CONFIG_FORCE_NO_ASSERT))
message(WARNING "__ASSERT() statements are globally ENABLED")
message(WARNING "
------------------------------------------------------------
--- WARNING: __ASSERT() statements are globally ENABLED ---
--- The kernel will run more slowly and use more memory ---
------------------------------------------------------------"
)
endif()
endif()
if(CONFIG_BOARD_DEPRECATED_RELEASE)
if(CONFIG_BOARD_DEPRECATED)
message(WARNING "
WARNING: The board '${BOARD}' is deprecated and will be
removed in version ${CONFIG_BOARD_DEPRECATED_RELEASE}"
removed in version ${CONFIG_BOARD_DEPRECATED}"
)
endif()
if(CONFIG_SOC_DEPRECATED_RELEASE)
message(WARNING "
WARNING: The SoC '${SOC_NAME}' is deprecated and will be
removed in version ${CONFIG_SOC_DEPRECATED_RELEASE}"
)
endif()
# In CMake projects, 'CMAKE_BUILD_TYPE' usually determines the
# optimization flag, but in Zephyr it is determined through
# Kconfig. Here we give a warning when there is a mismatch between the
# two in case the user is not aware of this.
set(build_types None Debug Release RelWithDebInfo MinSizeRel)
if((CMAKE_BUILD_TYPE IN_LIST build_types) AND (NOT NO_BUILD_TYPE_WARNING))
string(TOUPPER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_uppercase)
if(NOT (${OPTIMIZATION_FLAG} IN_LIST CMAKE_C_FLAGS_${CMAKE_BUILD_TYPE_uppercase}))
message(WARNING "
The CMake build type was set to '${CMAKE_BUILD_TYPE}', but the optimization flag was set to '${OPTIMIZATION_FLAG}'.
This may be intentional and the warning can be turned off by setting the CMake variable 'NO_BUILD_TYPE_WARNING'"
)
endif()
endif()
# @Intent: Set compiler specific flags for standard C includes
# Done at the very end, so any other system includes which may
# be added by Zephyr components were first in list.


@@ -5,53 +5,50 @@
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# *.rst @dbkinder
# if only .rst files are being modified, only dbkinder is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
# * @galak @nashif
/.known-issues/ @nashif
/.github/ @nashif
/.github/workflows/ @galak @nashif
/.known-issues/ @inakypg @nashif
/arch/arc/ @vonhust @ruuddw
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/aarch32/cortex_m/cmse/ @ioannisg
/arch/arm/core/aarch64/ @carlocaione
/arch/arm/include/aarch32/cortex_m/cmse.h @ioannisg
/arch/arm/include/aarch64/ @carlocaione
/arch/arm/core/aarch32/cortex_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/arm/core/cortex_m/cmse/ @ioannisg
/arch/arm/include/cortex_m/cmse.h @ioannisg
/arch/arm/core/cortex_r/ @MaureenHelm @galak @ioannisg @bbolen
/arch/common/ @andrewboie @ioannisg @andyross
/arch/x86_64/ @andyross
/soc/arc/snps_*/ @vonhust @ruuddw
/soc/nios2/ @nashif @wentongwu
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/nxp*/ @MaureenHelm
/soc/arm/nordic_nrf/ @ioannisg
/soc/arm/qemu_cortex_a53/ @carlocaione
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/stm32f4/ @rsalveti @idlethread
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/st_stm32/stm32mp1/ @arnop2
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/xtensa/intel_s1000/ @sathishkuttan @dcpleung
/arch/x86/ @andrewboie
/soc/x86_64/ @andyross
/arch/x86/ @andrewboie @gnuless
/arch/nios2/ @andrewboie @wentongwu
/arch/posix/ @aescolar
/arch/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/posix/ @aescolar
/soc/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/riscv/openisa*/ @MaureenHelm
/soc/x86/ @andrewboie
/arch/x86/core/ @andrewboie @gnuless
/arch/x86/core/ia32/crt0.S @andrewboie @gnuless
/arch/x86/core/pcie.c @gnuless
/arch/x86/core/multiboot.c @gnuless
/soc/x86/ @andrewboie @gnuless
/arch/xtensa/ @andrewboie @dcpleung @andyross
/soc/xtensa/ @andrewboie @dcpleung @andyross
/boards/arc/ @vonhust @ruuddw
@@ -71,7 +68,6 @@
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/frdm*/ @MaureenHelm
/boards/arm/frdm*/doc/ @MaureenHelm @MeganHansen
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @MaureenHelm
/boards/arm/hexiwear*/doc/ @MaureenHelm @MeganHansen
/boards/arm/lpcxpresso*/ @MaureenHelm
@@ -83,21 +79,12 @@
/boards/arm/nrf*/ @carlescufi @lemrey @ioannisg
/boards/arm/nucleo*/ @erwango
/boards/arm/nucleo_f401re/ @rsalveti @idlethread
/boards/arm/qemu_cortex_a53/ @carlocaione
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango
/boards/common/ @mbolivar
/boards/nios2/ @wentongwu
/boards/nios2/altera_max10/ @wentongwu
/boards/arm/stm32_min_dev/ @cbsiddharth
@@ -106,9 +93,9 @@
/boards/riscv/rv32m1_vega/ @MaureenHelm
/boards/shields/ @erwango
/boards/x86/ @andrewboie @nashif
/boards/x86/up_squared/ @gnuless
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/intel_s1000_crb/ @sathishkuttan @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
# All cmake related files
/cmake/ @SebastianBoe @nashif
/CMakeLists.txt @SebastianBoe @nashif
@@ -118,11 +105,9 @@
/doc/scripts/ @carlescufi
/doc/guides/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/kernel/other/resource_mgmt.rst @pabigot
/doc/reference/networking/can* @alexanderwachter
/drivers/debug/ @nashif
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*mcux* @MaureenHelm
/drivers/*/*qmsi* @nashif
/drivers/*/*stm32* @erwango
/drivers/*/*native_posix* @aescolar
/drivers/adc/ @anangl
@@ -130,55 +115,31 @@
/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/drivers/can/ @alexanderwachter
/drivers/can/*mcp2515* @karstenkoenig
/drivers/clock_control/*nrf* @nordic-krch
/drivers/counter/ @nordic-krch
/drivers/counter/counter_cmos.c @andrewboie
/drivers/counter/counter_cmos.c @gnuless
/drivers/display/ @vanwinkeljan
/drivers/display/display_framebuf.c @andrewboie
/drivers/dma/*dw* @tbursztyka
/drivers/display/display_framebuf.c @gnuless
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*rv32m1* @MaureenHelm
/drivers/espi/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ps2/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/kscan/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ethernet/ @jukkar @tbursztyka @pfalcon
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/flash/ @nashif @nvlsianpu
/drivers/flash/*nrf* @nvlsianpu
/drivers/flash/*spi_nor* @pabigot
/drivers/flash/ @nashif
/drivers/flash/*native_posix* @vanwinkeljan @aescolar
/drivers/flash/*stm32* @superna9999
/drivers/gpio/ @mnkp @pabigot
/drivers/gpio/*ht16k33* @henrikbrixandersen
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*sx1509b* @pabigot
/drivers/gpio/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/gpio/*stm32* @rsalveti @idlethread
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2c/i2c_shell.c @nashif
/drivers/ieee802154/ @jukkar @tbursztyka
/drivers/ieee802154/ieee802154_rf2xx* @jukkar @tbursztyka @nandojve
/drivers/interrupt_controller/ @andrewboie
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/*/intc_vexriscv_litex.c @mateusz-holenko @kgugala @pgielda
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic @ioannisg
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic @ioannisg
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/ipm/ipm_stm32_ipcc.c @arnop2
/drivers/*/vexriscv_litex.c @mateusz-holenko @kgugala @pgielda
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar
/drivers/lora/ @Mani-Sadhasivam
/drivers/modem/ @mike-scott
/drivers/pcie/ @andrewboie
/drivers/pci/ @gnuless
/drivers/pcie/ @gnuless
/drivers/pinmux/stm32/ @rsalveti @idlethread
/drivers/pinmux/*hsdk* @iriszzw
/drivers/pwm/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
@@ -188,7 +149,7 @@
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/st*/ @avisconti
/drivers/serial/uart_altera_jtag_hal.c @wentongwu
/drivers/serial/*ns16550* @andrewboie
/drivers/serial/*ns16550* @gnuless
/drivers/serial/Kconfig.litex @mateusz-holenko @kgugala @pgielda
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
@@ -197,62 +158,42 @@
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/net/ @jukkar @tbursztyka
/drivers/ptp_clock/ @jukkar
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/spi/ @tbursztyka
/drivers/spi/spi_ll_stm32.* @superna9999
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/timer/apic_timer.c @andrewboie
/drivers/timer/arm_arch_timer.c @carlocaione
/drivers/timer/apic_timer.c @gnuless
/drivers/timer/cortex_m_systick.c @ioannisg
/drivers/timer/altera_avalon_timer_hal.c @wentongwu
/drivers/timer/riscv_machine_timer.c @nategraff-sifive @kgugala @pgielda
/drivers/timer/litex_timer.c @mateusz-holenko @kgugala @pgielda
/drivers/timer/xlnx_psttc_timer.c @wjliang
/drivers/timer/cc13x2_cc26x2_rtc_timer.c @vanti
/drivers/usb/ @jfischer-phytec-iot @finikorg
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/video/ @loicpoulain
/drivers/i2c/i2c_ll_stm32* @ldts @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/i2c/i2c_dw* @gnuless
/drivers/*/*xec* @franciscomunoz @albertofloyd @scottwcpg
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/wdt_handlers.c @andrewboie
/drivers/wifi/ @jukkar @tbursztyka @pfalcon
/drivers/wifi/eswifi/ @loicpoulain
/dts/arc/ @vonhust @ruuddw @iriszzw
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/samv71* @nandojve
/dts/arm/broadcom/ @sbranden
/dts/arm/qemu-virt/ @carlocaione
/dts/arm/st/ @erwango
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @ioannisg @carlescufi
/dts/arm/nxp/ @MaureenHelm
/dts/arm/microchip/ @franciscomunoz @albertofloyd @scottwcpg
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efm32_jg_pg* @chrta
/dts/arm/silabs/efm32jg12b* @chrta
/dts/arm/silabs/efm32pg12b* @chrta
/dts/riscv/microsemi-miv.dtsi @galak
/dts/riscv/rv32m1* @MaureenHelm
/dts/riscv/riscv32-fe310.dtsi @nategraff-sifive
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/armv8-a.dtsi @carlocaione
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/arm/armv7-r.dtsi @bbolen
/dts/arm/xilinx/ @bbolen
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter
/dts/bindings/iio/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/serial/ns16550.yaml @andrewboie
/dts/bindings/serial/ns16550.yaml @gnuless
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @MaureenHelm
/dts/bindings/*/openisa* @MaureenHelm
@@ -264,63 +205,58 @@
/dts/posix/ @aescolar @vanwinkeljan
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/st* @avisconti
/ext/hal/cmsis/ @MaureenHelm @galak @stephanosio
/ext/hal/cmsis/ @MaureenHelm @galak
/ext/lib/crypto/tinycrypt/ @ceolin
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/drivers/adc.h @anangl
/include/drivers/can.h @alexanderwachter
/include/drivers/counter.h @nordic-krch
/include/drivers/display.h @vanwinkeljan
/include/drivers/espi.h @albertofloyd @franciscomunoz @scottwcpg
/include/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/include/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @nvlsianpu
/include/drivers/led/ht16k33.h @henrikbrixandersen
/include/drivers/interrupt_controller/ @andrewboie
/include/drivers/interrupt_controller/gic.h @stephanosio
/include/drivers/pcie/ @andrewboie
/include/drivers/interrupt_controller/ @andrewboie @gnuless
/include/drivers/pcie/ @gnuless
/include/drivers/hwinfo.h @alexanderwachter
/include/drivers/led.h @Mani-Sadhasivam
/include/drivers/led_strip.h @mbolivar
/include/drivers/sensor.h @MaureenHelm
/include/drivers/spi.h @tbursztyka
/include/drivers/lora.h @Mani-Sadhasivam
/include/app_memory/ @andrewboie
/include/arch/arc/ @vonhust @ruuddw
/include/arch/arc/arch.h @andrewboie
/include/arch/arc/v2/irq.h @andrewboie
/include/arch/arm/aarch32/ @MaureenHelm @galak @ioannisg
/include/arch/arm/aarch32/cortex_r/ @stephanosio
/include/arch/arm/aarch64/ @carlocaione
/include/arch/arm/aarch32/irq.h @andrewboie
/include/arch/arm/ @MaureenHelm @galak @ioannisg
/include/arch/arm/irq.h @andrewboie
/include/arch/nios2/ @andrewboie
/include/arch/nios2/arch.h @andrewboie
/include/arch/posix/ @aescolar
/include/arch/riscv/ @nategraff-sifive @kgugala @pgielda
/include/arch/x86/ @andrewboie @wentongwu
/include/arch/common/ @andrewboie @andyross @nashif
/include/arch/x86/ia32/arch.h @andrewboie
/include/arch/x86/multiboot.h @gnuless
/include/arch/xtensa/ @andrewboie
/include/sys/atomic.h @andrewboie @andyross
/include/bluetooth/ @joerchan @jhedberg @Vudentz
/include/cache.h @andrewboie @andyross
/include/canbus/ @alexanderwachter
/include/tracing/ @wentongwu @nashif
/include/debug/ @nashif
/include/device.h @wentongwu @nashif
/include/display/ @vanwinkeljan
/include/display/framebuf.h @gnuless
/include/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/dt-bindings/dma/stm32_dma.h @cybertale
/include/dt-bindings/pcie/ @andrewboie
/include/dt-bindings/pcie/ @gnuless
/include/dt-bindings/usb/usb.h @galak @finikorg
/include/fs/ @nashif @wentongwu
/include/init.h @andrewboie @andyross
/include/irq.h @andrewboie @andyross
/include/irq_offload.h @andrewboie @andyross
/include/espi.h @albertofloyd @franciscomunoz @scottwcpg
/include/kernel.h @andrewboie @andyross
/include/kernel_version.h @andrewboie @andyross
/include/linker/app_smem*.ld @andrewboie
/include/linker/ @andrewboie @andyross
/include/logging/ @nordic-krch
/include/misc/ @andrewboie @andyross
/include/net/ @jukkar @tbursztyka @pfalcon
/include/net/buf.h @jukkar @jhedberg @tbursztyka @pfalcon
/include/posix/ @pfalcon
@@ -333,9 +269,11 @@
/include/sys/sys_io.h @andrewboie @andyross
/include/toolchain.h @andrewboie @andyross @nashif
/include/toolchain/ @andrewboie @andyross
/include/updatehub.h @chtavares592 @otavio
/include/zephyr.h @andrewboie @andyross
/kernel/ @andrewboie @andyross
/lib/gui/ @vanwinkeljan
/lib/libc/ @nashif
/lib/os/ @andrewboie @andyross
/lib/posix/ @pfalcon
/lib/cmsis_rtos_v2/ @nashif
@@ -347,28 +285,28 @@
/samples/ @nashif
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/*microbit* @jhe
/samples/bluetooth/ @joerchan @jhedberg @Vudentz
/lib/updatehub/ @chtavares592 @otavio
/samples/bluetooth/ @jhedberg @Vudentz @joerchan
/samples/bluetooth/ @sjanc @jhedberg @Vudentz
/samples/boards/intel_s1000_crb/ @sathishkuttan @dcpleung @nashif
/samples/display/ @vanwinkeljan
/samples/drivers/CAN/ @alexanderwachter
/samples/drivers/display/ @vanwinkeljan
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/gui/ @vanwinkeljan
/samples/net/ @jukkar @tbursztyka @pfalcon
/samples/net/dns_resolve/ @jukkar @tbursztyka @pfalcon
/samples/net/lwm2m_client/ @mike-scott
/samples/net/mqtt_publisher/ @jukkar @tbursztyka
/samples/net/sockets/coap_*/ @rveerama1
/samples/net/sockets/ @jukkar @tbursztyka @pfalcon
/samples/net/updatehub/ @chtavares592 @otavio
/samples/sensor/ @MaureenHelm
/samples/net/updatehub/ @chtavares592 @otavio
/samples/sensor/ @bogdan-davidoaia
/samples/shields/ @avisconti
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/shell/ @jakub-uC @nordic-krch
/samples/subsys/usb/ @jfischer-phytec-iot @finikorg
/samples/subsys/power/ @wentongwu @pabigot
/samples/userspace/ @andrewboie
/samples/subsys/power/ @wentongwu @pizi-nordic
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/kconfig/ @ulfalizer
@@ -376,31 +314,24 @@
/scripts/sanity_chk/expr_parser.py @nashif
/scripts/gen_app_partitions.py @andrewboie
/scripts/dts/ @ulfalizer @galak
/scripts/release/ @nashif
/arch/x86/gen_gdt.py @andrewboie
/arch/x86/gen_idt.py @andrewboie
/scripts/gen_kobject_list.py @andrewboie
/scripts/gen_priv_stacks.py @andrewboie @agross-oss @ioannisg
/scripts/gen_syscall_header.py @andrewboie
/scripts/gen_syscalls.py @andrewboie
/scripts/net/ @jukkar @pfl
/scripts/process_gperf.py @andrewboie
/scripts/gen_relocate_app.py @wentongwu
/scripts/tracing/ @wentongwu
/scripts/sanity_chk/ @nashif
/scripts/sanitycheck @nashif
/scripts/series-push-hook.sh @erwango
/scripts/west_commands/ @mbolivar
/scripts/west-commands.yml @mbolivar
/scripts/zephyr_module.py @tejlmand
/scripts/valgrind.supp @aescolar
/subsys/bluetooth/ @joerchan @jhedberg @Vudentz
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot
/subsys/bluetooth/mesh/ @jhedberg @trond-snekvik @joerchan @Vudentz
/subsys/canbus/ @alexanderwachter
/subsys/cpp/ @pabigot @vanwinkeljan
/subsys/debug/ @nashif
/subsys/tracing/ @nashif @wentongwu
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar
/subsys/disk/disk_access_spi_sdhc.c @JunYangNXP
/subsys/disk/disk_access_sdhc.h @JunYangNXP
/subsys/disk/disk_access_usdhc.c @JunYangNXP
@@ -411,14 +342,13 @@
/subsys/fs/littlefs_fs.c @pabigot
/subsys/fs/nvs/ @Laczen
/subsys/logging/ @nordic-krch
/subsys/logging/log_backend_net.c @nordic-krch @jukkar
/subsys/mgmt/ @carlescufi @nvlsianpu
/subsys/net/buf.c @jukkar @jhedberg @tbursztyka @pfalcon
/subsys/net/ip/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/dns/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/lwm2m/ @mike-scott
/subsys/net/lib/config/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/config/ @jukkar @tbursztyka
/subsys/net/lib/mqtt/ @jukkar @tbursztyka @rlubos
/subsys/net/lib/openthread/ @rlubos
/subsys/net/lib/coap/ @rveerama1
@@ -426,8 +356,7 @@
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @jukkar @tbursztyka
/subsys/net/l2/canbus/ @alexanderwachter @jukkar
/subsys/power/ @wentongwu @pabigot
/subsys/random/ @dleach02
/subsys/power/ @wentongwu @pizi-nordic
/subsys/settings/ @nvlsianpu
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/storage/ @nvlsianpu
@@ -439,13 +368,11 @@
/tests/boards/native_posix/ @aescolar
/tests/boards/intel_s1000_crb/ @dcpleung @sathishkuttan
/tests/bluetooth/ @joerchan @jhedberg @Vudentz
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar
/tests/posix/ @pfalcon
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin
/tests/drivers/can/ @alexanderwachter
/tests/drivers/flash_simulator/ @nvlsianpu
/tests/drivers/gpio/ @mnkp @pabigot
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @Mierunski
@@ -462,5 +389,5 @@
/tests/subsys/settings/ @nvlsianpu
/tests/subsys/shell/ @jakub-uC @nordic-krch
# Get all docs reviewed
*.rst @nashif
*posix*.rst @aescolar
*.rst @dbkinder
*posix*.rst @dbkinder @aescolar


@@ -1,8 +1,10 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
mainmenu "Zephyr Kernel Configuration"
source "Kconfig.zephyr"


@@ -1,8 +1,11 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2016 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
menu "Modules"
@@ -11,30 +14,37 @@ source "modules/Kconfig"
endmenu
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
# After merging all definitions for a symbol/choice, Kconfig picks the first
# property (e.g. the first default) with a satisfied condition.
# Include these first so that any properties (e.g. defaults) below can be
# overridden in *.defconfig files (by defining symbols in multiple locations).
# After merging all the symbol definitions, Kconfig picks the first property
# (e.g. the first default) with a satisfied condition.
#
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
# Board defaults should be parsed before SoC defaults, because boards usually
# overrides SoC values.
#
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
source "boards/shields/*/Kconfig.defconfig"
# Note: $ARCH and $BOARD_DIR might be glob patterns.
source "$(BOARD_DIR)/Kconfig.defconfig"
source "$(SOC_DIR)/$(ARCH)/*/Kconfig.defconfig"
source "boards/Kconfig"
source "$(SOC_DIR)/Kconfig"
source "arch/Kconfig"
source "kernel/Kconfig"
source "dts/Kconfig"
source "drivers/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
source "ext/Kconfig"
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig"
menu "Build and Link Features"
@@ -64,9 +74,9 @@ config LINKER_ORPHAN_SECTION_ERROR
endchoice
config CODE_DATA_RELOCATION
bool "Relocate code/data sections"
depends on ARM
help
bool "Relocate code/data sections"
depends on ARM
help
When selected this will relocate .text, data and .bss sections from
the specified files and places it in the required memory region. The
files should be specified in the CMakeList.txt file with
@@ -78,24 +88,17 @@ config HAS_FLASH_LOAD_OFFSET
This option is selected by targets having a FLASH_LOAD_OFFSET
and FLASH_LOAD_SIZE.
if HAS_FLASH_LOAD_OFFSET
config USE_DT_CODE_PARTITION
bool "Link application into /chosen/zephyr,code-partition from devicetree"
config USE_CODE_PARTITION
bool "link into code-partition"
depends on HAS_FLASH_LOAD_OFFSET
help
When enabled, the application will be linked into the flash partition
selected by the zephyr,code-partition property in /chosen in devicetree.
When this is disabled, the flash load offset and size can be set manually
below.
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_CODE_PARTITION := zephyr,code-partition
When selected application will be linked into chosen code-partition.
config FLASH_LOAD_OFFSET
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load offset" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
hex "Kernel load offset"
default $(dt_hex_val,DT_CODE_PARTITION_OFFSET) if USE_CODE_PARTITION
default 0
depends on HAS_FLASH_LOAD_OFFSET
help
This option specifies the byte offset from the beginning of flash that
the kernel should be loaded into. Changing this value from zero will
@@ -105,10 +108,10 @@ config FLASH_LOAD_OFFSET
If unsure, leave at the default value 0.
config FLASH_LOAD_SIZE
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load size" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_size_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
hex "Kernel load size"
default $(dt_hex_val,DT_CODE_PARTITION_SIZE) if USE_CODE_PARTITION
default 0
depends on HAS_FLASH_LOAD_OFFSET
help
If non-zero, this option specifies the size, in bytes, of the flash
area that the Zephyr image will be allowed to occupy. If zero, the
@@ -117,8 +120,6 @@ config FLASH_LOAD_SIZE
If unsure, leave at the default value 0.
endif # HAS_FLASH_LOAD_OFFSET
config TEXT_SECTION_OFFSET
hex
prompt "TEXT section offset" if !BOOTLOADER_MCUBOOT
@@ -175,6 +176,12 @@ config CUSTOM_SECTIONS_LD
Include a customized linker script fragment for inserting additional
arbitrary sections.
config LINK_WHOLE_ARCHIVE
bool "Allow linking with --whole-archive"
help
This options allows linking external libraries with the
--whole-archive option to keep all symbols.
config KERNEL_ENTRY
string "Kernel entry symbol"
default "__start"
@@ -247,28 +254,6 @@ config COMPILER_OPT
endmenu
choice
prompt "Error checking behavior for CHECK macro"
default RUNTIME_ERROR_CHECKS
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Enable runtime error checks"
help
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
menu "Build Options"
config KERNEL_BIN_NAME
@@ -330,6 +315,7 @@ config BUILD_OUTPUT_S19
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
depends on BUILD_OUTPUT_HEX || BUILD_OUTPUT_BIN || BUILD_OUTPUT_S19
config BUILD_OUTPUT_STRIPPED
bool "Build a stripped binary"
@@ -343,12 +329,6 @@ config APPLICATION_DEFINED_SYSCALL
Scan additional folders inside application source folder
for application defined syscalls.
config MAKEFILE_EXPORTS
bool "Generate build metadata files named Makefile.exports"
help
Generates a file with build information that can be read by
third party Makefile-based build systems.
endmenu
endmenu
@@ -377,7 +357,7 @@ config BOOTLOADER_SRAM_SIZE
config BOOTLOADER_MCUBOOT
bool "MCUboot bootloader support"
select USE_DT_CODE_PARTITION
select USE_CODE_PARTITION
help
This option signifies that the target uses MCUboot as a bootloader,
or in other words that the image is to be chain-loaded by MCUboot.
@@ -389,6 +369,8 @@ config BOOTLOADER_MCUBOOT
for the MCUboot image header
* Activating SW_VECTOR_RELAY on Cortex-M0 (or Armv8-M baseline)
targets with no built-in vector relocation mechanisms
* Including dts/common/mcuboot.overlay when building the Device
Tree in order to place and link the image at the slot0 offset
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
@@ -432,6 +414,7 @@ config MISRA_SANE
endmenu
menu "Compatibility"
config COMPAT_INCLUDES


@@ -1,3 +1,4 @@
.. raw:: html
<a href="https://www.zephyrproject.org">
@@ -40,8 +41,6 @@ Community Support
Community support is provided via mailing lists and Slack; see the Resources
below for details.
.. _project-resources:
Resources
*********


@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 2
VERSION_MINOR = 0
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -2,10 +2,5 @@
add_definitions(-D__ZEPHYR_SUPERVISOR__)
include_directories(
${ZEPHYR_BASE}/kernel/include
${ZEPHYR_BASE}/arch/${ARCH}/include
)
add_subdirectory(common)
add_subdirectory(${ARCH_DIR}/${ARCH} arch/${ARCH})


@@ -1,9 +1,12 @@
# General architecture configuration options
# Kconfig - general architecture configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2015 Intel Corporation
# Copyright (c) 2016 Cadence Design Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
@@ -11,75 +14,52 @@
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"
# Architecture symbols
#
# Should be 'select'ed by low-level symbols like SOC_SERIES_* or, lacking that,
# by SOC_*.
choice ARCH_CHOICE
prompt "Architecture"
default X86
config ARC
bool
select ARCH_IS_SET
bool "ARC architecture"
select HAS_DTS
help
ARC architecture
config ARM
bool
select ARCH_IS_SET
bool "ARM architecture"
select HAS_DTS
help
ARM architecture
config X86
bool
select ARCH_IS_SET
bool "x86 architecture"
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
help
x86 architecture
config X86_64
bool "x86_64 architecture"
select ATOMIC_OPERATIONS_BUILTIN
select SCHED_IPI_SUPPORTED
config NIOS2
bool
select ARCH_IS_SET
bool "Nios II Gen 2 architecture"
select ATOMIC_OPERATIONS_C
select HAS_DTS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
bool "RISCV architecture"
select HAS_DTS
help
RISCV architecture
config XTENSA
bool
select ARCH_IS_SET
bool "Xtensa architecture"
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select XTENSA_HAL if "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "xcc"
help
Xtensa architecture
config ARCH_POSIX
bool
select ARCH_IS_SET
bool "POSIX (native) architecture"
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
select ARCH_HAS_THREAD_ABORT
select NATIVE_APPLICATION
select HAS_COVERAGE_SUPPORT
help
POSIX (native) architecture
config ARCH_IS_SET
bool
help
Helper symbol to detect SoCs forgetting to select one of the arch
symbols above. See the top-level CMakeLists.txt.
endchoice
menu "General Architecture Options"
@@ -92,14 +72,15 @@ module-str = mpu
source "subsys/logging/Kconfig.template.log_config"
config BIG_ENDIAN
bool
help
This option tells the build system that the target system is big-endian.
Little-endian architecture is the default and should leave this option
unselected. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally avoid
modifying it. The option is used to select linker script OUTPUT_FORMAT
and command line option for gen_isr_tables.py.
bool
help
This option tells the build system that the target system is
big-endian. Little-endian architecture is the default and
should leave this option unselected. This option is selected
by arch/$ARCH/Kconfig, soc/**/Kconfig, or boards/**/Kconfig
and the user should generally avoid modifying it. The option
is used to select linker script OUTPUT_FORMAT and command
line option for gen_isr_tables.py.
config 64BIT
bool
@@ -110,33 +91,27 @@ config 64BIT
soc/**/Kconfig, or boards/**/Kconfig and the user should generally
avoid modifying it.
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_SRAM := zephyr,sram
if ARC || ARM || NIOS2 || X86 || X86_64
config SRAM_SIZE
int "SRAM Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_SRAM),0,K)
default $(dt_int_val,DT_SRAM_SIZE)
help
The SRAM size in kB. The default value comes from /chosen/zephyr,sram in
devicetree. The user should generally avoid changing it via menuconfig or
in configuration files.
This option specifies the size of the SRAM in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
it via the menu configuration.
config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
default $(dt_hex_val,DT_SRAM_BASE_ADDRESS)
help
The SRAM base address. The default value comes from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
if ARC || ARM || NIOS2 || X86
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
This option specifies the base address of the SRAM on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && ARM) || !ARM
default $(dt_int_val,DT_FLASH_SIZE) if (XIP && ARM) || !ARM
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
@@ -144,13 +119,13 @@ config FLASH_SIZE
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && ARM) || !ARM
default $(dt_hex_val,DT_FLASH_BASE_ADDRESS) if (XIP && ARM) || !ARM
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARC || NIOS2 || X86
endif # ARM || ARC || NIOS2 || X86 || X86_64
if ARCH_HAS_TRUSTED_EXECUTION
@@ -197,7 +172,6 @@ config HW_STACK_PROTECTION
config USERSPACE
bool "User mode threads"
depends on ARCH_HAS_USERSPACE
depends on RUNTIME_ERROR_CHECKS
help
When enabled, threads may be created or dropped down to user mode,
which has significantly restricted permissions and must interact
@@ -243,14 +217,6 @@ config STACK_GROWS_UP
Select this option if the architecture has upward growing thread
stacks. This is not common.
config NO_UNUSED_STACK_INSPECTION
bool
help
Selected if the architecture will generate a fault if unused stack
memory is examined, which is the region between the current stack
pointer and the deepest available address in the current stack
region.
config MAX_THREAD_BYTES
int "Bytes to use when tracking object thread permissions"
default 2
@@ -265,26 +231,31 @@ config DYNAMIC_OBJECTS
bool "Allow kernel objects to be allocated at runtime"
depends on USERSPACE
help
Enabling this option allows for kernel objects to be requested from
the calling thread's resource pool, at a slight cost in performance
due to the supplemental run-time tables required to validate such
objects.
Enabling this option allows for kernel objects to be requested from
the calling thread's resource pool, at a slight cost in performance
due to the supplemental run-time tables required to validate such
objects.
Objects allocated in this way can be freed with a supervisor-only
API call, or when the number of references to that object drops to
zero.
Objects allocated in this way can be freed with a supervisor-only
API call, or when the number of references to that object drops to
zero.
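As a rough illustration of the behavior described above, here is a minimal sketch assuming the k_object_alloc()/k_object_free() pair that backs CONFIG_DYNAMIC_OBJECTS (the exact calls and the K_OBJ_SEM object type are assumptions from the Zephyr tree of this era):

    #include <kernel.h>

    void demo_dynamic_object(void)
    {
            /* request a semaphore object from the calling thread's
             * resource pool; NULL means the pool is exhausted
             */
            struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

            if (sem == NULL) {
                    return;
            }

            k_sem_init(sem, 0, 1);
            /* ... use the semaphore ... */

            /* supervisor-only free; the object is also released when
             * its reference count drops to zero
             */
            k_object_free(sem);
    }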
if ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config NOCACHE_MEMORY
bool "Support for uncached memory"
depends on ARCH_HAS_NOCACHE_MEMORY_SUPPORT
help
Add a "nocache" read-write memory section that is configured to
not be cached. This memory section can be used to perform DMA
transfers when cache coherence handling is not optimal or cannot
be achieved using cache maintenance operations.
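A minimal usage sketch, assuming the __nocache section attribute that accompanies this option in the Zephyr tree (the buffer name and size are illustrative):

    #include <kernel.h>
    #include <linker/section_tags.h>

    /* placed in the "nocache" section, so DMA can read/write it
     * without explicit cache maintenance (CONFIG_NOCACHE_MEMORY=y)
     */
    static u8_t dma_rx_buf[256] __nocache;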
menu "Interrupt Configuration"
endif # ARCH_HAS_NOCACHE_MEMORY_SUPPORT
menu "Interrupt Configuration"
#
# Interrupt related configs
#
config DYNAMIC_INTERRUPTS
bool "Enable installation of IRQs at runtime"
help
@@ -343,13 +314,12 @@ config GEN_IRQ_START_VECTOR
This is a hidden option which needs to be set per architecture and
left alone.
config IRQ_OFFLOAD
bool "Enable IRQ offload"
depends on TEST
help
Enable irq_offload() API which allows functions to be synchronously
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
run in interrupt context. Mainly useful for test cases.
endmenu # Interrupt configuration
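A test-style sketch of the irq_offload() API described above; the signature matches the arch/arc/core/irq_offload.c hunk later in this compare, while the handler name and flag are illustrative:

    #include <irq_offload.h>

    static volatile int ran_in_irq;

    static void offload_handler(void *param)
    {
            /* runs synchronously in interrupt context */
            ran_in_irq = *(int *)param;
    }

    void demo_irq_offload(void)
    {
            int arg = 1;

            /* returns only after offload_handler has executed */
            irq_offload(offload_handler, &arg);
    }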
@@ -358,7 +328,6 @@ endmenu
#
# Architecture Capabilities
#
config ARCH_HAS_TRUSTED_EXECUTION
bool
@@ -377,9 +346,6 @@ config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
#
# Other architecture related options
#
@@ -391,51 +357,57 @@ config ARCH_HAS_THREAD_ABORT
# Hidden PM feature configs which are to be selected by
# individual SoC.
#
config HAS_SYS_POWER_STATE_SLEEP_1
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_1
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_2
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_3
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_3
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_1
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_1
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_2
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_3
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_3
configuration option.
config BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
# Hidden
bool
help
This option signifies that the target has bootloaders available
that support context restore upon resume from deep sleep.
#
# Hidden CPU family configs
# End hidden CPU family configs
#
config CPU_HAS_TEE
@@ -445,12 +417,6 @@ config CPU_HAS_TEE
Execution Environment (e.g. when it has a security attribution
unit).
config CPU_HAS_DCLS
bool
help
This option is enabled when the processor hardware is configured in
Dual-redundant Core Lock-step (DCLS) topology.
config CPU_HAS_FPU
bool
help
@@ -459,11 +425,13 @@ config CPU_HAS_FPU
config CPU_HAS_MPU
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when the CPU has a Memory Protection Unit (MPU).
config MEMORY_PROTECTION
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when Memory Protection features are supported.
Memory protection support is currently available on ARC, ARM, and x86
@@ -471,41 +439,18 @@ config MEMORY_PROTECTION
config MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when the MPU requires a power of two alignment
and size for MPU regions.
config MPU_REQUIRES_NON_OVERLAPPING_REGIONS
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when the MPU requires the active (i.e. enabled)
MPU regions to be non-overlapping with each other.
config MPU_GAP_FILLING
bool "Force MPU to be filling in background memory regions"
depends on MPU_REQUIRES_NON_OVERLAPPING_REGIONS
default y if !USERSPACE
help
This Kconfig option instructs the MPU driver to enforce
a full kernel SRAM partitioning when it programs the
dynamic MPU regions (user thread stack, PRIV stack guard
and application memory domains) during context switch. We
allow this to be a configurable option in order to be able
to switch the option off and have an increased number of MPU
regions available for application memory domain programming.
Notes:
An increased number of MPU regions should only be required
when building with USERSPACE support. As a result, when we
build without USERSPACE support, gap filling should always
be required.
When the option is switched off, access to memory areas not
covered by explicit MPU regions is restricted to privileged
code on an ARCH-specific basis. Refer to ARCH-specific
documentation for more information on how this option is
used.
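For context, a sketch of how the application memory domains mentioned above are populated, assuming the k_mem_domain API of this era (the partition and domain names are illustrative):

    #include <kernel.h>
    #include <app_memory/app_memdomain.h>

    K_APPMEM_PARTITION_DEFINE(app_part);

    static struct k_mem_domain app_domain;

    void demo_mem_domain(k_tid_t thread)
    {
            struct k_mem_partition *parts[] = { &app_part };

            /* the MPU driver programs one region per partition at
             * context switch; with MPU_GAP_FILLING=y it additionally
             * programs regions covering the remaining kernel SRAM
             */
            k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
            k_mem_domain_add_thread(&app_domain, thread);
    }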
menuconfig FLOAT
bool "Floating point"
depends on CPU_HAS_FPU
@@ -517,13 +462,20 @@ menuconfig FLOAT
Disabling this option means that any thread that uses a
floating point register will get a fatal exception.
if FLOAT
config FP_SHARING
bool "Floating point register sharing"
depends on FLOAT
help
This option allows multiple threads to use the floating point
registers.
endif # FLOAT
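To illustrate why FP_SHARING matters, a hypothetical pair of threads: with FLOAT=y but FP_SHARING=n, only one of them could safely touch the floating point registers, since FP context is not preserved across context switches:

    #include <kernel.h>

    static void fp_worker(void *scale, void *p2, void *p3)
    {
            volatile float acc = 1.0f;

            for (;;) {
                    acc *= *(float *)scale;   /* uses FP registers */
                    k_yield();
            }
    }

    /* two threads doing FP math concurrently require FP_SHARING=y */
    static float scale_a = 1.01f;
    static float scale_b = 0.99f;
    K_THREAD_DEFINE(fp_a, 1024, fp_worker, &scale_a, NULL, NULL, 5, 0, 0);
    K_THREAD_DEFINE(fp_b, 1024, fp_worker, &scale_b, NULL, NULL, 5, 0, 0);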
#
# End hidden PM feature configs
#
config ARCH
string
help

View File

@@ -1,7 +1,10 @@
# ARC options
#
# Copyright (c) 2014, 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
menu "ARC Options"
depends on ARC
@@ -13,14 +16,14 @@ choice
prompt "ARC core family"
default CPU_ARCEM
config CPU_ARCEM
config CPU_ARCEM
bool "ARC EM cores"
select CPU_ARCV2
select ATOMIC_OPERATIONS_C
help
This option signifies the use of an ARC EM CPU
config CPU_ARCHS
config CPU_ARCHS
bool "ARC HS cores"
select CPU_ARCV2
select ATOMIC_OPERATIONS_BUILTIN
@@ -29,39 +32,9 @@ config CPU_ARCHS
endchoice
config CPU_EM4
bool
help
If y, the SoC uses an ARC EM4 CPU
config CPU_EM4_DMIPS
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU
config CPU_EM4_FPUS
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU with the single-precision
floating-point extension
config CPU_EM4_FPUDA
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU with single-precision
floating-point and double assist instructions
config CPU_EM6
bool
help
If y, the SoC uses an ARC EM6 CPU
config FP_FPU_DA
bool
menu "ARCv2 Family Options"
config CPU_ARCV2
config CPU_ARCV2
bool
select ARCH_HAS_STACK_PROTECTION if ARC_HAS_STACK_CHECKING || ARC_MPU
select ARCH_HAS_USERSPACE if ARC_MPU
@@ -71,7 +44,15 @@ config CPU_ARCV2
help
This option signifies the use of a CPU of the ARCv2 family.
config NUM_IRQ_PRIO_LEVELS
config DATA_ENDIANNESS_LITTLE
bool
default y
help
This is driven by the processor implementation, since it is fixed in
hardware. The BSP should set this value to 'n' if the data is
implemented as big endian.
config NUM_IRQ_PRIO_LEVELS
int "Number of supported interrupt priority levels"
range 1 16
help
@@ -80,7 +61,7 @@ config NUM_IRQ_PRIO_LEVELS
The BSP must provide a valid default for proper operation.
config NUM_IRQS
config NUM_IRQS
int "Upper limit of interrupt numbers/IDs used"
range 17 256
help
@@ -91,7 +72,7 @@ config NUM_IRQS
The BSP must provide a valid default. This drives the size of the
vector table.
config RGF_NUM_BANKS
config RGF_NUM_BANKS
int "Number of General Purpose Register Banks"
depends on CPU_ARCV2
range 1 2
@@ -114,21 +95,7 @@ config ARC_FIRQ
If FIRQ is disabled, the handling of the highest-priority interrupts
will be the same as for other interrupts.
config ARC_FIRQ_STACK
bool "Enable separate firq stack"
depends on ARC_FIRQ && RGF_NUM_BANKS > 1
help
Use a separate stack for FIRQ handling. When the fast irq is also a direct
irq, this achieves the minimal interrupt latency.
config ARC_FIRQ_STACK_SIZE
int "FIRQ stack size"
depends on ARC_FIRQ_STACK
default 1024
help
The size of the FIRQ stack.
config ARC_HAS_STACK_CHECKING
config ARC_HAS_STACK_CHECKING
bool "ARC has STACK_CHECKING"
default y
help
@@ -136,20 +103,19 @@ config ARC_HAS_STACK_CHECKING
checking stack accesses and raising an exception when a stack
overflow or underflow is detected.
config ARC_CONNECT
config ARC_CONNECT
bool "ARC has ARC connect"
select SCHED_IPI_SUPPORTED
help
ARC is configured with ARC Connect, which is the hardware that connects
multiple cores.
config ARC_STACK_CHECKING
config ARC_STACK_CHECKING
bool
select NO_UNUSED_STACK_INSPECTION
help
Use ARC STACK_CHECKING to do stack protection
config ARC_STACK_PROTECTION
config ARC_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
select ARC_STACK_CHECKING if ARC_HAS_STACK_CHECKING
@@ -165,8 +131,9 @@ config ARC_STACK_PROTECTION
selection of the ARC stack checking is
prioritized over the MPU-based stack guard.
config ARC_USE_UNALIGNED_MEM_ACCESS
config ARC_USE_UNALIGNED_MEM_ACCESS
bool "Enable unaligned access in HW"
default n if CPU_ARCEM
default y if CPU_ARCHS
depends on (CPU_ARCEM && !ARC_HAS_SECURE) || CPU_ARCHS
help
@@ -174,7 +141,7 @@ config ARC_USE_UNALIGNED_MEM_ACCESS
to support unaligned memory access which is then disabled by default.
Enable unaligned access in hardware and make software use it.
config FAULT_DUMP
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
@@ -189,7 +156,7 @@ config FAULT_DUMP
0: Off.
config XIP
config XIP
default y if !UART_NSIM
config GEN_ISR_TABLES
@@ -234,7 +201,8 @@ config SJLI_TABLE_SIZE
sjli instruction.
config ARC_SECURE_FIRMWARE
bool "Generate Secure Firmware"
prompt "Generate Secure Firmware"
bool
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_SECURE
help
@@ -250,7 +218,8 @@ config ARC_SECURE_FIRMWARE
and normal resources of the ARC processors.
config ARC_NORMAL_FIRMWARE
bool "Generate Normal Firmware"
prompt "Generate Normal Firmware"
bool
depends on !ARC_SECURE_FIRMWARE
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_NONSECURE
@@ -311,15 +280,6 @@ config CACHE_FLUSHING
If the d-cache is present, set this to y.
If the d-cache is NOT present, set this to n.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768
help
Size in bytes of the exception handling stack, which is placed at the
top of the interrupt stack for a smaller memory footprint, since
exceptions are infrequent. To limit the impact on interrupt handling,
especially nested interrupts, it cannot be too large.
endmenu
config ARC_EXCEPTION_DEBUG

View File

@@ -3,32 +3,28 @@
zephyr_library()
zephyr_library_sources(
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
timestamp.c
isr_wrapper.S
regular_irq.S
switch.S
prep_c.c
reset.S
vector_table.c
)
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
timestamp.c
isr_wrapper.S
regular_irq.S
switch.S
prep_c.c
reset.S
vector_table.c
)
zephyr_library_sources_ifdef(CONFIG_CACHE_FLUSHING cache.c)
zephyr_library_sources_ifdef(CONFIG_ARC_FIRQ fast_irq.S)
zephyr_library_sources_if_kconfig(irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_SMP arc_smp.c)

View File

@@ -223,7 +223,7 @@ u64_t z_arc_connect_gfrc_read(void)
* sub-components. For GFRC, HW allows simultaneous access to the
* counters, so an irq lock is enough.
*/
key = arch_irq_lock();
key = z_arch_irq_lock();
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_LO, 0);
low = z_arc_connect_cmd_readback();
@@ -231,7 +231,7 @@ u64_t z_arc_connect_gfrc_read(void)
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HI, 0);
high = z_arc_connect_cmd_readback();
arch_irq_unlock(key);
z_arch_irq_unlock(key);
return (((u64_t)high) << 32) | low;
}
@@ -349,7 +349,7 @@ u32_t z_arc_connect_idu_read_mode(u32_t irq_num)
void z_arc_connect_idu_set_dest(u32_t irq_num, u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_DEST,
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
irq_num, core_mask);
}
}

View File

@@ -6,7 +6,7 @@
/**
* @file
* @brief codes required for ARC multicore and Zephyr smp support
* @brief codes required for ARC smp support
*
*/
#include <device.h>
@@ -23,70 +23,6 @@
#define ARCV2_ICI_IRQ_PRIORITY 1
volatile struct {
arch_cpustart_t fn;
void *arg;
} arc_cpu_init[CONFIG_MP_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up the master core and the slave cores.
* A slave core will spin on arc_cpu_wake_flag until the master core sets it
* to that slave core's id. Then, the slave core clears it to notify the
* master core that it has woken up.
*
*/
volatile u32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
* _curr_cpu records the _cpu_t struct of each cpu
* for efficient use in assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* set the initial sp of the target cpu through arc_cpu_sp;
* arc_cpu_wake_flag protects arc_cpu_sp so that
* only one slave cpu can read it at a time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave cpu to start */
while (arc_cpu_wake_flag != 0) {
;
}
}
/* the C entry of slave cores */
void z_arc_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
z_icache_setup();
z_irq_setup();
z_arc_connect_ici_clear();
z_irq_priority_set(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY, 0);
irq_enable(IRQ_ICI);
#endif
/* call the function set by arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(void *unused)
{
ARG_UNUSED(unused);
@@ -102,36 +38,89 @@ static void sched_ipi_handler(void *unused)
* use register r0 and register r1 as return value, r0 has
* new thread, r1 has old thread. If r0 == 0, it means no thread switch.
*/
u64_t z_arc_smp_switch_in_isr(void)
u64_t z_arch_smp_switch_in_isr(void)
{
u64_t ret = 0;
u32_t new_thread;
u32_t old_thread;
if (!_current_cpu->swap_ok) {
return 0;
}
old_thread = (u32_t)_current;
new_thread = (u32_t)z_get_next_ready_thread();
if (new_thread != old_thread) {
#ifdef CONFIG_TIMESLICING
z_reset_time_slice();
#endif
_current_cpu->swap_ok = 0;
((struct k_thread *)new_thread)->base.cpu =
arch_curr_cpu()->id;
_current_cpu->current = (struct k_thread *) new_thread;
z_arch_curr_cpu()->id;
_current = (struct k_thread *) new_thread;
ret = new_thread | ((u64_t)(old_thread) << 32);
}
return ret;
}
volatile struct {
void (*fn)(int, void*);
void *arg;
} arc_cpu_init[CONFIG_MP_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up the master core and the slave cores.
* A slave core will spin on arc_cpu_wake_flag until the master core sets it
* to that slave core's id. Then, the slave core clears it to notify the
* master core that it has woken up.
*
*/
volatile u32_t arc_cpu_wake_flag;
/*
* _curr_cpu records the _cpu_t struct of each cpu
* for efficient use in assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_NUM_CPUS];
/* Called from Zephyr initialization */
void z_arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
void (*fn)(int, void *), void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave cpu to start */
while (arc_cpu_wake_flag != 0) {
;
}
}
/* the C entry of slave cores */
void z_arch_slave_start(int cpu_num)
{
void (*fn)(int, void*);
z_icache_setup();
z_irq_setup();
z_irq_priority_set(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY, 0);
irq_enable(IRQ_ICI);
/* call the function set by z_arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(cpu_num, arc_cpu_init[cpu_num].arg);
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
void z_arch_sched_ipi(void)
{
u32_t i;
/* broadcast sched_ipi request to other cores
/* broadcast sched_ipi request to all cores
* if the target is current core, hardware will ignore it
*/
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
@@ -154,7 +143,6 @@ static int arc_smp_init(struct device *dev)
if (bcr.ipi) {
/* register the ici interrupt; only the master core needs to register it once */
z_arc_connect_ici_clear();
IRQ_CONNECT(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY,
sched_ipi_handler, NULL, 0);
@@ -182,4 +170,3 @@ static int arc_smp_init(struct device *dev)
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

View File

@@ -17,11 +17,11 @@
#include <linker/sections.h>
#include <arch/cpu.h>
GTEXT(arch_cpu_idle)
GTEXT(arch_cpu_atomic_idle)
GDATA(z_arc_cpu_sleep_mode)
GTEXT(k_cpu_idle)
GTEXT(k_cpu_atomic_idle)
GDATA(k_cpu_sleep_mode)
SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
SECTION_VAR(BSS, k_cpu_sleep_mode)
.balign 4
.word 0
@@ -33,29 +33,17 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
* void nanCpuIdle(void)
*/
SECTION_FUNC(TEXT, arch_cpu_idle)
SECTION_FUNC(TEXT, k_cpu_idle)
#ifdef CONFIG_TRACING
push_s blink
jl sys_trace_idle
jl z_sys_trace_idle
pop_s blink
#endif
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
/*
* It has been found that (in nsim_hs_smp), when the cpu
* is sleeping, it does not respond to an inter-processor interrupt
* even though the interrupt is pending and interrupts are enabled.
* Here is a workaround:
*/
#if !defined(CONFIG_SOC_NSIM) && !defined(CONFIG_SMP)
sleep r1
#else
seti r1
_z_arc_idle_loop:
b _z_arc_idle_loop
#endif
j_s [blink]
nop
@@ -64,17 +52,17 @@ _z_arc_idle_loop:
*
* This function exits with interrupts restored to <key>.
*
* void arch_cpu_atomic_idle(unsigned int key)
* void k_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
SECTION_FUNC(TEXT, k_cpu_atomic_idle)
#ifdef CONFIG_TRACING
push_s blink
jl sys_trace_idle
jl z_sys_trace_idle
pop_s blink
#endif
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
sleep r1
j_s.d [blink]

View File

@@ -16,7 +16,6 @@
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
@@ -81,7 +80,7 @@ SECTION_FUNC(TEXT, _firq_enter)
_check_and_inc_int_nest_counter r0, r1
bne.d firq_nest
mov_s r0, sp
mov r0, sp
_get_curr_cpu_irq_stack sp
#if CONFIG_RGF_NUM_BANKS != 1
@@ -102,25 +101,20 @@ firq_nest:
* save original value of _ARC_V2_USER_SP and ilink into
* the stack of interrupted context first, then restore them later
*/
push ilink
PUSHAX ilink, _ARC_V2_USER_SP
st ilink, [sp]
lr ilink, [_ARC_V2_USER_SP]
st ilink, [sp, -4]
/* sp here is the sp of interrupted context */
sr sp, [_ARC_V2_USER_SP]
/* here, bank 0's sp must go back to its value before the push and
* PUSHAX, as we will switch to bank1; the pop and POPAX later will
* change bank1's sp, not bank0's sp
*/
add sp, sp, 8
/* switch back to banked reg, only ilink can be used */
lr ilink, [_ARC_V2_STATUS32]
or ilink, ilink, _ARC_V2_STATUS32_RB(1)
kflag ilink
lr sp, [_ARC_V2_USER_SP]
POPAX ilink, _ARC_V2_USER_SP
pop ilink
ld ilink, [sp, -4]
sr ilink, [_ARC_V2_USER_SP]
ld ilink, [sp]
firq_nest_1:
#else
firq_nest:
@@ -155,8 +149,10 @@ SECTION_FUNC(TEXT, _firq_exit)
bl z_check_stack_sentinel
#endif
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
bl z_arch_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
brne r0, 0, _firq_reschedule
#else
@@ -169,6 +165,8 @@ SECTION_FUNC(TEXT, _firq_exit)
#endif
/* fall to no rescheduling */
#endif /* CONFIG_PREEMPT_ENABLED */
.balign 4
_firq_no_reschedule:
pop sp
@@ -182,6 +180,8 @@ _firq_no_reschedule:
#endif
rtie
#ifdef CONFIG_PREEMPT_ENABLED
.balign 4
_firq_reschedule:
pop sp
@@ -201,36 +201,12 @@ _firq_reschedule:
* point, so when switching back to register bank 0, it will contain the
* registers from the interrupted thread.
*/
#if defined(CONFIG_USERSPACE)
/* when USERSPACE is configured, we need to consider the case where the firq
* comes out in user mode. According to the ARCv2 ISA and nsim, the following
* micro ops will be executed:
* sp <- reg bank1's sp
* exchange sp and _ARC_V2_USER_SP
* then:
* sp is the sp of the kernel stack of the interrupted thread
* _ARC_V2_USER_SP is reg bank1's sp
* the sp of the user stack of the interrupted thread is reg bank0's sp
* if the firq comes out in kernel mode, the following micro op is executed:
* sp <- reg bank1's sp
* so, software needs to do the necessary handling to set up the correct sp
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bbit0 r0, 31, _firq_from_kernel
aex sp, [_ARC_V2_USER_SP]
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
aex sp, [_ARC_V2_USER_SP]
b _firq_create_irq_stack_frame
_firq_from_kernel:
#endif
/* chose register bank #0 */
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
_firq_create_irq_stack_frame:
/* we're back on the outgoing thread's stack */
_create_irq_stack_frame
@@ -243,7 +219,6 @@ _firq_create_irq_stack_frame:
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET] /* ilink into pc */
#ifdef CONFIG_SMP
/*
* load r0, r1 from irq stack
@@ -254,18 +229,8 @@ _firq_create_irq_stack_frame:
#endif
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of the interrupted thread; it
* will be restored when the thread is switched back in
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
#ifdef CONFIG_SMP
mov_s r2, r1
mov r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
@@ -275,7 +240,7 @@ _firq_create_irq_stack_frame:
st _CAUSE_FIRQ, [r2, _thread_offset_to_relinquish_cause]
#ifdef CONFIG_SMP
mov_s r2, r0
mov r2, r0
#else
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
@@ -292,7 +257,7 @@ _firq_create_irq_stack_frame:
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
@@ -308,9 +273,9 @@ _firq_create_irq_stack_frame:
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _firq_return_from_rirq
nop_s
nop
breq r3, _CAUSE_FIRQ, _firq_return_from_firq
nop_s
nop
/* fall through */
@@ -318,7 +283,7 @@ _firq_create_irq_stack_frame:
_firq_return_from_coop:
/* pc into ilink */
pop_s r0
mov_s ilink, r0
mov ilink, r0
pop_s r0 /* status32 into r0 */
sr r0, [_ARC_V2_STATUS32_P0]
@@ -329,16 +294,6 @@ _firq_return_from_coop:
_firq_return_from_rirq:
_firq_return_from_firq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
_pop_irq_stack_frame
ld ilink, [sp, -4] /* status32 into ilink */
@@ -347,3 +302,5 @@ _firq_return_from_firq:
/* LP registers are already restored, just switch back to bank 0 */
rtie
#endif /* CONFIG_PREEMPT_ENABLED */

View File

@@ -12,33 +12,24 @@
* ARCv2 CPUs.
*/
#include <kernel.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <arch/cpu.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
#include <logging/log_ctrl.h>
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
if (reason == K_ERR_CPU_EXCEPTION) {
LOG_ERR("Faulting instruction address = 0x%lx",
z_arc_v2_aux_reg_read(_ARC_V2_ERET));
z_fatal_print("Faulting instruction address = 0x%lx",
z_arc_v2_aux_reg_read(_ARC_V2_ERET));
}
z_fatal_error(reason, esf);
}
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
FUNC_NORETURN void z_arch_syscall_oops(void *ssf_ptr)
{
z_arc_fatal_error(K_ERR_KERNEL_OOPS, ssf_ptr);
CODE_UNREACHABLE;
}
FUNC_NORETURN void arch_system_halt(unsigned int reason)
{
ARG_UNUSED(reason);
__asm__("brk");
CODE_UNREACHABLE;
}

View File

@@ -16,17 +16,16 @@
#include <inttypes.h>
#include <kernel.h>
#include <kernel_internal.h>
#include <kernel_structs.h>
#include <exc_handle.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
#include <logging/log_ctrl.h>
#ifdef CONFIG_USERSPACE
Z_EXC_DECLARE(z_arc_user_string_nlen);
Z_EXC_DECLARE(z_arch_user_string_nlen);
static const struct z_exc_handle exceptions[] = {
Z_EXC_HANDLE(z_arc_user_string_nlen)
Z_EXC_HANDLE(z_arch_user_string_nlen)
};
#endif
@@ -155,32 +154,32 @@ static void dump_protv_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
LOG_ERR("Memory read protection violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
LOG_ERR("Memory write protection violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
LOG_ERR("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
z_fatal_print("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
LOG_ERR("Normal vector table in secure memory");
z_fatal_print("Normal vector table in secure memory");
break;
case 0x11:
LOG_ERR("NS handler code located in S memory");
z_fatal_print("NS handler code located in S memory");
break;
case 0x12:
LOG_ERR("NSC Table Range Violation");
z_fatal_print("NSC Table Range Violation");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
@@ -189,46 +188,46 @@ static void dump_machine_check_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("double fault");
z_fatal_print("double fault");
break;
case 0x1:
LOG_ERR("overlapping TLB entries");
z_fatal_print("overlapping TLB entries");
break;
case 0x2:
LOG_ERR("fatal TLB error");
z_fatal_print("fatal TLB error");
break;
case 0x3:
LOG_ERR("fatal cache error");
z_fatal_print("fatal cache error");
break;
case 0x4:
LOG_ERR("internal memory error on instruction fetch");
z_fatal_print("internal memory error on instruction fetch");
break;
case 0x5:
LOG_ERR("internal memory error on data fetch");
z_fatal_print("internal memory error on data fetch");
break;
case 0x6:
LOG_ERR("illegal overlapping MPU entries");
z_fatal_print("illegal overlapping MPU entries");
if (parameter == 0x1) {
LOG_ERR(" - jump and branch target");
z_fatal_print(" - jump and branch target");
}
break;
case 0x10:
LOG_ERR("secure vector table not located in secure memory");
z_fatal_print("secure vector table not located in secure memory");
break;
case 0x11:
LOG_ERR("NSC jump table not located in secure memory");
z_fatal_print("NSC jump table not located in secure memory");
break;
case 0x12:
LOG_ERR("secure handler code not located in secure memory");
z_fatal_print("secure handler code not located in secure memory");
break;
case 0x13:
LOG_ERR("NSC target address not located in secure memory");
z_fatal_print("NSC target address not located in secure memory");
break;
case 0x80:
LOG_ERR("uncorrectable ECC or parity error in vector memory");
z_fatal_print("uncorrectable ECC or parity error in vector memory");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
@@ -237,54 +236,54 @@ static void dump_privilege_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Privilege violation");
z_fatal_print("Privilege violation");
break;
case 0x1:
LOG_ERR("disabled extension");
z_fatal_print("disabled extension");
break;
case 0x2:
LOG_ERR("action point hit");
z_fatal_print("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
LOG_ERR("N to S return using incorrect return mechanism");
z_fatal_print("N to S return using incorrect return mechanism");
break;
case 0x2:
LOG_ERR("N to S return with incorrect operating mode");
z_fatal_print("N to S return with incorrect operating mode");
break;
case 0x3:
LOG_ERR("IRQ/exception return fetch from wrong mode");
z_fatal_print("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
LOG_ERR("attempt to halt secure processor in NS mode");
z_fatal_print("attempt to halt secure processor in NS mode");
break;
case 0x20:
LOG_ERR("attempt to access secure resource from normal mode");
z_fatal_print("attempt to access secure resource from normal mode");
break;
case 0x40:
LOG_ERR("SID violation on resource access (APEX/UAUX/key NVM)");
z_fatal_print("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
LOG_ERR("attempt to access secure APEX feature from NS mode");
z_fatal_print("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
LOG_ERR("SID violation on access to APEX feature");
z_fatal_print("SID violation on access to APEX feature");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
@@ -292,7 +291,7 @@ static void dump_privilege_exception(u32_t cause, u32_t parameter)
static void dump_exception_info(u32_t vector, u32_t cause, u32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
LOG_ERR("interrupt %u", vector);
z_fatal_print("interrupt %u", vector);
return;
}
@@ -301,55 +300,55 @@ static void dump_exception_info(u32_t vector, u32_t cause, u32_t parameter)
*/
switch (vector) {
case ARC_EV_RESET:
LOG_ERR("Reset");
z_fatal_print("Reset");
break;
case ARC_EV_MEM_ERROR:
LOG_ERR("Memory Error");
z_fatal_print("Memory Error");
break;
case ARC_EV_INS_ERROR:
LOG_ERR("Instruction Error");
z_fatal_print("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
LOG_ERR("EV_MachineCheck");
z_fatal_print("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
LOG_ERR("EV_TLBMissI");
z_fatal_print("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
LOG_ERR("EV_TLBMissD");
z_fatal_print("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
LOG_ERR("EV_ProtV");
z_fatal_print("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
LOG_ERR("EV_PrivilegeV");
z_fatal_print("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
LOG_ERR("EV_SWI");
z_fatal_print("EV_SWI");
break;
case ARC_EV_TRAP:
LOG_ERR("EV_Trap");
z_fatal_print("EV_Trap");
break;
case ARC_EV_EXTENSION:
LOG_ERR("EV_Extension");
z_fatal_print("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
LOG_ERR("EV_DivZero");
z_fatal_print("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
LOG_ERR("EV_DCError");
z_fatal_print("EV_DCError");
break;
case ARC_EV_MISALIGNED:
LOG_ERR("EV_Misaligned");
z_fatal_print("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
LOG_ERR("EV_VecUnit");
z_fatal_print("EV_VecUnit");
break;
default:
LOG_ERR("unknown");
z_fatal_print("unknown");
break;
}
}
@@ -402,9 +401,9 @@ void _Fault(z_arch_esf_t *esf, u32_t old_sp)
return;
}
LOG_ERR("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
LOG_ERR("Address 0x%x", exc_addr);
z_fatal_print("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
z_fatal_print("Address 0x%x", exc_addr);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
dump_exception_info(vector, cause, parameter);
#endif

View File

@@ -38,17 +38,8 @@ GTEXT(__ev_maligned)
GTEXT(z_irq_do_offload);
#endif
/*
* Exception handling uses the top part of the interrupt stack to
* get a smaller memory footprint, because exceptions are infrequent.
* To reduce the impact on interrupt handling, especially nested
* interrupts, the top part of the interrupt stack cannot be too
* large, so add a check here
*/
#if CONFIG_ARC_EXCEPTION_STACK_SIZE > (CONFIG_ISR_STACK_SIZE >> 1)
#error "interrupt stack size is too small"
#endif
/* the necessary stack size for exception handling */
#define EXCEPTION_STACK_SIZE 384
/*
* @brief Fault handler installed in the fault and reserved vectors
@@ -74,9 +65,9 @@ _exc_entry:
* and an exception is raised, then it is guaranteed that
* exception handling has the necessary stack to use
*/
mov_s ilink, sp
mov ilink, sp
_get_curr_cpu_irq_stack sp
sub sp, sp, (CONFIG_ISR_STACK_SIZE - CONFIG_ARC_EXCEPTION_STACK_SIZE)
sub sp, sp, (CONFIG_ISR_STACK_SIZE - EXCEPTION_STACK_SIZE)
/*
* save caller saved registers
@@ -100,9 +91,9 @@ _exc_entry:
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* sp is parameter of _Fault */
mov_s r0, sp
mov r0, sp
/* ilink is the thread's original sp */
mov_s r1, ilink
mov r1, ilink
jl _Fault
_exc_return:
@@ -114,10 +105,11 @@ _exc_return:
* exception comes out: thread context, irq context, or nested irq context?
*/
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
bl z_arch_smp_switch_in_isr
breq r0, 0, _exc_return_from_exc
mov_s r2, r0
mov r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
@@ -151,7 +143,7 @@ _exc_return:
/* save r2 in ilink because of the possible following reg
* bank switch
*/
mov_s ilink, r2
mov ilink, r2
#endif
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~(_ARC_V2_STATUS32_AE | _ARC_V2_STATUS32_RB(7)))
@@ -164,18 +156,18 @@ _exc_return:
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
mov_s r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
mov r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
mov_s r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
mov r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
push_s r2
mov_s r0, _ARC_V2_AUX_IRQ_ACT
mov_s r1, r3
mov_s r6, ARC_S_CALL_AUX_WRITE
push r2
mov r0, _ARC_V2_AUX_IRQ_ACT
mov r1, r3
mov r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
pop_s r2
pop r2
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
@@ -186,13 +178,14 @@ _exc_return:
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_exc:
ld_s r0, [sp, ___isf_t_pc_OFFSET]
sr r0, [_ARC_V2_ERET]
_pop_irq_stack_frame
mov_s sp, ilink
mov sp, ilink
rtie
@@ -203,36 +196,30 @@ SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
#ifdef CONFIG_USERSPACE
cmp ilink, _TRAP_S_CALL_SYSTEM_CALL
bne _do_non_syscall_trap
/* do sys_call */
mov_s ilink, K_SYSCALL_LIMIT
/* do sys_call */
mov ilink, K_SYSCALL_LIMIT
cmp r6, ilink
blo valid_syscall_id
blt valid_syscall_id
mov_s r0, r6
mov_s r6, K_SYSCALL_BAD
mov r0, r6
mov r6, K_SYSCALL_BAD
valid_syscall_id:
/* create a sys call frame
* caller regs (r0 - 12) are saved in _create_irq_stack_frame
* ok to use them later
*/
_create_irq_stack_frame
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0, [_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
lr ilink, [_ARC_V2_ERSEC_STAT]
push ilink
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr ilink, [_ARC_V2_ERET]
push ilink
lr ilink, [_ARC_V2_ERSTATUS]
push ilink
bclr r0, r0, _ARC_V2_STATUS32_U_BIT
sr r0, [_ARC_V2_ERSTATUS]
mov_s r0, _arc_do_syscall
sr r0, [_ARC_V2_ERET]
bclr ilink, ilink, _ARC_V2_STATUS32_U_BIT
sr ilink, [_ARC_V2_ERSTATUS]
mov ilink, _arc_do_syscall
sr ilink, [_ARC_V2_ERET]
rtie
@@ -263,7 +250,7 @@ _do_non_syscall_trap:
_check_and_inc_int_nest_counter r0, r1
bne.d exc_nest_handle
mov_s r0, sp
mov r0, sp
_get_curr_cpu_irq_stack sp
exc_nest_handle:
@@ -275,6 +262,83 @@ exc_nest_handle:
_dec_int_nest_counter r0, r1
lr r0, [_ARC_V2_AUX_IRQ_ACT]
and r0, r0, 0xffff
cmp r0, 0
bne _exc_return_from_exc
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arch_smp_switch_in_isr
breq r0, 0, _exc_return_from_irqoffload_trap
mov r2, r1
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
mov r2, r0
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_irqoffload_trap
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/*
* sync up the ERSEC_STAT.ERM and SEC_STAT.IRM bits.
* Use a fake interrupt return to simulate an exception return.
* ERM and IRM record which mode the cpu should return to, 1: secure,
* 0: normal
*/
lr r3,[_ARC_V2_ERSEC_STAT]
btst r3, 31
bset.nz r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
bclr.z r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
sflag r3
/* save _ARC_V2_SEC_STAT */
and r3, r3, 0xff
push r3
#endif
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
/* note: Ok to use _CAUSE_RIRQ since everything is saved */
mov r2, r0
#ifndef CONFIG_SMP
st_s r2, [r1, _kernel_offset_to_current]
#endif
/* clear AE bit to forget this was an exception */
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~_ARC_V2_STATUS32_AE)
kflag r3
/* pretend lowest priority interrupt happened to use common handler */
lr r3, [_ARC_V2_AUX_IRQ_ACT]
#ifdef CONFIG_ARC_SECURE_FIRMWARE
or r3, r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
or r3, r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
push_s r2
mov r0, _ARC_V2_AUX_IRQ_ACT
mov r1, r3
mov r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
pop_s r2
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_irqoffload_trap:
_pop_irq_stack_frame
rtie
#endif /* CONFIG_IRQ_OFFLOAD */

View File

@@ -26,63 +26,6 @@
#include <irq.h>
#include <sys/printk.h>
/*
* storage space for the interrupt stack of fast_irq
*/
#if defined(CONFIG_ARC_FIRQ_STACK)
#if defined(CONFIG_SMP)
K_THREAD_STACK_ARRAY_DEFINE(_firq_interrupt_stack, CONFIG_MP_NUM_CPUS,
CONFIG_ARC_FIRQ_STACK_SIZE);
#else
K_THREAD_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
#endif
/*
* @brief Set the stack pointer for firq handling
*
* @return N/A
*/
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = Z_THREAD_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = Z_THREAD_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif
/* z_arc_firq_stack_set must be called with interrupts disabled, as
* it can be called not only in the init phase but also in other places
*/
unsigned int key = irq_lock();
__asm__ volatile (
/* only ilink will not be banked, so use ilink as a channel
* between the 2 banks
*/
"mov ilink, %0 \n\t"
"lr %0, [%1] \n\t"
"or %0, %0, %2 \n\t"
"kflag %0 \n\t"
"mov sp, ilink \n\t"
/* switch back to bank0, using ilink to avoid polluting
* bank1's gp regs.
*/
"lr ilink, [%1] \n\t"
"and ilink, ilink, %3 \n\t"
"kflag ilink \n\t"
:
: "r"(firq_sp), "i"(_ARC_V2_STATUS32),
"i"(_ARC_V2_STATUS32_RB(1)),
"i"(~_ARC_V2_STATUS32_RB(7))
);
irq_unlock(key);
}
#endif
/*
* @brief Enable an interrupt line
*
@@ -93,7 +36,7 @@ void z_arc_firq_stack_set(void)
* @return N/A
*/
void arch_irq_enable(unsigned int irq)
void z_arch_irq_enable(unsigned int irq)
{
unsigned int key = irq_lock();
@@ -110,7 +53,7 @@ void arch_irq_enable(unsigned int irq)
* @return N/A
*/
void arch_irq_disable(unsigned int irq)
void z_arch_irq_disable(unsigned int irq)
{
unsigned int key = irq_lock();
@@ -118,17 +61,6 @@ void arch_irq_disable(unsigned int irq)
irq_unlock(key);
}
/**
* @brief Return IRQ enable state
*
* @param irq IRQ line
* @return interrupt enable state, true or false
*/
int arch_irq_is_enabled(unsigned int irq)
{
return z_arc_v2_irq_unit_int_enabled(irq);
}
/*
* @internal
*
@@ -181,9 +113,9 @@ void z_irq_spurious(void *unused)
}
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
int z_arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_irq_priority_set(irq, priority, flags);
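A usage sketch of the dynamic connect path shown above, assuming the public irq_connect_dynamic() wrapper over this arch hook (the IRQ line and priority are illustrative):

    #include <kernel.h>
    #include <irq.h>

    static void my_isr(void *arg)
    {
            ARG_UNUSED(arg);
            /* handle the interrupt */
    }

    void demo_dynamic_irq(void)
    {
            /* requires CONFIG_DYNAMIC_INTERRUPTS=y */
            irq_connect_dynamic(25, 1, my_isr, NULL, 0);
            irq_enable(25);
    }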

View File

@@ -20,9 +20,11 @@ void z_irq_do_offload(void)
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
void irq_offload(irq_offload_routine_t routine, void *parameter)
{
unsigned int key;
key = irq_lock();
offload_routine = routine;
offload_param = parameter;
@@ -30,4 +32,6 @@ void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
:
: [id] "i"(_TRAP_S_SCALL_IRQ_OFFLOAD) : );
irq_unlock(key);
}

View File

@@ -68,7 +68,7 @@ The context switch code adopts this standard so that it is easier to follow:
transition from outgoing thread to incoming thread
Not loading _kernel into r0 allows loading _kernel without stomping on
the parameter in r0 in arch_switch().
the parameter in r0 in z_arch_switch().
ARCv2 processors have two kinds of interrupts: fast (FIRQ) and regular. The
@@ -168,7 +168,7 @@ From FIRQ:
o to coop
The address of the returning instruction from arch_switch() is loaded
The address of the returning instruction from z_arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
o to any irq
@@ -209,27 +209,27 @@ SECTION_FUNC(TEXT, _isr_wrapper)
* in fact is an action like nop.
* for firq, r0 will be restored later
*/
st_s r0, [sp]
st r0, [sp]
#endif
lr r0, [_ARC_V2_AUX_IRQ_ACT]
ffs r0, r0
cmp r0, 0
#if CONFIG_RGF_NUM_BANKS == 1
bnz rirq_path
ld_s r0, [sp]
ld r0, [sp]
/* 1-register bank FIRQ handling must save registers on stack */
_create_irq_stack_frame
lr r0, [_ARC_V2_STATUS32_P0]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0, [_ARC_V2_ERET]
lr r0, [_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
mov_s r3, _firq_exit
mov_s r2, _firq_enter
mov r3, _firq_exit
mov r2, _firq_enter
j_s [r2]
rirq_path:
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#else
mov.z r3, _firq_exit
@@ -239,13 +239,25 @@ rirq_path:
j_s [r2]
#endif
#else
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#endif
#if defined(CONFIG_TRACING)
GTEXT(sys_trace_isr_enter)
GTEXT(z_sys_trace_isr_enter)
.macro log_interrupt_k_event
clri r0 /* do not interrupt event logger operations */
push_s r0
push_s blink
jl z_sys_trace_isr_enter
pop_s blink
pop_s r0
seti r0
.endm
#else
#define log_interrupt_k_event
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
@@ -291,6 +303,7 @@ SECTION_FUNC(TEXT, _isr_demux)
/* cannot be done before this point because we must be able to run C */
/* r0 is available to be stomped here, and exit_tickless_idle uses it */
exit_tickless_idle
log_interrupt_k_event
lr r0, [_ARC_V2_ICAUSE]
/* handle software triggered interrupt */
@@ -301,7 +314,7 @@ irq_hint_handled:
sub r0, r0, 16
mov_s r1, _sw_isr_table
mov r1, _sw_isr_table
add3 r0, r1, r0 /* table entries are 8-bytes wide */
ld_s r1, [r0, 4] /* ISR into r1 */
@@ -326,4 +339,4 @@ irq_hint_handled:
/* back from ISR, jump to exit stub */
pop_s r3
j_s [r3]
nop_s
nop

View File

@@ -1,8 +1,10 @@
# Memory Protection Unit (MPU) configuration options
# Kconfig - Memory Protection Unit (MPU) configuration options
#
# Copyright (c) 2017 Synopsys
#
# SPDX-License-Identifier: Apache-2.0
#
config ARC_MPU_VER
int "ARC MPU version"
range 2 4

View File

@@ -27,7 +27,7 @@ void configure_mpu_thread(struct k_thread *thread)
#if defined(CONFIG_USERSPACE)
int arch_mem_domain_max_partitions_get(void)
int z_arch_mem_domain_max_partitions_get(void)
{
return arc_core_mpu_get_max_domain_partition_regions();
}
@@ -35,8 +35,8 @@ int arch_mem_domain_max_partitions_get(void)
/*
* Reset MPU region for a single memory partition
*/
void arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
void z_arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
@@ -50,7 +50,7 @@ void arch_mem_domain_partition_remove(struct k_mem_domain *domain,
/*
* Configure MPU memory domain
*/
void arch_mem_domain_thread_add(struct k_thread *thread)
void z_arch_mem_domain_thread_add(struct k_thread *thread)
{
if (_current != thread) {
return;
@@ -64,7 +64,7 @@ void arch_mem_domain_thread_add(struct k_thread *thread)
/*
* Destroy MPU regions for the mem domain
*/
void arch_mem_domain_destroy(struct k_mem_domain *domain)
void z_arch_mem_domain_destroy(struct k_mem_domain *domain)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
@@ -75,25 +75,25 @@ void arch_mem_domain_destroy(struct k_mem_domain *domain)
arc_core_mpu_enable();
}
void arch_mem_domain_partition_add(struct k_mem_domain *domain,
u32_t partition_id)
void z_arch_mem_domain_partition_add(struct k_mem_domain *domain,
u32_t partition_id)
{
/* No-op on this architecture */
}
void arch_mem_domain_thread_remove(struct k_thread *thread)
void z_arch_mem_domain_thread_remove(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arch_mem_domain_destroy(thread->mem_domain_info.mem_domain);
z_arch_mem_domain_destroy(thread->mem_domain_info.mem_domain);
}
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(void *addr, size_t size, int write)
int z_arch_buffer_validate(void *addr, size_t size, int write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}

View File

@@ -656,7 +656,7 @@ int arc_core_mpu_get_max_domain_partition_regions(void)
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
int r_index;
int key = arch_irq_lock();
/*
* For ARC MPU v3, overlapping is not supported.
@@ -667,17 +667,13 @@ int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
/* match and the area is in one region */
if (r_index >= 0 && r_index == _mpu_probe((u32_t)addr + (size - 1))) {
if (_is_user_accessible_region(r_index, write)) {
r_index = 0;
return 0;
} else {
r_index = -EPERM;
return -EPERM;
}
} else {
r_index = -EPERM;
}
arch_irq_unlock(key);
return r_index;
return -EPERM;
}
#endif /* CONFIG_USERSPACE */
@@ -715,9 +711,9 @@ static int arc_mpu_init(struct device *arg)
/* record the static region which can be split */
if (mpu_config.mpu_regions[i].attr & REGION_DYNAMIC) {
if (dynamic_regions_num >=
if (dynamic_regions_num >
MPU_DYNAMIC_REGION_AREAS_NUM) {
LOG_ERR("not enough dynamic regions %d",
LOG_ERR("no enough dynamic regions %d",
dynamic_regions_num);
return -EINVAL;
}

View File

@@ -22,9 +22,8 @@
* completeness.
*/
#include <kernel.h>
#include <kernel_arch_data.h>
#include <gen_offset.h>
#include <kernel_structs.h>
#include <kernel_offsets.h>
GEN_OFFSET_SYM(_thread_arch_t, relinquish_cause);

View File

@@ -17,7 +17,6 @@
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
@@ -64,7 +63,7 @@ PRE-CONTEXT-SWITCH STACK
--------------------------------------
SP -> | Return address; PC (Program Counter), in fact value taken from
| BLINK register in arch_switch()
| BLINK register in z_arch_switch()
--------------------------------------
| STATUS32 value, we explicitly save it here for later usage, read-on
--------------------------------------
@@ -234,7 +233,7 @@ SECTION_FUNC(TEXT, _rirq_enter)
_check_and_inc_int_nest_counter r0, r1
bne.d rirq_nest
mov_s r0, sp
mov r0, sp
_get_curr_cpu_irq_stack sp
rirq_nest:
@@ -266,14 +265,16 @@ SECTION_FUNC(TEXT, _rirq_exit)
bl z_check_stack_sentinel
#endif
#ifdef CONFIG_PREEMPT_ENABLED
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
bl z_arch_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
cmp_s r0, 0
cmp r0, 0
beq _rirq_no_reschedule
mov_s r2, r1
mov r2, r1
#else
mov_s r1, _kernel
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/*
@@ -299,16 +300,7 @@ _rirq_reschedule:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here we need to remember the SEC_STAT.IRM bit */
lr r3, [_ARC_V2_SEC_STAT]
push_s r3
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of the interrupted thread
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
push r3
#endif
/* _save_callee_saved_regs expects outgoing thread in r2 */
_save_callee_saved_regs
@@ -316,10 +308,10 @@ _rirq_reschedule:
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
#ifdef CONFIG_SMP
mov_s r2, r0
mov r2, r0
#else
/* incoming thread is in r0: it becomes the new 'current' */
mov_s r2, r0
mov r2, r0
st_s r2, [r1, _kernel_offset_to_current]
#endif
@@ -338,7 +330,7 @@ _rirq_common_interrupt_swap:
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
@@ -362,9 +354,9 @@ _rirq_common_interrupt_swap:
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _rirq_return_from_rirq
nop_s
nop
breq r3, _CAUSE_FIRQ, _rirq_return_from_firq
nop_s
nop
/* fall through */
@@ -402,21 +394,14 @@ _rirq_return_from_coop:
/* rtie will pop the rest from the stack */
rtie
#endif /* CONFIG_PREEMPT_ENABLED */
.balign 4
_rirq_return_from_firq:
_rirq_return_from_rirq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here we need to recover the SEC_STAT.IRM bit */
pop_s r3
pop r3
sflag r3
#endif
_rirq_no_reschedule:

View File

@@ -17,7 +17,7 @@
#include <swap_macros.h>
GDATA(_interrupt_stack)
GDATA(z_main_stack)
GDATA(_main_stack)
GDATA(_VectorTable)
/* use one of the available interrupt stacks during init */
@@ -48,7 +48,7 @@ SECTION_FUNC(TEXT,__start)
/* lock interrupts: will get unlocked when switching to the main task;
* also make sure the processor is in the correct state
*/
mov_s r0, 0
mov r0, 0
kflag r0
#ifdef CONFIG_ARC_SECURE_FIRMWARE
@@ -60,9 +60,9 @@ SECTION_FUNC(TEXT,__start)
* The ARCv2 timer (timer0) is a free-running timer; let it start
* counting here.
*/
mov_s r0, 0xffffffff
mov r0, 0xffffffff
sr r0, [_ARC_V2_TMR0_LIMIT]
mov_s r0, 0
mov r0, 0
sr r0, [_ARC_V2_TMR0_COUNT]
#endif
/* interrupt related init */
@@ -76,7 +76,7 @@ SECTION_FUNC(TEXT,__start)
/* set the vector table base early,
* so that exception vectors can be handled.
*/
mov_s r0, _VectorTable
mov r0, _VectorTable
#ifdef CONFIG_ARC_SECURE_FIRMWARE
sr r0, [_ARC_V2_IRQ_VECT_BASE_S]
#else
@@ -95,7 +95,7 @@ SECTION_FUNC(TEXT,__start)
kflag r0
#endif
mov_s r1, 1
mov r1, 1
invalidate_and_disable_icache:
@@ -106,9 +106,9 @@ invalidate_and_disable_icache:
mov_s r2, 0
sr r2, [_ARC_V2_IC_IVIC]
/* writing to IC_IVIC needs 3 NOPs */
nop_s
nop_s
nop_s
nop
nop
nop
sr r1, [_ARC_V2_IC_CTRL]
invalidate_dcache:
@@ -126,7 +126,7 @@ done_cache_invalidate:
jl @_sys_resume_from_deep_sleep
#endif
#if CONFIG_MP_NUM_CPUS > 1
#ifdef CONFIG_SMP
_get_cpu_id r0
breq r0, 0, _master_core_startup
@@ -137,14 +137,13 @@ _slave_core_wait:
ld r1, [arc_cpu_wake_flag]
brne r0, r1, _slave_core_wait
ld sp, [arc_cpu_sp]
/* signal the master core that the slave core is running */
st 0, [arc_cpu_wake_flag]
#if defined(CONFIG_ARC_FIRQ_STACK)
jl z_arc_firq_stack_set
#endif
j z_arc_slave_start
/* get sp set by master core */
_get_curr_cpu_irq_stack sp
j z_arch_slave_start
_master_core_startup:
#endif
@@ -155,7 +154,7 @@ _master_core_startup:
* FIRQ stack when CONFIG_INIT_STACKS is enabled before switching to
* one of them for the rest of the early boot
*/
mov_s sp, z_main_stack
mov sp, _main_stack
add sp, sp, CONFIG_MAIN_STACK_SIZE
mov_s r0, _interrupt_stack
@@ -165,11 +164,7 @@ _master_core_startup:
#endif /* CONFIG_INIT_STACKS */
mov_s sp, INIT_STACK
mov sp, INIT_STACK
add sp, sp, INIT_STACK_SIZE
#if defined(CONFIG_ARC_FIRQ_STACK)
jl z_arc_firq_stack_set
#endif
j @_PrepC

View File

@@ -6,7 +6,7 @@
zephyr_library()
zephyr_library_sources(
arc_sjli.c
arc_secure.S
secure_sys_services.c
)
arc_sjli.c
arc_secure.S
secure_sys_services.c
)

View File

@@ -17,42 +17,41 @@
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <v2/irq.h>
#include <swap_macros.h>
GTEXT(arch_switch)
GTEXT(z_arch_switch)
/**
*
* @brief Initiate a cooperative context switch
*
* The arch_switch routine is invoked by various kernel services to effect
* a cooperative context switch. Prior to invoking arch_switch, the caller
* The z_arch_switch routine is invoked by various kernel services to effect
* a cooperative context switch. Prior to invoking z_arch_switch, the caller
* disables interrupts via irq_lock()
* Given that arch_switch() is called to effect a cooperative context switch,
* Given that z_arch_switch() is called to effect a cooperative context switch,
* the caller-saved integer registers are saved on the stack by the function
* call preamble to arch_switch. This creates a custom stack frame that will
* be popped when returning from arch_switch, but is not suitable for handling
* call preamble to z_arch_switch. This creates a custom stack frame that will
* be popped when returning from z_arch_switch, but is not suitable for handling
* a return from an exception. Thus, the fact that the thread is pending because
* of a cooperative call to arch_switch() has to be recorded via the
* of a cooperative call to z_arch_switch() has to be recorded via the
* _CAUSE_COOP code in the relinquish_cause of the thread's k_thread structure.
* The _rirq_exit()/_firq_exit() code will take care of doing the right thing
* to restore the thread status.
*
* When arch_switch() is invoked, we know the decision to perform a context
* When z_arch_switch() is invoked, we know the decision to perform a context
* switch or not has already been taken and a context switch must happen.
*
*
* C function prototype:
*
* void arch_switch(void *switch_to, void **switched_from);
* void z_arch_switch(void *switch_to, void **switched_from);
*
*/
SECTION_FUNC(TEXT, arch_switch)
SECTION_FUNC(TEXT, z_arch_switch)
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
@@ -90,7 +89,7 @@ SECTION_FUNC(TEXT, arch_switch)
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r3, [_ARC_V2_SEC_STAT]
#else
mov_s r3, 0
mov r3, 0
#endif
push_s r3
#endif
@@ -114,7 +113,7 @@ SECTION_FUNC(TEXT, arch_switch)
_switch_to_target_thread:
mov_s r2, r0
mov r2, r0
/* entering here, r2 contains the new current thread */
#ifdef CONFIG_ARC_STACK_CHECKING
@@ -132,9 +131,9 @@ _switch_to_target_thread:
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _switch_return_from_rirq
nop_s
nop
breq r3, _CAUSE_FIRQ, _switch_return_from_firq
nop_s
nop
/* fall through to _switch_return_from_coop */
@@ -162,19 +161,9 @@ return_loc:
_switch_return_from_rirq:
_switch_return_from_firq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
pop r3
sflag r3
#endif
@@ -187,9 +176,9 @@ _switch_return_from_firq:
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
mov_s r0, _ARC_V2_AUX_IRQ_ACT
mov_s r1, r3
mov_s r6, ARC_S_CALL_AUX_WRITE
mov r0, _ARC_V2_AUX_IRQ_ACT
mov r1, r3
mov r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]

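As the comment block above states, z_arch_switch() takes a handle for the thread to switch to and a location where the outgoing thread's handle is stored. A hedged sketch of a caller, with the struct layout invented purely for illustration:

extern void z_arch_switch(void *switch_to, void **switched_from);

struct thread {
        void *switch_handle;    /* architecture-specific switch token */
};

static void switch_threads(struct thread *from, struct thread *to)
{
        /* interrupts are assumed already locked via irq_lock(), as above */
        z_arch_switch(to->switch_handle, &from->switch_handle);
        /* control returns here when 'from' is eventually switched back in */
}

The custom stack frame mentioned in the comment is exactly what makes the second line after the call meaningful: the thread resumes as if the function call had just returned.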
View File

@@ -12,7 +12,8 @@
*/
#include <kernel.h>
#include <ksched.h>
#include <toolchain.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <wait_q.h>
@@ -58,10 +59,10 @@ struct init_stack_frame {
*
* @return N/A
*/
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
void z_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
Z_ASSERT_VALID_PRIO(priority, pEntry);
@@ -92,7 +93,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
(u32_t)(stackEnd + STACK_GUARD_SIZE);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd +
ARCH_THREAD_STACK_RESERVED);
Z_ARCH_THREAD_STACK_RESERVED);
/* reserve 4 bytes for the start of user sp */
stackAdjEnd -= 4;
@@ -122,7 +123,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
*/
pStackMem += STACK_GUARD_SIZE;
stackAdjSize = stackAdjSize + CONFIG_PRIVILEGED_STACK_SIZE;
stackEnd += ARCH_THREAD_STACK_RESERVED;
stackEnd += Z_ARCH_THREAD_STACK_RESERVED;
thread->arch.priv_stack_start = 0;
@@ -161,7 +162,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
*/
pInitCtx->status32 |= _ARC_V2_STATUS32_US;
#else /* For no USERSPACE feature */
pStackMem += ARCH_THREAD_STACK_RESERVED;
pStackMem += Z_ARCH_THREAD_STACK_RESERVED;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize, priority, options);
@@ -199,7 +200,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
thread->arch.k_stack_top =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
thread->arch.k_stack_base = (u32_t)
(stackEnd + ARCH_THREAD_STACK_RESERVED);
(stackEnd + Z_ARCH_THREAD_STACK_RESERVED);
} else {
thread->arch.k_stack_top = (u32_t)pStackMem;
thread->arch.k_stack_base = (u32_t)stackEnd;
@@ -227,8 +228,8 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#ifdef CONFIG_USERSPACE
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
FUNC_NORETURN void z_arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
/*
@@ -270,7 +271,7 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
#endif
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
int z_arch_float_disable(struct k_thread *thread)
{
unsigned int key;
@@ -287,7 +288,7 @@ int arch_float_disable(struct k_thread *thread)
}
int arch_float_enable(struct k_thread *thread)
int z_arch_float_enable(struct k_thread *thread)
{
unsigned int key;

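The userspace branch of the thread-creation code above carves one buffer into an MPU guard, the thread stack, and a reserved/privileged region. The arithmetic, restated as a hedged C sketch (the macro values and field names here are placeholders, not Zephyr's real configuration):

#include <stddef.h>

#define STACK_GUARD_SIZE              64u    /* assumed guard size */
#define Z_ARCH_THREAD_STACK_RESERVED 128u    /* assumed reserved area */

struct stack_layout {
        char *stack_start;   /* first usable byte, past the MPU guard */
        char *stack_end;     /* one past the thread stack */
        char *priv_end;      /* end of the reserved/privileged region */
};

static struct stack_layout carve_stack(char *buf, size_t size)
{
        struct stack_layout l;

        l.stack_start = buf + STACK_GUARD_SIZE;
        l.stack_end   = l.stack_start + size;
        l.priv_end    = l.stack_end + Z_ARCH_THREAD_STACK_RESERVED;
        return l;
}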
View File

@@ -22,7 +22,7 @@ GTEXT(z_thread_entry_wrapper1)
* @brief Wrapper for z_thread_entry
*
* The routine pops parameters for the z_thread_entry from stack frame, prepared
* by the arch_new_thread() routine.
* by the z_new_thread() routine.
*
* @return N/A
*/

View File

@@ -33,7 +33,7 @@ u64_t z_tsc_read(void)
t = (u64_t)z_tick_get();
count = z_arc_v2_aux_reg_read(_ARC_V2_TMR0_COUNT);
irq_unlock(key);
t *= k_ticks_to_cyc_floor64(1);
t *= (u64_t)sys_clock_hw_cycles_per_tick();
t += (u64_t)count;
return t;
}

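The z_tsc_read() hunk above composes a 64-bit timestamp as the kernel tick count times cycles-per-tick, plus the live timer count. A minimal restatement under assumed helper names:

#include <stdint.h>

extern uint64_t tick_get(void);         /* kernel tick counter */
extern uint32_t timer0_count(void);     /* _ARC_V2_TMR0_COUNT read */
extern uint32_t cycles_per_tick(void);  /* hw cycles in one kernel tick */

static uint64_t tsc_read(void)
{
        /* both reads happen under an irq lock in the real code */
        uint64_t t = tick_get() * (uint64_t)cycles_per_tick();

        return t + timer0_count();
}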
View File

@@ -14,44 +14,44 @@
#include <v2/irq.h>
.macro clear_scratch_regs
mov_s r1, 0
mov_s r2, 0
mov_s r3, 0
mov_s r4, 0
mov_s r5, 0
mov_s r6, 0
mov_s r7, 0
mov_s r8, 0
mov_s r9, 0
mov_s r10, 0
mov_s r11, 0
mov_s r12, 0
mov r1, 0
mov r2, 0
mov r3, 0
mov r4, 0
mov r5, 0
mov r6, 0
mov r7, 0
mov r8, 0
mov r9, 0
mov r10, 0
mov r11, 0
mov r12, 0
.endm
.macro clear_callee_regs
mov_s r25, 0
mov_s r24, 0
mov_s r23, 0
mov_s r22, 0
mov_s r21, 0
mov_s r20, 0
mov_s r19, 0
mov_s r18, 0
mov_s r17, 0
mov_s r16, 0
mov r25, 0
mov r24, 0
mov r23, 0
mov r22, 0
mov r21, 0
mov r20, 0
mov r19, 0
mov r18, 0
mov r17, 0
mov r16, 0
mov_s r15, 0
mov_s r14, 0
mov_s r13, 0
mov r15, 0
mov r14, 0
mov r13, 0
.endm
GTEXT(z_arc_userspace_enter)
GTEXT(_arc_do_syscall)
GTEXT(z_user_thread_entry_wrapper)
GTEXT(arch_user_string_nlen)
GTEXT(z_arc_user_string_nlen_fault_start)
GTEXT(z_arc_user_string_nlen_fault_end)
GTEXT(z_arc_user_string_nlen_fixup)
GTEXT(z_arch_user_string_nlen)
GTEXT(z_arch_user_string_nlen_fault_start)
GTEXT(z_arch_user_string_nlen_fault_end)
GTEXT(z_arch_user_string_nlen_fixup)
/*
* @brief Wrapper for z_thread_entry in the case of user thread
* The init parameters are in privileged stack
@@ -67,7 +67,7 @@ SECTION_FUNC(TEXT, z_user_thread_entry_wrapper)
/* the start of user sp is in r5 */
pop r5
/* start of privilege stack in blink */
mov_s blink, sp
mov blink, sp
st.aw r0, [r5, -4]
st.aw r1, [r5, -4]
@@ -109,7 +109,7 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
add r5, r4, r5
/* start of privilege stack */
add blink, r5, CONFIG_PRIVILEGED_STACK_SIZE+STACK_GUARD_SIZE
mov_s sp, r5
mov sp, r5
push_s r0
push_s r1
@@ -119,9 +119,9 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
mov r5, sp /* skip r0, r1, r2, r3 */
#ifdef CONFIG_INIT_STACKS
mov_s r0, 0xaaaaaaaa
mov r0, 0xaaaaaaaa
#else
mov_s r0, 0x0
mov r0, 0x0
#endif
_clear_user_stack:
st.ab r0, [r4, 4]
@@ -129,7 +129,7 @@ _clear_user_stack:
jlt _clear_user_stack
#ifdef CONFIG_ARC_STACK_CHECKING
mov_s r1, _kernel
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_stack_check_regs
@@ -149,7 +149,7 @@ _arc_go_to_user_space:
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_U_BIT
mov_s r1, z_thread_entry_wrapper1
mov r1, z_thread_entry_wrapper1
sr r0, [_ARC_V2_ERSTATUS]
sr r1, [_ARC_V2_ERET]
@@ -171,18 +171,18 @@ _arc_go_to_user_space:
#else
sr r5, [_ARC_V2_USER_SP]
#endif
mov_s sp, blink
mov sp, blink
mov_s r0, 0
mov r0, 0
clear_callee_regs
clear_scratch_regs
mov_s fp, 0
mov_s r29, 0
mov_s r30, 0
mov_s blink, 0
mov fp, 0
mov r29, 0
mov r30, 0
mov blink, 0
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking_userspace
@@ -202,55 +202,50 @@ return_loc_userspace_enter:
*
*/
SECTION_FUNC(TEXT, _arc_do_syscall)
/*
* r0-r5: arg1-arg6, r6 is call id which is already checked in
* trap_s handler, r7 is the system call stack frame pointer
* need to recover r0, r1, r2 because they will be modified in
* _create_irq_stack_frame. If a specific syscall frame (distinct
* from the irq stack frame) were defined, this recovery of r0, r1, r2
* could be optimized away.
*/
ld_s r0, [sp, ___isf_t_r0_OFFSET]
ld_s r1, [sp, ___isf_t_r1_OFFSET]
ld_s r2, [sp, ___isf_t_r2_OFFSET]
/* r0-r5: arg1-arg6, r6 is call id */
/* the call id is already checked in trap_s handler */
push_s blink
mov r7, sp
mov_s blink, _k_syscall_table
mov blink, _k_syscall_table
ld.as r6, [blink, r6]
jl [r6]
/* save return value */
st_s r0, [sp, ___isf_t_r0_OFFSET]
/*
* no need to clear callee regs, as they will be saved and restored
* automatically
*/
clear_scratch_regs
mov_s r29, 0
mov_s r30, 0
mov r29, 0
mov r30, 0
pop_s blink
/* through fake exception return, go back to the caller */
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_AE_BIT
kflag r0
lr r6, [_ARC_V2_STATUS32]
bclr r6, r6, _ARC_V2_STATUS32_AE_BIT
kflag r6
/* the status and return address are saved in trap_s handler */
pop r6
sr r6, [_ARC_V2_ERSTATUS]
pop r6
sr r6, [_ARC_V2_ERET]
#ifdef CONFIG_ARC_SECURE_FIRMWARE
ld_s r0, [sp, ___isf_t_sec_stat_OFFSET]
sr r0,[_ARC_V2_ERSEC_STAT]
pop r6
sr r6, [_ARC_V2_ERSEC_STAT]
#endif
ld_s r0, [sp, ___isf_t_status32_OFFSET]
sr r0,[_ARC_V2_ERSTATUS]
ld_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
sr r0,[_ARC_V2_ERET]
_pop_irq_stack_frame
mov r6, 0
rtie
/*
* size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err_arg)
* size_t z_arch_user_string_nlen(const char *s, size_t maxsize, int *err_arg)
*/
SECTION_FUNC(TEXT, arch_user_string_nlen)
SECTION_FUNC(TEXT, z_arch_user_string_nlen)
/* int err; */
sub_s sp,sp,0x4
@@ -269,11 +264,11 @@ SECTION_FUNC(TEXT, arch_user_string_nlen)
mov lp_count, r1
strlen_loop:
z_arc_user_string_nlen_fault_start:
z_arch_user_string_nlen_fault_start:
/* is the byte at ++r12 a NULL? if so, we're done. Might fault! */
ldb.aw r1, [r12, 1]
z_arc_user_string_nlen_fault_end:
z_arch_user_string_nlen_fault_end:
brne_s r1, 0, not_null
strlen_done:
@@ -281,7 +276,7 @@ strlen_done:
mov_s r1, 0
st_s r1, [sp, 0]
z_arc_user_string_nlen_fixup:
z_arch_user_string_nlen_fixup:
/* *err_arg = err; Pop stack and return */
ld_s r1, [sp, 0]
add_s sp, sp, 4

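At the C level, the _arc_do_syscall routine above is a table dispatch: six register arguments, a call id that the trap_s handler has already range-checked, and a return value written back into the saved r0. A sketch under an assumed handler signature:

#include <stdint.h>

typedef uintptr_t (*syscall_fn_t)(uintptr_t a1, uintptr_t a2, uintptr_t a3,
                                  uintptr_t a4, uintptr_t a5, uintptr_t a6);

extern const syscall_fn_t _k_syscall_table[];   /* as named in the diff */

static uintptr_t do_syscall(uintptr_t a1, uintptr_t a2, uintptr_t a3,
                            uintptr_t a4, uintptr_t a5, uintptr_t a6,
                            uint32_t id)
{
        /* id was validated in the trap_s handler before we get here */
        return _k_syscall_table[id](a1, a2, a3, a4, a5, a6);
}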
View File

@@ -64,3 +64,4 @@ struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
0,
0
};

View File

@@ -1,11 +0,0 @@
/*
* Copyright (c) 2019 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
/* when !XIP, .text is in RAM, and vector table must be at its very start */
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(IRQ_VECTOR_TABLE))

View File

@@ -24,9 +24,11 @@
#include <linker/sections.h>
#include <arch/cpu.h>
#include <vector_table.h>
#include <kernel_arch_thread.h>
#ifndef _ASMLANGUAGE
#include <kernel.h>
#include <kernel_internal.h>
#include <zephyr/types.h>
#include <sys/util.h>
#include <sys/dlist.h>

View File

@@ -22,8 +22,6 @@
#if !defined(_ASMLANGUAGE)
#include <kernel_arch_data.h>
#ifdef CONFIG_CPU_ARCV2
#include <v2/cache.h>
#include <v2/irq.h>
@@ -33,7 +31,20 @@
extern "C" {
#endif
static ALWAYS_INLINE void arch_kernel_init(void)
static ALWAYS_INLINE _cpu_t *z_arch_curr_cpu(void)
{
#ifdef CONFIG_SMP
u32_t core;
core = z_arc_v2_core_id();
return &_kernel.cpus[core];
#else
return &_kernel.cpus[0];
#endif
}
static ALWAYS_INLINE void kernel_arch_init(void)
{
z_irq_setup();
_current_cpu->irq_stack =
@@ -55,10 +66,7 @@ static ALWAYS_INLINE int Z_INTERRUPT_CAUSE(void)
return irq_num;
}
static inline bool arch_is_in_isr(void)
{
return z_arc_v2_irq_unit_is_in_isr();
}
#define z_is_in_isr z_arc_v2_irq_unit_is_in_isr
extern void z_thread_entry_wrapper(void);
extern void z_user_thread_entry_wrapper(void);
@@ -67,10 +75,10 @@ extern void z_arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
void *p2, void *p3, u32_t stack, u32_t size);
extern void arch_switch(void *switch_to, void **switched_from);
extern void z_arch_switch(void *switch_to, void **switched_from);
extern void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
extern void arch_sched_ipi(void);
extern void z_arch_sched_ipi(void);
#ifdef __cplusplus
}

View File

@@ -0,0 +1,73 @@
/*
* Copyright (c) 2017 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Per-arch thread definition
*
* This file contains definitions for
*
* struct _thread_arch
* struct _callee_saved
*
* necessary to instantiate instances of struct k_thread.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_THREAD_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_THREAD_H_
/*
* Reason a thread has relinquished control.
*/
#define _CAUSE_NONE 0
#define _CAUSE_COOP 1
#define _CAUSE_RIRQ 2
#define _CAUSE_FIRQ 3
#ifndef _ASMLANGUAGE
#include <zephyr/types.h>
#ifdef __cplusplus
extern "C" {
#endif
struct _callee_saved {
u32_t sp; /* r28 */
};
typedef struct _callee_saved _callee_saved_t;
struct _thread_arch {
/* one of the _CAUSE_xxxx definitions above */
int relinquish_cause;
#ifdef CONFIG_ARC_STACK_CHECKING
/* High address of stack region, stack grows downward from this
* location. Used for hardware stack checking
*/
u32_t k_stack_base;
u32_t k_stack_top;
#ifdef CONFIG_USERSPACE
u32_t u_stack_base;
u32_t u_stack_top;
#endif
#endif
#ifdef CONFIG_USERSPACE
u32_t priv_stack_start;
#endif
};
typedef struct _thread_arch _thread_arch_t;
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_THREAD_H_ */

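The relinquish_cause field defined above is what the restore path in switch.S branches on. Expressed as C (the _CAUSE_* values mirror the header; the restore helpers are hypothetical stand-ins for the assembly labels):

#define _CAUSE_COOP 1
#define _CAUSE_RIRQ 2
#define _CAUSE_FIRQ 3

struct thread_arch {
        int relinquish_cause;            /* as in struct _thread_arch above */
};

extern void restore_from_rirq(struct thread_arch *a);  /* _switch_return_from_rirq */
extern void restore_from_firq(struct thread_arch *a);  /* _switch_return_from_firq */
extern void restore_from_coop(struct thread_arch *a);  /* coop fall-through */

static void restore(struct thread_arch *a)
{
        switch (a->relinquish_cause) {
        case _CAUSE_RIRQ:
                restore_from_rirq(a);
                break;
        case _CAUSE_FIRQ:
                restore_from_firq(a);
                break;
        default:                         /* _CAUSE_COOP */
                restore_from_coop(a);
                break;
        }
}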
View File

@@ -42,18 +42,18 @@
#ifdef CONFIG_ARC_HAS_SECURE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r13, [_ARC_V2_SEC_U_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
lr r13, [_ARC_V2_SEC_K_SP]
st_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
#else
lr r13, [_ARC_V2_USER_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
lr r13, [_ARC_V2_KERNEL_SP]
st_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
#else
lr r13, [_ARC_V2_USER_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
#endif
#endif
st r30, [sp, ___callee_saved_stack_t_r30_OFFSET]
@@ -64,7 +64,7 @@
#endif
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
ld r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 1f
lr r13, [_ARC_V2_FPU_STATUS]
@@ -100,7 +100,7 @@
#endif
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
ld r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 2f
@@ -125,18 +125,18 @@
#ifdef CONFIG_USERSPACE
#ifdef CONFIG_ARC_HAS_SECURE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_SEC_U_SP]
ld_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
sr r13, [_ARC_V2_SEC_K_SP]
#else
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_USER_SP]
ld_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
sr r13, [_ARC_V2_KERNEL_SP]
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
#else
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_USER_SP]
#endif
#endif
@@ -258,7 +258,7 @@
* The pc and status32 values will still be on the stack. We cannot
* pop them yet because the callers of _pop_irq_stack_frame must reload
* status32 differently depending on the execution context they are
* running in (arch_switch(), firq or exception).
* running in (z_arch_switch(), firq or exception).
*/
add_s sp, sp, ___isf_t_SIZEOF
@@ -365,18 +365,6 @@
#endif
.endm
/* macro to push aux reg through reg */
.macro PUSHAX reg aux
lr \reg, [\aux]
st.a \reg, [sp, -4]
.endm
/* macro to pop aux reg through reg */
.macro POPAX reg aux
ld.ab \reg, [sp, 4]
sr \reg, [\aux]
.endm
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_SWAP_MACROS_H_ */

View File

@@ -65,7 +65,7 @@ static ALWAYS_INLINE void z_irq_setup(void)
_ARC_V2_AUX_IRQ_CTRL_14_REGS /* save r0 -> r13 (caller-saved) */
);
z_arc_cpu_sleep_mode = _ARC_V2_WAKE_IRQ_LEVEL;
k_cpu_sleep_mode = _ARC_V2_WAKE_IRQ_LEVEL;
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* normal mode cannot write irq_ctrl, ignore it */

View File

@@ -1,7 +1,26 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_ARM64)
include(aarch64.cmake)
else()
include(aarch32.cmake)
set(ARCH_FOR_cortex-m0 armv6s-m )
set(ARCH_FOR_cortex-m0plus armv6s-m )
set(ARCH_FOR_cortex-m3 armv7-m )
set(ARCH_FOR_cortex-m4 armv7e-m )
set(ARCH_FOR_cortex-m23 armv8-m.base )
set(ARCH_FOR_cortex-m33 armv8-m.main+dsp)
set(ARCH_FOR_cortex-m33+nodsp armv8-m.main )
set(ARCH_FOR_cortex-r4 armv7-r )
if(ARCH_FOR_${GCC_M_CPU})
set(ARCH_FLAG -march=${ARCH_FOR_${GCC_M_CPU}})
endif()
zephyr_compile_options(
-mabi=aapcs
${ARCH_FLAG}
)
zephyr_ld_options(
-mabi=aapcs
${ARCH_FLAG}
)
add_subdirectory(core)

View File

@@ -1,19 +1,18 @@
# ARM architecture configuration options
# Kconfig - ARM architecture configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
menu "ARM Options"
depends on ARM
rsource "core/aarch32/Kconfig"
rsource "core/aarch64/Kconfig"
source "arch/arm/core/Kconfig"
config ARCH
default "arm"
config ARM64
bool
select 64BIT
endmenu

View File

@@ -1,28 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set(ARCH_FOR_cortex-m0 armv6s-m )
set(ARCH_FOR_cortex-m0plus armv6s-m )
set(ARCH_FOR_cortex-m3 armv7-m )
set(ARCH_FOR_cortex-m4 armv7e-m )
set(ARCH_FOR_cortex-m23 armv8-m.base )
set(ARCH_FOR_cortex-m33 armv8-m.main+dsp)
set(ARCH_FOR_cortex-m33+nodsp armv8-m.main )
set(ARCH_FOR_cortex-r4 armv7-r )
if(ARCH_FOR_${GCC_M_CPU})
set(ARCH_FLAG -march=${ARCH_FOR_${GCC_M_CPU}})
endif()
zephyr_compile_options(
-mabi=aapcs
${ARCH_FLAG}
)
zephyr_ld_options(
-mabi=aapcs
${ARCH_FLAG}
)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-little${ARCH}) # BFD format
add_subdirectory(core/aarch32)

View File

@@ -1,5 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf64-littleaarch64) # BFD format
add_subdirectory(core/aarch64)

View File

@@ -0,0 +1,37 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_compile_options_ifdef(CONFIG_COVERAGE_GCOV
-ftest-coverage
-fprofile-arcs
-fno-inline
)
zephyr_library_sources(
exc_exit.S
swap.c
swap_helper.S
irq_manage.c
thread.c
cpu_idle.S
fault_s.S
fatal.c
nmi.c
nmi_on_reset.S
prep_c.c
)
zephyr_library_sources_ifdef(CONFIG_GEN_SW_ISR_TABLE isr_wrapper.S)
zephyr_library_sources_ifdef(CONFIG_CPLUSPLUS __aeabi_atexit.c)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_CPU_CORTEX_M0 irq_relay.S)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M cortex_m)
add_subdirectory_ifdef(CONFIG_ARM_MPU cortex_m/mpu)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M_HAS_CMSE cortex_m/cmse)
add_subdirectory_ifdef(CONFIG_ARM_SECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_ARM_NONSECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_r)

arch/arm/core/Kconfig
View File

@@ -0,0 +1,282 @@
# Kconfig - ARM core configuration options
#
# Copyright (c) 2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
config CPU_CORTEX
bool
# Omit prompt to signify "hidden" option
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_M
bool
# Omit prompt to signify "hidden" option
select CPU_CORTEX
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select HAS_CMSIS
select HAS_FLASH_LOAD_OFFSET
select ARCH_HAS_THREAD_ABORT
select ARCH_HAS_TRUSTED_EXECUTION if ARM_TRUSTZONE_M
select ARCH_HAS_STACK_PROTECTION if ARM_MPU || CPU_CORTEX_M_HAS_SPLIM
select ARCH_HAS_USERSPACE if ARM_MPU
select ARCH_HAS_NOCACHE_MEMORY_SUPPORT if ARM_MPU && CPU_HAS_ARM_MPU && CPU_CORTEX_M7
select ARCH_HAS_RAMFUNC_SUPPORT
select SWAP_NONATOMIC
help
This option signifies the use of a CPU of the Cortex-M family.
config CPU_CORTEX_R
bool
select CPU_CORTEX
select HAS_FLASH_LOAD_OFFSET
help
This option signifies the use of a CPU of the Cortex-R family.
config ISA_THUMB2
bool
help
From: http://www.arm.com/products/processors/technologies/instruction-set-architectures.php
Thumb-2 technology is the instruction set underlying the ARM Cortex
architecture which provides enhanced levels of performance, energy
efficiency, and code density for a wide range of embedded
applications.
Thumb-2 technology builds on the success of Thumb, the innovative
high code density instruction set for ARM microprocessor cores, to
increase the power of the ARM microprocessor core available to
developers of low cost, high performance systems.
The technology is backwards compatible with existing ARM and Thumb
solutions, while significantly extending the features available to
the Thumb instructions set. This allows more of the application to
benefit from the best in class code density of Thumb.
For performance optimized code Thumb-2 technology uses 31 percent
less memory to reduce system cost, while providing up to 38 percent
higher performance than existing high density code, which can be used
to prolong battery-life or to enrich the product feature set. Thumb-2
technology is featured in the processor, and in all ARMv7
architecture-based processors.
config ISA_ARM
bool
help
From: https://developer.arm.com/products/architecture/instruction-sets/a32-and-t32-instruction-sets
A32 instructions, known as Arm instructions in pre-Armv8 architectures,
are 32 bits wide, and are aligned on 4-byte boundaries. A32 instructions
are supported by both A-profile and R-profile architectures.
A32 was traditionally used in applications requiring the highest
performance, or for handling hardware exceptions such as interrupts and
processor start-up. Much of its functionality was subsumed into T32 with
the introduction of Thumb-2 technology.
config DATA_ENDIANNESS_LITTLE
bool
default y if CPU_CORTEX
help
This is driven by the processor implementation, since it is fixed in
hardware. The board should set this value to 'n' if the data is
implemented as big endian.
config STACK_ALIGN_DOUBLE_WORD
bool "Align stacks on double-words (8 octets)"
default y
help
This is needed to conform to AAPCS, the procedure call standard for
the ARM. It wastes stack space. The option also enforces alignment
of stack upon exception entry on Cortex-M3 and Cortex-M4 (ARMv7-M).
Note that for ARMv6-M, ARMv8-M, and Cortex-M7 MCUs stack alignment
on exception entry is enabled by default and it is not configurable.
config RUNTIME_NMI
bool "Attach an NMI handler at runtime"
select REBOOT
help
The kernel provides a simple NMI handler that simply hangs in a tight
loop if triggered. This fills the requirement that there must be an
NMI handler installed when the CPU boots. If a custom handler is
needed, enable this option and attach it via _NmiHandlerSet().
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
help
Different levels of information to display when a fault occurs.
2: The default. Display specific and verbose information. Consumes
the most memory (long strings).
1: Display general and short information. Consumes less memory
(short strings).
0: Off.
config BUILTIN_STACK_GUARD
bool "Thread Stack Guards based on built-in ARM stack limit checking"
depends on CPU_CORTEX_M_HAS_SPLIM
select THREAD_STACK_INFO
help
Enable Thread/Interrupt Stack Guards via built-in Stack Pointer
limit checking. The functionality must be supported by HW.
config ARM_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
imply BUILTIN_STACK_GUARD if CPU_CORTEX_M_HAS_SPLIM
select MPU_STACK_GUARD if (!BUILTIN_STACK_GUARD && ARM_MPU)
help
This option enables either:
- The built-in Stack Pointer limit checking, or
- the MPU-based stack guard
to cause a system fatal error
if the bounds of the current process stack are overflowed.
The two stack guard options are mutually exclusive. The
selection of the built-in Stack Pointer limit checking is
prioritized over the MPU-based stack guard. The developer
still has the option to manually select the MPU-based
stack guard, if this is desired.
config ARM_SECURE_FIRMWARE
bool
depends on ARMV8_M_SE
default y if TRUSTED_EXECUTION_SECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in Secure state. The option is only
applicable to ARMv8-M MCUs that implement the Security Extension.
This option enables Zephyr to include code that executes in
Secure state, as well as to exclude code that is designed to
execute only in Non-secure state.
Code executing in Secure state has access to both the Secure
and Non-Secure resources of the Cortex-M MCU.
Code executing in Non-Secure state may trigger Secure Faults,
if Secure MCU resources are accessed from the Non-Secure state.
Secure Faults may only be handled by code executing in Secure
state.
config ARM_NONSECURE_FIRMWARE
bool
depends on !ARM_SECURE_FIRMWARE
depends on ARMV8_M_SE
default y if TRUSTED_EXECUTION_NONSECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in Non-Secure state. Execution of this
image is triggered by Secure firmware that executes in Secure
state. The option is only applicable to ARMv8-M MCUs that
implement the Security Extension.
This option enables Zephyr to include code that executes in
Non-Secure state only, as well as to exclude code that is
designed to execute only in Secure state.
Code executing in Non-Secure state has no access to Secure
resources of the Cortex-M MCU, and, therefore, it shall avoid
accessing them.
menu "ARM TrustZone Options"
depends on ARM_SECURE_FIRMWARE || ARM_NONSECURE_FIRMWARE
comment "Secure firmware"
depends on ARM_SECURE_FIRMWARE
comment "Non-secure firmware"
depends on !ARM_SECURE_FIRMWARE
config ARM_SECURE_BUSFAULT_HARDFAULT_NMI
bool "BusFault, HardFault, and NMI target Secure state"
depends on ARM_SECURE_FIRMWARE
help
Force NMI, HardFault, and BusFault (in Mainline ARMv8-M)
exceptions as Secure exceptions.
config ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
bool "Secure Firmware has Secure Entry functions"
depends on ARM_SECURE_FIRMWARE
help
Option indicates that ARM Secure Firmware contains
Secure Entry functions that may be called from
Non-Secure state. Secure Entry functions must be
located in Non-Secure Callable memory regions.
config ARM_NSC_REGION_BASE_ADDRESS
hex "ARM Non-Secure Callable Region base address"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
default 0
help
Start address of Non-Secure Callable section.
Notes:
- The default value (i.e. when the user does not configure
the option explicitly) instructs the linker script to
place the Non-Secure Callable section, automatically,
inside the .text area.
- Certain requirements/restrictions may apply regarding
the size and the alignment of the starting address for
a Non-Secure Callable section, depending on the available
security attribution unit (SAU or IDAU) for a given SOC.
config ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
bool "Non-Secure Firmware uses Secure Entry functions"
depends on ARM_NONSECURE_FIRMWARE
help
Option indicates that ARM Non-Secure Firmware uses Secure
Entry functions provided by the Secure Firmware. The Secure
Firmware must be configured to provide these functions.
config ARM_ENTRY_VENEERS_LIB_NAME
string "Entry Veneers symbol file"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS \
|| ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
default "libentryveneers.a"
help
Library file to find the symbol table for the entry veneers.
The library will typically come from building the Secure
Firmware that contains secure entry functions, and allows
the Non-Secure Firmware to call into the Secure Firmware.
endmenu
menu "Architecture Floating Point Options"
depends on CPU_HAS_FPU
choice
prompt "Floating point ABI"
default FP_HARDABI
depends on FLOAT
config FP_HARDABI
bool "Floating point Hard ABI"
help
This option selects the Floating point ABI in which hardware floating
point instructions are generated and uses FPU-specific calling
conventions
config FP_SOFTABI
bool "Floating point Soft ABI"
help
This option selects the Floating point ABI in which hardware floating
point instructions are generated but soft-float calling conventions.
endchoice
endmenu
source "arch/arm/core/cortex_m/Kconfig"
source "arch/arm/core/cortex_r/Kconfig"
source "arch/arm/core/cortex_m/mpu/Kconfig"
source "arch/arm/core/cortex_m/tz/Kconfig"

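The BUILTIN_STACK_GUARD option above relies on the ARMv8-M stack limit registers: programming PSPLIM with the lowest valid stack address is all it takes for the hardware to fault on overflow. A sketch using a raw MSR instruction (an assumption; a CMSIS build would normally use __set_PSPLIM() instead):

#include <stdint.h>

static inline void set_psplim(uint32_t stack_bottom)
{
        /* any PSP descent below stack_bottom now raises a stack-limit
         * UsageFault (STKOF); ARMv8-M Mainline only */
        __asm volatile ("msr PSPLIM, %0" : : "r" (stack_bottom));
}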
View File

@@ -1,37 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
if (CONFIG_COVERAGE)
toolchain_cc_coverage()
endif ()
zephyr_library_sources(
exc_exit.S
swap.c
swap_helper.S
irq_manage.c
thread.c
cpu_idle.S
fault_s.S
fatal.c
nmi.c
nmi_on_reset.S
prep_c.c
)
zephyr_library_sources_ifdef(CONFIG_GEN_SW_ISR_TABLE isr_wrapper.S)
zephyr_library_sources_ifdef(CONFIG_CPLUSPLUS __aeabi_atexit.c)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_CPU_CORTEX_M0 irq_relay.S)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M cortex_m)
add_subdirectory_ifdef(CONFIG_ARM_MPU cortex_m/mpu)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_M_HAS_CMSE cortex_m/cmse)
add_subdirectory_ifdef(CONFIG_ARM_SECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_ARM_NONSECURE_FIRMWARE cortex_m/tz)
add_subdirectory_ifdef(CONFIG_CPU_CORTEX_R cortex_r)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)

View File

@@ -1,279 +0,0 @@
# ARM core configuration options
# Copyright (c) 2015 Wind River Systems, Inc.
# SPDX-License-Identifier: Apache-2.0
if !ARM64
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_M
bool
select CPU_CORTEX
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select HAS_CMSIS_CORE
select HAS_FLASH_LOAD_OFFSET
select ARCH_HAS_THREAD_ABORT
select ARCH_HAS_TRUSTED_EXECUTION if ARM_TRUSTZONE_M
select ARCH_HAS_STACK_PROTECTION if (ARM_MPU && !ARMV6_M_ARMV8_M_BASELINE) || CPU_CORTEX_M_HAS_SPLIM
select ARCH_HAS_USERSPACE if ARM_MPU
select ARCH_HAS_NOCACHE_MEMORY_SUPPORT if ARM_MPU && CPU_HAS_ARM_MPU && CPU_CORTEX_M7
select ARCH_HAS_RAMFUNC_SUPPORT
select ARCH_HAS_NESTED_EXCEPTION_DETECTION
select SWAP_NONATOMIC
help
This option signifies the use of a CPU of the Cortex-M family.
config CPU_CORTEX_R
bool
select CPU_CORTEX
select HAS_CMSIS_CORE
select HAS_FLASH_LOAD_OFFSET
help
This option signifies the use of a CPU of the Cortex-R family.
config ISA_THUMB2
bool
help
From: http://www.arm.com/products/processors/technologies/instruction-set-architectures.php
Thumb-2 technology is the instruction set underlying the ARM Cortex
architecture which provides enhanced levels of performance, energy
efficiency, and code density for a wide range of embedded
applications.
Thumb-2 technology builds on the success of Thumb, the innovative
high code density instruction set for ARM microprocessor cores, to
increase the power of the ARM microprocessor core available to
developers of low cost, high performance systems.
The technology is backwards compatible with existing ARM and Thumb
solutions, while significantly extending the features available to
the Thumb instructions set. This allows more of the application to
benefit from the best in class code density of Thumb.
For performance optimized code Thumb-2 technology uses 31 percent
less memory to reduce system cost, while providing up to 38 percent
higher performance than existing high density code, which can be used
to prolong battery-life or to enrich the product feature set. Thumb-2
technology is featured in the processor, and in all ARMv7
architecture-based processors.
config ISA_ARM
bool
help
From: https://developer.arm.com/products/architecture/instruction-sets/a32-and-t32-instruction-sets
A32 instructions, known as Arm instructions in pre-Armv8 architectures,
are 32 bits wide, and are aligned on 4-byte boundaries. A32 instructions
are supported by both A-profile and R-profile architectures.
A32 was traditionally used in applications requiring the highest
performance, or for handling hardware exceptions such as interrupts and
processor start-up. Much of its functionality was subsumed into T32 with
the introduction of Thumb-2 technology.
config NUM_IRQS
int
config STACK_ALIGN_DOUBLE_WORD
bool "Align stacks on double-words (8 octets)"
default y
help
This is needed to conform to AAPCS, the procedure call standard for
the ARM. It wastes stack space. The option also enforces alignment
of stack upon exception entry on Cortex-M3 and Cortex-M4 (ARMv7-M).
Note that for ARMv6-M, ARMv8-M, and Cortex-M7 MCUs stack alignment
on exception entry is enabled by default and it is not configurable.
config RUNTIME_NMI
bool "Attach an NMI handler at runtime"
select REBOOT
help
The kernel provides a simple NMI handler that simply hangs in a tight
loop if triggered. This fills the requirement that there must be an
NMI handler installed when the CPU boots. If a custom handler is
needed, enable this option and attach it via _NmiHandlerSet().
config PLATFORM_SPECIFIC_INIT
bool "Enable platform (SOC) specific startup hook"
help
The platform specific initialization code (z_platform_init) is executed
at the beginning of the startup code (__start).
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
help
Different levels of information to display when a fault occurs.
2: The default. Display specific and verbose information. Consumes
the most memory (long strings).
1: Display general and short information. Consumes less memory
(short strings).
0: Off.
config BUILTIN_STACK_GUARD
bool "Thread Stack Guards based on built-in ARM stack limit checking"
depends on CPU_CORTEX_M_HAS_SPLIM
select THREAD_STACK_INFO
help
Enable Thread/Interrupt Stack Guards via built-in Stack Pointer
limit checking. The functionality must be supported by HW.
config ARM_STACK_PROTECTION
bool
default y if HW_STACK_PROTECTION
imply BUILTIN_STACK_GUARD if CPU_CORTEX_M_HAS_SPLIM
select MPU_STACK_GUARD if (!BUILTIN_STACK_GUARD && ARM_MPU)
help
This option enables either:
- The built-in Stack Pointer limit checking, or
- the MPU-based stack guard
to cause a system fatal error
if the bounds of the current process stack are overflowed.
The two stack guard options are mutually exclusive. The
selection of the built-in Stack Pointer limit checking is
prioritized over the MPU-based stack guard. The developer
still has the option to manually select the MPU-based
stack guard, if this is desired.
config ARM_SECURE_FIRMWARE
bool
depends on ARMV8_M_SE
default y if TRUSTED_EXECUTION_SECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in Secure state. The option is only
applicable to ARMv8-M MCUs that implement the Security Extension.
This option enables Zephyr to include code that executes in
Secure state, as well as to exclude code that is designed to
execute only in Non-secure state.
Code executing in Secure state has access to both the Secure
and Non-Secure resources of the Cortex-M MCU.
Code executing in Non-Secure state may trigger Secure Faults,
if Secure MCU resources are accessed from the Non-Secure state.
Secure Faults may only be handled by code executing in Secure
state.
config ARM_NONSECURE_FIRMWARE
bool
depends on !ARM_SECURE_FIRMWARE
depends on ARMV8_M_SE
default y if TRUSTED_EXECUTION_NONSECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in Non-Secure state. Execution of this
image is triggered by Secure firmware that executes in Secure
state. The option is only applicable to ARMv8-M MCUs that
implement the Security Extension.
This option enables Zephyr to include code that executes in
Non-Secure state only, as well as to exclude code that is
designed to execute only in Secure state.
Code executing in Non-Secure state has no access to Secure
resources of the Cortex-M MCU, and, therefore, it shall avoid
accessing them.
menu "ARM TrustZone Options"
depends on ARM_SECURE_FIRMWARE || ARM_NONSECURE_FIRMWARE
comment "Secure firmware"
depends on ARM_SECURE_FIRMWARE
comment "Non-secure firmware"
depends on !ARM_SECURE_FIRMWARE
config ARM_SECURE_BUSFAULT_HARDFAULT_NMI
bool "BusFault, HardFault, and NMI target Secure state"
depends on ARM_SECURE_FIRMWARE
help
Force NMI, HardFault, and BusFault (in Mainline ARMv8-M)
exceptions as Secure exceptions.
config ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
bool "Secure Firmware has Secure Entry functions"
depends on ARM_SECURE_FIRMWARE
help
Option indicates that ARM Secure Firmware contains
Secure Entry functions that may be called from
Non-Secure state. Secure Entry functions must be
located in Non-Secure Callable memory regions.
config ARM_NSC_REGION_BASE_ADDRESS
hex "ARM Non-Secure Callable Region base address"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS
default 0
help
Start address of Non-Secure Callable section.
Notes:
- The default value (i.e. when the user does not configure
the option explicitly) instructs the linker script to
place the Non-Secure Callable section, automatically,
inside the .text area.
- Certain requirements/restrictions may apply regarding
the size and the alignment of the starting address for
a Non-Secure Callable section, depending on the available
security attribution unit (SAU or IDAU) for a given SOC.
config ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
bool "Non-Secure Firmware uses Secure Entry functions"
depends on ARM_NONSECURE_FIRMWARE
help
Option indicates that ARM Non-Secure Firmware uses Secure
Entry functions provided by the Secure Firmware. The Secure
Firmware must be configured to provide these functions.
config ARM_ENTRY_VENEERS_LIB_NAME
string "Entry Veneers symbol file"
depends on ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS \
|| ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
default "libentryveneers.a"
help
Library file to find the symbol table for the entry veneers.
The library will typically come from building the Secure
Firmware that contains secure entry functions, and allows
the Non-Secure Firmware to call into the Secure Firmware.
endmenu
choice
prompt "Floating point ABI"
default FP_HARDABI
depends on FLOAT
config FP_HARDABI
bool "Floating point Hard ABI"
help
This option selects the Floating point ABI in which hardware floating
point instructions are generated and uses FPU-specific calling
conventions
config FP_SOFTABI
bool "Floating point Soft ABI"
help
This option selects the Floating point ABI in which hardware floating
point instructions are generated but soft-float calling conventions.
endchoice
rsource "cortex_m/Kconfig"
rsource "cortex_r/Kconfig"
rsource "cortex_m/mpu/Kconfig"
rsource "cortex_m/tz/Kconfig"
endif # !ARM64

View File

@@ -1,23 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources(
vector_table.S
reset.S
fault.c
scb.c
irq_init.c
thread_abort.c
)
zephyr_linker_sources_ifdef(CONFIG_SW_VECTOR_RELAY
ROM_START
SORT_KEY 0x0relay_vectors
relay_vector_table.ld
)
zephyr_linker_sources_ifdef(CONFIG_SW_VECTOR_RELAY
RAM_SECTIONS
vt_pointer_section.ld
)

View File

@@ -1,282 +0,0 @@
# ARM Cortex-M platform configuration options
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# SPDX-License-Identifier: Apache-2.0
# NOTE: We have the specific core implementations first and outside of the
# if CPU_CORTEX_M block so that SoCs can select which core they are using
# without having to select all the options related to that core. Everything
# else is captured inside the if CPU_CORTEX_M block so they are not exposed
# if one selects a different ARM Cortex Family (Cortex-A or Cortex-R)
config CPU_CORTEX_M0
bool
select CPU_CORTEX_M
select ARMV6_M_ARMV8_M_BASELINE
help
This option signifies the use of a Cortex-M0 CPU
config CPU_CORTEX_M0PLUS
bool
select CPU_CORTEX_M
select ARMV6_M_ARMV8_M_BASELINE
help
This option signifies the use of a Cortex-M0+ CPU
config CPU_CORTEX_M3
bool
select CPU_CORTEX_M
select ARMV7_M_ARMV8_M_MAINLINE
help
This option signifies the use of a Cortex-M3 CPU
config CPU_CORTEX_M4
bool
select CPU_CORTEX_M
select ARMV7_M_ARMV8_M_MAINLINE
select ARMV7_M_ARMV8_M_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-M4 CPU
config CPU_CORTEX_M23
bool
select CPU_CORTEX_M
select ARMV8_M_BASELINE
select ARMV8_M_SE if CPU_HAS_TEE
help
This option signifies the use of a Cortex-M23 CPU
config CPU_CORTEX_M33
bool
select CPU_CORTEX_M
select ARMV8_M_MAINLINE
select ARMV8_M_SE if CPU_HAS_TEE
select ARMV7_M_ARMV8_M_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-M33 CPU
config CPU_CORTEX_M7
bool
select CPU_CORTEX_M
select ARMV7_M_ARMV8_M_MAINLINE
select ARMV7_M_ARMV8_M_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-M7 CPU
if CPU_CORTEX_M
config CPU_CORTEX_M_HAS_SYSTICK
bool
help
This option is enabled when the CPU implements the SysTick timer.
config CPU_CORTEX_M_HAS_DWT
bool
depends on !CPU_CORTEX_M0 && !CPU_CORTEX_M0PLUS
help
This option signifies that the CPU implements the Data Watchpoint and
Trace (DWT) unit specified by the ARMv7-M and above.
While ARMv6-M does define a "DWT" unit, this is significantly different
from the DWT specified by the ARMv7-M and above in terms of both feature
set and register mappings.
config CPU_CORTEX_M_HAS_BASEPRI
bool
depends on ARMV7_M_ARMV8_M_MAINLINE
help
This option signifies the CPU has the BASEPRI register.
The BASEPRI register defines the minimum priority for
exception processing. When BASEPRI is set to a nonzero
value, it prevents the activation of all exceptions with
the same or lower priority level as the BASEPRI value.
Always present in CPUs that implement the ARMv7-M or
ARMv8-M Mainline architectures.
config CPU_CORTEX_M_HAS_VTOR
bool
depends on !CPU_CORTEX_M0
help
This option signifies the CPU has the VTOR register.
The VTOR indicates the offset of the vector table base
address from memory address 0x00000000. Always present
in CPUs implementing the ARMv7-M or ARMv8-M architectures.
Optional in CPUs implementing ARMv6-M, ARMv8-M Baseline
architectures (except for Cortex-M0, where it is never
implemented).
config CPU_CORTEX_M_HAS_SPLIM
bool
depends on ARMV8_M_MAINLINE || (ARMV8_M_SE && !ARM_NONSECURE_FIRMWARE)
help
This option signifies the CPU has the MSPLIM, PSPLIM registers.
The stack pointer limit registers, MSPLIM, PSPLIM, limit the
extend to which the Main and Process Stack Pointers, respectively,
can descend. MSPLIM, PSPLIM are always present in ARMv8-M
MCUs that implement the ARMv8-M Main Extension (Mainline).
In an ARMv8-M Mainline implementation with the Security Extension
the MSPLIM, PSPLIM registers have additional Secure instances.
In an ARMv8-M Baseline implementation with the Security Extension
the MSPLIM, PSPLIM registers have only Secure instances.
config CPU_CORTEX_M_HAS_PROGRAMMABLE_FAULT_PRIOS
bool
depends on ARMV7_M_ARMV8_M_MAINLINE
help
This option signifies the CPU may trigger system faults
(other than HardFault) with configurable priority, and,
therefore, it needs to reserve a priority level for them.
config CPU_CORTEX_M0_HAS_VECTOR_TABLE_REMAP
bool
help
This option signifies the Cortex-M0 has some mechanisms that can map
the vector table to SRAM
config CPU_CORTEX_M_HAS_CMSE
bool
depends on ARMV8_M_BASELINE || ARMV8_M_MAINLINE
help
This option signifies the Cortex-M CPU has the CMSE intrinsics.
config ARMV6_M_ARMV8_M_BASELINE
bool
select ATOMIC_OPERATIONS_C
select ISA_THUMB2
help
This option signifies the use of an ARMv6-M processor
implementation, or the use of an ARMv8-M processor
supporting the Baseline implementation.
Notes:
- A Processing Element (PE) without the Main Extension
is also referred to as a Baseline Implementation. A
Baseline implementation has a subset of the instructions,
registers, and features, of a Mainline implementation.
- ARMv6-M compatibility is provided by all ARMv8-M
implementations.
config ARMV8_M_BASELINE
bool
select ARMV6_M_ARMV8_M_BASELINE
select CPU_CORTEX_M_HAS_CMSE
help
This option signifies the use of an ARMv8-M processor
implementation.
ARMv8-M Baseline includes additional features
not present in the ARMv6-M architecture.
config ARMV7_M_ARMV8_M_MAINLINE
bool
select ATOMIC_OPERATIONS_BUILTIN
select ISA_THUMB2
select CPU_CORTEX_M_HAS_BASEPRI
select CPU_CORTEX_M_HAS_VTOR
select CPU_CORTEX_M_HAS_PROGRAMMABLE_FAULT_PRIOS
select CPU_CORTEX_M_HAS_SYSTICK
help
This option signifies the use of an ARMv7-M processor
implementation, or the use of a backwards-compatible
ARMv8-M processor implementation supporting the Main
Extension.
Notes:
- A Processing Element (PE) with the Main Extension is also
referred to as a Mainline Implementation.
- ARMv7-M compatibility requires the Main Extension.
From https://developer.arm.com/products/architecture/m-profile:
The Main Extension provides backwards compatibility
with ARMv7-M.
config ARMV8_M_MAINLINE
bool
select ARMV7_M_ARMV8_M_MAINLINE
select CPU_CORTEX_M_HAS_SPLIM
select CPU_CORTEX_M_HAS_CMSE
help
This option signifies the use of an ARMv8-M processor
implementation, supporting the Main Extension.
ARMv8-M Main Extension includes additional features
not present in the ARMv7-M architecture.
config ARMV8_M_SE
bool
depends on ARMV8_M_BASELINE || ARMV8_M_MAINLINE
select CPU_CORTEX_M_HAS_SPLIM if !ARM_NONSECURE_FIRMWARE
help
This option signifies the use of an ARMv8-M processor
implementation (Baseline or Mainline) supporting the
Security Extensions.
config ARMV7_M_ARMV8_M_FP
bool
depends on ARMV7_M_ARMV8_M_MAINLINE && !CPU_CORTEX_M3
help
This option signifies the use of an ARMv7-M processor
implementation, or the use of an ARMv8-M processor
implementation supporting the Floating-Point Extension.
config ARMV8_M_DSP
bool
depends on ARMV8_M_MAINLINE
help
This option signifies the use of an ARMv8-M processor
implementation supporting the DSP Extension.
config XIP
default y
menu "ARM Cortex-M0/M0+/M3/M4/M7/M23/M33 options"
depends on ARMV6_M_ARMV8_M_BASELINE || ARMV7_M_ARMV8_M_MAINLINE
config GEN_ISR_TABLES
default y
config ZERO_LATENCY_IRQS
bool "Enable zero-latency interrupts"
depends on CPU_CORTEX_M_HAS_BASEPRI
help
The kernel may reserve some of the highest interrupts priorities in
the system for its own use. These interrupts will not be masked
by interrupt locking.
When connecting interrupts the kernel will offset all interrupts
to lower priority than those reserved by the kernel.
Zero-latency interrupt can be used to set up an interrupt at the
highest interrupt priority which will not be blocked by interrupt
locking.
Since zero-latency ISRs run at the same or possibly higher priority
than the rest of the kernel, they cannot use any kernel
functionality.
config DYNAMIC_DIRECT_INTERRUPTS
bool "Enable support for dynamic direct interrupts"
depends on DYNAMIC_INTERRUPTS
help
Direct interrupts are designed for performance-critical interrupt
handling and do not go through all of the common interrupt handling
code. This option enables the installation of interrupt service
routines for direct interrupts at runtime.
Note: this requires enabling support for dynamic interrupts in the
kernel.
config SW_VECTOR_RELAY
bool "Enable Software Vector Relay"
default y if BOOTLOADER_MCUBOOT
depends on ARMV6_M_ARMV8_M_BASELINE && !(CPU_CORTEX_M0_HAS_VECTOR_TABLE_REMAP || CPU_CORTEX_M_HAS_VTOR)
help
Add Vector Table relay handler and relay vector table, to
relay interrupts based on a vector table pointer. This is only
required for Cortex-M0 (or an Armv8-M baseline core) with no hardware
vector table relocation mechanisms or for Cortex-M0+
(or an Armv8-M baseline core) with no VTOR and no other hardware
relocation table mechanisms.
endmenu
endif # CPU_CORTEX_M

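The SW_VECTOR_RELAY option above exists because some Baseline cores lack the VTOR; where the register is present, relocating the vector table needs no relay at all. A minimal sketch against the architectural register address (0xE000ED08):

#include <stdint.h>

#define SCB_VTOR (*(volatile uint32_t *)0xE000ED08u)

static inline void relocate_vector_table(uint32_t new_base)
{
        /* new_base must meet the core's alignment rule: at least
         * 128 bytes, table size rounded up to a power of two */
        SCB_VTOR = new_base;
}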
View File

@@ -1,5 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources(arm_core_cmse.c)

File diff suppressed because it is too large

View File

@@ -1,34 +0,0 @@
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-M interrupt initialization
*
*/
#include <arch/cpu.h>
#include <arch/arm/aarch32/cortex_m/cmsis.h>
/**
*
* @brief Initialize interrupts
*
* Ensures all interrupts have their priority set to _EXC_IRQ_DEFAULT_PRIO and
* not 0, which they have it set to when coming out of reset. This ensures that
* interrupt locking via BASEPRI works as expected.
*
* @return N/A
*/
void z_arm_int_lib_init(void)
{
int irq = 0;
for (; irq < CONFIG_NUM_IRQS; irq++) {
NVIC_SetPriority((IRQn_Type)irq, _IRQ_PRIO_OFFSET);
}
}

View File

@@ -1,102 +0,0 @@
# Memory Protection Unit (MPU) configuration options
# Copyright (c) 2017 Linaro Limited
# SPDX-License-Identifier: Apache-2.0
if CPU_HAS_MPU
config ARM_MPU
bool "ARM MPU Support"
select MEMORY_PROTECTION
select THREAD_STACK_INFO
select ARCH_HAS_EXECUTABLE_PAGE_BIT
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if !(CPU_HAS_NXP_MPU || ARMV8_M_BASELINE || ARMV8_M_MAINLINE)
select MPU_REQUIRES_NON_OVERLAPPING_REGIONS if CPU_HAS_ARM_MPU && (ARMV8_M_BASELINE || ARMV8_M_MAINLINE)
help
MCU implements Memory Protection Unit.
Notes:
The ARMv6-M and ARMv7-M MPU architecture requires a power-of-two
alignment of MPU region base address and size.
The NXP MPU as well as the ARMv8-M MPU do not require MPU regions
to have power-of-two alignment for base address and region size.
The ARMv8-M MPU requires the active MPU regions be non-overlapping.
As a result of this, the ARMv8-M MPU needs to fully partition the
memory map when programming dynamic memory regions (e.g. PRIV stack
guard, user thread stack, and application memory domains), if the
system requires PRIV access policy different from the access policy
of the ARMv8-M background memory map. The application developer may
enforce full PRIV (kernel) memory partition by enabling the
CONFIG_MPU_GAP_FILLING option.
By not enforcing full partition, MPU may leave part of kernel
SRAM area covered only by the default ARMv8-M memory map. This
is fine for User Mode, since the background ARM map does not
allow nPRIV access at all. However, since the background map
policy allows instruction fetches by privileged code, forcing
this Kconfig option off prevents the system from directly
triggering MemManage exceptions upon accidental attempts to
execute code from SRAM in XIP builds.
Since this does not compromise User Mode, we make the skipping
of full partitioning the default behavior for the ARMv8-M MPU
driver.
config ARM_MPU_REGION_MIN_ALIGN_AND_SIZE
int
default 256 if ARM_MPU && ARMV6_M_ARMV8_M_BASELINE && !ARMV8_M_BASELINE
default 32 if ARM_MPU
default 4
help
Minimum size (and alignment) of an ARM MPU region. Use this
symbol to guarantee minimum size and alignment of MPU regions.
A minimum 4-byte alignment is enforced in ARM builds without
support for Memory Protection.
if ARM_MPU
config MPU_STACK_GUARD
bool "Thread Stack Guards"
help
Enable Thread Stack Guards via MPU
config MPU_STACK_GUARD_MIN_SIZE_FLOAT
int
depends on MPU_STACK_GUARD
depends on FP_SHARING
default 128
help
Minimum size (and alignment when applicable) of an ARM MPU
region, which guards the stack of a thread that is using the
Floating Point (FP) context. The width of the guard is set to
128, to accommodate the length of a Cortex-M exception stack
frame when the floating point context is active. The FP context
is only stacked in sharing FP registers mode, therefore, the
option is applicable only when FP_SHARING is selected.
config MPU_ALLOW_FLASH_WRITE
bool "Add MPU access to write to flash"
help
Enable this to allow MPU RWX access to flash memory
config CUSTOM_SECTION_ALIGN
bool "Custom Section Align"
help
MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT (ARMv7-M) sometimes causes
memory waste in the memory sections defined by linker scripts. Use
this symbol to guarantee the alignment size of user custom sections
and avoid the extra memory otherwise spent on alignment. Note that
this requires carefully configuring the MPU regions and sub-regions
(ARMv7-M) to cover the custom sections.
config CUSTOM_SECTION_MIN_ALIGN_SIZE
int "Custom Section Align Size"
default 32
help
Custom alignment size of memory sections in linker scripts. It
should usually consume less alignment memory. Although this
alignment size is configured by the user, it must still respect
the power-of-two requirement if the hardware demands it.
endif # ARM_MPU
endif # CPU_HAS_MPU

View File

@@ -1,10 +0,0 @@
/*
* Copyright (c) 2019 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
KEEP(*(.vector_relay_table))
KEEP(*(".vector_relay_table.*"))
KEEP(*(.vector_relay_handler))
KEEP(*(".vector_relay_handler.*"))

View File

@@ -1,111 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Reset handler
*
* Reset handler that prepares the system for running C code.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include "vector_table.h"
_ASM_FILE_PROLOGUE
GTEXT(z_arm_reset)
GTEXT(memset)
GDATA(_interrupt_stack)
#if defined(CONFIG_PLATFORM_SPECIFIC_INIT)
GTEXT(z_platform_init)
#endif
/**
*
* @brief Reset vector
*
* Runs when the system comes out of reset. The processor is in thread mode at
* privileged level. At this point, the main stack pointer (MSP) is already
* pointing to a valid area in SRAM.
*
* Locking interrupts prevents anything but NMIs and hard faults from
* interrupting the CPU. A default NMI handler is already in place in the
* vector table, and the boot code should not generate hard fault, or we're in
* deep trouble.
*
* We want to use the process stack pointer (PSP) instead of the MSP, since the
* MSP is to be set up to point to the one-and-only interrupt stack during
* later boot. That would not be possible if in use for running C code.
*
* When these steps are completed, jump to z_arm_prep_c(), which will finish
* setting up the system for running C code.
*
* @return N/A
*/
SECTION_SUBSEC_FUNC(TEXT,_reset_section,z_arm_reset)
/*
* The entry point is located at the z_arm_reset symbol, which an XIP
* image playing the role of a bootloader fetches and jumps to directly,
* not through the reset vector mechanism. Such bootloaders might want to
* search for a __start symbol instead, so create that alias here.
*/
SECTION_SUBSEC_FUNC(TEXT,_reset_section,__start)
#if defined(CONFIG_PLATFORM_SPECIFIC_INIT)
bl z_platform_init
#endif
/* lock interrupts: will get unlocked when switch to main task */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
cpsid i
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
movs.n r0, #_EXC_IRQ_DEFAULT_PRIO
msr BASEPRI, r0
#else
#error Unknown ARM architecture
#endif
#ifdef CONFIG_WDOG_INIT
/* board-specific watchdog initialization is necessary */
bl z_arm_watchdog_init
#endif
#ifdef CONFIG_INIT_STACKS
ldr r0, =_interrupt_stack
ldr r1, =0xaa
ldr r2, =CONFIG_ISR_STACK_SIZE
bl memset
#endif
/*
* Set PSP and use it to boot without using MSP, so that it
* gets set to _interrupt_stack during initialization.
*/
ldr r0, =_interrupt_stack
ldr r1, =CONFIG_ISR_STACK_SIZE
adds r0, r0, r1
msr PSP, r0
mrs r0, CONTROL
movs r1, #2
orrs r0, r1 /* CONTROL_SPSEL_Msk */
msr CONTROL, r0
/*
* When changing the stack pointer, software must use an ISB instruction
* immediately after the MSR instruction. This ensures that instructions
* after the ISB instruction execute using the new stack pointer.
*/
isb
/*
* 'bl' has the longest reach of the branch instructions that are
* supported on all platforms, so it is used to jump to z_arm_prep_c
* (even though we do not intend to return).
*/
bl z_arm_prep_c

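The deleted reset handler above performs the MSP-to-PSP switch in three steps: load PSP, set CONTROL.SPSEL, then ISB. The same sequence rendered in C with GCC inline assembly (an assumption for illustration; the original is pure assembly):

#include <stdint.h>

static inline void boot_switch_to_psp(uint32_t stack_top)
{
        uint32_t control;

        __asm volatile ("msr PSP, %0" : : "r" (stack_top));
        __asm volatile ("mrs %0, CONTROL" : "=r" (control));
        control |= 2u;                          /* CONTROL.SPSEL = 1 */
        __asm volatile ("msr CONTROL, %0" : : "r" (control));
        __asm volatile ("isb");                 /* required after an SP change */
}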
View File

@@ -1,41 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
# '-mcmse' enables the generation of code for the Secure state of the ARMv8-M
# Security Extensions. This option is required when building a Secure firmware.
zephyr_compile_options_ifdef(CONFIG_ARM_SECURE_FIRMWARE -mcmse)
if(CONFIG_ARM_FIRMWARE_HAS_SECURE_ENTRY_FUNCS)
# --out-implib and --cmse-implib instruct the linker to produce
# an import library that consists of a relocatable file containing
# only a symbol table with the entry veneers. The library may be used
# when building a Non-Secure image which shall have access to Secure
# Entry functions.
zephyr_ld_options(
${LINKERFLAGPREFIX},--out-implib=${CMAKE_BINARY_DIR}/${CONFIG_ARM_ENTRY_VENEERS_LIB_NAME}
)
zephyr_ld_options(
${LINKERFLAGPREFIX},--cmse-implib
)
# Indicate that the entry veneers library file is created during linking of this firmware.
set_property(
GLOBAL APPEND PROPERTY
extra_post_build_byproducts
${CMAKE_BINARY_DIR}/${CONFIG_ARM_ENTRY_VENEERS_LIB_NAME}
)
zephyr_linker_sources(SECTIONS SORT_KEY z_end secure_entry_functions.ld)
endif()
# Link the entry veneers library file with the Non-Secure Firmware that needs it.
zephyr_link_libraries_ifdef(CONFIG_ARM_FIRMWARE_USES_SECURE_ENTRY_FUNCS
${CMAKE_BINARY_DIR}/${CONFIG_ARM_ENTRY_VENEERS_LIB_NAME}
)
if(CONFIG_ARM_SECURE_FIRMWARE)
zephyr_library()
zephyr_library_sources(arm_core_tz.c)
endif()


@@ -1,11 +0,0 @@
# ARM TrustZone-M core configuration options
# Copyright (c) 2018 Nordic Semiconductor ASA.
# SPDX-License-Identifier: Apache-2.0
config ARM_TRUSTZONE_M
bool "ARM TrustZone-M support"
depends on CPU_HAS_TEE
depends on ARMV8_M_SE
help
Platform has support for ARM TrustZone-M.


@@ -1,41 +0,0 @@
/*
* Copyright (c) 2019 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
#if CONFIG_ARM_NSC_REGION_BASE_ADDRESS != 0
#define NSC_ALIGN . = ABSOLUTE(CONFIG_ARM_NSC_REGION_BASE_ADDRESS)
#elif defined(CONFIG_CPU_HAS_NRF_IDAU)
/* The nRF9160 needs the NSC region to be at the end of a 32 kB region. */
#define NSC_ALIGN . = ALIGN(0x8000) - (1 << LOG2CEIL(__sg_size))
#else
#define NSC_ALIGN . = ALIGN(4)
#endif
#ifdef CONFIG_CPU_HAS_NRF_IDAU
#define NSC_ALIGN_END . = ALIGN(0x8000)
#else
#define NSC_ALIGN_END . = ALIGN(4)
#endif
SECTION_PROLOGUE(.gnu.sgstubs,,)
{
NSC_ALIGN;
__sg_start = .;
/* No input section necessary, since the Secure Entry Veneers are
automatically placed after the .gnu.sgstubs output section. */
} GROUP_LINK_IN(ROMABLE_REGION)
__sg_end = .;
__sg_size = __sg_end - __sg_start;
NSC_ALIGN_END;
__nsc_size = . - __sg_start;
#ifdef CONFIG_CPU_HAS_NRF_IDAU
ASSERT(1 << LOG2CEIL(0x8000 - (__sg_start % 0x8000))
== (0x8000 - (__sg_start % 0x8000))
&& (0x8000 - (__sg_start % 0x8000)) >= 32
&& (0x8000 - (__sg_start % 0x8000)) <= 4096,
"The Non-Secure Callable region size must be a power of 2 \
between 32 and 4096 bytes.")
#endif
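
For context, a hedged example of the kind of function whose Secure Gateway veneer ends up in the .gnu.sgstubs section laid out above (secure_add is a hypothetical name; building requires -mcmse, as set up by the CMake fragment earlier):

#include <arm_cmse.h>

/* The entry veneer for this function is emitted into .gnu.sgstubs;
 * Non-Secure code calls the veneer, which executes SG and branches
 * into the Secure implementation.
 */
int __attribute__((cmse_nonsecure_entry)) secure_add(int a, int b)
{
	return a + b;
}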


@@ -1,74 +0,0 @@
/*
* Copyright (c) 2013-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Populated vector table in ROM
*
 * Vector table at the beginning of the image for starting the system. The
 * reset vector is the system entry point, i.e. the first instruction executed.
*
* The table is populated with all the system exception handlers. The NMI vector
* must be populated with a valid handler since it can happen at any time. The
* rest should not be triggered until the kernel is ready to handle them.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include "vector_table.h"
_ASM_FILE_PROLOGUE
GDATA(z_main_stack)
SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)
/*
 * Running the _very_ early boot on the main stack allows memset to be used
 * on the interrupt stack when CONFIG_INIT_STACKS is enabled, before
 * switching to the interrupt stack for the rest of the early boot.
*/
.word z_main_stack + CONFIG_MAIN_STACK_SIZE
.word z_arm_reset
.word z_arm_nmi
.word z_arm_hard_fault
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_svc
.word z_arm_reserved
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
.word z_arm_mpu_fault
.word z_arm_bus_fault
.word z_arm_usage_fault
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
.word z_arm_secure_fault
#else
.word z_arm_reserved
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_reserved
.word z_arm_svc
.word z_arm_debug_monitor
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
.word z_arm_reserved
.word z_arm_pendsv
#if defined(CONFIG_SYS_CLOCK_EXISTS)
.word z_clock_isr
#else
.word z_arm_reserved
#endif
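
An equivalent of this table expressed as a C array may make the layout easier to see (a sketch only; the array name, the section attribute, and the elided entries are illustrative, not Zephyr's implementation):

typedef void (*vector_t)(void);

extern char z_main_stack[];
extern void z_arm_reset(void), z_arm_nmi(void), z_arm_hard_fault(void);

__attribute__((used, section(".exc_vector_table")))
static const vector_t vector_table_sketch[] = {
	(vector_t)&z_main_stack[CONFIG_MAIN_STACK_SIZE], /* initial MSP */
	z_arm_reset,		/* reset: system entry point */
	z_arm_nmi,		/* NMI: must always be populated */
	z_arm_hard_fault,	/* HardFault */
	/* fault, SVC, PendSV and SysTick slots follow, as in the table above */
};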


@@ -1,70 +0,0 @@
/*
* Copyright (c) 2013-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Definitions for the boot vector table
*
*
* Definitions for the boot vector table.
*
* System exception handler names all have the same format:
*
* __<exception name with underscores>
*
* No other symbol has the same format, so they are easy to spot.
*/
#ifndef ZEPHYR_ARCH_ARM_CORE_AARCH32_CORTEX_M_VECTOR_TABLE_H_
#define ZEPHYR_ARCH_ARM_CORE_AARCH32_CORTEX_M_VECTOR_TABLE_H_
#ifdef _ASMLANGUAGE
#include <toolchain.h>
#include <linker/sections.h>
#include <sys/util.h>
GTEXT(__start)
GTEXT(_vector_table)
GTEXT(z_arm_reset)
GTEXT(z_arm_nmi)
GTEXT(z_arm_hard_fault)
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
GTEXT(z_arm_svc)
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
GTEXT(z_arm_mpu_fault)
GTEXT(z_arm_bus_fault)
GTEXT(z_arm_usage_fault)
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
GTEXT(z_arm_secure_fault)
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
GTEXT(z_arm_svc)
GTEXT(z_arm_debug_monitor)
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
GTEXT(z_arm_pendsv)
GTEXT(z_arm_reserved)
GTEXT(z_arm_prep_c)
GTEXT(_isr_wrapper)
#else /* _ASMLANGUAGE */
#ifdef __cplusplus
extern "C" {
#endif
extern void *_vector_table[];
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARM_CORE_AARCH32_CORTEX_M_VECTOR_TABLE_H_ */


@@ -1,87 +0,0 @@
# ARM Cortex-R platform configuration options
# Copyright (c) 2018 Marvell
# Copyright (c) 2018 Lexmark International, Inc.
# SPDX-License-Identifier: Apache-2.0
# NOTE: We have the specific core implementations first and outside of the
# if CPU_CORTEX_R block so that SoCs can select which core they are using
# without having to select all the options related to that core. Everything
# else is captured inside the if CPU_CORTEX_R block so they are not exposed
# if one selects a different ARM Cortex Family (Cortex-A or Cortex-M)
config CPU_CORTEX_R4
bool
select CPU_CORTEX_R
select ARMV7_R
select ARMV7_R_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-R4 CPU
config CPU_CORTEX_R5
bool
select CPU_CORTEX_R
select ARMV7_R
select ARMV7_R_FP if CPU_HAS_FPU
help
This option signifies the use of a Cortex-R5 CPU
if CPU_CORTEX_R
config ARMV7_R
bool
select ATOMIC_OPERATIONS_BUILTIN
select ISA_ARM
help
This option signifies the use of an ARMv7-R processor
implementation.
From https://developer.arm.com/products/architecture/cpu-architecture/r-profile:
The Armv7-R architecture implements a traditional Arm architecture with
multiple modes and supports a Protected Memory System Architecture
(PMSA) based on a Memory Protection Unit (MPU). It supports the Arm (32)
and Thumb (T32) instruction sets.
config ARMV7_R_FP
bool
depends on ARMV7_R
help
This option signifies the use of an ARMv7-R processor
implementation supporting the Floating-Point Extension.
config ARMV7_EXCEPTION_STACK_SIZE
int "Undefined Instruction and Abort stack size (in bytes)"
default 256
help
This option specifies the size of the stack used by the undefined
instruction and data abort exception handlers.
config ARMV7_FIQ_STACK_SIZE
int "FIQ stack size (in bytes)"
default 256
help
This option specifies the size of the stack used by the FIQ handler.
config ARMV7_SVC_STACK_SIZE
int "SVC stack size (in bytes)"
default 512
help
This option specifies the size of the stack used by the SVC handler.
config ARMV7_SYS_STACK_SIZE
int "SYS stack size (in bytes)"
default 1024
help
This option specifies the size of the stack used by the system mode.
config RUNTIME_NMI
default y
config GEN_ISR_TABLES
default y
config GEN_IRQ_VECTOR_TABLE
default n
endif # CPU_CORTEX_R


@@ -1,29 +0,0 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_internal.h>
#include <kernel_structs.h>
/**
*
* @brief Fault handler
*
* This routine is called when fatal error conditions are detected by hardware
* and is responsible only for reporting the error. Once reported, it then
 * invokes z_arm_fatal_error(), which implements the error handling
 * policy.
*
* This is a stub for more exception handling code to be added later.
*/
void z_arm_fault(z_arch_esf_t *esf, u32_t exc_return)
{
z_arm_fatal_error(K_ERR_CPU_EXCEPTION, esf);
}
void z_arm_fault_init(void)
{
}


@@ -1,187 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
* Copyright (c) 2019 Stephanos Ioannidis <root@stephanos.io>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Reset handler
*
* Reset handler that prepares the system for running C code.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <offsets_short.h>
#include "vector_table.h"
_ASM_FILE_PROLOGUE
GTEXT(z_arm_reset)
GDATA(_interrupt_stack)
GDATA(z_arm_svc_stack)
GDATA(z_arm_sys_stack)
GDATA(z_arm_fiq_stack)
GDATA(z_arm_abort_stack)
GDATA(z_arm_undef_stack)
#if defined(CONFIG_PLATFORM_SPECIFIC_INIT)
GTEXT(z_platform_init)
#endif
/**
*
* @brief Reset vector
*
 * Run when the system comes out of reset. The processor is in Supervisor mode
* and interrupts are disabled. The processor architectural registers are in
* an indeterminate state.
*
* When these steps are completed, jump to z_arm_prep_c(), which will finish
* setting up the system for running C code.
*
* @return N/A
*/
SECTION_SUBSEC_FUNC(TEXT, _reset_section, z_arm_reset)
SECTION_SUBSEC_FUNC(TEXT, _reset_section, __start)
#if defined(CONFIG_CPU_HAS_DCLS)
/*
* Initialise CPU registers to a defined state if the processor is
* configured as Dual-redundant Core Lock-step (DCLS). This is required
* for state convergence of the two parallel executing cores.
*/
/* Common and SVC mode registers */
mov r0, #0
mov r1, #0
mov r2, #0
mov r3, #0
mov r4, #0
mov r5, #0
mov r6, #0
mov r7, #0
mov r8, #0
mov r9, #0
mov r10, #0
mov r11, #0
mov r12, #0
mov r13, #0 /* r13_svc */
mov r14, #0 /* r14_svc */
mrs r0, cpsr
msr spsr_cxsf, r0 /* spsr_svc */
/* FIQ mode registers */
cps #MODE_FIQ
mov r8, #0 /* r8_fiq */
mov r9, #0 /* r9_fiq */
mov r10, #0 /* r10_fiq */
mov r11, #0 /* r11_fiq */
mov r12, #0 /* r12_fiq */
mov r13, #0 /* r13_fiq */
mov r14, #0 /* r14_fiq */
mrs r0, cpsr
msr spsr_cxsf, r0 /* spsr_fiq */
/* IRQ mode registers */
cps #MODE_IRQ
mov r13, #0 /* r13_irq */
mov r14, #0 /* r14_irq */
mrs r0, cpsr
msr spsr_cxsf, r0 /* spsr_irq */
/* ABT mode registers */
cps #MODE_ABT
mov r13, #0 /* r13_abt */
mov r14, #0 /* r14_abt */
mrs r0, cpsr
msr spsr_cxsf, r0 /* spsr_abt */
/* UND mode registers */
cps #MODE_UND
mov r13, #0 /* r13_und */
mov r14, #0 /* r14_und */
mrs r0, cpsr
msr spsr_cxsf, r0 /* spsr_und */
/* SYS mode registers */
cps #MODE_SYS
mov r13, #0 /* r13_sys */
mov r14, #0 /* r14_sys */
#if defined(CONFIG_FLOAT)
/*
* Initialise FPU registers to a defined state.
*/
/* Allow VFP coprocessor access */
mrc p15, 0, r0, c1, c0, 2
orr r0, r0, #(CPACR_CP10(CPACR_FA) | CPACR_CP11(CPACR_FA))
mcr p15, 0, r0, c1, c0, 2
/* Enable VFP */
mov r0, #FPEXC_EN
fmxr fpexc, r0
/* Initialise VFP registers */
fmdrr d0, r1, r1
fmdrr d1, r1, r1
fmdrr d2, r1, r1
fmdrr d3, r1, r1
fmdrr d4, r1, r1
fmdrr d5, r1, r1
fmdrr d6, r1, r1
fmdrr d7, r1, r1
fmdrr d8, r1, r1
fmdrr d9, r1, r1
fmdrr d10, r1, r1
fmdrr d11, r1, r1
fmdrr d12, r1, r1
fmdrr d13, r1, r1
fmdrr d14, r1, r1
fmdrr d15, r1, r1
#endif /* CONFIG_FLOAT */
#endif /* CONFIG_CPU_HAS_DCLS */
/*
* Configure stack.
*/
/* FIQ mode stack */
msr CPSR_c, #(MODE_FIQ | I_BIT | F_BIT)
ldr sp, =(z_arm_fiq_stack + CONFIG_ARMV7_FIQ_STACK_SIZE)
/* IRQ mode stack */
msr CPSR_c, #(MODE_IRQ | I_BIT | F_BIT)
ldr sp, =(_interrupt_stack + CONFIG_ISR_STACK_SIZE)
/* ABT mode stack */
msr CPSR_c, #(MODE_ABT | I_BIT | F_BIT)
ldr sp, =(z_arm_abort_stack + CONFIG_ARMV7_EXCEPTION_STACK_SIZE)
/* UND mode stack */
msr CPSR_c, #(MODE_UND | I_BIT | F_BIT)
ldr sp, =(z_arm_undef_stack + CONFIG_ARMV7_EXCEPTION_STACK_SIZE)
/* SVC mode stack */
msr CPSR_c, #(MODE_SVC | I_BIT | F_BIT)
ldr sp, =(z_arm_svc_stack + CONFIG_ARMV7_SVC_STACK_SIZE)
/* SYS mode stack */
msr CPSR_c, #(MODE_SYS | I_BIT | F_BIT)
ldr sp, =(z_arm_sys_stack + CONFIG_ARMV7_SYS_STACK_SIZE)
#if defined(CONFIG_PLATFORM_SPECIFIC_INIT)
/* Execute platform-specific initialisation if applicable */
bl z_platform_init
#endif
#if defined(CONFIG_WDOG_INIT)
/* board-specific watchdog initialization is necessary */
bl z_arm_watchdog_init
#endif
b z_arm_prep_c


@@ -1,26 +0,0 @@
/*
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <aarch32/cortex_r/stack.h>
#include <string.h>
K_THREAD_STACK_DEFINE(z_arm_fiq_stack, CONFIG_ARMV7_FIQ_STACK_SIZE);
K_THREAD_STACK_DEFINE(z_arm_abort_stack, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
K_THREAD_STACK_DEFINE(z_arm_undef_stack, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
K_THREAD_STACK_DEFINE(z_arm_svc_stack, CONFIG_ARMV7_SVC_STACK_SIZE);
K_THREAD_STACK_DEFINE(z_arm_sys_stack, CONFIG_ARMV7_SYS_STACK_SIZE);
#if defined(CONFIG_INIT_STACKS)
void z_arm_init_stacks(void)
{
memset(z_arm_fiq_stack, 0xAA, CONFIG_ARMV7_FIQ_STACK_SIZE);
memset(z_arm_svc_stack, 0xAA, CONFIG_ARMV7_SVC_STACK_SIZE);
memset(z_arm_abort_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(z_arm_undef_stack, 0xAA, CONFIG_ARMV7_EXCEPTION_STACK_SIZE);
memset(&_interrupt_stack, 0xAA, CONFIG_ISR_STACK_SIZE);
}
#endif
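
The 0xAA fill above is what makes stack high-water-mark measurement possible: bytes the thread never touched keep the pattern. A hedged sketch of how usage could be estimated (stack_unused_bytes is a hypothetical helper, not a Zephyr API):

#include <stddef.h>
#include <stdint.h>

static size_t stack_unused_bytes(const uint8_t *stack_bottom, size_t size)
{
	size_t n = 0;

	/* Stacks grow downward, so never-touched bytes sit at the bottom
	 * and still hold the 0xAA fill pattern.
	 */
	while (n < size && stack_bottom[n] == 0xAAU) {
		n++;
	}
	return n;
}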


@@ -1,27 +0,0 @@
/*
* Copyright (c) 2018 Marvell
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Populated vector table in ROM
*/
#include <toolchain.h>
#include <linker/sections.h>
#include "vector_table.h"
_ASM_FILE_PROLOGUE
SECTION_SUBSEC_FUNC(exc_vector_table,_vector_table_section,_vector_table)
ldr pc, =z_arm_reset /* offset 0 */
ldr pc, =z_arm_undef_instruction /* undef instruction offset 4 */
ldr pc, =z_arm_svc /* svc offset 8 */
ldr pc, =z_arm_prefetch_abort /* prefetch abort offset 0xc */
ldr pc, =z_arm_data_abort /* data abort offset 0x10 */
nop /* offset 0x14 */
ldr pc, =_isr_wrapper /* IRQ offset 0x18 */
ldr pc, =z_arm_nmi /* FIQ offset 0x1c */


@@ -1,60 +0,0 @@
/*
* Copyright (c) 2018 Marvell
* Copyright (c) 2018 Lexmark International, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Definitions for the boot vector table
*
*
* Definitions for the boot vector table.
*
* System exception handler names all have the same format:
*
* __<exception name with underscores>
*
* No other symbol has the same format, so they are easy to spot.
*/
#ifndef _VECTOR_TABLE__H_
#define _VECTOR_TABLE__H_
#ifdef _ASMLANGUAGE
#include <toolchain.h>
#include <linker/sections.h>
#include <sys/util.h>
GTEXT(__start)
GTEXT(_vector_table)
GTEXT(z_arm_nmi)
GTEXT(z_arm_undef_instruction)
GTEXT(z_arm_svc)
GTEXT(z_arm_prefetch_abort)
GTEXT(z_arm_data_abort)
GTEXT(z_arm_pendsv)
GTEXT(z_arm_reserved)
GTEXT(z_arm_prep_c)
GTEXT(_isr_wrapper)
#else /* _ASMLANGUAGE */
#ifdef __cplusplus
extern "C" {
#endif
extern void *_vector_table[];
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* _VECTOR_TABLE__H_ */


@@ -1,129 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-M and Cortex-R power management
*
*/
#include <toolchain.h>
#include <linker/sections.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_cpu_idle_init)
GTEXT(arch_cpu_idle)
GTEXT(arch_cpu_atomic_idle)
#if defined(CONFIG_CPU_CORTEX_M)
#define _SCB_SCR 0xE000ED10
#define _SCB_SCR_SEVONPEND (1 << 4)
#define _SCB_SCR_SLEEPDEEP (1 << 2)
#define _SCB_SCR_SLEEPONEXIT (1 << 1)
#define _SCR_INIT_BITS _SCB_SCR_SEVONPEND
#endif
/**
*
* @brief Initialization of CPU idle
*
* Only called by arch_kernel_init(). Sets SEVONPEND bit once for the system's
* duration.
*
* @return N/A
*
* C function prototype:
*
* void z_arm_cpu_idle_init(void);
*/
SECTION_FUNC(TEXT, z_arm_cpu_idle_init)
#if defined(CONFIG_CPU_CORTEX_M)
ldr r1, =_SCB_SCR
movs.n r2, #_SCR_INIT_BITS
str r2, [r1]
#endif
bx lr
SECTION_FUNC(TEXT, arch_cpu_idle)
#ifdef CONFIG_TRACING
push {r0, lr}
bl sys_trace_idle
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_TRACING */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
cpsie i
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* clear BASEPRI so wfi is awakened by incoming interrupts */
eors.n r0, r0
msr BASEPRI, r0
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
wfi
bx lr
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
push {r0, lr}
bl sys_trace_idle
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_TRACING */
/*
* Lock PRIMASK while sleeping: wfe will still get interrupted by
* incoming interrupts but the CPU will not service them right away.
*/
cpsid i
/*
 * No need to set SEVONPEND, it's set once in z_arm_cpu_idle_init() and
 * never touched again.
*/
/* r0: interrupt mask from caller */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
/* No BASEPRI, call wfe directly (SEVONPEND set in z_arm_cpu_idle_init()) */
wfe
cmp r0, #0
bne _irq_disabled
cpsie i
_irq_disabled:
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* r1: zero, for setting BASEPRI (needs a register) */
eors.n r1, r1
/* unlock BASEPRI so wfe gets interrupted by incoming interrupts */
msr BASEPRI, r1
wfe
msr BASEPRI, r0
cpsie i
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
bx lr
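
The atomic-idle sequence above can be summarized in C (a hedged sketch of the Baseline/Cortex-R path only; atomic_idle is a made-up name): interrupts are masked so a wakeup pends rather than runs, wfe falls through if an event is already pending thanks to SEVONPEND, and interrupts are re-enabled only if the caller had them unlocked.

static inline void atomic_idle(unsigned int key)
{
	__asm__ volatile("cpsid i");	/* mask IRQs: wakeups pend, don't run */
	__asm__ volatile("wfe");	/* sleeps, or falls through if an event
					 * is already pending (SEVONPEND)
					 */
	if (key == 0) {
		/* caller had interrupts unlocked: service the wakeup now */
		__asm__ volatile("cpsie i");
	}
}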


@@ -1,149 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-M and Cortex-R exception/interrupt exit API
*
*
* Provides functions for performing kernel handling when exiting exceptions or
* interrupts that are installed directly in the vector table (i.e. that are not
* wrapped around by _isr_wrapper()).
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_exc_exit)
GTEXT(z_arm_int_exit)
GDATA(_kernel)
#if defined(CONFIG_CPU_CORTEX_R)
GTEXT(z_arm_pendsv)
#endif
/**
*
* @brief Kernel housekeeping when exiting interrupt handler installed
* directly in vector table
*
* Kernel allows installing interrupt handlers (ISRs) directly into the vector
* table to get the lowest interrupt latency possible. This allows the ISR to
* be invoked directly without going through a software interrupt table.
* However, upon exiting the ISR, some kernel work must still be performed,
* namely possible context switching. While ISRs connected in the software
* interrupt table do this automatically via a wrapper, ISRs connected directly
* in the vector table must invoke z_arm_int_exit() as the *very last* action
* before returning.
*
* e.g.
*
* void myISR(void)
* {
* printk("in %s\n", __FUNCTION__);
* doStuff();
* z_arm_int_exit();
* }
*
* @return N/A
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_int_exit)
/* z_arm_int_exit falls through to z_arm_exc_exit (they are aliases of each
* other)
*/
/**
*
* @brief Kernel housekeeping when exiting exception handler installed
* directly in vector table
*
* See z_arm_int_exit().
*
* @return N/A
*/
SECTION_SUBSEC_FUNC(TEXT, _HandlerModeExit, z_arm_exc_exit)
#if defined(CONFIG_CPU_CORTEX_R)
/* r0 contains the caller mode */
push {r0, lr}
#endif
#ifdef CONFIG_PREEMPT_ENABLED
ldr r0, =_kernel
ldr r1, [r0, #_kernel_offset_to_current]
ldr r0, [r0, #_kernel_offset_to_ready_q_cache]
cmp r0, r1
beq _EXIT_EXC
#if defined(CONFIG_CPU_CORTEX_M)
/* context switch required, pend the PendSV exception */
ldr r1, =_SCS_ICSR
ldr r2, =_SCS_ICSR_PENDSV
str r2, [r1]
#elif defined(CONFIG_CPU_CORTEX_R)
bl z_arm_pendsv
#endif
_ExcExitWithGdbStub:
_EXIT_EXC:
#endif /* CONFIG_PREEMPT_ENABLED */
#ifdef CONFIG_STACK_SENTINEL
#if defined(CONFIG_CPU_CORTEX_M)
push {r0, lr}
bl z_check_stack_sentinel
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#else
bl z_check_stack_sentinel
#endif /* CONFIG_CPU_CORTEX_M */
#endif /* CONFIG_STACK_SENTINEL */
#if defined(CONFIG_CPU_CORTEX_M)
bx lr
#elif defined(CONFIG_CPU_CORTEX_R)
/* Restore the caller mode to r0 */
pop {r0, lr}
/*
* Restore r0-r3, r12 and lr stored into the process stack by the mode
* entry function. These registers are saved by _isr_wrapper for IRQ mode
* and z_arm_svc for SVC mode.
*
* r0-r3 are either the values from the thread before it was switched out
* or they are the args to _new_thread for a new thread.
*/
push {r4, r5}
cmp r0, #RET_FROM_SVC
cps #MODE_SYS
ldmia sp!, {r0-r5}
beq _svc_exit
cps #MODE_IRQ
b _exc_exit
_svc_exit:
cps #MODE_SVC
_exc_exit:
mov r12, r4
mov lr, r5
pop {r4, r5}
movs pc, lr
#endif


@@ -1,99 +0,0 @@
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Kernel fatal error handler for ARM Cortex-M and Cortex-R
*
* This module provides the z_arm_fatal_error() routine for ARM Cortex-M
* and Cortex-R CPUs.
*/
#include <kernel.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
static void esf_dump(const z_arch_esf_t *esf)
{
LOG_ERR("r0/a1: 0x%08x r1/a2: 0x%08x r2/a3: 0x%08x",
esf->basic.a1, esf->basic.a2, esf->basic.a3);
LOG_ERR("r3/a4: 0x%08x r12/ip: 0x%08x r14/lr: 0x%08x",
esf->basic.a4, esf->basic.ip, esf->basic.lr);
LOG_ERR(" xpsr: 0x%08x", esf->basic.xpsr);
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
for (int i = 0; i < 16; i += 4) {
LOG_ERR("s[%2d]: 0x%08x s[%2d]: 0x%08x"
" s[%2d]: 0x%08x s[%2d]: 0x%08x",
i, (u32_t)esf->s[i],
i + 1, (u32_t)esf->s[i + 1],
i + 2, (u32_t)esf->s[i + 2],
i + 3, (u32_t)esf->s[i + 3]);
}
LOG_ERR("fpscr: 0x%08x", esf->fpscr);
#endif
LOG_ERR("Faulting instruction address (r15/pc): 0x%08x",
esf->basic.pc);
}
void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
if (esf != NULL) {
esf_dump(esf);
}
z_fatal_error(reason, esf);
}
/**
* @brief Handle a software-generated fatal exception
* (e.g. kernel oops, panic, etc.).
*
* Notes:
* - the function is invoked in SVC Handler
* - if triggered from nPRIV mode, only oops and stack fail error reasons
* may be propagated to the fault handling process.
* - We expect the supplied exception stack frame to always be a valid
* frame. That is because, if the ESF cannot be stacked during an SVC,
* a processor fault (e.g. stacking error) will be generated, and the
 * fault handler will be executed instead of the SVC.
*
* @param esf exception frame
*/
void z_do_kernel_oops(const z_arch_esf_t *esf)
{
/* Stacked R0 holds the exception reason. */
unsigned int reason = esf->basic.r0;
#if defined(CONFIG_USERSPACE)
if ((__get_CONTROL() & CONTROL_nPRIV_Msk) == CONTROL_nPRIV_Msk) {
/*
* Exception triggered from nPRIV mode.
*
* User mode is only allowed to induce oopses and stack check
* failures via software-triggered system fatal exceptions.
*/
if (!((esf->basic.r0 == K_ERR_KERNEL_OOPS) ||
(esf->basic.r0 == K_ERR_STACK_CHK_FAIL))) {
reason = K_ERR_KERNEL_OOPS;
}
}
#endif /* CONFIG_USERSPACE */
z_arm_fatal_error(reason, esf);
}
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
{
u32_t *ssf_contents = ssf_ptr;
z_arch_esf_t oops_esf = { 0 };
/* TODO: Copy the rest of the register set out of ssf_ptr */
oops_esf.basic.pc = ssf_contents[3];
z_arm_fatal_error(K_ERR_KERNEL_OOPS, &oops_esf);
CODE_UNREACHABLE;
}
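
For illustration, a hedged sketch of how a software-generated fatal exception can reach z_do_kernel_oops(): the reason code is loaded into r0 and SVC #2 is issued, matching the "kernel panic or oops" service call number handled in z_arm_svc (raise_oops is a made-up name):

static inline void raise_oops(unsigned int reason)
{
	register unsigned int r0 __asm__("r0") = reason;

	/* SVC #2 is handled as "kernel panic or oops" by z_arm_svc; the
	 * stacked r0 becomes the reason read by z_do_kernel_oops().
	 */
	__asm__ volatile("svc #2" : : "r"(r0) : "memory");
}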


@@ -1,117 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
* Copyright (c) 2017-2019 Nordic Semiconductor ASA.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Fault handlers for ARM Cortex-M and Cortex-R
*
* Fault handlers for ARM Cortex-M and Cortex-R processors.
*/
#include <toolchain.h>
#include <linker/sections.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_fault)
GTEXT(z_arm_hard_fault)
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* HardFault is used for all fault conditions on ARMv6-M. */
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
GTEXT(z_arm_mpu_fault)
GTEXT(z_arm_bus_fault)
GTEXT(z_arm_usage_fault)
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
GTEXT(z_arm_secure_fault)
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
GTEXT(z_arm_debug_monitor)
#elif defined(CONFIG_ARMV7_R)
GTEXT(z_arm_undef_instruction)
GTEXT(z_arm_prefetch_abort)
GTEXT(z_arm_data_abort)
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
GTEXT(z_arm_reserved)
/**
*
* @brief Fault handler installed in the fault and reserved vectors
*
* Entry point for the HardFault, MemManageFault, BusFault, UsageFault,
* SecureFault, Debug Monitor, and reserved exceptions.
*
* For Cortex-M: the function supplies the values of
* - the MSP
* - the PSP
* - the EXC_RETURN value
* as parameters to the z_arm_fault() C function that will perform the
* rest of the fault handling (i.e. z_arm_fault(MSP, PSP, EXC_RETURN)).
*
* For Cortex-R: the function simply invokes z_arm_fault() with currently
* unused arguments.
*
* Provides these symbols:
*
* z_arm_hard_fault
* z_arm_mpu_fault
* z_arm_bus_fault
* z_arm_usage_fault
* z_arm_secure_fault
* z_arm_debug_monitor
* z_arm_reserved
*/
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_hard_fault)
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* HardFault is used for all fault conditions on ARMv6-M. */
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_mpu_fault)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_bus_fault)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_usage_fault)
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_secure_fault)
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_debug_monitor)
#elif defined(CONFIG_ARMV7_R)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_undef_instruction)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_prefetch_abort)
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_data_abort)
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
SECTION_SUBSEC_FUNC(TEXT,__fault,z_arm_reserved)
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) || \
defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
mrs r0, MSP
mrs r1, PSP
mov r2, lr /* EXC_RETURN */
push {r0, lr}
#elif defined(CONFIG_ARMV7_R)
/*
* Pass null for the esf to z_arm_fault for now. A future PR will add
* better exception debug for Cortex-R that subsumes what esf
* provides.
*/
mov r0, #0
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE || CONFIG_ARMv7_M_ARMV8_M_MAINLINE */
bl z_arm_fault
#if defined(CONFIG_CPU_CORTEX_M)
pop {r0, pc}
#elif defined(CONFIG_CPU_CORTEX_R)
pop {r0, lr}
subs pc, lr, #8
#endif
.end
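
On the C side, the MSP/PSP/EXC_RETURN arguments passed above let the handler locate the faulting frame; a hedged sketch (fault_frame is hypothetical): bit 2 of EXC_RETURN (SPSEL) selects between the two stack pointers, and index 6 of the stacked frame holds the faulting PC.

#include <stdint.h>

static const uint32_t *fault_frame(uint32_t msp, uint32_t psp,
				   uint32_t exc_return)
{
	/* EXC_RETURN bit 2 set: frame was pushed to the PSP; clear: MSP.
	 * Frame layout: r0, r1, r2, r3, r12, lr, pc (index 6), xPSR.
	 */
	return (exc_return & 0x4U) ? (const uint32_t *)psp
				   : (const uint32_t *)msp;
}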


@@ -1,294 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-M and Cortex-R interrupt management
*
*
* Interrupt management: enabling/disabling and dynamic ISR
* connecting/replacing. SW_ISR_TABLE_DYNAMIC has to be enabled for
* connecting ISRs at runtime.
*/
#include <kernel.h>
#include <arch/cpu.h>
#if defined(CONFIG_CPU_CORTEX_M)
#include <arch/arm/aarch32/cortex_m/cmsis.h>
#elif defined(CONFIG_CPU_CORTEX_R)
#include <device.h>
#include <irq_nextlevel.h>
#endif
#include <sys/__assert.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
#include <irq.h>
#include <tracing/tracing.h>
extern void z_arm_reserved(void);
#if defined(CONFIG_CPU_CORTEX_M)
#define NUM_IRQS_PER_REG 32
#define REG_FROM_IRQ(irq) (irq / NUM_IRQS_PER_REG)
#define BIT_FROM_IRQ(irq) (irq % NUM_IRQS_PER_REG)
void arch_irq_enable(unsigned int irq)
{
NVIC_EnableIRQ((IRQn_Type)irq);
}
void arch_irq_disable(unsigned int irq)
{
NVIC_DisableIRQ((IRQn_Type)irq);
}
int arch_irq_is_enabled(unsigned int irq)
{
return NVIC->ISER[REG_FROM_IRQ(irq)] & BIT(BIT_FROM_IRQ(irq));
}
/**
* @internal
*
* @brief Set an interrupt's priority
*
* The priority is verified if ASSERT_ON is enabled. The maximum number
 * of priority levels is a little complicated to determine, as some hardware
 * priority levels are reserved.
*
* @return N/A
*/
void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
/* The kernel may reserve some of the highest priority levels.
* So we offset the requested priority level with the number
* of priority levels reserved by the kernel.
*/
#if defined(CONFIG_ZERO_LATENCY_IRQS)
/* If we have zero latency interrupts, those interrupts will
* run at a priority level which is not masked by irq_lock().
* Our policy is to express priority levels with special properties
* via flags
*/
if (flags & IRQ_ZERO_LATENCY) {
prio = _EXC_ZERO_LATENCY_IRQS_PRIO;
} else {
prio += _IRQ_PRIO_OFFSET;
}
#else
ARG_UNUSED(flags);
prio += _IRQ_PRIO_OFFSET;
#endif
/* The last priority level is also used by PendSV exception, but
* allow other interrupts to use the same level, even if it ends up
* affecting performance (can still be useful on systems with a
* reduced set of priorities, like Cortex-M0/M0+).
*/
__ASSERT(prio <= (BIT(DT_NUM_IRQ_PRIO_BITS) - 1),
"invalid priority %d! values must be less than %lu\n",
prio - _IRQ_PRIO_OFFSET,
BIT(DT_NUM_IRQ_PRIO_BITS) - (_IRQ_PRIO_OFFSET));
NVIC_SetPriority((IRQn_Type)irq, prio);
}
#elif defined(CONFIG_CPU_CORTEX_R)
void arch_irq_enable(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
irq_enable_next_level(dev, (irq >> 8) - 1);
}
void arch_irq_disable(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
irq_disable_next_level(dev, (irq >> 8) - 1);
}
int arch_irq_is_enabled(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
return irq_is_enabled_next_level(dev);
}
/**
* @internal
*
* @brief Set an interrupt's priority
*
* The priority is verified if ASSERT_ON is enabled. The maximum number
 * of priority levels is a little complicated to determine, as some hardware
 * priority levels are reserved: three for various types of exceptions,
* and possibly one additional to support zero latency interrupts.
*
* @return N/A
*/
void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
struct device *dev = _sw_isr_table[0].arg;
if (irq == 0)
return;
irq_set_priority_next_level(dev, (irq >> 8) - 1, prio, flags);
}
#endif
/**
*
* @brief Spurious interrupt handler
*
* Installed in all dynamic interrupt slots at boot time. Throws an error if
* called.
*
* See z_arm_reserved().
*
* @return N/A
*/
void z_irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_arm_reserved();
}
#ifdef CONFIG_SYS_POWER_MANAGEMENT
void _arch_isr_direct_pm(void)
{
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
unsigned int key;
/* irq_lock() does what we want for this CPU */
key = irq_lock();
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* Lock all interrupts. irq_lock() will on this CPU only disable those
* lower than BASEPRI, which is not what we want. See comments in
* arch/arm/core/aarch32/isr_wrapper.S
*/
__asm__ volatile("cpsid i" : : : "memory");
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
if (_kernel.idle) {
s32_t idle_val = _kernel.idle;
_kernel.idle = 0;
z_sys_power_save_idle_exit(idle_val);
}
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
irq_unlock(key);
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
__asm__ volatile("cpsie i" : : : "memory");
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
}
#endif
#if defined(CONFIG_ARM_SECURE_FIRMWARE)
/**
*
* @brief Set the target security state for the given IRQ
*
* Function sets the security state (Secure or Non-Secure) targeted
 * by the given irq. It requires an ARMv8-M MCU.
 * It is only compiled if ARM_SECURE_FIRMWARE is defined.
 * It should only be called while in Secure state; otherwise, a write attempt
 * to the NVIC.ITNS register is write-ignored (WI), as the ITNS register is not
* banked between security states and, therefore, has no Non-Secure instance.
*
* It shall assert if the operation is not performed successfully.
*
* @param irq IRQ line
* @param secure_state 1 if target state is Secure, 0 otherwise.
*
* @return N/A
*/
void irq_target_state_set(unsigned int irq, int secure_state)
{
if (secure_state) {
/* Set target to Secure */
if (NVIC_ClearTargetState(irq) != 0) {
__ASSERT(0, "NVIC SetTargetState error");
}
} else {
/* Set target state to Non-Secure */
if (NVIC_SetTargetState(irq) != 1) {
__ASSERT(0, "NVIC SetTargetState error");
}
}
}
/**
*
* @brief Determine whether the given IRQ targets the Secure state
*
* Function determines whether the given irq targets the Secure state
 * or not (i.e. targets the Non-Secure state). It requires an ARMv8-M MCU.
 * It is only compiled if ARM_SECURE_FIRMWARE is defined.
 * It should only be called while in Secure state; otherwise, a read attempt
 * to the NVIC.ITNS register is read-as-zero (RAZ), as the ITNS register is not
* banked between security states and, therefore, has no Non-Secure instance.
*
* @param irq IRQ line
*
* @return 1 if target state is Secure, 0 otherwise.
*/
int irq_target_state_is_secure(unsigned int irq)
{
return NVIC_GetTargetState(irq) == 0;
}
#endif /* CONFIG_ARM_SECURE_FIRMWARE */
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_arm_irq_priority_set(irq, priority, flags);
return irq;
}
#ifdef CONFIG_DYNAMIC_DIRECT_INTERRUPTS
static inline void z_arm_irq_dynamic_direct_isr_dispatch(void)
{
u32_t irq = __get_IPSR() - 16;
if (irq < IRQ_TABLE_SIZE) {
struct _isr_table_entry *isr_entry = &_sw_isr_table[irq];
isr_entry->isr(isr_entry->arg);
}
}
ISR_DIRECT_DECLARE(z_arm_irq_direct_dynamic_dispatch_reschedule)
{
z_arm_irq_dynamic_direct_isr_dispatch();
return 1;
}
ISR_DIRECT_DECLARE(z_arm_irq_direct_dynamic_dispatch_no_reschedule)
{
z_arm_irq_dynamic_direct_isr_dispatch();
return 0;
}
#endif /* CONFIG_DYNAMIC_DIRECT_INTERRUPTS */
#endif /* CONFIG_DYNAMIC_INTERRUPTS */
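
A hedged usage sketch for the dynamic-connect path above (my_isr and MY_IRQ_LINE are hypothetical):

#define MY_IRQ_LINE 25	/* hypothetical IRQ line number */

static void my_isr(void *arg)
{
	/* runs via _isr_wrapper with the parameter registered below */
}

void install_my_isr(void)
{
	/* install at priority 1 with no flags, then unmask the line */
	arch_irq_connect_dynamic(MY_IRQ_LINE, 1, my_isr, NULL, 0);
	arch_irq_enable(MY_IRQ_LINE);
}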


@@ -1,46 +0,0 @@
/*
* Copyright (c) 2015 Intel corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file Software interrupts utility code - ARM implementation
*/
#include <kernel.h>
#include <irq_offload.h>
volatile irq_offload_routine_t offload_routine;
static void *offload_param;
/* Called by z_arm_svc */
void z_irq_do_offload(void)
{
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
{
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) && defined(CONFIG_ASSERT)
	/* ARMv6-M/ARMv8-M Baseline cores HardFault if you make an SVC call
	 * with interrupts locked.
	 */
unsigned int key;
__asm__ volatile("mrs %0, PRIMASK;" : "=r" (key) : : "memory");
__ASSERT(key == 0U, "irq_offload called with interrupts locked\n");
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE && CONFIG_ASSERT */
k_sched_lock();
offload_routine = routine;
offload_param = parameter;
__asm__ volatile ("svc %[id]"
:
: [id] "i" (_SVC_CALL_IRQ_OFFLOAD)
: "memory");
offload_routine = NULL;
k_sched_unlock();
}
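
A hedged usage sketch (my_offloaded_work is hypothetical): the routine runs synchronously in handler mode via the _SVC_CALL_IRQ_OFFLOAD service call, which is mainly useful for exercising ISR-context code paths in tests.

static void my_offloaded_work(void *param)
{
	/* executes in interrupt (handler) context */
	*(volatile int *)param = 1;
}

void offload_example(void)
{
	volatile int done = 0;

	arch_irq_offload(my_offloaded_work, (void *)&done);
	/* the offload has already run by this point: done == 1 */
}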


@@ -1,193 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM Cortex-M and Cortex-R wrapper for ISRs with parameter
*
* Wrapper installed in vector table for handling dynamic interrupts that accept
* a parameter.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
#include <sw_isr_table.h>
_ASM_FILE_PROLOGUE
GDATA(_sw_isr_table)
GTEXT(_isr_wrapper)
GTEXT(z_arm_int_exit)
/**
*
* @brief Wrapper around ISRs when inserted in software ISR table
*
* When inserted in the vector table, _isr_wrapper() demuxes the ISR table
* using the running interrupt number as the index, and invokes the registered
* ISR with its corresponding argument. When returning from the ISR, it
* determines if a context switch needs to happen (see documentation for
* z_arm_pendsv()) and pends the PendSV exception if so: the latter will
* perform the context switch itself.
*
* @return N/A
*/
SECTION_FUNC(TEXT, _isr_wrapper)
#if defined(CONFIG_CPU_CORTEX_M)
push {r0,lr} /* r0, lr are now the first items on the stack */
#elif defined(CONFIG_CPU_CORTEX_R)
/*
* Save away r0-r3 from previous context to the process stack since
* they are clobbered here. Also, save away lr since we may swap
* processes and return to a different thread.
*/
push {r4, r5}
mov r4, r12
sub r5, lr, #4
cps #MODE_SYS
stmdb sp!, {r0-r5}
cps #MODE_IRQ
pop {r4, r5}
#endif
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
#endif
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_enter
#endif
#ifdef CONFIG_SYS_POWER_MANAGEMENT
/*
* All interrupts are disabled when handling idle wakeup. For tickless
* idle, this ensures that the calculation and programming of the
* device for the next timer deadline is not interrupted. For
* non-tickless idle, this ensures that the clearing of the kernel idle
* state is not interrupted. In each case, z_sys_power_save_idle_exit
* is called with interrupts disabled.
*/
/*
* FIXME: Remove the Cortex-M conditional compilation checks for `cpsid i`
* and `cpsie i` after the Cortex-R port is updated to support
* interrupt nesting. For more details, refer to the issue #21758.
*/
#if defined(CONFIG_CPU_CORTEX_M)
cpsid i /* PRIMASK = 1 */
#endif
/* is this a wakeup from idle ? */
ldr r2, =_kernel
/* requested idle duration, in ticks */
ldr r0, [r2, #_kernel_offset_to_idle]
cmp r0, #0
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
beq _idle_state_cleared
movs.n r1, #0
/* clear kernel idle state */
str r1, [r2, #_kernel_offset_to_idle]
bl z_sys_power_save_idle_exit
_idle_state_cleared:
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ittt ne
movne r1, #0
/* clear kernel idle state */
strne r1, [r2, #_kernel_offset_to_idle]
blne z_sys_power_save_idle_exit
#elif defined(CONFIG_ARMV7_R)
beq _idle_state_cleared
movs r1, #0
/* clear kernel idle state */
str r1, [r2, #_kernel_offset_to_idle]
bl z_sys_power_save_idle_exit
_idle_state_cleared:
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#if defined(CONFIG_CPU_CORTEX_M)
cpsie i /* re-enable interrupts (PRIMASK = 0) */
#endif
#endif /* CONFIG_SYS_POWER_MANAGEMENT */
#if defined(CONFIG_CPU_CORTEX_M)
mrs r0, IPSR /* get exception number */
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
ldr r1, =16
subs r0, r1 /* get IRQ number */
lsls r0, #3 /* table is 8-byte wide */
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
sub r0, r0, #16 /* get IRQ number */
lsl r0, r0, #3 /* table is 8-byte wide */
#elif defined(CONFIG_ARMV7_R)
/*
* Cortex-R only has one IRQ line so the main handler will be at
* offset 0 of the table.
*/
mov r0, #0
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
ldr r1, =_sw_isr_table
	add r1, r1, r0	/* table entry: ISRs must have their LSB set to stay
			 * in thumb mode */
ldm r1!,{r0,r3} /* arg in r0, ISR in r3 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
stm sp!,{r0-r3} /* Save r0 to r3 into stack */
push {r0, lr}
bl read_timer_end_of_isr
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r3}
mov lr,r3
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
ldm sp!,{r0-r3} /* Restore r0 to r3 regs */
#endif /* CONFIG_EXECUTION_BENCHMARKING */
blx r3 /* call ISR */
#ifdef CONFIG_TRACING_ISR
bl sys_trace_isr_exit
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r3}
mov lr, r3
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
pop {r0, lr}
#elif defined(CONFIG_ARMV7_R)
/*
* r0,lr were saved on the process stack since a swap could
* happen. exc_exit will handle getting those values back
* from the process stack to return to the correct location
* so there is no need to do anything here.
*/
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#if defined(CONFIG_CPU_CORTEX_R)
mov r0, #RET_FROM_IRQ
#endif
/* Use 'bx' instead of 'b' because 'bx' can jump further, and use
* 'bx' instead of 'blx' because exception return is done in
* z_arm_int_exit() */
ldr r1, =z_arm_int_exit
bx r1


@@ -1,186 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Full C support initialization
*
*
* Initialization of full C support: zero the .bss, copy the .data if XIP,
* call z_cstart().
*
* Stack is available in this module, but not the global data/bss until their
* initialization is performed.
*/
#include <kernel.h>
#include <kernel_internal.h>
#include <linker/linker-defs.h>
#if defined(CONFIG_ARMV7_R)
#include <aarch32/cortex_r/stack.h>
#endif
#if defined(__GNUC__)
/*
 * GCC can detect if memcpy is passed a NULL argument, however in one of
 * the cases of relocate_vector_table() it is valid to pass NULL, so we
* suppress the warning for this case. We need to do this before
* string.h is included to get the declaration of memcpy.
*/
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wnonnull"
#endif
#include <string.h>
#ifdef CONFIG_CPU_CORTEX_M_HAS_VTOR
#ifdef CONFIG_XIP
#define VECTOR_ADDRESS ((uintptr_t)_vector_start)
#else
#define VECTOR_ADDRESS CONFIG_SRAM_BASE_ADDRESS
#endif
static inline void relocate_vector_table(void)
{
SCB->VTOR = VECTOR_ADDRESS & SCB_VTOR_TBLOFF_Msk;
__DSB();
__ISB();
}
#else
#if defined(CONFIG_SW_VECTOR_RELAY)
Z_GENERIC_SECTION(.vt_pointer_section) void *_vector_table_pointer;
#endif
#define VECTOR_ADDRESS 0
void __weak relocate_vector_table(void)
{
#if defined(CONFIG_XIP) && (CONFIG_FLASH_BASE_ADDRESS != 0) || \
!defined(CONFIG_XIP) && (CONFIG_SRAM_BASE_ADDRESS != 0)
size_t vector_size = (size_t)_vector_end - (size_t)_vector_start;
(void)memcpy(VECTOR_ADDRESS, _vector_start, vector_size);
#elif defined(CONFIG_SW_VECTOR_RELAY)
_vector_table_pointer = _vector_start;
#endif
}
#if defined(__GNUC__)
#pragma GCC diagnostic pop
#endif
#endif /* CONFIG_CPU_CORTEX_M_HAS_VTOR */
#if defined(CONFIG_CPU_HAS_FPU)
static inline void z_arm_floating_point_init(void)
{
/*
* Upon reset, the Co-Processor Access Control Register is, normally,
* 0x00000000. However, it might be left un-cleared by firmware running
* before Zephyr boot.
*/
SCB->CPACR &= (~(CPACR_CP10_Msk | CPACR_CP11_Msk));
#if defined(CONFIG_FLOAT)
/*
* Enable CP10 and CP11 Co-Processors to enable access to floating
* point registers.
*/
#if defined(CONFIG_USERSPACE)
/* Full access */
SCB->CPACR |= CPACR_CP10_FULL_ACCESS | CPACR_CP11_FULL_ACCESS;
#else
/* Privileged access only */
SCB->CPACR |= CPACR_CP10_PRIV_ACCESS | CPACR_CP11_PRIV_ACCESS;
#endif /* CONFIG_USERSPACE */
/*
* Upon reset, the FPU Context Control Register is 0xC0000000
* (both Automatic and Lazy state preservation is enabled).
*/
#if !defined(CONFIG_FP_SHARING)
/* Default mode is Unshared FP registers mode. We disable the
* automatic stacking of FP registers (automatic setting of
* FPCA bit in the CONTROL register), upon exception entries,
* as the FP registers are to be used by a single context (and
* the use of FP registers in ISRs is not supported). This
* configuration improves interrupt latency and decreases the
* stack memory requirement for the (single) thread that makes
* use of the FP co-processor.
*/
FPU->FPCCR &= (~(FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk));
#else
/*
* Enable both automatic and lazy state preservation of the FP context.
* The FPCA bit of the CONTROL register will be automatically set, if
* the thread uses the floating point registers. Because of lazy state
* preservation the volatile FP registers will not be stacked upon
* exception entry, however, the required area in the stack frame will
* be reserved for them. This configuration improves interrupt latency.
* The registers will eventually be stacked when the thread is swapped
* out during context-switch.
*/
FPU->FPCCR = FPU_FPCCR_ASPEN_Msk | FPU_FPCCR_LSPEN_Msk;
#endif /* CONFIG_FP_SHARING */
/* Make the side-effects of modifying the FPCCR be realized
* immediately.
*/
__DSB();
__ISB();
/* Initialize the Floating Point Status and Control Register. */
__set_FPSCR(0);
/*
* Note:
* The use of the FP register bank is enabled, however the FP context
* will be activated (FPCA bit on the CONTROL register) in the presence
* of floating point instructions.
*/
#endif /* CONFIG_FLOAT */
/*
* Upon reset, the CONTROL.FPCA bit is, normally, cleared. However,
* it might be left un-cleared by firmware running before Zephyr boot.
* We must clear this bit to prevent errors in exception unstacking.
*
* Note:
* In Sharing FP Registers mode CONTROL.FPCA is cleared before switching
* to main, so it may be skipped here (saving few boot cycles).
*/
#if !defined(CONFIG_FLOAT) || !defined(CONFIG_FP_SHARING)
__set_CONTROL(__get_CONTROL() & (~(CONTROL_FPCA_Msk)));
#endif
}
#endif /* CONFIG_CPU_HAS_FPU */
extern FUNC_NORETURN void z_cstart(void);
/**
*
* @brief Prepare to and run C code
*
* This routine prepares for the execution of and runs C code.
*
* @return N/A
*/
void z_arm_prep_c(void)
{
relocate_vector_table();
#if defined(CONFIG_CPU_HAS_FPU)
z_arm_floating_point_init();
#endif
z_bss_zero();
z_data_copy();
#if defined(CONFIG_ARMV7_R) && defined(CONFIG_INIT_STACKS)
z_arm_init_stacks();
#endif
z_arm_int_lib_init();
z_cstart();
CODE_UNREACHABLE;
}
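
A hedged sketch of what z_bss_zero() and z_data_copy() amount to, using conventional linker-script symbols (the names here are illustrative):

#include <string.h>

extern char __bss_start[], __bss_end[];
extern char __data_ram_start[], __data_ram_end[], __data_rom_start[];

static void c_runtime_init_sketch(void)
{
	/* zero .bss so uninitialized globals read as 0 */
	memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));

	/* for XIP images, copy initialized .data from ROM into RAM */
	memcpy(__data_ram_start, __data_rom_start,
	       (size_t)(__data_ram_end - __data_ram_start));
}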


@@ -1,61 +0,0 @@
/*
* Copyright (c) 2018 Linaro, Limited
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <kernel.h>
#include <kernel_internal.h>
#ifdef CONFIG_EXECUTION_BENCHMARKING
extern void read_timer_start_of_swap(void);
#endif
extern const int _k_neg_eagain;
/* The 'key' actually represents the BASEPRI register
* prior to disabling interrupts via the BASEPRI mechanism.
*
* arch_swap() itself does not do much.
*
* It simply stores the intlock key (the BASEPRI value) parameter into
* current->basepri, and then triggers a PendSV exception, which does
* the heavy lifting of context switching.
* This is the only place we have to save BASEPRI since the other paths to
* z_arm_pendsv all come from handling an interrupt, which means we know the
* interrupts were not locked: in that case the BASEPRI value is 0.
*
* Given that arch_swap() is called to effect a cooperative context switch,
 * only the caller-saved integer registers need to be saved for the
 * outgoing thread. This is all performed by the hardware, which stores them in
* its exception stack frame, created when handling the z_arm_pendsv exception.
*
* On ARMv6-M, the intlock key is represented by the PRIMASK register,
* as BASEPRI is not available.
*/
int arch_swap(unsigned int key)
{
#ifdef CONFIG_EXECUTION_BENCHMARKING
read_timer_start_of_swap();
#endif
/* store off key and return value */
_current->arch.basepri = key;
_current->arch.swap_return_value = _k_neg_eagain;
#if defined(CONFIG_CPU_CORTEX_M)
/* set pending bit to make sure we will take a PendSV exception */
SCB->ICSR |= SCB_ICSR_PENDSVSET_Msk;
/* clear mask or enable all irqs to take a pendsv */
irq_unlock(0);
#elif defined(CONFIG_CPU_CORTEX_R)
z_arm_cortex_r_svc();
irq_unlock(key);
#endif
/* Context switch is performed here. Returning implies the
* thread has been context-switched-in again.
*/
return _current->arch.swap_return_value;
}
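
A hedged sketch of a cooperative switch built on the function above (need_reschedule is a hypothetical predicate; the actual scheduler path differs):

void yield_sketch(void)
{
	/* key captures BASEPRI (PRIMASK on ARMv6-M) before locking */
	unsigned int key = irq_lock();

	if (need_reschedule()) {
		/* stores key in _current->arch.basepri, pends PendSV and
		 * returns once this thread is switched back in
		 */
		(void)arch_swap(key);
	} else {
		irq_unlock(key);
	}
}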


@@ -1,653 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
* Copyright (c) 2017-2019 Nordic Semiconductor ASA.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Thread context switching for ARM Cortex-M and Cortex-R
*
* This module implements the routines necessary for thread context switching
* on ARM Cortex-M and Cortex-R CPUs.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <arch/cpu.h>
#include <syscall.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_svc)
GTEXT(z_arm_pendsv)
GTEXT(z_do_kernel_oops)
GTEXT(z_arm_do_syscall)
GDATA(_k_neg_eagain)
GDATA(_kernel)
/**
*
* @brief PendSV exception handler, handling context switches
*
* The PendSV exception is the only execution context in the system that can
* perform context switching. When an execution context finds out it has to
* switch contexts, it pends the PendSV exception.
*
* When PendSV is pended, the decision that a context switch must happen has
* already been taken. In other words, when z_arm_pendsv() runs, we *know* we
* have to swap *something*.
*
* For Cortex-M, z_arm_pendsv() is invoked with no arguments.
*
* For Cortex-R, PendSV exception is not supported by the architecture and this
* function is directly called either by _IntExit in case of preemption, or
* z_arm_svc in case of cooperative switching.
*/
SECTION_FUNC(TEXT, z_arm_pendsv)
#ifdef CONFIG_TRACING
/* Register the context switch */
push {r0, lr}
bl sys_trace_thread_switched_out
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_TRACING */
/* load _kernel into r1 and current k_thread into r2 */
ldr r1, =_kernel
ldr r2, [r1, #_kernel_offset_to_current]
/* addr of callee-saved regs in thread in r0 */
ldr r0, =_thread_offset_to_callee_saved
add r0, r2
/* save callee-saved + psp in thread */
#if defined(CONFIG_CPU_CORTEX_M)
mrs ip, PSP
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* Store current r4-r7 */
stmea r0!, {r4-r7}
/* copy r8-r12 into r3-r7 */
mov r3, r8
mov r4, r9
mov r5, r10
mov r6, r11
mov r7, ip
/* store r8-12 */
stmea r0!, {r3-r7}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
stmia r0, {v1-v8, ip}
#ifdef CONFIG_FP_SHARING
	/* Assess whether the switched-out thread had been using the FP registers. */
ldr r0, =0x10 /* EXC_RETURN.F_Type Mask */
tst lr, r0 /* EXC_RETURN & EXC_RETURN.F_Type_Msk */
beq out_fp_active
/* FP context inactive: clear FP state */
ldr r0, [r2, #_thread_offset_to_mode]
bic r0, #0x4 /* _current->arch.mode &= ~(CONTROL_FPCA_Msk) */
b out_fp_endif
out_fp_active:
/* FP context active: set FP state and store callee-saved registers */
add r0, r2, #_thread_offset_to_preempt_float
vstmia r0, {s16-s31}
ldr r0, [r2, #_thread_offset_to_mode]
orrs r0, r0, #0x4 /* _current->arch.mode |= CONTROL_FPCA_Msk */
out_fp_endif:
str r0, [r2, #_thread_offset_to_mode]
#endif /* CONFIG_FP_SHARING */
#elif defined(CONFIG_ARMV7_R)
/* Store rest of process context */
mrs r12, SPSR
stm r0, {r4-r12,sp,lr}^
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
/* Protect the kernel state while we play with the thread lists */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
cpsid i
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
movs.n r0, #_EXC_IRQ_DEFAULT_PRIO
msr BASEPRI, r0
isb /* Make the effect of disabling interrupts be realized immediately */
#elif defined(CONFIG_ARMV7_R)
/*
* Interrupts are still disabled from arch_swap so empty clause
* here to avoid the preprocessor error below
*/
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
/*
* Prepare to clear PendSV with interrupts unlocked, but
* don't clear it yet. PendSV must not be cleared until
* the new thread is context-switched in since all decisions
* to pend PendSV have been taken with the current kernel
* state and this is what we're handling currently.
*/
#if defined(CONFIG_CPU_CORTEX_M)
ldr v4, =_SCS_ICSR
ldr v3, =_SCS_ICSR_UNPENDSV
#endif
/* _kernel is still in r1 */
/* fetch the thread to run from the ready queue cache */
ldr r2, [r1, #_kernel_offset_to_ready_q_cache]
str r2, [r1, #_kernel_offset_to_current]
/*
 * Clear PendSV so that if another interrupt comes in and
 * decides, with the new kernel state based on the new thread
 * being context-switched in, that it needs to reschedule, it
 * will take effect, while previously pended PendSVs do not,
 * since they were based on the previous kernel state which
 * has now been handled.
*/
/* _SCS_ICSR is still in v4 and _SCS_ICSR_UNPENDSV in v3 */
#if defined(CONFIG_CPU_CORTEX_M)
str v3, [v4, #0]
#endif
/* Restore previous interrupt disable state (irq_lock key)
* (We clear the arch.basepri field after restoring state)
*/
#if (defined(CONFIG_CPU_CORTEX_M0PLUS) || defined(CONFIG_CPU_CORTEX_M0)) && \
_thread_offset_to_basepri > 124
/* Doing it this way since the offset to thread->arch.basepri can in
* some configurations be larger than the maximum of 124 for ldr/str
* immediate offsets.
*/
ldr r4, =_thread_offset_to_basepri
adds r4, r2, r4
ldr r0, [r4]
movs.n r3, #0
str r3, [r4]
#else
ldr r0, [r2, #_thread_offset_to_basepri]
movs r3, #0
str r3, [r2, #_thread_offset_to_basepri]
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* BASEPRI not available, previous interrupt disable state
* maps to PRIMASK.
*
* Only enable interrupts if value is 0, meaning interrupts
* were enabled before irq_lock was called.
*/
cmp r0, #0
bne _thread_irq_disabled
cpsie i
_thread_irq_disabled:
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
/* Re-program dynamic memory map */
push {r2,lr}
mov r0, r2
bl z_arm_configure_dynamic_mpu_regions
pop {r2,r3}
mov lr, r3
#endif
#ifdef CONFIG_USERSPACE
/* restore mode */
ldr r3, =_thread_offset_to_mode
adds r3, r2, r3
ldr r0, [r3]
mrs r3, CONTROL
movs.n r1, #1
bics r3, r1
orrs r3, r0
msr CONTROL, r3
/* ISB is not strictly necessary here (stack pointer is not being
* touched), but it's recommended to avoid executing pre-fetched
* instructions with the previous privilege.
*/
isb
#endif
ldr r4, =_thread_offset_to_callee_saved
adds r0, r2, r4
/* restore r4-r12 for new thread */
/* first restore r8-r12 located after r4-r7 (4*4bytes) */
adds r0, #16
ldmia r0!, {r3-r7}
/* move to correct registers */
mov r8, r3
mov r9, r4
mov r10, r5
mov r11, r6
mov ip, r7
/* restore r4-r7, go back 9*4 bytes to the start of the stored block */
subs r0, #36
ldmia r0!, {r4-r7}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* restore BASEPRI for the incoming thread */
msr BASEPRI, r0
#ifdef CONFIG_FP_SHARING
	/* Assess whether the switched-in thread had been using the FP registers. */
ldr r0, [r2, #_thread_offset_to_mode]
tst r0, #0x04 /* thread.arch.mode & CONTROL.FPCA Msk */
bne in_fp_active
/* FP context inactive for swapped-in thread:
* - reset FPSCR to 0
* - set EXC_RETURN.F_Type (prevents FP frame un-stacking when returning
* from pendSV)
*/
movs.n r3, #0
vmsr fpscr, r3
	orrs lr, lr, #0x10 /* EXC_RETURN |= EXC_RETURN.F_Type_Msk */
b in_fp_endif
in_fp_active:
/* FP context active:
* - clear EXC_RETURN.F_Type
* - FPSCR and caller-saved registers will be restored automatically
* - restore callee-saved FP registers
*/
	bic lr, #0x10 /* EXC_RETURN &= ~EXC_RETURN.F_Type_Msk */
add r0, r2, #_thread_offset_to_preempt_float
vldmia r0, {s16-s31}
in_fp_endif:
/* Clear CONTROL.FPCA that may have been set by FP instructions */
mrs r3, CONTROL
bic r3, #0x4 /* CONTROL.FPCA Msk */
msr CONTROL, r3
isb
#endif
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
/* Re-program dynamic memory map */
push {r2,lr}
mov r0, r2 /* _current thread */
bl z_arm_configure_dynamic_mpu_regions
pop {r2,lr}
#endif
#ifdef CONFIG_USERSPACE
/* restore mode */
ldr r0, [r2, #_thread_offset_to_mode]
mrs r3, CONTROL
bic r3, #1
orr r3, r0
msr CONTROL, r3
/* ISB is not strictly necessary here (stack pointer is not being
* touched), but it's recommended to avoid executing pre-fetched
* instructions with the previous privilege.
*/
isb
#endif
/* load callee-saved + psp from thread */
add r0, r2, #_thread_offset_to_callee_saved
ldmia r0, {v1-v8, ip}
#elif defined(CONFIG_ARMV7_R)
_thread_irq_disabled:
/* load _kernel into r1 and current k_thread into r2 */
ldr r1, =_kernel
ldr r2, [r1, #_kernel_offset_to_current]
/* addr of callee-saved regs in thread in r0 */
ldr r0, =_thread_offset_to_callee_saved
add r0, r2
/* restore r4-r12 for incoming thread, plus system sp and lr */
ldm r0, {r4-r12,sp,lr}^
msr SPSR_fsxc, r12
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#if defined(CONFIG_CPU_CORTEX_M)
msr PSP, ip
#endif
#ifdef CONFIG_BUILTIN_STACK_GUARD
/* r2 contains k_thread */
add r0, r2, #0
push {r2, lr}
bl configure_builtin_stack_guard
pop {r2, lr}
#endif /* CONFIG_BUILTIN_STACK_GUARD */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push {r0, lr}
bl read_timer_end_of_swap
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr,r1
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
#endif /* CONFIG_EXECUTION_BENCHMARKING */
#ifdef CONFIG_TRACING
/* Register the context switch */
push {r0, lr}
bl sys_trace_thread_switched_in
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov lr, r1
#else
pop {r0, lr}
#endif
#endif /* CONFIG_TRACING */
/*
* Cortex-M: return from PendSV exception
* Cortex-R: return to the caller (_IntExit or z_arm_svc)
*/
bx lr
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) || \
defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/**
*
* @brief Service call handler
*
* The service call (svc) is used in the following occasions:
* - IRQ offloading
* - Kernel run-time exceptions
* - System Calls (User mode)
*
* @return N/A
*/
SECTION_FUNC(TEXT, z_arm_svc)
/* Use EXC_RETURN state to find out if stack frame is on the
* MSP or PSP
*/
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
movs r0, #0x4
mov r1, lr
tst r1, r0
beq _stack_frame_msp
mrs r0, PSP
bne _stack_frame_endif
_stack_frame_msp:
mrs r0, MSP
_stack_frame_endif:
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
tst lr, #0x4 /* did we come from thread mode ? */
ite eq /* if zero (equal), came from handler mode */
mrseq r0, MSP /* handler mode, stack frame is on MSP */
mrsne r0, PSP /* thread mode, stack frame is on PSP */
#endif
/* Figure out what SVC call number was invoked */
ldr r1, [r0, #24] /* grab address of PC from stack frame */
/* SVC is a two-byte instruction, point to it and read the
* SVC number (lower byte of the SVC instruction)
*/
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
subs r1, r1, #2
ldrb r1, [r1]
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldrb r1, [r1, #-2]
#endif
/*
* grab service call number:
* 0: Unused
* 1: irq_offload (if configured)
* 2: kernel panic or oops (software generated fatal exception)
* 3: System call (if user mode supported)
*/
#if defined(CONFIG_USERSPACE)
mrs r2, CONTROL
cmp r1, #3
beq _do_syscall
/*
* check that we are privileged before invoking other SVCs
* oops if we are unprivileged
*/
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
movs r3, #0x1
tst r2, r3
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
tst r2, #0x1
#endif
bne _oops
#endif /* CONFIG_USERSPACE */
cmp r1, #2
beq _oops
#if defined(CONFIG_IRQ_OFFLOAD)
push {r0, lr}
bl z_irq_do_offload /* call C routine which executes the offload */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r3}
mov lr, r3
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
pop {r0, lr}
#endif
/* exception return is done in z_arm_int_exit() */
b z_arm_int_exit
#endif
_oops:
push {r0, lr}
bl z_do_kernel_oops
/* return from SVC exception is done here */
pop {r0, pc}
#if defined(CONFIG_USERSPACE)
/*
* A system call will set up a jump to the z_arm_do_syscall() function
* when the SVC returns via bx lr.
*
* There is some trickery involved here because we have to preserve
* the original PC value so that we can return back to the caller of
* the SVC.
*
* On SVC exception, the stack looks like the following:
* r0 - r1 - r2 - r3 - r12 - LR - PC - PSR
*
* Registers look like:
* r0 - arg1
* r1 - arg2
* r2 - arg3
* r3 - arg4
* r4 - arg5
* r5 - arg6
* r6 - call_id
* r8 - saved link register
*/
_do_syscall:
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
movs r3, #24
ldr r1, [r0, r3] /* grab address of PC from stack frame */
mov r8, r1
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr r8, [r0, #24] /* grab address of PC from stack frame */
#endif
ldr r1, =z_arm_do_syscall
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
str r1, [r0, r3] /* overwrite the PC to point to z_arm_do_syscall */
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
str r1, [r0, #24] /* overwrite the PC to point to z_arm_do_syscall */
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
ldr r3, =K_SYSCALL_LIMIT
cmp r6, r3
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* validate syscall limit */
ldr ip, =K_SYSCALL_LIMIT
cmp r6, ip
#endif
/* The supplied syscall_id must be lower than the limit
* (Requires unsigned integer comparison)
*/
blo valid_syscall_id
/* bad syscall id. Set arg1 to bad id and set call_id to SYSCALL_BAD */
str r6, [r0]
ldr r6, =K_SYSCALL_BAD
/* Bad syscalls treated as valid syscalls with ID K_SYSCALL_BAD. */
valid_syscall_id:
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
mov ip, r2
ldr r1, =_thread_offset_to_mode
ldr r3, [r0, r1]
movs r2, #1
bics r3, r2
/* Store (privileged) mode in thread's mode state variable */
str r3, [r0, r1]
mov r2, ip
dsb
/* set mode to privileged, r2 still contains value from CONTROL */
movs r3, #1
bics r2, r3
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr r1, [r0, #_thread_offset_to_mode]
bic r1, #1
/* Store (privileged) mode in thread's mode state variable */
str r1, [r0, #_thread_offset_to_mode]
dsb
/* set mode to privileged, r2 still contains value from CONTROL */
bic r2, #1
#endif
msr CONTROL, r2
/* ISB is not strictly necessary here (stack pointer is not being
* touched), but it's recommended to avoid executing pre-fetched
* instructions with the previous privilege.
*/
isb
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* Thread is now in privileged mode; after returning from SVCall it
* will use the default (user) stack before switching to the privileged
* stack to execute the system call. We need to protect the user stack
* against stack overflows until this stack transition.
*/
ldr r1, [r0, #_thread_offset_to_stack_info_start] /* stack_info.start */
msr PSPLIM, r1
#endif /* CONFIG_BUILTIN_STACK_GUARD */
/* return from SVC to the modified LR - z_arm_do_syscall */
bx lr
#endif /* CONFIG_USERSPACE */
#elif defined(CONFIG_ARMV7_R)
/**
*
* @brief Service call handler
*
* The service call (svc) is used on the following occasions:
* - Cooperative context switching
* - IRQ offloading
* - Kernel run-time exceptions
*
* @return N/A
*/
SECTION_FUNC(TEXT, z_arm_svc)
/*
* Switch to system mode to store r0-r3 to the process stack pointer.
* Save r12 and the lr as we could be swapping in another process and
* returning to a different location.
*/
push {r4, r5}
mov r4, r12
mov r5, lr
cps #MODE_SYS
stmdb sp!, {r0-r5}
cps #MODE_SVC
pop {r4, r5}
/* Get SVC number */
mrs r0, spsr
tst r0, #0x20
ldreq r1, [lr, #-4]
biceq r1, #0xff000000
beq demux
ldr r1, [lr, #-2]
bic r1, #0xff00
/*
* grab service call number:
* 0: context switch
* 1: irq_offload (if configured)
* 2: kernel panic or oops (software generated fatal exception)
* Planned implementation of system calls for memory protection will
* expand this case.
*/
demux:
cmp r1, #_SVC_CALL_CONTEXT_SWITCH
beq _context_switch
cmp r1, #_SVC_CALL_RUNTIME_EXCEPT
beq _oops
#if CONFIG_IRQ_OFFLOAD
blx z_irq_do_offload /* call C routine which executes the offload */
/* exception return is done in z_arm_int_exit() */
mov r0, #RET_FROM_SVC
b z_arm_int_exit
#endif
_context_switch:
/* handler mode exit, to PendSV */
bl z_arm_pendsv
mov r0, #RET_FROM_SVC
b z_arm_int_exit
_oops:
push {r0, lr}
blx z_do_kernel_oops
pop {r0, lr}
cpsie i
movs pc, lr
GTEXT(z_arm_cortex_r_svc)
SECTION_FUNC(TEXT, z_arm_cortex_r_svc)
svc #0
bx lr
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
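For context on how the _do_syscall path above is reached: unprivileged code traps in with svc #3, passing the call ID in r6, and the kernel later returns to the address it stashed in r8. Below is a minimal sketch of such an invocation stub, assuming only the register ABI documented above; the real stubs live in Zephyr's syscall headers and cover up to six arguments.

#include <stdint.h>

/* Sketch: enter the kernel via SVC #3 ("System Call"), one argument.
 * call_id indexes _k_syscall_table; the handler above clamps invalid
 * IDs to K_SYSCALL_BAD before dispatching.
 */
static inline uintptr_t syscall_invoke1(uintptr_t arg1, uintptr_t call_id)
{
	register uintptr_t ret __asm__("r0") = arg1;   /* arg1 in, return value out */
	register uintptr_t r6 __asm__("r6") = call_id; /* call ID, per the ABI above */

	__asm__ volatile("svc #3"
			 : "+r" (ret)
			 : "r" (r6)
			 : "r8", "memory"); /* r8 is clobbered: it holds the return PC */
	return ret;
}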


@@ -1,463 +0,0 @@
/*
* Copyright (c) 2013-2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief New thread creation for ARM Cortex-M and Cortex-R
*
* Core thread related primitives for the ARM Cortex-M and Cortex-R
* processor architecture.
*/
#include <kernel.h>
#include <ksched.h>
#include <wait_q.h>
#ifdef CONFIG_USERSPACE
extern u8_t *z_priv_stack_find(void *obj);
#endif
/* An initial context, to be "restored" by z_arm_pendsv(), is put at the other
* end of the stack, and thus reusable by the stack when not needed anymore.
*
* The initial context is an exception stack frame (ESF) since exiting the
* PendSV exception will want to pop an ESF. Even though the lsb of an
* instruction address to jump to must always be set (the CPU always
* runs in thumb mode), the ESF expects the real address of the instruction,
* with the lsb *not* set (instructions are always aligned on 16 bit
* halfwords). Since the compiler automatically sets the lsb of function
* addresses, we have to unset it manually before storing it in the 'pc' field
* of the ESF.
*/
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
char *stackEnd;
/* Offset between the top of stack and the high end of stack area. */
u32_t top_of_stack_offset = 0U;
Z_ASSERT_VALID_PRIO(priority, pEntry);
#if defined(CONFIG_USERSPACE)
/* Truncate the stack size to align with the MPU region granularity.
* This is done proactively to account for the case when the thread
* switches to user mode (thus, its stack area will need to be MPU-
* programmed to be assigned unprivileged RW access permission).
*/
stackSize &= ~(CONFIG_ARM_MPU_REGION_MIN_ALIGN_AND_SIZE - 1);
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* Reserve space on top of stack for local data. */
u32_t p_local_data = STACK_ROUND_DOWN(pStackMem + stackSize
- sizeof(*thread->userspace_local_data));
thread->userspace_local_data =
(struct _thread_userspace_local_data *)(p_local_data);
/* Top of actual stack must be moved below the user local data. */
top_of_stack_offset = (u32_t)
(pStackMem + stackSize - ((char *)p_local_data));
#endif /* CONFIG_THREAD_USERSPACE_LOCAL_DATA */
#endif /* CONFIG_USERSPACE */
#if defined(CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT) \
&& defined(CONFIG_USERSPACE)
/* This is required to work around the case where the thread
* is created without using the K_THREAD_STACK_SIZEOF() macro in
* k_thread_create(). If K_THREAD_STACK_SIZEOF() is used, the
* guard size has already been taken out of stackSize.
*/
stackSize -= MPU_GUARD_ALIGN_AND_SIZE;
#endif
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING) \
&& defined(CONFIG_MPU_STACK_GUARD)
/* For a thread which intends to use the FP services, it is required to
* allocate a wider MPU guard region, to always successfully detect an
* overflow of the stack.
*
* Note that the wider MPU region requires re-adjusting stack_info.start
* and stack_info.size.
*
*/
if ((options & K_FP_REGS) != 0) {
pStackMem += MPU_GUARD_ALIGN_AND_SIZE_FLOAT
- MPU_GUARD_ALIGN_AND_SIZE;
stackSize -= MPU_GUARD_ALIGN_AND_SIZE_FLOAT
- MPU_GUARD_ALIGN_AND_SIZE;
}
#endif
stackEnd = pStackMem + stackSize;
struct __esf *pInitCtx;
z_new_thread_init(thread, pStackMem, stackSize, priority,
options);
/* Carve the thread entry struct from the "base" of the stack
*
* The initial carved stack frame only needs to contain the basic
* stack frame (state context), because no FP operations have been
* performed yet for this thread.
*/
pInitCtx = (struct __esf *)(STACK_ROUND_DOWN(stackEnd -
(char *)top_of_stack_offset - sizeof(struct __basic_sf)));
#if defined(CONFIG_USERSPACE)
if ((options & K_USER) != 0) {
pInitCtx->basic.pc = (u32_t)arch_user_mode_enter;
} else {
pInitCtx->basic.pc = (u32_t)z_thread_entry;
}
#else
pInitCtx->basic.pc = (u32_t)z_thread_entry;
#endif
#if defined(CONFIG_CPU_CORTEX_M)
/* the ESF expects the real instruction address, so clear the thumb bit (LSB) */
pInitCtx->basic.pc &= 0xfffffffe;
#endif
pInitCtx->basic.a1 = (u32_t)pEntry;
pInitCtx->basic.a2 = (u32_t)parameter1;
pInitCtx->basic.a3 = (u32_t)parameter2;
pInitCtx->basic.a4 = (u32_t)parameter3;
pInitCtx->basic.xpsr =
0x01000000UL; /* clear all flags; the thumb bit must be 1, even though EPSR is RO */
thread->callee_saved.psp = (u32_t)pInitCtx;
#if defined(CONFIG_CPU_CORTEX_R)
pInitCtx->basic.lr = (u32_t)pInitCtx->basic.pc;
thread->callee_saved.spsr = A_BIT | T_BIT | MODE_SYS;
thread->callee_saved.lr = (u32_t)pInitCtx->basic.pc;
#endif
thread->arch.basepri = 0;
#if defined(CONFIG_USERSPACE) || defined(CONFIG_FP_SHARING)
thread->arch.mode = 0;
#if defined(CONFIG_USERSPACE)
thread->arch.priv_stack_start = 0;
#endif
#endif
/* swap_return_value can contain garbage */
/*
* initial values in all other registers/thread entries are
* irrelevant.
*/
}
#ifdef CONFIG_USERSPACE
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
/* Set up privileged stack before entering user mode */
_current->arch.priv_stack_start =
(u32_t)z_priv_stack_find(_current->stack_obj);
#if defined(CONFIG_MPU_STACK_GUARD)
/* Stack guard area reserved at the bottom of the thread's
* privileged stack. Adjust the available (writable) stack
* buffer area accordingly.
*/
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
_current->arch.priv_stack_start +=
(_current->base.user_options & K_FP_REGS) ?
MPU_GUARD_ALIGN_AND_SIZE_FLOAT : MPU_GUARD_ALIGN_AND_SIZE;
#else
_current->arch.priv_stack_start += MPU_GUARD_ALIGN_AND_SIZE;
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
#endif /* CONFIG_MPU_STACK_GUARD */
z_arm_userspace_enter(user_entry, p1, p2, p3,
(u32_t)_current->stack_info.start,
_current->stack_info.size);
CODE_UNREACHABLE;
}
#endif
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/*
* @brief Configure ARM built-in stack guard
*
* This function configures per thread stack guards by reprogramming
* the built-in Process Stack Pointer Limit Register (PSPLIM).
* The functionality is meant to be used during context switch.
*
* @param thread thread info data structure.
*/
void configure_builtin_stack_guard(struct k_thread *thread)
{
#if defined(CONFIG_USERSPACE)
if ((thread->arch.mode & CONTROL_nPRIV_Msk) != 0) {
/* Only configure stack limit for threads in privileged mode
* (i.e supervisor threads or user threads doing system call).
* User threads executing in user mode do not require a stack
* limit protection.
*/
__set_PSPLIM(0);
return;
}
/* Only configure PSPLIM to guard the privileged stack area, if
* the thread is currently using it, otherwise guard the default
* thread stack. Note that the conditional check relies on the
* thread privileged stack being allocated in higher memory area
* than the default thread stack (ensured by design).
*/
u32_t guard_start =
((thread->arch.priv_stack_start) &&
(__get_PSP() >= thread->arch.priv_stack_start)) ?
(u32_t)thread->arch.priv_stack_start :
(u32_t)thread->stack_obj;
__ASSERT(thread->stack_info.start == ((u32_t)thread->stack_obj),
"stack_info.start does not point to the start of the"
"thread allocated area.");
#else
u32_t guard_start = thread->stack_info.start;
#endif
#if defined(CONFIG_CPU_CORTEX_M_HAS_SPLIM)
__set_PSPLIM(guard_start);
#else
#error "Built-in PSP limit checks not supported by HW"
#endif
}
#endif /* CONFIG_BUILTIN_STACK_GUARD */
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
#define IS_MPU_GUARD_VIOLATION(guard_start, guard_len, fault_addr, stack_ptr) \
((fault_addr == -EINVAL) ? \
((fault_addr >= guard_start) && \
(fault_addr < (guard_start + guard_len)) && \
(stack_ptr < (guard_start + guard_len))) \
: \
(stack_ptr < (guard_start + guard_len)))
/**
* @brief Assess occurrence of current thread's stack corruption
*
* This function assesses whether a memory fault (on a
* given memory address) is the result of stack memory corruption of
* the current thread.
*
* Thread stack corruption for supervisor threads or user threads in
* privilege mode (when User Space is supported) is reported upon an
* attempt to access the stack guard area (if MPU Stack Guard feature
* is supported). Additionally the current PSP (process stack pointer)
* must be pointing inside or below the guard area.
*
* Thread stack corruption for user threads in user mode is reported,
* if the current PSP is pointing below the start of the current
* thread's stack.
*
* Notes:
* - we assume a fully descending stack,
* - we assume a stacking error has occurred,
* - the function shall be called when handling MemManage and Bus fault,
* and only if a Stacking error has been reported.
*
* If stack corruption is detected, the function returns the lowest
* allowed address where the Stack Pointer can safely point to, to
* prevent from errors when un-stacking the corrupted stack frame
* upon exception return.
*
* @param fault_addr memory address on which memory access violation
* has been reported. It can be invalid (-EINVAL),
* if only Stacking error has been reported.
* @param psp current address the PSP points to
*
* @return The lowest allowed stack frame pointer, if error is a
* thread stack corruption, otherwise return 0.
*/
u32_t z_check_thread_stack_fail(const u32_t fault_addr, const u32_t psp)
{
const struct k_thread *thread = _current;
if (!thread) {
return 0;
}
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
u32_t guard_len = (thread->base.user_options & K_FP_REGS) ?
MPU_GUARD_ALIGN_AND_SIZE_FLOAT : MPU_GUARD_ALIGN_AND_SIZE;
#else
u32_t guard_len = MPU_GUARD_ALIGN_AND_SIZE;
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
/* User thread */
if ((__get_CONTROL() & CONTROL_nPRIV_Msk) == 0) {
/* User thread in privilege mode */
if (IS_MPU_GUARD_VIOLATION(
thread->arch.priv_stack_start - guard_len,
guard_len,
fault_addr, psp)) {
/* Thread's privilege stack corruption */
return thread->arch.priv_stack_start;
}
} else {
if (psp < (u32_t)thread->stack_obj) {
/* Thread's user stack corruption */
return (u32_t)thread->stack_obj;
}
}
} else {
/* Supervisor thread */
if (IS_MPU_GUARD_VIOLATION(thread->stack_info.start -
guard_len,
guard_len,
fault_addr, psp)) {
/* Supervisor thread stack corruption */
return thread->stack_info.start;
}
}
#else /* CONFIG_USERSPACE */
if (IS_MPU_GUARD_VIOLATION(thread->stack_info.start - guard_len,
guard_len,
fault_addr, psp)) {
/* Thread stack corruption */
return thread->stack_info.start;
}
#endif /* CONFIG_USERSPACE */
return 0;
}
#endif /* CONFIG_MPU_STACK_GUARD || CONFIG_USERSPACE */
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
{
if (thread != _current) {
return -EINVAL;
}
if (arch_is_in_isr()) {
return -EINVAL;
}
/* Disable all floating point capabilities for the thread */
/* The K_FP_REGS flag is used in swap and in the stack-check-fail path.
* Locking interrupts here prevents a possible context switch or MPU
* fault from taking an outdated thread user_options flag into
* account.
*/
int key = arch_irq_lock();
thread->base.user_options &= ~K_FP_REGS;
__set_CONTROL(__get_CONTROL() & (~CONTROL_FPCA_Msk));
/* No need to add an ISB barrier after setting the CONTROL
* register; arch_irq_unlock() already adds one.
*/
arch_irq_unlock(key);
return 0;
}
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */
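As a usage note, arch_float_disable() above is reached through the public k_float_disable() API; a brief sketch (assuming that wrapper) of a thread releasing its FP context once its floating-point work is done:

#include <kernel.h>

void fp_work_done(void)
{
	/* ... final floating-point computations ... */

	/* Drop this thread's FP context. Per the checks above, this
	 * fails with -EINVAL if called for another thread or from an ISR.
	 */
	if (k_float_disable(k_current_get()) != 0) {
		/* FP context could not be released */
	}
}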
void arch_switch_to_main_thread(struct k_thread *main_thread,
k_thread_stack_t *main_stack,
size_t main_stack_size,
k_thread_entry_t _main)
{
#if defined(CONFIG_FLOAT)
/* Initialize the Floating Point Status and Control Register when in
* Unshared FP Registers mode (In Shared FP Registers mode, FPSCR is
* initialized at thread creation for threads that make use of the FP).
*/
__set_FPSCR(0);
#if defined(CONFIG_FP_SHARING)
/* In Sharing mode clearing FPSCR may set the CONTROL.FPCA flag. */
__set_CONTROL(__get_CONTROL() & (~(CONTROL_FPCA_Msk)));
__ISB();
#endif /* CONFIG_FP_SHARING */
#endif /* CONFIG_FLOAT */
#ifdef CONFIG_ARM_MPU
/* Configure static memory map. This will program MPU regions,
* to set up access permissions for fixed memory sections, such
* as Application Memory or No-Cacheable SRAM area.
*
* This function is invoked once, upon system initialization.
*/
z_arm_configure_static_mpu_regions();
#endif
/* get high address of the stack, i.e. its start (stack grows down) */
char *start_of_main_stack;
start_of_main_stack =
Z_THREAD_STACK_BUFFER(main_stack) + main_stack_size;
start_of_main_stack = (char *)STACK_ROUND_DOWN(start_of_main_stack);
_current = main_thread;
#ifdef CONFIG_TRACING
sys_trace_thread_switched_in();
#endif
/* the ready queue cache already contains the main thread */
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
/*
* If stack protection is enabled, make sure to set it
* before jumping to thread entry function
*/
z_arm_configure_dynamic_mpu_regions(main_thread);
#endif
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* Set PSPLIM register for built-in stack guarding of main thread. */
#if defined(CONFIG_CPU_CORTEX_M_HAS_SPLIM)
__set_PSPLIM((u32_t)main_stack);
#else
#error "Built-in PSP limit checks not supported by HW"
#endif
#endif /* CONFIG_BUILTIN_STACK_GUARD */
/*
* Set PSP to the highest address of the main stack
* before enabling interrupts and jumping to main.
*/
__asm__ volatile (
"mov r0, %0\n\t" /* Store _main in R0 */
#if defined(CONFIG_CPU_CORTEX_M)
"msr PSP, %1\n\t" /* __set_PSP(start_of_main_stack) */
#endif
"movs r1, #0\n\t"
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE) \
|| defined(CONFIG_ARMV7_R)
"cpsie i\n\t" /* __enable_irq() */
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
"cpsie if\n\t" /* __enable_irq(); __enable_fault_irq() */
"msr BASEPRI, r1\n\t" /* __set_BASEPRI(0) */
#else
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
"isb\n\t"
"movs r2, #0\n\t"
"movs r3, #0\n\t"
"bl z_thread_entry\n\t" /* z_thread_entry(_main, 0, 0, 0); */
:
: "r" (_main), "r" (start_of_main_stack)
: "r0" /* not to be overwritten by msr PSP, %1 */
);
CODE_UNREACHABLE;
}
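To make the frame carve-out in arch_new_thread() concrete, here is a small sketch of the equivalent pointer arithmetic; the 8-byte alignment and the struct layout are illustrative stand-ins for STACK_ROUND_DOWN() and struct __basic_sf:

#include <stdint.h>

/* Illustrative mirror of the basic exception stack frame. */
struct basic_sf_sketch {
	uint32_t a1, a2, a3, a4; /* r0-r3: entry point plus three arguments */
	uint32_t ip, lr, pc, xpsr;
};

/* Carve an initial frame from the high end of the stack area,
 * rounding the result down to the assumed stack alignment.
 */
static inline struct basic_sf_sketch *carve_initial_frame(char *stack_end)
{
	uintptr_t top = (uintptr_t)stack_end - sizeof(struct basic_sf_sketch);

	top &= ~(uintptr_t)7; /* STACK_ROUND_DOWN() equivalent for 8-byte alignment */
	return (struct basic_sf_sketch *)top;
}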


@@ -1,634 +0,0 @@
/*
* Userspace and service handler hooks
*
* Copyright (c) 2017 Linaro Limited
*
* SPDX-License-Identifier: Apache-2.0
*
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <offsets_short.h>
#include <syscall.h>
#include <arch/arm/aarch32/exc.h>
_ASM_FILE_PROLOGUE
GTEXT(z_arm_userspace_enter)
GTEXT(z_arm_do_syscall)
GTEXT(arch_user_string_nlen)
GTEXT(z_arm_user_string_nlen_fault_start)
GTEXT(z_arm_user_string_nlen_fault_end)
GTEXT(z_arm_user_string_nlen_fixup)
GDATA(_kernel)
/* Imports */
GDATA(_k_syscall_table)
/**
*
* User space entry function
*
* This function is the entry point to user mode from privileged execution.
* The conversion is one way, and threads which transition to user mode do
* not transition back later, unless they are doing system calls.
*
* The function is invoked as:
* z_arm_userspace_enter(user_entry, p1, p2, p3,
* stack_info.start, stack_info.size);
*/
SECTION_FUNC(TEXT,z_arm_userspace_enter)
/* move user_entry to lr */
mov lr, r0
/* prepare to set stack to privileged stack */
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* move p1 to ip */
mov ip, r1
ldr r1, =_thread_offset_to_priv_stack_start
ldr r0, [r0, r1] /* priv stack ptr */
ldr r1, =CONFIG_PRIVILEGED_STACK_SIZE
add r0, r0, r1
/* Restore p1 from ip */
mov r1, ip
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr r0, [r0, #_thread_offset_to_priv_stack_start] /* priv stack ptr */
ldr ip, =CONFIG_PRIVILEGED_STACK_SIZE
add r0, r0, ip
#endif
/* store current stack pointer to ip
* the current stack pointer is needed to retrieve
* stack_info.start and stack_info.size
*/
mov ip, sp
/* set stack to privileged stack
*
* Note [applies only when CONFIG_BUILTIN_STACK_GUARD is enabled]:
* modifying PSP via MSR instruction is not subject to stack limit
* checking, so we do not need to clear PSPLIM before setting PSP.
* The operation is safe since, by design, the privileged stack is
* located in memory higher than the default (user) thread stack.
*/
msr PSP, r0
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* At this point the privileged stack is not yet protected by PSPLIM.
* Since we have just switched to the top of the privileged stack, we
* are safe, as long as the stack can accommodate the maximum exception
* stack frame.
*/
/* set stack pointer limit to the start of the priv stack */
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r0, [r0, #_thread_offset_to_priv_stack_start] /* priv stack ptr */
msr PSPLIM, r0
#endif
/* push args to stack */
push {r1,r2,r3,lr}
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
mov r1, ip
push {r0,r1}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
push {r0,ip}
#endif
/* Re-program dynamic memory map.
*
* Important note:
* z_arm_configure_dynamic_mpu_regions() may re-program the MPU Stack Guard
* to guard the privileged stack against overflows (if building with option
* CONFIG_MPU_STACK_GUARD). There is a risk of actually overflowing the
* stack while doing the re-programming. We minimize the risk by placing
* this function immediately after we have switched to the privileged stack
* so that the whole stack area is available for this critical operation.
*
* Note that the risk for overflow is higher if using the normal thread
* stack, since we do not control how much stack is actually left when
* the user invokes z_arm_userspace_enter().
*/
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
bl z_arm_configure_dynamic_mpu_regions
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0,r3}
/* load up stack info from user stack */
ldr r0, [r3]
ldr r3, [r3, #4]
mov ip, r3
push {r0,r3}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
pop {r0,ip}
/* load up stack info from user stack */
ldr r0, [ip]
ldr ip, [ip, #4]
push {r0,ip}
#endif
/* Clear the user stack area to clean out privileged data, from right
* past the guard up to the end.
*/
mov r2, ip
#ifdef CONFIG_INIT_STACKS
ldr r1,=0xaaaaaaaa
#else
eors.n r1, r1
#endif
bl memset
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1}
mov ip, r1
#elif (defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE))
pop {r0,ip}
#endif
/* r0 contains user stack start, ip contains user stack size */
add r0, r0, ip /* calculate top of stack */
/* pop remaining arguments from stack before switching stacks */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* Use r4 to pop lr, then restore r4 */
mov ip, r4
pop {r1,r2,r3,r4}
mov lr, r4
mov r4, ip
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
pop {r1,r2,r3,lr}
#endif
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/*
* Guard the default (user) stack until thread drops privileges.
*
* Notes:
* PSPLIM is configured *before* PSP switches to the default (user) stack.
* This is safe, since the user stack is located, by design, in a lower
* memory area compared to the privileged stack.
*
* However, we need to prevent a context-switch to occur, because that
* would re-configure PSPLIM to guard the privileged stack; we enforce
* a PendSV lock for this purpose.
*
* Between PSPLIM update and PSP switch, the privileged stack will be
* left un-guarded; this is safe, as long as the privileged stack is
* large enough to accommodate a maximum exception stack frame.
*/
/* Temporarily store current IRQ locking status in ip */
mrs ip, BASEPRI
push {r0, ip}
/* Lock PendSV while reprogramming PSP and PSPLIM */
mov r0, #_EXC_PENDSV_PRIO_MASK
msr BASEPRI, r0
isb
/* Set PSPLIM to guard the thread's user stack. */
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r0, [r0, #_thread_offset_to_stack_info_start]
msr PSPLIM, r0
pop {r0, ip}
#endif
/* set stack to user stack */
msr PSP, r0
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* Restore interrupt lock status */
msr BASEPRI, ip
isb
#endif
/* restore r0 */
mov r0, lr
#ifdef CONFIG_EXECUTION_BENCHMARKING
stm sp!,{r0-r3} /* Save regs r0 to r3 on stack */
push {r0, lr}
bl read_timer_end_of_userspace_enter
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r3}
mov lr,r3
#else
pop {r0, lr}
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
ldm sp!,{r0-r3} /* Restore r0 to r3 regs */
#endif /* CONFIG_EXECUTION_BENCHMARKING */
/* change processor mode to unprivileged */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
push {r0, r1, r2, r3}
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r1, =_thread_offset_to_mode
ldr r1, [r0, r1]
movs r2, #1
orrs r1, r1, r2
mrs r3, CONTROL
orrs r3, r3, r2
mov ip, r3
/* Store (unprivileged) mode in thread's mode state variable */
ldr r2, =_thread_offset_to_mode
str r1, [r0, r2]
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
push {r0, r1}
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r1, [r0, #_thread_offset_to_mode]
orrs r1, r1, #1
mrs ip, CONTROL
orrs ip, ip, #1
/* Store (unprivileged) mode in thread's mode state variable */
str r1, [r0, #_thread_offset_to_mode]
#endif
dsb
msr CONTROL, ip
/* ISB is not strictly necessary here (stack pointer is not being
* touched), but it's recommended to avoid executing pre-fetched
* instructions with the previous privilege.
*/
isb
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
pop {r0, r1, r2, r3}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
pop {r0, r1}
#endif
/* jump to z_thread_entry entry */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
push {r0, r1}
ldr r0, =z_thread_entry
mov ip, r0
pop {r0, r1}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr ip, =z_thread_entry
#endif
bx ip
/**
*
* Userspace system call function
*
* This function is used to do system calls from unprivileged code. This
* function is responsible for the following:
* 1) Fixing up bad syscalls
* 2) Configuring privileged stack and loading up stack arguments
* 3) Dispatching the system call
* 4) Restoring stack and calling back to the caller of the SVC
*
*/
SECTION_FUNC(TEXT, z_arm_do_syscall)
/* Note [when using MPU-based stack guarding]:
* The function executes in privileged mode. This implies that we
* must not use the thread's default unprivileged stack
* (i.e. push to or pop from it), to avoid possible stack corruption.
*
* Rationale: since we execute in PRIV mode and no MPU guard
* is guarding the end of the default stack, we won't be able
* to detect any stack overflows.
*
* Note [when using built-in stack limit checking on ARMv8-M]:
* At this point PSPLIM is already configured to guard the default (user)
* stack, so pushing to the default thread's stack is safe.
*/
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* save current stack pointer (user stack) */
mov ip, sp
/* temporarily push to user stack */
push {r0,r1}
/* setup privileged stack */
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
adds r0, r0, #_thread_offset_to_priv_stack_start
ldr r0, [r0] /* priv stack ptr */
ldr r1, =CONFIG_PRIVILEGED_STACK_SIZE
add r0, r1
/* Store current SP and LR at the beginning of the priv stack */
subs r0, #8
mov r1, ip
str r1, [r0, #0]
mov r1, lr
str r1, [r0, #4]
mov ip, r0
/* Restore user stack and original r0, r1 */
pop {r0, r1}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* setup privileged stack */
ldr ip, =_kernel
ldr ip, [ip, #_kernel_offset_to_current]
ldr ip, [ip, #_thread_offset_to_priv_stack_start] /* priv stack ptr */
add ip, #CONFIG_PRIVILEGED_STACK_SIZE
/* Store current SP and LR at the beginning of the priv stack */
subs ip, #8
str sp, [ip, #0]
str lr, [ip, #4]
#endif
/* switch to privileged stack */
msr PSP, ip
/* Note (applies when using stack limit checking):
* We do not need to lock IRQs after switching PSP to the privileged stack;
* PSPLIM is guarding the default (user) stack, which, by design, is
* located at *lower* memory area. Since we switch to the top of the
* privileged stack we are safe, as long as the stack can accommodate
* the maximum exception stack frame.
*/
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* Set stack pointer limit (needed in privileged mode) */
ldr ip, =_kernel
ldr ip, [ip, #_kernel_offset_to_current]
ldr ip, [ip, #_thread_offset_to_priv_stack_start] /* priv stack ptr */
msr PSPLIM, ip
#endif
/*
* r0-r5 contain arguments
* r6 contains call_id
* r8 contains original LR
*/
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* save r0, r1 to ip, lr */
mov ip, r0
mov lr, r1
ldr r0, =K_SYSCALL_BAD
cmp r6, r0
bne valid_syscall
/* BAD SYSCALL path */
/* fixup stack frame on the privileged stack, adding ssf */
mov r1, sp
push {r4,r5}
/* ssf is present in r1 (sp) */
push {r1,lr}
/* restore r0, r1 */
mov r0, ip
mov r1, lr
b dispatch_syscall
valid_syscall:
/* push args to complete stack frame */
push {r4,r5}
dispatch_syscall:
/* original r0 is saved in ip */
ldr r0, =_k_syscall_table
lsls r6, #2
add r0, r6
ldr r0, [r0] /* load table address */
/* swap ip and r0, restore r1 from lr */
mov r1, ip
mov ip, r0
mov r0, r1
mov r1, lr
/* execute function from dispatch table */
blx ip
/* restore LR
* r0 holds the return value and needs to be preserved
*/
mov ip, r0
mov r0, sp
adds r0, #12
ldr r0, [r0]
mov lr, r0
/* Restore r0 */
mov r0, ip
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr ip, =K_SYSCALL_BAD
cmp r6, ip
bne valid_syscall
/* BAD SYSCALL path */
/* fixup stack frame on the privileged stack, adding ssf */
mov ip, sp
push {r4,r5,ip,lr}
b dispatch_syscall
valid_syscall:
/* push args to complete stack frame */
push {r4,r5}
dispatch_syscall:
ldr ip, =_k_syscall_table
lsl r6, #2
add ip, r6
ldr ip, [ip] /* load table address */
/* execute function from dispatch table */
blx ip
/* restore LR */
ldr lr, [sp,#12]
#endif
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/*
* Guard the default (user) stack until thread drops privileges.
*
* Notes:
* PSPLIM is configured *before* PSP switches to the default (user) stack.
* This is safe, since the user stack is located, by design, in a lower
* memory area compared to the privileged stack.
*
* However, we need to prevent a context-switch to occur, because that
* would re-configure PSPLIM to guard the privileged stack; we enforce
* a PendSV lock for this purpose.
*
* Between PSPLIM update and PSP switch, the privileged stack will be
* left un-guarded; this is safe, as long as the privileged stack is
* large enough to accommodate a maximum exception stack frame.
*/
/* Temporarily store current IRQ locking status in r2 */
mrs r2, BASEPRI
/* Lock PendSV while reprogramming PSP and PSPLIM */
mov r3, #_EXC_PENDSV_PRIO_MASK
msr BASEPRI, r3
isb
/* Set PSPLIM to guard the thread's user stack. */
ldr r3, =_kernel
ldr r3, [r3, #_kernel_offset_to_current]
ldr r3, [r3, #_thread_offset_to_stack_info_start] /* stack_info.start */
msr PSPLIM, r3
#endif
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* set stack back to unprivileged stack */
mov ip, r0
mov r0, sp
ldr r0, [r0,#8]
msr PSP, r0
/* Restore r0 */
mov r0, ip
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* set stack back to unprivileged stack */
ldr ip, [sp,#8]
msr PSP, ip
#endif
#if defined(CONFIG_BUILTIN_STACK_GUARD)
/* Restore interrupt lock status */
msr BASEPRI, r2
isb
#endif
push {r0, r1}
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
push {r2, r3}
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r2, =_thread_offset_to_mode
ldr r1, [r0, r2]
movs r3, #1
orrs r1, r1, r3
/* Store (unprivileged) mode in thread's mode state variable */
str r1, [r0, r2]
dsb
/* drop privileges by setting bit 0 in CONTROL */
mrs r2, CONTROL
orrs r2, r2, r3
msr CONTROL, r2
pop {r2, r3}
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
ldr r0, =_kernel
ldr r0, [r0, #_kernel_offset_to_current]
ldr r1, [r0, #_thread_offset_to_mode]
orrs r1, r1, #1
/* Store (unprivileged) mode in thread's mode state variable */
str r1, [r0, #_thread_offset_to_mode]
dsb
/* drop privileges by setting bit 0 in CONTROL */
mrs ip, CONTROL
orrs ip, ip, #1
msr CONTROL, ip
#endif
/* ISB is not strictly necessary here (stack pointer is not being
* touched), but it's recommended to avoid executing pre-fetched
* instructions with the previous privilege.
*/
isb
pop {r0, r1}
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
/* Zero out volatile (caller-saved) registers so as to not leak state from
* kernel mode. The C calling convention for the syscall handler will
* restore the others to original values.
*/
movs r2, #0
movs r3, #0
/*
* return to the original function that called the SVC; add 1 to force
* thumb mode
*/
/* Save return value temporarily to ip */
mov ip, r0
mov r0, r8
movs r1, #1
orrs r0, r0, r1
/* swap ip, r0 */
mov r1, ip
mov ip, r0
mov r0, r1
movs r1, #0
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* Zero out volatile (caller-saved) registers so as to not leak state from
* kernel mode. The C calling convention for the syscall handler will
* restore the others to original values.
*/
mov r1, #0
mov r2, #0
mov r3, #0
/*
* return to the original function that called the SVC; add 1 to force
* thumb mode
*/
mov ip, r8
orrs ip, ip, #1
#endif
bx ip
/*
* size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err_arg)
*/
SECTION_FUNC(TEXT, arch_user_string_nlen)
push {r0, r1, r2, r4, r5, lr}
/* sp+4 is error value, init to -1 */
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
ldr r3, =-1
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
mov.w r3, #-1
#endif
str r3, [sp, #4]
/* Perform string length calculation */
movs r3, #0 /* r3 is the counter */
strlen_loop:
z_arm_user_string_nlen_fault_start:
/* r0 contains the string. r5 = *(r0 + r3). This could fault. */
ldrb r5, [r0, r3]
z_arm_user_string_nlen_fault_end:
#if defined(CONFIG_ARMV6_M_ARMV8_M_BASELINE)
cmp r5, #0
beq strlen_done
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
cbz r5, strlen_done
#endif
cmp r3, r1
beq.n strlen_done
adds r3, #1
b.n strlen_loop
strlen_done:
/* Move length calculation from r3 to r0 (return value register) */
mov r0, r3
/* Clear error value since we succeeded */
movs r1, #0
str r1, [sp, #4]
z_arm_user_string_nlen_fixup:
/* Write error value to err pointer parameter */
ldr r1, [sp, #4]
str r1, [r2, #0]
add sp, #12
pop {r4, r5, pc}
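Functionally, the routine above matches the C sketch below; the real version must stay in assembly so the fault handler can recognize a faulting load between the z_arm_user_string_nlen_fault_start/_end labels and resume execution at z_arm_user_string_nlen_fixup with the error value still at -1:

#include <stddef.h>

/* C-level sketch of arch_user_string_nlen(). The load of s[len] is
 * the one instruction that may fault on a bad user pointer.
 */
size_t user_string_nlen_sketch(const char *s, size_t maxsize, int *err)
{
	size_t len = 0;

	*err = -1; /* assume failure until the walk completes */
	for (;;) {
		char c = s[len]; /* potentially faulting access */

		if (c == '\0' || len == maxsize) {
			break;
		}
		len++;
	}
	*err = 0;
	return len;
}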


@@ -1,18 +0,0 @@
/*
* Copyright (c) 2019 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
_vector_start = .;
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(IRQ_VECTOR_TABLE))
KEEP(*(.vectors))
_vector_end = .;
KEEP(*(.openocd_dbg))
KEEP(*(".openocd_dbg.*"))

View File

@@ -1,23 +0,0 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
if (CONFIG_COVERAGE)
toolchain_cc_coverage()
endif ()
zephyr_library_sources(
cpu_idle.S
fatal.c
irq_manage.c
prep_c.c
reset.S
swap.c
swap_helper.S
thread.c
vector_table.S
)
zephyr_library_sources_ifdef(CONFIG_GEN_SW_ISR_TABLE isr_wrapper.S)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_ARM_MMU arm_mmu.c)


@@ -1,174 +0,0 @@
# ARM64 core configuration options
# Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
# SPDX-License-Identifier: Apache-2.0
if ARM64
config CPU_CORTEX
bool
help
This option signifies the use of a CPU of the Cortex family.
config CPU_CORTEX_A
bool
select CPU_CORTEX
select HAS_FLASH_LOAD_OFFSET
help
This option signifies the use of a CPU of the Cortex-A family.
config CPU_CORTEX_A53
bool
select CPU_CORTEX_A
select ARMV8_A
help
This option signifies the use of a Cortex-A53 CPU
config SWITCH_TO_EL1
bool "Switch to EL1 at boot"
default y
help
This option indicates that we want to switch to EL1 at boot. Only
switching to EL1 from EL3 is supported.
config NUM_IRQS
int
config MAIN_STACK_SIZE
default 4096
config IDLE_STACK_SIZE
default 4096
config ISR_STACK_SIZE
default 4096
config TEST_EXTRA_STACKSIZE
default 2048
config SYSTEM_WORKQUEUE_STACK_SIZE
default 4096
config OFFLOAD_WORKQUEUE_STACK_SIZE
default 4096
config CMSIS_THREAD_MAX_STACK_SIZE
default 4096
config CMSIS_V2_THREAD_MAX_STACK_SIZE
default 4096
config CMSIS_V2_THREAD_DYNAMIC_STACK_SIZE
default 4096
config IPM_CONSOLE_STACK_SIZE
default 2048
if CPU_CORTEX_A
config ARMV8_A
bool
select ATOMIC_OPERATIONS_BUILTIN
help
This option signifies the use of an ARMv8-A processor
implementation.
From https://developer.arm.com/products/architecture/cpu-architecture/a-profile:
The Armv8-A architecture introduces the ability to use 64-bit and
32-bit Execution states, known as AArch64 and AArch32 respectively.
The AArch64 Execution state supports the A64 instruction set, holds
addresses in 64-bit registers and allows instructions in the base
instruction set to use 64-bit registers for their processing. The AArch32
Execution state is a 32-bit Execution state that preserves backwards
compatibility with the Armv7-A architecture and enhances that profile
so that it can support some features included in the AArch64 state.
It supports the T32 and A32 instruction sets.
config GEN_ISR_TABLES
default y
config GEN_IRQ_VECTOR_TABLE
default n
config ARM_MMU
bool "ARM MMU Support"
default y
help
Memory Management Unit support.
if ARM_MMU
config MAX_XLAT_TABLES
int "Maximum numbers of translation tables"
default 7
help
This option specifies the maximum number of translation tables,
excluding the base translation table. Based on this, translation
tables are allocated at compile time and used at runtime as needed.
If the runtime need exceeds the preallocated number of translation
tables, it will result in an assertion failure. The number of
translation tables required depends on how many discrete memory
regions (both normal and device memory) are present on a given
platform and how much granularity is required when assigning
attributes to these memory regions.
choice
prompt "Virtual address space size"
default ARM64_VA_BITS_32
help
Allows choosing one of multiple possible virtual address
space sizes. The number of translation table levels is determined
by the combination of page size and virtual address space size.
config ARM64_VA_BITS_32
bool "32-bit"
config ARM64_VA_BITS_36
bool "36-bit"
config ARM64_VA_BITS_42
bool "42-bit"
config ARM64_VA_BITS_48
bool "48-bit"
endchoice
config ARM64_VA_BITS
int
default 32 if ARM64_VA_BITS_32
default 36 if ARM64_VA_BITS_36
default 42 if ARM64_VA_BITS_42
default 48 if ARM64_VA_BITS_48
choice
prompt "Physical address space size"
default ARM64_PA_BITS_32
help
Choose the maximum physical address range that the kernel will
support.
config ARM64_PA_BITS_32
bool "32-bit"
config ARM64_PA_BITS_36
bool "36-bit"
config ARM64_PA_BITS_42
bool "42-bit"
config ARM64_PA_BITS_48
bool "48-bit"
endchoice
config ARM64_PA_BITS
int
default 32 if ARM64_PA_BITS_32
default 36 if ARM64_PA_BITS_36
default 42 if ARM64_PA_BITS_42
default 48 if ARM64_PA_BITS_48
endif #ARM_MMU
endif # CPU_CORTEX_A
endif # ARM64
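As an illustration, a hypothetical board wanting the MMU enabled with a 36-bit virtual and physical address space, plus a few extra translation tables, might set the following in its prj.conf (values are illustrative, not defaults taken from any real board):

CONFIG_ARM_MMU=y
CONFIG_ARM64_VA_BITS_36=y
CONFIG_ARM64_PA_BITS_36=y
CONFIG_MAX_XLAT_TABLES=10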


@@ -1,470 +0,0 @@
/*
* Copyright 2019 Broadcom
* The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <device.h>
#include <init.h>
#include <kernel.h>
#include <arch/arm/aarch64/cpu.h>
#include <arch/arm/aarch64/arm_mmu.h>
#include <linker/linker-defs.h>
#include <sys/util.h>
/* Set below flag to get debug prints */
#define MMU_DEBUG_PRINTS 0
/* To get prints from the MMU driver, it has to be initialized after the console driver */
#define MMU_DEBUG_PRIORITY 70
#if MMU_DEBUG_PRINTS
/* To dump page table entries while filling them, set DUMP_PTE macro */
#define DUMP_PTE 0
#define MMU_DEBUG(fmt, ...) printk(fmt, ##__VA_ARGS__)
#else
#define MMU_DEBUG(...)
#endif
/* We support only 4kB translation granule */
#define PAGE_SIZE_SHIFT 12U
#define PAGE_SIZE (1U << PAGE_SIZE_SHIFT)
#define XLAT_TABLE_SIZE_SHIFT PAGE_SIZE_SHIFT /* Size of one complete table */
#define XLAT_TABLE_SIZE (1U << XLAT_TABLE_SIZE_SHIFT)
#define XLAT_TABLE_ENTRY_SIZE_SHIFT 3U /* Each table entry is 8 bytes */
#define XLAT_TABLE_LEVEL_MAX 3U
#define XLAT_TABLE_ENTRIES_SHIFT \
(XLAT_TABLE_SIZE_SHIFT - XLAT_TABLE_ENTRY_SIZE_SHIFT)
#define XLAT_TABLE_ENTRIES (1U << XLAT_TABLE_ENTRIES_SHIFT)
/* Address size covered by each entry at given translation table level */
#define L3_XLAT_VA_SIZE_SHIFT PAGE_SIZE_SHIFT
#define L2_XLAT_VA_SIZE_SHIFT \
(L3_XLAT_VA_SIZE_SHIFT + XLAT_TABLE_ENTRIES_SHIFT)
#define L1_XLAT_VA_SIZE_SHIFT \
(L2_XLAT_VA_SIZE_SHIFT + XLAT_TABLE_ENTRIES_SHIFT)
#define L0_XLAT_VA_SIZE_SHIFT \
(L1_XLAT_VA_SIZE_SHIFT + XLAT_TABLE_ENTRIES_SHIFT)
#define LEVEL_TO_VA_SIZE_SHIFT(level) \
(PAGE_SIZE_SHIFT + (XLAT_TABLE_ENTRIES_SHIFT * \
(XLAT_TABLE_LEVEL_MAX - (level))))
/* Virtual Address Index within given translation table level */
#define XLAT_TABLE_VA_IDX(va_addr, level) \
((va_addr >> LEVEL_TO_VA_SIZE_SHIFT(level)) & (XLAT_TABLE_ENTRIES - 1))
/*
* Calculate the initial translation table level from CONFIG_ARM64_VA_BITS
* For a 4 KB page size,
* (va_bits <= 21) - base level 3
* (22 <= va_bits <= 30) - base level 2
* (31 <= va_bits <= 39) - base level 1
* (40 <= va_bits <= 48) - base level 0
*/
#define GET_XLAT_TABLE_BASE_LEVEL(va_bits) \
((va_bits > L0_XLAT_VA_SIZE_SHIFT) \
? 0U \
: (va_bits > L1_XLAT_VA_SIZE_SHIFT) \
? 1U \
: (va_bits > L2_XLAT_VA_SIZE_SHIFT) \
? 2U : 3U)
#define XLAT_TABLE_BASE_LEVEL GET_XLAT_TABLE_BASE_LEVEL(CONFIG_ARM64_VA_BITS)
#define GET_NUM_BASE_LEVEL_ENTRIES(va_bits) \
(1U << (va_bits - LEVEL_TO_VA_SIZE_SHIFT(XLAT_TABLE_BASE_LEVEL)))
#define NUM_BASE_LEVEL_ENTRIES GET_NUM_BASE_LEVEL_ENTRIES(CONFIG_ARM64_VA_BITS)
#if DUMP_PTE
#define L0_SPACE ""
#define L1_SPACE " "
#define L2_SPACE " "
#define L3_SPACE " "
#define XLAT_TABLE_LEVEL_SPACE(level) \
(((level) == 0) ? L0_SPACE : \
((level) == 1) ? L1_SPACE : \
((level) == 2) ? L2_SPACE : L3_SPACE)
#endif
static u64_t base_xlat_table[NUM_BASE_LEVEL_ENTRIES]
__aligned(NUM_BASE_LEVEL_ENTRIES * sizeof(u64_t));
static u64_t xlat_tables[CONFIG_MAX_XLAT_TABLES][XLAT_TABLE_ENTRIES]
__aligned(XLAT_TABLE_ENTRIES * sizeof(u64_t));
/* Translation table control register settings */
static u64_t get_tcr(int el)
{
u64_t tcr;
u64_t pa_bits = CONFIG_ARM64_PA_BITS;
u64_t va_bits = CONFIG_ARM64_VA_BITS;
u64_t tcr_ps_bits;
switch (pa_bits) {
case 48:
tcr_ps_bits = TCR_PS_BITS_256TB;
break;
case 44:
tcr_ps_bits = TCR_PS_BITS_16TB;
break;
case 42:
tcr_ps_bits = TCR_PS_BITS_4TB;
break;
case 40:
tcr_ps_bits = TCR_PS_BITS_1TB;
break;
case 36:
tcr_ps_bits = TCR_PS_BITS_64GB;
break;
default:
tcr_ps_bits = TCR_PS_BITS_4GB;
break;
}
if (el == 1) {
tcr = (tcr_ps_bits << TCR_EL1_IPS_SHIFT);
/*
* TCR_EL1.EPD1: Disable translation table walk for addresses
* that are translated using TTBR1_EL1.
*/
tcr |= TCR_EPD1_DISABLE;
} else
tcr = (tcr_ps_bits << TCR_EL3_PS_SHIFT);
tcr |= TCR_T0SZ(va_bits);
/*
* Translation table walk is cacheable, inner/outer WBWA and
* inner shareable
*/
tcr |= TCR_TG0_4K | TCR_SHARED_INNER | TCR_ORGN_WBWA | TCR_IRGN_WBWA;
return tcr;
}
static int pte_desc_type(u64_t *pte)
{
return *pte & PTE_DESC_TYPE_MASK;
}
static u64_t *calculate_pte_index(u64_t addr, int level)
{
int base_level = XLAT_TABLE_BASE_LEVEL;
u64_t *pte;
u64_t idx;
unsigned int i;
/* Walk through all translation tables to find pte index */
pte = (u64_t *)base_xlat_table;
for (i = base_level; i <= XLAT_TABLE_LEVEL_MAX; i++) {
idx = XLAT_TABLE_VA_IDX(addr, i);
pte += idx;
/* Found pte index */
if (i == level)
return pte;
/* if PTE is not table desc, can't traverse */
if (pte_desc_type(pte) != PTE_TABLE_DESC)
return NULL;
/* Move to the next translation table level */
pte = (u64_t *)(*pte & 0x0000fffffffff000ULL);
}
return NULL;
}
static void set_pte_table_desc(u64_t *pte, u64_t *table, unsigned int level)
{
#if DUMP_PTE
MMU_DEBUG("%s", XLAT_TABLE_LEVEL_SPACE(level));
MMU_DEBUG("%p: [Table] %p\n", pte, table);
#endif
/* Point pte to new table */
*pte = PTE_TABLE_DESC | (u64_t)table;
}
static void set_pte_block_desc(u64_t *pte, u64_t addr_pa,
unsigned int attrs, unsigned int level)
{
u64_t desc = addr_pa;
unsigned int mem_type;
desc |= (level == 3) ? PTE_PAGE_DESC : PTE_BLOCK_DESC;
/* NS bit for security memory access from secure state */
desc |= (attrs & MT_NS) ? PTE_BLOCK_DESC_NS : 0;
/* AP bits for Data access permission */
desc |= (attrs & MT_RW) ? PTE_BLOCK_DESC_AP_RW : PTE_BLOCK_DESC_AP_RO;
/* the access flag */
desc |= PTE_BLOCK_DESC_AF;
/* memory attribute index field */
mem_type = MT_TYPE(attrs);
desc |= PTE_BLOCK_DESC_MEMTYPE(mem_type);
switch (mem_type) {
case MT_DEVICE_nGnRnE:
case MT_DEVICE_nGnRE:
case MT_DEVICE_GRE:
/* Accesses to Device memory and non-cacheable memory are coherent
* for all observers in the system and are treated as Outer
* Shareable, so for these two memory types it is not strictly
* necessary to set the shareability field
*/
desc |= PTE_BLOCK_DESC_OUTER_SHARE;
/* Map device memory as execute-never */
desc |= PTE_BLOCK_DESC_PXN;
desc |= PTE_BLOCK_DESC_UXN;
break;
case MT_NORMAL_NC:
case MT_NORMAL:
/* Mark normal RW memory as execute-never */
if ((attrs & MT_RW) || (attrs & MT_EXECUTE_NEVER))
desc |= PTE_BLOCK_DESC_PXN;
if (mem_type == MT_NORMAL)
desc |= PTE_BLOCK_DESC_INNER_SHARE;
else
desc |= PTE_BLOCK_DESC_OUTER_SHARE;
}
#if DUMP_PTE
MMU_DEBUG("%s", XLAT_TABLE_LEVEL_SPACE(level));
MMU_DEBUG("%p: ", pte);
MMU_DEBUG((mem_type == MT_NORMAL) ? "MEM" :
((mem_type == MT_NORMAL_NC) ? "NC" : "DEV"));
MMU_DEBUG((attrs & MT_RW) ? "-RW" : "-RO");
MMU_DEBUG((attrs & MT_NS) ? "-NS" : "-S");
MMU_DEBUG((attrs & MT_EXECUTE_NEVER) ? "-XN" : "-EXEC");
MMU_DEBUG("\n");
#endif
*pte = desc;
}
/* Returns the next preallocated table */
static u64_t *new_prealloc_table(void)
{
static unsigned int table_idx;
__ASSERT(table_idx < CONFIG_MAX_XLAT_TABLES,
"Enough xlat tables not allocated");
return (u64_t *)(xlat_tables[table_idx++]);
}
/* Splits a block into a table with entries spanning the old block */
static void split_pte_block_desc(u64_t *pte, int level)
{
u64_t old_block_desc = *pte;
u64_t *new_table;
unsigned int i = 0;
/* get address size shift bits for next level */
int levelshift = LEVEL_TO_VA_SIZE_SHIFT(level + 1);
MMU_DEBUG("Splitting existing PTE %p(L%d)\n", pte, level);
new_table = new_prealloc_table();
for (i = 0; i < XLAT_TABLE_ENTRIES; i++) {
new_table[i] = old_block_desc | (i << levelshift);
if ((level + 1) == 3)
new_table[i] |= PTE_PAGE_DESC;
}
/* Overwrite the existing PTE to put the new table into effect */
set_pte_table_desc(pte, new_table, level);
}
/* Create/Populate translation table(s) for given region */
static void init_xlat_tables(const struct arm_mmu_region *region)
{
u64_t *pte;
u64_t virt = region->base_va;
u64_t phys = region->base_pa;
u64_t size = region->size;
u64_t attrs = region->attrs;
u64_t level_size;
u64_t *new_table;
unsigned int level = XLAT_TABLE_BASE_LEVEL;
MMU_DEBUG("mmap: virt %llx phys %llx size %llx\n", virt, phys, size);
/* check minimum alignment requirement for given mmap region */
__ASSERT(((virt & (PAGE_SIZE - 1)) == 0) &&
((size & (PAGE_SIZE - 1)) == 0),
"address/size are not page aligned\n");
while (size) {
__ASSERT(level <= XLAT_TABLE_LEVEL_MAX,
"max translation table level exceeded\n");
/* Locate PTE for given virtual address and page table level */
pte = calculate_pte_index(virt, level);
__ASSERT(pte != NULL, "pte not found\n");
level_size = 1ULL << LEVEL_TO_VA_SIZE_SHIFT(level);
if (size >= level_size && !(virt & (level_size - 1))) {
/* Given range fits into level size,
* create block/page descriptor
*/
set_pte_block_desc(pte, phys, attrs, level);
virt += level_size;
phys += level_size;
size -= level_size;
/* Range is mapped, start again for next range */
level = XLAT_TABLE_BASE_LEVEL;
} else if (pte_desc_type(pte) == PTE_INVALID_DESC) {
/* Range doesn't fit, create subtable */
new_table = new_prealloc_table();
set_pte_table_desc(pte, new_table, level);
level++;
} else if (pte_desc_type(pte) == PTE_BLOCK_DESC) {
split_pte_block_desc(pte, level);
level++;
} else if (pte_desc_type(pte) == PTE_TABLE_DESC)
level++;
}
}
/* zephyr execution regions with appropriate attributes */
static const struct arm_mmu_region mmu_zephyr_regions[] = {
/* Mark text segment cacheable, read-only and executable */
MMU_REGION_FLAT_ENTRY("zephyr_code",
(uintptr_t)_image_text_start,
(uintptr_t)_image_text_size,
MT_CODE | MT_SECURE),
/* Mark rodata segment cacheable, read only and execute-never */
MMU_REGION_FLAT_ENTRY("zephyr_rodata",
(uintptr_t)_image_rodata_start,
(uintptr_t)_image_rodata_size,
MT_RODATA | MT_SECURE),
/* Mark rest of the zephyr execution regions (data, bss, noinit, etc.)
* cacheable, read-write
* Note: read-write regions are marked execute-never internally
*/
MMU_REGION_FLAT_ENTRY("zephyr_data",
(uintptr_t)__kernel_ram_start,
(uintptr_t)__kernel_ram_size,
MT_NORMAL | MT_RW | MT_SECURE),
};
static void setup_page_tables(void)
{
unsigned int index;
const struct arm_mmu_region *region;
u64_t max_va = 0, max_pa = 0;
for (index = 0; index < mmu_config.num_regions; index++) {
region = &mmu_config.mmu_regions[index];
max_va = MAX(max_va, region->base_va + region->size);
max_pa = MAX(max_pa, region->base_pa + region->size);
}
__ASSERT(max_va <= (1ULL << CONFIG_ARM64_VA_BITS),
"Maximum VA not supported\n");
__ASSERT(max_pa <= (1ULL << CONFIG_ARM64_PA_BITS),
"Maximum PA not supported\n");
/* create translation tables for user provided platform regions */
for (index = 0; index < mmu_config.num_regions; index++) {
region = &mmu_config.mmu_regions[index];
if (region->size || region->attrs)
init_xlat_tables(region);
}
/* setup translation table for zephyr execution regions */
for (index = 0; index < ARRAY_SIZE(mmu_zephyr_regions); index++) {
region = &mmu_zephyr_regions[index];
if (region->size || region->attrs)
init_xlat_tables(region);
}
}
static void enable_mmu_el1(unsigned int flags)
{
ARG_UNUSED(flags);
u64_t val;
/* Set MAIR, TCR and TTBR registers */
__asm__ volatile("msr mair_el1, %0"
:
: "r" (MEMORY_ATTRIBUTES)
: "memory", "cc");
__asm__ volatile("msr tcr_el1, %0"
:
: "r" (get_tcr(1))
: "memory", "cc");
__asm__ volatile("msr ttbr0_el1, %0"
:
: "r" ((u64_t)base_xlat_table)
: "memory", "cc");
/* Ensure these changes are seen before MMU is enabled */
__ISB();
/* Enable the MMU and data cache */
__asm__ volatile("mrs %0, sctlr_el1" : "=r" (val));
__asm__ volatile("msr sctlr_el1, %0"
:
: "r" (val | SCTLR_M_BIT | SCTLR_C_BIT)
: "memory", "cc");
/* Ensure the MMU enable takes effect immediately */
__ISB();
MMU_DEBUG("MMU enabled with dcache\n");
}
/* ARM MMU Driver Initial Setup */
/*
* @brief MMU default configuration
*
* This function provides the default configuration mechanism for the Memory
* Management Unit (MMU).
*/
static int arm_mmu_init(struct device *arg)
{
u64_t val;
unsigned int idx, flags = 0;
/* Current MMU code supports only EL1 */
__asm__ volatile("mrs %0, CurrentEL" : "=r" (val));
__ASSERT(GET_EL(val) == MODE_EL1,
"Exception level not EL1, MMU not enabled!\n");
/* Ensure that the MMU is not already enabled */
__asm__ volatile("mrs %0, sctlr_el1" : "=r" (val));
__ASSERT((val & SCTLR_M_BIT) == 0, "MMU is already enabled\n");
MMU_DEBUG("xlat tables:\n");
MMU_DEBUG("base table(L%d): %p, %d entries\n", XLAT_TABLE_BASE_LEVEL,
(u64_t *)base_xlat_table, NUM_BASE_LEVEL_ENTRIES);
for (idx = 0; idx < CONFIG_MAX_XLAT_TABLES; idx++)
MMU_DEBUG("%d: %p\n", idx, (u64_t *)(xlat_tables + idx));
setup_page_tables();
/* currently only EL1 is supported */
enable_mmu_el1(flags);
return 0;
}
SYS_INIT(arm_mmu_init, PRE_KERNEL_1,
#if MMU_DEBUG_PRINTS
MMU_DEBUG_PRIORITY
#else
CONFIG_KERNEL_INIT_PRIORITY_DEVICE
#endif
);
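The mmu_config table consumed by setup_page_tables() above is expected to be supplied by SoC code; a hedged sketch of such a table follows, with placeholder region names, addresses and sizes for a hypothetical platform:

/* Sketch: platform-provided MMU regions. Addresses and sizes are
 * placeholders, not taken from any real SoC.
 */
static const struct arm_mmu_region mmu_regions[] = {
	MMU_REGION_FLAT_ENTRY("GIC", 0x2f000000, 0x2000,
			      MT_DEVICE_nGnRnE | MT_RW | MT_SECURE),
	MMU_REGION_FLAT_ENTRY("UART", 0x1c090000, 0x1000,
			      MT_DEVICE_nGnRnE | MT_RW | MT_SECURE),
};

const struct arm_mmu_config mmu_config = {
	.num_regions = ARRAY_SIZE(mmu_regions),
	.mmu_regions = mmu_regions,
};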


@@ -1,42 +0,0 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM64 Cortex-A power management
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
GTEXT(arch_cpu_idle)
SECTION_FUNC(TEXT, arch_cpu_idle)
#ifdef CONFIG_TRACING
stp xzr, x30, [sp, #-16]!
bl sys_trace_idle
ldp xzr, x30, [sp], #16
#endif
dsb sy
wfi
msr daifclr, #(DAIFSET_IRQ)
ret
GTEXT(arch_cpu_atomic_idle)
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
#ifdef CONFIG_TRACING
stp xzr, x30, [sp, #-16]!
bl sys_trace_idle
ldp xzr, x30, [sp], #16
#endif
msr daifset, #(DAIFSET_IRQ)
isb
wfe
tst x0, #(DAIF_IRQ)
beq _irq_disabled
msr daifclr, #(DAIFSET_IRQ)
_irq_disabled:
ret


@@ -1,203 +0,0 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Kernel fatal error handler for ARM64 Cortex-A
*
* This module provides the z_arm64_fatal_error() routine for ARM64 Cortex-A
* CPUs
*/
#include <kernel.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
static void print_EC_cause(u64_t esr)
{
u32_t EC = (u32_t)esr >> 26;
switch (EC) {
case 0b000000:
LOG_ERR("Unknown reason");
break;
case 0b000001:
LOG_ERR("Trapped WFI or WFE instruction execution");
break;
case 0b000011:
LOG_ERR("Trapped MCR or MRC access with (coproc==0b1111) that "
"is not reported using EC 0b000000");
break;
case 0b000100:
LOG_ERR("Trapped MCRR or MRRC access with (coproc==0b1111) "
"that is not reported using EC 0b000000");
break;
case 0b000101:
LOG_ERR("Trapped MCR or MRC access with (coproc==0b1110)");
break;
case 0b000110:
LOG_ERR("Trapped LDC or STC access");
break;
case 0b000111:
LOG_ERR("Trapped access to SVE, Advanced SIMD, or "
"floating-point functionality");
break;
case 0b001100:
LOG_ERR("Trapped MRRC access with (coproc==0b1110)");
break;
case 0b001101:
LOG_ERR("Branch Target Exception");
break;
case 0b001110:
LOG_ERR("Illegal Execution state");
break;
case 0b010001:
LOG_ERR("SVC instruction execution in AArch32 state");
break;
case 0b011000:
LOG_ERR("Trapped MSR, MRS or System instruction execution in "
"AArch64 state, that is not reported using EC "
"0b000000, 0b000001 or 0b000111");
break;
case 0b011001:
LOG_ERR("Trapped access to SVE functionality");
break;
case 0b100000:
LOG_ERR("Instruction Abort from a lower Exception level, that "
"might be using AArch32 or AArch64");
break;
case 0b100001:
LOG_ERR("Instruction Abort taken without a change in Exception "
"level.");
break;
case 0b100010:
LOG_ERR("PC alignment fault exception.");
break;
case 0b100100:
LOG_ERR("Data Abort from a lower Exception level, that might "
"be using AArch32 or AArch64");
break;
case 0b100101:
LOG_ERR("Data Abort taken without a change in Exception level");
break;
case 0b100110:
LOG_ERR("SP alignment fault exception");
break;
case 0b101000:
LOG_ERR("Trapped floating-point exception taken from AArch32 "
"state");
break;
case 0b101100:
LOG_ERR("Trapped floating-point exception taken from AArch64 "
"state.");
break;
case 0b101111:
LOG_ERR("SError interrupt");
break;
case 0b110000:
LOG_ERR("Breakpoint exception from a lower Exception level, "
"that might be using AArch32 or AArch64");
break;
case 0b110001:
LOG_ERR("Breakpoint exception taken without a change in "
"Exception level");
break;
case 0b110010:
LOG_ERR("Software Step exception from a lower Exception level, "
"that might be using AArch32 or AArch64");
break;
case 0b110011:
LOG_ERR("Software Step exception taken without a change in "
"Exception level");
break;
case 0b110100:
LOG_ERR("Watchpoint exception from a lower Exception level, "
"that might be using AArch32 or AArch64");
break;
case 0b110101:
LOG_ERR("Watchpoint exception taken without a change in "
"Exception level.");
break;
case 0b111000:
LOG_ERR("BKPT instruction execution in AArch32 state");
break;
case 0b111100:
LOG_ERR("BRK instruction execution in AArch64 state.");
break;
}
}
static void esf_dump(const z_arch_esf_t *esf)
{
LOG_ERR("x1: %-8llx x0: %llx",
esf->basic.regs[18], esf->basic.regs[19]);
LOG_ERR("x2: %-8llx x3: %llx",
esf->basic.regs[16], esf->basic.regs[17]);
LOG_ERR("x4: %-8llx x5: %llx",
esf->basic.regs[14], esf->basic.regs[15]);
LOG_ERR("x6: %-8llx x7: %llx",
esf->basic.regs[12], esf->basic.regs[13]);
LOG_ERR("x8: %-8llx x9: %llx",
esf->basic.regs[10], esf->basic.regs[11]);
LOG_ERR("x10: %-8llx x11: %llx",
esf->basic.regs[8], esf->basic.regs[9]);
LOG_ERR("x12: %-8llx x13: %llx",
esf->basic.regs[6], esf->basic.regs[7]);
LOG_ERR("x14: %-8llx x15: %llx",
esf->basic.regs[4], esf->basic.regs[5]);
LOG_ERR("x16: %-8llx x17: %llx",
esf->basic.regs[2], esf->basic.regs[3]);
LOG_ERR("x18: %-8llx x30: %llx",
esf->basic.regs[0], esf->basic.regs[1]);
}
void z_arm64_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
u64_t el, esr, elr, far;
if (reason != K_ERR_SPURIOUS_IRQ) {
__asm__ volatile("mrs %0, CurrentEL" : "=r" (el));
switch (GET_EL(el)) {
case MODE_EL1:
__asm__ volatile("mrs %0, esr_el1" : "=r" (esr));
__asm__ volatile("mrs %0, far_el1" : "=r" (far));
__asm__ volatile("mrs %0, elr_el1" : "=r" (elr));
break;
case MODE_EL2:
__asm__ volatile("mrs %0, esr_el2" : "=r" (esr));
__asm__ volatile("mrs %0, far_el2" : "=r" (far));
__asm__ volatile("mrs %0, elr_el2" : "=r" (elr));
break;
case MODE_EL3:
__asm__ volatile("mrs %0, esr_el3" : "=r" (esr));
__asm__ volatile("mrs %0, far_el3" : "=r" (far));
__asm__ volatile("mrs %0, elr_el3" : "=r" (elr));
break;
default:
/* Just to keep the compiler happy */
esr = elr = far = 0;
break;
}
if (GET_EL(el) != MODE_EL0) {
LOG_ERR("ESR_ELn: %llx", esr);
LOG_ERR("FAR_ELn: %llx", far);
LOG_ERR("ELR_ELn: %llx", elr);
print_EC_cause(esr);
}
}
if (esf != NULL) {
esf_dump(esf);
}
z_fatal_error(reason, esf);
CODE_UNREACHABLE;
}
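For reference, print_EC_cause() above keeps only the EC field of ESR_ELn; per the Arm architecture reference, EC occupies bits [31:26], the IL bit is [25], and ISS is bits [24:0]. A tiny decode sketch (the helper name is illustrative):

#include <stdint.h>

static inline void esr_fields(uint64_t esr, uint32_t *ec, uint32_t *il,
			      uint32_t *iss)
{
	*ec = (esr >> 26) & 0x3f;  /* Exception Class */
	*il = (esr >> 25) & 0x1;   /* Instruction Length (1 = 32-bit) */
	*iss = esr & 0x1ffffff;    /* Instruction Specific Syndrome */
}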


@@ -1,60 +0,0 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARM64 Cortex-A interrupt management
*/
#include <kernel.h>
#include <arch/cpu.h>
#include <device.h>
#include <tracing/tracing.h>
#include <irq.h>
#include <irq_nextlevel.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
void z_arm64_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
void arch_irq_enable(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
irq_enable_next_level(dev, (irq >> 8) - 1);
}
void arch_irq_disable(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
irq_disable_next_level(dev, (irq >> 8) - 1);
}
int arch_irq_is_enabled(unsigned int irq)
{
struct device *dev = _sw_isr_table[0].arg;
return irq_is_enabled_next_level(dev);
}
void z_arm64_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
struct device *dev = _sw_isr_table[0].arg;
if (irq == 0)
return;
irq_set_priority_next_level(dev, (irq >> 8) - 1, prio, flags);
}
void z_irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_arm64_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
}
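The (irq >> 8) - 1 arithmetic above follows Zephyr's multi-level interrupt encoding, where the low byte holds the level-1 (aggregator) line and the next byte holds the level-2 line stored off-by-one, so that zero can mean "no level-2 IRQ". A small decode sketch under that assumption:

/* Sketch of the two-level IRQ number decode assumed above. */
static inline unsigned int irq_level1(unsigned int irq)
{
	return irq & 0xff;     /* low byte: the aggregator's own IRQ line */
}

static inline unsigned int irq_level2(unsigned int irq)
{
	return (irq >> 8) - 1; /* next byte: level-2 line, stored as line + 1 */
}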


@@ -1,34 +0,0 @@
/*
* Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Software interrupts utility code - ARM64 implementation
*/
#include <kernel.h>
#include <irq_offload.h>
#include <aarch64/exc.h>
volatile irq_offload_routine_t offload_routine;
static void *offload_param;
void z_irq_do_offload(void)
{
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
{
k_sched_lock();
offload_routine = routine;
offload_param = parameter;
z_arm64_offload();
offload_routine = NULL;
k_sched_unlock();
}
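A typical use of this hook is a test that needs code to run in handler context; a short sketch, assuming the public irq_offload() wrapper that lands in arch_irq_offload() above:

#include <irq_offload.h>

static void isr_work(void *arg)
{
	int *counter = arg;

	(*counter)++; /* executes in interrupt context */
}

void run_in_irq_context(void)
{
	static int counter;

	irq_offload(isr_work, &counter);
}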
