Compare commits

..

57 Commits

Author SHA1 Message Date
Mariusz Skamra
ec4e86f5cc Bluetooth: controller: Workaround CPR procedure collision at CPU instant
This is a workaround for an IOP issue where the peer rejects the LLCP
Slave Connection Parameter Request with the LMP Error Transaction
Collision error code even though the previous request is complete at the
instant.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-04-30 15:45:50 +02:00
Jiahao Li
84ae2f11ed Bluetooth: Mesh: Fixes existing friend lookup in Friend Request handling
Currently, when handling a Friend Request message with `prev_addr` set,
we look up the existing friend entry using `prev_addr` as the address.
However, `prev_addr` is the address of the requesting node's previous
friend, NOT the address of the requesting node itself. Therefore, we
should always look up the existing friend entry using `rx->ctx.addr` as
the address.

Signed-off-by: Jiahao Li <reg@ljh.me>
2019-01-29 07:23:22 -05:00
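A rough sketch of the lookup being fixed follows; the friend_find() helper and the net_idx argument are illustrative assumptions, not the exact mesh code.

/* Hypothetical sketch -- helper and argument names are illustrative only. */
struct bt_mesh_friend *frnd;

/* Before: prev_addr identifies the requester's *previous* friend. */
frnd = friend_find(sub->net_idx, msg->prev_addr);

/* After: look up by the address of the requesting node itself. */
frnd = friend_find(sub->net_idx, rx->ctx.addr);
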
Johan Hedberg
5a0f7e44f1 Bluetooth: Mesh: Fix typo leading to incorrect settings storage
This was intended to be an equality comparison and not an assignment.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-29 07:23:22 -05:00
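The bug class here is the classic assignment-instead-of-comparison typo; a minimal illustrative C sketch (not the actual settings code, store_val() is a placeholder):

/* Illustrative only. */
if (err = 0) {          /* BUG: assigns 0 to err; the branch never runs */
        store_val();
}

if (err == 0) {         /* intended: equality comparison */
        store_val();
}
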
Vinayak Kariappa Chettimada
da38403f9d Bluetooth: controller: Fix Conn Param Req procedure stall issue
Fix an issue wherein a local- or remote-initiated Connection
Parameter Request procedure would stall without generation
of the LE Connection Update Complete HCI event because a local-
or remote-initiated PHY Update procedure had overwritten the
currently active Link Layer Control Procedure type.

Signed-off-by: Vinayak Kariappa Chettimada <vinayak.kariappa@gmail.com>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
7e7886782f Bluetooth: controller: Fix chan map update's diff trans collision
Fix the channel map update procedure implementation's handling
of a different-transaction collision by not asserting but instead
disconnecting the connection, since this is invalid behavior by the
peer implementation.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
07537107a8 Bluetooth: controller: Fix CPR procedure regression
This fixes a regression introduced in commit a11868fea9.

Fixes the following conformance tests:
LL/CON/MAS/BV-32-C [Accepting Connection Parameter Request -
Preferred Periodicity]
LL/CON/MAS/BV-33-C [Accepting Connection Parameter Request -
Preferred Periodicity and preferred anchor points]

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
210c27e370 Bluetooth: controller: Fix conn update assert in slave role
Explicitly track the connection update related ticker stop
and start to avoid asserting when a ticker update is done
at the same time to compensate for clock drift.

The compensation-related ticker update failure in this case
can be safely ignored, as the new anchor point is used anyway
at the instant of the connection update.

Fixes #8796

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Luiz Augusto von Dentz
3637202c53 Bluetooth: GATT: Ensure GATT service is registered first
According to the Bluetooth specification, the Service Changed
Characteristic shall not have its handle changed once the device has
been bonded, so this moves the GATT service to be the very first
service registered. That way it is guaranteed that its handles won't
change even if the device is flashed with a different configuration
that ends up changing the handles after it.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
a4d57234fc Bluetooth: Don't mask ECDH related events if CONFIG_BT_ECC disabled
This is related to commit b904ad387f.
It fixes the checking of ECDH related events in the event mask.
If ECDH support is disabled in the host, there is no need to check
whether those events are supported in the controller.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Luiz Augusto von Dentz
734ee4e57f Bluetooth: GATT: Fix not handling duplicated CCC storage
Some backends may actually contain the same key multiple times, so the
code needs to check if there is already a ccc_cfg for an address before
attempting to use one that is unallocated.

Fixes #11409

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-01-29 07:23:22 -05:00
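A hedged sketch of the intended load-time behaviour, with hypothetical helper names: reuse an existing entry for the address before taking an unallocated slot.

/* Hypothetical sketch; not the exact GATT code. */
struct bt_gatt_ccc_cfg *cfg;

cfg = ccc_find_cfg(ccc, addr);                   /* existing entry for this address? */
if (!cfg) {
        cfg = ccc_find_cfg(ccc, BT_ADDR_LE_ANY); /* otherwise take a free slot */
}
if (!cfg) {
        return -ENOMEM;
}
cfg->value = stored_value;
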
Luiz Augusto von Dentz
3769030f87 Bluetooth: GATT: Fix not resetting CCC storage
If there are no CCCs to be stored, the value should be set to NULL so it
is properly cleared; otherwise calling settings_str_from_bytes will leave
str uninitialized, which may cause a crash when attempting to load the
value.

Fixes #11564

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
bb07b4d3fb Bluetooth: Add Kconfig option to disable HCI ECDH support
This adds a Kconfig option to disable HCI ECDH support.
It compiles out ECDH related code, in particular the HCI event handlers.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
82de097f20 Bluetooth: Add common Kconfig option to disable LE Data Length Update
This adds a common option to disable support for the LE Data Length
Update procedure in the controller and host.
This reduces flash usage by compiling out the le_data_len_change
event handler, which would never be called if the controller was
built with the BT_CTLR_DATA_LENGTH option disabled.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
00d62fbd68 Bluetooth: Add common Kconfig option to disable PHY Update
This adds a common option to disable support for the PHY Update
procedure in the controller and host.
This reduces flash usage by compiling out the le_phy_update_complete
event handler, which would never be called if the controller was
built with the BT_CTLR_PHY option disabled.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
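The flash saving comes from gating the handler at build time; a sketch follows, where the CONFIG_BT_PHY_UPDATE symbol name is an assumption for illustration.

/* Sketch only; the exact Kconfig symbol is assumed. */
#if defined(CONFIG_BT_PHY_UPDATE)
static void le_phy_update_complete(struct net_buf *buf)
{
        /* handle the LE PHY Update Complete event */
}
#endif /* CONFIG_BT_PHY_UPDATE */
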
Vinayak Kariappa Chettimada
e1f52c2800 Bluetooth: controller: Fix enable and disable of scan state
Updated the controller implementation to disallow disabling the
initiator state using scan disable, while still allowing an already
disabled scan state to be disabled. Also, disallow enabling the scan
state while in the initiator state.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
8f53d5ebe7 Bluetooth: controller: Do not feature exchange more than once
Updated the controller implementation to not perform a feature
exchange if one has already been done, either by the local or the
remote peer device, in the active connection session.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Szymon Janc
3517f9b2ef Bluetooth: host: Fix not starting slave conn param update timer
When issuing the LE Set Data Length Command, the host should not assume
that an LE Data Length Change Event will be generated. From Core Spec 5.0:
"If the command causes the maximum transmission packet size or maximum
packet transmission time to change, an LE Data Length Change Event
shall be generated."

Change-Id: I17723b58ed4f390aa465db3f69126ee229871123
Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
c8ad2089ea Bluetooth: host: Don't send slave conn param request if not needed
If the master or the application decides to switch to connection
parameters that satisfy the pending parameters, don't bother sending
the request after the 5 second timeout.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Johan Hedberg
ebf9699f5b Bluetooth: Mesh: Increase scan window from 10ms to 30ms
Experiments have shown that the probability of missing advertising
packets is significantly lower with 30ms scan window compared to 10ms
scan window. This is especially the case with advertisers using a 20ms
advertising interval, which in turn is perhaps the most common one
since it's the smallest allowed by the Bluetooth 5.0 specification.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-29 07:23:22 -05:00
Szymon Janc
623e8c2d17 Bluetooth: host: Don't restart scan if connection is pending
With multiple devices on the auto-connect list it is possible that,
while a connection to device A is pending, device B disconnects. In that
case the host should not try to start scanning (currently the controller
doesn't support concurrent scanning and initiating).

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
791b2bd83d Bluetooth: controller: Add min & max interval support in CPR
Add support for exchanging both minimum and maximum
connection interval values in Connection Parameter Request
Procedure implementation.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Szymon Janc
335f231921 Bluetooth: host: Fallback to L2CAP for CPUP if rejected by master on LL
Fall back to an L2CAP Connection Parameters Update Request if the LL
Connection Update Request was rejected by a remote device that has the
procedure marked as supported in its features. This can happen if the
procedure is supported only by the remote controller but not enabled by
the host, which is the case for connection parameter updates with iOS
devices.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
5b35b4568d Bluetooth: Do not compile GATT response handlers if Client is disabled
This will exclude GATT Client response handlers from compilation
if GATT Client support is disabled.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
0a599e9283 Bluetooth: controller: Fix integer overflow in scheduling code
Fix an integer overflow in the scheduling implementation
that calculates whether the resources required for the next
radio event should be retained.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Szymon Janc
1f242b21fa Bluetooth: controller: Compile conn complete due to cancel conditionally
BT_HCI_ERR_UNKNOWN_CONN_ID can only be sent when master role is enabled.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
78fdefecb6 Bluetooth: host: Compile master role conn complete conditionally
A connection complete event with an error code can be received only
for the central role and can therefore be compiled conditionally.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
5892a68041 Bluetooth: L2CAP: Extend available return codes from accept cb
This adds support for returning various return codes from
the channel accept callback.
This is needed for implementation of incoming connection
authorization for certification purposes.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
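A sketch of an accept callback making use of this, assuming the usual bt_l2cap_server accept signature; authorized() and my_chan are placeholders, and the exact mapping of errno values to L2CAP result codes is an assumption.

static int chan_accept(struct bt_conn *conn, struct bt_l2cap_chan **chan)
{
        if (!authorized(conn)) {
                return -EACCES; /* e.g. reported as insufficient authorization */
        }

        *chan = &my_chan.chan;  /* hand back the channel to use */
        return 0;
}
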
Mariusz Skamra
31cdde8ca3 Bluetooth: L2CAP: Rename LE Connection Response Results
Rename the LE connection response results to mirror those that are
defined for BR:
BR: BT_L2CAP_BR_*
LE: BT_L2CAP_LE_*

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
da62735763 Bluetooth: host: Avoid using out-of-scope pointer
Make sure that the variable pointed to by params is valid when passing
it as a function argument.

Fixes #10587

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
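The bug class is a pointer that outlives the object it points to; an illustrative sketch (not the actual host code, send_update() is a placeholder):

/* Illustrative only. */
const struct bt_le_conn_param *param = NULL;

if (use_custom) {
        struct bt_le_conn_param custom = {
                .interval_min = 0x18,
                .interval_max = 0x28,
                .latency = 0,
                .timeout = 400,
        };
        param = &custom;        /* BUG: 'custom' dies at the closing brace */
}

send_update(conn, param);       /* may dereference a dangling pointer */
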
Vinayak Kariappa Chettimada
f58ce51c2f Bluetooth: controller: Fix master role RSSI measurement
Fix broken master role RSSI measurement. Since the original
contribution was cleaned up into Zephyr, the radio shorts that were
set for measuring the RSSI in the master role have been broken, as
they were cleared by the radio switching code further down in the
Tx ISR.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
308d2c0468 Bluetooth: controller: Fix connection failed to be established
Fix a regression where connections failed to be established,
introduced by the commit 350c569aba ("Bluetooth:
controller: Avoid offseting to lldata").

As the Rx-ed PDU buffer is re-used to construct the
connection complete message towards HCI, the fields in the
Rx-ed PDU need to be backed up for future use in the control
path. The channel selection bit is now backed up.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
b79952c39d Bluetooth: controller: Fix conn param req procedure response
Fix the Connection Parameter Request Procedure implementation
to respond with the sent interval_min and interval_max so that
certain peer devices don't reject the response as Invalid LL
Parameters.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Radoslaw Koppel
da5b22595f bluetooth: host: conn: Add const to addr in bt_le_set_auto_conn
This commit adds the missing const modifier to the addr pointer of the
bt_le_set_auto_conn function.

Signed-off-by: Radoslaw Koppel <radoslaw.koppel@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Joakim Andersson
036bc5fbc0 bluetooth: config: Fix bluetooth config dependencies
Fix bluetooth config dependencies where the definitions depend on other
definitions.
BT_RX_PRIO is not always defined in a controller only build.
BT_CTLR_TX_BUFFER_SIZE does not depend on BT_CTLR, but on BT_LL_SW.

Signed-off-by: Joakim Andersson <joakim.andersson@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
791dad637d Bluetooth: controller: Fix connection cancel deadlock
Calling bt_recv in the Bluetooth host Tx thread by the
controller implementation caused deadlock in combined host
controller builds when HCI LE Create Connection Cancel
generated the HCI LE Connection Complete or HCI LE Enhanced
Connection Complete events.

Controller's HCI implementation has been updated to place
the generated event into Rx FIFO to avoid the deadlock.

Relates to commit a59f544fb4 ("bluetooth: controller:
Handle non-priority events correctly")

Relates to #10314.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
9ffc9825b4 Bluetooth: controller: Avoid offseting to lldata
Avoid offseting to lldata when populating event structure
members.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Szymon Janc
e934506635 Bluetooth: Fix autoconnect if cancelled pending connection
bt_conn_disconnect removes the device from the auto-connect list and
thus should not be called from le_conn_update when timing out a pending
connection. Also, the auto-connect flag needs to be checked on connection
failure to make sure scanning is restarted.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
a7814a6e1d Bluetooth: Fix incorrectly reporting connection as failure
The auto-connect code reuses the bt_conn object, and the connection code
was assuming the object was cleared, which resulted in an invalid error
code being provided to the application. Fix that by explicitly setting
the error code to 0 on successful connection.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
eb1f5804b2 Bluetooth: Allow to configure LE Create Connection timeout
Depending on the peripheral's advertising interval, 3 seconds might not
be enough and would result in cancelling the pending connection. Make
this configurable via Kconfig and let the application decide.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
ecda5c74a7 Bluetooth: Allow to configure background scan window and interval
Applications may require different scan windows and intervals depending
on the expected re-connection time or the peer devices' advertising
parameters. Default to the GAP-recommended slow values.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Vinayak Kariappa Chettimada
1310603c5f Bluetooth: controller: Fix Data Length Update implementation
The Data Length Update implementation reused the flags used by the
Encryption Procedure, which caused an invalid Encryption Procedure
sequence under conditions where the Data Length Update Procedure
collided with an Encryption Setup initiated by the peer central
device.

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
2019-01-29 07:23:22 -05:00
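A hedged sketch of the general fix pattern: give each Link Layer Control Procedure its own flag bit so a colliding procedure cannot clobber another's state (the names below are hypothetical).

/* Hypothetical sketch; not the controller's actual state layout. */
enum {
        LLCP_FLAG_ENC = BIT(0), /* Encryption Procedure in progress */
        LLCP_FLAG_DLE = BIT(1), /* Data Length Update Procedure in progress */
};
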
Kamil Piszczek
cdf6458122 Bluetooth: host: always set address via HCI before scanning
The address of the device is now also set via the HCI interface when
passive scanning is used. As a result, the LL does not filter out
directed advertising packets that are targeted at this device.

Signed-off-by: Kamil Piszczek <Kamil.Piszczek@nordicsemi.no>
2019-01-29 07:23:22 -05:00
Szymon Janc
1c361f130c Bluetooth: Add option to configure peripheral connection parameters
This allows configuring the desired connection parameters for the
peripheral. When enabled, the PPCP characteristic is also added to the
GAP service. If disabled, it is up to the application to control the
connection parameters, and the stack will only enforce a 5 second delay
before the update.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
6f1300d246 Bluetooth: Fix connection parameters update
This fixes a few issues with the handling of Connection Parameter
update in the Host:
 - starting conn param update timer as master
 - ignoring 5 seconds slave timer when calling bt_conn_le_param_update
 - starting conn param update timer on every PHY update

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
cb2476ccd3 Bluetooth: Controller: Add support for setting public address
This allows providing a public address for the controller without using
a VS HCI command from the host. Useful for controller-only builds or
combined builds that do not use VS HCI commands.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
1b50ccab05 Bluetooth: Read static address from FICR on nRF5 if no VS enabled
When running a combined build on nRF5 with VS commands disabled, it is
possible to simply read the static random address from FICR in the host.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
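A hedged sketch of what that host-side read can look like, assuming the nRF5 MDK register names (NRF_FICR->DEVICEADDR) and Zephyr's byte-order helpers:

/* Sketch; error handling and Kconfig guards omitted. */
bt_addr_le_t addr;

sys_put_le32(NRF_FICR->DEVICEADDR[0], &addr.a.val[0]);
sys_put_le16((uint16_t)NRF_FICR->DEVICEADDR[1], &addr.a.val[4]);
addr.a.val[5] |= 0xc0;          /* static random: two most significant bits set */
addr.type = BT_ADDR_LE_RANDOM;
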
Szymon Janc
732f1ac0b5 Bluetooth: Controller: Add CONFIG_BT_HCI_VS option
This allows HCI VS command support to be fully disabled.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Szymon Janc
9e429e8e2d Bluetooth: Fix security level checking with LE SC and no-bonding
This was affecting SM/MAS/SCPK/BV-01-C qualification test case.

Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
00cd24110a Bluetooth: Add a shell command to disable bondable mode
This adds a shell command, for qualification purposes, to enable/disable
the Bonding flag in the Authentication Requirements.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Mariusz Skamra
736e124671 Bluetooth: Add run-time option to disable SMP bondable mode
This adds a function that disables the Bonding flag in the
Authentication Requirements field of the SMP Pairing Request/Response.
This is needed for qualification purposes.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
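A minimal usage sketch of the new run-time switch (the header location is an assumption):

#include <bluetooth/conn.h>

void disable_bonding_for_test(void)
{
        /* Clears the Bonding flag in the SMP AuthReq field for new pairings. */
        bt_set_bondable(false);
}
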
Mariusz Skamra
7531653139 Bluetooth: Add Kconfig option to disable bondable mode
This adds a Kconfig option to allow disabling bondable mode at compile
time.

Signed-off-by: Mariusz Skamra <mariusz.skamra@codecoup.pl>
2019-01-29 07:23:22 -05:00
Luiz Augusto von Dentz
d50095351b Bluetooth: GATT: Make bt_gatt_discover perform discover all procedure
This makes bt_gatt_discover perform the discover-all procedure if no
UUID is given in the parameters.

Fixes #9713

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-01-29 07:23:22 -05:00
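A usage sketch: with this change, leaving uuid as NULL requests discovery of everything in the given handle range (discover_func is an application callback assumed to exist elsewhere).

static struct bt_gatt_discover_params discover_params;

static void start_discovery(struct bt_conn *conn)
{
        discover_params.uuid = NULL;    /* no filter: discover all */
        discover_params.func = discover_func;
        discover_params.start_handle = 0x0001;
        discover_params.end_handle = 0xffff;
        discover_params.type = BT_GATT_DISCOVER_PRIMARY;

        bt_gatt_discover(conn, &discover_params);
}
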
Luiz Augusto von Dentz
346a9f0710 Bluetooth: GATT: Fix long write procedure
The Long Write procedure currently requires BT_GATT_PERM_PREPARE_WRITE
to be set, otherwise the prepare writes would fail. This changes the
behavior so that BT_GATT_PERM_PREPARE_WRITE enables checking of each
prepare chunk, and the check is skipped otherwise.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2019-01-29 07:23:22 -05:00
Matias Karhumaa
e9c6e4d2f3 Bluetooth: hci_ecc: Fix byte order of dbg keys
Pass Bluetooth SMP debug keys to crypto API in correct byte order.

Fixes #9867

Signed-off-by: Matias Karhumaa <matias.karhumaa@gmail.com>
2019-01-29 07:23:22 -05:00
Johan Hedberg
85667eda09 Bluetooth: Remove bogus condition for setting IRK for an identity
Just like we set the address for an identity without depending on the
BT_DEV_READY flag, we should do the same for the IRK. Otherwise we
risk getting an all-zeroes IRK. Remove the condition and always set
the IRK value whenever CONFIG_BT_PRIVACY is enabled.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-29 07:23:22 -05:00
Johan Hedberg
c32b5872b8 Bluetooth: Fix identity creation through vendor HCI
The code creating identities from the Read_Static_Addresses vendor
command was failing to create matching IRKs, resulting in an
all-zeroes IRK being used. Fix this by using the existing id_create()
function which takes care of generating an IRK when necessary.

Fixes #10003

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-29 07:23:22 -05:00
Johan Hedberg
cd4d2d67ee Bluetooth: Move bt_setup_id_addr() to avoid forward declaration
A subsequent patch will make bt_setup_id_addr() depend on id_create()
which was so far lower down in the hci_core.c c-file. Move
bt_setup_id_addr() further down to avoid a forward declaration.

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-29 07:23:22 -05:00
17136 changed files with 6262486 additions and 851843 deletions

View File

@@ -1,3 +1,5 @@
--mailback
--no-tree
--emacs
--summary-file
--show-types

View File

@@ -1,151 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
#
# clang-format configuration file. Intended for clang-format >= 4.
#
# For more information, see:
#
# Documentation/process/clang-format.rst
# https://clang.llvm.org/docs/ClangFormat.html
# https://clang.llvm.org/docs/ClangFormatStyleOptions.html
#
---
AccessModifierOffset: -4
AlignAfterOpenBracket: Align
AlignConsecutiveAssignments: false
AlignConsecutiveDeclarations: false
#AlignEscapedNewlines: Left # Unknown to clang-format-4.0
AlignOperands: true
AlignTrailingComments: false
AllowAllParametersOfDeclarationOnNextLine: false
AllowShortBlocksOnASingleLine: false
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: None
AllowShortIfStatementsOnASingleLine: false
AllowShortLoopsOnASingleLine: false
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: false
AlwaysBreakTemplateDeclarations: false
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
AfterClass: false
AfterControlStatement: false
AfterEnum: false
AfterFunction: true
AfterNamespace: true
AfterObjCDeclaration: false
AfterStruct: false
AfterUnion: false
#AfterExternBlock: false # Unknown to clang-format-5.0
BeforeCatch: false
BeforeElse: false
IndentBraces: false
#SplitEmptyFunction: true # Unknown to clang-format-4.0
#SplitEmptyRecord: true # Unknown to clang-format-4.0
#SplitEmptyNamespace: true # Unknown to clang-format-4.0
BreakBeforeBinaryOperators: None
BreakBeforeBraces: Custom
#BreakBeforeInheritanceComma: false # Unknown to clang-format-4.0
BreakBeforeTernaryOperators: false
BreakConstructorInitializersBeforeComma: false
#BreakConstructorInitializers: BeforeComma # Unknown to clang-format-4.0
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: false
ColumnLimit: 80
CommentPragmas: '^ IWYU pragma:'
#CompactNamespaces: false # Unknown to clang-format-4.0
ConstructorInitializerAllOnOneLineOrOnePerLine: false
ConstructorInitializerIndentWidth: 8
ContinuationIndentWidth: 8
Cpp11BracedListStyle: false
DerivePointerAlignment: false
DisableFormat: false
ExperimentalAutoDetectBinPacking: false
#FixNamespaceComments: false # Unknown to clang-format-4.0
# Taken from:
# git grep -h '^#define [^[:space:]]*for_each[^[:space:]]*(' include/ \
# | sed "s,^#define \([^[:space:]]*for_each[^[:space:]]*\)(.*$, - '\1'," \
# | sort | uniq
ForEachMacros:
- 'FOR_EACH'
- 'for_each_linux_bus'
- 'for_each_linux_driver'
- 'metal_bitmap_for_each_clear_bit'
- 'metal_bitmap_for_each_set_bit'
- 'metal_for_each_page_size_down'
- 'metal_for_each_page_size_up'
- 'metal_list_for_each'
- 'RB_FOR_EACH'
- 'RB_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER'
- 'SYS_DLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_DLIST_FOR_EACH_NODE'
- 'SYS_DLIST_FOR_EACH_NODE_SAFE'
- 'SYS_SFLIST_FOR_EACH_CONTAINER'
- 'SYS_SFLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SFLIST_FOR_EACH_NODE'
- 'SYS_SFLIST_FOR_EACH_NODE_SAFE'
- 'SYS_SLIST_FOR_EACH_CONTAINER'
- 'SYS_SLIST_FOR_EACH_CONTAINER_SAFE'
- 'SYS_SLIST_FOR_EACH_NODE'
- 'SYS_SLIST_FOR_EACH_NODE_SAFE'
- '_WAIT_Q_FOR_EACH'
- 'Z_GENLIST_FOR_EACH_CONTAINER'
- 'Z_GENLIST_FOR_EACH_CONTAINER_SAFE'
- 'Z_GENLIST_FOR_EACH_NODE'
- 'Z_GENLIST_FOR_EACH_NODE_SAFE'
#IncludeBlocks: Preserve # Unknown to clang-format-5.0
IncludeCategories:
- Regex: '.*'
Priority: 1
IncludeIsMainRegex: '(Test)?$'
IndentCaseLabels: false
#IndentPPDirectives: None # Unknown to clang-format-5.0
IndentWidth: 8
IndentWrappedFunctionNames: false
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
MacroBlockBegin: ''
MacroBlockEnd: ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: Inner
#ObjCBinPackProtocolList: Auto # Unknown to clang-format-5.0
ObjCBlockIndentWidth: 8
ObjCSpaceAfterProperty: true
ObjCSpaceBeforeProtocolList: true
# Taken from git's rules
#PenaltyBreakAssignment: 10 # Unknown to clang-format-4.0
PenaltyBreakBeforeFirstCallParameter: 30
PenaltyBreakComment: 10
PenaltyBreakFirstLessLess: 0
PenaltyBreakString: 10
PenaltyExcessCharacter: 100
PenaltyReturnTypeOnItsOwnLine: 60
PointerAlignment: Right
ReflowComments: false
SortIncludes: false
#SortUsingDeclarations: false # Unknown to clang-format-4.0
SpaceAfterCStyleCast: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
#SpaceBeforeCtorInitializerColon: true # Unknown to clang-format-5.0
#SpaceBeforeInheritanceColon: true # Unknown to clang-format-5.0
SpaceBeforeParens: ControlStatements
#SpaceBeforeRangeBasedForLoopColon: true # Unknown to clang-format-5.0
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 1
SpacesInAngles: false
SpacesInContainerLiterals: false
SpacesInCStyleCastParentheses: false
SpacesInParentheses: false
SpacesInSquareBrackets: false
Standard: Cpp03
TabWidth: 8
UseTab: Always
...

View File

@@ -1,57 +0,0 @@
# EditorConfig: https://editorconfig.org/
# top-most EditorConfig file
root = true
# All (Defaults)
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 80
# C
[*.{c,h}]
indent_style = tab
indent_size = 8
# Python
[*.py]
indent_style = space
indent_size = 4
# Perl
[*.pl]
indent_style = tab
indent_size = 8
# YAML
[*.yml]
indent_style = space
indent_size = 2
# Shell Script
[*.sh]
indent_style = space
indent_size = 4
# Windows Command Script
[*.cmd]
end_of_line = crlf
indent_style = tab
indent_size = 8
# Valgrind Suppression File
[*.supp]
indent_style = space
indent_size = 3
# CMake
[{CMakeLists.txt,*.cmake}]
indent_style = space
indent_size = 2
# Makefile
[Makefile]
indent_style = tab

View File

@@ -1,39 +0,0 @@
---
name: Bug report
about: Create a report to help us improve Zephyr
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
What have you tried to diagnose or workaround this issue?
**To Reproduce**
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board\_xyz
3. make
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Impact**
What impact does this issue have on your progress (e.g., annoyance, showstopper)
**Screenshots or console output**
If applicable, add a screenshot (drag-and-drop an image), or console logs
(cut-and-paste text and put a code fence (\`\`\`) before and after, to help
explain the issue.
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
Add any other context about the problem here.

View File

@@ -1,20 +0,0 @@
---
name: Enhancement
about: Suggest enhancements to existing features
title: ''
labels: enhancement
assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: feature request
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or graphics (drag-and-drop an image) about the feature request here.

View File

@@ -1,51 +0,0 @@
---
name: RFC / Proposal
about: Submit an RFC / Proposal
title: ''
labels: RFC
assignees: ''
---
## Introduction
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
### Problem description
Why do we want this change and what problem are we trying to address?
### Proposed change
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
## Detailed RFC
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
### Proposed change (Detailed)
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
### Dependencies
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
### Concerns and Unresolved Questions
List any concerns, unknowns, and generally unresolved questions etc.
## Alternatives
List any alternatives considered, and the reasons for choosing this option
over them.

View File

@@ -1,16 +0,0 @@
license:
main: apache-2.0
report_missing: true
category: Permissive
copyright:
check: true
exclude:
extensions:
- yml
- yaml
- html
- rst
- conf
- cfg
langs:
- HTML

View File

@@ -1,65 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Documentation GitHub Workflow
on: [pull_request]
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Update PATH for west
run: |
echo "::add-path::$HOME/.local/bin"
- name: checkout
uses: actions/checkout@v2
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen
- name: cache-pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: install-pip
run: |
pip3 install setuptools
pip3 install 'breathe>=4.9.1' 'docutils>=0.14' \
'sphinx>=1.7.5' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |
west init -l . || true
- name: build-docs
run: |
source zephyr-env.sh
make htmldocs
tar cvf htmldocs.tar --directory=./doc/_build html
- name: upload-build
uses: actions/upload-artifact@master
continue-on-error: True
with:
name: htmldocs.tar
path: htmldocs.tar
- name: check-warns
run: |
if [ -s doc/_build/doc.warnings ]; then
docwarn=$(cat doc/_build/doc.warnings)
docwarn="${docwarn//'%'/'%25'}"
docwarn="${docwarn//$'\n'/'%0A'}"
docwarn="${docwarn//$'\r'/'%0D'}"
# We treat doc warnings as errors
echo "::error file=doc.warnings::$docwarn"
exit 1
fi

View File

@@ -1,112 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Doc build for Release or Daily
# Either a daily based on schedule/cron or only on tag push
on:
schedule:
- cron: '50 22 * * *'
push:
tags:
- '*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Update PATH for west
run: |
echo "::add-path::$HOME/.local/bin"
- name: Determine tag
id: tag
run: |
# We expect to get here either due to a schedule event in which
# case we are doing a daily build of the docs, or because a new
# tag was pushed, in which case we are building docs for a release
if [ ${GITHUB_EVENT_NAME} == "schedule" ]; then
echo ::set-output name=TYPE::daily;
echo ::set-output name=RELEASE::latest;
elif [ ${GITHUB_EVENT_NAME} == "push" ]; then
# If push due to a tag GITHUB_REF will look like refs/tags/TAG-FOO
# chop of 'refs/tags' so RELEASE=TAG-FOO
echo ::set-output name=TYPE::release;
echo ::set-output name=RELEASE::${GITHUB_REF/refs\/tags\//};
else
exit 1
fi
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: us-east-1
- name: checkout
uses: actions/checkout@v2
- name: install-pkgs
run: |
sudo apt-get install -y ninja-build doxygen
- name: cache-pip
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-doc-pip
- name: install-pip
run: |
pip3 install setuptools
pip3 install 'breathe>=4.9.1' 'docutils>=0.14' \
'sphinx>=1.7.5' sphinx_rtd_theme sphinx-tabs \
sphinxcontrib-svg2pdfconverter 'west>=0.6.2'
pip3 install pyelftools
- name: west setup
run: |
west init -l . || true
- name: build-docs
env:
DOC_TAG: ${{ steps.tag.outputs.TYPE }}
run: |
source zephyr-env.sh
make DOC_TAG=${DOC_TAG} htmldocs
- name: check-warns
run: |
if [ -s doc/_build/doc.warnings ]; then
docwarn=$(cat doc/_build/doc.warnings)
docwarn="${docwarn//'%'/'%25'}"
docwarn="${docwarn//$'\n'/'%0A'}"
docwarn="${docwarn//$'\r'/'%0D'}"
# We treat doc warnings as errors
echo "::error file=doc.warnings::$docwarn"
exit 1
fi
- name: Upload to AWS S3
env:
RELEASE: ${{ steps.tag.outputs.RELEASE }}
run: |
echo "DOC_RELEASE=[$RELEASE]"
if [ "$RELEASE" == "latest" ]; then
export
echo "publish latest docs"
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/latest --delete
echo "success sync of latest docs"
else
DOC_RELEASE=${RELEASE}.0
echo "publish release docs: ${DOC_RELEASE}"
aws s3 sync --quiet doc/_build/html s3://docs.zephyrproject.org/${DOC_RELEASE}
echo "success sync of rel docs"
fi
if [ -d doc/_build/doxygen/html ]; then
echo "publish doxygen"
aws s3 sync --quiet doc/_build/doxygen/html s3://docs.zephyrproject.org/apidoc/${RELEASE} --delete
echo "success publish of doxygen"
fi

View File

@@ -1,32 +0,0 @@
name: Scancode
on: [pull_request]
jobs:
scancode_job:
runs-on: ubuntu-latest
name: Scan code for licenses
steps:
- name: Checkout the code
uses: actions/checkout@v1
- name: Scan the code
id: scancode
uses: zephyrproject-rtos/action_scancode@v3
with:
directory-to-scan: 'scan/'
- name: Artifact Upload
uses: actions/upload-artifact@v1
with:
name: scancode
path: ./artifacts
- name: Verify
run: |
if [ -s ./artifacts/report.txt ]; then
report=$(cat ./artifacts/report.txt)
report="${report//'%'/'%25'}"
report="${report//$'\n'/'%0A'}"
report="${report//$'\r'/'%0D'}"
echo "::error file=./artifacts/report.txt::$report"
exit 1
fi

View File

@@ -1,65 +0,0 @@
# Copyright (c) 2020 Linaro Limited.
# SPDX-License-Identifier: Apache-2.0
name: Zephyr West Command Tests
on:
push:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
pull_request:
paths:
- 'scripts/west-commands.yml'
- 'scripts/west_commands/**'
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
os: [ubuntu-latest, macos-latest, windows-latest]
steps:
- name: checkout
uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
- name: cache-pip-linux
if: startsWith(runner.os, 'Linux')
uses: actions/cache@v1
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: cache-pip-mac
if: startsWith(runner.os, 'macOS')
uses: actions/cache@v1
with:
path: ~/Library/Caches/pip
# Trailing '-' was just to get a different cache name
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}-
- name: cache-pip-win
if: startsWith(runner.os, 'Windows')
uses: actions/cache@v1
with:
path: ~\AppData\Local\pip\Cache
key: ${{ runner.os }}-pip-${{ matrix.python-version }}
restore-keys: |
${{ runner.os }}-pip-${{ matrix.python-version }}
- name: install pytest
run: |
pip3 install pytest west pyelftools
- name: run pytest-win
if: runner.os == 'Windows'
run: |
cmd /C "set PYTHONPATH=.\scripts\west_commands && pytest ./scripts/west_commands/tests/"
- name: run pytest-mac-linux
if: runner.os != 'Windows'
run: |
PYTHONPATH=./scripts/west_commands pytest ./scripts/west_commands/tests/

24
.gitignore vendored
View File

@@ -7,8 +7,8 @@
*.swp
*.swo
*~
build*/
!doc/guides/build
build
build-*
cscope.*
.dir
@@ -30,22 +30,16 @@ doc/boards
doc/samples
doc/latex
doc/themes/zephyr-docs-theme
sanity-out*
bsim_bt_out
sanity-out/
scripts/grub
doc/reference/kconfig/*.rst
doc/doc.warnings
.*project
.settings
tags
.project
.cproject
.xxproject
.envrc
.vscode
# Disables the warning about the changed behavior for Kconfig defaults
hide-defaults-note
# Tag files
GPATH
GRTAGS
GTAGS
TAGS
tags
.idea

View File

@@ -13,7 +13,7 @@ debug = false
extra-path=scripts/gitlint
[title-max-length-no-revert]
line-length=75
line-length=72
[body-min-line-count]
min-line-count=1
@@ -22,7 +22,7 @@ min-line-count=1
max-line-count=200
[title-starts-with-subsystem]
regex = ^(?!subsys:)(([^:]+):)(\s([^:]+):)*\s(.+)$
regex = ^(([^:]+):)(\s([^:]+):)*\s(.+)$
[title-must-not-contain-word]
# Comma-separated list of words that should not occur in the title. Matching is case

View File

@@ -3,7 +3,7 @@
#
# FIXME: all these should match the relative filename
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
^[ \t]*$
^[ \t]*\^$
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]$
@@ -15,7 +15,7 @@
#
# bt_gatt_discover_params unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -27,7 +27,7 @@
#
# Bluetooth GATT unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -39,7 +39,7 @@
#
# Bluetooth mesh unnamed struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]bluetooth.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -48,21 +48,3 @@
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model.__unnamed__.*
^[- \t]*\^
#
# Bluetooth mesh pub struct definition
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]bluetooth[/\\]mesh[/\\]access.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*bt_mesh_model_pub.*
^[- \t]*\^

View File

@@ -4,7 +4,7 @@
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]display_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]display_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]

View File

@@ -1,6 +1,6 @@
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]file_system[/\\]index.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]file_system.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]dma.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]io_interfaces.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]subsystems[/\\]sensor.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.

View File

@@ -0,0 +1,24 @@
#
# Kernel
#
#
# include/kernel.h warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]kernel_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*k_poll_event.__unnamed__
^[- \t]*\^

View File

@@ -4,7 +4,7 @@
#
# include
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]misc_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]misc_api.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]

View File

@@ -4,7 +4,7 @@
#
# include/net/net_ip.h warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -19,7 +19,7 @@
#
# include/net/net_mgmt.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -31,7 +31,7 @@
#
# include/net/buf.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -43,7 +43,7 @@
#
# include/net/ieee802154.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -55,16 +55,12 @@
#
# include/net/net_context.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_context.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_context.options
^[- \t]*\^
#
# include/net/net_stats.h
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_stats.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]networking.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*net_stats_tc.[a-z]+
^[- \t]*\^
#
# stray duplicate definition warnings
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]networking[/\\]net_if.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration(.*)

View File

@@ -1,7 +1,7 @@
#
# UART unnamed struct definition
#doc/api/peripherals/uart.rst
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]api[/\\]io_interfaces.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
@@ -13,19 +13,3 @@
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_device_config.__unnamed__.*
^[- \t]*\^
#
^(?P<filename>([\-:\\/\w\.])+[/\\]doc[/\\]reference[/\\]peripherals[/\\]uart.rst):(?P<lineno>[0-9]+): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected identifier in nested name. \[error at [0-9]+]
^[ \t]*
^[ \t]*\^
^(?P=filename):(?P=lineno): WARNING: Invalid definition: Expected end of definition. \[error at [0-9]+]
^.*uart_event.data
^[- \t]*\^

View File

@@ -20,13 +20,3 @@ Felipe Neves <ryukokki.felipe@gmail.com> <ryukokki.felipe@gmail.com>
Amir Kaplan <amir.kaplan@intel.com> <amir.kaplan@intel.com>
Anas Nashif <anas.nashif@intel.com> <anas.nashif@intel.com>
Ruud Derwig <Ruud.Derwig@synopsys.com> <Ruud.Derwig@synopsys.com>
Flavio Arieta Netto <flavio@exati.com.br>
Nishikant Nayak <nishikantax.nayak@intel.com>
Justin Watson <jwatson5@gmail.com>
Johann Fischer <j.fischer@phytec.de>
Jun Li <jun.r.li@intel.com>
Xiaorui Hu <xiaorui.hu@linaro.org>
Yannis Damigos <giannis.damigos@gmail.com> <ydamigos@iccs.gr>
Vinayak Kariappa Chettimada <vinayak.kariappa.chettimada@nordicsemi.no> <vinayak.kariappa.chettimada@nordicsemi.no> <vich@nordicsemi.no> <vinayak.kariappa@gmail.com>
Sean Nyekjaer <sean@geanix.com> <sean.nyekjaer@prevas.dk>
Sean Nyekjaer <sean@geanix.com> <sean@nyekjaer.dk>

View File

@@ -4,9 +4,13 @@ compiler: gcc
env:
global:
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.11.2
- SDK=0.9.3
- SANITYCHECK_OPTIONS=" --inline-logs --enable-coverage "
- SANITYCHECK_OPTIONS_RETRY="${SANITYCHECK_OPTIONS} --only-failed --outdir=out-2nd-pass"
- ZEPHYR_SDK_INSTALL_DIR=/opt/sdk/zephyr-sdk-0.9.3
- ZEPHYR_TOOLCHAIN_VARIANT=zephyr
- MATRIX_BUILDS="5"
- MATRIX_BUILDS_EXTRA="5"
matrix:
- MATRIX_BUILD="1"
- MATRIX_BUILD="2"
@@ -15,12 +19,12 @@ env:
- MATRIX_BUILD="5"
build:
cache: false
cache: true
cache_dir_list:
- ${SHIPPABLE_BUILD_DIR}/ccache
pre_ci_boot:
image_name: zephyrprojectrtos/ci
image_tag: v0.11.4
image_tag: v0.4-rc7
pull: true
options: "-e HOME=/home/buildslave --privileged=true --tty --net=bridge --user buildslave"
@@ -28,30 +32,96 @@ build:
- export CCACHE_DIR=${SHIPPABLE_BUILD_DIR}/ccache/.ccache
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -c -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -c -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
git rebase origin/${PULL_REQUEST_BASE_BRANCH};
fi
- source zephyr-env.sh
- ccache -c -s --max-size=5000M
- >
if [ "$MATRIX_BUILD" = "5" -a "$IS_PULL_REQUEST" = "true" ]; then
export COMMIT_RANGE=origin/${PULL_REQUEST_BASE_BRANCH}..HEAD
echo "Building a Pull Request";
echo "- Building Documentation";
echo "Commit range:" ${COMMIT_RANGE}
make htmldocs
if [ "$?" != "0" ]; then
echo "Documentation build failed";
exit 1;
fi
if [ -s doc/_build/doc.warnings ]; then
echo " => New documentation warnings/errors";
cp doc/_build/doc.warnings doc.warnings
fi;
echo "- Verify commit message, coding style, doc build";
./scripts/ci/check-compliance.py --commits ${COMMIT_RANGE} || true;
fi;
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/get_modified_tests.py --commits origin/${PULL_REQUEST_BASE_BRANCH}..HEAD > modified_tests.args;
./scripts/ci/get_modified_boards.py --commits origin/${PULL_REQUEST_BASE_BRANCH}..HEAD > modified_boards.args;
if [ -s modified_boards.args ]; then
./scripts/sanitycheck ${SANITYCHECK_OPTIONS} +modified_boards.args --save-tests test_file.txt || exit 1;
fi;
if [ -s modified_tests.args ]; then
./scripts/sanitycheck ${SANITYCHECK_OPTIONS} +modified_tests.args --save-tests test_file.txt || exit 1;
fi;
rm -f modified_tests.args modified_boards.args;
fi;
- ./scripts/sanitycheck ${SANITYCHECK_OPTIONS} --save-tests test_file.txt || exit 1
- ./scripts/sanitycheck ${SANITYCHECK_OPTIONS} --load-tests test_file.txt --subset ${MATRIX_BUILD}/${MATRIX_BUILDS} || ./scripts/sanitycheck ${SANITYCHECK_OPTIONS_RETRY} || ./scripts/sanitycheck ${SANITYCHECK_OPTIONS_RETRY}
- rm test_file.txt
- ccache -s
on_failure:
- rm -rf ccache $HOME/.cache/zephyr
- mkdir -p shippable/testresults
- mkdir -p shippable/codecoverage
- source zephyr-env.sh
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -f -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -f -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
if [ "$MATRIX_BUILD" = "1" ]; then
gcovr -r ${ZEPHYR_BASE} -x > shippable/codecoverage/coverage.xml;
lcov --capture --directory sanity-out/native_posix/ --directory sanity-out/unit_testing/ --output-file lcov.pre.info -q --rc lcov_branch_coverage=1;
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* --remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* --remove lcov.pre.info *generated* -o lcov.info --rc lcov_branch_coverage=1;
rm lcov.pre.info;
rm -rf sanity-out out-2nd-pass;
bash <(curl -s https://codecov.io/bash) -f "lcov.info" -X coveragepy -X fixes;
rm -f lcov.info;
else
rm -rf sanity-out out-2nd-pass;
fi;
- >
if [ -e compliance.xml ]; then
cp compliance.xml shippable/testresults/;
fi;
- >
if [ -e ./scripts/sanity_chk/last_sanity.xml ]; then
cp ./scripts/sanity_chk/last_sanity.xml shippable/testresults/;
fi;
on_success:
- rm -rf ccache $HOME/.cache/zephyr
- mkdir -p shippable/testresults
- mkdir -p shippable/codecoverage
- source zephyr-env.sh
- >
if [ "$IS_PULL_REQUEST" = "true" ]; then
./scripts/ci/run_ci.sh -s -b ${PULL_REQUEST_BASE_BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS} -p ${PULL_REQUEST};
else
./scripts/ci/run_ci.sh -s -b ${BRANCH} -r origin -m ${MATRIX_BUILD} -M ${MATRIX_BUILDS};
fi;
branches:
only:
- master
- v*-branch
- topic-*
if [ "$MATRIX_BUILD" = "1" ]; then
gcovr -r ${ZEPHYR_BASE} -x > shippable/codecoverage/coverage.xml;
lcov --capture --directory sanity-out/native_posix/ --directory sanity-out/unit_testing/ --output-file lcov.pre.info -q --rc lcov_branch_coverage=1;
lcov -q --remove lcov.pre.info mylib.c --remove lcov.pre.info tests/\* --remove lcov.pre.info samples/\* --remove lcov.pre.info ext/\* --remove lcov.pre.info *generated* -o lcov.info --rc lcov_branch_coverage=1;
rm lcov.pre.info;
rm -rf sanity-out out-2nd-pass;
bash <(curl -s https://codecov.io/bash) -f "lcov.info" -X coveragepy -X fixes;
rm -f lcov.info;
else
rm -rf sanity-out out-2nd-pass;
fi;
- >
if [ -e compliance.xml ]; then
cp compliance.xml shippable/testresults/;
fi;
- >
if [ -e ./scripts/sanity_chk/last_sanity.xml ]; then
cp ./scripts/sanity_chk/last_sanity.xml shippable/testresults/;
fi;
integrations:
notifications:
- integrationName: slack_integration
@@ -62,7 +132,7 @@ integrations:
only:
- master
on_success: never
on_failure: never
on_failure: always
- integrationName: email
type: email
recipients:
@@ -70,6 +140,8 @@ integrations:
branches:
only:
- master
- net
- bluetooth
- arm
on_success: never
on_failure: always
on_pull_request: never
on_failure: never

View File

@@ -50,7 +50,6 @@ sp_after_comma = add
sp_func_def_paren = remove # "int foo (){" vs "int foo(){"
sp_func_call_paren = remove # "foo (" vs "foo("
sp_func_proto_paren = remove # "int foo ();" vs "int foo();"
sp_inside_fparen = remove # "func( arg )" vs "func(arg)"
sp_else_brace = add # ignore/add/remove/force
sp_before_ptr_star = add # ignore/add/remove/force
sp_after_ptr_star = remove # ignore/add/remove/force

File diff suppressed because it is too large

View File

@@ -1,466 +1,205 @@
# CODEOWNERS for autoreview assigning in github
# https://help.github.com/en/articles/about-code-owners#codeowners-syntax
# Order is important; for each modified file, the last matching
# pattern takes the most precedence.
# That is, with the last pattern being
# *.rst @nashif
# if only .rst files are being modified, only nashif is
# automatically requested for review, but you can manually
# add others as needed.
# Do not use wildcard on all source yet
# * @galak @nashif
/.known-issues/ @nashif
/.github/ @nashif
/.github/workflows/ @galak @nashif
/arch/arc/ @vonhust @ruuddw
/arch/arm/ @MaureenHelm @galak @ioannisg
/arch/arm/core/aarch32/cortex_m/cmse/ @ioannisg
/arch/arm/core/aarch64/ @carlocaione
/arch/arm/include/aarch32/cortex_m/cmse.h @ioannisg
/arch/arm/include/aarch64/ @carlocaione
/arch/arm/core/aarch32/cortex_r/ @MaureenHelm @galak @ioannisg @bbolen @stephanosio
/arch/common/ @andrewboie @ioannisg @andyross
/soc/arc/snps_*/ @vonhust @ruuddw
/soc/nios2/ @nashif @wentongwu
/soc/arm/ @MaureenHelm @galak @ioannisg
/soc/arm/arm/mps2/ @fvincenzo
/soc/arm/atmel_sam/sam3x/ @ioannisg
/soc/arm/atmel_sam/sam4e/ @nandojve
/soc/arm/atmel_sam/sam4s/ @fallrisk
/soc/arm/atmel_sam/samv71/ @nandojve
/soc/arm/bcm*/ @sbranden
/soc/arm/nxp*/ @MaureenHelm
/soc/arm/nordic_nrf/ @ioannisg
/soc/arm/qemu_cortex_a53/ @carlocaione
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/stm32f4/ @rsalveti @idlethread
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/ti_simplelink/cc13x2_cc26x2/ @bwitherspoon
/soc/arm/ti_simplelink/cc32xx/ @vanti
/soc/arm/ti_simplelink/msp432p4xx/ @Mani-Sadhasivam
/soc/arm/xilinx_zynqmp/ @stephanosio
/soc/xtensa/intel_s1000/ @sathishkuttan @dcpleung
/arch/x86/ @andrewboie
/arch/nios2/ @andrewboie @wentongwu
/arch/posix/ @aescolar
/arch/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/posix/ @aescolar
/soc/riscv/ @kgugala @pgielda @nategraff-sifive
/soc/riscv/openisa*/ @MaureenHelm
/soc/x86/ @andrewboie
/arch/xtensa/ @andrewboie @dcpleung @andyross
/soc/xtensa/ @andrewboie @dcpleung @andyross
/boards/arc/ @vonhust @ruuddw
/boards/arm/ @MaureenHelm @galak
/boards/arm/96b_argonkey/ @avisconti
/boards/arm/96b_avenger96/ @Mani-Sadhasivam
/boards/arm/96b_carbon/ @rsalveti @idlethread
/boards/arm/96b_meerkat96/ @Mani-Sadhasivam
/boards/arm/96b_nitrogen/ @idlethread
/boards/arm/96b_neonkey/ @Mani-Sadhasivam
/boards/arm/96b_stm32_sensor_mez/ @Mani-Sadhasivam
/boards/arm/96b_wistrio/ @Mani-Sadhasivam
/boards/arm/arduino_due/ @ioannisg
/boards/arm/cc1352r1_launchxl/ @bwitherspoon
/boards/arm/cc26x2r1_launchxl/ @bwitherspoon
/boards/arm/cc3220sf_launchxl/ @vanti
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/frdm*/ @MaureenHelm
/boards/arm/frdm*/doc/ @MaureenHelm @MeganHansen
/boards/arm/google_*/ @jackrosenthal
/boards/arm/hexiwear*/ @MaureenHelm
/boards/arm/hexiwear*/doc/ @MaureenHelm @MeganHansen
/boards/arm/lpcxpresso*/ @MaureenHelm
/boards/arm/lpcxpresso*/doc/ @MaureenHelm @MeganHansen
/boards/arm/mimxrt*/ @MaureenHelm
/boards/arm/mimxrt*/doc/ @MaureenHelm @MeganHansen
/boards/arm/mps2_an385/ @fvincenzo
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/nrf*/ @carlescufi @lemrey @ioannisg
/boards/arm/nucleo*/ @erwango
/boards/arm/nucleo_f401re/ @rsalveti @idlethread
/boards/arm/qemu_cortex_a53/ @carlocaione
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg
/boards/arm/sam4e_xpro/ @nandojve
/boards/arm/sam4s_xplained/ @fallrisk
/boards/arm/sam_v71_xult/ @nandojve
/boards/arm/v2m_beetle/ @fvincenzo
/boards/arm/olimexino_stm32/ @ydamigos
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango
/boards/common/ @mbolivar
/boards/nios2/ @wentongwu
/boards/nios2/altera_max10/ @wentongwu
/boards/arm/stm32_min_dev/ @cbsiddharth
/boards/posix/ @aescolar
/boards/riscv/ @kgugala @pgielda @nategraff-sifive
/boards/riscv/rv32m1_vega/ @MaureenHelm
/boards/shields/ @erwango
/boards/x86/ @andrewboie @nashif
/boards/xtensa/ @nashif @dcpleung
/boards/xtensa/intel_s1000_crb/ @sathishkuttan @dcpleung
/boards/xtensa/odroid_go/ @ydamigos
.known-issues/* @inakypg @nashif
arch/arc/ @vonhust @ruuddw
arch/arm/ @MaureenHelm @galak
arch/arm/soc/arm/mps2/* @fvincenzo
arch/arm/soc/atmel_sam/sam4s @fallrisk
arch/arm/soc/nxp*/ @MaureenHelm
arch/arm/soc/st_stm32/ @erwango
arch/arm/soc/st_stm32/stm32f4/* @rsalveti @idlethread
arch/arm/soc/ti_simplelink/cc32xx @GAnthony
arch/arm/soc/ti_simplelink/msp432p4xx @Mani-Sadhasivam
arch/nios2/ @andrewboie @ramakrishnapallala
arch/posix/ @aescolar
arch/riscv32/ @fractalclone @kgugala @pgielda
arch/x86/ @andrewboie @ramakrishnapallala
arch/x86/core/* @andrewboie
arch/x86/core/crt0.S @ramakrishnapallala @nashif
arch/x86/soc/intel_quark/quark_d2000/* @nashif
arch/x86/soc/intel_quark/quark_se/* @nashif
arch/x86/soc/intel_quark/quark_x1000/* @nashif
arch/xtensa/ @andrewboie @rgundi @andyross
boards/arc/ @vonhust @ruuddw
boards/arc/arduino_101_sss/* @nashif
boards/arc/em_starterkit/* @vonhust
boards/arc/quark_se_c1000_ss_devboard/* @nashif
boards/arm/* @MaureenHelm @galak
boards/arm/96b_carbon/ @rsalveti @idlethread
boards/arm/96b_nitrogen/ @idlethread
boards/arm/96b_neonkey/ @Mani-Sadhasivam
boards/arm/cc3220sf_launchxl/ @GAnthony
boards/arm/curie_ble/ @jhedberg
boards/arm/disco_l475_iot1/ @erwango
boards/arm/frdm*/ @MaureenHelm
boards/arm/hexiwear*/ @MaureenHelm
boards/arm/lpcxpresso*/ @MaureenHelm
boards/arm/mimxrt*/ @MaureenHelm
boards/arm/mps2_an385/ @fvincenzo
boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
boards/arm/nrf51_blenano/ @rsalveti
boards/arm/nrf52_pca10040/ @carlescufi
boards/arm/nucleo_f401re/ @rsalveti @idlethread
boards/arm/sam4s_xplained/ @fallrisk
boards/arm/v2m_beetle/ @fvincenzo
boards/arm/olimexino_stm32/ @ydamigos
boards/arm/stm32f3_disco/ @ydamigos
boards/nios2/ @ramakrishnapallala
boards/nios2/altera_max10/ @ramakrishnapallala
boards/posix/ @aescolar
boards/riscv32/ @fractalclone @kgugala @pgielda
boards/x86/ @andrewboie @nashif
boards/x86/arduino_101/ @nashif
boards/x86/galileo/ @nashif
boards/x86/quark_d2000_crb/ @nashif
boards/x86/quark_se_c1000_devboard/ @nashif
boards/xtensa/ @andrewboie @ramakrishnapallala
# All cmake related files
/cmake/ @SebastianBoe @nashif
/CMakeLists.txt @SebastianBoe @nashif
/doc/ @dbkinder
/doc/guides/coccinelle.rst @himanshujha199640 @JuliaLawall
/doc/CMakeLists.txt @carlescufi
/doc/scripts/ @carlescufi
/doc/guides/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/bluetooth/ @joerchan @jhedberg @Vudentz
/doc/reference/kernel/other/resource_mgmt.rst @pabigot
/doc/reference/networking/can* @alexanderwachter
/drivers/debug/ @nashif
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*mcux* @MaureenHelm
/drivers/*/*stm32* @erwango
/drivers/*/*native_posix* @aescolar
/drivers/adc/ @anangl
/drivers/adc/adc_stm32.c @cybertale
/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/drivers/can/ @alexanderwachter
/drivers/can/*mcp2515* @karstenkoenig
/drivers/clock_control/*nrf* @nordic-krch
/drivers/counter/ @nordic-krch
/drivers/counter/counter_cmos.c @andrewboie
/drivers/display/ @vanwinkeljan
/drivers/display/display_framebuf.c @andrewboie
/drivers/dma/*dw* @tbursztyka
/drivers/dma/*sam0* @Sizurka
/drivers/dma/dma_stm32* @cybertale
/drivers/eeprom/ @henrikbrixandersen
/drivers/eeprom/eeprom_stm32.c @KwonTae-young
/drivers/entropy/*rv32m1* @MaureenHelm
/drivers/espi/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ps2/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/kscan/ @albertofloyd @franciscomunoz @scottwcpg
/drivers/ethernet/ @jukkar @tbursztyka @pfalcon
/drivers/entropy/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/flash/ @nashif @nvlsianpu
/drivers/flash/*nrf* @nvlsianpu
/drivers/flash/*spi_nor* @pabigot
/drivers/flash/*stm32* @superna9999
/drivers/gpio/ @mnkp @pabigot
/drivers/gpio/*ht16k33* @henrikbrixandersen
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*sx1509b* @pabigot
/drivers/gpio/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/hwinfo/ @alexanderwachter
/drivers/i2c/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/i2s/i2s_ll_stm32* @avisconti
/drivers/i2c/i2c_shell.c @nashif
/drivers/ieee802154/ @jukkar @tbursztyka
/drivers/ieee802154/ieee802154_rf2xx* @jukkar @tbursztyka @nandojve
/drivers/interrupt_controller/ @andrewboie
/drivers/interrupt_controller/intc_gic.c @stephanosio
/drivers/*/intc_vexriscv_litex.c @mateusz-holenko @kgugala @pgielda
/drivers/ipm/ipm_mhu* @karl-zh
/drivers/ipm/Kconfig.nrfx @masz-nordic @ioannisg
/drivers/ipm/Kconfig.nrfx_ipc_channel @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.c @masz-nordic @ioannisg
/drivers/ipm/ipm_nrfx_ipc.h @masz-nordic @ioannisg
/drivers/ipm/ipm_stm32_ipcc.c @arnopo
/drivers/led/ @Mani-Sadhasivam
/drivers/led_strip/ @mbolivar
/drivers/lora/ @Mani-Sadhasivam
/drivers/modem/ @mike-scott
/drivers/pcie/ @andrewboie
/drivers/pinmux/stm32/ @rsalveti @idlethread
/drivers/pinmux/*hsdk* @iriszzw
/drivers/pwm/*litex* @mateusz-holenko @kgugala @pgielda
/drivers/sensor/ @MaureenHelm
/drivers/sensor/ams_iAQcore/ @alexanderwachter
/drivers/sensor/ens210/ @alexanderwachter
/drivers/sensor/hts*/ @avisconti
/drivers/sensor/lis*/ @avisconti
/drivers/sensor/lps*/ @avisconti
/drivers/sensor/lsm*/ @avisconti
/drivers/sensor/st*/ @avisconti
/drivers/serial/uart_altera_jtag_hal.c @wentongwu
/drivers/serial/*ns16550* @andrewboie
/drivers/serial/Kconfig.litex @mateusz-holenko @kgugala @pgielda
/drivers/serial/uart_liteuart.c @mateusz-holenko @kgugala @pgielda
/drivers/serial/Kconfig.rtt @carlescufi @pkral78
/drivers/serial/uart_rtt.c @carlescufi @pkral78
/drivers/serial/Kconfig.xlnx @wjliang
/drivers/serial/uart_xlnx_ps.c @wjliang
/drivers/net/ @jukkar @tbursztyka
/drivers/ptp_clock/ @jukkar
/drivers/pwm/*rv32m1* @henrikbrixandersen
/drivers/pwm/pwm_shell.c @henrikbrixandersen
/drivers/spi/ @tbursztyka
/drivers/spi/spi_ll_stm32.* @superna9999
/drivers/spi/spi_rv32m1_lpspi* @karstenkoenig
/drivers/timer/apic_timer.c @andrewboie
/drivers/timer/arm_arch_timer.c @carlocaione
/drivers/timer/cortex_m_systick.c @ioannisg
/drivers/timer/altera_avalon_timer_hal.c @wentongwu
/drivers/timer/riscv_machine_timer.c @nategraff-sifive @kgugala @pgielda
/drivers/timer/litex_timer.c @mateusz-holenko @kgugala @pgielda
/drivers/timer/xlnx_psttc_timer.c @wjliang
/drivers/timer/cc13x2_cc26x2_rtc_timer.c @vanti
/drivers/usb/ @jfischer-phytec-iot @finikorg
/drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
/drivers/video/ @loicpoulain
/drivers/i2c/i2c_ll_stm32* @ldts @ydamigos
/drivers/i2c/i2c_rv32m1_lpi2c* @henrikbrixandersen
/drivers/i2c/*sam0* @Sizurka
/drivers/i2c/i2c_dw* @dcpleung
/drivers/*/*xec* @franciscomunoz @albertofloyd @scottwcpg
/drivers/watchdog/*gecko* @oanerer
/drivers/watchdog/wdt_handlers.c @andrewboie
/drivers/wifi/ @jukkar @tbursztyka @pfalcon
/drivers/wifi/eswifi/ @loicpoulain
/dts/arc/ @vonhust @ruuddw @iriszzw
/dts/arm/atmel/sam4e* @nandojve
/dts/arm/atmel/samr21.dtsi @benpicco
/dts/arm/atmel/sam*5*.dtsi @benpicco
/dts/arm/atmel/samv71* @nandojve
/dts/arm/broadcom/ @sbranden
/dts/arm/qemu-virt/ @carlocaione
/dts/arm/st/ @erwango
/dts/arm/ti/cc13?2* @bwitherspoon
/dts/arm/ti/cc26?2* @bwitherspoon
/dts/arm/ti/cc3235* @vanti
/dts/arm/nordic/ @ioannisg @carlescufi
/dts/arm/nxp/ @MaureenHelm
/dts/arm/microchip/ @franciscomunoz @albertofloyd @scottwcpg
/dts/arm/silabs/efm32gg11b* @oanerer
/dts/arm/silabs/efm32_jg_pg* @chrta
/dts/arm/silabs/efm32jg12b* @chrta
/dts/arm/silabs/efm32pg12b* @chrta
/dts/riscv/microsemi-miv.dtsi @galak
/dts/riscv/rv32m1* @MaureenHelm
/dts/riscv/riscv32-fe310.dtsi @nategraff-sifive
/dts/riscv/riscv32-litex-vexriscv.dtsi @mateusz-holenko @kgugala @pgielda
/dts/arm/armv7-r.dtsi @bbolen @stephanosio
/dts/arm/armv8-a.dtsi @carlocaione
/dts/arm/xilinx/ @bbolen @stephanosio
/dts/xtensa/xtensa.dtsi @ydamigos
/dts/xtensa/intel/ @dcpleung
/dts/bindings/ @galak
/dts/bindings/can/ @alexanderwachter
/dts/bindings/iio/adc/st*stm32-adc.yaml @cybertale
/dts/bindings/serial/ns16550.yaml @andrewboie
/dts/bindings/*/nordic* @anangl
/dts/bindings/*/nxp* @MaureenHelm
/dts/bindings/*/openisa* @MaureenHelm
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda @nategraff-sifive
/dts/bindings/*/litex* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/vexriscv* @mateusz-holenko @kgugala @pgielda
/dts/posix/ @aescolar @vanwinkeljan
/dts/bindings/sensor/*bme680* @BoschSensortec
/dts/bindings/sensor/st* @avisconti
/ext/hal/cmsis/ @MaureenHelm @galak @stephanosio
/ext/lib/crypto/tinycrypt/ @ceolin
/include/ @nashif @carlescufi @galak @MaureenHelm
/include/drivers/adc.h @anangl
/include/drivers/can.h @alexanderwachter
/include/drivers/counter.h @nordic-krch
/include/drivers/display.h @vanwinkeljan
/include/drivers/espi.h @albertofloyd @franciscomunoz @scottwcpg
/include/drivers/bluetooth/ @joerchan @jhedberg @Vudentz
/include/drivers/flash.h @nashif @carlescufi @galak @MaureenHelm @nvlsianpu
/include/drivers/led/ht16k33.h @henrikbrixandersen
/include/drivers/interrupt_controller/ @andrewboie
/include/drivers/interrupt_controller/gic.h @stephanosio
/include/drivers/pcie/ @andrewboie
/include/drivers/hwinfo.h @alexanderwachter
/include/drivers/led.h @Mani-Sadhasivam
/include/drivers/led_strip.h @mbolivar
/include/drivers/sensor.h @MaureenHelm
/include/drivers/spi.h @tbursztyka
/include/drivers/lora.h @Mani-Sadhasivam
/include/app_memory/ @andrewboie
/include/arch/arc/ @vonhust @ruuddw
/include/arch/arc/arch.h @andrewboie
/include/arch/arc/v2/irq.h @andrewboie
/include/arch/arm/aarch32/ @MaureenHelm @galak @ioannisg
/include/arch/arm/aarch32/cortex_r/ @stephanosio
/include/arch/arm/aarch64/ @carlocaione
/include/arch/arm/aarch32/irq.h @andrewboie
/include/arch/nios2/ @andrewboie
/include/arch/nios2/arch.h @andrewboie
/include/arch/posix/ @aescolar
/include/arch/riscv/ @nategraff-sifive @kgugala @pgielda
/include/arch/x86/ @andrewboie @wentongwu
/include/arch/common/ @andrewboie @andyross @nashif
/include/arch/xtensa/ @andrewboie
/include/sys/atomic.h @andrewboie @andyross
/include/bluetooth/ @joerchan @jhedberg @Vudentz
/include/cache.h @andrewboie @andyross
/include/canbus/ @alexanderwachter
/include/tracing/ @wentongwu @nashif
/include/debug/ @nashif
/include/device.h @wentongwu @nashif
/include/display/ @vanwinkeljan
/include/dt-bindings/clock/kinetis_mcg.h @henrikbrixandersen
/include/dt-bindings/clock/kinetis_scg.h @henrikbrixandersen
/include/dt-bindings/dma/stm32_dma.h @cybertale
/include/dt-bindings/pcie/ @andrewboie
/include/dt-bindings/usb/usb.h @galak @finikorg
/include/fs/ @nashif @wentongwu
/include/init.h @andrewboie @andyross
/include/irq.h @andrewboie @andyross
/include/irq_offload.h @andrewboie @andyross
/include/kernel.h @andrewboie @andyross
/include/kernel_version.h @andrewboie @andyross
/include/linker/app_smem*.ld @andrewboie
/include/linker/ @andrewboie @andyross
/include/logging/ @nordic-krch
/include/net/ @jukkar @tbursztyka @pfalcon
/include/net/buf.h @jukkar @jhedberg @tbursztyka @pfalcon
/include/posix/ @pfalcon
/include/power/power.h @wentongwu @nashif
/include/ptp_clock.h @jukkar
/include/shared_irq.h @andrewboie @andyross
/include/shell/ @jakub-uC @nordic-krch
/include/sw_isr_table.h @andrewboie @andyross
/include/sys_clock.h @andrewboie @andyross
/include/sys/sys_io.h @andrewboie @andyross
/include/toolchain.h @andrewboie @andyross @nashif
/include/toolchain/ @andrewboie @andyross
/include/zephyr.h @andrewboie @andyross
/kernel/ @andrewboie @andyross
/lib/gui/ @vanwinkeljan
/lib/os/ @andrewboie @andyross
/lib/posix/ @pfalcon
/lib/cmsis_rtos_v2/ @nashif
/lib/cmsis_rtos_v1/ @nashif
/lib/libc/ @nashif @andrewboie
/modules/ @nashif
/kernel/device.c @andrewboie @andyross @nashif
/kernel/idle.c @andrewboie @andyross @nashif
/samples/ @nashif
/samples/basic/minimal/ @carlescufi
/samples/basic/servo_motor/*microbit* @jhe
/lib/updatehub/ @chtavares592 @otavio
/samples/bluetooth/ @jhedberg @Vudentz @joerchan
/samples/boards/intel_s1000_crb/ @sathishkuttan @dcpleung @nashif
/samples/display/ @vanwinkeljan
/samples/drivers/CAN/ @alexanderwachter
/samples/drivers/display/ @vanwinkeljan
/samples/drivers/ht16k33/ @henrikbrixandersen
/samples/drivers/lora/ @Mani-Sadhasivam
/samples/net/ @jukkar @tbursztyka @pfalcon
/samples/net/dns_resolve/ @jukkar @tbursztyka @pfalcon
/samples/net/lwm2m_client/ @mike-scott
/samples/net/mqtt_publisher/ @jukkar @tbursztyka
/samples/net/sockets/coap_*/ @rveerama1
/samples/net/sockets/ @jukkar @tbursztyka @pfalcon
/samples/net/updatehub/ @chtavares592 @otavio
/samples/sensor/ @MaureenHelm
/samples/shields/ @avisconti
/samples/subsys/logging/ @nordic-krch @jakub-uC
/samples/subsys/shell/ @jakub-uC @nordic-krch
/samples/subsys/usb/ @jfischer-phytec-iot @finikorg
/samples/subsys/power/ @wentongwu @pabigot
/samples/userspace/ @andrewboie
/scripts/coccicheck @himanshujha199640 @JuliaLawall
/scripts/coccinelle/ @himanshujha199640 @JuliaLawall
/scripts/kconfig/ @ulfalizer
/scripts/elf_helper.py @andrewboie
/scripts/sanity_chk/expr_parser.py @nashif
/scripts/gen_app_partitions.py @andrewboie
/scripts/dts/ @ulfalizer @galak
/scripts/release/ @nashif
/arch/x86/gen_gdt.py @andrewboie
/arch/x86/gen_idt.py @andrewboie
/scripts/gen_kobject_list.py @andrewboie
/scripts/gen_priv_stacks.py @andrewboie @agross-oss @ioannisg
/scripts/gen_syscalls.py @andrewboie
/scripts/net/ @jukkar @pfl
/scripts/process_gperf.py @andrewboie
/scripts/gen_relocate_app.py @wentongwu
/scripts/tracing/ @wentongwu
/scripts/sanity_chk/ @nashif
/scripts/sanitycheck @nashif
/scripts/series-push-hook.sh @erwango
/scripts/west_commands/ @mbolivar
/scripts/west-commands.yml @mbolivar
/scripts/zephyr_module.py @tejlmand
/scripts/valgrind.supp @aescolar
/subsys/bluetooth/ @joerchan @jhedberg @Vudentz
/subsys/bluetooth/controller/ @carlescufi @cvinayak @thoh-ot
/subsys/bluetooth/mesh/ @jhedberg @trond-snekvik @joerchan @Vudentz
/subsys/canbus/ @alexanderwachter
/subsys/cpp/ @pabigot @vanwinkeljan
/subsys/debug/ @nashif
/subsys/tracing/ @nashif @wentongwu
/subsys/debug/asan_hacks.c @vanwinkeljan @aescolar
/subsys/disk/disk_access_spi_sdhc.c @JunYangNXP
/subsys/disk/disk_access_sdhc.h @JunYangNXP
/subsys/disk/disk_access_usdhc.c @JunYangNXP
/subsys/fb/ @jfischer-phytec-iot
/subsys/fs/ @nashif
/subsys/fs/fcb/ @nvlsianpu
/subsys/fs/fuse_fs_access.c @vanwinkeljan
/subsys/fs/littlefs_fs.c @pabigot
/subsys/fs/nvs/ @Laczen
/subsys/logging/ @nordic-krch
/subsys/logging/log_backend_net.c @nordic-krch @jukkar
/subsys/mgmt/ @carlescufi @nvlsianpu
/subsys/net/buf.c @jukkar @jhedberg @tbursztyka @pfalcon
/subsys/net/ip/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/dns/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/lwm2m/ @mike-scott
/subsys/net/lib/config/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/mqtt/ @jukkar @tbursztyka @rlubos
/subsys/net/lib/openthread/ @rlubos
/subsys/net/lib/coap/ @rveerama1
/subsys/net/lib/sockets/ @jukkar @tbursztyka @pfalcon
/subsys/net/lib/tls_credentials/ @rlubos
/subsys/net/l2/ @jukkar @tbursztyka
/subsys/net/l2/canbus/ @alexanderwachter @jukkar
/subsys/power/ @wentongwu @pabigot
/subsys/random/ @dleach02
/subsys/settings/ @nvlsianpu
/subsys/shell/ @jakub-uC @nordic-krch
/subsys/storage/ @nvlsianpu
/subsys/testsuite/ @nashif
/subsys/usb/ @jfischer-phytec-iot @finikorg
/tests/ @nashif
/tests/application_development/libcxx/ @pabigot
/tests/arch/arm/ @ioannisg
/tests/boards/native_posix/ @aescolar
/tests/boards/intel_s1000_crb/ @dcpleung @sathishkuttan
/tests/bluetooth/ @joerchan @jhedberg @Vudentz
/tests/bluetooth/bsim_bt/ @joerchan @jhedberg @Vudentz @aescolar
/tests/posix/ @pfalcon
/tests/crypto/ @ceolin
/tests/crypto/mbedtls/ @nashif @ceolin
/tests/drivers/can/ @alexanderwachter
/tests/drivers/flash_simulator/ @nvlsianpu
/tests/drivers/gpio/ @mnkp @pabigot
/tests/drivers/hwinfo/ @alexanderwachter
/tests/drivers/spi/ @tbursztyka
/tests/drivers/uart/uart_async_api/ @Mierunski
/tests/kernel/ @andrewboie @andyross @nashif
/tests/lib/ @nashif
/tests/net/ @jukkar @tbursztyka @pfalcon
/tests/net/buf/ @jukkar @jhedberg @tbursztyka @pfalcon
/tests/net/lib/ @jukkar @tbursztyka @pfalcon
/tests/net/lib/http_header_fields/ @jukkar @tbursztyka
/tests/net/lib/mqtt_packet/ @jukkar @tbursztyka
/tests/net/lib/coap/ @rveerama1
/tests/net/socket/ @jukkar @tbursztyka @pfalcon
/tests/subsys/fs/ @nashif @wentongwu
/tests/subsys/settings/ @nvlsianpu
/tests/subsys/shell/ @jakub-uC @nordic-krch
cmake/ @nashif @SebastianBoe
/CMakeLists.txt @nashif @SebastianBoe
doc/ @dbkinder
doc/CMakeLists.txt @carlescufi
doc/scripts/* @carlescufi
doc/subsystems/bluetooth/ @sjanc @jhedberg @Vudentz
drivers/*/*mcux* @MaureenHelm
drivers/*/*qmsi* @nashif
drivers/*/*stm32* @erwango
drivers/*/*native_posix* @aescolar
drivers/adc/ @anangl
drivers/bluetooth/ @sjanc @jhedberg @Vudentz
drivers/clock_control/*stm32f4* @rsalveti @idlethread
drivers/counter/ @anangl
drivers/ethernet/ @jukkar @tbursztyka @pfalcon
drivers/flash/ @nashif
drivers/flash/*stm32* @superna9999
drivers/gpio/*stm32* @rsalveti @idlethread
drivers/gpio/gpio_pulpino.c @fractalclone @kgugala @pgielda
drivers/ieee802154/* @jukkar @tbursztyka
drivers/interrupt_controller/* @andrewboie
drivers/led/* @Mani-Sadhasivam
drivers/led_strip/* @mbolivar
drivers/pinmux/stm32/* @rsalveti @idlethread
drivers/sensor/ @bogdan-davidoaia @MaureenHelm
drivers/serial/uart_altera_jtag_hal.c @ramakrishnapallala
drivers/serial/uart_riscv_qemu.c @fractalclone @kgugala @pgielda
drivers/net/slip.c @jukkar @tbursztyka
drivers/spi/* @tbursztyka
drivers/spi/spi_ll_stm32.* @superna9999
drivers/timer/altera_avalon_timer_hal.c @ramakrishnapallala
drivers/timer/pulpino_timer.c @fractalclone @kgugala @pgielda
drivers/timer/riscv_machine_timer.c @fractalclone @kgugala @pgielda
drivers/usb/ @jfischer-phytec-iot @finikorg
drivers/usb/device/usb_dc_stm32.c @ydamigos @loicpoulain
drivers/i2c/i2c_ll_stm32* @ldts @ydamigos
dts/arm/st/ @erwango
ext/fs/ @nashif @ramakrishnapallala
ext/hal/cmsis/ @MaureenHelm @galak
ext/hal/nordic/ @carlescufi @anangl
ext/hal/nxp/ @MaureenHelm
ext/hal/qmsi/ @nashif
ext/hal/st/stm32cube/ @erwango
ext/lib/crypto/mbedtls/ @nashif
ext/lib/crypto/tinycrypt/ @lpereira
include/adc.h @anangl
include/arch/arc/* @vonhust @ruuddw
include/arch/arc/arch.h @andrewboie
include/arch/arc/v2/irq.h @andrewboie
include/arch/arm/* @MaureenHelm @galak
include/arch/arm/cortex_m/irq.h @andrewboie
include/arch/nios2/* @andrewboie
include/arch/nios2/arch.h @andrewboie
include/arch/riscv32 @fractalclone @kgugala @pgielda
include/arch/x86/* @andrewboie @ramakrishnapallala
include/arch/x86/arch.h @andrewboie
include/arch/xtensa/* @andrewboie
include/atomic.h @andrewboie @andyross
include/bluetooth/* @sjanc @jhedberg @Vudentz
include/cache.h @andrewboie @andyross
include/counter.h @anangl
include/device.h @ramakrishnapallala @nashif
include/drivers/bluetooth/* @sjanc @jhedberg @Vudentz
include/drivers/ioapic.h @andrewboie
include/drivers/loapic.h @andrewboie
include/drivers/mvic.h @andrewboie
include/fs.h @nashif @ramakrishnapallala
include/fs/* @nashif @ramakrishnapallala
include/init.h @andrewboie @andyross
include/irq.h @andrewboie @andyross
include/irq_offload.h @andrewboie @andyross
include/kernel.h @andrewboie @andyross
include/kernel_version.h @andrewboie @andyross
include/led.h @Mani-Sadhasivam
include/led_strip.h @mbolivar
include/linker/linker-defs.h @andrewboie @andyross
include/linker/linker-tool-gcc.h @andrewboie @andyross
include/linker/linker-tool.h @andrewboie @andyross
include/linker/section_tags.h @andrewboie @andyross
include/linker/sections.h @andrewboie @andyross
include/misc/* @andrewboie @andyross
include/net/* @jukkar @tbursztyka @pfalcon
include/net/buf.h @jukkar @jhedberg @tbursztyka @pfalcon
include/power.h @ramakrishnapallala @nashif
include/sensor.h @bogdan-davidoaia
include/shared_irq.h @andrewboie @andyross
include/spi.h @tbursztyka
include/sw_isr_table.h @andrewboie @andyross
include/sys_clock.h @andrewboie @andyross
include/sys_io.h @andrewboie @andyross
include/toolchain.h @andrewboie @andyross
include/toolchain/* @andrewboie @andyross
include/zephyr.h @andrewboie @andyross
kernel/ @andrewboie @andyross
lib/posix/ @ramakrishnapallala @nniranjhana
kernel/device.c @ramakrishnapallala @nashif
kernel/idle.c @ramakrishnapallala @nashif
samples/bluetooth/ @sjanc @jhedberg @Vudentz
samples/boards/quark_se_c1000/power*/* @ramakrishnapallala @nashif
samples/net/ @jukkar @tbursztyka @pfalcon
samples/net/dns_resolve/ @jukkar @tbursztyka @pfalcon
samples/net/http_server/* @jukkar @tbursztyka
samples/net/lwm2m_client/* @mike-scott
samples/net/mbedtls_sslclient/* @jukkar
samples/net/mqtt_publisher/* @jukkar @tbursztyka
samples/net/coap_client/* @rveerama1
samples/net/coap_server/* @rveerama1
samples/net/sockets/* @jukkar @tbursztyka @pfalcon
samples/sensor/* @bogdan-davidoaia
samples/subsys/usb @jfischer-phytec-iot @finikorg
scripts/expr_parser.py @andrewboie
scripts/sanity_chk/* @andrewboie
scripts/sanitycheck @andrewboie @nashif
scripts/support/runner/* @mbolivar
subsys/bluetooth/* @sjanc @jhedberg @Vudentz
subsys/bluetooth/controller/* @carlescufi @cvinayak
subsys/fs/* @nashif
subsys/net/buf.c @jukkar @jhedberg @tbursztyka @pfalcon
subsys/net/ip/* @jukkar @tbursztyka @pfalcon
subsys/net/lib/* @jukkar @tbursztyka @pfalcon
subsys/net/lib/dns/* @jukkar @tbursztyka @pfalcon
subsys/net/lib/http/* @jukkar @tbursztyka
subsys/net/lib/lwm2m/* @mike-scott
subsys/net/lib/mqtt/* @jukkar @tbursztyka
subsys/net/lib/coap/* @rveerama1
subsys/net/lib/sockets/* @jukkar @tbursztyka @pfalcon
subsys/usb/ @jfischer-phytec-iot @finikorg
tests/boards/native_posix/* @aescolar
tests/bluetooth/ @sjanc @jhedberg @Vudentz @tarunkum
tests/posix/ @nniranjhana
tests/crypto/ @lpereira @pswarnak
tests/crypto/mbedtls/ @nashif @lpereira
tests/drivers/spi/ @tbursztyka
tests/kernel/ @andrewboie @andyross @spoorthik @pswarnak @nniranjhana
tests/net/ @jukkar @tbursztyka @tarunkum @pfalcon
tests/net/buf/ @jukkar @jhedberg @tbursztyka @pfalcon
tests/net/lib/ @jukkar @tbursztyka @pfalcon
tests/net/lib/http_header_fields/ @jukkar @tbursztyka
tests/net/lib/mqtt_packet/ @jukkar @tbursztyka
tests/net/lib/coap/ @rveerama1
tests/net/socket/ @jukkar @tbursztyka @pfalcon
tests/subsys/fs/ @nashif @ramakrishnapallala
# Get all docs reviewed
*.rst @nashif
*posix*.rst @aescolar
*.rst @dbkinder


@@ -1,78 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at conduct@zephyrproject.org.
Reports will be received by Kate Stewart (Linux Foundation) and Amy Occhialino
(Intel). All complaints will be reviewed and investigated, and will result in a
response that is deemed necessary and appropriate to the circumstances. The
project team is obligated to maintain confidentiality with regard to the
reporter of an incident. Further details of specific enforcement policies may
be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -7,7 +7,7 @@ standards and methods for submitting changes help reduce the chaos that can result
from an active development community.
This document briefly summarizes the full `Contribution
Guidelines <http://docs.zephyrproject.org/latest/contribute/index.html>`_
Guidelines <http://docs.zephyrproject.org/contribute/contribute.html>`_
documentation.
* Zephyr uses the permissive open source `Apache 2.0 license`_
@@ -19,7 +19,7 @@ documentation.
* The Developer Certificate of Origin (DCO) process is followed to
ensure developers are following licensing criteria for their
contributions, and documented with a ``Signed-off-by`` line in commits.
contributions, and documented with a ``Signed-of-by`` line in commits.
* Zephyr development workflow is supported on Linux, macOS, and Windows,
(with a few exceptions).


@@ -1,8 +1,10 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
mainmenu "Zephyr Kernel Configuration"
source "Kconfig.zephyr"


@@ -1,444 +1,39 @@
# General configuration options
# Kconfig - general configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2016 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
menu "Modules"
source "$(CMAKE_BINARY_DIR)/Kconfig.modules"
source "modules/Kconfig"
endmenu
# Include Kconfig.defconfig files first so that they can override defaults and
# other symbol/choice properties by adding extra symbol/choice definitions.
# After merging all definitions for a symbol/choice, Kconfig picks the first
# property (e.g. the first default) with a satisfied condition.
#
# Shield defaults should have precedence over board defaults, which should have
# precedence over SoC defaults, so include them in that order.
#
# $ARCH and $BOARD_DIR will be glob patterns when building documentation.
source "boards/shields/*/Kconfig.defconfig"
source "$(BOARD_DIR)/Kconfig.defconfig"
source "$(SOC_DIR)/$(ARCH)/*/Kconfig.defconfig"
source "boards/Kconfig"
source "$(SOC_DIR)/Kconfig"
# Include these first so that any properties (e.g. defaults) below can be
# overriden in *.defconfig files (by defining symbols in multiple locations).
# After merging all the symbol definitions, Kconfig picks the first property
# (e.g. the first default) with a satisfied condition.
#
# Board defaults should be parsed before SoC defaults, because boards usually
# overrides SoC values.
#
# Note: $ENV_VAR_ARCH and $ENV_VAR_BOARD_DIR might be glob patterns.
source "$ENV_VAR_BOARD_DIR/Kconfig.defconfig"
source "arch/$ENV_VAR_ARCH/soc/*/Kconfig.defconfig"
source "arch/Kconfig"
source "kernel/Kconfig"
source "dts/Kconfig"
source "drivers/Kconfig"
source "misc/Kconfig"
source "lib/Kconfig"
source "subsys/Kconfig"
source "ext/Kconfig"
osource "$(TOOLCHAIN_KCONFIG_DIR)/Kconfig"
menu "Build and Link Features"
menu "Linker Options"
choice
prompt "Linker Orphan Section Handling"
default LINKER_ORPHAN_SECTION_WARN
config LINKER_ORPHAN_SECTION_PLACE
bool "Place"
help
Linker puts orphan sections in place without warnings
or errors.
config LINKER_ORPHAN_SECTION_WARN
bool "Warn"
help
Linker places the orphan sections in output and issues
warning about those sections.
config LINKER_ORPHAN_SECTION_ERROR
bool "Error"
help
Linker exits with error when an orphan section is found.
endchoice
config CODE_DATA_RELOCATION
bool "Relocate code/data sections"
depends on ARM
help
When selected this will relocate the .text, data and .bss sections from
the specified files and place them in the required memory region. The
files should be specified in the CMakeLists.txt file with
the CMake API zephyr_code_relocate().
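A minimal CMake sketch of that call, assuming a hypothetical source file src/ram_code.c and the SRAM region; the exact argument form of zephyr_code_relocate() may differ between Zephyr versions:
    # application CMakeLists.txt: move this file's .text/.data/.bss into SRAM
    zephyr_code_relocate(src/ram_code.c SRAM)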
config HAS_FLASH_LOAD_OFFSET
bool
help
This option is selected by targets having a FLASH_LOAD_OFFSET
and FLASH_LOAD_SIZE.
if HAS_FLASH_LOAD_OFFSET
config USE_DT_CODE_PARTITION
bool "Link application into /chosen/zephyr,code-partition from devicetree"
help
When enabled, the application will be linked into the flash partition
selected by the zephyr,code-partition property in /chosen in devicetree.
When this is disabled, the flash load offset and size can be set manually
below.
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_CODE_PARTITION := zephyr,code-partition
config FLASH_LOAD_OFFSET
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load offset" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
default 0
help
This option specifies the byte offset from the beginning of flash that
the kernel should be loaded into. Changing this value from zero will
affect the Zephyr image's link, and will decrease the total amount of
flash available for use by application code.
If unsure, leave at the default value 0.
config FLASH_LOAD_SIZE
# Only user-configurable when USE_DT_CODE_PARTITION is disabled
hex "Kernel load size" if !USE_DT_CODE_PARTITION
default $(dt_chosen_reg_size_hex,$(DT_CHOSEN_Z_CODE_PARTITION)) if USE_DT_CODE_PARTITION
default 0
help
If non-zero, this option specifies the size, in bytes, of the flash
area that the Zephyr image will be allowed to occupy. If zero, the
image will be able to occupy from the FLASH_LOAD_OFFSET to the end of
the device.
If unsure, leave at the default value 0.
endif # HAS_FLASH_LOAD_OFFSET
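For illustration only, the /chosen property referenced above might be set like this in a board devicetree or application overlay (slot0_partition is an assumed partition label):
    / {
            chosen {
                    zephyr,code-partition = &slot0_partition;
            };
    };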
config TEXT_SECTION_OFFSET
hex
prompt "TEXT section offset" if !BOOTLOADER_MCUBOOT
default 0x200 if BOOTLOADER_MCUBOOT
default 0
help
If the application is built for chain-loading by a bootloader this
variable is required to be set to a value that leaves sufficient
space between the beginning of the image and the start of the .text
section to store an image header or any other metadata.
In the particular case of the MCUboot bootloader this reserves enough
space to store the image header, which should also meet vector table
alignment requirements on most ARM targets, although some targets
may require smaller or larger values.
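As a hedged example, an application chain-loaded by a custom (non-MCUboot) loader could reserve header space in its prj.conf; the 0x100 value is purely illustrative:
    CONFIG_TEXT_SECTION_OFFSET=0x100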
config HAVE_CUSTOM_LINKER_SCRIPT
bool "Custom linker scripts provided"
help
Set this option if you have a custom linker script which needs to
be defined in CUSTOM_LINKER_SCRIPT.
config CUSTOM_LINKER_SCRIPT
string "Path to custom linker script"
depends on HAVE_CUSTOM_LINKER_SCRIPT
help
Path to the linker script to be used instead of the one defined by the
board.
The linker script must be based on a version provided by Zephyr since
the kernel can expect a certain layout/certain regions.
This is useful when an application needs to add sections into the
linker script and avoid having to change the script provided by
Zephyr.
config CUSTOM_RODATA_LD
bool "(DEPRECATED) Include custom-rodata.ld"
help
Note: This is deprecated, use the CMake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the rodata section.
config CUSTOM_RWDATA_LD
bool "(DEPRECATED) Include custom-rwdata.ld"
help
Note: This is deprecated, use the CMake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
data and linker directives into the data section.
config CUSTOM_SECTIONS_LD
bool "(DEPRECATED) Include custom-sections.ld"
help
Note: This is deprecated, use the CMake function zephyr_linker_sources() instead.
Include a customized linker script fragment for inserting additional
arbitrary sections.
config KERNEL_ENTRY
string "Kernel entry symbol"
default "__start"
help
Code entry symbol, to be set at linking phase.
config LINKER_SORT_BY_ALIGNMENT
bool "Sort input sections by alignment"
default y
help
This turns on the linker flag to sort sections by alignment
in decreasing size of symbols. This helps to minimize
padding between symbols.
endmenu
menu "Compiler Options"
config NATIVE_APPLICATION
bool "Build as a native host application"
help
Build as a native application that can run on the host, using
resources and libraries provided by the host.
choice
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
optimization level, and that no extra debug code shall be
conditionally compiled based on them.
config SIZE_OPTIMIZATIONS
bool "Optimize for size"
help
Compiler optimizations will be set to -Os independently of other
options.
config SPEED_OPTIMIZATIONS
bool "Optimize for speed"
help
Compiler optimizations will be set to -O2 independently of other
options.
config DEBUG_OPTIMIZATIONS
bool "Optimize debugging experience"
help
Compiler optimizations will be set to -Og independently of other
options.
config NO_OPTIMIZATIONS
bool "Optimize nothing"
help
Compiler optimizations will be set to -O0 independently of other
options.
endchoice
config COMPILER_OPT
string "Custom compiler options"
help
This option is a free-form string that is passed to the compiler
when building all parts of a project (i.e. kernel).
The compiler options specified by this string supplement the
predefined set of compiler options supplied by the build system,
and can be used to change compiler optimization, warning and error
messages, and so on.
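For example, a prj.conf line such as the following would append extra flags to every compile; the flags chosen here are arbitrary:
    CONFIG_COMPILER_OPT="-fstack-usage -Wno-unused-parameter"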
endmenu
choice
prompt "Error checking behavior for CHECK macro"
default RUNTIME_ERROR_CHECKS
config ASSERT_ON_ERRORS
bool "Assert on all errors"
help
Assert on errors covered with the CHECK macro.
config NO_RUNTIME_CHECKS
bool "No runtime error checks"
help
Do not do any runtime checks or asserts when using the CHECK macro.
config RUNTIME_ERROR_CHECKS
bool "Enable runtime error checks"
help
Always perform runtime checks covered with the CHECK macro. This
option is the default and the only option used during testing.
endchoice
menu "Build Options"
config KERNEL_BIN_NAME
string "The kernel binary name"
default "zephyr"
help
This option sets the name of the generated kernel binary.
config OUTPUT_STAT
bool "Create a statistics file"
default y
help
Create a stat file using readelf -e <elf>
config OUTPUT_DISASSEMBLY
bool "Create a disassembly file"
default y
help
Create an .lst file with the assembly listing of the firmware.
config OUTPUT_PRINT_MEMORY_USAGE
bool "Print memory usage to stdout"
default y
help
If the toolchain supports it, this option will pass
--print-memory-usage to the linker when it is doing its first
linker pass. Note that the memory regions are symbolic concepts
defined by the linker scripts and do not necessarily map
directly to the real physical address space. Also note that
some platforms do two linker passes, so the results do not
exactly match the final ELF file. See also rom_report,
ram_report and
https://sourceware.org/binutils/docs/ld/MEMORY.html
config BUILD_OUTPUT_HEX
bool "Build a binary in HEX format"
help
Build a binary in HEX format. This will build a zephyr.hex file needed
by some platforms.
config BUILD_OUTPUT_BIN
bool "Build a binary in BIN format"
default y
help
Build a binary in BIN format. This will build a zephyr.bin file needed
by some platforms.
config BUILD_OUTPUT_EXE
bool "Build a binary in ELF format with .exe extension"
help
Build a binary in ELF format that can run in the host system. This
will build a zephyr.exe file.
config BUILD_OUTPUT_S19
bool "Build a binary in S19 format"
help
Build a binary in S19 format. This will build a zephyr.s19 file needed
by some platforms.
config BUILD_NO_GAP_FILL
bool "Don't fill gaps in generated hex/bin/s19 files."
config BUILD_OUTPUT_STRIPPED
bool "Build a stripped binary"
help
Build a stripped binary. This will build a zephyr.stripped file needed
by some platforms.
config APPLICATION_DEFINED_SYSCALL
bool "Scan application folder for any syscall definition"
help
Scan additional folders inside the application source folder
for application-defined syscalls.
config MAKEFILE_EXPORTS
bool "Generate build metadata files named Makefile.exports"
help
Generates a file with build information that can be read by
third party Makefile-based build systems.
endmenu
endmenu
menu "Boot Options"
config IS_BOOTLOADER
bool "Act as a bootloader"
depends on XIP
depends on ARM
help
This option indicates that Zephyr will act as a bootloader to execute
a separate Zephyr image payload.
config BOOTLOADER_SRAM_SIZE
int "SRAM reserved for bootloader"
default 16
depends on !XIP || IS_BOOTLOADER
depends on ARM || XTENSA
help
This option specifies the amount of SRAM (measured in kB) reserved for
a bootloader image, when either:
- the Zephyr image itself is to act as the bootloader, or
- Zephyr is a !XIP image, which implicitly assumes existence of a
bootloader that loads the Zephyr !XIP image onto SRAM.
config BOOTLOADER_MCUBOOT
bool "MCUboot bootloader support"
select USE_DT_CODE_PARTITION
help
This option signifies that the target uses MCUboot as a bootloader,
or in other words that the image is to be chain-loaded by MCUboot.
This sets several required build system and Device Tree options in
order for the image generated to be bootable using the MCUboot open
source bootloader. Currently this includes:
* Setting TEXT_SECTION_OFFSET to a default value that allows space
for the MCUboot image header
* Activating SW_VECTOR_RELAY on Cortex-M0 (or Armv8-M baseline)
targets with no built-in vector relocation mechanisms
config BOOTLOADER_ESP_IDF
bool "ESP-IDF bootloader support"
depends on SOC_ESP32
help
This option will trigger the compilation of the ESP-IDF bootloader
inside the build folder.
At flash time, the bootloader will be flashed with the zephyr image
config BOOTLOADER_KEXEC
bool "Boot using Linux kexec() system call"
depends on X86
help
This option signifies that Linux boots the kernel using the kexec system call
and utility. This method is used to boot the kernel over the network.
config BOOTLOADER_CONTEXT_RESTORE
bool "Boot loader has context restore support"
default y
depends on SYS_POWER_DEEP_SLEEP_STATES && BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
help
This option signifies that the target has a bootloader
that restores CPU context upon resuming from deep sleep
power state.
config REBOOT
bool "Reboot functionality"
select SYSTEM_CLOCK_DISABLE
help
Enable the sys_reboot() API. Enabling this can drag in other subsystems
needed to perform a "safe" reboot (e.g. SYSTEM_CLOCK_DISABLE, to stop the
system clock before issuing a reset).
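A minimal C sketch of using the API this option enables; the header path matches this Zephyr version but has moved in others:
    #include <zephyr.h>
    #include <power/reboot.h>

    void reset_board(void)
    {
        /* SYS_REBOOT_WARM is the other supported type */
        sys_reboot(SYS_REBOOT_COLD);
    }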
config MISRA_SANE
bool "MISRA standards compliance features"
help
Causes the source code to build in "MISRA" mode, which
disallows some otherwise-permitted features of the C
standard for safety reasons. Specifically variable length
arrays are not permitted (and gcc will enforce this).
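As an illustration (not code from the tree), the kind of construct this option causes the compiler to reject:
    void fill(int n)
    {
        int scratch[n]; /* variable length array: flagged under MISRA_SANE */
        (void)scratch;
    }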
endmenu
menu "Compatibility"
config COMPAT_INCLUDES
bool "Suppress warnings when using header shims"
default y
help
Suppress any warnings from the pre-processor when including
deprecated header files.
endmenu
source "tests/Kconfig"


@@ -1,5 +1,5 @@
#
# Top level makefile for documentation build
# Top level makefile for things not covered by cmake
#
ifndef ZEPHYR_BASE
@@ -12,14 +12,5 @@ SPHINXOPTS ?= -q
# Documentation targets
# ---------------------------------------------------------------------------
clean:
rm -rf ${BUILDDIR}
htmldocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
htmldocs-fast:
mkdir -p ${BUILDDIR} && cmake -GNinja -DKCONFIG_TURBO_MODE=1 -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} htmldocs
pdfdocs:
mkdir -p ${BUILDDIR} && cmake -GNinja -DDOC_TAG=${DOC_TAG} -DSPHINXOPTS=${SPHINXOPTS} -B${BUILDDIR} -Hdoc/ && ninja -C ${BUILDDIR} pdfdocs


@@ -1,3 +1,4 @@
.. raw:: html
<a href="https://www.zephyrproject.org">
@@ -31,56 +32,95 @@ Intel x86, ARC, Nios II, Tensilica Xtensa, and RISC-V, and a large number of
Getting Started
***************
Welcome to Zephyr! See the `Introduction to Zephyr`_ for a high-level overview,
and the documentation's `Getting Started Guide`_ to start developing.
To start developing Zephyr applications refer to the `Getting Started Guide`_
in the `Zephyr Documentation`_ pages.
A brief introduction to Zephyr can be found in the `Zephyr Introduction`_
page.
Community Support
*****************
Community support is provided via mailing lists and Slack; see the Resources
below for details.
The Zephyr Project Developer Community includes developers from member
organizations and the general community all joining in the development of
software within the Zephyr Project. Members contribute and discuss ideas,
submit bugs and bug fixes, and provide training. They also help those in need
through the community's forums such as mailing lists and IRC channels. Anyone
can join the developer community and the community is always willing to help
its members and the User Community to get the most out of the Zephyr Project.
.. _project-resources:
Welcome to the Zephyr community!
Resources
*********
Here's a quick summary of resources to help you find your way around:
Here's a quick summary of resources to find your way around the Zephyr Project
support systems:
* **Help**: `Asking for Help Tips`_
* **Documentation**: http://docs.zephyrproject.org (`Getting Started Guide`_)
* **Source Code**: https://github.com/zephyrproject-rtos/zephyr is the main
repository; https://elixir.bootlin.com/zephyr/latest/source contains a
searchable index
* **Releases**: https://zephyrproject.org/developers/#downloads
* **Samples and example code**: see `Sample and Demo Code Examples`_
* **Mailing Lists**: users@lists.zephyrproject.org and
devel@lists.zephyrproject.org are the main user and developer mailing lists,
respectively. You can join the developer's list and search its archives at
`Zephyr Development mailing list`_. The other `Zephyr mailing list
subgroups`_ have their own archives and sign-up pages.
* **Nightly CI Build Status**: https://lists.zephyrproject.org/g/builds
The builds@lists.zephyrproject.org mailing list archives the CI
(shippable) nightly build results.
* **Chat**: Zephyr's Slack workspace is https://zephyrproject.slack.com. Use
this `Slack Invite`_ to register.
* **Contributing**: see the `Contribution Guide`_
* **Wiki**: `Zephyr GitHub wiki`_
* **Issues**: https://github.com/zephyrproject-rtos/zephyr/issues
* **Security Issues**: Email vulnerabilities@zephyrproject.org to report
security issues; also see our `Security`_ documentation. Security issues are
tracked separately at https://zephyrprojectsec.atlassian.net.
* **Zephyr Project Website**: https://zephyrproject.org
* **Zephyr Project Website**: The https://zephyrproject.org website is the
central source of information about the Zephyr Project. On this site, you'll
find background and current information about the project as well as all the
relevant links to project material.
.. _Slack Invite: https://tinyurl.com/y5glwylp
.. _supported boards: http://docs.zephyrproject.org/latest/boards/index.html
* **Releases**: Source code for Zephyr kernel releases is available at
https://zephyrproject.org/developers/#downloads. On this page,
you'll find release information, and links to download or clone source
code from our GitHub repository. You'll also find links for the Zephyr
SDK, a moderated collection of tools and libraries used to develop your
applications.
* **Source Code in GitHub**: Zephyr Project source code is maintained on a
public GitHub repository at https://github.com/zephyrproject-rtos/zephyr.
You'll find information about getting access to the repository and how to
contribute to the project in this `Contribution Guide`_ document.
* **Samples Code**: In addition to the kernel source code, there are also
many documented `Sample and Demo Code Examples`_ that can help show you
how to use Zephyr services and subsystems.
* **Documentation**: Extensive Project technical documentation is developed
along with the Zephyr kernel itself, and can be found at
http://docs.zephyrproject.org. Additional documentation is maintained in
the `Zephyr GitHub wiki`_.
* **Cross-reference**: Source code cross-reference for the Zephyr
kernel and samples code is available at
https://elixir.bootlin.com/zephyr/latest/source.
* **Issue Reporting and Tracking**: Requirements and Issue tracking is done in
the Github issues system: https://github.com/zephyrproject-rtos/zephyr/issues.
You can browse through the reported issues and submit issues of your own.
* **Security-related Issue Reporting and Tracking**: For security-related
inquiries or reporting suspected security-related bugs in the Zephyr OS,
please send email to vulnerabilities@zephyrproject.org. We will assess and
fix flaws according to our security policy outlined in the Zephyr Project
`Security Overview`_.
Security related issue tracking is done in JIRA. The location of this JIRA
is https://zephyrprojectsec.atlassian.net.
* **Mailing List**: The `Zephyr Development mailing list`_ is perhaps the most convenient
way to track developer discussions and to ask your own support questions to
the Zephyr project community. There are also specific `Zephyr mailing list
subgroups`_ for announcements, builds, marketing, and Technical
Steering Committee notes, for example.
You can read through the message archives to follow
past posts and discussions, a good thing to do to discover more about the
Zephyr project.
* **IRC Chatting**: You can chat online with the Zephyr project developer
community and other users in our IRC channel #zephyrproject on the
freenode.net IRC server. You can use the http://webchat.freenode.net web
client or use a client-side application such as pidgin.
.. _supported boards: http://docs.zephyrproject.org/boards/boards.html
.. _Zephyr Documentation: http://docs.zephyrproject.org
.. _Introduction to Zephyr: http://docs.zephyrproject.org/latest/introduction/index.html
.. _Getting Started Guide: http://docs.zephyrproject.org/latest/getting_started/index.html
.. _Contribution Guide: http://docs.zephyrproject.org/latest/contribute/index.html
.. _Zephyr Introduction: http://docs.zephyrproject.org/introduction/introducing_zephyr.html
.. _Getting Started Guide: http://docs.zephyrproject.org/getting_started/getting_started.html
.. _Contribution Guide: http://docs.zephyrproject.org/contribute/contribute_guidelines.html
.. _Zephyr GitHub wiki: https://github.com/zephyrproject-rtos/zephyr/wiki
.. _Zephyr Development mailing list: https://lists.zephyrproject.org/g/devel
.. _Zephyr mailing list subgroups: https://lists.zephyrproject.org/g/main/subgroups
.. _Sample and Demo Code Examples: http://docs.zephyrproject.org/latest/samples/index.html
.. _Security: http://docs.zephyrproject.org/latest/security/index.html
.. _Asking for Help Tips: https://docs.zephyrproject.org/latest/guides/getting-help.html
.. _Sample and Demo Code Examples: http://docs.zephyrproject.org/samples/samples.html
.. _Security Overview: http://docs.zephyrproject.org/security/security-overview.html


@@ -1,5 +1,5 @@
VERSION_MAJOR = 2
VERSION_MINOR = 2
VERSION_MAJOR = 1
VERSION_MINOR = 13
PATCHLEVEL = 0
VERSION_TWEAK = 0
EXTRAVERSION =


@@ -1,11 +1,4 @@
# SPDX-License-Identifier: Apache-2.0
add_definitions(-D__ZEPHYR_SUPERVISOR__)
include_directories(
${ZEPHYR_BASE}/kernel/include
${ZEPHYR_BASE}/arch/${ARCH}/include
)
add_subdirectory(common)
add_subdirectory(${ARCH_DIR}/${ARCH} arch/${ARCH})
add_subdirectory(${ARCH})


@@ -1,186 +1,60 @@
# General architecture configuration options
# Kconfig - general architecture configuration options
#
# Copyright (c) 2014-2015 Wind River Systems, Inc.
# Copyright (c) 2015 Intel Corporation
# Copyright (c) 2016 Cadence Design Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
# overriden (by defining symbols in multiple locations)
# Note: $ARCH might be a glob pattern
source "$(ARCH_DIR)/$(ARCH)/Kconfig"
source "boards/Kconfig"
# Note: $ENV_VAR_ARCH might be a glob pattern
source "arch/$ENV_VAR_ARCH/Kconfig"
# Architecture symbols
#
# Should be 'select'ed by low-level symbols like SOC_SERIES_* or, lacking that,
# by SOC_*.
choice
prompt "Architecture"
default X86
config ARC
bool
select ARCH_IS_SET
bool "ARC architecture"
select HAS_DTS
help
ARC architecture
config ARM
bool
select ARCH_IS_SET
select HAS_DTS
help
ARM architecture
bool "ARM architecture"
select ARCH_HAS_THREAD_ABORT
config X86
bool
select ARCH_IS_SET
bool "x86 architecture"
select ATOMIC_OPERATIONS_BUILTIN
select HAS_DTS
help
x86 architecture
config NIOS2
bool
select ARCH_IS_SET
bool "Nios II Gen 2 architecture"
select ATOMIC_OPERATIONS_C
select HAS_DTS
help
Nios II Gen 2 architecture
config RISCV
bool
select ARCH_IS_SET
select HAS_DTS
help
RISCV architecture
config RISCV32
bool "RISCV32 architecture"
config XTENSA
bool
select ARCH_IS_SET
select HAS_DTS
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select XTENSA_HAL if "$(ZEPHYR_TOOLCHAIN_VARIANT)" != "xcc"
help
Xtensa architecture
bool "Xtensa architecture"
config ARCH_POSIX
bool
select ARCH_IS_SET
bool "POSIX (native) architecture"
select ATOMIC_OPERATIONS_BUILTIN
select ARCH_HAS_CUSTOM_SWAP_TO_MAIN
select ARCH_HAS_CUSTOM_BUSY_WAIT
select ARCH_HAS_THREAD_ABORT
select NATIVE_APPLICATION
select HAS_COVERAGE_SUPPORT
help
POSIX (native) architecture
config ARCH_IS_SET
bool
help
Helper symbol to detect SoCs forgetting to select one of the arch
symbols above. See the top-level CMakeLists.txt.
endchoice
menu "General Architecture Options"
module = ARCH
module-str = arch
source "subsys/logging/Kconfig.template.log_config"
module = MPU
module-str = mpu
source "subsys/logging/Kconfig.template.log_config"
config BIG_ENDIAN
bool
help
This option tells the build system that the target system is big-endian.
Little-endian is the default, so most targets should leave this option
unselected. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally avoid
modifying it. The option is used to select linker script OUTPUT_FORMAT
and command line option for gen_isr_tables.py.
config 64BIT
bool
help
This option tells the build system that the target system is
using a 64-bit address space, meaning that pointer and long types
are 64 bits wide. This option is selected by arch/$ARCH/Kconfig,
soc/**/Kconfig, or boards/**/Kconfig and the user should generally
avoid modifying it.
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_SRAM := zephyr,sram
config SRAM_SIZE
int "SRAM Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_SRAM),0,K)
help
The SRAM size in kB. The default value comes from /chosen/zephyr,sram in
devicetree. The user should generally avoid changing it via menuconfig or
in configuration files.
config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
The SRAM base address. The default value comes from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
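For reference, a hedged devicetree sketch of where these two defaults come from; sram0 is an assumed node label whose reg property supplies the base address and size:
    / {
            chosen {
                    zephyr,sram = &sram0;
            };
    };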
if ARC || ARM || NIOS2 || X86
# Workaround for not being able to have commas in macro arguments
DT_CHOSEN_Z_FLASH := zephyr,flash
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && ARM) || !ARM
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
it via the menu configuration.
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && ARM) || !ARM
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
avoid modifying it via the menu configuration.
endif # ARM || ARC || NIOS2 || X86
if ARCH_HAS_TRUSTED_EXECUTION
config TRUSTED_EXECUTION_SECURE
bool "Trusted Execution: Secure firmware image"
help
Select this option to enable building a Secure firmware
image for a platform that supports Trusted Execution. A
Secure firmware image will execute in Secure state. It may
allow the CPU to execute in Non-Secure (Normal) state.
Therefore, a Secure firmware image shall be able to
configure security attributions of CPU resources (memory
areas, peripherals, interrupts, etc.) as well as to handle
faults, related to security violations. It may optionally
allow certain functions to be called from the Non-Secure
(Normal) domain.
config TRUSTED_EXECUTION_NONSECURE
depends on !TRUSTED_EXECUTION_SECURE
bool "Trusted Execution: Non-Secure firmware image"
help
Select this option to enable building a Non-Secure
firmware image for a platform that supports Trusted
Execution. A Non-Secure firmware image will execute
in Non-Secure (Normal) state. Therefore, it shall not
access CPU resources (memory areas, peripherals,
interrupts etc.) belonging to the Secure domain.
endif # ARCH_HAS_TRUSTED_EXECUTION
config HW_STACK_PROTECTION
bool "Hardware Stack Protection"
depends on ARCH_HAS_STACK_PROTECTION
@@ -195,9 +69,8 @@ config HW_STACK_PROTECTION
made.
config USERSPACE
bool "User mode threads"
bool "User mode threads (EXPERIMENTAL)"
depends on ARCH_HAS_USERSPACE
depends on RUNTIME_ERROR_CHECKS
help
When enabled, threads may be created or dropped down to user mode,
which has significantly restricted permissions and must interact
@@ -210,9 +83,13 @@ config USERSPACE
privileged mode or handling a system call; to ensure these are always
caught, enable CONFIG_HW_STACK_PROTECTION.
This feature is under heavy development and APIs related to it are
subject to change, even if declared non-private.
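Illustration only, not part of this diff: with USERSPACE enabled, a thread is dropped to user mode by passing the K_USER option at creation time. A minimal sketch, where the entry function, stack size and priority are hypothetical:

#include <zephyr.h>

#define APP_STACK_SIZE 1024

K_THREAD_STACK_DEFINE(app_stack, APP_STACK_SIZE);
static struct k_thread app_thread;

static void app_entry(void *p1, void *p2, void *p3)
{
	/* runs with restricted user-mode permissions; kernel objects
	 * must be accessed through system calls
	 */
}

void start_user_thread(void)
{
	k_thread_create(&app_thread, app_stack, APP_STACK_SIZE,
			app_entry, NULL, NULL, NULL,
			5, K_USER, K_NO_WAIT);
}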
config PRIVILEGED_STACK_SIZE
int "Size of privileged stack"
default 1024
default 384 if ARC
default 256
depends on ARCH_HAS_USERSPACE
help
This option sets the privileged stack region size that will be used
@@ -220,37 +97,12 @@ config PRIVILEGED_STACK_SIZE
this region will be inaccessible from user mode. During system calls,
this region will be utilized by the system call.
config PRIVILEGED_STACK_TEXT_AREA
int "Privileged stacks text area"
default 512 if COVERAGE_GCOV
default 256
depends on ARCH_HAS_USERSPACE
help
Stack text area size for privileged stacks.
config KOBJECT_TEXT_AREA
int "Size if kobject text area"
default 512 if COVERAGE_GCOV
default 512 if NO_OPTIMIZATIONS
default 256
depends on ARCH_HAS_USERSPACE
help
Size of kernel object text area. Used in linker script.
config STACK_GROWS_UP
bool "Stack grows towards higher memory addresses"
help
Select this option if the architecture has upward growing thread
stacks. This is not common.
config NO_UNUSED_STACK_INSPECTION
bool
help
Selected if the architecture will generate a fault if unused stack
memory is examined, which is the region between the current stack
pointer and the deepest available address in the current stack
region.
config MAX_THREAD_BYTES
int "Bytes to use when tracking object thread permissions"
default 2
@@ -265,33 +117,28 @@ config DYNAMIC_OBJECTS
bool "Allow kernel objects to be allocated at runtime"
depends on USERSPACE
help
Enabling this option allows for kernel objects to be requested from
the calling thread's resource pool, at a slight cost in performance
due to the supplemental run-time tables required to validate such
objects.
Enabling this option allows for kernel objects to be requested from
the calling thread's resource pool, at a slight cost in performance
due to the supplemental run-time tables required to validate such
objects.
Objects allocated in this way can be freed with a supervisor-only
API call, or when the number of references to that object drops to
zero.
Objects allocated in this way can be freed with a supervisor-only
API call, or when the number of references to that object drops to
zero.
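Illustration only, not part of this diff: a minimal sketch of the runtime object allocation described above, assuming the calling thread has a resource pool assigned:

#include <zephyr.h>

void dynamic_sem_example(void)
{
	/* drawn from the calling thread's resource pool */
	struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

	if (sem != NULL) {
		k_sem_init(sem, 0, 1);
		/* ... use the semaphore ... */
		k_object_release(sem);	/* drop the reference */
	}
}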
config NOCACHE_MEMORY
bool "Support for uncached memory"
depends on ARCH_HAS_NOCACHE_MEMORY_SUPPORT
config SIMPLE_FATAL_ERROR_HANDLER
bool "Simple system fatal error handler"
default y if !MULTITHREADING
help
Add a "nocache" read-write memory section that is configured to
not be cached. This memory section can be used to perform DMA
transfers when cache coherence issues are not optimal or can not
be solved using cache maintenance operations.
Provides an implementation of _SysFatalErrorHandler() that hard hangs
instead of aborting the faulting thread, and does not print anything,
for footprint-concerned systems. Only enable this option if you do not
want debug capabilities in case of system fatal error.
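Illustration only, not part of this diff: a buffer placed in the "nocache" section described above is typically declared with the __nocache attribute, assuming the architecture provides that section and attribute:

#include <zephyr.h>

/* Hypothetical DMA buffer kept out of the data cache. */
static u8_t dma_buffer[256] __nocache;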
menu "Interrupt Configuration"
config DYNAMIC_INTERRUPTS
bool "Enable installation of IRQs at runtime"
help
Enable installation of interrupts at runtime, which will move some
interrupt-related data structures to RAM instead of ROM, and
on some architectures increase code size.
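Illustration only, not part of this diff: with dynamic interrupts enabled, an ISR can be attached at runtime instead of through the static IRQ_CONNECT() macro. The IRQ line, priority and handler below are hypothetical:

#include <zephyr.h>
#include <irq.h>

static void my_isr(void *arg)
{
	/* handle the interrupt */
}

void install_runtime_isr(void)
{
	irq_connect_dynamic(25, 2, my_isr, NULL, 0);
	irq_enable(25);
}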
#
# Interrupt related configs
#
config GEN_ISR_TABLES
bool "Use generated IRQ tables"
help
@@ -321,16 +168,6 @@ config GEN_SW_ISR_TABLE
_isr_table_entry containing the interrupt service routine and supplied
parameter.
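For reference, an editorial sketch of the shape of a generated table entry (paraphrased, not quoted from the header): each entry pairs an ISR with its parameter, roughly as follows:

/* One entry per interrupt line in the generated _sw_isr_table. */
struct _isr_table_entry {
	void *arg;		/* parameter passed to the ISR */
	void (*isr)(void *arg);	/* interrupt service routine */
};

extern struct _isr_table_entry _sw_isr_table[];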
config ARCH_SW_ISR_TABLE_ALIGN
int "Alignment size of a software ISR table"
default 0
depends on GEN_SW_ISR_TABLE
help
This option controls the alignment size of the generated
_sw_isr_table. Some architectures need the software ISR table
to be aligned to an architecture-specific size. The default
size is 0, meaning no alignment.
config GEN_IRQ_START_VECTOR
int
default 0
@@ -343,13 +180,12 @@ config GEN_IRQ_START_VECTOR
This is a hidden option which needs to be set per architecture and
left alone.
config IRQ_OFFLOAD
bool "Enable IRQ offload"
depends on TEST
help
Enable irq_offload() API which allows functions to be synchronously
run in interrupt context. Only useful for test cases that need
to validate the correctness of kernel objects in IRQ context.
run in interrupt context. Mainly useful for test cases.
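Illustration only, not part of this diff: a minimal test-style use of irq_offload(), which runs the supplied routine synchronously in interrupt context:

#include <zephyr.h>
#include <irq_offload.h>

static struct k_sem test_sem;

static void offload_fn(void *param)
{
	/* executes in interrupt context */
	k_sem_give((struct k_sem *)param);
}

void run_isr_context_test(void)
{
	k_sem_init(&test_sem, 0, 1);
	irq_offload(offload_fn, &test_sem);
	k_sem_take(&test_sem, K_FOREVER);
}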
endmenu # Interrupt configuration
@@ -358,10 +194,6 @@ endmenu
#
# Architecture Capabilities
#
config ARCH_HAS_TRUSTED_EXECUTION
bool
config ARCH_HAS_STACK_PROTECTION
bool
@@ -371,15 +203,6 @@ config ARCH_HAS_USERSPACE
config ARCH_HAS_EXECUTABLE_PAGE_BIT
bool
config ARCH_HAS_NOCACHE_MEMORY_SUPPORT
bool
config ARCH_HAS_RAMFUNC_SUPPORT
bool
config ARCH_HAS_NESTED_EXCEPTION_DETECTION
bool
#
# Other architecture related options
#
@@ -391,65 +214,30 @@ config ARCH_HAS_THREAD_ABORT
# Hidden PM feature configs which are to be selected by
# individual SoC.
#
config HAS_SYS_POWER_STATE_SLEEP_1
config SYS_POWER_LOW_POWER_STATE_SUPPORTED
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_1
This option signifies that the target supports the SYS_POWER_LOW_POWER_STATE
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_2
config SYS_POWER_DEEP_SLEEP_SUPPORTED
# Hidden
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_SLEEP_3
bool
help
This option signifies that the target supports the SYS_POWER_STATE_SLEEP_3
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_1
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_1
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_2
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_2
configuration option.
config HAS_SYS_POWER_STATE_DEEP_SLEEP_3
bool
help
This option signifies that the target supports the SYS_POWER_STATE_DEEP_SLEEP_3
This option signifies that the target supports the SYS_POWER_DEEP_SLEEP
configuration option.
config BOOTLOADER_CONTEXT_RESTORE_SUPPORTED
# Hidden
bool
help
This option signifies that the target has bootloader options
that support context restore upon resume from deep sleep.
#
# Hidden CPU family configs
#
config CPU_HAS_TEE
bool
help
This option is enabled when the CPU has support for Trusted
Execution Environment (e.g. when it has a security attribution
unit).
config CPU_HAS_DCLS
bool
help
This option is enabled when the processor hardware is configured in
Dual-redundant Core Lock-step (DCLS) topology.
# End hidden CPU family configs
#
config CPU_HAS_FPU
bool
@@ -459,57 +247,23 @@ config CPU_HAS_FPU
config CPU_HAS_MPU
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when the CPU has a Memory Protection Unit (MPU).
config MEMORY_PROTECTION
bool
help
This option is enabled when Memory Protection features are supported.
Memory protection support is currently available on ARC, ARM, and x86
architectures.
config MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT
bool
# Omit prompt to signify "hidden" option
help
This option is enabled when the MPU requires a power of two alignment
and size for MPU regions.
config MPU_REQUIRES_NON_OVERLAPPING_REGIONS
bool
help
This option is enabled when the MPU requires the active (i.e. enabled)
MPU regions to be non-overlapping with each other.
config MPU_GAP_FILLING
bool "Force MPU to be filling in background memory regions"
depends on MPU_REQUIRES_NON_OVERLAPPING_REGIONS
default y if !USERSPACE
help
This Kconfig option instructs the MPU driver to enforce
a full kernel SRAM partitioning, when it programs the
dynamic MPU regions (user thread stack, PRIV stack guard
and application memory domains) during context-switch. We
allow this to be a configurable option, in order to be able
to switch the option off and have an increased number of MPU
regions available for application memory domain programming.
menu "Floating Point Options"
depends on CPU_HAS_FPU
Notes:
An increased number of MPU regions should only be required
when building with USERSPACE support. As a result, when we
build without USERSPACE support, gap filling should always
be required.
When the option is switched off, access to memory areas not
covered by explicit MPU regions is restricted to privileged
code on an ARCH-specific basis. Refer to ARCH-specific
documentation for more information on how this option is
used.
menuconfig FLOAT
bool "Floating point"
depends on CPU_HAS_FPU
depends on ARM || X86 || ARC
config FLOAT
bool "Floating point registers"
help
This option allows threads to use the floating point registers.
By default, only a single thread may use the registers.
@@ -524,6 +278,12 @@ config FP_SHARING
This option allows multiple threads to use the floating point
registers.
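Illustration only, not part of this diff: a thread that uses the FPU is typically created with the K_FP_REGS option so its floating point registers are saved and restored across context switches; option availability is architecture-dependent and the sizes below are hypothetical:

#include <zephyr.h>

K_THREAD_STACK_DEFINE(fp_stack, 1024);
static struct k_thread fp_thread;

static void fp_entry(void *p1, void *p2, void *p3)
{
	volatile float acc = 0.0f;

	acc += 1.5f;	/* touches the floating point registers */
}

void start_fp_thread(void)
{
	k_thread_create(&fp_thread, fp_stack, 1024, fp_entry,
			NULL, NULL, NULL, 7, K_FP_REGS, K_NO_WAIT);
}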
endmenu
#
# End hidden PM feature configs
#
config ARCH
string
help
@@ -532,21 +292,22 @@ config ARCH
config SOC
string
help
SoC name which can be found under soc/<arch>/<soc name>.
SoC name which can be found under arch/<arch>/soc/<soc name>.
This option holds the directory name used by the build system to locate
the correct linker and header files for the SoC.
the correct linker and header files for the SoC. This option will go away
once all SoCs are using family/series structure.
config SOC_SERIES
string
help
SoC series name which can be found under soc/<arch>/<family>/<series>.
SoC series name which can be found under arch/<arch>/soc/<family>/<series>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
config SOC_FAMILY
string
help
SoC family name which can be found under soc/<arch>/<family>.
SoC family name which can be found under arch/<arch>/soc/<family>.
This option holds the directory name used by the build system to locate
the correct linker and header files.
@@ -557,4 +318,4 @@ config BOARD
related to the board in the source tree (under boards/).
The Board is the first location where we search for a linker.ld file,
if not found we look for the linker file in
soc/<arch>/<family>/<series>
arch/<arch>/soc/<family>/<series>


@@ -1,5 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
# Enable debug support in mdb
# Dwarf version 2 can be recognized by mdb
# The default dwarf version in gdb is not recognized by mdb
@@ -10,6 +8,7 @@ zephyr_cc_option(-g3 -gdwarf-2)
# See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63691
zephyr_cc_option(-fno-delete-null-pointer-checks)
zephyr_cc_option_ifdef(CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS -munaligned-access)
zephyr_cc_option_ifdef (CONFIG_LTO -flto)
add_subdirectory(soc/${SOC_PATH})
add_subdirectory(core)


@@ -1,77 +1,64 @@
# ARC options
# ARC EM4 options
# Copyright (c) 2014, 2019 Wind River Systems, Inc.
#
# Copyright (c) 2014 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
choice
prompt "ARC SoC Selection"
depends on ARC
source "arch/arc/soc/*/Kconfig.soc"
endchoice
menu "ARC Options"
depends on ARC
# Include these first so that any properties (e.g. defaults) below can be
# overridden (by defining symbols in multiple locations)
source "arch/arc/soc/*/Kconfig"
config ARCH
default "arc"
choice
prompt "ARC core family"
default CPU_ARCEM
config ARCH_DEFCONFIG
string
default "arch/arc/defconfig"
config CPU_ARCEM
bool "ARC EM cores"
menu "ARC EM4 processor options"
config CPU_ARCEM4
bool
default y
select CPU_ARCV2
select ATOMIC_OPERATIONS_C
help
This option signifies the use of an ARC EM CPU
This option signifies the use of an ARC EM4 CPU
config CPU_ARCHS
bool "ARC HS cores"
select CPU_ARCV2
select ATOMIC_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU
endchoice
config CPU_EM4
bool
help
If y, the SoC uses an ARC EM4 CPU
config CPU_EM4_DMIPS
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU
config CPU_EM4_FPUS
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU with the single-precision
floating-point extension
config CPU_EM4_FPUDA
bool
help
If y, the SoC uses an ARC EM4 DMIPS CPU with single-precision
floating-point and double assist instructions
config CPU_EM6
bool
help
If y, the SoC uses an ARC EM6 CPU
config FP_FPU_DA
bool
endmenu
menu "ARCv2 Family Options"
config CPU_ARCV2
config CPU_ARCV2
bool
select ARCH_HAS_STACK_PROTECTION if ARC_HAS_STACK_CHECKING || ARC_MPU
select ARCH_HAS_USERSPACE if ARC_MPU
select USE_SWITCH
select USE_SWITCH_SUPPORTED
select ARCH_HAS_STACK_PROTECTION
select ARCH_HAS_USERSPACE if ARC_CORE_MPU
default y
help
This option signifies the use of a CPU of the ARCv2 family.
config NUM_IRQ_PRIO_LEVELS
config DATA_ENDIANNESS_LITTLE
bool
default y
help
This is driven by the processor implementation, since it is fixed in
hardware. The BSP should set this value to 'n' if the data is
implemented as big endian.
config NUM_IRQ_PRIO_LEVELS
int "Number of supported interrupt priority levels"
range 1 16
help
@@ -80,7 +67,7 @@ config NUM_IRQ_PRIO_LEVELS
The BSP must provide a valid default for proper operation.
config NUM_IRQS
config NUM_IRQS
int "Upper limit of interrupt numbers/IDs used"
range 17 256
help
@@ -91,7 +78,7 @@ config NUM_IRQS
The BSP must provide a valid default. This drives the size of the
vector table.
config RGF_NUM_BANKS
config RGF_NUM_BANKS
int "Number of General Purpose Register Banks"
depends on CPU_ARCV2
range 1 2
@@ -114,67 +101,15 @@ config ARC_FIRQ
If FIRQ is disabled, the handling of interrupts with the highest priority
will be the same as for other interrupts.
config ARC_FIRQ_STACK
bool "Enable separate firq stack"
depends on ARC_FIRQ && RGF_NUM_BANKS > 1
help
Use a separate stack for FIRQ handling. When the fast irq is also a direct
irq, this gives the minimal interrupt latency.
config ARC_FIRQ_STACK_SIZE
int "FIRQ stack size"
depends on ARC_FIRQ_STACK
default 1024
help
The size of the FIRQ stack.
config ARC_HAS_STACK_CHECKING
bool "ARC has STACK_CHECKING"
default y
help
ARC is configured with STACK_CHECKING which is a mechanism for
checking stack accesses and raising an exception when a stack
overflow or underflow is detected.
config ARC_CONNECT
bool "ARC has ARC connect"
select SCHED_IPI_SUPPORTED
help
ARC is configured with ARC CONNECT, which is hardware for connecting
multiple cores.
config ARC_STACK_CHECKING
bool
select NO_UNUSED_STACK_INSPECTION
help
Use ARC STACK_CHECKING to do stack protection
config ARC_STACK_PROTECTION
config ARC_STACK_CHECKING
bool
default y if HW_STACK_PROTECTION
select ARC_STACK_CHECKING if ARC_HAS_STACK_CHECKING
select MPU_STACK_GUARD if (!ARC_STACK_CHECKING && ARC_MPU)
select THREAD_STACK_INFO
help
This option enables either:
- The ARC stack checking, or
- the MPU-based stack guard
to cause a system fatal error
if the bounds of the current process stack are overflowed.
The two stack guard options are mutually exclusive. The
selection of the ARC stack checking is
prioritized over the MPU-based stack guard.
ARCv2 has a special feature for checking stack overflows. This
enables the code that makes use of this debug feature.
config ARC_USE_UNALIGNED_MEM_ACCESS
bool "Enable unaligned access in HW"
default y if CPU_ARCHS
depends on (CPU_ARCEM && !ARC_HAS_SECURE) || CPU_ARCHS
help
ARC EM cores without SecureShield 2+2 mode support might be configured
to support unaligned memory access, which is then disabled by default.
Enable unaligned access in hardware and make the software use it.
config FAULT_DUMP
config FAULT_DUMP
int "Fault dump level"
default 2
range 0 2
@@ -189,7 +124,7 @@ config FAULT_DUMP
0: Off.
config XIP
config XIP
default y if !UART_NSIM
config GEN_ISR_TABLES
@@ -209,71 +144,18 @@ config CODE_DENSITY
help
Enable code density option to get better code density
config ARC_HAS_ACCL_REGS
bool "Reg Pair ACCL:ACCH (FPU and/or MPY > 6)"
default y if FLOAT
help
Depending on the configuration, the CPU can contain an accumulator reg-pair
(also referred to as r58:r59). These can also be used by gcc as GPRs, so the
kernel needs to save/restore them per process.
config ARC_HAS_SECURE
bool "ARC has SecureShield"
select CPU_HAS_TEE
select ARCH_HAS_TRUSTED_EXECUTION
bool
# a hidden option
help
This option is enabled when ARC core supports secure mode
config SJLI_TABLE_SIZE
int "SJLI table size"
depends on ARC_SECURE_FIRMWARE
default 8
help
The size of the sjli (Secure Jump and Link Indexed) table. Code
in normal mode calls secure services in secure mode through the
sjli instruction.
config ARC_SECURE_FIRMWARE
bool "Generate Secure Firmware"
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_SECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in secure mode. The option is only
applicable to ARC processors that implement the SecureShield.
This option enables Zephyr to include code that executes in
secure mode, as well as to exclude code that is designed to
execute only in normal mode.
Code executing in secure mode has access to both the secure
and normal resources of the ARC processors.
config ARC_NORMAL_FIRMWARE
bool "Generate Normal Firmware"
depends on !ARC_SECURE_FIRMWARE
depends on ARC_HAS_SECURE
default y if TRUSTED_EXECUTION_NONSECURE
help
This option indicates that we are building a Zephyr image that
is intended to execute in normal mode. Execution of this
image is triggered by secure firmware that executes in secure
mode. The option is only applicable to ARC processors that
implement the SecureShield.
This option enables Zephyr to include code that executes in
normal mode only, as well as to exclude code that is
designed to execute only in secure mode.
Code executing in normal mode has no access to secure
resources of the ARC processors, and, therefore, it shall avoid
accessing them.
menu "ARC MPU Options"
depends on CPU_HAS_MPU
config ARC_MPU_ENABLE
bool "Enable MPU"
depends on CPU_HAS_MPU
select ARC_MPU
help
Enable MPU
@@ -311,24 +193,6 @@ config CACHE_FLUSHING
If the d-cache is present, set this to y.
If the d-cache is NOT present, set this to n.
config ARC_EXCEPTION_STACK_SIZE
int "ARC exception handling stack size"
default 768
help
Size in bytes of the exception handling stack, which sits at the top of
the interrupt stack to keep the memory footprint small, since exceptions
are not frequent. To reduce the impact on interrupt handling,
especially nested interrupts, it cannot be too large.
endmenu
config ARC_EXCEPTION_DEBUG
bool "Unhandled exception debugging information"
default n
depends on PRINTK || LOG
help
Print human-readable information about exception vectors, cause codes,
and parameters, at a cost of code/data size for the human-readable
strings.
endmenu


@@ -1,34 +1,27 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources(
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
timestamp.c
isr_wrapper.S
regular_irq.S
switch.S
prep_c.c
reset.S
vector_table.c
)
thread.c
thread_entry_wrapper.S
cpu_idle.S
fatal.c
fault.c
fault_s.S
irq_manage.c
cache.c
timestamp.c
isr_wrapper.S
regular_irq.S
swap.S
sys_fatal_error_handler.c
prep_c.c
reset.S
vector_table.c
)
zephyr_library_sources_ifdef(CONFIG_CACHE_FLUSHING cache.c)
zephyr_library_sources_ifdef(CONFIG_ARC_FIRQ fast_irq.S)
zephyr_library_sources_if_kconfig(irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_connect.c)
zephyr_library_sources_ifdef(CONFIG_ARC_CONNECT arc_smp.c)
zephyr_library_sources_ifdef(CONFIG_ATOMIC_OPERATIONS_CUSTOM atomic.c)
add_subdirectory_ifdef(CONFIG_ARC_CORE_MPU mpu)
add_subdirectory_ifdef(CONFIG_ARC_SECURE_FIRMWARE secureshield)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)


@@ -1,449 +0,0 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARCv2 ARC CONNECT driver
*
*/
#include <kernel.h>
#include <arch/cpu.h>
#include <spinlock.h>
static struct k_spinlock arc_connect_spinlock;
#define LOCKED(lck) for (k_spinlock_key_t __i = {}, \
__key = k_spin_lock(lck); \
!__i.key; \
k_spin_unlock(lck, __key), __i.key = 1)
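Editorial note, not part of the file: the for-based LOCKED() macro above runs its body exactly once while holding the spinlock. Hand-expanded, it behaves roughly like the sketch below (the function name is illustrative only):

static void locked_section_equivalent(void)
{
	k_spinlock_key_t key = k_spin_lock(&arc_connect_spinlock);

	/* ... body statements execute exactly once here ... */

	k_spin_unlock(&arc_connect_spinlock, key);
}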
/* Generate an inter-core interrupt to the target core */
void z_arc_connect_ici_generate(u32_t core)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_IRQ, core);
}
}
/* Acknowledge the inter-core interrupt raised by core */
void z_arc_connect_ici_ack(u32_t core)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_GENERATE_ACK, core);
}
}
/* Read inter-core interrupt status */
u32_t z_arc_connect_ici_read_status(u32_t core)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_READ_STATUS, core);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Check the source of inter-core interrupt */
u32_t z_arc_connect_ici_check_src(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Clear the inter-core interrupt */
void z_arc_connect_ici_clear(void)
{
u32_t cpu, c;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_INTRPT_CHECK_SOURCE, 0);
cpu = z_arc_connect_cmd_readback(); /* 1,2,4,8... */
/*
* In rare cases, multiple concurrent ICIs sent to the same target can
* be coalesced by the MCIP into 1 asserted IRQ, so @cpu can be
* "vectored" (multiple bits set) as opposed to the typical single bit
*/
while (cpu) {
c = find_lsb_set(cpu) - 1;
z_arc_connect_cmd(
ARC_CONNECT_CMD_INTRPT_GENERATE_ACK, c);
cpu &= ~(1U << c);
}
}
}
/* Reset the cores in core_mask */
void z_arc_connect_debug_reset(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RESET,
0, core_mask);
}
}
/* Halt the cores in core_mask */
void z_arc_connect_debug_halt(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_HALT,
0, core_mask);
}
}
/* Run the cores in core_mask */
void z_arc_connect_debug_run(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_RUN,
0, core_mask);
}
}
/* Set core mask */
void z_arc_connect_debug_mask_set(u32_t core_mask, u32_t mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_MASK,
mask, core_mask);
}
}
/* Read core mask */
u32_t z_arc_connect_debug_mask_read(u32_t core_mask)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_READ_MASK,
0, core_mask);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/*
* Select cores that should be halted if the core issuing the command is halted
*/
void z_arc_connect_debug_select_set(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_DEBUG_SET_SELECT,
0, core_mask);
}
}
/* Read the select value */
u32_t z_arc_connect_debug_select_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_SELECT, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Read the status, halt or run of all cores in the system */
u32_t z_arc_connect_debug_en_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_EN, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Read the last command sent */
u32_t z_arc_connect_debug_cmd_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CMD, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Read the value of internal MCD_CORE register */
u32_t z_arc_connect_debug_core_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_DEBUG_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Clear global free running counter */
void z_arc_connect_gfrc_clear(void)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_CLEAR, 0);
}
}
/* Read total 64 bits of global free running counter */
u64_t z_arc_connect_gfrc_read(void)
{
u32_t low;
u32_t high;
u32_t key;
/*
* each core has its own ARC connect interface, i.e.,
* CMD/READBACK. So several concurrent commands to ARC
* connect are fine if they are trying to access different
* sub-components. For GFRC, HW allows simultaneous access to the
* counters, so an irq lock is enough.
*/
key = arch_irq_lock();
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_LO, 0);
low = z_arc_connect_cmd_readback();
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HI, 0);
high = z_arc_connect_cmd_readback();
arch_irq_unlock(key);
return (((u64_t)high) << 32) | low;
}
/* Enable global free running counter */
void z_arc_connect_gfrc_enable(void)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_ENABLE, 0);
}
}
/* Disable global free running counter */
void z_arc_connect_gfrc_disable(void)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_DISABLE, 0);
}
}
/* Set the cores that can halt the global free running counter */
void z_arc_connect_gfrc_core_set(u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_GFRC_SET_CORE,
0, core_mask);
}
}
/* Read the halt status of the global free running counter */
u32_t z_arc_connect_gfrc_halt_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_HALT, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Read the internal CORE register */
u32_t z_arc_connect_gfrc_core_read(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_GFRC_READ_CORE, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Enable interrupt distribute unit */
void z_arc_connect_idu_enable(void)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ENABLE, 0);
}
}
/* Disable interrupt distribute unit */
void z_arc_connect_idu_disable(void)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_DISABLE, 0);
}
}
/* Read enable status of interrupt distribute unit */
u32_t z_arc_connect_idu_read_enable(void)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_ENABLE, 0);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/*
* Set the triggering mode and distribution mode for the specified common
* interrupt
*/
void z_arc_connect_idu_set_mode(u32_t irq_num,
u16_t trigger_mode, u16_t distri_mode)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MODE,
irq_num, (distri_mode | (trigger_mode << 4)));
}
}
/* Read the internal MODE register of the specified common interrupt */
u32_t z_arc_connect_idu_read_mode(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MODE, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/*
* Set the target cores to receive the specified common interrupt
* when it is triggered
*/
void z_arc_connect_idu_set_dest(u32_t irq_num, u32_t core_mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_DEST,
irq_num, core_mask);
}
}
/* Read the internal DEST register of the specified common interrupt */
u32_t z_arc_connect_idu_read_dest(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_DEST, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Assert the specified common interrupt */
void z_arc_connect_idu_gen_cirq(u32_t irq_num)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_GEN_CIRQ, irq_num);
}
}
/* Acknowledge the specified common interrupt */
void z_arc_connect_idu_ack_cirq(u32_t irq_num)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_ACK_CIRQ, irq_num);
}
}
/* Read the internal STATUS register of the specified common interrupt */
u32_t z_arc_connect_idu_check_status(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_STATUS, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Read the internal SOURCE register of the specified common interrupt */
u32_t z_arc_connect_idu_check_source(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_SOURCE, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/* Mask or unmask the specified common interrupt */
void z_arc_connect_idu_set_mask(u32_t irq_num, u32_t mask)
{
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd_data(ARC_CONNECT_CMD_IDU_SET_MASK,
irq_num, mask);
}
}
/* Read the internal MASK register of the specified common interrupt */
u32_t z_arc_connect_idu_read_mask(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_READ_MASK, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}
/*
* Check if it is the first-acknowledging core to the common interrupt
* if IDU is programmed in the first-acknowledged mode
*/
u32_t z_arc_connect_idu_check_first(u32_t irq_num)
{
u32_t ret = 0;
LOCKED(&arc_connect_spinlock) {
z_arc_connect_cmd(ARC_CONNECT_CMD_IDU_CHECK_FIRST, irq_num);
ret = z_arc_connect_cmd_readback();
}
return ret;
}


@@ -1,185 +0,0 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Code required for ARC multicore and Zephyr SMP support
*
*/
#include <device.h>
#include <kernel.h>
#include <kernel_structs.h>
#include <ksched.h>
#include <soc.h>
#include <init.h>
#ifndef IRQ_ICI
#define IRQ_ICI 19
#endif
#define ARCV2_ICI_IRQ_PRIORITY 1
volatile struct {
arch_cpustart_t fn;
void *arg;
} arc_cpu_init[CONFIG_MP_NUM_CPUS];
/*
* arc_cpu_wake_flag is used to sync up the master core and slave cores.
* A slave core will spin on arc_cpu_wake_flag until the master core sets
* it to the core id of that slave core. Then, the slave core clears it to
* notify the master core that it has woken up.
*
*/
volatile u32_t arc_cpu_wake_flag;
volatile char *arc_cpu_sp;
/*
* _curr_cpu is used to record the struct of _cpu_t of each cpu.
* for efficient usage in assembly
*/
volatile _cpu_t *_curr_cpu[CONFIG_MP_NUM_CPUS];
/* Called from Zephyr initialization */
void arch_start_cpu(int cpu_num, k_thread_stack_t *stack, int sz,
arch_cpustart_t fn, void *arg)
{
_curr_cpu[cpu_num] = &(_kernel.cpus[cpu_num]);
arc_cpu_init[cpu_num].fn = fn;
arc_cpu_init[cpu_num].arg = arg;
/* set the initial sp of the target core through arc_cpu_sp;
* arc_cpu_wake_flag protects arc_cpu_sp so that
* only one slave cpu can read it at a time
*/
arc_cpu_sp = Z_THREAD_STACK_BUFFER(stack) + sz;
arc_cpu_wake_flag = cpu_num;
/* wait for the slave cpu to start */
while (arc_cpu_wake_flag != 0) {
;
}
}
/* the C entry of slave cores */
void z_arc_slave_start(int cpu_num)
{
arch_cpustart_t fn;
#ifdef CONFIG_SMP
z_icache_setup();
z_irq_setup();
z_arc_connect_ici_clear();
z_irq_priority_set(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY, 0);
irq_enable(IRQ_ICI);
#endif
/* call the function set by arch_start_cpu */
fn = arc_cpu_init[cpu_num].fn;
fn(arc_cpu_init[cpu_num].arg);
}
#ifdef CONFIG_SMP
static void sched_ipi_handler(void *unused)
{
ARG_UNUSED(unused);
z_arc_connect_ici_clear();
z_sched_ipi();
}
/**
* @brief Check whether a thread switch needs to be done in ISR context
*
* @details u64_t is used to let the compiler use (r0, r1) as the return
* registers: r0 holds the new thread and r1 holds the old thread. If
* r0 == 0, it means no thread switch is needed.
*/
u64_t z_arc_smp_switch_in_isr(void)
{
u64_t ret = 0;
u32_t new_thread;
u32_t old_thread;
old_thread = (u32_t)_current;
new_thread = (u32_t)z_get_next_ready_thread();
if (new_thread != old_thread) {
#ifdef CONFIG_TIMESLICING
z_reset_time_slice();
#endif
_current_cpu->swap_ok = 0;
((struct k_thread *)new_thread)->base.cpu =
arch_curr_cpu()->id;
_current_cpu->current = (struct k_thread *) new_thread;
ret = new_thread | ((u64_t)(old_thread) << 32);
}
return ret;
}
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
u32_t i;
/* broadcast sched_ipi request to other cores
* if the target is current core, hardware will ignore it
*/
for (i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
z_arc_connect_ici_generate(i);
}
}
static int arc_smp_init(struct device *dev)
{
ARG_UNUSED(dev);
struct arc_connect_bcr bcr;
/* necessary master core init */
_kernel.cpus[0].id = 0;
_kernel.cpus[0].irq_stack = Z_THREAD_STACK_BUFFER(_interrupt_stack)
+ CONFIG_ISR_STACK_SIZE;
_curr_cpu[0] = &(_kernel.cpus[0]);
bcr.val = z_arc_v2_aux_reg_read(_ARC_V2_CONNECT_BCR);
if (bcr.ipi) {
/* register ici interrupt, just need master core to register once */
z_arc_connect_ici_clear();
IRQ_CONNECT(IRQ_ICI, ARCV2_ICI_IRQ_PRIORITY,
sched_ipi_handler, NULL, 0);
irq_enable(IRQ_ICI);
} else {
__ASSERT(0,
"ARC connect has no inter-core interrupt\n");
return -ENODEV;
}
if (bcr.gfrc) {
/* global free running count init */
z_arc_connect_gfrc_enable();
/* when all cores halt, gfrc halt */
z_arc_connect_gfrc_core_set((1 << CONFIG_MP_NUM_CPUS) - 1);
z_arc_connect_gfrc_clear();
} else {
__ASSERT(0,
"ARC connect has no global free running counter\n");
return -ENODEV;
}
return 0;
}
SYS_INIT(arc_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

422
arch/arc/core/atomic.S Normal file

@@ -0,0 +1,422 @@
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARC atomic operations library
*
* This library provides routines to perform a number of atomic operations
* on a memory location: add, subtract, increment, decrement, bitwise OR,
* bitwise NOR, bitwise AND, bitwise NAND, set, clear and compare-and-swap.
*
* This requires the processor to support the LLOCK and SCOND instructions,
* which are not supported on ARC EM family processors.
*/
#include <toolchain.h>
#include <linker/sections.h>
/* exports */
GTEXT(atomic_set)
GTEXT(atomic_get)
GTEXT(atomic_add)
GTEXT(atomic_nand)
GTEXT(atomic_and)
GTEXT(atomic_or)
GTEXT(atomic_xor)
GTEXT(atomic_clear)
GTEXT(atomic_dec)
GTEXT(atomic_inc)
GTEXT(atomic_sub)
GTEXT(atomic_cas)
.section .TEXT._Atomic, "ax"
.balign 2
/**
*
* @brief Atomically clear a memory location
*
* This routine atomically clears the contents of <target> and returns the old
* value that was in <target>.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_clear
* (
* atomic_t *target /@ memory location to clear @/
* )
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_clear_set, atomic_clear)
mov_s r1, 0
/* fall through into atomic_set */
/**
*
* @brief Atomically set a memory location
*
* This routine atomically sets the contents of <target> to <value> and returns
* the old value that was in <target>.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_set
* (
* atomic_t *target, /@ memory location to set @/
* atomic_val_t value /@ set with this value @/
* )
*
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_clear_set, atomic_set)
ex r1, [r0] /* swap new value with old value */
j_s.d [blink]
mov_s r0, r1 /* return old value */
/**
*
* @brief Get the value of a shared memory atomically
*
* This routine atomically retrieves the value in *target
*
* atomic_val_t atomic_get
* (
* atomic_t *target /@ address of atom to be retrieved @/
* )
*
* RETURN: value read from address target.
*
*/
SECTION_FUNC(TEXT, atomic_get)
ld_s r0, [r0, 0]
j_s [blink]
/**
*
* @brief Atomically increment a memory location
*
* This routine atomically increments the value in <target>. The operation is
* done using unsigned integer arithmetic. Various CPU architectures may impose
* restrictions with regards to the alignment and cache attributes of the
* atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_inc
* (
* atomic_t *target, /@ memory location to increment @/
* )
*
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_inc_add, atomic_inc)
mov_s r1, 1
/* fall through into atomic_add */
/**
*
* @brief Atomically add a value to a memory location
*
* This routine atomically adds the contents of <target> and <value>, placing
* the result in <target>. The operation is done using signed integer arithmetic.
* Various CPU architectures may impose restrictions with regards to the
* alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_add
* (
* atomic_t *target, /@ memory location to add to @/
* atomic_val_t value /@ value to add @/
* )
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_inc_add, atomic_add)
llock r2, [r0] /* load old value and mark exclusive access */
add_s r3, r1, r2
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_add /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically decrement a memory location
*
* This routine atomically decrements the value in <target>. The operation is
* done using unsigned integer arithmetic. Various CPU architectures may impose
* restrictions with regards to the alignment and cache attributes of the
* atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_dec
* (
* atomic_t *target, /@ memory location to decrement @/
* )
*
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_dec_sub, atomic_dec)
mov_s r1, 1
/* fall through into atomic_sub */
/**
*
* @brief Atomically subtract a value from a memory location
*
* This routine atomically subtracts <value> from the contents of <target>,
* placing the result in <target>. The operation is done using signed integer
* arithmetic. Various CPU architectures may impose restrictions with regards to
* the alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_sub
* (
* atomic_t *target, /@ memory location to subtract from @/
* atomic_val_t value /@ value to subtract @/
* )
*
*/
SECTION_SUBSEC_FUNC(TEXT, atomic_dec_sub, atomic_sub)
llock r2, [r0] /* load old value and mark exclusive access */
sub r3, r2, r1
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_sub /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically perform a bitwise NAND on a memory location
*
* This routine atomically performs a bitwise NAND operation of the contents of
* <target> and <value>, placing the result in <target>.
* Various CPU architectures may impose restrictions with regards to the
* alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_nand
* (
* atomic_t *target, /@ memory location to NAND @/
* atomic_val_t value /@ NAND with this value @/
* )
*
*/
SECTION_FUNC(TEXT, atomic_nand)
llock r2, [r0] /* load old value and mark exclusive access */
and r3, r1, r2
not r3, r3
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_nand /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically perform a bitwise AND on a memory location
*
* This routine atomically performs a bitwise AND operation of the contents of
* <target> and <value>, placing the result in <target>.
* Various CPU architectures may impose restrictions with regards to the
* alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_and
* (
* atomic_t *target, /@ memory location to AND @/
* atomic_val_t value /@ AND with this value @/
* )
*
*/
SECTION_FUNC(TEXT, atomic_and)
llock r2, [r0] /* load old value and mark exclusive access */
and r3, r1, r2
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_and /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically perform a bitwise OR on memory location
*
* This routine atomically performs a bitwise OR operation of the contents of
* <target> and <value>, placing the result in <target>.
* Various CPU architectures may impose restrictions with regards to the
* alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_or
* (
* atomic_t *target, /@ memory location to OR @/
* atomic_val_t value /@ OR with this value @/
* )
*
*/
SECTION_FUNC(TEXT, atomic_or)
llock r2, [r0] /* load old value and mark exclusive access */
or r3, r1, r2
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_or /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically perform a bitwise XOR on a memory location
*
* This routine atomically performs a bitwise XOR operation of the contents of
* <target> and <value>, placing the result in <target>.
* Various CPU architectures may impose restrictions with regards to the
* alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return Contents of <target> before the atomic operation
*
* ERRNO: N/A
*
* atomic_val_t atomic_xor
* (
* atomic_t *target, /@ memory location to XOR @/
* atomic_val_t value /@ XOR with this value @/
* )
*
*/
SECTION_FUNC(TEXT, atomic_xor)
llock r2, [r0] /* load old value and mark exclusive access */
xor r3, r1, r2
scond r3, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_xor /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, r2 /* return old value */
/**
*
* @brief Atomically compare-and-swap the contents of a memory location
*
* This routine performs an atomic compare-and-swap, testing that the contents of
* <target> contain <oldValue>, and if they do, setting the value of <target>
* to <newValue>. Various CPU architectures may impose restrictions with regards
* to the alignment and cache attributes of the atomic_t type.
*
* This routine can be used from both task and interrupt level.
*
* @return 1 if the swap is actually executed, 0 otherwise.
*
* ERRNO: N/A
*
* int atomic_cas
* (
* atomic_t *target, /@ memory location to compare-and-swap @/
* atomic_val_t oldValue, /@ compare to this value @/
* atomic_val_t newValue, /@ swap with this value @/
* )
*
*/
SECTION_FUNC(TEXT, atomic_cas)
llock r3, [r0] /* load old value and mark exclusive access */
cmp_s r1, r3
bne_s nanoAtomicCas_fail
scond r2, [r0] /* try to store new value */
/* STATUS32.Z = 1 if successful */
bne_s atomic_cas /* if store is not successful, retry */
j_s.d [blink]
mov_s r0, 1 /* return TRUE */
/* failed comparison */
nanoAtomicCas_fail:
scond r1, [r0] /* write old value to clear the access lock */
j_s.d [blink]
mov_s r0, 0 /* return FALSE */
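Editorial note, not part of the file: from C, the compare-and-swap implemented above is typically used in a retry loop. A minimal sketch; the lock-word protocol here is illustrative only:

#include <atomic.h>

static atomic_t lock_word = ATOMIC_INIT(0);

/* Spin until lock_word transitions from 0 to 1. */
void take_lock(void)
{
	while (atomic_cas(&lock_word, 0, 1) == 0) {
		/* another context holds it; retry */
	}
}

void release_lock(void)
{
	(void)atomic_set(&lock_word, 0);
}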


@@ -15,15 +15,16 @@
#include <kernel.h>
#include <arch/cpu.h>
#include <sys/util.h>
#include <misc/util.h>
#include <toolchain.h>
#include <cache.h>
#include <linker/linker-defs.h>
#include <arch/arc/v2/aux_regs.h>
#include <kernel_internal.h>
#include <sys/__assert.h>
#include <misc/__assert.h>
#include <init.h>
#include <stdbool.h>
#if defined(CONFIG_CACHE_FLUSHING)
#if (CONFIG_CACHE_LINE_SIZE == 0) && !defined(CONFIG_CACHE_LINE_SIZE_DETECT)
#error Cannot use this implementation with a cache line size of 0
@@ -47,19 +48,19 @@
#define DC_CTRL_OP_SUCCEEDED 0x4 /* d-cache operation succeeded */
static bool dcache_available(void)
static int dcache_available(void)
{
unsigned long val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
unsigned long val = _arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
val &= 0xff; /* extract version */
return (val == 0) ? false : true;
return (val == 0)?0:1;
}
static void dcache_dc_ctrl(u32_t dcache_en_mask)
{
if (dcache_available()) {
z_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, dcache_en_mask);
}
if (!dcache_available())
return;
_arc_v2_aux_reg_write(_ARC_V2_DC_CTRL, dcache_en_mask);
}
static void dcache_enable(void)
@@ -89,7 +90,7 @@ static void dcache_flush_mlines(u32_t start_addr, u32_t size)
u32_t end_addr;
unsigned int key;
if (!dcache_available() || (size == 0U)) {
if (!dcache_available() || (size == 0)) {
return;
}
@@ -99,16 +100,15 @@ static void dcache_flush_mlines(u32_t start_addr, u32_t size)
key = irq_lock(); /* --enter critical section-- */
do {
z_arc_v2_aux_reg_write(_ARC_V2_DC_FLDL, start_addr);
_arc_v2_aux_reg_write(_ARC_V2_DC_FLDL, start_addr);
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
__asm__ volatile("nop_s");
/* wait for flush completion */
do {
if ((z_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) &
DC_CTRL_FLUSH_STATUS) == 0) {
if ((_arc_v2_aux_reg_read(_ARC_V2_DC_CTRL) &
DC_CTRL_FLUSH_STATUS) == 0)
break;
}
} while (1);
start_addr += DCACHE_LINE_SIZE;
} while (start_addr <= end_addr);
@@ -147,10 +147,10 @@ static void init_dcache_line_size(void)
{
u32_t val;
val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
__ASSERT((val&0xff) != 0U, "d-cache is not present");
val = _arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
__ASSERT((val&0xff) != 0, "d-cache is not present");
val = ((val>>16) & 0xf) + 1;
val *= 16U;
val *= 16;
sys_cache_line_size = (size_t) val;
}
#endif
@@ -169,3 +169,6 @@ static int init_dcache(struct device *unused)
}
SYS_INIT(init_dcache, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* CONFIG_CACHE_FLUSHING */


@@ -17,11 +17,11 @@
#include <linker/sections.h>
#include <arch/cpu.h>
GTEXT(arch_cpu_idle)
GTEXT(arch_cpu_atomic_idle)
GDATA(z_arc_cpu_sleep_mode)
GTEXT(k_cpu_idle)
GTEXT(k_cpu_atomic_idle)
GDATA(k_cpu_sleep_mode)
SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
SECTION_VAR(BSS, k_cpu_sleep_mode)
.balign 4
.word 0
@@ -33,29 +33,17 @@ SECTION_VAR(BSS, z_arc_cpu_sleep_mode)
* void nanCpuIdle(void)
*/
SECTION_FUNC(TEXT, arch_cpu_idle)
SECTION_FUNC(TEXT, k_cpu_idle)
#ifdef CONFIG_TRACING
push_s blink
jl sys_trace_idle
jl z_sys_trace_idle
pop_s blink
#endif
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
/*
* It has been found that (in nsim_hs_smp), when the cpu
* is sleeping, it does not respond to an inter-processor interrupt
* even though it is pending and interrupts are enabled.
* Here is a workaround.
*/
#if !defined(CONFIG_SOC_NSIM) && !defined(CONFIG_SMP)
sleep r1
#else
seti r1
_z_arc_idle_loop:
b _z_arc_idle_loop
#endif
j_s [blink]
nop
@@ -64,17 +52,17 @@ _z_arc_idle_loop:
*
* This function exits with interrupts restored to <key>.
*
* void arch_cpu_atomic_idle(unsigned int key)
* void k_cpu_atomic_idle(unsigned int key)
*/
SECTION_FUNC(TEXT, arch_cpu_atomic_idle)
SECTION_FUNC(TEXT, k_cpu_atomic_idle)
#ifdef CONFIG_TRACING
push_s blink
jl sys_trace_idle
jl z_sys_trace_idle
pop_s blink
#endif
ld r1, [z_arc_cpu_sleep_mode]
ld r1, [k_cpu_sleep_mode]
or r1, r1, (1 << 4) /* set IRQ-enabled bit */
sleep r1
j_s.d [blink]


@@ -16,13 +16,19 @@
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
GTEXT(_firq_enter)
GTEXT(_firq_exit)
GDATA(exc_nest_count)
#if CONFIG_RGF_NUM_BANKS == 1
GDATA(saved_r0)
#else
GDATA(saved_sp)
#endif
/**
*
* @brief Work to be done before handing control to a FIRQ ISR
@@ -54,10 +60,12 @@ SECTION_FUNC(TEXT, _firq_enter)
* This has already been done by _isr_wrapper.
*/
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
lr r2, [_ARC_V2_SEC_STAT]
bclr r2, r2, _ARC_V2_SEC_STAT_SSC_BIT
sflag r2
/* sflag r2 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00bf302f
#else
/* disable stack checking */
lr r2, [_ARC_V2_STATUS32]
@@ -67,6 +75,7 @@ SECTION_FUNC(TEXT, _firq_enter)
#endif
#if CONFIG_RGF_NUM_BANKS != 1
#ifndef CONFIG_FIRQ_NO_LPCC
/*
* Save LP_START/LP_COUNT/LP_END because the called handler might use them.
* Save these in callee saved registers to avoid using memory.
@@ -75,52 +84,35 @@ SECTION_FUNC(TEXT, _firq_enter)
mov r23,lp_count
lr r24, [_ARC_V2_LP_START]
lr r25, [_ARC_V2_LP_END]
#endif
#endif
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
ld r1, [exc_nest_count]
add r0, r1, 1
st r0, [exc_nest_count]
cmp r1, 0
bne.d firq_nest
mov_s r0, sp
bgt.d firq_nest
mov r0, sp
_get_curr_cpu_irq_stack sp
mov r1, _kernel
ld sp, [r1, _kernel_offset_to_irq_stack]
#if CONFIG_RGF_NUM_BANKS != 1
b firq_nest_1
firq_nest:
/*
* because firq and rirq share the same interrupt stack,
* switch back to original register bank to get correct sp.
* to get better firq latency, an approach is to prepare
* separate interrupt stack for firq and do not do thread
* switch in firq.
*/
lr r1, [_ARC_V2_STATUS32]
and r1, r1, ~_ARC_V2_STATUS32_RB(7)
kflag r1
mov r1, ilink
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
/* here use _ARC_V2_USER_SP and ilink to exchange sp
* save original value of _ARC_V2_USER_SP and ilink into
* the stack of interrupted context first, then restore them later
*/
push ilink
PUSHAX ilink, _ARC_V2_USER_SP
st sp, [saved_sp]
/* sp here is the sp of interrupted context */
sr sp, [_ARC_V2_USER_SP]
/* here, bank 0 sp must go back to the value before push and
* PUSHAX as we will switch to bank1, the pop and POPAX later will
* change bank1's sp, not bank0's sp
*/
add sp, sp, 8
/* switch back to banked reg, only ilink can be used */
lr ilink, [_ARC_V2_STATUS32]
or ilink, ilink, _ARC_V2_STATUS32_RB(1)
kflag ilink
lr sp, [_ARC_V2_USER_SP]
POPAX ilink, _ARC_V2_USER_SP
pop ilink
mov r0, sp
ld sp, [saved_sp]
mov ilink, r1
firq_nest_1:
#else
firq_nest:
@@ -140,35 +132,39 @@ firq_nest:
SECTION_FUNC(TEXT, _firq_exit)
#if CONFIG_RGF_NUM_BANKS != 1
#ifndef CONFIG_FIRQ_NO_LPCC
/* restore lp_count, lp_start, lp_end from r23-r25 */
mov lp_count,r23
sr r24, [_ARC_V2_LP_START]
sr r25, [_ARC_V2_LP_END]
#endif
_dec_int_nest_counter r0, r1
_check_nest_int_by_irq_act r0, r1
jne _firq_no_reschedule
#endif
/* check if we're a nested interrupt: if so, let the interrupted
* interrupt handle the reschedule */
mov r1, exc_nest_count
ld r0, [r1]
sub r0, r0, 1
cmp r0, 0
bne.d _firq_no_reschedule
st r0, [r1]
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
bl _check_stack_sentinel
#endif
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
brne r0, 0, _firq_reschedule
#else
#ifdef CONFIG_PREEMPT_ENABLED
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* Check if the current thread (in r2) is the cached thread */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
brne r0, r2, _firq_reschedule
#endif
/* fall to no rescheduling */
#endif /* CONFIG_PREEMPT_ENABLED */
.balign 4
_firq_no_reschedule:
pop sp
@@ -178,59 +174,58 @@ _firq_no_reschedule:
* instruction instead of a pair of cmp and bxx
*/
#if CONFIG_RGF_NUM_BANKS == 1
_pop_irq_stack_frame
add sp,sp,4 /* don't need r0 from stack */
pop_s r1
pop_s r2
pop_s r3
pop r4
pop r5
pop r6
pop r7
pop r8
pop r9
pop r10
pop r11
pop_s r12
pop_s r13
pop_s blink
pop_s r0
sr r0, [_ARC_V2_LP_END]
pop_s r0
sr r0, [_ARC_V2_LP_START]
pop_s r0
mov lp_count,r0
#ifdef CONFIG_CODE_DENSITY
pop_s r0
sr r0, [_ARC_V2_EI_BASE]
pop_s r0
sr r0, [_ARC_V2_LDI_BASE]
pop_s r0
sr r0, [_ARC_V2_JLI_BASE]
#endif
ld r0,[saved_r0]
add sp,sp,8 /* don't need ilink & status32_po from stack */
#endif
rtie
#ifdef CONFIG_PREEMPT_ENABLED
.balign 4
_firq_reschedule:
pop sp
#if CONFIG_RGF_NUM_BANKS != 1
#ifdef CONFIG_SMP
/*
* save r0, r1 in irq stack for a while, as they will be changed by register
* bank switch
*/
_get_curr_cpu_irq_stack r2
st r0, [r2, -4]
st r1, [r2, -8]
#endif
/*
* We know there is no interrupted interrupt of lower priority at this
* point, so when switching back to register bank 0, it will contain the
* registers from the interrupted thread.
*/
#if defined(CONFIG_USERSPACE)
/* when USERSPACE is configured, we need to consider the case where the firq
* comes in while in user mode; according to the ARCv2 ISA and nsim, the following micro ops
* will be executed:
* sp<-reg bank1'sp
* switch between sp and _ARC_V2_USER_SP
* then:
* sp is the sp of kernel stack of interrupted thread
* _ARC_V2_USER_SP is reg bank1'sp
* the sp of user stack of interrupted thread is reg bank0'sp
* if firq comes out in kernel mode, the following micro ops will be executed:
* sp<-reg bank'sp
* so, sw needs to do necessary handling to set up the correct sp
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bbit0 r0, 31, _firq_from_kernel
aex sp, [_ARC_V2_USER_SP]
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
aex sp, [_ARC_V2_USER_SP]
b _firq_create_irq_stack_frame
_firq_from_kernel:
#endif
/* chose register bank #0 */
lr r0, [_ARC_V2_STATUS32]
and r0, r0, ~_ARC_V2_STATUS32_RB(7)
kflag r0
_firq_create_irq_stack_frame:
/* we're back on the outgoing thread's stack */
_create_irq_stack_frame
@@ -243,43 +238,17 @@ _firq_create_irq_stack_frame:
st_s r0, [sp, ___isf_t_status32_OFFSET]
st ilink, [sp, ___isf_t_pc_OFFSET] /* ilink into pc */
#ifdef CONFIG_SMP
/*
* load r0, r1 from irq stack
*/
_get_curr_cpu_irq_stack r2
ld r0, [r2, -4]
ld r1, [r2, -8]
#endif
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of the interrupted thread; it will
* be restored when the thread is switched back in
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
#ifdef CONFIG_SMP
mov_s r2, r1
#else
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
#endif
_save_callee_saved_regs
st _CAUSE_FIRQ, [r2, _thread_offset_to_relinquish_cause]
#ifdef CONFIG_SMP
mov_s r2, r0
#else
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
@@ -292,53 +261,49 @@ _firq_create_irq_stack_frame:
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* see comments in regular_irq.S
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
ld_s r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _firq_return_from_rirq
nop_s
nop
breq r3, _CAUSE_FIRQ, _firq_return_from_firq
nop_s
nop
/* fall through */
.balign 4
_firq_return_from_coop:
ld_s r3, [r2, _thread_offset_to_intlock_key]
st 0, [r2, _thread_offset_to_intlock_key]
/* pc into ilink */
pop_s r0
mov_s ilink, r0
mov ilink, r0
pop_s r0 /* status32 into r0 */
/*
* There are only two interrupt lock states: locked and unlocked. When
* entering _Swap(), they are always locked, so the IE bit is unset in
* status32. If the incoming thread had them locked recursively, it
* means that the IE bit should stay unset. The only time the bit
* has to change is if they were not locked recursively.
*/
and.f r3, r3, (1 << 4)
or.nz r0, r0, _ARC_V2_STATUS32_IE
sr r0, [_ARC_V2_STATUS32_P0]
ld_s r0, [r2, _thread_offset_to_return_value]
rtie
.balign 4
_firq_return_from_rirq:
_firq_return_from_firq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
_pop_irq_stack_frame
ld ilink, [sp, -4] /* status32 into ilink */
@@ -347,3 +312,5 @@ _firq_return_from_firq:
/* LP registers are already restored, just switch back to bank 0 */
rtie
#endif /* CONFIG_PREEMPT_ENABLED */

View File

@@ -12,33 +12,77 @@
* ARCv2 CPUs.
*/
#include <kernel.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <arch/cpu.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
#include <misc/printk.h>
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
/**
*
* @brief Kernel fatal error handler
*
* This routine is called when fatal error conditions are detected by software
* and is responsible only for reporting the error. Once reported, it then
* invokes the user provided routine _SysFatalErrorHandler() which is
* responsible for implementing the error handling policy.
*
* The caller is expected to always provide a usable ESF. In the event that the
* fatal error does not have a hardware generated ESF, the caller should either
* create its own or use a pointer to the global default ESF <_default_esf>.
*
* @return This function does not return.
*/
void _NanoFatalErrorHandler(unsigned int reason, const NANO_ESF *pEsf)
{
if (reason == K_ERR_CPU_EXCEPTION) {
LOG_ERR("Faulting instruction address = 0x%lx",
z_arc_v2_aux_reg_read(_ARC_V2_ERET));
switch (reason) {
case _NANO_ERR_HW_EXCEPTION:
break;
#if defined(CONFIG_STACK_CANARIES) || defined(CONFIG_ARC_STACK_CHECKING) \
|| defined(CONFIG_STACK_SENTINEL)
case _NANO_ERR_STACK_CHK_FAIL:
printk("***** Stack Check Fail! *****\n");
break;
#endif
case _NANO_ERR_ALLOCATION_FAIL:
printk("**** Kernel Allocation Failure! ****\n");
break;
case _NANO_ERR_KERNEL_OOPS:
printk("***** Kernel OOPS! *****\n");
break;
case _NANO_ERR_KERNEL_PANIC:
printk("***** Kernel Panic! *****\n");
break;
default:
printk("**** Unknown Fatal Error %d! ****\n", reason);
break;
}
z_fatal_error(reason, esf);
printk("Current thread ID = %p\n", k_current_get());
if (reason == _NANO_ERR_HW_EXCEPTION) {
printk("Faulting instruction address = 0x%lx\n",
_arc_v2_aux_reg_read(_ARC_V2_ERET));
}
/*
* Now that the error has been reported, call the user implemented
* policy
* to respond to the error. The decisions as to what responses are
* appropriate to the various errors are something the customer must
* decide.
*/
_SysFatalErrorHandler(reason, pEsf);
}
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
FUNC_NORETURN void _arch_syscall_oops(void *ssf_ptr)
{
z_arc_fatal_error(K_ERR_KERNEL_OOPS, ssf_ptr);
CODE_UNREACHABLE;
}
FUNC_NORETURN void arch_system_halt(unsigned int reason)
{
ARG_UNUSED(reason);
__asm__("brk");
_SysFatalErrorHandler(_NANO_ERR_KERNEL_OOPS, ssf_ptr);
CODE_UNREACHABLE;
}
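Taken together, the old and new lines in this file imply a renaming of the fatal-error reason codes. A hedged, conceptual sketch of that mapping follows; the two sets of constants belong to different kernel versions, so this is illustrative only and exists in neither tree:
/* conceptual only: reason-code mapping implied by the hunks above */
static unsigned int legacy_reason_to_k_err(unsigned int nano_reason)
{
	switch (nano_reason) {
	case _NANO_ERR_HW_EXCEPTION:   return K_ERR_CPU_EXCEPTION;   /* hardware fault */
	case _NANO_ERR_STACK_CHK_FAIL: return K_ERR_STACK_CHK_FAIL;  /* stack corruption */
	case _NANO_ERR_KERNEL_OOPS:    return K_ERR_KERNEL_OOPS;     /* oops / syscall oops */
	default:                       return K_ERR_CPU_EXCEPTION;   /* fallback, an assumption */
	}
}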

View File

@@ -16,358 +16,31 @@
#include <inttypes.h>
#include <kernel.h>
#include <kernel_internal.h>
#include <kernel_structs.h>
#include <misc/printk.h>
#include <exc_handle.h>
#include <logging/log.h>
LOG_MODULE_DECLARE(os);
#ifdef CONFIG_USERSPACE
Z_EXC_DECLARE(z_arc_user_string_nlen);
Z_EXC_DECLARE(z_arch_user_string_nlen);
static const struct z_exc_handle exceptions[] = {
Z_EXC_HANDLE(z_arc_user_string_nlen)
Z_EXC_HANDLE(z_arch_user_string_nlen)
};
#endif
#if defined(CONFIG_MPU_STACK_GUARD)
#define IS_MPU_GUARD_VIOLATION(guard_start, fault_addr, stack_ptr) \
((fault_addr >= guard_start) && \
(fault_addr < (guard_start + STACK_GUARD_SIZE)) && \
(stack_ptr <= (guard_start + STACK_GUARD_SIZE)))
/**
* @brief Assess occurrence of current thread's stack corruption
*
* This function performs an assessment whether a memory fault (on a
* given memory address) is the result of stack memory corruption of
* the current thread.
*
* Thread stack corruption for supervisor threads or user threads in
* privilege mode (when User Space is supported) is reported upon an
* attempt to access the stack guard area (if MPU Stack Guard feature
* is supported). Additionally the current thread stack pointer
* must be pointing inside or below the guard area.
*
* Thread stack corruption for user threads in user mode is reported,
* if the current stack pointer is pointing below the start of the current
* thread's stack.
*
* Notes:
* - we assume a fully descending stack,
* - we assume a stacking error has occurred,
* - the function shall be called when handling MPU privilege violation
*
* If stack corruption is detected, the function returns the lowest
* allowed address where the Stack Pointer can safely point to, to
* prevent from errors when un-stacking the corrupted stack frame
* upon exception return.
*
* @param fault_addr memory address on which memory access violation
* has been reported.
* @param sp stack pointer when exception comes out
*
* @return The lowest allowed stack frame pointer, if error is a
* thread stack corruption, otherwise return 0.
*/
static u32_t z_check_thread_stack_fail(const u32_t fault_addr, u32_t sp)
{
const struct k_thread *thread = _current;
if (!thread) {
return 0;
}
#if defined(CONFIG_USERSPACE)
if (thread->arch.priv_stack_start) {
/* User thread */
if (z_arc_v2_aux_reg_read(_ARC_V2_ERSTATUS)
& _ARC_V2_STATUS32_U) {
/* Thread's user stack corruption */
#ifdef CONFIG_ARC_HAS_SECURE
sp = z_arc_v2_aux_reg_read(_ARC_V2_SEC_U_SP);
#else
sp = z_arc_v2_aux_reg_read(_ARC_V2_USER_SP);
#endif
if (sp <= (u32_t)thread->stack_obj) {
return (u32_t)thread->stack_obj;
}
} else {
/* User thread in privilege mode */
if (IS_MPU_GUARD_VIOLATION(
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
fault_addr, sp)) {
/* Thread's privilege stack corruption */
return thread->arch.priv_stack_start;
}
}
} else {
/* Supervisor thread */
if (IS_MPU_GUARD_VIOLATION((u32_t)thread->stack_obj,
fault_addr, sp)) {
/* Supervisor thread stack corruption */
return (u32_t)thread->stack_obj + STACK_GUARD_SIZE;
}
}
#else /* CONFIG_USERSPACE */
if (IS_MPU_GUARD_VIOLATION(thread->stack_info.start,
fault_addr, sp)) {
/* Thread stack corruption */
return thread->stack_info.start + STACK_GUARD_SIZE;
}
#endif /* CONFIG_USERSPACE */
return 0;
}
#endif
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
/* For EV_ProtV, the numbering/semantics of the parameter are consistent across
* several codes, although not all combinations will be reported.
*
* These codes and parameters do not have associated names in
* the technical manual; just switch on the values in Table 6-5.
*/
static const char *get_protv_access_err(u32_t parameter)
{
switch (parameter) {
case 0x1:
return "code protection scheme";
case 0x2:
return "stack checking scheme";
case 0x4:
return "MPU";
case 0x8:
return "MMU";
case 0x10:
return "NVM";
case 0x24:
return "Secure MPU";
case 0x44:
return "Secure MPU with SID mismatch";
default:
return "unknown";
}
}
static void dump_protv_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Instruction fetch violation (%s)",
get_protv_access_err(parameter));
break;
case 0x1:
LOG_ERR("Memory read protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x2:
LOG_ERR("Memory write protection violation (%s)",
get_protv_access_err(parameter));
break;
case 0x3:
LOG_ERR("Memory read-modify-write violation (%s)",
get_protv_access_err(parameter));
break;
case 0x10:
LOG_ERR("Normal vector table in secure memory");
break;
case 0x11:
LOG_ERR("NS handler code located in S memory");
break;
case 0x12:
LOG_ERR("NSC Table Range Violation");
break;
default:
LOG_ERR("unknown");
break;
}
}
static void dump_machine_check_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("double fault");
break;
case 0x1:
LOG_ERR("overlapping TLB entries");
break;
case 0x2:
LOG_ERR("fatal TLB error");
break;
case 0x3:
LOG_ERR("fatal cache error");
break;
case 0x4:
LOG_ERR("internal memory error on instruction fetch");
break;
case 0x5:
LOG_ERR("internal memory error on data fetch");
break;
case 0x6:
LOG_ERR("illegal overlapping MPU entries");
if (parameter == 0x1) {
LOG_ERR(" - jump and branch target");
}
break;
case 0x10:
LOG_ERR("secure vector table not located in secure memory");
break;
case 0x11:
LOG_ERR("NSC jump table not located in secure memory");
break;
case 0x12:
LOG_ERR("secure handler code not located in secure memory");
break;
case 0x13:
LOG_ERR("NSC target address not located in secure memory");
break;
case 0x80:
LOG_ERR("uncorrectable ECC or parity error in vector memory");
break;
default:
LOG_ERR("unknown");
break;
}
}
static void dump_privilege_exception(u32_t cause, u32_t parameter)
{
switch (cause) {
case 0x0:
LOG_ERR("Privilege violation");
break;
case 0x1:
LOG_ERR("disabled extension");
break;
case 0x2:
LOG_ERR("action point hit");
break;
case 0x10:
switch (parameter) {
case 0x1:
LOG_ERR("N to S return using incorrect return mechanism");
break;
case 0x2:
LOG_ERR("N to S return with incorrect operating mode");
break;
case 0x3:
LOG_ERR("IRQ/exception return fetch from wrong mode");
break;
case 0x4:
LOG_ERR("attempt to halt secure processor in NS mode");
break;
case 0x20:
LOG_ERR("attempt to access secure resource from normal mode");
break;
case 0x40:
LOG_ERR("SID violation on resource access (APEX/UAUX/key NVM)");
break;
default:
LOG_ERR("unknown");
break;
}
break;
case 0x13:
switch (parameter) {
case 0x20:
LOG_ERR("attempt to access secure APEX feature from NS mode");
break;
case 0x40:
LOG_ERR("SID violation on access to APEX feature");
break;
default:
LOG_ERR("unknown");
break;
}
break;
default:
LOG_ERR("unknown");
break;
}
}
static void dump_exception_info(u32_t vector, u32_t cause, u32_t parameter)
{
if (vector >= 0x10 && vector <= 0xFF) {
LOG_ERR("interrupt %u", vector);
return;
}
/* Names are exactly as they appear in Designware ARCv2 ISA
* Programmer's reference manual for easy searching
*/
switch (vector) {
case ARC_EV_RESET:
LOG_ERR("Reset");
break;
case ARC_EV_MEM_ERROR:
LOG_ERR("Memory Error");
break;
case ARC_EV_INS_ERROR:
LOG_ERR("Instruction Error");
break;
case ARC_EV_MACHINE_CHECK:
LOG_ERR("EV_MachineCheck");
dump_machine_check_exception(cause, parameter);
break;
case ARC_EV_TLB_MISS_I:
LOG_ERR("EV_TLBMissI");
break;
case ARC_EV_TLB_MISS_D:
LOG_ERR("EV_TLBMissD");
break;
case ARC_EV_PROT_V:
LOG_ERR("EV_ProtV");
dump_protv_exception(cause, parameter);
break;
case ARC_EV_PRIVILEGE_V:
LOG_ERR("EV_PrivilegeV");
dump_privilege_exception(cause, parameter);
break;
case ARC_EV_SWI:
LOG_ERR("EV_SWI");
break;
case ARC_EV_TRAP:
LOG_ERR("EV_Trap");
break;
case ARC_EV_EXTENSION:
LOG_ERR("EV_Extension");
break;
case ARC_EV_DIV_ZERO:
LOG_ERR("EV_DivZero");
break;
case ARC_EV_DC_ERROR:
LOG_ERR("EV_DCError");
break;
case ARC_EV_MISALIGNED:
LOG_ERR("EV_Misaligned");
break;
case ARC_EV_VEC_UNIT:
LOG_ERR("EV_VecUnit");
break;
default:
LOG_ERR("unknown");
break;
}
}
#endif /* CONFIG_ARC_EXCEPTION_DEBUG */
/*
* @brief Fault handler
*
* This routine is called when fatal error conditions are detected by hardware
* and is responsible only for reporting the error. Once reported, it then
* invokes the user provided routine k_sys_fatal_error_handler() which is
* invokes the user provided routine _SysFatalErrorHandler() which is
* responsible for implementing the error handling policy.
*/
void _Fault(z_arch_esf_t *esf, u32_t old_sp)
void _Fault(NANO_ESF *esf)
{
u32_t vector, cause, parameter;
u32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);
u32_t ecr = z_arc_v2_aux_reg_read(_ARC_V2_ECR);
u32_t vector, code, parameter;
u32_t exc_addr = _arc_v2_aux_reg_read(_ARC_V2_EFA);
u32_t ecr = _arc_v2_aux_reg_read(_ARC_V2_ECR);
#ifdef CONFIG_USERSPACE
for (int i = 0; i < ARRAY_SIZE(exceptions); i++) {
@@ -381,54 +54,28 @@ void _Fault(z_arch_esf_t *esf, u32_t old_sp)
}
#endif
vector = Z_ARC_V2_ECR_VECTOR(ecr);
cause = Z_ARC_V2_ECR_CODE(ecr);
parameter = Z_ARC_V2_ECR_PARAMETER(ecr);
vector = _ARC_V2_ECR_VECTOR(ecr);
code = _ARC_V2_ECR_CODE(ecr);
parameter = _ARC_V2_ECR_PARAMETER(ecr);
/* exception raised by kernel */
if (vector == ARC_EV_TRAP && parameter == _TRAP_S_CALL_RUNTIME_EXCEPT) {
/*
* in user mode software-triggered system fatal exceptions only allow
* K_ERR_KERNEL_OOPS and K_ERR_STACK_CHK_FAIL
*/
#ifdef CONFIG_USERSPACE
if ((esf->status32 & _ARC_V2_STATUS32_U) &&
esf->r0 != K_ERR_STACK_CHK_FAIL) {
esf->r0 = K_ERR_KERNEL_OOPS;
}
#endif
z_arc_fatal_error(esf->r0, esf);
if (vector == 0x9 && parameter == _TRAP_S_CALL_RUNTIME_EXCEPT) {
_NanoFatalErrorHandler(esf->r0, esf);
return;
}
LOG_ERR("***** Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x",
vector, cause, parameter);
LOG_ERR("Address 0x%x", exc_addr);
#ifdef CONFIG_ARC_EXCEPTION_DEBUG
dump_exception_info(vector, cause, parameter);
#endif
printk("Exception vector: 0x%x, cause code: 0x%x, parameter 0x%x\n",
vector, code, parameter);
printk("Address 0x%x\n", exc_addr);
#ifdef CONFIG_ARC_STACK_CHECKING
/* Vector 6 = EV_ProtV. Regardless of cause, parameter 2 means stack
/* Vector 6 = EV_ProtV. Regardless of code, parameter 2 means stack
* check violation
* stack check and mpu violation can come out together, then
* parameter = 0x2 | [0x4 | 0x8 | 0x1]
*/
if (vector == ARC_EV_PROT_V && parameter & 0x2) {
z_arc_fatal_error(K_ERR_STACK_CHK_FAIL, esf);
if (vector == 6 && parameter == 2) {
_NanoFatalErrorHandler(_NANO_ERR_STACK_CHK_FAIL, esf);
return;
}
#endif
#ifdef CONFIG_MPU_STACK_GUARD
if (vector == ARC_EV_PROT_V && ((parameter == 0x4) ||
(parameter == 0x24))) {
if (z_check_thread_stack_fail(exc_addr, old_sp)) {
z_arc_fatal_error(K_ERR_STACK_CHK_FAIL, esf);
return;
}
}
#endif
z_arc_fatal_error(K_ERR_CPU_EXCEPTION, esf);
_NanoFatalErrorHandler(_NANO_ERR_HW_EXCEPTION, esf);
}

View File

@@ -19,7 +19,7 @@
#include <syscall.h>
GTEXT(_Fault)
GTEXT(z_do_kernel_oops)
GTEXT(_do_kernel_oops)
GTEXT(__reset)
GTEXT(__memory_error)
GTEXT(__instruction_error)
@@ -35,20 +35,17 @@ GTEXT(__ev_div_zero)
GTEXT(__ev_dc_error)
GTEXT(__ev_maligned)
#ifdef CONFIG_IRQ_OFFLOAD
GTEXT(z_irq_do_offload);
GTEXT(_irq_do_offload);
#endif
GDATA(exc_nest_count)
/*
* Exception handling uses the top part of the interrupt stack to
* get a smaller memory footprint, because exceptions are infrequent.
* To limit the impact on interrupt handling, especially nested interrupts,
* this top part of the interrupt stack cannot be too large, so add a check
* here
*/
#if CONFIG_ARC_EXCEPTION_STACK_SIZE > (CONFIG_ISR_STACK_SIZE >> 1)
#error "interrupt stack size is too small"
#endif
.balign 4
SECTION_VAR(BSS, saved_value)
.word 0
/* the necessary stack size for exception handling */
#define EXCEPTION_STACK_SIZE 384
/*
* @brief Fault handler installed in the fault and reserved vectors
@@ -68,15 +65,15 @@ SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_dc_error)
SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_maligned)
_exc_entry:
st sp, [saved_value]
/*
* re-use the top part of the interrupt stack as the exception
* stack. Even if this top part is in use by interrupt handling
* when an exception is raised, it is guaranteed here that
* exception handling has the necessary stack to use
*/
mov_s ilink, sp
_get_curr_cpu_irq_stack sp
sub sp, sp, (CONFIG_ISR_STACK_SIZE - CONFIG_ARC_EXCEPTION_STACK_SIZE)
mov_s sp, _interrupt_stack
add sp, sp, EXCEPTION_STACK_SIZE
/*
* save caller saved registers
@@ -90,7 +87,6 @@ _exc_entry:
_create_irq_stack_frame
#ifdef CONFIG_ARC_HAS_SECURE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0,[_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#endif
@@ -100,25 +96,12 @@ _exc_entry:
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
/* sp is parameter of _Fault */
mov_s r0, sp
/* ilink is the thread's original sp */
mov_s r1, ilink
mov r0, sp
jl _Fault
_exc_return:
/* the exception cause must be fixed up in the exception handler when the
* exception returns directly, or the exception will be raised again.
*
* If a thread switch is triggered in the exception handler, the context of the
* old thread will not be saved, i.e. it cannot be recovered, because we don't
* know where the exception came from: thread context, irq context, or nested irq context?
*/
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
breq r0, 0, _exc_return_from_exc
mov_s r2, r0
#else
#ifdef CONFIG_PREEMPT_ENABLED
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
@@ -128,9 +111,8 @@ _exc_return:
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
/*
* sync up the ERSEC_STAT.ERM and SEC_STAT.IRM.
* use a fake interrupt return to simulate an exception return.
@@ -139,60 +121,31 @@ _exc_return:
*/
lr r3,[_ARC_V2_ERSEC_STAT]
btst r3, 31
bset.nz r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
bclr.z r3, r3, _ARC_V2_SEC_STAT_IRM_BIT
sflag r3
#endif
/* clear AE bit to forget this was an exception, and go to
* register bank0 (if the exception was raised in a firq with 2 reg
* banks, then we may be in bank1)
*/
#if defined(CONFIG_ARC_FIRQ) && CONFIG_RGF_NUM_BANKS != 1
/* save r2 in ilink because of the possible following reg
* bank switch
*/
mov_s ilink, r2
bset.nz r3, r3, 3
bclr.z r3, r3, 3
/* sflag r3 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00ff302f
#endif
/* clear AE bit to forget this was an exception */
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~(_ARC_V2_STATUS32_AE | _ARC_V2_STATUS32_RB(7)))
and r3,r3,(~_ARC_V2_STATUS32_AE)
kflag r3
/* pretend the lowest priority interrupt happened so the common handler can
* be used. If the exception was raised in an irq, i.e. _ARC_V2_AUX_IRQ_ACT != 0,
* skip irq handling: we cannot return to irq handling, which may raise the
* exception again. The ignored interrupts will be re-triggered if not cleared,
* re-triggered by their interrupt sources, or just missed
*/
#ifdef CONFIG_ARC_SECURE_FIRMWARE
mov_s r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
mov_s r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
push_s r2
mov_s r0, _ARC_V2_AUX_IRQ_ACT
mov_s r1, r3
mov_s r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
pop_s r2
#else
/* pretend lowest priority interrupt happened to use common handler */
lr r3, [_ARC_V2_AUX_IRQ_ACT]
or r3,r3,(1<<(CONFIG_NUM_IRQ_PRIO_LEVELS-1)) /* use lowest */
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
#if defined(CONFIG_ARC_FIRQ) && CONFIG_RGF_NUM_BANKS != 1
mov r2, ilink
#endif
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_exc:
ld_s r0, [sp, ___isf_t_pc_OFFSET]
sr r0, [_ARC_V2_ERET]
_pop_irq_stack_frame
mov_s sp, ilink
ld sp, [saved_value]
rtie
@@ -203,36 +156,30 @@ SECTION_SUBSEC_FUNC(TEXT,__fault,__ev_trap)
#ifdef CONFIG_USERSPACE
cmp ilink, _TRAP_S_CALL_SYSTEM_CALL
bne _do_non_syscall_trap
/* do sys_call */
mov_s ilink, K_SYSCALL_LIMIT
/* do sys_call */
mov ilink, K_SYSCALL_LIMIT
cmp r6, ilink
blo valid_syscall_id
blt valid_syscall_id
mov_s r0, r6
mov_s r6, K_SYSCALL_BAD
mov r0, r6
mov r6, K_SYSCALL_BAD
valid_syscall_id:
/* create a sys call frame
* caller regs (r0 - r12) are saved in _create_irq_stack_frame
* ok to use them later
*/
_create_irq_stack_frame
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* ERSEC_STAT is IOW/RAZ in normal mode */
lr r0, [_ARC_V2_ERSEC_STAT]
st_s r0, [sp, ___isf_t_sec_stat_OFFSET]
#ifdef CONFIG_ARC_HAS_SECURE
lr ilink, [_ARC_V2_ERSEC_STAT]
push ilink
#endif
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
lr r0,[_ARC_V2_ERSTATUS]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr ilink, [_ARC_V2_ERET]
push ilink
lr ilink, [_ARC_V2_ERSTATUS]
push ilink
bclr r0, r0, _ARC_V2_STATUS32_U_BIT
sr r0, [_ARC_V2_ERSTATUS]
mov_s r0, _arc_do_syscall
sr r0, [_ARC_V2_ERET]
bclr ilink, ilink, _ARC_V2_STATUS32_U_BIT
sr ilink, [_ARC_V2_ERSTATUS]
mov ilink, _arc_do_syscall
sr ilink, [_ARC_V2_ERET]
rtie
@@ -258,23 +205,75 @@ _do_non_syscall_trap:
lr r0,[_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
ld r1, [exc_nest_count]
add r0, r1, 1
st r0, [exc_nest_count]
cmp r1, 0
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
bgt.d exc_nest_handle
mov r0, sp
bne.d exc_nest_handle
mov_s r0, sp
_get_curr_cpu_irq_stack sp
mov r1, _kernel
ld sp, [r1, _kernel_offset_to_irq_stack]
exc_nest_handle:
push_s r0
jl z_irq_do_offload
jl _irq_do_offload
pop sp
_dec_int_nest_counter r0, r1
mov r1, exc_nest_count
ld r0, [r1]
sub r0, r0, 1
cmp r0, 0
bne.d _exc_return_from_exc
st r0, [r1]
#ifdef CONFIG_PREEMPT_ENABLED
mov_s r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* check if the current thread needs to be rescheduled */
ld_s r0, [r1, _kernel_offset_to_ready_q_cache]
breq r0, r2, _exc_return_from_irqoffload_trap
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
/* note: Ok to use _CAUSE_RIRQ since everything is saved */
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
st_s r2, [r1, _kernel_offset_to_current]
#ifdef CONFIG_ARC_HAS_SECURE
/*
* sync up the ERSEC_STAT.ERM and SEC_STAT.IRM.
* use a fake interrupt return to simulate an exception return.
* ERM and IRM record which mode the cpu should return to, 1: secure
* 0: normal
*/
lr r3,[_ARC_V2_ERSEC_STAT]
btst r3, 31
bset.nz r3, r3, 3
bclr.z r3, r3, 3
/* sflag r3 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00ff302f
#endif
/* clear AE bit to forget this was an exception */
lr r3, [_ARC_V2_STATUS32]
and r3,r3,(~_ARC_V2_STATUS32_AE)
kflag r3
/* pretend lowest priority interrupt happened to use common handler */
lr r3, [_ARC_V2_AUX_IRQ_ACT]
or r3,r3,(1<<(CONFIG_NUM_IRQ_PRIO_LEVELS-1)) /* use lowest */
sr r3, [_ARC_V2_AUX_IRQ_ACT]
/* Assumption: r2 has current thread */
b _rirq_common_interrupt_swap
#endif
_exc_return_from_irqoffload_trap:
_pop_irq_stack_frame
rtie
#endif /* CONFIG_IRQ_OFFLOAD */

View File

@@ -19,69 +19,12 @@
#include <kernel.h>
#include <arch/cpu.h>
#include <sys/__assert.h>
#include <misc/__assert.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <sw_isr_table.h>
#include <irq.h>
#include <sys/printk.h>
/*
* storage space for the interrupt stack of fast_irq
*/
#if defined(CONFIG_ARC_FIRQ_STACK)
#if defined(CONFIG_SMP)
K_THREAD_STACK_ARRAY_DEFINE(_firq_interrupt_stack, CONFIG_MP_NUM_CPUS,
CONFIG_ARC_FIRQ_STACK_SIZE);
#else
K_THREAD_STACK_DEFINE(_firq_interrupt_stack, CONFIG_ARC_FIRQ_STACK_SIZE);
#endif
/*
* @brief Set the stack pointer for firq handling
*
* @return N/A
*/
void z_arc_firq_stack_set(void)
{
#ifdef CONFIG_SMP
char *firq_sp = Z_THREAD_STACK_BUFFER(
_firq_interrupt_stack[z_arc_v2_core_id()]) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#else
char *firq_sp = Z_THREAD_STACK_BUFFER(_firq_interrupt_stack) +
CONFIG_ARC_FIRQ_STACK_SIZE;
#endif
/* z_arc_firq_stack_set must be called with irqs disabled, as
* it can be called not only in the init phase but also in other places
*/
unsigned int key = irq_lock();
__asm__ volatile (
/* only ilink will not be banked, so use ilink as channel
* between 2 banks
*/
"mov ilink, %0 \n\t"
"lr %0, [%1] \n\t"
"or %0, %0, %2 \n\t"
"kflag %0 \n\t"
"mov sp, ilink \n\t"
/* switch back to bank0, use ilink to avoid the pollution of
* bank1's gp regs.
*/
"lr ilink, [%1] \n\t"
"and ilink, ilink, %3 \n\t"
"kflag ilink \n\t"
:
: "r"(firq_sp), "i"(_ARC_V2_STATUS32),
"i"(_ARC_V2_STATUS32_RB(1)),
"i"(~_ARC_V2_STATUS32_RB(7))
);
irq_unlock(key);
}
#endif
#include <misc/printk.h>
/*
* @brief Enable an interrupt line
@@ -93,11 +36,11 @@ void z_arc_firq_stack_set(void)
* @return N/A
*/
void arch_irq_enable(unsigned int irq)
void _arch_irq_enable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_enable(irq);
_arc_v2_irq_unit_int_enable(irq);
irq_unlock(key);
}
@@ -110,25 +53,14 @@ void arch_irq_enable(unsigned int irq)
* @return N/A
*/
void arch_irq_disable(unsigned int irq)
void _arch_irq_disable(unsigned int irq)
{
unsigned int key = irq_lock();
z_arc_v2_irq_unit_int_disable(irq);
_arc_v2_irq_unit_int_disable(irq);
irq_unlock(key);
}
/**
* @brief Return IRQ enable state
*
* @param irq IRQ line
* @return interrupt enable state, true or false
*/
int arch_irq_is_enabled(unsigned int irq)
{
return z_arc_v2_irq_unit_int_enabled(irq);
}
/*
* @internal
*
@@ -143,7 +75,7 @@ int arch_irq_is_enabled(unsigned int irq)
* @return N/A
*/
void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
void _irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
{
ARG_UNUSED(flags);
@@ -151,17 +83,7 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
__ASSERT(prio < CONFIG_NUM_IRQ_PRIO_LEVELS,
"invalid priority %d for irq %d", prio, irq);
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocted to secure world
* left prio levels allocated to normal world
*/
#if defined(CONFIG_ARC_SECURE_FIRMWARE)
prio = prio < ARC_N_IRQ_START_LEVEL ?
prio : (ARC_N_IRQ_START_LEVEL - 1);
#elif defined(CONFIG_ARC_NORMAL_FIRMWARE)
prio = prio < ARC_N_IRQ_START_LEVEL ?
ARC_N_IRQ_START_LEVEL : prio;
#endif
z_arc_v2_irq_unit_prio_set(irq, prio);
_arc_v2_irq_unit_prio_set(irq, prio);
irq_unlock(key);
}
@@ -174,19 +96,11 @@ void z_irq_priority_set(unsigned int irq, unsigned int prio, u32_t flags)
* @return N/A
*/
void z_irq_spurious(void *unused)
void _irq_spurious(void *unused)
{
ARG_UNUSED(unused);
z_fatal_error(K_ERR_SPURIOUS_IRQ, NULL);
printk("_irq_spurious(). Spinning...\n");
for (;;)
;
}
#ifdef CONFIG_DYNAMIC_INTERRUPTS
int arch_irq_connect_dynamic(unsigned int irq, unsigned int priority,
void (*routine)(void *parameter), void *parameter,
u32_t flags)
{
z_isr_install(irq, routine, parameter);
z_irq_priority_set(irq, priority, flags);
return irq;
}
#endif /* CONFIG_DYNAMIC_INTERRUPTS */
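A minimal, hypothetical use of the dynamic-interrupt APIs above; the IRQ line, priority and handler are example values only, and the sketch assumes the same includes as the file above:
/* illustrative handler registered at run time */
static void my_dyn_isr(void *arg)
{
	ARG_UNUSED(arg);
	/* service the device here */
}
static void install_dyn_irq_example(void)
{
	(void)arch_irq_connect_dynamic(20, 1, my_dyn_isr, NULL, 0);
	arch_irq_enable(20);
}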

View File

@@ -15,14 +15,16 @@ static irq_offload_routine_t offload_routine;
static void *offload_param;
/* Called by trap_s exception handler */
void z_irq_do_offload(void)
void _irq_do_offload(void)
{
offload_routine(offload_param);
}
void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
void irq_offload(irq_offload_routine_t routine, void *parameter)
{
unsigned int key;
key = irq_lock();
offload_routine = routine;
offload_param = parameter;
@@ -30,4 +32,6 @@ void arch_irq_offload(irq_offload_routine_t routine, void *parameter)
:
: [id] "i"(_TRAP_S_SCALL_IRQ_OFFLOAD) : );
irq_unlock(key);
}
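For orientation, a minimal usage sketch of the offload mechanism above; the handler and its argument are invented for illustration. irq_offload() latches the routine and parameter shown above, then issues the _TRAP_S_SCALL_IRQ_OFFLOAD trap so the routine runs in the trap handler's interrupt context.
#include <kernel.h>
#include <irq_offload.h>
#include <misc/printk.h>
/* illustrative handler: executed in interrupt context via the trap above */
static void offload_demo(void *arg)
{
	printk("offloaded, arg=%p, in isr=%d\n", arg, (int)k_is_in_isr());
}
static void run_offload_demo(void)
{
	/* the irq_lock()/irq_unlock() pair in irq_offload() serializes callers */
	irq_offload(offload_demo, (void *)42);
}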

View File

@@ -24,8 +24,28 @@
GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
GDATA(exc_nest_count)
SECTION_VAR(BSS, exc_nest_count)
.balign 4
.word 0
#if CONFIG_RGF_NUM_BANKS == 1
GDATA(saved_r0)
SECTION_VAR(BSS, saved_r0)
.balign 4
.word 0
#else
GDATA(saved_sp)
SECTION_VAR(BSS, saved_sp)
.balign 4
.word 0
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
GTEXT(z_sys_power_save_idle_exit)
GTEXT(_sys_power_save_idle_exit)
#endif
/*
@@ -42,7 +62,7 @@ Context switch explanation:
The context switch code is spread in these files:
isr_wrapper.s, switch.s, swap_macros.s, fast_irq.s, regular_irq.s
isr_wrapper.s, swap.s, swap_macros.s, fast_irq.s, regular_irq.s
IRQ stack frame layout:
@@ -60,6 +80,13 @@ IRQ stack frame layout:
low address
Registers not taken into account in the current implementation.
jli_base
ldi_base
ei_base
accl
acch
The context switch code adopts this standard so that it is easier to follow:
- r1 contains _kernel ASAP and is not overwritten over the lifespan of
@@ -68,7 +95,7 @@ The context switch code adopts this standard so that it is easier to follow:
transition from outgoing thread to incoming thread
Not loading _kernel into r0 allows loading _kernel without stomping on
the parameter in r0 in arch_switch().
the parameter in r0 in _Swap().
ARCv2 processors have two kinds of interrupts: fast (FIRQ) and regular. The
@@ -100,7 +127,7 @@ done upfront, and the rest is done when needed:
o RIRQ
All needed registers to run C code in the ISR are saved automatically
All needed regisers to run C code in the ISR are saved automatically
on the outgoing thread's stack: loop, status32, pc, and the caller-
saved GPRs. That stack frame layout is pre-determined. If returning
to a thread, the stack is popped and no registers have to be saved by
@@ -150,7 +177,7 @@ From coop:
o to coop
Do a normal function call return.
Restore interrupt lock level and do a normal function call return.
o to any irq
@@ -168,13 +195,14 @@ From FIRQ:
o to coop
The address of the returning instruction from arch_switch() is loaded
in ilink and the saved status32 in status32_p0.
The address of the returning instruction from _Swap() is loaded in ilink and
the saved status32 in status32_p0, taking care to adjust the interrupt lock
state desired in status32_p0. The return value is put in r0.
o to any irq
The IRQ has saved the caller-saved registers in a stack frame, which must be
popped, and status32 and pc loaded in status32_p0 and ilink.
popped, and statu32 and pc loaded in status32_p0 and ilink.
From RIRQ:
@@ -182,7 +210,7 @@ From RIRQ:
The interrupt return mechanism in the processor expects a stack frame, but
the outgoing context did not create one. A fake one is created here, with
only the relevant values filled in: pc, status32.
only the relevant values filled in: pc, status32 and the return value in r0.
There is a discrepancy between the ABI from the ARCv2 docs, including the
way the processor pushes GPRs in pairs in the IRQ stack frame, and the ABI
@@ -202,34 +230,93 @@ From RIRQ:
*/
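As a reading aid, a hedged C view of the IRQ stack frame referred to above, inferred from the ___isf_t_*_OFFSET accesses and the push sequence in the single-bank FIRQ path below; field names and ordering are an approximation (lowest address first), and the authoritative layout is the kernel's own ___isf_t offsets:
/* approximate IRQ stack frame, from sp (low address) upwards */
struct isf_sketch {
	u32_t r0, r1, r2, r3;
	u32_t r4, r5, r6, r7, r8, r9, r10, r11, r12, r13;
	u32_t blink;
	u32_t lp_end, lp_start, lp_count;
#ifdef CONFIG_CODE_DENSITY
	u32_t ei_base, ldi_base, jli_base;  /* only with the code-density extension */
#endif
	u32_t pc;        /* ___isf_t_pc_OFFSET: ilink or ERET value */
	u32_t status32;  /* ___isf_t_status32_OFFSET */
};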
SECTION_FUNC(TEXT, _isr_wrapper)
#if defined(CONFIG_ARC_FIRQ)
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s blink
push_s r13
push_s r12
push r11
push r10
push r9
push r8
push r7
push r6
push r5
push r4
push_s r3
push_s r2
push_s r1
bl read_timer_start_of_isr
pop_s r1
pop_s r2
pop_s r3
pop r4
pop r5
pop r6
pop r7
pop r8
pop r9
pop r10
pop r11
pop_s r12
pop_s r13
pop_s blink
pop_s r0
#endif
#if CONFIG_ARC_FIRQ
#if CONFIG_RGF_NUM_BANKS == 1
/* free r0 here; use r0 to check whether the irq is a firq.
* for a rirq, as sp will not change and r0 is already saved, this
* is in effect a nop.
* for a firq, r0 will be restored later
*/
st_s r0, [sp]
st r0,[saved_r0]
#endif
lr r0, [_ARC_V2_AUX_IRQ_ACT]
ffs r0, r0
cmp r0, 0
#if CONFIG_RGF_NUM_BANKS == 1
bnz rirq_path
ld_s r0, [sp]
/* 1-register bank FIRQ handling must save registers on stack */
_create_irq_stack_frame
lr r0, [_ARC_V2_STATUS32_P0]
st_s r0, [sp, ___isf_t_status32_OFFSET]
lr r0, [_ARC_V2_ERET]
st_s r0, [sp, ___isf_t_pc_OFFSET]
mov_s r3, _firq_exit
mov_s r2, _firq_enter
lr r0,[_ARC_V2_STATUS32_P0]
push_s r0
mov r0,ilink
push_s r0
#ifdef CONFIG_CODE_DENSITY
lr r0, [_ARC_V2_JLI_BASE]
push_s r0
lr r0, [_ARC_V2_LDI_BASE]
push_s r0
lr r0, [_ARC_V2_EI_BASE]
push_s r0
#endif
mov r0,lp_count
push_s r0
lr r0, [_ARC_V2_LP_START]
push_s r0
lr r0, [_ARC_V2_LP_END]
push_s r0
push_s blink
push_s r13
push_s r12
push r11
push r10
push r9
push r8
push r7
push r6
push r5
push r4
push_s r3
push_s r2
push_s r1
ld r0,[saved_r0]
push_s r0
mov r3, _firq_exit
mov r2, _firq_enter
j_s [r2]
rirq_path:
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#else
mov.z r3, _firq_exit
@@ -239,13 +326,26 @@ rirq_path:
j_s [r2]
#endif
#else
mov_s r3, _rirq_exit
mov_s r2, _rirq_enter
mov r3, _rirq_exit
mov r2, _rirq_enter
j_s [r2]
#endif
#if defined(CONFIG_TRACING)
GTEXT(sys_trace_isr_enter)
GTEXT(z_sys_trace_isr_enter)
.macro log_interrupt_k_event
clri r0 /* do not interrupt event logger operations */
push_s r0
push_s blink
jl z_sys_trace_isr_enter
pop_s blink
pop_s r0
seti r0
.endm
#else
#define log_interrupt_k_event
#endif
#if defined(CONFIG_SYS_POWER_MANAGEMENT)
@@ -259,7 +359,7 @@ GTEXT(sys_trace_isr_enter)
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
push_s blink
jl z_sys_power_save_idle_exit
jl _sys_power_save_idle_exit
pop_s blink
_skip_sys_power_save_idle_exit:
@@ -275,55 +375,65 @@ _skip_sys_power_save_idle_exit:
SECTION_FUNC(TEXT, _isr_demux)
push_s r3
/* according to ARCv2 ISA, r25, r30, r58, r59 are caller-saved
* scratch registers, possibly used by interrupt handlers
*/
push r25
push r30
#ifdef CONFIG_ARC_HAS_ACCL_REGS
push r58
push r59
#endif
#ifdef CONFIG_EXECUTION_BENCHMARKING
bl read_timer_start_of_isr
#endif
/* cannot be done before this point because we must be able to run C */
/* r0 is available to be stomped here, and exit_tickless_idle uses it */
exit_tickless_idle
log_interrupt_k_event
lr r0, [_ARC_V2_ICAUSE]
/* handle software triggered interrupt */
lr r3, [_ARC_V2_AUX_IRQ_HINT]
brne r3, r0, irq_hint_handled
sr 0, [_ARC_V2_AUX_IRQ_HINT]
lr r3, [_ARC_V2_AUX_IRQ_HINT]
brne r3, r0, irq_hint_handled
sr 0, [_ARC_V2_AUX_IRQ_HINT]
irq_hint_handled:
sub r0, r0, 16
mov_s r1, _sw_isr_table
mov r1, _sw_isr_table
add3 r0, r1, r0 /* table entries are 8-bytes wide */
ld_s r1, [r0, 4] /* ISR into r1 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s blink
push_s r13
push_s r12
push r11
push r10
push r9
push r8
push r7
push r6
push r5
push r4
push_s r3
push_s r2
push_s r1
bl read_timer_end_of_isr
pop_s r1
pop_s r2
pop_s r3
pop r4
pop r5
pop r6
pop r7
pop r8
pop r9
pop r10
pop r11
pop_s r12
pop_s r13
pop_s blink
pop_s r0
#endif
jl_s.d [r1]
ld_s r0, [r0] /* delay slot: ISR parameter into r0 */
#ifdef CONFIG_ARC_HAS_ACCL_REGS
pop r59
pop r58
#endif
pop r30
pop r25
/* back from ISR, jump to exit stub */
pop_s r3
j_s [r3]
nop_s
nop
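For readers less at home in the assembly above, a rough C equivalent of the _isr_demux table dispatch, assuming the usual two-word software ISR table entry layout (parameter at offset 0, handler at offset 4, as the ld_s offsets above imply):
struct sw_isr_entry {
	void *arg;            /* loaded from offset 0 (the delay slot above) */
	void (*isr)(void *);  /* loaded from offset 4 */
};
extern struct sw_isr_entry _sw_isr_table[];
static void isr_demux_sketch(unsigned int icause)
{
	/* external interrupt lines start at 16, hence "sub r0, r0, 16" */
	struct sw_isr_entry *e = &_sw_isr_table[icause - 16];
	e->isr(e->arg);
}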

View File

@@ -1,5 +1,3 @@
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources_if_kconfig(arc_core_mpu.c)

View File

@@ -1,8 +1,10 @@
# Memory Protection Unit (MPU) configuration options
# Kconfig - Memory Protection Unit (MPU) configuration options
#
# Copyright (c) 2017 Synopsys
#
# SPDX-License-Identifier: Apache-2.0
#
config ARC_MPU_VER
int "ARC MPU version"
range 2 4
@@ -13,21 +15,22 @@ config ARC_MPU_VER
config ARC_CORE_MPU
bool "ARC Core MPU functionalities"
depends on CPU_HAS_MPU
help
ARC core MPU functionalities
config MPU_STACK_GUARD
bool "Thread Stack Guards"
depends on ARC_CORE_MPU
depends on ARC_CORE_MPU && !ARC_STACK_CHECKING
help
Enable thread stack guards via the MPU. ARC also supports built-in stack
protection; if your core supports that, it is preferred over the MPU stack guard.
config ARC_MPU
bool "ARC MPU Support"
depends on CPU_HAS_MPU
select ARC_CORE_MPU
select THREAD_STACK_INFO
select MEMORY_PROTECTION
select MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT if ARC_MPU_VER = 2
help
Target has ARC MPU (currently only works for EMSK 2.2/2.3 ARCEM7D)

View File

@@ -9,7 +9,7 @@
#include <kernel.h>
#include <soc.h>
#include <arch/arc/v2/mpu/arc_core_mpu.h>
#include <kernel_structs.h>
#include <logging/sys_log.h>
/*
* @brief Configure MPU for the thread
@@ -21,13 +21,86 @@
void configure_mpu_thread(struct k_thread *thread)
{
arc_core_mpu_disable();
arc_core_mpu_configure_thread(thread);
#if defined(CONFIG_MPU_STACK_GUARD)
configure_mpu_stack_guard(thread);
#endif
#if defined(CONFIG_USERSPACE)
configure_mpu_user_context(thread);
configure_mpu_mem_domain(thread);
#endif
arc_core_mpu_enable();
}
#if defined(CONFIG_MPU_STACK_GUARD)
/*
* @brief Configure MPU stack guard
*
* This function configures per thread stack guards reprogramming the MPU.
* The functionality is meant to be used during context switch.
*
* @param thread thread info data structure.
*/
void configure_mpu_stack_guard(struct k_thread *thread)
{
#if defined(CONFIG_USERSPACE)
if (thread->thread_base.user_options & K_USER) {
/* the areas before and after the thread's user stack are
* kernel only. These areas can be used as the stack guard.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
arc_core_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE);
return;
}
#endif
arc_core_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE);
int arch_mem_domain_max_partitions_get(void)
}
#endif
#if defined(CONFIG_USERSPACE)
/*
* @brief Configure MPU user context
*
* This function configures the thread's user context.
* The functionality is meant to be used during context switch.
*
* @param thread thread info data structure.
*/
void configure_mpu_user_context(struct k_thread *thread)
{
SYS_LOG_DBG("configure user thread %p's context", thread);
arc_core_mpu_configure_user_context(thread);
}
/*
* @brief Configure MPU memory domain
*
* This function configures per thread memory domain reprogramming the MPU.
* The functionality is meant to be used during context switch.
*
* @param thread thread info data structure.
*/
void configure_mpu_mem_domain(struct k_thread *thread)
{
SYS_LOG_DBG("configure thread %p's domain", thread);
arc_core_mpu_configure_mem_domain(thread->mem_domain_info.mem_domain);
}
int _arch_mem_domain_max_partitions_get(void)
{
return arc_core_mpu_get_max_domain_partition_regions();
}
@@ -35,65 +108,41 @@ int arch_mem_domain_max_partitions_get(void)
/*
* Reset MPU region for a single memory partition
*/
void arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
void _arch_mem_domain_partition_remove(struct k_mem_domain *domain,
u32_t partition_id)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
ARG_UNUSED(domain);
arc_core_mpu_disable();
arc_core_mpu_remove_mem_partition(domain, partition_id);
arc_core_mpu_mem_partition_remove(partition_id);
arc_core_mpu_enable();
}
/*
* Configure MPU memory domain
*/
void arch_mem_domain_thread_add(struct k_thread *thread)
void _arch_mem_domain_configure(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arc_core_mpu_disable();
arc_core_mpu_configure_mem_domain(thread);
arc_core_mpu_enable();
configure_mpu_mem_domain(thread);
}
/*
* Destroy MPU regions for the mem domain
*/
void arch_mem_domain_destroy(struct k_mem_domain *domain)
void _arch_mem_domain_destroy(struct k_mem_domain *domain)
{
if (_current->mem_domain_info.mem_domain != domain) {
return;
}
ARG_UNUSED(domain);
arc_core_mpu_disable();
arc_core_mpu_remove_mem_domain(domain);
arc_core_mpu_configure_mem_domain(NULL);
arc_core_mpu_enable();
}
void arch_mem_domain_partition_add(struct k_mem_domain *domain,
u32_t partition_id)
{
/* No-op on this architecture */
}
void arch_mem_domain_thread_remove(struct k_thread *thread)
{
if (_current != thread) {
return;
}
arch_mem_domain_destroy(thread->mem_domain_info.mem_domain);
}
/*
* Validate the given buffer is user accessible or not
*/
int arch_buffer_validate(void *addr, size_t size, int write)
int _arch_buffer_validate(void *addr, size_t size, int write)
{
return arc_core_mpu_buffer_validate(addr, size, write);
}

View File

@@ -1,5 +1,5 @@
/*
* Copyright (c) 2019 Synopsys.
* Copyright (c) 2017 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -12,20 +12,50 @@
#include <arch/arc/v2/mpu/arc_mpu.h>
#include <arch/arc/v2/mpu/arc_core_mpu.h>
#include <linker/linker-defs.h>
#include <logging/sys_log.h>
#define AUX_MPU_RDB_VALID_MASK (0x1)
#define AUX_MPU_EN_ENABLE (0x40000000)
#define AUX_MPU_EN_DISABLE (0xBFFFFFFF)
#define AUX_MPU_RDP_REGION_SIZE(bits) \
(((bits - 1) & 0x3) | (((bits - 1) & 0x1C) << 7))
#define AUX_MPU_RDP_ATTR_MASK (0xFFF)
#define _ARC_V2_MPU_EN (0x409)
#define _ARC_V2_MPU_RDB0 (0x422)
#define _ARC_V2_MPU_RDP0 (0x423)
/* aux regs added in MPU version 3 */
#define _ARC_V2_MPU_INDEX (0x448) /* MPU index */
#define _ARC_V2_MPU_RSTART (0x449) /* MPU region start address */
#define _ARC_V2_MPU_REND (0x44A) /* MPU region end address */
#define _ARC_V2_MPU_RPER (0x44B) /* MPU region permission register */
#define _ARC_V2_MPU_PROBE (0x44C) /* MPU probe register */
/* For MPU version 2, the minimum protection region size is 2048 bytes */
/* For MPU version 3, the minimum protection region size is 32 bytes */
#if CONFIG_ARC_MPU_VER == 2
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 11
#elif CONFIG_ARC_MPU_VER == 3
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 5
#endif
#define CALC_REGION_END_ADDR(start, size) \
(start + size - (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS))
#define LOG_LEVEL CONFIG_MPU_LOG_LEVEL
#include <logging/log.h>
LOG_MODULE_REGISTER(mpu);
/**
* @brief Get the number of supported MPU regions
*
*/
static inline u8_t get_num_regions(void)
static inline u8_t _get_num_regions(void)
{
u32_t num = z_arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
u32_t num = _arc_v2_aux_reg_read(_ARC_V2_MPU_BUILD);
num = (num & 0xFF00U) >> 8U;
num = (num & 0xFF00) >> 8;
return (u8_t)num;
}
@@ -34,26 +64,648 @@ static inline u8_t get_num_regions(void)
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct parameter set.
*/
static inline u32_t get_region_attr_by_type(u32_t type)
static inline u32_t _get_region_attr_by_type(u32_t type)
{
switch (type) {
case THREAD_STACK_USER_REGION:
return REGION_RAM_ATTR;
case THREAD_STACK_REGION:
return AUX_MPU_ATTR_KW | AUX_MPU_ATTR_KR;
return AUX_MPU_RDP_KW | AUX_MPU_RDP_KR;
case THREAD_APP_DATA_REGION:
return REGION_RAM_ATTR;
case THREAD_STACK_GUARD_REGION:
/* no Write and Execute to guard region */
return AUX_MPU_ATTR_UR | AUX_MPU_ATTR_KR;
/* no Write and Execute to guard region */
return AUX_MPU_RDP_UR | AUX_MPU_RDP_KR;
default:
/* unknown type */
/* Size 0 region */
return 0;
}
}
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
/* ARC MPU version 2 and version 3 have different aux reg interface */
#if CONFIG_ARC_MPU_VER == 2
#include "arc_mpu_v2_internal.h"
u8_t bits = find_msb_set(size) - 1;
index = 2 * index;
if (bits < ARC_FEATURE_MPU_ALIGNMENT_BITS) {
bits = ARC_FEATURE_MPU_ALIGNMENT_BITS;
}
if ((1 << bits) < size) {
bits++;
}
if (size > 0) {
region_attr |= AUX_MPU_RDP_REGION_SIZE(bits);
region_addr |= AUX_MPU_RDB_VALID_MASK;
} else {
region_addr = 0;
}
_arc_v2_aux_reg_write(_ARC_V2_MPU_RDP0 + index, region_attr);
_arc_v2_aux_reg_write(_ARC_V2_MPU_RDB0 + index, region_addr);
#elif CONFIG_ARC_MPU_VER == 3
#include "arc_mpu_v3_internal.h"
#define AUX_MPU_RPER_SID1 0x10000
if (size < (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS)) {
size = (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS);
}
/* all MPU regions SID are the same: 1, the default SID */
if (region_attr) {
region_attr |= (AUX_MPU_RDB_VALID_MASK | AUX_MPU_RDP_S |
AUX_MPU_RPER_SID1);
}
_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
_arc_v2_aux_reg_write(_ARC_V2_MPU_RSTART, region_addr);
_arc_v2_aux_reg_write(_ARC_V2_MPU_REND,
CALC_REGION_END_ADDR(region_addr, size));
_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, region_attr);
#endif
}
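A brief worked sketch of the v2 size encoding above, to make the rounding behaviour concrete; the helper name is invented for illustration and size is assumed to be non-zero:
/* mirrors the CONFIG_ARC_MPU_VER == 2 branch of _region_init() above */
static inline u32_t mpu_v2_size_bits(u32_t size)
{
	u8_t bits = find_msb_set(size) - 1;
	if (bits < ARC_FEATURE_MPU_ALIGNMENT_BITS) {
		bits = ARC_FEATURE_MPU_ALIGNMENT_BITS;  /* clamp to the 2 KB minimum */
	}
	if ((1 << bits) < size) {
		bits++;                                 /* round up to a power of two */
	}
	/* e.g. size 2048 -> bits 11 (2 KB region), size 3000 -> bits 12 (4 KB region) */
	return AUX_MPU_RDP_REGION_SIZE(bits);
}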
#if CONFIG_ARC_MPU_VER == 3
static inline s32_t _mpu_probe(u32_t addr)
{
u32_t val;
_arc_v2_aux_reg_write(_ARC_V2_MPU_PROBE, addr);
val = _arc_v2_aux_reg_read(_ARC_V2_MPU_INDEX);
/* if no match or multiple regions match, return error */
if (val & 0xC0000000) {
return -1;
} else {
return val;
}
}
#endif
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (i.e. THREAD_STACK_REGION) and return the correct region index.
*/
static inline u32_t _get_region_index_by_type(u32_t type)
{
/*
* The new MPU regions are allocated per type after the statically
* configured regions. The type is one-indexed rather than
* zero-indexed.
*
* For ARC MPU v2, the smaller index has higher priority, so the
* index is allocated in reverse order. Static regions start from
* the biggest index, then thread related regions.
*
* For ARC MPU v3, each index has the same priority, so the index is
* allocated from small to big. Static regions start from 0, then
* thread related regions.
*/
switch (type) {
#if CONFIG_ARC_MPU_VER == 2
case THREAD_STACK_USER_REGION:
return _get_num_regions() - mpu_config.num_regions
- THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return _get_num_regions() - mpu_config.num_regions - type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return _get_num_regions() - mpu_config.num_regions - type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not enabled.
*/
return _get_num_regions() - mpu_config.num_regions - type + 1;
#endif
#elif CONFIG_ARC_MPU_VER == 3
case THREAD_STACK_USER_REGION:
return mpu_config.num_regions + THREAD_STACK_REGION - 1;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return mpu_config.num_regions + type - 1;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return mpu_config.num_regions + type - 1;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not enabled.
*/
return mpu_config.num_regions + type - 2;
#endif
#endif
default:
__ASSERT(0, "Unsupported type");
return 0;
}
}
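To make the allocation comment above concrete, a hypothetical illustration assuming 16 hardware regions, 3 static regions in mpu_config, and one-indexed type values as the comment states (the USER and DOMAIN special cases are ignored here):
/* hypothetical numbers, for illustration only */
#define HW_REGIONS_EXAMPLE     16
#define STATIC_REGIONS_EXAMPLE 3
static inline u32_t v2_index_sketch(u32_t type)
{
	return HW_REGIONS_EXAMPLE - STATIC_REGIONS_EXAMPLE - type; /* counts down: 12, 11, 10, ... */
}
static inline u32_t v3_index_sketch(u32_t type)
{
	return STATIC_REGIONS_EXAMPLE + type - 1;                  /* counts up: 3, 4, 5, ... */
}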
/**
* This internal function checks if region is enabled or not
*/
static inline int _is_enabled_region(u32_t r_index)
{
#if CONFIG_ARC_MPU_VER == 2
return ((_arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + 2 * r_index)
& AUX_MPU_RDB_VALID_MASK) == AUX_MPU_RDB_VALID_MASK);
#elif CONFIG_ARC_MPU_VER == 3
_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
return ((_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER) &
AUX_MPU_RDB_VALID_MASK) == AUX_MPU_RDB_VALID_MASK);
#endif
}
/**
* This internal function checks if the given buffer is in the region
*/
static inline int _is_in_region(u32_t r_index, u32_t start, u32_t size)
{
#if CONFIG_ARC_MPU_VER == 2
u32_t r_addr_start;
u32_t r_addr_end;
u32_t r_size_lshift;
r_addr_start = _arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + 2 * r_index)
& (~AUX_MPU_RDB_VALID_MASK);
r_size_lshift = _arc_v2_aux_reg_read(_ARC_V2_MPU_RDP0 + 2 * r_index)
& AUX_MPU_RDP_ATTR_MASK;
r_size_lshift = (r_size_lshift & 0x3) | ((r_size_lshift >> 7) & 0x1C);
r_addr_end = r_addr_start + (1 << (r_size_lshift + 1));
if (start >= r_addr_start && (start + size) < r_addr_end) {
return 1;
}
#elif CONFIG_ARC_MPU_VER == 3
if ((r_index == _mpu_probe(start)) &&
(r_index == _mpu_probe(start + size))) {
return 1;
}
#endif
return 0;
}
/**
* This internal function checks if the region is user accessible or not
*/
static inline int _is_user_accessible_region(u32_t r_index, int write)
{
u32_t r_ap;
#if CONFIG_ARC_MPU_VER == 2
r_ap = _arc_v2_aux_reg_read(_ARC_V2_MPU_RDP0 + 2 * r_index);
#elif CONFIG_ARC_MPU_VER == 3
_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
r_ap = _arc_v2_aux_reg_read(_ARC_V2_MPU_RPER);
#endif
r_ap &= AUX_MPU_RDP_ATTR_MASK;
if (write) {
return ((r_ap & (AUX_MPU_RDP_UW | AUX_MPU_RDP_KW)) ==
(AUX_MPU_RDP_UW | AUX_MPU_RDP_KW));
}
return ((r_ap & (AUX_MPU_RDP_UR | AUX_MPU_RDP_KR)) ==
(AUX_MPU_RDP_UR | AUX_MPU_RDP_KR));
}
/* ARC Core MPU Driver API Implementation for ARC MPU */
/**
* @brief enable the MPU
*/
void arc_core_mpu_enable(void)
{
#if CONFIG_ARC_MPU_VER == 2
/* Enable MPU */
_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) | AUX_MPU_EN_ENABLE);
/* MPU is always enabled, use default region to
* simulate MPU enable
*/
#elif CONFIG_ARC_MPU_VER == 3
#define MPU_ENABLE_ATTR 0
arc_core_mpu_default(MPU_ENABLE_ATTR);
#endif
}
/**
* @brief disable the MPU
*/
void arc_core_mpu_disable(void)
{
#if CONFIG_ARC_MPU_VER == 2
/* Disable MPU */
_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) & AUX_MPU_EN_DISABLE);
#elif CONFIG_ARC_MPU_VER == 3
/* MPU is always enabled, use default region to
* simulate MPU disable
*/
arc_core_mpu_default(REGION_ALL_ATTR);
#endif
}
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
void arc_core_mpu_configure(u8_t type, u32_t base, u32_t size)
{
u32_t region_index = _get_region_index_by_type(type);
u32_t region_attr = _get_region_attr_by_type(type);
SYS_LOG_DBG("Region info: 0x%x 0x%x", base, size);
if (region_attr == 0) {
return;
}
/*
* The new MPU regions are allocated per type before
* the statically configured regions.
*/
#if CONFIG_ARC_MPU_VER == 2
/*
* For ARC MPU v2, MPU regions can be overlapped, smaller
* region index has higher priority.
*/
_region_init(region_index, base, size, region_attr);
#elif CONFIG_ARC_MPU_VER == 3
static s32_t last_index;
s32_t index;
u32_t last_region = _get_num_regions() - 1;
/* use the hardware probe to find the region that may need to be split.
* Another way is to look up mpu_config.mpu_regions
* in software, which is time consuming.
*/
index = _mpu_probe(base);
/* ARC MPU version 3 doesn't support region overlap,
* so it cannot be used directly for stack/stack guard protection.
* One way to do this is to split the RAM region as follows:
*
* Take THREAD_STACK_GUARD_REGION as an example:
* RAM region 0: the RAM region before THREAD_STACK_GUARD_REGION, rw
* RAM THREAD_STACK_GUARD_REGION: RO
* RAM region 1: the region after THREAD_STACK_GUARD_REGION, same
* as region 0
* If region_index == index, the same thread has come back and
* no split is needed
*/
if (index >= 0 && region_index != index) {
/* need to split, only 1 split is allowed */
/* find the correct region in mpu_config.mpu_regions */
if (index == last_region) {
/* already split */
index = last_index;
} else {
/* new split */
last_index = index;
}
_region_init(index,
mpu_config.mpu_regions[index].base,
base - mpu_config.mpu_regions[index].base,
mpu_config.mpu_regions[index].attr);
#if defined(CONFIG_MPU_STACK_GUARD)
if (type != THREAD_STACK_USER_REGION)
/*
* USER REGION is contiguous with MPU_STACK_GUARD.
* In current implementation, THREAD_STACK_GUARD_REGION is
* configured before THREAD_STACK_USER_REGION
*/
#endif
_region_init(last_region, base + size,
(mpu_config.mpu_regions[index].base +
mpu_config.mpu_regions[index].size - base - size),
mpu_config.mpu_regions[index].attr);
}
_region_init(region_index, base, size, region_attr);
#endif
}
/**
* @brief configure the default region
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(u32_t region_attr)
{
u32_t val = _arc_v2_aux_reg_read(_ARC_V2_MPU_EN) &
(~AUX_MPU_RDP_ATTR_MASK);
region_attr &= AUX_MPU_RDP_ATTR_MASK;
_arc_v2_aux_reg_write(_ARC_V2_MPU_EN, region_attr | val);
}
/**
* @brief configure the MPU region
*
* @param index MPU region index
* @param base base address
* @param region_attr region attribute
*/
void arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
u32_t region_attr)
{
if (index >= _get_num_regions()) {
return;
}
region_attr &= AUX_MPU_RDP_ATTR_MASK;
_region_init(index, base, size, region_attr);
}
#if defined(CONFIG_USERSPACE)
void arc_core_mpu_configure_user_context(struct k_thread *thread)
{
u32_t base = (u32_t)thread->stack_obj;
u32_t size = thread->stack_info.size;
/* for kernel threads, no need to configure user context */
if (!thread->arch.priv_stack_start) {
return;
}
arc_core_mpu_configure(THREAD_STACK_USER_REGION, base, size);
/* configure app data portion */
#ifdef CONFIG_APPLICATION_MEMORY
#if CONFIG_ARC_MPU_VER == 2
/*
* _app_ram_size is guaranteed to be a power of two, and
* _app_ram_start is guaranteed to be aligned to _app_ram_size
* in the linker template
*/
base = (u32_t)&__app_ram_start;
size = (u32_t)&__app_ram_size;
/* set up app data region if exists, otherwise disable */
if (size > 0) {
arc_core_mpu_configure(THREAD_APP_DATA_REGION, base, size);
}
#elif CONFIG_ARC_MPU_VER == 3
/*
* ARC MPU v3 doesn't support MPU region overlap.
* Application memory should be static memory, defined in mpu_config
*/
#endif
#endif
}
/**
* @brief configure MPU regions for the memory partitions of the memory domain
*
* @param mem_domain memory domain that thread belongs to
*/
void arc_core_mpu_configure_mem_domain(struct k_mem_domain *mem_domain)
{
s32_t region_index =
_get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
u32_t num_partitions;
struct k_mem_partition *pparts;
if (mem_domain) {
SYS_LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
SYS_LOG_DBG("disable domain partition regions");
num_partitions = 0;
pparts = NULL;
}
#if CONFIG_ARC_MPU_VER == 2
for (; region_index >= 0; region_index--) {
#elif CONFIG_ARC_MPU_VER == 3
/*
* Note: For ARC MPU v3, overlapping is not allowed, yet the following
* partitions/regions may overlap each other or regions in
* mpu_config. That causes an EV_MachineCheck exception (ECR = 0x030600).
* Although the split mechanism is used for the stack guard region to avoid this,
* it doesn't work for memory domains, because the number of regions is dynamic.
* So be careful to avoid the overlap situation.
*/
u32_t num_regions = _get_num_regions() - 1;
for (; region_index < num_regions; region_index++) {
#endif
if (num_partitions && pparts->size) {
SYS_LOG_DBG("set region 0x%x 0x%x 0x%x",
region_index, pparts->start, pparts->size);
_region_init(region_index, pparts->start, pparts->size,
pparts->attr);
num_partitions--;
} else {
SYS_LOG_DBG("disable region 0x%x", region_index);
/* Disable region */
_region_init(region_index, 0, 0, 0);
}
pparts++;
}
}
/**
* @brief configure MPU region for a single memory partition
*
* @param part_index memory partition index
* @param part memory partition info
*/
void arc_core_mpu_configure_mem_partition(u32_t part_index,
struct k_mem_partition *part)
{
u32_t region_index =
_get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
SYS_LOG_DBG("configure partition index: %u", part_index);
if (part) {
SYS_LOG_DBG("set region 0x%x 0x%x 0x%x",
region_index + part_index, part->start, part->size);
_region_init(region_index, part->start, part->size,
part->attr);
} else {
SYS_LOG_DBG("disable region 0x%x", region_index + part_index);
/* Disable region */
_region_init(region_index + part_index, 0, 0, 0);
}
}
/**
* @brief Reset MPU region for a single memory partition
*
* @param part_index memory partition index
*/
void arc_core_mpu_mem_partition_remove(u32_t part_index)
{
u32_t region_index =
_get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
SYS_LOG_DBG("disable region 0x%x", region_index + part_index);
/* Disable region */
_region_init(region_index + part_index, 0, 0, 0);
}
/**
* @brief get the maximum number of free regions for memory domain partitions
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
#if CONFIG_ARC_MPU_VER == 2
return _get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) + 1;
#elif CONFIG_ARC_MPU_VER == 3
/*
* Subtracting the start of the domain partition regions and 1 reserved region
* from the total number of regions gives the maximum number of free regions
* for memory domain partitions.
*/
return _get_num_regions() -
_get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) - 1;
#endif
}
/**
* @brief validate the given buffer is user accessible or not
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
s32_t r_index;
/*
* For ARC MPU v2, the smaller region number takes priority.
* We can stop the iteration immediately once we find the
* matching region that grants permission or denies access.
*
* For ARC MPU v3, overlapping is not supported.
* We can stop the iteration immediately once we find the
* matching region that grants permission or denies access.
*/
#if CONFIG_ARC_MPU_VER == 2
for (r_index = 0; r_index < _get_num_regions(); r_index++) {
if (!_is_enabled_region(r_index) ||
!_is_in_region(r_index, (u32_t)addr, size)) {
continue;
}
if (_is_user_accessible_region(r_index, write)) {
return 0;
} else {
return -EPERM;
}
}
#elif CONFIG_ARC_MPU_VER == 3
r_index = _mpu_probe((u32_t)addr);
/* match and the area is in one region */
if (r_index >= 0 && r_index == _mpu_probe((u32_t)addr + size)) {
if (_is_user_accessible_region(r_index, write)) {
return 0;
} else {
return -EPERM;
}
}
#endif
return -EPERM;
}
#endif /* CONFIG_USERSPACE */
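A hedged caller-side sketch of how the validation above is typically consumed (only meaningful with CONFIG_USERSPACE; the helper name is made up): a buffer handed in from user mode is checked before the kernel touches it.
#include <string.h>
#include <errno.h>
/* hypothetical: validate a user-supplied buffer before reading it */
static int copy_from_user_sketch(void *dst, void *user_src, size_t len)
{
	if (arc_core_mpu_buffer_validate(user_src, len, 0 /* read */) != 0) {
		return -EPERM;  /* user mode may not read this range */
	}
	(void)memcpy(dst, user_src, len);
	return 0;
}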
/* ARC MPU Driver Initial Setup */
/*
* @brief MPU default configuration
*
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static void _arc_mpu_config(void)
{
u32_t num_regions;
u32_t i;
num_regions = _get_num_regions();
/* ARC MPU supports up to 16 Regions */
if (mpu_config.num_regions > num_regions) {
return;
}
/* Disable MPU */
arc_core_mpu_disable();
#if CONFIG_ARC_MPU_VER == 2
u32_t r_index;
/*
* the MPU regions are filled in reverse order.
* According to the ARCv2 ISA, the MPU region with the smaller
* index has higher priority. The static background MPU
* regions in mpu_config will be at the bottom, with
* the special type regions above them.
*
*/
r_index = num_regions - mpu_config.num_regions;
/* clear all the regions first */
for (i = 0; i < r_index; i++) {
_region_init(i, 0, 0, 0);
}
/* configure the static regions */
for (i = 0; i < mpu_config.num_regions; i++) {
_region_init(r_index,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
r_index++;
}
/* default region: no read, write and execute */
arc_core_mpu_default(0);
#elif CONFIG_ARC_MPU_VER == 3
for (i = 0; i < mpu_config.num_regions; i++) {
_region_init(i,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
}
for (; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
}
#endif
/* Enable MPU */
arc_core_mpu_enable();
}
static int arc_mpu_init(struct device *arg)
{
ARG_UNUSED(arg);
_arc_mpu_config();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1,
CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);


@@ -1,479 +0,0 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V2_INTERNAL_H_
#define ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V2_INTERNAL_H_
#define AUX_MPU_RDB_VALID_MASK (0x1)
#define AUX_MPU_EN_ENABLE (0x40000000)
#define AUX_MPU_EN_DISABLE (0xBFFFFFFF)
#define AUX_MPU_RDP_REGION_SIZE(bits) \
(((bits - 1) & 0x3) | (((bits - 1) & 0x1C) << 7))
#define AUX_MPU_RDP_ATTR_MASK (0x1FC)
#define AUX_MPU_RDP_SIZE_MASK (0xE03)
#define _ARC_V2_MPU_EN (0x409)
#define _ARC_V2_MPU_RDB0 (0x422)
#define _ARC_V2_MPU_RDP0 (0x423)
/* For MPU version 2, the minimum protection region size is 2048 bytes */
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 11
/**
* This internal function initializes a MPU region
*/
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
u8_t bits = find_msb_set(size) - 1;
index = index * 2U;
if (bits < ARC_FEATURE_MPU_ALIGNMENT_BITS) {
bits = ARC_FEATURE_MPU_ALIGNMENT_BITS;
}
if ((1 << bits) < size) {
bits++;
}
if (size > 0) {
region_attr &= ~(AUX_MPU_RDP_SIZE_MASK);
region_attr |= AUX_MPU_RDP_REGION_SIZE(bits);
region_addr |= AUX_MPU_RDB_VALID_MASK;
} else {
region_addr = 0U;
}
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RDP0 + index, region_attr);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RDB0 + index, region_addr);
}
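/*
 * Illustrative note (not part of the original source): _region_init()
 * rounds the requested size up to the next power of two, assuming
 * find_msb_set() returns the 1-based position of the most significant
 * set bit. For example, size = 3000 gives find_msb_set(3000) = 12, so
 * bits = 11; since (1 << 11) = 2048 < 3000, bits is bumped to 12 and the
 * programmed region covers 4096 bytes.
 */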
/**
* This internal function is utilized by the MPU driver to parse the intent
* type (e.g. THREAD_STACK_REGION) and return the correct region index.
*/
static inline int get_region_index_by_type(u32_t type)
{
/*
* The new MPU regions are allocated per type after the statically
* configured regions. The type is one-indexed rather than
* zero-indexed.
*
* For ARC MPU v2, a smaller index has higher priority, so indices are
* allocated in reverse order. Static regions occupy the highest
* indices, followed by the thread-related regions.
*
*/
switch (type) {
case THREAD_STACK_USER_REGION:
return get_num_regions() - mpu_config.num_regions
- THREAD_STACK_REGION;
case THREAD_STACK_REGION:
case THREAD_APP_DATA_REGION:
case THREAD_STACK_GUARD_REGION:
return get_num_regions() - mpu_config.num_regions - type;
case THREAD_DOMAIN_PARTITION_REGION:
#if defined(CONFIG_MPU_STACK_GUARD)
return get_num_regions() - mpu_config.num_regions - type;
#else
/*
* Start domain partition region from stack guard region
* since stack guard is not enabled.
*/
return get_num_regions() - mpu_config.num_regions - type + 1;
#endif
default:
__ASSERT(0, "Unsupported type");
return -EINVAL;
}
}
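/*
 * Illustrative note (not part of the original source): with a hypothetical
 * configuration of 16 hardware regions and 4 static regions in mpu_config,
 * a type with value 1 maps to index 16 - 4 - 1 = 11, a type with value 2 to
 * index 10, and so on. Note that THREAD_STACK_USER_REGION maps to the same
 * slot as THREAD_STACK_REGION, presumably because a thread needs only one
 * of the two stack regions at a time.
 */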
/**
* This internal function checks if region is enabled or not
*/
static inline bool _is_enabled_region(u32_t r_index)
{
return ((z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + r_index * 2U)
& AUX_MPU_RDB_VALID_MASK) == AUX_MPU_RDB_VALID_MASK);
}
/**
* This internal function checks whether the given buffer is in the region
*/
static inline bool _is_in_region(u32_t r_index, u32_t start, u32_t size)
{
u32_t r_addr_start;
u32_t r_addr_end;
u32_t r_size_lshift;
r_addr_start = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDB0 + r_index * 2U)
& (~AUX_MPU_RDB_VALID_MASK);
r_size_lshift = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDP0 + r_index * 2U)
& AUX_MPU_RDP_SIZE_MASK;
r_size_lshift = (r_size_lshift & 0x3) | ((r_size_lshift >> 7) & 0x1C);
r_addr_end = r_addr_start + (1 << (r_size_lshift + 1));
if (start >= r_addr_start && (start + size) <= r_addr_end) {
return 1;
}
return 0;
}
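/*
 * Illustrative note (not part of the original source): the size exponent
 * written by AUX_MPU_RDP_REGION_SIZE() above is what _is_in_region()
 * unpacks again. For a 2 KB region (bits = 11):
 *   encode: ((11 - 1) & 0x3) | (((11 - 1) & 0x1C) << 7) = 0x2 | 0x400 = 0x402
 *   decode: (0x402 & 0x3) | ((0x402 >> 7) & 0x1C) = 0x2 | 0x8 = 0xA
 *   region length: 1 << (0xA + 1) = 2048 bytes
 */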
/**
* This internal function checks whether the region is user accessible
*/
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
u32_t r_ap;
r_ap = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RDP0 + r_index * 2U);
r_ap &= AUX_MPU_RDP_ATTR_MASK;
if (write) {
return ((r_ap & (AUX_MPU_ATTR_UW | AUX_MPU_ATTR_KW)) ==
(AUX_MPU_ATTR_UW | AUX_MPU_ATTR_KW));
}
return ((r_ap & (AUX_MPU_ATTR_UR | AUX_MPU_ATTR_KR)) ==
(AUX_MPU_ATTR_UR | AUX_MPU_ATTR_KR));
}
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
{
s32_t region_index = get_region_index_by_type(type);
u32_t region_attr = get_region_attr_by_type(type);
LOG_DBG("Region info: 0x%x 0x%x", base, size);
if (region_attr == 0U || region_index < 0) {
return -EINVAL;
}
/*
* For ARC MPU v2, MPU regions can be overlapped, smaller
* region index has higher priority.
*/
_region_init(region_index, base, size, region_attr);
return 0;
}
/* ARC Core MPU Driver API Implementation for ARC MPUv2 */
/**
* @brief enable the MPU
*/
void arc_core_mpu_enable(void)
{
/* Enable MPU */
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) | AUX_MPU_EN_ENABLE);
}
/**
* @brief disable the MPU
*/
void arc_core_mpu_disable(void)
{
/* Disable MPU */
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN,
z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) & AUX_MPU_EN_DISABLE);
}
/**
* @brief configure the thread's MPU regions
*
* @param thread the target thread
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
#if defined(CONFIG_MPU_STACK_GUARD)
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0) {
/* the areas before and after the thread's user stack are
* kernel-only. These areas can be used as stack guards.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
} else {
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
}
#else
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
#endif
#endif
#if defined(CONFIG_USERSPACE)
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
if (_mpu_configure(THREAD_STACK_USER_REGION,
(u32_t)thread->stack_obj, thread->stack_info.size) < 0) {
LOG_ERR("user thread %p's stack failed", thread);
return;
}
}
LOG_DBG("configure thread %p's domain", thread);
arc_core_mpu_configure_mem_domain(thread);
#endif
}
/**
* @brief configure the default region
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(u32_t region_attr)
{
u32_t val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_EN) &
(~AUX_MPU_RDP_ATTR_MASK);
region_attr &= AUX_MPU_RDP_ATTR_MASK;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN, region_attr | val);
}
/**
* @brief configure the MPU region
*
* @param index MPU region index
* @param base base address
* @param region_attr region attribute
*/
int arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
u32_t region_attr)
{
if (index >= get_num_regions()) {
return -EINVAL;
}
region_attr &= AUX_MPU_RDP_ATTR_MASK;
_region_init(index, base, size, region_attr);
return 0;
}
#if defined(CONFIG_USERSPACE)
/**
* @brief configure MPU regions for the memory partitions of the memory domain
*
* @param thread the thread which has memory domain
*/
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
int region_index =
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = NULL;
if (thread) {
mem_domain = thread->mem_domain_info.mem_domain;
}
if (mem_domain) {
LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
LOG_DBG("disable domain partition regions");
num_partitions = 0U;
pparts = NULL;
}
for (; region_index >= 0; region_index--) {
if (num_partitions) {
LOG_DBG("set region 0x%x 0x%lx 0x%x",
region_index, pparts->start, pparts->size);
_region_init(region_index, pparts->start,
pparts->size, pparts->attr);
num_partitions--;
} else {
/* clear the remaining MPU entries */
_region_init(region_index, 0, 0, 0);
}
pparts++;
}
}
/**
* @brief remove MPU regions for the memory partitions of the memory domain
*
* @param mem_domain the target memory domain
*/
void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
{
ARG_UNUSED(mem_domain);
int region_index =
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
for (; region_index >= 0; region_index--) {
_region_init(region_index, 0, 0, 0);
}
}
/**
* @brief reset MPU region for a single memory partition
*
* @param domain the target memory domain
* @param part_id memory partition id
*/
void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
u32_t part_id)
{
ARG_UNUSED(domain);
int region_index =
get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION);
LOG_DBG("disable region 0x%x", region_index + part_id);
/* Disable region */
_region_init(region_index + part_id, 0, 0, 0);
}
/**
* @brief get the maximum number of free regions for memory domain partitions
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
return get_region_index_by_type(THREAD_DOMAIN_PARTITION_REGION) + 1;
}
/**
* @brief validate whether the given buffer is user accessible
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
int r_index;
/*
* For ARC MPU v2, a smaller region number takes priority, so the
* iteration can stop as soon as a matching region either grants
* permission or denies access.
*
*/
for (r_index = 0; r_index < get_num_regions(); r_index++) {
if (!_is_enabled_region(r_index) ||
!_is_in_region(r_index, (u32_t)addr, size)) {
continue;
}
if (_is_user_accessible_region(r_index, write)) {
return 0;
} else {
return -EPERM;
}
}
return -EPERM;
}
#endif /* CONFIG_USERSPACE */
/* ARC MPU Driver Initial Setup */
/*
* @brief MPU default initialization and configuration
*
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static int arc_mpu_init(struct device *arg)
{
ARG_UNUSED(arg);
u32_t num_regions;
u32_t i;
num_regions = get_num_regions();
/* ARC MPU supports up to 16 Regions */
if (mpu_config.num_regions > num_regions) {
__ASSERT(0,
"Request to configure: %u regions (supported: %u)\n",
mpu_config.num_regions, num_regions);
return -EINVAL;
}
/* Disable MPU */
arc_core_mpu_disable();
int r_index;
/*
* The MPU regions are filled in reverse order.
* According to the ARCv2 ISA, an MPU region with a smaller
* index has higher priority. The static background MPU
* regions in mpu_config therefore sit at the bottom (highest
* indices), with the special-purpose regions above them.
*
*/
r_index = num_regions - mpu_config.num_regions;
/* clear all the regions first */
for (i = 0U; i < r_index; i++) {
_region_init(i, 0, 0, 0);
}
/* configure the static regions */
for (i = 0U; i < mpu_config.num_regions; i++) {
_region_init(r_index,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
r_index++;
}
/* default region: no read, write, or execute */
arc_core_mpu_default(0);
/* Enable MPU */
arc_core_mpu_enable();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1,
CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V2_INTERNAL_H_ */


@@ -1,752 +0,0 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V3_INTERNAL_H_
#define ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V3_INTERNAL_H_
#define AUX_MPU_RPER_SID1 0x10000
/* valid mask: SID1+secure+valid */
#define AUX_MPU_RPER_VALID_MASK ((0x1) | AUX_MPU_RPER_SID1 | AUX_MPU_ATTR_S)
#define AUX_MPU_RPER_ATTR_MASK (0x1FF)
#define _ARC_V2_MPU_EN (0x409)
/* aux regs added in MPU version 3 */
#define _ARC_V2_MPU_INDEX (0x448) /* MPU index */
#define _ARC_V2_MPU_RSTART (0x449) /* MPU region start address */
#define _ARC_V2_MPU_REND (0x44A) /* MPU region end address */
#define _ARC_V2_MPU_RPER (0x44B) /* MPU region permission register */
#define _ARC_V2_MPU_PROBE (0x44C) /* MPU probe register */
/* For MPU version 3, the minimum protection region size is 32 bytes */
#define ARC_FEATURE_MPU_ALIGNMENT_BITS 5
#define CALC_REGION_END_ADDR(start, size) \
(start + size - (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS))
#if defined(CONFIG_USERSPACE) && defined(CONFIG_MPU_STACK_GUARD)
/* 1 for stack guard, 1 for user thread, 1 for split */
#define MPU_REGION_NUM_FOR_THREAD 3
#elif defined(CONFIG_USERSPACE) || defined(CONFIG_MPU_STACK_GUARD)
/* 1 for stack guard or user thread stack, 1 for split */
#define MPU_REGION_NUM_FOR_THREAD 2
#else
#define MPU_REGION_NUM_FOR_THREAD 0
#endif
/**
* @brief internal structure holding information of
* memory areas where dynamic MPU programming is allowed.
*/
struct dynamic_region_info {
u8_t index;
u32_t base;
u32_t size;
u32_t attr;
};
#define MPU_DYNAMIC_REGION_AREAS_NUM 2
static u8_t static_regions_num;
static u8_t dynamic_regions_num;
static u8_t dynamic_region_index;
/**
* Global array, holding the MPU region index of
* the memory region inside which dynamic memory
* regions may be configured.
*/
static struct dynamic_region_info dyn_reg_info[MPU_DYNAMIC_REGION_AREAS_NUM];
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* \todo access the MPU through a secure service */
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
}
static inline void _region_set_attr(u32_t index, u32_t attr)
{
}
static inline u32_t _region_get_attr(u32_t index)
{
return 0;
}
static inline u32_t _region_get_start(u32_t index)
{
return 0;
}
static inline void _region_set_start(u32_t index, u32_t start)
{
}
static inline u32_t _region_get_end(u32_t index)
{
return 0;
}
static inline void _region_set_end(u32_t index, u32_t end)
{
}
/**
* This internal function probes the MPU index of the given address. If the
* address is not covered by the MPU, an error is returned.
*/
static inline int _mpu_probe(u32_t addr)
{
return -EINVAL;
}
/**
* This internal function checks if MPU region is enabled or not
*/
static inline bool _is_enabled_region(u32_t r_index)
{
return false;
}
/**
* This internal function check if the region is user accessible or not
*/
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
return false;
}
#else /* CONFIG_ARC_NORMAL_FIRMWARE */
/* the following functions are prepared for SECURE_FIRMWARE */
static inline void _region_init(u32_t index, u32_t region_addr, u32_t size,
u32_t region_attr)
{
if (size < (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS)) {
size = (1 << ARC_FEATURE_MPU_ALIGNMENT_BITS);
}
if (region_attr) {
region_attr &= AUX_MPU_RPER_ATTR_MASK;
region_attr |= AUX_MPU_RPER_VALID_MASK;
}
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RSTART, region_addr);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_REND,
CALC_REGION_END_ADDR(region_addr, size));
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, region_attr);
}
static inline void _region_set_attr(u32_t index, u32_t attr)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RPER, attr |
AUX_MPU_RPER_VALID_MASK);
}
static inline u32_t _region_get_attr(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
return z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER);
}
static inline u32_t _region_get_start(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
return z_arc_v2_aux_reg_read(_ARC_V2_MPU_RSTART);
}
static inline void _region_set_start(u32_t index, u32_t start)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_RSTART, start);
}
static inline u32_t _region_get_end(u32_t index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
return z_arc_v2_aux_reg_read(_ARC_V2_MPU_REND) +
(1 << ARC_FEATURE_MPU_ALIGNMENT_BITS);
}
static inline void _region_set_end(u32_t index, u32_t end)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, index);
z_arc_v2_aux_reg_write(_ARC_V2_MPU_REND, end -
(1 << ARC_FEATURE_MPU_ALIGNMENT_BITS));
}
/**
* This internal function probes the MPU index of the given address. If the
* address is not covered by the MPU, an error is returned.
*/
static inline int _mpu_probe(u32_t addr)
{
u32_t val;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_PROBE, addr);
val = z_arc_v2_aux_reg_read(_ARC_V2_MPU_INDEX);
/* if no match or multiple regions match, return error */
if (val & 0xC0000000) {
return -EINVAL;
} else {
return val;
}
}
/**
* This internal function checks if MPU region is enabled or not
*/
static inline bool _is_enabled_region(u32_t r_index)
{
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
return ((z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER) &
AUX_MPU_RPER_VALID_MASK) == AUX_MPU_RPER_VALID_MASK);
}
/**
* This internal function check if the region is user accessible or not
*/
static inline bool _is_user_accessible_region(u32_t r_index, int write)
{
u32_t r_ap;
z_arc_v2_aux_reg_write(_ARC_V2_MPU_INDEX, r_index);
r_ap = z_arc_v2_aux_reg_read(_ARC_V2_MPU_RPER);
r_ap &= AUX_MPU_RPER_ATTR_MASK;
if (write) {
return ((r_ap & (AUX_MPU_ATTR_UW | AUX_MPU_ATTR_KW)) ==
(AUX_MPU_ATTR_UW | AUX_MPU_ATTR_KW));
}
return ((r_ap & (AUX_MPU_ATTR_UR | AUX_MPU_ATTR_KR)) ==
(AUX_MPU_ATTR_UR | AUX_MPU_ATTR_KR));
}
#endif /* CONFIG_ARC_NORMAL_FIRMWARE */
/**
* This internal function allocates a dynamic MPU region and returns
* the index or error
*/
static inline int _dynamic_region_allocate_index(void)
{
if (dynamic_region_index >= get_num_regions()) {
LOG_ERR("no enough mpu entries %d", dynamic_region_index);
return -EINVAL;
}
return dynamic_region_index++;
}
/**
* This internal function checks the area given by (start, size)
* and returns the index if the area matches one MPU entry
*/
static inline int _get_region_index(u32_t start, u32_t size)
{
int index = _mpu_probe(start);
if (index > 0 && index == _mpu_probe(start + size - 1)) {
return index;
}
return -EINVAL;
}
/* @brief allocate and init a dynamic MPU region
*
* This internal function performs the allocation and initialization of
* a dynamic MPU region
*
* @param base region base
* @param size region size
* @param attr region attribute
* @return <0 failure, >0 allocated dynamic region index
*/
static int _dynamic_region_allocate_and_init(u32_t base, u32_t size,
u32_t attr)
{
int u_region_index = _get_region_index(base, size);
int region_index;
LOG_DBG("Region info: base 0x%x size 0x%x attr 0x%x", base, size, attr);
if (u_region_index == -EINVAL) {
/* no underlying region */
region_index = _dynamic_region_allocate_index();
if (region_index > 0) {
/* a new region */
_region_init(region_index, base, size, attr);
}
return region_index;
}
/*
* The new memory region is to be placed inside the underlying
* region, possibly splitting the underlying region into two.
*/
u32_t u_region_start = _region_get_start(u_region_index);
u32_t u_region_end = _region_get_end(u_region_index);
u32_t u_region_attr = _region_get_attr(u_region_index);
u32_t end = base + size;
if ((base == u_region_start) && (end == u_region_end)) {
/* The new region overlaps entirely with the
* underlying region. In this case we simply
* update the partition attributes of the
* underlying region with those of the new
* region.
*/
_region_init(u_region_index, base, size, attr);
region_index = u_region_index;
} else if (base == u_region_start) {
/* The new region starts exactly at the start of the
* underlying region; the start of the underlying
* region needs to be set to the end of the new region.
*/
_region_set_start(u_region_index, base + size);
_region_set_attr(u_region_index, u_region_attr);
region_index = _dynamic_region_allocate_index();
if (region_index > 0) {
_region_init(region_index, base, size, attr);
}
} else if (end == u_region_end) {
/* The new region ends exactly at the end of the
* underlying region; the end of the underlying
* region needs to be set to the start of the
* new region.
*/
_region_set_end(u_region_index, base);
_region_set_attr(u_region_index, u_region_attr);
region_index = _dynamic_region_allocate_index();
if (region_index > 0) {
_region_init(region_index, base, size, attr);
}
} else {
/* The new region lies strictly inside the
* underlying region, which needs to split
* into two regions.
*/
_region_set_end(u_region_index, base);
_region_set_attr(u_region_index, u_region_attr);
region_index = _dynamic_region_allocate_index();
if (region_index > 0) {
_region_init(region_index, base, size, attr);
region_index = _dynamic_region_allocate_index();
if (region_index > 0) {
_region_init(region_index, base + size,
u_region_end - end, u_region_attr);
}
}
}
return region_index;
}
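/*
 * Illustrative note (not part of the original source), using hypothetical
 * addresses: if the underlying region is [0x80000000, 0x80010000) and a new
 * region with base 0x80004000 and size 0x1000 is requested, the code above
 * shrinks the underlying region to [0x80000000, 0x80004000), allocates one
 * dynamic entry for [0x80004000, 0x80005000) with the new attributes, and a
 * second dynamic entry for [0x80005000, 0x80010000) that keeps the original
 * underlying attributes.
 */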
/* @brief reset the dynamic MPU regions
*
* This internal function performs the reset of dynamic MPU regions
*/
static void _mpu_reset_dynamic_regions(void)
{
u32_t i;
u32_t num_regions = get_num_regions();
for (i = static_regions_num; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
}
for (i = 0U; i < dynamic_regions_num; i++) {
_region_init(
dyn_reg_info[i].index,
dyn_reg_info[i].base,
dyn_reg_info[i].size,
dyn_reg_info[i].attr);
}
/* dynamic regions are after static regions */
dynamic_region_index = static_regions_num;
}
/**
* @brief configure the base address and size for an MPU region
*
* @param type MPU region type
* @param base base address in RAM
* @param size size of the region
*/
static inline int _mpu_configure(u8_t type, u32_t base, u32_t size)
{
u32_t region_attr = get_region_attr_by_type(type);
return _dynamic_region_allocate_and_init(base, size, region_attr);
}
/* ARC Core MPU Driver API Implementation for ARC MPUv3 */
/**
* @brief enable the MPU
*/
void arc_core_mpu_enable(void)
{
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* the default region:
* normal: 0x000, SID: 0x10000, KW: 0x100, KR: 0x80, KE: 0x40
*/
#define MPU_ENABLE_ATTR 0x101c0
#else
#define MPU_ENABLE_ATTR 0
#endif
arc_core_mpu_default(MPU_ENABLE_ATTR);
}
/**
* @brief disable the MPU
*/
void arc_core_mpu_disable(void)
{
/* MPU is always enabled, use default region to
* simulate MPU disable
*/
arc_core_mpu_default(REGION_ALL_ATTR | AUX_MPU_ATTR_S |
AUX_MPU_RPER_SID1);
}
/**
* @brief configure the thread's mpu regions
*
* @param thread the target thread
*/
void arc_core_mpu_configure_thread(struct k_thread *thread)
{
/* The MPU entries of ARC MPUv3 are divided into 2 parts:
* static entries: global MPU entries, unchanged on context switch
* dynamic entries: MPU entries changed on context switch and
* during memory domain configuration, including:
* MPU entries for the user thread stack
* MPU entries for the stack guard
* MPU entries for the memory domain
* MPU entries for other thread-specific regions
* Before configuring thread-specific MPU entries, the dynamic
* entries need to be reset.
*/
_mpu_reset_dynamic_regions();
#if defined(CONFIG_MPU_STACK_GUARD)
#if defined(CONFIG_USERSPACE)
if ((thread->base.user_options & K_USER) != 0U) {
/* the areas before and after the thread's user stack are
* kernel-only. These areas can be used as stack guards.
* -----------------------
* | kernel only area |
* |---------------------|
* | user stack |
* |---------------------|
* |privilege stack guard|
* |---------------------|
* | privilege stack |
* -----------------------
*/
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->arch.priv_stack_start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
} else {
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
}
#else
if (_mpu_configure(THREAD_STACK_GUARD_REGION,
thread->stack_info.start - STACK_GUARD_SIZE,
STACK_GUARD_SIZE) < 0) {
LOG_ERR("thread %p's stack guard failed", thread);
return;
}
#endif
#endif
#if defined(CONFIG_USERSPACE)
u32_t num_partitions;
struct k_mem_partition *pparts;
struct k_mem_domain *mem_domain = thread->mem_domain_info.mem_domain;
/* configure stack region of user thread */
if (thread->base.user_options & K_USER) {
LOG_DBG("configure user thread %p's stack", thread);
if (_mpu_configure(THREAD_STACK_USER_REGION,
(u32_t)thread->stack_obj, thread->stack_info.size) < 0) {
LOG_ERR("thread %p's stack failed", thread);
return;
}
}
/* configure thread's memory domain */
if (mem_domain) {
LOG_DBG("configure thread %p's domain: %p",
thread, mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
num_partitions = 0U;
pparts = NULL;
}
for (u32_t i = 0; i < num_partitions; i++) {
if (pparts->size) {
if (_dynamic_region_allocate_and_init(pparts->start,
pparts->size, pparts->attr) < 0) {
LOG_ERR(
"thread %p's mem region: %p failed",
thread, pparts);
return;
}
}
pparts++;
}
#endif
}
/**
* @brief configure the default region
*
* @param region_attr region attribute of default region
*/
void arc_core_mpu_default(u32_t region_attr)
{
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* \todo access the MPU through a secure service */
#else
z_arc_v2_aux_reg_write(_ARC_V2_MPU_EN, region_attr);
#endif
}
/**
* @brief configure the MPU region
*
* @param index MPU region index
* @param base base address
* @param size region size
* @param region_attr region attribute
*/
int arc_core_mpu_region(u32_t index, u32_t base, u32_t size,
u32_t region_attr)
{
if (index >= get_num_regions()) {
return -EINVAL;
}
region_attr &= AUX_MPU_RPER_ATTR_MASK;
_region_init(index, base, size, region_attr);
return 0;
}
#if defined(CONFIG_USERSPACE)
/**
* @brief configure MPU regions for the memory partitions of the memory domain
*
* @param thread the thread which has memory domain
*/
void arc_core_mpu_configure_mem_domain(struct k_thread *thread)
{
arc_core_mpu_configure_thread(thread);
}
/**
* @brief remove MPU regions for the memory partitions of the memory domain
*
* @param mem_domain the target memory domain
*/
void arc_core_mpu_remove_mem_domain(struct k_mem_domain *mem_domain)
{
u32_t num_partitions;
struct k_mem_partition *pparts;
int index;
if (mem_domain) {
LOG_DBG("configure domain: %p", mem_domain);
num_partitions = mem_domain->num_partitions;
pparts = mem_domain->partitions;
} else {
LOG_DBG("disable domain partition regions");
num_partitions = 0U;
pparts = NULL;
}
for (u32_t i = 0; i < num_partitions; i++) {
if (pparts->size) {
index = _get_region_index(pparts->start,
pparts->size);
if (index > 0) {
_region_set_attr(index,
REGION_KERNEL_RAM_ATTR);
}
}
pparts++;
}
}
/**
* @brief reset MPU region for a single memory partition
*
* @param partition_id memory partition id
*/
void arc_core_mpu_remove_mem_partition(struct k_mem_domain *domain,
u32_t partition_id)
{
struct k_mem_partition *partition = &domain->partitions[partition_id];
int region_index = _get_region_index(partition->start,
partition->size);
if (region_index < 0) {
return;
}
LOG_DBG("remove region 0x%x", region_index);
_region_set_attr(region_index, REGION_KERNEL_RAM_ATTR);
}
/**
* @brief get the maximum number of free regions for memory domain partitions
*/
int arc_core_mpu_get_max_domain_partition_regions(void)
{
/* consider the worst case: each partition requires a split */
return (get_num_regions() - MPU_REGION_NUM_FOR_THREAD) / 2;
}
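/*
 * Illustrative note (not part of the original source): on a part with 16
 * MPU regions and both CONFIG_USERSPACE and CONFIG_MPU_STACK_GUARD enabled
 * (MPU_REGION_NUM_FOR_THREAD == 3), this yields (16 - 3) / 2 = 6, i.e. at
 * most 6 partitions, since in the worst case each partition splits an
 * underlying region and therefore consumes two MPU entries.
 */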
/**
* @brief validate whether the given buffer is user accessible
*/
int arc_core_mpu_buffer_validate(void *addr, size_t size, int write)
{
int r_index;
int key = arch_irq_lock();
/*
* For ARC MPU v3, regions cannot overlap, so a single probe of the
* address is enough to find the region that grants permission or
* denies access.
*/
r_index = _mpu_probe((u32_t)addr);
/* match and the area is in one region */
if (r_index >= 0 && r_index == _mpu_probe((u32_t)addr + (size - 1))) {
if (_is_user_accessible_region(r_index, write)) {
r_index = 0;
} else {
r_index = -EPERM;
}
} else {
r_index = -EPERM;
}
arch_irq_unlock(key);
return r_index;
}
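/*
 * Illustrative note (not part of the original source): interrupts are locked
 * around the two _mpu_probe() calls above, presumably because probing goes
 * through the shared _ARC_V2_MPU_INDEX register and a context switch in
 * between could reprogram the dynamic MPU entries and invalidate the result.
 */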
#endif /* CONFIG_USERSPACE */
/* ARC MPU Driver Initial Setup */
/*
* @brief MPU default initialization and configuration
*
* This function provides the default configuration mechanism for the Memory
* Protection Unit (MPU).
*/
static int arc_mpu_init(struct device *arg)
{
ARG_UNUSED(arg);
u32_t num_regions;
u32_t i;
num_regions = get_num_regions();
/* ARC MPU supports up to 16 Regions */
if (mpu_config.num_regions > num_regions) {
__ASSERT(0,
"Request to configure: %u regions (supported: %u)\n",
mpu_config.num_regions, num_regions);
return -EINVAL;
}
/* Disable MPU */
arc_core_mpu_disable();
for (i = 0U; i < mpu_config.num_regions; i++) {
_region_init(i,
mpu_config.mpu_regions[i].base,
mpu_config.mpu_regions[i].size,
mpu_config.mpu_regions[i].attr);
/* record the static region which can be split */
if (mpu_config.mpu_regions[i].attr & REGION_DYNAMIC) {
if (dynamic_regions_num >=
MPU_DYNAMIC_REGION_AREAS_NUM) {
LOG_ERR("not enough dynamic regions %d",
dynamic_regions_num);
return -EINVAL;
}
dyn_reg_info[dynamic_regions_num].index = i;
dyn_reg_info[dynamic_regions_num].base =
mpu_config.mpu_regions[i].base;
dyn_reg_info[dynamic_regions_num].size =
mpu_config.mpu_regions[i].size;
dyn_reg_info[dynamic_regions_num].attr =
mpu_config.mpu_regions[i].attr;
dynamic_regions_num++;
}
}
static_regions_num = mpu_config.num_regions;
for (; i < num_regions; i++) {
_region_init(i, 0, 0, 0);
}
/* Enable MPU */
arc_core_mpu_enable();
return 0;
}
SYS_INIT(arc_mpu_init, PRE_KERNEL_1,
CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* ZEPHYR_ARCH_ARC_CORE_MPU_ARC_MPU_V3_INTERNAL_H_ */


@@ -22,12 +22,13 @@
* completeness.
*/
#include <kernel.h>
#include <kernel_arch_data.h>
#include <gen_offset.h>
#include <kernel_structs.h>
#include <kernel_offsets.h>
GEN_OFFSET_SYM(_thread_arch_t, intlock_key);
GEN_OFFSET_SYM(_thread_arch_t, relinquish_cause);
GEN_OFFSET_SYM(_thread_arch_t, return_value);
#ifdef CONFIG_ARC_STACK_CHECKING
GEN_OFFSET_SYM(_thread_arch_t, k_stack_base);
GEN_OFFSET_SYM(_thread_arch_t, k_stack_top);
@@ -95,11 +96,9 @@ GEN_OFFSET_SYM(_callee_saved_stack_t, user_sp);
#endif
#endif
GEN_OFFSET_SYM(_callee_saved_stack_t, r30);
#ifdef CONFIG_ARC_HAS_ACCL_REGS
#ifdef CONFIG_FP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, r58);
GEN_OFFSET_SYM(_callee_saved_stack_t, r59);
#endif
#ifdef CONFIG_FP_SHARING
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_status);
GEN_OFFSET_SYM(_callee_saved_stack_t, fpu_ctrl);
#ifdef CONFIG_FP_FPU_DA


@@ -10,7 +10,7 @@
*
*
* Initialization of full C support: zero the .bss, copy the .data if XIP,
* call z_cstart().
* call _Cstart().
*
* Stack is available in this module, but not the global data/bss until their
* initialization is performed.
@@ -40,14 +40,14 @@ static void disable_icache(void)
{
unsigned int val;
val = z_arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val = _arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val &= 0xff; /* version field */
if (val == 0) {
return; /* skip if i-cache is not present */
}
z_arc_v2_aux_reg_write(_ARC_V2_IC_IVIC, 0);
_arc_v2_aux_reg_write(_ARC_V2_IC_IVIC, 0);
__asm__ __volatile__ ("nop");
z_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, 1);
_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, 1);
}
/**
@@ -64,16 +64,48 @@ static void invalidate_dcache(void)
{
unsigned int val;
val = z_arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
val = _arc_v2_aux_reg_read(_ARC_V2_D_CACHE_BUILD);
val &= 0xff; /* version field */
if (val == 0) {
return; /* skip if d-cache is not present */
}
z_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 1);
_arc_v2_aux_reg_write(_ARC_V2_DC_IVDC, 1);
}
#endif
extern FUNC_NORETURN void z_cstart(void);
/**
*
* @brief Adjust the vector table base
*
* Set the vector table base if the value found in the
* _ARC_V2_IRQ_VECT_BASE auxiliary register is different from the
* _VectorTable known by software. It is important to do this very early
* so that exception vectors can be handled.
*
* @return N/A
*/
static void adjust_vector_table_base(void)
{
#ifdef CONFIG_ARC_HAS_SECURE
#undef _ARC_V2_IRQ_VECT_BASE
#define _ARC_V2_IRQ_VECT_BASE _ARC_V2_IRQ_VECT_BASE_S
#endif
extern struct vector_table _VectorTable;
unsigned int vbr;
/* if the compiled-in vector table is different
* from the base address known by the ARC CPU,
* set the vector base to the compiled-in address.
*/
vbr = _arc_v2_aux_reg_read(_ARC_V2_IRQ_VECT_BASE);
vbr &= 0xfffffc00;
if (vbr != (unsigned int)&_VectorTable) {
_arc_v2_aux_reg_write(_ARC_V2_IRQ_VECT_BASE,
(unsigned int)&_VectorTable);
}
}
extern FUNC_NORETURN void _Cstart(void);
/**
*
* @brief Prepare to and run C code
@@ -85,9 +117,10 @@ extern FUNC_NORETURN void z_cstart(void);
void _PrepC(void)
{
z_icache_setup();
z_bss_zero();
z_data_copy();
z_cstart();
_icache_setup();
adjust_vector_table_base();
_bss_zero();
_data_copy();
_Cstart();
CODE_UNREACHABLE;
}


@@ -17,14 +17,13 @@
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
GTEXT(_rirq_enter)
GTEXT(_rirq_exit)
GTEXT(_rirq_common_interrupt_swap)
GDATA(exc_nest_count)
#if 0 /* TODO: when FIRQ is not present, all would be regular */
#define NUM_REGULAR_IRQ_PRIO_LEVELS CONFIG_NUM_IRQ_PRIO_LEVELS
@@ -36,168 +35,6 @@ GTEXT(_rirq_common_interrupt_swap)
* TODO: Revisit this if FIRQ becomes configurable.
*/
/*
===========================================================
RETURN FROM INTERRUPT TO COOPERATIVE THREAD
===========================================================
That's a special case because:
1. We return from IRQ handler to a cooperative thread
2. During IRQ handling context switch did happen
3. Returning to a thread which previously gave control
to another thread because of:
- Calling k_sleep()
- Explicitly yielding
- Bumping into a locked sync primitive, etc.
What (3) means is before passing control to another thread our thread
in question:
a. Stashed all precious caller-saved registers on its stack
b. Pushed return address to the top of the stack as well
That's how the thread's stack looks right before jumping to another thread:
----------------------------->8---------------------------------
PRE-CONTEXT-SWITCH STACK
lower_addr, let's say: 0x1000
--------------------------------------
SP -> | Return address; PC (Program Counter), in fact value taken from
| BLINK register in arch_switch()
--------------------------------------
| STATUS32 value, we explicitly save it here for later usage, read-on
--------------------------------------
| Caller-saved registers: some of R0-R12
--------------------------------------
|...
|...
higher_addr, let's say: 0x2000
----------------------------->8---------------------------------
When context gets switched the kernel saves callee-saved registers in the
thread's stack right on top of pre-switch contents so that's what we have:
----------------------------->8---------------------------------
POST-CONTEXT-SWITCH STACK
lower_addr, let's say: 0x1000
--------------------------------------
SP -> | Callee-saved registers: see struct _callee_saved_stack{}
| |- R13
| |- R14
| | ...
| \- FP
| ...
--------------------------------------
| Return address; PC (Program Counter)
--------------------------------------
| STATUS32 value
--------------------------------------
| Caller-saved registers: some of R0-R12
--------------------------------------
|...
|...
higher_addr, let's say: 0x2000
----------------------------->8---------------------------------
So how do we return in such a complex scenario?
First we restore callee-saved regs with help of _load_callee_saved_regs().
Now we're back to PRE-CONTEXT-SWITCH STACK (see above).
Logically our next step is to load return address from the top of the stack
and jump to that address to continue execution of the desired thread, but
we're still in interrupt handling mode and the only way to return to normal
execution mode is to execute "rtie" instruction. And here we need to deal
with peculiarities of return from IRQ on ARCv2 cores.
Instead of simple jump to a return address stored in the tip of thread's stack
(with subsequent interrupt enable) ARCv2 core additionally automatically
restores some registers from stack. Most important ones are
PC ("Program Counter") which holds address of the next instruction to execute
and STATUS32 which holds important flags including global interrupt enable,
zero, carry etc.
To make things worse depending on ARC core configuration and run-time setup
of certain features different set of registers will be restored.
Typically those same registers are automatically saved on stack on entry to
an interrupt, but remember we're returning to the thread which was
not interrupted by interrupt and so on its stack there're no automatically
saved registers, still inevitably on RTIE execution register restoration
will happen. So if we do nothing special we'll end-up with that:
----------------------------->8---------------------------------
lower_addr, let's say: 0x1000
--------------------------------------
# | Return address; PC (Program Counter)
| --------------------------------------
| | STATUS32 value
| --------------------------------------
|
sizeof(_irq_stack_frame)
|
| | Caller-saved registers: R0-R12
V --------------------------------------
|...
SP -> | < Some data on thread's stack>
|...
higher_addr, let's say: 0x2000
----------------------------->8---------------------------------
I.e. we'll go much deeper down the stack, past the needed return address, read
some value from an unexpected location in the stack and try to jump there.
Nobody knows where we end up then.
To work around that problem we need to mimic the existence of an IRQ stack frame
of which we really only need the return address, obviously to return where we
need to. For that we just shift SP so that it points sizeof(_irq_stack_frame)
above like that:
----------------------------->8---------------------------------
lower_addr, let's say: 0x1000
SP -> |
A | < Some unrelated data >
| |
|
sizeof(_irq_stack_frame)
|
| --------------------------------------
| | Return address; PC (Program Counter)
| --------------------------------------
# | STATUS32 value
--------------------------------------
| Caller-saved registers: R0-R12
--------------------------------------
|...
| < Some data on thread's stack>
|...
higher_addr, let's say: 0x2000
----------------------------->8---------------------------------
Indeed R0-R13 "restored" from IRQ stack frame will contain garbage but
it makes no difference because we're returning to execution of code as if
we're returning from yet another function call and so we will restore
all needed registers from the stack.
One other important remark here is R13.
CPU hardware automatically saves/restores registers in pairs and since we
wanted to save/restore R12 in the IRQ stack frame as a caller-saved register we
just happen to do that for R13 as well. But given that the compiler treats it as
a callee-saved register we save/restore it separately in _callee_saved_stack
structure. And when we restore callee-saved registers from stack we among
other registers recover R13. But later on return from IRQ with RTIE
instruction, R13 will be "restored" again from fake IRQ stack frame and
if we don't copy correct R13 value to fake IRQ stack frame R13 value
will be corrupted.
*/
/**
*
@@ -216,11 +53,12 @@ SECTION_FUNC(TEXT, _rirq_enter)
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
lr r2, [_ARC_V2_SEC_STAT]
bclr r2, r2, _ARC_V2_SEC_STAT_SSC_BIT
sflag r2
/* sflag r2 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00bf302f
#else
/* disable stack checking */
lr r2, [_ARC_V2_STATUS32]
@@ -229,14 +67,16 @@ SECTION_FUNC(TEXT, _rirq_enter)
#endif
#endif
clri
ld r1, [exc_nest_count]
add r0, r1, 1
st r0, [exc_nest_count]
cmp r1, 0
/* check whether irq stack is used */
_check_and_inc_int_nest_counter r0, r1
bgt.d rirq_nest
mov r0, sp
bne.d rirq_nest
mov_s r0, sp
_get_curr_cpu_irq_stack sp
mov r1, _kernel
ld sp, [r1, _kernel_offset_to_irq_stack]
rirq_nest:
push_s r0
@@ -256,26 +96,27 @@ SECTION_FUNC(TEXT, _rirq_exit)
pop sp
_dec_int_nest_counter r0, r1
_check_nest_int_by_irq_act r0, r1
jne _rirq_no_reschedule
mov r1, exc_nest_count
ld r0, [r1]
sub r0, r0, 1
cmp r0, 0
bne.d _rirq_return_from_rirq
st r0, [r1]
#ifdef CONFIG_STACK_SENTINEL
bl z_check_stack_sentinel
bl _check_stack_sentinel
#endif
#ifdef CONFIG_SMP
bl z_arc_smp_switch_in_isr
/* r0 points to new thread, r1 points to old thread */
cmp_s r0, 0
beq _rirq_no_reschedule
mov_s r2, r1
#else
mov_s r1, _kernel
#ifdef CONFIG_PREEMPT_ENABLED
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/*
* Lock interrupts to ensure kernel queues do not change from this
* point on until return from interrupt.
*/
/*
* Both (a)reschedule and (b)non-reschedule cases need to load the
* current thread's stack, but don't have to use it until the decision
@@ -292,36 +133,18 @@ SECTION_FUNC(TEXT, _rirq_exit)
beq _rirq_no_reschedule
/* cached thread to run is in r0, fall through */
#endif
.balign 4
_rirq_reschedule:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to remember SEC_STAT.IRM bit */
lr r3, [_ARC_V2_SEC_STAT]
push_s r3
#endif
#if defined(CONFIG_USERSPACE)
/*
* need to remember the user/kernel status of interrupted thread
*/
lr r3, [_ARC_V2_AUX_IRQ_ACT]
and r3, r3, 0x80000000
push_s r3
#endif
/* _save_callee_saved_regs expects outgoing thread in r2 */
_save_callee_saved_regs
st _CAUSE_RIRQ, [r2, _thread_offset_to_relinquish_cause]
#ifdef CONFIG_SMP
mov_s r2, r0
#else
/* incoming thread is in r0: it becomes the new 'current' */
mov_s r2, r0
mov r2, r0
st_s r2, [r1, _kernel_offset_to_current]
#endif
.balign 4
_rirq_common_interrupt_swap:
@@ -338,57 +161,44 @@ _rirq_common_interrupt_swap:
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov_s r0, r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
#if defined(CONFIG_USERSPACE)
/*
* When USERSPACE is enabled, according to the ARCv2 ISA, SP is switched
* if the interrupt arrives in user mode, and this is recorded in bit 31
* (U bit) of IRQ_ACT. When the interrupt exits, SP is switched back
* according to the U bit.
*
* If the context switches within the interrupt, the target SP must be
* the thread's kernel stack, so no hardware SP switch is needed and the
* U bit should be cleared.
*/
lr r0, [_ARC_V2_AUX_IRQ_ACT]
bclr r0, r0, 31
sr r0, [_ARC_V2_AUX_IRQ_ACT]
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
ld_s r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _rirq_return_from_rirq
nop_s
nop
breq r3, _CAUSE_FIRQ, _rirq_return_from_firq
nop_s
nop
/* fall through */
.balign 4
_rirq_return_from_coop:
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* must return to secure mode, so set IRM bit to 1 */
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_IRM_BIT
sflag r0
#endif
/*
* See verbose explanation of
* RETURN FROM INTERRUPT TO COOPERATIVE THREAD above
* status32, sec_stat (when CONFIG_ARC_HAS_SECURE is enabled) and pc
* (blink) are already on the stack in the right order
*/
ld_s r0, [sp, ___isf_t_status32_OFFSET - ___isf_t_pc_OFFSET]
/* update status32.ie (explanation in firq_exit:_firq_return_from_coop) */
ld_s r3, [r2, _thread_offset_to_intlock_key]
st 0, [r2, _thread_offset_to_intlock_key]
cmp r3, 0
or.ne r0, r0, _ARC_V2_STATUS32_IE
st_s r0, [sp, ___isf_t_status32_OFFSET - ___isf_t_pc_OFFSET]
/* carve fake stack */
sub sp, sp, ___isf_t_pc_OFFSET
/* reset zero-overhead loops */
st 0, [sp, ___isf_t_lp_end_OFFSET]
/* update return value on stack */
ld_s r0, [r2, _thread_offset_to_return_value]
st_s r0, [sp, ___isf_t_r0_OFFSET]
/*
* r13 is part of both the callee and caller-saved register sets because
@@ -398,27 +208,19 @@ _rirq_return_from_coop:
*/
st_s r13, [sp, ___isf_t_r13_OFFSET]
/* stack now has the IRQ stack frame layout, pointing to sp */
/* stack now has the IRQ stack frame layout, pointing to r0 */
/* fall through to rtie instruction */
/* rtie will pop the rest from the stack */
rtie
/* fall through to rtie instruction */
#endif /* CONFIG_PREEMPT_ENABLED */
.balign 4
_rirq_return_from_firq:
_rirq_return_from_rirq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here need to recover SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
_rirq_no_reschedule:
rtie


@@ -14,11 +14,9 @@
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <swap_macros.h>
GDATA(_interrupt_stack)
GDATA(z_main_stack)
GDATA(_VectorTable)
GDATA(_main_stack)
/* use one of the available interrupt stacks during init */
@@ -45,57 +43,26 @@ GTEXT(__start)
SECTION_FUNC(TEXT,__reset)
SECTION_FUNC(TEXT,__start)
/* lock interrupts: will get unlocked when switch to main task
* also make sure the processor is in the correct state
*/
mov_s r0, 0
mov r0, 0
kflag r0
#ifdef CONFIG_ARC_SECURE_FIRMWARE
sflag r0
#endif
#if defined(CONFIG_BOOT_TIME_MEASUREMENT) && defined(CONFIG_ARCV2_TIMER)
/*
* The ARCv2 timer (timer0) is a free-running timer; let it start counting
* here.
*/
mov_s r0, 0xffffffff
sr r0, [_ARC_V2_TMR0_LIMIT]
mov_s r0, 0
sr r0, [_ARC_V2_TMR0_COUNT]
#endif
/* interrupt related init */
#ifndef CONFIG_ARC_NORMAL_FIRMWARE
/* IRQ_ACT and IRQ_CTRL should be initialized and set in secure mode */
sr r0, [_ARC_V2_AUX_IRQ_ACT]
sr r0, [_ARC_V2_AUX_IRQ_CTRL]
#endif
sr r0, [_ARC_V2_AUX_IRQ_HINT]
/* set the vector table base early,
* so that exception vectors can be handled.
*/
mov_s r0, _VectorTable
#ifdef CONFIG_ARC_SECURE_FIRMWARE
sr r0, [_ARC_V2_IRQ_VECT_BASE_S]
#else
sr r0, [_ARC_V2_IRQ_VECT_BASE]
#endif
/* \todo: MPU init, gp for small data? */
#if defined(CONFIG_USERSPACE)
#if CONFIG_USERSPACE
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_US_BIT
kflag r0
#endif
#ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_AD_BIT
kflag r0
#endif
mov_s r1, 1
mov r1, 1
invalidate_and_disable_icache:
@@ -106,9 +73,9 @@ invalidate_and_disable_icache:
mov_s r2, 0
sr r2, [_ARC_V2_IC_IVIC]
/* writing to IC_IVIC needs 3 NOPs */
nop_s
nop_s
nop_s
nop
nop
nop
sr r1, [_ARC_V2_IC_CTRL]
invalidate_dcache:
@@ -121,32 +88,9 @@ invalidate_dcache:
done_cache_invalidate:
#if defined(CONFIG_SYS_POWER_DEEP_SLEEP_STATES) && \
#if defined(CONFIG_SYS_POWER_DEEP_SLEEP) && \
!defined(CONFIG_BOOTLOADER_CONTEXT_RESTORE)
jl @_sys_resume_from_deep_sleep
#endif
#if CONFIG_MP_NUM_CPUS > 1
_get_cpu_id r0
breq r0, 0, _master_core_startup
/*
* Non-master cores wait for the master core (core 0) to boot far enough
*/
_slave_core_wait:
ld r1, [arc_cpu_wake_flag]
brne r0, r1, _slave_core_wait
ld sp, [arc_cpu_sp]
/* signal master core that slave core runs */
st 0, [arc_cpu_wake_flag]
#if defined(CONFIG_ARC_FIRQ_STACK)
jl z_arc_firq_stack_set
#endif
j z_arc_slave_start
_master_core_startup:
jl @_sys_soc_resume_from_deep_sleep
#endif
#ifdef CONFIG_INIT_STACKS
@@ -155,7 +99,7 @@ _master_core_startup:
* FIRQ stack when CONFIG_INIT_STACKS is enabled before switching to
* one of them for the rest of the early boot
*/
mov_s sp, z_main_stack
mov sp, _main_stack
add sp, sp, CONFIG_MAIN_STACK_SIZE
mov_s r0, _interrupt_stack
@@ -165,11 +109,7 @@ _master_core_startup:
#endif /* CONFIG_INIT_STACKS */
mov_s sp, INIT_STACK
mov sp, INIT_STACK
add sp, sp, INIT_STACK_SIZE
#if defined(CONFIG_ARC_FIRQ_STACK)
jl z_arc_firq_stack_set
#endif
j @_PrepC


@@ -1,12 +0,0 @@
#
# Copyright (c) 2018 Synopsys, Inc. All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
zephyr_library()
zephyr_library_sources(
arc_sjli.c
arc_secure.S
secure_sys_services.c
)


@@ -1,122 +0,0 @@
/*
* Copyright (c) 2018 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
.macro clear_scratch_regs
mov r1, 0
mov r2, 0
mov r3, 0
mov r4, 0
mov r5, 0
mov r6, 0
mov r7, 0
mov r8, 0
mov r9, 0
mov r10, 0
mov r11, 0
mov r12, 0
.endm
.macro clear_callee_regs
mov r25, 0
mov r24, 0
mov r23, 0
mov r22, 0
mov r21, 0
mov r20, 0
mov r19, 0
mov r18, 0
mov r17, 0
mov r16, 0
mov r15, 0
mov r14, 0
mov r13, 0
.endm
GTEXT(arc_go_to_normal)
GTEXT(_arc_do_secure_call)
GDATA(arc_s_call_table)
SECTION_FUNC(TEXT, _arc_do_secure_call)
/* r0-r5: arg1-arg6, r6 is call id */
/* the call id should be checked */
/* disable normal interrupt happened when processor in secure mode ? */
/* seti (0x30 | (ARC_N_IRQ_START_LEVEL-1)) */
breq r6, ARC_S_CALL_CLRI, _s_clri
breq r6, ARC_S_CALL_SETI, _s_seti
push_s blink
mov blink, arc_s_call_table
ld.as r6, [blink, r6]
jl [r6]
/*
* no need to clear callee regs, as they will be saved and restored
* automatically
*/
clear_scratch_regs
mov r29, 0
mov r30, 0
_arc_do_secure_call_exit:
pop_s blink
j [blink]
/* enable normal interrupt */
/*
* j.d [blink]
* seti (0x30 | (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
*/
_s_clri:
lr r0, [_ARC_V2_STATUS32]
and r0, r0, 0x1e
asr r0, r0
or r0, r0, 0x30
mov r6, (0x30 | (ARC_N_IRQ_START_LEVEL-1))
j.d [blink]
seti r6
_s_seti:
btst r0, 4
jnz __seti_0
mov r0, (CONFIG_NUM_IRQ_PRIO_LEVELS - 1)
lr r6, [_ARC_V2_STATUS32]
and r6, r6, 0x1e
asr r6, r6
cmp r0, r6
mov.hs r0, r6
__seti_0:
and r0, r0, 0xf
brhs r0, ARC_N_IRQ_START_LEVEL, __seti_1
mov r0, ARC_N_IRQ_START_LEVEL
__seti_1:
or r0, r0, 0x30
j.d [blink]
seti r0
SECTION_FUNC(TEXT, arc_go_to_normal)
clear_callee_regs
clear_scratch_regs
mov fp, 0
mov r29, 0
mov r30, 0
mov blink, 0
jl [r0]
/* should not come here */
kflag 1


@@ -1,68 +0,0 @@
/*
* Copyright (c) 2018 Synopsys, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <device.h>
#include <kernel.h>
#include <errno.h>
#include <zephyr/types.h>
#include <init.h>
#include <toolchain.h>
#include <arch/arc/v2/secureshield/arc_secure.h>
static void _default_sjli_entry(void);
/*
* sjli vector table must be in instruction space
* \todo: how to let user to install customized sjli entry easily, e.g.
* through macros or with the help of compiler?
*/
const static u32_t _sjli_vector_table[CONFIG_SJLI_TABLE_SIZE] = {
[0] = (u32_t)_arc_do_secure_call,
[1 ... (CONFIG_SJLI_TABLE_SIZE - 1)] = (u32_t)_default_sjli_entry,
};
/*
* @brief default entry of sjli call
*
*/
static void _default_sjli_entry(void)
{
printk("default sjli entry\n");
}
/*
* @brief initialization of sjli related functions
*
*/
static void sjli_table_init(void)
{
/* install SJLI table */
z_arc_v2_aux_reg_write(_ARC_V2_NSC_TABLE_BASE, _sjli_vector_table);
z_arc_v2_aux_reg_write(_ARC_V2_NSC_TABLE_TOP,
(_sjli_vector_table + CONFIG_SJLI_TABLE_SIZE));
}
/*
* @brief initialization of secureshield related functions.
*/
static int arc_secureshield_init(struct device *arg)
{
sjli_table_init();
/* set nic bit to enable seti/clri and
* sleep/wevt in normal mode.
* If not set, a direct call of seti/clri etc. will raise an exception.
* Then, these seti/clri instructions should be replaced with secure
* services (sjli calls).
*
*/
__asm__ volatile("sflag 0x20");
return 0;
}
SYS_INIT(arc_secureshield_init, PRE_KERNEL_1,
CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);


@@ -1,89 +0,0 @@
/*
* Copyright (c) 2018 Synopsys, Inc. All rights reserved.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <errno.h>
#include <kernel.h>
#include <arch/cpu.h>
#include <zephyr/types.h>
#include <soc.h>
#include <toolchain.h>
#include <arch/arc/v2/secureshield/arc_secure.h>
#define IRQ_PRIO_MASK (0xffff << ARC_N_IRQ_START_LEVEL)
/*
* @brief read secure auxiliary regs on behalf of normal mode
*
* @param aux_reg address of aux reg
*
* Some aux regs require secure privilege. This function implements
* a secure service to access secure aux regs. A check should be done
* to decide whether the access is valid.
*/
static s32_t arc_s_aux_read(u32_t aux_reg)
{
return -1;
}
/*
* @brief write secure auxiliary regs on behalf of normal mode
*
* @param aux_reg address of aux reg
* @param val the value to write
*
* Some aux regs require secure privilege. This function implements
* a secure service to access secure aux regs. A check should be done
* to decide whether the access is valid.
*/
static s32_t arc_s_aux_write(u32_t aux_reg, u32_t val)
{
if (aux_reg == _ARC_V2_AUX_IRQ_ACT) {
/* 0 -> CONFIG_NUM_IRQ_PRIO_LEVELS allocated to secure world
* left prio levels allocated to normal world
*/
val &= IRQ_PRIO_MASK;
z_arc_v2_aux_reg_write(_ARC_V2_AUX_IRQ_ACT, val |
(z_arc_v2_aux_reg_read(_ARC_V2_AUX_IRQ_ACT) &
(~IRQ_PRIO_MASK)));
return 0;
}
return -1;
}
/*
* @brief allocate interrupt for normal world
*
* @param intno the interrupt to be allocated to the normal world
*
* By default, most interrupts are configured to be secure during
* initialization. If the normal world wants to use an interrupt, it requests
* one through this secure service. A necessary check should be done to decide
* whether the request is valid.
*/
static s32_t arc_s_irq_alloc(u32_t intno)
{
z_arc_v2_irq_uinit_secure_set(intno, 0);
return 0;
}
/*
* \todo to access the MPU from normal mode, a secure MPU service should be
* created. In the secure MPU service, the parameters should be checked
* (e.g., that they do not overwrite the MPU regions of the secure world) so
* that operations are valid
*/
/*
* \todo how to add a secure service easily
*/
const _arc_s_call_handler_t arc_s_call_table[ARC_S_CALL_LIMIT] = {
[ARC_S_CALL_AUX_READ] = (_arc_s_call_handler_t)arc_s_aux_read,
[ARC_S_CALL_AUX_WRITE] = (_arc_s_call_handler_t)arc_s_aux_write,
[ARC_S_CALL_IRQ_ALLOC] = (_arc_s_call_handler_t)arc_s_irq_alloc,
};

248
arch/arc/core/swap.S Normal file

@@ -0,0 +1,248 @@
/*
* Copyright (c) 2014-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Thread context switching
*
* This module implements the routines necessary for thread context switching
* on ARCv2 CPUs.
*
* See isr_wrapper.S for details.
*/
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <arch/cpu.h>
#include <v2/irq.h>
#include <swap_macros.h>
GTEXT(__swap)
GDATA(_k_neg_eagain)
GDATA(_kernel)
/**
*
* @brief Initiate a cooperative context switch
*
* The __swap() routine is invoked by various kernel services to effect
* a cooperative context switch. Prior to invoking __swap(), the caller
* disables interrupts via irq_lock() and the return 'key' is passed as a
* parameter to __swap(). The key is in fact the value stored in the register
* operand of a CLRI instruction.
*
* It stores the intlock key parameter into current->intlock_key.
* Given that __swap() is called to effect a cooperative context switch,
* the caller-saved integer registers are saved on the stack by the function
* call preamble to __swap(). This creates a custom stack frame that will be
* popped when returning from __swap(), but is not suitable for handling a
* return from an exception. Thus, the fact that the thread is pending because
* of a cooperative call to __swap() has to be recorded via the _CAUSE_COOP code
* in the relinquish_cause of the thread's k_thread structure. The
* _IrqExit()/_FirqExit() code will take care of doing the right thing to
* restore the thread status.
*
* When __swap() is invoked, we know the decision to perform a context switch or
* not has already been taken and a context switch must happen.
*
* @return may contain a return value setup by a call to
* _set_thread_return_value()
*
* C function prototype:
*
* unsigned int __swap (unsigned int key);
*
*/
SECTION_FUNC(TEXT, __swap)
#ifdef CONFIG_EXECUTION_BENCHMARKING
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_save_callee_saved_regs
push_s r31
bl read_timer_start_of_swap
pop_s r31
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_callee_saved_regs
st sp, [r2, _thread_offset_to_sp]
#endif
/* interrupts are locked, interrupt key is in r0 */
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
/* save intlock key */
st_s r0, [r2, _thread_offset_to_intlock_key]
st _CAUSE_COOP, [r2, _thread_offset_to_relinquish_cause]
/*
* Carve space for the return value. Setting it to a default of
* -EAGAIN eliminates the need for the timeout code to set it.
* If another value is ever needed, it can be modified with
* _set_thread_return_value().
*/
ld r3, [_k_neg_eagain]
st_s r3, [r2, _thread_offset_to_return_value]
/*
* Save status32 and blink on the stack before the callee-saved registers.
* This is the same layout as the start of an IRQ stack frame.
*/
lr r3, [_ARC_V2_STATUS32]
push_s r3
#ifdef CONFIG_ARC_HAS_SECURE
lr r3, [_ARC_V2_SEC_STAT]
push_s r3
#endif
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_HAS_SECURE
bclr r3, r3, _ARC_V2_SEC_STAT_SSC_BIT
/* sflag r3 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00ff302f
#else
/* disable stack checking during swap */
bclr r3, r3, _ARC_V2_STATUS32_SC_BIT
kflag r3
#endif
#endif
push_s blink
_save_callee_saved_regs
/* get the cached thread to run */
ld_s r2, [r1, _kernel_offset_to_ready_q_cache]
/* entering here, r2 contains the new current thread */
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
/* XXX - can be moved to delay slot of _CAUSE_RIRQ ? */
st_s r2, [r1, _kernel_offset_to_current]
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
mov r0, r2
bl configure_mpu_thread
pop_s r2
#endif
ld_s r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _swap_return_from_rirq
nop
breq r3, _CAUSE_FIRQ, _swap_return_from_firq
nop
/* fall through to _swap_return_from_coop */
.balign 4
_swap_return_from_coop:
ld_s r1, [r2, _thread_offset_to_intlock_key]
st 0, [r2, _thread_offset_to_intlock_key]
ld_s r0, [r2, _thread_offset_to_return_value]
lr ilink, [_ARC_V2_STATUS32]
bbit1 ilink, _ARC_V2_STATUS32_AE_BIT, _return_from_exc
pop_s blink /* pc into blink */
#ifdef CONFIG_ARC_HAS_SECURE
pop_s r3 /* pop SEC_STAT */
/* sflag r3 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00ff302f
#endif
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking
#endif
return_loc:
pop_s r3 /* status32 into r3 */
kflag r3 /* write status32 */
j_s.d [blink] /* always execute delay slot */
seti r1 /* delay slot */
.balign 4
_swap_return_from_rirq:
_swap_return_from_firq:
lr r3, [_ARC_V2_STATUS32]
bbit1 r3, _ARC_V2_STATUS32_AE_BIT, _return_from_exc_irq
/* pretend an interrupt happened so that the rtie instruction can be used */
#ifdef CONFIG_ARC_HAS_SECURE
lr r3, [_ARC_V2_SEC_STAT]
/* set SEC_STAT.IRM = SECURE for interrupt return */
bset r3, r3, 3
/* sflag r3 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x00ff302f
#endif
lr r3, [_ARC_V2_AUX_IRQ_ACT]
brne r3, 0, _swap_already_in_irq
/* use lowest interrupt priority */
or r3, r3, (1<<(CONFIG_NUM_IRQ_PRIO_LEVELS-1))
sr r3, [_ARC_V2_AUX_IRQ_ACT]
_swap_already_in_irq:
rtie
.balign 4
_return_from_exc_irq:
_pop_irq_stack_frame
sub_s sp, sp, ___isf_t_status32_OFFSET - ___isf_t_pc_OFFSET + 4
_return_from_exc:
/* put the return address to eret */
ld ilink, [sp] /* pc into ilink */
sr ilink, [_ARC_V2_ERET]
/* SEC_STAT is bypassed when CONFIG_ARC_HAS_SECURE */
/* put status32 into estatus */
ld ilink, [sp, ___isf_t_status32_OFFSET - ___isf_t_pc_OFFSET]
sr ilink, [_ARC_V2_ERSTATUS]
add_s sp, sp, ___isf_t_status32_OFFSET - ___isf_t_pc_OFFSET + 4
rtie
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking:
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_save_callee_saved_regs
push_s blink
bl read_timer_end_of_swap
pop_s blink
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_callee_saved_regs
st sp, [r2, _thread_offset_to_sp]
b return_loc
#endif /* CONFIG_EXECUTION_BENCHMARKING */


@@ -1,208 +0,0 @@
/*
* Copyright (c) 2019 Synopsys.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Thread context switching
*
* This module implements the routines necessary for thread context switching
* on ARCv2 CPUs.
*
* See isr_wrapper.S for details.
*/
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <v2/irq.h>
#include <swap_macros.h>
GTEXT(arch_switch)
/**
*
* @brief Initiate a cooperative context switch
*
* The arch_switch routine is invoked by various kernel services to effect
* a cooperative context switch. Prior to invoking arch_switch, the caller
* disables interrupts via irq_lock().
* Given that arch_switch() is called to effect a cooperative context switch,
* the caller-saved integer registers are saved on the stack by the function
* call preamble to arch_switch. This creates a custom stack frame that will
* be popped when returning from arch_switch, but is not suitable for handling
* a return from an exception. Thus, the fact that the thread is pending because
* of a cooperative call to arch_switch() has to be recorded via the
* _CAUSE_COOP code in the relinquish_cause of the thread's k_thread structure.
* The _rirq_exit()/_firq_exit() code will take care of doing the right thing
* to restore the thread status.
*
* When arch_switch() is invoked, the decision to perform a context switch
* has already been taken, so a context switch must happen.
*
*
* C function prototype:
*
* void arch_switch(void *switch_to, void **switched_from);
*
*/
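A minimal, hypothetical sketch of the call site implied by the prototype above; the switch_threads() helper is illustrative, while arch_switch() and the switch_handle fields are the ones referenced in this file:
#include <kernel.h>
extern void arch_switch(void *switch_to, void **switched_from);
/* Illustrative only: hand the CPU from 'old_thread' to 'new_thread'
 * using the switch handles, matching the r0/r1 convention noted below.
 */
static void switch_threads(struct k_thread *new_thread,
                           struct k_thread *old_thread)
{
        (void)irq_lock();       /* interrupts are restored when we resume */

        arch_switch(new_thread->switch_handle, &old_thread->switch_handle);
}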
SECTION_FUNC(TEXT, arch_switch)
#ifdef CONFIG_EXECUTION_BENCHMARKING
push_s r0
push_s r1
push_s blink
bl read_timer_start_of_swap
pop_s blink
pop_s r1
pop_s r0
#endif
/*
* r0 = new_thread->switch_handle = switch_to thread,
* r1 = &old_thread->switch_handle = &switch_from thread
*/
ld_s r2, [r1]
/*
* r2 may be dummy_thread in z_cstart, dummy_thread->switch_handle
* must be 0
*/
breq r2, 0, _switch_to_target_thread
st _CAUSE_COOP, [r2, _thread_offset_to_relinquish_cause]
/*
* Save status32 and blink on the stack before the callee-saved registers.
* This is the same layout as the start of an IRQ stack frame.
*/
lr r3, [_ARC_V2_STATUS32]
push_s r3
#ifdef CONFIG_ARC_HAS_SECURE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r3, [_ARC_V2_SEC_STAT]
#else
mov_s r3, 0
#endif
push_s r3
#endif
push_s blink
_save_callee_saved_regs
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking here, as sp will be changed to the target
* thread's sp
*/
#if defined(CONFIG_ARC_HAS_SECURE) && defined(CONFIG_ARC_SECURE_FIRMWARE)
bclr r3, r3, _ARC_V2_SEC_STAT_SSC_BIT
sflag r3
#else
bclr r3, r3, _ARC_V2_STATUS32_SC_BIT
kflag r3
#endif
#endif
_switch_to_target_thread:
mov_s r2, r0
/* entering here, r2 contains the new current thread */
#ifdef CONFIG_ARC_STACK_CHECKING
_load_stack_check_regs
#endif
_load_callee_saved_regs
#if defined(CONFIG_MPU_STACK_GUARD) || defined(CONFIG_USERSPACE)
push_s r2
bl configure_mpu_thread
pop_s r2
#endif
ld r3, [r2, _thread_offset_to_relinquish_cause]
breq r3, _CAUSE_RIRQ, _switch_return_from_rirq
nop_s
breq r3, _CAUSE_FIRQ, _switch_return_from_firq
nop_s
/* fall through to _switch_return_from_coop */
.balign 4
_switch_return_from_coop:
pop_s blink /* pc into blink */
#ifdef CONFIG_ARC_HAS_SECURE
pop_s r3 /* pop SEC_STAT */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
sflag r3
#endif
#endif
pop_s r3 /* status32 into r3 */
kflag r3 /* write status32 */
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking
#endif
return_loc:
j_s [blink]
.balign 4
_switch_return_from_rirq:
_switch_return_from_firq:
#if defined(CONFIG_USERSPACE)
/*
* need to recover the user/kernel status of the interrupted thread
*/
pop_s r3
lr r2, [_ARC_V2_AUX_IRQ_ACT]
or r2, r2, r3
sr r2, [_ARC_V2_AUX_IRQ_ACT]
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
/* here we need to recover the SEC_STAT.IRM bit */
pop_s r3
sflag r3
#endif
lr r3, [_ARC_V2_AUX_IRQ_ACT]
/* use lowest interrupt priority */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
or r3, r3, (1 << (ARC_N_IRQ_START_LEVEL - 1))
#else
or r3, r3, (1 << (CONFIG_NUM_IRQ_PRIO_LEVELS - 1))
#endif
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
mov_s r0, _ARC_V2_AUX_IRQ_ACT
mov_s r1, r3
mov_s r6, ARC_S_CALL_AUX_WRITE
sjli SJLI_CALL_ARC_SECURE
#else
sr r3, [_ARC_V2_AUX_IRQ_ACT]
#endif
rtie
#ifdef CONFIG_EXECUTION_BENCHMARKING
.balign 4
_capture_value_for_benchmarking:
push_s blink
bl read_timer_end_of_swap
pop_s blink
b return_loc
#endif /* CONFIG_EXECUTION_BENCHMARKING */


@@ -0,0 +1,74 @@
/*
* Copyright (c) 2014 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief ARCv2 system fatal error handler
*
* This module provides the _SysFatalErrorHandler() routine for ARCv2 BSPs.
*/
#include <kernel.h>
#include <toolchain.h>
#include <linker/sections.h>
#include <kernel_structs.h>
#include <misc/printk.h>
/**
*
* @brief Fatal error handler
*
* This routine implements the corrective action to be taken when the system
* detects a fatal error.
*
* This sample implementation attempts to abort the current thread and allow
* the system to continue executing, which may permit the system to continue
* functioning with degraded capabilities.
*
* System designers may wish to enhance or substitute this sample
* implementation to take other actions, such as logging error (or debug)
* information to a persistent repository and/or rebooting the system.
*
* @param reason the fatal error reason
* @param pEsf pointer to exception stack frame
*
* @return N/A
*/
__weak void _SysFatalErrorHandler(unsigned int reason,
const NANO_ESF *pEsf)
{
ARG_UNUSED(pEsf);
#if !defined(CONFIG_SIMPLE_FATAL_ERROR_HANDLER)
#if defined(CONFIG_STACK_SENTINEL)
if (reason == _NANO_ERR_STACK_CHK_FAIL) {
goto hang_system;
}
#endif
if (reason == _NANO_ERR_KERNEL_PANIC) {
goto hang_system;
}
if (_is_thread_essential()) {
printk("Fatal fault in essential thread! Spinning...\n");
goto hang_system;
}
printk("Fatal fault in thread %p! Aborting.\n", _current);
k_thread_abort(_current);
return;
hang_system:
#else
ARG_UNUSED(reason);
#endif
for (;;) {
k_cpu_idle();
}
}
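Since the handler above is declared __weak, an application or board can substitute its own policy, as the comment suggests. A hedged sketch of such an override, using only calls already present in this file (the halt-only policy itself is illustrative):
#include <kernel.h>
#include <misc/printk.h>
/* Hypothetical strong override of the __weak handler above: report the
 * reason and halt unconditionally instead of aborting the thread.
 */
void _SysFatalErrorHandler(unsigned int reason, const NANO_ESF *pEsf)
{
        ARG_UNUSED(pEsf);

        printk("Fatal fault (reason %u), halting\n", reason);
        for (;;) {
                k_cpu_idle();
        }
}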


@@ -12,9 +12,13 @@
*/
#include <kernel.h>
#include <ksched.h>
#include <toolchain.h>
#include <kernel_structs.h>
#include <offsets_short.h>
#include <wait_q.h>
#ifdef CONFIG_INIT_STACKS
#include <string.h>
#endif /* CONFIG_INIT_STACKS */
#ifdef CONFIG_USERSPACE
#include <arch/arc/v2/mpu/arc_core_mpu.h>
@@ -42,7 +46,7 @@ struct init_stack_frame {
* needed anymore.
*
* The initial context is a basic stack frame that contains arguments for
* z_thread_entry() return address, that points at z_thread_entry()
* _thread_entry() return address, that points at _thread_entry()
* and status register.
*
* <options> is currently unused.
@@ -58,53 +62,45 @@ struct init_stack_frame {
*
* @return N/A
*/
void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
void _new_thread(struct k_thread *thread, k_thread_stack_t *stack,
size_t stackSize, k_thread_entry_t pEntry,
void *parameter1, void *parameter2, void *parameter3,
int priority, unsigned int options)
{
char *pStackMem = Z_THREAD_STACK_BUFFER(stack);
Z_ASSERT_VALID_PRIO(priority, pEntry);
char *pStackMem = K_THREAD_STACK_BUFFER(stack);
_ASSERT_VALID_PRIO(priority, pEntry);
char *stackEnd;
char *stackAdjEnd;
struct init_stack_frame *pInitCtx;
#ifdef CONFIG_USERSPACE
size_t stackAdjSize;
size_t offset = 0;
#if CONFIG_USERSPACE
/* adjust stack and stack size */
#if CONFIG_ARC_MPU_VER == 2
stackAdjSize = Z_ARC_MPUV2_SIZE_ALIGN(stackSize);
stackSize = POW2_CEIL(STACK_SIZE_ALIGN(stackSize));
#elif CONFIG_ARC_MPU_VER == 3
stackAdjSize = STACK_SIZE_ALIGN(stackSize);
#endif
stackEnd = pStackMem + stackAdjSize;
#ifdef CONFIG_STACK_POINTER_RANDOM
offset = stackAdjSize - stackSize;
stackSize = ROUND_UP(stackSize, STACK_ALIGN);
#endif
stackEnd = pStackMem + stackSize;
if (options & K_USER) {
thread->arch.priv_stack_start =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
thread->arch.priv_stack_size =
(u32_t)(CONFIG_PRIVILEGED_STACK_SIZE);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd +
ARCH_THREAD_STACK_RESERVED);
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd + STACK_GUARD_SIZE +
CONFIG_PRIVILEGED_STACK_SIZE);
/* reserve 4 bytes for the start of user sp */
stackAdjEnd -= 4;
(*(u32_t *)stackAdjEnd) = STACK_ROUND_DOWN(
(u32_t)stackEnd - offset);
(*(u32_t *)stackAdjEnd) = (u32_t)stackEnd;
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
thread->userspace_local_data =
(struct _thread_userspace_local_data *)
STACK_ROUND_DOWN(stackEnd -
sizeof(*thread->userspace_local_data) - offset);
STACK_ROUND_DOWN(stackEnd - sizeof(*thread->userspace_local_data));
/* update the start of user sp */
(*(u32_t *)stackAdjEnd) = (u32_t) thread->userspace_local_data;
#endif
@@ -121,23 +117,25 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
* ---------------------------------------------
*/
pStackMem += STACK_GUARD_SIZE;
stackAdjSize = stackAdjSize + CONFIG_PRIVILEGED_STACK_SIZE;
stackEnd += ARCH_THREAD_STACK_RESERVED;
stackSize = stackSize + CONFIG_PRIVILEGED_STACK_SIZE;
stackEnd += CONFIG_PRIVILEGED_STACK_SIZE + STACK_GUARD_SIZE;
thread->arch.priv_stack_start = 0;
thread->arch.priv_stack_size = 0;
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd);
#ifdef CONFIG_THREAD_USERSPACE_LOCAL_DATA
/* reserve stack space for the userspace local data struct */
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd
- sizeof(*thread->userspace_local_data) - offset);
- sizeof(*thread->userspace_local_data));
thread->userspace_local_data =
(struct _thread_userspace_local_data *)stackAdjEnd;
#else
stackAdjEnd = (char *)STACK_ROUND_DOWN(stackEnd - offset);
#endif
}
z_new_thread_init(thread, pStackMem, stackAdjSize, priority, options);
_new_thread_init(thread, pStackMem, stackSize, priority, options);
/* carve the thread entry struct from the "base" of
the privileged stack */
@@ -145,11 +143,11 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
stackAdjEnd - sizeof(struct init_stack_frame));
/* fill init context */
pInitCtx->status32 = 0U;
pInitCtx->status32 = 0;
if (options & K_USER) {
pInitCtx->pc = ((u32_t)z_user_thread_entry_wrapper);
pInitCtx->pc = ((u32_t)_user_thread_entry_wrapper);
} else {
pInitCtx->pc = ((u32_t)z_thread_entry_wrapper);
pInitCtx->pc = ((u32_t)_thread_entry_wrapper);
}
/*
@@ -161,10 +159,9 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
*/
pInitCtx->status32 |= _ARC_V2_STATUS32_US;
#else /* For no USERSPACE feature */
pStackMem += ARCH_THREAD_STACK_RESERVED;
stackEnd = pStackMem + stackSize;
z_new_thread_init(thread, pStackMem, stackSize, priority, options);
_new_thread_init(thread, pStackMem, stackSize, priority, options);
stackAdjEnd = stackEnd;
@@ -172,12 +169,12 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
STACK_ROUND_DOWN(stackAdjEnd) -
sizeof(struct init_stack_frame));
pInitCtx->status32 = 0U;
pInitCtx->pc = ((u32_t)z_thread_entry_wrapper);
pInitCtx->status32 = 0;
pInitCtx->pc = ((u32_t)_thread_entry_wrapper);
#endif
#ifdef CONFIG_ARC_SECURE_FIRMWARE
pInitCtx->sec_stat = z_arc_v2_aux_reg_read(_ARC_V2_SEC_STAT);
#ifdef CONFIG_ARC_HAS_SECURE
pInitCtx->sec_stat = _arc_v2_aux_reg_read(_ARC_V2_SEC_STAT);
#endif
pInitCtx->r0 = (u32_t)pEntry;
@@ -187,7 +184,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
/* stack check configuration */
#ifdef CONFIG_ARC_STACK_CHECKING
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
pInitCtx->sec_stat |= _ARC_V2_SEC_STAT_SSC;
#else
pInitCtx->status32 |= _ARC_V2_STATUS32_SC;
@@ -199,7 +196,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
thread->arch.k_stack_top =
(u32_t)(stackEnd + STACK_GUARD_SIZE);
thread->arch.k_stack_base = (u32_t)
(stackEnd + ARCH_THREAD_STACK_RESERVED);
(stackEnd + STACK_GUARD_SIZE + CONFIG_PRIVILEGED_STACK_SIZE);
} else {
thread->arch.k_stack_top = (u32_t)pStackMem;
thread->arch.k_stack_base = (u32_t)stackEnd;
@@ -211,12 +208,17 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
thread->arch.k_stack_base = (u32_t) stackEnd;
#endif
#endif
#ifdef CONFIG_ARC_USE_UNALIGNED_MEM_ACCESS
pInitCtx->status32 |= _ARC_V2_STATUS32_AD;
#endif
thread->switch_handle = thread;
/*
* The seti instruction at the end of _Swap() will
* enable interrupts based on the intlock_key
* value.
*
* intlock_key is constructed based on ARCv2 ISA Programmer's
* Reference Manual CLRI instruction description:
* dst[31:6] dst[5] dst[4] dst[3:0]
* 26'd0 1 STATUS32.IE STATUS32.E[3:0]
*/
thread->arch.intlock_key = 0x30 | (_ARC_V2_DEF_IRQ_LEVEL & 0xf);
thread->arch.relinquish_cause = _CAUSE_COOP;
thread->callee_saved.sp =
(u32_t)pInitCtx - ___callee_saved_stack_t_SIZEOF;
@@ -227,8 +229,8 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
#ifdef CONFIG_USERSPACE
FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
FUNC_NORETURN void _arch_user_mode_enter(k_thread_entry_t user_entry,
void *p1, void *p2, void *p3)
{
/*
@@ -247,11 +249,13 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
_current->arch.priv_stack_start =
(u32_t)(_current->stack_info.start +
_current->stack_info.size + STACK_GUARD_SIZE);
_current->arch.priv_stack_size =
(u32_t)CONFIG_PRIVILEGED_STACK_SIZE;
#ifdef CONFIG_ARC_STACK_CHECKING
_current->arch.k_stack_top = _current->arch.priv_stack_start;
_current->arch.k_stack_base = _current->arch.priv_stack_start +
CONFIG_PRIVILEGED_STACK_SIZE;
_current->arch.priv_stack_size;
_current->arch.u_stack_top = _current->stack_info.start;
_current->arch.u_stack_base = _current->stack_info.start +
_current->stack_info.size;
@@ -261,45 +265,10 @@ FUNC_NORETURN void arch_user_mode_enter(k_thread_entry_t user_entry,
/* need to lock cpu here ? */
configure_mpu_thread(_current);
z_arc_userspace_enter(user_entry, p1, p2, p3,
_arc_userspace_enter(user_entry, p1, p2, p3,
(u32_t)_current->stack_obj,
_current->stack_info.size);
CODE_UNREACHABLE;
}
#endif
#if defined(CONFIG_FLOAT) && defined(CONFIG_FP_SHARING)
int arch_float_disable(struct k_thread *thread)
{
unsigned int key;
/* Ensure a preemptive context switch does not occur */
key = irq_lock();
/* Disable all floating point capabilities for the thread */
thread->base.user_options &= ~K_FP_REGS;
irq_unlock(key);
return 0;
}
int arch_float_enable(struct k_thread *thread)
{
unsigned int key;
/* Ensure a preemptive context switch does not occur */
key = irq_lock();
/* Enable all floating point capabilities for the thread */
thread->base.user_options |= K_FP_REGS;
irq_unlock(key);
return 0;
}
#endif /* CONFIG_FLOAT && CONFIG_FP_SHARING */


@@ -6,33 +6,30 @@
/**
* @file
* @brief Wrapper for z_thread_entry
* @brief Wrapper for _thread_entry
*
* Wrapper for z_thread_entry routine when called from the initial context.
* Wrapper for _thread_entry routine when called from the initial context.
*/
#include <toolchain.h>
#include <linker/sections.h>
#include <v2/irq.h>
GTEXT(z_thread_entry_wrapper)
GTEXT(z_thread_entry_wrapper1)
GTEXT(_thread_entry_wrapper)
/*
* @brief Wrapper for z_thread_entry
* @brief Wrapper for _thread_entry
*
* The routine pops parameters for the z_thread_entry from stack frame, prepared
* by the arch_new_thread() routine.
* The routine pops parameters for the _thread_entry from stack frame, prepared
* by the _new_thread() routine.
*
* @return N/A
*/
SECTION_FUNC(TEXT, z_thread_entry_wrapper)
seti _ARC_V2_INIT_IRQ_LOCK_KEY
z_thread_entry_wrapper1:
SECTION_FUNC(TEXT, _thread_entry_wrapper)
pop_s r3
pop_s r2
pop_s r1
pop_s r0
j z_thread_entry
j _thread_entry
nop


@@ -15,6 +15,9 @@
#include <toolchain.h>
#include <kernel_structs.h>
extern volatile u64_t _sys_clock_tick_count;
extern int sys_clock_hw_cycles_per_tick;
/*
* @brief Read 64-bit timestamp value
*
@@ -23,17 +26,17 @@
*
* @return 64-bit time stamp value
*/
u64_t z_tsc_read(void)
u64_t _tsc_read(void)
{
unsigned int key;
u64_t t;
u32_t count;
key = irq_lock();
t = (u64_t)z_tick_get();
count = z_arc_v2_aux_reg_read(_ARC_V2_TMR0_COUNT);
t = (u64_t)_sys_clock_tick_count;
count = _arc_v2_aux_reg_read(_ARC_V2_TMR0_COUNT);
irq_unlock(key);
t *= k_ticks_to_cyc_floor64(1);
t *= (u64_t)sys_clock_hw_cycles_per_tick;
t += (u64_t)count;
return t;
}
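As a hedged usage sketch (the cycles_spent() helper and the measured callback are illustrative; the example assumes the _tsc_read() spelling shown in this hunk):
#include <kernel.h>
extern u64_t _tsc_read(void);
/* Hypothetical helper: return the number of hardware cycles that elapse
 * while fn() runs, using the 64-bit timestamp routine above.
 */
static u64_t cycles_spent(void (*fn)(void))
{
        u64_t start = _tsc_read();

        fn();
        return _tsc_read() - start;
}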


@@ -11,55 +11,53 @@
#include <arch/cpu.h>
#include <syscall.h>
#include <swap_macros.h>
#include <v2/irq.h>
.macro clear_scratch_regs
mov_s r1, 0
mov_s r2, 0
mov_s r3, 0
mov_s r4, 0
mov_s r5, 0
mov_s r6, 0
mov_s r7, 0
mov_s r8, 0
mov_s r9, 0
mov_s r10, 0
mov_s r11, 0
mov_s r12, 0
mov r1, 0
mov r2, 0
mov r3, 0
mov r4, 0
mov r5, 0
mov r6, 0
mov r7, 0
mov r8, 0
mov r9, 0
mov r10, 0
mov r11, 0
mov r12, 0
.endm
.macro clear_callee_regs
mov_s r25, 0
mov_s r24, 0
mov_s r23, 0
mov_s r22, 0
mov_s r21, 0
mov_s r20, 0
mov_s r19, 0
mov_s r18, 0
mov_s r17, 0
mov_s r16, 0
mov r25, 0
mov r24, 0
mov r23, 0
mov r22, 0
mov r21, 0
mov r20, 0
mov r19, 0
mov r18, 0
mov r17, 0
mov r16, 0
mov_s r15, 0
mov_s r14, 0
mov_s r13, 0
mov r15, 0
mov r14, 0
mov r13, 0
.endm
GTEXT(z_arc_userspace_enter)
GTEXT(_arc_userspace_enter)
GTEXT(_arc_do_syscall)
GTEXT(z_user_thread_entry_wrapper)
GTEXT(arch_user_string_nlen)
GTEXT(z_arc_user_string_nlen_fault_start)
GTEXT(z_arc_user_string_nlen_fault_end)
GTEXT(z_arc_user_string_nlen_fixup)
GTEXT(_user_thread_entry_wrapper)
GTEXT(z_arch_user_string_nlen)
GTEXT(z_arch_user_string_nlen_fault_start)
GTEXT(z_arch_user_string_nlen_fault_end)
GTEXT(z_arch_user_string_nlen_fixup)
/*
* @brief Wrapper for z_thread_entry in the case of user thread
* @brief Wrapper for _thread_entry in the case of user thread
* The init parameters are in privileged stack
*
* @return N/A
*/
SECTION_FUNC(TEXT, z_user_thread_entry_wrapper)
seti _ARC_V2_INIT_IRQ_LOCK_KEY
SECTION_FUNC(TEXT, _user_thread_entry_wrapper)
pop_s r3
pop_s r2
pop_s r1
@@ -67,7 +65,7 @@ SECTION_FUNC(TEXT, z_user_thread_entry_wrapper)
/* the start of user sp is in r5 */
pop r5
/* start of privilege stack in blink */
mov_s blink, sp
mov blink, sp
st.aw r0, [r5, -4]
st.aw r1, [r5, -4]
@@ -76,7 +74,7 @@ SECTION_FUNC(TEXT, z_user_thread_entry_wrapper)
/*
* when CONFIG_INIT_STACKS is enabled, the stack will be initialized
* in z_new_thread_init.
* in _new_thread_init.
*/
j _arc_go_to_user_space
@@ -89,16 +87,18 @@ SECTION_FUNC(TEXT, z_user_thread_entry_wrapper)
* not transition back later, unless they are doing system calls.
*
*/
SECTION_FUNC(TEXT, z_arc_userspace_enter)
SECTION_FUNC(TEXT, _arc_userspace_enter)
/*
* In ARCv2, the U bit can only be set through exception return
*/
#ifdef CONFIG_ARC_STACK_CHECKING
/* disable stack checking as the stack should be initialized */
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
lr blink, [_ARC_V2_SEC_STAT]
bclr blink, blink, _ARC_V2_SEC_STAT_SSC_BIT
sflag blink
/* sflag blink */
/* sflag instruction is not supported in current ARC GNU */
.long 0x07ff302f
#else
lr blink, [_ARC_V2_STATUS32]
bclr blink, blink, _ARC_V2_STATUS32_SC_BIT
@@ -109,7 +109,7 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
add r5, r4, r5
/* start of privilege stack */
add blink, r5, CONFIG_PRIVILEGED_STACK_SIZE+STACK_GUARD_SIZE
mov_s sp, r5
mov sp, r5
push_s r0
push_s r1
@@ -119,9 +119,9 @@ SECTION_FUNC(TEXT, z_arc_userspace_enter)
mov r5, sp /* skip r0, r1, r2, r3 */
#ifdef CONFIG_INIT_STACKS
mov_s r0, 0xaaaaaaaa
mov r0, 0xaaaaaaaa
#else
mov_s r0, 0x0
mov r0, 0x0
#endif
_clear_user_stack:
st.ab r0, [r4, 4]
@@ -129,15 +129,17 @@ _clear_user_stack:
jlt _clear_user_stack
#ifdef CONFIG_ARC_STACK_CHECKING
mov_s r1, _kernel
mov r1, _kernel
ld_s r2, [r1, _kernel_offset_to_current]
_load_stack_check_regs
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#ifdef CONFIG_ARC_HAS_SECURE
lr r0, [_ARC_V2_SEC_STAT]
bset r0, r0, _ARC_V2_SEC_STAT_SSC_BIT
sflag r0
/* sflag r0 */
/* sflag instruction is not supported in current ARC GNU */
.long 0x003f302f
#else
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_SC_BIT
@@ -149,40 +151,38 @@ _arc_go_to_user_space:
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_U_BIT
mov_s r1, z_thread_entry_wrapper1
mov r1, _thread_entry_wrapper
/* fake exception return */
kflag _ARC_V2_STATUS32_AE
sr r0, [_ARC_V2_ERSTATUS]
sr r1, [_ARC_V2_ERET]
/* fake exception return */
lr r0, [_ARC_V2_STATUS32]
bclr r0, r0, _ARC_V2_STATUS32_AE_BIT
kflag r0
/* when exception returns from kernel to user, sp and _ARC_V2_USER_SP
* /_ARC_V2_SECU_SP will be switched
*/
#if defined(CONFIG_ARC_HAS_SECURE) && defined(CONFIG_ARC_SECURE_FIRMWARE)
#ifdef CONFIG_ARC_HAS_SECURE
lr r0, [_ARC_V2_SEC_STAT]
/* the mode returned to by the exception return is secure mode */
bset r0, r0, 31
sr r0, [_ARC_V2_ERSEC_STAT]
sr r5, [_ARC_V2_SEC_U_SP]
#else
/* when exception returns from kernel to user, sp and _ARC_V2_USER_SP
* will be switched
*/
sr r5, [_ARC_V2_USER_SP]
#endif
mov_s sp, blink
mov sp, blink
mov_s r0, 0
mov r0, 0
clear_callee_regs
clear_scratch_regs
mov_s fp, 0
mov_s r29, 0
mov_s r30, 0
mov_s blink, 0
mov fp, 0
mov r29, 0
mov r30, 0
mov blink, 0
#ifdef CONFIG_EXECUTION_BENCHMARKING
b _capture_value_for_benchmarking_userspace
@@ -202,55 +202,48 @@ return_loc_userspace_enter:
*
*/
SECTION_FUNC(TEXT, _arc_do_syscall)
/*
* r0-r5: arg1-arg6, r6 is call id which is already checked in
* trap_s handler, r7 is the system call stack frame pointer
* we need to recover r0, r1, r2 because they will be modified in
* _create_irq_stack_frame. If a dedicated syscall frame (different
* from the irq stack frame) is ever defined, the recovery of r0, r1, r2
* can be optimized away.
*/
ld_s r0, [sp, ___isf_t_r0_OFFSET]
ld_s r1, [sp, ___isf_t_r1_OFFSET]
ld_s r2, [sp, ___isf_t_r2_OFFSET]
/* r0-r5: arg1-arg6, r6 is call id */
/* the call id is already checked in trap_s handler */
push_s blink
mov r7, sp
mov_s blink, _k_syscall_table
mov blink, _k_syscall_table
ld.as r6, [blink, r6]
jl [r6]
/* save return value */
st_s r0, [sp, ___isf_t_r0_OFFSET]
/*
* no need to clear callee regs, as they will be saved and restored
* automatically
*/
clear_scratch_regs
mov_s r29, 0
mov_s r30, 0
mov r29, 0
mov r30, 0
pop_s blink
/* through fake exception return, go back to the caller */
lr r0, [_ARC_V2_STATUS32]
bset r0, r0, _ARC_V2_STATUS32_AE_BIT
kflag r0
kflag _ARC_V2_STATUS32_AE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
ld_s r0, [sp, ___isf_t_sec_stat_OFFSET]
sr r0,[_ARC_V2_ERSEC_STAT]
/* the status and return address are saved in trap_s handler */
pop r6
sr r6, [_ARC_V2_ERSTATUS]
pop r6
sr r6, [_ARC_V2_ERET]
#ifdef CONFIG_ARC_HAS_SECURE
pop r6
sr r6, [_ARC_V2_ERSEC_STAT]
#endif
ld_s r0, [sp, ___isf_t_status32_OFFSET]
sr r0,[_ARC_V2_ERSTATUS]
ld_s r0, [sp, ___isf_t_pc_OFFSET] /* eret into pc */
sr r0,[_ARC_V2_ERET]
_pop_irq_stack_frame
mov r6, 0
rtie
/*
* size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err_arg)
* size_t z_arch_user_string_nlen(const char *s, size_t maxsize, int *err_arg)
*/
SECTION_FUNC(TEXT, arch_user_string_nlen)
SECTION_FUNC(TEXT, z_arch_user_string_nlen)
/* int err; */
sub_s sp,sp,0x4
@@ -269,11 +262,11 @@ SECTION_FUNC(TEXT, arch_user_string_nlen)
mov lp_count, r1
strlen_loop:
z_arc_user_string_nlen_fault_start:
z_arch_user_string_nlen_fault_start:
/* is the byte at ++r12 a NULL? if so, we're done. Might fault! */
ldb.aw r1, [r12, 1]
z_arc_user_string_nlen_fault_end:
z_arch_user_string_nlen_fault_end:
brne_s r1, 0, not_null
strlen_done:
@@ -281,7 +274,7 @@ strlen_done:
mov_s r1, 0
st_s r1, [sp, 0]
z_arc_user_string_nlen_fixup:
z_arch_user_string_nlen_fixup:
/* *err_arg = err; Pop stack and return */
ld_s r1, [sp, 0]
add_s sp, sp, 4


@@ -46,7 +46,7 @@ struct vector_table {
u32_t unused_2;
};
struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
struct vector_table _VectorTable _GENERIC_SECTION(.exc_vector_table) = {
(u32_t)__reset,
(u32_t)__memory_error,
(u32_t)__instruction_error,
@@ -64,3 +64,4 @@ struct vector_table _VectorTable Z_GENERIC_SECTION(.exc_vector_table) = {
0,
0
};


@@ -1,11 +0,0 @@
/*
* Copyright (c) 2019 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
/* when !XIP, .text is in RAM, and vector table must be at its very start */
KEEP(*(.exc_vector_table))
KEEP(*(".exc_vector_table.*"))
KEEP(*(IRQ_VECTOR_TABLE))

arch/arc/defconfig Normal file

@@ -0,0 +1,5 @@
CONFIG_ARC=y
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=32000000
CONFIG_CPU_ARCEM4=y
CONFIG_ARCV2_INTERRUPT_UNIT=y
CONFIG_ARCV2_TIMER=y


@@ -17,24 +17,28 @@
* symbols" in the offsets.o module.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_DATA_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_DATA_H_
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <vector_table.h>
#ifndef _ASMLANGUAGE
#include <kernel.h>
#include <zephyr/types.h>
#include <sys/util.h>
#include <sys/dlist.h>
#ifndef _kernel_arch_data__h_
#define _kernel_arch_data__h_
#ifdef __cplusplus
extern "C" {
#endif
#include <toolchain.h>
#include <linker/sections.h>
#include <arch/cpu.h>
#include <vector_table.h>
#include <kernel_arch_thread.h>
#ifndef _ASMLANGUAGE
#include <kernel.h>
#include <kernel_internal.h>
#include <zephyr/types.h>
#include <misc/util.h>
#include <misc/dlist.h>
#endif
#ifndef _ASMLANGUAGE
#ifdef CONFIG_ARC_HAS_SECURE
struct _irq_stack_frame {
u32_t lp_end;
@@ -137,13 +141,9 @@ struct _callee_saved_stack {
/* r28 is the stack pointer and saved separately */
/* r29 is ILINK and does not need to be saved */
u32_t r30;
#ifdef CONFIG_ARC_HAS_ACCL_REGS
#ifdef CONFIG_FP_SHARING
u32_t r58;
u32_t r59;
#endif
#ifdef CONFIG_FP_SHARING
u32_t fpu_status;
u32_t fpu_ctrl;
#ifdef CONFIG_FP_FPU_DA
@@ -162,9 +162,18 @@ struct _callee_saved_stack {
typedef struct _callee_saved_stack _callee_saved_stack_t;
#ifdef __cplusplus
}
#endif
struct _kernel_arch {
char *rirq_sp; /* regular IRQ stack pointer base */
/*
* FIRQ stack pointer is installed once in the second bank's SP, so
* there is no need to track it in _kernel.
*/
};
typedef struct _kernel_arch _kernel_arch_t;
#endif /* _ASMLANGUAGE */
@@ -175,5 +184,8 @@ typedef struct _callee_saved_stack _callee_saved_stack_t;
#define STACK_ROUND_UP(x) ROUND_UP(x, STACK_ALIGN_SIZE)
#define STACK_ROUND_DOWN(x) ROUND_DOWN(x, STACK_ALIGN_SIZE)
#ifdef __cplusplus
}
#endif
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_DATA_H_ */
#endif /* _kernel_arch_data__h_ */


@@ -17,29 +17,30 @@
* symbols" in the offsets.o module.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_FUNC_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_FUNC_H_
#ifndef _kernel_arch_func__h_
#define _kernel_arch_func__h_
#ifdef __cplusplus
extern "C" {
#endif
#if !defined(_ASMLANGUAGE)
#include <kernel_arch_data.h>
#ifdef CONFIG_CPU_ARCV2
#include <v2/cache.h>
#include <v2/irq.h>
#endif
#ifdef __cplusplus
extern "C" {
#endif
static ALWAYS_INLINE void arch_kernel_init(void)
static ALWAYS_INLINE void kernel_arch_init(void)
{
z_irq_setup();
_current_cpu->irq_stack =
Z_THREAD_STACK_BUFFER(_interrupt_stack) + CONFIG_ISR_STACK_SIZE;
_irq_setup();
}
static ALWAYS_INLINE void
_set_thread_return_value(struct k_thread *thread, unsigned int value)
{
thread->arch.return_value = value;
}
/**
*
@@ -48,34 +49,30 @@ static ALWAYS_INLINE void arch_kernel_init(void)
*
* @return IRQ number
*/
static ALWAYS_INLINE int Z_INTERRUPT_CAUSE(void)
static ALWAYS_INLINE int _INTERRUPT_CAUSE(void)
{
u32_t irq_num = z_arc_v2_aux_reg_read(_ARC_V2_ICAUSE);
u32_t irq_num = _arc_v2_aux_reg_read(_ARC_V2_ICAUSE);
return irq_num;
}
static inline bool arch_is_in_isr(void)
#define _is_in_isr _arc_v2_irq_unit_is_in_isr
extern void _thread_entry_wrapper(void);
extern void _user_thread_entry_wrapper(void);
static inline void _IntLibInit(void)
{
return z_arc_v2_irq_unit_is_in_isr();
/* nothing needed; this is only here because the kernel requires it */
}
extern void z_thread_entry_wrapper(void);
extern void z_user_thread_entry_wrapper(void);
extern void z_arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
extern void _arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
void *p2, void *p3, u32_t stack, u32_t size);
extern void arch_switch(void *switch_to, void **switched_from);
extern void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
extern void arch_sched_ipi(void);
#endif /* _ASMLANGUAGE */
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_KERNEL_ARCH_FUNC_H_ */
#endif /* _kernel_arch_func__h_ */


@@ -0,0 +1,82 @@
/*
* Copyright (c) 2017 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Per-arch thread definition
*
* This file contains definitions for
*
* struct _thread_arch
* struct _callee_saved
* struct _caller_saved
*
* necessary to instantiate instances of struct k_thread.
*/
#ifndef _kernel_arch_thread__h_
#define _kernel_arch_thread__h_
/*
* Reason a thread has relinquished control: cooperative threads can only be
* in the NONE or COOP state, while preemptible threads can be in any one of
* the four.
*/
#define _CAUSE_NONE 0
#define _CAUSE_COOP 1
#define _CAUSE_RIRQ 2
#define _CAUSE_FIRQ 3
#ifndef _ASMLANGUAGE
#include <zephyr/types.h>
struct _caller_saved {
/*
* Saved on the stack as part of handling a regular IRQ or by the
* kernel when calling the FIRQ return code.
*/
};
typedef struct _caller_saved _caller_saved_t;
struct _callee_saved {
u32_t sp; /* r28 */
};
typedef struct _callee_saved _callee_saved_t;
struct _thread_arch {
/* interrupt key when relinquishing control */
u32_t intlock_key;
/* one of the _CAUSE_xxxx definitions above */
int relinquish_cause;
/* return value from _Swap */
unsigned int return_value;
#ifdef CONFIG_ARC_STACK_CHECKING
/* High address of stack region; the stack grows downward from this
* location. Used for hardware stack checking
*/
u32_t k_stack_base;
u32_t k_stack_top;
#ifdef CONFIG_USERSPACE
u32_t u_stack_base;
u32_t u_stack_top;
#endif
#endif
#ifdef CONFIG_USERSPACE
u32_t priv_stack_start;
u32_t priv_stack_size;
#endif
};
typedef struct _thread_arch _thread_arch_t;
#endif /* _ASMLANGUAGE */
#endif /* _kernel_arch_thread__h_ */


@@ -4,8 +4,8 @@
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_OFFSETS_SHORT_ARCH_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_OFFSETS_SHORT_ARCH_H_
#ifndef _offsets_short_arch__h_
#define _offsets_short_arch__h_
#include <offsets.h>
@@ -17,9 +17,15 @@
/* threads */
#define _thread_offset_to_intlock_key \
(___thread_t_arch_OFFSET + ___thread_arch_t_intlock_key_OFFSET)
#define _thread_offset_to_relinquish_cause \
(___thread_t_arch_OFFSET + ___thread_arch_t_relinquish_cause_OFFSET)
#define _thread_offset_to_return_value \
(___thread_t_arch_OFFSET + ___thread_arch_t_return_value_OFFSET)
#define _thread_offset_to_k_stack_base \
(___thread_t_arch_OFFSET + ___thread_arch_t_k_stack_base_OFFSET)
@@ -37,4 +43,4 @@
/* end - threads */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_OFFSETS_SHORT_ARCH_H_ */
#endif /* _offsets_short_arch__h_ */


@@ -6,14 +6,18 @@
* SPDX-License-Identifier: Apache-2.0
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_SWAP_MACROS_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_SWAP_MACROS_H_
#ifndef _SWAP_MACROS__H_
#define _SWAP_MACROS__H_
#include <kernel_structs.h>
#include <offsets_short.h>
#include <toolchain.h>
#include <arch/cpu.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifdef _ASMLANGUAGE
/* entering this macro, current is in r2 */
@@ -40,33 +44,20 @@
#ifdef CONFIG_USERSPACE
#ifdef CONFIG_ARC_HAS_SECURE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
lr r13, [_ARC_V2_SEC_U_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
lr r13, [_ARC_V2_SEC_K_SP]
st_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
#else
lr r13, [_ARC_V2_USER_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
lr r13, [_ARC_V2_KERNEL_SP]
st_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
#else
lr r13, [_ARC_V2_USER_SP]
st_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
st r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
#endif
#endif
st r30, [sp, ___callee_saved_stack_t_r30_OFFSET]
#ifdef CONFIG_ARC_HAS_ACCL_REGS
#ifdef CONFIG_FP_SHARING
st r58, [sp, ___callee_saved_stack_t_r58_OFFSET]
st r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 1f
lr r13, [_ARC_V2_FPU_STATUS]
st_s r13, [sp, ___callee_saved_stack_t_fpu_status_OFFSET]
lr r13, [_ARC_V2_FPU_CTRL]
@@ -82,27 +73,21 @@
lr r13, [_ARC_V2_FPU_DPFP2H]
st_s r13, [sp, ___callee_saved_stack_t_dpfp2h_OFFSET]
#endif
1 :
#endif
/* save stack pointer in struct k_thread */
/* save stack pointer in struct tcs */
st sp, [r2, _thread_offset_to_sp]
.endm
/* entering this macro, current is in r2 */
.macro _load_callee_saved_regs
/* restore stack pointer from struct k_thread */
/* restore stack pointer from struct tcs */
ld sp, [r2, _thread_offset_to_sp]
#ifdef CONFIG_ARC_HAS_ACCL_REGS
#ifdef CONFIG_FP_SHARING
ld r58, [sp, ___callee_saved_stack_t_r58_OFFSET]
ld r59, [sp, ___callee_saved_stack_t_r59_OFFSET]
#endif
#ifdef CONFIG_FP_SHARING
ld_s r13, [r2, ___thread_base_t_user_options_OFFSET]
/* K_FP_REGS is bit 1 */
bbit0 r13, 1, 2f
ld_s r13, [sp, ___callee_saved_stack_t_fpu_status_OFFSET]
sr r13, [_ARC_V2_FPU_STATUS]
@@ -119,22 +104,15 @@
ld_s r13, [sp, ___callee_saved_stack_t_dpfp2h_OFFSET]
sr r13, [_ARC_V2_FPU_DPFP2H]
#endif
2 :
#endif
#ifdef CONFIG_USERSPACE
#ifdef CONFIG_ARC_HAS_SECURE
#ifdef CONFIG_ARC_SECURE_FIRMWARE
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_SEC_U_SP]
ld_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
ld r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
sr r13, [_ARC_V2_SEC_K_SP]
#else
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_USER_SP]
ld_s r13, [sp, ___callee_saved_stack_t_kernel_sp_OFFSET]
sr r13, [_ARC_V2_KERNEL_SP]
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
#else
ld_s r13, [sp, ___callee_saved_stack_t_user_sp_OFFSET]
sr r13, [_ARC_V2_USER_SP]
@@ -258,7 +236,7 @@
* The pc and status32 values will still be on the stack. We cannot
* pop them yet because the callers of _pop_irq_stack_frame must reload
* status32 differently depending on the execution context they are
* running in (arch_switch(), firq or exception).
* running in (_Swap(), firq or exception).
*/
add_s sp, sp, ___isf_t_SIZEOF
@@ -269,7 +247,7 @@
* _kernel.current. r3 is a scratch reg.
*/
.macro _load_stack_check_regs
#if defined(CONFIG_ARC_SECURE_FIRMWARE)
#ifdef CONFIG_ARC_HAS_SECURE
ld r3, [r2, _thread_offset_to_k_stack_base]
sr r3, [_ARC_V2_S_KSTACK_BASE]
ld r3, [r2, _thread_offset_to_k_stack_top]
@@ -291,92 +269,13 @@
ld r3, [r2, _thread_offset_to_u_stack_top]
sr r3, [_ARC_V2_USTACK_TOP]
#endif
#endif /* CONFIG_ARC_SECURE_FIRMWARE */
.endm
/* check and increase the interrupt nest counter;
* after the increase, check whether the nest counter == 1
* (a rough C equivalent is sketched after this macro).
* The result is reflected in the EQ bit of status32
*/
.macro _check_and_inc_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
_get_cpu_id \reg1
ld.as \reg1, [@_curr_cpu, \reg1]
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov \reg1, _kernel
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
add \reg2, \reg2, 1
#ifdef CONFIG_SMP
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
cmp \reg2, 1
.endm
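For readability, a rough C rendering of the counter logic in the macro above (the helper is hypothetical; in the macro the counter lives in _kernel for non-SMP builds or in the per-CPU structure for SMP):
#include <stdbool.h>
#include <zephyr/types.h>
/* Illustrative C equivalent of _check_and_inc_int_nest_counter: bump
 * the nesting counter and report whether this is the outermost
 * interrupt, which the assembly macro exposes through the EQ flag.
 */
static inline bool irq_nesting_enter_is_outermost(u32_t *nested)
{
        *nested += 1;
        return *nested == 1;
}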
/* decrease interrupt nest counter */
.macro _dec_int_nest_counter reg1 reg2
#ifdef CONFIG_SMP
_get_cpu_id \reg1
ld.as \reg1, [@_curr_cpu, \reg1]
ld \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
mov \reg1, _kernel
ld \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
sub \reg2, \reg2, 1
#ifdef CONFIG_SMP
st \reg2, [\reg1, ___cpu_t_nested_OFFSET]
#else
st \reg2, [\reg1, ___kernel_t_nested_OFFSET]
#endif
.endm
/* If multiple bits in IRQ_ACT are set, i.e. last bit != first bit, we are
* in a nested interrupt. The result is reflected in the EQ bit of status32
*/
.macro _check_nest_int_by_irq_act reg1, reg2
lr \reg1, [_ARC_V2_AUX_IRQ_ACT]
#ifdef CONFIG_ARC_SECURE_FIRMWARE
and \reg1, \reg1, ((1 << ARC_N_IRQ_START_LEVEL) - 1)
#else
and \reg1, \reg1, 0xffff
#endif
ffs \reg2, \reg1
fls \reg1, \reg1
cmp \reg1, \reg2
.endm
.macro _get_cpu_id reg
lr \reg, [_ARC_V2_IDENTITY]
xbfu \reg, \reg, 0xe8
.endm
.macro _get_curr_cpu_irq_stack irq_sp
#ifdef CONFIG_SMP
_get_cpu_id \irq_sp
ld.as \irq_sp, [@_curr_cpu, \irq_sp]
ld \irq_sp, [\irq_sp, ___cpu_t_irq_stack_OFFSET]
#else
mov \irq_sp, _kernel
ld \irq_sp, [\irq_sp, _kernel_offset_to_irq_stack]
#endif
.endm
/* macro to push aux reg through reg */
.macro PUSHAX reg aux
lr \reg, [\aux]
st.a \reg, [sp, -4]
.endm
/* macro to pop aux reg through reg */
.macro POPAX reg aux
ld.ab \reg, [sp, 4]
sr \reg, [\aux]
#endif /* CONFIG_ARC_HAS_SECURE */
.endm
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_SWAP_MACROS_H_ */
#ifdef __cplusplus
}
#endif
#endif /* _SWAP_MACROS__H_ */


@@ -0,0 +1,36 @@
/*
* Copyright (c) 2016 Intel Corporation
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @file
* @brief Kernel event logger support for ARC
*/
#ifndef __KERNEL_TRACING_H__
#define __KERNEL_TRACING_H__
#ifdef __cplusplus
extern "C" {
#endif
/**
* @brief Get the identification of the current interrupt.
*
* This routine obtains the key of the interrupt that is currently being
* processed, if it is called from an ISR context.
*
* @return The key of the interrupt that is currently being processed.
*/
int _sys_current_irq_key_get(void)
{
return _INTERRUPT_CAUSE();
}
#ifdef __cplusplus
}
#endif
#endif /* __KERNEL_TRACING_H__ */


@@ -12,17 +12,17 @@
* ARCv2 processor architecture.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_V2_CACHE_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_V2_CACHE_H_
#ifndef _ARCV2_CACHE__H_
#define _ARCV2_CACHE__H_
#include <arch/cpu.h>
#ifndef _ASMLANGUAGE
#ifdef __cplusplus
extern "C" {
#endif
#ifndef _ASMLANGUAGE
/* i-cache defines for IC_CTRL register */
#define IC_CACHE_ENABLE 0x00
#define IC_CACHE_DISABLE 0x01
@@ -34,7 +34,7 @@ extern "C" {
*
* Enables the i-cache and sets it to direct access mode.
*/
static ALWAYS_INLINE void z_icache_setup(void)
static ALWAYS_INLINE void _icache_setup(void)
{
u32_t icache_config = (
IC_CACHE_DIRECT | /* direct mapping (one-way assoc.) */
@@ -42,18 +42,18 @@ static ALWAYS_INLINE void z_icache_setup(void)
);
u32_t val;
val = z_arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val = _arc_v2_aux_reg_read(_ARC_V2_I_CACHE_BUILD);
val &= 0xff;
if (val != 0U) { /* is i-cache present? */
if (val != 0) { /* is i-cache present? */
/* configure i-cache */
z_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, icache_config);
_arc_v2_aux_reg_write(_ARC_V2_IC_CTRL, icache_config);
}
}
#endif /* _ASMLANGUAGE */
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_V2_CACHE_H_ */
#endif /* _ARCV2_CACHE__H_ */


@@ -12,10 +12,8 @@
* other definitions for the ARCv2 processor architecture.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_V2_IRQ_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_V2_IRQ_H_
#include <arch/cpu.h>
#ifndef _ARCV2_IRQ__H_
#define _ARCV2_IRQ__H_
#ifdef __cplusplus
extern "C" {
@@ -30,31 +28,19 @@ extern "C" {
#define _ARC_V2_AUX_IRQ_CTRL_32_REGS 16
#ifdef CONFIG_ARC_SECURE_FIRMWARE
#define _ARC_V2_DEF_IRQ_LEVEL (ARC_N_IRQ_START_LEVEL - 1)
#else
#define _ARC_V2_DEF_IRQ_LEVEL (CONFIG_NUM_IRQ_PRIO_LEVELS - 1)
#endif
#define _ARC_V2_DEF_IRQ_LEVEL (CONFIG_NUM_IRQ_PRIO_LEVELS-1)
#define _ARC_V2_WAKE_IRQ_LEVEL _ARC_V2_DEF_IRQ_LEVEL
/*
* INIT_IRQ_LOCK_KEY is the initial interrupt level setting of a thread.
* It's applied by the seti instruction when a thread starts to run,
* i.e., in z_thread_entry_wrapper and z_user_thread_entry_wrapper
*/
#define _ARC_V2_INIT_IRQ_LOCK_KEY (0x10 | _ARC_V2_DEF_IRQ_LEVEL)
#ifndef _ASMLANGUAGE
extern K_THREAD_STACK_DEFINE(_interrupt_stack, CONFIG_ISR_STACK_SIZE);
/*
* z_irq_setup
* _irq_setup
*
* Configures interrupt handling parameters
*/
static ALWAYS_INLINE void z_irq_setup(void)
static ALWAYS_INLINE void _irq_setup(void)
{
u32_t aux_irq_ctrl_value = (
_ARC_V2_AUX_IRQ_CTRL_LOOP_REGS | /* save lp_xxx registers */
@@ -65,14 +51,11 @@ static ALWAYS_INLINE void z_irq_setup(void)
_ARC_V2_AUX_IRQ_CTRL_14_REGS /* save r0 -> r13 (caller-saved) */
);
z_arc_cpu_sleep_mode = _ARC_V2_WAKE_IRQ_LEVEL;
k_cpu_sleep_mode = _ARC_V2_WAKE_IRQ_LEVEL;
_arc_v2_aux_reg_write(_ARC_V2_AUX_IRQ_CTRL, aux_irq_ctrl_value);
#ifdef CONFIG_ARC_NORMAL_FIRMWARE
/* normal mode cannot write irq_ctrl, ignore it */
aux_irq_ctrl_value = aux_irq_ctrl_value;
#else
z_arc_v2_aux_reg_write(_ARC_V2_AUX_IRQ_CTRL, aux_irq_ctrl_value);
#endif
_kernel.irq_stack =
K_THREAD_STACK_BUFFER(_interrupt_stack) + CONFIG_ISR_STACK_SIZE;
}
#endif /* _ASMLANGUAGE */
@@ -81,4 +64,4 @@ static ALWAYS_INLINE void z_irq_setup(void)
}
#endif
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_V2_IRQ_H_ */
#endif /* _ARCV2_IRQ__H_ */


@@ -18,13 +18,18 @@
* Refer to the ARCv2 manual for an explanation of the exceptions.
*/
#ifndef ZEPHYR_ARCH_ARC_INCLUDE_VECTOR_TABLE_H_
#define ZEPHYR_ARCH_ARC_INCLUDE_VECTOR_TABLE_H_
#ifndef _VECTOR_TABLE__H_
#define _VECTOR_TABLE__H_
#ifdef __cplusplus
extern "C" {
#endif
#define EXC_EV_TRAP 0x9
#ifdef _ASMLANGUAGE
#include <board.h>
#include <toolchain.h>
#include <linker/sections.h>
@@ -51,10 +56,6 @@ GTEXT(_isr_wrapper)
#else
#ifdef __cplusplus
extern "C" {
#endif
extern void __reset(void);
extern void __memory_error(void);
extern void __instruction_error(void);
@@ -70,10 +71,10 @@ extern void __ev_div_zero(void);
extern void __ev_dc_error(void);
extern void __ev_maligned(void);
#endif /* _ASMLANGUAGE */
#ifdef __cplusplus
}
#endif
#endif /* _ASMLANGUAGE */
#endif /* ZEPHYR_ARCH_ARC_INCLUDE_VECTOR_TABLE_H_ */
#endif /* _VECTOR_TABLE__H_ */


@@ -0,0 +1,17 @@
zephyr_library_include_directories(${ZEPHYR_BASE}/drivers)
zephyr_include_directories(${ZEPHYR_BASE}/arch/x86/soc/intel_quark)
zephyr_cc_option(-mcpu=quarkse_em -mno-sdata)
zephyr_compile_definitions_ifdef(
CONFIG_SOC_QUARK_SE_C1000_SS
QM_SENSOR=1
SOC_SERIES=quark_se
)
zephyr_sources(
soc.c
soc_config.c
power.c
soc_power.S
)


@@ -0,0 +1,18 @@
#
# Copyright (c) 2016 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
if ARC && SOC_QUARK_SE_C1000_SS
if IPM
config QUARK_SE_SS_IPM_IRQ_PRI
int "IPM interrupt priority"
default 1
help
Priority level for interrupts coming in from the inter-processor
mailboxes.
endif # IPM
endif # SOC_QUARK_SE_C1000_SS


@@ -0,0 +1,245 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
# Copyright (c) 2015-2016 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
if SOC_QUARK_SE_C1000_SS
config SOC
default "quark_se_c1000_ss"
config NUM_IRQ_PRIO_LEVELS
# This processor supports only 2 priority levels:
# 0 for Fast Interrupts (FIRQs) and 1 for Regular Interrupts (IRQs).
default 2
config NUM_IRQS
# must be > the highest interrupt number used
default 68
config RGF_NUM_BANKS
default 2
config SYS_CLOCK_HW_CYCLES_PER_SEC
default 32000000
config HARVARD
def_bool n
config QMSI
def_bool y
config QMSI_BUILTIN
def_bool y
if RTC
config RTC_QMSI
def_bool y
endif # RTC
if PWM
config PWM_QMSI
def_bool y
endif # PWM
if PINMUX
config PINMUX_QMSI
def_bool y
endif
if GPIO
config GPIO_QMSI
def_bool n
if GPIO_QMSI
config GPIO_QMSI_0
def_bool y
config GPIO_QMSI_1
def_bool y
endif # GPIO_QMSI
config GPIO_QMSI_SS
def_bool y
if GPIO_QMSI_SS
config GPIO_QMSI_SS_0
def_bool y
config GPIO_QMSI_SS_1
def_bool y
endif # GPIO_QMSI_SS
endif # GPIO
if I2C
config I2C_QMSI
def_bool n
if I2C_QMSI
config I2C_0
def_bool y
config I2C_1
def_bool y
config I2C_SDA_SETUP
default 2
config I2C_SDA_TX_HOLD
default 16
config I2C_SDA_RX_HOLD
default 24
endif
config I2C_QMSI_SS
def_bool y
if I2C_QMSI_SS
config I2C_SS_0
def_bool y
config I2C_SS_1
def_bool y
config I2C_SS_SDA_SETUP
default 2
config I2C_SS_SDA_HOLD
default 10
endif
endif # I2C
if ADC
config ADC_DW
def_bool y
endif
if BT_H4
config UART_QMSI_0
def_bool y
config UART_QMSI_0_HW_FC
def_bool y
endif # BT_H4
if UART_QMSI
config UART_QMSI_1
def_bool y
endif # UART_QMSI
if SPI
config SPI_DW
def_bool y
if SPI_DW
config SPI_DW_FIFO_DEPTH
default 7
config CLOCK_CONTROL
def_bool y
config CLOCK_CONTROL_QUARK_SE
def_bool y
config CLOCK_CONTROL_QUARK_SE_SENSOR
def_bool y
config SPI_0
def_bool y
config SPI_DW_PORT_0_INTERRUPT_SINGLE_LINE
def_bool n
config SPI_DW_PORT_0_CLOCK_GATE
def_bool y
config SPI_DW_PORT_0_CLOCK_GATE_DRV_NAME
default CLOCK_CONTROL_QUARK_SE_SENSOR_DRV_NAME
config SPI_DW_PORT_0_CLOCK_GATE_SUBSYS
default 3
config SPI_0_IRQ_PRI
default 1
config SPI_1
def_bool y
config SPI_DW_PORT_1_INTERRUPT_SINGLE_LINE
def_bool n
config SPI_DW_PORT_1_CLOCK_GATE
def_bool y
config SPI_DW_PORT_1_CLOCK_GATE_DRV_NAME
default CLOCK_CONTROL_QUARK_SE_SENSOR_DRV_NAME
config SPI_DW_PORT_1_CLOCK_GATE_SUBSYS
default 4
config SPI_1_IRQ_PRI
default 1
endif # SPI_DW
endif # SPI
if AIO_COMPARATOR
config AIO_COMPARATOR_QMSI
def_bool y
endif # AIO_COMPARATOR
if WATCHDOG
config WDT_QMSI
def_bool y
config WDT_0_IRQ_PRI
default 0
endif # WATCHDOG
if DMA
config DMA_QMSI
def_bool y
endif # DMA
if COUNTER
config AON_COUNTER_QMSI
def_bool y
config AON_TIMER_QMSI
def_bool y
config AON_TIMER_IRQ_PRI
default 0
endif # COUNTER
endif #SOC_QUARK_SE_C1000_SS


@@ -0,0 +1,6 @@
config SOC_QUARK_SE_C1000_SS
bool "Intel Quark SE C1000- Sensor Sub System"
select SYS_POWER_LOW_POWER_STATE_SUPPORTED
select SYS_POWER_DEEP_SLEEP_SUPPORTED
select HAS_QMSI


@@ -0,0 +1,77 @@
/* SoC level DTS fixup file */
#define CONFIG_UART_QMSI_0_BAUDRATE INTEL_QMSI_UART_B0002000_CURRENT_SPEED
#define CONFIG_UART_QMSI_0_NAME INTEL_QMSI_UART_B0002000_LABEL
#define CONFIG_UART_QMSI_0_IRQ INTEL_QMSI_UART_B0002000_IRQ_0
#define CONFIG_UART_QMSI_0_IRQ_PRI INTEL_QMSI_UART_B0002000_IRQ_0_PRIORITY
#define CONFIG_UART_QMSI_1_BAUDRATE INTEL_QMSI_UART_B0002400_CURRENT_SPEED
#define CONFIG_UART_QMSI_1_NAME INTEL_QMSI_UART_B0002400_LABEL
#define CONFIG_UART_QMSI_1_IRQ INTEL_QMSI_UART_B0002400_IRQ_0
#define CONFIG_UART_QMSI_1_IRQ_PRI INTEL_QMSI_UART_B0002400_IRQ_0_PRIORITY
#define SRAM_START CONFIG_SRAM_BASE_ADDRESS
#define SRAM_SIZE CONFIG_SRAM_SIZE
#define FLASH_START CONFIG_FLASH_BASE_ADDRESS
#define FLASH_SIZE CONFIG_FLASH_SIZE
#define CONFIG_DCCM_BASE_ADDRESS ARC_DCCM_80000000_BASE_ADDRESS
#define CONFIG_DCCM_SIZE (ARC_DCCM_80000000_SIZE >> 10)
#define CONFIG_I2C_SS_0_NAME INTEL_QMSI_SS_I2C_80012000_LABEL
#define CONFIG_I2C_SS_0_ERR_IRQ INTEL_QMSI_SS_I2C_80012000_IRQ_ERROR
#define CONFIG_I2C_SS_0_ERR_IRQ_PRI INTEL_QMSI_SS_I2C_80012000_IRQ_ERROR_PRIORITY
#define CONFIG_I2C_SS_0_RX_IRQ INTEL_QMSI_SS_I2C_80012000_IRQ_RX
#define CONFIG_I2C_SS_0_RX_IRQ_PRI INTEL_QMSI_SS_I2C_80012000_IRQ_RX_PRIORITY
#define CONFIG_I2C_SS_0_TX_IRQ INTEL_QMSI_SS_I2C_80012000_IRQ_TX
#define CONFIG_I2C_SS_0_TX_IRQ_PRI INTEL_QMSI_SS_I2C_80012000_IRQ_TX_PRIORITY
#define CONFIG_I2C_SS_0_STOP_IRQ INTEL_QMSI_SS_I2C_80012000_IRQ_STOP
#define CONFIG_I2C_SS_0_STOP_IRQ_PRI INTEL_QMSI_SS_I2C_80012000_IRQ_STOP_PRIORITY
#define CONFIG_I2C_SS_0_BITRATE INTEL_QMSI_SS_I2C_80012000_CLOCK_FREQUENCY
#define CONFIG_I2C_SS_1_NAME INTEL_QMSI_SS_I2C_80012100_LABEL
#define CONFIG_I2C_SS_1_ERR_IRQ INTEL_QMSI_SS_I2C_80012100_IRQ_ERROR
#define CONFIG_I2C_SS_1_ERR_IRQ_PRI INTEL_QMSI_SS_I2C_80012100_IRQ_ERROR_PRIORITY
#define CONFIG_I2C_SS_1_RX_IRQ INTEL_QMSI_SS_I2C_80012100_IRQ_RX
#define CONFIG_I2C_SS_1_RX_IRQ_PRI INTEL_QMSI_SS_I2C_80012100_IRQ_RX_PRIORITY
#define CONFIG_I2C_SS_1_TX_IRQ INTEL_QMSI_SS_I2C_80012100_IRQ_TX
#define CONFIG_I2C_SS_1_TX_IRQ_PRI INTEL_QMSI_SS_I2C_80012100_IRQ_TX_PRIORITY
#define CONFIG_I2C_SS_1_STOP_IRQ INTEL_QMSI_SS_I2C_80012100_IRQ_STOP
#define CONFIG_I2C_SS_1_STOP_IRQ_PRI INTEL_QMSI_SS_I2C_80012100_IRQ_STOP_PRIORITY
#define CONFIG_I2C_SS_1_BITRATE INTEL_QMSI_SS_I2C_80012100_CLOCK_FREQUENCY
#define CONFIG_I2C_0_NAME INTEL_QMSI_I2C_B0002800_LABEL
#define CONFIG_I2C_0_BITRATE INTEL_QMSI_I2C_B0002800_CLOCK_FREQUENCY
#define CONFIG_I2C_0_IRQ INTEL_QMSI_I2C_B0002800_IRQ_0
#define CONFIG_I2C_0_IRQ_PRI INTEL_QMSI_I2C_B0002800_IRQ_0_PRIORITY
#define CONFIG_I2C_1_NAME INTEL_QMSI_I2C_B0002C00_LABEL
#define CONFIG_I2C_1_BITRATE INTEL_QMSI_I2C_B0002C00_CLOCK_FREQUENCY
#define CONFIG_I2C_1_IRQ INTEL_QMSI_I2C_B0002C00_IRQ_0
#define CONFIG_I2C_1_IRQ_PRI INTEL_QMSI_I2C_B0002C00_IRQ_0_PRIORITY
#define CONFIG_RTC_0_NAME INTEL_QMSI_RTC_B0000400_LABEL
#define CONFIG_RTC_0_IRQ INTEL_QMSI_RTC_B0000400_IRQ_0
#define CONFIG_RTC_0_IRQ_PRI INTEL_QMSI_RTC_B0000400_IRQ_0_PRIORITY
#define CONFIG_GPIO_QMSI_SS_0_NAME INTEL_QMSI_SS_GPIO_80017800_LABEL
#define CONFIG_GPIO_QMSI_SS_0_IRQ INTEL_QMSI_SS_GPIO_80017800_IRQ_0
#define CONFIG_GPIO_QMSI_SS_0_IRQ_PRI INTEL_QMSI_SS_GPIO_80017800_IRQ_0_PRIORITY
#define CONFIG_GPIO_QMSI_SS_1_NAME INTEL_QMSI_SS_GPIO_80017900_LABEL
#define CONFIG_GPIO_QMSI_SS_1_IRQ INTEL_QMSI_SS_GPIO_80017900_IRQ_0
#define CONFIG_GPIO_QMSI_SS_1_IRQ_PRI INTEL_QMSI_SS_GPIO_80017900_IRQ_0_PRIORITY
#define CONFIG_GPIO_QMSI_0_NAME INTEL_QMSI_GPIO_B0000C00_LABEL
#define CONFIG_GPIO_QMSI_0_IRQ INTEL_QMSI_GPIO_B0000C00_IRQ_0
#define CONFIG_GPIO_QMSI_0_IRQ_PRI INTEL_QMSI_GPIO_B0000C00_IRQ_0_PRIORITY
#define CONFIG_GPIO_QMSI_1_NAME INTEL_QMSI_GPIO_B0800B00_LABEL
#define CONFIG_GPIO_QMSI_1_IRQ INTEL_QMSI_GPIO_B0800B00_IRQ_0
#define CONFIG_GPIO_QMSI_1_IRQ_PRI INTEL_QMSI_GPIO_B0800B00_IRQ_0_PRIORITY
#define CONFIG_ADC_0_IRQ SNPS_DW_ADC_80015000_IRQ_NORMAL
#define CONFIG_ADC_IRQ_ERR SNPS_DW_ADC_80015000_IRQ_ERROR
#define CONFIG_ADC_0_IRQ_PRI SNPS_DW_ADC_80015000_IRQ_0_PRIORITY
#define CONFIG_ADC_0_NAME SNPS_DW_ADC_80015000_LABEL
#define CONFIG_ADC_0_BASE_ADDRESS SNPS_DW_ADC_80015000_BASE_ADDRESS
/* End of SoC Level DTS fixup file */


@@ -0,0 +1,30 @@
/*
* Copyright (c) 2014-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @brief Linker script for the Quark SE platform, both standard images and XIP
* images.
*/
/* Flash base address and size */
#define FLASH_START CONFIG_FLASH_BASE_ADDRESS /* Flash bank 1 */
#define FLASH_SIZE CONFIG_FLASH_SIZE
/*
* SRAM base address and size
*
* Internal SRAM includes the exception vector table at reset, which is at
* the beginning of the region.
*/
#define SRAM_START CONFIG_SRAM_BASE_ADDRESS
#define SRAM_SIZE CONFIG_SRAM_SIZE
/* Data Closely Coupled Memory (DCCM) base address and size */
#define DCCM_START CONFIG_DCCM_BASE_ADDRESS
#define DCCM_SIZE CONFIG_DCCM_SIZE
#include <generated_dts_board.h>
#include <arch/arc/v2/linker.ld>


@@ -0,0 +1,105 @@
/*
* Copyright (c) 2016 Intel Corporation.
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr.h>
#include <sys_io.h>
#include <misc/__assert.h>
#include <power.h>
#include <soc_power.h>
#include <init.h>
#include <kernel_structs.h>
#include <soc.h>
#include "power_states.h"
#include "ss_power_states.h"
#include "vreg.h"
#if (defined(CONFIG_SYS_POWER_DEEP_SLEEP))
extern void _power_soc_sleep(void);
extern void _power_soc_deep_sleep(void);
extern void _power_soc_lpss_mode(void);
static void _deep_sleep(enum power_states state)
{
qm_power_soc_set_ss_restore_flag();
switch (state) {
case SYS_POWER_STATE_DEEP_SLEEP_1:
_power_soc_sleep();
break;
case SYS_POWER_STATE_DEEP_SLEEP_2:
_power_soc_deep_sleep();
break;
default:
break;
}
}
#endif
void _sys_soc_set_power_state(enum power_states state)
{
switch (state) {
case SYS_POWER_STATE_CPU_LPS:
qm_ss_power_cpu_ss1(QM_SS_POWER_CPU_SS1_TIMER_ON);
break;
case SYS_POWER_STATE_CPU_LPS_1:
qm_ss_power_cpu_ss2();
break;
#if (defined(CONFIG_SYS_POWER_DEEP_SLEEP))
case SYS_POWER_STATE_DEEP_SLEEP:
qm_ss_power_soc_lpss_enable();
qm_power_soc_set_ss_restore_flag();
_power_soc_lpss_mode();
break;
case SYS_POWER_STATE_DEEP_SLEEP_1:
case SYS_POWER_STATE_DEEP_SLEEP_2:
_deep_sleep(state);
break;
#endif
default:
break;
}
}
void _sys_soc_power_state_post_ops(enum power_states state)
{
u32_t limit;
switch (state) {
case SYS_POWER_STATE_CPU_LPS_1:
/* Expire the timer as it is disabled in SS2. */
limit = _arc_v2_aux_reg_read(_ARC_V2_TMR0_LIMIT);
_arc_v2_aux_reg_write(_ARC_V2_TMR0_COUNT, limit - 1);
case SYS_POWER_STATE_CPU_LPS:
__builtin_arc_seti(0);
break;
case SYS_POWER_STATE_DEEP_SLEEP:
qm_ss_power_soc_lpss_disable();
/* If the flag is cleared, it means the system entered
* sleep state while we were in LPS. In that case, we
* must set the ARC_READY flag so the x86 core can continue
* its execution.
*/
if ((QM_SCSS_GP->gp0 & GP0_BIT_SLEEP_READY) == 0) {
_quark_se_ss_ready();
__builtin_arc_seti(0);
} else {
QM_SCSS_GP->gp0 &= ~GP0_BIT_SLEEP_READY;
QM_SCSS_GP->gps0 &= ~QM_GPS0_BIT_SENSOR_WAKEUP;
}
break;
case SYS_POWER_STATE_DEEP_SLEEP_1:
case SYS_POWER_STATE_DEEP_SLEEP_2:
/* Route RTC interrupt to the current core */
QM_IR_UNMASK_INTERRUPTS(QM_INTERRUPT_ROUTER->rtc_0_int_mask);
__builtin_arc_seti(0);
break;
default:
break;
}
}


@@ -0,0 +1,32 @@
/* system.c - system/hardware module for quark_se_ss BSP */
/*
* Copyright (c) 2014-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* This module provides routines to initialize and support board-level hardware
* for the Quark SE platform.
*/
#include <kernel.h>
#include "soc.h"
#include <init.h>
/**
* @brief Perform basic hardware initialization
*
* @return N/A
*/
static int quark_se_arc_init(struct device *arg)
{
ARG_UNUSED(arg);
_quark_se_ss_ready();
return 0;
}
SYS_INIT(quark_se_arc_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);


@@ -0,0 +1,221 @@
/*
* Copyright (c) 2014-2015 Wind River Systems, Inc.
*
* SPDX-License-Identifier: Apache-2.0
*/
/**
* @brief Board configuration macros for Quark SE Sensor Subsystem
*
* This header file is used to specify and describe board-level
* aspects for the Quark SE Sensor Subsystem.
*/
#ifndef _BOARD__H_
#define _BOARD__H_
#include <misc/util.h>
/* default system clock */
#define SYSCLK_DEFAULT_IOSC_HZ MHZ(32)
/* address bases */
#define SCSS_REGISTER_BASE 0xB0800000 /*Sensor Subsystem Base*/
#define PERIPH_ADDR_BASE_ADC 0x80015000 /* ADC */
#define PERIPH_ADDR_BASE_CREG_MST0 0x80018000 /* CREG Master 0 */
#define PERIPH_ADDR_BASE_CREG_SLV0 0x80018080 /* CREG Slave 0 */
#define PERIPH_ADDR_BASE_CREG_SLV1 0x80018180 /* CREG Slave 1 */
#define PERIPH_ADDR_BASE_GPIO0 0x80017800 /* GPIO 0 */
#define PERIPH_ADDR_BASE_GPIO1 0x80017900 /* GPIO 1 */
#define PERIPH_ADDR_BASE_SPI_MST0 0x80010000 /* SPI Master 0 */
#define PERIPH_ADDR_BASE_SPI_MST1 0x80010100 /* SPI Master 1 */
/* IRQs */
/* The CPU-visible IRQ numbers change between the ARC and IA cores, and
* QMSI itself has no easy way to pick the correct one, though it does
* provide the information we need to do it ourselves (for the time being).
* This macro will be used by the shim drivers to get the IRQ number to
* use, and it should always be called using the QM_IRQ_*_INT macro
* provided by QMSI.
*/
#define IRQ_GET_NUMBER(_irq) _irq##_VECTOR
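
/* Illustrative usage sketch: a shim driver resolves the ARC-visible vector
 * by letting IRQ_GET_NUMBER() paste "_VECTOR" onto the QMSI macro.
 * Assuming QMSI defines QM_IRQ_RTC_0_INT and QM_IRQ_RTC_0_INT_VECTOR,
 * the call
 *
 *   IRQ_CONNECT(IRQ_GET_NUMBER(QM_IRQ_RTC_0_INT), prio, isr, dev, 0);
 *
 * expands to IRQ_CONNECT(QM_IRQ_RTC_0_INT_VECTOR, prio, isr, dev, 0).
 */
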
#define IRQ_TIMER0 16
#define IRQ_TIMER1 17
#define IRQ_I2C0_RX_AVAIL 18
#define IRQ_I2C0_TX_REQ 19
#define IRQ_I2C0_STOP_DET 20
#define IRQ_I2C0_ERR 21
#define IRQ_I2C1_RX_AVAIL 22
#define IRQ_I2C1_TX_REQ 23
#define IRQ_I2C1_STOP_DET 24
#define IRQ_I2C1_ERR 25
#define IRQ_SPI0_ERR_INT 30
#define IRQ_SPI0_RX_AVAIL 31
#define IRQ_SPI0_TX_REQ 32
#define IRQ_SPI1_ERR_INT 33
#define IRQ_SPI1_RX_AVAIL 34
#define IRQ_SPI1_TX_REQ 35
#define IRQ_ADC_ERR 18
#define IRQ_ADC_IRQ 19
#define IRQ_GPIO0_INTR 20
#define IRQ_GPIO1_INTR 21
#define IRQ_I2C_MST0_INTR 36
#define IRQ_I2C_MST1_INTR 37
#define IRQ_SPI_MST0_INTR 38
#define IRQ_SPI_MST1_INTR 39
#define IRQ_SPI_SLV_INTR 40
#define IRQ_UART0_INTR 41
#define IRQ_UART1_INTR 42
#define IRQ_I2S_INTR 43
#define IRQ_GPIO_INTR 44
#define IRQ_PWM_TIMER_INTR 45
#define IRQ_USB_INTR 46
#define IRQ_RTC_INTR 47
#define IRQ_WDOG_INTR 48
#define IRQ_DMA_CHAN0 49
#define IRQ_DMA_CHAN1 50
#define IRQ_DMA_CHAN2 51
#define IRQ_DMA_CHAN3 52
#define IRQ_DMA_CHAN4 53
#define IRQ_DMA_CHAN5 54
#define IRQ_DMA_CHAN6 55
#define IRQ_DMA_CHAN7 56
#define IRQ_MAILBOXES_INTR 57
#define IRQ_COMPARATORS_INTR 58
#define IRQ_SYS_PMU_INTR 59
#define IRQ_DMA_CHANS_ERR 60
#define IRQ_INT_SRAM_CTLR 61
#define IRQ_INT_FLASH0_CTLR 62
#define IRQ_INT_FLASH1_CTLR 63
#define IRQ_ALWAYS_ON_TMR 64
#define IRQ_ADC_PWR 65
#define IRQ_ADC_CALIB 66
#define IRQ_ALWAYS_ON_GPIO 67
#ifndef _ASMLANGUAGE
#include <misc/util.h>
#include <random/rand32.h>
#include <quark_se/shared_mem.h>
#define INT_ENABLE_ARC ~(0x00000001 << 8)
#define INT_ENABLE_ARC_BIT_POS (8)
/*
* I2C
*/
#define I2C_SS_0_ERR_VECTOR 22
#define I2C_SS_0_ERR_MASK 0x410
#define I2C_SS_0_RX_VECTOR 23
#define I2C_SS_0_RX_MASK 0x414
#define I2C_SS_0_TX_VECTOR 24
#define I2C_SS_0_TX_MASK 0x418
#define I2C_SS_0_STOP_VECTOR 25
#define I2C_SS_0_STOP_MASK 0x41C
#define I2C_SS_1_ERR_VECTOR 26
#define I2C_SS_1_ERR_MASK 0x420
#define I2C_SS_1_RX_VECTOR 27
#define I2C_SS_1_RX_MASK 0x424
#define I2C_SS_1_TX_VECTOR 28
#define I2C_SS_1_TX_MASK 0x428
#define I2C_SS_1_STOP_VECTOR 29
#define I2C_SS_1_STOP_MASK 0x42C
#define CONFIG_I2C_0_IRQ_FLAGS (IOAPIC_LEVEL | IOAPIC_HIGH)
#define CONFIG_I2C_1_IRQ_FLAGS (IOAPIC_LEVEL | IOAPIC_HIGH)
/*
* GPIO
*/
#define GPIO_DW_IO_ACCESS
#define GPIO_DW_0_BASE_ADDR 0x80017800
#define GPIO_DW_0_IRQ 20
#define GPIO_DW_0_BITS 8
#define GPIO_DW_PORT_0_INT_MASK (SCSS_REGISTER_BASE + 0x408)
#define GPIO_DW_1_BASE_ADDR 0x80017900
#define GPIO_DW_1_IRQ 21
#define GPIO_DW_1_BITS 8
#define GPIO_DW_PORT_1_INT_MASK (SCSS_REGISTER_BASE + 0x40C)
#if defined(CONFIG_IOAPIC)
#define GPIO_DW_0_IRQ_FLAGS (IOAPIC_EDGE | IOAPIC_HIGH)
#define GPIO_DW_1_IRQ_FLAGS (IOAPIC_EDGE | IOAPIC_HIGH)
#else
#define GPIO_DW_0_IRQ_FLAGS 0
#define GPIO_DW_1_IRQ_FLAGS 0
#endif
#define CONFIG_GPIO_QMSI_0_IRQ_FLAGS (IOAPIC_EDGE | IOAPIC_HIGH)
#define CONFIG_GPIO_QMSI_1_IRQ_FLAGS (IOAPIC_EDGE | IOAPIC_HIGH)
/*
* UART
*/
#define CONFIG_UART_QMSI_0_IRQ_FLAGS 0
#define CONFIG_UART_QMSI_1_IRQ_FLAGS 0
/*
* SPI
*/
#ifdef CONFIG_SPI_DW
#define SPI_DW_PORT_0_REGS 0x80010000
#define SPI_DW_PORT_1_REGS 0x80010100
#define SPI_DW_PORT_0_ERROR_INT_MASK (SCSS_REGISTER_BASE + 0x430)
#define SPI_DW_PORT_0_RX_INT_MASK (SCSS_REGISTER_BASE + 0x434)
#define SPI_DW_PORT_0_TX_INT_MASK (SCSS_REGISTER_BASE + 0x438)
#define SPI_DW_PORT_1_ERROR_INT_MASK (SCSS_REGISTER_BASE + 0x43C)
#define SPI_DW_PORT_1_RX_INT_MASK (SCSS_REGISTER_BASE + 0x440)
#define SPI_DW_PORT_1_TX_INT_MASK (SCSS_REGISTER_BASE + 0x444)
#define SPI_DW_IRQ_FLAGS 0
#define SPI_DW_PORT_2_REGS 0xB0001000
#define SPI_DW_PORT_2_IRQ IRQ_SPI_MST0_INTR
#define SPI_DW_PORT_2_INT_MASK (SCSS_REGISTER_BASE + 0x454)
#define SPI_DW_PORT_3_REGS 0xB0001400
#define SPI_DW_PORT_3_IRQ IRQ_SPI_MST1_INTR
#define SPI_DW_PORT_3_INT_MASK (SCSS_REGISTER_BASE + 0x458)
#endif /* CONFIG_SPI_DW */
/* Clock */
#define CLOCK_PERIPHERAL_BASE_ADDR (SCSS_REGISTER_BASE + 0x18)
#define CLOCK_EXTERNAL_BASE_ADDR (SCSS_REGISTER_BASE + 0x24)
#define CLOCK_SENSOR_BASE_ADDR (SCSS_REGISTER_BASE + 0x28)
#define CLOCK_SYSTEM_CLOCK_CONTROL (SCSS_REGISTER_BASE + \
SCSS_CCU_SYS_CLK_CTL)
/*
* RTC
*/
#define CONFIG_RTC_0_IRQ_FLAGS (IOAPIC_EDGE | IOAPIC_HIGH)
static inline void _quark_se_ss_ready(void)
{
	shared_data->flags |= ARC_READY;
}
#endif /* !_ASMLANGUAGE */
#endif /* _BOARD__H_ */


@@ -0,0 +1,44 @@
/*
 * Copyright (c) 2015 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <device.h>
#include <init.h>
#include "soc.h"

#if CONFIG_IPM_QUARK_SE
#include <ipm.h>
#include <ipm/ipm_quark_se.h>

static int arc_quark_se_ipm_init(void)
{
	IRQ_CONNECT(QUARK_SE_IPM_INTERRUPT, CONFIG_QUARK_SE_SS_IPM_IRQ_PRI,
		    quark_se_ipm_isr, NULL, 0);
	irq_enable(QUARK_SE_IPM_INTERRUPT);
	return 0;
}

static struct quark_se_ipm_controller_config_info ipm_controller_config = {
	.controller_init = arc_quark_se_ipm_init
};

DEVICE_AND_API_INIT(quark_se_ipm, "", quark_se_ipm_controller_initialize,
		    NULL, &ipm_controller_config,
		    POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT,
		    &ipm_quark_se_api_funcs);

#if CONFIG_IPM_CONSOLE_SENDER
#include <console/ipm_console.h>

QUARK_SE_IPM_DEFINE(quark_se_ipm4, 4, QUARK_SE_IPM_OUTBOUND);

static struct ipm_console_sender_config_info quark_se_ipm_sender_config = {
	.bind_to = "quark_se_ipm4",
	.flags = IPM_CONSOLE_PRINTK | IPM_CONSOLE_STDOUT,
};

DEVICE_INIT(ipm_console, "ipm_console", ipm_console_sender_init,
	    NULL, &quark_se_ipm_sender_config,
	    POST_KERNEL, CONFIG_IPM_CONSOLE_INIT_PRIORITY);
#endif /* CONFIG_IPM_CONSOLE_SENDER */
#endif /* CONFIG_IPM_QUARK_SE */


@@ -0,0 +1,127 @@
/*
 * Copyright (c) 2016 Intel Corporation.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <toolchain.h>
#include <arch/arc/v2/aux_regs.h>
#include <swap_macros.h>

#ifdef CONFIG_SYS_POWER_DEEP_SLEEP

GDATA(_pm_arc_context)

GTEXT(_sys_soc_resume_from_deep_sleep)
GTEXT(_power_restore_cpu_context)
GTEXT(_power_soc_sleep)
GTEXT(_power_soc_deep_sleep)
GTEXT(_power_soc_lpss_mode)

#define GPS0_REGISTER 0xb0800100
#define GP0_REGISTER 0xb0800114
#define GP0_BIT_SLEEP_READY 0
#define RESTORE_SS_BIT 2
#define SLEEP_INTR_ENABLED_BIT 4
#define SLEEP_MODE_RTC_ENABLED_BIT 5
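
/* Sketch of the operand handed to the 'sleep' instruction further below,
 * derived from the defines above and the comments in _power_soc_lpss_mode:
 * bits [3:0] carry the interrupt priority threshold read from STATUS32,
 * bit 4 (SLEEP_INTR_ENABLED_BIT) keeps interrupts enabled while sleeping,
 * and bit 5 (SLEEP_MODE_RTC_ENABLED_BIT) selects the RTC-enabled sleep
 * mode corresponding to the SS2 state.
 */
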
SECTION_FUNC(TEXT, _sys_soc_resume_from_deep_sleep)
	/* Check if this is a wakeup-after-sleep event. */
	ld r0, [GPS0_REGISTER]
	bbit1 r0, RESTORE_SS_BIT, restore
	j_s [blink] /* Jump to context of BLINK register. */

restore:
	bclr_s r0, r0, RESTORE_SS_BIT
	st r0, [GPS0_REGISTER]

	/* Enable I-Cache */
	sr 1, [_ARC_V2_IC_CTRL]

	j @_sys_soc_restore_cpu_context

SECTION_FUNC(TEXT, save_cpu_context)
	mov_s r1, _kernel
	ld_s r2, [r1, _kernel_offset_to_current]

	_save_callee_saved_regs

	j_s [blink] /* Jump to context of BLINK register. */

SECTION_FUNC(TEXT, _power_soc_sleep)
	/*
	 * Save the return address.
	 * The restore function will pop this and jump
	 * back to the caller.
	 */
	push_s blink

	/* Do not link to preserve blink */
	jl @save_cpu_context

	j @qm_power_soc_sleep
	/* Does not return */

SECTION_FUNC(TEXT, _power_soc_deep_sleep)
	/*
	 * Save the return address.
	 * The restore function will pop this and jump
	 * back to the caller.
	 */
	push_s blink

	/* Do not link to preserve blink */
	jl @save_cpu_context

	j @qm_power_soc_deep_sleep
	/* Does not return */

SECTION_FUNC(TEXT, _power_soc_lpss_mode)
	/*
	 * Set up the 'sleep' instruction operand.
	 */

	/* Get the interrupt priority from the status32 register. */
	lr r0, [_ARC_V2_STATUS32]
	lsr r0, r0
	and r0, r0, 0xF

	/* Enable interrupts */
	bset r0, r0, SLEEP_INTR_ENABLED_BIT

	/* Set the 'sleep' mode corresponding to the SS2 state, i.e. core
	 * disabled, timers disabled, RTC enabled.
	 */
	bset r0, r0, SLEEP_MODE_RTC_ENABLED_BIT

	/*
	 * Save the return address.
	 * The restore function will pop this and jump
	 * back to the caller.
	 */
	push_s blink

	jl @save_cpu_context

	ld r1, [GP0_REGISTER]
	bset r1, r1, GP0_BIT_SLEEP_READY
	st r1, [GP0_REGISTER]

	sleep r0

	/* If we reach this code, it means the x86 core did not put the
	 * system into the SYS_POWER_STATE_DEEP_SLEEP_2 state while we were
	 * in LPS. Discard the saved context.
	 */
	_discard_callee_saved_regs
	pop_s blink
	j_s [blink]

SECTION_FUNC(TEXT, _sys_soc_restore_cpu_context)
	mov_s r1, _kernel
	ld_s r2, [r1, _kernel_offset_to_current]

	_load_callee_saved_regs

	/* Restore the return address */
	pop_s blink

	j_s [blink] /* Jump to context of BLINK register. */
#endif


@@ -0,0 +1,83 @@
/*
 * Copyright (c) 2016 Intel Corporation.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#ifndef _SOC_POWER_H_
#define _SOC_POWER_H_

#ifdef __cplusplus
extern "C" {
#endif

/*
 * Bit 0 of the GP0 register is used internally by the kernel
 * to handle PM multicore support. Any change to QMSI and/or the
 * bootloader which affects this bit should take this into
 * consideration.
 */
#define GP0_BIT_SLEEP_READY BIT(0)

enum power_states {
	SYS_POWER_STATE_CPU_LPS,       /* SS1 state with Timer ON */
	SYS_POWER_STATE_CPU_LPS_1,     /* SS2 state */
	SYS_POWER_STATE_DEEP_SLEEP,    /* SS2 with LPSS enabled state */
	SYS_POWER_STATE_DEEP_SLEEP_1,  /* SLEEP state */
	SYS_POWER_STATE_DEEP_SLEEP_2,  /* SLEEP state with LPMODE enabled */
	SYS_POWER_STATE_MAX
};

/**
 * @brief Put the processor into a low power state
 *
 * This function implements the SoC-specific details necessary
 * to put the processor into the available power states.
 *
 * Wake up considerations:
 * -----------------------
 *
 * SYS_POWER_STATE_CPU_LPS: Any interrupt works as a wake event.
 *
 * SYS_POWER_STATE_CPU_LPS_1: Any interrupt works as a wake event except
 * the ARC TIMER, which is disabled.
 *
 * SYS_POWER_STATE_CPU_LPS_2: The SYS_POWER_STATE_DEEP_SLEEP wake events
 * apply.
 *
 * SYS_POWER_STATE_DEEP_SLEEP: Only Always-On peripherals can wake up
 * the SoC, namely the Counter, RTC, GPIO 1 and AIO Comparator.
 *
 * SYS_POWER_STATE_DEEP_SLEEP_1: Only Always-On peripherals can wake up
 * the SoC, namely the Counter, RTC, GPIO 1 and AIO Comparator.
 *
 * SYS_POWER_STATE_DEEP_SLEEP_2: Only Always-On peripherals can wake up
 * the SoC, namely the Counter, RTC, GPIO 1 and AIO Comparator.
 *
 * Considerations around SYS_POWER_STATE_CPU_LPS (LPSS state):
 * -----------------------------------------------------------
 *
 * LPSS is a power state common to the x86 and ARC cores.
 * When the applications on both cores enter SYS_POWER_STATE_CPU_LPS,
 * the SoC will enter this lower-consumption mode.
 * After wake up, this state can only be entered again
 * if the ARC wakes up and transitions again to
 * SYS_POWER_STATE_CPU_LPS. This is not required on the x86 side.
 */
void _sys_soc_set_power_state(enum power_states state);

/**
 * @brief Do any SoC or architecture specific post ops after low power states.
 *
 * This function is a placeholder for any operations that may need
 * to be performed after exiting deep sleep. Currently it enables
 * interrupts after resuming from deep sleep. In the future, the
 * enabling of interrupts may be moved into the kernel.
 */
void _sys_soc_power_state_post_ops(enum power_states state);
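
/*
 * Illustrative sketch of a hypothetical caller: a suspend path on the ARC
 * core would typically pair the two calls above, for example:
 *
 *   enum power_states state = SYS_POWER_STATE_CPU_LPS;
 *
 *   _sys_soc_set_power_state(state);       (CPU halts here until a wake event)
 *   _sys_soc_power_state_post_ops(state);  (re-enable interrupts, etc.)
 */
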
#ifdef __cplusplus
}
#endif
#endif /* _SOC_POWER_H_ */


@@ -0,0 +1,17 @@
zephyr_library_include_directories(${ZEPHYR_BASE}/drivers)
zephyr_cc_option(-mcpu=${GCC_M_CPU})
zephyr_cc_option(-mno-sdata -mdiv-rem -mswap -mnorm)
zephyr_cc_option(-mmpy-option=6 -mbarrel-shifter)
zephyr_cc_option_ifdef(CONFIG_SOC_EMSK_EM7D --param l1-cache-size=16384)
zephyr_cc_option_ifdef(CONFIG_SOC_EMSK_EM7D --param l1-cache-line-size=32)
zephyr_cc_option_ifdef(CONFIG_SOC_EMSK_EM11D --param l1-cache-size=16384)
zephyr_cc_option_ifdef(CONFIG_SOC_EMSK_EM11D --param l1-cache-line-size=32)
zephyr_cc_option_ifdef(CONFIG_CODE_DENSITY -mcode-density)
zephyr_cc_option_ifdef(CONFIG_FLOAT -mfpu=fpuda_all)
zephyr_sources(
	soc.c
	soc_config.c
)


@@ -0,0 +1,29 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
# Copyright (c) 2018 Synopsys, Inc. All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
if SOC_EMSK

choice
	prompt "ARC EM Starter Kit Core Selection"
	default SOC_EMSK_EM7D

config SOC_EMSK_EM7D
	bool "Synopsys ARC EM7D of EMSK"
	select CPU_HAS_MPU
	select ARC_HAS_SECURE if BOARD_EM_STARTERKIT_R23

config SOC_EMSK_EM11D
	bool "Synopsys ARC EM11D of EMSK"
	select CPU_HAS_FPU

config SOC_EMSK_EM9D
	bool "Synopsys ARC EM9D of EMSK"
	select CPU_HAS_FPU

endchoice

endif # SOC_EMSK


@@ -0,0 +1,18 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
# Copyright (c) 2018 Synopsys, Inc. All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
if SOC_EMSK

config SOC
	string
	default "snps_emsk"

source "arch/arc/soc/snps_emsk/Kconfig.defconfig.em7d"
source "arch/arc/soc/snps_emsk/Kconfig.defconfig.em11d"
source "arch/arc/soc/snps_emsk/Kconfig.defconfig.em9d"

endif # SOC_EMSK


@@ -0,0 +1,38 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
# Copyright (c) 2018 Synopsys, Inc. All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
if SOC_EMSK_EM11D

config CPU_EM4_FPUDA
	def_bool y

config NUM_IRQ_PRIO_LEVELS
	# This processor supports 4 priority levels:
	# 0 for Fast Interrupts (FIRQs) and 1-3 for Regular Interrupts (IRQs).
	default 4

config NUM_IRQS
	# must be > the highest interrupt number used
	default 38 if BOARD_EM_STARTERKIT_R23
	default 36 if BOARD_EM_STARTERKIT_R22

config RGF_NUM_BANKS
	default 2

config SYS_CLOCK_HW_CYCLES_PER_SEC
	default 20000000

config HARVARD
	def_bool n

config CACHE_FLUSHING
	def_bool y

config FP_FPU_DA
	def_bool y

endif # SOC_EMSK_EM11D


@@ -0,0 +1,61 @@
#
# Copyright (c) 2014 Wind River Systems, Inc.
# Copyright (c) 2018 Synopsys, Inc. All rights reserved.
#
# SPDX-License-Identifier: Apache-2.0
#
if SOC_EMSK_EM7D

config CPU_EM4_DMIPS
	def_bool y

config NUM_IRQ_PRIO_LEVELS
	# This processor supports 4 priority levels:
	# 0 for Fast Interrupts (FIRQs) and 1-3 for Regular Interrupts (IRQs).
	default 4

config NUM_IRQS
	# must be > the highest interrupt number used
	default 38 if BOARD_EM_STARTERKIT_R23
	default 36 if BOARD_EM_STARTERKIT_R22

config ARC_MPU_VER
	default 3 if BOARD_EM_STARTERKIT_R23
	default 2 if BOARD_EM_STARTERKIT_R22

config RGF_NUM_BANKS
	default 1

config SYS_CLOCK_HW_CYCLES_PER_SEC
	default 25000000 if BOARD_EM_STARTERKIT_R23
	default 30000000 if BOARD_EM_STARTERKIT_R22

config HARVARD
	def_bool y

config ARC_FIRQ
	def_bool n if BOARD_EM_STARTERKIT_R23
	def_bool y if BOARD_EM_STARTERKIT_R22

config CACHE_FLUSHING
	def_bool y

if (ARC_MPU_VER = 2)

config MAIN_STACK_SIZE
	default 2048

config IDLE_STACK_SIZE
	default 2048

if ZTEST

config ZTEST_STACKSIZE
	default 2048

endif # ZTEST

endif # ARC_MPU_VER

endif # SOC_EMSK_EM7D
