Now that "make allyesconfig" can build and link, add this type of build
to the job which builds host tools as well. In GitLab, make this job,
rather than the binman testsuite, the one which unblocks the next stage
of the pipeline. This is because we had been using that job for
"sandbox builds", and now that we have an explicit test for that, we
should use it.
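A minimal sketch of the added build step, assuming the standard Kbuild
targets (the actual job wraps this in more scripting):

  # Build and link an "everything enabled" configuration
  make allyesconfig
  make -j"$(nproc)"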
Signed-off-by: Tom Rini <trini@konsulko.com>
Semihosting allows a virtual machine to write to the host file system.
Such a dangerous setting should not be in a defconfig.
Move it to a CI configuration override.
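One common way to apply such an override from a CI script instead of
carrying the option in the defconfig (a sketch; the exact mechanism
used here may differ):

  # Apply the option as a CI-only override on top of the defconfig;
  # "${BOARD}" stands in for the board being tested.
  make "${BOARD}_defconfig"
  echo CONFIG_SEMIHOSTING=y >> .config
  make olddefconfig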
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Similar to the changes made for the world build job, rework the binman
testsuite job as well. There are no functional changes, but this makes
our CI clearer to others familiar with Azure pipelines.
Signed-off-by: Tom Rini <trini@konsulko.com>
While we historically had problems using buildman inside of a container
when invoked directly via Azure, rather than calling docker in our
script, that is no longer the case. We can make the job a bit easier to
understand by running it more normally. The challenge here is that our
container normally runs as an unprivileged user for which we have
populated tools, while Azure creates and uses a new unprivileged user.
Copy what we need over to the new user.
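A sketch of the idea (the user names and paths here are assumptions for
illustration, not the exact ones used):

  # Copy prepared tool state from the container's original user to the
  # unprivileged user that Azure created, then fix ownership.
  sudo cp -a /home/uboot/.cargo /home/vsts/
  sudo chown -R vsts: /home/vsts/.cargo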
Signed-off-by: Tom Rini <trini@konsulko.com>
The problem we currently face with Azure jobs is that we're running out
of disk space on the runners as we build. There is no good way to split
approximately 1500 configurations across 10 jobs without being close to
or exceeding that limit. Split this into 29 jobs instead, with a goal
of averaging an hour per job. This split gets us close, but there are
still some challenging jobs to try and break up further. The list is
mostly alphabetized, but with some intentional changes (catch-alls are
last, mx/imx are together, SoC family splits are grouped together).
The average build time should be close to the same, but outliers can and
will happen.
Signed-off-by: Tom Rini <trini@konsulko.com>
- Bump to noble-20251013
- Include tools for sage lab, build TF-A for platforms there.
- Switch to distro provided trace-cmd, add libengine-pkcs11-openssl
- Use mirrors for GNU projects
- Switch to QEMU 10.1.x
Signed-off-by: Tom Rini <trini@konsulko.com>
QEMU comes with its own OpenSBI. For running a RISC-V virtual machine,
using one of qemu-riscv64_smode_defconfig or
qemu-riscv64_smode_acpi_defconfig is the natural choice.
Add the riscv64 smode configurations to the test scope.
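A minimal invocation sketch; QEMU's bundled OpenSBI is the default BIOS
for the 'virt' machine, so U-Boot is entered in S-mode:

  # No explicit -bios needed: 'virt' defaults to the bundled OpenSBI
  qemu-system-riscv64 -M virt -nographic -kernel u-boot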
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
The changes here are that we need to ensure Python setuptools is in our
build virtual environments, as it will no longer come in via Python
itself, even in a virtual environment. As part of this, ensure
setuptools is in our cache and also include pytest-azurepipelines, as
we should have been doing. Next, we move away from using apt-key
directly and move that stanza towards the rest of the apt work. This
also lets us drop directly installing gnupg2. These steps are not
strictly required for 24.04 but will be for later releases and are
valid now. Finally, we drop the unused PYTHONPATH ENV line.
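Inside an already-activated build venv, the addition amounts to (a
sketch; the exact package list in CI is longer):

  # setuptools no longer ships automatically with newer Python venvs
  pip install setuptools pytest-azurepipelines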
In order to use these containers, however, we need to stop running the
event_dump test, as the 'addr2line' tool provided by binutils is no
longer able to decode those specific events in most cases. As this is a
problem with binutils that has been present for some time now,
disabling the test until someone has time to work this out with
upstream seems reasonable.
Signed-off-by: Tom Rini <trini@konsulko.com>
Support for using "u-boot,dm-..." rather than "bootph-..." has been
deprecated since February 2023. Any platforms using this have had a
console message saying to migrate by 2023.07. Go and remove all support
here now, for the v2026.01 release.
The results of this change that aren't clear from the above are that we
still have a checkpatch.pl error message, and that we document in
doc/develop/spl.rst that these properties have been migrated since
2023. We also change the key2dtsi.py tool to use the correct bootph
phase rather than the legacy phase.
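For any out-of-tree platforms still carrying the old tags, the
documented mapping can be applied mechanically; a sketch, with a
hypothetical file name:

  # Rewrite legacy "u-boot,dm-..." tags to their "bootph-..." equivalents
  sed -i \
      -e 's/u-boot,dm-pre-reloc/bootph-all/g' \
      -e 's/u-boot,dm-pre-proper/bootph-some-ram/g' \
      -e 's/u-boot,dm-spl/bootph-pre-ram/g' \
      -e 's/u-boot,dm-tpl/bootph-pre-sram/g' \
      arch/arm/dts/my-board-u-boot.dtsi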
Signed-off-by: Tom Rini <trini@konsulko.com>
In Azure, older pipelines such as ours do not default to a shallow
fetch but rather do a complete clone. This introduces a marginal time
increase in each task but, more importantly, takes up significant disk
space. We are now getting warnings in some cases about using more than
95% of our available disk space, so take this as a first easy step
towards resolving that problem.
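In Azure Pipelines YAML this corresponds to setting fetchDepth on the
checkout step; the git-level equivalent of a shallow fetch is:

  # Fetch only the most recent commit instead of the full history
  git fetch --depth=1 origin master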
Signed-off-by: Tom Rini <trini@konsulko.com>
This also incorporates the following commits to the Dockerfile:
da7942de29 Dockerfile: remove Python 2.7
183299d9a4 docker: add OP-TEE and TF-A build for testing Firmware Handoff
Signed-off-by: Tom Rini <trini@konsulko.com>
Add qemu_arm64_tfa_fw_handoff test entries to azure and gitlab
pipelines.
Signed-off-by: Raymond Mao <raymond.mao@linaro.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
Check for the existence of bl1 and fip in:
1. /opt/tf-a/${board_type}_${board_ident}; if it does not exist, then
2. /opt/tf-a/${board_type}
This change allows testing with a TF-A build for a specific board ID
only.
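A sketch of the lookup logic, using the variable names from above:

  # Prefer the board-ID specific TF-A build, fall back to the generic one
  if [ -f "/opt/tf-a/${board_type}_${board_ident}/bl1.bin" ]; then
      tfa_dir="/opt/tf-a/${board_type}_${board_ident}"
  else
      tfa_dir="/opt/tf-a/${board_type}"
  fi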
Signed-off-by: Raymond Mao <raymond.mao@linaro.org>
- Update to Ubuntu "Jammy" 20250714 tag
- Update to current Dockerfile which brings us QEMU 10.0.2 and newer
coreboot and pulls in lz4 via the non-legacy package name.
Signed-off-by: Tom Rini <trini@konsulko.com>
This series from me brings CI up to using QEMU 10.0.2 for platforms. We
need to disable one test for now while a report to upstream QEMU is
resolved, and we also now need to update coreboot in order to be able
to build a version of it non-interactively (source locations have
changed).
Link: https://lore.kernel.org/r/20250716001539.2483390-1-trini@konsulko.com
Changes in upstream QEMU have led to this specific configuration of the
sifive_unleashed platform not working in QEMU. Until this can be
root-caused and resolved, disable this test for now.
Link: https://gitlab.com/qemu-project/qemu/-/issues/2945
Signed-off-by: Tom Rini <trini@konsulko.com>
In order to unblock this CI workflow for the moment, switch to using a
mirror of u-boot-test-hooks on GitHub rather than source.denx.de.
Signed-off-by: Tom Rini <trini@konsulko.com>
Adriano Cordova <adrianox@gmail.com> says:
Enable HTTP server in CI to support HTTP tests in pytest
QEMU does not emulate an HTTP server, unlike other services such as DHCP or TFTP.
To enable HTTP tests during CI runs, start a simple Python HTTP server
on port 80. This allows tests that require HTTP access to run.
The HTTP server is launched on the host. For QEMU environments launched
with '-netdev user' this means that the HTTP server runs alongside DHCP
at 10.0.2.2. HTTP testing needs to be explicitly enabled with
env__efi_helloworld_net_http_test_skip = False.
We also default `WGET=y` in `ARCH_QEMU` configurations so that these HTTP
tests are included automatically when using QEMU in CI.
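A minimal way to provide such a server from the host (binding port 80
normally needs elevated privileges):

  # With '-netdev user', the guest reaches this server at http://10.0.2.2/
  sudo python3 -m http.server 80 &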
Link: https://lore.kernel.org/r/20250516085256.30386-1-adriano.cordova@canonical.com
As noted by Quentin, in CI we should at least be pinning the version of
the pytest that we install. To avoid problems later, go with using the
whole requirements file. Furthermore, our documentation building for
readthedocs must also have pytest, so install the requirements file
there as well.
Reported-by: Quentin Schulz <quentin.schulz@cherry.de>
Signed-off-by: Tom Rini <trini@konsulko.com>
In order to easily document pytests, we need to include the autodoc
extension. We also need to make sure that for building the docs, CI
includes pytest and that we have PYTHONPATH configured such that it
will find all of the tests and related files. Finally, we need to have
our comments in the test files be in proper pydoc format in order to be
included in the output.
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
This adds the vexpress_fvp and vexpress_fvp_bloblist platforms to the
list of platforms we test via emulator in CI. In order to do this we
first need our container to have TF-A builds for the vexpress_fvp
platform, both with and without transfer list support, as well as
"telnet" installed so that we can access the console. In the CI files
we check for the existence of /opt/tf-a/${TEST_PY_BD} and, if found,
copy bl1.bin and fip.bin to /tmp and set the variables needed to later
run FVP.
Note that we currently disable the hostfs (semihosting) tests as they
trigger a bug in FVP. This has been reported upstream, and the tests
can be re-enabled once it is fixed.
Reviewed-by: Harrison Mutai <harrison.mutai@arm.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Using some form of sandbox with Python modules is a long-standing best
practice with the language. There are a number of ways to create a
Python sandbox. At this point in time, the Python community seems to be
moving towards using the "venv" module provided with Python rather than
a separate tool. To match that we make the following changes (see the
sketch after this list):
- Refer to a "Python sandbox" rather than virtualenv in comments, etc.
- Install the python3-venv package in our container and not virtualenv.
- In our CI files, invoke "python -m venv" rather than "virtualenv".
- In documentation, tell users to install python3-venv and not
  virtualenv.
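The CI-level change amounts to (a sketch):

  # Before: virtualenv -p /usr/bin/python3 /tmp/venv
  python3 -m venv /tmp/venv
  . /tmp/venv/bin/activate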
Signed-off-by: Tom Rini <trini@konsulko.com>
Binman includes a good set of tests covering all of its functionality,
including a code-coverage test.
However, to date the code-coverage test has not been checked
automatically by CI, relying on people to run 'binman test -T'
themselves.
Plug the gap to avoid bugs creeping in in the future.
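The check that CI now runs is the one named above, using the in-tree
tool:

  # Run the binman testsuite, including the code-coverage check
  ./tools/binman/binman test -T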
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
The CI image does not ship with all tools required for the binman tests.
Have binman build the missing tools.
Signed-off-by: Leonard Anderweit <l.anderweit@phytec.de>
Outside of the version changes here, the other visible change is that
we tell grub that riscv64 does not have "large model" support. Without
this change the resulting mkimage is non-functional. This is already
known upstream.
Link: https://savannah.gnu.org/bugs/?65909
Signed-off-by: Tom Rini <trini@konsulko.com>
Currently, this platform is failing in CI for seemingly
platform-specific reasons. For now, remove it from CI until the
maintainers have a chance to look into it.
Signed-off-by: Tom Rini <trini@konsulko.com>
Tom Rini <trini@konsulko.com> says:
A challenge we've run into is making it easier for more people to use
the various Python tools that we include in the tree. Part of the
problem is that when we have a requirements.txt file, aside from the
doc one we share with the kernel, I created it using "pip freeze". And
while this might have been a best (or at least OK) practice at the
time, that's no longer the case and is why our files have so many
things in them. What this series does is create multiple files, one per
project/tool, and then has CI install them as needed. There are a few
places here where this means that we update the requirements as well,
but we keep a few big things where they are currently. This is because
updating them introduces problems of their own and dealing with that
would best be a follow-up series. I've put this through GitLab and
Azure to make sure everything is still going fine on both platforms.
Link: https://lore.kernel.org/r/20250205000743.949790-1-trini@konsulko.com
We can invoke pip once to install the various requirements.txt files
that we need rather than invoking the tool multiple times.
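pip accepts multiple -r options in a single invocation; a sketch with
illustrative file paths:

  pip install -r test/py/requirements.txt \
              -r tools/binman/requirements.txt \
              -r tools/buildman/requirements.txt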
Signed-off-by: Tom Rini <trini@konsulko.com>
We should install all of our requirements.txt files after starting the
virtualenv rather than ad-hoc throughout each test.
Signed-off-by: Tom Rini <trini@konsulko.com>
Before we invoke pip we should always have first created and started our
virtualenv. This was done most of the time, but not always.
Signed-off-by: Tom Rini <trini@konsulko.com>
Without setting the shell flag to exit immediately when a command exits
with a non-zero status, we can have a situation where the htmldocs
target fails with an error but the job succeeds, due to infodocs
passing and being the last build target.
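The fix is the standard errexit flag at the top of the script:

  # Abort the job as soon as any build target fails
  set -e
  make htmldocs
  make infodocs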
Signed-off-by: Tom Rini <trini@konsulko.com>
Execution time varies widely among the existing tests. This provides a
way to produce a summary of the time taken for each test, along with a
histogram.
This is enabled with the --timing flag.
Enable it for sandbox in CI.
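A usage sketch (only --timing is new here; the other options are the
usual test.py arguments):

  ./test/py/test.py --bd sandbox --build --timing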
Example:
Duration : Number of tests
======== : ========================================
<1ms : 1
<8ms : 1
<20ms : # 20
<30ms : ######## 127
<50ms : ######################################## 582
<75ms : ####### 102
<100ms : ## 39
<200ms : ##### 86
<300ms : # 29
<500ms : ## 42
<750ms : # 16
<1.0s : # 15
<2.0s : # 23
<3.0s : 13
<5.0s : 9
<7.5s : 1
<10.0s : 6
<20.0s : 12
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Tom Rini <trini@konsulko.com>
The CI uses the following command to launch xilinx_versal_virt_defconfig:
qemu-system-aarch64 -M xlnx-versal-virt \
-display none -m 4G -serial mon:stdio \
-device loader,file=u-boot,cpu-num=0
'usb start' or invoking eth_bootdev_hunt leads to a crash when function
dwc3_core_init() tries to access a register at offset 0xc704 (DWC3_DCTL)
relative to the register start address 0xfe20c100.
Disable CONFIG_USB_DWC3 in the CI until the driver problem is fixed.
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Tom Rini <trini@konsulko.com> says:
Hey all,
This is picking up Simon's v5 of the above-named series and making a few
more changes so that the follow-up series I have leads to arm64 being
supported for almost all jobs. To quote Simon's cover letter:
All gitlab runners are currently amd64 machines. This series attempts to
create a docker image which can also support arm64 so that sandbox tests
can be run on it.
The TARGET_... environment variables for grub could perhaps be adjusted,
using the new variables, but I have not done that for now.
Adding to what Simon said, we now build grub for all architectures, as
the reason to install it was to be able to use the binaries in QEMU.
That won't provide us with amd64 binaries on arm64 hosts, so we can't
use that shortcut anymore.
Link: https://lore.kernel.org/r/20241127172247.1488685-1-trini@konsulko.com
For consistency now, and for future ease of testing with non-amd64
hosts, build grub for all architectures rather than relying on host
binaries for i386/x86_64.
Reviewed-by: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
Patrick Rudolph <patrick.rudolph@9elements.com> says:
Based on the existing work done by Simon Glass, this series adds
support for booting aarch64 devices using ACPI only.
As a first target, QEMU SBSA support is added, which relies on ACPI
alone to boot an OS. As a secondary target, the Raspberry Pi 4 was
used, which is broadly available and allows easy testing of the
proposed solution.
The series is split into ACPI cleanups and code movements, adding
Arm-specific ACPI tables, and finally SoC and mainboard related changes
to boot Linux on the QEMU SBSA and RPi4. Currently only the mandatory
ACPI tables are supported, allowing Linux to boot without errors.
The QEMU SBSA support is feature complete and provides the same
functionality as the EDK2 implementation.
The changes were tested on real hardware as well as on QEMU v9.0:
qemu-system-aarch64 -machine sbsa-ref -nographic -cpu cortex-a57 \
-pflash secure-world.rom \
-pflash unsecure-world.rom
qemu-system-aarch64 -machine raspi4b -kernel u-boot.bin -cpu cortex-a72 \
-smp 4 -m 2G -drive file=raspbian.img,format=raw,index=0 \
-dtb bcm2711-rpi-4-b.dtb -nographic
Tested against FWTS V24.03.00.
Known issues:
- The QEMU rpi4 support is currently limited as it doesn't emulate PCI,
USB or ethernet devices!
- The SMP bringup doesn't work on RPi4, but works in QEMU (Possibly
cache related).
- PCI on RPI4 isn't working on real hardware since the pcie_brcmstb
Linux kernel module doesn't support ACPI yet.
Link: https://lore.kernel.org/r/20241023132116.970117-1-patrick.rudolph@9elements.com
Add QEMU's SBSA ref board to the Azure pipelines and GitLab CI to run tests on it.
TEST: Run on Azure pipelines and confirmed that tests succeed.
Signed-off-by: Patrick Rudolph <patrick.rudolph@9elements.com>
Reviewed-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
Soon Azure will be removing the macOS-12 container, following their
normal support schedule. Move us to macOS-14 so we won't have problems
there for a while. At the same time, our Windows container is the
oldest still supported, so move to the newer option. Finally, Ubuntu
22.04 is currently the middle option, but 24.04 should be fine.
Link: https://github.com/actions/runner-images/issues/10721
Signed-off-by: Tom Rini <trini@konsulko.com>