perf lock contention: Load kernel map before lookup

On some machines, symbol lookup ran into trouble when it tried to find
kernel symbols.  I think it's because the kernel module and kallsyms
maps get mixed up during map load and split.

Basically we want to make sure the kernel map is loaded before any
symbol lookup, and the code currently does that in
lock_contention_read().  But recently we added more lookups in
lock_contention_prepare(), which is called before _read().
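
For context, a simplified sketch of the call order (paraphrased from
perf's builtin-lock.c; the wrapper name and the elided steps are
illustrative, not verbatim):

	/* hypothetical wrapper showing why prepare() now needs the map */
	static int contention_flow_sketch(struct lock_contention *con)
	{
		/* runs first and now performs symbol lookups itself */
		if (lock_contention_prepare(con) < 0)
			return -1;

		/* ... enable BPF, run the workload, disable BPF ... */

		/* until this patch, the only place the kernel map was loaded */
		return lock_contention_read(con);
	}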

Also, the kernel map (kallsyms) may not be the first map in the group,
as on ARM.  Let's use machine__kernel_map() rather than just loading
the first map.
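
To illustrate (a minimal sketch; the NULL check and pr_debug() are
illustrative, the actual change below calls map__load() directly):

	/* resolve the kernel (kallsyms) map explicitly; e.g. on ARM it is
	 * not guaranteed to be the first entry in machine->kmaps */
	struct map *kmap = machine__kernel_map(con->machine);

	if (kmap == NULL || map__load(kmap) < 0)
		pr_debug("failed to load the kernel map\n");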

Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 688d2e8de2 ("perf lock contention: Add -l/--lock-addr option")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
commit 553d18c98a
parent 3528647874
Author: Namhyung Kim
Date:   2025-10-29 21:01:39 -07:00

@@ -184,6 +184,9 @@ int lock_contention_prepare(struct lock_contention *con)
 	struct evlist *evlist = con->evlist;
 	struct target *target = con->target;
 
+	/* make sure it loads the kernel map before lookup */
+	map__load(machine__kernel_map(con->machine));
+
 	skel = lock_contention_bpf__open();
 	if (!skel) {
 		pr_err("Failed to open lock-contention BPF skeleton\n");
@@ -749,9 +752,6 @@ int lock_contention_read(struct lock_contention *con)
 		bpf_prog_test_run_opts(prog_fd, &opts);
 	}
 
-	/* make sure it loads the kernel map */
-	maps__load_first(machine->kmaps);
-
 	prev_key = NULL;
 	while (!bpf_map_get_next_key(fd, prev_key, &key)) {
 		s64 ls_key;