Conversation


@Gelbpunkt Gelbpunkt commented Nov 27, 2025

This refactors our timer interrupt handling into a separate module through which all timer interrupts are configured and handled. It allows us to track why a timer interrupt was fired.

We use this to wake the network task's waker only when necessary: either because we sent some network packets, because a network device driver received an interrupt, or because a timer interrupt that we configured for polling smoltcp at a later point was handled.

Closes #2126.
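The idea described above can be sketched as follows. This is an illustrative model, not the kernel's actual implementation: `Source` and `create_timer_abs` are names taken from the diff below, while `TimerQueue` and `handle_interrupt` are hypothetical stand-ins for the new timer module.

```rust
// Illustrative sketch: every armed timer remembers *why* it was armed, so the
// interrupt handler can decide whether the network task's waker needs waking.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Source {
    Scheduler,
    Network,
}

struct TimerQueue {
    // (absolute deadline in ticks, reason the timer was armed)
    armed: Vec<(u64, Source)>,
}

impl TimerQueue {
    fn new() -> Self {
        Self { armed: Vec::new() }
    }

    // Arm a timer at an absolute deadline, remembering its source.
    fn create_timer_abs(&mut self, source: Source, deadline: u64) {
        self.armed.push((deadline, source));
        self.armed.sort_by_key(|&(deadline, _)| deadline);
    }

    // Handle a timer interrupt at time `now`: drop all expired timers and
    // report whether any of them was armed on behalf of the network stack.
    fn handle_interrupt(&mut self, now: u64) -> bool {
        let mut wake_network = false;
        self.armed.retain(|&(deadline, source)| {
            if deadline <= now {
                if source == Source::Network {
                    wake_network = true;
                }
                false // expired, remove it
            } else {
                true // still pending
            }
        });
        wake_network
    }
}
```

With this bookkeeping, a scheduler-only timer interrupt no longer wakes the network waker spuriously.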

@mkroening mkroening self-assigned this Nov 27, 2025
@Gelbpunkt Gelbpunkt force-pushed the wakers branch 2 times, most recently from 9c652d1 to c03dc8e Compare November 27, 2025 11:27

@github-actions github-actions bot left a comment


Benchmark Results

| Benchmark | Current: d5f9bef | Previous: 4e2eb9e | Performance Ratio |
| --- | --- | --- | --- |
| startup_benchmark Build Time | 99.72 s | 98.85 s | 1.01 |
| startup_benchmark File Size | 0.87 MB | 0.87 MB | 1.00 |
| Startup Time - 1 core | 0.93 s (±0.03 s) | 0.93 s (±0.02 s) | 1.00 |
| Startup Time - 2 cores | 0.94 s (±0.03 s) | 0.94 s (±0.03 s) | 0.99 |
| Startup Time - 4 cores | 0.97 s (±0.03 s) | 0.97 s (±0.03 s) | 1.00 |
| multithreaded_benchmark Build Time | 96.32 s | 101.75 s | 0.95 |
| multithreaded_benchmark File Size | 0.96 MB | 0.97 MB | 1.00 |
| Multithreaded Pi Efficiency - 2 Threads | 86.37 % (±10.70 %) | 88.94 % (±9.28 %) | 0.97 |
| Multithreaded Pi Efficiency - 4 Threads | 44.39 % (±3.72 %) | 43.84 % (±4.40 %) | 1.01 |
| Multithreaded Pi Efficiency - 8 Threads | 25.38 % (±2.39 %) | 25.07 % (±1.98 %) | 1.01 |
| micro_benchmarks Build Time | 93.09 s | 100.40 s | 0.93 |
| micro_benchmarks File Size | 0.97 MB | 0.97 MB | 1.00 |
| Scheduling time - 1 thread | 65.31 ticks (±3.15 ticks) | 69.09 ticks (±3.48 ticks) | 0.95 |
| Scheduling time - 2 threads | 36.32 ticks (±5.00 ticks) | 40.35 ticks (±6.50 ticks) | 0.90 |
| Micro - Time for syscall (getpid) | 3.18 ticks (±0.45 ticks) | 3.35 ticks (±0.55 ticks) | 0.95 |
| Memcpy speed - (built_in) block size 4096 | 68886.45 MByte/s (±49215.60 MByte/s) | 60564.43 MByte/s (±43274.44 MByte/s) | 1.14 |
| Memcpy speed - (built_in) block size 1048576 | 29769.09 MByte/s (±24441.63 MByte/s) | 29279.03 MByte/s (±24148.63 MByte/s) | 1.02 |
| Memcpy speed - (built_in) block size 16777216 | 28839.34 MByte/s (±23981.31 MByte/s) | 25925.51 MByte/s (±21792.33 MByte/s) | 1.11 |
| Memset speed - (built_in) block size 4096 | 70040.68 MByte/s (±49941.51 MByte/s) | 61139.18 MByte/s (±43619.65 MByte/s) | 1.15 |
| Memset speed - (built_in) block size 1048576 | 30555.72 MByte/s (±24887.92 MByte/s) | 30025.71 MByte/s (±24565.41 MByte/s) | 1.02 |
| Memset speed - (built_in) block size 16777216 | 29615.78 MByte/s (±24421.43 MByte/s) | 26704.16 MByte/s (±22275.17 MByte/s) | 1.11 |
| Memcpy speed - (rust) block size 4096 | 60951.24 MByte/s (±45075.93 MByte/s) | 60319.96 MByte/s (±44078.65 MByte/s) | 1.01 |
| Memcpy speed - (rust) block size 1048576 | 29659.45 MByte/s (±24392.31 MByte/s) | 29314.22 MByte/s (±24161.97 MByte/s) | 1.01 |
| Memcpy speed - (rust) block size 16777216 | 28861.21 MByte/s (±24012.55 MByte/s) | 24865.71 MByte/s (±20917.45 MByte/s) | 1.16 |
| Memset speed - (rust) block size 4096 | 62074.50 MByte/s (±45808.90 MByte/s) | 60870.22 MByte/s (±44412.47 MByte/s) | 1.02 |
| Memset speed - (rust) block size 1048576 | 30433.79 MByte/s (±24837.14 MByte/s) | 30010.85 MByte/s (±24572.25 MByte/s) | 1.01 |
| Memset speed - (rust) block size 16777216 | 29637.88 MByte/s (±24452.93 MByte/s) | 25601.25 MByte/s (±21378.75 MByte/s) | 1.16 |
| alloc_benchmarks Build Time | 91.81 s | 92.98 s | 0.99 |
| alloc_benchmarks File Size | 0.94 MB | 0.94 MB | 1.00 |
| Allocations - Allocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Deallocation success | 100.00 % | 100.00 % | 1 |
| Allocations - Pre-fail Allocations | 100.00 % | 100.00 % | 1 |
| Allocations - Average Allocation time | 10433.65 Ticks (±171.23 Ticks) | 9692.28 Ticks (±125.85 Ticks) | 1.08 |
| Allocations - Average Allocation time (no fail) | 10433.65 Ticks (±171.23 Ticks) | 9692.28 Ticks (±125.85 Ticks) | 1.08 |
| Allocations - Average Deallocation time | 983.04 Ticks (±354.22 Ticks) | 862.97 Ticks (±109.24 Ticks) | 1.14 |
| mutex_benchmark Build Time | 91.77 s | 92.38 s | 0.99 |
| mutex_benchmark File Size | 0.97 MB | 0.97 MB | 1.00 |
| Mutex Stress Test Average Time per Iteration - 1 Threads | 12.52 ns (±0.61 ns) | 12.72 ns (±0.63 ns) | 0.98 |
| Mutex Stress Test Average Time per Iteration - 2 Threads | 95.30 ns (±4.88 ns) | 13.12 ns (±0.97 ns) | 7.26 |

This comment was automatically generated by workflow using github-action-benchmark.

@Gelbpunkt Gelbpunkt force-pushed the wakers branch 8 times, most recently from 390c379 to 466882e Compare December 22, 2025 09:45
@mkroening mkroening self-requested a review December 22, 2025 10:12
@Gelbpunkt Gelbpunkt marked this pull request as ready for review December 22, 2025 10:13
@Gelbpunkt Gelbpunkt force-pushed the wakers branch 3 times, most recently from f48f443 to 839554d Compare January 8, 2026 10:51

@mkroening mkroening left a comment


Thanks a lot; this is looking great! :)

Comment on lines +586 to +584
```rust
		create_timer_abs(Source::Scheduler, wt);
		return;
	}

	cursor.move_next();
}

set_oneshot_timer();
create_timer_abs(Source::Scheduler, wt);
```

The docs say that wakeup_time is relative. This was also done wrongly before, but maybe we should follow up to fix this.
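If `wakeup_time` is documented as relative, the follow-up fix would presumably convert it to an absolute deadline before arming the timer. A minimal sketch, assuming tick-based `u64` timestamps (the helper name is hypothetical, not the kernel's API):

```rust
// Illustrative only: a *relative* wakeup time must be added to the current
// tick count before being passed to an absolute-deadline API such as
// `create_timer_abs`, rather than being used as the deadline directly.
fn to_absolute_deadline(now_ticks: u64, wakeup_time_rel: u64) -> u64 {
    // Saturate instead of wrapping on overflow.
    now_ticks.saturating_add(wakeup_time_rel)
}
```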


```rust
self.isr_stat.acknowledge();

trace!("Waking network waker");
```

An alternative to duplicating the trace statement everywhere could be using #[track_caller] to log the caller inside WakerRegistration::wake. I am not sure about the performance implications, though.
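The `#[track_caller]` suggestion could look like the following sketch. `WakerRegistration` here is a stand-in for the kernel's type, and the method returns the caller's location only to make the behavior observable; in the kernel it would feed a `trace!` log instead.

```rust
use std::panic::Location;

struct WakerRegistration;

impl WakerRegistration {
    // With #[track_caller], Location::caller() reports the location of the
    // call site of `wake`, not of this function body, so a single log line
    // here replaces a trace statement duplicated at every caller.
    #[track_caller]
    fn wake(&self) -> &'static Location<'static> {
        let caller = Location::caller();
        // In the kernel: trace!("Waking network waker (from {caller})");
        caller
    }
}
```

One caveat the reviewer hints at: `#[track_caller]` passes an implicit location argument to every call, which is a small but nonzero cost on hot paths.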



Development

Successfully merging this pull request may close these issues.

Task wakeup timer is overwritten

2 participants