
Implement per-size-class lock sharding to reduce contention#120

Open
bpowers wants to merge 1 commit into master from bobby/sharded-global-locks

Conversation

@bpowers
Member

@bpowers bpowers commented Nov 28, 2025

  • Replace single _miniheapLock with per-size-class _miniheapLocks array
  • Add separate _largeAllocLock for large allocations (sizeClass == -1)
  • Add _arenaLock to protect shared arena/allocator state
  • Add AllLocksGuard RAII class for acquiring all locks in consistent order
  • Make _miniheapCount atomic for safe access under different locks
  • Update freeFor() to use per-size-class locks with proper ordering
  • Defer flushBinLocked() to meshing cycle to avoid lock ordering issues
  • Update mallctl(), getSize(), okToProceed() for new lock structure

Lock ordering: arena → large → sizeClass[0..N-1]
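The ordering above (arena → large → sizeClass[0..N-1]) can be sketched as an RAII guard in the style of the AllLocksGuard mentioned in the bullets. This is an illustrative sketch, not the actual Mesh code: the struct layout, the lock count, and the member names are assumptions.

```cpp
// Hypothetical sketch of an AllLocksGuard-style RAII helper: it acquires the
// arena lock, then the large-allocation lock, then each size-class lock in
// ascending index order, and releases everything in reverse on destruction.
// Because every path that needs all locks uses this same order, two threads
// can never acquire them in conflicting orders and deadlock.
#include <array>
#include <mutex>

constexpr int kNumSizeClasses = 25;  // assumed; the real count is allocator-specific

struct GlobalHeapLocks {
  std::mutex arenaLock;                                   // shared arena/allocator state
  std::mutex largeAllocLock;                              // sizeClass == -1
  std::array<std::mutex, kNumSizeClasses> miniheapLocks;  // one shard per size class
};

class AllLocksGuard {
public:
  explicit AllLocksGuard(GlobalHeapLocks &locks) : _locks(locks) {
    _locks.arenaLock.lock();               // 1. arena
    _locks.largeAllocLock.lock();          // 2. large allocations
    for (auto &m : _locks.miniheapLocks) { // 3. sizeClass[0..N-1], ascending
      m.lock();
    }
  }
  ~AllLocksGuard() {
    // Release in strictly reverse order of acquisition.
    for (auto it = _locks.miniheapLocks.rbegin(); it != _locks.miniheapLocks.rend(); ++it) {
      it->unlock();
    }
    _locks.largeAllocLock.unlock();
    _locks.arenaLock.unlock();
  }
  AllLocksGuard(const AllLocksGuard &) = delete;
  AllLocksGuard &operator=(const AllLocksGuard &) = delete;

private:
  GlobalHeapLocks &_locks;
};
```

A whole-heap operation such as a meshing cycle would take `AllLocksGuard guard(locks);` at its entry point, while fast paths take only the single shard they need.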

The idea is that this reduces lock contention in multi-threaded workloads by allowing operations on different size classes to proceed concurrently.
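A hedged sketch (not the actual Mesh source) of how a freeFor()-style path might dispatch onto the sharded locks: sizeClass == -1 routes to the dedicated large-allocation lock, while every other size class takes only its own entry in the lock array. All names and the size-class count here are illustrative assumptions.

```cpp
// Sketch of per-size-class lock sharding on the free path. Frees in
// different size classes touch different mutexes, so they no longer
// serialize on one global heap lock.
#include <array>
#include <mutex>

constexpr int kNumSizeClasses = 25;  // assumed; the real count is allocator-specific

std::mutex largeAllocLock;                              // guards large allocations (sizeClass == -1)
std::array<std::mutex, kNumSizeClasses> miniheapLocks;  // one lock per size class

void freeForSketch(int sizeClass) {
  if (sizeClass < 0) {
    // Large allocation: only the large-alloc lock is held, so small-object
    // frees in every size class can proceed concurrently with this one.
    std::lock_guard<std::mutex> guard(largeAllocLock);
    // ... return the allocation's pages to the arena ...
  } else {
    // Small allocation: take only this size class's shard.
    std::lock_guard<std::mutex> guard(miniheapLocks[sizeClass]);
    // ... push the freed object onto its miniheap's bin ...
  }
}
```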


🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
@emeryberger
Member

@bpowers is this still in progress?

