
refactor: optimize semantic token processing and caching #1641

Open

mattn wants to merge 1 commit into master from bits-tune

Conversation

mattn (Collaborator) commented Jan 22, 2026


This pull request reduces the re-rendering time for a 20,000-line JSON file in kakehashi from 3 seconds to 1.5 seconds.

- Extract token decoding to avoid repeated calls in handle_semantic_tokens_response (see the first sketch below)
- Replace array concatenation with extend() for better performance when accumulating highlights (second sketch below)
- Cache precomputed line/character deltas in decode_tokens for efficiency (also covered by the first sketch)
- Simplify endpos calculation by avoiding unnecessary object mutation
- Implement memoization for highlight group lookups via s:hl_group_cache (third sketch below)
- Remove debug logging call in apply_highlights
- Fix indentation inconsistency in return statement
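
The first and third items go together. Per the LSP specification, semantic tokens arrive as one flat integer array in which each token is a 5-tuple of deltaLine, deltaStart, length, tokenType, and tokenModifiers, with positions encoded relative to the previous token. The following is a minimal sketch, not the PR's actual code, of a decode_tokens-style single pass that resolves the deltas to absolute positions once, so handle_semantic_tokens_response can reuse the result instead of re-decoding:

```vim
" Hedged sketch: one decoding pass over the LSP semantic-token data.
" Only the 5-tuple layout comes from the LSP spec; all names besides
" decode_tokens are illustrative.
function! s:decode_tokens(data) abort
  let l:tokens = []
  let l:line = 0
  let l:char = 0
  let l:i = 0
  while l:i < len(a:data)
    let l:delta_line = a:data[l:i]
    let l:delta_start = a:data[l:i + 1]
    " Resolve the delta encoding once: deltaStart is relative to the
    " previous token only when both tokens sit on the same line.
    let l:line += l:delta_line
    let l:char = l:delta_line == 0 ? l:char + l:delta_start : l:delta_start
    call add(l:tokens, {
          \ 'line': l:line,
          \ 'char': l:char,
          \ 'length': a:data[l:i + 2],
          \ 'type': a:data[l:i + 3],
          \ 'modifiers': a:data[l:i + 4],
          \ })
    let l:i += 5
  endwhile
  return l:tokens
endfunction
```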
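
For the extend() item: in Vim script, `let list = list + other` allocates a brand-new list on every iteration, which turns highlight accumulation into quadratic work on large files, whereas extend() appends in place. A sketch assuming a hypothetical s:token_to_highlights() helper:

```vim
" Illustrative only; s:token_to_highlights() is a stand-in for whatever
" converts one decoded token into highlight entries.
function! s:accumulate_highlights(tokens) abort
  let l:highlights = []
  for l:token in a:tokens
    " Before: let l:highlights = l:highlights + s:token_to_highlights(l:token)
    " That copies the whole accumulator each time; extend() mutates it.
    call extend(l:highlights, s:token_to_highlights(l:token))
  endfor
  return l:highlights
endfunction
```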
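
For the s:hl_group_cache item, a hedged sketch of the memoization; only the cache name appears in the PR description, while the key scheme and the s:resolve_hl_group() helper are assumptions:

```vim
" Cache highlight-group lookups keyed by (type, modifiers); resolving a
" group repeats string building and lookups, so do it once per pair.
let s:hl_group_cache = {}

function! s:get_hl_group(type, modifiers) abort
  let l:key = a:type . ':' . a:modifiers
  if !has_key(s:hl_group_cache, l:key)
    let s:hl_group_cache[l:key] = s:resolve_hl_group(a:type, a:modifiers)
  endif
  return s:hl_group_cache[l:key]
endfunction
```

A cache like this assumes the server's token legend stays fixed for the session; it would need to be cleared if the legend ever changes.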