[opt](rowset) Aggregate non-MOW segment key bounds to reduce rowset meta size (#62604)
For non-MOW (duplicate / aggregate key) tables, per-segment key bounds
are not consumed on the read path — only the rowset-level [min, max] is
used by the reader and ordered compaction. In cloud mode, persisting
bounds for every segment can exceed FDB's value size limit on
`commit_rowset`.
Introduce an `enable_aggregate_non_mow_key_bounds` BE config (default
off). When enabled, non-MOW rowsets collapse per-segment bounds into a
single [overall_min, overall_max] entry at write time, and compaction
preserves this behavior. MOW rowsets always retain per-segment bounds —
their `lookup_row_key` path relies on them for delete bitmap
computation, and is guarded by a new DCHECK against aggregated input.
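The write-path decision can be sketched as follows. This is a hedged, simplified illustration: the config name mirrors the one introduced above, but `KeyBounds`, `finalize_key_bounds`, and the MOW flag parameter are stand-ins for this sketch, not the actual Doris classes.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for per-segment key bounds. Real Doris keys are
// encoded binary strings whose lexicographic order matches key order.
struct KeyBounds {
    std::string min_key;
    std::string max_key;
};

std::vector<KeyBounds> finalize_key_bounds(bool is_mow,
                                           bool enable_aggregate_non_mow_key_bounds,
                                           std::vector<KeyBounds> per_segment) {
    // MOW rowsets must keep per-segment bounds: the lookup_row_key path
    // depends on them for delete bitmap computation.
    if (is_mow || !enable_aggregate_non_mow_key_bounds || per_segment.empty()) {
        return per_segment;
    }
    // Non-MOW: collapse to a single [overall_min, overall_max] entry.
    KeyBounds overall = per_segment.front();
    for (const KeyBounds& kb : per_segment) {
        overall.min_key = std::min(overall.min_key, kb.min_key);
        overall.max_key = std::max(overall.max_key, kb.max_key);
    }
    return {overall};
}
```

With aggregation on, N segment entries shrink to one, which is what keeps the rowset meta under FDB's value size limit regardless of segment count.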
A new optional `segments_key_bounds_aggregated` flag is added to both
RowsetMetaPB and RowsetMetaCloudPB so consumers can distinguish
aggregated from per-segment layouts. Proto round-trip, pb_convert,
snapshot restore, and index builder all preserve both this flag and the
existing `segments_key_bounds_truncated` flag.
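The shape of the new flag can be sketched as below. The field name follows this description, but the field number and surrounding context are illustrative, not copied from the actual patch.

```protobuf
message RowsetMetaPB {
    // ... existing fields, including segments_key_bounds and
    // segments_key_bounds_truncated ...

    // True when segments_key_bounds holds a single aggregated
    // [overall_min, overall_max] entry instead of one entry per segment.
    // Field number here is illustrative only.
    optional bool segments_key_bounds_aggregated = 100;
}
```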
Correctness notes:
- `first_key/last_key` callers (`block_reader`, ordered compaction)
already bail out on overlapping rowsets, so for non-overlapping rowsets
the aggregated [min, max] equals seg[0].min / seg[last].max exactly.
- `merge_rowset_meta` (MOW partial-update publish) DCHECKs both sides
are non-aggregated.
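The first correctness note can be checked with a small sketch: for segments sorted by non-overlapping key ranges, aggregating is a no-op relative to what the reader consumes, since the overall min/max coincide with the first segment's min and the last segment's max. Types here are simplified stand-ins, not the Doris API.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for per-segment key bounds.
struct KeyBounds {
    std::string min_key;
    std::string max_key;
};

// Aggregate bounds across segments into a single [min, max].
KeyBounds aggregate(const std::vector<KeyBounds>& segs) {
    KeyBounds overall = segs.front();
    for (const KeyBounds& kb : segs) {
        overall.min_key = std::min(overall.min_key, kb.min_key);
        overall.max_key = std::max(overall.max_key, kb.max_key);
    }
    return overall;
}
```

For sorted, non-overlapping input the result equals `segs.front().min_key` / `segs.back().max_key` exactly, which is all that `first_key/last_key` callers use once they have ruled out overlapping rowsets.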