[improve](compaction) Use segment footer raw_data_bytes for first-time batch size estimation #62263
Open

Yukang-Lian wants to merge 6 commits into apache:master
Conversation
…e batch size estimation

When vertical compaction runs for the first time on a tablet (no historical sampling data), `estimate_batch_size()` previously returned a hardcoded value of 992, which could cause OOM for wide tables or be too conservative for narrow tables.

This change uses `ColumnMetaPB.raw_data_bytes` from the segment footer to compute a per-row size estimate for the first compaction. `raw_data_bytes` records the original data size before encoding, which closely approximates the runtime `Block::bytes()`. Subsequent compactions continue to use the existing historical sampling mechanism unchanged.

Key design decisions:

- Footer collection only runs when needed (no manual override, and at least one column group lacks historical sampling data)
- Variant columns (`raw_data_bytes=0`, TODO) trigger a fallback to 992
- Structural overhead (+1 null map, +8 offset) is only added for scalar columns with actual footer data
- Complex types (ARRAY/MAP/STRUCT) use `raw_data_bytes` directly without structural compensation, since it already includes recursive sub-writer data
- Historical sampling now uses `Block::allocated_bytes()` instead of `bytes()` for more accurate memory estimation
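The first-compaction path described above can be sketched as follows. This is a hypothetical illustration, not the actual Doris implementation: the struct, function names, and the memory-budget parameter are invented for clarity, and only the constants from the PR description (the legacy 992 fallback, per-column `raw_data_bytes / rows` division) are taken from the source.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Legacy hardcoded batch size used as the fallback (from the PR description).
constexpr int64_t kFallbackBatchSize = 992;

// Hypothetical per-column footer stats, mirroring ColumnMetaPB fields.
struct ColumnFooterStat {
    int64_t raw_data_bytes = 0;  // original (pre-encoding) data size
    int64_t rows_with_data = 0;  // rows contributing to raw_data_bytes
};

// Sum per-column per-row estimates; return -1 when the stats are unusable
// (e.g. a variant column still reports raw_data_bytes == 0).
int64_t estimate_per_row_bytes(const std::vector<ColumnFooterStat>& cols) {
    int64_t per_row = 0;
    for (const auto& c : cols) {
        if (c.raw_data_bytes == 0 || c.rows_with_data == 0) return -1;
        per_row += c.raw_data_bytes / c.rows_with_data;
    }
    return per_row;
}

// First-time batch size: a memory budget divided by the per-row estimate,
// falling back to the legacy constant when footer data is unusable.
int64_t estimate_batch_size(const std::vector<ColumnFooterStat>& cols,
                            int64_t memory_budget_bytes) {
    int64_t per_row = estimate_per_row_bytes(cols);
    if (per_row <= 0) return kFallbackBatchSize;
    return memory_budget_bytes / per_row;
}
```

On subsequent compactions the historical sampling path would be used instead; this sketch covers only the footer-based first-run estimate.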
Contributor

Thank you for your contribution to Apache Doris. Please clearly describe your PR:
Collaborator (Author)

run buildall
Contributor

BE Regression && UT Coverage Report: increment line coverage, increment coverage report
…ion init

Log per_row, sample_bytes, and sample_rows immediately after all merge inputs finish loading their first block, before the actual merge starts. This helps diagnose memory issues by showing the actual per-row memory size at init time.
The log was added to help diagnose vertical compaction memory issues. Investigation is complete; the existing 'estimate batch size' log in merger.cpp already provides per-group batch_size and per_row info for daily monitoring.
Collaborator (Author)

run buildall
Contributor

BE UT Coverage Report: increment line coverage, increment coverage report

Contributor

BE Regression && UT Coverage Report: increment line coverage, increment coverage report
Summary
- `estimate_batch_size()` previously returned a hardcoded value of 992, which could cause OOM for wide tables or be too conservative for narrow tables
- This change uses `ColumnMetaPB.raw_data_bytes` from the segment footer to compute a per-row size estimate for the first compaction
- `raw_data_bytes` records the original data size before encoding, which closely approximates the runtime `Block::bytes()`
- Historical sampling now uses `Block::allocated_bytes()` instead of `bytes()` for more accurate memory estimation (`size()` vs `capacity()`)

Key design decisions

- Scalar columns: `raw_data_bytes / rows_with_data` + structural compensation (+1 null map, +8 offset)
- Complex types (ARRAY/MAP/STRUCT): `raw_data_bytes / rows_with_data` with no compensation (it already includes recursive sub-writer data)
- Variant columns: fall back to 992 (`raw_data_bytes=0 // TODO` in writer)

Performance safeguards

- Footer collection is skipped when `compaction_batch_size` is manually set

Test plan

- `TestFirstCompactionUsesFooterEstimation` unit test passes
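The per-column estimation rule listed under "Key design decisions" can be sketched as below. This is an illustrative sketch, not the actual Doris code: the enum, function name, and overhead constants' names are invented here; only the rule itself (+1 byte null map and +8 byte offset for scalars, raw bytes used directly for complex types, fallback for variant columns) comes from the PR description.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical column classification for illustration purposes.
enum class ColumnKind { SCALAR, ARRAY, MAP, STRUCT, VARIANT };

// Structural overhead added only for scalar columns with real footer data.
constexpr int64_t kNullMapBytesPerRow = 1;  // +1 byte null map
constexpr int64_t kOffsetBytesPerRow = 8;   // +8 byte offset column

// Per-row estimate for one column, or -1 when unusable (variant columns
// still write raw_data_bytes=0, so the caller must fall back to 992).
int64_t per_row_for_column(ColumnKind kind, int64_t raw_data_bytes,
                           int64_t rows_with_data) {
    if (kind == ColumnKind::VARIANT || raw_data_bytes == 0 ||
        rows_with_data == 0) {
        return -1;  // caller falls back to the legacy constant
    }
    int64_t base = raw_data_bytes / rows_with_data;
    if (kind == ColumnKind::SCALAR) {
        // Scalars need structural compensation on top of raw data.
        return base + kNullMapBytesPerRow + kOffsetBytesPerRow;
    }
    // ARRAY/MAP/STRUCT: raw_data_bytes already includes recursive
    // sub-writer data, so no extra compensation is applied.
    return base;
}
```

The design choice here is that compensation is asymmetric by column kind: complex types account for their children inside `raw_data_bytes`, so adding fixed overhead again would double-count it.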