22 changes: 9 additions & 13 deletions packages/dds/shared-object-base/src/sharedObject.ts
@@ -396,28 +396,24 @@ export abstract class SharedObjectCore<
return;
}

/* eslint-disable jsdoc/check-indentation */
/**
* Derived classes must override this to do custom processing on a 'bunch' of remote messages.
* Apply a 'bunch' of sequenced ops to this shared object.
* @remarks
* A 'bunch' is a group of messages that have the following properties:
* - They are all part of the same grouped batch, which entails:
* - They are contiguous in sequencing order.
* - They are all from the same client.
* - They are all based on the same reference sequence number.
* - They are not interleaved with messages from other clients.
* - They are not interleaved with messages from other DDS in the container.
* Derived classes should override this if they need to do custom processing on a 'bunch' of remote messages.
* @param messageCollection - The collection of messages to process.
* See {@link @fluidframework/runtime-definitions#IRuntimeMessageCollection} for what a "bunch" is.
*
* These ops have been sequenced by the service and now have a finalized ordering.
* They may be local or remote ops, but they cannot be ops that are still pending acknowledgement from the service.
*
* @param messageCollection - The 'bunch' of sequenced ops to apply to this shared object.
* @privateRemarks
* TODO: Performance: AB#59783: Allowing this to process more messages at once (more than the current definition of 'bunch') could improve performance of clients which fall behind,
* which is one of the most important performance sensitive scenarios.
*/
/* eslint-enable jsdoc/check-indentation */
protected abstract processMessagesCore(messagesCollection: IRuntimeMessageCollection): void;

Review thread on the TODO above:

Contributor: This TODO should not be added here. I can't say for sure whether the description in AB#59783 can actually be integrated into bunching or is a completely separate feature. The options mentioned there look fine in theory, but in practice they may not work or may not be worth doing, because they likely cover a very small set of scenarios. So I would not add this TODO here until at least the general direction of that work item is more defined. Adding it before then gives the impression that there is more coming, which may not be the case.

CraigMacomber (Contributor Author), Feb 26, 2026: All possible ways to implement that work item involve changing what is passed in here (regardless of whether it's larger bunches or multiple bunches). It will deliver a large and important performance improvement generally (making slow clients much more able to keep up with lots of tree edits), which is being funded as part of a feature I own (text), where I have measured it would already be a large win. I have an active work item tracking implementing this, and I plan to attempt it soon. If that attempt fails, I will clean this up along with the work item itself.

CraigMacomber (Contributor Author): For some context, I am making this PR because I was reading this code as the first step of starting that implementation work.

CraigMacomber (Contributor Author): Also, personally I think linking a work item for future work impacting a part of the code is a good thing to do, even if that work item isn't scheduled and likely to happen soon like this one is. If someone wants to know if/when it will happen, they can check the work item to see whether it is scheduled. Linking such items helps prevent redundant work items or implementations, and ensures that if someone does implement this, they will know about the item, coordinate with its owner (if any), and adjust or close it when they are done.
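To make the contract concrete, here is a minimal sketch of a class consuming a whole bunch in a `processMessagesCore`-style hook. The interfaces below are hypothetical stand-ins for the real `IRuntimeMessageCollection` / `IRuntimeMessagesContent` from `@fluidframework/runtime-definitions`, and `CounterLike`, `apply`, and the `{ increment }` op shape are invented purely for illustration.

```typescript
// Hypothetical minimal shapes standing in for the real Fluid interfaces;
// the actual types live in @fluidframework/runtime-definitions.
interface MessagesContent {
	readonly contents: unknown;
	readonly clientSequenceNumber: number;
}

interface MessageCollection {
	readonly local: boolean;
	readonly messagesContent: readonly MessagesContent[];
}

// Sketch of a DDS-like class that applies a whole bunch of sequenced ops at
// once, rather than one op at a time — the point of processMessagesCore.
class CounterLike {
	public value = 0;

	protected processMessagesCore(messagesCollection: MessageCollection): void {
		// All messages in a bunch share the same client and reference sequence
		// number, so any per-bunch setup cost is paid once, not per op.
		for (const { contents } of messagesCollection.messagesContent) {
			this.value += (contents as { increment: number }).increment;
		}
	}

	// Public wrapper so the example below can drive the protected hook.
	public apply(collection: MessageCollection): void {
		this.processMessagesCore(collection);
	}
}
```

A bunch of two increment ops applied through `apply` updates the counter in a single call, illustrating why batching amortizes per-message overhead.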

/**
* Called when the object has disconnected from the delta stream.
*/
protected abstract onDisconnect(): void;

/**
19 changes: 19 additions & 0 deletions packages/runtime/runtime-definitions/src/protocol.ts
@@ -104,6 +104,25 @@ export interface IRuntimeMessagesContent {

/**
* A collection of messages that are processed by the runtime.
* @remarks
* Some places which use this interface may have additional constraints on the messages
* which are collected together, and those cases have special names:
*
* A "grouped batch" requires that all contained messages:
* - Are contiguous in sequencing order.
* - Are all from the same client.
* - Are all based on the same reference sequence number.
* - Are not interleaved with messages from other clients.
* - Were actually grouped together into a "grouped batch" by the client when they were submitted:
* even ops meeting the other requirements aren't considered a grouped batch if they were not sent and processed as one.
*
* A "bunch" requires that all contained messages:
* - Are a contiguous subsequence of messages from the same grouped batch (preserving order, skipping none).
* - Are all for the same DDS in the container.
*
* @privateRemarks
* TODO: We should have more specific types for the "grouped batch" and "bunch" cases.
* TODO: This type should only be needed for implementing DDSs, and thus should eventually be made internal as the legacy API surface is cleaned up.
* @legacy @beta
* @sealed
*/
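The "bunch" definition above can be illustrated with a small helper that partitions a grouped batch into bunches: maximal contiguous runs of messages targeting the same DDS. The `BatchMessage` shape and the `address` field name are assumptions for this sketch; the real runtime's message types differ.

```typescript
// Hypothetical message shape: assume each message in a grouped batch carries
// the address of the DDS (channel) it targets.
interface BatchMessage {
	readonly address: string;
	readonly contents: unknown;
}

// Split one grouped batch into "bunches": contiguous runs of messages for the
// same DDS. Order within the batch is preserved and no message is skipped,
// matching the definition above.
function toBunches(groupedBatch: readonly BatchMessage[]): BatchMessage[][] {
	const bunches: BatchMessage[][] = [];
	for (const message of groupedBatch) {
		const current = bunches[bunches.length - 1];
		if (current !== undefined && current[0].address === message.address) {
			current.push(message); // same DDS: extend the current bunch
		} else {
			bunches.push([message]); // DDS changed: start a new bunch
		}
	}
	return bunches;
}
```

Note that a later run for a previously seen DDS starts a new bunch rather than merging with the earlier one, since a bunch must be contiguous within the grouped batch.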