Lukas has some reservations: #1820 (comment)
After doing this I was able to get it to work end to end!
An example of where this can be triggered is specifying an empty name when creating a new data track.
Temporarily replace the hexdump with a chart for a demo. I think this probably should be reverted though, the hexdump is more useful longer term.
…ks data channel Adds a "sendLossyBytes" method which hooks into the existing data channel logging / buffer status / etc infrastructure.
…ions This is required so that a user can specify custom extension metadata like user timestamps. As part of this, DataTrackExtensions and friends are now part of the public api interface.
…g extensions" This reverts commit 66ba201.
…ternal interface a user passes into tryPush Doing it this way makes it much easier to specify a user timestamp value.
This is a pretty large thing to miss, yikes...
…dle the separate data tracks data channel Previously, channelKind was computed based on maxRetransmits === 0. But now there are multiple lossy data channels, the regular one and the data tracks one, so this heuristic no longer works.
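As a hedged sketch of what label-based dispatch could look like instead of the maxRetransmits heuristic (the helper name, the `ChannelKind` type, and the exact channel labels are assumptions here, not necessarily the SDK's real identifiers):

```typescript
// Illustrative only: the labels and the ChannelKind type are assumptions,
// not necessarily the SDK's actual identifiers.
type ChannelKind = 'reliable' | 'lossy' | 'dataTracks';

// maxRetransmits === 0 can no longer distinguish the two lossy channels
// (regular lossy vs. data tracks), so dispatch on the channel label instead.
function channelKindFromLabel(label: string): ChannelKind {
  switch (label) {
    case '_reliable':
      return 'reliable';
    case '_lossy':
      return 'lossy';
    case '_data_tracks':
      return 'dataTracks';
    default:
      throw new Error(`unknown data channel label: ${label}`);
  }
}
```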
…s are buffered when the dc is in low status This was encountered while implementing the e2e tests: a large data track frame can be split into n packets, and if one packet is dropped the whole frame is considered dropped. So the behavior that is really wanted here is to buffer packets and send them once the data channel can accept more data, even though it is technically a lossy data channel.
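To illustrate why a single dropped packet invalidates a whole frame, here is a minimal sketch of packetization and reassembly. The function names and wire format are invented for illustration and are not the SDK's actual protocol:

```typescript
// Illustrative only: not the SDK's real wire format.
// Split a frame into fixed-size packets for transmission.
function packetize(frame: Uint8Array, maxPacketSize: number): Uint8Array[] {
  const packets: Uint8Array[] = [];
  for (let offset = 0; offset < frame.length; offset += maxPacketSize) {
    packets.push(frame.subarray(offset, offset + maxPacketSize));
  }
  return packets;
}

// Reassembly only succeeds when every packet arrived: a single missing
// packet makes the whole frame undecodable, which is why dropping the
// tail packets of a frame wastes all the packets already sent.
function reassemble(packets: (Uint8Array | undefined)[]): Uint8Array | undefined {
  if (packets.some((p) => p === undefined)) return undefined;
  const total = packets.reduce((n, p) => n + p!.length, 0);
  const frame = new Uint8Array(total);
  let offset = 0;
  for (const p of packets) {
    frame.set(p!, offset);
    offset += p!.length;
  }
  return frame;
}
```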
…channels This is required when operating in dual peer connection contexts.
… users that a failure is actually happening
lukasIO
left a comment
a couple of nits that would be nice to address.
More generally and important: what happens currently when running this against older servers that don't have DataTrack support?
```ts
return;
if (!this.isBufferStatusLow(kind)) {
  // Depending on the exact circumstance that data is being sent, either drop or wait for the
  // buffer status to not be low before continuing.
```
what's the intention behind this differentiation? Why not just drop them for lossy?
That's what I was originally doing, but it didn't work for large data track payloads. Here is some more context as to why, from some notes I wrote up:
- If the `_data_tracks` rtc data channel buffer was full, the existing logic would drop packets (which is what the lossy data channel currently does).
- This doesn't work for data track packets, though: if the first n packets of a frame get enqueued but the last packet(s) get dropped because there isn't enough room in the rtc data channel buffer, the receiver only receives a partial set of packets, which will never fully decode into a data track frame.
- If packets aren't buffered like this, it becomes impossible to send data track payloads > ~75k bytes (the e2e tests check this case).
- Solution (commit): buffer data track packets instead of dropping them when the rtc data channel buffer is full.
If there's some other way to handle this, I'm open to alternatives, but what I did here was the most straightforward way I could think of.
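A minimal sketch of the buffer-instead-of-drop behaviour described above (the class and method names are illustrative, not the SDK's actual API; in practice draining would be wired to the channel's `bufferedamountlow` event):

```typescript
// Illustrative only: not the SDK's real sender implementation.
class QueuedSender {
  private queue: Uint8Array[] = [];

  constructor(
    private send: (packet: Uint8Array) => void,
    private bufferedAmount: () => number,
    private lowThreshold: number,
  ) {}

  // Instead of dropping a packet when the channel buffer is high (which
  // would corrupt a multi-packet frame), enqueue it for later.
  trySend(packet: Uint8Array): void {
    if (this.queue.length === 0 && this.bufferedAmount() <= this.lowThreshold) {
      this.send(packet);
    } else {
      this.queue.push(packet);
    }
  }

  // Called when the channel signals its buffer has drained (e.g. the
  // bufferedamountlow event): flush queued packets while there is room.
  onBufferLow(): void {
    while (this.queue.length > 0 && this.bufferedAmount() <= this.lowThreshold) {
      this.send(this.queue.shift()!);
    }
  }
}
```

This keeps packet ordering intact, so a frame's tail packets are delayed rather than lost when the buffer fills mid-frame.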
To provide context on this:
After talking with @ladvoc, I can confirm that on the subscribing end the new messages are just dropped, and on the publishing end the failure would end up as a timeout error. So nothing breaks in an incredibly bad way here. We decided that including a note in the publish function docstring, and maybe also a small comment in the timeout error message, would be good enough to guard against this; I did this here.
Integrates the data track managers (both incoming and outgoing) into the existing SDK `Room` and `RemoteParticipant` interfaces. Once this is merged, data tracks are fully implemented!

Interface - Publishing

Interface - Subscribing
Todo:
Resolves CLT-2582