feat: WAL-based RocksDB replication with HTTP streaming and failover #366
JackGuslerGit wants to merge 12 commits into matrix-construct:main
Conversation
this implements async replication, so some data loss is to be expected after an unexpected failover (node failure, disk failure, process crash, ...), right?
@pschichtel yes, that is correct. Under normal write load, the RPO is determined just by network RTT.
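To make the RPO-vs-RTT claim concrete, here is a hedged back-of-the-envelope sketch. The numbers and the `rpo_estimate` helper are illustrative, not from this PR; the point is only that with async replication the at-risk window is roughly the writes still in flight, i.e. write rate times RTT.

```rust
// Illustrative only: estimates the worst-case data-loss window (RPO)
// for async replication, assuming the secondary lags the primary by
// roughly one network round trip under steady write load.
fn rpo_estimate(writes_per_sec: f64, rtt_ms: f64) -> f64 {
    // Writes not yet applied on the secondary when the primary fails.
    writes_per_sec * (rtt_ms / 1000.0)
}

fn main() {
    // Hypothetical numbers: 5_000 writes/s over a 2 ms RTT link.
    let at_risk = rpo_estimate(5_000.0, 2.0);
    println!("~{at_risk} writes at risk"); // ≈ 10 writes
}
```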
Hey @x86pup. It seems the CI failures on this PR are all runner-side cache issues. I am seeing two: 1: 2: Can you clean these up and re-run?
These docker flakes sometimes occur when CI is really busy, apologies! We'll be happy to rerun as necessary.
I haven't had a chance to thoroughly review this yet since I'm currently away, but a few things stand out as suspicious. Foremost, it's not clear why WAL streaming is necessary. RocksDB already has internal mechanisms to synchronize primary and secondary; all that's missing is the promotion signalling. What is the basis for concerning ourselves with binary framing of RocksDB inner workings at the user level? Is the RocksDB synchronization API being invoked here? Perhaps I missed it...
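For reference, the built-in mechanism alluded to here is RocksDB's secondary-instance mode. A rough sketch using the rust-rocksdb crate follows; the paths and function are hypothetical, and note that this mode requires the secondary to read the primary's database directory (e.g. over shared storage), which is precisely the assumption the discussion below turns on.

```rust
// Sketch of RocksDB's built-in primary/secondary synchronization
// (rust-rocksdb crate). Requires filesystem access to the primary's
// DB directory, so it does not apply when nodes only share a network.
use rocksdb::{Options, DB};

fn open_and_catch_up(primary_path: &str, secondary_path: &str) -> Result<DB, rocksdb::Error> {
    let opts = Options::default();
    // Open a read-only secondary that tails the primary's WAL/MANIFEST.
    let db = DB::open_as_secondary(&opts, primary_path, secondary_path)?;
    // Replay any WAL entries the primary has written since open.
    db.try_catch_up_with_primary()?;
    Ok(db)
}
```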
@jevolk Yes, you are correct. In our case, we have a cluster of physical servers where each server has its own local disk. We can't use NFS/shared storage in our infrastructure, so core2 has no direct filesystem access to core1's RocksDB directory. That's the gap we're trying to fill by replicating the WAL and SST files over the network, so core2 can stay in sync with core1 without shared storage. Once core2 has a local copy of the data, is there a mechanism in RocksDB you'd recommend for keeping it in sync, or is shared storage assumed in your deployment model?
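As a sketch of what "replicating the WAL over the network" involves at the byte level: each WAL chunk can be shipped as a length-prefixed frame carrying its sequence number. Everything here is illustrative; it is not this PR's actual wire format (which, per the later discussion, moved to CBOR).

```rust
// Illustrative length-prefixed framing for shipping WAL chunks over a
// byte stream: [u32 payload length][u64 WAL sequence][payload bytes].
fn encode_frame(seq: u64, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(12 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_be_bytes()); // length
    frame.extend_from_slice(&seq.to_be_bytes());                    // WAL sequence
    frame.extend_from_slice(payload);                               // WAL bytes
    frame
}

fn decode_frame(buf: &[u8]) -> Option<(u64, &[u8])> {
    if buf.len() < 12 {
        return None; // incomplete header
    }
    let len = u32::from_be_bytes(buf[0..4].try_into().ok()?) as usize;
    let seq = u64::from_be_bytes(buf[4..12].try_into().ok()?);
    buf.get(12..12 + len).map(|payload| (seq, payload))
}

fn main() {
    let frame = encode_frame(42, b"wal-bytes");
    let (seq, payload) = decode_frame(&frame).unwrap();
    assert_eq!((seq, payload), (42, &b"wal-bytes"[..]));
    println!("frame ok: seq={seq}, {} payload bytes", payload.len());
}
```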
Alright, so this is not limited to shared filesystem mounts; that's rather exciting, actually. Keep up the good work 👍
@jevolk Thanks! I see it has passed all checks; what's the next step here?
It needs to be thoroughly reviewed here, especially since the usage of AI is apparent. Jason is on vacation and will get to it soon. Thank you for ensuring CI passes to help this along.
Okay, sounds good, thanks for letting me know!
Thank you for your patience 🙏 I'm right around the corner now...
Add query and stream features; enhance replication routes and logic
Signed-off-by: Jason Volk <jason@zemos.net>
Hey @jevolk. I ran into an issue with the checkpoint logic while testing. The original code swapped the RocksDB database directory while RocksDB was already running, causing file corruption because open file descriptors still pointed to the old files while new writes went to the checkpoint copy. The fix moves the checkpoint download and filesystem swap to before RocksDB opens, so the database always starts fresh from a clean checkpoint with no live files being touched.
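The ordering fix described here can be sketched as follows. All paths and the `swap_in_checkpoint` helper are hypothetical, not the PR's actual code; the sketch only shows the invariant: the directory swap happens via `rename()` while the database is closed, so no live file descriptors can point at files being replaced.

```rust
// Sketch: stage a downloaded checkpoint next to the DB directory and
// swap it in BEFORE RocksDB opens, so no open file descriptors exist.
use std::fs;
use std::io;
use std::path::Path;

fn swap_in_checkpoint(db_dir: &Path, staged_checkpoint: &Path) -> io::Result<()> {
    // Precondition: the database is NOT open yet.
    let old = db_dir.with_extension("old");
    if db_dir.exists() {
        fs::rename(db_dir, &old)?; // retire the stale copy atomically
    }
    fs::rename(staged_checkpoint, db_dir)?; // promote the fresh checkpoint
    if old.exists() {
        fs::remove_dir_all(&old)?; // reclaim space once the swap succeeded
    }
    Ok(())
    // Only after this returns should RocksDB open db_dir.
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join("ckpt-swap-demo");
    let _ = fs::remove_dir_all(&base);
    let (db, staged) = (base.join("db"), base.join("staged"));
    fs::create_dir_all(&db)?;
    fs::create_dir_all(&staged)?;
    fs::write(db.join("CURRENT"), b"stale")?;
    fs::write(staged.join("CURRENT"), b"fresh")?;
    swap_in_checkpoint(&db, &staged)?;
    assert_eq!(fs::read(db.join("CURRENT"))?, b"fresh".to_vec());
    println!("swap ok");
    Ok(())
}
```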
Thanks for finding this. I made an attempt at merging this but ran out of time before the 1.6 release, with a few loose ends still. The main remaining issue was switching to CBOR for the wire format, which makes more sense for several reasons.
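For context on why CBOR is attractive for a wire format like this, here is a hand-rolled sketch of CBOR's primitives per RFC 8949. The message shape (`seq`/`wal` keys) is invented for illustration and is not the PR's actual schema; real code would use a crate. The point is that small integers, strings, and byte strings all encode with a one-byte major-type header, so framed messages stay compact and self-describing.

```rust
// Minimal hand-rolled CBOR (RFC 8949) encoding of a hypothetical
// replication message {"seq": 7, "wal": <3 bytes>}.
fn cbor_text(s: &str) -> Vec<u8> {
    assert!(s.len() < 24, "short strings only in this sketch");
    let mut out = vec![0x60 | s.len() as u8]; // major type 3: text string
    out.extend_from_slice(s.as_bytes());
    out
}

fn cbor_msg(seq: u8, wal: &[u8]) -> Vec<u8> {
    assert!(seq < 24 && wal.len() < 24, "small values only in this sketch");
    let mut out = vec![0xA2]; // major type 5: map of 2 pairs
    out.extend(cbor_text("seq"));
    out.push(seq); // major type 0: unsigned ints 0..=23 are a single byte
    out.extend(cbor_text("wal"));
    out.push(0x40 | wal.len() as u8); // major type 2: byte string
    out.extend_from_slice(wal);
    out
}

fn main() {
    let msg = cbor_msg(7, &[0xDE, 0xAD, 0xBE]);
    // A2 63 's''e''q' 07 63 'w''a''l' 43 DE AD BE
    println!("{} bytes: {msg:02X?}", msg.len()); // 14 bytes total
}
```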
I'll be revisiting this again at the top of the 1.6.1 dev cycle (start of next week). I only have a small number of reorganizations left, plus applying CBOR (which is hugely simplifying), so this should go in pretty early on. Thank you again for your patience 🙏🏻
Logically-agnostic refactor for patterns and conventions. Fix additional lints. Signed-off-by: Jason Volk <jason@zemos.net>
…ot. (#366) Signed-off-by: Jason Volk <jason@zemos.net>
Use strong Url type. Signed-off-by: Jason Volk <jason@zemos.net>
Reduce additional log+err repeated message patterns. Compose with Url rather than format strings. Additional renames; tracing instruments. Reduce interval/heartbeat frequency. Bump tar RUSTSEC-2026-0067. Signed-off-by: Jason Volk <jason@zemos.net>
Split WAL related functions; shuffle/reorganize out of database modroot. Tuck maybe_bootstrap_checkpoint() back into replication service. Use strong Url type. Rename endpoints and service to cluster. Split and rename run_stream and wal endpoint to sync. Move backoff constants to config items. Use 'global' column instead of 'replication_meta' cf. Reduce additional log+err repeated message patterns. Compose with Url rather than format strings. Additional renames; tracing instruments. Reduce interval/heartbeat frequency. Bump tar RUSTSEC-2026-0067. Signed-off-by: Jason Volk <jason@zemos.net>
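The commit above moves the reconnect backoff constants into config items. A hedged sketch of what capped exponential backoff for the sync stream might look like follows; the struct, field names, and default values are invented for illustration and are not this PR's actual config keys.

```rust
// Illustrative capped exponential backoff for reconnecting a sync
// stream: delay = base * 2^attempt, clamped to a configured cap.
use std::time::Duration;

struct BackoffConfig {
    base: Duration, // first retry delay
    cap: Duration,  // upper bound on any delay
}

fn delay(cfg: &BackoffConfig, attempt: u32) -> Duration {
    // base * 2^attempt, saturating on overflow, then clamped to the cap.
    let factor = 1u32.checked_shl(attempt).unwrap_or(u32::MAX);
    cfg.base.saturating_mul(factor).min(cfg.cap)
}

fn main() {
    // Hypothetical defaults: 250 ms base, 30 s cap.
    let cfg = BackoffConfig {
        base: Duration::from_millis(250),
        cap: Duration::from_secs(30),
    };
    let delays: Vec<u64> = (0..8).map(|a| delay(&cfg, a).as_millis() as u64).collect();
    println!("{delays:?}"); // [250, 500, 1000, 2000, 4000, 8000, 16000, 30000]
}
```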
Hi Jack, I took a second swipe at this and unfortunately I just haven't gotten it to where it needs to be. I should have taken notes to provide some details, since there's way too much nuance to summarize here. The tl;dr is that I have to revisit this after some higher-priority tasks, either next week for 1.6.1 or on the backside of that release. Overall I think this feature has promise and we're very close now.
Hey, no worries! Take your time. Let me know if there's anything I can do to help. |
Hey @jevolk, do you think this should also handle replicating media files? |
If we were to put media in RocksDB (and I have before) we would very likely be disabling the WAL for those columns and write-ops. We would need a different mode of transport. Now that we have S3 storage provider support, we have more possibilities for media backup. In fact I'm considering the ability to backup the database itself over an S3 connection- though that would be for "colder" storage and wouldn't replace this feature for "hot" failover of course 😅 |
This relates to #35.
Summary:
Test plan:
Relevant config options added: