chore: reduce production dependencies #1045
base: main
Conversation
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files

```diff
@@ Coverage Diff @@
##             main    #1045      +/-   ##
==========================================
+ Coverage   99.44%   99.72%   +0.28%
==========================================
  Files          15       15
  Lines        1797     1837      +40
==========================================
+ Hits         1787     1832      +45
+ Misses         10        5       -5
```

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
Automated Benchmark testing suggests a more than 10% perf decrease.

Kind regards,
It turns out the difference was actually positive instead of negative, tested on Codespaces using 4 cores.

Overall Benchmark Results (+x% is better, -x% is worse; current threshold to fail at -10%)
Benchmark Results for main
Benchmark Results for reduce-production-dependencies
A fix for the benchmarking is in #1046.

Kind regards,
robertsLando left a comment
Love this! ❤️
Let's see if @mcollina has some opinions on this, otherwise this is good to merge 🙏🏼
Pull Request Overview
This PR reduces production dependencies by replacing 8 external packages with native Node.js APIs and local implementations. The goal is to minimize external dependencies while maintaining equivalent functionality.
- Replaces external utilities like `fastfall`, `fastparallel`, `fastseries` with local implementations
- Switches from the `hyperid` and `uuid` packages to the native `randomUUID` from the Node.js crypto module
- Replaces `end-of-stream`, `retimer`, and `reusify` with native or local alternatives
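For example (a minimal illustration, not the exact call site in the PR), client-ID generation can rely on the built-in crypto module:

```js
// Minimal sketch of the hyperid/uuid -> node:crypto replacement; the exact
// usage in lib/handlers/connect.js may differ.
const { randomUUID } = require('node:crypto')

// generate an id when the connecting client did not supply one
const clientId = randomUUID()
```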
Reviewed Changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| package.json | Removes 8 production dependencies |
| lib/utils.js | Adds local ObjectPool and retimer implementations |
| lib/handlers/unsubscribe.js | Replaces fastparallel with Promise.all pattern |
| lib/handlers/subscribe.js | Adds runFall implementation and replaces fastparallel with Promise.all |
| lib/handlers/connect.js | Switches from hyperid to native randomUUID |
| lib/client.js | Replaces end-of-stream with native finished from stream |
| docs/Client.md | Updates documentation to reflect hyperid → randomUUID change |
| aedes.js | Replaces multiple external dependencies with native/local implementations |
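For reference, a retimer-style helper can be built on the native timer API; the sketch below is illustrative only, with assumed names, and may differ from the actual implementation in lib/utils.js:

```js
// Hypothetical sketch of a local retimer built on Node's Timeout.refresh();
// the API shape is an assumption, not the PR's exact code.
function retimer (callback, ms) {
  const timer = setTimeout(callback, ms)
  return {
    // push the timeout out by another full interval
    reschedule () { timer.refresh() },
    clear () { clearTimeout(timer) }
  }
}

// usage sketch:
// const keepAlive = retimer(onKeepAliveTimeout, 30000)
// keepAlive.reschedule() // on client activity
// keepAlive.clear()      // on disconnect
```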
mcollina left a comment
I generically don't see a problem with dependencies and number of things on disk, but some of the changes are incorrect or will worsen the performance.
Some of it is in the right direction because it will now be using things from the platform.
```js
  done()
} catch (err) {
  done(err)
}
```
This is by far not equivalent. It also creates a bazillion of promises and it will significantly impact performance
You are right, the current solution is rather inefficient.
I'm working on a more elegant solution for this one.
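One possible direction (a sketch only, with assumed names and signature, not necessarily what this PR ends up with) is a counter-based parallel runner that sticks to callbacks and avoids allocating a promise per item:

```js
// Sketch of a promise-free parallel helper: runs every task, then calls done()
// once all callbacks have fired, or done(err) on the first error.
function runParallel (tasks, done) {
  let pending = tasks.length
  let failed = false
  if (pending === 0) return done()
  for (const task of tasks) {
    task((err) => {
      if (failed) return
      if (err) {
        failed = true
        return done(err)
      }
      if (--pending === 0) done()
    })
  }
}
```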
aedes.js
Outdated
```diff
-    this._series = series()
-    this._enqueuers = reusify(DoEnqueues)
+    this._series = runSeries
+    this._enqueuers = new ObjectPool(DoEnqueues)
```
It might be interesting to verify if this is needed at all. My 2 cents is that we don't need this at all anymore and there is no perf gap in just allocating them all the time.
Fully agree, see OP
Just ran the benchmark on the commit that only removes the object pool. No material difference, still about 18% improvement over the main branch. See: https://github.com/moscajs/aedes/actions/runs/17486480074/attempts/1#summary-49666422333
aedes.js
Outdated
```js
      else resolve()
    })
  })))
  done()
```
Suggested change:

```diff
-  done()
+  process.nextTick(done)
```
otherwise error handling would be a disaster
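To illustrate the concern (assumed context: done() being invoked from inside the promise chain):

```js
// If done() is called synchronously inside .then(), anything it throws is
// captured by the promise chain and surfaces as a rejection at best.
Promise.all(promises).then(() => {
  done()
})

// Deferring the callback moves it out of the promise machinery, so an error
// thrown by done() behaves like an ordinary callback error again.
Promise.all(promises).then(() => {
  process.nextTick(done)
})
```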
lib/handlers/subscribe.js
Outdated
```js
    }
    next(null, arg)
  }
}
```
Please test this
Not sure what you mean with this comment. The coverage report says that runFall is called 36 times and all of runFall's code is covered during the tests. The anonymous inner function runs 164 times, the next() function runs 628 times. Do you want me to move it to lib/utils.js and test it standalone?
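For reference, a waterfall helper of this kind can be quite small; the sketch below is illustrative only and may differ from the actual runFall in lib/handlers/subscribe.js:

```js
// Illustrative "fall" runner: each function receives the previous result and
// a next(err, result) callback; the final result (or first error) goes to done().
function runFall (fns) {
  return function (initial, done) {
    let i = 0
    function next (err, res) {
      if (err) return done(err)
      if (i === fns.length) return done(null, res)
      fns[i++](res, next)
    }
    next(null, initial)
  }
}
```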
I think having some standalone tests would be better.
Added tests to test/utils.js
Kind regards,
@robertsLando
@robertsLando The only file without 100% coverage is now client.js, because of:

Lines 377 to 379 in 9b76359

which we can't test and which therefore should IMO be excluded from coverage testing. And legacy support in:

Lines 169 to 172 in 9b76359

The associated drain request:

Lines 362 to 364 in 9b76359

is never called according to coverage testing. Calling Node.js private APIs like this.conn._writableState.getBuffer() should not be required, so I'm really wondering whether we should not leave this to conn.destroy().

Kind regards,
aedes.js
Outdated
```js
  for (const clientId of Object.keys(this.clients)) {
    promises.push(closeClient(this.clients[clientId]))
  }
  Promise.all(promises).then(() => {
```
I don't think this is equivalent, in case one fails, the callback will not be invoked
Replaced .then() by .finally().
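A minimal sketch of the difference (assuming promises and a callback cb as in the snippet above; not the exact code in aedes.js):

```js
// With .then() alone the callback is skipped when any close promise rejects;
// .finally() invokes it in both cases.
Promise.all(promises).finally(() => {
  cb()
})
// Note (assumption about the surrounding code): a rejection still propagates
// past .finally(), so a .catch() may also be needed to avoid an unhandled rejection.
```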
```js
    }
  } catch (err) {
    done(err)
  }
```
This has the same problem with promises identified elsewhere
```js
    }
  } catch (err) {
    done(err)
  }
```
This has the same problem with promises identified elsewhere
```js
      })
    })))
    if (restore) {
      done()
```
Note that if this throws, it would call done twice
I don't see why, can you please explain?
I tried the following:

```js
const pOk = new Promise(resolve => {
  console.log('pOK will resolve')
  resolve()
})
const pFail = new Promise((resolve, reject) => {
  console.log('pFail will reject')
  reject('pFail rejected')
})
try {
  await Promise.all([pOk, pFail])
  console.log('all Promises resolved ok')
} catch (err) {
  console.log('At least one promise failed, first error:', err)
}
```

This gives me:

```
pOK will resolve
pFail will reject
At least one promise failed, first error: pFail rejected
```

Btw: I reworked doSubscribe to make this part easier to read.
```js
    // since it is a rare race condition we ignore it in coverage testing
    return
  }
  /* c8 ignore stop */
```
was this tested before?
Not that I know of; the coverage testing shows that this part is never reached, and given the comments it looked like a rare race condition to me.
```js
    completeUnsubscribe.call(state)
  } catch (err) {
    completeUnsubscribe.call(state, err)
  }
```
Same problem
Same discussion as doSubscribe(); I also reworked doUnsubscribe to make the code more readable.
```js
      }
    })
  }
}
```
I would just remove the utility if possible.
Done
```js
  // if chunks is half full when the iterator ends
  if (chunks.length > 0) {
    yield chunks
  }
```
Can you benchmark the simple implementation against https://github.com/mcollina/hwp/blob/main/index.js? I have a feeling this might be problematic (or maybe V8 just got way faster over the years).
I ran a benchmark (see https://github.com/seriousme/hwp/blob/benchmarks/benchmark/bench.js) and the results were quite consistent:
| test | num runs | items per run | num batches | estimated ideal time (ms) | lapsed time (ms) |
|---|---|---|---|---|---|
| benchBatch | 100 | 100 | 7 | 7000 | 7640 |
| benchForEach | 100 | 100 | 7 | 7000 | 7643 |
| benchBatch | 100 | 1000 | 63 | 63000 | 66515 |
| benchForEach | 100 | 1000 | 63 | 63000 | 66527 |
Estimated ideal time is based on the following: 100 items per test run means 7 batches per test run (Math.ceil(100/16)), each batch taking 10 ms, so 70 ms per test run; 100 test runs of 70 ms should run in about 7 seconds.
In reality, on a light Chromebook each batch takes 10.5 ms on average, and 100 or 1000 items per run hardly makes any difference. 0.5 ms of overhead on such a machine does not sound too bad to me.
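For illustration, the estimate can be reproduced with a small calculation (assuming a batch size of 16 and ~10 ms per batch, as described above):

```js
// Back-of-the-envelope reproduction of the "estimated ideal time" column.
function idealTimeMs (numRuns, itemsPerRun, batchSize = 16, msPerBatch = 10) {
  const batchesPerRun = Math.ceil(itemsPerRun / batchSize)
  return numRuns * batchesPerRun * msPerBatch
}

console.log(idealTimeMs(100, 100))  // 7000
console.log(idealTimeMs(100, 1000)) // 63000
```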
Hope this helps!
Kind regards,
Hans
```js
export async function * batch (readable, fn, batchSize) {
  let chunks = []
  for await (const chunk of readable) {
    chunks.push(fn(chunk))
```
I think there might be an issue with error handling here.
I added testing of batch() to test/utils.js
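Putting the review fragments together, the helper under discussion has roughly this shape (a sketch only; the actual batch() in lib/utils.js may differ, for example in how errors from fn() are surfaced):

```js
// Reconstructed sketch: collect up to batchSize mapped chunks, yield them as a
// batch, and flush any partial batch when the source iterator ends.
export async function * batch (readable, fn, batchSize) {
  let chunks = []
  for await (const chunk of readable) {
    chunks.push(fn(chunk))
    if (chunks.length >= batchSize) {
      yield chunks
      chunks = []
    }
  }
  // if chunks is partly full when the iterator ends
  if (chunks.length > 0) {
    yield chunks
  }
}
```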
FYI: with all the changes the GitHub review comments are a bit hard to track.

@robertsLando @mcollina

Kind regards,
Co-authored-by: Daniel Lando <[email protected]>
Would like to wait for @mcollina's feedback on this. From my POV this is good to merge 👍🏼
@mcollina ping

Just to be sure: you all are not waiting for me, right?

@seriousme nope, I'm waiting for @mcollina's last feedback on your changes after his review :)
@robertsLando
As promised, this PR reduces the number of production dependencies to a bare minimum.
This PR replaces:
The local variants listed above are all quite small, mostly because they only need to support one variant instead of the many variants that some external modules provide. Also, over the years some patterns have become easier to implement with modern JavaScript.
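As one example of such a pattern, the end-of-stream package can be replaced by the built-in stream helper (illustrative usage; the variable name conn is an assumption):

```js
// Native replacement for the end-of-stream package.
const { finished } = require('node:stream')

finished(conn, (err) => {
  // invoked once the stream has ended, finished, or errored
})
```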
This PR tries to keep the code changes to a minimum.
From here on, further optimizations might be possible, but I didn't want to make this PR too complex.
Kind regards,
Hans