
Cargo RFC for min publish age #3923

Open
tmccombs wants to merge 5 commits into rust-lang:master from tmccombs:cargo-min-publish-age

Conversation

@tmccombs

@tmccombs tmccombs commented Feb 23, 2026

Important

When responding to RFCs, try to use inline review comments (you can leave an inline review comment on the entire file at the top) for normal discussion, and keep top-level comments for procedural matters like starting FCPs.

This keeps the discussion more organized.

Summary

This RFC proposes adding options to Cargo that allow specifying a minimum age for published versions to use.

See rust-lang/cargo#15973
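As a sketch, the kind of configuration this RFC describes might look like the following. The option names (`min-publish-age`, `resolver.incompatible-publish-age`) are taken from the discussion in this thread; the exact keys and value syntax are hypothetical and still under discussion:

```toml
# Sketch of a possible .cargo/config.toml; value syntax is illustrative only.

[registries.crates-io]
# Don't resolve versions published less than 7 days ago.
min-publish-age = "7 days"

[resolver]
# "fallback": warn and fall back to a newer version if no old-enough
# version can satisfy the requirement.
incompatible-publish-age = "fallback"
```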

Rendered


Signed-off-by: Thayne McCombs <[email protected]>
@ehuss ehuss added the T-cargo Relevant to the Cargo team, which will review and decide on the RFC. label Feb 23, 2026
Member

The RFC doesn't clearly lay out the expected way to deal with urgent security updates.
From my reading of it, I would deal with those by using cargo update --precise or putting it in Cargo.toml, getting a warning, and then after having the new version in the lockfile, everything works normally.
But this would mean that when regenerating the lockfile (when update --precise was used instead of Cargo.toml), this security update would be lost silently, which sounds suboptimal. Having an exclude array like pnpm does not have this problem.
Can you add why you decided against package exclusion configuration and whether this threat of regenerating the lockfile after a security update is important?

Contributor

But this would mean that when regenerating the lockfile (when update --precise was used instead of Cargo.toml), this security update would be lost silently, which sounds suboptimal

If there is something you depend on, you should raise your version requirement.

Having an exclude array like pnpm does not have this problem.

But that is a heavy hammer. It doesn't record why it was excluded and you can easily forget to un-exclude when it is no longer applicable. With raising a version requirement, you automatically start using the feature again with the dependency.
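The "raise your version requirement" workflow can be sketched as a Cargo.toml edit; the crate name and version numbers below are made up for illustration:

```toml
# Cargo.toml: raising the floor past a bad release so the fix survives
# lockfile regeneration (crate and versions are hypothetical).
[dependencies]
# 1.2.3 was the compromised release; 1.2.4 contains the fix.
# Unlike `cargo update --precise`, a raised requirement means a fresh
# resolve can never silently fall back to the bad version.
some-dep = ">=1.2.4, <2"
```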

Author

Having an exclude array like pnpm does not have this problem.

I did include that in the Future Possibilities section, but I could probably expand on it more.


However, these tools only work for updating and adding dependencies outside of cargo itself, they do not
have any impact on explicitly run built-in cargo commands such as `cargo update` and `cargo add`.
Having built-in support makes it easier to enforce a minimum publish age policy.
Member

It is easier, yes, but more importantly having this in Cargo is the only way to have it actually be secure. As soon as you run cargo build after a malicious package has been resolved, the build is compromised.
This can't be anywhere but in Cargo's resolver to achieve the security benefits.

Author

I think that is only true if you don't have a Cargo.lock file, or your Cargo.lock file is missing something from your Cargo.toml file.

But I did try to mention that in d78605c

Comment on lines 310 to 311
* Should "deny" be an allowed value for `resolver.incompatible-publish-age`? And if so, how does that behave? What is the error message? Is it overridden
by a version specified in `Cargo.lock` or with the `--precise` flag?
Member

And how would it interact with important urgent security updates?

Author

I've removed this from open questions, and left it as just a Future possibility for now.

* Locking message for [Cargo time machine (generate lock files based on old registry state) #5221](https://github.com/rust-lang/cargo/issues/5221) is in UTC time, see [Tracking Issue for _lockfile-publish-time_ #16271](https://github.com/rust-lang/cargo/issues/16271), when relative time differences likely make local time more relevant
* Implementation wise, will there be much complexity in getting per registry information into `VersionPreferences` and using it?
* `fallback` precedence between this and `incompatible-rust-version`?

Member

Another question worth pointing out: "Are we comfortable making the security guarantees that the build remains secure when a not-yet-of-age malicious update has been released?". I think the answer here ought to be yes, but that's a decision that should be made deliberately and knowingly.

Contributor

I worry, when "making [a] security guarantee", about whether we are actually committing to that level (i.e. a report needs to be issued and a hotfix released if the guarantee is broken).

Author

Any such guarantee would need to be predicated on some conditions, including:

  • The build was possible before the malicious release on the target platform and compiler version, i.e. there are no cases that require a fallback (a "deny" policy could potentially prevent a compromised build when that isn't the case, but that is currently deferred as a future possibility).
  • The malicious release happened more recently than the minimum publish age.
  • Either none of the versions needed for a secure build have been yanked, OR a lock file is used.

@Noratrieb
Member

@rust-lang/wg-secure-code @rust-lang/security @walterhpearce y'all may be interested in this too


@clarfonthey clarfonthey left a comment


(going to leave as a detached "review" since GH's system is bad, and I thought this would allow replies)

@Shnatsel
Member

My take from the wg-secure-code perspective is that the only solution to supply chain attacks is cargo vet/cargo crev, and everything else is snake oil.

The way forward is to stop whining and start using cargo vet instead of trying to pile on ever-increasing amounts of questionable heuristics.

@djc
Contributor

djc commented Feb 23, 2026

FWIW, I found the linked blog post to be fairly convincing that something in this direction makes sense. There is a very wide gap between (a) issuing cargo update sight unseen and (z) making sure all your dependencies are trusted by cargo-vet, and this seems like a decent middle ground.

@@ -0,0 +1,327 @@
- Feature Name: cargo_min_publish_age


I know that the industry has chosen to use the term "supply chain attacks," but can we please stop treating the open source ecosystem like a supply chain, because that's not what it is?

Basically all projects involved are offered explicitly with no warranty of usability and on a best-effort basis. Part of the reason why it's so vulnerable to attacks is that we have this dichotomy where trillion-dollar companies are relying on the efforts of volunteers for critical infrastructure, and so, bad actors have a large incentive to take over from burnt-out volunteers.

This proposal also suffers from the lack of insight that the name itself provides, where it doesn't really do much to actually prevent these issues or understand them, and just offers a "solution" without much of an argument for why it solves the problem.

Generally, these vulnerabilities aren't noticed until they've seen widespread use. If they were, it wouldn't be a problem! So, delaying the time between when these releases are deployed and they see widespread use isn't really going to catch the vulnerabilities, just delay them.

And if you think that this is mitigated by giving a longer testing period for people to try out releases before widespread adoption: we already have this. People already test out the git versions of packages before actual releases, and we also have pre-releases as another way of testing as well. You could argue that maybe projects that are "trusted" should go through a pre-release process and a thorough testing period, but that's something that would be orthogonal to this RFC, and not helped by this RFC.

Also, as mentioned, there are plenty of reasons to explicitly update to a new release quickly, for critical security issues. We could potentially get around this by adding a dedicated "security" release type that bypasses the restrictions, but I dunno, that seems also vulnerable to abuse.

While it is true that announcing a security issue could lead to increased scrutiny that would help exploits get noticed, we've reached "CVE fatigue" to the point where there are so many minor issues that go unnoticed because the difference between a low-priority security issue and a high-priority issue mostly depends on how it's used, and whether you're one of those users. It could be that there's a serious bug in a method that's rarely used, and does that make it a low-priority or high-priority issue?

People update for security fixes all the time, and it's so regular that it feels like an attacker would have a good chance of getting away with releasing their exploit as a "security fix" and still doing some damage before it gets noticed. So, we obviously wouldn't be able to add a bypass mechanism of this type without it still being vulnerable.

Ultimately, it really feels like people are trying to fix a social problem with an automated system, and that's just not going to work. The closest stuff we have are systems like cargo vet which allow vetting specific versions of dependencies in addition to the dependencies themselves, but again, there is always a chance of letting things slip through the cracks in the right circumstances.

The best thing folks can do is not let their guard down, and ensure that maintainers of critical packages are supported both in their career and by having multiple people to share the load. And this requires these big companies to pay up, which I know is something they really don't want to do.

Plus, we already have Cargo.lock as a way of avoiding "automated" exploits of dependencies. You need to do a cargo update to become vulnerable, so, just be aware what you're doing when you do that.

Member

@Noratrieb Noratrieb Feb 23, 2026

I don't have much to say about your points about the "supply chain", but about the other point (leaving all your points in one comment ensures that some of them will remain unaddressed):

Generally, these vulnerabilities aren't noticed until they've seen widespread use

I suggest you read the linked blog post, which provides direct evidence to the contrary. This setting really does reduce (but certainly does not eliminate) the likelihood of getting malicious dependency updates.

Contributor

FYI, covering multiple topics in a single reply makes it harder to have a conversation:

  • I assumed this was all about naming at first and was going to ignore it
  • Even if they are noticed, they can easily be lost track of
  • It makes the ensuing conversation harder to follow
  • It makes it hard to tell the status of the conversation


(Responding as the author of the cited blog post.)

I think the "cooldown" technique is empirically motivated in the post. It's true that it isn't a silver bullet (people who make silver bullet claims tend to have financial incentives to do so), but it does demonstrably work well at stymieing real-world package compromises.

In terms of language, I think "supply chain" is the generally recognized term when it comes to describing complex, sometimes opaque dependency chains (regardless of whether those chains are open, proprietary, or composite). I've never understood that term to undermine the licensing/warranty disclaimers in open source; I think "vendor" would be more charged and should be avoided but AFAICT this RFC doesn't use that term.

(The point about an attacker being incentivized to make a "security" release is a good one, and is something I addressed in a follow-up post. The TL;DR is that security advisories require coordination with an additional party i.e. the vulnerability DB/service, which raises the floor on attacker sophistication/how pervasive their takeover is.)


I fail to see how the post makes a valid empirical claim at all; it simply says that the window of opportunity is small, but doesn't actually prove that delays built into the package manager don't simply delay the small period of exploit. If it were evidence that directly showed that vulnerabilities have been stopped by this implementation, it would be a lot more compelling; otherwise, it's just a guess that has an obvious counter.

The fact that people see criticising the term "supply chain attack" as a reason to ignore the argument only bolsters how relevant it is. The text is long and people don't want to read, but the point is the same: if you want to ensure that releases are safe and secure, ensure that the maintainers are safe and secure and not burned out. Being complacent only makes things worse.

Plus, like, you should always be sceptical of a simple fix to a complex problem. Sometimes that ends up working out, but usually not.

Sure, if we do get compelling evidence from other ecosystems that this mitigation helps, by all means implement it, but I don't think it's worth the time right now given the fact that cargo already has a lot of good mitigations in place.

For example, library crates used to not publish lockfiles; now they do. This itself is a big win, and we should make more like that.

Contributor

Plus, like, you should always be sceptical of a simple fix to a complex problem. Sometimes that ends up working out, but usually not.

This is not meant to be "fix" but to be a small, incremental improvement and we should continue to improve the situation.


I think the main table in the post is sufficient: it shows that 80% of surveyed package compromises were detected within less than a week, which implies that anybody who had waited a week before upgrading would not have been (directly) affected.

I think your points about treating maintainers well are essentially reasonable but also not immediately relevant, insofar as both things should be done (we should treat maintainers better, and we should employ mechanisms that effectively mitigate the damage of package compromise). This isn't really a corporate mentality either: I maintain a lot of things, and I personally benefit (as an unpaid maintainer) from mechanisms that help me mitigate attacks on other maintainers.

Plus, like, you should always be sceptical of a simple fix to a complex problem. Sometimes that ends up working out, but usually not.

I think myself and others have been pretty explicit about not treating cooldowns as a panacea. Nobody has advanced this argument.


Honestly, in hindsight, this is definitely a bunch of concurrent arguments that I guess can be litigated separately, so, I will follow up on this in separate threads. In my head, it was a singular coherent argument of this being the wrong focus, but that honestly isn't a very constructive way to discuss this

@epage
Contributor

epage commented Feb 23, 2026

@Shnatsel

The way forward is to stop whining and start using cargo vet instead of trying to pile on ever-increasing amounts of questionable heuristics.

Please note that this is not a constructive way to engage with others on this topic.

Comment on lines 9 to 11
This proposal adds a new configuration option to cargo that specifies a minimum age for package
updates. When adding or updating a dependency, cargo won't use a version of that crate that
is newer than the minimum age, unless no possible version is older.
Contributor

Can you include in the summary an example configuration?

Comment on lines +32 to +34
Although cargo, of course, allows you to specify exact versions in `Cargo.toml` and has a lock file that can freeze the versions used,
that requires manually inspecting each version of each transitive dependency to confirm it complies with a policy of
using a version older than some age.
Contributor

I feel like splitting Cargo.toml and Cargo.lock helps and going more into the workflows around Cargo.lock, like

Suggested change
Although cargo, of course, allows you to specify exact versions in `Cargo.toml` and has a lock file that can freeze the versions used,
that requires manually inspecting each version of each transitive dependency to confirm it complies with a policy of
using a version older than some age.
You can pin versions in your `Cargo.toml` but that is a manual process and doesn't cover transitive dependencies.
`Cargo.lock` records versions but those are at the time of last change. Adding a new dependency can cause you to pull in transitive dependencies that are outside your desired minimum age. There isn't a manageable way to run `cargo update` and intentionally get versions that are inside of your desired minimum age.

Also, this might be better served under Alternatives.

Comment on lines +36 to +37
As such, it would be useful to have an option to put a limit on commands like `cargo add` and `cargo update`
so that they can only use package releases that are older than some threshold.
Contributor

@epage epage Feb 23, 2026

This isn't just those commands but any that use Cargo.lock could update it.

Author

I'm sorry, I'm having difficulty understanding what you are trying to say here. Cargo.lock could update what?

Contributor

Sorry, left out some words. Any command that can use a Cargo.lock, like cargo check, can update it if Cargo.toml has been changed.

so that they can only use package releases that are older than some threshold.


## Guide-level explanation
Contributor

This could be benefited by having examples of config and output

Comment on lines +151 to +152
* `min-publish-age` only applies to dependencies fetched from a registry, such as crates.io. It does not apply to git or path dependencies, in
part because there is not always an obvious publish time, or a way to find alternative versions.
Contributor

We should also call out that it is only for registry dependencies that have pubtime set.

Comment on lines +158 to +160
* `cargo install`
* If a specific version is not specified by the user, respect `registries.min-publish-age` for the version of the crate itself,
as well as transitive dependencies when possible.
Contributor

MSRV resolver doesn't apply to cargo install, so I assume it shouldn't apply here

Comment on lines +27 to +30
Another reason to wish to delay using a new release, is because new versions can introduce new bugs. By only
using versions that have had some time to "mature", you can mitigate the risk of encountering those bugs a little.
Hopefully the bugs will be found before you start using the new version and thus could update to a version that fixes those bugs,
or lock your version, so you don't get the buggy version, if the bugs apply to you.
Contributor

Another way to frame this is that by people having different risk tolerances and setting different values (particularly with the default being "disabled"), we end up with a gradual roll out of versions so some people act as canaries for the rest.

Comment on lines +179 to +180
## Rationale and Alternatives
[rationale-and-alternatives]: #rationale-and-alternatives
Contributor

I feel like there is content from the issue that isn't here, for example...

we should explain that the reason we are going with fallback and not deny is to allow people to override this when there is a fix or feature they need. We've been experimenting with ways of doing this with `cargo update --precise`, but if a newer transitive dependency is needed, you can't do that.

I think the thread also covered that fallback for this doesn't have the same problems as incompatible-rust-version because that suffers from a lack of data while crates.io exhaustively sets pubtime and so can any other registry that supports it.

We also had conversations on naming that aren't captured here, e.g. "publish" is intentionally in the name to help connect it to this only working on some deps.

Contributor

Also, I think we talked about the use case for exclude being lessened because

  • per-registry support
  • ability to override

* We list MSRVs for unselected packages, should we also list publish times? I'm assuming that should be in local time
* Locking message for [Cargo time machine (generate lock files based on old registry state) #5221](https://github.com/rust-lang/cargo/issues/5221) is in UTC time, see [Tracking Issue for _lockfile-publish-time_ #16271](https://github.com/rust-lang/cargo/issues/16271), when relative time differences likely make local time more relevant
* Implementation wise, will there be much complexity in getting per registry information into `VersionPreferences` and using it?
* `fallback` precedence between this and `incompatible-rust-version`?
Contributor

IMO unspecified, though I would put `incompatible-rust-version` at a higher precedence in the implementation so that people are more likely to have successful builds.

there has been an increased interest in using automated tools to ensure that packages used
are older than some age. This creates a window of time between when a dependency is compromised
and when that release is used by your project. See for example the blog post
"[We should all be using dependency cooldowns](https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns)".


Moving discussion from longer/less coherent thread: https://github.com/rust-lang/rfcs/pull/3923/changes#r2841703101

Rewording the arguments, I don't really think that this constitutes an empirical basis for the benefit of these delays. What would constitute a valid empirical basis is vulnerabilities that have been caught by this system after it's been implemented in other places. These should exist in practice since deployment of these mitigations is slow at best, and there should be cases where some people have the mitigations and some don't, and are thus vulnerable. But I haven't seen any so far.

The only argument listed is that the time between publication and mitigation is short, and so, if we delay deployment until a certain time after publication, then deployment won't happen before mitigation.

This really doesn't feel valid for one primary reason: people need to notice that there's a problem in order to mitigate it, and delaying mitigation also delays the time that people actually encounter an issue.

Like, my argument is that the vulnerabilities we should be worrying about are ones like the xz vulnerability, which fall cleanly into this class: they require an analysis of multiple moving parts and are only going to be noticed once someone actually has them running.

There are plenty of notable crates where an exploit would easily be noticed immediately and delaying dependency updates would catch them, in most cases. For example, if someone tried to release a malicious version of syn, we would definitely know almost immediately.

However, there are plenty of other situations where a vulnerability would not be noticed, because Rust's trait system lets you hide code in the most unsuspecting of places. Would you notice if one crate you used suddenly started relying on a particular hasher implementation that was promising but not as scrutinized, and then that hasher implementation was running malicious code? Would you notice if a reasonable-looking trait import also happened to silently change the behaviour of a method that looked fine?

The xz example is such a good example because it fits so well into so many examples. You don't have to be a bad actor hired by a nation-state to exploit a codebase; you just need to get your code in front of one tired maintainer who says "looks good to me" when you've snuck something suspicious in. Or, even just one compromised account: how many repos do you know are big enough to have several PRs per day, and how many of those do you scrutinize every PR?

Simply assuming that a timer starts ticking from release until someone notices a vulnerability is very naïve, because while that is definitely the case for obvious exploits like adding bitcoin_miner::run() to the top of a function, all it takes is one layer of indirection to hide that, and then it's not obvious at all.

There are definitely quick "hit-and-run" vulnerabilities that involve a compromised account that might be stopped by this, but so far we haven't seen any of those in the Rust ecosystem, and it seems like there are plenty of other low-hanging fruit with dependency updating that could be chosen instead of a simple time limit, especially considering how it can make security updates challenging to deploy.

There are a few other things discussed in other threads as ways of further improving the dependency-updating and dependency-locking systems, but here I wanted to specifically hone in on why I don't think that the data here creates a valid argument in this context.

Contributor

There are plenty of notable crates where an exploit would easily be noticed immediately and delaying dependency updates would catch them, in most cases. For example, if someone tried to release a malicious version of syn, we would definitely know almost immediately.

However, there are plenty of other situations where a vulnerability would not be noticed, because Rust's trait system lets you hide code in the most unsuspecting of places.

I'm a bit confused by this. As we said in the other thread, this isn't meant to be an exhaustive solution but one part of improving the whole and one we can deploy rather cheaply / quick for a quick improvement while we continue to work towards the larger improvements.


This really doesn't feel valid for one primary reason: people need to notice that there's a problem in order to mitigate it, and delaying mitigation also delays the time that people actually encounter an issue.

FWIW, the core thesis is that the relevant parties here are security scanners, not early victims. In the Python ecosystem, for example, the overwhelming majority of malicious package reports comes from automated static analysis, not from users who have already been victimized.

(And therefore it's not that people need to notice a problem, it's that systems need to be in place to monitor for problems and people should generally wait for that monitoring rather than acting as ecosystem canaries themselves.)


That was not clear from the article, at least, although I definitely was biased against the article despite the fact that you could not have more clearly stated that the problem itself is overhyped. That definitely changes the situation in a way that should be reflected more clearly in the RFC IMHO.


That was not clear from the article, at least, although I definitely was biased against the article despite the fact that you could not have more clearly stated that the problem itself is overhyped.

I'm open to suggestions on how to phrase or edit the blog post, I thought I had covered that with this paragraph:

Cooldowns enforce positive behavior from supply chain security vendors: vendors are still incentivized to discover and report attacks quickly, but are not as incentivized to emit volumes of blogspam about “critical” attacks on largely underfunded open source ecosystems.

(But I freely admit that the post is terse, and I'm always open to feedback on clarifying my language.)

Author

It's also worth pointing out that many of these attacks have involved a developer's credentials to the repository and/or package registry getting compromised, and in such cases, the developer often eventually realizes their credentials have been compromised. But that isn't always immediate, and it can take time for them to realize what happened, restore their account, and yank the malicious release.

The locations and names of the configuration options in this proposal were chosen to be
consistent with existing Cargo options, as described in [Related Options in Cargo](#related-options).

## Prior Art

@clarfonthey clarfonthey Feb 23, 2026


One thing that's particularly relevant IMHO is how npm treats updating package.json and package-lock.json, which is different from how Cargo updates Cargo.lock (and not Cargo.toml, usually).

Right now, Cargo.lock is the brute-force solution for ensuring deterministic builds: every single dependency is locked to an exact version, no exceptions, until the lockfile is updated. However, there really isn't a way to update the lockfile other than updating everything or updating a single crate, and there isn't much control over the lockfile beyond a few key configuration options like MSRV-aware resolution or choosing the minimum valid version.

There are plenty of tools that other projects use to help mitigate some dependency issues that could definitely be helpful if they were part of Cargo. For example, the rust-lang repo has a large allowlist of every crate that is ever permitted to be a dependency, to ensure that all crates are at minimum vetted to be from trusted sources. And other tools like cargo vet exist to manage this trust process in a more configurable way.

It would be nice if you could have an intermediary between Cargo.toml and Cargo.lock that let you add a few more limits on dependencies, like:

  • Only allow certain transitive dependencies (of specified versions) and warn or error otherwise
  • Distinguishing between "minimum" versions and "preferred" versions: for example, your code might work with version 1.0.0 of a crate but you've vetted version 1.1.2 and you haven't checked newer versions yet (NPM does this by just updating package.json directly, lower versions be damned, and this could be a reasonable option for Cargo too potentially for binary or internal library crates)

It's not clear whether all of these changes should be part of Cargo, but at least some of them could be, and we should make it easier to understand how things are being updated to avoid cases where someone accidentally updates to a vulnerable version. Minimum release time could still be one of the constraints added to packages under this system, but it wouldn't be the only one.
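The kind of intermediary policy file sketched above might look something like this (entirely hypothetical syntax, not part of this RFC or any existing Cargo feature; all key names are invented for illustration):

```toml
# Hypothetical "policy" layer sitting between Cargo.toml and Cargo.lock.

[policy.dependencies.serde]
minimum = "1.0.0"      # the version the code is known to compile against
preferred = "1.0.210"  # the newest version that has actually been vetted

[policy.transitive]
# Only these crates may appear anywhere in the dependency graph.
allow-only = ["serde", "serde_derive", "syn", "quote"]
on-violation = "warn"  # or "error"
```

A minimum publish age would then be one more constraint expressible in such a file, rather than a standalone mechanism.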


Yeah, I think minimum release ages are only a (small) part of a larger toolbox here. I think the larger toolbox includes things like trusted packages/reviews as well as capability analysis (e.g. foo v1.0.0 could only do local I/O, but foo v1.0.1 can do networked I/O and thus merits more attention).

Contributor

> Minimum release time could still be one of the constraints added to packages under this system, but it wouldn't be the only one.

If that's the case, I'm not sure I understand the concern. Yes, there are more things that we can be doing. Is it that the system for these rules isn't generalized like rust-lang/cargo#7193?

> Only allow certain transitive dependencies (of specified versions) and warn or error otherwise

We have several different issues exploring different angles on this:

> Distinguishing between "minimum" versions and "preferred" versions: for example, your code might work with version 1.0.0 of a crate but you've vetted version 1.1.2 and you haven't checked newer versions yet (NPM does this by just updating package.json directly, lower versions be damned, and this could be a reasonable option for Cargo too potentially for binary or internal library crates)

In my opinion, we should work toward vetting minimum dependency versions. rust-lang/cargo#5657 includes two different ways to resolve to them for checking, but I have concerns about each approach, and I wonder if we should instead be more aggressive with Cargo.toml, like with rust-lang/cargo#15583

Contributor

(I almost deleted my last two replies so we could focus on the bigger topic in the first but didn't want to lose track of the links)

Comment on lines +49 to +52
The `resolver.incompatible-publish-age` configuration can also be used to control how `cargo` handles versions whose
publish time is newer than the min-publish-age. By default, it will try to use an older version, unless none is available
that also complies with the specified version constraint, or the `rust-version`. However by setting this to "allow"
it is possible to disable the min-publish-age checking.
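For readers following along, the two options discussed in this thread might be combined roughly like this in `.cargo/config.toml` (a sketch based only on the names in the excerpt above; the exact keys and value formats are still under discussion in the RFC):

```toml
# Sketch only: option names taken from the RFC excerpt above.
[registry]
# Hold back any version published more recently than this.
global-min-publish-age = "3 days"

[resolver]
# What to do when only a too-new version satisfies the version
# constraints: "allow" disables the min-publish-age check entirely.
incompatible-publish-age = "allow"
```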
Member

I'm trying to understand when I would want to set this. Is there perhaps an example we can provide of typical use-cases? Isn't setting this to allow the equivalent of removing the registry.global-min-publish-age configuration?

Or does registry.global-min-publish-age apply always (e.g., even with a lock file?) whereas resolver.incompatible-publish-age only applies when we're actively searching for new versions?

Contributor

Is the question "why set this to allow when you can set the time to 0?"

It would be good to clarify this. One controls time, the other behavior, but both have a way to turn the feature off due to their design.

I'd say turning off the behavior is for transiently disabling it for all registries rather than setting it for each registry.

Member

I think the common case for most users is that they only have one registry, right? In that case just commenting out or setting the time to zero feels better to me than complicating the configuration out of the gate with two options that interact.

(It seems like we can always add the extra configuration to temporarily opt out later).

We use the term `publish` and not `release`


## Unresolved Questions
Member

I'm wondering how/if we expect this to interact with alternative date sources (some future, some partially already visible today). For example, if we implement something like TUF or a mirror of crates.io, the "age" here might want to be relative to the index snapshot we're using, rather than to absolute local time. Maybe there's a question worth adding about exploring what "now" is?

As a concrete example of that, mirrors today can implement a version of this RFC by delaying imports of new versions by the period into the index they expose, rather than implementing that in Cargo. Fully replacing that with this RFC would require that the now timestamp is kept deterministic over time without a lock file checked in, which could be done if the index 'checkout' itself had the now timestamp stored when it was fetched. That would also help users opt-in more eagerly without needing that controlled at the mirror level, while still staying behind for most versions.

Contributor

We should include that as an alternative.

Note that pubtime is already stabilized in the registry, so a TUF mirror of crates.io would have to preserve pubtime or else it would be invalid.

If we don't include it as an alternative, we should still call out, for the unaware, that this relies on the already-stable pubtime field.

Member

Putting it more concisely: the RFC as written, I think, assumes the age is relative to wall-clock now. I think registries should be able to tell Cargo when now is.

For example, in offline mode, I think cargo ought to use the last time it fetched from the network, not continue ratcheting forward. (Or at least we should consider that...)

Contributor

Could you help me understand what the use case is for this?

Member

If I'm on a flight, I don't really want Cargo suddenly realizing I have new crates "available" because they have matured in my local copy of the index: I want to keep my index static, minus the last N days as of when it was fetched. I think Cargo partially supports this with offline mode, but not completely: even if some newer crate is available locally, I probably don't want to start using it, because without an up-to-date index I can't know whether it was yanked in the time since I fetched the index.

My understanding is that our model is that automated scanners take 24-72 hours to pick up bad code in lots of cases. If I believe that, I want to use the index as of 72 hours before when I fetched it, not 72 hours before whatever my local time is.
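To make the two anchors concrete, here is a small sketch of the difference (illustrative only; none of these names come from Cargo or the RFC):

```python
from datetime import datetime, timedelta, timezone

def is_eligible(pub_time, min_age_hours, now):
    """Return True if a version published at `pub_time` is old enough
    to use, measured against the chosen `now` anchor."""
    return pub_time <= now - timedelta(hours=min_age_hours)

pub = datetime(2026, 2, 20, 12, 0, tzinfo=timezone.utc)

# "now" anchored to wall-clock time, days after the index was fetched:
wall_clock = datetime(2026, 2, 25, 12, 0, tzinfo=timezone.utc)
# "now" anchored to when the index was last fetched:
index_fetched = datetime(2026, 2, 22, 12, 0, tzinfo=timezone.utc)

# With a 72-hour cooldown, the version "matures" while offline under
# the wall-clock anchor, but stays held back under the fetch-time anchor.
assert is_eligible(pub, 72, wall_clock)
assert not is_eligible(pub, 72, index_fetched)
```

The choice of anchor only matters when the machine has been offline (or the index stale) longer than the configured cooldown.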

Contributor

I think anything like this could be added later, and I feel it's best left as a future possibility to keep this RFC more narrowly scoped (for faster resolution of the discussion and implementation).

There also isn't a coherent snapshot of the index that we are operating on because of the sparse index design. We only update the entries we request and not all of those that have been downloaded. We'd have to keep per-entry "last update" times and make this relative to those. This is also adding a lot of writes even when the cache is unchanged which seems like it could affect performance and health of the drive (granted, target-dirs are even worse there) for something that won't be used in most cases.

I'm also not aware of any prior art that tracks it this carefully. Some tools allow the user to set a date for comparison. `cargo generate-lockfile --publish-time` will limit packages according to a specific date. As a future possibility, we could have a config option for people to override "now".

tmccombs and others added 2 commits February 24, 2026 01:34
Add clarification on how to update versions that are newer than the
minimum age.

Clarify this is not intended to be a single fix for all supply chain
problems. And why it is valuable even if everyone uses it.
@ehuss ehuss moved this to RFC needs review in Cargo status tracker Feb 24, 2026

Labels

T-cargo Relevant to the Cargo team, which will review and decide on the RFC.

Projects

Status: RFC needs review

9 participants