feat: honour Retry-After for artifact retries #3737

Draft
lox wants to merge 1 commit into feat/artifact-batch-hints from feat/artifact-retry-after-hints

Conversation

Contributor

@lox lox commented Mar 5, 2026

Summary

This follow-up teaches artifact retry loops to honour server-provided Retry-After headers, so retry pacing can be coordinated by the backend during pressure events.

What Changed

  1. Added a small shared helper in internal/artifact/retry_after.go to parse and apply Retry-After values to retry intervals.
  2. Wired Retry-After handling into CreateArtifacts retries in internal/artifact/batch_creator.go.
  3. Wired Retry-After handling into UpdateArtifacts retries in internal/artifact/uploader.go.
  4. Added tests in internal/artifact/batching_test.go for valid and invalid Retry-After behaviour.

Why

  1. Improves stability by letting the server shape retry timing during load.
  2. Reduces risk of synchronized retry bursts from many agents.
  3. Keeps existing retry behaviour intact when Retry-After is absent or invalid.

Validation

  1. go tool gofumpt -extra -w internal/artifact/retry_after.go internal/artifact/batch_creator.go internal/artifact/uploader.go internal/artifact/batching_test.go
  2. go test ./internal/artifact -count=1
  3. go test ./api ./agent ./clicommand -count=1

Notes

  1. This PR is stacked on top of feat: support server-hinted artifact batch sizing #3736.

Contributor

@moskyb moskyb left a comment

looks good! good to bring this in line with many of our other retry loops.
