Description
This issue tracks transient HTTP 502 Bad Gateway errors that occur during downloads from GitHub's CDN (releases, Copilot CLI, etc.). These are infrastructure failures, not code issues.
Latest Occurrence
Run: 22031343904
Date: 2026-02-15 06:52 UTC
Job: detection (Issue Monster workflow)
Error:

```
curl: (22) The requested URL returned error: 502
##[error]Process completed with exit code 22.
```

The job failed while downloading GitHub Copilot CLI v0.0.405 from:

```
https://github.com/github/copilot-cli/releases/download/v0.0.405/copilot-linux-x64.tar.gz
```
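The asset itself exists and is valid; the failure was in serving it. A quick probe such as the one below (a hypothetical command, not taken from the workflow logs) is enough to confirm whether the CDN has recovered after a failure:

```bash
# HEAD-check the release asset; -f makes curl exit 22 on HTTP errors
# (matching the exit code seen in the failed run), -L follows the
# redirect to GitHub's CDN, and -w prints the final HTTP status code.
curl -fsSIL -o /dev/null -w '%{http_code}\n' \
  "https://github.com/github/copilot-cli/releases/download/v0.0.405/copilot-linux-x64.tar.gz"
```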
Pattern Analysis
This is the 3rd documented occurrence in the past week:
| Run ID | Date | Workflow | Download Target | Status |
|---|---|---|---|---|
| 21833584727 | 2026-02-09 | CodeQL | git clone (checkout) | HTTP 500/502 |
| 22024528697 | 2026-02-14 | Test Action | awf binary (v0.7.0) | HTTP 502 |
| 22031343904 | 2026-02-15 | Issue Monster | Copilot CLI (v0.0.405) | HTTP 502 |
Common characteristics:
- All errors occur during downloads from GitHub infrastructure (releases, git, etc.)
- Self-resolving within minutes (verified with post-failure curl tests)
- No code or configuration issues identified
- Typically affects 1 job while other jobs succeed
- Install scripts already have retry logic (3 attempts); a sketch of this pattern follows the list
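For context, the retry logic referenced above typically looks something like the following (an assumed shape; the actual install scripts are not reproduced in this issue):

```bash
#!/usr/bin/env bash
# Minimal sketch of a fixed three-attempt download retry. The URL is the
# asset from the latest failed run; the loop structure is assumed, not
# copied from the real install script.
set -euo pipefail

url="https://github.com/github/copilot-cli/releases/download/v0.0.405/copilot-linux-x64.tar.gz"
out="copilot-linux-x64.tar.gz"

for attempt in 1 2 3; do
  if curl -fsSL -o "$out" "$url"; then
    exit 0
  fi
  echo "Attempt ${attempt} failed; retrying in 5s..." >&2
  sleep 5
done

echo "All 3 attempts failed" >&2
exit 22  # curl exits with 22 on HTTP errors when -f is set
```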
Impact
- Severity: Low (self-resolving, no data loss)
- Frequency: ~3 occurrences per week (based on documented failures)
- Affected Jobs: Typically 1 out of 4-7 jobs per workflow
- Resolution Time: <5 minutes (automatic recovery)
Root Cause
GitHub infrastructure/CDN transient failures. Not a problem with:
- Repository code or configuration
- Workflow definitions
- Release assets (verified to exist and be valid)
- Network configuration
- Retry logic (already implemented in install scripts)
Recommended Actions
Short Term
- No immediate action required - these failures self-resolve
- Re-run failed workflows when they occur
- Continue monitoring frequency via cache-memory entries
Long Term (if frequency increases)
- Check GitHub Status: Monitor (www.githubstatus.com/redacted) during failures
- Add exponential backoff: Enhance retry logic with increasing delays between attempts (see the sketch after this list)
- Implement fallback mirrors: Consider caching critical binaries
- Escalate to GitHub Support: If frequency exceeds 5 occurrences per week
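If the long-term path is taken, the exponential-backoff item could be covered by curl's built-in `--retry`, which doubles the delay between attempts and treats HTTP 502 as a transient, retryable error, or by an explicit loop like the sketch below (hypothetical; the script name and argument handling are placeholders):

```bash
#!/usr/bin/env bash
# Sketch of a download helper with exponential backoff (1s, 2s, 4s, 8s, 16s).
# Usage: retry_download.sh <url> <output-file>
set -euo pipefail

url="$1"
out="$2"

for attempt in 1 2 3 4 5; do
  if curl -fsSL -o "$out" "$url"; then
    exit 0
  fi
  delay=$((2 ** (attempt - 1)))
  echo "Download failed (attempt ${attempt}); retrying in ${delay}s..." >&2
  sleep "$delay"
done

echo "Giving up after 5 attempts" >&2
exit 1
```

The same effect with less code is `curl -fsSL --retry 5 -o "$out" "$url"`, at the cost of less control over logging.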
Monitoring
All occurrences are logged to /tmp/gh-aw/cache-memory/ci-failure-*.md for pattern tracking.
Latest analysis: ci-failure-22031343904.md
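To keep an eye on the weekly rate against the escalation threshold above, the cache-memory entries can be tallied directly on the runner; a rough check, assuming the path documented in this issue:

```bash
# Count documented CDN-failure entries and list them newest-first.
ls /tmp/gh-aw/cache-memory/ci-failure-*.md 2>/dev/null | wc -l
ls -lt /tmp/gh-aw/cache-memory/ci-failure-*.md 2>/dev/null
```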
Classification: Infrastructure/Transient
Auto-resolves: Yes (typically <5 minutes)
Action Required: None (monitor only)
Automatically tracked by CI Doctor
AI generated by CI Doctor