Add scripts+workflow to build and upload tarballs from artifacts#4448
Conversation
Here's my worklog with more notes, prototypes, etc., fyi: https://github.com/ScottTodd/claude-rocm-workspace/blob/main/tasks/active/multi-arch-releases.md#workstream-2a-build-multi-arch-tarballs
erman-gurses left a comment
Looks good overall, added one concern - will do one more pass tomorrow.
The upload path includes the platform ({run_id}-{platform}/tarballs/),
so the script needs to know the target platform rather than
auto-detecting from the current system. This matters when building
Windows tarballs on a Linux runner.
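For illustration, a minimal sketch of what an explicit platform flag could look like; the flag name and default below are assumptions for this example, not the actual build_tarballs.py interface:

```python
import argparse
import platform


def parse_arguments() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Build tarballs from workflow artifacts")
    # Hypothetical flag: default to the host OS but allow cross-targeting,
    # e.g. packaging Windows tarballs on a Linux runner.
    parser.add_argument(
        "--platform",
        choices=["linux", "windows"],
        default="windows" if platform.system() == "Windows" else "linux",
        help="Target platform used in the {run_id}-{platform}/tarballs/ upload path",
    )
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_arguments()
    print(f"Packaging for platform: {args.platform}")
```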
Force-pushed from 54e29a9 to a763196
| f"--amdgpu-families={families_str}", | ||
| "--expand-family-to-targets", |
@marbre this from #4449 is working as expected now; it expands gfx110X-all to gfx1100, gfx1101, gfx1102, gfx1103:
https://github.com/ScottTodd/TheRock/actions/runs/24255576558/job/70826158778
python build_tools/build_tarballs.py \
--run-id="24187929660" \
--run-github-repo="ROCm/TheRock" \
--dist-amdgpu-families="gfx110X-all;gfx1151" \
++ Downloading prim_test_gfx1100.tar.zst
++ Downloading prim_test_gfx1101.tar.zst
++ Downloading prim_test_gfx1102.tar.zst
++ Downloading prim_test_gfx1103.tar.zst
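As a rough illustration of what the expansion does, here is a hand-written sketch; the mapping below is an example only, not the project's actual family definitions, which --expand-family-to-targets reads from TheRock itself:

```python
# Hypothetical family-to-targets mapping for illustration only.
FAMILY_TO_TARGETS = {
    "gfx110X-all": ["gfx1100", "gfx1101", "gfx1102", "gfx1103"],
    "gfx1151": ["gfx1151"],
}


def expand_families(families_str: str) -> list[str]:
    """Expand a ';'-separated list of families into concrete gfx targets."""
    targets: list[str] = []
    for family in families_str.split(";"):
        targets.extend(FAMILY_TO_TARGETS.get(family, [family]))
    return targets


print(expand_families("gfx110X-all;gfx1151"))
# ['gfx1100', 'gfx1101', 'gfx1102', 'gfx1103', 'gfx1151']
```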
It seems uploading fails due to missing credentials. In the settings I see:
github_repository: ScottTodd/TheRock
is_pr_from_fork: False
bucket: therock-ci-artifacts-external
Shouldn't is_pr_from_fork be True?
That's all working as intended. The run was a workflow_dispatch in my fork, as I can't test .github/workflows/multi_arch_build_tarballs.yml in ROCm/TheRock until the workflow is included on a default branch there.
- The repository is my fork
- It's not a pull request
- The artifacts-external bucket is used for any workflow run outside of ROCm/TheRock (push/pull_request/workflow_dispatch/etc.)
My fork does not have any credentials to upload or access to self-hosted runners, so the upload is expected to fail there.
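A simplified sketch of that bucket-selection rule; the function and the internal bucket name are illustrative assumptions, while the external bucket name matches the log output above:

```python
def select_artifact_bucket(github_repository: str, is_pr_from_fork: bool) -> str:
    """Pick the bucket for artifact uploads.

    Any run outside ROCm/TheRock (push/pull_request/workflow_dispatch/etc.)
    uses the external bucket, which forks have no credentials for.
    """
    if github_repository != "ROCm/TheRock" or is_pr_from_fork:
        return "therock-ci-artifacts-external"
    return "therock-ci-artifacts"  # assumed internal bucket name, for illustration


# The workflow_dispatch run in the fork:
print(select_artifact_bucket("ScottTodd/TheRock", is_pr_from_fork=False))
# therock-ci-artifacts-external -> upload fails there without credentials, as expected
```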
Motivation
We'd like to produce tarballs as part of multi-arch release pipelines. For context, see:
This will also enable building JAX packages as part of CI pipelines, see:
Technical Details
This downloads artifacts from a workflow run (the current workflow run when included as part of CI/CD workflows, or a prior workflow for testing or repackaging) and then uploads them to an artifacts bucket (e.g. `therock-dev-artifacts`). Release workflows (to be added) can then choose to copy these tarballs to a tarballs bucket (e.g. `therock-dev-tarball`).

Important

The workflow is not yet integrated into any workflows via `workflow_call`. It is only run manually via `workflow_dispatch`.

Tarball files use substantial storage (2GB+ per tarball), so I'd like to only include this for release builds and opt-in for PRs that want to build JAX -- at least until `KPACK_SPLIT_ARTIFACTS` is flipped and we can produce a single "multiarch" tarball instead of separate tarballs per family.

Behavior with and without KPACK_SPLIT_ARTIFACTS

In this initial implementation, behavior differs between `KPACK_SPLIT_ARTIFACTS` disabled and `KPACK_SPLIT_ARTIFACTS` enabled. We may later want to also produce tarballs without including test artifacts, produce larger groups independent of the current families like "all Radeon GPU targets", etc. All of that is just changes to the filtering and repackaging.
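For example, excluding test artifacts would only require an extra filter during repackaging. A hypothetical sketch, with the filename pattern assumed from the download log above:

```python
from pathlib import Path


def select_artifacts(artifact_dir: Path, include_tests: bool = True) -> list[Path]:
    """Choose which fetched artifact archives to repackage into the tarball."""
    selected = []
    for archive in sorted(artifact_dir.glob("*.tar.zst")):
        # Artifact names look like "<component>_<kind>_<family>.tar.zst",
        # e.g. "prim_test_gfx1100.tar.zst" (pattern assumed for illustration).
        if not include_tests and "_test_" in archive.name:
            continue
        selected.append(archive)
    return selected
```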
Downloading and extracting
This implementation runs a loop around:
```
python build_tools/artifact_manager.py fetch \
  --stage=all \  # artifacts from all stages (foundation,math-libs,etc.), all components (lib,doc,test,etc.)
  --amdgpu-families=${families_str} \  # filter to a single family
  --output-dir=${output_dir} \
  --flatten \  # extract and flatten into "dist" directory in one command
  --download-cache-dir=${download_cache_dir}  # reuse generic artifacts downloaded by prior calls
```

This has the advantage of being easy to reproduce outside of the script and reusing cached downloaded artifacts for local debugging and CI efficiency. We also considered fetching and not flattening, then using artifacts.py::ArtifactCatalog to repackage as build_python_packages.py does (using py_packaging.py), but this is simpler.
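The loop itself is roughly the following; a sketch with assumed variable names and paths, shelling out to the same fetch command rather than copying the script's actual code:

```python
import subprocess
from pathlib import Path

# Assumed inputs for illustration.
families = ["gfx1100", "gfx1101", "gfx1102", "gfx1103", "gfx1151"]
download_cache_dir = Path("download-cache")

for family in families:
    output_dir = Path("dist") / family
    # One artifact_manager.py fetch invocation per family, reusing the shared cache.
    subprocess.run(
        [
            "python",
            "build_tools/artifact_manager.py",
            "fetch",
            "--stage=all",
            f"--amdgpu-families={family}",
            f"--output-dir={output_dir}",
            "--flatten",
            f"--download-cache-dir={download_cache_dir}",
        ],
        check=True,
    )
```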
Compression

This implementation produces `.tar.gz`, matching existing tarball releases. Compression would be faster and more efficient using `.tar.zst`. I ran some benchmarks on my Windows dev machine:

Expand for benchmark results
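For reference, creating the two formats from Python looks roughly like this, using the stdlib tarfile module and the third-party zstandard package; this is a sketch, not the code in this PR:

```python
import tarfile
from pathlib import Path

import zstandard  # third-party package, only needed for the .tar.zst variant


def make_tar_gz(src_dir: Path, out_path: Path) -> None:
    # gzip-compressed tar, matching existing tarball releases.
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=src_dir.name)


def make_tar_zst(src_dir: Path, out_path: Path, level: int = 10) -> None:
    # zstd-compressed tar: stream an uncompressed tar through a zstd writer.
    cctx = zstandard.ZstdCompressor(level=level)
    with open(out_path, "wb") as f, cctx.stream_writer(f) as writer:
        with tarfile.open(fileobj=writer, mode="w|") as tar:
            tar.add(src_dir, arcname=src_dir.name)
```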
I did wrap compression in a `ProcessPoolExecutor`, since parallel compression does make efficient use of CPU cores. Sample benchmarks showing the speedup (so not oversubscribed):

Expand for benchmark results
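A minimal sketch of that parallelization pattern, with one compression job per family tarball; function and variable names here are illustrative, not taken from the script:

```python
import tarfile
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path


def compress_one(src_dir: Path, out_path: Path) -> Path:
    """Compress a single flattened dist directory into a .tar.gz (CPU-bound)."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=src_dir.name)
    return out_path


def compress_all(dist_dirs: list[Path], out_dir: Path) -> list[Path]:
    # One worker process per CPU core (the executor default); each gzip stream
    # is single-threaded, so running them in parallel uses the cores efficiently.
    with ProcessPoolExecutor() as executor:
        futures = [
            executor.submit(compress_one, d, out_dir / f"{d.name}.tar.gz")
            for d in dist_dirs
        ]
        return [f.result() for f in futures]


if __name__ == "__main__":
    dist_root = Path("dist")  # assumed layout: one subdirectory per family
    family_dirs = sorted(p for p in dist_root.iterdir() if p.is_dir())
    print(compress_all(family_dirs, dist_root))
```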
Test Plan
With and without `KPACK_SPLIT_ARTIFACTS`, artifacts were downloaded, packaged into the expected tarballs, and "uploaded" to a staging directory.

Test Result

- Without `KPACK_SPLIT_ARTIFACTS`: https://github.com/ScottTodd/TheRock/actions/runs/24205988455/job/70661826987
- With `KPACK_SPLIT_ARTIFACTS`: https://github.com/ScottTodd/TheRock/actions/runs/24217435275/job/70701188683

Submission Checklist