Closed
Replace direct singularity invocation with `datalad containers-run` for the single-container code path. The pipeline (multi-container) path is unchanged.

At init time, register the container at the analysis dataset level via `datalad containers-add` with a call-fmt built from the user's singularity_args. This enables `datalad containers-run`, which reads the container path from `.datalad/config` rather than relying on a hardcoded layout.

The generated `participant_job.sh` now runs three separate steps:

1. `datalad containers-run` for the BIDS app (clean provenance)
2. `datalad run` for zipping outputs
3. `git rm` for removing raw outputs

The zip script (`bidsapp_run.sh.jinja2`) is removed since its logic is now inline in `participant_job.sh`. This supports any container dataset layout (e.g. repronim/containers) without path assumptions.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
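The call-fmt registration can be sketched roughly as follows. This is a hypothetical helper, not the project's code: the function name and defaults are illustrative, while `{img}` and `{cmd}` are the placeholders that datalad-container substitutes with the registered image path and the wrapped command.

```python
def build_call_fmt(singularity_args):
    """Assemble a --call-fmt string for `datalad containers-add`.

    Hypothetical helper (names are illustrative). Arguments are joined
    verbatim, so shell variables like $PWD expand at job run time
    rather than at registration time.
    """
    joined = " ".join(singularity_args)
    return f"singularity exec {joined} {{img}} {{cmd}}"

print(build_call_fmt(["--containall"]))
# singularity exec --containall {img} {cmd}
```

Because the call-fmt lives in `.datalad/config`, every clone of the analysis dataset invokes the container the same way without the generator hardcoding a path.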
Replace `os.access(parent_dir, os.W_OK)` with a tempfile creation attempt. `os.access` only checks POSIX permission bits and returns false negatives on NFSv4 ACL filesystems common on HPC clusters.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
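A minimal sketch of the replacement check (the function name is illustrative, not the project's):

```python
import tempfile


def is_writable(parent_dir):
    """Check writability by actually creating a temporary file.

    os.access(parent_dir, os.W_OK) only inspects POSIX permission
    bits, which yields false negatives on NFSv4 ACL filesystems;
    attempting a real file creation reflects what the kernel will
    actually allow.
    """
    try:
        with tempfile.TemporaryFile(dir=parent_dir):
            return True
    except OSError:
        return False


print(is_writable(tempfile.gettempdir()))  # True on a writable directory
```

The temporary file is deleted automatically when the `with` block exits, so the probe leaves no residue in the target directory.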
With `--containall`, Singularity sets the working directory to $HOME inside the container, which breaks relative paths to inputs/outputs. Adding `--pwd $PWD` restores the expected working directory.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
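The fix amounts to appending the extra flag to the Singularity argument list; a sketch with illustrative names (the project's actual defaults may differ):

```python
def with_pwd(args):
    """Append --pwd $PWD when --containall is in use.

    --containall isolates the container environment, which also resets
    the working directory to $HOME; --pwd $PWD restores the caller's
    working directory so relative input/output paths resolve. $PWD is
    left as a literal so the shell expands it at job run time.
    """
    if "--containall" in args:
        return args + ["--pwd", "$PWD"]
    return args


print(" ".join(with_pwd(["--containall"])))
# --containall --pwd $PWD
```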
containers-run fails to fetch the container image when running from a clone obtained through a RIA store. Work around this by adding an explicit `datalad get` before containers-run. The container image path is discovered at init time via containers-list (not hardcoded), threaded through to the job template.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
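Discovering the image path could look roughly like the sketch below, assuming JSON result records from `datalad containers-list` carry a `name` and a `path` field (the sample record here is fabricated for illustration; real records contain more fields):

```python
import json

# Hypothetical single result record, as one JSON line; real output
# from `datalad -f json containers-list` would be produced by datalad.
sample = '{"name": "bids-app", "path": "/ds/code/containers/bids-app.sing"}'


def image_path(json_line, name):
    """Return the registered image path for the named container, or None."""
    record = json.loads(json_line)
    if record.get("name") == name:
        return record["path"]
    return None


print(image_path(sample, "bids-app"))
# /ds/code/containers/bids-app.sing
```

The resolved path can then be interpolated into the job template so the script runs `datalad get <path>` before `containers-run`.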
The single-container path no longer generates a separate zip script; zipping is handled inside `participant_job.sh`.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The `mkdir -p` for the output directory before containers-run likely left the dataset dirty, causing a "clean dataset required" error at the zip step. containers-run with `--output` should handle directory creation.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
The subject pruning (`rm -rf` of other subjects) before containers-run dirties the input subdataset. Without `--explicit`, datalad run refuses to start on a dirty dataset. The old single `datalad run` also used `--explicit`, so this matches prior behavior.

Also removes the pre-created output dir (unnecessary with `--output`) and adds a TODO to verify that subject pruning is necessary.

Co-Authored-By: Claude Opus 4.5 <[email protected]>
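Putting the pieces together, the containers-run invocation written into the job script could be assembled as sketched below. The subject and input/output paths are illustrative assumptions; `-n`/`--explicit`/`--input`/`--output` are standard datalad (containers-)run options:

```python
import shlex


def containers_run_cmd(container_name, subject):
    """Assemble the datalad containers-run line for one subject.

    Hypothetical paths; --explicit is required because pruning other
    subjects dirties the input subdataset, and without it datalad run
    aborts on a dirty dataset.
    """
    cmd = [
        "datalad", "containers-run",
        "-n", container_name,
        "--explicit",
        "--input", f"inputs/data/{subject}",
        "--output", "outputs",
    ]
    return shlex.join(cmd)


print(containers_run_cmd("bids-app", "sub-01"))
# datalad containers-run -n bids-app --explicit --input inputs/data/sub-01 --output outputs
```

With `--explicit`, only the declared inputs and outputs participate in the provenance record, so pre-run modifications elsewhere in the dataset do not block the job.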
Author: accidental open sorry!