reinstate no-spawn option for remove (#7134)
How did they get spawned back at the ICP - hopefully that does not happen automatically? (Even if that was human error, it's still a problem that we need to be able to fix easily, but just wondering...)
If you re-run ICP (i.e. deployment) tasks.. But like you said, it might be a human error of inserting tasks in the wrong place and not being able to remove them without spawning. Also, as mentioned, you may want to remove these tasks for an interval then reintroduce them.
I was going to say unwanted flow-on from retriggering ICP tasks should not be a problem now, since group trigger means we can re-run in the same flow, and flow-on will be blocked by the prior history ... but adding new tasks (which by definition have no history back in the graph) does bring the flow-on problem back!
This will also not be a problem when we relegate deployment tasks to a bespoke startup/whatever graph section 😉
FYI, skip mode is probably the best solution for this use case:
Yes it does look promising...
However, on the flip side, you may need to put all downstream tasks in skip mode too, right? Also, the gap might be so large that you don't want these tasks running (even in skip mode).. Might be better to have both options.
Yes, however, the flip side of the flip side is that the remove solution requires you to know enough about the graph structure to correctly identify the task(s) which require removal, and enough about SoD behaviour to determine the point at which those tasks need to be removed (in some cases tasks may need to be removed multiple times due to subsequent outputs causing them to re-spawn). But anyway.... Another option might be to set the tasks; this would be nice and easy, but at present we don't have a syntax for "set all instances of X before a certain cycle", e.g.
As it stands, the
All depending on the exact details of the graph structure and the precise state of the task pool at the time the command was issued. The user can't really tell what will happen without a lot of graph understanding and SoD knowledge, which is a shame. The new option might be a little bit confusing (i.e. it only applies to parentless / sequential-xtriggered tasks), but the status quo is a bit confusing, so whatever. @hjoliver, thoughts?
Things to test for this PR:
Skip mode is ideal:
But otherwise we probably need the
(Note this could be needed to recover after accidentally triggering a new flow with parentless tasks too, in which case cut-off is unequivocally required, not skip). Generally....
Unfortunately the way we spawn parentless tasks is a sort of pragmatic implementation caveat, not fundamentally a feature of the SoD concept, which makes it harder to understand. I've commented on this since the beginning of SoD planning: really parentless tasks should be spawned by the clock, or by an xtrigger, or by some spawner object (if not clock or xtriggered). But it was easier to get it working by having each parentless task spawn its own next instance to wait on non-task prerequisites - which is really a relic of SOS, not SoD. And that is probably not easy to manipulate without "low level" commands.
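To make the behaviour being discussed concrete, here is a minimal hypothetical sketch (task names and graph invented for illustration, not taken from this PR): `obs` is parentless - it has no upstream task in the graph - so under the current implementation each instance spawns its own successor to wait on the clock, which is why removing an instance without a no-spawn option simply causes the next one to appear.

```cylc
# Hypothetical minimal example: "obs" is parentless (clock-triggered,
# no upstream task), so each instance spawns its own next instance.
[scheduling]
    initial cycle point = 2026
    [[graph]]
        P1D = """
            @wall_clock => obs   # parentless head of the chain
            obs => process       # "process" is spawned on demand by obs
        """
```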
This:
Also, it's not an option slapping users in the face.. It's there if needed.
Problem here is you run into the same skip-broadcast issue; you may inadvertently trigger a cascade of downstream tasks to run/spawn..
To be clear, not arguing against the use case here, and we do need some kind of a solution to enable the removal of orphan tasks (#7209). This argument is absolutely an option, but:
So we wouldn't be able to publish generic instructions for this intervention and there are a bunch of caveats. Hence trying to explore whether there are other options here. In the SoS days, the remove option had spawn/no-spawn modes which worked consistently in all scenarios, whereas, as it stands here, the absence of
This plays both ways, with the remove approach, you may inadvertently trigger a cascade of downstream tasks if:
With the set approach, you may inadvertently trigger a cascade of downstream tasks if:
The set approach would be a bit harder to get wrong because we can automatically generate the list of newly added tasks (and already log this for reloads), whereas we cannot automatically determine the correct task(s) to remove (which may be different from one cycle to another). Moreover, with the remove approach, you may create the conditions for a workflow stall due to missing task outputs, whereas the set and skip approaches ensure that this cannot happen.
There's no graph trigger off the removal of waiting tasks that would cause downstream to run (aside from runahead release, and spawning the next instance); there is a trigger for succeeded (the far more common occurrence), i.e. set/skip..
Not using
No, it's much more likely downstream would start running... There is only one downstream that will spawn and potentially run with removal (the next instance), whereas with set you could have any number of direct dependents spawn and/or kick off (including the next instance!)..
The whole ESNZ operations team have run into this problem, and would like the
I can see both sides of this to some extent, but I'm not sure I entirely understand your points @oliver-sanders - e.g.:
Surely that's not true? If it is possible - even by accident - to trigger an unwanted flow that spawns onward via parentless tasks, then we need a way to recover from that situation. Isn't "remove without spawning" the easiest way to do that? (If there really is a more intuitive way, fine, but I'm not convinced yet). As @dwsutherland notes, this won't be the default, it is just a command option with an obscure sounding name "no-spawn" that should deter casual use, and which we can document the caveats and dangers of: "don't use this low level command option if you don't understand it!". Would it help to construct a dummy workflow that clearly illustrates the scenario we hope to address here, so we can consider the various options in a more concrete context?
A basic example would be to create a gap, i.e. remove without spawning at one cycle and reintroduce at some future cycle..
@dwsutherland, you've misunderstood me, the set approach would be safer here. If we set ALL of the tasks between the initial-cycle-point and the runahead-cycle-point then it would be impossible for ANY of these tasks to run (the DB would prohibit it).
Agreed (that we need a solution) of course (and I have suggested two alternatives above), but I think you may have skipped past the issues with remove as a solution for this, I'll elaborate below...
If I thought that I wouldn't have written it!
Ok, here's a (purposefully vague) scenario:
And here's how we would handle it with the three different approaches discussed so far...

**Set**

Mark all previous instances of the tasks between the ICP and 2026 (inclusive) as succeeded:
Succeeded outputs for each task are written to the DB, any tasks spawned will be instantly completed and removed from the pool:
**Skip**

Tell all the newly added tasks to skip until 2027:
Tasks will be configured to skip, succeeded outputs will be written to the DB when they do.
**Remove**

Remove the first instances of the tasks, ensuring that they don't spawn their successors:
The tasks will be removed from the pool:
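For concreteness, the three interventions might look something like the following (the workflow name, task selector, and cycle point here are illustrative placeholders; the exact selectors depend on the graph):

```console
# Set: mark historical instances as succeeded (the default for "cylc set")
$ cylc set "myflow//20250101T00/a"

# Skip: broadcast skip run mode so the tasks fast-forward when they run
$ cylc broadcast myflow -n a -s "run mode = skip"

# Remove: remove the first instances without spawning their successors
$ cylc remove --no-spawn "myflow//20250101T00/a"
```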
**Specific examples**

Continuing with the same example (where the tasks a, b & c have been added)...

**1: First instance of added tasks is not necessarily the ICP**

This means we need to remove the tasks from all (

**2: First instance of added tasks are not necessarily n=0 at the time of the remove command:**

With the remove solution, you would need to run the remove command three times:
**3: Not all previous cycles are necessarily inactive**

Say when we perform the graph change, the workflow state is:
Then the following tasks will be spawned when the upstreams complete (despite any previous attempts to remove them) and require subsequent re-removal:
**Conclusions**

The remove approach is an option, but it isn't the only option and it's far from perfect. Set and skip are also options; they may also be imperfect, but have the robustness advantage of writing outputs into the DB, which results in fewer caveats. Ideally, we should be able to apply graph changes automatically without encountering these situations in the first place (see #7203). So we may potentially consider some sort of automation here, e.g. by pre-initial rules, Cylc could automatically set newly added tasks to succeeded in earlier cycles, completely avoiding the problem in the first place. (added as a note in #7203)
Thanks @oliver-sanders - I'll attempt to find the time to digest your write-up in detail later today. [Update, sorry, failed due to user support, R20 meetings, and ongoing attempts to get access to all users' workflow logs, which I don't currently have because security ... but I see David has continued the discussion below]. As a first comment, I was preferring the remove option as much simpler in terms of both implementation and command syntax (if perhaps not conceptually simpler, being a low level intervention) - you only need to target a single task.
Your other suggestions rely on selecting a potentially large number of tasks over many cycles and modifying all of those in the DB. Maybe that's OK in some cases though, with the right syntax (I like your suggested syntax there). But does it cover all cases, and easily?
"If we set ALL of the tasks" is doing a bit of work here.. It assumes we want all tasks in those cycles to be
Well, this is very common with our workflows, all of the operations team have run into this problem.. In fact, just this week I helped someone by using the

Again, as a consequence of our setup, many workflows have downstream workflows (~60 operational workflows). So these downstream workflows have many parentless xtriggered tasks.. And a large subset of these workflows are collaborative with research (so research poll our operations with workflows containing parentless xtriggered tasks).. This will be less of a problem when separate graph sections are in:

However, you can still imagine people wanting to create/avoid a gap without artificially setting/skipping everything in between.
This is a simple example, in a real workflow you may have to set more than just
If these are tasks doing real things, and downstream tasks/workflows have genuine dependence on the things they did, then you may want them to stall/hang. There are complexities in the choice of tasks, what setting them to succeeded does, and the order in which you do it.
R1 can be set, and the next cycle task removed... And as mentioned above, it's not necessarily desirable to have fake succeeded tasks, so the same could be said about remembering them.
Again, way more complicated to select an entire subgraph in real workflows
Same as above, for real tasks, stall/hang might be a good thing for real downstream tasks/workflows.
As with set.
If this is
(or set R1, remove the next)
This may be desirable (as explained above).
Then set R1, remove
Yes, sometimes this is required.
Yes, this is true, and the next instance popping up will remind you.. It could be arguably worse not to remember all the downstream tasks to set/skip.
As above, set R1 and then remove-no-spawn the next instance(s)..
The remove
Both aren't perfect, yes, but both have their place in my humble opinion..
As mentioned, fake succeeded tasks may not always be desirable; this would have to be specified by the user.. However, ideally, I do agree that there should be a better design/option for dealing with gaps so as not to expose users to these "low-level" internal workings..
(We might need two approaches here: low-level
@hjoliver, to save time, skip my responses above, I've tried to provide a quick summary here. @dwsutherland, just to clarify, you don't need to defend your desired solution here (it may still have value), I'm just trying to get a handle on the problem as there are several edge cases and scenarios listed here which we do not have a solution to. @dwsutherland, I'm guessing there are no arguments with these points:
@dwsutherland, @hjoliver: To demonstrate some of these problems with an example derived from the above, `flow.cylc`:

```cylc
[scheduler]
    allow implicit tasks = True
[scheduling]
    cycling mode = integer
    initial cycle point = 1
    final cycle point = 5
    [[graph]]
        R1 = """
            install_x => install_y => install_z
        """
        P1 = """
            install_z[^] => x => y => z
            z[-P1] => x
        """
        R1/2 = """
            x => remover
        """
        R1/$ = """
            z => stop
        """
        # the graph change:
        ## +P1/P1 = """
        ##     install_x[^] => a
        ## """
        ## +P2/P1 = """
        ##     install_y[^] => b
        ## """
        ## +P3/P1 = """
        ##     install_z[^] => c
        ##     a & b => c
        ##     x => a
        ##     z => c
        ## """
        #
        # dependency to make reporting easier:
        ## R1/$ = """
        ##     c => stop
        ## """
[runtime]
    [[root]]
        script = sleep 5
    [[INSTALL]]
    [[install_x, install_y, install_z]]
        inherit = INSTALL
    [[y]]
        script = """
            if [[ $CYLC_TASK_CYCLE_POINT -eq 3 ]]; then
                sed -i 's/##//' "${CYLC_WORKFLOW_RUN_DIR}/flow.cylc"
                cylc reload "${CYLC_WORKFLOW_ID}"
                cylc trigger "${CYLC_WORKFLOW_ID}//1/INSTALL"
                # we want the tasks a, b & c to run from cycle 4 onwards
                cylc set --pre=all "${CYLC_WORKFLOW_ID}//4/[ab]"
            fi
        """
    [[remover]]
        script = """
            while true; do
                cylc remove "${CYLC_WORKFLOW_ID}//[123]/[abc]" --no-spawn
                sleep 3
            done &
            remover_pid="$!"
            for i in $(seq 1 10); do
                echo $i
                cylc__job__poll_grep_workflow_log '5/x.*succeeded' || true
            done
            kill "$remover_pid" || true
        """
    [[stop]]
        script = """
            echo "$(cylc cat-log "${CYLC_WORKFLOW_ID}" | grep 'Removed tasks: [123]/[abc]' | wc -l) 'cylc remove's required:"
            cylc cat-log "${CYLC_WORKFLOW_ID}" | grep 'Removed tasks: [123]/[abc]'
            echo
            echo
            echo "$(cylc cat-log "${CYLC_WORKFLOW_ID}" | grep -o '\([123]/[abc]/01\)' | sort | uniq | wc -l) unintended tasks became active:"
            cylc cat-log "${CYLC_WORKFLOW_ID}" | grep '\([123]/[abc]/01\)'
        """
```

This workflow:
When I run the example (on this branch):
The exact outcomes are highly timing dependent (due to the unpredictable behaviour of

Whereas if we replace the "remove" with "set":
My points are:
Thanks for the clear example @oliver-sanders. Your argument makes sense for that example, but I think it has several properties that make use of
Further, I don't think you've responded to some of our comments above, such as:
Surely in those scenarios,

It seems to me that we probably need both approaches, perhaps along with documenting (warning) that

I think David is going to post a more detailed response based on his experience with our operational workflows again.
I don't disagree with your caveats in general, but I think most aren't really a problem for our real-world use cases, and further it's OK to have some powerful lower-level commands that are useful in some common situations even if more generic use has caveats and requires more from the user.
I don't think it's that hard. I would use the simpler real examples of David's type and warn that you really have to know what you're doing if you stray far from that simplicity.
Yep, I'm not dismissing those - I just think we need this as well. [UPDATE] and David has made a good case below that remove IS by far the easiest approach for our ops issues.
I promise I'm only fighting for it because I see
It's actually the opposite (believe it or not):
I would use
True.. Usually I would pause the workflow to avoid
So
Not sure I follow.. I guess you're saying manual interventions should be avoided in the first place? I agree.
I would
Will provide an example below, in line with what we encounter in our operational workflows and my responses above.
Better than unintended downstream running? (tasks or other workflows)..
People have to use their judgement.. In the situations I would use
I would say the
Disagree (as explained), it's the opposite, in the situations I use
Oh? . . . I think the remaining points (about caveats and documentation) are not entirely applicable given my response to your

**Examples**
`flow.cylc`:

```cylc
[scheduler]
    allow implicit tasks = True
[scheduling]
    initial cycle point = 20260101T0000Z
    [[xtriggers]]
        xtrig_a = . . .
        xtrig_x = . . .
    [[graph]]
        R1 = """
            install_other
            install_x => install_y => install_z
            install_z & install_other => install_done
        """
        PT10M = """
            install_done[^] => x => y => z?
            @xtrig_x => x
            install_done[^] => a => b => c?
            @xtrig_a => a
            c | z => d
        """
[runtime]
    [[root]]
        script = sleep 5
```

When removing
Compare this to
Let

`flow.cylc`:

```cylc
[scheduler]
    allow implicit tasks = True
[scheduling]
    initial cycle point = 20260101T0000Z
    [[xtriggers]]
        xtrig_a = . . .
        xtrig_x = . . .
    [[graph]]
        R1 = """
            install_other
            install_x => install_y => install_z
            install_z & install_other => install_done
        """
        PT10M = """
            install_done[^] => x => <NDown_x>
            @xtrig_x => x
            install_done[^] => a => <NDown_a>
            @xtrig_a => a
        """
[runtime]
    [[root]]
        script = sleep 5
```

There's actually no way to handle this with

**Thoughts:**

And often these scenarios will occur with the addition of a new sub-graph on reload and rerun of R1 tasks; however, for the purposes of handling/creating gaps it's probably still justified.

We also have our DR system that, at the moment, spins up workflows with off-ICP start tasks (which introduces a gap).. If this workflow gets restarted, then tasks are spawned on rerun of the ICP.. This problem can be alleviated with isolated graphs:

@oliver-sanders - I will employ both
**Caveats to the remove approach**

To conclude this lengthy debate, several caveats to the remove approach have been identified. The following major points have been conceded (by acknowledgement or lack of counter):
Additionally, this point was not conceded but was demonstrated in a worked example:
I'm not going to spend time arguing out the remaining points. The key thing is that we have acknowledged the approach has caveats which make it challenging for the general case, and even some caveats which apply to your specific use case.
General case solution
I'm not necessarily trying to convince you not to have
Great, let's talk alternatives! Outline of the scenario:
Outline of the problem:
While the status-quo behaviour is "logical" by SoD, it's rather illogical from a user perspective. It can certainly be argued that this is a bug (I lean towards the position that it is, more on that below). Irrespective of whether we accept this as a bug or not, there are two possible ways for Cylc to handle this situation:
We have to choose one of these as the default behaviour. Currently we choose (1); however, I would argue that:
**More:**

Yes, we could provide tooling and documentation to help operators get out of this situation, but as outlined above, it's very difficult to document this in the general case and the intervention can get rather messy depending on the specifics of the graph. But it would be much easier to prevent this from happening in the first place. We can always provide a mechanism for running these new-old tasks if anyone were ever to want this. I did actually outline a mechanism in the above which would achieve this:
I.e. all historical instances of newly added tasks would be automatically marked as completed:
You can still remove/trigger/set these tasks to cater to any exotic use case, but the default behaviour is much more manageable. This problem is actually very similar to the issue of pre-start-cycle-point tasks in the warm start scenario (#7178), an issue which is impacting our operational workflows at the moment:
The "reload-point" becomes the oldest active cycle point which is consistent with:
As such, another option would be to set the start-cycle-point for all newly added tasks to the oldest active cycle point which would provide nice consistency. So there's two simple alternatives:
Both of these approaches would completely resolve this situation, removing the need for the operator to run the
Bear with me, I do come to an agreement in the end. . .
If you don't like it from an idealistic point of view, that's fine, however these points are off target..
This point is void.. I've explained already:
I object here too.. To create/avoid a gap I would usually just target the parentless task(s) of a cycle/glob-of-cycle point(s) if necessary.
Yes, this is a post-spawn intervention.. Yes, it would be nice to avoid the situation in the first place, but people make mistakes.. And it's nice that they can visualize the thing they need to act on (as opposed to the "whole subgraph", which they would have to know about).
I already addressed this:
That's why I'm not concerned..
And again, I've already addressed this.. There are "unintended active tasks" with
I'm with you.. Another scenario would be wanting to create a gap for some unspecified number of cycle points by removing some lead (parentless) tasks.
Agree, again..
There's another issue I've run into, which this will solve, and that's off ICP start tasks (when you spin up a workflow on specified tasks, cycles ahead of the ICP)..
Agreed! Along with ideas around parentless tasks #7228 (which I agree that xtriggers should be like automatic python tasks, separate from what they trigger)

**However**

As mentioned above, there's still the scenario where people may want to create gaps or a patchwork of them.. (without removing the tasks altogether, and definitely without having gaps considered complete.)

**For now:**

This PR is a small change that can be undone once we have some of these solutions ironed out and ready to go (which I assume would be something like
Oh wow! I have given a long list of rational arguments outlining several caveats to the remove approach, no ideals required (note, I'm not trying to push any one solution in particular, just trying to find the best solution in general). I'm just trying to get agreement that the remove approach has caveats (which it most definitely does), some of which apply to you! You may have workarounds for these caveats (e.g. pausing the workflow, or setting the first instance of the task to remove), and you might not be concerned by some of these caveats (e.g. if they don't apply to your particular example), but they are still caveats! If you want to continue haggling out the caveats...
This point doesn't apply to your specific example, however, that doesn't make it in any way "void". You are not the only person who will ever encounter this issue. Your example is not the only one to which it will apply. Linear pipelines are a special case.
You've misunderstood this point. The R1 tasks you're triggering may be inter-dependent, not all of the downstreams are necessarily spawned at the same time, so multiple remove commands may be required. This is demonstrated in my worked example above, it's provable, I don't think there's room for argument here?
Point reluctantly conceded then?
You have provided a workaround for this issue using "set", however, the workaround is an acknowledgement of this caveat, all I'm trying to do here is to demonstrate that there are caveats! However (since you mention it), you have vehemently argued against the use of "set" for these purposes above!
No, this point has not been addressed, unintended active tasks do not (and logically can not) occur with alternative approaches. Otherwise, we're in agreement that it would be easier if we could prevent this issue from happening in the first place which is great (all I need to achieve here). What are your thoughts on the two alternative mechanisms I suggested above? Any other ideas? These alternatives may also be quick to implement, though, as noted above, implementing an alternative doesn't mean we wouldn't add a
I'm not 100% sure what scenario you're describing here, but I think this is exactly the use case for which skip-mode was introduced? Nothing "devilish" about a real-world use case! We use skip-mode to toggle tasks on/off.
Of course it does, as does
Of course there are caveats, I'm just asking for an alternate to the above repercussions.. And the ability to handle/create gaps in situation where it's more practical to do so..
I really don't... But I have to, because there are scenarios in which
I'm not arguing for one or another, just that
Yes, if you don't select all you need to
I'm happy with that, I'm not arguing for it as a primary solution.. But to give options.
Skip mode is fake succeeded, which may be undesirable.. Also they would have to use a new flow if later they wanted to run a gap.

(final?) Note: Again, I'm not arguing that there are no caveats to

We may not even be having this conversation if the isolated graph sections were in (although there may still have been an argument for it)... Kind of wish our energy was put into that instead of this 🤷♂️
Oh boi. I'm not going to continue in this, but instead summarise as:
I suggested a couple of approaches which avoid the need for removal, and asked for feedback:
You seem ok with these ideas:
Please can I get your thoughts on these? (Note, this issue is not just about R1 tasks, it can happen at any point in the workflow's history)
Well with respect to these:
(Noting here that "newly-added" means tasks added to the workflow definition with the scheduler reloaded... And the discussion is about how the spawning mechanism should treat them in the history prior to their introduction/some current point)

I think the 2nd alternative has some assumptions that I'm not sure about.. i.e. what if the newly added task needs to be introduced a few cycles before the oldest active cycle... The first is a reasonable option, however, it didn't "really"/actually complete.. Did it? How about this for an alternative:
(i.e. cannot spawn the newly-added tasks off a flow that's already run the same path)

Obviously, if the outputs/states change on rerun, causing an alternative path to be followed for the first time (i.e.

In my mind this would fix the problem with both R1 and any other set of historical reruns, without the need to meddle with historical completion states (pre-ICP aside)..

By now, people may have gotten into the habit of

Thoughts?
It didn't complete; however, Cylc effectively considers pre-initial tasks to have completed. This is the same situation as warm start, where we have proposed that

Since Cylc assumes pre-initial tasks to have completed for the purpose of task prerequisites, it is arguably inconsistent that the record states to the contrary, and that this logic only applies internally to one workflow and not externally to others. The tasks could be marked as having run in "skip mode", which serves as an indication that they are only "simulated complete" tasks, not the result of a real submission. Skip mode tasks display a (fast-forward) task badge in the GUI. The upside of this approach is that these tasks can easily and intuitively be re-run (via trigger) or removed as needed.
We could, potentially, include a mechanism to allow earlier instances of these tasks to be run (similar to the Cylc 7

However, another way to handle this issue would be to introduce the concept of a "reload point" which I suggested in #7201. This would allow the operator to decide which cycle point the reload changes apply from; newly added tasks could then be automatically spawned into the pool from that point onwards (#5949).

This issue (#7134), along with those mentioned in the last paragraph (#7201, #5949), is part of #7203 which is about enabling workflows to be safely and automatically reloaded. At present, graph changes have many caveats which means a human operator has to supervise reloads to manually add/remove/set tasks in order to manage the transition. #7203 is about resolving these caveats, making reloads safe and predictable so that manual intervention is not needed, enabling the continuous deployment of workflows in automated environments. This topic will come up in a developers discussion in the near future.

This is a large part of why it's important for us to have an automated solution for this problem which works in the general case, irrespective of any manual overrides we might provide.
I'm more happy with pre-initial than post-initial.
Is
Yes, I would be in favor of this.. It would also help with the speed of reload for large workflows that are far from the original ICP.

Do have a think about the alternative I proposed; it might feel fundamental, but it kind of makes sense in a way.
Aah, sorry, forgot to respond to that, I didn't quite grapple the suggestion. The historical instances of newly added tasks do not belong to any flow yet as there's no DB entry. So we would need to add output entries to mark them, say as completed, in order to assign them to a flow.
(I'll try to summarize and suggest the way forward IMO on Element chat first...)
It's the rerun tasks that have flow history, and that spawn the newly-added; this is what I'm referring to.
Sorry @dwsutherland, I don't understand what you're suggesting here, could you elaborate?
It's really what the packet reads. The main problem we are addressing is the unintended consequence of newly-added tasks being spawned by tasks that have already run (rerun of tasks that spawn the newly-added). So if we take on the following rule:
What I'm saying is: if

We then add

Then retrigger/rerun of

And to run

This change would appear to solve the problem.
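As an illustration only (a toy model with invented names, not Cylc internals), the proposed rule amounts to recording which (flow, task, output) combinations have already driven spawning, and returning no children when the same output is re-generated in the same flow:

```python
# Toy model (hypothetical, not Cylc code) of the proposed rule:
# "don't spawn on the same output twice in the same flow".

# graph children keyed by (task, output); here n:succeeded spawns a and x
graph = {("n", "succeeded"): ["a", "x"]}

spawned = set()  # (flow, task, output) combinations already used for spawning

def on_output(flow, task, output):
    """Return children to spawn, skipping outputs already used in this flow."""
    key = (flow, task, output)
    if key in spawned:
        return []  # rerun of the same task/output in the same flow spawns nothing
    spawned.add(key)
    return graph.get((task, output), [])

print(on_output(1, "n", "succeeded"))  # first run spawns children: ['a', 'x']
print(on_output(1, "n", "succeeded"))  # rerun in the same flow spawns nothing: []
print(on_output(2, "n", "succeeded"))  # a new flow spawns them again: ['a', 'x']
```

A forced re-spawn (the equivalent of an explicit trigger) would simply bypass the `spawned` check.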
Right, gotcha, good idea, i.e.: "Don't spawn on the same output twice (in the same flow)". I think that makes sense; off the top of my head, I can't think of any situation where we would not want to do that. It certainly makes reload a bit more predictable, and you can always force task spawning with

Furthermore, there's a bunch of issues to do with the message-handling code which point towards a refactor in this code area:
But, unfortunately, in relation to "the problem of not spawning historical instances of newly added tasks", there are a couple of caveats:
So I think we would still need a more formal mechanism for defining the "reload point" from which newly-added tasks are inserted, in order to solve the issue in general. However, this idea may well make sense for other reasons.

At present there's no way to remove parentless tasks from the scheduler and not have them spawn their next instance.
This is not ideal, and at ESNZ I've run into this problem many times while adding new tasks to a workflow and having them spawned in at the distant past (i.e. the ICP), where the only way to fix the workflow is to spawn them forward one cycle at a time to the desired cycle point.
Also, in general, perhaps you wish not to run an isolated branch for a gap, then introduce it again at some future cycle point.
This PR fixes the problem by reinstating the `--no-spawn` option to `cylc remove`..

Example:
running shows:



Removing `d` normally (`$ cylc remove couple/run1//20260106T0000Z/d`) spawns it into the next cycle:

With `--no-spawn`:

And of course removing the sequentially spawned `a` (`cylc remove --no-spawn couple/run1//20260101T0000Z/a`), with this option, gives:

(as expected)
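For readers without the screenshots, the example above amounts to something like the following (workflow ID and cycle points taken from the PR text; the comments paraphrase the screenshot behaviour):

```console
$ cylc remove couple/run1//20260106T0000Z/d
# d is removed, but its next instance is spawned into the following cycle

$ cylc remove --no-spawn couple/run1//20260106T0000Z/d
# d is removed and no successor is spawned
```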
Code changes were made in such a way that they should be conflict-free with #7132
Seems our forms are smart enough at the UI end:

Check List
- I have read `CONTRIBUTING.md` and added my name as a Code Contributor.
- Applied any dependency changes to both `setup.cfg` (and `conda-environment.yml` if present).
- If this is a bug fix, the PR is raised against the relevant `?.?.x` branch.