9 changes: 8 additions & 1 deletion src/libkernelbot/launchers/modal.py
@@ -30,9 +30,16 @@ async def run_submission(

  await status.push("⏳ Waiting for Modal run to finish...")

+ # Use task-specific timeout + 60s buffer for signal-based timeout
+ # This catches most hangs; container timeout is the fallback for hung GPUs
+ task_timeout = config.get("ranked_timeout", 180)
+ signal_timeout = task_timeout + 60
Copilot AI Jan 18, 2026
The timeout calculation logic doesn't account for different submission modes. The code only uses "ranked_timeout" from config, but according to the codebase, there are three different timeout values: "test_timeout", "benchmark_timeout", and "ranked_timeout" corresponding to different submission modes (test, benchmark, leaderboard). This means test and benchmark submissions will incorrectly use the ranked_timeout value (180s default) instead of their respective timeout values. Consider using a similar approach to the one in src/libkernelbot/launchers/github.py which has a get_timeout function that properly maps the mode to the correct timeout value.
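A minimal sketch of the mode-aware lookup this comment suggests. The mode names, config keys, and the `get_timeout` name are assumptions modeled on the comment's description of `src/libkernelbot/launchers/github.py`, not on that file's actual code:

```python
# Hypothetical mode -> timeout-key mapping; the three config keys mirror
# the values named in the review comment above.
DEFAULT_TIMEOUT = 180

def get_timeout(config: dict) -> int:
    """Pick the timeout matching the submission mode instead of always
    reading ranked_timeout."""
    mode_to_key = {
        "test": "test_timeout",
        "benchmark": "benchmark_timeout",
        "leaderboard": "ranked_timeout",
    }
    # Unknown or missing mode falls back to the ranked value, preserving
    # the current behavior for leaderboard submissions.
    key = mode_to_key.get(config.get("mode"), "ranked_timeout")
    return config.get(key, DEFAULT_TIMEOUT)
```

With something like this in place, `signal_timeout = get_timeout(config) + 60` would pick up a task's `test_timeout` or `benchmark_timeout` for those modes instead of silently using the ranked default.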


  result = await loop.run_in_executor(
      None,
-     lambda: modal.Function.from_name("discord-bot-runner", func_name).remote(config=config),
+     lambda: modal.Function.from_name("discord-bot-runner", func_name).remote(
+         config=config, timeout_seconds=signal_timeout
+     ),
  )

await status.update("✅ Waiting for modal run to finish... Done")
8 changes: 6 additions & 2 deletions src/runners/modal_runner_archs.py
@@ -2,12 +2,16 @@
  # Modal apps on specific devices. We will fix this later.
  from modal_runner import app, cuda_image, modal_run_config

+ # Container-level timeout (seconds) - kills container regardless of GPU state
+ # This is the nuclear option for hung GPUs that don't respond to signals
+ MODAL_CONTAINER_TIMEOUT = 300

Comment on lines 4 to +8
Copilot AI Jan 18, 2026

The container-level timeout (300 seconds) is shorter than the default signal timeout (180 + 60 = 240 seconds) for ranked submissions, but could be exceeded by longer timeouts in other modes or custom configurations. For example, if a task has a ranked_timeout of 300 seconds, the signal timeout would be 360 seconds, which exceeds the container timeout of 300 seconds. This means the container timeout would always trigger before the signal timeout can work, defeating the purpose of having a two-tier timeout system. Consider either making MODAL_CONTAINER_TIMEOUT configurable or setting it to a value that provides a reasonable buffer above the maximum expected signal timeout (e.g., 600-900 seconds).

Suggested change
- # Container-level timeout (seconds) - kills container regardless of GPU state
- # This is the nuclear option for hung GPUs that don't respond to signals
- MODAL_CONTAINER_TIMEOUT = 300
+ import os
+
+ # Container-level timeout (seconds) - kills container regardless of GPU state
+ # This is the nuclear option for hung GPUs that don't respond to signals
+ # Make this configurable and give it a higher default than any expected signal timeout.
+ def _get_modal_container_timeout(default: int = 900) -> int:
+     raw = os.getenv("MODAL_CONTAINER_TIMEOUT")
+     if raw is None:
+         return default
+     try:
+         value = int(raw)
+         return value if value > 0 else default
+     except (TypeError, ValueError):
+         return default
+
+ MODAL_CONTAINER_TIMEOUT = _get_modal_container_timeout()
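The overlap the comment describes can be checked with the PR's own numbers; the 300 s `ranked_timeout` is the hypothetical per-task configuration from the comment, not a value in the repository:

```python
# Values from this PR plus the hypothetical scenario in the comment above.
ranked_timeout = 300                  # hypothetical per-task config value
signal_timeout = ranked_timeout + 60  # buffer added in launchers/modal.py
container_timeout = 300               # MODAL_CONTAINER_TIMEOUT in this PR

# The container would be killed before the signal-based timeout can fire,
# so the two-tier design degenerates to a single hard kill:
assert signal_timeout > container_timeout  # 360 > 300
```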

  gpus = ["T4", "L4", "L4:4", "A100-80GB", "H100!", "B200"]
  for gpu in gpus:
      gpu_slug = gpu.lower().split("-")[0].strip("!").replace(":", "x")
-     app.function(gpu=gpu, image=cuda_image, name=f"run_cuda_script_{gpu_slug}", serialized=True)(
+     app.function(gpu=gpu, image=cuda_image, name=f"run_cuda_script_{gpu_slug}", serialized=True, timeout=MODAL_CONTAINER_TIMEOUT)(
          modal_run_config
      )
-     app.function(gpu=gpu, image=cuda_image, name=f"run_pytorch_script_{gpu_slug}", serialized=True)(
+     app.function(gpu=gpu, image=cuda_image, name=f"run_pytorch_script_{gpu_slug}", serialized=True, timeout=MODAL_CONTAINER_TIMEOUT)(
          modal_run_config
      )