feat: engine plugin system for game-engine agnostic orchestration#4
frostebite merged 1 commit into main
Conversation
Introduce an EnginePlugin interface so the orchestrator can support different game engines (Unity, Godot, Unreal, etc.) without hardcoding engine-specific behavior. Unity ships as a built-in plugin and remains the default — zero changes required to unity-builder.

- EnginePlugin interface: name, cacheFolders, preStopCommand
- UnityPlugin: built-in plugin preserving all existing behavior
- 3 community plugin loaders: npm module, CLI executable, Docker image
- initEngine() dispatcher with prefix-based source detection
- All hardcoded 'Library' references replaced with getEngine().cacheFolders
- K8s preStop hook now uses getEngine().preStopCommand
- CLI: --engine and --engine-plugin options
- All 812 tests pass unchanged

Co-Authored-By: Claude Opus 4.6 <[email protected]>
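The interface described above is small enough to sketch. Field names come from the PR summary; the exact types below are assumptions, not the PR's actual declaration:

```typescript
// Hypothetical sketch of the EnginePlugin shape listed above.
// Field names come from the PR summary; the types are assumed.
interface EnginePlugin {
  name: string;             // engine identifier, e.g. 'unity' or 'godot'
  cacheFolders: string[];   // project-relative folders to cache, e.g. ['Library']
  preStopCommand?: string;  // optional cleanup command for the K8s preStop hook
}

// The built-in Unity plugin would preserve the previous hardcoded behavior.
const UnityPlugin: EnginePlugin = {
  name: 'unity',
  cacheFolders: ['Library'],
};
```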
📝 Walkthrough

This PR introduces a pluggable engine system enabling the orchestrator to support multiple game engines beyond Unity. It adds CLI options for engine selection, implements engine loaders for various sources, creates an EnginePlugin interface, and refactors cache handling from hardcoded Library folders to dynamic engine-specific folders.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI User
    participant BuildCmd as Build Command
    participant BuildParams as BuildParameters
    participant Engine as Engine System
    participant Loaders as Engine Loaders
    participant Cache as Cache Service
    participant K8s as Kubernetes Job

    CLI->>BuildCmd: --engine unity --engine-plugin docker:image
    BuildCmd->>BuildParams: create(options)
    BuildParams->>Engine: initEngine('unity', 'docker:image')
    alt Docker-prefixed
        Engine->>Loaders: loadEngineFromDocker('image')
        Loaders->>Loaders: docker run get-engine-config
        Loaders->>Engine: return EnginePlugin
    else CLI-prefixed
        Engine->>Loaders: loadEngineFromCli(execPath)
    else Module source
        Engine->>Loaders: loadEngineFromModule(moduleId)
    else No prefix (unity)
        Engine->>Engine: setEngine(UnityPlugin)
    end
    Engine->>Cache: restoreEngineCache()
    Cache->>Cache: iterate getEngine().cacheFolders
    Cache->>Cache: restore each folder
    BuildParams->>K8s: create job spec
    K8s->>Engine: getEngine().preStopCommand
    K8s->>K8s: configure preStop lifecycle
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/model/orchestrator/providers/k8s/kubernetes-job-spec-factory.ts (1)
155-162: ⚠️ Potential issue | 🟠 Major

Avoid unconditional 60s preStop delay for engines without cleanup commands.

At Line 161, `sleep 60` runs even when `preStopCommand` is undefined, adding teardown latency to every non-Unity build.

Proposed fix

```diff
+  const preStopCommand = getEngine().preStopCommand?.trim();
   lifecycle: {
     preStop: {
       exec: {
         command: [
           '/bin/sh',
           '-c',
-          `sleep 60; ${getEngine().preStopCommand || 'true'}`,
+          preStopCommand ? `sleep 60; ${preStopCommand}` : 'true',
         ],
       },
     },
   },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/orchestrator/providers/k8s/kubernetes-job-spec-factory.ts` around lines 155 - 162, The lifecycle.preStop block unconditionally runs `sleep 60` even when the engine has no cleanup command (getEngine().preStopCommand), causing unnecessary teardown delays; change the logic in kubernetes-job-spec-factory (the lifecycle.preStop/exec command construction) to only include the sleep+command sequence when getEngine().preStopCommand is truthy, otherwise either omit the preStop handler entirely or set the exec.command to a no-op without the 60s sleep so engines without cleanup commands do not incur the delay.
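The fix above can be distilled into a small pure helper. The function name is illustrative, not from the PR:

```typescript
// Illustrative helper: build the preStop exec command, skipping the
// 60-second sleep entirely when the engine defines no cleanup command.
function buildPreStopCommand(preStopCommand?: string): string[] {
  const trimmed = preStopCommand?.trim();
  return ['/bin/sh', '-c', trimmed ? `sleep 60; ${trimmed}` : 'true'];
}
```

Keeping this as a pure function makes the "no cleanup command" branch trivial to unit-test without a cluster.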
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/model/engine/cli-engine-loader.ts`:
- Around line 16-23: The loader is sending the command twice (JSON on stdin and
also as CLI arg) which violates the plugin protocol and can break plugins that
validate argv; update the spawnSync invocation so it does not pass
['get-engine-config'] as argv—only send the JSON request via stdin (keep the
request variable and executablePath usage) and preserve the existing options
(encoding, timeout, shell) so the plugin receives the command exclusively
through stdin.
- Around line 39-51: The current loop splits stdout into lines and calls
JSON.parse on each single line (variables: stdout, lines, config), which breaks
multi-line/pretty JSON; change the logic to locate and parse the last valid JSON
object in the full stdout by extracting a contiguous JSON substring (e.g., find
the last '{' and corresponding matching '}' or use a regex to capture the last
JSON object) and call JSON.parse once on that multi-line substring, set config
to the parsed object, and keep the existing guard that ensures parsed is an
object with a name before breaking.
In `@src/model/engine/docker-engine-loader.ts`:
- Around line 35-48: The current line-by-line JSON.parse loop using stdout,
lines, parsed and config fails for pretty-printed multi-line JSON; update
docker-engine-loader logic to first try parsing the entire stdout.trim() as JSON
and if that fails extract the last balanced JSON object from the full stdout
buffer (scan from the end for matching '{' and '}' or '[' and ']' pairs) and
JSON.parse that substring, assigning it to config; keep the existing fallback
that rejects non-objects or missing name after parsing.
In `@src/model/engine/index.ts`:
- Around line 39-67: Validate the loaded plugin before assigning currentEngine
in initEngine: after the loader call that sets `loaded` (from
loadEngineFromDocker/loadEngineFromCli/loadEngineFromModule) verify
`loaded.name` is a non-empty string and equals the requested `engine` (or
document/normalize if you accept different canonical names), ensure
`loaded.cacheFolders` exists and is an array of strings where each entry is a
relative path that does not contain '..', and ensure `loaded.preStopCommand` is
either undefined or a string; if any check fails, throw a descriptive Error and
do not set `currentEngine` (log the failure via OrchestratorLogger.log/error) so
malformed plugins cannot be assigned and will fail early.
In `@src/model/engine/module-engine-loader.ts`:
- Around line 5-10: The docs claim relative paths and TypeScript files are
supported but the loader uses require(moduleId) (around the moduleId resolution
in module-engine-loader.ts) which resolves relative paths from the compiled file
location and cannot load raw .ts files; update the loader to detect file-like
inputs (moduleId values beginning with ./, ../, / or ending with .js/.ts), and
for those resolve the path against process.cwd() (e.g.
path.resolve(process.cwd(), moduleId)) before calling require; if you cannot add
a runtime TypeScript loader, explicitly reject or error on .ts inputs with a
clear message (or update docs to say only .js is supported) so callers know to
provide compiled .js. Ensure changes touch the code paths that build moduleId
and call require(moduleId) and preserve existing behavior for npm package names.
In `@src/model/orchestrator/options/orchestrator-folders.ts`:
- Around line 56-59: The legacy getters (e.g.,
OrchestratorFolders.libraryFolderAbsolute and the similar deprecated accessor at
lines ~81-84) hard-code getEngine().cacheFolders[0] and break multi-folder
engines; replace their usage and callers in remote-client (restore code in
src/model/orchestrator/remote-client/index.ts) and the post-build save path to
operate across the engine's full cacheFolders array instead of a single folder.
Update callers to either call
OrchestratorFolders.engineCacheFolderAbsolute(folder) for each folder in
getEngine().cacheFolders or use OrchestratorFolders.cacheFolderForCacheKeyFull
with the correct engine-specific folder selection logic so restores iterate all
cacheFolders and saves write to the engine-aware cacheFolder (do not hardcode
"Library"); remove reliance on the deprecated single-folder getters.
In `@src/model/orchestrator/services/cache/child-workspace-service.ts`:
- Around line 155-181: The restoreLibraryCache loop can fail when restoring
nested folders because fs.renameSync(backupPath, destination) requires the
destination's parent directory to exist; update
ChildWorkspaceService.restoreLibraryCache to ensure path.dirname(destination)
exists before the rename by creating the parent directory (use
fs.mkdirSync(parent, { recursive: true }) or equivalent) after you remove any
existing destination and before calling fs.renameSync; reference functions:
restoreLibraryCache, resolveCacheFolderBackupPath and the use of
getEngine().cacheFolders and OrchestratorLogger for locating the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 75875617-827e-4c18-be8e-36b6bc94cffc
📒 Files selected for processing (15)
- CLAUDE.md
- src/cli/commands/build.ts
- src/index.ts
- src/model/build-parameters.ts
- src/model/engine/cli-engine-loader.ts
- src/model/engine/docker-engine-loader.ts
- src/model/engine/engine-plugin.ts
- src/model/engine/index.ts
- src/model/engine/module-engine-loader.ts
- src/model/engine/unity-plugin.ts
- src/model/orchestrator/options/orchestrator-folders.ts
- src/model/orchestrator/providers/k8s/kubernetes-job-spec-factory.ts
- src/model/orchestrator/services/cache/child-workspace-service.ts
- src/model/orchestrator/services/cache/local-cache-service.ts
- src/model/orchestrator/workflows/build-automation-workflow.ts
```ts
const request = JSON.stringify({ command: 'get-engine-config' });

const result = spawnSync(executablePath, ['get-engine-config'], {
  input: request,
  encoding: 'utf-8',
  timeout: 30_000,
  shell: process.platform === 'win32',
});
```
Protocol mismatch: loader sends both stdin JSON and CLI args.
The contract says plugin receives {"command":"get-engine-config"} on stdin, but Line 18 also forces ['get-engine-config'] argv. Plugins that validate argv strictly can fail despite correct stdin handling.
Proposed fix
```diff
-  const result = spawnSync(executablePath, ['get-engine-config'], {
+  const result = spawnSync(executablePath, [], {
     input: request,
     encoding: 'utf-8',
     timeout: 30_000,
     shell: process.platform === 'win32',
   });
```
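For contrast, a plugin honoring the stdin-only contract needs no argv at all. A sketch, where the response shape beyond `name` and `cacheFolders` (and the `godot` example values) is assumed:

```typescript
// Sketch of a CLI engine plugin that answers the stdin JSON protocol.
// It ignores argv entirely; only the stdin request selects the command.
function handleEngineRequest(input: string): string {
  const request = JSON.parse(input);
  if (request.command !== 'get-engine-config') {
    throw new Error(`Unknown command: ${request.command}`);
  }
  return JSON.stringify({ name: 'godot', cacheFolders: ['.godot/imported'] });
}

// A real executable would wire this to the process streams, e.g.:
//   let buffer = '';
//   process.stdin.on('data', (chunk) => (buffer += chunk));
//   process.stdin.on('end', () => process.stdout.write(handleEngineRequest(buffer)));
```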
```ts
const lines = stdout.split('\n');
let config: any;
for (let i = lines.length - 1; i >= 0; i--) {
  try {
    const parsed = JSON.parse(lines[i].trim());
    if (typeof parsed === 'object' && parsed !== null && parsed.name) {
      config = parsed;
      break;
    }
  } catch {
    // Not JSON, skip
  }
}
```
Line-by-line JSON parsing will reject valid multi-line plugin responses.
Current parsing only attempts JSON.parse per line, so pretty-printed JSON objects are treated as invalid output.
Proposed fix
```diff
-  const lines = stdout.split('\n');
-  let config: any;
-  for (let i = lines.length - 1; i >= 0; i--) {
-    try {
-      const parsed = JSON.parse(lines[i].trim());
-      if (typeof parsed === 'object' && parsed !== null && parsed.name) {
-        config = parsed;
-        break;
-      }
-    } catch {
-      // Not JSON, skip
-    }
-  }
+  const lines = stdout.split('\n');
+  let config: any;
+  for (let i = lines.length - 1; i >= 0; i--) {
+    const candidate = lines.slice(i).join('\n').trim();
+    if (!candidate) continue;
+    try {
+      const parsed = JSON.parse(candidate);
+      if (typeof parsed === 'object' && parsed !== null && parsed.name) {
+        config = parsed;
+        break;
+      }
+    } catch {
+      // Not valid JSON from this boundary, continue scanning upward
+    }
+  }
```
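The scanning logic in the proposed fix can be isolated as a pure function (illustrative name), which makes the multi-line case easy to verify:

```typescript
// Join line suffixes from the bottom upward and try parsing each one,
// so a pretty-printed JSON object at the end of stdout still parses.
function parseLastJsonObject(stdout: string): any | undefined {
  const lines = stdout.split('\n');
  for (let i = lines.length - 1; i >= 0; i--) {
    const candidate = lines.slice(i).join('\n').trim();
    if (!candidate) continue;
    try {
      const parsed = JSON.parse(candidate);
      if (typeof parsed === 'object' && parsed !== null && parsed.name) {
        return parsed;
      }
    } catch {
      // Not valid JSON from this boundary; keep scanning upward.
    }
  }
  return undefined;
}
```

With this shape, both `{"name":"x"}` on one line and `JSON.stringify(config, null, 2)` spread over several lines parse, at the cost of O(n²) joins in the worst case (acceptable for small plugin outputs).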
```ts
// Find the last JSON object in stdout
const lines = stdout.split('\n');
let config: any;
for (let i = lines.length - 1; i >= 0; i--) {
  try {
    const parsed = JSON.parse(lines[i].trim());
    if (typeof parsed === 'object' && parsed !== null && parsed.name) {
      config = parsed;
      break;
    }
  } catch {
    // Not JSON, skip
  }
}
```
Line-oriented parsing rejects valid Docker plugin output.
At Lines 35-48, the loader only JSON.parses one line at a time. A plugin that prints a valid multi-line JSON object, such as JSON.stringify(config, null, 2), will fall through to the "did not return valid JSON config" error even though stdout is well-formed JSON. Either require the last non-empty stdout line to be compact JSON or extract the last balanced JSON object from the full stdout buffer.
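The "whole stdout first, then last balanced object" approach suggested above could look like this sketch. Note the caveat in the comments: the brace scan does not account for braces inside JSON strings, so it is a heuristic suited to typical docker log prefixes, not a full JSON scanner.

```typescript
// Sketch of the suggested docker-loader parsing: try the whole trimmed
// stdout first, then fall back to the last balanced {...} substring.
// Caveat: the brace scan ignores braces inside JSON strings, so it is
// a heuristic for typical docker log prefixes, not a full JSON scanner.
function parseDockerEngineConfig(stdout: string): any | undefined {
  const trimmed = stdout.trim();
  try {
    const whole = JSON.parse(trimmed);
    if (typeof whole === 'object' && whole !== null && whole.name) return whole;
  } catch {
    // Fall through to balanced-object extraction.
  }
  const end = trimmed.lastIndexOf('}');
  let depth = 0;
  for (let i = end; i >= 0; i--) {
    if (trimmed[i] === '}') depth++;
    else if (trimmed[i] === '{' && --depth === 0) {
      try {
        const parsed = JSON.parse(trimmed.slice(i, end + 1));
        if (typeof parsed === 'object' && parsed !== null && parsed.name) return parsed;
      } catch {
        // Balanced span was not valid JSON.
      }
      return undefined;
    }
  }
  return undefined;
}
```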
```ts
export function initEngine(engine: string, enginePlugin?: string): void {
  if (!enginePlugin) {
    if (engine === 'unity') {
      currentEngine = UnityPlugin;
      return;
    }
    throw new Error(
      `Engine '${engine}' requires an enginePlugin source. ` +
        `Use one of: module:<npm-package>, cli:<executable-path>, docker:<image>`,
    );
  }

  OrchestratorLogger.log(`Loading engine plugin for '${engine}' from: ${enginePlugin}`);

  let loaded: EnginePlugin;

  if (enginePlugin.startsWith('docker:')) {
    loaded = loadEngineFromDocker(enginePlugin.slice('docker:'.length));
  } else if (enginePlugin.startsWith('cli:')) {
    loaded = loadEngineFromCli(enginePlugin.slice('cli:'.length));
  } else if (enginePlugin.startsWith('module:')) {
    loaded = loadEngineFromModule(enginePlugin.slice('module:'.length));
  } else {
    // No prefix — default to module loader (npm package or local path)
    loaded = loadEngineFromModule(enginePlugin);
  }

  currentEngine = loaded;
  OrchestratorLogger.log(`Engine plugin '${loaded.name}' initialized successfully`);
```
Validate the loaded plugin before mutating global state.
initEngine() assigns whatever the loader returned straight into currentEngine, and it never checks that loaded.name matches the requested engine. Downstream code uses cacheFolders in path.join(...) and shell generation, so a malformed plugin survives init and then fails much later in src/model/orchestrator/options/orchestrator-folders.ts at Lines 52-84 or src/model/orchestrator/workflows/build-automation-workflow.ts at Lines 249-267. Please validate here that name is a non-empty string, cacheFolders is a string array of relative paths without .., preStopCommand is either undefined or a string, and loaded.name === engine unless you intentionally normalize one source of truth.
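A validation pass along those lines could run before the assignment. This is a sketch: the error wording and the strict name-equality rule are assumptions, and a real implementation might normalize names instead.

```typescript
// Illustrative pre-assignment validation for a loaded engine plugin.
// Strict name equality with the requested engine is assumed here.
function validateEnginePlugin(loaded: any, engine: string): void {
  if (typeof loaded?.name !== 'string' || loaded.name.length === 0) {
    throw new Error(`Engine plugin must expose a non-empty string 'name'`);
  }
  if (loaded.name !== engine) {
    throw new Error(`Engine plugin '${loaded.name}' does not match requested engine '${engine}'`);
  }
  const foldersValid =
    Array.isArray(loaded.cacheFolders) &&
    loaded.cacheFolders.every(
      (folder: unknown) => typeof folder === 'string' && !folder.includes('..'),
    );
  if (!foldersValid) {
    throw new Error(`Engine plugin 'cacheFolders' must be an array of relative paths without '..'`);
  }
  if (loaded.preStopCommand !== undefined && typeof loaded.preStopCommand !== 'string') {
    throw new Error(`Engine plugin 'preStopCommand' must be a string when provided`);
  }
}
```

Failing here, before `currentEngine` is mutated, keeps malformed plugins from surviving init and blowing up later in path joins or shell generation.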
```ts
 * Load an EnginePlugin from a TypeScript/JavaScript module.
 *
 * Accepts:
 * - npm package name: `@game-ci/godot-engine`
 * - Local file path: `./my-engine-plugin.js` or `/absolute/path/plugin.js`
 *
```
Relative file paths and TypeScript support are not actually implemented despite being documented.

The docstring (lines 5-10) advertises support for local file paths like `./my-engine-plugin.js` and TypeScript files. However:

- Relative paths fail: `require(moduleId)` on line 22 resolves paths relative to the compiled module file location (`dist/model/engine/`), not the caller's working directory. A user passing `./my-engine-plugin.js` will fail unless the file exists at `dist/model/engine/my-engine-plugin.js`.
- TypeScript files cannot load: raw `.ts` files in the documented examples require a TypeScript loader, which isn't configured for runtime. The built code runs compiled JavaScript only.

Absolute paths and npm packages work correctly via `require()`. To support the documented API, detect file-like inputs and resolve relative paths from `process.cwd()`. Narrowing the docs to `.js` only or adding explicit TypeScript loader support would be required for full feature parity.
🛠️ Fix for relative file path resolution

```diff
+import path from 'node:path';
 import { EnginePlugin } from './engine-plugin';
 import OrchestratorLogger from '../orchestrator/services/core/orchestrator-logger';

 export function loadEngineFromModule(moduleId: string): EnginePlugin {
   let loaded: any;
   try {
-    loaded = require(moduleId);
+    const request =
+      moduleId.startsWith('.') || path.isAbsolute(moduleId)
+        ? path.resolve(process.cwd(), moduleId)
+        : moduleId;
+    loaded = require(request);
   } catch (error: any) {
     throw new Error(`Failed to load engine plugin module '${moduleId}': ${error.message}`);
   }
```
Verify each finding against the current code and only fix it if needed.
In `@src/model/engine/module-engine-loader.ts` around lines 5 - 10, The docs claim
relative paths and TypeScript files are supported but the loader uses
require(moduleId) (around the moduleId resolution in module-engine-loader.ts)
which resolves relative paths from the compiled file location and cannot load
raw .ts files; update the loader to detect file-like inputs (moduleId values
beginning with ./, ../, / or ending with .js/.ts), and for those resolve the
path against process.cwd() (e.g. path.resolve(process.cwd(), moduleId)) before
calling require; if you cannot add a runtime TypeScript loader, explicitly
reject or error on .ts inputs with a clear message (or update docs to say only
.js is supported) so callers know to provide compiled .js. Ensure changes touch
the code paths that build moduleId and call require(moduleId) and preserve
existing behavior for npm package names.
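The resolution rule in that fix reduces to a pure helper (illustrative name; POSIX-style paths assumed):

```typescript
import path from 'node:path';

// Illustrative: resolve file-like specifiers against the caller's cwd,
// leaving npm package names untouched for normal require() lookup.
function resolveModuleRequest(moduleId: string, cwd: string): string {
  const fileLike = moduleId.startsWith('.') || path.isAbsolute(moduleId);
  return fileLike ? path.resolve(cwd, moduleId) : moduleId;
}
```

Separating the decision from the `require()` call lets the cwd-vs-package behavior be tested without loading any actual module.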
```diff
+/** @deprecated Use engineCacheFolderAbsolute(folder) — kept for backward compatibility */
 public static get libraryFolderAbsolute(): string {
-  return path.join(OrchestratorFolders.projectPathAbsolute, `Library`);
+  return OrchestratorFolders.engineCacheFolderAbsolute(getEngine().cacheFolders[0] || 'Library');
 }
```
Legacy library accessors still break multi-folder engines.
These getters only expose getEngine().cacheFolders[0]. The remaining callers in src/model/orchestrator/remote-client/index.ts at Lines 356-390 still restore through these accessors, and the post-build path at Lines 101-125 still saves to ${OrchestratorFolders.cacheFolderForCacheKeyFull}/Library. For non-Unity engines, that means only the first cache folder is restored and saves still land under Library, so the engine-agnostic cache story is incomplete. Please migrate those callers in this PR rather than routing them through a single-folder alias.
Also applies to: 81-84
```diff
 static restoreLibraryCache(projectPath: string, config: ChildWorkspaceConfig): boolean {
-  const libraryBackup = ChildWorkspaceService.resolveLibraryBackupPath(config);
-  const libraryDestination = path.join(projectPath, 'Library');
-
-  try {
-    if (!fs.existsSync(libraryBackup)) {
-      OrchestratorLogger.log(`[ChildWorkspace] No Library cache found at ${libraryBackup}`);
-
-      return false;
-    }
-
-    const entries = fs.readdirSync(libraryBackup);
-    if (entries.length === 0) {
-      OrchestratorLogger.log(`[ChildWorkspace] Library cache at ${libraryBackup} is empty`);
-      fs.rmSync(libraryBackup, { recursive: true, force: true });
-
-      return false;
-    }
-
-    // Remove existing Library directory if present
-    if (fs.existsSync(libraryDestination)) {
-      fs.rmSync(libraryDestination, { recursive: true, force: true });
-    }
+  let restored = false;
+  for (const folder of getEngine().cacheFolders) {
+    const backupPath = ChildWorkspaceService.resolveCacheFolderBackupPath(config, folder);
+    const destination = path.join(projectPath, folder);
+
+    try {
+      if (!fs.existsSync(backupPath)) {
+        OrchestratorLogger.log(`[ChildWorkspace] No ${folder} cache found at ${backupPath}`);
+        continue;
+      }
+
+      const entries = fs.readdirSync(backupPath);
+      if (entries.length === 0) {
+        OrchestratorLogger.log(`[ChildWorkspace] ${folder} cache at ${backupPath} is empty`);
+        fs.rmSync(backupPath, { recursive: true, force: true });
+        continue;
+      }
+
+      // Remove existing directory if present
+      if (fs.existsSync(destination)) {
+        fs.rmSync(destination, { recursive: true, force: true });
+      }
+
+      // Atomic move
+      OrchestratorLogger.log(`[ChildWorkspace] Restoring ${folder} cache: ${backupPath} -> ${destination}`);
+      fs.renameSync(backupPath, destination);
```
Nested engine cache folders won't restore successfully.
At Lines 158-181, destination can be something like <projectPath>/.godot/imported, but fs.renameSync(backupPath, destination) only works if path.dirname(destination) already exists. That is not guaranteed for the .godot/imported example documented in src/model/engine/engine-plugin.ts at Lines 14-16, so the first nested cache restore will fail with ENOENT. Create the destination parent before the rename.
🛠️ Minimal fix for nested cache folders

```diff
       // Remove existing directory if present
       if (fs.existsSync(destination)) {
         fs.rmSync(destination, { recursive: true, force: true });
       }
+      const destinationParent = path.dirname(destination);
+      if (!fs.existsSync(destinationParent)) {
+        fs.mkdirSync(destinationParent, { recursive: true });
+      }
+
       // Atomic move
       OrchestratorLogger.log(`[ChildWorkspace] Restoring ${folder} cache: ${backupPath} -> ${destination}`);
       fs.renameSync(backupPath, destination);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
  static restoreLibraryCache(projectPath: string, config: ChildWorkspaceConfig): boolean {
    let restored = false;
    for (const folder of getEngine().cacheFolders) {
      const backupPath = ChildWorkspaceService.resolveCacheFolderBackupPath(config, folder);
      const destination = path.join(projectPath, folder);
      try {
        if (!fs.existsSync(backupPath)) {
          OrchestratorLogger.log(`[ChildWorkspace] No ${folder} cache found at ${backupPath}`);
          continue;
        }
        const entries = fs.readdirSync(backupPath);
        if (entries.length === 0) {
          OrchestratorLogger.log(`[ChildWorkspace] ${folder} cache at ${backupPath} is empty`);
          fs.rmSync(backupPath, { recursive: true, force: true });
          continue;
        }
        // Remove existing directory if present
        if (fs.existsSync(destination)) {
          fs.rmSync(destination, { recursive: true, force: true });
        }
        const destinationParent = path.dirname(destination);
        if (!fs.existsSync(destinationParent)) {
          fs.mkdirSync(destinationParent, { recursive: true });
        }
        // Atomic move
        OrchestratorLogger.log(`[ChildWorkspace] Restoring ${folder} cache: ${backupPath} -> ${destination}`);
        fs.renameSync(backupPath, destination);
```
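The `ENOENT` failure mode called out above is easy to reproduce in isolation. The sketch below uses a hypothetical `restoreFolder` helper standing in for the loop body; it restores a nested cache folder such as Godot's `.godot/imported` into a fresh project directory, creating the missing parent before the rename:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Hypothetical helper mirroring the restore step for one cache folder.
function restoreFolder(backupPath: string, destination: string): void {
  if (fs.existsSync(destination)) {
    fs.rmSync(destination, { recursive: true, force: true });
  }
  // Without this, fs.renameSync throws ENOENT whenever the parent
  // (e.g. '<project>/.godot') does not exist yet.
  const parent = path.dirname(destination);
  if (!fs.existsSync(parent)) {
    fs.mkdirSync(parent, { recursive: true });
  }
  fs.renameSync(backupPath, destination); // atomic move on the same filesystem
}

// Demo against a temp directory: back up one file, then restore it into a
// nested destination whose parent does not exist yet.
const root = fs.mkdtempSync(path.join(os.tmpdir(), 'cache-restore-'));
const backup = path.join(root, 'backup', 'imported');
fs.mkdirSync(backup, { recursive: true });
fs.writeFileSync(path.join(backup, 'asset.bin'), 'data');

const destination = path.join(root, 'project', '.godot', 'imported');
restoreFolder(backup, destination);
console.log(fs.existsSync(path.join(destination, 'asset.bin'))); // true
```

Dropping the `mkdirSync` step makes the `renameSync` call throw `ENOENT` on the first nested restore, which is exactly the review finding; flat folders like Unity's `Library` never hit it because `projectPath` itself already exists.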
🤖 Prompt for AI Agents

```
Verify each finding against the current code and only fix it if needed.
In `@src/model/orchestrator/services/cache/child-workspace-service.ts` around
lines 155-181, the restoreLibraryCache loop can fail when restoring nested
folders because fs.renameSync(backupPath, destination) requires the
destination's parent directory to exist; update
ChildWorkspaceService.restoreLibraryCache to ensure path.dirname(destination)
exists before the rename by creating the parent directory (use
fs.mkdirSync(parent, { recursive: true }) or equivalent) after you remove any
existing destination and before calling fs.renameSync; reference functions:
restoreLibraryCache, resolveCacheFolderBackupPath and the use of
getEngine().cacheFolders and OrchestratorLogger for locating the change.
```
Documentation PR: game-ci/documentation#544 (merged into release/lts-2.0.0)
Summary

- `EnginePlugin` interface so the orchestrator can support different game engines (Unity, Godot, Unreal, etc.) without hardcoding engine-specific behavior
- Unity ships as a built-in plugin (`UnityPlugin`) and remains the default — zero changes required to unity-builder

What changed
New files (`src/model/engine/`):

- `engine-plugin.ts` — minimal interface: `name`, `cacheFolders`, `preStopCommand`
- `unity-plugin.ts` — built-in Unity plugin preserving all existing behavior
- `index.ts` — singleton registry with `getEngine()` / `setEngine()` / `initEngine()`
- `module-engine-loader.ts` — load plugins from npm packages or local JS/TS files
- `cli-engine-loader.ts` — load plugins from external executables (any language, JSON protocol)
- `docker-engine-loader.ts` — load plugins from Docker images

Modified files:
- `orchestrator-folders.ts` — generic cache folder helpers replacing hardcoded `'Library'`
- `local-cache-service.ts` — `restoreEngineCache()` / `saveEngineCache()` iterate over `getEngine().cacheFolders`
- `child-workspace-service.ts` — same pattern for child workspace caching
- `build-automation-workflow.ts` — dynamic shell script generation from `getEngine().cacheFolders`
- `kubernetes-job-spec-factory.ts` — preStop hook from `getEngine().preStopCommand`
- `build-parameters.ts` — `engine` and `enginePlugin` fields, calls `initEngine()` on create
- `build.ts` — `--engine` and `--engine-plugin` CLI options

Design
- Method renames (`restoreLibraryCache` → `restoreEngineCache`, etc.)

Test plan
- `yarn build` compiles cleanly

🤖 Generated with Claude Code
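To make the pieces above concrete, here is a compact sketch of the plugin contract and the `initEngine()` prefix dispatch. The field names (`name`, `cacheFolders`, `preStopCommand`) and the `docker:` prefix come from the PR description; the `cli:` prefix string, the loader signatures, and the Unity preStop command are assumptions for illustration:

```typescript
interface EnginePlugin {
  name: string;           // engine identifier, e.g. 'unity' or 'godot'
  cacheFolders: string[]; // project-relative folders to cache
  preStopCommand: string; // command for the Kubernetes preStop hook
}

// Built-in Unity plugin; cacheFolders preserves the old hardcoded 'Library'.
// The preStop command here is a placeholder, not the real one.
const unityPlugin: EnginePlugin = {
  name: 'unity',
  cacheFolders: ['Library'],
  preStopCommand: '/bin/true',
};

// Stubs standing in for the three community plugin loaders.
async function loadEngineFromDocker(image: string): Promise<EnginePlugin> {
  throw new Error(`docker loader stub: ${image}`);
}
async function loadEngineFromCli(executable: string): Promise<EnginePlugin> {
  throw new Error(`cli loader stub: ${executable}`);
}
async function loadEngineFromModule(moduleSpec: string): Promise<EnginePlugin> {
  throw new Error(`module loader stub: ${moduleSpec}`);
}

// Prefix-based source detection: explicit docker:/cli: prefixes, with npm
// packages and local JS/TS files as the fallback module source.
async function initEngine(engine: string, enginePlugin?: string): Promise<EnginePlugin> {
  if (!enginePlugin) {
    return unityPlugin; // built-in default
  }
  if (enginePlugin.startsWith('docker:')) {
    return loadEngineFromDocker(enginePlugin.slice('docker:'.length));
  }
  if (enginePlugin.startsWith('cli:')) {
    return loadEngineFromCli(enginePlugin.slice('cli:'.length));
  }
  return loadEngineFromModule(enginePlugin);
}

initEngine('unity').then((plugin) => console.log(plugin.name)); // prints "unity"
```

Callers then read `plugin.cacheFolders` instead of the literal `'Library'`, which is the substitution the cache-service changes in this PR rely on.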
Summary by CodeRabbit

Release Notes

New Features

- Added `--engine` (default: `'unity'`) and `--engine-plugin` CLI options to the build command for specifying game engine configuration

Improvements
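The new flags can be exercised with a minimal, dependency-free parser; this is an illustration only, since the real `build.ts` presumably delegates to its existing CLI framework and the option plumbing here is an assumption:

```typescript
// Minimal sketch of parsing --engine / --engine-plugin; the default follows
// the release notes (engine defaults to 'unity').
function parseEngineOptions(argv: string[]): { engine: string; enginePlugin?: string } {
  let engine = 'unity';
  let enginePlugin: string | undefined;
  for (let i = 0; i < argv.length; i += 1) {
    if (argv[i] === '--engine' && i + 1 < argv.length) {
      engine = argv[i + 1];
      i += 1;
    } else if (argv[i] === '--engine-plugin' && i + 1 < argv.length) {
      enginePlugin = argv[i + 1];
      i += 1;
    }
  }
  return { engine, enginePlugin };
}

const parsed = parseEngineOptions(['--engine', 'godot', '--engine-plugin', 'docker:example/godot-plugin']);
console.log(`${parsed.engine} ${parsed.enginePlugin}`);
```

With no flags, `parseEngineOptions([])` falls back to the built-in `'unity'` engine with no plugin source, matching the documented default.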