LCORE-1329: Adding new MCP E2E Tests #1294
jrobertboos wants to merge 10 commits into lightspeed-core:main from
Conversation
- Introduced new YAML configuration files for MCP authentication methods: file, Kubernetes, and OAuth.
- Updated existing MCP configuration to include new authentication methods.
- Enhanced end-to-end tests to cover scenarios for file-based, Kubernetes, and OAuth authentication.
- Added utility functions for unregistering MCP toolgroups in the testing environment.
- Implemented new step definitions for checking response body content in tests.
Walkthrough

Adds end-to-end MCP authentication tests and supporting artifacts: new config files (file, Kubernetes, client, and OAuth) for library and server modes, expanded feature tests, environment handling for MCP configs, utilities to unregister MCP toolgroups and clear Llama Stack storage, a secret file, and a Docker volume mount for the invalid token.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 3 passed
- Added a new invalid MCP token file for testing purposes.
- Updated the Docker Compose configuration to mount the invalid MCP token.
- Introduced a new YAML configuration for testing invalid MCP file authentication.
- Enhanced the test scenarios to include checks for invalid MCP file authentication.
- Updated feature files to reflect the new authentication configurations.
…handling

- Added new configurations for invalid, Kubernetes, client, and OAuth MCP authentication methods.
- Updated scenario handling to reference the new configuration paths for Kubernetes, client, and OAuth authentication.
- Enhanced the testing environment to support the new authentication configurations.
So currently the following e2e tests fail:

These failing tests can be grouped into the following bugs:
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/e2e/features/mcp.feature (1)
440-447: ⚠️ Potential issue | 🟡 Minor

Use the same OAuth fixture token as the other happy-path cases.

The server-mode query happy path right above this uses `Bearer oauth-test-token`, but this streaming case switches to `Bearer test-token`. If those fixtures diverge, this scenario stops covering the same auth path and will fail independently.

🔧 Suggested fix

```diff
 And I set the "MCP-HEADERS" header to
 """
-{"mcp-oauth": {"Authorization": "Bearer test-token"}}
+{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}}
 """
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/features/mcp.feature` around lines 440 - 447, The streaming-query scenario "Check if streaming_query endpoint succeeds when MCP OAuth auth token is passed" uses a different fixture token ("Bearer test-token") than the other happy-path cases; update the MCP-HEADERS block in that Scenario to use the same fixture token "Bearer oauth-test-token" so it matches the server-mode happy-path tests and continues to exercise the same OAuth fixture.
🧹 Nitpick comments (3)
tests/e2e/utils/llama_stack_tools.py (2)
35-48: Explicit `return None` is unnecessary.

Line 45 returns `None` explicitly, but the function return type is `None`. A bare `return`, or simply omitting it, would be clearer.

✨ Minor cleanup

```diff
 if e.status_code == 400 and "not found" in str(e).lower():
-    return None
+    return
 raise
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/utils/llama_stack_tools.py` around lines 35 - 48, The function _unregister_toolgroup_async contains an explicit "return None" inside the APIStatusError handler which is redundant given the annotated return type is None; remove the explicit "return None" (either replace with a bare "return" or just omit the return entirely) in the block that checks if e.status_code == 400 and "not found" in str(e).lower(), leaving the rest of the error handling (raising other errors) and the finally block that calls await client.close() unchanged.
51-65: Multiple client instances created during iteration.

The `_unregister_mcp_toolgroups_async` function creates a client at line 53 to list toolgroups, then `_unregister_toolgroup_async` creates a new client for each toolgroup to unregister. This results in N+1 client instances for N toolgroups, each requiring cleanup. Consider reusing the client instance:

♻️ Proposed refactor to reuse client

```diff
 async def _unregister_mcp_toolgroups_async() -> None:
     """Unregister all MCP toolgroups."""
     client = _get_llama_stack_client()
     try:
         toolgroups = await client.toolgroups.list()
         for toolgroup in toolgroups:
             if (
                 toolgroup.identifier
                 and toolgroup.provider_id == "model-context-protocol"
             ):
-                await _unregister_toolgroup_async(toolgroup.identifier)
+                try:
+                    await client.toolgroups.unregister(toolgroup.identifier)
+                except APIStatusError as e:
+                    # 400 "not found": toolgroup already absent, continue
+                    if not (e.status_code == 400 and "not found" in str(e).lower()):
+                        raise
     except APIConnectionError:
         raise
     finally:
         await client.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/utils/llama_stack_tools.py` around lines 51 - 65, The function _unregister_mcp_toolgroups_async currently creates a client to list toolgroups but calls _unregister_toolgroup_async for each item which creates its own client per toolgroup; refactor to reuse a single Llama Stack client by modifying _unregister_toolgroup_async (or adding a helper like _unregister_toolgroup_with_client) to accept an existing client instance and use that to perform the unregister operation, then call that variant from _unregister_mcp_toolgroups_async to avoid N+1 clients; ensure the shared client is closed once in the outer function’s finally block and preserve the existing APIConnectionError rethrow behavior.

tests/e2e/features/mcp.feature (1)
421-428: Drop the unused `mcp-client` header from this server-mode-only scenario.

This scenario is already gated with `@skip-in-library-mode`, so it runs against the server-mode OAuth config. `tests/e2e/configuration/server-mode/lightspeed-stack-mcp-oauth-auth.yaml` only registers `mcp-oauth` on lines 21-25, so the extra `mcp-client` entry does not participate in the test and makes failures harder to attribute.

✂️ Suggested cleanup

```diff
 And I set the "MCP-HEADERS" header to
 """
-{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}, "mcp-client": {"Authorization": "Bearer client-test-token"}}
+{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}}
 """
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/features/mcp.feature` around lines 421 - 428, Remove the unused "mcp-client" header from the MCP-HEADERS payload in the Scenario "Check if query endpoint succeeds when MCP OAuth auth token is passed" so the test only sets {"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}}; locate the Scenario block that sets the "MCP-HEADERS" header and delete the "mcp-client" JSON entry to match the server-mode MCP OAuth config (registered as mcp-oauth).
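The single-client refactor suggested for `_unregister_mcp_toolgroups_async` boils down to threading one client through the whole loop and closing it once. A minimal runnable sketch of that pattern, using stand-in classes because the real Llama Stack client and its error types are not reproduced here (only the `toolgroups.list`/`toolgroups.unregister`/`close` shape is taken from the review diff):

```python
import asyncio
from types import SimpleNamespace


class FakeToolgroupsAPI:
    """Stand-in for the real client's `toolgroups` sub-API (assumed shape)."""

    def __init__(self, items):
        self._items = items
        self.unregistered = []

    async def list(self):
        return self._items

    async def unregister(self, identifier):
        self.unregistered.append(identifier)


class FakeClient:
    """Stand-in for the async client returned by _get_llama_stack_client()."""

    def __init__(self, items):
        self.toolgroups = FakeToolgroupsAPI(items)
        self.closed = False

    async def close(self):
        self.closed = True


async def unregister_mcp_toolgroups(client) -> None:
    """Unregister all MCP toolgroups through a single shared client."""
    try:
        for tg in await client.toolgroups.list():
            if tg.identifier and tg.provider_id == "model-context-protocol":
                await client.toolgroups.unregister(tg.identifier)
    finally:
        await client.close()  # one cleanup point instead of N+1


client = FakeClient([
    SimpleNamespace(identifier="mcp-oauth", provider_id="model-context-protocol"),
    SimpleNamespace(identifier="builtin-rag", provider_id="rag-runtime"),
])
asyncio.run(unregister_mcp_toolgroups(client))
print(client.toolgroups.unregistered)  # ['mcp-oauth']
print(client.closed)  # True
```

Only the MCP-provider toolgroup is unregistered, and the client is closed exactly once regardless of how many toolgroups the loop visits.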
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b5f69e52-f673-4161-9333-abb5b7d2fdbd
📒 Files selected for processing (18)
- docker-compose.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-invalid-mcp-file-auth.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-mcp-client-auth.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-mcp-file-auth.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-mcp-kubernetes-auth.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-mcp-oauth-auth.yaml
- tests/e2e/configuration/library-mode/lightspeed-stack-mcp.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-invalid-mcp-file-auth.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-mcp-client-auth.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-mcp-file-auth.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-mcp-kubernetes-auth.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-mcp-oauth-auth.yaml
- tests/e2e/configuration/server-mode/lightspeed-stack-mcp.yaml
- tests/e2e/features/environment.py
- tests/e2e/features/mcp.feature
- tests/e2e/features/steps/common_http.py
- tests/e2e/secrets/invalid-mcp-token
- tests/e2e/utils/llama_stack_tools.py
tests/e2e/features/mcp.feature
Outdated
```gherkin
@skip
@MCPFileAuthConfig
Scenario: Check if tools endpoint succeeds when MCP file-based auth token is passed
```
Don't merge these scenarios behind blanket @skip tags.
The targeted @skip-in-library-mode cases are understandable, but these plain @skips disable the exact /tools and invalid-auth paths called out as broken in the PR notes. That leaves the new MCP suite green without actually protecting those regressions in any mode.
Also applies to: 49-51, 65-67, 84-86, 104-106, 146-148, 163-165, 183-185, 204-206
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/features/mcp.feature` around lines 10 - 12, Replace blanket `@skip`
tags that disable tests unconditionally with the targeted `@skip-in-library-mode`
tag so the /tools and invalid-auth scenarios still run in non-library modes;
specifically, for the Scenario "Check if tools endpoint succeeds when MCP
file-based auth token is passed" and the other scenarios at the same locations
(lines referenced in the review), change the tag from `@skip` to
`@skip-in-library-mode` (keep `@MCPFileAuthConfig` intact) so only library-mode runs
are skipped while preserving these regression checks.
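The distinction the review draws between the two tags can be sketched as a small predicate. The tag names come from the feature file above; the actual skip logic lives in `tests/e2e/features/environment.py`, which is not reproduced here, so this is an assumed shape rather than the repo's implementation:

```python
def should_skip(scenario_tags: set[str], library_mode: bool) -> bool:
    """Decide whether a scenario is skipped.

    '@skip' disables a scenario everywhere; '@skip-in-library-mode' disables
    it only when the suite runs against the embedded (library-mode) stack,
    which is why the review asks for the targeted tag.
    """
    if "skip" in scenario_tags:
        return True
    return "skip-in-library-mode" in scenario_tags and library_mode


print(should_skip({"skip", "MCPFileAuthConfig"}, library_mode=False))  # True
print(should_skip({"skip-in-library-mode"}, library_mode=False))       # False
print(should_skip({"skip-in-library-mode"}, library_mode=True))        # True
```

With the targeted tag, the /tools and invalid-auth scenarios would still run in server mode instead of being disabled everywhere.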
```gherkin
Then The status code of the response is 200
And The response should contain following fragments
    | Fragments in LLM response |
    | Hello                     |
```
Add `And The body of the response does not contain mcp-client`
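A negative body check like the one requested can be a thin behave step over a plain assertion helper. A sketch, assuming the suite keeps the last HTTP response on `context.response` as the existing steps in `common_http.py` appear to; the helper and step function names here are hypothetical:

```python
def assert_body_does_not_contain(body: str, fragment: str) -> None:
    """Fail when the raw response body contains the forbidden fragment."""
    assert fragment not in body, (
        f"unexpected fragment {fragment!r} in response body: {body[:200]!r}"
    )


# Hypothetical wiring in tests/e2e/features/steps/common_http.py:
#
# @then('The body of the response does not contain {fragment}')
# def step_body_does_not_contain(context, fragment):
#     assert_body_does_not_contain(context.response.text, fragment)

assert_body_does_not_contain('{"response": "Hello"}', "mcp-client")  # passes
```

Keeping the assertion separate from the behave decorator makes the check unit-testable without a running suite.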
- Renamed `llama_stack_shields.py` to `llama_stack_utils.py` and expanded its functionality to manage both toolgroups and shields.
- Removed the deprecated `llama_stack_tools.py` file.
- Updated E2E test scenarios to utilize the new utility functions for unregistering toolgroups and clearing Llama Stack storage.
- Enhanced feature files to include comments indicating pending fixes for skipped scenarios.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/e2e/features/mcp.feature (2)
441-447: ⚠️ Potential issue | 🟠 Major

Use the same OAuth fixture in the streaming success case.

The earlier `@MCPOAuthAuthConfig` success scenarios use `oauth-test-token` and keep `mcp-client` authorized as well. This one switches to `Bearer test-token` only, so it is no longer exercising the same auth matrix and can fail for the wrong reason.

Suggested change

```diff
 And I set the "MCP-HEADERS" header to
 """
-{"mcp-oauth": {"Authorization": "Bearer test-token"}}
+{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}, "mcp-client": {"Authorization": "Bearer client-test-token"}}
 """
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/features/mcp.feature` around lines 441 - 447, Replace the ad-hoc token in the `@MCPOAuthAuthConfig` streaming success scenario with the same fixture used by the other success cases: set the "MCP-HEADERS" payload to use "Authorization": "Bearer oauth-test-token" and ensure the same "mcp-client" authorization entry is present (i.e., mirror the header structure used by the earlier MCPOAuth success scenarios) so the test exercises the identical auth matrix.
460-478: ⚠️ Potential issue | 🟠 Major

Keep `mcp-client` valid in the OAuth negative cases.

Under the same `@MCPOAuthAuthConfig`, the success path authorizes both `mcp-oauth` and `mcp-client`. Omitting `mcp-client` here broadens the failure surface, so these 401s can come from a different unauthorized toolgroup instead of the invalid OAuth token.

Suggested change

```diff
-{"mcp-oauth": {"Authorization": "Bearer invalid-token"}}
+{"mcp-oauth": {"Authorization": "Bearer invalid-token"}, "mcp-client": {"Authorization": "Bearer client-test-token"}}
```

Apply the same header payload to all three OAuth-invalid scenarios.
Also applies to: 480-501, 503-524
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/features/mcp.feature` around lines 460 - 478, The scenario's failure is caused by omitting a valid mcp-client header so the 401 may be due to missing client auth rather than the invalid mcp-oauth token; update the "MCP-HEADERS" payload in the scenario "Check if tools endpoint reports error when MCP OAuth invalid auth token is passed" to include the same valid "mcp-client" entry used in the successful OAuth scenario (alongside "mcp-oauth": {"Authorization": "Bearer invalid-token"}) so only the OAuth token is invalid; apply the same change to the other two OAuth-invalid scenarios referenced (the blocks around lines 480-501 and 503-524) so all negative OAuth tests keep a valid mcp-client header.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 81ffe43b-ff9c-4cc2-8150-a3f164e79c99
📒 Files selected for processing (5)
- docs/e2e_testing.md
- tests/e2e/features/environment.py
- tests/e2e/features/mcp.feature
- tests/e2e/utils/llama_stack_utils.py
- tests/e2e/utils/utils.py
```python
def clear_llama_stack_storage(container_name: str = "lightspeed-stack") -> None:
    """Clear Llama Stack storage in library mode (embedded Llama Stack).

    Removes SQLite/KV store databases and file storage contents so that
    toolgroups and other persisted state are reset. Used before MCP config
    scenarios when not running in server mode (no separate Llama Stack to
    unregister toolgroups from). Only runs when using Docker (skipped in Prow).

    Parameters:
        container_name (str): Docker container name (default "lightspeed-stack").

    Returns:
        None
    """

    storage_root = _LLAMA_STORAGE_ROOT
    commands = [
        f"rm -f {storage_root}/rag/kv_store.db",
        f"rm -f {storage_root}/sql_store.db",
        f"rm -rf {storage_root}/files/*",
    ]
    try:
        for cmd in commands:
            subprocess.run(
                ["docker", "exec", container_name, "sh", "-c", cmd],
                capture_output=True,
                text=True,
                timeout=10,
                check=False,
            )
    except subprocess.TimeoutExpired as e:
        print(f"Warning: Could not clear Llama Stack storage: {e}")
```
Make the storage reset fail fast.
This helper now sits in scenario setup, but non-zero docker exec exits are ignored and the Prow short-circuit described in the docstring is missing. That lets stale embedded MCP state leak into the next scenario while the suite proceeds as if cleanup succeeded. Moving executable code directly under the docstring also clears the current pydocstyle D202 failure.
Suggested change

```diff
 def clear_llama_stack_storage(container_name: str = "lightspeed-stack") -> None:
     """Clear Llama Stack storage in library mode (embedded Llama Stack).
 @@
     Returns:
         None
     """
-
+    if is_prow_environment():
+        return
+
     storage_root = _LLAMA_STORAGE_ROOT
     commands = [
         f"rm -f {storage_root}/rag/kv_store.db",
         f"rm -f {storage_root}/sql_store.db",
         f"rm -rf {storage_root}/files/*",
     ]
-    try:
-        for cmd in commands:
-            subprocess.run(
-                ["docker", "exec", container_name, "sh", "-c", cmd],
-                capture_output=True,
-                text=True,
-                timeout=10,
-                check=False,
-            )
-    except subprocess.TimeoutExpired as e:
-        print(f"Warning: Could not clear Llama Stack storage: {e}")
+    for cmd in commands:
+        subprocess.run(
+            ["docker", "exec", container_name, "sh", "-c", cmd],
+            capture_output=True,
+            text=True,
+            timeout=10,
+            check=True,
+        )
```
+ )🧰 Tools
🪛 GitHub Actions: Pydocstyle
[error] 255-255: pydocstyle: D202 - No blank lines allowed after function docstring (found 1).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/utils/utils.py` around lines 254 - 285, The
clear_llama_stack_storage function currently ignores non-zero docker exits and
lacks the Prow short-circuit mentioned in the docstring; update
clear_llama_stack_storage to first return immediately when running under Prow
(e.g., check an environment flag such as PROW_JOB_ID or another repo-standard CI
var) and then run each docker exec with strict failure semantics (use
subprocess.run(..., check=True) or inspect CompletedProcess.returncode and raise
subprocess.CalledProcessError on failure) so any failed removal aborts the test
setup instead of silently continuing; reference clear_llama_stack_storage,
_LLAMA_STORAGE_ROOT, the commands list, and the subprocess.run calls to locate
and change the logic.
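The fail-fast shape the review asks for can be exercised without Docker by injecting the command runner. This is a sketch, not the repo's helper: `PROW_JOB_ID` is an assumed Prow marker (the repo may standardize on another variable), the storage root is a placeholder for `_LLAMA_STORAGE_ROOT`, and the `runner` parameter exists only so the sketch is testable; the real function would call `subprocess.run` directly.

```python
import os
import subprocess
from typing import Callable


def clear_storage_strict(
    container_name: str = "lightspeed-stack",
    runner: Callable[..., object] = subprocess.run,
) -> None:
    """Fail-fast cleanup sketch: skip under Prow, raise on any failed removal."""
    if os.environ.get("PROW_JOB_ID"):  # assumed Prow marker
        return
    storage_root = "/root/.llama"  # placeholder for _LLAMA_STORAGE_ROOT
    for cmd in (
        f"rm -f {storage_root}/rag/kv_store.db",
        f"rm -f {storage_root}/sql_store.db",
        f"rm -rf {storage_root}/files/*",
    ):
        # check=True turns a non-zero docker exit into CalledProcessError,
        # aborting scenario setup instead of silently continuing.
        runner(
            ["docker", "exec", container_name, "sh", "-c", cmd],
            capture_output=True,
            text=True,
            timeout=10,
            check=True,
        )


# Exercise the sketch with a recording runner instead of a real docker call.
os.environ.pop("PROW_JOB_ID", None)  # ensure the demo does not short-circuit
calls = []
clear_storage_strict(runner=lambda argv, **kw: calls.append((argv, kw.get("check"))))
print(len(calls), calls[0][1])  # 3 True
```

All three removal commands run with strict failure semantics, and setting the Prow marker would make the function return before any of them.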
- Added checks to ensure the response body does not contain 'mcp-client' in MCP feature tests.
- Simplified the `clear_llama_stack_storage` function to remove the entire `~/.llama` directory instead of specific files, improving clarity and efficiency.
- Updated comments to reflect the changes in storage clearing logic.
Actionable comments posted: 1
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1377a782-da9d-45ee-a3a9-74e72e555377
📒 Files selected for processing (2)
- tests/e2e/features/mcp.feature
- tests/e2e/utils/utils.py
```diff
 And I set the "MCP-HEADERS" header to
 """
-{"mcp-oauth": {"Authorization": "Bearer test-token"}}
+{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}, "mcp-client": {"Authorization": "Bearer client-test-token"}}
```
OAuth success tests are not isolated from other auth methods.
Passing both mcp-oauth and mcp-client credentials can let these scenarios pass even if OAuth handling regresses, which weakens signal for this suite.
🔧 Suggested fix

```diff
-{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}, "mcp-client": {"Authorization": "Bearer client-test-token"}}
+{"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}}
```

Also applies to: 429-429
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/features/mcp.feature` at line 417, In the OAuth scenarios in
mcp.feature the request headers include both "mcp-oauth" and "mcp-client", which
allows client-token auth to mask OAuth regressions; update the failing scenarios
so the header map only contains "mcp-oauth": {"Authorization": "Bearer
oauth-test-token"} (remove the "mcp-client" entry) and likewise ensure
client-auth scenarios only include "mcp-client": {"Authorization": "Bearer
client-test-token"} so each auth method is tested in isolation.
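The isolation the review asks for amounts to sending exactly one toolgroup entry per `MCP-HEADERS` payload. A small sketch using the fixture tokens quoted in the scenarios; the feature file passes the map as a JSON string header value:

```python
import json

# One MCP-HEADERS payload per auth method, so a regression in one method
# cannot be masked by valid credentials for another.
oauth_only = {"mcp-oauth": {"Authorization": "Bearer oauth-test-token"}}
client_only = {"mcp-client": {"Authorization": "Bearer client-test-token"}}

# Serialized form, as it would appear in the Gherkin docstring.
header_value = json.dumps(oauth_only)
print(header_value)
print("mcp-client" in oauth_only)  # False: the OAuth case stays isolated
```

If a scenario needs both methods authorized (as some happy-path cases do), merging the two maps is a deliberate choice rather than an accident of copy-paste.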
Description
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Tests
Chores
Documentation