
Commit 1ce0bf0

(lcore-1251) added tls e2e tests
(lcore-1251) fixed tls tests & removed other e2e tests for quicker test runs
(lcore-1251) restored test_list.txt
(lcore-1251) use `trustme` for certs
(lcore-1251) quick tls server fix
(lcore-1251) removed tags in place of steps
(fix) removed unused code
(fix) fixed tls config
(fix) verified correct llm response
(fix) cleanup

LCORE-1253: Add e2e proxy and TLS networking tests

Add comprehensive end-to-end tests verifying that Llama Stack's NetworkConfig (proxy, TLS) works correctly through the Lightspeed Stack pipeline.

Test infrastructure:
- TunnelProxy: Async HTTP CONNECT tunnel proxy that creates TCP tunnels for HTTPS traffic. Tracks CONNECT count and target hosts.
- InterceptionProxy: Async TLS-intercepting (MITM) proxy using a trustme CA to generate per-target server certificates. Simulates corporate SSL inspection proxies.

Behave scenarios (tests/e2e/features/proxy.feature):
- Tunnel proxy: Configures run.yaml with a NetworkConfig proxy pointing to a local tunnel proxy. Verifies that a CONNECT to api.openai.com:443 is observed and that the LLM query succeeds through the proxy.
- Interception proxy: Configures run.yaml with a proxy and a custom CA cert (trustme). Verifies TLS interception of api.openai.com traffic and a successful LLM query through the MITM proxy.
- TLS version: Configures run.yaml with min_version TLSv1.2 and verifies the LLM query succeeds under the TLS constraint.

Each scenario dynamically generates a modified run-ci.yaml with the appropriate NetworkConfig, restarts Llama Stack with the new config, restarts the Lightspeed Stack, and sends a query to verify the full pipeline.

Added trustme>=1.2.1 to dev dependencies.

LCORE-1253: Add negative tests, TLS/cipher scenarios, and cleanup hooks

Expand proxy e2e test coverage to fully address all acceptance criteria.

AC1 (tunnel proxy):
- Add a negative test: the LLM query fails gracefully when the proxy is unreachable.

AC2 (interception proxy with CA):
- Add a negative test: the LLM query fails when the interception proxy CA is not provided (verifies "only successful when the correct CA is provided").

AC3 (TLS version and ciphers):
- Add a TLSv1.3 minimum-version scenario.
- Add a custom cipher-suite configuration scenario (ECDHE+AESGCM:DHE+AESGCM).

Test infrastructure:
- Add an after_scenario cleanup hook in environment.py that restores the original Llama Stack and Lightspeed Stack configs after @Proxy scenarios. Prevents config leaks between scenarios.
- Use a different port for each interception proxy instance to avoid address-already-in-use errors in sequential scenarios.

Documentation:
- Update docs/e2e_scenarios.md with all 7 proxy test scenarios.
- Update docs/e2e_testing.md with the proxy-related Behave tags (@Proxy, @tunnelproxy, @InterceptionProxy, @TLSVersion, @tlscipher).

LCORE-1253: Address review feedback

Changes requested by reviewer (tisnik) and CodeRabbit:
- Detect Docker mode once in before_all and store it as context.is_docker_mode. All proxy step functions now use the context attribute instead of calling _is_docker_mode() repeatedly.
- Log the exception in _restore_original_services instead of silently swallowing it.
- Only clear context.services_modified on successful restoration, not when restoration fails (prevents leaking modified state).
- Add a 10-second timeout to the tunnel proxy's open_connection to prevent stalls on unreachable targets.
- Handle a malformed CONNECT port with a ValueError catch and a 400 response.

LCORE-1253: Replace tag-based cleanup with a Background restore step

Move config restoration from the @Proxy after_scenario hook to an explicit Background Given step. This follows the team convention that tags are used only for test selection (filtering), not for triggering behavior.

The Background step "The original Llama Stack config is restored if modified" runs before every scenario. If a previous scenario left a modified run.yaml (detected by backup-file existence), it restores the original and restarts services. This handles cleanup even when the previous scenario failed midway.

Removed:
- the @Proxy tag from the feature file (it was triggering the after_scenario hook)
- the after_scenario hook for @Proxy in environment.py
- the _restore_original_services function (replaced by the Background step)
- context.services_modified tracking (no hook reads it)

Updated docs/e2e_testing.md: tags are documented as selection-only, not behavior-triggering.

LCORE-1253: Address radofuchs review feedback

Rewrite the proxy e2e tests to follow project conventions:
- Reuse existing step definitions: use "I use query to ask question" from llm_query_response.py and "The status code of the response is" from common_http.py instead of custom query/response steps.
- Split the service restart into two explicit Given steps, "Llama Stack is restarted" and "Lightspeed Stack is restarted", so the restart ordering is visible in the feature file.
- Remove the local (non-Docker) mode code path. Proxy tests use restart_container() exclusively, consistent with the rest of the e2e test suite.
- Check for the specific status code 500 in error scenarios instead of the broad >= 400 range.
- Remove the custom send_query, verify_llm_response, and verify_error_response steps that duplicated existing functionality.

Net reduction: -183 lines from the step definitions.

LCORE-1253: Clean up proxy servers between scenarios

Stop proxy servers and their event loops explicitly in the Background restore step. Previously, proxy daemon threads were left running after each scenario, causing asyncio "Task was destroyed but it is pending" warnings at process exit.

The _stop_proxy helper schedules an async stop on the proxy's event loop, waits for it to complete, then stops the loop. Context references are cleared so the next scenario starts clean.

LCORE-1253: Stop proxy servers after the last scenario in after_feature

Add proxy cleanup in after_feature to stop proxy servers left running from the last scenario. The Background restore step handles cleanup between scenarios, but the last scenario's proxies persist until process exit, causing asyncio "Task was destroyed" warnings. The cleanup checks for proxy objects on context (no tag check needed) and calls _stop_proxy to gracefully shut down the event loops.

Fixed duplicated steps; addressed review comments.
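The TunnelProxy implementation itself is not shown in this commit view. As an illustration only (the names and structure here are assumptions, not the project's code), the CONNECT handling described above, including the 10-second connect timeout and the 400 response on a malformed port, can be sketched with asyncio:

```python
import asyncio


def parse_connect_target(request_line: str) -> tuple[str, int]:
    """Parse 'CONNECT host:port HTTP/1.1' into (host, port).

    int() raises ValueError on a malformed port, which the handler
    below answers with 400 instead of crashing the proxy.
    """
    method, target, _version = request_line.split(" ", 2)
    if method != "CONNECT":
        raise ValueError(f"unsupported method: {method}")
    host, _, port = target.rpartition(":")
    return host, int(port)


async def handle_connect(reader: asyncio.StreamReader,
                         writer: asyncio.StreamWriter) -> None:
    """Open a TCP tunnel to the CONNECT target and pump bytes both ways."""
    request_line = (await reader.readline()).decode("latin-1")
    # Drain the remaining request headers up to the blank line
    while (await reader.readline()) not in (b"\r\n", b""):
        pass
    try:
        host, port = parse_connect_target(request_line.strip())
        # Timeout prevents stalls on unreachable targets
        up_reader, up_writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout=10)
    except ValueError:
        writer.write(b"HTTP/1.1 400 Bad Request\r\n\r\n")
        await writer.drain()
        writer.close()
        return
    except (OSError, asyncio.TimeoutError):
        writer.write(b"HTTP/1.1 502 Bad Gateway\r\n\r\n")
        await writer.drain()
        writer.close()
        return
    writer.write(b"HTTP/1.1 200 Connection Established\r\n\r\n")
    await writer.drain()

    async def pump(src: asyncio.StreamReader,
                   dst: asyncio.StreamWriter) -> None:
        try:
            while data := await src.read(65536):
                dst.write(data)
                await dst.drain()
        finally:
            dst.close()

    # Relay client->upstream and upstream->client concurrently
    await asyncio.gather(pump(reader, up_writer), pump(up_reader, writer))
```

A real proxy would also count CONNECTs and record target hosts, which is what the tests assert on.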
1 parent 270aa62 commit 1ce0bf0

File tree

10 files changed

+621
-1
lines changed


docker-compose-library.yaml

Lines changed: 27 additions & 1 deletion
@@ -30,6 +30,8 @@ services:
         condition: service_healthy
       mock-mcp:
         condition: service_healthy
+      mock-tls-inference:
+        condition: service_healthy
     networks:
       - lightspeednet
     volumes:
@@ -40,6 +42,7 @@ services:
       - ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:Z
       - ./tests/e2e/secrets/mcp-token:/tmp/mcp-token:ro
       - ./tests/e2e/secrets/invalid-mcp-token:/tmp/invalid-mcp-token:ro
+      - mock-tls-certs:/certs:ro
     environment:
       # LLM Provider API Keys
       - BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
@@ -113,7 +116,30 @@ services:
       retries: 3
       start_period: 2s

+  # Mock TLS inference server for TLS E2E tests
+  mock-tls-inference:
+    build:
+      context: ./tests/e2e/mock_tls_inference_server
+      dockerfile: Dockerfile
+    container_name: mock-tls-inference
+    ports:
+      - "8443:8443"
+      - "8444:8444"
+    networks:
+      - lightspeednet
+    volumes:
+      - mock-tls-certs:/certs
+    healthcheck:
+      test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
+

 networks:
   lightspeednet:
-    driver: bridge
+    driver: bridge
+
+volumes:
+  mock-tls-certs:

docker-compose.yaml

Lines changed: 25 additions & 0 deletions
@@ -25,12 +25,16 @@ services:
     container_name: llama-stack
     ports:
       - "8321:8321" # Expose llama-stack on 8321 (adjust if needed)
+    depends_on:
+      mock-tls-inference:
+        condition: service_healthy
     volumes:
       - ./run.yaml:/opt/app-root/run.yaml:z
      - ${GCP_KEYS_PATH:-./tmp/.gcp-keys-dummy}:/opt/app-root/.gcp-keys:ro
       - ./lightspeed-stack.yaml:/opt/app-root/lightspeed-stack.yaml:ro
       - llama-storage:/opt/app-root/src/.llama/storage
       - ./tests/e2e/rag:/opt/app-root/src/.llama/storage/rag:z
+      - mock-tls-certs:/certs:ro
     environment:
       - BRAVE_SEARCH_API_KEY=${BRAVE_SEARCH_API_KEY:-}
       - TAVILY_SEARCH_API_KEY=${TAVILY_SEARCH_API_KEY:-}
@@ -140,9 +144,30 @@ services:
       retries: 3
       start_period: 2s

+  # Mock TLS inference server for TLS E2E tests
+  mock-tls-inference:
+    build:
+      context: ./tests/e2e/mock_tls_inference_server
+      dockerfile: Dockerfile
+    container_name: mock-tls-inference
+    ports:
+      - "8443:8443"
+      - "8444:8444"
+    networks:
+      - lightspeednet
+    volumes:
+      - mock-tls-certs:/certs
+    healthcheck:
+      test: ["CMD", "python", "-c", "import urllib.request,ssl;c=ssl.create_default_context();c.check_hostname=False;c.verify_mode=ssl.CERT_NONE;urllib.request.urlopen('https://localhost:8443/health',context=c)"]
+      interval: 5s
+      timeout: 3s
+      retries: 3
+      start_period: 5s
+

 volumes:
   llama-storage:
+  mock-tls-certs:

 networks:
   lightspeednet:
Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
name: Lightspeed Core Service (LCS)
service:
  host: 0.0.0.0
  port: 8080
  auth_enabled: false
  workers: 1
  color_log: true
  access_log: true
llama_stack:
  use_as_library_client: true
  library_client_config_path: run.yaml
user_data_collection:
  feedback_enabled: true
  feedback_storage: "/tmp/data/feedback"
  transcripts_enabled: true
  transcripts_storage: "/tmp/data/transcripts"
authentication:
  module: "noop"
inference:
  default_provider: tls-openai
  default_model: mock-tls-model
Lines changed: 22 additions & 0 deletions
@@ -0,0 +1,22 @@
name: Lightspeed Core Service (LCS)
service:
  host: 0.0.0.0
  port: 8080
  auth_enabled: false
  workers: 1
  color_log: true
  access_log: true
llama_stack:
  use_as_library_client: false
  url: http://llama-stack:8321
  api_key: xyzzy
user_data_collection:
  feedback_enabled: true
  feedback_storage: "/tmp/data/feedback"
  transcripts_enabled: true
  transcripts_storage: "/tmp/data/transcripts"
authentication:
  module: "noop"
inference:
  default_provider: tls-openai
  default_model: mock-tls-model

tests/e2e/features/environment.py

Lines changed: 9 additions & 0 deletions
@@ -552,6 +552,15 @@ def after_feature(context: Context, feature: Feature) -> None:
         restart_container("lightspeed-stack")
         remove_config_backup(context.default_config_backup)

+    # Restore Lightspeed Stack config if TLS Background step switched it
+    if getattr(context, "tls_config_active", False):
+        switch_config(context.default_config_backup)
+        remove_config_backup(context.default_config_backup)
+        if not context.is_library_mode:
+            restart_container("llama-stack")
+        restart_container("lightspeed-stack")
+        context.tls_config_active = False
+
     # Clean up any proxy servers left from the last scenario
     if hasattr(context, "tunnel_proxy") or hasattr(context, "interception_proxy"):
         from tests.e2e.features.steps.proxy import _stop_proxy

tests/e2e/features/steps/tls.py

Lines changed: 226 additions & 0 deletions
@@ -0,0 +1,226 @@
"""Step definitions for TLS configuration e2e tests.

These tests configure Llama Stack's run.yaml with NetworkConfig TLS settings
and verify the full pipeline works through the Lightspeed Stack.

Config switching uses the same pattern as other e2e tests: overwrite the
host-mounted run.yaml and restart Docker containers. Cleanup is handled
by a Background step that restores the backup before each scenario.
"""

import copy
import os
import shutil

import yaml
from behave import given  # pyright: ignore[reportAttributeAccessIssue]
from behave.runner import Context

from tests.e2e.utils.utils import (
    create_config_backup,
    restart_container,
    switch_config,
)

# Llama Stack config — mounted into the container from the host
_LLAMA_STACK_CONFIG = "run.yaml"
_LLAMA_STACK_CONFIG_BACKUP = "run.yaml.tls-backup"

_LIGHTSPEED_STACK_CONFIG = "lightspeed-stack.yaml"


def _load_llama_config() -> dict:
    """Load the base Llama Stack run config.

    Returns:
        The parsed YAML configuration as a dictionary.
    """
    with open(_LLAMA_STACK_CONFIG, encoding="utf-8") as f:
        return yaml.safe_load(f)


def _write_config(config: dict, path: str) -> None:
    """Write a YAML config file.

    Parameters:
        config: The configuration dictionary to write.
        path: The file path to write to.
    """
    with open(path, "w", encoding="utf-8") as f:
        yaml.dump(config, f, default_flow_style=False)


_TLS_PROVIDER_BASE: dict = {
    "provider_id": "tls-openai",
    "provider_type": "remote::openai",
    "config": {
        "api_key": "test-key",
        "base_url": "https://mock-tls-inference:8443/v1",
        "allowed_models": ["mock-tls-model"],
    },
}

_TLS_MODEL_RESOURCE: dict = {
    "model_id": "mock-tls-model",
    "provider_id": "tls-openai",
    "provider_model_id": "mock-tls-model",
}


def _ensure_tls_provider(config: dict) -> dict:
    """Find or create the tls-openai inference provider in the config.

    If the provider does not exist, it is added along with the
    mock-tls-model registered resource.

    Parameters:
        config: The Llama Stack configuration dictionary.

    Returns:
        The tls-openai provider configuration dictionary.
    """
    providers = config.setdefault("providers", {})
    inference = providers.setdefault("inference", [])

    for provider in inference:
        if provider.get("provider_id") == "tls-openai":
            return provider

    # Provider not found — add it
    provider = copy.deepcopy(_TLS_PROVIDER_BASE)
    inference.append(provider)

    # Also register the model resource
    resources = config.setdefault("registered_resources", {})
    models = resources.setdefault("models", [])
    if not any(m.get("model_id") == "mock-tls-model" for m in models):
        models.append(copy.deepcopy(_TLS_MODEL_RESOURCE))

    return provider


def _backup_llama_config() -> None:
    """Create a backup of the current run.yaml if not already backed up."""
    if not os.path.exists(_LLAMA_STACK_CONFIG_BACKUP):
        shutil.copy(_LLAMA_STACK_CONFIG, _LLAMA_STACK_CONFIG_BACKUP)


def _prepare_tls_provider() -> tuple[dict, dict]:
    """Back up run.yaml, load it, ensure the TLS provider exists, and init network config.

    Returns:
        A tuple of (full config dict, tls-openai provider dict).
    """
    _backup_llama_config()
    config = _load_llama_config()
    provider = _ensure_tls_provider(config)
    provider.setdefault("config", {}).setdefault("network", {})
    return config, provider


# --- Background Steps ---
# Restart steps ("The original Llama Stack config is restored if modified",
# "Llama Stack is restarted", "Lightspeed Stack is restarted") are defined in
# proxy.py and shared across features by behave.


@given("Lightspeed Stack is configured for TLS testing")
def configure_lightspeed_for_tls(context: Context) -> None:
    """Switch lightspeed-stack.yaml to the TLS test configuration.

    Backs up the current config and switches to the TLS variant that sets
    default_provider to tls-openai and default_model to mock-tls-model.
    The backup is restored in after_scenario via the shared restore step.

    Parameters:
        context: Behave test context.
    """
    mode_dir = "library-mode" if context.is_library_mode else "server-mode"
    tls_config = f"tests/e2e/configuration/{mode_dir}/lightspeed-stack-tls.yaml"

    if not hasattr(context, "default_config_backup"):
        context.default_config_backup = create_config_backup(_LIGHTSPEED_STACK_CONFIG)

    switch_config(tls_config)
    restart_container("lightspeed-stack")
    context.tls_config_active = True


# --- TLS Configuration Steps ---


@given("Llama Stack is configured with TLS verification disabled")
def configure_tls_verify_false(context: Context) -> None:
    """Configure run.yaml with TLS verify: false.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {"verify": False}
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with CA certificate verification")
def configure_tls_verify_ca(context: Context) -> None:
    """Configure run.yaml with TLS verify: /certs/ca.crt.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "min_version": "TLSv1.2",
    }
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with TLS verification enabled")
def configure_tls_verify_true(context: Context) -> None:
    """Configure run.yaml with TLS verify: true.

    This should fail when connecting to a self-signed certificate server.

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {"verify": True}
    _write_config(config, _LLAMA_STACK_CONFIG)


@given("Llama Stack is configured with mutual TLS authentication")
def configure_tls_mtls(context: Context) -> None:
    """Configure run.yaml with mutual TLS (client cert and key).

    Parameters:
        context: Behave test context.
    """
    config, provider = _prepare_tls_provider()

    # Update base_url to use the mTLS server port
    provider["config"]["base_url"] = "https://mock-tls-inference:8444/v1"

    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "client_cert": "/certs/client.crt",
        "client_key": "/certs/client.key",
    }
    _write_config(config, _LLAMA_STACK_CONFIG)


@given('Llama Stack is configured with TLS minimum version "{version}"')
def configure_tls_min_version(context: Context, version: str) -> None:
    """Configure run.yaml with TLS minimum version.

    Parameters:
        context: Behave test context.
        version: The TLS version (e.g., "TLSv1.2", "TLSv1.3").
    """
    config, provider = _prepare_tls_provider()
    provider["config"]["network"]["tls"] = {
        "verify": "/certs/ca.crt",
        "min_version": version,
    }
    _write_config(config, _LLAMA_STACK_CONFIG)
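For reference, after a step such as "Llama Stack is configured with CA certificate verification" runs, the tls-openai provider entry written back to run.yaml is shaped roughly like this (reconstructed from the step definitions above; key order may differ in the dumped YAML):

```yaml
providers:
  inference:
    - provider_id: tls-openai
      provider_type: remote::openai
      config:
        api_key: test-key
        base_url: https://mock-tls-inference:8443/v1
        allowed_models:
          - mock-tls-model
        network:
          tls:
            verify: /certs/ca.crt
            min_version: TLSv1.2
registered_resources:
  models:
    - model_id: mock-tls-model
      provider_id: tls-openai
      provider_model_id: mock-tls-model
```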

0 commit comments
