
Commit cd9f7fa

LCORE-716: refactor duplicate integration/endpoints tests to parameterized tests
Refactor the integration tests to use a parameterized, data-driven test pattern, eliminating duplicate test code and improving maintainability. Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.

Signed-off-by: Anik Bhattacharjee <[email protected]>
1 parent b250fca commit cd9f7fa

File tree

5 files changed (+539, -618 lines)


tests/integration/README.md

Lines changed: 102 additions & 0 deletions

@@ -10,6 +10,7 @@ This directory contains integration tests for Lightspeed Core Stack. Integration
 - [Test Constants](#test-constants)
 - [Writing Integration Tests](#writing-integration-tests)
 - [Running Tests](#running-tests)
+- [Data-Driven (Parameterized) Tests](#data-driven-parameterized-tests)
 - [Best Practices](#best-practices)
 
 ## Getting Started
@@ -252,6 +253,107 @@ uv run make test-integration
 uv run pytest tests/integration/ -v --tb=short
 ```
 
+## Data-Driven (Parameterized) Tests
+
+### Overview
+
+Data-driven tests use `@pytest.mark.parametrize` to run the same test logic with different inputs. This eliminates duplicate code and makes test coverage more visible.
+
+**Benefits:**
+- Reduce code duplication
+- Add new test cases by simply adding to the data table
+- See all test scenarios at a glance
+- Consistent structure across similar tests
+
+### When to Use
+
+Use parameterized tests when you have:
+- **Multiple similar tests** that differ only in input data and expected output
+- **Validation tests** with multiple valid/invalid scenarios
+- **Error handling tests** with different error conditions
+
+### Pattern
+
+```python
+# Define test cases as a list
+TEST_CASES = [
+    pytest.param(
+        {
+            "input": "value1",
+            "expected_result": "result1",
+        },
+        id="descriptive_test_name_1",
+    ),
+    pytest.param(
+        {
+            "input": "value2",
+            "expected_result": "result2",
+        },
+        id="descriptive_test_name_2",
+    ),
+]
+
+@pytest.mark.asyncio
+@pytest.mark.parametrize("test_case", TEST_CASES)
+async def test_example_data_driven(
+    test_case: dict,
+    # ... fixtures
+) -> None:
+    """Data-driven test for example functionality.
+
+    Tests multiple scenarios:
+    - Scenario 1 description
+    - Scenario 2 description
+
+    Parameters:
+        test_case: Dictionary containing test parameters
+        # ... other fixtures
+    """
+    input_value = test_case["input"]
+    expected = test_case["expected_result"]
+
+    result = await function_under_test(input_value)
+
+    assert result == expected
+```
+
+### Best Practices for Parameterized Tests
+
+1. **Use descriptive `id` values** - They appear in test output
+   ```python
+   pytest.param(..., id="attachment_unknown_type_returns_422")  # Good
+   pytest.param(..., id="test1")  # Bad
+   ```
+
+2. **Group related test data** - Keep test cases together at module level
+   ```python
+   ATTACHMENT_TEST_CASES = [...]  # Define near the test that uses it
+   ```
+
+3. **Document all scenarios** - List scenarios in docstring
+   ```python
+   """Data-driven test for attachments.
+
+   Tests:
+   - Single attachment
+   - Empty payload
+   - Invalid type (422 error)
+   """
+   ```
+
+4. **Keep test logic simple** - Use if/else only for success vs. error paths
+   ```python
+   if expected_status == 200:
+       # Success assertions
+   else:
+       # Error assertions
+   ```
+
+5. **Use consistent dict keys** - Standardize parameter names
+   ```python
+   {"expected_status": 200, "expected_error": None}  # Consistent
+   ```
 ## Best Practices
 
 ### 1. Use Common Fixtures
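Taken together, the pattern and practices from the new README section can be sketched without pytest at all. In the snippet below, the plain loop only stands in for the expansion that `@pytest.mark.parametrize` performs at collection time, and `normalize_status` plus its case table are hypothetical examples, not code from this repository:

```python
# Minimal, dependency-free sketch of the data-driven pattern.
# `normalize_status` and STATUS_TEST_CASES are hypothetical; with pytest,
# the run_cases loop is replaced by @pytest.mark.parametrize expansion.

def normalize_status(code: int) -> str:
    """Hypothetical function under test: classify an HTTP status code."""
    return "success" if 200 <= code < 300 else "error"

# Consistent dict keys and descriptive ids, per the best practices above.
STATUS_TEST_CASES = [
    {"id": "status_200_is_success", "input": 200, "expected": "success"},
    {"id": "status_404_is_error", "input": 404, "expected": "error"},
    {"id": "status_503_is_error", "input": 503, "expected": "error"},
]

def run_cases() -> list[str]:
    """Run every case, returning the ids that passed."""
    passed = []
    for case in STATUS_TEST_CASES:
        # The case id doubles as the assertion message, mirroring how
        # pytest.param ids label each generated test in the report.
        assert normalize_status(case["input"]) == case["expected"], case["id"]
        passed.append(case["id"])
    return passed

if __name__ == "__main__":
    print(run_cases())
```

With pytest installed, the same table would instead be fed to `@pytest.mark.parametrize`, and each id would appear as its own entry in the test report.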

tests/integration/endpoints/test_model_list.py

Lines changed: 53 additions & 104 deletions

@@ -86,55 +86,61 @@ def mock_llama_stack_client_failing_fixture(
     yield mock_client
 
 
-@pytest.mark.asyncio
-async def test_models_list(
-    test_config: AppConfig,
-    mock_llama_stack_client: AsyncMockType,
-    test_request: Request,
-    test_auth: AuthTuple,
-) -> None:
-    """Test that models endpoint returns successful response.
-
-    This integration test verifies:
-    - Model list handler
-
-    Parameters:
-        test_config: Test configuration
-        mock_llama_stack_client: Mocked Llama Stack client
-        test_request: FastAPI request
-        test_auth: noop authentication tuple
-    """
-    _ = test_config
-    _ = mock_llama_stack_client
-
-    response = await models_endpoint_handler(
-        request=test_request,
-        auth=test_auth,
-        model_type=ModelFilter(model_type=None),
-    )
-
-    # Verify response structure
-    assert response is not None
-    assert len(response.models) == 2
-    assert response.models[0]["identifier"] == "test-provider/test-model-1"
-    assert response.models[0]["api_model_type"] == "llm"
-    assert response.models[1]["identifier"] == "test-provider/test-model-2"
-    assert response.models[1]["api_model_type"] == "embedding"
+MODEL_FILTER_TEST_CASES = [
+    pytest.param(
+        {
+            "filter_type": None,
+            "expected_count": 2,
+            "expected_models": [
+                {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
+                {
+                    "identifier": "test-provider/test-model-2",
+                    "api_model_type": "embedding",
+                },
+            ],
+        },
+        id="no_filter_returns_all_models",
+    ),
+    pytest.param(
+        {
+            "filter_type": "llm",
+            "expected_count": 1,
+            "expected_models": [
+                {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
+            ],
+        },
+        id="filter_llm_returns_llm_model",
+    ),
+    pytest.param(
+        {
+            "filter_type": "foobar",
+            "expected_count": 0,
+            "expected_models": [],
+        },
+        id="filter_unknown_type_returns_empty",
+    ),
+]
 
 
 @pytest.mark.asyncio
-async def test_models_list_filter_model_type_llm(
+@pytest.mark.parametrize("test_case", MODEL_FILTER_TEST_CASES)
+async def test_models_list_with_filter(
+    test_case: dict,
     test_config: AppConfig,
     mock_llama_stack_client: AsyncMockType,
     test_request: Request,
     test_auth: AuthTuple,
 ) -> None:
-    """Test that models endpoint returns successful response.
+    """Tests for models endpoint filtering.
 
-    This integration test verifies:
-    - Model list handler
+    Tests different model_type filter scenarios:
+    - No filter (returns all models)
+    - Filter by llm type
+    - Filter by unknown type (returns empty)
 
     Parameters:
+        test_case: Dictionary containing test parameters (filter_type,
+            expected_count, expected_models)
         test_config: Test configuration
         mock_llama_stack_client: Mocked Llama Stack client
         test_request: FastAPI request
@@ -143,81 +149,24 @@ async def test_models_list_filter_model_type_llm(
     _ = test_config
     _ = mock_llama_stack_client
 
-    response = await models_endpoint_handler(
-        request=test_request, auth=test_auth, model_type=ModelFilter(model_type="llm")
-    )
-
-    # Verify response structure
-    assert response is not None
-    assert len(response.models) == 1
-    assert response.models[0]["identifier"] == "test-provider/test-model-1"
-    assert response.models[0]["api_model_type"] == "llm"
-
-
-@pytest.mark.asyncio
-async def test_models_list_filter_model_type_embedding(
-    test_config: AppConfig,
-    mock_llama_stack_client: AsyncMockType,
-    test_request: Request,
-    test_auth: AuthTuple,
-) -> None:
-    """Test that models endpoint returns successful response.
-
-    This integration test verifies:
-    - Model list handler
-
-    Parameters:
-        test_config: Test configuration
-        mock_llama_stack_client: Mocked Llama Stack client
-        test_request: FastAPI request
-        test_auth: noop authentication tuple
-    """
-    _ = test_config
-    _ = mock_llama_stack_client
+    filter_type = test_case["filter_type"]
+    expected_count = test_case["expected_count"]
+    expected_models = test_case["expected_models"]
 
     response = await models_endpoint_handler(
         request=test_request,
         auth=test_auth,
-        model_type=ModelFilter(model_type="embedding"),
+        model_type=ModelFilter(model_type=filter_type),
     )
 
     # Verify response structure
     assert response is not None
-    assert len(response.models) == 1
-    assert response.models[0]["identifier"] == "test-provider/test-model-2"
-    assert response.models[0]["api_model_type"] == "embedding"
-
-
-@pytest.mark.asyncio
-async def test_models_list_filter_model_type_unknown(
-    test_config: AppConfig,
-    mock_llama_stack_client: AsyncMockType,
-    test_request: Request,
-    test_auth: AuthTuple,
-) -> None:
-    """Test that models endpoint returns successful response.
-
-    This integration test verifies:
-    - Model list handler
-
-    Parameters:
-        test_config: Test configuration
-        mock_llama_stack_client: Mocked Llama Stack client
-        test_request: FastAPI request
-        test_auth: noop authentication tuple
-    """
-    _ = test_config
-    _ = mock_llama_stack_client
-
-    response = await models_endpoint_handler(
-        request=test_request,
-        auth=test_auth,
-        model_type=ModelFilter(model_type="foobar"),
-    )
+    assert len(response.models) == expected_count
 
-    # Verify response structure
-    assert response is not None
-    assert len(response.models) == 0
+    # Verify each expected model
+    for i, expected_model in enumerate(expected_models):
+        assert response.models[i]["identifier"] == expected_model["identifier"]
+        assert response.models[i]["api_model_type"] == expected_model["api_model_type"]
 
 
 @pytest.mark.asyncio
