LCORE-716: refactor duplicate integration/endpoints tests to parameterized tests
Refactor integration tests to use parameterized/data-driven test pattern, eliminating duplicate
test code and improving maintainability.
Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases
Signed-off-by: Anik Bhattacharjee <[email protected]>
@@ -252,6 +253,107 @@ uv run make test-integration
uv run pytest tests/integration/ -v --tb=short
```

## Data-Driven (Parameterized) Tests
### Overview
Data-driven tests use `@pytest.mark.parametrize` to run the same test logic with different inputs. This eliminates duplicate code and makes test coverage more visible.

**Benefits:**

- Reduce code duplication
- Add new test cases by simply adding to the data table
- See all test scenarios at a glance
- Consistent structure across similar tests

### When to Use

- **Multiple similar tests** that differ only in input data and expected output
- **Validation tests** with multiple valid/invalid scenarios
- **Error handling tests** with different error conditions

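The "multiple similar tests" case is easiest to see side by side. Below is a minimal sketch, using a hypothetical `status_label` helper (not from this repository), of the duplication that a parameterized test replaces:

```python
import pytest


# Hypothetical helper under test.
def status_label(code: int) -> str:
    return "ok" if code < 400 else "error"


# Without parameterization, each scenario needs its own near-identical test:
#
#     def test_status_label_ok() -> None:
#         assert status_label(200) == "ok"
#
#     def test_status_label_error() -> None:
#         assert status_label(500) == "error"
#
# With a data table, the same coverage is a single test:
@pytest.mark.parametrize(
    "code, expected",
    [(200, "ok"), (302, "ok"), (404, "error"), (500, "error")],
)
def test_status_label(code: int, expected: str) -> None:
    assert status_label(code) == expected
```

Each tuple in the table becomes its own test in pytest's report, so a failing scenario is identified individually.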
### Pattern

```python
# Define test cases as a list
TEST_CASES = [
    pytest.param(
        {
            "input": "value1",
            "expected_result": "result1",
        },
        id="descriptive_test_name_1",
    ),
    pytest.param(
        {
            "input": "value2",
            "expected_result": "result2",
        },
        id="descriptive_test_name_2",
    ),
]


@pytest.mark.asyncio
@pytest.mark.parametrize("test_case", TEST_CASES)
async def test_example_data_driven(
    test_case: dict,
    # ... fixtures
) -> None:
    """Data-driven test for example functionality.

    Tests multiple scenarios:
    - Scenario 1 description
    - Scenario 2 description

    Parameters:
        test_case: Dictionary containing test parameters
        # ... other fixtures
    """
    input_value = test_case["input"]
    expected = test_case["expected_result"]

    result = await function_under_test(input_value)

    assert result == expected
```

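As a concrete, runnable instance of the pattern above — a sketch assuming a hypothetical `normalize_name` function; the test is synchronous and the fixtures are omitted for brevity:

```python
import pytest


# Hypothetical function under test: collapse internal whitespace and lowercase.
def normalize_name(raw: str) -> str:
    return " ".join(raw.split()).lower()


NORMALIZE_TEST_CASES = [
    pytest.param(
        {"input": "  Alice   Smith ", "expected_result": "alice smith"},
        id="collapses_whitespace_and_lowercases",
    ),
    pytest.param(
        {"input": "BOB", "expected_result": "bob"},
        id="lowercases_single_word",
    ),
    pytest.param(
        {"input": "", "expected_result": ""},
        id="empty_string_stays_empty",
    ),
]


@pytest.mark.parametrize("test_case", NORMALIZE_TEST_CASES)
def test_normalize_name_data_driven(test_case: dict) -> None:
    """Data-driven test for normalize_name.

    Tests multiple scenarios:
    - Whitespace collapsing
    - Lowercasing
    - Empty input
    """
    assert normalize_name(test_case["input"]) == test_case["expected_result"]
```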
### Best Practices for Parameterized Tests

1. **Use descriptive `id` values** - They appear in test output

   ```python
   pytest.param(..., id="attachment_unknown_type_returns_422")  # Good
   pytest.param(..., id="test1")  # Bad
   ```

2. **Group related test data** - Keep test cases together at module level

   ```python
   ATTACHMENT_TEST_CASES = [...]  # Define near the test that uses it
   ```

3. **Document all scenarios** - List scenarios in the docstring

   ```python
   """Data-driven test for attachments.

   Tests:
   - Single attachment
   - Empty payload
   - Invalid type (422 error)
   """
   ```

4. **Keep test logic simple** - Use if/else only for success vs. error paths
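
One way to keep success and error scenarios in a single parameterized test is a lone if/else on an expected-error field. A sketch, assuming a hypothetical `parse_port` function (not from this repository):

```python
import pytest


# Hypothetical function under test: parse a TCP port, rejecting out-of-range values.
def parse_port(value: str) -> int:
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


PORT_TEST_CASES = [
    pytest.param(
        {"input": "8080", "expected_result": 8080, "expected_error": None},
        id="valid_port_parses",
    ),
    pytest.param(
        {"input": "0", "expected_result": None, "expected_error": ValueError},
        id="port_zero_rejected",
    ),
    pytest.param(
        {"input": "70000", "expected_result": None, "expected_error": ValueError},
        id="out_of_range_port_rejected",
    ),
]


@pytest.mark.parametrize("test_case", PORT_TEST_CASES)
def test_parse_port_data_driven(test_case: dict) -> None:
    """Success and error paths share one test via a single if/else."""
    if test_case["expected_error"] is not None:
        with pytest.raises(test_case["expected_error"]):
            parse_port(test_case["input"])
    else:
        assert parse_port(test_case["input"]) == test_case["expected_result"]
```

Beyond this one branch, any further conditional logic in the test body is usually a sign the cases belong in separate tests.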
0 commit comments