LCORE-716: refactor duplicate integration/endpoints tests to parameterized tests
Refactor integration tests to use a parameterized/data-driven test pattern, eliminating duplicate
test code and improving maintainability.

Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.
Signed-off-by: Anik Bhattacharjee <anbhatta@redhat.com>
tests/integration/README.md: 101 additions & 0 deletions
uv run pytest tests/integration/ -v --tb=short
```

## Data-Driven (Parameterized) Tests

### Overview

Data-driven tests use `@pytest.mark.parametrize` to run the same test logic with different inputs. This eliminates duplicate code and makes test coverage more visible.

**Benefits:**

- Reduce code duplication (~80% reduction for repetitive tests)
- Add new test cases by simply adding to the data table
- See all test scenarios at a glance
- Consistent structure across similar tests

### When to Use

Use parameterized tests when you have:

- **Multiple similar tests** that differ only in input data and expected output
- **Validation tests** with multiple valid/invalid scenarios
- **Error handling tests** with different error conditions

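To illustrate the first case, here is a minimal sketch of two near-duplicate tests collapsed into one parameterized test. The `add` helper is hypothetical, used only for illustration:

```python
import pytest


def add(a: int, b: int) -> int:
    """Hypothetical function under test."""
    return a + b


# Before: two tests that differ only in their data.
def test_add_positive() -> None:
    assert add(2, 3) == 5


def test_add_negative() -> None:
    assert add(-2, -3) == -5


# After: one parameterized test covering both cases.
@pytest.mark.parametrize(
    "a, b, expected",
    [
        pytest.param(2, 3, 5, id="positive_operands"),
        pytest.param(-2, -3, -5, id="negative_operands"),
    ],
)
def test_add(a: int, b: int, expected: int) -> None:
    assert add(a, b) == expected
```

Adding a third scenario is now a one-line change to the data table rather than a new test function.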
### Pattern

```python
# Define test cases as a list
TEST_CASES = [
    pytest.param(
        {
            "input": "value1",
            "expected_result": "result1",
        },
        id="descriptive_test_name_1",
    ),
    pytest.param(
        {
            "input": "value2",
            "expected_result": "result2",
        },
        id="descriptive_test_name_2",
    ),
]


@pytest.mark.asyncio
@pytest.mark.parametrize("test_case", TEST_CASES)
async def test_example_data_driven(
    test_case: dict,
    # ... fixtures
) -> None:
    """Data-driven test for example functionality.

    Tests multiple scenarios:
    - Scenario 1 description
    - Scenario 2 description

    Parameters:
        test_case: Dictionary containing test parameters
        # ... other fixtures
    """
    input_value = test_case["input"]
    expected = test_case["expected_result"]

    result = await function_under_test(input_value)

    assert result == expected
```

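As a concrete instance of this pattern, the sketch below tests a small synchronous helper; `classify_status` is hypothetical, and the async variant works the same way once `@pytest.mark.asyncio` is added:

```python
import pytest


def classify_status(code: int) -> str:
    """Hypothetical function under test: buckets an HTTP status code."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "client_error"
    return "other"


STATUS_TEST_CASES = [
    pytest.param(
        {"input": 200, "expected_result": "success"},
        id="ok_is_success",
    ),
    pytest.param(
        {"input": 422, "expected_result": "client_error"},
        id="unprocessable_is_client_error",
    ),
    pytest.param(
        {"input": 503, "expected_result": "other"},
        id="service_unavailable_is_other",
    ),
]


@pytest.mark.parametrize("test_case", STATUS_TEST_CASES)
def test_classify_status(test_case: dict) -> None:
    assert classify_status(test_case["input"]) == test_case["expected_result"]
```

Running `pytest -v` reports each case separately, e.g. `test_classify_status[ok_is_success]`, so a failure points directly at the offending scenario.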
### Best Practices for Parameterized Tests

1. **Use descriptive `id` values** - They appear in test output

   ```python
   pytest.param(..., id="attachment_unknown_type_returns_422")  # Good
   pytest.param(..., id="test1")  # Bad
   ```

2. **Group related test data** - Keep test cases together at module level

   ```python
   ATTACHMENT_TEST_CASES = [...]  # Define near the test that uses it
   ```

3. **Document all scenarios** - List scenarios in docstring

   ```python
   """Data-driven test for attachments.

   Tests:
   - Single attachment
   - Empty payload
   - Invalid type (422 error)
   """
   ```

4. **Keep test logic simple** - Use if/else only for success vs. error paths
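
The last point can be sketched as follows; `post_attachment` is a hypothetical stand-in for the real endpoint call, used only to show the single success-vs-error branch:

```python
import pytest


def post_attachment(payload: dict) -> dict:
    """Hypothetical stand-in for the real endpoint call.

    Rejects unknown attachment types with a 422-style result.
    """
    if payload.get("type") not in {"text", "image"}:
        return {"status": 422, "detail": "unknown attachment type"}
    return {"status": 200, "detail": "ok"}


CASES = [
    pytest.param({"type": "text"}, 200, id="known_type_accepted"),
    pytest.param({"type": "exe"}, 422, id="unknown_type_returns_422"),
]


@pytest.mark.parametrize("payload, expected_status", CASES)
def test_attachment_status(payload: dict, expected_status: int) -> None:
    result = post_attachment(payload)
    # One success-vs-error branch; anything more belongs in separate tests.
    if expected_status == 200:
        assert result["detail"] == "ok"
    else:
        assert result["status"] == expected_status
```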
0 commit comments