LCORE-716: Refactor duplicate integration/endpoints tests to parameterized tests #1394
anik120 wants to merge 1 commit into lightspeed-core:main
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
Walkthrough: Consolidates many duplicate async integration tests into parameterized pytest tests and adds a new "Data-Driven (Parameterized) Tests" section to `tests/integration/README.md`.
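The consolidation pattern the walkthrough describes can be sketched in a minimal standalone form. The endpoint behavior, case names, and `ATTACHMENT_TEST_CASES` table below are invented for illustration, not the PR's actual code:

```python
# Hypothetical sketch of the refactor pattern: each former copy-pasted
# test becomes one row of pytest.param entries, and a single test
# function consumes the table.
import pytest

ATTACHMENT_TEST_CASES = [
    pytest.param(
        {"attachments": [], "expected_status": 200},
        id="no_attachments_accepted",
    ),
    pytest.param(
        {"attachments": [{"content_type": "application/x-unknown"}], "expected_status": 422},
        id="unknown_content_type_rejected",
    ),
]


@pytest.mark.parametrize("test_case", ATTACHMENT_TEST_CASES)
def test_attachment_validation(test_case):
    # A real test would call the endpoint handler; this stand-in just
    # mirrors the expected behavior so the shape of the pattern is clear.
    status = 200 if not test_case["attachments"] else 422
    assert status == test_case["expected_status"]
```

The `id=` strings are what make failures readable: pytest reports `test_attachment_validation[unknown_content_type_rejected]` rather than an opaque index.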
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~30 minutes. Pre-merge checks: ✅ 3 of 3 passed.
Force-pushed e1fc184 to f2de5ba
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/endpoints/test_model_list.py (1)
89-155: ⚠️ Potential issue | 🟡 Minor — Run the formatter on this new parametrized block before merge.
This file is already failing `black --check`, and pylint is also flagging the long `test_case` parameter line in the docstring.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_model_list.py` around lines 89-155: run black (or your repo formatter) on the file so the new parametrized block (MODEL_FILTER_TEST_CASES) is formatted, and break/wrap the long docstring parameter lines inside test_models_list_with_filter (especially the "test_case: Dictionary containing test parameters..." line) so pylint line-length warnings go away; ensure the parameter name test_case and the function signature remain unchanged.
tests/integration/endpoints/test_query_integration.py (1)
356-372: ⚠️ Potential issue | 🟡 Minor — Wrap the `test_case` parameter docstring line.
Line 367 is currently over pylint's 100-character limit, so this otherwise-good refactor still leaves CI red.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_query_integration.py` around lines 356-372: the docstring line describing the test_case parameter exceeds pylint's 100-char limit; break the long "test_case: Dictionary containing test parameters (attachments, expected_status, expected_error)" line into two or more wrapped lines so each is under 100 characters (e.g., "test_case: Dictionary containing test parameters such as attachments and expected_status" on one line and "expected_error" on the next), keeping the parameter name "test_case" intact and preserving the same content and punctuation in the docstring for the query v2 endpoint attachment validation tests.
🧹 Nitpick comments (2)
tests/integration/endpoints/test_tools_integration.py (1)
30-45: Drop the duplicated `expect_www_authenticate` flag.
This table can derive that boolean from `www_authenticate is not None`, so carrying both values creates an easy desync point when new cases are added.
♻️ Suggested simplification

```diff
 TOOLS_OAUTH_401_TEST_CASES = [
     pytest.param(
         {
             "www_authenticate": 'Bearer realm="oauth"',
-            "expect_www_authenticate": True,
         },
         id="with_www_authenticate_when_mcp_oauth_required",
     ),
     pytest.param(
         {
             "www_authenticate": None,
-            "expect_www_authenticate": False,
         },
         id="without_www_authenticate_when_oauth_probe_times_out",
     ),
 ]
```

```diff
     www_authenticate = test_case["www_authenticate"]
-    expect_www_authenticate = test_case["expect_www_authenticate"]
+    expect_www_authenticate = www_authenticate is not None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_tools_integration.py` around lines 30-45: the test table TOOLS_OAUTH_401_TEST_CASES contains a redundant expect_www_authenticate flag; remove that key from each pytest.param and update any test that reads expect_www_authenticate to compute the expectation from the www_authenticate value (e.g., expected = www_authenticate is not None). Locate references to TOOLS_OAUTH_401_TEST_CASES and the test function(s) that unpack its params, and replace uses of expect_www_authenticate with the derived expression so the table only carries www_authenticate.
tests/integration/endpoints/test_model_list.py (1)
89-129: Remove the redundant `expected_count` field.
Every case already encodes the count in `expected_models`, so keeping both makes it easier for the table to drift out of sync during future edits.
♻️ Suggested simplification

```diff
 MODEL_FILTER_TEST_CASES = [
     pytest.param(
         {
             "filter_type": None,
-            "expected_count": 2,
             "expected_models": [
                 {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
                 {"identifier": "test-provider/test-model-2", "api_model_type": "embedding"},
             ],
         },
         id="no_filter_returns_all_models",
     ),
```

```diff
-    expected_count = test_case["expected_count"]
     expected_models = test_case["expected_models"]
 @@
-    assert len(response.models) == expected_count
+    assert len(response.models) == len(expected_models)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_model_list.py` around lines 89-129: remove the redundant expected_count entries from the table of test cases so the expected_models list is the single source of truth; update MODEL_FILTER_TEST_CASES by deleting the "expected_count" key from each pytest.param dict and adjust any test assertions that reference expected_count to compute the count as len(entry["expected_models"]) or to assert against expected_models directly (search for uses of MODEL_FILTER_TEST_CASES and expected_count in the tests and replace with len(expected_models) or direct comparisons).
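The single-source-of-truth idea behind this suggestion can be checked with a standalone sketch. The `filter_models` helper and the simplified fixture data are stand-ins for the endpoint's real behavior, not code from the PR:

```python
# Sketch: derive counts from expected_models with len() instead of
# storing a separate expected_count that can drift out of sync.
ALL_MODELS = [
    {"identifier": "test-provider/test-model-1", "api_model_type": "llm"},
    {"identifier": "test-provider/test-model-2", "api_model_type": "embedding"},
]

def filter_models(models, filter_type):
    # Mimics the endpoint's filter: no filter returns everything.
    return [m for m in models if filter_type is None or m["api_model_type"] == filter_type]

MODEL_FILTER_TEST_CASES = [
    {"filter_type": None, "expected_models": ALL_MODELS},
    {"filter_type": "llm", "expected_models": ALL_MODELS[:1]},
]

for case in MODEL_FILTER_TEST_CASES:
    result = filter_models(ALL_MODELS, case["filter_type"])
    assert result == case["expected_models"]
    # The count check needs no separate field:
    assert len(result) == len(case["expected_models"])
```

Because the count is always computed, adding a new case can never produce a table row whose count contradicts its model list.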
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_streaming_query_integration.py`:
- Around line 158-173: The docstring adds a long "Parameters" entry for
test_case that exceeds the 100-character line length; edit the docstring in the
test function in tests/integration/endpoints/test_streaming_query_integration.py
so the "test_case: Dictionary containing test parameters (attachments,
expected_status, expected_error)" line is wrapped to multiple shorter lines
(each <=100 chars) or rephrased into two lines (e.g., "test_case: Dict with test
parameters" followed by a short bulleted or indented continuation listing
attachments, expected_status, expected_error) so the docstring complies with the
pylint 100-character limit.
- Around line 251-258: The test function
test_streaming_query_endpoint_returns_401_for_mcp_oauth has too many positional
arguments and triggers pylint's
too-many-arguments/too-many-positional-arguments; fix by adding the same pylint
suppression used in tests/integration/endpoints/test_query_integration.py to the
test function declaration (disable too-many-arguments and
too-many-positional-arguments) so the linter ignores this parametrized test
signature.
In `@tests/integration/endpoints/test_tools_integration.py`:
- Around line 50-57: The test function
test_tools_endpoint_returns_401_for_mcp_oauth triggers a pylint
"too-many-arguments"/"too-many-positional-arguments" blocker; fix it by applying
the same local pylint suppression used in
tests/integration/endpoints/test_query_integration.py (add an inline # pylint:
disable=too-many-arguments,too-many-positional-arguments comment directly on the
async def line) or alternatively collapse several fixtures into a single helper
fixture and update the test signature to use that one fixture instead; update
the function definition for test_tools_endpoint_returns_401_for_mcp_oauth
accordingly.
In `@tests/integration/README.md`:
- Around line 255-355: Add a TOC entry for the new top-level section titled
"Data-Driven (Parameterized) Tests" to the README's table of contents so it
appears between "Running Tests" and "Best Practices"; update the TOC to include
a link that matches the exact header text ("Data-Driven (Parameterized) Tests")
using the same anchor style as other entries so the new guidance is
discoverable.
---
Outside diff comments:
In `@tests/integration/endpoints/test_model_list.py`:
- Around line 89-155: Run the code formatter and fix the long docstring line:
run black (or your repo formatter) on the file so the new parametrized block
(MODEL_FILTER_TEST_CASES) is formatted, and break/wrap the long docstring
parameter lines inside test_models_list_with_filter (especially the "test_case:
Dictionary containing test parameters..." line) so pylint line-length warnings
go away; ensure the parameter name test_case and the function signature remain
unchanged.
In `@tests/integration/endpoints/test_query_integration.py`:
- Around line 356-372: The docstring line describing the test_case parameter
exceeds pylint's 100-char limit; break the long "test_case: Dictionary
containing test parameters (attachments, expected_status, expected_error)" line
into two or more wrapped lines so each is under 100 characters (e.g.,
"test_case: Dictionary containing test parameters such as attachments and
expected_status" on one line and "expected_error on the next"), keeping the
parameter name "test_case" intact and preserving the same content and
punctuation in the docstring for the query v2 endpoint attachment validation
tests.
---
Nitpick comments:
In `@tests/integration/endpoints/test_model_list.py`:
- Around line 89-129: Remove the redundant expected_count entries from the table
of test cases so the expected_models list is the single source of truth; update
MODEL_FILTER_TEST_CASES by deleting the "expected_count" key from each
pytest.param dict and adjust any test assertions that reference expected_count
to compute count as len(entry["expected_models"]) or to assert against
expected_models directly (search for uses of MODEL_FILTER_TEST_CASES and
expected_count in the tests and replace with len(expected_models) or direct
comparisons).
In `@tests/integration/endpoints/test_tools_integration.py`:
- Around line 30-45: The test table TOOLS_OAUTH_401_TEST_CASES contains a
redundant expect_www_authenticate flag; remove that key from each pytest.param
and update any test that reads expect_www_authenticate to compute the
expectation from the www_authenticate value (e.g., expected = www_authenticate
is not None or bool(www_authenticate)). Locate references to
TOOLS_OAUTH_401_TEST_CASES and the test function(s) that unpack its params and
replace uses of expect_www_authenticate with the derived expression so the table
only carries www_authenticate.
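The derived-flag suggestion above can be demonstrated with a minimal standalone sketch; `expected_header_presence` is an invented helper name, and only the two `www_authenticate` values are taken from the review:

```python
# Standalone check of the suggestion: keep only www_authenticate in each
# case and derive the expectation from it.
TOOLS_OAUTH_401_TEST_CASES = [
    {"www_authenticate": 'Bearer realm="oauth"'},
    {"www_authenticate": None},
]

def expected_header_presence(test_case):
    www_authenticate = test_case["www_authenticate"]
    # Single source of truth: a WWW-Authenticate header is expected
    # exactly when one was configured for the case.
    return www_authenticate is not None

print([expected_header_presence(c) for c in TOOLS_OAUTH_401_TEST_CASES])  # → [True, False]
```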
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 545532b4-198b-4089-ae21-97aa1b36c17e
📒 Files selected for processing (5)
- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
Force-pushed f2de5ba to 5b1976e
```python
        },
        id="filter_llm_returns_llm_model",
    ),
    pytest.param(
```
there is no reason for this test, it tests already tested logic
So this test was already there; I didn't add any new test. It was previously called test_models_list_filter_model_type_embedding.
You could certainly make a case that this test doesn't need to exist, but that's out of scope for this PR; a follow-up PR should remove it.
Hope that makes sense?
Force-pushed 5b1976e to b954ce9
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration/endpoints/test_query_integration.py`:
- Around line 281-288: The "empty_payload" pytest.param currently omits the
attachments field instead of sending it explicitly as null, so update the test
row with id "empty_payload" to include the attachments key set to None (e.g.,
{"attachments": None, ...}) if you want to assert the explicit null shape, or
rename the id/data to reflect that the field is omitted; modify the pytest.param
entry inside the test suite (the parameter list created with pytest.param and id
"empty_payload") accordingly to cover both shapes or clearly document which
shape is being tested.
- Around line 390-399: The assertion that uses getattr(response, "status_code",
status.HTTP_200_OK) is a false-positive because it defaults missing status_code
to 200; update the test to validate a real HTTP status by either (a) removing
the default and asserting response.status_code == expected_status (ensuring
response actually has a status_code attribute), or (b) invoking the real HTTP
client instead of calling query_endpoint_handler directly so you can assert the
actual HTTP response code; locate the check around query_endpoint_handler and
replace the getattr-based assertion with one of these approaches.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5088272f-9e57-442b-92f2-c7df99f3f964
📒 Files selected for processing (5)
- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
✅ Files skipped from review due to trivial changes (1)
- tests/integration/endpoints/test_streaming_query_integration.py
Force-pushed b954ce9 to b3d2f8c
♻️ Duplicate comments (1)
tests/integration/README.md (1)
256-260: ⚠️ Potential issue | 🟡 Minor — Standardize the wording variant for parameterized/parametrize.
The section mixes the prose variant ("Parameterized") with the `@pytest.mark.parametrize` spelling. Keep the decorator spelling as-is, and use one consistent prose variant in headings and body copy to avoid terminology drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/README.md` around lines 256-260: the heading and body mix the prose variant "Parameterized" with the decorator spelling; standardize to the decorator form by updating the heading "Data-Driven (Parameterized) Tests" and any occurrences in the body to use the `@pytest.mark.parametrize` spelling (e.g., "Data-Driven (`@pytest.mark.parametrize`) Tests"), refer to "parametrize" consistently in explanatory text, and ensure references to the decorator name match `@pytest.mark.parametrize` exactly so all mentions and examples use a single consistent term.
🧹 Nitpick comments (2)
tests/integration/endpoints/test_streaming_query_integration.py (2)
285-289: Align mocked 401 detail payload with production error schema.
The mocked `HTTPException` only sets `detail["cause"]`. Production unauthorized responses include both `detail.response` and `detail.cause`; matching that shape makes this integration test more realistic.
Suggested change

```diff
 oauth_401 = HTTPException(
     status_code=status.HTTP_401_UNAUTHORIZED,
-    detail={"cause": "MCP server at http://example.com requires OAuth"},
+    detail={
+        "response": "Missing or invalid credentials provided by client",
+        "cause": "MCP server at http://example.com requires OAuth",
+    },
     headers={"WWW-Authenticate": www_authenticate} if www_authenticate else None,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_streaming_query_integration.py` around lines 285-289: the mocked oauth_401 HTTPException currently only sets detail["cause"]; update the detail payload to match the production shape by including both detail["response"] and detail["cause"] (keeping the existing cause value) so the test's mocked 401 mirrors the production error schema.
213-215: Avoid brittle string-casing assertion on error summary.
Line 215 hard-codes `"Invalid"` in `detail["response"]`, which is fragile to copy/casing changes and can cause noisy failures unrelated to behavior.
Suggested change

```diff
 if expected_error:
     assert expected_error in exc_info.value.detail["cause"]
-    assert "Invalid" in exc_info.value.detail["response"]
+    assert isinstance(exc_info.value.detail.get("response"), str)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_streaming_query_integration.py` around lines 213-215: the test currently asserts a hard-coded casing with assert "Invalid" in exc_info.value.detail["response"], which is brittle; change this to a case-insensitive check by lowercasing the response before the assertion (e.g., assert "invalid" in exc_info.value.detail["response"].lower()) so the test no longer fails due to capitalization differences while still verifying the expected summary text.
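The two alternatives floated in this comment (case-insensitive substring vs. shape-only check) can be compared in a self-contained sketch; the `detail` dict below is a made-up stand-in for `exc_info.value.detail`:

```python
# Sketch of the assertion variants discussed in the review.
detail = {"response": "Invalid attachment type", "cause": "unsupported content type"}

# Brittle: breaks if the copy changes to "invalid" or "INVALID".
assert "Invalid" in detail["response"]

# Case-insensitive: survives casing changes while still checking the text.
assert "invalid" in detail["response"].lower()

# Loosest alternative from the thread: only assert the payload shape.
assert isinstance(detail.get("response"), str)
```

The case-insensitive form keeps some semantic coverage; the `isinstance` form trades that for immunity to any copy edits.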
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/integration/README.md`:
- Around line 256-260: The heading and body mix the prose variant
"Parameterized" with the decorator spelling; standardize to the decorator form
by updating the heading "Data-Driven (Parameterized) Tests" and any occurrences
in the body to use the `@pytest.mark.parametrize` spelling (e.g., "Data-Driven
(`@pytest.mark.parametrize`) Tests" and refer to "parametrize" consistently in
explanatory text), and ensure references to the decorator name match
`@pytest.mark.parametrize` exactly so all mentions and examples use a single
consistent term.
---
Nitpick comments:
In `@tests/integration/endpoints/test_streaming_query_integration.py`:
- Around line 285-289: The mocked oauth_401 HTTPException currently only sets
detail["cause"]; update the detail payload to match production shape by
including both detail["response"] (a dict mimicking the real response shape,
e.g., status code and body/content) and detail["cause"]; modify the oauth_401
construction so the detail contains both keys (keeping the existing cause value)
to make the test's mocked 401 mirror the production error schema.
- Around line 213-215: The test currently asserts a hard-coded casing with
assert "Invalid" in exc_info.value.detail["response"], which is brittle; change
this to a case-insensitive check by lowercasing the response before assertion
(e.g., assert "invalid" in exc_info.value.detail["response"].lower()) so the
test no longer fails due to capitalization differences while still verifying the
expected summary text; update the block that uses expected_error and exc_info to
perform the .lower() check on detail["response"].
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a69f2d26-3100-4715-80b5-b57961a76c51
📒 Files selected for processing (5)
- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
✅ Files skipped from review due to trivial changes (1)
- tests/integration/endpoints/test_query_integration.py
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/integration/endpoints/test_tools_integration.py
- tests/integration/endpoints/test_model_list.py
…erized tests
Refactor integration tests to use parameterized/data-driven test pattern, eliminating duplicate test code and improving maintainability. Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.
Signed-off-by: Anik Bhattacharjee <[email protected]>
Force-pushed b3d2f8c to cd9f7fa
Looks like ruff is failing in
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/endpoints/test_streaming_query_integration.py (1)
266-295: ⚠️ Potential issue | 🟠 Major — Mock the OAuth probe, not downstream tool loading.
`streaming_query_endpoint_handler` awaits `check_mcp_auth(...)` before MCP tool retrieval, so line 291 no longer exercises the 401 path this test describes. If auth-probe propagation regresses, this test can still pass because it injects the failure later via `utils.responses.get_mcp_tools`.
🔧 Minimal fix

```diff
- When get_mcp_tools raises 401 (with or without WWW-Authenticate),
+ When check_mcp_auth raises 401 (with or without WWW-Authenticate),
  the streaming handler should propagate that response to the client.
@@
 mocker.patch(
-    "utils.responses.get_mcp_tools",
+    "app.endpoints.streaming_query.check_mcp_auth",
     new_callable=mocker.AsyncMock,
     side_effect=oauth_401,
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_streaming_query_integration.py` around lines 266-295: the test currently patches utils.responses.get_mcp_tools to raise a 401, but streaming_query_endpoint_handler calls check_mcp_auth(...) first, so the 401 path isn't exercised; change the mock to patch the auth probe instead (as an AsyncMock with side_effect=oauth_401) so the handler receives and propagates the 401 from check_mcp_auth rather than from get_mcp_tools.
tests/integration/endpoints/test_query_integration.py (1)
169-197: ⚠️ Potential issue | 🔴 Critical — Change the mock target from `utils.responses.get_mcp_tools` to `utils.mcp_oauth_probe.check_mcp_auth`, or use `app.endpoints.query.check_mcp_auth`.
The test mocks `utils.responses.get_mcp_tools`, but this function is neither imported nor called in the query endpoint. The actual OAuth 401 handling occurs in `check_mcp_auth` (called at line 124 of `src/app/endpoints/query.py`), which performs the MCP server probe and raises 401 with `WWW-Authenticate` headers (lines 101–105 of `src/utils/mcp_oauth_probe.py`). Mocking the wrong function means the test does not exercise the real OAuth failure path and will not catch regressions in header propagation. Use `mocker.patch("utils.mcp_oauth_probe.check_mcp_auth", ...)` or patch at the endpoint import level.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `tests/integration/endpoints/test_query_integration.py` around lines 169-197: the test currently patches utils.responses.get_mcp_tools but the query endpoint calls check_mcp_auth, so change the mock target to patch the actual probe function (either "utils.mcp_oauth_probe.check_mcp_auth" or the endpoint-level import "app.endpoints.query.check_mcp_auth") and keep the same AsyncMock behavior and side_effect=oauth_401 so the raised HTTPException (with or without WWW-Authenticate) is propagated by the query handler.
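The underlying rule here — patch the name where the caller looks it up, not where it is defined — can be illustrated with `unittest.mock` alone. The `probe` and `endpoint` modules below are synthetic stand-ins for the repo's real modules:

```python
# Illustration of "patch where the name is looked up": if a handler
# module imports a function into its own namespace, patching the
# defining module does not affect the handler's reference.
import types
from unittest import mock

probe = types.ModuleType("probe")  # stand-in for utils.mcp_oauth_probe

def check_mcp_auth():
    return "real"

probe.check_mcp_auth = check_mcp_auth

# "from probe import check_mcp_auth" inside the endpoint module binds
# the function into the endpoint's own namespace:
endpoint = types.ModuleType("endpoint")  # stand-in for app.endpoints.query
endpoint.check_mcp_auth = probe.check_mcp_auth

with mock.patch.object(endpoint, "check_mcp_auth", return_value="mocked"):
    assert endpoint.check_mcp_auth() == "mocked"  # endpoint-level patch takes effect
    assert probe.check_mcp_auth() == "real"       # defining module is untouched
```

This is why patching a function the endpoint never imports (like `utils.responses.get_mcp_tools` here) silently fails to intercept the real call path.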
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@tests/integration/endpoints/test_query_integration.py`:
- Around line 169-197: The test currently patches utils.responses.get_mcp_tools
but the query endpoint calls check_mcp_auth, so change the mock target to patch
the actual probe function (either "utils.mcp_oauth_probe.check_mcp_auth" or the
endpoint-level import "app.endpoints.query.check_mcp_auth") and keep the same
AsyncMock behavior and side_effect=oauth_401 so the raised HTTPException (with
or without WWW-Authenticate) is propagated by the query handler.
In `@tests/integration/endpoints/test_streaming_query_integration.py`:
- Around line 266-295: The test currently patches utils.responses.get_mcp_tools
to raise a 401, but streaming_query_endpoint_handler calls check_mcp_auth(...)
first so the 401 path isn't exercised; change the mock to patch the auth probe
instead by patching utils.responses.check_mcp_auth (as an AsyncMock) with
side_effect=oauth_401 so the handler receives and propagates the 401 from
check_mcp_auth rather than from get_mcp_tools.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0bd802e0-1624-4cab-9716-4f01480e8f7a
📒 Files selected for processing (5)
- tests/integration/README.md
- tests/integration/endpoints/test_model_list.py
- tests/integration/endpoints/test_query_integration.py
- tests/integration/endpoints/test_streaming_query_integration.py
- tests/integration/endpoints/test_tools_integration.py
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/integration/endpoints/test_model_list.py
Description
Refactor integration tests to use a parameterized/data-driven test pattern, eliminating duplicate test code and improving maintainability.
Converted 19 duplicate test functions into 6 parameterized test functions with 21 test cases.
Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Tests
Documentation