Search: e2e for sharded with single mongot #823
lsierant merged 4 commits into search/sharded-cluster from
Conversation
Warning: This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
This stack of pull requests is managed by Graphite.
Force-pushed d64c803 to cb1b272
Force-pushed cb1b272 to d854b2f
Force-pushed fd1daca to 22f9e5a
viveksinghggits left a comment:
Looks pretty good.
```python
exec_command = ["tar", "xf", "-", "-C", "/".join(dest_path.split("/")[:-1]) or "/"]
resp = stream(
    self.core_v1.connect_get_namespaced_pod_exec,
    self.pod_name,
    self.namespace,
    command=exec_command,
    stderr=True,
    stdin=True,
    stdout=True,
    tty=False,
    _preload_content=False,
)
```
Would it make sense to use kubectl cp directly instead? That way we might not have to create the tar of the file separately.
It would work well when running locally or directly on the evg host, but in CI this is going to run inside the test pod, and I believe we don't have kubectl there.
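For context on the tar step being discussed: before piping into the `tar xf -` exec above, the file has to be packed into a tar stream on the client side. A minimal sketch of that packing step (the `make_tar_stream` helper name is hypothetical, not from the PR):

```python
import io
import tarfile
from pathlib import PurePosixPath


def make_tar_stream(dest_path: str, data: bytes) -> bytes:
    """Pack `data` into an in-memory tar archive whose single member is
    named after the final component of `dest_path`, so that piping it to
    `tar xf - -C <dest dir>` inside the pod recreates the file there."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=PurePosixPath(dest_path).name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()
```

The resulting bytes would then be written to the exec's stdin channel; this is roughly what `kubectl cp` does under the hood as well.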
```python
w = Watch()
for event in w.stream(
    self.core_v1.list_namespaced_pod,
    namespace=self.namespace,
    label_selector="app=mongodb-tools",
    timeout_seconds=120,
):
    pod = event["object"]
    if pod.status.phase == "Running":
        # Check if container is ready
        if pod.status.container_statuses:
            for container_status in pod.status.container_statuses:
                if container_status.ready:
                    logger.info(f"{self.pod_name} is ready")
                    w.stop()
                    return
raise TimeoutError(f"Timed out waiting for {self.pod_name} to be ready")
```
I see we already have utilities get_pod_when_running and get_pod_when_ready in docker/mongodb-kubernetes-tests/kubetester/__init__.py; should we try to leverage them instead?
yes, definitely! Will change that
```python
    namespace=namespace,
)

# if try_load(resource):
```
nit: Should we uncomment or remove this?
nit: given how long the test is, I think we should add a comment at the beginning to give an overview
```yaml
tags: [ "pr_patch", "staging", "e2e_test_suite", "cloudqa", "cloudqa_non_static" ]
run_on:
  - ubuntu2404-medium
  - ubuntu2404-large
```
nit: do we need to bump it for the whole cloudqa suite?
let's leave it as it is for now, we'll address this later when merging to master
Force-pushed d854b2f to 90007b8
Force-pushed 22f9e5a to 23f196b
evergreen retry
evergreen refresh
Missing e2e test for sharded search with one mongot instance