The dev/env directory contains scripts for bringing up a complete ACSCS test environment on different
types of cluster. The following components are set up:
- A Postgres database
- Fleet Manager
- Fleetshard Sync
- RHACS Operator
The RHACS operator can be installed from the OpenShift Marketplace or from Quay. Images for Fleet Manager & Fleetshard Sync can either be pulled from Quay or built directly from source.
- standard Unix environment with Bash
- `docker` CLI (or a replacement)
- Minikube or equivalent (for local deployment)
- `operator-sdk` (if deploying to clusters without access to the OpenShift Marketplace, such as Minikube)
- `yq` & `jq`
- `kubectl` or `oc`
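Before running any of the scripts, a quick preflight check can verify that these tools are on `PATH`. The snippet below is only a convenience sketch (the tool names are taken from the list above; `check_tool` is not part of the repository):

```shell
#!/usr/bin/env bash
# Report whether a required CLI is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1"
  fi
}

# Tool names from the list above; swap docker/oc for your replacements.
for t in bash docker kubectl oc jq yq operator-sdk; do
  check_tool "$t"
done
```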
The following scripts currently exist in `dev/env/scripts`:
- `lib.sh`: Basic initialization and library script for the other executable scripts.
- `apply` & `delete`: Convenience scripts for applying and deleting Kubernetes resources with support for environment variable interpolation.
- `port-forwarding`: Convenient abstraction layer for `kubectl` port-forwarding.
- `bootstrap.sh`: Sets up the basic environment: creates namespaces, injects image-pull-secrets if necessary, installs OLM (if required), installs the RHACS operator (if desired), pulls required images, etc.
- `up.sh`: Brings up the ACSCS environment consisting of the database, `fleet-manager` and `fleetshard-sync`.
- `down.sh`: Deletes the resources created by `up.sh`.
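The environment interpolation performed by the `apply` and `delete` helpers can be pictured with a minimal sketch. This is illustrative only, not the scripts' actual implementation (`render` and the manifest snippet are made up here):

```shell
# Substitute a literal ${NAMESPACE} placeholder in a manifest read from stdin.
# (Sketch only; the real helpers live in dev/env/scripts.)
render() {
  sed -e "s|\${NAMESPACE}|${NAMESPACE}|g"
}

NAMESPACE=rhacs
echo 'metadata: { namespace: ${NAMESPACE} }' | render
```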
The scripts can be configured using environment variables, the most important options being:
- `CLUSTER_TYPE`: Can be `minikube`, `colima`, `rancher-desktop`, `crc`, `openshift-ci` or `infra-openshift`. In most situations it is auto-sensed based on the cluster name.
- `FLEET_MANAGER_IMAGE`: Reference for an `acs-fleet-manager` image. If unset, a fresh image is built from the current source and deployed.
- `STATIC_TOKEN`: Needs to contain a valid test user token (can be found in BitWarden).
- `STATIC_TOKEN_ADMIN`: Needs to contain a valid admin token (can be found in BitWarden).
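A typical local configuration might look like this (all values are placeholders; the tokens come from BitWarden):

```shell
export CLUSTER_TYPE=minikube             # usually auto-sensed
export FLEET_MANAGER_IMAGE="<image>"     # omit to build from the current source
export STATIC_TOKEN="<test user token>"
export STATIC_TOKEN_ADMIN="<admin token>"
```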
- Install the [necessary tools](#required-tools)
- Set up a test cluster using [one of the supported](#cluster-setup) types
- Ensure the `kubectl` context is pointing to the desired cluster: `kubectl config use-context <cluster>`
- Set the required environment variables `STATIC_TOKEN` and `STATIC_TOKEN_ADMIN`
The primary way of executing the e2e test suite is by calling:

```
$ ./.openshift-ci/tests/e2e.sh
```

This will trigger the full test lifecycle including the cluster bootstrap, building the image (unless `FLEET_MANAGER_IMAGE` points to a specific image tag), deploying it and running the E2E tests.
As an alternative you can invoke e2e tests inside the container:
```
$ make test/deploy/e2e-dockerized
```

This may be useful when you're debugging OpenShift CI issues specific to the container environment.
It also does not require extra [tool](#required-tools) dependencies, such as `jq`, `kubectl` or `operator-sdk`.
In certain situations it is also useful to be able to execute the respective building blocks manually:
Prepare the cluster by installing the necessary components, such as stackrox-operator and openshift-router:

```
$ make deploy/bootstrap # points to bootstrap.sh
```

The following command builds the Managed Services components image and deploys it on the cluster:
```
$ make deploy/dev # points to up.sh
```

Then, after fleet-manager's leader election is complete (check its logs), you can run the e2e test suite manually:
```
$ make test/e2e
```

The environment variable `WAIT_TIMEOUT` can be used to adjust the timeout of each individual test, using a string compatible with Golang's `time.ParseDuration`, e.g. `WAIT_TIMEOUT=20s`. If not set, all tests use a timeout of 5 minutes.
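For example (both invocations merely illustrate the duration syntax):

```shell
WAIT_TIMEOUT=20s make test/e2e     # 20 seconds per test
WAIT_TIMEOUT=2m30s make test/e2e   # 2 minutes 30 seconds per test
```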
To clean up the environment, run:

```
$ make undeploy/dev # points to down.sh
```

The test suite has auto-sensing logic built in to skip the DNS e2e tests when the test environment does not support them. Currently only OpenShift environments support the DNS e2e tests.
To run the DNS e2e tests in addition to the default e2e test setup, the cluster you're running against needs to have the OpenShift `Route` Custom Resource Definition installed, and you need to set the following environment variables:
```
export ROUTE53_ACCESS_KEY="<key-id>"
export ROUTE53_SECRET_ACCESS_KEY="<secret-key>"

# Depending on cluster type and its default configuration you might need
export ENABLE_CENTRAL_EXTERNAL_DOMAIN_DEFAULT=true

# If the domain you test against is not the default dev domain
export CENTRAL_DOMAIN_NAME="<domain>"
```

Bootstrap a local cluster using one of the options below.
Make sure that Minikube is running with options such as:
```
$ minikube start --memory=6G \
    --cpus=2 \
    --apiserver-port=8443 \
    --embed-certs=true \
    --delete-on-failure=true \
    --driver=hyperkit # For example
```

and that the `docker` CLI is in `PATH` (if not, export `DOCKER=...` accordingly).
Make sure that Colima is running with options such as:
```
$ colima start -c 4 -d 60 -m 16 -k
```

and that the colima CLI is in `PATH` (if not, export `DOCKER="/path/to/bin/colima nerdctl -- -n k8s.io"` accordingly).
Depending on several factors it might be necessary to increase the cluster resources even further, e.g.:

```
$ colima start -c 6 -m 16 -d 60 -k
```

CRC needs a lot of resources, and so does a Central tenant. At least the following resource settings were required to make the tests succeed on CRC:
```
$ crc config set memory 18432
$ crc config set cpus 7
```

There's currently no automated way of uploading the fleet-manager image to CRC. Set the `FLEET_MANAGER_IMAGE` environment variable to an image available on Quay, or build the image locally and load it into the CRC registry manually.
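For instance, pointing the deployment at a prebuilt image could look like this (the reference is a placeholder, not a real tag):

```shell
export FLEET_MANAGER_IMAGE="<quay.io image reference>"
```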