This directory contains various sample configurations to help you understand how to deploy and configure the Multigres Operator.
| File | Description |
|---|---|
| `minimal.yaml` | The simplest possible cluster. Relies entirely on system defaults. |
| `templated-cluster.yaml` | Demonstrates how to use reusable Templates for configuration. |
| `overrides.yaml` | Advanced usage showing how to patch/override specific fields on top of templates. |
| `templates/` | A directory containing recommended CoreTemplate, CellTemplate, and ShardTemplate specs as individual files. |
| `default-templates/` | A directory containing the individual default template files (`cell.yaml`, `core.yaml`, `shard.yaml`). |
| `external-etcd.yaml` | Demonstrates connecting to an existing external etcd cluster instead of deploying one. |
| `no-templates.yaml` | A verbose example where all configuration is defined inline (no templates used). |
## minimal.yaml: Operator Defaults

This sample demonstrates the Operator Hardcoded Defaults. You only need to define the physical topology (Cells); the operator fills in the rest.
Input:

```yaml
apiVersion: multigres.com/v1alpha1
kind: MultigresCluster
metadata:
  name: minimal-cluster
spec:
  cells:
    - name: "zone-a"
      zone: "us-east-1a"
```

What happens: The operator's Webhook and Controller inject standard defaults for all missing components:
- GlobalTopoServer: Created with 3 replicas (etcd).
- MultiAdmin: Created with 1 replica.
- Database: A `postgres` database is created.
- TableGroup: A `default` tablegroup is created.
- Shard: A default shard `0-inf` is created.
- Pools: A default read-write pool is created in `zone-a`.
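After defaulting, the stored spec looks roughly like the sketch below. Only `spec.globalTopoServer` and the replica counts are stated elsewhere in this document; the remaining field names are assumptions for illustration.

```yaml
spec:
  globalTopoServer:
    etcd:
      replicas: 3   # injected operator default
  multiAdmin:       # assumed field name
    replicas: 1     # injected operator default
  cells:
    - name: "zone-a"
      zone: "us-east-1a"
  # Plus a 'postgres' database containing a 'default' tablegroup,
  # a single '0-inf' shard, and a read-write pool in zone-a.
```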
## templated-cluster.yaml: The Template System

This sample demonstrates the Template System. Instead of defining specs inline, you point to reusable templates.
Prerequisite: Apply the templates first:

```sh
kubectl apply -f config/samples/templates/
```

Input:

```yaml
spec:
  templateDefaults:
    coreTemplate: "standard-core"   # Defines GlobalTopoServer & MultiAdmin
    cellTemplate: "standard-cell"   # Defines MultiGateway & LocalTopoServer
    shardTemplate: "standard-shard" # Defines MultiOrch & Pools
  cells:
    - name: "us-east-1a"
      zone: "us-east-1a" # Inherits 'standard-cell' config
    - name: "us-east-1b"
      zone: "us-east-1b" # Inherits 'standard-cell' config
```

What happens: Every cell and shard created by this cluster inherits the configuration defined in the referenced templates. This allows you to manage configuration centrally.
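For reference, a CellTemplate such as `standard-cell` might look like the sketch below. The kind name comes from the templates directory description above, but the spec fields shown are assumptions; see `config/samples/templates/` for the real definitions.

```yaml
apiVersion: multigres.com/v1alpha1
kind: CellTemplate
metadata:
  name: standard-cell
spec:
  multigateway:
    replicas: 2   # illustrative value
  localTopoServer:
    etcd:
      replicas: 3 # illustrative value; field names are assumptions
```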
## overrides.yaml: Templates + Overrides

This sample shows the interaction between Templates and Overrides. You can use a template as a base and then patch specific fields for a particular Cell or Shard.
Input Snippet:
```yaml
spec:
  cells:
    - name: "zone-a"
      overrides:
        multigateway:
          replicas: 3 # <--- Overrides the template's default (e.g. 2)
```

Resulting Child Resource (Cell):
The operator creates a Cell resource where the specific field is updated, but other fields (like resources, affinities) are kept from the template.
```yaml
apiVersion: multigres.com/v1alpha1
kind: Cell
metadata:
  name: overrides-cluster-zone-a
spec:
  multigateway:
    replicas: 3    # <--- The override is applied
    resources: ... # <--- Preserved from standard-cell template
```

Shard Overrides: You can also override specific pools within a shard.
```yaml
shards:
  - name: "0"
    overrides:
      pools:
        "main-app":
          storage:
            size: "200Gi" # Override storage for this specific pool
```

## Configuration Precedence

The operator resolves configuration using a 4-level precedence chain:
1. Inline Spec / Overrides: Highest priority. Defined directly on the component.
2. Cluster Template Defaults: Defined in `spec.templateDefaults`.
3. Namespace Defaults: A template named `default` in the same namespace.
4. System Defaults: Hardcoded values (fallback).
This allows you to set "sane defaults" at the Namespace level (Level 3), override them for a specific Cluster (Level 2), and tweak them for a specific Cell/Shard (Level 1).
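To make the chain concrete, here is an illustrative resolution of `multigateway.replicas` (the replica values are examples, not actual defaults):

```yaml
spec:
  templateDefaults:
    cellTemplate: "standard-cell"  # Level 2: suppose this sets multigateway.replicas: 2
  cells:
    - name: "zone-a"
      overrides:
        multigateway:
          replicas: 3              # Level 1: wins for zone-a
    - name: "zone-b"               # No override: zone-b resolves to 2 from the template
```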
## Pool Replica Requirements

- Minimum: `replicasPerCell` must be at least 1 (CRD-enforced). Setting it to 0 is rejected.
- Recommended for `readWrite` pools: at least 3 replicas per cell. The `ANY_2` durability policy requires 1 primary + 2 synchronous standbys, so that one standby can be drained during rolling upgrades while the other maintains write quorum. The operator issues an admission warning if a `readWrite` pool has fewer than 3 replicas.
- `readOnly` pools: no quorum requirement, so any value >= 1 is fine.
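Under these rules, a `readWrite` pool that satisfies the recommendation might be declared as follows. The pool layout mirrors the overrides example above; the `type` field name is an assumption.

```yaml
pools:
  "main-app":
    type: readWrite     # assumed field name
    replicasPerCell: 3  # 1 primary + 2 synchronous standbys (ANY_2)
```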
## Scenario: Templates Created First

If you create your Default Templates before deploying a minimal cluster, the cluster will automatically pick them up.
1. Create Templates: `kubectl apply -f config/samples/default-templates`
2. Create Cluster: `kubectl apply -f config/samples/minimal.yaml`
Result:
The Operator validates the cluster. It sees you have no inline spec, but it finds the templates named `default` in the namespace and links them automatically.
```yaml
spec:
  templateDefaults:
    cellTemplate: default
    coreTemplate: default
    shardTemplate: default
```

The cluster is now "bound" to your templates. If you update the default CoreTemplate, the cluster will roll out the changes (unless you have specific inline overrides).
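You can confirm the binding on the stored object (the resource name `multigrescluster` is assumed from the kind `MultigresCluster`):

```sh
kubectl get multigrescluster minimal-cluster \
  -o jsonpath='{.spec.templateDefaults}'
```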
## Scenario: Cluster Created First

If you create a minimal cluster before the templates exist, the operator is forced to inject Operator Defaults directly into the inline spec. This creates a divergence that persists until you intervene.
1. Create Cluster: `kubectl apply -f config/samples/minimal.yaml`
   - Result: The webhook sees no templates. It injects operator defaults (e.g., GlobalTopoServer `replicas: 3`) directly into `spec.globalTopoServer`.
2. Create Templates: `kubectl apply -f config/samples/default-templates` (assume these templates define `replicas: 5`)
3. Re-Apply Cluster: `kubectl apply -f config/samples/minimal.yaml`
Result:
The webhook detects the new templates and updates spec.templateDefaults to point to them. HOWEVER, the cluster does not switch to 5 replicas.
Why? The Inline Spec (Level 1) always takes precedence.
- In Step 1, we saved `replicas: 3` into the cluster's manifest.
- In Step 3, we linked the template (`replicas: 5`), but `replicas: 3` is still present in the inline spec.
- Therefore, `replicas: 3` wins.
Fix: To "re-bind" the cluster to the template, you must manually clear the inline fields that you want to inherit.
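As a sketch, a JSON patch can remove an inline block so the template value applies again. The exact path depends on which field you want to re-bind (here, `spec.globalTopoServer` from the example above); the resource name `multigrescluster` is assumed from the kind `MultigresCluster`.

```sh
# Remove the inline globalTopoServer block; the template's value then applies.
kubectl patch multigrescluster minimal-cluster --type=json \
  -p='[{"op": "remove", "path": "/spec/globalTopoServer"}]'
```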