Commit 6be655b

Added Portworx on Two-Node with Arbiter
1 parent 07fa0ee commit 6be655b

4 files changed

Lines changed: 210 additions & 0 deletions

File tree

content/README.md

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@ and benefit from the expertise shared in this repository.
 |Date|Headline|
 |---|---|
+|2026-03-19|[Added Portworx on Two-Node with Arbiter](storage/portworx-and-tna/)|
 |2026-03-19|[Added Agent-base installation example (proxy)](cluster-installation/agent-base-proxy/)|
 |2026-03-17|[Added KubeVirt CSI](kubevirt/kubevirt-csi-driver/)|
 |2026-02-12|[Update networking/egressip](networking/egress-ip/)|
Lines changed: 207 additions & 0 deletions
@@ -0,0 +1,207 @@
---
title: Portworx on Two-Node with Arbiter
linktitle: Portworx on Two-Node with Arbiter
description: Deploy Portworx on an OpenShift compact (two-node + arbiter) cluster with notes on disks, labels, and replica-2 storage.
tags: ['storage','portworx','TNA','v4.20']
icon: portworx/portworx
---

# Running Portworx on OpenShift Two-Node with Arbiter

**Two-Node with Arbiter (TNA)** is a compact OpenShift topology: two control-plane nodes that also run workloads, plus a third **arbiter** node that participates in etcd quorum and, on the storage side, typically hosts **witness / KVDB** services without holding application data replicas. Portworx can run in this layout if you size disks and node roles correctly and align replication with the number of storage nodes.

Official documentation: [Portworx on OpenShift](https://docs.portworx.com/install-portworx/openshift/) (use the version that matches your Portworx release).

**Tested with:**

| Component | Version   |
| --------- | --------- |
| OpenShift | v4.20.15  |
| Portworx  | v25.6.1   |

## Prerequisites

Use a **Two-Node with Arbiter** cluster. Example node list:

```shell
% oc get nodes
NAME              STATUS   ROLES                         AGE     VERSION
ocp21-arbiter-0   Ready    arbiter                       3d11h   v1.33.6
ocp21-cp-0        Ready    control-plane,master,worker   3d11h   v1.33.6
ocp21-cp-1        Ready    control-plane,master,worker   3d11h   v1.33.6
```

**Additional disks** (example sizing from a lab):

| Node | Additional disks |
| --------------- | ---------------- |
| ocp21-arbiter-0 | `/dev/vdb` — KVDB, min 32 GiB |
| ocp21-cp-0 | `/dev/vdb` — data (~256&nbsp;GiB)<br/>`/dev/vdc` — metadata, min 64&nbsp;GiB |
| ocp21-cp-1 | `/dev/vdb` — data (~256&nbsp;GiB)<br/>`/dev/vdc` — metadata, min 64&nbsp;GiB |
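Before installing, it is worth confirming that the extra disks are visible and empty on each node. One way to check from a debug shell (node and device names are from this lab; adjust to yours):

```shell
# List block devices on the node; vdb/vdc should show no
# partitions or filesystems, and ROTA should ideally be 0
oc debug node/ocp21-cp-0 -- chroot /host lsblk -o NAME,SIZE,TYPE,ROTA,FSTYPE
```

If `ROTA` reports `1` for virtio disks, see the appendix at the end of this page.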
**Node labels** (storage vs. storageless, plus permission to run on control-plane nodes):

| Node | Labels |
| --------------- | ------ |
| ocp21-arbiter-0 | `portworx.io/node-type=storageless` |
| ocp21-cp-0 | `portworx.io/node-type=storage`, `portworx.io/run-on-master=true` |
| ocp21-cp-1 | `portworx.io/node-type=storage`, `portworx.io/run-on-master=true` |
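The table above can be applied with `oc label` (node names are from this lab):

```shell
oc label node ocp21-arbiter-0 portworx.io/node-type=storageless
oc label node ocp21-cp-0 ocp21-cp-1 portworx.io/node-type=storage portworx.io/run-on-master=true
```

Verify with `oc get nodes --show-labels | grep portworx`.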
## Install the operator

Install the **Portworx Operator** from **OperatorHub** into the `openshift-operators` namespace (the default). Wait until the operator deployment is **Available**.

If your cluster uses a private registry or disconnected mirrors, follow Portworx’s air-gapped / mirror steps for that release; image references in `StorageCluster` must resolve in your environment.
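If you script the install instead of using the console, the OperatorHub subscription can be expressed as a manifest. The package, channel, and catalog names below are assumptions; confirm them with `oc get packagemanifests -n openshift-marketplace | grep -i portworx` before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: portworx-certified        # assumed package name; verify in your catalog
  namespace: openshift-operators
spec:
  channel: stable                 # assumed channel; verify in your catalog
  name: portworx-certified
  source: certified-operators     # assumed catalog source
  sourceNamespace: openshift-marketplace
```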

## Apply the StorageCluster

The manifest below maps **two storage nodes** to metadata + data devices and uses the **arbiter** for internal KVDB on a dedicated disk. `clusterDomain` values (`master1`, `master2`, `witness`) tie each block to the matching node; adjust names and devices to match your cluster.

```yaml
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  annotations:
    portworx.io/is-openshift: 'true'
    portworx.io/misc-args: '-rt_opts small_conf=1 -T px-storev2'
    portworx.io/disable-storage-class: "true"
  name: px-cluster
  namespace: portworx
spec:
  startPort: 17001
  runtimeOptions:
    default-io-profile: '6'
  stork:
    args:
      webhook-controller: 'true'
    enabled: true
  monitoring:
    prometheus:
      exportMetrics: true
    telemetry:
      enabled: true
    metricsCollector:
      enabled: true
  kvdb:
    enableTLS: false
    internal: true
  nodes:
    - clusterDomain: master1
      selector:
        nodeName: ocp21-cp-0
      storage:
        systemMetadataDevice: /dev/vdc
        useAll: true
    - clusterDomain: master2
      selector:
        nodeName: ocp21-cp-1
      storage:
        systemMetadataDevice: /dev/vdc
        useAll: true
    - clusterDomain: witness
      selector:
        nodeName: ocp21-arbiter-0
      storage:
        kvdbDevice: /dev/vdb
        useAll: false
  imagePullPolicy: Always
  secretsProvider: k8s
  version: 3.5.2.1
  csi:
    enabled: true
    installSnapshotController: true
  image: 'portworx/oci-monitor:3.5.2.1'
  storage:
    useAll: true
  updateStrategy:
    rollingUpdate:
      disruption:
        allow: true
      maxUnavailable: 1
    type: RollingUpdate
```
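A possible apply-and-verify sequence for the manifest above (the `portworx` namespace must exist first; `pxctl` runs inside a Portworx pod, and the path below is its usual install location):

```shell
oc create namespace portworx        # skip if it already exists
oc apply -f storagecluster.yaml
oc -n portworx get storagecluster px-cluster

# Once the Portworx pods are Running, check quorum and drive state
PX_POD=$(oc -n portworx get pods -l name=portworx -o jsonpath='{.items[0].metadata.name}')
oc -n portworx exec "$PX_POD" -- /opt/pwx/bin/pxctl status
```

Expect two storage nodes and one storageless node in the `pxctl status` output.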

For **lab** clusters where you want a clean teardown, you can add the following under `spec`:

```yaml
deleteStrategy:
  type: UninstallAndWipe
```

!!! warning

    `UninstallAndWipe` removes Portworx data from the disks allocated to Portworx when the `StorageCluster` is deleted. Use it only in non-production environments, or when you accept full data loss on those devices.

## Replica-2 storage class

With **two** data nodes, a **replica factor of 2** matches the topology: each volume is mirrored across both storage nodes. (Higher replication is not meaningful with only two storage members.)

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: portworx-replica-two
  annotations:
    storageclass.kubevirt.io/is-default-virt-class: 'true'
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: pxd.portworx.com
parameters:
  io_profile: db_remote
  repl: '2'
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```
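A small PVC is enough to exercise the class (name, namespace, and size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-replica-two-check   # arbitrary test name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: portworx-replica-two
```

With `volumeBindingMode: Immediate`, `oc get pvc px-replica-two-check` should report `Bound` shortly after creation.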

## Appendix: present virtual disks as non-rotational

Portworx expects **non-rotational** (SSD/NVMe) storage. Some lab hypervisors still report virtio disks as rotational, which can block or complicate the install. A common workaround is a **udev** rule that sets `queue/rotational` to `0` (and a suitable scheduler) for `vd*` devices.

Rule content:

```text
ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="deadline"
```
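The MachineConfigs below embed this rule as a base64 data URL. If you modify the rule, re-encode it yourself, for example with GNU coreutils:

```shell
# Encode the udev rule (with trailing newline) for the Ignition data URL
printf '%s\n' 'ACTION=="add|change", KERNEL=="vd[a-z]", ATTR{queue/rotational}="0", ATTR{queue/scheduler}="deadline"' | base64 -w0
```

The output should match the `source:` string used in the MachineConfigs that follow.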

Apply the rule per **MachineConfig** pool, for example to the arbiter and master roles:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: arbiter
  name: 99-arbiter-disk-rotational
spec:
  config:
    ignition:
      version: 3.5.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIEtFUk5FTD09InZkW2Etel0iLCBBVFRSe3F1ZXVlL3JvdGF0aW9uYWx9PSIwIiwgQVRUUntxdWV1ZS9zY2hlZHVsZXJ9PSJkZWFkbGluZSIK
          mode: 420
          overwrite: true
          path: /etc/udev/rules.d/99-disk-rotational.rules
```

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 99-master-disk-rotational
spec:
  config:
    ignition:
      version: 3.5.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIEtFUk5FTD09InZkW2Etel0iLCBBVFRSe3F1ZXVlL3JvdGF0aW9uYWx9PSIwIiwgQVRUUntxdWV1ZS9zY2hlZHVsZXJ9PSJkZWFkbGluZSIK
          mode: 420
          overwrite: true
          path: /etc/udev/rules.d/99-disk-rotational.rules
```

After the nodes reconcile, confirm with `cat /sys/block/vd*/queue/rotational` on each node; every Portworx disk should report `0`.

mkdocs.yml

Lines changed: 1 addition & 0 deletions
@@ -198,6 +198,7 @@ nav:
 - Storage:
   - storage/index.md
+  - Portworx and TNA: storage/portworx-and-tna/index.md
   - IBM Fusion Access SAN: storage/ibm-fusion-access-san/index.md
 - Troubleshooting: troubleshooting/index.md
Lines changed: 1 addition & 0 deletions