# skewer.yaml
title: Sharing a PostgreSQL database across clusters
subtitle: |
This tutorial demonstrates how to share a PostgreSQL database across multiple Kubernetes clusters that are located in different public and private cloud providers.
overview: |
In this tutorial, you will create a Virtual Application Network that enables communication across the public and private clusters.
You will then deploy a PostgreSQL database instance to a private cluster and attach it to the Virtual Application Network.
This enables clients on different public clusters attached to the Virtual Application Network to transparently access the database
without any additional networking setup (no VPN or SDN required).
prerequisites: |
@default@
This demonstration depicts the operation of a PostgreSQL database in a private cluster and the ability to access the
database from clients resident on other public clusters. As an example, the cluster deployment might comprise:
* A private cloud cluster running on your local machine
* Two public cloud clusters running in public cloud providers
While the detailed steps are not included here, this demonstration can alternatively be performed with three separate namespaces on a single cluster.
sites:
public1:
title: Public 1 cluster
platform: kubernetes
namespace: public1
env:
KUBECONFIG: $PWD/kubeconfigs/public1.config
public2:
title: Public 2 cluster
platform: kubernetes
namespace: public2
env:
KUBECONFIG: $PWD/kubeconfigs/public2.config
private1:
title: Private 1 cluster
platform: kubernetes
namespace: private1
env:
KUBECONFIG: $PWD/kubeconfigs/private1.config
steps:
- standard: platform/access_your_kubernetes_clusters
- standard: platform/install_skupper_on_your_kubernetes_clusters
- standard: platform/create_your_kubernetes_namespaces
- title: Set up the demo
preamble: |
On your local machine, make a directory for this tutorial and clone the example repo:
commands:
public1:
- run: |
cd ~/
mkdir pg-demo
cd pg-demo
git clone -b v2 https://github.com/skupperproject/skupper-example-postgresql.git
- title: Create your sites
preamble: |
A Skupper _Site_ is a location where your application workloads
are running. Sites are linked together to form a network for your
application.
Use the `kubectl apply` command to declaratively create sites in the Kubernetes
namespaces. This deploys the Skupper router. Then use `kubectl get site` to see
the outcome.
**Note:** If you are using Minikube, you need to [start minikube
tunnel][minikube-tunnel] before you configure Skupper.
[minikube-tunnel]: https://skupper.io/start/minikube.html#running-minikube-tunnel
The **public1** site definition sets `linkAccess: default` because the other two sites, **public2** and **private1**,
will establish Skupper links to **public1**. This setting means that the **public1** site accepts incoming
Skupper links from other sites using the default ingress type for the target cluster (_route_ on OpenShift, _loadbalancer_ otherwise).
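For reference, a Site resource with link access enabled might look like the following. This is an illustrative sketch, not the repository's actual file; the `apiVersion` shown assumes the Skupper v2 CRDs, and `public1/site.yaml` in the example repo is authoritative.

```yaml
# Hypothetical sketch of a Site that accepts incoming links;
# see the repository's public1/site.yaml for the real definition.
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: public1
spec:
  linkAccess: default
```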
commands:
"public1":
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/site.yaml
"public2":
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/site.yaml
"private1":
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/site.yaml
- title: Link your sites
preamble: |
A Skupper _link_ is a channel for communication between two sites.
Links serve as a transport for application connections and
requests.
Creating an AccessToken first requires an AccessGrant in the target namespace (**public1**). We then read the
AccessGrant's status to generate an AccessToken and apply it to the other clusters (**public2** and **private1**)
using `kubectl apply`.
**Note:** The link token is truly a *secret*. Anyone who has the
token can link to your site. Make sure that only those you trust
have access to it.
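An AccessGrant can limit how many times and for how long its tokens may be redeemed. The sketch below is illustrative only; the field values are assumptions, and the repository's `accessgrant.yaml` is authoritative.

```yaml
# Hypothetical AccessGrant sketch; the values shown are assumptions.
apiVersion: skupper.io/v2alpha1
kind: AccessGrant
metadata:
  name: public1-grant
spec:
  redemptionsAllowed: 2   # public2 and private1 each redeem the token once
  expirationWindow: 15m
```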
commands:
"public1":
- run: kubectl wait --for=condition=ready site/public1 --timeout 300s
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/accessgrant.yaml
- run: kubectl wait --for=condition=ready accessgrant/public1-grant --timeout 300s
- run: |
kubectl get accessgrant public1-grant -o go-template-file=~/pg-demo/skupper-example-postgresql/kubernetes/token.template > ~/public1.token
"public2":
- run: kubectl apply -f ~/public1.token
output: |
accesstoken.skupper.io/token-public1-grant created
"private1":
- run: kubectl apply -f ~/public1.token
output: |
accesstoken.skupper.io/token-public1-grant created
postamble: |
If your terminal sessions are on different machines, you may need
to use `scp` or a similar tool to transfer the token securely. By
default, tokens expire after a single use or 15 minutes after
being issued.
- title: Deploy the PostgreSQL service
preamble: |
After creating the Virtual Application Network, deploy the PostgreSQL service.
The **private1** cluster will be used to deploy the PostgreSQL server and the **public1** and **public2** clusters
will be used to enable client communications to the server on the **private1** cluster.
commands:
private1:
- run: |
kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/deployment-postgresql-svc.yaml
output: |
secret/postgresql created
deployment.apps/postgresql created
- await_resource: deployment/postgresql
apply: test
- title: Expose PostgreSQL on the Virtual Application Network
preamble: |
Now that PostgreSQL is running in the **private1** cluster, we need to expose it on your Virtual Application Network (VAN).
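A Connector binds local workload pods to a routing key on the VAN. The following is a minimal illustrative sketch; the pod selector and routing key shown are assumptions, and the repository's `connector.yaml` is authoritative.

```yaml
# Hypothetical Connector sketch; selector and routing key are assumptions.
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: postgresql
spec:
  routingKey: postgresql
  selector: app=postgresql
  port: 5432
```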
commands:
private1:
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/connector.yaml
output: |
connector.skupper.io/postgresql created
- title: Making the PostgreSQL database accessible to the public sites
preamble: |
To make the PostgreSQL database accessible to the **public1** and **public2** sites, we need to define a `Listener`
on each site. Each Listener produces a Kubernetes service on its cluster, connecting clients with the database running on the **private1** cluster.
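A Listener is the counterpart of a Connector: it creates a local service whose traffic is forwarded over the VAN to any site advertising a matching routing key. A minimal illustrative sketch (the repository's `listener.yaml` files are authoritative):

```yaml
# Hypothetical Listener sketch; field values mirror the Connector's routing key.
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: postgresql
spec:
  routingKey: postgresql
  host: postgresql
  port: 5432
```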
commands:
public1:
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/listener.yaml
output: |
listener.skupper.io/postgresql created
- run: kubectl wait --for=condition=ready listener/postgresql --timeout 300s
public2:
- run: kubectl apply -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/listener.yaml
output: |
listener.skupper.io/postgresql created
- run: kubectl wait --for=condition=ready listener/postgresql --timeout 300s
- title: Create pod with PostgreSQL client utilities
preamble: |
Create a pod named `pg-shell` on each of the public clusters. This pod will be used to
communicate with the PostgreSQL database from **public1** and **public2** clusters.
commands:
public1:
- run: |
kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity
output: pod/pg-shell created
- run: kubectl wait --for condition=ready --timeout 300s pod/pg-shell
apply: test
public2:
- run: |
kubectl run pg-shell --image quay.io/skupper/simple-pg \
--env="PGUSER=postgres" \
--env="PGPASSWORD=skupper" \
--env="PGHOST=postgresql" \
--command sleep infinity
output: pod/pg-shell created
- run: kubectl wait --for condition=ready --timeout 300s pod/pg-shell
apply: test
- title: Create a database, a table and insert values
preamble: |
Now that we can access the PostgreSQL database from both public sites, let's create a database called **markets**,
then create a table named **product** and load it with some data.
commands:
public1:
- run: |
kubectl exec pg-shell -- createdb -e markets
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/table.sql
kubectl exec -i pg-shell -- psql -d markets < ~/pg-demo/skupper-example-postgresql/sql/data.sql
output: |
SELECT pg_catalog.set_config('search_path', '', false);
CREATE DATABASE markets;
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
- title: Access the product table from any site
preamble: |
Now that data has been added, read it back from both the **public1** and **public2** sites.
commands:
public1:
- run: echo "SELECT * FROM product;" | kubectl exec -i pg-shell -- psql -d markets
public2:
- run: echo "SELECT * FROM product;" | kubectl exec -i pg-shell -- psql -d markets
- title: Cleaning up
name: cleaning_up
preamble: |
Restore your cluster environment by removing the resources created in the demonstration. On each cluster, delete the
demo resources and the Virtual Application Network.
numbered: false
commands:
public1:
- run: kubectl delete pod pg-shell --now
- run: kubectl delete -f ~/pg-demo/skupper-example-postgresql/kubernetes/public1/
public2:
- run: kubectl delete pod pg-shell --now
- run: kubectl delete -f ~/public1.token -f ~/pg-demo/skupper-example-postgresql/kubernetes/public2/
private1:
- run: kubectl delete -f ~/public1.token -f ~/pg-demo/skupper-example-postgresql/kubernetes/private1/
summary: |
Through this example, we demonstrated how Skupper enables secure access to a PostgreSQL database hosted in a
private Kubernetes cluster, without exposing it to the public internet.
By deploying Skupper in each namespace, we established a **Virtual Application Network** (VAN), which allowed
the PostgreSQL service to be securely shared across clusters. The database was made available exclusively within
the VAN, enabling applications in the **public1** and **public2** clusters to access it seamlessly, as if it were running
locally in their own namespaces.
This approach not only simplifies multi-cluster communication but also preserves strict network boundaries,
eliminating the need for complex VPNs or firewall changes.