This article is a memo on the steps to install Kong on RKE2.
Installing RKE2#
RKE2 Server#
root@ip-10-0-25-27:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.30.5+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@ip-10-0-25-27:~# systemctl enable rke2-server.service --now
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/local/lib/systemd/system/rke2-server.service.
root@ip-10-0-25-27:~#
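If you want to confirm the server came up before moving on, the usual systemd checks work; nothing RKE2-specific here, just a quick sketch:

# Confirm the rke2-server unit is active, then follow its logs while it bootstraps
systemctl status rke2-server.service
journalctl -u rke2-server.service -f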
Post-installation steps
# Token file needed by the RKE2 agent
cat /var/lib/rancher/rke2/server/node-token
# Copy the kubectl binary that is installed alongside RKE2
cp /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/
# Copy the generated kubeconfig
mkdir .kube
cp /etc/rancher/rke2/rke2.yaml .kube/config
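With the kubeconfig copied, kubectl on the server node can talk to the cluster right away. A quick sanity check (pointing the KUBECONFIG environment variable at the generated file works just as well as copying it):

# Using the copied config...
kubectl get node
# ...or pointing kubectl at the original file directly
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get node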
RKE2 Agent#
The agent needs to be installed on a node other than the server.
root@ip-10-0-25-118:~# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
[INFO] finding release for channel stable
[INFO] using v1.30.5+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@ip-10-0-25-118:~# systemctl enable rke2-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/local/lib/systemd/system/rke2-agent.service.
Before starting the service, configure /etc/rancher/rke2/config.yaml:
server: https://<server address>:9345
token: <token>
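As a concrete sketch for this walkthrough: the server address is the control-plane node's IP (10.0.25.27 in the transcripts above) on port 9345, and the token is the value read earlier from /var/lib/rancher/rke2/server/node-token. The token below is a made-up placeholder, not a real value:

# /etc/rancher/rke2/config.yaml on the agent node (token is a placeholder)
server: https://10.0.25.27:9345
token: K10xxxxxxxxxxxxxxxxxxxxxxxxxxxx::server:xxxxxxxxxxxx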
Then start the agent:
root@ip-10-0-25-118:~# systemctl start rke2-agent.service
Check the cluster status with kubectl from the server node:
root@ip-10-0-25-27:~# kubectl get node
NAME             STATUS   ROLES                       AGE     VERSION
ip-10-0-25-27    Ready    control-plane,etcd,master   2m46s   v1.30.5+rke2r1
ip-10-0-25-118   Ready    <none>                      79s     v1.30.5+rke2r1
Installing Kong#
Basically, you can follow the instructions at the following URL, but there are a few points to be aware of:
https://docs.konghq.com/gateway/latest/install/kubernetes/proxy/
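For reference, the install itself is Helm-based. A rough sketch of its shape, assuming the hybrid-mode (control plane / data plane) setup from the Kong docs, with release names kong-cp and kong-dp (which match the resource names seen below) and values files taken from that page; the values file names here are just placeholders:

helm repo add kong https://charts.konghq.com
helm repo update
# Control plane (includes the bundled PostgreSQL discussed below)
helm install kong-cp kong/kong -n kong --create-namespace --values ./values-cp.yaml
# Data plane
helm install kong-dp kong/kong -n kong --values ./values-dp.yaml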
Specifying a StorageClass#
If you install the bundled PostgreSQL pod along with Kong, it can get stuck in the Pending state and the installation will not proceed.
root@ip-10-0-25-27:~# kubectl get pod -n kong
NAME                                 READY   STATUS     RESTARTS   AGE
kong-cp-kong-7f9bf4f756-s98p4        0/1     Init:1/2   0          45s
kong-cp-kong-init-migrations-mglgs   0/1     Init:0/1   0          45s
kong-cp-postgresql-0                 0/1     Pending    0          45s
The reason is that the PVC cannot be bound: with no StorageClass in the cluster, no PersistentVolume is provisioned for it.
root@ip-10-0-25-27:~# kubectl describe pod kong-cp-postgresql-0 -n kong
Name:         kong-cp-postgresql-0
Namespace:    kong
<...>
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  64s   default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
root@ip-10-0-25-27:~# kubectl get pvc -n kong
NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-kong-cp-postgresql-0   Pending                                                     <unset>                 2m2s
Installing a StorageClass provisioner solves this. Any provisioner works, but Rancher provides the following local-path-provisioner, which is what I used:
https://github.com/rancher/local-path-provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
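A quick check that the provisioner and its StorageClass are in place (the StorageClass created by that manifest is named local-path):

kubectl -n local-path-storage get pod
kubectl get storageclass local-path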
After installation, open the PVC with the following command and add storageClassName: local-path under spec.
kubectl edit pvc data-kong-cp-postgresql-0 -n kong
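As an alternative to editing the PVC by hand (a sketch, not what I did in this walkthrough): if the provisioner is installed before Kong, marking local-path as the default StorageClass lets PVCs that omit storageClassName pick it up automatically. The annotation is the standard Kubernetes default-class annotation, also shown in the local-path-provisioner README:

# Mark local-path as the cluster's default StorageClass
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'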
With this, the PVC will be provisioned successfully, and the Kong Control Plane installation will complete without issues.
root@ip-10-0-25-27:~# kubectl -n local-path-storage get pod
NAME                                     READY   STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1     Running   0          7m
root@ip-10-0-25-27:~# kubectl get pod -n kong
NAME                                 READY   STATUS      RESTARTS   AGE
kong-cp-postgresql-0                 1/1     Running     0          5m15s
kong-cp-kong-init-migrations-mglgs   0/1     Completed   0          5m38s
kong-cp-kong-7f9bf4f756-s98p4        1/1     Running     0          5m50s
Accessing the Data Plane#
On a vanilla RKE2 cluster, the EXTERNAL-IP of the DP's kong-proxy Service will remain in the pending state.
root@ip-10-0-25-27:~# kubectl get svc kong-dp-kong-proxy -n kong
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-dp-kong-proxy   LoadBalancer   10.43.148.174   <pending>     80:30153/TCP,443:30749/TCP   57m
Without a load balancer controller in the cluster, a LoadBalancer Service still behaves like a NodePort Service, so you can reach the proxy at the address of the node where the DP is deployed, using the NodePort shown on the right-hand side of PORT(S) (30153 for HTTP here).
root@ip-10-0-25-27:~# curl http://localhost:30153
{
  "message":"no Route matched with those values",
  "request_id":"7cfd8d2697b06377501cf2385fac4076"
}
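If you would rather not read the port out of the table, the NodePort can also be pulled out with a jsonpath query. This is plain kubectl, nothing Kong-specific; the agent node's IP from this walkthrough is used as an example target, though which node actually runs the DP may differ:

# Look up the NodePort backing port 80 of the data plane proxy Service
kubectl get svc kong-dp-kong-proxy -n kong \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
# Then hit it via a node address, e.g.:
curl http://10.0.25.118:30153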
That’s all for my personal notes.