This post documents the steps I took to install Kong on RKE2.
RKE2 Installation
RKE2 Server
```
root@ip-10-0-25-27:~# curl -sfL https://get.rke2.io | sh -
[INFO] finding release for channel stable
[INFO] using v1.30.5+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@ip-10-0-25-27:~# systemctl enable rke2-server.service --now
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/local/lib/systemd/system/rke2-server.service.
root@ip-10-0-25-27:~#
```
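Before moving on, it can be worth a quick check that the server unit actually came up (my own habit, not part of the original install output):

```bash
# Confirm the rke2-server unit is active; follow its logs if it is not
systemctl status rke2-server.service
journalctl -u rke2-server.service -f
```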
Post-install steps:
```bash
# Token file used by RKE2 agents to join the cluster
cat /var/lib/rancher/rke2/server/node-token
# kubectl is bundled here; put it on the PATH
cp /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/
# kubeconfig location
mkdir .kube
cp /etc/rancher/rke2/rke2.yaml .kube/config
```
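One small follow-up I usually add (my own habit, not from the original notes): the copied kubeconfig holds the cluster admin credentials, so tighten its permissions and confirm kubectl can reach the API server.

```bash
# The kubeconfig carries admin credentials; keep it private
chmod 600 ~/.kube/config

# Sanity check: kubectl should now talk to the new cluster
kubectl get node
```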
RKE2 Agent
This needs to be installed on the other nodes.
```
root@ip-10-0-25-118:~# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" sh -
[INFO] finding release for channel stable
[INFO] using v1.30.5+rke2r1 as release
[INFO] downloading checksums at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/sha256sum-amd64.txt
[INFO] downloading tarball at https://github.com/rancher/rke2/releases/download/v1.30.5+rke2r1/rke2.linux-amd64.tar.gz
[INFO] verifying tarball
[INFO] unpacking tarball file to /usr/local
root@ip-10-0-25-118:~# systemctl enable rke2-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/local/lib/systemd/system/rke2-agent.service.
```
Before starting the service, configure /etc/rancher/rke2/config.yaml:
```yaml
server: https://<server address>:9345
token: <token>
```
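For reference, a minimal sketch of creating that file on the agent node (assuming the server's private address is 10.0.25.27, i.e. the control-plane node above; the token is the contents of /var/lib/rancher/rke2/server/node-token on the server):

```bash
# The directory does not exist yet on a fresh agent node
mkdir -p /etc/rancher/rke2

# Point the agent at the server and supply the join token
cat > /etc/rancher/rke2/config.yaml <<'EOF'
server: https://10.0.25.27:9345
token: <paste node-token here>
EOF
```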
Then start the agent:
```
root@ip-10-0-25-118:~# systemctl start rke2-agent.service
```
Check the cluster state with kubectl:
```
root@ip-10-0-25-27:~# kubectl get node
NAME             STATUS   ROLES                       AGE     VERSION
ip-10-0-25-27    Ready    control-plane,etcd,master   2m46s   v1.30.5+rke2r1
ip-10-0-25-118   Ready    <none>                      79s     v1.30.5+rke2r1
```
Kong Installation
You can mostly follow the official documentation below as-is, but a few points need attention:
https://docs.konghq.com/gateway/latest/install/kubernetes/proxy/
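As a rough outline of what that guide walks through (a sketch only; the certificates, secrets, and exact values files all come from the linked documentation, and the release names kong-cp / kong-dp simply match the resource names seen below):

```bash
# Add the official Kong chart repository
helm repo add kong https://charts.konghq.com
helm repo update

# Namespace shared by the control plane and data plane
kubectl create namespace kong

# Install the two planes as separate releases, each with its own values file
# prepared as described in the documentation
helm install kong-cp kong/kong -n kong --values ./values-cp.yaml
helm install kong-dp kong/kong -n kong --values ./values-dp.yaml
```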
Specifying a StorageClass
If you install the PostgreSQL Pod alongside Kong, it can get stuck in Pending and the installation never progresses.
```
root@ip-10-0-25-27:~# kubectl get pod -n kong
NAME                                 READY   STATUS     RESTARTS   AGE
kong-cp-kong-7f9bf4f756-s98p4        0/1     Init:1/2   0          45s
kong-cp-kong-init-migrations-mglgs   0/1     Init:0/1   0          45s
kong-cp-postgresql-0                 0/1     Pending    0          45s
```
The cause is that the PVC never gets bound, because nothing can provision a volume for it.
```
root@ip-10-0-25-27:~# kubectl describe pod kong-cp-postgresql-0 -n kong
Name:             kong-cp-postgresql-0
Namespace:        kong
<...>
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  64s   default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
root@ip-10-0-25-27:~# kubectl get pvc -n kong
NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
data-kong-cp-postgresql-0   Pending                                                     <unset>                 2m2s
```
Installing a StorageClass fixes this. Other options would work too; here I use the one provided by Rancher:
https://github.com/rancher/local-path-provisioner
```bash
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
```
Once it is installed, edit the PVC with the following command and add storageClassName: local-path under spec.
```bash
kubectl edit pvc data-kong-cp-postgresql-0 -n kong
```
The PVC then binds normally and the Kong Control Plane installation completes.
```
root@ip-10-0-25-27:~# kubectl -n local-path-storage get pod
NAME                                     READY   STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1     Running   0          7m
root@ip-10-0-25-27:~# kubectl get pod -n kong
NAME                                 READY   STATUS      RESTARTS   AGE
kong-cp-postgresql-0                 1/1     Running     0          5m15s
kong-cp-kong-init-migrations-mglgs   0/1     Completed   0          5m38s
kong-cp-kong-7f9bf4f756-s98p4        1/1     Running     0          5m50s
```
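As an aside, an alternative to editing the PVC by hand (taken from the local-path-provisioner README, not what I did here) is to mark local-path as the cluster's default StorageClass, so that PVCs created without an explicit storageClassName pick it up automatically:

```bash
# Make local-path the default StorageClass for the cluster
kubectl patch storageclass local-path -p \
  '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```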
Data Plane Access
On vanilla RKE2, which does not bundle a LoadBalancer controller, the EXTERNAL-IP of the DP's kong-proxy Service stays pending forever.
```
root@ip-10-0-25-27:~# kubectl get svc kong-dp-kong-proxy -n kong
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-dp-kong-proxy   LoadBalancer   10.43.148.174   <pending>     80:30153/TCP,443:30749/TCP   57m
```
A LoadBalancer Service still behaves much like a NodePort, though: you can reach the proxy directly at the address of the node running the DP, using the NodePort shown on the right-hand side of the port mapping.
```
root@ip-10-0-25-27:~# curl http://localhost:30153
{
  "message":"no Route matched with those values",
  "request_id":"7cfd8d2697b06377501cf2385fac4076"
}
```
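The same check from outside the cluster, looking up the NodePort instead of hard-coding it (10.0.25.118 stands in for whichever node address is reachable from the client):

```bash
# Look up the NodePort mapped to the proxy's HTTP port (80)
PROXY_PORT=$(kubectl get svc kong-dp-kong-proxy -n kong \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}')

# Hit the proxy via a node address instead of localhost
curl http://10.0.25.118:${PROXY_PORT}
```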
That is all for this personal memo.