K3s Quick Start Guide: Building a K3s Cluster in a Multi-Cloud Environment

K3s is a lightweight Kubernetes distribution; a server node needs as little as 512 MB of memory to run.

Nodes under different accounts, or even different cloud providers, do not share a private network. We therefore need container networking that works across the public internet, so that pods and services on any node can reach pods and services on any other node, giving the same experience as an ordinary Kubernetes cluster.

This guide draws on the official Quick Start Guide and the Embedded K3s Multi-Cloud Solution (see References) and reorganizes them into a single walkthrough.

Goal: build a K3s cluster in a hybrid cloud environment (a Tencent Cloud server, an Oracle Cloud server, and a Microsoft Azure server)

Server Installation#

# Local network solution
curl -sfL https://get.k3s.io | sh -
# Multi-cloud installation solution
curl -sfL https://get.k3s.io | sh -s - --node-external-ip=<Server Public IP> --flannel-backend=wireguard-native --flannel-external-ip

For users in China, you can use the following method to accelerate the installation:

# Local network solution
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
# Multi-cloud installation solution
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -s - --node-external-ip=<Server Public IP> --flannel-backend=wireguard-native --flannel-external-ip

After running this installation:

- The K3s service will be configured to restart automatically after a node reboot or if the process crashes or is killed.
- Additional utilities will be installed, including `kubectl`, `crictl`, `ctr`, `k3s-killall.sh`, and `k3s-uninstall.sh`.
- The kubeconfig file will be written to `/etc/rancher/k3s/k3s.yaml`, and the kubectl installed by K3s will automatically use it.
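
For example, on the server node you can quickly confirm that the install succeeded (a minimal check, assuming a systemd-based distribution such as the Debian hosts used here):

# Check that the k3s systemd service is active
sudo systemctl status k3s
# The bundled kubectl reads /etc/rancher/k3s/k3s.yaml automatically
sudo k3s kubectl get node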

Installing Additional Agent Nodes#

To install additional agent nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables.

# Local network solution
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
# Multi-cloud installation solution
curl -sfL https://get.k3s.io | K3S_URL=https://<Server Public IP>:6443 K3S_TOKEN=mynodetoken sh -s - --node-external-ip=<Agent Public IP>
# Local network solution
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
# Multi-cloud installation solution
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn K3S_URL=https://<Server Public IP>:6443 K3S_TOKEN=mynodetoken sh -s - --node-external-ip=<Agent Public IP>

Note:
The K3S_URL parameter will cause the installer to configure K3s as an agent instead of a server. The K3s agent will register with the K3s server listening on the specified URL. The value used for K3S_TOKEN is stored in /var/lib/rancher/k3s/server/node-token on the server node.
Each host must have a unique hostname. If your machines do not have unique hostnames, pass the K3S_NODE_NAME environment variable with a valid, unique hostname for each node.
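
For example, the token can be read on the server and a unique node name supplied when joining an agent (a sketch; `<Server Public IP>`, `<Agent Public IP>`, and the node name agent-oracle-1 are placeholders):

# On the server: print the join token used as K3S_TOKEN
sudo cat /var/lib/rancher/k3s/server/node-token
# On an agent whose hostname clashes with another node: pass an explicit, unique name
curl -sfL https://get.k3s.io | K3S_URL=https://<Server Public IP>:6443 K3S_TOKEN=mynodetoken \
  K3S_NODE_NAME=agent-oracle-1 sh -s - --node-external-ip=<Agent Public IP>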

Accessing the K3s Cluster Locally#

Install kubectl on your local machine (here via Homebrew on macOS):

brew install kubectl

Copy the contents of /etc/rancher/k3s/k3s.yaml from the server into the ~/.kube/config file on your local machine, then change the `server` field in that file from `https://127.0.0.1:6443` to the server's public IP.

For example, with scp (where `server` is your SSH host alias for the K3s server):

scp server:/etc/rancher/k3s/k3s.yaml ~/.kube/config
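
A minimal sketch of rewriting the server address and verifying access (`<Server Public IP>` is a placeholder; on macOS/BSD sed use `sed -i ''` instead of `sed -i`):

# Point the local kubeconfig at the server's public address
sed -i 's/127.0.0.1/<Server Public IP>/' ~/.kube/config
# Verify access from the local machine
kubectl get node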

Test Commands#

View node status:

$ kubectl get node
NAME             STATUS   ROLES                  AGE   VERSION
vm-4-10-debian   Ready    <none>                 35m   v1.27.6+k3s1
vm-4-9-debian    Ready    control-plane,master   39m   v1.27.6+k3s1

Check cross-network communication:

$ kubectl get pod -A -o wide
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE   IP          NODE            NOMINATED NODE   READINESS GATES
kube-system   local-path-provisioner-957fdf8bc-gcgj4   1/1     Running     0          38m   10.42.0.5   vm-4-9-debian   <none>           <none>
kube-system   coredns-77ccd57875-vsxmt                 1/1     Running     0          38m   10.42.0.6   vm-4-9-debian   <none>           <none>
kube-system   helm-install-traefik-crd-sv9jh           0/1     Completed   0          38m   10.42.0.4   vm-4-9-debian   <none>           <none>
kube-system   metrics-server-5f8b4ffd8-zd4db           1/1     Running     0          38m   10.42.0.3   vm-4-9-debian   <none>           <none>
kube-system   helm-install-traefik-jp8sk               0/1     Completed   2          38m   10.42.0.2   vm-4-9-debian   <none>           <none>
kube-system   svclb-traefik-0782c5d1-wr5kd             2/2     Running     0          37m   10.42.0.7   vm-4-9-debian   <none>           <none>
kube-system   traefik-64f55bb67d-4lr2g                 1/1     Running     0          37m   10.42.0.8   vm-4-9-debian   <none>           <none>
kube-system   svclb-traefik-0782c5d1-444jv             2/2     Running     0          34m   10.42.1.2   vm-4-10-debian  <none>           <none>
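
To confirm that traffic really crosses the WireGuard overlay between clouds, you can ping a pod IP on the other node from a throwaway pod (a sketch; 10.42.1.2 is the pod on vm-4-10-debian in the output above, and the test pod may be scheduled on either node):

# Start a temporary busybox pod and ping a pod IP on the other node
$ kubectl run pingtest --rm -it --image=busybox --restart=Never -- ping -c 3 10.42.1.2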

View node resource usage:

$ kubectl top node
NAME             CPU(cores)   CPU%        MEMORY(bytes)   MEMORY%
vm-4-9-debian    24m          2%          1369Mi          69%
vm-4-10-debian   <unknown>    <unknown>   <unknown>       <unknown>
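
The `<unknown>` values simply mean metrics-server has not collected data from the newly joined agent yet; they usually fill in after a minute or two. If they persist, check that TCP 10250 (kubelet metrics, see the inbound rules below) is reachable between the nodes, for example (assuming netcat is installed; `<Agent Public IP>` is a placeholder):

# From the server node: verify the agent's kubelet metrics port is reachable
nc -vz <Agent Public IP> 10250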

View POD resource usage:

$ kubectl top pod -A
NAMESPACE     NAME                                     CPU(cores)   MEMORY(bytes)   
kube-system   coredns-77ccd57875-vsxmt                 1m           20Mi            
kube-system   local-path-provisioner-957fdf8bc-gcgj4   1m           14Mi            
kube-system   metrics-server-5f8b4ffd8-zd4db           3m           24Mi            
kube-system   svclb-traefik-0782c5d1-wr5kd             0m           0Mi             
kube-system   traefik-64f55bb67d-4lr2g                 1m           33Mi

At this point, the K3s cluster deployment is complete. If you have more hosts, you can repeat the agent configuration steps to add them.
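
As a final end-to-end check, you can run a small workload that spans both nodes and reach it through a Service (a minimal sketch; the nginx deployment, its name, and the test pod are only examples):

# Run two nginx replicas; the scheduler will usually spread them across the nodes
kubectl create deployment nginx --image=nginx --replicas=2
# Expose them as a ClusterIP service and check where the pods landed
kubectl expose deployment nginx --port=80
kubectl get pod -o wide -l app=nginx
# Query the service from a throwaway pod to confirm cross-node service routing
kubectl run curltest --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx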

Inbound Rules for K3s Server Nodes#

| Protocol | Port | Source | Destination | Description |
| --- | --- | --- | --- | --- |
| TCP | 2379-2380 | Servers | Servers | Only required for HA with embedded etcd |
| TCP | 6443 | Agents | Servers | K3s supervisor and Kubernetes API Server |
| UDP | 8472 | All nodes | All nodes | Only required for Flannel VXLAN |
| TCP | 10250 | All nodes | All nodes | Kubelet metrics |
| UDP | 51820 | All nodes | All nodes | Only required for Flannel WireGuard with IPv4 |
| UDP | 51821 | All nodes | All nodes | Only required for Flannel WireGuard with IPv6 |

All outbound traffic is usually allowed.
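
On the cloud side these ports are opened in each provider's security group or firewall console. On the nodes themselves, if a host firewall such as ufw is in use (an assumption; the commands below are only a sketch for this WireGuard-based setup), the relevant rules would look like:

# Kubernetes API server and K3s supervisor (agents -> servers)
sudo ufw allow 6443/tcp
# Kubelet metrics (all nodes)
sudo ufw allow 10250/tcp
# Flannel WireGuard backend used by this multi-cloud setup (all nodes)
sudo ufw allow 51820/udp
sudo ufw allow 51821/udp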

References:#

Quick Start Guide

Embedded K3s Multi-Cloud Solution
