Deploying a k3s Cluster Across Multiple Servers Without a Shared Intranet

Using WireGuard to deploy k3s across nodes that do not share a local network

Environment Introduction#

Server Introduction#
| Server Name | Public IP Address | Internal IP Address | Virtual Network IP Address | Operating System |
| ----------- | ----------------- | ------------------- | -------------------------- | ---------------- |
| master | 42.xx.xx.12 | 10.0.16.8 | 192.168.1.1 | CentOS 7.6 64-bit |
| node1 | 122.xx.xxx.111 | 10.0.0.6 | 192.168.1.2 | CentOS 7.6 64-bit |
| node2 | 122.xx.xx.155 | 172.17.0.3 | 192.168.1.3 | CentOS 7.6 64-bit |
Pre-deployment Preparation#

Enable IP address forwarding on all nodes:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf 
echo "net.ipv4.conf.all.proxy_arp = 1" >> /etc/sysctl.conf

# Apply the settings and print the resulting values
sysctl -p /etc/sysctl.conf

If the echo commands fail, add the two lines to /etc/sysctl.conf manually with vi/vim.
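
To confirm the settings took effect, the two keys can also be read back directly (a quick sanity check, not a required step):

sysctl net.ipv4.ip_forward net.ipv4.conf.all.proxy_arp
# Expected output:
# net.ipv4.ip_forward = 1
# net.ipv4.conf.all.proxy_arp = 1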

Change the hostname on all nodes:

# Execute on master 
hostnamectl set-hostname k3s-master 

# Execute on node1 
hostnamectl set-hostname k3s-node1 

# Execute on node2 
hostnamectl set-hostname k3s-node2

Add iptables rules on each node to allow forwarding and NAT on the local machine:

iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i wg0 -o wg0 -m conntrack --ctstate NEW -j ACCEPT
iptables -t nat -A POSTROUTING -s 192.168.1.1/24 -o eth0 -j MASQUERADE

  • wg0: the virtual network interface defined later in this article.
  • 192.168.1.1/24: the virtual IP address range (be sure to substitute the node's own virtual IP when executing this on the other nodes).
  • eth0: the physical network interface; adjust it if your machine uses a different name.
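
Note that rules added with iptables do not survive a reboot. One way to persist them on CentOS 7 (a sketch, assuming the iptables-services package is acceptable in your environment):

yum install -y iptables-services
service iptables save        # writes the current rules to /etc/sysconfig/iptables
systemctl enable iptables    # restores them at boot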

Build Virtual Local Area Network#

Before setting up a cross-cloud k3s cluster, we need to install WireGuard. WireGuard has kernel requirements: the kernel should be upgraded to 5.15.2-1.el7.elrepo.x86_64 or higher.

The kernel upgrade steps are omitted here; the CentOS 7.6 machines used in this article already run an upgraded kernel and need no further updates.

The kernel version must be reasonably new; otherwise, starting WireGuard will fail with an error.
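
You can check the running kernel and module availability before proceeding:

uname -r               # e.g. 5.15.2-1.el7.elrepo.x86_64
modinfo wireguard      # succeeds if the wireguard kernel module is present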

Install WireGuard#

Execute on all nodes#

The installation process is very simple. My system kernel is relatively new and already includes the WireGuard kernel module, so strictly speaking only the wireguard-tools package is needed; the commands below also install kmod-wireguard, which supplies the module on kernels that lack it.

yum install epel-release https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm 
yum install yum-plugin-elrepo kmod-wireguard wireguard-tools -y

Configure WireGuard#

The wireguard-tools package provides the necessary tools wg and wg-quick, which can be used for manual and automatic deployment, respectively.

First, generate the keys for the master node for encryption and decryption as described in the official documentation.

wg genkey | tee privatekey | wg pubkey > publickey

This will generate the privatekey and publickey files in the current directory.

privatekey is used in the local machine's configuration, while publickey needs to be configured on the other machines.
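
Since privatekey lands on disk, the WireGuard documentation recommends restricting file permissions when generating keys; for example:

umask 077    # files created in this shell are readable by the owner only
wg genkey | tee privatekey | wg pubkey > publickey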

cat privatekey publickey

EMWcI01iqM4zkb7xfbaaxxxxxxxxDo2GJUA= 
0ay8WfGOIHndWklSIVBqrsp5LDWxxxxxxxxxxxxxxQ=

Now we need to connect with the peers node1 and node2, whose public IPs (these should be IPs reachable from the master) are 122.xx.xxx.111 and 122.xx.xx.155.

We first install WireGuard and generate the keys for node1 and node2 following the above process.

Then write the master's complete configuration file for wg-quick at /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = EMWcI01iqM4zkb7xfbaaxxxxxxxxDo2GJUA=
Address = 192.168.1.1
ListenPort = 5418

[Peer]
PublicKey = <node1 publickey (Tencent Cloud)>
Endpoint = 122.xx.xxx.111:5418
AllowedIPs = 192.168.1.2/32

[Peer]
PublicKey = <node2 publickey (Alibaba Cloud)>
Endpoint = 122.xx.xx.155:5418
AllowedIPs = 192.168.1.3/32

Configuration Explanation:#

  • Interface: configuration for the local machine (here, the master).
  • Address: the virtual IP assigned to the master.
  • ListenPort: the UDP port used for communication between hosts.
  • Peer: information about a peer to communicate with; add one Peer section per host that should join.
  • Endpoint: the public IP of node1 or node2 together with its WireGuard UDP listening port.

Note: If your machines can already reach each other over an internal network, you can use the internal IP directly, as long as that IP is reachable from every host joining the virtual LAN.

  • AllowedIPs: which destination IPs should have their traffic routed to this peer when initiated by the local machine. For example, if host B is assigned the virtual IP 192.168.1.2, then packets from host A addressed to 192.168.1.2 are forwarded to B's Endpoint, so the field effectively acts as a routing filter. When there are multiple Peers, the ranges configured here must not overlap.
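
For reference, the wg tool mentioned earlier can apply the same settings at runtime without a config file. A minimal manual sketch equivalent to the wg-quick setup above (interface name and key paths as used in this article):

ip link add dev wg0 type wireguard
ip address add 192.168.1.1/24 dev wg0
wg set wg0 listen-port 5418 private-key ./privatekey
# add node1 as a peer; repeat for node2 with its own key, endpoint, and IP
wg set wg0 peer <node1 publickey> endpoint 122.xx.xxx.111:5418 allowed-ips 192.168.1.2/32
ip link set up dev wg0

wg-quick performs these steps (plus route setup) automatically, so this is only needed for manual deployments.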

The generated privatekey and publickey for each node are as follows:

# master node
[root@k3s-master ~]# cat privatekey publickey
EMWcI01iqM4zkb7xfbaaxxxxxxxxDo2GJUA=
0ay8WfGOIHndWklSIVBqrsp5LDWxxxxxxxxxxxxxxQ=
# node1 node
[root@k3s-node1 ~]# cat privatekey publickey
QGdNkzpnIkuvUU+00C6XYxxxxxxxxxK0D82qJVc=
3izpVbZgPhlM+S5szOogTDTxxxxxxxxxuKuDGn4=
# node2 node
[root@k3s-node2 ~]# cat privatekey publickey
WOOObkWINkW/hqaAME9r+xxxxxxxxxm+r2Q=
0f0dn60+tBUfYgzw7rIihKbqxxxxxxxxa6Wo=

The configuration files for each node are as follows:

# master node
cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = EMWcI01iqM4zkb7xfbaaxxxxxxxxDo2GJUA=
Address = 192.168.1.1
ListenPort = 5418

[Peer]
PublicKey = 3izpVbZgPhlM+S5szOogTDTxxxxxxxxxuKuDGn4=
Endpoint = 122.xx.xxx.111:5418
AllowedIPs = 192.168.1.2/32

[Peer]
PublicKey = 0f0dn60+tBUfYgzw7rIihKbqxxxxxxxxa6Wo=
Endpoint = 122.xx.xx.155:5418
AllowedIPs = 192.168.1.3/32

# node1 node
cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = QGdNkzpnIkuvUU+00C6XYxxxxxxxxxK0D82qJVc=
Address = 192.168.1.2
ListenPort = 5418

[Peer]
PublicKey = 0ay8WfGOIHndWklSIVBqrsp5LDWxxxxxxxxxxxxxxQ=
Endpoint = 42.xx.xx.12:5418
AllowedIPs = 192.168.1.1/32

[Peer]
PublicKey = 0f0dn60+tBUfYgzw7rIihKbqxxxxxxxxa6Wo=
Endpoint = 122.xx.xx.155:5418
AllowedIPs = 192.168.1.3/32

# node2 node
cat /etc/wireguard/wg0.conf
[Interface]
PrivateKey = WOOObkWINkW/hqaAME9r+xxxxxxxxxm+r2Q=
Address = 192.168.1.3
ListenPort = 5418

[Peer]
PublicKey = 0ay8WfGOIHndWklSIVBqrsp5LDWxxxxxxxxxxxxxxQ=
Endpoint = 42.xx.xx.12:5418
AllowedIPs = 192.168.1.1/32

[Peer]
PublicKey = 3izpVbZgPhlM+S5szOogTDTxxxxxxxxxuKuDGn4=
Endpoint = 122.xx.xxx.111:5418
AllowedIPs = 192.168.1.2/32

Start WireGuard#

After writing the configuration file, use the wg-quick tool to create the virtual network card.

wg-quick up wg0

The wg0 in the above command corresponds to the /etc/wireguard/wg0.conf configuration file, and the automatically created network interface is likewise named wg0.

After bringing up the interfaces on node1 and node2 the same way, you can use the wg command to inspect the tunnel status.

[root@k3s-master ~]# wg
interface: wg0
  public key: 0ay8WfGOIHndWklSIVBqrsp5LDWxxxxxxxxxxxxxxQ=
  private key: (hidden)
  listening port: 5418

peer: 0f0dn60+tBUfYgzw7rIihKbqxxxxxxxxa6Wo=
  endpoint: 122.xx.xx.155:5418
  allowed ips: 192.168.1.3/32
  latest handshake: 3 minutes, 3 seconds ago
  transfer: 35.40 KiB received, 47.46 KiB sent

peer: 3izpVbZgPhlM+S5szOogTDTxxxxxxxxxuKuDGn4=
  endpoint: 122.xx.xxx.111:5418
  allowed ips: 192.168.1.2/32
  latest handshake: 5 minutes, 6 seconds ago
  transfer: 24.84 KiB received, 35.21 KiB sent

The peer nodes are listed along with handshake and transfer statistics. You can then ping the virtual IPs of the other hosts, or ssh to them, to check that network communication is working.
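
For example, from the master:

ping -c 3 192.168.1.2    # node1 over the tunnel
ping -c 3 192.168.1.3    # node2 over the tunnel
ssh root@192.168.1.3     # optionally log in over the virtual network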

Automation#

After the system restarts, the network interface created by WireGuard is lost, so it should be recreated automatically at boot.

systemctl enable wg-quick@wg0

The above command enables the systemd unit shipped with wireguard-tools, which automatically runs the equivalent of wg-quick up wg0 on boot.
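
To have systemd manage the interface right away instead of waiting for the next reboot:

# if wg0 is still up from the earlier manual `wg-quick up wg0`, take it down first
wg-quick down wg0
systemctl start wg-quick@wg0
systemctl status wg-quick@wg0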

After completing the installation and configuration of WireGuard, we can proceed to install the k3s cluster.

Install K3S Cluster#

Install on Master Node#

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -s - \
  --node-external-ip 42.xx.xx.12 \
  --advertise-address 42.xx.xx.12 \
  --node-ip 192.168.1.1 \
  --flannel-iface wg0

Parameter Explanation:

  • --node-external-ip 42.xx.xx.12: Sets the external IP for the node. The external IP of the Alibaba Cloud VPC is not directly bound to the virtual machine's network card, so I need to set this parameter to avoid the k3s components treating the internal IP as the public IP when setting up load balancing.
  • --advertise-address 42.xx.xx.12: Sets the address that kubectl and the agent nodes use to reach the apiserver; it can be an IP or a domain name, and it is added to the apiserver certificate's list of valid names.
  • --node-ip 192.168.1.1: If this parameter is not set, the IP of the first network interface is used, which is usually the internal IP. Since we built a virtual local area network, we need to specify the virtual LAN IP (the WireGuard address) instead.
  • --flannel-iface wg0: wg0 is the network card device created by WireGuard. I need to use the virtual local area network for communication between nodes, so this needs to be specified as wg0.

Additionally, since all WireGuard traffic is encrypted, node-to-node communication is already secure, so there is no need to switch to another CNI driver; the default can be used.

After executing the above command on the master node, the script reports completion in under a minute. Check the running status of the server:

systemctl status k3s

If it is running normally, check whether the container's running status is normal.

kubectl get pods -A

The -A parameter is used to view all namespaces. If all containers are in the running state, the installation is successful, and you can proceed to add the controlled nodes.
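
If some pods are still starting up, the -w flag keeps watching until they settle:

kubectl get pods -A -w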

Agent Installation#

With the server installed, setting up the worker (agent) nodes is even simpler; only a few parameters change.

For node1:

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
  INSTALL_K3S_MIRROR=cn \
  K3S_URL=https://192.168.1.1:6443 \
  K3S_TOKEN=K10720eda8a278bdc7b9b6d787c9676a92119bb2cf95048d6a3cd85f15717edfbe5::server:e98b986e8202885cb54da1b7e701f67e \
  sh -s - --node-external-ip 122.xx.xxx.111 --node-ip 192.168.1.2 --flannel-iface wg0

For node2:

curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | \
  INSTALL_K3S_MIRROR=cn \
  K3S_URL=https://192.168.1.1:6443 \
  K3S_TOKEN=K10720eda8a278bdc7b9b6d787c9676a92119bb2cf95048d6a3cd85f15717edfbe5::server:e98b986e8202885cb54da1b7e701f67e \
  sh -s - --node-external-ip 122.xx.xx.155 --node-ip 192.168.1.3 --flannel-iface wg0

Parameter Explanation:

  • K3S_TOKEN: According to the documentation, it can be read from /var/lib/rancher/k3s/server/node-token on the master, as shown below.
  • K3S_URL: The address and port of the master's apiserver (default port 6443). Using the virtual network IP here ensures the traffic is carried over the encrypted WireGuard tunnel.
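
To read the token on the master:

cat /var/lib/rancher/k3s/server/node-token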

The other two parameters follow the same logic as on the master. After executing, wait a moment; once the installation succeeds, check the service status.

systemctl status k3s-agent

If there are errors, find solutions based on the error messages.

After everything is installed, check on the master node.

kubectl get nodes -o wide

If there are no issues, it should display three nodes: master, node1, and node2, all in the Ready state.

Thus, the multi-cloud K3S cluster has been successfully set up.

If some commands report permission errors, please execute them with administrator privileges using sudo.
