Multi-Cloud Kubernetes Cluster on AWS, GCP, and Azure

Dileepkumar
Sep 7, 2022

Nowadays, most applications use Kubernetes for their deployments, and the Kubernetes cluster is generally deployed on a cloud such as AWS, GCP, or Azure. But if the entire cluster runs on a single cloud and a service of that cloud fails in a way that affects the cluster, the whole Kubernetes cluster goes down with it. To overcome this problem, we can build a cluster whose slave (worker) nodes come from multiple clouds. This kind of cluster is known as a Multi-Cloud Kubernetes Cluster.
In this article, we will build our own multi-cloud Kubernetes Cluster using AWS, GCP, and Azure Cloud.

Pre-requisites

  1. For building this cluster, you need a working account on at least two clouds. I will be using three: AWS, GCP, and Azure.
  2. Ansible should be installed on a workstation.

I have created an Ansible collection for configuring the Multi-Cloud Kubernetes cluster. But if you are not comfortable with Ansible, that's perfectly fine; I will also show you a manual way to configure a Kubernetes slave node.

Step 1: Configure Kubernetes Master node on AWS cloud

Go to the AWS cloud and create one EC2 instance using RedHat AMI. In this instance, we will configure the Kubernetes master node.

AWS EC2 instance for configuring K8s master node

Now, we will configure this instance using the ansible collection named multicloudkubernetescluster.

The collection is published on Ansible Galaxy: galaxy.ansible.com

If you are new to Ansible and Ansible collections, refer to my two blogs on working with Ansible and on using Ansible collections; they will give you a basic understanding of both.

Let’s move to the Kubernetes master configuration.

Update the Ansible inventory with the public IP of the AWS EC2 instance and the username of the EC2 instance. Also, provide the ssh private key location in the ansible configuration file so that Ansible can log in to this instance.
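For reference, a minimal inventory and ansible.cfg might look like the following. The IP address, key path, and the kube_master group name are placeholders/assumptions, not values taken from the collection itself; ec2-user is the default login user for the RedHat AMI.

inventory:

[kube_master]
<EC2_PUBLIC_IP>  ansible_user=ec2-user

ansible.cfg:

[defaults]
inventory = ./inventory
remote_user = ec2-user
private_key_file = /path/to/aws-private-key.pem
host_key_checking = False

[privilege_escalation]
become = true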

Ansible tasks file for configuring K8s master node:
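The full tasks file ships with the collection; the trimmed sketch below only illustrates what the master role does, based on the manual steps shown later in this article. Package names and module arguments are illustrative, and the repo setup, daemon.json, sysctl, and CNI tasks are omitted.

# roles/kube_master/tasks/main.yml (trimmed sketch, not the full collection)

- name: Install docker, kubelet, kubeadm and kubectl
  package:
    name:
      - docker-ce
      - kubelet
      - kubeadm
      - kubectl
    state: present

- name: Start and enable the docker and kubelet services
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Initialize the control plane on the public endpoint
  command: >
    kubeadm init
    --pod-network-cidr={{ pod_network_cidr }}
    --control-plane-endpoint={{ control_plane_endpoint_ip }}:6443
    --ignore-preflight-errors=NumCPU
    --ignore-preflight-errors=Mem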

The most important part of this file is this command:

$ kubeadm init --pod-network-cidr={{ pod_network_cidr }} --control-plane-endpoint={{ control_plane_endpoint_ip }}:6443 --ignore-preflight-errors=NumCPU  --ignore-preflight-errors=Mem

Normally, we run this command without the --control-plane-endpoint option; by default, kubeadm uses the private IP of the instance and prints a join command based on it. A kubeadm join command that uses the private IP will only work inside the same VPC. To join slave nodes from across the internet and from multiple cloud providers, we need to use the public IP of the Kubernetes master instance. But we can't simply substitute the EC2 instance's public IP into the kubeadm join command; instead, while configuring the master node, we have to tell kubeadm the public IP and the port on which the control plane will be reachable, using the --control-plane-endpoint option.

Install the multicloudkubernetescluster collection using the following command:
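The exact Galaxy namespace is not shown here, so treat <your_namespace> as a placeholder; with it substituted, the command has this form:

$ ansible-galaxy collection install <your_namespace>.multicloudkubernetescluster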

Now, we can write one playbook that will use this collection and configure the Kubernetes master node on the AWS EC2 instance.

kube_master.yml
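A sketch of kube_master.yml, assuming the role inside the collection is named kube_master and that the variables match the kubeadm command shown above; 10.244.0.0/16 is simply the Flannel default, so use whichever pod CIDR you need, and replace <EC2_PUBLIC_IP> with the master instance's public IP:

- hosts: kube_master
  become: true
  vars:
    pod_network_cidr: 10.244.0.0/16
    control_plane_endpoint_ip: "<EC2_PUBLIC_IP>"
  roles:
    - <your_namespace>.multicloudkubernetescluster.kube_master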

Now run this playbook using the following command:

$ ansible-playbook kube_master.yml

Once the playbook finishes, the Kubernetes master node is successfully configured, and the play output prints the command for joining Kubernetes slave nodes to this cluster.
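The printed join command has the following form; the token and CA-certificate hash are generated per cluster, and note that the endpoint is now the master's public IP rather than a private VPC address:

$ kubeadm join <EC2_PUBLIC_IP>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>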

Step 2: Configure one slave node on Azure cloud

Let’s create one Virtual machine in Azure using a RedHat image.

Now, for configuring the Kubernetes slave node, we can use the same Ansible collection; the kube_slave role from this collection configures a K8s slave node.

kube_slave.yml

Here, provide the kubeadm join command that we got from the Kubernetes master node to the kube_join_command variable.
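A sketch of kube_slave.yml, again assuming the role is named kube_slave and the collection namespace is a placeholder; paste the join command you copied from the master into the variable:

- hosts: kube_slave
  become: true
  vars:
    kube_join_command: "kubeadm join <EC2_PUBLIC_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
  roles:
    - <your_namespace>.multicloudkubernetescluster.kube_slave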

Update the Ansible inventory and provide the IP of the Azure VM under the kube_slave host group.
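For example, the inventory entry might look like this (replace the placeholders with the Azure VM's public IP and the admin username you set when creating the VM):

[kube_slave]
<AZURE_VM_PUBLIC_IP>  ansible_user=<azure-admin-user>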

Finally, run this playbook using the following command:

$ ansible-playbook kube_slave.yml

After the slave node is configured successfully, log in to the EC2 instance on which we configured the K8s master node and run the following command to check whether the Kubernetes slave node from the Azure cloud has joined the master and is ready to use.

$ kubectl get nodes

You can see that the slave node from Azure is connected successfully and is ready to use.

Step 3: Configure another slave node on the GCP cloud

Let us configure another slave node on the GCP cloud. Go to the GCP cloud and launch one Virtual Machine using a RedHat image.

Let’s configure this slave node manually so that if you are not comfortable with Ansible, you will understand the flow.

Login to this instance by clicking the SSH button in the GCP console.

Now, follow these steps to configure the Kubernetes slave node.

1. Install Docker

Create a yum repository for docker.

$ vim /etc/yum.repos.d/docker.repo

and add the following content to the docker.repo file:

[docker_repo]
baseurl = https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck = 0
name = Yum repo for docker

Run the following command to install Docker

$ yum install docker-ce --nobest

2. Install Python and required docker dependencies

$ yum install python3
$ pip3 install docker

3. Configure docker and start docker services

Change the cgroup driver for docker to systemd

$ mkdir /etc/docker
$ vim /etc/docker/daemon.json

Add this content to the /etc/docker/daemon.json file

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Start and enable docker services

$ systemctl start docker
$ systemctl enable docker

4. Install Kubernetes

Create a yum repo for Kubernetes

$ vim /etc/yum.repos.d/kubernetes.repo

Add the following content to /etc/yum.repos.d/kubernetes.repo file

[kubernetes]
baseurl = https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
gpgcheck = 0
name = Yum repo for Kubernetes

Install Kubelet, Kubeadm, and Kubectl

$ yum install kubelet kubeadm kubectl -y

Start and enable kubelet

$ systemctl start kubelet
$ systemctl enable kubelet

5. Install iproute-tc package

$ yum install iproute-tc

Create k8s.conf file for bridging

$ vim /etc/sysctl.d/k8s.conf

Add the following content to /etc/sysctl.d/k8s.conf file

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply the sysctl settings to enable kernel bridging on the slave node:

$ sysctl --system

6. Joining the slave node

Finally, run the kubeadm join command that was printed by kubeadm init on the master node (see Step 1) to join this slave node to the cluster.

Now, this slave node is connected to the K8s master running on AWS.

Go to the K8s master node and check whether this node has joined successfully and is ready. Run the following command on the K8s master node:

$ kubectl get nodes

Finally, the Kubernetes slave nodes running on the Azure and GCP clouds are connected to the Kubernetes master node running on the AWS cloud, and both slave nodes are ready. We can now deploy Kubernetes resources on this cluster, and they will run on the Azure and GCP nodes. The nodes also come from different parts of the world: the master node on AWS is in the Mumbai region, one slave node on Azure is in the US West region, and the other slave node on GCP is in the US Central region.

In this way, we have built a true Multi-Cloud Kubernetes Cluster to achieve high availability.

With the same approach, you can launch a virtual machine in your local data center and make it a K8s slave node: configure it the same way and join it to the master using the kubeadm join command.

Thank you!
