☸️ Configuring K8S Multi-Node Cluster over AWS ☁️ using Ansible


Hello everyone 👋🏻, I am back with another interesting and important use-case-based automation article. In this article we will configure a K8S multi-node cluster over AWS ☁️.

We will configure everything using Ansible Roles.

We need to follow a step-by-step procedure to achieve this:

1. Launch three EC2 instances over the AWS cloud: one as the Master node and two as Slave/Worker nodes.

2. Configure the K8S Master.

3. Configure the K8S Slaves.

👨🏻‍💻 Ansible Configuration file “/etc/ansible/ansible.cfg”
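A minimal sketch of what this file can contain for our setup is shown below; the inventory path, remote user, and private-key location are assumptions, so adjust them to your environment:

[defaults]
# inventory location (static file or dynamic inventory config) -- assumed path
inventory = /etc/ansible/inventory
# default login user for Amazon Linux 2 instances
remote_user = ec2-user
# private key matching the "arth" key pair used in the vars file -- assumed path
private_key_file = /root/arth.pem
host_key_checking = False

[privilege_escalation]
# become root on the managed nodes for package and service management
become = True
become_method = sudo
become_user = root
become_ask_pass = False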

Ansible Role to launch ec2 instances

> Creating role to create ec2-instance.

ansible-galaxy init ec2-launch

After creating the Ansible role we need to write the YAML code inside the respective files: the vars folder keeps the variables and the tasks folder keeps the tasks, as shown in the skeleton below.
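For reference, ansible-galaxy init generates the standard role skeleton:

ec2-launch/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
└── vars/main.yml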

tasks file for ec2-launch

---
# tasks file for ec2-launch
- name: "Provisioning ec2 instances over AWS Cloud"
  ec2:
    image: "{{ image_id }}"
    instance_type: "{{ instance_type }}"
    region: "{{ region }}"
    key_name: "{{ key }}"
    wait: yes
    count: 1
    state: present
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    group_id: "{{ security_group_id }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    instance_tags:
      Name: "{{ item }}"
  loop: "{{ OS_Names }}"

vars file for ec2-launch

---
# vars file for ec2-launch
image_id: "ami-0eeb03e72075b9bcc"
instance_type: "t2.micro"
region: "ap-south-1"
key: "arth"
vpc_subnet_id: "subnet-f7d4c19f"
security_group_id: "sg-2904234e"
aws_access_key: "enter access key"
aws_secret_key: "enter secret key"
OS_Names:
  - "K8S_Master"
  - "K8S_Slave1"
  - "K8S_Slave2"

Creating main playbook “setup.yml” to launch ec2 instances over AWS ☁️

- hosts: localhost
  roles:
    - role: "/k8s-multi-node-cluster/ec2-launch"

The command to execute the playbook “setup.yml”

ansible-playbook setup.yml

Now we can check the EC2 dashboard; we will see that the instances have been created and are running successfully.

Here I am using a dynamic inventory, which retrieves all the hosts dynamically; a sketch of such an inventory configuration follows.
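The inventory configuration itself is not reproduced in this article; a minimal sketch using the aws_ec2 inventory plugin (an assumption — a script-based inventory such as ec2.py works as well), which yields groups like tag_Name_K8S_Master from the instance tags, could look like this:

# aws_ec2.yml -- assumed file name; requires the boto3 library on the controller
plugin: aws_ec2
regions:
  - ap-south-1
# build groups like tag_Name_K8S_Master from each instance tag
keyed_groups:
  - key: tags
    prefix: tag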

After launching the EC2 instances, we can configure the Kubernetes multi-node cluster on them.

Kubernetes Multi-Node Cluster

To set up a k8s multi-node cluster manually, visit https://github.com/hrishabhsharma/Kubernetes-Multi-Node-Cluster-Over-AWS-Cloud

Ansible Roles to configure k8s-master and k8s-slaves

> Creating role to configure k8s-master.

ansible-galaxy init k8s-master

> Creating role to configure k8s-slaves.

ansible-galaxy init k8s-slaves

Steps for configuring the Kubernetes Master node:

1. “Install docker (as we are using the Amazon Linux 2 image, we don’t need to configure a repo for docker)”

---
# tasks file for k8s-master
- name: "Installing docker"
  package:
    name: docker
    state: present

2. “Starting and Enabling docker service”

- name: "Starting and Enabling docker service"
service:
name: docker
state: started
enabled: yes

3. “Configuring yum repo for Kubernetes”

- name: "Configuring yum repo for kubernetes"
copy:
src: "/k8s-multi-node-cluster/k8s-master/files/kubernetes.repo"
dest: "/etc/yum.repos.d/kubernetes.repo"
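The kubernetes.repo file this task copies is not shown in the article; the repo definition published in the official Kubernetes install docs at the time looked like the sketch below. Note the exclude line — it is the reason the install task that follows needs disable_excludes:

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl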

4. “Installing kubeadm, kubectl, kubelet”

- name: "Installing kubeadm, kubectl, kubelet"
yum:
name: "{{ item }}"
state: present
disable_excludes: kubernetes
loop: "{{ packages }}"

5. “Starting and Enabling kubelet”

- name: "Starting and Enabling kubelet"
service:
name: kubelet
state: started
enabled: yes

6. “Pulling Images using kubeadm”

- name: "Pulling Images using kubeadm"
shell: "kubeadm config images pull"
changed_when: false

7. “Change driver of docker from cgroupfs to systemd”

- name: "Changing the driver in the docker"
  copy:
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
    dest: /etc/docker/daemon.json
  register: output

8. “Restarting Docker Service”

- name: "Restarting Docker"
service:
name: docker
state: restarted
when: output.changed == true

9. “Installing iproute-tc”

- name: "Installing iproute-tc"
  package:
    name: iproute-tc
    state: present

10. “Setting bridge-nf-call-iptables to 1”

- name: "Setting bridge-nf-call-iptables to 1"
shell: |
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
changed_when: false
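As a side note, the echo above does not persist across reboots; a more idempotent sketch using Ansible’s sysctl module (not what this article uses) would be:

- name: "Setting bridge-nf-call-iptables to 1 (persistent alternative)"
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    sysctl_set: yes
    state: present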

11. “Initializing Master”

- name: "Initializing Master"
shell: "kubeadm init --pod-network-cidr={{ cidr }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem --node-name=master"
ignore_errors: yes

12. “Creating .kube directory”

- name: "Creating .kube directory"
shell: "mkdir -p $HOME/.kube"

13. “Copying /etc/kubernetes/admin.conf to $HOME/.kube/config”

- name: "Copying /etc/kubernetes/admin.conf to $HOME/.kube/config"
  shell: "sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"

14. “Changing owner permission of $HOME/.kube/config”

- name: "changing owner permission"
shell: "sudo chown $(id -u):$(id -g) $HOME/.kube/config"

15. “Configuring with flannel plugin”

- name: "Configuring with flannel plugin"
shell: "kubectl apply -f
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"

16. “Generating Token”

- name: "Generating Token"
shell: "kubeadm token create --print-join-command"
register: token
ignore_errors: yes
- debug:
var: token.stdout_lines
register: token

vars file for k8s-master

---
# vars file for k8s-master
cidr: "10.244.0.0/16"
packages:
  - kubeadm
  - kubelet
  - kubectl

We have completed all the steps required to configure the master node. Now we need to configure the slave nodes. Their configuration is much simpler, as most of the steps are the same.

Steps for configuring the Kubernetes Slave nodes:

1. “Install docker (as we are using the Amazon Linux 2 image, we don’t need to configure a repo for docker)”

---
# tasks file for k8s-slaves
- name: "Installing docker"
  package:
    name: docker
    state: present

2. “Starting and Enabling docker service”

- name: "Starting and Enabling docker service"
service:
name: docker
state: started
enabled: yes

3. “Configuring yum repo for Kubernetes”

- name: "Configuring yum repo for kubernetes"
copy:
src: "/k8s-multi-node-cluster/k8s-master/files/kubernetes.repo"
dest: "/etc/yum.repos.d/kubernetes.repo"

4. “Installing kubeadm, kubectl, kubelet”

- name: "Installing kubeadm, kubectl, kubelet"
yum:
name: "{{ item }}"
state: present
disable_excludes: kubernetes
loop: "{{ packages }}"

5. “Starting and Enabling kubelet”

- name: "Starting and Enabling kubelet"
service:
name: kubelet
state: started
enabled: yes

6. “Change driver of docker from cgroupfs to systemd”

- name: "Changing the driver in the docker"
  copy:
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
    dest: /etc/docker/daemon.json
  register: output

7. “Restarting Docker Service”

- name: "Restarting Docker"
service:
name: docker
state: restarted
when: output.changed == true

8. “Installing iproute-tc”

- name: "Installing iproute-tc"
  package:
    name: iproute-tc
    state: present

9. “Setting bridge-nf-call-iptables to 1”

- name: "Setting bridge-nf-call-iptables to 1"
shell: |
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
echo "1" > /proc/sys/net/bridge/bridge-nf-call-ip6tables
changed_when: false

10. “Joining Slave to Master”

- name: "Joining Slave to Master"
shell: "{{ master_token }}"
ignore_errors: yes
register: joined
- debug:
var: joined.stdout_lines

vars file for k8s-slaves

---
# vars file for k8s-slaves
packages:
  - kubeadm
  - kubelet
  - kubectl

Creating the main playbook “k8s-playbook.yml” to configure the master and slaves using dynamic inventory over AWS ☁️

- hosts: ["tag_Name_K8S_Master"]
roles:
- name: "Configuring Master Node"
role: "/k8s-multi-node-cluster/k8s-master"
- hosts: ["tag_Name_K8S_Slave1", "tag_Name_K8S_Slave2"]
vars_prompt:
- name: "master_token"
prompt: "Enter Token to Join Slaves to Master: "
private: no
roles:
- name: "Configuring Slave Nodes"
role: "/k8s-multi-node-cluster/k8s-slaves"

The command to execute the playbook “k8s-playbook.yml”. When prompted, paste the join command printed by the “Generating Token” debug task from the master play:

ansible-playbook k8s-playbook.yml

Now we can verify from the master node that our cluster is configured successfully, as shown below.
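For example, running the following on the master node should list the master and both slaves with a Ready status once the flannel pods are up:

kubectl get nodes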

Thanks for reading!!! 😊✨ Keep learning!
