Kubernetes is an open-source container management system. It provides a platform for automating the deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes lets you take advantage of on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious deployment tasks.
In this practical class, we are going to:
- set up a multi-node Kubernetes cluster on Ubuntu 20.04 server;
- deploy an application on our cluster and manage it.
You must be connected to the N7 VPN and logged in to an N7 machine with your student ID. Use the following commands to access your server:
ssh <your enseeiht user name>@<an enseeiht machine>.enseeiht.fr
Type your enseeiht password
The master node port is 130XX and the slave node port is 130XX+1.
ssh ubuntu@pc-sepia01 -p 130XX #connection to master node
ssh ubuntu@pc-sepia01 -p 130XX+1 #connection to slave node
Where XX=01-40. This gives you access to a VM with IP address 192.168.27.(XX+10); the password is "toto".
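To avoid arithmetic mistakes, the port/address scheme above can be sketched as a small helper. This is our own addition, not part of the lab sheet, and the function name `lab_endpoints` is made up:

```shell
# Hypothetical helper: given your group number XX (01-40), print the SSH
# ports and VM address derived from the scheme described above.
lab_endpoints() {
    xx=${1#0}                           # strip a leading zero (e.g. 07 -> 7)
    master_port=$((13000 + xx))         # 130XX
    slave_port=$((master_port + 1))     # 130XX+1
    ip="192.168.27.$((xx + 10))"        # 192.168.27.(XX+10)
    echo "master=$master_port slave=$slave_port ip=$ip"
}

lab_endpoints 07
```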
sudo bash
apt-get update -y # on both nodes
You need to configure the hosts file and the hostname on each node to allow network communication using hostnames. Begin by setting the master and slave node names.
On the master node run:
hostnamectl set-hostname master
On the slave node run:
hostnamectl set-hostname slave
Then configure the hosts file by running the following commands on both nodes, replacing slaveIP and masterIP with the actual IP addresses of the two VMs:
echo "slaveIP slave" >> /etc/hosts
echo "masterIP master" >> /etc/hosts
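As a quick sanity check before moving on (our own addition, not required by the lab sheet), you can verify that a name appears in a hosts-format file; the helper name `host_mapped` is made up:

```shell
# Hypothetical check: succeed only if the given hostname appears as the
# second field of an /etc/hosts-style file.
# Usage: host_mapped /etc/hosts slave
host_mapped() {
    awk -v name="$2" '$2 == name { found = 1 } END { exit !found }' "$1"
}
```

For example, `host_mapped /etc/hosts slave && echo "slave is mapped"`.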
You have to disable swap on each node: the kubelet does not support swap and will not start if swap is active. Run the following command on both nodes:
swapoff -a
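Note that `swapoff -a` only lasts until the next reboot. A common companion step (our addition, not from the original sheet) is to comment out the swap entries in /etc/fstab; here it is sketched as a function so you can try it on a copy of the file first:

```shell
# Hedged sketch: comment out every swap entry in an fstab-style file so
# swap stays disabled after a reboot. A backup is kept with a .bak suffix.
# Usage: disable_swap_entries /etc/fstab
disable_swap_entries() {
    fstab="${1:-/etc/fstab}"
    cp "$fstab" "$fstab.bak"                       # keep a backup
    # The third fstab field is the filesystem type; prefix "swap" lines
    # that are not already comments with '#'.
    awk '$1 !~ /^#/ && $3 == "swap" { print "#" $0; next } { print }' \
        "$fstab.bak" > "$fstab"
}
```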
Docker must be installed on both the master and slave nodes. You start by installing all the required packages.
wget -qO- https://get.docker.com/ | sh
Next, you need to install kubeadm, kubectl, and kubelet on both nodes. First, load the br_netfilter kernel module and configure sysctl so that iptables sees bridged traffic:
modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
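Before (or after) reloading, you can confirm the two keys were written correctly. This small check is our own; the name `bridge_nf_ok` is made up:

```shell
# Hypothetical check: verify that both bridge-nf keys are set to 1 in a
# sysctl configuration file such as /etc/sysctl.d/k8s.conf.
bridge_nf_ok() {
    grep -q '^net\.bridge\.bridge-nf-call-iptables *= *1' "$1" &&
    grep -q '^net\.bridge\.bridge-nf-call-ip6tables *= *1' "$1"
}
```

For example, `bridge_nf_ok /etc/sysctl.d/k8s.conf && echo OK`.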
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install kubelet kubeadm kubectl
Good, all the required packages are installed on both servers.
Now it’s time to configure the Kubernetes master node. First, initialize your cluster with the following command:
kubeadm init --pod-network-cidr=10.0.0.0/16
Note:
You should see the following output:
Save the 'kubeadm join ... ... ...' command from the output; it will be used to register new worker nodes with the Kubernetes cluster.
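If you want to keep the token and CA-certificate hash separately, you can pull them back out of the saved line. The helper below is our own convenience (the name `parse_join` is made up); if you lose the join command entirely, `kubeadm token create --print-join-command` on the master regenerates one:

```shell
# Hypothetical helper: extract the --token and
# --discovery-token-ca-cert-hash values from a saved 'kubeadm join' line,
# printing one per line.
parse_join() {
    printf '%s\n' "$1" | sed -n 's/.*--token \([^ ]*\).*/\1/p'
    printf '%s\n' "$1" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p'
}
```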
To use Kubernetes, you must run the following commands, as shown in the kubeadm output (as root):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the status of the master node by running the following commands:
kubectl get nodes
kubectl get pods --all-namespaces
You can observe from the above output that the master node is listed as NotReady. This is because the cluster does not yet have a Container Network Interface (CNI) plugin.
Next, deploy the Flannel network add-on to the Kubernetes cluster using the kubectl command below. (Note: Flannel's manifest assumes the pod network 10.244.0.0/16 by default; if you initialized the cluster with a different --pod-network-cidr, you may need to edit the net-conf.json section of the manifest to match.)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
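While waiting for the nodes to come up, you can poll their status. Here is a small stdin filter of our own (not part of kubectl; the name `all_nodes_ready` is made up) that succeeds once every node reports Ready:

```shell
# Hypothetical filter: read 'kubectl get nodes' output on stdin and exit 0
# only when every node's STATUS column (field 2) is exactly "Ready".
all_nodes_ready() {
    awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}
```

A possible use: `until kubectl get nodes | all_nodes_ready; do sleep 5; done`.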
Wait for a minute, then check the Kubernetes nodes and pods again using the commands below.
kubectl get nodes
kubectl get pods --all-namespaces
You should see the following output:
You should now see the master node with status 'Ready', and the control-plane pods (such as 'kube-scheduler-master') running.
Kubernetes cluster master initialization and configuration has been completed.
Next, we need to log in to the slave node and add it to the cluster. Take the join command saved from the master node initialization output and issue it on the slave node, as shown below:
sudo kubeadm join xxx.xxx.xxx.xxx:6443 --token wg42is.1hrm4wgvd5e7gbth --discovery-token-ca-cert-hash sha256:53d1cc33b5b8efe1b974598d90d250a12e61958a0f1a23f864579dbe67f83e30
Once the Node is joined successfully, you should see the following output:
Now, go back to the master node and run “kubectl get nodes” to check that the slave node is now ready:
kubectl get nodes
We will now deploy a small application in our Kubernetes cluster: an Nginx web server, described by YAML templates. Create a new directory named 'nginx' and move into it.
mkdir -p nginx/
cd nginx/
Now, you have to create the Nginx Deployment YAML file 'nginx-deployment.yaml' and paste the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
Note:
You can create the deployment by running the kubectl command below.
kubectl create -f nginx-deployment.yaml
After creating the 'my-nginx' deployment, inspect it inside the cluster.
kubectl describe deployment my-nginx
Check which nodes the pods are running on:
kubectl get pods -l run=my-nginx -o wide
Check your pods' IPs:
kubectl get pods -l run=my-nginx -o yaml | grep podIP
We need to create a new service for our 'nginx-deployment'. Therefore, create a new YAML file named 'nginx-service.yaml' with the following content.
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
Note:
To list the pods created by the deployment:
kubectl get pods -l run=my-nginx
Create the Kubernetes service using the kubectl command below.
kubectl create -f nginx-service.yaml
Now check all available services on the cluster; you should see 'my-nginx' in the list. Then check the details of the service.
kubectl get service
kubectl describe service my-nginx
Accessing the service. Scale the deployment down to zero and back up again, and note that the new pods get different IP addresses while the service keeps the same ClusterIP:
kubectl scale deployment my-nginx --replicas=0
kubectl scale deployment my-nginx --replicas=2
kubectl get pods -l run=my-nginx -o wide
Copy the ClusterIP and use it to access your application:
curl <clusterIP>:80
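Instead of copying the ClusterIP by hand, you can extract it from the command output. This helper is our own (the name `service_ip` is made up) and assumes the default 'kubectl get service' column layout, where CLUSTER-IP is the third column:

```shell
# Hypothetical helper: print the ClusterIP of a named service from
# 'kubectl get service' output read on stdin.
# Usage: kubectl get service | service_ip my-nginx
service_ip() {
    awk -v svc="$1" '$1 == svc { print $3 }'
}
```

A possible use: `curl "$(kubectl get service | service_ip my-nginx):80"`.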
You will demonstrate that you followed the session by deploying the Tomcat architecture from the last class in your Kubernetes cluster (2 Tomcat instances and 1 service; no need to use the haproxy.cfg file).
Good luck!