Kubernetes is a free, open-source container orchestration system. It provides a platform for automating the deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes lets you take advantage of on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious deployment tasks.
In this practical class, we are going to:
- set up a multi-node Kubernetes cluster on Ubuntu 20.04 servers;
- deploy an application and manage it on our Kubernetes cluster.
You must be connected to the N7 VPN and logged in to an N7 machine with your student ID. Use the following commands to access your server:
Type your ENSEEIHT password.
The master node port is 130XX and the slave node port is 130XX+1, where XX=01-40. This gives you access to a VM with IP address 192.168.27.(XX+10); the password is "toto".
You need to configure the "hosts" file and the hostname on each node to allow network communication using hostnames. Begin by setting the master and slave node names.
On the master node run:
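A typical way to set the name (assuming the node is simply called "master"):

```shell
# Set this machine's hostname to "master"
sudo hostnamectl set-hostname master
```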
On the slave node run:
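The equivalent command for the second node (assuming the name "slave"):

```shell
# Set this machine's hostname to "slave"
sudo hostnamectl set-hostname slave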
You need to configure the hosts file. Therefore, run the following command on both nodes:
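A sketch of the required entries, using example addresses for XX=01 (substitute your own VMs' IP addresses):

```shell
# Append hostname entries to /etc/hosts on BOTH nodes
# (replace the example addresses with your VMs' actual IPs)
echo "192.168.27.11 master" | sudo tee -a /etc/hosts
echo "192.168.27.12 slave"  | sudo tee -a /etc/hosts
```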
You have to disable swap memory on each node: the kubelet does not support swap and will not work while swap is active. Therefore, run the following command on both nodes:
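One common way to do this, disabling swap both immediately and across reboots:

```shell
# Turn swap off for the current session...
sudo swapoff -a
# ...and comment out any swap entry in /etc/fstab so it stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```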
Docker must be installed on both the master and slave nodes. You start by installing all the required packages.
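One possible installation route, using the Docker package from the Ubuntu repositories (the class may instead use Docker's own repository):

```shell
# Install Docker from the Ubuntu repositories and start it
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
```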
Next, you will need to install kubeadm, kubectl, and kubelet on both nodes.
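A sketch of the usual installation steps via the Kubernetes apt repository (the repository URL and key location change over time; check the official Kubernetes documentation for the current ones):

```shell
# Add the Kubernetes apt repository and install the tools on BOTH nodes
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubeadm kubectl kubelet
# Prevent unattended upgrades from changing the cluster version
sudo apt-mark hold kubeadm kubectl kubelet
```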
Good, all the required packages are installed on both servers.
Now it is time to configure the Kubernetes master node. First, initialize your cluster using its private IP address with the following command:
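A typical invocation (the advertise address is an example for XX=01; replace it with your master VM's IP, and note that the pod network CIDR below is the flannel default used later in this class):

```shell
# Initialize the control plane on the master node
sudo kubeadm init \
  --apiserver-advertise-address=192.168.27.11 \
  --pod-network-cidr=10.244.0.0/16
```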
You should see the following output:
You have to save the 'kubeadm join ... ... ...' command: it will be used to register new worker nodes to the Kubernetes cluster.
To start using your cluster, run the commands shown in the kubeadm output (as root).
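These are the standard commands printed by kubeadm init, which copy the admin kubeconfig into your home directory:

```shell
# Make kubectl usable for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```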
We now check the status of the master node by running the following command:
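The standard command for listing cluster nodes and their status:

```shell
kubectl get nodes
```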
You can observe from the above output that the master node is listed as NotReady. This is because the cluster does not yet have a Container Network Interface (CNI) plugin.
Next, deploy the flannel network to the kubernetes cluster using the kubectl command.
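One way to deploy flannel, applying its manifest straight from the project repository (the manifest URL has moved over time; check the flannel project page for the current one):

```shell
# Deploy the flannel CNI add-on to the cluster
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```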
Wait a minute, then check the Kubernetes nodes and pods using the commands below.
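The usual commands for this check:

```shell
# List the nodes, then all pods in every namespace
kubectl get nodes
kubectl get pods --all-namespaces
```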
You should see the following output:
You should see that the master node now has status 'Ready' and that control-plane pods such as 'kube-scheduler-master' are running.
Kubernetes cluster master initialization and configuration is now complete.
Next, we need to log in to the slave node and add it to the cluster. Recall the join command from the output of the master node initialization and issue it on the slave node as shown below:
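The join command has the following general shape; the token and hash here are placeholders, so use the exact command you saved from your own kubeadm init output:

```shell
# Run on the slave node, substituting your saved token and CA cert hash
sudo kubeadm join 192.168.27.11:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```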
Once the Node is joined successfully, you should see the following output:
Now, go back to the master node and run the command “kubectl get nodes” to see that the slave node is now ready:
We will now deploy a small application, an Nginx web server, in our Kubernetes cluster using YAML templates. Create a new directory named 'nginx' and go to that directory.
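In the shell, this is simply:

```shell
# Create a working directory for the Nginx manifests and enter it
mkdir nginx
cd nginx
```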
Now, you have to create the Nginx Deployment YAML file 'nginx-deployment.yaml' and paste the following content.
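The original template is not reproduced here; the following is a minimal example Deployment (2 replicas of the stock nginx image, with an assumed label `app: nginx`), written to the file via a heredoc:

```shell
# Create a minimal Nginx Deployment manifest (example content)
cat > nginx-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
EOF
```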
You can create the deployment by running the kubectl command below.
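The standard way to apply a manifest file:

```shell
kubectl apply -f nginx-deployment.yaml
```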
After creating a new 'nginx-deployment', check the deployments list inside the cluster.
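The deployments in the current namespace can be listed with:

```shell
kubectl get deployments
```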
Check which node each Pod is running on:
Check your pods' IPs:
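Both pieces of information come from the wide output format, which adds a NODE and an IP column for each pod:

```shell
kubectl get pods -o wide
```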
We need to create a new service for our 'nginx-deployment'. Therefore, create a new YAML file named 'nginx-service.yaml' with the following content.
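The original service template is not reproduced here; the following is a minimal example ClusterIP Service selecting the assumed `app: nginx` label from the Deployment:

```shell
# Create a minimal Service manifest for the Nginx deployment (example content)
cat > nginx-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
```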
To list the pods created by the deployment:
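Assuming the Deployment labels its pods `app: nginx`, they can be selected by label:

```shell
kubectl get pods -l app=nginx
```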
Create the Kubernetes service using the kubectl command below.
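As with the Deployment, the manifest is applied with:

```shell
kubectl apply -f nginx-service.yaml
```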
Now check all available services on the cluster; you should see 'nginx-service' in the list. Then check the details of the service.
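The usual commands for listing services and inspecting one in detail:

```shell
# List all services, then show full details of the Nginx service
kubectl get services
kubectl describe service nginx-service
```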
Accessing the service
Copy the ClusterIP and use it to access your application.
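From the master node, a simple HTTP request should return the Nginx welcome page (replace the placeholder with the ClusterIP reported by kubectl):

```shell
# <cluster-ip> is a placeholder for your service's actual ClusterIP
curl http://<cluster-ip>
```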
You will demonstrate that you followed the session by deploying the Tomcat architecture from the last class in your Kubernetes cluster (2 Tomcat instances and 1 service; no need to use the haproxy.cfg file).