How to Install Kubernetes Cluster on CentOS 7
=== Update the packages
yum update
=== Install the dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
=== Add and enable the official Docker repository on CentOS 7
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
=== Install the latest Docker version on CentOS 7
yum install -y docker-ce
=== Start and enable Docker
systemctl start docker
systemctl enable docker
=== Confirm that Docker is active
systemctl status docker
=== Set up the Kubernetes Repository
vi /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
=== Install Kubelet on CentOS 7
yum install -y kubelet
=== Install kubeadm and kubectl on CentOS 7
yum install -y kubeadm kubectl
=== Set hostnames
On the master node:
$ sudo hostnamectl set-hostname master-node
$ exec bash
Repeat the same step on the worker node with its own hostname (for example, node1).
=== Add nodes in the master Kubernetes server
$ vi /etc/hosts

<master-node-IP> master-node
<worker-node-IP> node1
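As a concrete sketch, with hypothetical addresses (192.168.1.10/11) standing in for the real node IPs, the entries can be appended like this. The example writes to a scratch file so it is safe to try; on the actual servers the target is /etc/hosts.

```shell
# Hypothetical IPs for illustration; substitute the real master/worker addresses.
# Writing to a scratch file here; on the actual nodes the target is /etc/hosts.
HOSTS=/tmp/hosts.demo
cat >> "$HOSTS" <<'EOF'
192.168.1.10 master-node
192.168.1.11 node1
EOF
cat "$HOSTS"
```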
=== Disable SELinux
$ sudo setenforce 0
$ sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
$ reboot
=== Add firewall rules
On the master node:
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
Note:
Matching rules must also be added on the worker node for the ports it uses:
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --reload
=== Update iptables config
vi /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

These settings require the br_netfilter kernel module to be loaded, then apply them:
$ sudo modprobe br_netfilter
$ sudo sysctl --system
=== Disable swap
$ sudo sed -i '/swap/d' /etc/fstab
$ sudo swapoff -a
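To see what the sed invocation actually removes, here is a sketch run against a scratch copy of a typical CentOS fstab (the device names are illustrative, not taken from a real system):

```shell
# Illustrative fstab contents; the device paths are hypothetical.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# Same deletion the tutorial applies to /etc/fstab: drop any line mentioning swap.
sed -i '/swap/d' /tmp/fstab.demo
cat /tmp/fstab.demo
```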
=== kubeadm initialization
$ sudo kubeadm init
Note:
Save the join token printed by kubeadm init in a safe place. If it is lost, it can be regenerated with:
$ kubeadm token create --print-join-command
=== Create required directories and start managing the Kubernetes cluster
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
=== Set up the Pod network for the cluster
$ kubectl get nodes
$ kubectl get pods --all-namespaces
Note:
If the master node shows NotReady, the CoreDNS pods are not running yet because no Pod network add-on is installed. To resolve this, install Weave Net:
$ export kubever=$(kubectl version | base64 | tr -d '\n')
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
Then check the node status again to confirm that the master node is Ready:
$ kubectl get nodes
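For clarity, the $kubever pipeline simply base64-encodes the kubectl version text with newlines stripped, so it can be passed as a URL query parameter. A small illustration with a stand-in version string (the real command queries the cluster):

```shell
# Stand-in for the `kubectl version` output; hypothetical value for illustration.
ver='Client Version: v1.23.0'

# Same transformation the tutorial uses: base64-encode and strip newlines.
kubever=$(printf '%s' "$ver" | base64 | tr -d '\n')
echo "$kubever"
```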
=== Download the code from the production server to the Kubernetes server and create a Dockerfile to build an image
$ vi Dockerfile

# Use the official Nginx image as the base image
FROM nginx:alpine
# Copy the provided files to the Nginx document root
COPY . /usr/share/nginx/html/
# Expose port 80 for HTTP
EXPOSE 80
# Start Nginx when the container starts
CMD ["nginx", "-g", "daemon off;"]
=== Build the Docker image
$ docker build -t my-node-app .
To list the Docker images:
$ docker images
=== Create a container from the Docker image
$ docker run -d -p 80:80 my-node-app
To list the Docker containers:
$ docker ps -a
=== Log in to Docker Hub
$ docker login -u username
Note: Provide an access token as the password if two-factor authentication is enabled.
=== Push the image to Docker Hub
$ docker tag image-name your-docker-hub-username/repository-name:tag
$ docker push your-docker-hub-username/repository-name:tag
=== Repeat steps 1 to 15 on the worker node
Note: Replace the IP and hostname where required, as appropriate for the worker node.
=== Join the worker node to the cluster
On the worker node, run the join command saved from kubeadm init (or regenerated with kubeadm token create --print-join-command), substituting the master node's IP:
$ sudo kubeadm join <master-node-ip>:6443 --token <token> --discovery-token-ca-cert-hash <ca-cert-hash>
For example:
$ sudo kubeadm join 69.30.219.165:6443 --token edd9qs.5240fnpey1a4kctw --discovery-token-ca-cert-hash sha256:bd4ceb4caf0615872583d540248439a46c50d1d104da3704f597c0817c0d1458
=== On the master node, verify the nodes
$ kubectl get nodes
=== Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dsmterminal.milta.be
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dsmterminal.milta.be
  template:
    metadata:
      labels:
        app: dsmterminal.milta.be
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: dsmmilta-terminal-container
          image: pheonixsolutions/dsm-milta-terminal:latest
          ports:
            - containerPort: 80 # Adjust the port based on your application requirements

# kubectl apply -f deployment.yaml
=== Ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations: {}
spec:
  ingressClassName: nginx
  rules:
    - host: dsmterminal.milta.be
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: dsmterminal
                port:
                  number: 80

# kubectl apply -f ingress.yaml

Note: The backend service name must match the Service defined below (dsmterminal), so that the Ingress routes traffic to the application.
=== Service.yaml

apiVersion: v1
kind: Service
metadata:
  name: dsmterminal
  labels:
    app: dsmterminal.milta.be
spec:
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: dsmterminal.milta.be
  type: ClusterIP

# kubectl apply -f service.yaml