DevOps Project: Jenkins CI/CD for Kubernetes Deployments

  DevOps Tech Hub


Kubernetes Cluster Setup


Set up a Kubernetes cluster on Amazon EC2

OS Requirements

Master and worker nodes must run one of the below operating systems:

  • Ubuntu 16.04+
  • Debian 9+
  • CentOS 7
  • Red Hat Enterprise Linux (RHEL) 7
  • Fedora 25+
  • Amazon Linux 2

Hardware Requirements

  • RAM: 2 GB or more
  • CPU: 2 cores or more

OS Configuration

  • Disable swap
    • # swapoff -a
  • Disable SELinux
    • [root@ ~]$setenforce 0
      setenforce: SELinux is disabled
      [root@ ~]$
      # sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  • Disable the firewall
    • # service iptables stop
  • Configure iptables to see bridged traffic (load the br_netfilter module first)
    • # modprobe br_netfilter
      # cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      # sudo sysctl --system
  • Verify that each node has a unique hostname, MAC address, and product_uuid
    • [root@ ~]$ifconfig | grep ether
      ether 06:b5:b0:04:34:45 txqueuelen 1000 (Ethernet)
      [root@ ~]$
    • [root@ ~]$cat /sys/class/dmi/id/product_uuid
      EC2D3281-B316-79C1-CB8E-79BC63D66FDC
      [root@ ~]$
  • Ensure network connectivity between all cluster nodes, including the master.
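
To keep swap disabled after a reboot, it is also common to comment out the swap entry in /etc/fstab. A minimal sketch, assuming a standard fstab layout:

# sed -i '/ swap / s/^/#/' /etc/fstab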

Network Configuration

Ensure the below ports are open on the Master and Worker nodes (these are the standard kubeadm port requirements).

Control-Plane Node (Master Node)

  • 6443: Kubernetes API server
  • 2379-2380: etcd server client API
  • 10250: Kubelet API
  • 10251: kube-scheduler
  • 10252: kube-controller-manager

Worker Node

  • 10250: Kubelet API
  • 30000-32767: NodePort Services
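
On EC2, these ports are normally opened through security group rules rather than a host firewall. A minimal sketch with the AWS CLI, assuming a hypothetical security group ID and the VPC CIDR used in this setup:

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 6443 \
    --cidr 172.31.0.0/16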

Install Packages

The below packages must be installed on all nodes (master and worker):

  • docker: Container runtime
  • kubeadm: Command to bootstrap the cluster
  • kubelet: Service running on all nodes that manages starting pods and containers
  • kubectl: Command-line utility to interact with the K8s cluster API server

Configure Kubernetes Repo:

1. Run the below command to add the Kubernetes repository to yum.

# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

2. Run the below command to install the packages.

# yum install docker kubeadm kubectl kubelet --disableexcludes=kubernetes
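
As a quick sanity check, you can confirm the tools installed correctly (the exact versions will depend on what the repo currently serves):

# docker --version
# kubeadm version -o short
# kubectl version --client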

Enable Services to start after reboot

[root@ ~]$chkconfig docker on
Note: Forwarding request to 'systemctl enable docker.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@ ~]$chkconfig kubelet on
Note: Forwarding request to 'systemctl enable kubelet.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@ ~]$

Start Docker Runtime

[root@ ~]$ service docker start
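
You can confirm the daemon is running and see which cgroup driver it uses; Docker defaults to cgroupfs, which the next step switches to systemd (sample output, shown for illustration):

[root@ ~]$ docker info | grep -i 'cgroup driver'
Cgroup Driver: cgroupfs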

Kubernetes Cluster Setup

Master Node:

Configure cgroup driver for Docker

1. Run the below command to configure the systemd cgroup driver for the Docker runtime, as kubeadm recommends. (Do this on the worker nodes as well; 'kubeadm join' warns when the driver is still cgroupfs, as seen in the join output below.)
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
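
After writing daemon.json, restart Docker so the systemd cgroup driver takes effect; otherwise kubeadm's preflight checks will still detect cgroupfs:

# sudo systemctl restart docker
# sudo docker info | grep -i 'cgroup driver'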

Initialize the K8s Master

Run the below command on the Master node to initialize the Kubernetes cluster. Note that --pod-network-cidr should not overlap with the node/VPC network; 172.31.0.0/16 used here is also the default EC2 VPC range, so a non-overlapping range such as 192.168.0.0/16 is generally safer.

[root@ ]$kubeadm init --pod-network-cidr=172.31.0.0/16
W0703 09:06:54.218383    1877 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.5
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-77-56.ec2.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.77.56]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-77-56.ec2.internal localhost] and IPs [172.31.77.56 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-77-56.ec2.internal localhost] and IPs [172.31.77.56 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0703 09:07:11.404192 1877 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0703 09:07:11.405230 1877 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.502283 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-172-31-77-56.ec2.internal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ip-172-31-77-56.ec2.internal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r5k7h2.rzlkqshp8flvwuvs
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.77.56:6443 --token r5k7h2.rzlkqshp8flvwuvs \
--discovery-token-ca-cert-hash sha256:5c17ac5e4649ce9d9314c4591430ef27b620a6e72f7066b8279b8b4dec891773
[root@ ]$

Configure kubectl to run as a normal user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Apply Pod network (Calico)

[root@ ~]$kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@ ~]$
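
Before joining the worker nodes, you can watch the Calico and CoreDNS pods come up and wait for them to reach the Running state (pod names and timings will differ per cluster):

[root@ ~]$kubectl get pods -n kube-system -w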

Worker Nodes:

Join Worker Nodes

Run the 'kubeadm join' command with the token on all three worker nodes to join them to the Kubernetes cluster. The 'kubeadm join' command, including the token details, is printed at the end of the 'kubeadm init' output above.
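
If that output is no longer available, or the bootstrap token has expired (tokens are valid for 24 hours by default), you can print a fresh join command on the Master node:

[root@ ~]$kubeadm token create --print-join-command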

[root@ ~]$kubeadm join 172.31.77.56:6443 --token r5k7h2.rzlkqshp8flvwuvs --discovery-token-ca-cert-hash sha256:5c17ac5e4649ce9d9314c4591430ef27b620a6e72f7066b8279b8b4dec891773
W0703 09:22:11.181374 1114 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@ ~]$

Congratulations!! You have successfully configured a Kubernetes cluster with three worker nodes. Let's verify the cluster by running a few commands on the Master node.

Master Node:

View Cluster configuration

[root@ ~]$kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.31.77.56:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@ ~]$

Verify Worker Nodes

[root@ ~]$kubectl get nodes
NAME                            STATUS   ROLES    AGE     VERSION
ip-172-31-66-17.ec2.internal    Ready    <none>   6m12s   v1.18.5
ip-172-31-72-37.ec2.internal    Ready    <none>   6m4s    v1.18.5
ip-172-31-76-168.ec2.internal   Ready    <none>   6m29s   v1.18.5
ip-172-31-77-56.ec2.internal    Ready    master   21m     v1.18.5
[root@ ~]$

Run Deployment

Below is the Deployment manifest definition file to create application instances as Pods.

#Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kloudways-deploy
  labels:
    app: kloudways-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kloudways-app
  template:
    metadata:
      labels:
        app: kloudways-app
    spec:
      containers:
      - name: kloudways-container
        image: dockertest/dpt:1.0
        ports:
        - containerPort: 8080
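
Save the manifest to a file and apply it, then confirm that all three replicas are running (the filename kloudways-deploy.yaml below is just an example):

[root@ ~]$kubectl apply -f kloudways-deploy.yaml
[root@ ~]$kubectl get deployment,pods -l app=kloudways-app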

Run Service Deployment

Below is the Service manifest definition file to create a service IP for the application instances.

#Service Type NodePort
apiVersion: v1
kind: Service
metadata:
  name: kloudways-service
  labels:
    app: kloudways-app
spec:
  selector:
    app: kloudways-app
  type: NodePort
  ports:
  - nodePort: 31000
    port: 8080
    targetPort: 8080
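
Apply the Service the same way and test it; with type NodePort, the application should answer on port 31000 of every node, provided that port is open in the EC2 security group (the filename and <node-ip> are placeholders):

[root@ ~]$kubectl apply -f kloudways-service.yaml
[root@ ~]$kubectl get svc kloudways-service
[root@ ~]$curl http://<node-ip>:31000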


