Elastic Kubernetes Service (EKS)

What is AWS EKS?
EKS is an AWS service that provides a Kubernetes cluster as a service. It is a fully managed offering: customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.
The Kubernetes control plane plays a crucial role in a Kubernetes deployment as it is responsible for how Kubernetes communicates with the cluster — starting and stopping new containers, scheduling containers, performing health checks, and many more management tasks.
The big benefit of EKS is that it takes away the operational burden of running this control plane. You simply deploy the cluster's worker nodes using the defined AMIs with the help of CloudFormation, and EKS provisions, scales, and manages the Kubernetes control plane for you to ensure high availability, security, and scalability.
What are NodeGroups?
Suppose we want three slave/worker nodes, of which two should run on instance type t2.micro and one on t2.small. We can group the nodes that share an instance type into a NodeGroup: a NodeGroup specifies how many nodes of the same instance type should run together. (A minimal sketch follows below.)
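As a minimal sketch of that scenario, the same grouping could also be created imperatively with eksctl (the cluster name mycluster is a placeholder; the config-file approach used later in this article achieves the same thing):
# Hypothetical cluster name; two t2.micro nodes in one group, one t2.small node in another
eksctl create nodegroup --cluster mycluster --name ng-micro --node-type t2.micro --nodes 2
eksctl create nodegroup --cluster mycluster --name ng-small --node-type t2.small --nodes 1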
Set up before starting with Amazon EKS:
- AWS CLI : We can use the AWS Console to create a cluster in EKS, but the AWS CLI is easier to interact with.
- Kubectl : Used for communicating with the cluster API server. This endpoint is public by default, but it is secured by proper configuration of a VPC.
- Eksctl : eksctl is a simple CLI tool for creating clusters on EKS. It creates a basic cluster in minutes with just one command (an install sketch follows this list).
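One common way to install eksctl on Linux, sketched from the upstream release archive (adjust for your OS):
# Download the latest eksctl release and move the binary onto the PATH
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin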
Task Description
Create a Kubernetes cluster on top of a public cloud, i.e., AWS, using its built-in Elastic Kubernetes Service (EKS). This service internally creates and manages all the slave/worker nodes. Then create a Kubernetes Deployment, deploy our website through it, and make the Deployment's data persistent so that there is no data loss and changes to the code are reflected in real time. Here I am deploying the NextCloud application on Kubernetes using Amazon Elastic Kubernetes Service, with monitoring done by Prometheus and visual representation by Grafana.
Steps to be followed:
Step1: Create an IAM user
We need to create a Kubernetes cluster on AWS using EKS. There are three ways to do that, but we'll be using the CLI option.
We need an IAM user with Administrator Access, or the root user.
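If you want to create this user from the CLI as well, a hedged sketch (the user name eks-admin is illustrative):
# Create an IAM user, attach the AdministratorAccess managed policy, and issue access keys
aws iam create-user --user-name eks-admin
aws iam attach-user-policy --user-name eks-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name eks-admin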

Step2: Log in to the AWS account through the CLI as the IAM user

After installing eksctl, check the version.
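A minimal sketch of both steps (the credential values prompted for are placeholders; never commit real keys):
# Configure the CLI with the IAM user's access keys and a default region
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: ap-south-1
eksctl version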

Step3: Create Kubernetes Cluster
In the cluster file we specify how many node groups and nodes we want, and of which instance type.
# cluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mymaincluster
  region: ap-south-1
nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: MyOsKey
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t2.micro"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: MyOsKey
This cluster configuration file creates two nodegroups, ng1 and ng-mixed. eksctl supports spot instances through the MixedInstancesPolicy for Auto Scaling Groups; ng-mixed is an example of a nodegroup that uses 50% spot instances and 50% on-demand instances.
On-Demand Instance: AWS On-Demand Instances are virtual servers that run in AWS Elastic Compute Cloud (EC2) or AWS Relational Database Service (RDS) and are purchased at a fixed rate per hour.
Spot Instance: A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts, you can lower your Amazon EC2 costs significantly. The hourly price for a Spot Instance is called a Spot price.
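Because maxPrice in the config above is a bid against the current spot market, it can help to check recent spot prices first; a hedged sketch using the EC2 price-history API:
# Show the last few spot prices for t2.micro Linux instances in the current region
aws ec2 describe-spot-price-history --instance-types t2.micro --product-descriptions "Linux/UNIX" --max-items 5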
Step4: Create Cluster
We run this cluster.yml file using the eksctl command. eksctl internally contacts AWS services such as EC2 for the nodes and creates the entire cluster in EKS: it builds the base Amazon VPC architecture and then the master control plane. Creating the cluster typically takes 10 to 15 minutes.
eksctl create cluster -f cluster.yml
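Once the command finishes, a quick CLI sanity check (eksctl writes the kubeconfig by default, so kubectl should already reach the cluster):
# List EKS clusters in the region and the worker nodes that joined
eksctl get cluster --region ap-south-1
kubectl get nodes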

We can also check from the web UI that the cluster has been created.

EKS contacts CloudFormation to set up the stacks for the cluster.

CloudFormation connects to EC2 to launch the nodes/instances.

Step5: Update Config File
We have to make some changes in the kubeconfig file used by the kubectl command, so that kubectl is configured for the EKS cluster.
aws eks update-kubeconfig --name mymaincluster

Step6: View Config file
To check whether the file has been updated:
kubectl config view

Step7: Create a namespace to launch the application
Namespaces are used to organize cluster resources into logical groups and to prevent name collisions, which can occur especially when multiple applications or teams share a cluster.
kubectl create namespace new123
kubectl config set-context --current --namespace=new123

To check the cluster information:
kubectl cluster-info

Step8: Creating NextCloud File
To launch the NextCloud application we create nextcloud_deployment.yml, which runs on one of the nodes of our EKS cluster. This YAML file contains all the configuration for the NextCloud pod and consists of three parts: Service, PVC, and Deployment. The Deployment contains the replica settings, container specification, and image details. The PVC requests a persistent volume of size 1 GiB; this persistent volume uses EBS (Elastic Block Storage) to store the data. The volume is mounted at /var/www/html, since NextCloud stores all its data there. The last part is the Service, which exposes the application.
# nextcloud_deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: nextcloud
  labels:
    app: nextcloud
spec:
  ports:
    - port: 80
      nodePort: 30001
  selector:
    app: nextcloud
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nextcloud-pv-claim
  labels:
    app: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
  labels:
    app: nextcloud
spec:
  selector:
    matchLabels:
      app: nextcloud
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nextcloud
        tier: frontend
    spec:
      containers:
        - image: nextcloud:latest
          name: nextcloud
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-pass
                  key: password
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadbuser-pass
                  key: password
            - name: MYSQL_USER
              value: shailja
            - name: MYSQL_DATABASE
              value: mydb
          ports:
            - containerPort: 80
              name: nextcloud
          volumeMounts:
            - name: nextcloud-ps
              mountPath: /var/www/html
      volumes:
        - name: nextcloud-ps
          persistentVolumeClaim:
            claimName: nextcloud-pv-claim
Step9: Creating MariaDB File
To store the data of NextCloud's users we create a MariaDB database, which works as the back end for our application. For this I wrote mariadb_deployment.yml. The MariaDB deployment is the most critical one for us, since it relies on the Kubernetes Secrets referenced in this code.
# mariadb_deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: nextcloud-mariadb
  labels:
    app: nextcloud
spec:
  ports:
    - port: 3306
  selector:
    app: nextcloud
    tier: mariadb
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-pv-claim
  labels:
    app: nextcloud
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud-mariadb
  labels:
    app: nextcloud
spec:
  selector:
    matchLabels:
      app: nextcloud
      tier: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nextcloud
        tier: mariadb
    spec:
      containers:
        - image: mariadb:latest
          name: mariadb
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadb-pass
                  key: password
            - name: MYSQL_USER
              value: shailja
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mariadbuser-pass
                  key: password
            - name: MYSQL_DATABASE
              value: mydb
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mariadb-ps
              mountPath: /var/lib/mysql
      volumes:
        - name: mariadb-ps
          persistentVolumeClaim:
            claimName: mariadb-pv-claim
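Both manifests reference the Secrets mariadb-pass and mariadbuser-pass. The kustomization file in the next step generates them, but for reference they could also be created by hand (the password literals match the ones used below):
kubectl create secret generic mariadb-pass --from-literal=password=shailja
kubectl create secret generic mariadbuser-pass --from-literal=password=redhat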
Now we create the kustomization.yml file, as it lets us deploy the whole setup with just one command, among a few other conveniences.
Step10: Creating Kustomization file
The kustomization.yml file declares the customization provided by the kustomize program. Since customization is, by definition, custom, there are no default values that should be copied from this file and none that are recommended.
# kustomization.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mariadb-pass
    literals:
      - password=shailja
  - name: mariadbuser-pass
    literals:
      - password=redhat
resources:
  - mariadb_deployment.yml
  - nextcloud_deployment.yml
Deploy the whole setup using this command.
kubectl create -k .
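To watch the rollout and find the public URL of the Service, a sketch (assuming the current context is still set to the new123 namespace):
# Check pods, services and volume claims, then read the ELB hostname off the Service
kubectl get pods,svc,pvc
kubectl get svc nextcloud -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'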


EKS now provides us a public endpoint, in the form of a URL, for our NextCloud application. By using this URL we can access NextCloud.



Monitoring the cluster running in EKS using Prometheus, with visual representation by Grafana.
For the initial setup of Prometheus and Grafana we require Helm .
What is Helm?
Just as we use yum on Linux to install packages for an application, in Kubernetes we use Helm to install packages. In Kubernetes, packages are known as charts.
What is Client-Helm?
We run the helm command to install packages, so Helm acts as the client side, installing packages from the location where they are hosted.
What is Server-Tiller?
The server side from which Helm downloads/installs the charts/packages is known as "Tiller".
Step11: Initializing Helm
helm init

helm repo list

Step12: Creating Tiller Service
helm repo update

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

kubectl get pods --namespace kube-system

helm repo add stable https://kubernetes-charts.storage.googleapis.com/

What is Prometheus?
Prometheus is one of the tools used to monitor a Kubernetes cluster. It scrapes metrics from instrumented jobs. It is designed for reliability, to be the system you go to during an outage to quickly diagnose problems. Each Prometheus server is standalone, not depending on network storage or other remote services.
Step13: Installing Prometheus
We install Prometheus on Kubernetes using the Helm repository. For this we create a namespace called prometheus, inside which we launch the Prometheus server pod.
kubectl create namespace prometheus

helm install stable/prometheus \
  --namespace prometheus \
  --set alertmanager.persistentVolume.storageClass="gp2" \
  --set server.persistentVolume.storageClass="gp2"

kubectl get svc -n prometheus

To expose Prometheus for client access:
kubectl -n prometheus port-forward svc/flailing-buffalo-prometheus-server 8888:80

Now we can access Prometheus at 127.0.0.1:8888, monitor our cluster, and obtain the results in graphical form.


What is Grafana?
Grafana allows us to query, visualize, alert on, and understand our metrics no matter where they are stored. We can create, explore, and share dashboards with the team and foster a data-driven culture.

Step14: Grafana Set-up
Now we want Grafana to use the Prometheus time-series database for analyzing and visualizing the condition of the nodes. Prometheus collects the real-time data of the cluster's nodes/instances.
kubectl create namespace grafana

helm install stable/grafana \
  --namespace grafana \
  --set persistence.storageClassName="gp2" \
  --set adminPassword='GrafanaAdm!n' \
  --set datasources."datasources\.yaml".apiVersion=1 \
  --set datasources."datasources\.yaml".datasources[0].name=Prometheus \
  --set datasources."datasources\.yaml".datasources[0].type=prometheus \
  --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local \
  --set datasources."datasources\.yaml".datasources[0].access=proxy \
  --set datasources."datasources\.yaml".datasources[0].isDefault=true \
  --set service.type=LoadBalancer

kubectl get secret tinseled-rodent-grafana --namespace grafana -o yaml
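To actually log in, we can decode the admin password from that secret and read the load balancer URL off the Grafana service (a sketch; tinseled-rodent is the auto-generated release name from the install above, and admin-password is the key the stable/grafana chart conventionally uses):
kubectl get secret --namespace grafana tinseled-rodent-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
kubectl get svc --namespace grafana tinseled-rodent-grafana -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'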

After this we can connect to the Grafana portal. In the Grafana portal we have to add our data source type: our data source is Prometheus, so we give the IP/URL of the Prometheus server where it is running.

What is AWS Fargate?
AWS Fargate is a managed compute engine for Amazon ECS that can run containers. With Fargate we don't need to manage servers or clusters.
ECS (Elastic Container Service) is a service used to manage containers, and Fargate is a subservice of ECS. Fargate creates a serverless architecture.
What is Serverless Architecture?
When kubectl (the client) sends a request to the master to launch a pod, the master creates a worker node with the resources required to launch the pod on it.
With Fargate, the cluster doesn't have predefined/pre-created worker nodes like we have in plain EKS; a node is created only when a client request to launch a pod comes in.
That is the reason why a cluster created by Fargate is known as serverless.
Creating the Fargate cluster file in YAML
# fcluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: far-cluster
  region: ap-southeast-1
fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
Run the fcluster.yml file to create the Fargate cluster:
eksctl create cluster -f fcluster.yml
Cluster launched

Update the config file.
aws eks --region ap-southeast-1 update-kubeconfig --name far-cluster
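A quick way to see the serverless behaviour (hedged: on EKS with Fargate, each pod is scheduled onto its own Fargate-managed node, so nodes appear only once pods are running):
kubectl get nodes -o wide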

Now we can deploy the same MariaDB and NextCloud setup by running the kustomization.yml file on top of the Fargate cluster. The only difference is that this time both the master and the slave/worker nodes are entirely managed for us, i.e., a complete serverless architecture.
Step15: To Delete the entire Cluster
eksctl delete cluster -f fcluster.yml

It will take around 15–20 minutes to delete the entire cluster.
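The same applies to the first cluster; assuming cluster.yml is still in the working directory:
eksctl delete cluster -f cluster.yml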