Dynamic Jenkins Node Cluster with Kubernetes
Big companies need to roll out their updates without even a single second of downtime. Is it possible to roll out our updates on the fly, without the client even noticing while using the application? Can we make all of this automated? Yes we can, and that is one of the biggest needs of today's market.
Task Description:
1. Create a container image that has Linux and the other basic configuration required to run a Jenkins slave. (In this example we need kubectl to be configured.)
2. When we launch a job, it should automatically start on a slave based on the label provided, for a dynamic approach.
3. Create a job chain of Job1 and Job2 using the Build Pipeline plugin in Jenkins.
4. Job1: Pull the GitHub repository automatically when a developer pushes code to GitHub, and perform the following operations:
4.1 Create a new image dynamically for the application and copy the application code into that Docker image.
4.2 Push that image to Docker Hub (a public repository).
(The GitHub repository contains the application code and the Dockerfile used to create the new image.)
5. Job2 (should run on the dynamic Jenkins slave configured with the Kubernetes kubectl command): Launch the application on top of the Kubernetes cluster, performing the following operations:
5.1 If launching for the first time, create a Deployment of the pod using the image created in the previous job. If the Deployment already exists, roll out the new image to the existing pods with zero downtime for the user.
5.2 If the application is being created for the first time, expose it. Otherwise, don't expose it again.
What is required from Job1?
Job1 would first of all download the GitHub code, and then what? It would dynamically create an image for the application whose code the developer pushed, which in our case is a webserver. That is, we need Job1 to build an image for web hosting and, obviously, copy the code into the required directory inside our webserver container. After creating the image, Job1 will push it to the Docker registry so that everyone can access it publicly.
Creating a Dockerfile by which Jenkins would build an image for the webserver.
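The original Dockerfile screenshot is not reproduced here. A minimal sketch of such a webserver Dockerfile, assuming an Apache httpd server on a CentOS base (the exact base image and web root in the original may differ), could look like:

```dockerfile
# Hypothetical Dockerfile for the webserver image built by Job1.
# Base image and paths are assumptions; adjust to your project.
FROM centos:7

# Install the Apache webserver
RUN yum install httpd -y

# Copy the application code pushed by the developer into the web root
COPY . /var/www/html/

EXPOSE 80

# Run httpd in the foreground so the container keeps running
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

Whatever base you pick, the essential steps are the same: install a webserver and copy the developer's code into its document root.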


The above commands first let the Dockerfile take the code uploaded by the developer and deploy it in the webserver, and then push the image to the public registry, i.e. Docker Hub. For that you first need to log in with the "docker login" command, where you provide the username and password of your Docker Hub account, and then push the image with the "docker push" command. This completes Job1.
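Put together, Job1's shell build step could look like the following fragment. The image name khushi09/test is taken from the deployment step later in this article; the credential placeholders are illustrative, and in a real job you would inject credentials via Jenkins rather than hard-coding them:

```
# Sketch of Job1's shell build step.
# Tag the image with the Jenkins build number so every build is unique.
docker build -t khushi09/test:${BUILD_NUMBER} .

# Log in to Docker Hub (replace the placeholders; prefer Jenkins
# credentials bindings over plain-text passwords).
docker login -u <dockerhub-username> -p <dockerhub-password>

# Push the freshly built image to the public registry
docker push khushi09/test:${BUILD_NUMBER}
```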
We are using Kubernetes to manage our containers because if a container dies or gets corrupted, Docker by itself won't bring it up again. So for smart management of containers we need Kubernetes. The resource we are going to use is a Deployment.
The Dockerfile below builds the Jenkins slave image: an Ubuntu base with SSH (so Jenkins can connect to it), Java (needed to run the Jenkins agent), and kubectl.
FROM ubuntu:16.04

RUN apt-get update && apt-get install -y openssh-server
RUN apt-get install openjdk-8-jre -y
RUN mkdir /var/run/sshd
RUN echo 'root:redhat' | chpasswd
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

# kubectl setup
RUN apt-get install curl -y
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl

EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
In the real world we always have limited computing power, i.e. CPU/RAM/storage, so we create a setup where the user has just one endpoint to connect to, while behind the scenes we use slave nodes purely for their computing power. This makes our process faster and allows us to run many jobs in parallel. The dynamic slave-node setup lets us launch a node as and when demand comes and use it for running the jobs. When the demand is fulfilled, the slave node is terminated on the fly.
So before configuring the slave node we need to make some changes to our Docker setup, because Docker is the tool used behind the scenes to launch the slave nodes.

We need to edit the Docker configuration file so that clients from any IP can connect to Docker on the specified port.
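One common way to do this on a systemd-based host (the port number 4243 is an assumption; any free port works, as long as it matches the Docker URL you configure in the Jenkins cloud settings) is to edit the ExecStart line of the Docker service unit and restart the daemon:

```
# In /usr/lib/systemd/system/docker.service, change the ExecStart line
# so dockerd also listens on a TCP socket (0.0.0.0 = any IP):
#
#   ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:4243

# Then reload systemd and restart Docker to apply the change:
sudo systemctl daemon-reload
sudo systemctl restart docker
```

Note that exposing the Docker socket without TLS is insecure outside a lab environment.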


For the Deployment and the Service configuration we created two config files:
1. deployment.yaml
2. service.yaml
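The contents of the two files are not reproduced above. Minimal sketches could look like the following, where the names, labels, and ports are assumptions chosen to fit the script below (the Deployment name "website" matches the grep, and the image line is the one rewritten by sed on every build):

```yaml
# deployment.yaml -- sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 2
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: website
        image: khushi09/test:1
        ports:
        - containerPort: 80
```

```yaml
# service.yaml -- sketch; exposes the Deployment's pods on a NodePort
apiVersion: v1
kind: Service
metadata:
  name: website-svc
spec:
  type: NodePort
  selector:
    app: website
  ports:
  - port: 80
    targetPort: 80
```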

When Job2 starts, it triggers the creation of a slave node with the required label. Job2 then does its work of deploying the pods as per the requirement.

In Job2's Execute shell build step:
sudo sed -i "s/image.*/image: khushi09\/test:${BUILD_NUMBER}/" /kubernetes/deployment.yaml
if kubectl get deployment | grep website
then
kubectl replace -f /kubernetes/deployment.yaml
else
kubectl create -f /kubernetes/deployment.yaml
fi
This creates the Deployment the first time, and for a rolling update it replaces the existing Deployment; since the pod template's image changed, Kubernetes rolls the pods over to the new image with zero downtime.
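To see what the sed line does in isolation, here is a small self-contained demo. The file path and BUILD_NUMBER value are simulated; in the real job Jenkins injects BUILD_NUMBER automatically:

```shell
# Simulate the image-tag rewrite that Job2 performs on deployment.yaml.
# Jenkins normally sets BUILD_NUMBER; we fake it here for the demo.
BUILD_NUMBER=7
cat > /tmp/deployment-demo.yaml <<'EOF'
      containers:
      - name: website
        image: khushi09/test:1
EOF

# Same substitution as the job, using | as the sed delimiter to avoid
# escaping the slash in the image name.
sed -i "s|image:.*|image: khushi09/test:${BUILD_NUMBER}|" /tmp/deployment-demo.yaml

# The image line now carries the current build number:
grep "image:" /tmp/deployment-demo.yaml   # prints: image: khushi09/test:7
```

Because each build writes a unique tag into the pod template, every `kubectl replace` is guaranteed to change the Deployment and trigger a rollout.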