Website Deployment over AWS Cloud Automated using Terraform.

Shailja Tripathi
8 min read · Jun 15, 2020


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user.

What is AWS?
AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon.
Different types of services offered by AWS:
1. IaaS (Infrastructure as a Service)
2. PaaS (Platform as a Service)
3. SaaS (Software as a Service)

What is Terraform?
Terraform is an open-source infrastructure-as-code tool from HashiCorp that provides a workflow for writing and building infrastructure as code. Sharing infrastructure as code empowers a team to rapidly review, comment on, and iterate on it. The tool helps in building, changing, and versioning infrastructure safely and efficiently. Terraform is used to manage infrastructure on various cloud platforms, based on configuration files that control the creation, changing, and destruction of all resources.

I have created an infrastructure that brings together various AWS services. We have to combine EC2, EBS, S3, key pairs, security groups, CloudFront (CDN), and more to run our application on the cloud.

Problem Statement:

1. Create the key and security group which allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Launch one EBS volume and mount it onto /var/www/html.
5. The developer has uploaded the code into a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Prerequisites:

1. An AWS account. If you do not have one, go to https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/

2. Terraform download: https://www.terraform.io/downloads.html

3. AWS CLI v2 download: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

4. Run aws configure (to set up AWS credentials for the AWS CLI)

Solution:

STEP 1: Profile Creation

First, we have to install the AWS CLI and Terraform on the host system; in my case I am using Windows. Once the installation is done, go to the AWS Console, open IAM under Services, and add a new user. After the profile is created, we have to create a configuration file so that Terraform can access your AWS account, so run this command in the command prompt:

aws configure --profile profilename

The profile specifies which account you are logging in from; Terraform picks up the credentials for that account from your local system.

Here, my profile name is myshailja, which is used in the Terraform provider block.
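
For reference, the interactive prompts look roughly like this (the access key, secret key, and output format shown here are placeholders; the region matches the provider block used later):

aws configure --profile myshailja
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: ap-south-1
Default output format [None]: json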

STEP 2: Creating a Key Pair

We can use a pre-created key that is already downloaded on our system, or create a new one.
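
If you would rather have Terraform create the key pair as well, a minimal sketch using the tls, aws_key_pair, and local_file resources could look like the following. This is illustrative and not part of the original setup; the article itself uses a pre-created key named MyOsKey stored at C:/Users/HP/Desktop/MyOsKey.pem, and the resource names below are my own.

#key_pair (illustrative sketch, not part of the original setup)
resource "tls_private_key" "webkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "webkey" {
  key_name   = "MyOsKey"
  public_key = tls_private_key.webkey.public_key_openssh
}

# save the private key locally so the SSH connection blocks below can read it
resource "local_file" "webkey_pem" {
  content         = tls_private_key.webkey.private_key_pem
  filename        = "C:/Users/HP/Desktop/MyOsKey.pem"
  file_permission = "0400"
}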

STEP 3: Creating a new Git repository

GitHub file: index.html

URL: https://github.com/shailja025/teraform1.git

Starting with the coding part:

STEP 4: Specifying the Provider

#provider
provider "aws" {
  region  = "ap-south-1"
  profile = "myshailja"
}

The provider block specifies which cloud provider we are going to use. Terraform has the same syntax for all cloud platforms and downloads the plugins for the chosen provider. Here we are using AWS as the provider.

STEP 5: Creating a Security Group

#security_group
resource "aws_security_group" "http" {
  name   = "allow_http"
  vpc_id = "vpc-98918cf0"

  ingress {
    description = "http"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ssh"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "ping"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_http"
  }
}

Ingress is the inbound traffic coming to our website, so we need to open the required ports. I have specified three ingress rules:

  • SSH so that we can connect to the EC2 instance remotely.
  • HTTP so that the traffic can connect to our website.
  • ICMP so that we can check the connectivity using the ping command.

Egress controls the outbound traffic; here it has been opened for all ports and protocols.

STEP 6: Launching the EC2 Instance

#instance_launch
resource "aws_instance" "task1" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "MyOsKey"
  security_groups = [ aws_security_group.http.name ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/HP/Desktop/MyOsKey.pem")
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "os1"
  }
}

After configuring the security group, the next step is to launch the EC2 instance using the security group and key created above. For launching a webserver we have to run some basic commands: install the httpd server and the git package, then start and enable the httpd service.

STEP 7: Creating an EBS Volume

#creating_EBS_volume
resource "aws_ebs_volume" "myebs1" {
  availability_zone = aws_instance.task1.availability_zone
  size              = 1
  tags = {
    Name = "volume1"
  }
}

Here we create a new EBS volume and attach it to our instance to make the data inside our webserver persistent. The availability zone must be the same as that of the instance, so we retrieve it from the instance resource. I have created a volume of size 1 GiB.

STEP 8: Format and mount the attached EBS volume and clone the Git repository

#attach_volume
resource "aws_volume_attachment" "myebs" {
  depends_on = [
    aws_ebs_volume.myebs1
  ]
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.myebs1.id
  instance_id  = aws_instance.task1.id
  force_detach = true
}

#mounting_and_cloning_git_repository
resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.myebs,
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/HP/Desktop/MyOsKey.pem")
    host        = aws_instance.task1.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/shailja025/teraform1.git /var/www/html"
    ]
  }
}

We have to log in remotely to the EC2 instance, then format the attached volume, mount it on /var/www/html, and copy the HTML files from our repository onto the EBS volume.

Force Detach: If the volume is mounted to a folder in the instance and we want to terminate it, initially we can't. This option therefore detaches the volume forcefully even if it is still mounted.

The git clone command is used to clone the GitHub code pushed by the developer into the /var/www/html folder.

STEP 9: Creating an S3 Bucket

#s3_bucket
resource "aws_s3_bucket" "mys3" {
  bucket = "shailja85"
  acl    = "public-read"
  tags = {
    Name = "bucket1"
  }
  versioning {
    enabled = true
  }
}

locals {
  s3_origin_id = "mys3Origin"
}

The name of an S3 bucket must be globally unique. The main role of the S3 bucket is to hold the image copied from a specific path on our system, so that the static data of the website can be served from anywhere in the world without latency issues.

STEP 10: Uploading to the S3 Bucket

#uploading_on_s3
resource "aws_s3_bucket_object" "s3obj" {
  depends_on = [
    aws_s3_bucket.mys3,
  ]
  bucket       = "shailja85"
  key          = "shailja.jpg"
  source       = "C:/Users/HP/Desktop/shailja.jpg"
  acl          = "public-read"
  content_type = "image/jpeg"
}

Here we upload the static data to the S3 bucket that we just created. Key is the name the object gets once it is uploaded to the bucket, and source is the local path of the file to be uploaded.

STEP 11: Creating a CloudFront Distribution

#cloud_front
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.mys3.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"

  logging_config {
    include_cookies = false
    bucket          = "shailja85.s3.amazonaws.com"
    prefix          = "myprefix"
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      headers      = ["Origin"]
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

output "cloudfront_ip_addr" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}

CloudFront is the AWS service that stores copies of our data in small edge data centres around the world to achieve low latency. Here it creates a CloudFront distribution using the S3 bucket in which we have stored the static assets of our site, like images and icons. The distribution provides one URL, and by using this URL we can access the objects inside the bucket.
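
Problem statement 8 also asks us to update the code in /var/www/html with this CloudFront URL. The article does not show that step explicitly; one possible sketch, assuming the shailja.jpg key uploaded above and the same SSH key path, is another null_resource that appends an image tag using the distribution's domain name:

#update_code_with_cloudfront_url (illustrative sketch, not shown in the original setup)
resource "null_resource" "update_html_with_cdn" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    null_resource.nullremote1
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/HP/Desktop/MyOsKey.pem")
    host        = aws_instance.task1.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      # append an <img> tag that loads shailja.jpg through the CloudFront domain
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/shailja.jpg\" width=\"400\">' | sudo tee -a /var/www/html/index.html"
    ]
  }
}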

STEP 12: Displaying the output on our local system (in Chrome)

#output_on_chrome
resource "null_resource" "IP_opening_on_chrome" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    aws_volume_attachment.myebs
  ]
  provisioner "local-exec" {
    command = "start chrome http://${aws_instance.task1.public_ip}/"
  }
}

After this, Chrome will directly open the website whose code we pushed to GitHub.

STEP 13: Download Plugins

Use terraform init to download the plugins, in the folder that contains the code file (.tf).

Plugins are downloaded for the particular cloud provider; these plugins are what make Terraform intelligent.

To check our code we can use terraform validate.

STEP 14: Run the Code

To run, use terraform apply -auto-approve

Just one command and Terraform will do everything for us!
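
For reference, the full sequence from the folder containing the .tf file is (run in the command prompt, one command at a time):

terraform init
terraform validate
terraform apply -auto-approve

Once you are done, terraform destroy -auto-approve (shown at the end of this article) tears everything down again.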

Outputs:

1. Webserver Created

2. Volumes Created

3. Security Group Created

4. Bucket Created

5. Data Uploaded inside the S3 bucket

6. CloudFront Distribution Created

7. Final Output on Chrome browser

To destroy this complete infrastructure, use terraform destroy -auto-approve

Thank you for reading!

Do leave your valuable feedback!
