Creating an Automated Infrastructure Setup on AWS using TERRAFORM!!

Rahul Prajapati
6 min read · Jun 14, 2020

Terraform:-

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter.

Terraform by HashiCorp, an AWS Partner Network (APN) Advanced Technology Partner and member of the AWS DevOps Competency, is an "infrastructure as code" tool similar to AWS CloudFormation that allows you to create, update, and version your Amazon Web Services (AWS) infrastructure.

Description:

We have to create/launch a web application using Terraform:

1. Create a key pair and a security group that allows ports 80 and 22.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it onto /var/www/html.

5. The developer has uploaded the code into a GitHub repo, along with another repo containing some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

9. Create a snapshot of the configured EBS volume.

Prerequisites:

Terraform set up on the base OS

Knowledge of AWS in the web UI and CLI mode.

Approach:

Create a separate folder for the web page code, and inside it create a Terraform file with the .tf extension. Then run terraform init in that folder so Terraform downloads the required provider plugins for it.

web.tf

The following steps are to be followed:

Step 1: Set up the AWS provider for Terraform. The provider needs our AWS configuration, so the region and profile are required.

//AWS provider
provider "aws" {
  region  = "ap-south-1"
  profile = "myrahul"
}

Step 2: Create the security group for the instance so clients can connect from other devices. AWS's default security settings do not allow connections from outside the host; there is a firewall protecting it, so we configure ingress rules that open the TCP ports for SSH and HTTP.

//security group creation for firewall

resource "aws_security_group" "mysecurity" {
  name        = "terra_security"
  description = "Allow SSH and HTTP"

  ingress {
    description = "allow SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "allow HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "terra_security"
  }
}

Step 3: Launch the instance with Terraform. Here an AMI ID specifies which kind of instance to create, and I have used a pre-created key pair together with the security group from step 2.
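
The instance resource itself is not shown above, so here is a minimal sketch of what it could look like; the AMI ID, instance type, and key name are placeholders rather than the author's exact values.

//EC2 instance launch (sketch: AMI ID, instance type and key name are placeholders)
resource "aws_instance" "myweb" {
  ami             = "ami-0447a12f28fddb066" //example Amazon Linux 2 AMI for ap-south-1; use your own
  instance_type   = "t2.micro"
  key_name        = "mycloudkey1"           //pre-created key pair
  security_groups = [aws_security_group.mysecurity.name]

  tags = {
    Name = "terra_myweb"
  }
}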

Step 4: Create a 1 GB EBS block-storage volume and attach it to the instance, so whatever data is uploaded is kept persistent and we don't lose it from this disk.

//EBS volume creation
resource "aws_ebs_volume" "ebs1" {
  availability_zone = aws_instance.myweb.availability_zone
  size              = 1

  tags = {
    Name = "terra_myweb_ebs"
  }
}

//Attach Volume to Instance

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.ebs1.id
  instance_id  = aws_instance.myweb.id
  force_detach = true
}

Step 5: To access our deployment we need the instance's public IP, so we print it automatically as an output.

output "myweb_ip" {
  value = aws_instance.myweb.public_ip
}
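
Once terraform apply completes, this IP is printed on the terminal, and it can be read back at any time with terraform output myweb_ip.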

Step 6: To store data on the EBS volume, we first format the attached device with a filesystem and then mount it onto /var/www/html/. The remote-exec below first installs the web server and Git (assuming they are not already baked into the AMI), does the format and mount, and finally clones the web page code from GitHub into the mounted directory.

//format and mount the external storage device, then clone the HTML page code into it

resource "null_resource" "nullremote" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Rahul Prajapati/Desktop/awswk/mycloudkey1.pem")
    host        = aws_instance.myweb.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd git -y",      //install the web server and git (assumed not pre-installed on the AMI)
      "sudo systemctl start httpd",
      "sudo mkfs.ext4 /dev/xvdh",           //format the attached volume
      "sudo mount /dev/xvdh /var/www/html", //mount it on the document root
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Rahulkprajapati/Terraformwebpage.git /var/www/html/",
    ]
  }
}

(Screenshot: output showing that the code was downloaded and the volume was mounted.)

Step 7: Create the S3 bucket for storing the images so they can be accessed publicly and served through CloudFront.

//S3 bucket creation

resource "aws_s3_bucket" "s3rkpbckt" {
  bucket = "s3rkpbckt"
  acl    = "public-read"

  tags = {
    Name = "indexpage-s3-bucket"
  }

  versioning {
    enabled = true
  }
}
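
The task list also calls for copying the images into this bucket. That step is not shown here, but a minimal sketch could use aws_s3_bucket_object; the local source path and object key below are placeholders.

//upload an image to the bucket (sketch: source path and key are placeholders)
resource "aws_s3_bucket_object" "image1" {
  bucket = aws_s3_bucket.s3rkpbckt.bucket
  key    = "myimage.png"
  source = "C:/Users/Rahul Prajapati/Desktop/awswk/myimage.png"
  acl    = "public-read"
}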

Step 8: Create the CloudFront distribution in front of the S3 bucket.

//CloudFront creation

resource "aws_cloudfront_distribution" "cloudfront" {
  origin {
    domain_name = "s3rkpbckt.s3.amazonaws.com"
    origin_id   = "S3-s3rkpbckt"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-s3rkpbckt"

    //CloudFront caching settings
    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  //Restrict which locations can access the site (none here)
  restrictions {
    geo_restriction {
      //restriction_type can be blacklist, whitelist or none
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
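
Task 8 also asks us to update the code in /var/www/html with the CloudFront URL. That part is not shown here; a minimal sketch (the image object name is a placeholder) could append an img tag pointing at the distribution's domain name once it is created:

//update the web page with the CloudFront URL (sketch: the image name is a placeholder)
resource "null_resource" "updatehtml" {
  depends_on = [
    aws_cloudfront_distribution.cloudfront,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/Rahul Prajapati/Desktop/awswk/mycloudkey1.pem")
    host        = aws_instance.myweb.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.cloudfront.domain_name}/myimage.png\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}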

Step 9: Create a snapshot of the configured EBS volume.

//creating a snapshot of the configured volume

resource "aws_ebs_snapshot" "mysnapshot" {
  volume_id = aws_ebs_volume.ebs1.id

  tags = {
    Name = "Configured_snap"
  }
}

To automatically open the site in the Chrome browser:

//automatically open the site in the Chrome browser

resource "null_resource" "nu1" {
  depends_on = [
    null_resource.nullremote,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.myweb.public_ip}"
  }
}

These are the processes behind the creation of the infrastructure: after writing web.tf, running terraform init once and then terraform apply -auto-approve builds the whole setup in one go, and terraform destroy tears it all down again.

Finally, my Loan Calculator web page is deployed at the IP 13.235.51.224!

Thank you
