In the last two posts, we first covered the basics of Terraform ("Part-1") and then created a simple web server ("Part-2").
Now, as we know, running a single web server in production is never a good idea. We always need services that are highly available and can scale as per requirements.
Creating and managing a cluster is always going to be a pain point. Fortunately, with modern cloud technologies it is now possible to automate all of this, and things become much easier to manage. In this tutorial we'll use AWS's "Auto Scaling Group (ASG)".
An ASG takes care of everything automatically, including launching a cluster of EC2 instances, monitoring the health of each instance, replacing failed instances, and adjusting the size of the cluster in response to load.
A full working ASG stack includes multiple resources. It starts with a "launch configuration", which specifies how each EC2 instance in the group should be configured.
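As a minimal sketch, the launch configuration and the ASG that uses it might look like the following. The AMI ID, resource names, security group, and subnet data source are illustrative placeholders, not the exact values from the repo:

```hcl
# Launch configuration: a template for every EC2 instance the ASG launches.
# The AMI ID below is a placeholder; the security group is assumed to be
# defined elsewhere in the configuration.
resource "aws_launch_configuration" "example" {
  image_id        = "ami-0c55b159cbfafe1f0" # placeholder AMI ID
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.instance.id]

  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p 8080 &
              EOF

  # Required when used with an ASG: create the replacement launch
  # configuration before destroying the old one.
  lifecycle {
    create_before_destroy = true
  }
}

# The ASG itself: keeps between 2 and 10 instances running.
resource "aws_autoscaling_group" "example" {
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = data.aws_subnet_ids.default.ids

  min_size = 2
  max_size = 10

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}
```

The `create_before_destroy` lifecycle setting matters because launch configurations are immutable: any change forces a replacement, and the ASG still references the old one at destroy time unless the new one is created first.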
Now, in Fig-1, you saw that we have two EC2 instances, and each instance has its own IP address. The problem is: which endpoint do you provide to your users? Also, if any server later develops issues, the ASG can destroy the faulty server and launch a replacement with a new IP. Handling such a situation manually would be difficult.
One way to solve this issue is to use a load balancer to distribute traffic across the backend servers, and give the load balancer's DNS name to all users to access the service.
AWS offers three types of load balancers:
1.) Application Load Balancer (ALB):
Best suited for load balancing of HTTP and HTTPS traffic. Operates at the application layer (Layer 7) of the OSI model.
2.) Network Load Balancer (NLB):
Best suited for load balancing of TCP, UDP and TLS traffic. Operates at the transport layer (Layer 4) of the OSI model.
3.) Classic Load Balancer (CLB):
This is a legacy load balancer that predates both the ALB and the NLB. It can handle all the types of traffic that the ALB and NLB can handle.
Nowadays most applications use either the ALB or the NLB. In our case we are going to handle HTTP traffic, so we will use the ALB.
The ALB itself consists of several parts:
1.) Listener - listens on a specific port and protocol.
2.) Listener Rule - takes requests that come in to a listener and sends those that match specific paths (e.g. /foo or /bar) to a specific target group.
3.) Target Group - one or more servers that receive requests from the load balancer. The target group also performs health checks on those servers and only sends requests to the healthy ones.
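The three parts above map directly onto Terraform resources. Here is a minimal sketch; the resource names, security group, and VPC/subnet data sources are illustrative assumptions, and the exact block syntax may vary slightly between AWS provider versions:

```hcl
# The load balancer itself, spread across the default subnets.
resource "aws_lb" "example" {
  name               = "terraform-asg-example"
  load_balancer_type = "application"
  subnets            = data.aws_subnet_ids.default.ids
  security_groups    = [aws_security_group.alb.id]
}

# Listener: accept plain HTTP on port 80; return 404 for unmatched requests.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "fixed-response"
    fixed_response {
      content_type = "text/plain"
      message_body = "404: page not found"
      status_code  = "404"
    }
  }
}

# Target group: health-checks the backend servers on port 8080.
resource "aws_lb_target_group" "asg" {
  name     = "terraform-asg-example"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path     = "/"
    protocol = "HTTP"
    matcher  = "200"
    interval = 15
    timeout  = 3
  }
}

# Listener rule: forward every path to the target group.
resource "aws_lb_listener_rule" "asg" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100

  condition {
    path_pattern {
      values = ["*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asg.arn
  }
}
```

To wire the ASG into this, you would set `target_group_arns = [aws_lb_target_group.asg.arn]` and `health_check_type = "ELB"` on the `aws_autoscaling_group` resource, so the ASG registers its instances with the target group and uses the load balancer's health checks to decide when to replace an instance.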
You can get the code from my GitHub repo here.
Please see all the steps in the screenshots below -
Check the status in your AWS console and access the site from a browser as well:
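To know which URL to open in the browser, it helps to expose the load balancer's DNS name as a Terraform output. A hypothetical example, assuming the `aws_lb` resource is named "example":

```hcl
# Prints the ALB's DNS name after `terraform apply`, so you can
# open http://<alb_dns_name> in a browser.
output "alb_dns_name" {
  value       = aws_lb.example.dns_name
  description = "The domain name of the load balancer"
}
```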
Now, let's destroy the whole setup with one command (`terraform destroy`) :).