Posts

Showing posts from 2019

Deploy and Scale Kubernetes Application using Spinnaker

Deploy and Scale Kubernetes Application using Spinnaker - In my last post, Getting Started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll walk through deploying and scaling an application on Kubernetes using Spinnaker. In this exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how easily we can scale the deployment up and down from the Spinnaker dashboard itself. For this, first make sure the required pods are port-forwarded and the Spinnaker dashboard is accessible.  Note - Before moving ahead, please make sure the "kubernetes" provider is enabled. You can check this in the "Halyard" configuration as below. $ kubectl exec -it spinnaker-local-spinnake-halyard-0 /bin/bash -n spinnaker $ hal config list | grep -A 37 kubernetes After that click on the Create Application in Applica...
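As a rough sketch of the port-forwarding step mentioned above (the service names `spin-deck` and `spin-gate` and the `spinnaker` namespace are assumptions based on a typical Spinnaker install; check `kubectl get svc -n spinnaker` for the names in your cluster):

```shell
# Forward the Spinnaker UI (Deck) and API gateway (Gate) to localhost.
# Service names below are illustrative; verify them in your own install.
kubectl port-forward svc/spin-deck 9000:9000 -n spinnaker &
kubectl port-forward svc/spin-gate 8084:8084 -n spinnaker &

# The dashboard should then be reachable at http://localhost:9000
```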

Getting Started with Spinnaker locally using minikube(local Kubernetes)

Before jumping to the installation and setup, let's first briefly summarize what Spinnaker is. Spinnaker: Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. I won't go into much detail about its functionality here, but I would like to highlight the main architectural components, which I think we should know before we start playing with it. This will help you troubleshoot if you get stuck in between. Spinnaker is composed of multiple components, all of which you will be able to see after we complete the setup. The list of components is as below (currently just copy-pasted from the official site) - Deck is the browser-based UI. Gate is the API gateway. The Spinnaker UI and all API callers communicate with Spinnaker via Gate. Orca is the or...
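For context, a minimal local-setup sketch along the lines the post describes (this assumes minikube and Helm 2 are installed and uses the `stable/spinnaker` chart that was current in 2019; the chart has since been deprecated, and resource sizes are only suggestions):

```shell
# Start a local cluster with enough resources for Spinnaker's many services
minikube start --memory 8192 --cpus 4

# Install Spinnaker via the (then-current) stable Helm chart; Helm 2 syntax
helm install --name spinnaker stable/spinnaker --namespace spinnaker --timeout 600

# Watch the components (Deck, Gate, Orca, Halyard, ...) come up
kubectl get pods -n spinnaker
```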

Taints and Tolerations in Kubernetes

We all know that Kubernetes is a powerful orchestration tool in the world of containers. The whole complexity of managing and distributing multiple containers across the cluster is taken care of by Kubernetes out of the box. In short, it does all the heavy and complex lifting for us. Since it is K8s that takes care of all distribution and scheduling of pods across the different nodes in the cluster, what if we want to run a specific pod on a specific node only? Luckily, we have an option to manage this as well. In K8s it is called "taints and tolerations". In general terms:         - A taint is a property of a node that prevents pods from being scheduled on it.         - A toleration, on the other hand, is a property of a pod that allows it to tolerate a specific node's taint. To summarize, taints and tolerations are used to set restrictions on what po...
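The two capabilities above can be sketched with standard `kubectl` commands (the node name `node1` and the key/value pair `app=critical` are purely illustrative):

```shell
# Taint a node: no pod is scheduled here unless it tolerates app=critical
kubectl taint nodes node1 app=critical:NoSchedule

# A pod carrying the matching toleration, so node1 will accept it
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-tolerant
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "critical"
    effect: "NoSchedule"
EOF

# Remove the taint later by appending a trailing minus
kubectl taint nodes node1 app=critical:NoSchedule-
```

Note that a toleration only allows the pod onto the tainted node; it does not force it there. Pinning a pod to a specific node additionally needs node affinity or a node selector.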

Terraform setting up clustered web server !! Getting Started Part-3!!

In the last two posts we first covered the basics of Terraform ("Part-1") and then created a simple web server ("Part-2"). As we know, running a single web server in production is never a good idea. We always need services that are highly available as well as scalable as per the requirements. Creating and managing a cluster manually is always going to be a pain point. Fortunately, with today's cloud technologies it is possible to automate all of this, and things become much easier to manage. In this tutorial we'll use AWS's "Auto Scaling Group (ASG)".     An ASG takes care of everything automatically, including launching a cluster of EC2 instances, monitoring the health of each instance, replacing failed instances, and adjusting the size of the cluster in response to the load. A full working ASG stack includes multiple resources that together make a working cluster. It starts with a "launch configuration", which basically specifies how a ...
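A minimal sketch of the two resources described above, in Terraform HCL (the AMI ID and sizes are placeholders; `aws_launch_configuration` is the resource the post uses, though newer Terraform/AWS setups favor `aws_launch_template`):

```hcl
# Look up the availability zones the ASG can spread instances across
data "aws_availability_zones" "all" {}

# The launch configuration: how each EC2 instance in the cluster is built
resource "aws_launch_configuration" "web" {
  image_id      = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"

  # Serve a trivial page on port 8080 at boot (illustrative only)
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p 8080 &
              EOF

  # ASGs reference launch configurations by name, so replace before destroy
  lifecycle {
    create_before_destroy = true
  }
}

# The ASG itself: launches, monitors, and replaces instances automatically
resource "aws_autoscaling_group" "web" {
  launch_configuration = aws_launch_configuration.web.name
  availability_zones   = data.aws_availability_zones.all.names
  min_size             = 2
  max_size             = 5

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}
```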