Wednesday, November 27, 2019

Deploy and Scale Kubernetes Application using Spinnaker



In my last post, Getting Started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll go through deploying and scaling an application on Kubernetes using Spinnaker.

In this exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how we can easily scale the deployment up and down from the Spinnaker dashboard itself.

For this, first make sure port-forwarding is set up for the required pods and that you are able to access the Spinnaker dashboard.

Note - Before moving ahead, please make sure that the "kubernetes" provider is enabled. You can check this in the Halyard configuration as below.

$ kubectl -n spinnaker exec -it spinnaker-local-spinnake-halyard-0 -- /bin/bash
$ hal config list | grep -A 37 kubernetes




After that, click on Create Application in the Applications tab to get the popup below.



After filling in the required information and hitting the Create button, you'll land on the screen below.

There are a few other terms, like "Clusters", "Load Balancers", and "Server Groups", which you can read about in the official documentation.

Here, we'll create a Server Group, which will basically contain our deployment manifest (below).

---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: nginx

Once you click on "Create Server Group", you'll see the screen below, where we need to paste the above YAML and hit Create.



Below is the end state, if everything goes well.


Now, let's inspect our deployment.
On the Clusters tab, we can see the deployment and the number of replicas (pods) available in it, as below -


Please verify the same from the CLI using kubectl -
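For reference, these are the kinds of commands used for the verification (the resource names come from the manifest above, deployed to the default namespace):

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx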


Check for the services in the Load Balancers section -

Verify the same using the CLI -
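For example, the NodePort service created from the manifest above should show up with:

$ kubectl get svc nginx -o wide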


Access the service -
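Since the service is exposed as a NodePort on 30000 and we are running on minikube, one way to hit it from the host is:

$ curl http://$(minikube ip):30000

or simply:

$ minikube service nginx --url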




Scale up the deployment from 1 to 4 pods and verify the results -
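After resizing from the Spinnaker UI, the same kubectl commands as before can be used to cross-check that the deployment now reports 4 replicas:

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx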






So, this was a simple how-to for managing K8S manifests with Spinnaker. In the next post, I will try to explore the integration of Jenkins with Spinnaker and auto-triggering Spinnaker deployments based on Jenkins events.



Tuesday, November 26, 2019

Getting Started with Spinnaker locally using minikube(local Kubernetes)

Before jumping to the installation and setup part, let's first briefly summarize what Spinnaker is.



Spinnaker : 

            Spinnaker is an open-source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence.

          I am not going into much detail about the functionality here, but I would like to highlight the main architectural components which I think we should know before starting to play with it. This will help you in troubleshooting if you get stuck in between.

So, Spinnaker is composed of multiple components. You will be able to see all of these after we complete the setup. The list of components is below (copied from the official site) -
  1. Deck is the browser-based UI.
  2. Gate is the API gateway.
    The Spinnaker UI and all api callers communicate with Spinnaker via Gate.
  3. Orca is the orchestration engine. It handles all ad-hoc operations and pipelines. Read more on the Orca Service Overview.
  4. Clouddriver is responsible for all mutating calls to the cloud providers and for indexing/caching all deployed resources.
  5. Front50 is used to persist the metadata of applications, pipelines, projects and notifications.
  6. Rosco is the bakery. It produces immutable VM images (or image templates) for various cloud providers.
    It is used to produce machine images (for example GCE images, AWS AMIs, Azure VM images). It currently wraps Packer, but will be expanded to support additional mechanisms for producing images.
  7. Igor is used to trigger pipelines via continuous integration jobs in systems like Jenkins and Travis CI, and it allows Jenkins/Travis stages to be used in pipelines.
  8. Echo is Spinnaker’s eventing bus.
    It supports sending notifications (e.g. Slack, email, SMS), and acts on incoming webhooks from services like Github.
  9. Fiat is Spinnaker’s authorization service. 
    It is used to query a user’s access permissions for accounts, applications and service accounts.
  10. Kayenta provides automated canary analysis for Spinnaker.
  11. Halyard is Spinnaker’s configuration service.
Halyard manages the lifecycle of each of the above services. It only interacts with these services during Spinnaker startup, updates, and rollbacks.

Note - In our setup, "Fiat" and "Kayenta" will not be present, as they are not available in the Helm chart that we will install on minikube.
Along with the architecture, I guess we should know the port mappings as well.
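For reference, the default ports as documented upstream are (to the best of my recollection): Deck 9000, Gate 8084, Orca 8083, Clouddriver 7002, Front50 8080, Rosco 8087, Igor 8088, Echo 8089, Fiat 7003 and Kayenta 8090. The two we will actually touch later are Deck (9000) and Gate (8084).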


Minikube - 

        Minikube provides a way to set up Kubernetes locally for development purposes. I am not going into details about the installation; please go through my previous blog post if you want to install minikube.

After installation, let's start the minikube cluster. I am starting it with my custom configuration so that it is able to handle the load.
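This is roughly what I mean by custom configuration (the CPU/memory numbers are just what worked for me; Spinnaker is fairly heavy, so give it as much as your machine allows):

$ minikube start --cpus 4 --memory 8192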



Other Tools -

     Apart from minikube, below are the other tools that we need; I am assuming these are already installed.
  1. helm
  2. kubectl



Install Spinnaker -

        Now we have minikube running with Helm installed, and we are ready to install Spinnaker. We will install Spinnaker using its Helm chart.
Helm is a templating engine for K8S deployments; we need to provide values for those templates. To start with, the chart ships a default set of values, which we are going to use.


Download the default values file from the above Helm repo.
$ curl -Lo values.yaml https://raw.githubusercontent.com/kubernetes/charts/master/stable/spinnaker/values.yaml



Now, let's install Spinnaker to the K8S cluster.
$ helm install -n spinnaker-local stable/spinnaker -f values.yaml --timeout 300   --namespace spinnaker


Tip - In case you get a timed-out exception in the first run (like below), please delete the Helm installation using "helm del --purge {release-name}" and re-run the same command again.


After successful installation, check the pods in the "spinnaker" namespace. All should be in the Running state.
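A quick way to check:

$ kubectl get pods -n spinnaker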



P.S. - Please ignore the hal status in the above output; it just takes some time to start :).

To access the Spinnaker UI, follow the instructions in the above output. If you notice, in the above output we are doing a port-forward for two pods. As per the architecture, these two components are responsible for the functionalities below.
  • The first one is "deck" - which provides the UI dashboard.
  • The second one is "gate" - which is responsible for serving the APIs.
$ export DECK_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-deck" -o jsonpath="{.items[0].metadata.name}")
$ export GATE_POD=$(kubectl get pods --namespace spinnaker -l "cluster=spin-gate" -o jsonpath="{.items[0].metadata.name}")
$ echo $DECK_POD
$ echo $GATE_POD
$ alias ui='kubectl port-forward --namespace spinnaker $DECK_POD 9000'
$ alias api='kubectl port-forward --namespace spinnaker $GATE_POD 8084'
$ ui & api &




Access the Spinnaker Dashboard
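With both port-forwards from the previous step running, Deck (the UI) should be reachable at http://localhost:9000, and it talks to Gate (the API gateway) on http://localhost:8084.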




In the next post we'll try to create pipelines which will deploy entities on Kubernetes. In later posts we'll also explore more of the integrations with different providers, e.g. Jenkins and cloud vendors.


Friday, November 8, 2019

Taints and Tolerations in Kubernetes


              We all know that Kubernetes is a powerful orchestration tool in the world of containers. The whole complexity of managing and distributing multiple containers across a cluster is taken care of by Kubernetes out of the box. In short, it does all the heavy and complex lifting for us.

Since it is K8S that takes care of all the distribution and scheduling of pods across the different nodes in the cluster, what if we want to run a specific pod on a specific node only? Luckily, we have an option to manage this as well; in K8S it is called "taints and tolerations".

In general terms:
        - A taint is a property of a node that stops pods from being scheduled on it.
        - A toleration, on the other hand, is a property of a pod that allows it to be scheduled on a node with a matching taint.

To summarise, taints and tolerations are used to set restrictions on what pods can be scheduled on a node.

Let us suppose we have a 3-node cluster as below; this is the state when pods are running in a normal scenario.


Now suppose we have a requirement where only specific pods should be scheduled on Node1 and nothing else should land on it. For this, let's add a taint called "taint=blue" on Node1. After this, no pod can be scheduled on this node until we add a toleration to a specific pod, allowing it to be scheduled on Node1.
Below, we added the "blue" toleration to pod "D"; the status afterwards will be as shown below.


Demo -
             In the fresh setup below, we'll see that we don't have any taint set on the worker node, though there is a taint set on the master node. That is the reason why, by default, nothing gets scheduled on the master node.
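A quick way to check this (the node names here are from my two-node lab, master and node01, and may differ in yours):

$ kubectl describe node master | grep -i taint
$ kubectl describe node node01 | grep -i taint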






Now, let's add a taint "Taint=Dog" to the worker node and try to schedule a pod on it.
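The command looks like this (assuming the worker node is named node01 and using the NoSchedule effect):

$ kubectl taint nodes node01 Taint=Dog:NoSchedule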




Create a pod and see its status -
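Any plain pod without a toleration will do for this test; for example, a hypothetical pod named "bee":

$ kubectl run bee --image=nginx --restart=Never
$ kubectl get pod bee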





You'll notice that the status is Pending; let's see what the events say. If you look at the last line, it says "0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate".




Now let's create a new pod, "dog", which tolerates the taint we put on the worker node.
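A minimal sketch of such a pod manifest, assuming the taint was added with the NoSchedule effect as above:

apiVersion: v1
kind: Pod
metadata:
  name: dog
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "Taint"
    operator: "Equal"
    value: "Dog"
    effect: "NoSchedule"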




You'll see that after adding the toleration to the pod, it got scheduled on node01, while the other pod is still in the Pending state.

I hope this clears up the concept of taints and tolerations in K8S.

Wednesday, August 21, 2019

Terraform setting up clustered web server !! Getting Started Part-3!!

In the last two posts, we first saw the basics of Terraform ("Part-1") and then created a simple web server ("Part-2").

Now, as we know, running a single web server in production is never a good idea. We always need services which are highly available as well as scalable as per the requirements.

Creating and managing a cluster is always going to be a pain point. Fortunately, with the new cloud technologies it is now possible to automate all this, and things become much easier to manage. In this tutorial we'll use AWS's "Auto Scaling Group (ASG)".

    An ASG takes care of everything automatically, including launching a cluster of EC2 instances, monitoring the health of each instance, replacing failed instances and adjusting the size of the cluster in response to the load.





A full working ASG stack includes multiple resources to make a working cluster. It starts with a "launch configuration", which basically specifies how each EC2 instance will be configured.
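To make that concrete, here is a rough sketch of the launch configuration plus ASG in Terraform. This is not the exact code from the repo linked below; the AMI ID, sizes and the trivial user_data web server are placeholders (adjust them to whatever your server from Part-2 looks like):

provider "aws" {
  region = "us-east-1"
}

# How each EC2 instance in the cluster should be configured
resource "aws_launch_configuration" "web" {
  image_id      = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"

  # trivial demo web server (assumes an AMI with busybox available)
  user_data = <<-EOF
              #!/bin/bash
              echo "Hello, World" > index.html
              nohup busybox httpd -f -p 8080 &
              EOF

  # ASGs keep a reference to the launch configuration, so create the
  # replacement before destroying the old one when it changes
  lifecycle {
    create_before_destroy = true
  }
}

data "aws_availability_zones" "all" {}

# The ASG itself: keeps between 2 and 10 instances running
resource "aws_autoscaling_group" "web" {
  launch_configuration = aws_launch_configuration.web.name
  availability_zones   = data.aws_availability_zones.all.names

  min_size = 2
  max_size = 10

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}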

Now, in Fig-1, you saw that we have two EC2 instances, and each instance has its own IP address. The problem is: what endpoint will you give to your users? Also, if later on there is an issue with any of the servers, the ASG can destroy the faulty server and launch a new one with a new IP. It will be difficult to handle such a situation.

     One way to solve this issue is to use a load balancer to distribute traffic to the backend servers and give the LB's DNS name to all the users to access the services.




AWS offers three types of load balancers -

  1.) Application Load Balancer (ALB) :
         Best suited for load balancing of HTTP and HTTPS traffic. Operates at the application layer (Layer 7) of the OSI model.

  2.) Network Load Balancer (NLB) :
         Best suited for load balancing of TCP, UDP and TLS traffic. Operates at the transport layer (Layer 4) of the OSI model.

  3.) Classic Load Balancer (CLB) :
         This is the legacy load balancer that predates both the ALB and the NLB. It can handle all the types of traffic that the ALB and NLB can handle.

Nowadays most applications use either an ALB or an NLB. In our case we are going to handle HTTP traffic, so we will use an ALB.

Again, an ALB consists of several parts (sketched in Terraform after this list):

1.) Listener - listens on a specific port and protocol.
2.) Listener Rule - takes requests that come to the listener and sends those that match specific paths, e.g. /foo or /bar, to a specific target group.
3.) Target Groups - one or more servers that receive requests from the load balancer. The target group also performs health checks on those servers and only sends requests to the healthy ones.
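Continuing the hedged sketch from above (names and ports are illustrative, and the listener-rule condition syntax shown matches the 2.x AWS provider current at the time of writing), the three parts map to Terraform roughly like this:

# Look up the default VPC and its subnets for the ALB
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

resource "aws_lb" "example" {
  name               = "terraform-asg-example"
  load_balancer_type = "application"
  subnets            = data.aws_subnet_ids.default.ids
}

# 1.) Listener - listens for HTTP on port 80
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"

  # default response when no rule matches
  default_action {
    type = "fixed-response"

    fixed_response {
      content_type = "text/plain"
      message_body = "404: page not found"
      status_code  = 404
    }
  }
}

# 3.) Target Group - the ASG instances register here; health checks included
resource "aws_lb_target_group" "asg" {
  name     = "terraform-asg-example"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path     = "/"
    protocol = "HTTP"
    matcher  = "200"
  }
}

# 2.) Listener Rule - forward everything ("*") to the target group
resource "aws_lb_listener_rule" "asg" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 100

  condition {
    field  = "path-pattern"
    values = ["*"]
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.asg.arn
  }
}

To wire it all together, the aws_autoscaling_group from the earlier sketch would also get target_group_arns = [aws_lb_target_group.asg.arn] and health_check_type = "ELB", so the ASG registers its instances with the target group and replaces them when the LB health checks fail.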





You can get the code from my GitHub repo here.

Please see all the steps in the screenshots below -
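The commands behind those screenshots are the standard Terraform workflow:

$ terraform init
$ terraform plan
$ terraform apply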


Check the status in your AWS console and access the site from a browser as well:




Now, let's destroy the whole setup with one command :).
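That one command being, of course:

$ terraform destroy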








