Wednesday, November 27, 2019

Deploy and Scale Kubernetes Application using Spinnaker



In my last post, Getting started with Spinnaker, we completed the installation and setup of Spinnaker. In this post, I'll walk through deploying and scaling an application on Kubernetes using Spinnaker.

In this exercise, we'll create a simple "nginx" deployment on Kubernetes and expose it as a service. After that, we'll see how easily we can scale the deployment up and down from the Spinnaker dashboard itself.

For this, first make sure that port-forwarding is in place for the required pods and that you are able to access the Spinnaker dashboard, for example as shown below.
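A minimal sketch of the port-forwarding, assuming Spinnaker was installed in the "spinnaker" namespace via the Helm chart (the service names spin-deck and spin-gate are the usual chart defaults and may differ in your setup):

$ kubectl -n spinnaker port-forward svc/spin-deck 9000:9000 &
$ kubectl -n spinnaker port-forward svc/spin-gate 8084:8084 &

The dashboard should then be reachable at http://localhost:9000.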

Note - Before moving ahead, please make sure that the "kubernetes" provider is enabled. You can check this in the "Halyard" configuration as below.

$ kubectl exec -it spinnaker-local-spinnake-halyard-0 -n spinnaker -- /bin/bash
$ hal config list | grep -A 37 kubernetes
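
If the provider turns out to be disabled, below is a rough sketch of enabling it from inside the same Halyard pod; the account name "my-k8s-account" and the kubeconfig context are placeholders for your own setup:

$ hal config provider kubernetes enable
$ hal config provider kubernetes account add my-k8s-account --context <your-kubeconfig-context>
$ hal deploy apply

The last command re-deploys Spinnaker so that the configuration change takes effect.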




After that, click on "Create Application" in the Applications tab to get the below popup.



After filling in the required information and hitting the "Create" button, you'll land on the below screen.

There are a few other terms, like "Clusters", "Load Balancers" and "Server Groups", which you can look up in the Spinnaker documentation.

Here, we'll create a Server Group, which will basically contain our deployment manifest (below).

---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
  selector:
    app: nginx

Once you click on "Create Server Group", you'll see the below screen, where we need to paste the above YAML and hit "Create".



Below will be the end state if everything goes well.


Now, let's inspect our deployment.
On the Clusters tab, we can see the deployment and the number of replicas (pods) available in this deployment, as below -


Please verify the same from the CLI using kubectl -
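For example, assuming the manifest above was applied to the default namespace:

$ kubectl get deployment nginx-deployment
$ kubectl get pods -l app=nginx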


Check for the service in the Load Balancers section -

Verify the same using the CLI -
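For example, again assuming the default namespace; the service name "nginx" and NodePort 30000 come from the manifest above:

$ kubectl get svc nginx
$ kubectl describe svc nginx | grep NodePort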


Access the service -
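One simple way to hit the NodePort service from the CLI, assuming the node IP is reachable from your machine (the jsonpath lookup below is just one way to grab it):

$ NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ curl http://$NODE_IP:30000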




Scale up the deployment from 1 to 4 pods and verify the results -
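After resizing the server group from the Spinnaker dashboard, the result can be confirmed from the CLI as well (names as in the manifest above):

$ kubectl get deployment nginx-deployment   # should now report 4/4 replicas ready
$ kubectl get pods -l app=nginx             # should list 4 nginx pods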






So, this was a simple how-to for managing a K8S manifest with Spinnaker. In the next post, I will try to explore the integration of Jenkins with Spinnaker and auto-triggering Spinnaker deployments based on Jenkins events.



